Senior Data Engineer

We are looking for a Senior Data Engineer to join our Data and Analytics Team. Reporting to the Head of Data Engineering, you will carry out designated tasks, projects and operations related to the data estate.

YOUR CHALLENGE:

  • Develop data processes in PySpark, applying best practices to achieve the best possible performance
  • Ingest and integrate different types of sources into the data lake, taking responsibility for data quality and accuracy
  • Build and maintain the data model and the code base of the data estate
  • Administer, engineer and operationalize the cloud data platform
  • Assist with daily data operations and maintenance tasks, monitoring data processes and troubleshooting problems
  • Collaborate closely with a team of engineers, operational specialists and analysts
  • Assist business stakeholders by carrying out ad hoc data analysis tasks
  • Continuously improve your business knowledge and your knowledge of the data and analytics estate through training and self-development
  • Any other ad hoc requirements from the business
TO DO IT, YOU WILL NEED:

  • An IT degree or a degree in a relevant field
  • A minimum of 3 years' experience as a Data Engineer or DWH developer, with a proven record of designing and building highly scalable, automated ETL processes for data lakes and data warehouses (batch and real-time)
  • Experience with data integration and data computation on cloud platforms, especially Azure
  • Experience in SQL, working with relational databases and complex queries
  • Proven experience with computing tools such as Databricks or Jupyter
  • Experience in pipeline orchestration using tools/components such as ADF, Fabric Pipeline, AWS Step Functions, AWS Glue, Oozie and Airflow
  • Knowledge of software development, preferably in one or more of the following languages: PySpark, Scala, Python, JavaScript, Java, C/C++, etc.
  • Work experience building distributed environments using any of Kafka, Spark, Hive, Hadoop, etc. (considered an asset)
  • Experience with CI/CD using tools like Git, Azure DevOps, Jenkins
  • Experience with Microsoft Fabric (considered an asset)
  • Proven experience using dashboard and reporting tools such as Power BI, Looker, Tableau or Qlik (considered an asset)
  • Ability to work and deliver in a fast-paced agile environment
  • To be a team player
  • To be highly motivated, with strong analytical skills and excellent communication skills