We are looking for a Senior Data Engineer to join our Data and Analytics team. As a Senior Data Engineer, you will carry out designated tasks, projects and operations related to the data estate, reporting to the Head of Data Engineering.
YOUR CHALLENGE:
- Develop data processes using PySpark, applying best practices to get the best performance out of those processes (a minimal sketch follows this list)
- Ingest and integrate different types of sources into the data lake, taking responsibility for data quality and data accuracy
- Build and maintain the data model and the code base of the data estate
- Administer, engineer and operationalise the cloud data platform
- Assist in daily data operations and maintenance tasks, monitoring data processes and troubleshooting problems
- Work closely and collaborate with a team of engineers, operational specialists and analysts
- Assist business stakeholders by carrying out ad hoc data analysis tasks
- Continuously improve your business knowledge and your knowledge of the data and analytics estate through training and self-development
- Handle any other ad hoc requirements from the business
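To give a flavour of this kind of work, here is a minimal, hypothetical PySpark sketch of a batch ingestion step with a basic data quality gate. The storage paths, table layout and column names are illustrative assumptions, not part of any actual estate.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Illustrative path only: read a raw batch drop from the landing zone.
raw = spark.read.json("abfss://landing@example.dfs.core.windows.net/orders/")

# Basic data quality gate: drop records missing the key, then de-duplicate.
clean = raw.filter(F.col("order_id").isNotNull()).dropDuplicates(["order_id"])

# Write to the curated layer of the data lake, partitioned for query performance.
(clean.withColumn("ingest_date", F.current_date())
      .write.mode("append")
      .partitionBy("ingest_date")
      .format("delta")
      .save("abfss://curated@example.dfs.core.windows.net/orders/"))
```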
TO DO IT, YOU WILL NEED:
- An IT degree or a degree in a relevant field
- A minimum of 3 years' experience as a Data Engineer or DWH developer, with a proven record of designing and building highly scalable, automated ETL processes for data lakes and data warehouses (batch and real-time)
- Previous experience with data integration and data computing on cloud platforms, especially Azure
- Experience in SQL, working with relational databases and complex queries
- Proven experience with computing tools such as Databricks or Jupyter
- Experience in pipeline orchestration using tools such as ADF, Fabric pipelines, AWS Step Functions, AWS Glue, Oozie or Airflow (see the sketch after this list)
- Knowledge of software development, preferably in one or more of the following languages: PySpark, Scala, Python, JavaScript, Java, C/C++, etc.
- Work experience building distributed environments using any of Kafka, Spark, Hive, Hadoop, etc. (considered an asset)
- Experience with CI/CD using tools like Git, Azure DevOps or Jenkins
- Experience with Microsoft Fabric (considered an asset)
- Proven experience with dashboard and reporting tools such as Power BI, Looker, Tableau or Qlik (considered an asset)
- The ability to work and deliver in a fast-paced agile environment
- A team-player attitude
- High motivation, good analytical skills and excellent communication
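For illustration, here is a minimal Airflow sketch of the kind of orchestration the role involves. The DAG id, schedule and task callables are hypothetical placeholders (the `schedule` argument assumes Airflow 2.4+), not an existing pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_orders():
    # Placeholder for an ingestion step (e.g. the PySpark job sketched above).
    print("ingesting orders batch")

def validate_orders():
    # Placeholder for a data quality check on the ingested batch.
    print("validating orders batch")

# Hypothetical daily pipeline: ingest a batch, then validate it.
with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_orders", python_callable=ingest_orders)
    validate = PythonOperator(task_id="validate_orders", python_callable=validate_orders)
    ingest >> validate
```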