Posted on: 04/02/2026
Description:
Role Overview:
As a Data Engineer, you will play a key role in designing, developing, and optimising data pipelines and storage solutions in a complex, enterprise-scale data warehouse environment. You will contribute to full life-cycle software development projects, leveraging modern technologies and best practices to deliver high-quality, actionable data solutions.
Key Responsibilities:
- Participate in the full software development life cycle for enterprise data projects, from requirements gathering to deployment and support.
- Design, develop, and maintain robust ETL processes and data pipelines using Snowflake, Hadoop, Databricks, and other modern data platforms (a minimal pipeline sketch follows this list).
- Work with a variety of databases: SQL (MySQL, PostgreSQL, Vertica), NoSQL (MongoDB, Cassandra, Azure Cosmos DB), and distributed/big data solutions (Apache Spark, Cloudera).
- Write advanced SQL queries and perform complex data analysis to support business insights and operational reporting (see the window-function example after this list).
- Develop Python and shell scripts for data manipulation, automation, and orchestration (see the automation sketch after this list).
- Perform data modelling, analysis, and preparation to support business intelligence and analytics solutions.
- Maintain and optimise Unix/Linux file systems and shell scripts.
- Collaborate with cross-functional teams to translate business requirements into scalable data solutions.
- Present analytical results and recommendations to technical and non-technical stakeholders, supporting data-driven decision-making.
- Troubleshoot, diagnose, and resolve complex technical issues across the data stack.
- Stay current with industry trends, tools, and best practices to continuously improve data engineering processes.
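As an illustration of the pipeline work described above, here is a minimal PySpark ETL sketch. All paths, table names, and column names are hypothetical stand-ins, not details from the posting:

```python
# Minimal PySpark ETL sketch. All paths and column names are hypothetical;
# in a real project they would come from the team's configuration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_orders_etl").getOrCreate()

# Extract: read raw order events from a (hypothetical) landing zone.
raw = spark.read.json("/landing/orders/2026-02-04/")

# Transform: drop malformed rows, normalise types, derive a business column.
orders = (
    raw.where(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
)

# Load: append to a curated area, partitioned by order date.
(
    orders.withColumn("order_date", F.to_date("order_ts"))
          .write.mode("append")
          .partitionBy("order_date")
          .parquet("/curated/orders/")
)
```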
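"Advanced SQL" in this context typically means window functions, CTEs, and similar constructs. The query below, run from Python against PostgreSQL, computes a per-customer running total; the connection string and schema are assumed for the example:

```python
# Window-function query against PostgreSQL; schema and DSN are hypothetical.
import psycopg2

SQL = """
SELECT customer_id,
       order_date,
       net_amount,
       SUM(net_amount) OVER (
           PARTITION BY customer_id
           ORDER BY order_date
       ) AS running_total
FROM curated.orders
ORDER BY customer_id, order_date;
"""

with psycopg2.connect("dbname=analytics user=report_reader") as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
        for row in cur.fetchall():
            print(row)
```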
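And a small example of the scripting side of the role: a Python housekeeping script that archives the previous day's processed extracts. The directory layout and file-naming pattern are invented for illustration:

```python
# Archive yesterday's processed extracts. Paths and the file-name pattern
# are hypothetical examples of the kind of housekeeping such scripts do.
import shutil
from datetime import date, timedelta
from pathlib import Path

INBOX = Path("/data/inbox")      # hypothetical landing directory
ARCHIVE = Path("/data/archive")  # hypothetical archive root

def archive_processed(day: date) -> int:
    """Move all of the day's files into a dated archive folder."""
    target = ARCHIVE / day.isoformat()
    target.mkdir(parents=True, exist_ok=True)
    moved = 0
    for path in INBOX.glob(f"*_{day.isoformat()}.csv"):
        shutil.move(str(path), str(target / path.name))
        moved += 1
    return moved

if __name__ == "__main__":
    count = archive_processed(date.today() - timedelta(days=1))
    print(f"archived {count} file(s)")
```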
Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field (or equivalent experience).
- Demonstrated full life-cycle experience in enterprise software and data engineering projects.
- Hands-on experience with Snowflake and Hadoop platforms.
- Proficient in SQL (including PostgreSQL and Vertica) and in data analysis techniques.
- Experience with at least one SQL database (MySQL, PostgreSQL) and one NoSQL database (MongoDB, Cassandra, Azure Cosmos DB); the snippet after this list shows the same lookup in both.
- Experience with distributed/big data platforms such as Apache Spark, Cloudera, Vertica, Databricks, or Snowflake.
- Extensive experience in ETL, shell or Python scripting, data modelling, analysis, and data preparation.
- Proficient in Unix/Linux systems, file systems, and shell scripting.
- Strong problem-solving and analytical skills.
- Ability to work independently and collaboratively as part of a team; proactive in driving business decisions and taking ownership of deliverables.
- Excellent communication skills, with experience in presentation design, development, and delivery to communicate technical insights and recommendations effectively.
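To make the SQL/NoSQL requirement concrete, the sketch below performs the same lookup against PostgreSQL (via psycopg2) and MongoDB (via pymongo). Connection strings, table/collection names, and fields are all hypothetical:

```python
# The same customer lookup against a SQL and a NoSQL store.
# Connection strings, table/collection and field names are hypothetical.
import psycopg2
from pymongo import MongoClient

CUSTOMER_ID = 42

# SQL: parameterised query against PostgreSQL.
with psycopg2.connect("dbname=analytics user=report_reader") as conn:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT order_id, net_amount FROM curated.orders WHERE customer_id = %s",
            (CUSTOMER_ID,),
        )
        sql_rows = cur.fetchall()

# NoSQL: equivalent filter against a MongoDB collection, with a projection.
client = MongoClient("mongodb://localhost:27017")
mongo_docs = list(
    client["analytics"]["orders"].find(
        {"customer_id": CUSTOMER_ID},
        {"_id": 0, "order_id": 1, "net_amount": 1},
    )
)

print(len(sql_rows), "rows from PostgreSQL,", len(mongo_docs), "docs from MongoDB")
```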
Preferred/Desirable Skills:
- Industry certifications in Snowflake, Databricks, or Azure Hyperscale are a strong plus.
- Experience with cloud platforms such as AWS or Azure, or cloud data platforms such as Snowflake.
- Familiarity with BI reporting tools like Power BI or Tableau.
- Proficient in using Git for branching, merging, rebasing, and resolving conflicts in both individual and team-based projects.
- Familiarity with GitHub Copilot for accelerating code writing, refactoring, and documentation tasks.
- Knowledge of industry best practices and emerging technologies in data engineering and analytics.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1609529