Posted on: 27/04/2026
Your future duties and responsibilities:
- Design & Development: Lead the design, development, and implementation of scalable, high-performance, and reliable data pipelines using various ETL/ELT tools and programming languages (e.g., Python, Scala).
- Data Modeling: Develop and optimize data models (dimensional, relational, columnar) for efficient storage, retrieval, and analysis of large datasets.
- Infrastructure Management: Build, maintain, and optimize data warehousing solutions (e.g., Snowflake, Redshift, BigQuery, Databricks) and data lakes (e.g., S3, ADLS).
- Performance Optimization: Identify and resolve performance bottlenecks in data pipelines and queries, ensuring optimal data flow and accessibility.
- Data Governance & Quality: Implement and enforce data governance best practices, ensuring data quality, integrity, security, and compliance.
- Collaboration: Work closely with data scientists, data analysts, and product managers to understand data requirements and translate them into technical solutions.
- Automation: Automate data extraction, transformation, and loading (ETL/ELT) processes, along with monitoring and alerting.
- Mentorship & Leadership: Mentor junior data engineers, provide technical guidance, and contribute to the overall growth and best practices of the data engineering team.
- Innovation: Stay up to date with emerging data technologies and trends, evaluating and recommending new tools and approaches to improve our data platform.
- Documentation: Create and maintain comprehensive documentation for data pipelines, models, and processes.
Required qualifications to be successful in this role:
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
- 6+ years of professional experience as a Data Engineer, with a strong track record of designing and implementing complex data solutions.
- Expert proficiency in SQL for data manipulation, analysis, and optimization.
- Strong programming skills in Python for data engineering tasks.
- Extensive experience with cloud-based data platforms such as AWS (S3, Glue, Lambda, Redshift, EMR), Azure (Data Lake, Data Factory, Synapse), or GCP (BigQuery, Dataflow, Cloud Storage).
- Proven experience with data warehousing concepts and technologies (e.g., Snowflake, Redshift, BigQuery, Databricks).
- Solid understanding of ETL/ELT processes and tools.
- Experience with big data technologies like Spark, Hadoop, or Kafka.
- Familiarity with data modeling techniques (star schema, snowflake schema, 3NF).
- Experience with version control systems (e.g., Git).
- Excellent problem-solving skills and the ability to troubleshoot complex data issues.
- Strong communication and interpersonal skills to collaborate effectively with cross-functional teams.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1631601