Posted on: 29/11/2025
Description:
Core Responsibilities:
- Design and manage data pipelines in the cloud.
- Build and maintain systems for collecting, transforming, integrating, and delivering customer data.
- Perform data processing and transformation using technologies such as Apache Spark and cloud-native services (a minimal PySpark sketch follows this list).
- Integrate data from multiple sources into centralized data warehouses.
- Explore and evaluate new technologies and architectural patterns.
- Collaborate with agile teams and actively participate in Scrum ceremonies.
- Utilize source control systems effectively and manage CI/CD pipelines.
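For orientation, here is a minimal PySpark sketch of the kind of collect-transform-deliver pipeline described above; the bucket URIs, column names, and app name are hypothetical placeholders, not details from this posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("customer-data-pipeline").getOrCreate()

    # Collect: read raw customer events from a hypothetical JSON landing zone.
    raw = spark.read.json("s3://example-landing-zone/customer_events/")

    # Transform: normalize the timestamp, derive a partition column, dedupe.
    cleaned = (
        raw.withColumn("event_ts", F.to_timestamp("event_ts"))
           .withColumn("event_date", F.to_date("event_ts"))
           .dropDuplicates(["event_id"])
    )

    # Deliver: write partitioned Parquet for downstream warehouse loads.
    (cleaned.write
            .mode("append")
            .partitionBy("event_date")
            .parquet("s3://example-curated-zone/customer_events/"))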
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proficiency in Python, Apache Spark, and at least one cloud platform (Azure, AWS, or GCP).
- Strong understanding of ETL/ELT frameworks.
- Familiarity with data warehousing platforms such as Snowflake, Redshift, BigQuery, or Synapse.
- Knowledge of various data formats, including JSON, Avro, and Parquet.
- Strong command of SQL for data querying and manipulation (see the sketch after this list).
- Ability to quickly adapt to and implement new tools and technologies.
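As a short illustration of the SQL and data-format points above, the sketch below queries Parquet data through Spark SQL; the view name, columns, and path are illustrative only.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-example").getOrCreate()

    # Expose curated Parquet data to Spark SQL as a temporary view.
    events = spark.read.parquet("s3://example-curated-zone/customer_events/")
    events.createOrReplaceTempView("customer_events")

    # A typical aggregation expressed in plain SQL.
    daily_counts = spark.sql("""
        SELECT event_date, COUNT(*) AS events
        FROM customer_events
        GROUP BY event_date
        ORDER BY event_date
    """)
    daily_counts.show()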
Preferred Qualifications:
- Cloud certification from one of the major cloud providers (AWS, Azure, or GCP).
- Experience with tools and platforms such as Snowflake, PySpark, Apache Airflow, Terraform, and Looker (an illustrative Airflow DAG follows this list).
- Familiarity with CI/CD and collaboration tools such as Jenkins, GitLab, Jira, and Confluence.
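Below is a minimal Apache Airflow DAG of the sort this role might maintain, assuming Airflow 2.x-style imports; the DAG id, schedule, and task bodies are hypothetical stand-ins for real extract/transform jobs.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull raw customer data")  # placeholder extract step

    def transform():
        print("run the Spark transformation job")  # placeholder transform step

    with DAG(
        dag_id="customer_data_pipeline",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task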
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1582470