Posted on: 27/03/2026
Description:
Role & Responsibilities:
We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency; GCP/BigQuery expertise is preferred.
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows
- Work with Dataform or dbt to implement transformation logic and data models
- Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
- Support data migration initiatives and data mesh architecture patterns
- Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
- Apply data governance and quality best practices across the data lifecycle
- Troubleshoot pipeline issues and drive proactive monitoring and resolution
Ideal Candidate:
Strong Data Engineer Profile:
Mandatory (Experience 1): Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
Mandatory (Experience 2): Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
Mandatory (Experience 3): Must have hands-on experience with at least one of Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
Mandatory (Experience 4): Must have working experience on a cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, or ADLS).
Mandatory (Core Skill 1): Must have strong SQL skills, with experience writing complex queries and optimizing performance.
Mandatory (Core Skill 2): Must have programming experience in Python and/or SQL for data processing.
Mandatory (Core Skill 3): Must have experience building and maintaining scalable data pipelines and troubleshooting data issues.
Preferred (Experience 1): Exposure to data migration projects and/or data mesh architecture concepts.
Preferred (Skill 1): Experience with Spark/PySpark or other large-scale data processing frameworks.
Preferred (Company): Experience working in product-based companies or data-driven environments.
Preferred (Education): Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1624065