Posted on: 07/01/2026
About the Opportunity:
Join an engineering-focused team building scalable, production-grade data platforms on Google Cloud.
You will design, implement, and operate end-to-end data pipelines that power analytics, reporting, and ML use cases, optimising for cost, performance, and reliability.
Role & Responsibilities:
- Design, build, and maintain scalable ETL/ELT pipelines on GCP (ingest, transform, publish) using Cloud Dataflow, BigQuery, and Pub/Sub (see the pipeline sketch after this list).
- Implement data modelling and data warehouse solutions in BigQuery to support analytics and BI workloads.
- Author reusable infrastructure-as-code for data platforms using Terraform and manage deployments across environments.
- Develop Python-based data processing jobs and optimise SQL for performance and cost on large-scale datasets.
- Troubleshoot pipeline failures, implement monitoring and alerting, and drive SLO/SLA improvements for data products.
- Collaborate with data consumers, analytics, and ML teams to translate requirements into robust, observable data solutions.
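To make the first responsibility concrete, here is a minimal sketch of the kind of pipeline described: Apache Beam in Python reading JSON events from Cloud Pub/Sub and appending them to BigQuery, runnable on Cloud Dataflow. It is illustrative only; the project, subscription, table, and schema names are hypothetical placeholders, not part of this posting.

```python
# Minimal, hypothetical sketch: stream JSON events from Pub/Sub into BigQuery.
# All resource names (project, subscription, dataset.table) are placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(message: bytes) -> dict:
    """Decode a Pub/Sub payload into a row dict matching the BigQuery schema."""
    event = json.loads(message.decode("utf-8"))
    return {
        "user_id": event["user_id"],
        "event_type": event["event_type"],
        "ts": event["ts"],
    }


def run() -> None:
    # streaming=True marks this as an unbounded (streaming) pipeline;
    # runner/project/region flags are supplied on the command line.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/example-project/subscriptions/events-sub"
            )
            | "ParseJson" >> beam.Map(parse_event)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.events",
                schema="user_id:STRING,event_type:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```

Locally this runs on Beam's DirectRunner for testing; on Cloud Dataflow the same code is launched with --runner=DataflowRunner plus the usual --project, --region, and --temp_location flags.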
Skills & Qualifications:
Must-Have:
- Google BigQuery.
- Cloud Dataflow.
- Cloud Pub/Sub.
- Python.
- SQL.
- Terraform.
Preferred:
- Apache Beam.
- Apache Airflow.
- Kubernetes.
Qualifications & Other Requirements:
- Proven hands-on experience building data pipelines and data warehouses on Google Cloud.
- Ability to work on-site in India and collaborate closely with cross-functional teams.
- Strong focus on engineering best practices: testing, CI/CD, observability, and cost optimisation.
Benefits & Culture Highlights:
- Opportunity to shape cloud data architecture and work on high-impact enterprise analytics projects.
- Collaborative, technically driven environment with mentorship and scope for skill growth.
- Competitive compensation and career progression for high-performing engineers.
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1597865