Posted on: 29/10/2025
Description :
Job Title : GCP Data Engineer (Hybrid)
Location :
Bangalore, India
Employment Type :
Contract-to-Hire (C2H), 23-month contract with potential for full-time conversion
Experience Required :
Minimum of 5 years of experience in Data Engineering
About the Role :
Our client is seeking a highly skilled and motivated GCP Data Engineer to join their data engineering team.
The ideal candidate will have strong expertise in SQL, Python, PySpark, and Google Cloud Platform (GCP) services such as BigQuery, DataProc, and Dataflow.
In this hybrid role, you will be responsible for designing and developing scalable data pipelines, optimizing large-scale data processes, and supporting data migration initiatives to GCP.
You'll collaborate closely with cross-functional teams to deliver high-quality, production-ready data solutions in a dynamic, fast-paced environment.
Key Responsibilities :
Data Engineering & Development
- Design, develop, and maintain efficient, scalable, and reusable data pipelines on GCP.
- Write and optimize complex SQL queries, including multi-table joins and query performance tuning.
- Build and manage backend services and APIs using Python frameworks such as FastAPI or Flask.
- Develop and optimize PySpark pipelines for distributed and large-scale data processing.
Cloud Platform & Migration
- Work extensively with Google Cloud Platform (GCP) services including BigQuery, DataProc, Dataflow, and preferably Bigtable.
- Support data migration of ODLs (Operational Data Layers), reporting dashboards, and SOR (System of Record) tables to GCP.
- Ensure data accuracy, consistency, and availability during migration and transformation processes.
Collaboration & Delivery
- Collaborate with data architects, analysts, and business stakeholders to translate business requirements into technical solutions.
- Participate in code reviews, performance optimization, and troubleshooting to ensure best practices are followed.
- Coordinate across cross-functional teams to ensure timely and high-quality project delivery.
- Document data processes, architectures, and workflows for maintenance and knowledge sharing.
Required Technical Skills :
- Strong proficiency in SQL, with expertise in complex joins, subqueries, and performance optimization.
- Hands-on experience in Python, particularly with frameworks such as FastAPI or Flask for backend service development.
- Solid understanding of PySpark for distributed computing and big data processing.
- Proven experience with GCP services, including :
BigQuery : data warehousing and analytical querying.
DataProc : managed Spark and Hadoop clusters.
Dataflow : streaming and batch data pipelines.
Bigtable (preferred) : NoSQL data management.
- Experience in data migration projects from on-premises or other cloud environments to GCP.
- Understanding of ETL/ELT concepts, data modeling, and data architecture principles.
Soft Skills & Attributes :
- Strong analytical and problem-solving skills with a focus on data accuracy and efficiency.
- Excellent communication and collaboration abilities to work effectively with distributed teams.
- Highly organized and self-motivated, with the ability to prioritize multiple tasks.
- Comfortable working in overlapping time zones to coordinate with global teams.
- Adaptability and flexibility to work in a hybrid setup.
Educational Qualification :
Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.
- - - - - - -
Posted in :
Data Engineering
Functional Area :
Data Engineering
Job Code :
1566785