Posted on: 17/03/2026
Job Title: Data Engineer (GCP | Spark SQL | ETL)
Location: Hyderabad
Work Mode: Work from office (5 days/week) until knowledge transfer (KT) is complete; hybrid thereafter
Employment Type: Contract to Hire (CTH)
Notice Period: Immediate to 15 days
Job Description:
We are looking for a Data Engineer with strong SQL and GCP experience to join our client delivery team. The role involves working on cloud-based data platforms, optimizing clusters, and building ETL pipelines for large-scale data processing.
Key Responsibilities:
- Develop and optimize SQL queries for large-scale data processing.
- Work on GCP Cloud environment for data engineering solutions.
- Perform cluster optimization and performance tuning.
- Utilize Spark SQL and PySpark for data processing and validation.
- Design and implement ETL pipelines using the existing data platform.
- Analyze and process large datasets to support data analytics and business insights.
- Collaborate with cross-functional teams to ensure data quality and performance.
Required Skills:
- Strong expertise in SQL.
- Hands-on experience with GCP Cloud.
- Experience with Spark SQL and PySpark.
- Knowledge of cluster optimization and performance tuning.
- Strong understanding of data engineering and data analytics concepts.
- Experience in ETL development using existing data platforms.
Important Note:
- This is not a BI role (Power BI / Tableau).
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1621168