Posted on: 28/03/2026
Description:
- Work extensively with GCS, Dataproc, BigQuery, and Composer (Airflow) for data ingestion, processing, orchestration, and analytics.
- Develop and optimise PySpark-based data processing jobs for large datasets.
- Ensure data quality, reliability, and performance through monitoring and optimisation.
- Collaborate with data analysts, architects, and business stakeholders to understand requirements and deliver data solutions.
- Implement best practices in data engineering, including data governance, security, and cost optimisation.
- Troubleshoot performance bottlenecks and provide scalable solutions.
- Maintain documentation for pipelines, workflows, and system architecture.
Required Skills & Qualifications:
- 4-7 years of experience in Big Data engineering and cloud-based data platforms.
- Strong hands-on experience with Google Cloud Platform (GCS, Dataproc, BigQuery, Composer/Airflow).
- Proficiency in PySpark and distributed data processing frameworks.
- Solid understanding of ETL/ELT processes, data warehousing, and data modelling.
- Experience with workflow orchestration tools and pipeline automation.
- Good knowledge of SQL, scripting languages, and performance tuning.
- Strong analytical, problem-solving, and communication skills.
Preferred / Good to Have:
- Experience with CI/CD pipelines and DevOps practices.
- Exposure to other cloud platforms or modern data stack tools.
- Knowledge of data security, governance, and compliance standards.
Why Join Us:
- Work on large-scale data platform and cloud migration projects.
- Be part of a dynamic, high-growth environment at NucleusTeq.
- Competitive salary and comprehensive benefits package.
Posted by: Namrata Solanki, Technical Recruiter - Human Empowerment at NucleusTeq Consulting Private Limited
Last Active: 27 Apr 2026
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1624419