Posted on: 24/09/2025
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using GCP services such as Cloud Storage, BigQuery, Composer, DataProc, and Pub/Sub (a brief example follows this list).
- Work with both SQL and NoSQL databases for data extraction, transformation, and analysis.
- Develop and optimize Python scripts for data processing, automation, and workflow orchestration.
- Implement and manage data lake architectures, ensuring proper governance, lifecycle management, and cost optimization.
- Collaborate with cross-functional teams (data science, analytics, product, engineering) to deliver reliable, cloud-based data solutions.
- Apply best practices in ETL processes, cloud data architecture, and data quality frameworks.
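To give a sense of the pipeline work described above, here is a minimal Python sketch that loads newline-delimited JSON from Cloud Storage into a BigQuery staging table using the google-cloud-bigquery client. The project, bucket, and table names are hypothetical placeholders, not details from this posting.

from google.cloud import bigquery

# Hypothetical project, bucket, and table names, used only for illustration.
client = bigquery.Client(project="example-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Load newline-delimited JSON files from a Cloud Storage prefix into a staging table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/events/*.json",
    "example-project.staging.events",
    job_config=job_config,
)
load_job.result()  # block until the load job completes
print(f"Loaded {load_job.output_rows} rows into staging.events")

In a Composer (Airflow) deployment, a step like this would typically run as a task inside a scheduled DAG rather than as a standalone script.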
Required Skills:
- Strong hands-on experience with GCP data services: BigQuery, Pub/Sub, DataProc, Composer, Cloud Storage (see the Pub/Sub sketch after this list).
- Proficiency in Python for data processing and automation.
- Strong understanding of ETL pipelines and data warehousing concepts.
- Experience with SQL (query optimization, schema design) and exposure to NoSQL databases (e.g., MongoDB, Bigtable, Cassandra).
- Knowledge of data lake design, governance, and lifecycle management.
- Familiarity with CI/CD, version control (Git), and modern data workflows.
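As a small illustration of the Pub/Sub and Python skills listed above, the sketch below publishes a JSON payload to a topic with the google-cloud-pubsub client. The project and topic names are hypothetical.

from google.cloud import pubsub_v1

# Hypothetical project and topic names, for illustration only.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "raw-events")

# Pub/Sub payloads must be bytes; extra keyword arguments become string attributes.
future = publisher.publish(
    topic_path,
    b'{"event": "signup", "user_id": 123}',
    source="web",
)
print(f"Published message ID: {future.result()}")

A downstream subscriber would typically consume these messages and land them in Cloud Storage or BigQuery for further processing.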
Good to Have:
- Experience with streaming data pipelines.
- Exposure to data modeling and metadata management.
- Knowledge of data security and compliance best practices.
- Google Cloud Professional Data Engineer Certification.
Education:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1551528