Posted on: 26/11/2025
Description :
Roles & Responsibilities :
i. Design and develop scalable data pipelines and ETL processes using Google Cloud data services such as BigQuery, Dataflow, Pub/Sub, and Dataproc (see the pipeline sketch after this list).
ii. Build and optimize data architectures to support AI/ML applications and model training at scale.
iii. Collaborate with data scientists and ML engineers to implement data ingestion, feature engineering, and model-serving pipelines.
iv. Develop and manage data integration solutions that align with enterprise data governance and security standards.
v. Support GenAI/Vertex AI model deployment by ensuring reliable data access and transformation pipelines.
vi. Implement monitoring, logging, and alerting for data workflows, and ensure data quality across all stages (see the data-quality sketch after this list).
vii. Enable self-service analytics by building reusable data assets and data marts for business stakeholders.
viii. Ensure cloud-native, production-grade data pipelines and participate in performance tuning and cost optimization.
ix. Apply programming languages such as Python and SQL, and optionally Java or Scala, across these pipelines.
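
For illustration, here is a minimal sketch of the kind of streaming pipeline described in item i, written with the Apache Beam Python SDK (the programming model behind Dataflow). The project, topic, table, and schema names are hypothetical placeholders, not this employer's actual resources:

```python
# Minimal Beam streaming sketch: Pub/Sub -> JSON parse -> BigQuery.
# All resource names below (my-project, events topic/table) are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # streaming=True; in production this would run on the DataflowRunner.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            # Read raw message bytes from a Pub/Sub topic.
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/events")
            # Decode and parse each message as a JSON row.
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            # Append parsed rows to a BigQuery table.
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",
                schema="user_id:STRING,event:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
        )


if __name__ == "__main__":
    run()
```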
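
Likewise, the data-quality checks in item vi might look like the following sketch, which queries a hypothetical `my-project.analytics.events` table via the google-cloud-bigquery client and fails when a null-rate threshold is breached; a real deployment would route such a check into alerting rather than an exception:

```python
# Data-quality sketch: fail if today's user_id null rate exceeds a threshold.
# The table name and the 1% threshold are illustrative assumptions.
from google.cloud import bigquery


def check_null_rate(project="my-project", threshold=0.01):
    client = bigquery.Client(project=project)
    query = """
        SELECT COUNTIF(user_id IS NULL) AS null_ids, COUNT(*) AS total
        FROM `my-project.analytics.events`
        WHERE DATE(ts) = CURRENT_DATE()
    """
    row = list(client.query(query).result())[0]
    null_rate = row.null_ids / row.total if row.total else 0.0
    # Alerting is reduced to an exception here for brevity.
    if null_rate > threshold:
        raise ValueError(f"user_id null rate {null_rate:.2%} exceeds {threshold:.0%}")
    return null_rate
```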
Professional & Technical Skills :
- Must-Have Skills : Strong experience with Google Cloud data services (BigQuery, Dataflow, Pub/Sub) and hands-on experience building scalable data engineering pipelines.
- Good-to-Have Skills : GenAI/Vertex AI exposure (see the Vertex AI sketch after this list), cloud data architecture, and Google Cloud Professional Cloud Architect (PCA) or Professional Data Engineer (PDE) certifications.
- Understanding of data modeling, data warehousing, and distributed computing frameworks.
- Experience with AI/ML data pipelines, MLOps practices, and model deployment workflows.
- Familiarity with CI/CD and infrastructure-as-code tools (Terraform, Cloud Build, etc.) for data projects.
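
To make the Vertex AI exposure concrete, here is a minimal sketch of registering and deploying a trained model with the google-cloud-aiplatform SDK; the project, region, artifact path, and serving container below are placeholder assumptions:

```python
# Vertex AI sketch: upload a model artifact, deploy it, request a prediction.
# Project, location, artifact URI, and container image are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Register the trained artifact in the Vertex AI Model Registry.
model = aiplatform.Model.upload(
    display_name="demand-forecast",
    artifact_uri="gs://my-bucket/models/demand-forecast/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"),
)

# Deploy to an endpoint for online predictions.
endpoint = model.deploy(machine_type="n1-standard-4")
print(endpoint.predict(instances=[[1.0, 2.0, 3.0]]).predictions)
```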
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1581042