Posted on: 11/09/2025
Key Responsibilities:
- Cloud Data Architecture: Architect and deploy large-scale data solutions on Azure, AWS, or GCP.
- CI/CD for Data Pipelines: Define and implement CI/CD strategies using Terraform, Azure DevOps, or GitHub Actions.
- Data Governance: Implement and maintain data cataloging, lineage tracking, and access control solutions (Unity Catalog, Collibra, Alation).
- Compliance & Security: Ensure compliance with GDPR, CCPA, and other industry-specific data security and privacy standards.
- Scalability & Performance: Develop strategies for distributed computing, parallel processing, and caching to improve performance.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver robust data solutions.
Requirements:
- 7+ years of experience in data engineering, cloud data platforms, and distributed systems.
- Strong hands-on experience with Databricks and Apache Spark.
- Proficiency with at least one major cloud provider (Azure, AWS, GCP).
- Experience with CI/CD pipelines and IaC tools (Terraform, Azure DevOps, GitHub Actions).
- Familiarity with data governance tools (Unity Catalog, Collibra, Alation).
- Strong knowledge of data privacy and compliance frameworks (GDPR, CCPA, HIPAA preferred).
- Proven ability to optimize large-scale distributed systems for performance and reliability.
- Excellent problem-solving skills, strong communication, and collaborative mindset.
Nice-to-Have:
- Knowledge of Kubernetes and containerized workloads.
- Experience with streaming platforms (Kafka, Event Hubs, Kinesis).
- Exposure to machine learning workflows on Databricks.
Functional Area: Data Engineering
Job Code: 1544837