Posted on: 03/10/2025
Key Responsibilities:
- Develop robust data pipelines and ETL workflows using Python and Apache Spark (a minimal sketch follows this list).
- Orchestrate complex workflows using Databricks Workflows or Azure Data Factory.
- Translate business rules, retention metadata, and data governance policies into reusable, modular, and scalable pipeline components.
- Ensure adherence to data privacy, security, and compliance standards (e.g., GDPR, HIPAA).
- Collaborate with cross-functional teams including data architects, analysts, and business stakeholders to align data solutions with business goals.
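As an illustration of the pipeline work described above, here is a minimal PySpark extract-transform-load sketch that lands raw JSON into a Delta table. It assumes a Databricks or delta-spark-enabled session; the paths, column names, and job name are hypothetical placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders_etl")  # hypothetical job name
    .getOrCreate()
)

# Extract: read raw events from a landing zone (placeholder path).
raw = spark.read.json("/mnt/landing/orders/")

# Transform: deduplicate, enforce types, and drop incomplete records.
cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("order_id").isNotNull())
)

# Load: append into a Delta table partitioned by ingest date.
(
    cleaned.withColumn("ingest_date", F.current_date())
    .write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save("/mnt/curated/orders")
)
```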
Required Skills & Qualifications:
- 4 to 7 years of experience in data engineering.
- Expert-level proficiency in Python, Apache Spark, and Delta Lake.
- Strong experience with Databricks Workflows and/or Azure Data Factory.
- Deep understanding of data governance, metadata management, and business rule integration (see the retention sketch after this list).
- Strong communication skills.
- Experience with cloud platforms such as Azure.
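To illustrate the governance and retention side of the role, below is a hedged sketch of how a retention policy might be enforced on a Delta table via the delta-spark Python API. The table path and the retention window are assumed values for illustration only, continuing the hypothetical orders table from the earlier sketch.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

RETENTION_DAYS = 7 * 365  # assumed policy value, not from this posting

orders = DeltaTable.forPath(spark, "/mnt/curated/orders")  # placeholder path

# Purge rows that have aged past the retention window...
orders.delete(
    F.col("ingest_date") < F.date_sub(F.current_date(), RETENTION_DAYS)
)

# ...then remove data files no longer referenced by the table
# (uses the table's default file-retention threshold).
orders.vacuum()
```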
Preferred Qualifications:
- Experience with CI/CD pipelines and DevOps practices in data engineering.
- Familiarity with data cataloging and data quality tools.
- Certifications in Azure Data Engineering or related technologies.
- Exposure to enterprise data architecture and modern data stack tools.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1555277