Posted on: 15/10/2025
Description :
Key Responsibilities :
Data Pipeline Development :
- Design and implement robust batch and real-time streaming data pipelines leveraging Azure Databricks and Apache Spark.
- Build scalable and maintainable ETL/ELT workflows to support data ingestion, transformation, and integration from diverse sources.
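For illustration, a minimal PySpark sketch of the kind of streaming ingestion pipeline described above; the schema, source path, and checkpoint location are hypothetical placeholders, not details from this posting, and the Delta format assumes a Databricks (or delta-enabled) environment.

```python
# Minimal sketch: ingest JSON files as a stream and append them to a Delta table.
# All paths and the schema below are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest-example").getOrCreate()

raw_stream = (
    spark.readStream
    .format("json")
    # Streaming file sources require an explicit schema.
    .schema("id STRING, event_ts TIMESTAMP, payload STRING")
    .load("/mnt/landing/events/")
)

(
    raw_stream.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events/")
    .outputMode("append")
    .start("/mnt/bronze/events/")
)
```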
Data Architecture Implementation :
- Apply the Medallion Architecture to organize data into distinct layers (raw, enriched, and curated), ensuring clarity, quality, and usability of data assets.
- Collaborate in designing data models and schemas that support analytics and business intelligence needs.
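A minimal sketch of a Medallion-style refinement step, assuming hypothetical Delta paths for the raw (bronze), enriched (silver), and curated (gold) layers; the cleaning rules and aggregation are illustrative only.

```python
# Minimal sketch: refine a bronze table into silver, then aggregate into gold.
# Paths, columns, and rules are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.format("delta").load("/mnt/bronze/events/")

silver = (
    bronze
    .dropDuplicates(["id"])                    # remove duplicate records
    .filter(F.col("event_ts").isNotNull())     # basic validity rule
    .withColumn("event_date", F.to_date("event_ts"))
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events/")

# Curated layer: a business-facing daily aggregate.
gold = silver.groupBy("event_date").agg(F.count("*").alias("event_count"))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/daily_event_counts/")
```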
Data Quality & Governance :
- Define and enforce data quality rules and principles to maintain data accuracy and consistency across the pipeline.
- Implement data governance policies and manage metadata using tools like Azure Purview and Unity Catalog.
- Monitor data lineage and ensure compliance with organizational and regulatory standards.
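A minimal sketch of one way a data quality rule might be enforced before publishing a table; the rule, table path, and the optional Delta CHECK constraint shown in the comment are illustrative assumptions, not a prescribed framework.

```python
# Minimal sketch: fail the pipeline if a quality rule is violated.
# The rule and paths below are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.read.format("delta").load("/mnt/silver/events/")

# Rule: no null ids and no events dated in the future.
violations = df.filter(
    F.col("id").isNull() | (F.col("event_ts") > F.current_timestamp())
).count()

if violations > 0:
    raise ValueError(f"Data quality check failed: {violations} violating rows")

# Alternatively, a Delta CHECK constraint can enforce a rule at write time:
# spark.sql("ALTER TABLE silver.events ADD CONSTRAINT valid_id CHECK (id IS NOT NULL)")
```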
Performance Optimization :
- Optimize Spark jobs, Delta Lake tables, and SQL queries for improved efficiency, reduced latency, and cost-effectiveness.
- Identify and resolve bottlenecks in data processing workflows to ensure high availability and reliability.
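A minimal sketch of common Delta Lake and Spark SQL tuning steps on Databricks; the table name, Z-ORDER column, and query are illustrative assumptions.

```python
# Minimal sketch: routine Delta Lake maintenance and query inspection.
# Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate data on a frequently filtered column.
spark.sql("OPTIMIZE silver.events ZORDER BY (event_date)")

# Remove data files no longer referenced by the table (default retention applies).
spark.sql("VACUUM silver.events")

# Inspect the physical plan of a slow query to spot missing pruning or skew.
spark.sql(
    "SELECT event_date, COUNT(*) FROM silver.events "
    "WHERE event_date >= '2025-01-01' GROUP BY event_date"
).explain()
```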
Collaboration & Delivery :
- Work closely with data analysts, solution architects, business stakeholders, and other engineering teams to deliver end-to-end data solutions aligned with business goals.
- Participate in Agile/Scrum ceremonies and contribute to continuous improvement initiatives within the team.
Required Qualifications :
- Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field.
- 5+ years of experience in data engineering, with strong expertise in Azure Databricks and Apache Spark.
- Proficiency in data integration techniques, ETL/ELT processes, and data pipeline design.
- Experience implementing Medallion Architecture or similar data layering concepts.
- Knowledge of Delta Lake and experience managing large-scale data lakes.
- Familiarity with data quality frameworks and governance tools such as Azure Purview and Unity Catalog.
- Strong SQL skills and experience optimizing complex queries for performance.
- Experience with cloud data storage, processing, and orchestration tools within the Azure ecosystem.
- Good understanding of data security, privacy, and compliance requirements.
- Excellent communication, collaboration, and problem-solving skills.
Posted By
Vinay Sakarkar
Deputy Manager - Talent Acquisition at InfoCepts Technologies Pvt. Ltd.
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1561250