Posted on: 15/10/2025
Key Responsibilities:
- Design, develop, and manage data pipelines using Databricks (Spark, Delta Lake).
- Optimize large-scale data processing workflows for performance and reliability.
- Collaborate with Data Scientists, Analysts, and Business Stakeholders to gather requirements and deliver actionable insights.
- Maintain and enforce data quality and integrity across multiple data sources and systems.
- Work with cloud data platforms such as Azure, AWS, or GCP.
- Implement data governance and lineage tracking using tools like Unity Catalog, Great Expectations, or similar.
- Monitor, debug, and troubleshoot data pipelines and jobs.
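To illustrate the data-quality responsibility above: tools like Great Expectations formalize declarative checks that run before data is loaded downstream. The following is a minimal stdlib-Python sketch of that idea, not the actual Databricks/Great Expectations setup used on the job; all names (`expect_not_null`, `validate`, the `orders` sample) are illustrative.

```python
# Toy sketch of declarative data-quality checks: validate rows against
# a list of expectations and split them into accepted/rejected sets.
# All function and field names here are hypothetical.

def expect_not_null(row, column):
    """Expectation: the column is present and non-empty."""
    return row.get(column) not in (None, "")

def expect_between(row, column, low, high):
    """Expectation: a numeric column falls within [low, high]."""
    value = row.get(column)
    return isinstance(value, (int, float)) and low <= value <= high

def validate(rows, expectations):
    """Split rows into (valid, rejected) against a list of checks."""
    valid, rejected = [], []
    for row in rows:
        if all(check(row) for check in expectations):
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

# Example usage with two simple expectations on a toy dataset.
orders = [
    {"order_id": "A1", "amount": 120.0},
    {"order_id": "", "amount": 55.0},     # fails the not-null check
    {"order_id": "A3", "amount": -10.0},  # fails the range check
]
expectations = [
    lambda r: expect_not_null(r, "order_id"),
    lambda r: expect_between(r, "amount", 0, 10_000),
]
valid, rejected = validate(orders, expectations)
# valid keeps the one clean row; rejected holds the two bad rows
```

In production the same pattern is typically expressed through a governance tool (e.g. Great Expectations suites or Delta Lake constraints) so that checks are versioned and auditable rather than hand-rolled.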
Required Qualifications:
- 7+ years of professional experience in data engineering or similar roles.
- Strong experience with Databricks, Apache Spark, and Delta Lake.
- Proficient in SQL, Python, and distributed data processing concepts.
- Experience working with cloud platforms (Azure/AWS/GCP) and cloud-native tools.
- Hands-on experience with ETL/ELT processes, data warehousing, and the modern data stack.
- Familiarity with CI/CD practices and version control tools (e.g., Git).
- Strong problem-solving skills and ability to work independently or in a team environment.
Posted By
Rahul Sharma
Senior HR Executive at INFOOBJECTS SOFTWARE (INDIA) PRIVATE LIMITED
Posted in
Data Engineering
Functional Area
Data Engineering
Job Code
1560875