Posted on: 07/11/2025
Job Title : Data Engineer
Experience : 5+ Years
Location : Indore
Employment Type : Full-Time
Mandatory Tech Stack :
- Programming: Python
- Big Data Processing: PySpark
- Database: SQL
- ETL Tools & Processes
- Platform: Databricks
Key Responsibilities :
- Design, develop, and maintain scalable data pipelines and ETL workflows.
- Work with large datasets using PySpark for data transformation and processing.
- Write and optimize SQL queries for data extraction, reporting, and analytics.
- Implement data quality checks, validation, and monitoring mechanisms.
- Collaborate with Data Scientists, Analysts, and Business stakeholders to deliver insights.
- Develop and maintain solutions on Databricks for data integration and analytics.
- Ensure compliance with data governance, security, and performance standards.
- Troubleshoot data-related issues and optimize pipeline performance.
Required Skills & Experience :
- Minimum of 5 years of experience in Data Engineering.
- Strong expertise in Python and PySpark for big data processing.
- Proven ability to design and maintain ETL pipelines.
- Proficiency in SQL and experience working with both relational and NoSQL databases.
- Hands-on experience with Databricks.
- Good understanding of data modeling, data warehousing concepts, and distributed systems.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Good to Have :
- Experience with cloud platforms (AWS, Azure, GCP).
- Familiarity with Delta Lake, Apache Kafka, or similar technologies.
- Exposure to Agile methodologies and DevOps practices.
- Knowledge of data security, compliance, and governance frameworks.
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1570450