Posted on: 06/03/2026
Job Description :
We are looking for a Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and distributed computing to develop and optimize scalable ETL pipelines for manufacturing analytics. The role involves working with high-frequency industrial data to enable real-time and batch data processing.
Key Responsibilities :
- Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.
- Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling to ensure high-quality input for downstream analytics.
- Design and maintain cloud-based data architectures, including data lakes, lakehouses, and warehouses, following the Medallion Architecture.
- Deploy and optimize data solutions on Azure (preferred), AWS, or GCP with a focus on performance, security, and scalability.
- Develop and optimize ETL/ELT pipelines for structured and unstructured data from IoT, MES, SCADA, LIMS, and ERP systems.
- Automate data workflows using CI/CD and DevOps best practices, ensuring security and compliance with industry standards.
- Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.
- Utilize Docker and Kubernetes for scalable data processing.
- Collaborate with the automation team, data scientists, and engineers to provide clean, structured data for AI/ML models.
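To give candidates a concrete sense of the pre-processing responsibilities above (cleaning, deduplication, normalization), here is a toy pure-Python sketch; in the role itself these transforms would run as PySpark DataFrame operations, and the sensor records and field names below are hypothetical.

```python
# Toy illustration of pre-processing steps for high-frequency sensor data:
# cleaning (drop null readings), deduplication, and min-max normalization.
# Field names ("sensor_id", "timestamp", "value") are illustrative only.

def preprocess(records):
    # Cleaning: drop records with missing sensor values.
    cleaned = [r for r in records if r.get("value") is not None]

    # Deduplication: keep the first record per (sensor_id, timestamp) key.
    seen, deduped = set(), []
    for r in cleaned:
        key = (r["sensor_id"], r["timestamp"])
        if key not in seen:
            seen.add(key)
            deduped.append(r)

    # Normalization: min-max scale values into [0, 1].
    values = [r["value"] for r in deduped]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on constant data
    return [dict(r, value=(r["value"] - lo) / span) for r in deduped]

raw = [
    {"sensor_id": "s1", "timestamp": 0, "value": 10.0},
    {"sensor_id": "s1", "timestamp": 0, "value": 10.0},  # duplicate reading
    {"sensor_id": "s1", "timestamp": 1, "value": None},  # missing value
    {"sensor_id": "s2", "timestamp": 0, "value": 30.0},
]
clean = preprocess(raw)
print(clean)  # two records remain, values scaled to 0.0 and 1.0
```

In PySpark the same steps would typically use `DataFrame.na.drop`, `DataFrame.dropDuplicates`, and a scaler from `pyspark.ml.feature`.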
Desired Skills and Qualifications :
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field from a Tier 1 institute (IIT, NIT, IIIT, DTU, etc.).
- 5+ years of experience in core data engineering, with a strong focus on cloud platforms such as Azure (preferred), AWS, or GCP.
- Proficiency in Python, PySpark, Apache Spark, and Azure Databricks.
- 2 years of experience leading a team.
- Expertise in relational databases (e.g., SQL Server, PostgreSQL), time-series databases (e.g., InfluxDB), and NoSQL databases (e.g., MongoDB, Cassandra).
- Experience in containerization (Docker, Kubernetes).
- Strong analytical and problem-solving skills with attention to detail.
- Good to have: experience with MLOps and DevOps, including model lifecycle management.
- Excellent communication and collaboration skills, with a proven ability to work effectively as a team player.
- Comfortable working in a dynamic, fast-paced startup environment, adapting quickly to changing priorities and responsibilities.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1618595