Posted on: 17/12/2025
Role Overview:
We are seeking a Senior Data Engineer to design, build, and maintain scalable, high-performance data platforms and pipelines. The role focuses on data ingestion, transformation, storage, and reliability across batch and real-time systems. You will work closely with analytics, data science, and platform teams to deliver trusted, production-grade data solutions.
Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines for batch and streaming data.
- Build and optimize data lakes and data warehouses to support analytics and downstream consumption.
- Design and implement data models (dimensional, normalized, and analytical).
- Ensure data quality, consistency, lineage, and observability across pipelines.
- Optimize data processing performance, cost, and reliability.
- Work with stakeholders to translate business requirements into technical data solutions.
- Develop and maintain data integration frameworks and reusable components.
- Perform code reviews and enforce data engineering best practices.
- Support production systems, including troubleshooting and root cause analysis.
- Contribute to documentation, standards, and continuous improvement initiatives.
Required Skills & Experience:
- 5-8 years of hands-on experience in Data Engineering roles.
- Strong expertise in SQL, including complex queries, optimization, and performance tuning.
- Proficiency in Python or Scala for data pipeline development.
- Experience building ETL/ELT pipelines using modern data engineering frameworks.
- Strong understanding of data warehousing and data lake architectures.
- Hands-on experience with batch and streaming data processing.
- Experience with big data technologies such as Spark, Databricks, or equivalent.
- Strong understanding of distributed systems and data processing concepts.
- Experience working with cloud data platforms (AWS, Azure, or GCP).
- Experience in logical and physical data modelling.
- Strong focus on data quality, validation, and monitoring.
- Understanding of data governance, security, and access controls.
- Proficiency with Git and collaborative development workflows.
- Experience with CI/CD pipelines for data workloads.
- Familiarity with Agile/Scrum methodologies.
- Ability to write clean, maintainable, well-documented code.
Good to Have:
- Experience with orchestration tools such as Airflow, Prefect, or Azure Data Factory.
- Exposure to streaming platforms (Kafka, Kinesis, Event Hubs).
- Experience with NoSQL databases and columnar storage formats (Parquet, Delta, Iceberg).
- Knowledge of BI and reporting tools (Power BI, Tableau).
- Familiarity with data security and compliance practices.
Education: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
Posted by: Putrevu Sai Chaitanya, Recruiter at Cubic Transportation Systems India Pvt. Ltd.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1591221