Posted on: 19/11/2025
Description:
Key Responsibilities:
- Implement scalable data models, data warehouses/lakehouses, and data lakes to support analytics and decision-making.
- Collaborate with cross-functional stakeholders to understand business requirements and translate them into technical data solutions.
- Drive performance tuning, monitoring, and reliability of data pipelines.
- Write clean, modular, and production-ready code with proper documentation and testing.
- Contribute to architectural discussions, tool evaluations, and platform setup.
- Mentor junior engineers and participate in code/design reviews.
Must-Have Skills:
- Deep understanding of data engineering concepts such as ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
- Experience with distributed data processing frameworks (e.g., Apache Spark, Flink, or similar).
- Proficiency in Java is mandatory.
- Experience with building pipelines using orchestration tools like Airflow or similar.
- Familiarity with CI/CD pipelines and version control tools like Git.
- Ability to debug, optimize, and scale data pipelines in real-world settings.
Good to Have:
- Exposure to Databricks, dbt, or similar platforms is a plus.
- Experience with Snowflake is preferred.
- Understanding of data governance, data quality frameworks, and observability.
- Certification in AWS (e.g., Data Analytics, Solutions Architect) or Databricks is a plus.
Other Expectations:
- Strong analytical and problem-solving skills with attention to detail.
- Ability to adapt across tools, stacks, and business domains.
- Willingness to travel within India for short/medium-term client engagements as needed.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1577461