Posted on: 22/12/2025
Description:
Role Overview:
We are hiring a Data Engineer with strong expertise in Java and SQL to build and optimize scalable batch and real-time data pipelines. The role focuses on ETL/ELT, streaming systems, and modern Lakehouse architectures in the cloud (Azure preferred).
Must-Have Skills:
- Strong Java development (OOP, performance tuning)
- Advanced SQL (CTEs, window functions, analytical queries)
- Experience building ETL / ELT pipelines
- Batch & real-time data processing
- Distributed data systems
Good to Have:
- Apache Flink (Streaming, Flink SQL, CDC)
- Apache Spark
- Apache Iceberg / Lakehouse architecture
- OLAP engines: Trino / Presto / ClickHouse
- CDC pipelines (Debezium, Flink CDC)
- Azure Cloud
- Airflow / Dagster
- Kubernetes & CI/CD
Key Responsibilities:
- Build and maintain batch & streaming data pipelines
- Implement complex transformations using Java & SQL
- Design and optimize Lakehouse & Data Warehouse models
- Optimize query and pipeline performance
- Deploy pipelines on cloud and containerized platforms
- Collaborate with Data Science & Product teams
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1593523