Posted on: 31/07/2025
Job Description:
We are looking for a Senior Data Engineer with strong hands-on experience in building scalable data pipelines and real-time processing systems. You will be part of a high-impact team focused on modernizing our data architecture, enabling self-serve analytics, and delivering high-quality data products. This role is ideal for engineers who love solving complex data challenges, have a growth mindset, and are excited to work on both batch and streaming systems.
Responsibilities:
- Build and maintain real-time and batch data pipelines using tools like Kafka, Spark, and Airflow.
- Contribute to the development of a scalable lakehouse architecture using open table formats such as Delta Lake, Hudi, or Iceberg.
- Optimize data ingestion and transformation workflows across cloud platforms (AWS, GCP, or Azure).
- Collaborate with Analytics and Product teams to deliver data models, marts, and dashboards that drive business insights.
- Support data quality, lineage, and observability using modern practices and tools.
- Participate in Agile processes (Sprint Planning, Reviews) and contribute to team knowledge sharing and documentation.
- Contribute to building data products for inbound (ingestion) and outbound (consumption) use cases across the organization.
Requirements:
- 5-8 years of experience in data engineering or backend systems with a focus on large-scale data pipelines.
- Hands-on experience with streaming platforms (e.g., Kafka) and distributed processing tools (e.g., Spark or Flink).
- Working knowledge of lakehouse table formats (Delta/Hudi/Iceberg) and columnar storage formats such as Parquet.
- Proficient in building pipelines on AWS, GCP, or Azure using managed services and cloud-native tools.
- Experience with Airflow or similar orchestration platforms.
- Strong data modeling skills and experience optimizing cloud data warehouses such as Redshift, BigQuery, or Snowflake.
- Exposure to real-time OLAP tools like ClickHouse, Druid, or Pinot.
- Familiarity with observability tools such as Grafana, Prometheus, or Loki.
- Some experience integrating data with MLOps tools like MLflow, SageMaker, or Kubeflow.
- Ability to work within Agile practices using JIRA and Confluence, and to participate in engineering ceremonies.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1522643