Posted on: 16/12/2025
Key Responsibilities:
- Lead and mentor a data engineering team driving scalable, reliable platform delivery
- Architect high-volume batch and streaming data pipelines across diverse data sources
- Own real-time data lakes and analytics-ready warehouse architectures
- Optimize Kafka-based streaming systems for throughput, latency, and reliability (a producer-tuning sketch follows this list)
- Implement observability metrics, dashboards, and alerts for operational KPIs
- Collaborate with product and platform teams on data contracts and integrations
- Drive research and adoption of modern streaming and lakehouse technologies
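
Since Kafka throughput and reliability tuning appear in both the responsibilities and the required expertise, here is a minimal sketch of the kind of producer configuration a candidate might be expected to reason about. It uses the confluent-kafka Python client; the broker address, topic name, and config values are hypothetical starting points, not figures from this posting.

```python
# A minimal sketch of throughput-oriented Kafka producer tuning using the
# confluent-kafka client. Broker address, topic, and config values are
# hypothetical illustrations, not taken from this posting.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "broker-1:9092",  # hypothetical broker
    "linger.ms": 20,            # wait up to 20 ms to build larger batches
    "batch.size": 262144,       # bigger batches -> fewer network round trips
    "compression.type": "lz4",  # cheap compression, better effective throughput
    "acks": "all",              # full-ISR acknowledgment favors reliability
    "enable.idempotence": True, # avoid duplicates on broker retries
})

def on_delivery(err, msg):
    # Surface per-message delivery failures so reliability regressions are visible.
    if err is not None:
        print(f"delivery failed for {msg.topic()}: {err}")

for i in range(1000):
    producer.produce("events", value=f"event-{i}".encode(), on_delivery=on_delivery)
    producer.poll(0)  # serve delivery callbacks without blocking the send loop

producer.flush()  # block until every queued message is acknowledged
```

The trade-off worth articulating: linger time and batch size raise throughput at the cost of per-message latency, while acks=all and idempotence trade a little speed for stronger delivery guarantees.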
SKILL:
The Expertise We Require:
The following are the concrete, demonstrable capabilities needed to take full ownership of the role's responsibilities.
- 10+ years of experience in data engineering or backend engineering
- 2+ years in a technical leadership or team-lead role
- Expert-level experience with Kafka for high-throughput streaming systems
- Strong hands-on expertise with PySpark for distributed data processing (a streaming-ingest sketch follows this list)
- Advanced experience with AWS Glue for ETL orchestration and metadata management
- Proven experience building and upgrading real-time data lakes at scale
- Hands-on knowledge of data warehouses such as Redshift or Snowflake
- Experience with AWS services including S3, Kinesis, Lambda, and RDS
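
As a concrete illustration of the PySpark, Kafka, and real-time data lake expertise listed above, the sketch below consumes a Kafka topic with Structured Streaming and lands it in an S3 data lake as parquet. The broker, topic, and bucket names are hypothetical, and it assumes the spark-sql-kafka connector package is on the classpath.

```python
# A minimal PySpark Structured Streaming sketch: consume a Kafka topic and
# write it to an S3 data lake as parquet. All names here are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-to-lake").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # hypothetical broker
    .option("subscribe", "events")                       # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to strings before landing the data.
events = raw.select(
    col("key").cast("string"),
    col("value").cast("string"),
    col("timestamp"),
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-lake/events/")        # hypothetical bucket
    .option("checkpointLocation", "s3a://example-lake/checkpoints/events/")
    .trigger(processingTime="1 minute")  # micro-batch cadence for the lake
    .start()
)
query.awaitTermination()
```

The checkpoint location is what makes the pipeline restartable without data loss, which is the kind of operational detail the observability and reliability responsibilities above are pointing at.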
PROCESS: Transparent Path to Partnership
This process is a mutual discovery: iMerit, Pieworks (your guide), and your referrer are all invested in confirming your fit and potential.
- HR Screening
- Technical Round
- Hiring Manager Round
- Founder Discussion
Posted in: Data Engineering
Functional Area: Engineering Management
Job Code: 1591270