Posted on: 12/11/2025
Description:
About the Role:
We are looking for an experienced Lead Data Engineer to architect, build, and optimize large-scale data systems that power analytics and machine learning initiatives across the organization. This role requires a strong technical leader with deep expertise in data engineering, modern data architectures, and cloud-based ecosystems.
Key Responsibilities:
Data Architecture & Pipelines:
- Design and implement scalable data pipelines, data lakes, and data warehouse solutions for large and complex datasets.
ETL/ELT Frameworks:
- Architect efficient ETL/ELT workflows, ensuring best practices for data ingestion, transformation, storage, and governance.
Workflow Optimization:
- Build and manage both batch and real-time data processing pipelines to enable seamless analytics and operational insights.
Collaboration & Accessibility:
- Partner with data scientists, analysts, and backend engineers to ensure data is discoverable, reliable, and ready for use.
Data Modeling & Performance:
- Lead data modeling, schema design, and warehouse optimization to enhance performance and scalability.
Data Quality & Compliance:
- Implement data quality frameworks, observability tools, and governance practices to maintain accuracy, consistency, and compliance.
Leadership & Mentorship:
- Provide technical guidance and mentorship to the data engineering team, conduct code reviews, and drive engineering best practices.
Strategic Collaboration:
- Work closely with product and business leadership to translate data requirements into robust, scalable architecture and actionable insights.
Requirements:
- Experience: 6-8 years of hands-on data engineering experience, including at least 2 years in a lead or mentoring capacity.
- Programming Skills: Strong proficiency in Python or Scala for data pipeline development.
- Database Expertise: Deep knowledge of SQL and experience with relational databases such as PostgreSQL and MySQL.
- Big Data Tools: Proven experience with Apache Spark, Kafka, Airflow, Snowflake, or Redshift.
- Cloud Platforms: Solid understanding of AWS, GCP, or Azure data ecosystems.
- Data Architecture: Expertise in data modeling, schema design, and performance tuning for large-scale systems.
- Governance & Monitoring: Experience implementing data governance, quality checks, and monitoring frameworks.
- DevOps Integration: Familiarity with Docker, Kubernetes, CI/CD pipelines, and Git for modern deployment practices.
- Soft Skills: Excellent analytical thinking, communication, and leadership skills, with the ability to drive cross-functional technical initiatives.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1574102