Posted on: 04/08/2025
About the Role:
We are on an exciting journey to scale our advanced analytics and data engineering capabilities.
We are seeking a Lead Data Lake Developer with strong expertise in architecting and building enterprise-grade data lake solutions on AWS.
This is a leadership role where you will own the design, development, and governance of robust, scalable, and cost-effective data platforms to support a range of analytics and business intelligence initiatives.
Key Responsibilities:
- Lead the design and implementation of modern Data Lake architectures using AWS and related technologies.
- Architect and build scalable data ingestion pipelines integrating data from diverse sources using ETL/ELT tools.
- Develop and maintain Delta Lake solutions on AWS S3 using Databricks, Apache Hudi, or similar technologies.
- Design, build, and manage enterprise data warehouses and analytical data models (Star, Snowflake, Flattened).
- Define and implement data access patterns supporting both OLTP and OLAP use cases.
- Provide technical leadership across the software development lifecycle, from requirements through deployment.
- Own code versioning, CI/CD pipelines, and DevOps practices for the data engineering stack.
- Collaborate with cross-functional teams including business analysts, data scientists, and solution architects.
- Ensure reliability, performance, security, and cost-efficiency of the data platform.
Required Skills & Qualifications:
- 10+ years of experience in Data Engineering, with at least 5 years in cloud-native Data Lake implementations.
- Proven hands-on expertise in AWS services including S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS, and Redshift.
- Strong experience building Delta Lakes using Databricks or Apache Hudi.
- Proficiency in distributed computing frameworks such as Apache Spark.
- Deep understanding of ETL/ELT tools and techniques.
- Proficient in at least one modern programming language (Python, Scala, Java, or R).
- Expertise in designing and implementing data models and scalable data warehouses (Snowflake, Redshift, HANA, Teradata, etc.).
- Hands-on experience in DevOps practices including version control (Git), CI/CD pipelines, and infrastructure as code.
- Experience in leading Agile/Scrum-based projects and mentoring junior engineers.
- Bachelor's degree in Computer Science, Information Technology, Data Science, or related field.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Data Analytics Specialty, AWS Certified Solutions Architect).
- Experience with real-time data processing frameworks and streaming data solutions.
- Exposure to data governance and metadata management tools.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1524023