Posted on: 22/07/2025
We are looking for an Associate Architect with 10+ years of experience to help scale and modernize Myntra's data platform. The ideal candidate will have a strong background in building scalable data platforms using a combination of open-source technologies and enterprise solutions. The role demands deep technical expertise in data ingestion, processing, serving, and governance, with a strategic mindset to scale the platform 10x to meet the ever-growing data needs across the organization. This is a high-impact role requiring innovation, engineering excellence, and system stability, with an opportunity to contribute to OSS projects and build data products leveraging available data assets.
Responsibilities:
- Design and scale Myntra's data platform to support growing data needs across analytics, ML, and reporting.
- Architect and optimize streaming data ingestion pipelines using Debezium, Kafka (Confluent), Databricks Spark, and Flink.
- Lead improvements in data processing and serving layers, leveraging Databricks Spark, Trino, and Superset.
- Apply a strong understanding of open table formats such as Delta Lake and Apache Iceberg.
- Scale data quality frameworks to ensure data accuracy and reliability.
- Build data lineage tracking solutions for governance, access control, and compliance.
- Collaborate with engineering, analytics, and business teams to identify opportunities and build and enhance self-serve data platforms.
- Improve system stability, monitoring, and observability to ensure high availability of the platform.
- Work with open-source communities and contribute to OSS projects aligned with Myntra's tech stack.
- Implement cost-efficient, scalable architectures for handling 10B+ daily events in a cloud environment.
Requirements:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 10+ years of experience building large-scale data platforms.
- Expertise in big data architectures using Databricks, Trino, and Debezium.
- Strong experience with streaming platforms, including Confluent Kafka.
- Experience in data ingestion, storage, processing, and serving in a cloud-based environment.
- Hands-on experience implementing data quality checks using Great Expectations.
- Deep understanding of data lineage, metadata management, and governance practices.
- Strong knowledge of query optimization, cost efficiency, and scaling architectures.
- Familiarity with OSS contributions and keeping up with industry trends in data engineering.
Posted in: Data Engineering
Functional Area: Technical / Solution Architect
Job Code: 1517504