Posted on: 25/08/2025
About the Role:
- Building maintainable data pipelines for both data ingestion and operational analytics, over data collected from 2 billion devices and 900M monthly active users.
- Building customer-facing analytics products that deliver actionable insights and make anomalies easy to detect.
- Collaborating with data stakeholders to understand their data needs and taking part in the analysis process.
- Writing design specifications and test, deployment, and scaling plans for the data pipelines.
- Mentoring people across the team and organization.
Requirements:
- 3+ years of experience building and running data pipelines that scale to terabytes of data.
- Proficiency in a high-level object-oriented programming language (Python or Java) is a must.
- Experience with cloud data platforms such as Snowflake and AWS (EMR/Athena) is a must.
- Experience building modern data lakehouse architectures using Snowflake, open table formats like Apache Iceberg/Hudi, and columnar formats like Parquet.
- Proficiency in data modeling, SQL query profiling, and data warehousing is a must.
- Experience with distributed data processing engines like Apache Spark, Apache Flink, Dataflow/Apache Beam, etc.
- Knowledge of workflow orchestrators like Airflow, Dagster, etc. is a plus.
- Data visualization skills are a plus (Power BI, Metabase, Tableau, Hex, Sigma, etc.).
- Excellent verbal and written communication skills.
- Bachelor's degree in Computer Science (or equivalent).
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1535355