
Job Description

About Rebel Foods :


The world's leading consumer companies are all technology / new-age companies - Amazon (retail), Airbnb (hospitality), Uber (mobility), Netflix / Spotify (entertainment). The only sector where traditional companies are still the largest is restaurants - McDonald's (with a market cap of 130 BN USD). With food delivery growing exponentially worldwide, there is an opportunity to build the world's most valuable restaurant company on the internet, superfast. We have the formula to be that company.


Today, we can safely say we are the world's largest delivery-only / internet restaurant company, and by a wide margin, with 4000+ individual internet restaurants in 75+ cities and 3 countries (India, UAE, UK). It's still Day 1, but we know we are onto something very, very big. We have a once-in-a-lifetime opportunity to change a 500-year-old industry that hasn't been disrupted at its core by technology. For more details on how we are changing the restaurant industry from the core, please refer to the links below. It's important reading if you want to know our company better and really explore working with us:

https://spirit.rebelfoods.com/why-is-rebel-foods-hiring-super-talented-engineers-b88586223ebe

https://spirit.rebelfoods.com/how-to-build-1000-restaurants-in-24-months-the-rebel-method-cb5b0cea4dc8

https://spirit.rebelfoods.com/winning-the-last-frontier-for-consumer-internet-5f2a659c43db

https://spirit.rebelfoods.com/a-unique-take-on-food-tech-dcef8c51ba41

Job Description :


We are seeking a Senior Data Engineer (Individual Contributor) to build and scale the data infrastructure that powers analytics, machine learning, and business intelligence across the Data Science & Analytics (DSA) team. This is a highly hands-on technical role that combines deep expertise in data engineering, distributed systems, and cloud data platforms to enable reliable, scalable data solutions across Rebel's fast-evolving business landscape.

You will be responsible for designing and maintaining robust data pipelines, scalable data models, and efficient data platforms that support high-impact use cases across the organization. You will work closely with data scientists, analysts, product managers, and engineering teams to ensure data is reliable, accessible, and production-ready for critical decision-making systems.

Our work spans inventory forecasting, order prediction, marketing optimization, delivery time prediction, personalization, customer insights, product recommendations, supply chain planning, and capacity optimization. We are looking for someone who enjoys solving complex data engineering challenges, has strong ownership, and is excited to build scalable systems that power data-driven decisions at scale.

What our team owns :


At Rebel, the Data Science & Analytics team builds and maintains the data infrastructure, pipelines, and data models that power analytics, machine learning systems, reporting automation, and real-time dashboards used across the organization. We interact with teams across every business area, including Supply, Operations, Demand Generation, D2C, Finance, Brands, Customer Delight, Central Planning, and many more.

Our focus is on building reliable, scalable, and cost-efficient data platforms that enable continuous insight generation while supporting advanced ML use cases. We balance delivering immediate business impact with building strong data foundations that support long-term scalability and reliability.

What You'll Own :


- Design, build, and maintain scalable data pipelines and data infrastructure to ingest, transform, and serve data across batch and near real-time systems.

- Develop reliable ETL / ELT workflows to integrate data from multiple operational systems into centralized data platforms.

- Design and maintain clean, scalable data models that support analytics, BI dashboards, and machine learning pipelines.

- Partner closely with Data Scientists and Analysts to build and maintain feature pipelines and enable seamless productionization of ML models.

- Ensure data quality, observability, and monitoring across pipelines through validation checks, alerts, and performance optimization.

- Optimize pipelines and data platforms for performance, scalability, and cost efficiency in a cloud environment.

- Collaborate with cross-functional teams to translate business requirements into scalable data engineering solutions.

Ideal Background & Skills :


- 5-8 years of industry experience in Data Engineering, Data Platform Engineering, or related roles.

- Strong experience designing and building scalable ETL / ELT pipelines and distributed data systems.

- Hands-on experience with Python and SQL for building production-grade data workflows.

- Experience working with cloud platforms such as AWS and services like S3, Glue, EMR, Lambda, or similar tools.

- Experience with data orchestration frameworks such as Airflow, Prefect, or Dagster.

- Strong understanding of data modeling, schema design, and data warehouse concepts (dimensional modeling, star schema).

- Experience with modern data warehouses or lakehouse architectures such as Redshift, Snowflake, BigQuery, or Delta Lake.

- Ability to work closely with cross-functional teams and translate data requirements into scalable engineering solutions.

Nice to Have :


- Experience with real-time or streaming data pipelines using tools like Kafka, Kinesis, or Spark Streaming.

- Exposure to ML pipelines, feature stores, or MLOps workflows supporting production ML systems.

- Experience building data observability and monitoring systems for large-scale data pipelines.

- Familiarity with fast-paced start-up environments and agile ways of working.
