
Full Stack Data & ML Engineer

SASHR CONSULTANTS
Mumbai
6-10 Years

Posted on: 29/09/2025

Job Description

We are looking for a Full Stack Data & ML Engineer (Mid-Level) to join our Data & ML team. The ideal candidate will bring strong experience in building data pipelines, managing data infrastructure, and deploying ML models to production.


This is a hands-on, high-impact role that blends data engineering and machine learning to drive actionable intelligence across the business.


Data Engineering (50%):


- Design and build robust ELT/ETL pipelines from app, event, and third-party data sources (batch and streaming).

- Create well-modeled data layers (staging/marts) with testing, documentation, and version control (e.g., dbt).

- Operate and optimize data warehouses/lakes, ensuring data lineage, quality checks, and secure access (PII compliance).

- Contribute to observability, cost tracking, and on-call support for data pipelines.


ML/AI (50%):


- Frame business problems, prepare datasets, and train/evaluate ML models for production use.

- Build and maintain inference services/APIs (e.g., FastAPI, Triton, KServe) with defined latency and cost targets.

- Implement LLM pipelines (RAG), including retrieval evaluation, prompt optimization, and safety guardrails.

- Work on classic ML use cases such as risk scoring, recommendations, churn prediction, uplift modeling, and A/B testing.

- Monitor model drift, data integrity, and performance; maintain detailed runbooks and documentation.


Qualifications & Experience:


- 4-6 years of experience delivering production-grade data systems and ML features.

- Strong expertise in SQL and Python.

- Hands-on experience with dbt and an orchestration tool (Airflow/Prefect/Dagster).

- Proficiency with cloud data warehouses (Snowflake/BigQuery/Redshift) and lake formats (Parquet/Delta/Iceberg).

- ML toolchain proficiency: PyTorch/TensorFlow, scikit-learn, MLflow/W&B.

- Familiarity with model serving, Docker, CI/CD, and Kubernetes concepts.

- Strong communication skills, documentation habits, and ability to make pragmatic trade-offs.


Nice to Have:


- Experience with streaming frameworks (Kafka/Flink/Spark Structured Streaming) or CDC tools (Debezium).

- Familiarity with feature stores (Feast) and vector databases (pgvector/FAISS/Weaviate) for LLM/RAG use cases.

- Exposure to FinTech/lending domains, underwriting, bureau & alt-data ingestion, model risk controls, and data compliance.


What to Expect:


- Onsite collaboration at our Mumbai office with product, risk, and engineering teams.

- High ownership across the data → intelligence → product loop.

- Opportunities to mentor junior engineers and grow into a lead role as the team scales.

