Posted on: 03/08/2025
Location: Indore / Jaipur (WFO)
We're on the lookout for a passionate Data Engineer who thrives at the intersection of big data, AI, and intelligent automation. If you're excited about building scalable data pipelines and enabling LLM-powered workflows, let's talk.
What You'll Be Working On:
- Design and implement scalable batch and streaming data pipelines using PySpark and Apache Spark (a minimal illustrative sketch follows this list)
- Build intelligent applications using LLMs, RAG, embeddings, and frameworks like LangChain or CrewAI
- Work with vector databases (e.g., FAISS, Pinecone, ChromaDB) to enable retrieval-augmented generation (RAG) use cases
- Create actionable insights through dashboards using Power BI, Tableau, or Looker
- Develop and maintain cloud-based data lakes and warehouse architectures (e.g., on Azure, AWS, or GCP)
- Collaborate with AI scientists, data analysts, and product teams to deliver business-impacting solutions
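To give a flavour of the pipeline work in the first bullet, here is a minimal, illustrative PySpark Structured Streaming skeleton. The storage paths, event schema, and window sizes are hypothetical placeholders, not this team's actual stack.

```python
# Illustrative sketch only: a minimal PySpark Structured Streaming job of the
# kind described above. Paths, schema, and window size are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Read JSON event files as a stream as they land in cloud storage.
events = (
    spark.readStream
    .format("json")
    .schema("user_id STRING, event_type STRING, ts TIMESTAMP")
    .load("s3a://example-bucket/raw/events/")  # hypothetical landing zone
)

# Count events per type in 10-minute windows, tolerating 15 minutes of lateness.
counts = (
    events
    .withWatermark("ts", "15 minutes")
    .groupBy(F.window("ts", "10 minutes"), "event_type")
    .count()
)

# Persist the aggregates for downstream dashboards (Power BI / Tableau / Looker).
query = (
    counts.writeStream
    .outputMode("append")
    .format("parquet")
    .option("path", "s3a://example-bucket/curated/event_counts/")
    .option("checkpointLocation", "s3a://example-bucket/_checkpoints/event_counts/")
    .start()
)
query.awaitTermination()
```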
What We're Looking For:
- 2–5 years of hands-on experience in data engineering, AI, or analytics
- Strong skills in SQL, Databricks, PySpark, and Apache Spark
- Practical experience with vector databases like FAISS, Pinecone, or ChromaDB (an illustrative retrieval sketch follows this list)
- Exposure to AI orchestration tools/frameworks: LangChain, CrewAI, or Haystack
- Experience working on one or more cloud platforms : Azure, AWS, or GCP
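As a rough illustration of the vector-database requirement, the sketch below shows only the retrieval step of a RAG flow using FAISS. The embed_texts helper is a toy stand-in for a real embedding model, and the documents are made up; in a full pipeline the retrieved context would be passed to an LLM along with the question.

```python
# Illustrative sketch only: the retrieval half of a RAG flow using FAISS.
# embed_texts() is a toy placeholder; swap in a real embedding model.
import numpy as np
import faiss

def embed_texts(texts, dim=256):
    """Toy hashed bag-of-words embedding; replace with a real embedding model."""
    vecs = np.zeros((len(texts), dim), dtype=np.float32)
    for i, text in enumerate(texts):
        for token in text.lower().split():
            vecs[i, hash(token) % dim] += 1.0
    return vecs

documents = [
    "Quarterly revenue grew 12% driven by the APAC region.",
    "The data lake ingests clickstream events every 5 minutes.",
    "Power BI dashboards refresh nightly from the warehouse.",
]

# Index the document embeddings in an in-memory FAISS index.
doc_vectors = embed_texts(documents)
index = faiss.IndexFlatL2(doc_vectors.shape[1])
index.add(doc_vectors)

# Retrieve the top-2 documents most similar to the question; this retrieved
# context is what grounds the LLM's answer in a RAG setup.
query_vector = embed_texts(["How fresh is the clickstream data?"])
_, top_ids = index.search(query_vector, 2)
print([documents[i] for i in top_ids[0]])
```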
Bonus points for:
- MLflow, CI/CD pipelines, and ML/LLM lifecycle management
- Familiarity with Responsible AI practices and LLM orchestration workflows
Ideal Background:
- Educational or professional background in Data Engineering, AI/ML, Business Analytics, or Consulting
- A problem-solving mindset with a strong drive to learn and build in a fast-paced, client-facing environment
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1523529