Job Description

Description :


- AI/ML skills with Databricks for our data science projects.


- Proven hands-on experience with AI/ML concepts and a strong track record of delivering data science projects into production environments. (Must have).


- Proficiency in Python programming, with the ability to write clean, efficient, and maintainable code. (Must have).


- Understanding of the Lakehouse architecture and its practical applications in modern data platforms. (Good to have).


- Hands-on experience with Databricks, including notebooks, MLflow, Delta Lake, and job orchestration. (Good to have).


- Ability to rapidly prototype solutions and iterate toward production-grade implementations using sound engineering principles. (Must have).


- Familiarity with Large Language Models (LLMs) and their application in real-world use cases. (Must have).


- Experience with version control systems such as Git/GitLab. (Generic skill).


We are seeking a highly skilled AI/ML Engineer with strong hands-on experience in building, deploying, and scaling machine learning models in production environments.


The ideal candidate should be proficient in Python, experienced with AI/ML concepts, familiar with Large Language Models (LLMs), and capable of working within modern data ecosystems such as Databricks and Lakehouse architecture.


This role requires the ability to rapidly prototype solutions and translate them into production-grade systems using robust engineering practices.


Key Responsibilities :


- Design, develop, and implement machine learning and deep learning models for various business use cases.


- Apply strong knowledge of ML algorithms, feature engineering, model selection, and evaluation techniques.


- Work on LLM-based solutions, embeddings, prompt engineering, fine-tuning, and integration of generative AI models into applications.


- Build scalable, production-ready ML pipelines and workflows.


- Deploy models to production using automated CI/CD pipelines, ensuring robustness, performance, and reliability.


- Use sound engineering principles to iterate from prototype to production deployment.


- Work with Databricks notebooks, Delta Lake, MLflow, and orchestration tools to develop ML workflows.


- Collaborate on building solutions that leverage Lakehouse architecture for data preprocessing, feature store usage, and model storage.


- Optimize ML pipelines for large-scale, distributed data processing.


- Work closely with data engineering teams to prepare, clean, and transform complex datasets.


- Explore large datasets to identify patterns, correlations, and insights to improve model performance.


- Implement reusable feature engineering modules and maintain feature reproducibility.


- Partner with product, engineering, and business teams to understand requirements and deliver AI/ML solutions.


- Participate in architecture discussions, solution reviews, and model governance planning.


- Clearly communicate insights, model results, and recommendations to non-technical stakeholders.


- Use Git/GitLab for branching strategies, code reviews, version control, and collaborative development.


- Follow coding standards, maintain clean code, and ensure model documentation and reproducibility.


Required Skills & Experience :


- Strong hands-on experience in AI/ML development and end-to-end delivery of data science projects.


- Proficiency in Python with clean and efficient coding practices.


- Solid understanding of the ML lifecycle, from experimentation to productionization.


- Familiarity with Large Language Models (LLMs) and their real-world applications.


- Ability to rapidly build prototypes and scale them into production-ready solutions.


- Experience with version control tools like Git/GitLab.


- Working experience with Databricks, including:
  - Databricks Notebooks
  - Delta Lake
  - MLflow for experiment tracking
  - Job orchestration


- Understanding of Lakehouse architecture and its implementation in modern data platforms.


General Competencies :


- Strong analytical and problem-solving skills.


- Excellent communication and collaboration abilities.


- Ability to work in an agile, fast-paced environment.


- Attention to detail with a commitment to quality and scalability.


- Experience in distributed data processing (Spark, PySpark).


- Exposure to cloud platforms (Azure, AWS, GCP).


- Experience in MLOps, model monitoring, and retraining automation.

