Posted on: 30/11/2025
Description :
Role : Data Engineer
Location : Noida
Experience : 4 to 9 years
Working with Us :
At Iris, every role is more than a job: it's a launchpad for growth.
Our Employee Value Proposition, "Build Your Future. Own Your Journey.", reflects our belief that people thrive when they have ownership of their career and the right opportunities to shape it.
We foster a culture where your potential is valued, your voice matters, and your work creates real impact. With cutting-edge projects, personalized career development, continuous learning, and mentorship, we support you in growing and becoming your best, both personally and professionally.
Curious what it's like to work at Iris? Watch this video for an inside look at the people, the passion, and the possibilities.
Job Description :
- Design, implement, and maintain data pipelines for processing large datasets, ensuring data availability, quality, and efficiency for machine learning model training and inference.
- Collaborate with data scientists to streamline the deployment of machine learning models, ensuring scalability, performance, and reliability in production environments.
- Develop and optimize ETL (Extract, Transform, Load) processes, ensuring data flow from various sources into structured data storage systems.
- Automate ML workflows using MLOps tools and frameworks (e.g., Kubeflow, MLflow, TensorFlow Extended (TFX)).
- Ensure effective model monitoring, versioning, and logging to track performance and metrics in a production setting.
- Collaborate with cross-functional teams to improve data architectures and facilitate the continuous integration and deployment of ML models.
- Work on data storage solutions, including databases, data lakes, and cloud-based storage systems (e.g., AWS, GCP, Azure).
- Ensure data security, integrity, and compliance with data governance policies.
- Perform troubleshooting and root cause analysis on production-level machine learning systems.
Skills : AWS Glue, PySpark, AWS services, strong SQL; Nice to have : Redshift, knowledge of SAS datasets
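To illustrate the extract-transform-load shape the responsibilities above describe: in practice this would run on Glue with PySpark against Redshift, but as a minimal, dependency-free sketch (all table and column names are hypothetical), the same pattern can be shown with the standard-library sqlite3 module:

```python
import sqlite3

# Hypothetical raw records, standing in for data extracted from a source system.
raw_orders = [
    ("2025-01-05", "A-100", 250.0),
    ("2025-01-05", "A-101", None),   # bad record: missing amount
    ("2025-01-06", "A-100", 125.5),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_date TEXT, account TEXT, amount REAL)")

# Transform: enforce a basic data-quality check before loading.
clean = [row for row in raw_orders if row[2] is not None]
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean)

# Aggregate: total spend per account, the kind of query a downstream
# warehouse (e.g., Redshift) would serve to consumers.
totals = dict(conn.execute(
    "SELECT account, SUM(amount) FROM orders GROUP BY account"
).fetchall())
print(totals)  # {'A-100': 375.5}
```

In a Glue job the same steps map to reading a source into a DataFrame, filtering invalid rows, and writing the aggregated result to the target store.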
Mandatory Competencies :
- Data Science and Machine Learning : AI/ML
- Behavioral : Communication
- Big Data : PySpark
- DevOps/Configuration Mgmt : Cloud Platforms - AWS
- ETL : AWS Glue
- Database : SQL Server - SQL Packages
Posted in
Data Engineering
Functional Area
Data Engineering
Job Code
1582817