Posted on: 30/04/2026
Role Overview:
We are looking for a skilled Data Engineer with strong expertise in modern data platforms and cloud technologies. The ideal candidate will have hands-on experience in building scalable data pipelines, working with big data frameworks, and supporting advanced analytics and AI/ML use cases, preferably within the healthcare domain.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL/ELT workflows
- Process large-scale data using Apache Spark (PySpark)
- Develop and optimize data models and queries using SQL
- Orchestrate workflows using Apache Airflow
- Collaborate with data scientists to support AI/ML model deployments
- Implement CI/CD pipelines using GitHub and related tools
- Manage containerized workloads using Kubernetes
- Ensure data quality, reliability, and performance across systems
- Work closely with stakeholders to understand business and analytics requirements
Required Skills & Qualifications:
- Hands-on experience with Snowflake and cloud platforms like Microsoft Azure
- Proficiency in Apache Spark (PySpark)
- Strong SQL and data modeling skills
- Experience with Airflow for workflow orchestration
- Knowledge of CI/CD pipelines (GitHub or similar tools)
- Experience working with Kubernetes and containerized environments
- Understanding of data analytics concepts and practices
- Exposure to AI/ML model deployment workflows
Preferred Skills:
- Prior experience in the Healthcare domain
- Familiarity with data governance and compliance standards
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1632547