Posted on: 29/04/2026
Responsibilities:
- Design, develop, and maintain scalable data pipelines in Snowflake for healthcare claims, contract configuration, and fee-for-service reimbursement logic
- Build and optimize ELT processes using Snowflake SQL, stored procedures, and dynamic SQL for complex healthcare datasets
- Develop and maintain Apache Airflow DAGs (AWS MWAA) for pipeline orchestration, scheduling, and monitoring across multiple data domains (a minimal DAG sketch follows this list)
- Write clean, testable Python scripts for data transformation, validation, web scraping automation, and API integrations
- Implement and maintain CI/CD workflows using Bitbucket Pipelines and Liquibase for database migration and schema change management
- Support data quality and governance processes, including Snowflake role-based access control, dynamic data masking, and schema design patterns
- Monitor pipeline health, troubleshoot failures, and resolve incidents across Snowflake, AWS Lambda, S3, SNS, and CloudWatch
- Leverage AI coding assistants to accelerate development, with a focus on review, refinement, and quality assurance of AI-generated output
- Collaborate with business analysts and project teams to translate healthcare business requirements into technical pipeline specifications
- Document technical processes, pipeline architecture, and operational runbooks in Confluence
- Other duties as assigned
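
For illustration, a minimal sketch of the kind of MWAA/Airflow DAG this role would maintain, assuming Airflow 2.4+. The DAG id, task ids, and callables below are hypothetical placeholders, not part of the posting:

```python
# A minimal sketch of an MWAA-style DAG, not an actual team pipeline.
# The dag_id, task ids, and callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_claims():
    # Placeholder: pull raw claims files from S3 into a staging area.
    pass


def validate_claims():
    # Placeholder: run row-count and schema checks before loading.
    pass


with DAG(
    dag_id="claims_elt_pipeline",   # hypothetical name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",              # Airflow 2.4+ keyword
    catchup=False,
    tags=["healthcare", "snowflake"],
) as dag:
    extract = PythonOperator(task_id="extract_claims", python_callable=extract_claims)
    validate = PythonOperator(task_id="validate_claims", python_callable=validate_claims)

    extract >> validate             # validate runs only after extract succeeds
```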
Requirements:
- 5+ years of hands-on data engineering experience
- Proficiency in Snowflake: ELT pipeline development, stored procedures, query optimization, data modeling (star, snowflake, and hybrid schemas)
- Strong SQL skills, including complex joins, window functions, CTEs, and dynamic SQL generation (see the sketch after this list)
- Solid Python experience in a data engineering context (Pandas, file processing, API clients, web scraping with BeautifulSoup or Scrapy)
- Experience with Apache Airflow for pipeline orchestration (AWS Managed Workflows for Apache Airflow preferred)
- Hands-on experience with AWS services: S3, Lambda, SNS, CloudWatch, IAM
- Proficiency with Git-based version control and CI/CD pipelines (Bitbucket Pipelines preferred)
- Experience with database migration tooling (Liquibase preferred)
- Demonstrated ability to work independently and manage priorities across concurrent workstreams
- Strong written and verbal communication skills in English, with the ability to collaborate effectively across time zones
- Experience working in globally distributed teams
- Mandatory: hands-on experience in the US healthcare domain
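
As a rough illustration of the SQL skills listed above, here is a sketch that runs a CTE-plus-window-function query through the snowflake-connector-python package. The connection placeholders and the claims table and columns are hypothetical:

```python
# A sketch of a CTE with a window function: pick the latest adjudicated
# version of each claim. The CLAIMS table and its columns are hypothetical.
import snowflake.connector

LATEST_CLAIMS_SQL = """
WITH ranked AS (
    SELECT
        claim_id,
        claim_status,
        adjudicated_at,
        ROW_NUMBER() OVER (
            PARTITION BY claim_id
            ORDER BY adjudicated_at DESC
        ) AS rn
    FROM claims
)
SELECT claim_id, claim_status, adjudicated_at
FROM ranked
WHERE rn = 1
"""


def fetch_latest_claims():
    # Placeholder credentials; in practice these come from a secrets store.
    conn = snowflake.connector.connect(
        account="<account>",
        user="<user>",
        password="<password>",
        warehouse="<warehouse>",
        database="<database>",
        schema="<schema>",
    )
    try:
        cur = conn.cursor()
        cur.execute(LATEST_CLAIMS_SQL)
        return cur.fetchall()
    finally:
        conn.close()
```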
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1632349