Posted on: 20/04/2026
Key Responsibilities:
- Leverage advanced Python to architect, implement, and optimize Extract-Transform-Load processes for large-scale, multi-source data integration with a focus on reliability and performance.
- Partner with product, analytics, and engineering teams to translate complex data requirements into scalable pipeline architecture, ensuring timely and accurate delivery of data assets.
- Apply advanced data modeling and database design principles to build efficient, normalized, and future-proof storage solutions that support high-volume analytical workloads.
- Design, deploy, and maintain scalable data solutions on AWS and/or Azure, including services such as S3, Glue, Redshift, Data Factory, and Databricks, while upholding cost and performance best practices.
- Build robust scripts and orchestration workflows (e.g., Airflow, Prefect) to automate repetitive operational tasks, significantly reducing manual overhead and the risk of human error.
- Champion data quality initiatives including validation frameworks, lineage tracking, and governance standards to ensure accuracy, consistency, and compliance across the data platform.
- Proactively monitor data pipelines using observability tools, triage failures, and resolve issues swiftly to maintain SLA-level data availability and reliability.
- Actively participate in code reviews, document design decisions, and mentor peers, raising the bar for engineering practices, testing standards, and team knowledge sharing.
- Respond effectively to evolving project requirements and technology changes, bringing pragmatic judgment to trade-offs between speed, scalability, and maintainability.
Posted by
Kavitha Subramani
Talent Acquisition Consultant at Simpliigence
Last Active: N/A (this job was posted through a third-party tool).
Posted in
Data Engineering
Functional Area
Data Engineering
Job Code
1629526