Posted on: 19/08/2025
Key Responsibilities:
- Develop and maintain data ingestion pipelines from PostgreSQL source systems to the Enterprise Data Platform (Snowflake and Redshift).
- Design and implement robust ETL workflows using DBT, ensuring data accuracy and performance.
- Orchestrate and schedule data workflows using Apache Airflow (an illustrative sketch follows this list).
- Manage and optimize data storage in AWS S3, including Iceberg tables.
- Handle Parquet data formats for efficient reporting and analytics consumption.
- Monitor pipeline performance, resolve bottlenecks, and troubleshoot data quality issues.
- Collaborate with QA teams and data scientists to ensure end-to-end data integrity.
- Follow industry-standard coding best practices and actively participate in code reviews.
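For context on how these responsibilities typically fit together, below is a minimal sketch of an Airflow DAG that ingests from PostgreSQL and then runs dbt models. It assumes Airflow 2.x with the BashOperator; the DAG name, script and project paths, and schedule are hypothetical placeholders, not details of this role's actual pipelines.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily pipeline: ingest from PostgreSQL, then run dbt models.
with DAG(
    dag_id="postgres_to_warehouse",      # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                   # Airflow 2.4+ argument; older versions use schedule_interval
    catchup=False,
) as dag:
    # Placeholder ingestion step; the script path is hypothetical.
    ingest = BashOperator(
        task_id="ingest_postgres",
        bash_command="python /opt/pipelines/ingest_postgres.py",
    )

    # Run dbt transformations against the warehouse; the project path is hypothetical.
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run",
    )

    ingest >> transform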
Required Skills and Qualifications:
- Proven experience with Snowflake and AWS S3 for data warehousing and storage.
- Hands-on experience with DBT (Data Build Tool) for data modeling and transformation.
- Proficiency in Apache Airflow for data orchestration and scheduling.
- Familiarity with data lakehouse architecture, Iceberg table formats, and Parquet (a short Parquet-reading example follows this list).
- Solid Python programming skills and experience with API integrations.
- Experience working with large-scale datasets, ensuring performance and scalability.
- Strong problem-solving, communication, and teamwork abilities.
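As a small illustration of the Python and Parquet skills listed above, the sketch below reads a Parquet dataset from S3 with pyarrow. The bucket, prefix, column names, and date filter are hypothetical; it also assumes pyarrow's built-in S3 support and AWS credentials available in the environment.

from datetime import date

import pyarrow.dataset as ds

# Hypothetical S3 location of a Parquet dataset.
dataset = ds.dataset("s3://example-data-lake/orders/", format="parquet")

# Read only the columns a report needs and push the date filter down to the scan,
# assuming order_date is stored as a DATE column.
table = dataset.to_table(
    columns=["order_id", "order_date", "amount"],
    filter=ds.field("order_date") >= date(2025, 1, 1),
)
print(table.num_rows)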
Preferred Qualifications:
- Background in data governance, lineage, or metadata management.
- Familiarity with CI/CD pipelines and DevOps practices in data engineering.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1531669