Posted on: 18/08/2025
What you'll be doing:
- Architect and implement modern ELT pipelines using DBT and orchestration tools like Apache Airflow and Prefect
- Lead performance tuning and query optimization for DBT models running on Snowflake, Redshift, or Databricks
- Integrate DBT workflows & pipelines with AWS services (S3, Lambda, Step Functions, RDS, Glue) and event-driven architectures
- Implement robust data ingestion processes from multiple sources, including manufacturing execution systems (MES), manufacturing stations, and web applications
- Manage and monitor orchestration tools (Airflow, Prefect) for automated DBT model execution
- Implement CI/CD best practices for DBT, ensuring version control, automated testing, and deployment workflows
- Troubleshoot data pipeline issues and provide solutions for optimizing cost and performance
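The event-driven AWS integration described above can be sketched minimally. The handler below is a hypothetical Lambda-style entry point (the bucket and key names are invented for illustration) that extracts object references from an S3 notification so a downstream ingestion step can load them:

```python
def handle_s3_event(event):
    """Hypothetical Lambda-style handler sketch: pull (bucket, key) pairs
    out of an S3 notification event so a downstream ingestion step can
    load the new objects. The event shape follows AWS's documented S3
    notification format ("Records" -> "s3" -> "bucket"/"object")."""
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        objects.append((s3["bucket"]["name"], s3["object"]["key"]))
    return objects

# Minimal example event mimicking an S3 PutObject notification;
# the bucket and key are made-up names, not from the posting.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "mes-raw-data"},
                "object": {"key": "station_7/2025-08-18/run.json"}}}
    ]
}
print(handle_s3_event(sample_event))
```

In a real deployment this handler would hand the keys to an ingestion job (e.g. trigger a DBT run via Step Functions), but that wiring is environment-specific and omitted here.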
What you'll have:
- 5+ years of strong SQL expertise, with experience in analytical query optimization and database performance tuning
- 5+ years of programming experience, including building custom DBT macros, scripts, and APIs, and working with AWS services via boto3
- 3+ years of experience with orchestration tools such as Apache Airflow or Prefect for scheduling DBT jobs
- Hands-on experience with modern cloud data platforms like Snowflake, Redshift, Databricks, or BigQuery
- Experience with AWS data services (S3, Lambda, Step Functions, RDS, SQS, CloudWatch)
- Familiarity with serverless architectures and infrastructure as code (CloudFormation/Terraform)
- Ability to communicate timelines effectively and deliver the MVPs scoped for each sprint
- Strong analytical and problem-solving skills, with the ability to work with cross-functional teams
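As a concrete illustration of the query-optimization skill listed above, here is a self-contained sketch using SQLite as a stand-in for a warehouse (table, column, and index names are invented): `EXPLAIN QUERY PLAN` shows the same filter switching from a full table scan to an index search once a covering index exists.

```python
import sqlite3

# In-memory stand-in table for manufacturing-station events;
# all names here are illustrative, not from the posting.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, station TEXT, recorded_at TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i, f"station_{i % 5}", "2025-08-18") for i in range(1000)],
)

def plan(sql):
    """Return the EXPLAIN QUERY PLAN detail strings for a statement."""
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT * FROM events WHERE station = 'station_3'"

# Without an index, the filter requires a full scan of the table...
before = plan(query)

# ...and after adding one, SQLite switches to an index search.
conn.execute("CREATE INDEX idx_events_station ON events (station)")
after = plan(query)

print(before)
print(after)
```

The same habit (read the plan before and after a change) carries over to Snowflake or Redshift query profiles, though the plan output format differs per engine.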
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1531085