Posted on: 14/10/2025
Key Responsibilities:
- Lead the design, development, and optimization of end-to-end data pipelines and ETL workflows.
- Architect data integration solutions leveraging Snowflake, Informatica PowerCenter, and Teradata.
- Collaborate with business and technical teams to gather requirements and translate them into technical designs.
- Drive performance tuning, error handling, and automation across ETL and data warehouse layers.
- Provide technical leadership to developers, mentor junior engineers, and enforce best practices.
- Ensure data quality, consistency, and security across data environments.
- Support migration and modernization initiatives from legacy systems to cloud-based data platforms.
- Participate in Agile/Scrum ceremonies, sprint planning, and code reviews.
Required Skills & Experience:
Core Expertise (Must-Have):
- Snowflake (3+ years): Hands-on experience with SnowSQL, Time Travel, cloning, query profiling and optimization, and secure Data Sharing. Able to design schemas and implement performance-tuned solutions (see the connector sketch after this list).
- Informatica PowerCenter (3+ years): Proficient with Designer, Workflow Manager/Monitor, mappings and transformations, session/workflow tuning, error handling, and deployments.
- Teradata (3+ years): Strong SQL skills including BTEQ scripting, stored procedures, performance-tuned joins, indexing/collect stats, and utilities such as FastLoad, MultiLoad, and TPT.
- SQL & Data Modeling: Demonstrated ability to design normalized and dimensional models (3NF, Star, Snowflake) and write complex, performance-oriented SQL for analytics.
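As an illustration of the Snowflake expertise above, here is a minimal sketch using the snowflake-connector-python package to run a Time Travel query and create a zero-copy clone. The connection parameters and the `orders` table are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch: Snowflake Time Travel + zero-copy cloning via
# snowflake-connector-python. All credentials, the account identifier,
# and the `orders` table are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    user="ETL_USER",        # placeholder
    password="***",         # placeholder
    account="my_account",   # hypothetical account identifier
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Time Travel: query the table as it existed one hour ago.
    cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
    print("rows one hour ago:", cur.fetchone()[0])
    # Zero-copy clone of that historical state, e.g. to recover from a bad load.
    cur.execute(
        "CREATE OR REPLACE TABLE orders_recovered CLONE orders AT(OFFSET => -3600)"
    )
finally:
    conn.close()
```

Cloning at an offset is zero-copy in Snowflake, so the recovered table shares storage with the original until either diverges.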
Required Supporting Skills:
- Shell scripting (Bash/Ksh): Practical experience automating ETL jobs, log handling, and job orchestration (typically 2+ years).
- Python for data engineering (typically 2+ years): Proficient in writing production-quality Python scripts for ETL and data processing; familiar with commonly used libraries (e.g., pandas, SQLAlchemy), exception handling, basic unit testing, and performance considerations (a minimal sketch follows this list).
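A minimal Python ETL sketch along the lines described above, using pandas with SQLAlchemy engines. The connection URLs, source query, and staging table name are hypothetical placeholders.

```python
# Minimal ETL sketch: extract with pandas, load via SQLAlchemy.
# The URLs, `orders` source table, and `stg_orders` target are hypothetical.
import logging

import pandas as pd
from sqlalchemy import create_engine

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")


def load_daily_orders(src_url: str, dst_url: str) -> int:
    """Extract from a source DB, apply a simple transform, load to a target."""
    src = create_engine(src_url)
    dst = create_engine(dst_url)
    try:
        df = pd.read_sql("SELECT order_id, amount, created_at FROM orders", src)
        df["amount"] = df["amount"].fillna(0.0)  # basic cleansing
        df.to_sql("stg_orders", dst, if_exists="replace", index=False)
        return len(df)
    except Exception:
        log.exception("daily orders load failed")
        raise


if __name__ == "__main__":
    # SQLite URLs used only so the sketch is self-contained.
    rows = load_daily_orders("sqlite:///source.db", "sqlite:///target.db")
    log.info("loaded %d rows", rows)
```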
Preferred / Nice-to-Have:
- Experience with CI/CD and version control: Git (branching strategies) and building/maintaining pipelines (Jenkins, GitLab CI, Azure DevOps).
- Familiarity with cloud data platforms and migrations (AWS/GCP/Azure) and cloud-native storage/compute services.
- Experience with orchestration tools (Apache Airflow, Control-M) and monitoring/alerting solutions (see the DAG sketch after this list).
- Prior experience working in Agile/Scrum teams, performing code reviews, and mentoring team members.
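As one possible shape for the orchestration experience above, here is a minimal Apache Airflow DAG sketch chaining a PowerCenter workflow run and a SnowSQL load. The DAG id, pmcmd arguments, and script path are hypothetical, not taken from this posting.

```python
# Minimal Airflow DAG sketch: run a (hypothetical) PowerCenter workflow,
# then a (hypothetical) SnowSQL load script.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_orders_pipeline",  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",               # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    run_powercenter = BashOperator(
        task_id="run_powercenter_workflow",
        # Hypothetical pmcmd invocation; service, domain, folder, and
        # workflow names are placeholders.
        bash_command="pmcmd startworkflow -sv IntSvc -d Domain -f DailyLoads wf_orders",
    )
    load_snowflake = BashOperator(
        task_id="load_snowflake",
        bash_command="snowsql -f /etl/sql/load_orders.sql",  # hypothetical path
    )
    run_powercenter >> load_snowflake
```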
Skills: ETL, PowerCenter, BTEQ, Teradata, Snowflake
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1560489