Posted on: 17/07/2025
Job Title : Data Engineer - EDW (Enterprise Data Warehouse)
Locations : Bangalore / Noida / Pune
Experience : 6 to 10 years
Employment Type : Full-time
Notice Period : Immediate to 30 days preferred
Role Overview :
We are hiring a Data Engineer - EDW with strong expertise in enterprise data warehousing, ETL pipeline design, data migration and integration, and advanced SQL/Python development. You will build and maintain scalable data pipelines and support strategic data initiatives across enterprise systems.
Key Responsibilities :
- Design, build, and optimize ETL/ELT pipelines for large-scale enterprise data warehouses (EDW)
- Migrate and integrate data from legacy systems, flat files, RDBMS, APIs, and cloud-based platforms
- Develop and maintain data pipelines using tools such as Informatica, Talend, Airflow, dbt, or custom Python scripts (a minimal Airflow sketch follows this list)
- Write complex and optimized SQL queries, stored procedures, and views for reporting and analytics
- Work closely with business analysts, data modelers, and BI teams to deliver trusted data
- Maintain data accuracy, quality, integrity, and performance in all ETL jobs
- Collaborate with cloud teams to deploy EDW solutions in AWS, Azure, or GCP environments
- Monitor and troubleshoot ETL jobs and data issues using best practices and logging frameworks
- Adhere to data governance, privacy, and security compliance frameworks (GDPR, HIPAA, etc.)
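To give a flavour of the orchestration work described above, here is a minimal sketch of a two-step extract-and-load pipeline in Apache Airflow 2.x. The DAG id, task names, and helper functions are hypothetical placeholders for illustration, not details of this role.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    # Placeholder: pull the day's rows from a source system
    # (RDBMS, API, or flat file) into a staging area.
    ...


def load_orders():
    # Placeholder: transform staged rows and merge them into
    # the EDW fact table.
    ...


with DAG(
    dag_id="edw_orders_daily",  # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",          # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_orders", python_callable=load_orders)
    extract >> load             # load runs only after extract succeeds
```

Keeping extract and load as separate tasks lets a failed load be retried without re-pulling from the source, which matters at enterprise data volumes.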
Required Skills:
- 6-10 years of experience as a Data Engineer or EDW Developer
- Strong expertise in SQL (T-SQL/PL-SQL) and Python for data transformation and automation
- Hands-on experience with ETL tools: Informatica, Talend, or custom solutions using Python
- Deep understanding of data warehousing principles (Kimball/Inmon, SCDs, star/snowflake schemas; see the SCD sketch after this list)
- Proven experience in data migration projects and data integration from diverse sources
- Experience working with enterprise-scale data in relational DBs (Teradata, Oracle, SQL Server)
- Familiarity with cloud-based data warehousing (Snowflake, Redshift, BigQuery) is a strong plus
- Experience with data partitioning, indexing, query optimization, and large-volume data processing
- Strong debugging, performance tuning, and log analysis skills
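To make the warehousing expectations concrete, below is a minimal sketch of a Slowly Changing Dimension Type 2 load driven from Python. The dim_customer and stg_customer tables and their columns are hypothetical, and the UPDATE ... FROM join syntax shown is PostgreSQL-style; Oracle or Teradata would use MERGE or correlated updates instead.

```python
# Step 1: expire the current version of any customer whose tracked
# attribute changed in staging.
EXPIRE_CHANGED_ROWS = """
UPDATE dim_customer
SET    is_current = 0,
       valid_to   = CURRENT_TIMESTAMP
FROM   stg_customer s
WHERE  dim_customer.customer_id = s.customer_id
  AND  dim_customer.is_current  = 1
  AND  dim_customer.address    <> s.address;
"""

# Step 2: insert a fresh current version for new and changed customers.
INSERT_NEW_VERSIONS = """
INSERT INTO dim_customer (customer_id, address, valid_from, valid_to, is_current)
SELECT s.customer_id, s.address, CURRENT_TIMESTAMP, NULL, 1
FROM   stg_customer s
LEFT JOIN dim_customer d
       ON d.customer_id = s.customer_id AND d.is_current = 1
WHERE  d.customer_id IS NULL     -- brand-new customers
   OR  d.address <> s.address;   -- changed customers get a new version
"""


def run_scd2_load(cursor):
    """Run both statements, in order, on any DB-API 2.0 cursor."""
    cursor.execute(EXPIRE_CHANGED_ROWS)
    cursor.execute(INSERT_NEW_VERSIONS)
```

Running the expire step first means the insert step's LEFT JOIN finds no current row for changed customers, so both new and changed records fall through to the INSERT while unchanged rows are untouched.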
Nice to Have :
- Exposure to streaming data platforms (Kafka, Kinesis, Pub/Sub); a consumer sketch follows this list
- Knowledge of data lake architectures (Delta Lake, Iceberg, Hive)
- Familiarity with DevOps practices, Git, and CI/CD pipelines for data
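For the streaming exposure, here is a minimal sketch of landing raw events for later batch load into the warehouse, using the open-source kafka-python client. The topic name, broker address, and consumer group are hypothetical placeholders.

```python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders.raw",                        # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="edw-landing",              # hypothetical consumer group
)

for message in consumer:
    # Placeholder: append each event to a staging table or data-lake
    # file before batch transformation into the warehouse.
    print(message.value)
```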
Posted in : Data Engineering
Functional Area : Big Data / Data Warehousing / ETL
Job Code : 1515050