Posted on: 09/07/2025
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines for batch and streaming workflows.
- Implement robust ETL/ELT processes to extract data from various sources and load it into data warehouses.
- Build and optimize database schemas following best practices in normalization and indexing.
- Create and maintain documentation for data flows, pipelines, and processes.
- Collaborate with cross-functional teams to translate business requirements into technical solutions.
- Monitor and troubleshoot data pipelines to ensure optimal performance.
- Implement data quality checks and validation processes (see the sketch after this list).
- Build and maintain CI/CD workflows for data engineering projects.
- Stay current with emerging technologies and recommend improvements to existing systems.
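For illustration only, here is a minimal sketch of the kind of data quality check this role involves, written against a pandas DataFrame. The table, column names, and thresholds are hypothetical; real checks depend on the dataset and pipeline.

```python
import pandas as pd

# Hypothetical rules for an "orders" batch; real checks depend on the dataset.
REQUIRED_COLUMNS = {"order_id", "customer_id", "order_date", "amount"}
MIN_EXPECTED_ROWS = 1

def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of data quality failures; an empty list means the batch is clean."""
    failures = []

    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        # Later checks depend on these columns, so stop here.
        return [f"missing columns: {sorted(missing)}"]

    if len(df) < MIN_EXPECTED_ROWS:
        failures.append(f"row count {len(df)} below expected minimum {MIN_EXPECTED_ROWS}")
    if df["order_id"].isna().any():
        failures.append("null order_id values found")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values found")
    if (df["amount"] < 0).any():
        failures.append("negative order amounts found")

    return failures

if __name__ == "__main__":
    sample = pd.DataFrame({
        "order_id": [1, 2, 2],
        "customer_id": [10, 11, 12],
        "order_date": ["2025-07-01", "2025-07-02", "2025-07-03"],
        "amount": [19.99, -5.00, 42.50],
    })
    problems = validate_orders(sample)
    if problems:
        # In a real pipeline this would fail the task or route rows to quarantine.
        raise ValueError("data quality check failed: " + "; ".join(problems))
```

In practice such checks would run as a pipeline step after extraction, with failures surfaced to monitoring rather than raised ad hoc.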
Requirements:
- At least 4 years of experience in data engineering roles.
- Strong proficiency in Python programming and SQL query writing.
- Hands-on experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with data warehousing technologies (e.g., Snowflake, Redshift, BigQuery).
- Proven track record in building efficient and scalable data pipelines (a minimal batch extract-and-load sketch follows this list).
- Practical knowledge of batch and streaming data processing approaches.
- Experience implementing data validation, quality checks, and error handling mechanisms.
- Working experience with cloud platforms, particularly AWS (S3, EMR, Glue, Lambda, Redshift) and/or Azure (Data Factory, Databricks, HDInsight).
- Understanding of different data architectures including data lakes, data warehouses, and data mesh.
- Demonstrated ability to debug complex data flows and optimize underperforming pipelines.
- Strong documentation skills and ability to communicate technical concepts effectively.
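As a further illustration, below is a minimal sketch of a batch extract-and-load step of the sort referenced above, assuming a PostgreSQL source read via SQLAlchemy and pandas, and an S3 staging location from which a warehouse COPY would pick up the file. The DSN, bucket, table, and paths are hypothetical placeholders, not part of the original posting.

```python
import boto3
import pandas as pd
from sqlalchemy import create_engine, text

# Hypothetical connection details and staging location; replace with real config.
SOURCE_DSN = "postgresql+psycopg2://user:password@source-db:5432/sales"
STAGE_BUCKET = "example-data-stage"
STAGE_KEY = "orders/2025-07-09/orders.parquet"

def extract(run_date: str) -> pd.DataFrame:
    """Pull one day's worth of orders from the operational database."""
    engine = create_engine(SOURCE_DSN)
    query = text(
        "SELECT order_id, customer_id, order_date, amount "
        "FROM orders WHERE order_date = :run_date"
    )
    return pd.read_sql(query, engine, params={"run_date": run_date})

def load_to_stage(df: pd.DataFrame) -> str:
    """Write the batch to S3 as Parquet so the warehouse can COPY it in."""
    local_path = "/tmp/orders.parquet"
    df.to_parquet(local_path, index=False)  # requires pyarrow or fastparquet
    boto3.client("s3").upload_file(local_path, STAGE_BUCKET, STAGE_KEY)
    return f"s3://{STAGE_BUCKET}/{STAGE_KEY}"

if __name__ == "__main__":
    batch = extract("2025-07-09")
    stage_uri = load_to_stage(batch)
    # A warehouse-side COPY (e.g. Redshift COPY ... FROM stage_uri) would complete the load.
    print(f"staged {len(batch)} rows at {stage_uri}")
```

An orchestrator such as Airflow would typically schedule steps like these, with the warehouse-side load and CI/CD around the project handled separately.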
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1509994