Posted on: 21/01/2026
Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines.
- Build and optimize ETL/ELT processes using Python, SQL, and Spark (see the sketch after this list).
- Develop and manage batch and streaming data processing workflows.
- Perform data modeling (conceptual, logical, and physical) for data warehouses and data lakes.
- Ensure data quality, integrity, and performance across data platforms.
- Optimize SQL queries and Spark jobs for performance and scalability.
- Collaborate with cross-functional teams to understand data requirements.
- Implement data validation, monitoring, and troubleshooting processes.
- Document data flows, models, and technical specifications.
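A minimal sketch of the kind of batch ETL and data-quality work described above, assuming a PySpark environment. The paths, column names, and quality rule here are illustrative placeholders, not details taken from this role's actual stack.

```python
# Minimal PySpark batch ETL sketch: extract, transform, validate, load.
# All paths, columns, and rules below are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw order events (source path is illustrative)
raw = spark.read.json("s3://raw-zone/orders/")

# Transform: deduplicate, drop records missing a timestamp, derive columns
orders = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_ts").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
)

# Data-quality gate: fail fast if key integrity is violated
if orders.filter(F.col("order_id").isNull()).count() > 0:
    raise ValueError("Null order_id found; aborting load")

# Load: write partitioned Parquet into the curated zone
(orders.write.mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://curated-zone/orders/"))
```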
Requirements:
- Strong programming experience in Python.
- Proficiency in SQL for querying and data transformation.
- Hands-on experience with Apache Spark (PySpark preferred) and API integration frameworks.
- Solid understanding of data modeling concepts (star schema, snowflake schema, normalization); see the query sketch after this list.
- Experience working with relational and/or NoSQL databases.
- Knowledge of data warehousing concepts and best practices.
- Familiarity with version control tools (Git).
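To make the modeling and SQL expectations concrete, here is a hedged sketch of a typical star-schema aggregation run through Spark SQL. The fact and dimension tables (fact_orders, dim_customer) and their columns are assumptions for illustration, not tables named by this posting.

```python
# Star-schema query sketch via Spark SQL; table and column names are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star_schema_demo").getOrCreate()

# A fact table joined to one of its dimensions and aggregated by a
# dimension attribute: the canonical star-schema access pattern.
revenue_by_region = spark.sql("""
    SELECT d.region,
           SUM(f.net_amount) AS total_revenue
    FROM   fact_orders f
    JOIN   dim_customer d
      ON   f.customer_key = d.customer_key
    GROUP  BY d.region
    ORDER  BY total_revenue DESC
""")
revenue_by_region.show()
```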
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1604465