Posted on: 13/02/2026
Key Responsibilities:
- Implement data pipelines and transformations based on senior-defined designs
- Support ingestion of structured and semi-structured data sources
- Assist in developing and maintaining ETL workflows
- Perform data validation, reconciliation, and quality checks
- Monitor pipelines and report failures or anomalies
- Support bug fixes and operational changes
- Assist in documenting data flows, pipeline logic, and transformations
- Collaborate with senior engineers, reporting teams, and data scientists
- Continuously upskill in big data, cloud platforms, and data engineering tools
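The validation and reconciliation duties listed above can be sketched in miniature. This is an illustrative stand-alone example, not part of the role's actual stack: the record layout and field names (`order_id`, `amount`) are hypothetical.

```python
# Minimal data-validation sketch: row-count reconciliation between a
# source and target extract, plus basic quality checks.
# Field names (order_id, amount) are illustrative, not from the posting.

def reconcile(source_rows, target_rows, key="order_id"):
    """Return basic reconciliation findings between two extracts."""
    source_keys = {row[key] for row in source_rows}
    target_keys = {row[key] for row in target_rows}
    return {
        "source_count": len(source_rows),
        "target_count": len(target_rows),
        "missing_in_target": sorted(source_keys - target_keys),
        "unexpected_in_target": sorted(target_keys - source_keys),
    }

def quality_checks(rows):
    """Flag rows with null keys or negative amounts."""
    issues = []
    for row in rows:
        if row.get("order_id") is None:
            issues.append(("null_key", row))
        if (row.get("amount") or 0) < 0:
            issues.append(("negative_amount", row))
    return issues

source = [{"order_id": 1, "amount": 10.0},
          {"order_id": 2, "amount": -5.0},
          {"order_id": 3, "amount": 7.5}]
target = [{"order_id": 1, "amount": 10.0},
          {"order_id": 3, "amount": 7.5}]

report = reconcile(source, target)
problems = quality_checks(source)
```

In a real pipeline these checks would run against warehouse tables rather than in-memory lists, with findings routed to the monitoring and anomaly-reporting duties also named above.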
Data Engineering Fundamentals:
- Hands-on experience with ETL / ELT development
- Understanding of basic data pipeline concepts and workflows
- Exposure to batch and basic streaming data processing
Big Data & Data Processing:
- Spark / PySpark
- Spark SQL
- Ability to process structured and semi-structured data
- Familiarity with Hive or similar query engines
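The skills above (flattening semi-structured data, then querying it with SQL) can be illustrated with stdlib tools. This is a hedged sketch: `sqlite3` stands in for Spark SQL / Hive, and the event schema is invented for the example.

```python
# Illustrative only: flatten semi-structured JSON records into tabular
# rows, then aggregate with SQL. sqlite3 is a small stand-in for
# Spark SQL / Hive; table and column names are hypothetical.
import json
import sqlite3

raw_events = [
    '{"user": {"id": 1, "country": "IN"}, "value": 10}',
    '{"user": {"id": 2, "country": "US"}, "value": 20}',
    '{"user": {"id": 3, "country": "IN"}, "value": 5}',
]

# Flatten nested JSON into flat rows (the transformation step).
rows = []
for line in raw_events:
    event = json.loads(line)
    rows.append((event["user"]["id"], event["user"]["country"], event["value"]))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, country TEXT, value INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

# Aggregate with SQL, much as one would with spark.sql(...) on a DataFrame.
totals = dict(conn.execute(
    "SELECT country, SUM(value) FROM events GROUP BY country"
).fetchall())
```

In PySpark the same shape of work would use `spark.read.json` followed by a `GROUP BY` in Spark SQL; the flatten-then-query pattern is the transferable idea.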
Programming & Dev Practices:
- Ability to follow coding standards and documentation practices
Soft Skills:
- Ability to work effectively in agile, cross-functional teams
- Willingness to learn and adapt to new tools and technologies
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1612505