Posted on: 26/11/2025
Description:
Key Responsibilities:
- Design, build, and maintain scalable data pipelines, ETL workflows, and data transformation processes.
- Develop and optimize data warehouses using modern cloud data platforms (Azure/AWS/OCI).
- Create and manage databases, advanced data models, and dimensional schemas (star and snowflake).
- Prepare data structures for advanced analytics, reporting, and self-service BI tools.
- Work with cross-functional teams to understand business requirements and translate them into technical solutions.
- Use PySpark/Python and SQL to transform, clean, and validate large datasets (see the PySpark sketch after this list).
- Ensure data quality, lineage, governance, and accuracy across all pipelines.
- Integrate data from multiple sources using ETL/ELT frameworks.
- Monitor, troubleshoot, and optimize data workflows for performance, reliability, and scalability.
- Contribute to cloud migration, data lake modernization, and architecture improvements.
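By way of illustration, a minimal PySpark sketch of the transform/clean/validate work described above; the paths and column names (orders, order_id, amount, order_date) are hypothetical:

```python
# Minimal PySpark sketch of a transform/clean/validate step of the kind
# described above. The paths and column names (orders, order_id, amount,
# order_date) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-cleaning").getOrCreate()

raw = spark.read.parquet("/data/raw/orders")  # hypothetical landing zone

cleaned = (
    raw
    .dropDuplicates(["order_id"])                                 # de-duplicate on the business key
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))  # normalize the money column
    .filter(F.col("order_date").isNotNull())                      # drop rows with no event date
)

# Simple inline data-quality gate: fail the run if any key is still null.
null_keys = cleaned.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows have a null order_id")

cleaned.write.mode("overwrite").parquet("/data/curated/orders")
```

In a production pipeline the quality gate would usually live in a dedicated data-quality framework evaluated by the orchestrator rather than as an inline assertion.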
Required Skills & Qualifications:
- Bachelor's degree in IT, Computer Science, Data Science, Analytics, or equivalent experience.
- 2+ years of experience in Data Engineering, Data Analytics, Business Intelligence, Data Science, IT, or related fields.
- Strong hands-on experience with SQL (complex queries, optimization, stored procedures).
- Proficiency in Python/PySpark for data processing, automation, and analytics.
- Experience with at least one major cloud platform:
1. Azure (ADF, Databricks)
2. AWS (Glue, Lambda, Redshift)
3. OCI (Oracle Cloud Infrastructure)
- Experience creating data warehouses, designing star/snowflake schemas, and applying data modeling frameworks (a minimal star-schema sketch follows this list).
- Good understanding of ETL/ELT workflows, data quality frameworks, and pipeline orchestration.
- Excellent English communication skills (verbal & written).
- Strong problem-solving aptitude and analytical thinking.
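As a rough illustration of the star-schema design mentioned above, a sketch that declares one fact table and two dimensions through Spark SQL; all table and column names are hypothetical, and Spark does not enforce the foreign-key relationships noted in the comments:

```python
# Hedged sketch of a star-schema layout: one fact table keyed to two
# dimensions. All table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key  BIGINT,    -- surrogate key
        customer_name STRING,
        region        STRING
    ) USING parquet
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_date (
        date_key  INT,           -- e.g. 20251126
        full_date DATE,
        year      INT,
        month     INT
    ) USING parquet
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_sales (
        customer_key BIGINT,     -- points at dim_customer.customer_key
        date_key     INT,        -- points at dim_date.date_key
        quantity     INT,
        amount       DECIMAL(18,2)
    ) USING parquet
""")
```

A snowflake variant would simply normalize the dimensions further, for example splitting region out of dim_customer into its own table.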
Preferred Skills:
- Experience with cloud-native monitoring, logging, and automation tools.
- Knowledge of CI/CD, version control (Git), and DevOps for data workflows.
- Exposure to big data ecosystems (Delta Lake, Spark, Kafka, etc.); see the Delta Lake sketch after this list.
- Ability to optimize cloud resource usage and reduce cost.
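For the Delta Lake exposure noted above, a minimal sketch of landing a DataFrame as a Delta table; it assumes the delta-spark package is installed, and the output path is hypothetical:

```python
# Minimal sketch of writing a DataFrame as a Delta table, a common entry
# point into the Delta Lake/Spark ecosystem named above. Assumes the
# delta-spark package is installed; the output path is hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("delta-sketch")
    # Standard Delta Lake session wiring from the Delta documentation.
    .config("spark.sql.extensions",
            "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaSparkSessionCatalog")
    .getOrCreate()
)

df = spark.createDataFrame([(1, "alpha"), (2, "beta")], ["id", "label"])

# Each write is recorded as a versioned commit, which is what gives Delta
# its time-travel and audit capabilities.
df.write.format("delta").mode("overwrite").save("/lakehouse/example_table")
```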
Why Join Us?
- Opportunity to work with modern cloud data engineering technologies.
- High-impact role building scalable analytics systems.
- Collaborative environment with strong learning and growth opportunities.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1580275