Posted on: 09/04/2026
Description:
- Build and manage end-to-end ETL/ELT workflows for data ingestion, transformation, and loading
- Architect and implement data lakes and data warehouse solutions on cloud platforms
- Optimize data processing systems for performance, scalability, and cost-efficiency
- Lead data engineering projects and mentor junior team members
- Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality datasets
- Ensure data quality, integrity, and governance across all data systems
- Implement data security and compliance standards
- Troubleshoot and resolve data-related issues in production environments
- Work on real-time and batch data processing systems
Required Skills & Expertise:
- Strong programming skills in Python, Scala, or Java
- Hands-on experience with distributed data processing frameworks such as Apache Spark
- Experience with data pipeline orchestration tools like Airflow
- Strong understanding of data modeling, schema design, and database optimization
- Expertise in SQL and working with relational and NoSQL databases
- Experience with cloud platforms such as AWS, Azure, or Google Cloud
- Hands-on experience with data warehousing solutions like Snowflake, Redshift, or BigQuery
- Familiarity with streaming technologies like Kafka is a plus
- Strong understanding of data structures, algorithms, and system design
Preferred Qualifications:
- Exposure to real-time data processing and event-driven architectures
- Knowledge of DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes)
- Experience working in Agile/Scrum environments
- Prior experience in leading teams or handling end-to-end ownership of data systems
Key Competencies:
- Leadership and mentoring capabilities
- Excellent communication and stakeholder management
- Ability to work in a fast-paced, data-driven environment
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1627395