Posted on: 27/03/2026
About the Role :
We are looking for a highly skilled AWS Data Engineer who can design, build, and optimize scalable data pipelines. If you're passionate about working with large datasets, solving complex problems, and driving data-driven decisions in the banking domain, this role is for you!
Key Responsibilities :
- Design, develop, and maintain robust data pipelines on AWS
- Write optimized and high-performance SQL queries for large datasets
- Process and transform data using PySpark for scalable solutions
- Perform data validation, debugging, and performance tuning
- Collaborate with cross-functional teams to understand data requirements
- Take full ownership of tasks and ensure timely delivery
- Ensure data quality, integrity, and security standards
Required Skills :
- Strong expertise in SQL (complex joins, query optimization, performance tuning)
- Hands-on experience with PySpark (data processing & transformations)
- Experience working on AWS services (S3, Glue, Redshift, EMR, etc.)
- Strong debugging and analytical problem-solving skills
- Ability to work independently with minimal supervision
- Proactive mindset with high accountability
Good to Have :
- Experience in the Banking/Financial domain
- Knowledge of ETL/ELT processes and data warehousing concepts
- Familiarity with Airflow or other orchestration tools
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1624273