Posted on: 30/01/2026
Job Overview:
We are looking for an experienced AWS Data Engineer with strong expertise in PySpark and SQL to build and maintain scalable data pipelines on AWS.
The role involves working with large datasets and supporting analytics, reporting, and data-driven applications.
Key Responsibilities:
- Design, develop, and optimize data pipelines on AWS
- Build ETL/ELT workflows using PySpark
- Write efficient and complex SQL queries for data transformation
- Work with AWS services to ingest, process, and store large datasets
- Ensure data quality, performance, and reliability
- Collaborate with analytics, BI, and data science teams
- Troubleshoot and resolve production data issues
- Follow best practices for data engineering and cloud security
Mandatory Skills:
- Strong experience with AWS
- PySpark for data processing
- Advanced SQL
- Experience handling large-scale data systems
Good to Have:
- AWS services such as S3, Glue, EMR, Redshift, Athena
- Knowledge of data warehousing and data modeling
- Exposure to CI/CD for data pipelines
Keywords:
- AWS Data Engineer, PySpark, SQL, Big Data, Cloud Data Engineer, ETL, Kochi, Trivandrum
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1607855