Posted on: 12/12/2025
Key Responsibilities:
- Develop and optimize data processing pipelines using Apache Spark / PySpark.
- Build and automate ETL workflows using AWS Glue Studio, Glue ETL jobs, and the Glue Data Catalog (a minimal job script follows this list).
- Manage and maintain data storage solutions using Amazon S3.
- Monitor systems using AWS CloudWatch and ensure high performance and reliability.
- Apply DevOps practices on AWS (e.g., CI/CD pipelines) to streamline deployments and improve operations.
- Manage AWS IAM roles and policies for secure data access.
- Troubleshoot data issues and perform Spark performance tuning (see the tuning sketch after this list).
- Ensure data quality, accuracy, and integrity across all workflows.
- Work closely with cross-functional teams to integrate data solutions into business processes.
- Stay updated on AWS best practices and emerging technologies.
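For context, the core of the Spark/Glue duties above usually takes the shape of a short Glue job script. The sketch below is illustrative only; the database, table, and bucket names (sales_db, raw_orders, s3://example-curated-bucket/...) are placeholder assumptions, not details from this posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve the job name and initialize contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (placeholder names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Basic data-quality pass: drop rows missing an order ID, then deduplicate.
orders = source.toDF().dropna(subset=["order_id"]).dropDuplicates(["order_id"])

# Write the curated output back to S3 as date-partitioned Parquet.
orders.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)

job.commit()
```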
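Likewise, the Spark performance-tuning responsibility typically comes down to partitioning and join strategy. This is a minimal sketch under the same placeholder paths; the table layout and the numbers chosen (64 shuffle partitions, 16 output files) are assumptions for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Match shuffle parallelism to the cluster rather than the default of 200.
spark.conf.set("spark.sql.shuffle.partitions", "64")

orders = spark.read.parquet("s3://example-curated-bucket/orders/")    # large fact table
regions = spark.read.parquet("s3://example-curated-bucket/regions/")  # small dimension

# Broadcast the small dimension table to avoid a shuffle-heavy sort-merge join.
joined = orders.join(F.broadcast(regions), on="region_id", how="left")

# Coalesce before writing so the job does not emit thousands of tiny S3 files.
joined.coalesce(16).write.mode("overwrite").parquet(
    "s3://example-curated-bucket/orders_by_region/"
)
```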
Required Skills:
- Experience with data lake or data warehouse solutions.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1589521