Posted on: 25/11/2025
Role : AWS Data Engineer
Location : 100% Remote
Duration : Long-term contract.
Job Description :
- We are seeking a highly skilled AWS Data Engineer to design, build, and optimize scalable data pipelines and analytical solutions.
- The ideal candidate will have strong expertise in Python, PySpark, Pandas, Databricks, AWS (EMR, Glue, DynamoDB, S3, Lambda), and ETL workflows.
- You will work closely with data architects, analytics teams, and application developers to enable high-performance data processing and real-time analytics.
Required Skills & Experience :
- Strong programming skills in Python (including Pandas) and SQL.
- Hands-on experience with PySpark and distributed data processing.
- Expertise in AWS cloud services, particularly EMR, Glue, DynamoDB, Lambda, S3, and IAM.
- Experience working with Databricks for data engineering and analytics.
- Strong understanding of ETL/ELT design, data warehousing, and data modeling concepts.
- Ability to optimize big data jobs for performance and cost efficiency.
- Familiarity with version control systems (Git) and CI/CD tools.
Responsibilities :
- Perform data validation, performance tuning, and cost optimization for big data workloads.
- Develop and maintain data models, metadata, and documentation following best practices.
- Ensure data reliability, integrity, and security across cloud platforms.
- Participate in code reviews and continuous improvement initiatives using CI/CD and Git.
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1579360