Posted on: 23/09/2025
Profile: AWS Data Engineer
Mandatory Skills: AWS + Databricks + PySpark + SQL
Location: Bangalore / Pune / Hyderabad / Chennai / Gurgaon
Notice Period: Immediate
Key Responsibilities:
- Design, build, and maintain scalable data pipelines to collect, process, and store data from multiple data sources
- Optimize data storage solutions for performance, scalability, and cost-efficiency
- Develop and manage ETL/ELT processes with schema transformations and data slicing/dicing
- Collaborate with cross-functional teams to understand requirements and accelerate feature development
- Create curated datasets for downstream consumption and end-user reporting
- Automate deployment and CI/CD processes using GitHub workflows
- Ensure compliance with data governance, privacy regulations, and security protocols
- Work with AWS cloud services and Databricks for data processing, using S3 for storage
- Utilize big data technologies: Spark, SQL, and Delta Lake (see the pipeline sketch after this list)
- Integrate SFTP for secure data transfer from Databricks to remote locations (see the SFTP sketch after this list)
- Analyze Spark query execution plans for performance optimization (see the explain-plan sketch after this list)
- Troubleshoot issues in large-scale distributed systems
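To make the pipeline responsibilities concrete, here is a minimal PySpark sketch of the kind of job described above: read raw data from S3, apply a schema transformation with some slicing/dicing, and write a curated Delta table. The bucket names, paths, and column names (order_ts, status, amount, etc.) are hypothetical placeholders, not part of this posting.

# Minimal PySpark pipeline sketch: raw S3 JSON -> curated Delta table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curated-orders-pipeline").getOrCreate()

# Hypothetical raw landing zone on S3
raw = spark.read.json("s3://example-raw-bucket/orders/")

curated = (
    raw
    .withColumn("order_date", F.to_date("order_ts"))   # schema transformation
    .filter(F.col("status") == "COMPLETED")            # slice/dice example
    .select("order_id", "customer_id", "order_date", "amount")
)

# Delta Lake write; partitioning by date keeps downstream reads cheap.
(curated.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("s3://example-curated-bucket/orders_delta/"))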
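For the execution-plan analysis item, a common starting point is DataFrame.explain (the "formatted" mode requires Spark 3+). A minimal sketch, using the same hypothetical Delta paths as above plus an assumed customers table:

# Inspect the physical plan of a join before tuning it.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("plan-inspection").getOrCreate()
orders = spark.read.format("delta").load("s3://example-curated-bucket/orders_delta/")
customers = spark.read.format("delta").load("s3://example-curated-bucket/customers_delta/")

joined = orders.join(customers, "customer_id")
joined.explain(mode="formatted")   # SortMergeJoin in the plan implies a shuffle

# If customers is small, broadcasting it avoids the shuffle entirely:
orders.join(F.broadcast(customers), "customer_id").explain(mode="formatted")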
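For the SFTP integration item, one common approach on Databricks is the paramiko library (it must be installed on the cluster). The host, credentials, and paths below are hypothetical; in practice credentials would come from a Databricks secret scope rather than literals.

# Push an extract from DBFS to a remote SFTP server via paramiko.
import paramiko

HOST, PORT = "sftp.example.com", 22   # hypothetical endpoint
transport = paramiko.Transport((HOST, PORT))
# Prefer key-based auth or dbutils.secrets.get(...) over hard-coded credentials.
transport.connect(username="svc_transfer", password="CHANGE_ME")
sftp = paramiko.SFTPClient.from_transport(transport)
try:
    # Driver-side code on Databricks sees DBFS under the /dbfs/ mount.
    sftp.put("/dbfs/exports/orders_extract.csv", "/inbound/orders_extract.csv")
finally:
    sftp.close()
    transport.close()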
Required Skills:
- 3+ years of AWS experience (S3, EMR, Glue, Lambda, Redshift)
- Proficiency in Spark, SQL, Delta Lake, and Databricks
- Strong Python/Scala programming skills
- Experience with ETL/ELT processes and data pipelines
- Git/GitHub workflows and CI/CD automation
- Knowledge of data governance and security protocols
- Problem-solving skills in distributed systems
Preferred:
- AWS certifications
- Real-time data processing experience
- Bachelor's degree in Computer Science or a related field
Functional Area: Data Engineering
Job Code: 1550909