Posted on: 10/07/2025
AWS + PySpark + Databricks
Description:
We are seeking an experienced AWS + PySpark + Databricks professional to join our dynamic team in India. The ideal candidate will have a strong background in data engineering and will be responsible for designing and implementing data solutions that leverage cloud technologies and big data processing frameworks.
Responsibilities:
- Design, develop, and maintain data pipelines using AWS services, PySpark, and Databricks.
- Collaborate with data architects and data engineers to optimize data flow and data management practices.
- Perform data analysis and visualization to support business decision-making processes.
- Monitor and troubleshoot data processing jobs to ensure they are running efficiently and effectively.
- Implement best practices for data governance and data quality management.
- Work closely with cross-functional teams to gather requirements and deliver data solutions.
Skills and Qualifications:
- 5-8 years of experience in data engineering or a related field.
- Strong proficiency in AWS services such as S3, EC2, Lambda, Glue, and Redshift.
- Hands-on experience with PySpark for big data processing and analytics.
- Familiarity with the Databricks platform and its features for data analytics.
- Experience with SQL and NoSQL databases.
- Knowledge of data modeling and ETL processes.
- Strong programming skills in Python or Scala.
- Ability to work in an agile environment and collaborate with teams effectively.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1510901