Posted on: 11/12/2025
Job Title: Databricks on AWS and PySpark Engineer
Job Summary
- Develop and optimize data processing workflows using PySpark and Databricks
- Collaborate with data scientists and analysts to design and implement data models and architectures
- Ensure data quality, security, and compliance with industry standards and regulations
- Troubleshoot and resolve data pipeline issues and optimize performance
- Stay up-to-date with industry trends and emerging technologies in data engineering and big data processing
Requirements
Technical Requirements
- 3+ years of experience in data engineering, with a focus on Databricks on AWS and PySpark
- Strong expertise in PySpark and Databricks, including data processing, data modeling, and data warehousing
- Experience with AWS services, including S3, Glue, and IAM
- Strong understanding of data engineering principles, including data pipelines, data governance, and data security
- Experience with data processing workflows and data pipeline management
Soft Skills
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration skills
- Ability to work in a fast-paced, dynamic environment
- Ability to adapt to changing requirements and priorities
Posted in
Data Engineering
Functional Area
Big Data / Data Warehousing / ETL
Job Code
1589013