Posted on: 16/12/2025
Description:
About the Role:
We are looking for a highly skilled Python Developer with strong expertise in PySpark and AWS Databricks to build and optimize large-scale data pipelines.
The role involves handling high-volume data, improving system performance, and ensuring reliability across cloud-based data platforms.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Python and PySpark
- Work extensively on AWS Databricks for distributed data processing
- Process and migrate large datasets across AWS services (S3, EC2)
- Optimize Spark jobs for performance, scalability, and cost efficiency
- Ensure data quality, reliability, and fault tolerance
- Collaborate with cross-functional teams including data engineering, analytics, and product
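The data-quality responsibility above can be sketched in plain Python. The schema (`id`, `amount`) and validation rules here are hypothetical, and a production pipeline on Databricks would typically express the same checks as PySpark column expressions rather than row-by-row Python:

```python
# A minimal, self-contained sketch of a row-level data-quality check.
# The field names and rules are illustrative, not from the posting.

def validate_row(row: dict) -> list:
    """Return a list of data-quality violations for one record."""
    errors = []
    if not row.get("id"):
        errors.append("missing id")
    amount = row.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        errors.append("invalid amount")
    return errors

def partition_rows(rows):
    """Split records into clean rows and quarantined (row, errors) pairs."""
    clean, quarantined = [], []
    for row in rows:
        errors = validate_row(row)
        if errors:
            quarantined.append((row, errors))
        else:
            clean.append(row)
    return clean, quarantined
```

Quarantining bad records instead of failing the whole job is one common way to keep a high-volume pipeline fault-tolerant.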
Required Skills & Experience:
- Strong hands-on experience in Python development
- Proven expertise in PySpark
- Experience working with AWS Databricks
- Solid understanding of Big Data architectures and Spark optimization
- Experience handling large-scale datasets in cloud environments
- Good problem-solving and analytical skills
Good to Have:
- Strong SQL skills
- Experience with ETL frameworks and workflows
- Exposure to FinTech / BFSI domain
- Knowledge of cloud cost optimization and performance tuning
Why Join Us:
- Work on high-impact, large-scale data systems
- Solve complex performance and scalability challenges
- Opportunity to extend the contract based on performance and project needs
Posted in: Data Engineering
Job Code: 1590878