Posted on: 15/12/2025
Description :
Role : Data Engineer
Experience : 7+ years
Location : Bangalore
Skills Required : Databricks, PySpark & Python, SQL, AWS Services
Project Overview & Role Scope :
We are seeking highly skilled Data Engineers with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team.
Key Responsibilities :
- Design, build, and maintain scalable data pipelines using Databricks and PySpark.
- Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
- Implement data integration solutions across AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).
- Collaborate with analytics, data science, and business teams to deliver clean, reliable datasets.
- Ensure data quality, performance, and reliability across workflows.
- Participate in code reviews, architecture discussions, and performance optimization.
- Support migration and modernization of legacy systems to cloud-based solutions.
Key Skills :
- Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.
- Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
- Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
- Experience with data modeling, schema design, and performance optimization.
- Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).
- Excellent problem-solving and communication skills.
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1590239