Posted on: 10/12/2025
Description: Data Engineer
Location: Bangalore only
Mode: Hybrid
Role Overview:
We are seeking an experienced Data Engineer to design, build, and optimize data pipelines and solutions on modern big data platforms. The ideal candidate will have strong expertise in PySpark, Databricks, and distributed data processing frameworks, with a passion for delivering scalable and efficient data solutions.
Key Responsibilities:
- Design, develop, and maintain ETL pipelines using PySpark and Databricks.
- Work with large-scale datasets to ensure data quality, integrity, and availability.
- Optimize data workflows for performance and cost efficiency in cloud environments (Azure/AWS/GCP).
- Collaborate with data scientists, analysts, and business teams to deliver reliable data solutions.
- Implement best practices for data governance, security, and compliance.
- Monitor and troubleshoot data pipelines to ensure smooth operations.
- Document technical designs, processes, and standards.
Required Skills & Qualifications:
- 5+ years of experience in data engineering or related roles.
- Strong proficiency in PySpark and Databricks.
- Hands-on experience with big data technologies (Spark, Hadoop, Delta Lake).
- Expertise in SQL and data modeling.
- Experience with cloud platforms (Azure Data Lake, AWS S3, or GCP BigQuery).
- Knowledge of CI/CD pipelines and version control (Git).
- Familiarity with data warehousing and data lake architectures.
- Strong problem-solving and analytical skills.
Preferred Skills:
- Experience with Airflow or other orchestration tools.
- Knowledge of streaming frameworks (Kafka, Spark Structured Streaming).
- Exposure to machine learning pipelines and data science workflows.
- Understanding of DevOps practices for data engineering.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1587712