
Senior Data Engineer - Apache Spark

RIGHT MOVE STAFFING SOLUTIONS PRIVATE LIMITED
Pune
10 - 13 Years

Posted on: 15/08/2025

Job Description

Job Summary:

We are looking for a highly skilled and experienced Senior Data Engineer to join our data team. The ideal candidate will have over 4 years of experience building and maintaining robust data pipelines and infrastructure. This role requires expertise in cloud data services, big data processing frameworks, and a strong understanding of data warehousing and lakehouse concepts. The Senior Data Engineer will play a crucial role in ensuring that our data architecture is scalable, reliable, and secure.

Key Responsibilities:

- Design, develop, and optimize scalable ETL (Extract, Transform, Load) processes to ingest, transform, and load data from various sources into data warehouses and data lakes.

- Lead the development of data processing jobs using Azure Databricks, PySpark, and Apache Spark (a minimal illustrative sketch follows this list).

- Architect and manage data infrastructure on major cloud platforms including Azure, AWS, and GCP.

- Collaborate with data scientists and analysts to build and maintain the data foundation necessary for advanced analytics and machine learning initiatives.

- Implement and enforce DevOps practices for data pipelines, including CI/CD, monitoring, and automated testing.

- Contribute to the design of the overall data architecture, including data lakes, lakehouses, and traditional data warehouses.

- Write clean, well-documented, and efficient code primarily in Python for data manipulation and automation tasks.

- Troubleshoot and resolve complex data-related issues, ensuring data quality and system performance.
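
To give a concrete sense of the ETL and PySpark work described above, here is a minimal, illustrative sketch of an ingest-transform-load job; the paths, column names, and cleansing rules are hypothetical and not taken from this posting.

```python
# Illustrative sketch only -- paths, columns, and rules are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl_example").getOrCreate()

# Extract: read raw records from a hypothetical landing zone
raw = spark.read.json("/mnt/landing/orders/")

# Transform: deduplicate, type-cast, and filter out invalid rows
orders = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
)

# Load: write to a curated zone, partitioned for downstream analytics
orders.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/orders/")
```

On Azure Databricks, a job like this would typically be scheduled through the workspace job scheduler and would often write to a Delta table in the lakehouse rather than plain Parquet files.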

Required Technical Skills:

- Cloud Platforms: Extensive experience with at least one major cloud provider (Azure, AWS, GCP), including their data-specific services.

- Big Data Frameworks: Deep knowledge of Apache Spark and hands-on experience with PySpark and Azure Databricks.

- Data Architecture: Strong understanding of data lakes, data warehouses, and the modern lakehouse architecture.

- ETL/ELT: Expertise in designing and building efficient and scalable data pipelines.

- Programming Languages: Proficiency in Python for scripting, data engineering, and automation.

- DevOps: Familiarity with DevOps principles and tools for deploying and managing data solutions (see the testing sketch after this list).
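
To make the DevOps and automated-testing expectations concrete, one common pattern is to factor transformations into plain Python functions and cover them with pytest unit tests that run against a local Spark session in CI. The function, data, and test below are hypothetical illustrations, not requirements from this posting.

```python
# Hypothetical example of a testable transformation and its pytest check.
from pyspark.sql import SparkSession, functions as F

def remove_invalid_amounts(df):
    """Keep only rows whose amount is a positive number."""
    return df.filter(F.col("amount").cast("double") > 0)

def test_remove_invalid_amounts():
    spark = SparkSession.builder.master("local[1]").appName("unit_test").getOrCreate()
    df = spark.createDataFrame(
        [("o1", 10.0), ("o2", -5.0), ("o3", 0.0)],
        ["order_id", "amount"],
    )
    result = remove_invalid_amounts(df)
    assert [row.order_id for row in result.collect()] == ["o1"]
```

A test like this can run in the same CI pipeline that deploys the pipeline code, which is what CI/CD and automated testing for data pipelines usually looks like in practice.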

Qualifications and Experience:

- A minimum of 4 years of professional experience in a Data Engineering role.

- Bachelor's degree in Computer Science, Engineering, or a related quantitative field.

- Proven ability to work with large-scale datasets and complex data pipelines.

- Strong analytical, problem-solving, and communication skills.

- Experience with relational and NoSQL databases is a plus.

Notice Period - Immediate

