hirist

Senior Data Engineer - Python/SQL/ETL

Digihelic Solutions Private Limited
Multiple Locations
12 - 15 Years
4.6 · 20+ Reviews

Posted on: 30/10/2025

Job Description

We are seeking a highly experienced and technically adept Senior Data Engineer to join our TAVS team in Pune. The ideal candidate will have a minimum of 12 years of relevant experience and a strong background in designing, developing, and optimizing large-scale data pipelines and data warehouse solutions. This role requires deep expertise in PostgreSQL, Databricks/Apache Spark, Python, and Azure services.

Key Responsibilities :

- Data Architecture & Design : Design, implement, and manage robust, scalable, and high-performance data warehouse solutions and data marts, utilizing deep knowledge of PostgreSQL for optimal storage and retrieval.

- Database Management : Oversee all aspects of PostgreSQL database administration, including advanced query optimization, performance tuning, data replication, backup strategies, and ensuring data integrity and security.

- ETL/ELT Development : Develop, construct, test, and maintain data pipelines using Databricks (or another Apache Spark distribution) to ingest, transform, and load large datasets from various sources, ensuring data quality and reliability.

- Programming & Scripting : Write clean, efficient, and well-documented code primarily in Python, leveraging powerful data manipulation libraries like Pandas and NumPy for complex data transformation and analysis tasks.

- Cloud Integration : Utilize and manage Azure services (e.g., Azure Data Lake Storage, Azure Synapse Analytics, Azure Data Factory, Azure Databricks) for data storage, orchestration, processing, and analytical workloads.

- Performance Optimization : Proactively identify and resolve data bottlenecks and performance issues across the data platform, tuning SQL queries and Spark jobs for maximum efficiency.

- Collaboration & Mentorship : Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver effective solutions. Mentor junior team members on best practices in data engineering and specific technologies.
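The Python/Pandas responsibility above can be sketched as a small transformation function. This is a minimal illustration, not part of the role's actual codebase: the column names, validation rules, and the currency rate are assumptions made for the example.

```python
# Illustrative Pandas/NumPy transformation: clean raw records, derive a
# column with a vectorized NumPy operation, and aggregate per region.
# All names and the INR->USD rate are assumed for this sketch.
import numpy as np
import pandas as pd

def transform_orders(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean raw order records and aggregate revenue per region."""
    df = raw.copy()
    # Coerce malformed amounts to NaN, then drop unusable rows.
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    df = df.dropna(subset=["amount", "region"])
    # Vectorized derived column via NumPy (assumed conversion rate).
    df["amount_usd"] = np.round(df["amount"] * 0.012, 2)
    return (
        df.groupby("region", as_index=False)["amount_usd"]
          .sum()
          .sort_values("region")
    )

raw = pd.DataFrame({
    "region": ["Pune", "Pune", "Mumbai", None],
    "amount": ["1000", "2500", "oops", "400"],
})
print(transform_orders(raw))
```

The pattern — coerce, drop, derive, aggregate — is the everyday shape of the "complex data transformation and analysis tasks" the role describes, scaled down to a runnable toy.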

Required Qualifications & Skills :

- Experience: 12+ years in data engineering, architecture, and pipeline development (mandatory)

- Database: PostgreSQL (advanced database design, indexing, partitioning, optimization, and management)

- Big Data Platform: Databricks (or another Apache Spark distribution, e.g., EMR or Google Cloud Dataproc)

- Programming: Python (advanced proficiency, including Pandas and NumPy for data manipulation)

- Cloud: Microsoft Azure (services for data storage, processing, and analytics, e.g., ADLS, ADF, Synapse)

- SQL: complex query writing, stored procedures, and performance tuning

- ETL/ELT: designing, developing, and deploying scalable data pipelines
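The ETL/ELT skill named above can be sketched as a batched extract-transform-load loop. To keep the example self-contained, Python's stdlib sqlite3 stands in for PostgreSQL, and the table, column names, and batch size are hypothetical; a production pipeline would target PostgreSQL (e.g., via a driver such as psycopg) and far larger batches.

```python
# Minimal ETL sketch: extract CSV rows, validate/transform them, and load
# in batches. sqlite3 is a stand-in for PostgreSQL; names are assumed.
import csv
import io
import sqlite3

def load_events(conn: sqlite3.Connection, source, chunk_size: int = 2) -> int:
    """Extract CSV rows, skip malformed records, load in batches; return row count."""
    conn.execute("CREATE TABLE IF NOT EXISTS events (user_id TEXT, value REAL)")
    batch, loaded = [], 0
    for row in csv.DictReader(source):
        try:
            batch.append((row["user_id"], float(row["value"])))  # validate/transform
        except (ValueError, KeyError):
            continue  # skip malformed records rather than failing the batch
        if len(batch) >= chunk_size:
            conn.executemany("INSERT INTO events VALUES (?, ?)", batch)
            loaded += len(batch)
            batch.clear()
    if batch:  # flush the final partial batch
        conn.executemany("INSERT INTO events VALUES (?, ?)", batch)
        loaded += len(batch)
    conn.commit()
    return loaded

conn = sqlite3.connect(":memory:")
csv_text = "user_id,value\nu1,1.5\nu2,bad\nu3,2.5\nu4,3.0\n"
print(load_events(conn, io.StringIO(csv_text)))  # 3 valid rows loaded
```

Batched `executemany` inserts with per-record validation is one common way to keep loads efficient while isolating bad records — the data-quality-and-reliability concern the responsibilities section calls out.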

Desired (Good to Have) Skills :

- Experience with other NoSQL databases (e.g., MongoDB, Cassandra).

- Familiarity with CI/CD tools and data governance principles.

- Knowledge of stream processing technologies (e.g., Kafka).

- Certification in Azure Data Engineering (e.g., Azure Data Engineer Associate - DP-203).

