
eGrove Systems - Principal Data Engineer - ETL/Python

eGrove Systems Pvt Ltd.
Multiple Locations
10 - 12 Years

Posted on: 29/10/2025

Job Description

Key Responsibilities:


- Data Architecture & Design: Lead the design and implementation of highly scalable and fault-tolerant data pipelines using modern platforms such as Azure Synapse, Snowflake, or Databricks Workflows.

- Pipeline Development: Develop, construct, test, and maintain robust data architectures, ensuring optimal data flow and ingestion from diverse sources.

- Cloud Data Platform Expertise: Leverage expertise in Azure Data Platform components, including Azure Data Factory (ADF) for orchestration, Azure Data Lake Storage (ADLS) for storage, and Azure SQL Database for operational data storage.

- Coding & Scripting: Apply strong proficiency in Python or Scala to develop complex data transformation logic and custom ETL/ELT processes.

- SQL Optimization: Exhibit SQL mastery by writing advanced, efficient queries, performing performance tuning, and optimizing database schemas and procedures.

- PL/SQL Development: Design and develop efficient PL/SQL procedures for data manipulation and business logic within relational databases.

- Data Quality & Governance: Implement processes and systems to monitor data quality, ensuring accuracy, completeness, and reliability across all data assets.

- Reporting & BI Support: Collaborate with BI analysts, providing clean, optimized data feeds and supporting the development of dashboards using tools like Power BI.

- Cross-Cloud Integration: Utilize experience with general cloud platforms (AWS and Azure) and associated data services to inform architectural decisions and potential future integrations.

Required Skills & Qualifications:


- 10+ years of progressive experience in data engineering, with an emphasis on architecture and optimization.

Technical Expertise:


- Cloud Data Warehousing: Extensive, hands-on experience designing and optimizing scalable data pipelines using modern platforms like Azure Synapse, Snowflake, or Databricks Workflows.

- Programming: Strong proficiency in Python or Scala for complex data processing, transformation, and ETL/ELT development.

- SQL Mastery: Advanced query writing, performance tuning, and optimization.

- PL/SQL Development: Hands-on experience developing efficient PL/SQL procedures.

- Experience with other relevant programming and scripting languages, including T-SQL, as well as general data query optimization.

- Azure Data Platform: Deep, hands-on experience with core Azure services: Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), and Azure SQL Database.

- Cloud Platforms: Broad experience with major cloud platforms (e.g., AWS and Azure) and their associated services for data processing and storage.

- Business Intelligence: Experience working with reporting and visualization tools, specifically Power BI.

Professional Competencies:


- Proven ability to work independently, manage multiple priorities, and meet deadlines in a fast-paced environment.

- Excellent problem-solving, analytical, and communication skills.

- Experience mentoring or leading technical teams is a significant plus.

