
Job Description

Job Title : Senior ETL Developer

Location : Gurugram

Work mode : Monday to Friday (WFO); Odd Saturdays (WFH); Even Saturdays (Off)

Experience : 6+ Years in IT

Notice Period : Immediate to 15 days

Must-Have Experience : 5+ years in ETL, with strong Apache NiFi expertise

Required Skills : Apache NiFi, AWS, SQL, data pipeline development using ETL tools, Java/Python

Job Summary :

We are seeking an experienced Senior ETL Developer with deep expertise in Apache NiFi, PostgreSQL, and large-scale data pipelines. This role involves building and optimizing high-performance ETL workflows across very large data environments (flows running thousands of processors, e.g., 5000+ in a single NiFi process group). The ideal candidate is a problem solver who thrives in fast-paced environments and has hands-on experience building scalable, reliable, and secure data systems.

Key Responsibilities :

- Design, develop, and maintain robust ETL pipelines for large and complex datasets.

- Build, optimize, and troubleshoot SQL and PL/pgSQL queries in PostgreSQL, ensuring efficiency in high-concurrency environments.

- Manage and scale environments with 5000+ processor instances in a single NiFi process group or equivalent setups.

- Ingest, process, and validate data from multiple sources while ensuring data integrity, consistency, and availability.

- Monitor ETL workflows, identify bottlenecks, and apply performance tuning techniques.

- Collaborate with data architects, analysts, and business stakeholders to define and deliver data requirements.

- Implement and maintain data quality, validation, and reconciliation across systems.

- Document ETL processes, pipelines, and data architectures.

- Ensure compliance with data security, governance, and privacy standards.

Required Skills & Experience :

- 5+ years of ETL development experience with complex data workflows.

- Strong hands-on expertise in Apache NiFi (must-have).

- Advanced experience with PostgreSQL, including query optimization and scaling in high-concurrency systems.

- Proven ability to work in large-scale environments (e.g., 5000+ NiFi processor instances).

- Proficiency in SQL, PL/pgSQL, and performance tuning techniques.

- Hands-on experience with ETL tools such as Talend, Informatica, or Airflow.

- Familiarity with big data ecosystems (Hadoop, Spark, Kafka) is a plus.

- Strong knowledge of data modeling, warehousing, and governance principles.

- Excellent problem-solving, debugging, and analytical skills.

Preferred Qualifications :

- Experience with cloud platforms (AWS, GCP, Azure).

- Exposure to DevOps and CI/CD practices for data pipelines.

- Hands-on experience with real-time streaming data processing.

- Knowledge of scripting languages (Python, Bash, etc.).

Education :

- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.

