
Job Description

Job Overview :

We are seeking an experienced PL/SQL Developer Lead with strong exposure to PySpark, data pipelines, and enterprise data integration.

This role involves leading end-to-end ETL and data engineering initiatives, designing scalable data solutions, and ensuring data quality, governance, and analytics readiness across platforms.

The ideal candidate will combine hands-on technical expertise with the ability to guide teams and drive best practices in data engineering.

Key Responsibilities :

- Lead the design, development, and optimization of end-to-end ETL/data pipelines, ensuring efficient extraction, transformation, and loading of data from multiple sources

- Develop and maintain PL/SQL procedures, functions, packages, and performance-optimized queries for large-scale data processing

- Build and enhance data processing workflows using PySpark for distributed and high-volume datasets (an illustrative sketch follows this list)

- Design and implement API-based integrations and data ingestion mechanisms across enterprise systems

- Work with modern data storage and big data technologies such as Iceberg, DuckDB, Parquet, and Trino to support scalable analytics

- Drive data modeling, database design, and data warehousing solutions aligned with reporting and analytics needs

- Lead and support data integration initiatives using iPaaS tools such as IBM Sterling, MuleSoft, or Dell Boomi

- Ensure adherence to data governance, data quality, and compliance standards, implementing validation and monitoring mechanisms

- Collaborate with analytics and business teams to enable insights using SQL and BI tools

- Mentor junior developers, perform code reviews, and enforce development best practices

- Coordinate with cloud and infrastructure teams to deploy and manage data solutions on at least one cloud platform
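
To give a feel for the PySpark workflow responsibility noted above, the following is a minimal sketch of a batch ETL step of the kind this role would own. The paths, column names, and aggregation logic are hypothetical placeholders for illustration only and are not taken from this posting.

```python
# Minimal PySpark batch ETL sketch; all names and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_revenue_load").getOrCreate()

# Extract: read raw order events from a Parquet landing zone (illustrative path)
orders = spark.read.parquet("s3a://raw-zone/orders/")

# Transform: basic cleansing plus a daily revenue aggregate
daily_revenue = (
    orders
    .filter(F.col("order_status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(
        F.sum("order_amount").alias("total_revenue"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Load: write the curated result back as date-partitioned Parquet
(daily_revenue
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://curated-zone/daily_revenue/"))

spark.stop()
```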

Required Skills & Experience :

- 6-9 years of experience in data engineering, PL/SQL development, and ETL pipeline implementation

- Strong hands-on expertise in PL/SQL, including complex query optimization and performance tuning

- Practical experience with PySpark for large-scale data processing and transformation

- Solid understanding of ETL/ELT architectures, API integrations, and scripting using Python

- Experience working with big data and modern analytics technologies such as Trino, Iceberg, DuckDB, and Parquet (see the brief DuckDB sketch after this list)

- Strong knowledge of database design, data modeling, and data warehousing concepts

- Advanced proficiency in SQL and experience working on at least one cloud platform (AWS, Azure, or GCP)

- Hands-on exposure to BI and analytical tools such as Superset or Power BI for data consumption and insights

- Good understanding of data governance frameworks, data quality management, and metadata management

- Experience delivering data integration projects using iPaaS tools like IBM Sterling, MuleSoft, or Dell Boomi

- Strong analytical, problem-solving, and communication skills with the ability to lead technical discussions
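
As a complement to the SQL-on-Parquet skills listed above, here is a small sketch of querying curated Parquet output with DuckDB's Python API. The file pattern and column names are assumptions made purely for illustration.

```python
# Minimal DuckDB sketch; file pattern and columns are hypothetical.
import duckdb

con = duckdb.connect()  # in-memory database

# Query Parquet files directly with SQL via DuckDB's read_parquet table function
result = con.execute(
    """
    SELECT order_date,
           SUM(total_revenue) AS revenue
    FROM read_parquet('curated-zone/daily_revenue/*.parquet')
    GROUP BY order_date
    ORDER BY order_date
    """
).fetchdf()

print(result.head())
```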

