
Informatica/IICS Engineer - Synapse Analytics

HexaCorp
Any Location
8 - 10 Years

Posted on: 27/10/2025

Job Description

Description:


We are seeking a highly skilled Informatica IICS Engineer with strong technical and functional expertise in data integration, data quality, and cloud-based ETL design.

The ideal candidate will be responsible for designing, developing, and optimizing large-scale data workflows across cloud and on-premises environments.

Responsibilities:

- Design, develop, and maintain Informatica ETL workflows (PowerCenter, IDQ, IICS) for enterprise-scale data integration.

- Integrate with major cloud platforms (Azure, AWS, GCP) and connect to cloud data warehouses such as Snowflake, Synapse, Redshift, and BigQuery.

- Implement data quality, profiling, and cleansing processes using Informatica IDQ.

- Develop and optimize ETL/ELT pipelines for performance, scalability, and maintainability.

- Build real-time and batch data pipelines leveraging CDC, Kafka, Spark, or similar streaming tools.

- Collaborate with data architects, analysts, and business teams to define and deliver robust data solutions.

- Ensure adherence to data governance, security, and compliance standards.

- Provide production support and issue resolution, ensuring minimal downtime.

- Mentor junior developers and promote data engineering best practices and DevOps automation.

Primary Skills (Must-Have):

- Strong hands-on experience with Informatica PowerCenter (ETL/ELT) and Informatica Data Quality (IDQ).

- Proficiency in SQL and PL/SQL across Oracle, SQL Server, Teradata, or DB2.

- Advanced knowledge of Informatica Intelligent Cloud Services (IICS): Data Integration (DI), Application Integration (AI), and API Management.

- Strong understanding of cloud data ecosystems (Azure, AWS, or GCP).

- Proven experience integrating with cloud data warehouses: Snowflake, Synapse, Redshift, BigQuery.

- Hands-on expertise in data modeling (Star/Snowflake schemas, OLTP, OLAP).

- Demonstrated experience in large-scale data integration and performance tuning.

Secondary Skills (Nice-to-Have):

- Programming/automation: Python, Java, or Shell scripting.

- Experience with Big Data technologies: Hadoop, Spark, or Databricks.

- Familiarity with CI/CD pipelines (Jenkins, Git, Azure DevOps) for deployment automation.

Preferred Qualifications:

- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

- 7+ years of experience in data integration and ETL development.

- Experience in hybrid cloud data migration and modernization projects.

