Posted on: 05/11/2025
Job Description :
The ideal candidate will have hands-on experience with Informatica IICS, AWS Redshift, Python scripting, and Unix/Linux systems.
You will be responsible for building and maintaining scalable ETL pipelines to support business intelligence and analytics needs.
A strong passion for continuous learning, problem-solving, and enabling data-driven decision-making is highly valued.
Primary Skills : Informatica IICS, AWS
Secondary Skills : Python, Unix/Linux
Role Responsibility :
- Design, develop, and maintain end-to-end data pipelines and infrastructure.
- Build and manage data flows across structured and unstructured data sources, including streaming and batch integrations.
- Ensure data integrity and quality through automated validations, unit testing, and robust documentation (see the validation sketch after this list).
- Optimize data processing performance and manage large datasets efficiently.
- Collaborate closely with stakeholders and project teams to align data solutions with business objectives.
- Implement and maintain security and privacy protocols to ensure safe data handling.
- Lead development environment setup and configuration of tools and services.
- Mentor junior data engineers and contribute to continuous improvement and automation initiatives.
- Coordinate with QA and UAT teams during testing and release phases.
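As a concrete illustration of the automated-validation responsibility above, here is a minimal, pytest-style sketch; pandas is assumed to be available, and the column names and sample data are hypothetical examples, not part of any real pipeline.

```python
# Minimal sketch of an automated data-quality check; pandas is assumed,
# and the column names below are hypothetical examples.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality problems found in an extract."""
    problems = []
    if df["order_id"].isna().any():
        problems.append("null order_id values")
    if df["order_id"].duplicated().any():
        problems.append("duplicate order_id values")
    if (df["amount"] < 0).any():
        problems.append("negative amounts")
    return problems

def test_orders_extract():
    # In a real pipeline this frame would be loaded from the staging area.
    df = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 5.5, 7.25]})
    assert validate_orders(df) == []
```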
Role Requirement :
- Strong proficiency in SQL, including procedures, performance tuning, and analytical functions (see the windowed-query sketch after this list).
- Hands-on experience with scripting languages (Shell / PowerShell).
- Familiarity with cloud and big data technologies.
- Experience working with relational and non-relational databases, as well as data streaming systems.
- Proficiency in data profiling, validation, and testing practices.
- Excellent problem-solving, communication (written and verbal), and documentation skills.
- Exposure to Agile methodologies and CI/CD practices.
- Self-motivated, adaptable, and capable of working in a fast-paced environment.
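To make the analytical-functions requirement concrete, the sketch below runs a windowed query against an in-memory SQLite database so it needs no warehouse connection; the table and figures are invented, and the same window-function syntax (RANK/SUM ... OVER) carries over to Redshift.

```python
# Self-contained sketch of SQL analytical (window) functions, run against
# in-memory SQLite (3.25+); the table and data are invented examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('east', '2025-01', 100), ('east', '2025-02', 150),
        ('west', '2025-01', 200), ('west', '2025-02', 120);
""")

# Rank each month within its region and compute a per-region total.
query = """
    SELECT region, month, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk,
           SUM(amount) OVER (PARTITION BY region) AS region_total
    FROM sales
    ORDER BY region, rnk
"""
for row in conn.execute(query):
    print(row)
```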
Additional Requirement :
- Strong proficiency in AWS Redshift and writing complex SQL queries.
- Solid programming experience in Python for scripting, data wrangling, and automation (see the Redshift access sketch after this list).
- Experience with version control tools like Git and CI/CD workflows.
- Knowledge of data modeling and data warehousing concepts.
- Prior experience with data lakes and big data technologies is a plus.
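For the Redshift and Python items above, a minimal access sketch follows; it assumes the psycopg2 driver (Redshift speaks the PostgreSQL wire protocol), and the endpoint, credentials, and table name are all placeholders.

```python
# Minimal sketch of Python-driven Redshift access via psycopg2; every
# connection detail and the table name are placeholders, not real values.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="analytics",   # placeholder
    user="etl_user",      # placeholder
    password="***",       # fetch from a secrets manager in practice
)
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT event_date, COUNT(*) AS events
        FROM web.page_views          -- hypothetical table
        GROUP BY event_date
        ORDER BY event_date DESC
        LIMIT 7
    """)
    for event_date, events in cur.fetchall():
        print(event_date, events)
conn.close()
```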
Functional Area : Data Engineering
Job Code : 1569636