Posted on: 02/11/2025
Job Description :
You will be responsible for leading the design, development, and optimization of scalable ETL/ELT pipelines that enable efficient data integration and power critical business intelligence and analytics initiatives. This role requires a proven ability to troubleshoot complex data challenges, drive performance improvements, and ensure our data ecosystem is both scalable and cost-efficient. A passion for continuous learning, problem-solving, and enabling data-driven decision-making is highly valued.
Primary Skills : AWS, ETL Concepts, Data Integration Tool (Any)
Secondary Skills : Python, Datastage, Informatica IICS, Informatica, SSIS
Role Responsibility :
- Lead the design and architecture of scalable and robust data integration solutions, both on-premises and in the cloud.
- Ensure solutions meet business requirements and industry standards.
- Engage with executive leadership and key stakeholders to understand business needs and translate them into technical solutions.
- Define data models, schemas, and structures to optimize data storage, retrieval, and processing for analytical workloads.
- Work with data architects and solution architects to ensure alignment with overall data strategy and architecture principles.
- Lead a team of data engineers by providing technical guidance, mentorship, and support.
- Plan, prioritize, and manage data engineering projects, tasks, and timelines.
- Design, develop, and maintain data integration pipelines and ETL/ELT workflows.
- Lead multiple projects, or manage a larger team that may include sub-tracks.
Role Requirement :
- Proficient in basic and advanced SQL.
- Proficient in using Python for data integration and data engineering tasks.
- Proficiency in ETL/ELT tools such as Informatica, Talend, DataStage, SSIS, DBT, Databricks, or equivalent.
- Experience with relational databases (like SQL Server, Oracle, MySQL, PostgreSQL), NoSQL databases (like MongoDB, Cassandra), and cloud databases (Redshift, Snowflake, Azure SQL).
- Familiarity with big data technologies like Hadoop, Spark, Kafka, and cloud platforms such as AWS, Azure, or Google Cloud.
- Solid understanding of data modeling, data warehousing concepts, and practices.
- Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.); see the illustrative sketch after this list.
- Knowledgeable in performance tuning and optimization.
- Experience in data profiling and data validation.
- Experience with requirements gathering, documentation processes, and unit testing.
- Understanding of QA and the ability to implement appropriate testing processes within the project.
- Knowledge of any BI tool will be an added advantage.
- Sound aptitude, outstanding logical reasoning, and analytical skills.
- Willingness to learn and take initiative.
- Ability to adapt to a fast-paced Agile environment.
- Relevant certifications in data engineering or cloud platforms are a plus.
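To illustrate the slowly changing dimension concept referenced in the list above, the following is a minimal, hypothetical Python/pandas sketch of a Type 2 SCD load. All names here (the key column, is_current, effective_from/effective_to, and the apply_scd2 helper) are illustrative assumptions and are not part of this posting or tied to any tool it mentions.

    from datetime import date
    import pandas as pd

    def apply_scd2(dim: pd.DataFrame, incoming: pd.DataFrame, key: str,
                   tracked: list[str], load_date: date) -> pd.DataFrame:
        """Expire changed rows and append new current versions (SCD Type 2)."""
        current = dim[dim["is_current"]]
        merged = incoming.merge(current, on=key, suffixes=("", "_old"), how="left")

        # Keys whose tracked attributes changed, or that are brand new.
        old_cols = [f"{c}_old" for c in tracked]
        changed_mask = merged[old_cols].ne(
            merged[tracked].rename(columns=dict(zip(tracked, old_cols)))
        ).any(axis=1)
        changed_keys = merged.loc[changed_mask, key]

        # 1) Expire the currently active rows for changed keys.
        expire = dim[key].isin(changed_keys) & dim["is_current"]
        dim.loc[expire, ["effective_to", "is_current"]] = [load_date, False]

        # 2) Append new "current" rows for the changed/new records.
        new_rows = incoming[incoming[key].isin(changed_keys)].copy()
        new_rows["effective_from"] = load_date
        new_rows["effective_to"] = pd.NaT
        new_rows["is_current"] = True
        return pd.concat([dim, new_rows], ignore_index=True)

In practice, a load like this would usually be expressed as a MERGE/UPSERT inside the warehouse or the chosen ETL/ELT tool; the pandas version above is only meant to show the mechanics of expiring and versioning dimension rows.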
Additional Requirement :
- Strong proficiency in AWS Redshift and writing complex SQL queries.
- Solid programming experience in Python for scripting, data wrangling, and automation.
- Experience with version control tools like Git and CI/CD workflows.
- Knowledge of data modeling and data warehousing concepts.
- Prior experience with data lakes and big data technologies is a plus.
Posted By : Mayuri Vaidya, Consultant - Recruitment at ResourceTree Global Services Pvt Ltd
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1568648