Posted on: 01/09/2025
Required Experience:
Preferred Skillsets:
- Highly proficient in SQL and stored procedures.
- At least 8 years of hands-on experience with AWS Step Functions, AWS Lambda, AWS S3, AWS ECS, AWS CloudWatch, AWS EventBridge, AWS Athena, and AWS Glue.
- At least 8 years of experience with Terraform.
- At least 8 years of experience with Jenkins pipelines (Groovy scripting).
- At least 8 years of experience in various kinds of data scraping, data ingestion, and data processing.
- Good experience with Git and Bitbucket.
- Good experience with Jira, Confluence, and Scrum.
- Worked on end-to-end ETL projects.
- Good understanding of data warehouse/data lake concepts.
- Strong coding skills with adherence to best practices.
- Proactive team player who can multitask and confidently collaborate with teams such as data engineering, architecture, and data teams.
- Able to lead and work independently on assigned modules.
- Able to confidently lead and represent the data team in client meetings and mentor junior engineers.
- Solution design skills.
- Understanding of end-to-end data management tasks beyond ETL (e.g., scheduling, performance tuning, code optimization, principles of data modeling).
- Strong problem analysis and problem-solving skills.
- Effective communication skills.
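As a rough illustration of the data ingestion and processing work described above, here is a minimal, hypothetical sketch of a cleaning step for scraped records. The field names ("id", "price") and the cleaning rules are illustrative only, not tied to any specific project:

```python
# Hypothetical ingestion/processing step: normalize raw scraped records
# before loading them downstream. Field names are illustrative.

def clean_records(raw_records):
    """Drop malformed rows and coerce prices to floats."""
    cleaned = []
    for rec in raw_records:
        # Skip rows missing a primary key -- typical ingestion hygiene.
        if not rec.get("id"):
            continue
        try:
            price = float(rec.get("price", ""))
        except ValueError:
            continue  # discard rows with unparseable prices
        cleaned.append({"id": rec["id"], "price": round(price, 2)})
    return cleaned
```

In practice a step like this would sit between the scraper and the load stage, so that only validated rows reach the warehouse.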
Key Responsibilities:
- Develop and optimize infrastructure as code using Terraform.
- Build and maintain CI/CD pipelines in Jenkins using Groovy scripting.
- Lead end-to-end ETL projects, ensuring best coding practices and high-quality delivery.
- Implement efficient data scraping, ingestion, and processing mechanisms for large-scale data.
- Collaborate effectively with data engineering, architecture, and analytics teams to design and
implement data solutions.
- Lead modules independently and provide technical guidance to junior engineers.
- Represent the data engineering team in client meetings, providing technical insights and
progress updates.
- Apply deep understanding of data warehousing and data lake concepts to design optimal solutions.
- Ensure scheduling, performance tuning, code optimization, and data modeling principles are
applied throughout data workflows.
- Utilize version control tools such as Git and Bitbucket for source code management.
- Use Jira, Confluence, and Scrum methodologies to manage project workflows.
- Proactively identify and solve complex problems related to data management and
engineering.
- Communicate effectively across teams to align technical solutions with business objectives.
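The Lambda/EventBridge responsibilities above might look, in outline, like the following hypothetical handler. The event envelope (`source`, `detail`) follows the standard EventBridge shape; the routing logic and the "records" field are illustrative assumptions:

```python
# Hypothetical AWS Lambda handler invoked by an EventBridge rule.
# In a real pipeline this is where a boto3 client would write to S3 or
# start a Step Functions execution; here we only summarize the event.

def handler(event, context):
    detail = event.get("detail", {})
    source = event.get("source", "unknown")
    return {
        "status": "ok",
        "source": source,
        "record_count": len(detail.get("records", [])),
    }
```

A handler this small keeps the Lambda testable locally; the AWS-specific side effects would be isolated behind separate, mockable functions.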
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1538726