hirist

Job Description

Description :

- Design, develop and maintain data pipelines for collecting, transforming and loading data into various data stores.

- Build and maintain data warehousing and data lake solutions.

- Develop and deploy data models that support various business requirements.

- Write efficient and scalable code in languages such as Python, Scala or Java.

- Lead the design of data solutions with quality, automation and performance in mind.

- Own the data pipelines feeding into the Data Platform ensuring they are reliable and scalable.

- Ensure data is available in a fit-for-purpose and timely manner for business and analytics consumption.

- Work with the Data Governance team to ensure solutions comply with regulations such as GDPR and with CISO policies, and that data quality is baked into pipelines.

- Maintain and optimise existing data pipelines to improve performance and quality, minimising impact to the business.

- Collaborate with cross-functional teams to understand data requirements and provide support for data-driven initiatives.

- Set and embed standards for systems and solutions, and share knowledge to keep the team engaged and skilled in the latest technology.

- Prototype and adopt new approaches, driving innovation into the solutions.

- Work closely with the Data Product Manager to support alignment of requirements and sources of data from line of business systems and other endpoints.

- Effectively communicate plans and progress to both technical and non-technical stakeholders.

- Develop and implement the Data roadmap for strategic data sets.

- Communicate complex solutions in a clear and understandable way to both experts and non-experts.

- Mentor and guide junior members of the team to help them get up to speed quickly.

- Interact with stakeholders and clients to understand their data requirements and provide solutions.

- Stay up-to-date with industry trends and technology advancements in data engineering.

- Promote the Data Platform and Data & Analytics team brand throughout the business and represent the interests of data engineering in cross-functional forums.

- Champion the importance of modern data solutions across the business and support the education of colleagues on the business value of good-quality data.

Skills and Experience Required :

- Extensive experience leading AWS and cloud data platform transformations

- Proven track record of delivering large-scale data and analytical solutions in a cloud environment

- Hands-on experience with end-to-end data pipeline implementation on AWS, including data preparation, extraction, transformation & loading, normalization, aggregation, warehousing, data lakes, and data governance

- Expertise in developing Data Warehouses

- In-depth understanding of modern data architecture such as Data Lake, Data Warehouse, Lakehouse, and Data Mesh

- Strong knowledge of data architecture and data modelling practices

- Cost-effective management of data pipelines

- Familiarity with CI/CD-driven data pipelines and infrastructure

- Experience with Agile delivery using Scrum and Kanban methodologies

- Ability to scope, estimate, and deliver committed work within deadlines, both independently and as part of an agile team

- Supporting QA and user acceptance testing processes

- Innovative problem-solving skills and ability to provide clear recommendations

- Understanding of the impact of changes to business rules on data processes

- Excellent communication and influencing skills, particularly regarding data solutions and outcomes

- Experience managing and leading a small team of data engineers

- Self-driven and constantly seeking opportunities to improve data processes

- Strong knowledge of how data drives analytical and reporting activities, including automated marketing and personalization capabilities

- Skills in Python, PySpark, SQL, NoSQL databases (MongoDB), Bash scripting, Snowflake, Kafka, NiFi, AWS Glue, AWS Glue DataBrew, AWS, Kinesis, Terraform, APIs, and Lakehouse architectures

