Posted on: 15/07/2025
Job Description:
Responsibilities:
- Be a key team member who assists in the design and development of the data pipeline.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks.
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to standard processes for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help improve ETL platform performance.
- Participate in sprint planning meetings and provide estimates for technical implementation.
- Collaborate and communicate effectively with product teams.
What we expect of you:
Basic Qualifications:
Must-Have Skills:
- Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training.
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools.
- Excellent problem-solving skills and the ability to work with large, complex datasets.
- Strong understanding of data governance frameworks, tools, and standard methodologies.
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).
Good-to-Have Skills:
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Knowledge of Python/R, Databricks, SageMaker, OMOP.
Professional Certifications:
- Certified Data Scientist (preferably on Databricks or cloud environments).
- Machine Learning Certification (preferably on Databricks or cloud environments).
- SAFe for Teams certification (preferred).
Soft Skills:
- Strong communication and collaboration skills.
- Demonstrated ability to work effectively in a team setting.
- Demonstrated presentation skills.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1513383