Posted on: 19/01/2026
Description:
- Perform data profiling, data cleansing, and data transformation to ensure data accuracy and consistency.
- Monitor and report on data quality metrics such as completeness, consistency, uniqueness, and accuracy.
- Design and automate data validation and quality-check workflows to improve efficiency and reliability.
- Integrate data quality checks into ETL pipelines and modern data platforms.
- Implement and support data quality, data profiling, and data governance frameworks.
- Collaborate with data engineers, analysts, and stakeholders to understand data requirements and quality standards.
- Develop Python-based scripts or utilities for data validation, automation, and reporting.
- Work with Databricks notebooks and contribute to solutions in a Databricks-based environment.
- Troubleshoot data quality issues and provide root-cause analysis and recommendations.
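To illustrate the kind of Python-based validation utility described in the responsibilities above, here is a minimal, hypothetical sketch that computes two of the listed quality metrics (completeness and uniqueness) over tabular records. All function and variable names are illustrative and not part of the posting.

```python
from typing import Any


def completeness(rows: list[dict[str, Any]], column: str) -> float:
    """Fraction of rows where `column` is present and non-empty."""
    if not rows:
        return 1.0
    filled = sum(1 for r in rows if r.get(column) not in (None, ""))
    return filled / len(rows)


def uniqueness(rows: list[dict[str, Any]], column: str) -> float:
    """Fraction of non-empty values in `column` that are distinct."""
    values = [r.get(column) for r in rows if r.get(column) not in (None, "")]
    if not values:
        return 1.0
    return len(set(values)) / len(values)


records = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},
    {"id": 3, "email": "a@example.com"},
]
print(completeness(records, "email"))  # 2 of 3 rows have a value
print(uniqueness(records, "email"))    # 1 distinct value among 2 filled
```

In practice such metrics would typically be computed at scale inside Databricks notebooks or an ETL step, but the core logic is the same.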
Required Skills & Qualifications:
- Hands-on experience in data profiling, data cleansing, and data transformation processes.
- Solid understanding of data quality dimensions including completeness, consistency, uniqueness, and accuracy.
- Experience in automating data validation and quality-check workflows.
- Familiarity with integrating data validation processes into ETL pipelines.
- Experience working with modern data platforms.
- Proficiency in Python for scripting, automation, and data validation tasks.
- Hands-on experience working with Databricks notebooks and Databricks environments.
- Strong analytical, problem-solving, and communication skills.
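As a hypothetical example of the automation and ETL-integration skills listed above, the sketch below shows a simple validation gate an ETL step might call before loading a batch: it flags columns whose completeness falls below a threshold. The threshold and all names are illustrative assumptions, not requirements from the posting.

```python
def validate_batch(rows: list[dict], required_columns: list[str],
                   min_completeness: float = 0.95) -> list[str]:
    """Return failed-check messages; an empty list means the batch passes."""
    failures = []
    for col in required_columns:
        filled = sum(1 for r in rows if r.get(col) not in (None, ""))
        ratio = filled / len(rows) if rows else 1.0
        if ratio < min_completeness:
            failures.append(
                f"{col}: completeness {ratio:.2%} below {min_completeness:.0%}"
            )
    return failures


batch = [{"id": 1, "name": "x"}, {"id": 2, "name": None}]
print(validate_batch(batch, ["id", "name"]))  # "name" fails the threshold
```

A pipeline could halt the load, route the batch to quarantine, or emit a metric whenever the returned list is non-empty.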
Good to Have:
- Exposure to cloud platforms (AWS, Azure, or GCP).
- Knowledge of SQL and data warehousing concepts.
- Experience working in Agile/Scrum environments.
Posted in
Data Engineering
Functional Area
Data Engineering
Job Code
1602907