Posted on: 15/07/2025
Job Summary:
This role offers a hands-on opportunity to work on modern data engineering practices and make a significant impact on large-scale data processing.
Key Responsibilities:
- Collaborate with data architects, analysts, and business stakeholders to understand data requirements and deliver efficient solutions.
- Implement CI/CD pipelines for data workflows, ensuring smooth deployment and integration across environments.
- Optimize and troubleshoot performance issues in existing pipelines.
- Develop and maintain unit test cases to ensure data accuracy and reliability.
- Work with MongoDB, leveraging its aggregation framework for efficient data querying and transformation.
- Document data flows, system architecture, and best practices for development and deployment.
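The unit-testing responsibility above can be sketched as follows. This is a minimal illustration only; the function name and record schema are hypothetical, not taken from the actual role.

```python
# Minimal sketch: a unit test for one pipeline transformation step.
# normalize_record and its schema are hypothetical, for illustration only.

def normalize_record(record: dict) -> dict:
    """Trim whitespace and lowercase the email field of a raw record."""
    return {
        "id": record["id"],
        "email": record["email"].strip().lower(),
    }

def test_normalize_record():
    raw = {"id": 1, "email": "  Alice@Example.COM "}
    assert normalize_record(raw) == {"id": 1, "email": "alice@example.com"}

if __name__ == "__main__":
    test_normalize_record()
    print("ok")
```

In practice such tests would typically run under pytest as part of the CI/CD pipeline mentioned above, so a broken transformation fails the build before deployment.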
Required Skills and Qualifications:
- Hands-on experience with the Databricks platform (including clusters, notebooks, jobs, and data pipelines).
- Strong programming skills in Python and PySpark.
- Proficiency in SQL for data querying, transformation, and validation.
- Experience with CI/CD practices and tools (e.g., Git, Azure DevOps, Jenkins).
- Solid experience in implementing and maintaining unit test cases for data pipelines.
- Good understanding and practical experience with MongoDB, including aggregation queries.
- Strong problem-solving skills and the ability to work independently in a fast-paced environment.
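The MongoDB aggregation skill listed above can be sketched as a simple `$match` + `$group` pipeline. The collection shape and field names ("customer", "status", "amount") are hypothetical; a pure-Python equivalent is included so the logic can be checked without a running server.

```python
# Minimal sketch of a MongoDB aggregation pipeline ($match then $group),
# alongside a pure-Python equivalent. Field names are hypothetical.

from collections import defaultdict

# As it would be passed to collection.aggregate(...) via pymongo:
pipeline = [
    {"$match": {"status": "complete"}},
    {"$group": {"_id": "$customer", "total": {"$sum": "$amount"}}},
]

def aggregate_in_python(docs):
    """Pure-Python equivalent of the pipeline above, for illustration."""
    totals = defaultdict(float)
    for doc in docs:
        if doc["status"] == "complete":               # $match stage
            totals[doc["customer"]] += doc["amount"]  # $group / $sum stage
    return dict(totals)

if __name__ == "__main__":
    docs = [
        {"customer": "a", "status": "complete", "amount": 10.0},
        {"customer": "a", "status": "complete", "amount": 5.0},
        {"customer": "b", "status": "pending", "amount": 99.0},
    ]
    print(aggregate_in_python(docs))  # {'a': 15.0}
```

Pushing the grouping into the database via `aggregate` (rather than filtering in application code) keeps large result sets on the server, which is what "efficient data querying and transformation" refers to here.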
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1513673