Posted on: 20/08/2025
Responsibilities:
- Design, build, and maintain scalable, reliable data pipelines in Python using Azure Data Factory, Databricks, Synapse (Azure SQL DW), and Azure Data Lake.
- Apply knowledge of banking data models, financial reporting, and regulatory compliance requirements.
- Integrate data from legacy banking platforms into modern data lakes and warehouses.
- Collaborate with business analysts, architects, and operations teams to understand core banking workflows and data requirements.
- Perform data analysis and transformation using PySpark on Azure Databricks or Apache Spark.
- Design and implement data models optimized for storage and various query patterns.
- Work with structured, semi-structured, and unstructured data.
- Utilize a range of database technologies, including traditional RDBMS, MPP, and NoSQL systems.
- Ensure data is handled and managed in compliance with regulatory and security requirements.
- Build end-to-end data pipelines in a cloud environment.
- Mentor junior engineers and contribute to the development of best practices and reusable frameworks.
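The integration and transformation work described above can be sketched in miniature. The following is a hypothetical, illustrative Python example (not from the posting); all field names and formats are assumptions. It shows the typical shape of such a step: parsing a semi-structured record from a legacy system, flattening nested fields, and normalizing types for warehouse loading. In practice this logic would run as a PySpark transformation on Databricks rather than plain Python.

```python
import json
from datetime import datetime

def normalize_transaction(raw: str) -> dict:
    """Parse one JSON transaction record and flatten/clean it for loading.

    All field names here are illustrative assumptions.
    """
    rec = json.loads(raw)
    return {
        "txn_id": rec["id"],
        # Flatten the nested account object into a single column.
        "account": rec["account"]["number"],
        # Store monetary amounts as integer cents to avoid float drift.
        "amount_cents": round(float(rec["amount"]) * 100),
        # Normalize dd/mm/yyyy source dates to ISO 8601.
        "booked_on": datetime.strptime(rec["date"], "%d/%m/%Y").date().isoformat(),
    }

raw = '{"id": "T1", "account": {"number": "NL01"}, "amount": "12.50", "date": "20/08/2025"}'
print(normalize_transaction(raw))
```

The same flatten-and-normalize pattern maps directly onto a PySpark `withColumn`/`select` chain when applied at scale.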
Skills Expected:
- 8+ years of relevant IT experience in the BI/DW domain with hands-on experience on the Azure modern data platform.
- Strong knowledge of and experience with the Python programming language.
- Experience using version control systems such as Git and Bitbucket.
- Experience in data analysis and transformation using PySpark on Azure Databricks or Apache Spark.
- Good knowledge of distributed processing using Databricks or Apache Spark.
- Experience in creating data structures optimized for storage and various query patterns.
- Experience integrating Unity Catalog with Databricks and registering data assets.
- Understanding of CI/CD pipelines.
- Knowledge of Agile/Scrum methodologies.
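The skill of creating data structures optimized for storage and query patterns rests on one core idea: lay data out along the key that common queries filter on, so a query reads only the matching partition instead of scanning everything. Below is a minimal pure-Python sketch of that idea; all names are hypothetical, and engines like Spark and Databricks apply the same principle at the file and partition level.

```python
from collections import defaultdict

def partition_by(rows: list[dict], key: str) -> dict:
    """Bucket rows by a partition key so queries filtering on that key
    touch only one bucket (the essence of partition pruning)."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[row[key]].append(row)
    return dict(buckets)

rows = [
    {"booked_on": "2025-08-19", "amount": 10},
    {"booked_on": "2025-08-20", "amount": 25},
    {"booked_on": "2025-08-20", "amount": 5},
]
by_day = partition_by(rows, "booked_on")
# A query filtering on booked_on = "2025-08-20" now scans one bucket:
print(len(by_day["2025-08-20"]))  # 2 rows scanned instead of 3
```

Choosing the partition key to match the dominant query pattern (here, by booking date) is what makes the layout "optimized"; a key nobody filters on buys nothing.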
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1532856