Posted on: 04/11/2025
Description:
We are seeking a highly skilled Senior Data Engineer with strong hands-on experience in SQL, PySpark, ETL processes, Data Lakes, and the Azure data ecosystem.
The ideal candidate should be capable of designing, building, and optimizing end-to-end data pipelines and data processing solutions in a cloud environment.
This role requires the ability to work independently, collaborate closely with cross-functional teams, and ensure the robustness, scalability, and performance of data workflows.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Python and PySpark (see the pipeline sketch after this list).
- Implement ETL/ELT workflows and automate data ingestion from multiple internal and external data sources.
- Optimize data transformations and ensure efficient data processing.
- Work extensively with Azure Data Lake, Azure Blob Storage, Azure Data Factory, and Azure Synapse Analytics.
- Develop data solutions aligned with Azure best practices, including performance, scalability, and security.
- Design and maintain logical and physical data models supporting analytical and reporting solutions.
- Ensure data integrity, quality, and consistency across systems and pipelines.
- Write complex SQL queries for data extraction, manipulation, and transformation (see the SQL sketch after this list).
- Perform query tuning and optimize execution plans for efficient performance.
- Collaborate with Data Scientists, BI teams, and business stakeholders to understand data needs.
- Maintain clear technical documentation for pipelines, workflows, data dictionaries, and processes.
- Use Git for source code management and maintain clean version histories.
- Work within Agile/Scrum environments, using tools such as Jira for planning and execution.
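
For illustration only, here is a minimal sketch of the kind of PySpark pipeline described above: it reads raw data from Azure Data Lake Storage, applies basic transformations, and writes curated, partitioned output. The storage paths, container names, and column names are hypothetical placeholders, not details of this role's actual environment.

# Minimal PySpark ETL sketch (illustrative only; paths, containers, and
# column names are hypothetical placeholders).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: read raw JSON events from a hypothetical ADLS Gen2 container.
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/")

# Transform: parse timestamps, drop rows without a valid timestamp,
# and derive a date column used for partitioning.
curated = (
    raw.withColumn("event_ts", F.to_timestamp("event_time"))
       .filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet to the curated zone for downstream analytics.
(curated.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("abfss://curated@examplelake.dfs.core.windows.net/events/"))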
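
Likewise, a sketch of the kind of SQL-based transformation mentioned above, written here in Spark SQL with hypothetical table and column names: it keeps only the latest record per order using a window function, a common pattern when ensuring data consistency across pipelines.

# Illustrative Spark SQL transformation (table and column names are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-latest").getOrCreate()

# Keep only the most recent record per order_id, ranked by update time.
latest_orders = spark.sql("""
    SELECT order_id, customer_id, order_total, updated_at
    FROM (
        SELECT o.*,
               ROW_NUMBER() OVER (
                   PARTITION BY order_id
                   ORDER BY updated_at DESC
               ) AS rn
        FROM raw.orders o
    ) ranked
    WHERE rn = 1
""")

# Persist the deduplicated result as a managed table for BI consumption.
latest_orders.write.mode("overwrite").saveAsTable("curated.orders_latest")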
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1569390