Posted on: 21/01/2026
Description:
We are seeking a seasoned Senior Data Engineer to join our team. As a key member of our data engineering group, you will design, implement, and maintain robust data pipelines and architectures that move data across our platforms. Your strong background in AWS, Apache Kafka, Python, and PySpark will be essential in building scalable data solutions, and you will collaborate with cross-functional teams to ensure that data is collected, processed, and analyzed efficiently.
You will use dbt for data transformation and work with Azure to extend our cloud infrastructure. Your expertise in data integration and data warehousing will be critical as we unlock insights from diverse data sources. You will also have the opportunity to mentor junior engineers and contribute to strategic data initiatives that move the organization forward. This role suits someone who thrives in a fast-paced environment, is passionate about applying modern technologies to complex data challenges, and wants to make a significant impact on our data strategy and operations.
Responsibilities:
- Design and develop scalable data pipelines using AWS services such as S3, Redshift, and Lambda.
- Implement real-time data processing solutions using Apache Kafka to handle streaming data efficiently.
- Utilize Python and PySpark for data transformation and analysis to ensure high-quality deliverables.
- Collaborate with data scientists and analysts to understand data requirements and provide appropriate data solutions.
- Conduct data modeling and data warehouse optimization to enhance query performance and support analytics needs.
- Leverage dbt for data transformation workflows and ensure version control of analytics code.
- Participate in architecture discussions to design modern data infrastructure on Azure that supports future growth and scalability.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Minimum of 5 years of experience in data engineering or a related field.
- Strong proficiency in AWS services and architecture for data processing and storage.
- Experience with Apache Kafka and its ecosystem for handling real-time data streams.
- Proficient in Python and PySpark with hands-on experience in building data processing applications.
- Familiarity with dbt for data transformation and analytics workflow management.
- Knowledge of Azure cloud services and how to integrate them into data solutions.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1603993