Posted on: 19/02/2026
Description:
We are looking for a Senior Data Engineer to design, build, and optimize our Azure and Databricks-based data infrastructure.
You will play a critical role in developing scalable ETL pipelines, data ingestion frameworks, and contribute to AI/ML-powered solutions that drive internal decision-making and customer-facing insights.
This is a hands-on role that requires expertise in Azure Data Services, Databricks, and Python/SQL.
You will work closely with Data Architects, Engineers, and Product teams to develop robust data solutions that power analytics, personalization, and AI-driven customer experiences.
What you will do:
Data Pipeline Development & Infrastructure:
- Design, build, and maintain scalable data pipelines.
- Develop real-time and batch data processing frameworks for structured and unstructured data.
- Implement ETL/ELT workflows to ingest data from various sources, ensuring high availability and performance.
- Optimize data storage and retrieval for (near) real-time and batch processes.
- Ensure cost-efficient and high-performance data infrastructure that scales with business needs.
Data Solutions & AI-Driven Applications:
- Develop data pipelines for ML recommenders, search functionality, and AI-enhanced features.
- Develop and maintain data models, APIs, and integrations to support analytics and customer applications.
- Support eCommerce-related data solutions, including product recommendations, customer segmentation, and personalization models.
Collaboration & Continuous Improvement:
- Work closely with Data Architects, Analysts, and Product Teams to understand data requirements and deliver best-in-class solutions.
- Monitor and troubleshoot performance issues, ensuring high availability and efficiency of data pipelines.
- Continuously optimize cost, performance, and scalability of data engineering solutions.
What we need:
- 8+ years of experience as a Data Engineer working with Azure and Databricks.
- Strong expertise in data pipeline development, ETL workflows, and real-time/batch data processing.
- Experience with big data technologies and large-scale data storage.
- Familiarity with data governance, security, and compliance best practices and tools.
- Proficiency in Python and SQL for data transformation, data quality, and automation.
- Strong experience in optimizing data performance and cost-efficiency in cloud environments.
Nice to Have:
- Experience with eCommerce data solutions, such as customer segmentation, recommendation engines, and personalization models.
- Knowledge of event-driven architectures, streaming data processing, and real-time analytics.
- Familiarity with LLM-based AI applications and OpenAI or similar APIs.
Why you'll love working here:
- Impact from day one: Join a scale-up where your ideas shape how global businesses operate online.
- Continuous learning: Access a structured onboarding rated 9.1/10 by previous hires, mentorship, and a feedback culture.
- Hybrid flexibility: Work from our office in Alexandria 3 days per week and from home 2 days.
- Career growth: Expand your technical and leadership scope in a company built for long-term success.
Our values:
At Sana Commerce, our values drive everything we do:
- Champions of Our League: We deliver lasting success, balancing quick wins and long-term value.
- Supercharge Our Customers: We're revolutionizing B2B commerce together, helping our customers to lead and succeed.
- Determined to Grow: We embrace challenges, growing and raising the bar for ourselves and our industry.
- Bold Together: We dare to be bold because we have each other's backs.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1614241