Posted on: 21/11/2025
Description :
Position : Data Engineer (Architect Level)
Exp : 10+ Yrs
Location : Offshore
Mandatory Skills : L5 Data Engineer with strong Data Architecture and Data Modeling expertise. This role is essential to establish the right architecture and standards that will serve as the foundation for the entire initiative.
Job Title : L5 Data Engineer
About the Role :
We are seeking an experienced (8-12 years) Data Engineer to design, build, and optimize scalable data platforms and pipelines. This role is critical for establishing robust data architecture, ensuring high data quality, and enabling advanced analytics. You will collaborate with architects, data scientists, and business stakeholders to deliver secure, efficient, and future-ready data solutions.
Key Responsibilities :
Data Architecture & Modeling :
- Design and implement data models (conceptual, logical, physical) for structured and semi-structured data.
- Contribute to data architecture decisions, ensuring scalability, performance, and compliance.
- Define and enforce data governance, lineage, and quality frameworks.
Data Engineering :
- Develop and maintain ETL/ELT pipelines using PySpark, Python, SQL, and Azure Data Factory.
- Optimize data storage and retrieval in ADLS, Delta Lake, and Azure SQL for performance and cost efficiency.
- Implement data partitioning, indexing, and caching strategies for large-scale datasets.
Test Automation & Data Quality :
- Build automated testing frameworks for data pipelines to validate transformations, schema changes, and data integrity.
- Implement unit tests, integration tests, and regression tests for data workflows.
- Establish data profiling, validation, and cleansing processes to maintain high-quality datasets.
- Monitor and troubleshoot data pipelines for reliability and performance using automated alerts.
Cloud & Platform Expertise :
- Build and manage data solutions on Azure Cloud, leveraging Databricks for big data processing.
- Integrate data from multiple sources including SAP HANA, SAP BW, and external systems into cloud platforms.
- Work with Snowflake for advanced analytics and cross-platform data sharing.
DevOps & CI/CD :
- Implement CI/CD pipelines using Azure DevOps and GitHub for automated deployments.
- Ensure version control and collaborative development practices across teams.
Agile Delivery :
- Participate in Agile ceremonies, contributing to sprint planning, backlog refinement, and continuous improvement.
- Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders.
AI/ML & MLOps (Nice to Have) :
- Collaborate with data scientists to prepare and deliver high-quality datasets for machine learning models.
- Implement feature engineering pipelines and manage model training and deployment workflows on Databricks.
- Support MLOps practices including model versioning, monitoring, and automated retraining.
- Optimize ML workflows for scalability and cost efficiency in Azure Databricks.
Required Skills :
- Primary : SQL, PySpark, Python, Databricks, Azure Cloud, Azure Data Factory, ADLS, Delta Lake, Azure SQL
- Secondary (Preferred) : SAP HANA, SAP BW, SAP ABAP, Snowflake
- Tools : GitHub, Azure DevOps, CI/CD
- Methodology : Agile
Domain Expertise :
- Healthcare & Pharmacy experience strongly preferred.
- Knowledge of pricing models in healthcare and pharmacy domains is a plus.
If you are interested, please send your resume to saranapriya.sankar@citiustech.com
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1578451