Posted on: 22/04/2026

Description : We are seeking a Data Engineer to build and manage data pipelines that support batch and streaming data solutions. The role requires expertise in creating seamless data flows across platforms such as Data Lake/Lakehouse on Cloudera, Azure Databricks, and Kafka, covering both batch and streaming pipelines.
Responsibilities :
- Strong experience in developing, testing, and maintaining batch and streaming data pipelines using Cloudera, Spark, Kafka, and Azure services such as ADF, Cosmos DB, Databricks, and NoSQL databases such as MongoDB.
- Strong programming skills in Spark, Python or Scala, and SQL.
- Optimize data pipelines to improve speed, performance, and reliability, ensuring that data is available for data consumers as required.
- Create ETL pipelines for downstream consumers by transforming data according to business logic.
- Work closely with Data Architects and Data Analysts to align data solutions with business needs and ensure the accuracy and accessibility of data.
- Implement data validation checks and error handling processes to maintain high data quality and consistency across data pipelines.
- Strong analytical and problem-solving skills, with a focus on optimizing data flows and addressing issues in the data pipeline.
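As an illustration of the data-validation and error-handling responsibility above, here is a minimal Python sketch of routing records through validation checks into clean and dead-letter sets. The record fields (`id`, `amount`) and function names are hypothetical, not taken from the posting; a real pipeline would express the same checks in Spark or an ADF data flow.

```python
def validate_record(record):
    """Apply basic validation checks; return (is_valid, list_of_errors)."""
    errors = []
    if not record.get("id"):
        errors.append("missing id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        errors.append("invalid amount")
    return (len(errors) == 0, errors)

def run_pipeline(records):
    """Split records into a clean set and a dead-letter set with error details."""
    clean, dead_letter = [], []
    for rec in records:
        ok, errs = validate_record(rec)
        if ok:
            clean.append(rec)
        else:
            dead_letter.append((rec, errs))  # keep errors for downstream triage
    return clean, dead_letter
```

Keeping rejected records in a dead-letter set, rather than dropping them, preserves data quality metrics and makes pipeline failures auditable.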
Qualifications :
Posted by
Lakshmi Prasanna
TAG Associate Consultant at Virtusa Consulting Services Pvt Ltd
Posted in
Data Engineering
Functional Area
Big Data / Data Warehousing / ETL
Job Code
1630378