Posted on: 22/10/2025
Roles and Responsibilities:
- Understand business requirements and implement scalable solutions.
- Translate complex technical and functional requirements into detailed designs.
- Develop highly scalable, reliable, and high-performance data processing pipelines to extract, transform, and load data from various source systems into the enterprise data warehouse, data lake, or data mesh (a minimal sketch follows this list).
- Provide research, high-level designs, and estimates for data transformation and data integration from source applications through to end use.
- Investigate alternatives for data storage and processing to ensure the most streamlined solutions are implemented.
- Develop comprehensive data products and deploy them to production, in the cloud or on in-house servers.
Technical Skills:
- 6-10 years of progressive experience building solutions in Big Data environments.
- Strong ability to build robust, resilient data pipelines that are scalable, fault-tolerant, and reliable in terms of data movement.
- Hands-on experience with Apache Spark and Python for both batch and stream data processing (see the streaming sketch after this list).
- Exposure to projects across multiple domains.
- Strong hands-on skills with SQL and NoSQL technologies.
- 3+ years of experience with AWS services such as S3, DMS, Redshift, Glue, Lambda, Kinesis, and MSK (or comparable Azure/GCP services) is a must.
- Strong analytical/quantitative skills and comfort working with very large datasets.
- Excellent written and verbal communication skills.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1563258