Posted on: 11/07/2025
Experience: 5+ years.
Notice Period: Immediate to 15 days.
Interview Rounds: 3 (Virtual).
Mandatory Skills: Apache Spark, Hive, Hadoop, Scala, Databricks.
Job Description:
The Role:
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala).
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
- High proficiency in Scala/Java and Spark for large-scale data processing (see the illustrative sketch after this list).
- Expertise with big data technologies, including Spark, Data Lake, and Hive.
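For candidates gauging fit, here is a minimal sketch of the kind of Spark ETL step in Scala this role describes: reading raw data, cleaning it, and writing a partitioned curated layer. The object name (OrdersPipeline), paths, and column names (order_id, created_at, order_date) are hypothetical placeholders, not part of this posting.

// Minimal Spark ETL sketch in Scala; names and paths are illustrative only.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .getOrCreate()

    // Extract: read raw JSON events from a landing zone (path is a placeholder).
    val raw = spark.read.json("s3://landing/orders/")

    // Transform: drop malformed rows and derive a partition column.
    val curated = raw
      .filter(col("order_id").isNotNull)
      .withColumn("order_date", to_date(col("created_at")))

    // Load: write partitioned Parquet to the curated layer of the data lake.
    curated.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://curated/orders/")

    spark.stop()
  }
}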
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1511836