Posted on: 09/12/2025


Description:
Education:
- B.Tech / B.E / M.Tech / M.E / B.Sc / M.Sc / BCA / MCA
Key Responsibilities:
- Develop and maintain scalable data processing applications using Apache Spark.
- Optimize Spark jobs for performance and reliability.
- Integrate Spark with Hadoop ecosystem tools (HDFS, Hive, HBase, etc.).
- Work on data ingestion, transformation, and ETL pipelines.
- Collaborate with data engineers and analysts to deliver high-quality solutions.
- Ensure data security, integrity, and compliance with organizational standards.
Required Skills:
- Strong experience in Apache Spark (Core, SQL, Streaming).
- Proficiency in Scala or Python.
- Hands-on experience with Hadoop ecosystem (HDFS, Hive, HBase, Sqoop).
- Knowledge of distributed computing concepts and data partitioning.
- Familiarity with performance tuning for Spark jobs.
- Experience with cloud platforms (AWS EMR, Azure HDInsight, GCP Dataproc) is desirable.
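The "data partitioning" concept listed above can be illustrated with a minimal sketch in plain Python. This is a hypothetical example, not part of the posting and not Spark code: it mimics the idea behind Spark's default hash partitioner, where each keyed record is routed to a partition by hashing its key, so that all records sharing a key land in the same partition. The function names and sample records are invented for illustration.

```python
# Hypothetical sketch of hash partitioning (the idea behind Spark's
# default HashPartitioner); names and data are illustrative only.

def partition_for(key, num_partitions):
    """Map a key to a partition index by hashing it."""
    return hash(key) % num_partitions

def hash_partition(records, num_partitions):
    """Group (key, value) records into num_partitions buckets."""
    partitions = [[] for _ in range(num_partitions)]
    for key, value in records:
        partitions[partition_for(key, num_partitions)].append((key, value))
    return partitions

records = [("us", 1), ("in", 2), ("us", 3), ("de", 4), ("in", 5)]
parts = hash_partition(records, num_partitions=4)

# Every record with the same key ends up in the same partition,
# which is what lets per-key aggregations run without re-shuffling.
for i, part in enumerate(parts):
    print(i, part)
```

Because same-key records are co-located, a per-key aggregation can run independently on each partition; this is also why a skewed key distribution produces one oversized partition, a common target of the Spark performance tuning mentioned above.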
Soft Skills:
- Strong problem-solving and analytical skills.
- Ability to work in a collaborative team environment.
- Excellent communication skills.
Posted by: Girish Nair, Senior Associate Lead - Talent Acquisition at Infosys BPM Limited
Last Active: 10 Dec 2025
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1587324