PySpark / Scala Developer
Location: Bangalore, Chennai
Experience: 6-8 years
Must-Have:
- Hands-on experience with Scala and Apache Spark, including Spark architecture.
- Strong SQL knowledge.
Job Description:
We are seeking a highly skilled Scala & Spark Developer with strong expertise in distributed data processing and big data technologies. The ideal candidate will have hands-on experience in building and maintaining scalable applications using Scala, Apache Spark, and related big data tools.
Key Responsibilities:
- Design, develop, and maintain Scala-based applications with high performance and scalability.
- Implement and optimize Apache Spark applications, ensuring efficient data processing.
- Understand and work with Spark architecture for performance tuning and troubleshooting.
- Develop and maintain complex SQL queries for data extraction, transformation, and analysis.
- Work with Hadoop ecosystem tools, especially Hive, for big data management.
- Create and manage shell scripts for automation and deployment tasks.
- Collaborate with cross-functional teams to deliver high-quality data solutions.
- Ensure adherence to best practices in coding, testing, and deployment.
Required Skills:
- 6+ years of experience in Scala and Apache Spark development.
- Deep understanding of Spark architecture and its components.
- Strong SQL expertise for data manipulation and querying.
- Hands-on experience with Hive and Hadoop ecosystem tools.
- Proficient in shell scripting for automation.
- Excellent problem-solving and analytical skills.
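As a sketch of the shell-scripting-for-automation skill listed above, the following wrapper builds a spark-submit command as a dry run. The jar path, class name, and cluster settings are hypothetical placeholders; a real deployment script would execute the command, typically triggered by a scheduler such as Control-M.

```shell
#!/usr/bin/env bash
# Illustrative sketch only: a deployment wrapper that assembles a
# spark-submit invocation. Paths and class names are hypothetical.
set -euo pipefail

APP_JAR="${1:-target/etl-pipeline.jar}"        # hypothetical build artifact
MAIN_CLASS="${2:-com.example.etl.DailyLoad}"   # hypothetical entry point

# Echo the command as a dry run so it can be reviewed before scheduling.
CMD="spark-submit --master yarn --deploy-mode cluster --class ${MAIN_CLASS} ${APP_JAR}"
echo "${CMD}"
```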
Good to Have:
- Working knowledge of Control-M for job scheduling.
- Experience with Tableau for data visualization.
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1532972