Posted on: 03/11/2025
Big Data Engineer - Spark/Scala
Experience: 7 to 10 years
Location: Chennai
Overview:
We are looking for a highly skilled Big Data Engineer with extensive experience in Spark and Scala to join our team. The ideal candidate will play a crucial role in designing, developing, and optimizing large-scale data processing systems. You will work closely with data scientists, analysts, and other stakeholders to deliver high-quality data solutions.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Apache Spark and Scala.
- Collaborate with cross-functional teams to understand data requirements and deliver data solutions that meet business needs.
- Optimize Spark jobs for performance and cost-efficiency in a distributed computing environment.
- Implement best practices for data modeling, ETL processes, and data governance.
- Monitor and troubleshoot data processing workflows to ensure data integrity and availability.
- Work with cloud platforms (AWS, Azure, or GCP) to implement big data solutions.
- Stay up to date with industry trends and emerging technologies in big data and analytics.
Requirements:
- 7-10 years of experience in Big Data technologies, with a strong focus on Apache Spark and Scala.
- Proficiency in data processing frameworks (Hadoop, Spark) and languages (Scala, Java).
- Experience with data warehousing solutions (Snowflake, Redshift, etc.) and SQL.
- Knowledge of data modeling, ETL processes, and data visualization tools (Tableau, Power BI).
- Familiarity with cloud services (AWS, Azure, Google Cloud) and containerization (Docker, Kubernetes).
- Strong analytical skills and the ability to work with large datasets.
- Excellent communication and teamwork skills.
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1568879