Description :
Job Title : Data Engineer Hadoop + Spark
Location : Hyderabad, India
Job Summary :
Roles & Responsibilities :
- Create Scala/Spark/PySpark jobs for data transformation and aggregation
- Produce unit tests for Spark transformations and helper methods
- Use Spark and Spark SQL to read Parquet data and create Hive tables using the Scala API
- Work closely with Business Analysts team to review the test results and obtain sign off
- Prepare necessary design/operations documentation for future use
- Perform peer code quality reviews and act as gatekeeper for quality checks
- Hands-on coding, usually in a pair programming environment
- Working in highly collaborative teams and building quality code
- The candidate must exhibit a good understanding of data structures, data manipulation, distributed processing, application development, and automation
- Familiarity with Oracle, Spark Streaming, Kafka, and ML
- Develop applications using the Hadoop tech stack and deliver them effectively, efficiently, on time, to specification, and in a cost-effective manner
- Ensure smooth production deployments as per plan and post-production deployment verification
- This Hadoop Developer will play a hands-on role, developing quality applications within the desired timeframes and resolving team queries
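As an illustration, the Parquet-to-Hive responsibility above could be sketched in Scala roughly as follows; this is a minimal sketch, and the database, table, and path names (`analytics.sales_by_region`, `sales_raw`, `/data/raw/sales`) are hypothetical placeholders, not part of the role description:

```scala
import org.apache.spark.sql.SparkSession

object ParquetToHive {
  def main(args: Array[String]): Unit = {
    // Spark session with Hive support enabled, so tables can be saved to Hive
    val spark = SparkSession.builder()
      .appName("ParquetToHive")
      .enableHiveSupport()
      .getOrCreate()

    // Read Parquet data (hypothetical input path)
    val df = spark.read.parquet("/data/raw/sales")

    // Example transformation/aggregation expressed in Spark SQL
    df.createOrReplaceTempView("sales_raw")
    val agg = spark.sql(
      "SELECT region, SUM(amount) AS total FROM sales_raw GROUP BY region")

    // Persist the aggregated result as a Hive table
    agg.write.mode("overwrite").saveAsTable("analytics.sales_by_region")

    spark.stop()
  }
}
```

Running this requires a Spark installation with a configured Hive metastore; it is meant only to show the shape of such a job, not a production implementation.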
Requirements :
- Hadoop data engineer with 4-6 or 6-9 years of total experience, with strong experience in Hadoop, Spark, Scala, Java, Hive, Impala, CI/CD, Git, Jenkins, Agile methodologies, DevOps, and the Cloudera Distribution
Posted in
Data Engineering
Functional Area
Data Engineering
Job Code
1592345