Posted on: 13/08/2025
Roles and Responsibilities:
- Participate in design discussions and brainstorming sessions to select, integrate, and maintain the Big Data tools and frameworks required to solve Big Data problems at scale.
- Design and implement systems to cleanse, process, and analyze large data sets using distributed processing tools like Akka and Spark.
- Understand and critically review existing data pipelines, and collaborate with Technical Leaders and Architects on ideas to address current bottlenecks.
- Take initiative, proactively pick up new technologies, and work as a senior individual contributor across our multiple products and features.
Requirements:
- 3+ years of experience developing highly scalable Big Data pipelines.
- In-depth understanding of the Big Data ecosystem, including processing frameworks such as Spark, Akka, Storm, and Hadoop, and the file formats they work with.
- Experience with ETL and data pipeline tools such as Apache NiFi and Airflow.
- Excellent coding skills in Java or Scala, including the judgment to apply appropriate design patterns when required.
- Experience with Git and build tools like Gradle/Maven/SBT.
- Strong understanding of object-oriented design, data structures, algorithms, profiling, and optimization.
- Write elegant, readable, maintainable, and extensible code.
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1529675