Posted on: 04/09/2025
Job Description:
The Streaming Data Platform team is responsible for building and managing complex stream processing topologies using the latest open-source tech stack, building metrics and visualizations on the generated streams, and creating varied data sets for different forms of consumption and access patterns.
We're looking for a seasoned Staff Software Engineer to help us build and scale the next generation of streaming platforms and infrastructure at Fanatics Commerce.
Responsibilities:
- Build data platforms and streaming engines that support both real-time and batch processing.
- Optimize existing data platforms and infrastructure while exploring other technologies.
- Provide technical leadership to the data engineering team on how to store and process data more efficiently at scale.
- Build and scale stream & batch processing platforms using the latest open-source technologies.
- Work with data engineering teams and provide reference implementations for different use cases.
- Improve existing tools to deliver value to the users of the platform.
- Work with data engineers to create services that can ingest and supply data to and from external sources and ensure data quality and timeliness.
Qualifications:
- 8+ years of software development experience, including 3+ years of experience with open-source big data technologies.
- Knowledge of common design patterns used in Complex Event Processing.
- Knowledge of streaming technologies: Apache Kafka, Kafka Streams, KSQL, Spark, Spark Streaming.
- Proficiency in Java and Scala.
- Strong hands-on experience with SQL, Hive, Spark SQL, data modeling, and schema design.
- Deep understanding of and experience with traditional relational, NoSQL, and columnar databases.
- Experience building scalable infrastructure to support stream, batch, and micro-batch data processing.
- Experience utilizing Apache Iceberg as the backbone of a modern lakehouse architecture, supporting schema evolution, partitioning, and scalable data compaction across petabyte-scale datasets.
- Experience utilizing AWS Glue as a centralized data catalog to register and manage Iceberg tables, enabling seamless integration with real-time query engines and improving data discovery across distributed systems.
- Experience working with Druid, StarRocks, Apache Pinot, etc., to power low-latency queries, continuous Kafka ingestion, and fast joins across both historical and real-time data.
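As a rough sketch of the Iceberg-on-Glue pattern referenced in the qualifications above (the catalog name, database, table, columns, and warehouse path are all hypothetical placeholders, not anything specified in this posting), a Spark session configured with the Glue catalog might define and maintain a partitioned Iceberg table like this:

```sql
-- Hypothetical example. Assumes a Spark session configured with an Iceberg
-- catalog backed by AWS Glue, e.g.:
--   spark.sql.catalog.glue              = org.apache.iceberg.spark.SparkCatalog
--   spark.sql.catalog.glue.catalog-impl = org.apache.iceberg.aws.glue.GlueCatalog
--   spark.sql.catalog.glue.warehouse    = s3://example-lakehouse/warehouse

-- Create a partitioned Iceberg table registered in the Glue catalog.
CREATE TABLE glue.analytics.order_events (
    order_id   BIGINT,
    status     STRING,
    event_time TIMESTAMP
)
USING iceberg
PARTITIONED BY (days(event_time));

-- Schema evolution: Iceberg adds columns as a metadata-only change,
-- with no rewrite of existing data files.
ALTER TABLE glue.analytics.order_events ADD COLUMN region STRING;

-- Compaction: Iceberg's Spark procedure rewrites many small data files
-- into fewer, larger ones.
CALL glue.system.rewrite_data_files(table => 'analytics.order_events');
```

Registering the table in Glue is what lets external query engines (e.g. the real-time OLAP systems mentioned above) discover and read the same table without a separate metastore.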
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1540915