Posted on: 13/01/2026
About Bazaarvoice :
At Bazaarvoice, we create smart shopping experiences by helping brands and retailers connect with consumers in meaningful, authentic ways. Through our expansive global network, product-passionate community, and enterprise-grade technology, we connect thousands of brands and retailers with billions of consumers worldwide. Our platform enables brands to collect, manage, and activate user-generated content (UGC), including ratings, reviews, photos, videos, and Q&A, at unprecedented scale. This content is distributed through our ever-expanding retail, social, and search syndication network, helping brands influence purchasing decisions at the most critical moments.
With intuitive dashboards and real-time insights, we empower brands and retailers to drive trust, increase conversions, improve products, and build long-term customer loyalty.
The Problem We Solve :
Brands and retailers struggle to build genuine connections with consumers during discovery and purchase journeys. Too often, marketing investments fail to deliver trustworthy, inspiring content that drives engagement or loyalty. Bazaarvoice solves this by closing the gap between brands and consumers with authentic voices and actionable insights.
Our Brand Promise :
Closing the gap between brands and consumers.
Founded in 2005, Bazaarvoice is headquartered in Austin, Texas, with offices across North America, Europe, Asia, and Australia. We are proud to be a Great Place to Work Certified company across multiple regions, including the US, India, Australia, the UK, Germany, France, and Lithuania.
About the Team :
You will be part of a highly skilled, distributed engineering team responsible for Bazaarvoice's Data Enrichment Platform. This team builds and maintains large-scale data enrichment pipelines and supports bulk data exports for internal systems, enterprise clients, and external search partners. The platform generates and processes massive datasets consumed by downstream analytics, search, and recommendation systems.
We are looking for hands-on engineers who enjoy solving complex data challenges, working at scale, and continuously learning new technologies to improve system reliability, performance, and data quality.
How You'll Make an Impact :
- Design, develop, document, and architect Big Data applications with a strong focus on scalability, performance, and reliability
- Build, manage, and optimize data pipelines for batch and streaming workloads
- Collaborate closely with Product Owners, Developers, DevOps, and QA teams, applying an end-to-end quality mindset
- Develop and maintain MapReduce jobs that run efficiently on large Hadoop clusters
- Work on data enrichment, transformation, validation, and export workflows
- Monitor, troubleshoot, and improve data pipeline performance and stability
- Contribute to architectural decisions and best practices for Big Data systems
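To give a flavor of the batch-aggregation work described above, here is a minimal sketch in plain Java of the map/shuffle/reduce shape such jobs follow: group records by a key, then aggregate each group. The `Review` record, field names, and the averaging metric are hypothetical illustrations, not part of Bazaarvoice's actual platform, and a real implementation would run as a MapReduce or Spark job on a cluster rather than in-memory streams.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class EnrichmentSketch {
    // Hypothetical input record: a (productId, rating) pair from a UGC feed.
    record Review(String productId, int rating) {}

    // Map + shuffle + reduce, modeled with streams: records are grouped by
    // key (the "shuffle"), then each group is reduced to an aggregate.
    static Map<String, Double> averageRatingByProduct(List<Review> reviews) {
        return reviews.stream()
                .collect(Collectors.groupingBy(
                        Review::productId,                       // shuffle key
                        Collectors.averagingInt(Review::rating)  // reduce step
                ));
    }

    public static void main(String[] args) {
        List<Review> batch = Arrays.asList(
                new Review("sku-1", 5),
                new Review("sku-1", 3),
                new Review("sku-2", 4));
        // Result contains sku-1=4.0 and sku-2=4.0
        System.out.println(averageRatingByProduct(batch));
    }
}
```

On a real Hadoop cluster the grouping and aggregation steps would be distributed across mappers and reducers, but the per-key logic stays the same shape.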
Must-Have Skills That Matter :
- 4+ years of hands-on Java development experience in a Big Data / Hadoop ecosystem
- Strong experience with Hadoop, MapReduce, and distributed data processing
- 2+ years of experience working with streaming technologies such as Spark or Kafka, with working knowledge of Scala
- Experience developing scalable, fault-tolerant data processing solutions
- Exposure to NoSQL databases (HBase, Cassandra, MongoDB, etc.)
- Strong experience working with cloud platforms, preferably AWS
Desired Skills :
- Working knowledge of Python or other scripting languages
- Ability to derive insights through data analysis and validation
- Experience with Agile, Kanban, or Lean development methodologies
- Familiarity with CI/CD pipelines, monitoring, and production support
Functional Area
Data Engineering
Job Code
1600845