Posted on: 04/08/2025
About the Role:
We are seeking a highly skilled and motivated Senior Software Engineer to join our team, focusing on building and scaling our cutting-edge big data platform. You will be at the forefront of designing, developing, and optimizing data systems that power our business. The ideal candidate is an expert in big data technologies, a strong problem-solver, and passionate about creating efficient, cost-effective, and robust data solutions.
What You'll Be Doing:
- System Ownership: Take responsibility for the full lifecycle of big data systems and pipelines, from design and development to streamlining and tuning for optimal performance.
- Performance & Cost Optimization: Proactively identify and implement solutions to improve the efficiency and minimize the operational costs of our existing big data systems.
- Platform Innovation: Build and deploy new data systems and pipelines to meet evolving business needs. You will be encouraged to explore and contribute to underlying open-source technologies.
- Customer & Operational Support: Provide dedicated support to our internal customers, ensuring a stable, reliable environment. This includes participating in on-call services to maintain system health and deliver an excellent user experience.
- Technical Leadership: Serve as a subject matter expert, guiding the team with your deep understanding of distributed systems, algorithms, and data structures.
What We're Looking For:
Experience:
- 7+ years of professional experience in building and operating production-scale big data platforms.
- Experience working with at least three of the following technologies: Spark, Kafka, Trino, Flink, Airflow, Druid, Hive, Iceberg, Delta Lake, or Pinot.
- Extensive hands-on experience with public cloud platforms such as AWS or GCP.
Technical Skills:
- Strong programming expertise in a JVM language such as Java, Scala, or Kotlin.
- A strong grasp of distributed systems concepts, algorithms, and data structures.
- Deep familiarity with the Apache Hadoop ecosystem and related technologies (e.g., Spark, Kafka, Hive, Iceberg, Delta Lake, Presto/Trino, Pinot).
Education:
- BS/MS degree in Computer Science or a related technical field, or equivalent practical experience.
Mindset:
- Demonstrated AI literacy and a strong growth mindset, with an eagerness to learn and apply new technologies.
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1524558