Posted on: 25/11/2025
Description :
Job Title : Staff Software Engineer - Big Data, GenAI
Job Mode : Remote
Overview :
Are you energized by the idea of innovating with Generative AI? Do you want to create global impact while tackling challenges at the forefront of Artificial Intelligence? Are you excited to architect and build next-generation AI-driven platforms from the ground up?
This is a greenfield opportunity to shape the next wave of AI experiences, combining Generative AI, Machine Learning, Big Data, and Cloud Computing. As a Staff Software Engineer, you will significantly influence the product vision, architecture, and technical strategy, driving innovation from concept to delivery.
Our AI Core group is building foundational platforms and intelligence layers for cutting-edge Generative AI systems. From AI Agents, RAG pipelines, Knowledge Bases, and Data Mining to Anomaly Detection and LLM fine-tuning, our work fuels flagship products while enabling entirely new AI-driven offerings. We are creating intelligent, real-time multi-agent systems that perceive, learn, and act, redefining how businesses leverage data and AI at scale.
Key Responsibilities :
- Architect, design, and develop scalable distributed systems, data pipelines, and ML infrastructure with an emphasis on performance and reliability.
- Own end-to-end delivery of major features and services across the full SDLC including design, coding, reviews, testing, deployment, observability, and operations.
- Drive innovation across Big Data, Generative AI, Graph ML, and real-time analytics, converting emerging technologies into robust production solutions.
- Build and optimize high-throughput, low-latency analytic systems that power next-gen AI Agents and intelligent automation platforms.
- Mentor and guide junior and mid-level engineers; enforce engineering best practices and foster a culture of technical excellence.
- Collaborate cross-functionally with Product, ML, Infrastructure, and Security teams to ensure solutions are resilient, scalable, and compliant with governance standards.
- Evaluate new technologies and frameworks, influencing strategic decisions and long-term technical roadmaps.
Required Qualifications :
- Bachelor's degree in Computer Science, Mathematics, or a related technical field.
- 8+ years of software engineering experience across the full SDLC (architecture, design, coding, testing, deployment, monitoring).
- 5+ years of hands-on experience with distributed Big Data technologies, such as:
- PySpark, Lakehouse architectures, Kafka, Debezium
- Hudi, Druid, Flink, Spark Streaming, or similar tools
- Strong experience with streaming or sensitive data pipelines, including governance, compliance, auditing, and schema evolution.
- Working knowledge of Graph technologies or Graph ML frameworks (e.g., GNNs).
- Demonstrated success delivering complex, high-impact production systems end-to-end.
- Hands-on experience deploying large-scale systems on AWS, Azure, or GCP.
- Exceptional problem-solving skills and the ability to thrive in fast-paced, ambiguous environments.
Preferred Qualifications :
- Master's degree in Computer Science, Machine Learning, or a related discipline.
- Deep hands-on experience in Graph ML, Graph Databases, and Graph Neural Networks (GNNs).
- Experience building production-ready Generative AI solutions, including :
- Retrieval-Augmented Generation (RAG)
- AI Agents / Multi-agent Systems
- LLM fine-tuning and model deployment
- Expertise in designing fault-tolerant, large-scale data systems with strong observability and SLAs.