Posted on: 22/08/2025
Work Model: Work From Office (Hyderabad)
Experience: 4 to 7 years
Notice Period: Immediate joiners or up to 15 days
Roles & Responsibilities:
- Develop and optimize core product development modules with a focus on performance, scalability, and reliability.
- Design and implement data ingestion pipelines for distributed systems with parallel processing, using Golang, C++, or Java.
- Build connectors to ingest data from diverse sources, including:
1. Cloud storage: Amazon S3, Azure Blob Storage, Google Cloud Storage
2. Databases: Snowflake, Google BigQuery, PostgreSQL
3. Streaming and messaging: Kafka
4. Data lakehouses: Apache Iceberg
- Implement high-availability (HA) solutions, including cross-region replication and failover strategies.
- Develop and maintain monitoring, logging, and error-reporting systems for data loading.
- Work with Spark connectors and third-party tools (Kafka, Kafka Connect, etc.).
- Collaborate with product managers, architects, and design engineers in an Agile development environment.
- Apply CI/CD best practices to ensure smooth deployment and release management.
Requirements:
- Hands-on experience with any of the following: Kafka, ZooKeeper, Spark, or stream-processing frameworks.
- Strong understanding of event-driven architectures.
- Programming experience in Golang, C++, or Java.
- Solid knowledge of cloud platforms (AWS, Azure, or GCP) and modern data platforms (Snowflake, BigQuery, PostgreSQL).
- Familiarity with Agile software development practices.
- Strong expertise in CI/CD pipelines (Jenkins, GitLab, GitHub Actions, etc.).
Posted in: Backend Development
Functional Area: Backend Development
Job Code: 1534167