Posted on: 03/11/2025
Senior Data Infrastructure Engineer (Java + Git + Jira Integrations)
Location: Hyderabad (Hybrid)
Experience: 6-10 years
Stack: Java, Postgres, AWS (S3, ECS, Lambda), Kafka/SQS, dbt/ClickHouse (bonus)
Why This Role Matters:
Every engineering org tracks what developers do: PRs merged, commits pushed, tickets closed.
At Hivel, we're building the brain that understands why things move (or don't).
This role sits at the core of that brain. You'll wire the systems that connect Git, Jira, and CI/CD data, creating a living, breathing graph of how modern engineering really happens.
Your work will power dashboards that CTOs and engineering leaders use to make real decisions every day.
The Role:
We're looking for a Senior Data Infrastructure Engineer to scale and evolve the data backbone that powers Hivel's analytics and AI insights.
You'll own and optimize how engineering data flows through our systems, from multiple third-party integrations to processing pipelines and analytics stores, ensuring it's fast, reliable, and ready for insight generation.
What You'll Do:
- Build and scale multi-source data ingestion from Git, Jira, and other developer tools using APIs, webhooks, and incremental syncs (see the sync sketch after this list).
- Refactor and harden existing Java-based ETL pipelines for modularity, reusability, and scale.
- Implement parallel and event-driven processing with Kafka/SQS, covering both batch and streaming (see the consumer sketch after this list).
- Optimize Postgres schema design, partitioning, and query performance for 100GB+ datasets.
- Design and own data orchestration, lineage, and observability (Airflow, Temporal, OpenTelemetry, or similar).
- Collaborate with backend, product, and AI teams to make data easily consumable for insights and ML workflows.
- Maintain cost efficiency and scalability across AWS infrastructure (S3, ECS, Lambda, RDS, CloudWatch).
- Create self-healing and monitored pipelines that let you sleep through the night.
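To make the ingestion work concrete, here is a minimal sketch of an incremental Jira sync in Java. The endpoint and JQL follow the Jira Cloud REST API; the watermark bookkeeping around it is a hypothetical stand-in, not Hivel's actual pipeline.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

/** Sketch of an incremental Jira sync: fetch only issues updated since the
 *  last successful run, instead of re-pulling the whole project each time. */
public class JiraIncrementalSync {

    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;     // e.g. https://yourcompany.atlassian.net
    private final String authHeader;  // Jira Cloud basic auth: email + API token

    public JiraIncrementalSync(String baseUrl, String email, String apiToken) {
        this.baseUrl = baseUrl;
        this.authHeader = "Basic " + Base64.getEncoder()
                .encodeToString((email + ":" + apiToken).getBytes(StandardCharsets.UTF_8));
    }

    /** Pull one page of issues updated since the given watermark
     *  (JQL timestamps use the "yyyy/MM/dd HH:mm" format). */
    public String fetchUpdatedSince(String lastSyncTimestamp) throws Exception {
        String jql = "updated >= \"" + lastSyncTimestamp + "\" ORDER BY updated ASC";
        String url = baseUrl + "/rest/api/3/search?jql="
                + URLEncoder.encode(jql, StandardCharsets.UTF_8)
                + "&maxResults=100";

        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", authHeader)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
        if (resp.statusCode() != 200) {
            throw new IllegalStateException("Jira sync failed: HTTP " + resp.statusCode());
        }
        // Parse the JSON, upsert the issues, and advance the watermark only
        // after the page is durably written, so a crash never skips data.
        return resp.body();
    }
}
```

The key design point is the watermark: advancing it only after a durable write is what makes the sync safely resumable.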
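And for the event-driven side, a minimal Kafka consumer sketch. The topic and group names are illustrative; an SQS worker would follow the same shape with a different client.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

/** Sketch of an event-driven worker: consume webhook events from a Kafka
 *  topic and commit offsets only after processing succeeds. */
public class WebhookEventWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "git-events-etl");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // commit manually

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("git.webhook.events")); // illustrative topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    process(record.value()); // idempotent transform + upsert
                }
                consumer.commitSync(); // at-least-once: commit after processing
            }
        }
    }

    private static void process(String payload) {
        // Transform the raw webhook payload and upsert into Postgres here.
    }
}
```

Committing after processing gives at-least-once delivery, which is why the transform itself must be idempotent.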
You'll Thrive If You Have:
- 6-10 years of experience as a Backend Engineer or Data Engineer in data-heavy or analytics-driven startups.
- Strong hands-on experience with Java and AWS (S3, ECS, RDS, Lambda, CloudWatch).
- Proven experience fetching and transforming data from multiple external APIs (GitHub, Jira, Jenkins, Bitbucket, etc.).
- Solid understanding of data modeling, incremental updates, and schema evolution.
- Deep knowledge of Postgres optimization: indexing, partitioning, and query tuning (see the partitioning sketch after this list).
- Experience building data pipelines or analytics platforms at scale (100M+ records, multi-tenant systems).
- Bonus: exposure to dbt, ClickHouse, Kafka, Temporal, or developer-analytics ecosystems.
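As a rough illustration of the partitioning work, a sketch that range-partitions an events table by month over JDBC. Table, column, and connection details are assumed for illustration, not Hivel's actual schema.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

/** Sketch of range-partitioning a large events table by month, which keeps
 *  per-partition indexes small and lets old data be dropped cheaply. */
public class PartitionedEventsSchema {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/analytics"; // assumed DSN
        try (Connection conn = DriverManager.getConnection(url, "etl", "secret");
             Statement st = conn.createStatement()) {

            // Postgres requires the partition key in the primary key of a
            // partitioned table, hence occurred_at in the key below.
            st.execute("""
                CREATE TABLE IF NOT EXISTS git_events (
                    tenant_id   bigint      NOT NULL,
                    event_id    bigint      NOT NULL,
                    occurred_at timestamptz NOT NULL,
                    payload     jsonb       NOT NULL,
                    PRIMARY KEY (tenant_id, event_id, occurred_at)
                ) PARTITION BY RANGE (occurred_at)
                """);

            // One partition per month; a scheduled job would create these ahead of time.
            st.execute("""
                CREATE TABLE IF NOT EXISTS git_events_2025_11
                    PARTITION OF git_events
                    FOR VALUES FROM ('2025-11-01') TO ('2025-12-01')
                """);

            // Multi-tenant queries filter by tenant and time, so index both.
            st.execute("""
                CREATE INDEX IF NOT EXISTS git_events_2025_11_tenant_time_idx
                    ON git_events_2025_11 (tenant_id, occurred_at)
                """);
        }
    }
}
```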
What Makes This Role Exciting:
- You'll build the core architecture behind AI-driven engineering insights used by companies around the world.
- You'll work directly with CxOs and senior architects.
- You'll see your work come alive in dashboards viewed by CTOs and CEOs.
- You'll shape how thousands of engineers measure productivity in the age of AI.
- You'll join a fast-moving, no-ego, design-loving culture that values ownership and craft.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1568934