Posted on: 22/04/2026
Description:
AgileEngine is an Inc. 5000 company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries.
We rank among the leaders in areas like application development and AI/ML, and our people-first culture has earned us multiple Best Place to Work awards.
WHY JOIN US:
If you're looking for a place to grow, make an impact, and work with people who care, we'd love to meet you!
ABOUT THE ROLE:
WHAT YOU WILL DO:
- Build and maintain scalable, distributed, fault-tolerant data pipelines on GCP
- Develop and manage lakehouse layers and Delta Lake workflows using BigQuery and Dataproc
- Collaborate with stakeholders across data engineering, compliance, and business teams
- Design and implement pipelines to acquire, normalise, transform, and release large volumes of financial data
- Design and implement bitemporal data models on BigQuery for regulatory-grade time-series datasets
- Build and maintain testing frameworks for data pipelines and transformation logic
- Own end-to-end solutions including ingestion pipelines, QA workflows, correction management, and audit trails
- Contribute to shared platform services in a collaborative environment
- Support implementation of AI solutions including data ingestion, anomaly detection, and semantic search using Vertex AI.
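To illustrate the bitemporal modeling responsibility above, here is a minimal, self-contained sketch of a bitemporal "as-of" lookup in plain Python. All names (`BitemporalRow`, `as_of`, the sample tickers and dates) are hypothetical and chosen for illustration only; the team's actual BigQuery schema and correction workflow would differ.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record type: each fact carries a valid-time interval
# (when it was true in the real world) and a transaction-time interval
# (when the system believed it). Field names are illustrative only.
@dataclass
class BitemporalRow:
    key: str
    value: float
    valid_from: date
    valid_to: date        # exclusive
    recorded_from: date
    recorded_to: date     # exclusive; date.max = still current belief

def as_of(rows, key, valid_at, known_at):
    """Return the value for `key` that was valid at `valid_at`,
    as the system knew it at `known_at` (a regulatory as-of query)."""
    for r in rows:
        if (r.key == key
                and r.valid_from <= valid_at < r.valid_to
                and r.recorded_from <= known_at < r.recorded_to):
            return r.value
    return None

# A price correction: the original row's transaction interval is closed
# and a corrected row is inserted, so history of belief is preserved.
rows = [
    BitemporalRow("AAPL", 100.0, date(2026, 1, 1), date(2026, 2, 1),
                  date(2026, 1, 1), date(2026, 1, 15)),
    BitemporalRow("AAPL", 101.5, date(2026, 1, 1), date(2026, 2, 1),
                  date(2026, 1, 15), date.max),
]

# What did the system believe on Jan 10? The uncorrected price.
print(as_of(rows, "AAPL", date(2026, 1, 5), date(2026, 1, 10)))  # 100.0
# What does it believe after the correction? The corrected price.
print(as_of(rows, "AAPL", date(2026, 1, 5), date(2026, 1, 20)))  # 101.5
```

The same two-interval pattern maps onto BigQuery columns, where partitioning and clustering on the time columns keep as-of queries cheap.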
MUST HAVES:
- 6-8 years of experience in data engineering
- Proficiency in Python for data pipelines, transformation logic, and automation
- Proficiency in SQL with hands-on experience in BigQuery including partitioning, clustering, and time-series queries
- Experience with Cloud Composer (Apache Airflow) for pipeline orchestration
- Working knowledge of Dataproc (Apache Spark) for batch ingestion and incremental processing
- Experience with AI-assisted development tools such as GitHub Copilot or similar
- Experience with Git version control and collaboration workflows
- Familiarity with REST APIs for integrations
- Familiarity with GCP technologies (Cloud Storage, Pub/Sub, Datastream, Cloud Monitoring, IAM, VPC Service Controls)
- Understanding of financial data concepts related to equities and other asset classes
- Upper-intermediate English level.
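As a flavor of the "Python for transformation logic" and pipeline-testing requirements above, here is a tiny illustrative sketch: a normalisation step with a pytest-style unit test. The function, field names, and sample record are hypothetical, not part of any real pipeline at AgileEngine.

```python
# Illustrative sketch only: a small normalisation step of the kind a
# pipeline test suite would cover. Field names are hypothetical.
def normalise_trade(raw: dict) -> dict:
    """Normalise a raw trade record: strip whitespace, upper-case the
    ticker, parse the price string, and default a missing venue."""
    return {
        "ticker": raw["ticker"].strip().upper(),
        "price": round(float(raw["price"]), 4),
        "venue": raw.get("venue", "UNKNOWN").strip(),
    }

def test_normalise_trade():
    # A pytest-style test: deterministic input, exact expected output.
    out = normalise_trade({"ticker": " aapl ", "price": "101.50"})
    assert out == {"ticker": "AAPL", "price": 101.5, "venue": "UNKNOWN"}
```

Keeping transformations as pure functions like this is what makes them easy to exercise in a testing framework before they run inside an orchestrated pipeline.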
NICE TO HAVES:
- Knowledge of data libraries such as pandas or PySpark
- Experience with columnar storage and time-series analytics tools such as ClickHouse
- Familiarity with Dataplex for data governance and lineage
- Understanding of Change Data Capture (CDC) using Datastream
- Understanding of bitemporal data modeling concepts
- Knowledge of financial reference data such as equities, fixed income, or corporate actions
- Experience with BigQuery cost management techniques
- Experience with CI/CD pipelines and Terraform for infrastructure as code
- Exposure to LLMs and agentic AI using Vertex AI for data-related use cases.
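The Change Data Capture item above can be sketched in a few lines: applying a stream of change events to a current-state snapshot. This is concept-level only; Datastream's real event payloads and merge semantics differ, and `apply_cdc` and its event tuples are invented for illustration.

```python
# Minimal CDC-apply sketch (concept only; Datastream's actual payload
# format differs). Each event is a tuple of (op, key, row).
def apply_cdc(state: dict, events) -> dict:
    """Fold a stream of change events into a current-state snapshot."""
    for op, key, row in events:
        if op in ("insert", "update"):
            state[key] = row          # upsert into the snapshot
        elif op == "delete":
            state.pop(key, None)      # tombstone removes the key
    return state

state = apply_cdc({}, [
    ("insert", "AAPL", {"price": 100.0}),
    ("update", "AAPL", {"price": 101.5}),
    ("insert", "MSFT", {"price": 300.0}),
    ("delete", "MSFT", None),
])
print(state)  # {'AAPL': {'price': 101.5}}
```

In a warehouse setting the same fold is typically expressed as a MERGE into the target table, with the event stream landed by the CDC service.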
PERKS AND BENEFITS:
- Remote work & Local connection: Work where you feel most productive, and meet your team in periodic meet-ups that strengthen your network with other top experts.
- Legal presence in India: We ensure full local compliance with a structured, secure work environment tailored to Indian regulations.
- Competitive Compensation in INR: Fair compensation in INR with dedicated budgets for your personal growth, education, and wellness.
- Innovative Projects: Leverage the latest tech and create cutting-edge solutions for world-recognized clients and the hottest startups.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1630422