Posted on: 08/11/2025
Description:
Role Purpose:
Join our dynamic team to design scalable data pipelines, model decision-grade data, and deliver actionable business insights for top financial clients: banks, NBFCs, lenders, and fintechs.
Key Responsibilities:
- Build and optimize ETL/ELT pipelines using PySpark/Python.
- Design and manage data lakehouse stacks on AWS (S3, Glue, Iceberg, Redshift, Postgres).
- Develop data ingestion from product, CRM, and financial systems.
- Support analytical onboarding, training, and customer success initiatives.
- Ensure compliant data usage in regulated environments.
Required Skills:
- Strong knowledge of SQL, data modeling, and Python (pandas/NumPy).
- Experience with AWS, Redshift, and Power BI/Tableau/Metabase.
- Exposure to financial data handling (PII, SOC 2, GDPR).
Nice to Have:
- Familiarity with dbt, Airflow, Terraform, Kafka/Kinesis, or reverse ETL.
- Understanding of lending lifecycle or AI/agent workflows.
Why Join Us:
- Work on high-impact, AI-powered products.
- Learn and grow in a collaborative engineering culture.
- 5-day work from office (Chandigarh/Panchkula).
- Employee-friendly policies and a clear career growth path.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1571383