Posted on: 19/11/2025
Description :
- Develop and maintain robust ETL/ELT pipelines using tools such as Apache Spark (PySpark), Airflow, and SQL (see the sketch after this list)
- Design and optimize data models for both structured and unstructured data
- Collaborate with Application, Digital Product, and Technology teams, as well as analysts and software engineers, to deliver high-quality data solutions
- Ensure data quality, integrity, and security across all systems
- Implement data governance, privacy, and compliance standards
- Monitor, troubleshoot, and optimize data pipeline performance and reliability
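As an illustration of the pipeline work above, here is a minimal sketch of a daily Airflow DAG that runs a PySpark transform. The S3 paths, column names, and DAG id are hypothetical, and it assumes Airflow 2.4+ and a Spark environment already configured for S3 access:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def transform_trades():
        # Hypothetical PySpark job: read raw scheme-level trades from S3,
        # deduplicate, and write partitioned Parquet back to S3.
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("daily_trades_etl").getOrCreate()
        raw = spark.read.json("s3://example-bucket/raw/trades/")       # assumed path
        clean = raw.dropDuplicates(["trade_id"]).filter("amount > 0")  # assumed columns
        clean.write.mode("overwrite").partitionBy("trade_date").parquet(
            "s3://example-bucket/curated/trades/")                     # assumed path
        spark.stop()

    with DAG(
        dag_id="daily_trades_etl",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(task_id="transform_trades", python_callable=transform_trades)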
Required Skills :
- Proficiency in SQL, Python, and Apache Spark
- Hands-on experience with AWS cloud services (e.g., S3, Redshift, Airflow, Lambda, EMR, EC2, Kinesis); see the load sketch after this list
- Familiarity with big data technologies such as Hadoop, Spark, and Kafka, as well as traditional RDBMS platforms
- Strong understanding of data warehousing concepts, especially Amazon Redshift
- Experience with CI/CD pipelines and version control tools (Bitbucket, Git, Jenkins)
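Several of the skills above come together when loading curated S3 data into Redshift. A minimal sketch, assuming a hypothetical cluster endpoint, table, and IAM role (a real pipeline would pull credentials from a secrets manager rather than hard-coding them):

    import psycopg2

    # Redshift pulls the Parquet files directly from S3 under the given IAM role.
    COPY_SQL = """
        COPY analytics.trades
        FROM 's3://example-bucket/curated/trades/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
        FORMAT AS PARQUET;
    """

    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # assumed endpoint
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="...",  # placeholder only
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute(COPY_SQL)
    finally:
        conn.close()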
Functional Skills :
- In-depth understanding of Asset Management Companies (AMCs) and Capital Markets
- Familiarity with business processes across Sales, Finance, and Investment domains
- Strong grasp of how investor-level (360) data is derived from scheme-level data points (see the aggregation sketch after this list)
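To make the last point concrete, here is a minimal PySpark sketch that rolls scheme-level holdings up into an investor-level (360) view. The column names (investor_id, scheme_id, units, nav, valuation_date) and paths are hypothetical, as real AMC data models vary:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("investor_360").getOrCreate()

    holdings = spark.read.parquet("s3://example-bucket/curated/holdings/")  # assumed path

    # One row per investor: total market value across schemes, breadth of
    # holdings, and the latest valuation date seen at scheme level.
    investor_360 = holdings.groupBy("investor_id").agg(
        F.sum(F.col("units") * F.col("nav")).alias("total_aum"),
        F.countDistinct("scheme_id").alias("schemes_held"),
        F.max("valuation_date").alias("as_of_date"),
    )
    investor_360.write.mode("overwrite").parquet("s3://example-bucket/marts/investor_360/")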
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1577096