hirist

Capco - Senior Data Analyst - Python/PySpark

Capco Technologies Pvt Ltd
Multiple Locations
7 - 10 Years

Posted on: 23/12/2025

Job Description

Senior Data Analyst

Location : Bengaluru / Pune

Experience : 7+ Years

Notice Period : Immediate Joiners Preferred

Role Summary :

We are seeking a highly analytical and technically proficient Senior Data Analyst to join our data engineering and analytics team. This role is designed for a professional who combines deep technical expertise in Python and PySpark with strong commercial acumen, specifically within the Banking and Financial Markets domain.


You will be responsible for defining complex business requirements, navigating large-scale Big Data environments, and delivering high-impact data solutions under tight deadlines. As a senior member of the team, you will ensure data integrity through rigorous control requirements and drive change using Agile methodologies and Test-Driven Development (TDD) principles.

Responsibilities :

- Execute complex data analysis and extraction tasks using Python, PySpark, and SQL to support large-scale Big Data programs and financial reporting.

- Translate intricate business and financial market requirements into detailed functional specifications, leveraging a strong background in business analysis.

- Navigate and query massive distributed databases, specifically utilizing Hive, managing remote server environments via PuTTY and the command line and editing code and configuration with Notepad++.

- Utilize and maintain sophisticated data models and data dictionaries specifically tailored for Banking and Financial Markets to ensure consistency across global datasets.

- Drive the end-to-end change delivery lifecycle, managing multiple competing priorities while adhering to strict regulatory and data-handling control requirements.

- Implement Test-Driven Development (TDD) practices, taking full ownership of the testing cycle to ensure products are robust and production-ready.

- Collaborate with cross-functional stakeholders in a multi-programme environment to define data lineages and ensure alignment with organizational goals.

- Proactively identify and resolve data anomalies or performance bottlenecks within the Software Development Life Cycle (SDLC) using formal Agile processes.

- Communicate complex analytical findings and technical roadmaps effectively to both technical engineering teams and non-technical business leaders.

- Ensure extreme attention to detail in data mapping, transformation logic, and documentation to minimize operational risk in financial processing.
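The data-mapping and anomaly-detection duties above can be sketched in plain Python (the same logic would typically be expressed in PySpark at scale). The `Trade` record shape, currency whitelist, and control rules below are illustrative assumptions, not part of the role description:

```python
from dataclasses import dataclass

@dataclass
class Trade:
    trade_id: str
    notional: float  # notional amount in the trade's currency
    currency: str    # ISO-style currency code

# Illustrative whitelist; a real control would source this from reference data
VALID_CURRENCIES = {"USD", "EUR", "GBP", "INR"}

def find_anomalies(trades):
    """Flag records that violate basic data-quality control requirements."""
    anomalies = []
    for t in trades:
        if t.notional <= 0:
            anomalies.append((t.trade_id, "non-positive notional"))
        if t.currency not in VALID_CURRENCIES:
            anomalies.append((t.trade_id, "unknown currency"))
    return anomalies

trades = [
    Trade("T1", 1_000_000.0, "USD"),
    Trade("T2", -50.0, "EUR"),   # fails the notional control
    Trade("T3", 200.0, "XXX"),   # fails the currency control
]
print(find_anomalies(trades))
# -> [('T2', 'non-positive notional'), ('T3', 'unknown currency')]
```

In a production pipeline these checks would run per-partition over Hive tables, with flagged records routed to an exceptions queue rather than printed.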

Technical Requirements :

- Python & PySpark : 7+ years of experience building scalable data processing scripts and performing advanced analytics on Big Data platforms.

- Advanced SQL : Mastery in writing, debugging, and optimizing complex queries within Hive and relational database management systems.

- Environment Tools : Proficient in navigating Linux-based systems using PuTTY/CMD and managing code/configurations with Notepad++.

- Domain Expertise : Deep understanding of Banking and Financial Markets data structures, including knowledge of regulatory reporting and financial instruments.

- SDLC & Agile : Strong hands-on experience with Agile ceremonies, Jira, and version control, with a specific focus on Test-Driven Development (TDD).

- Data Governance : Practical knowledge of control requirements, data privacy, and the security protocols surrounding sensitive financial data handling.
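The Test-Driven Development expectation above can be illustrated with a minimal pure-Python sketch: a test written first against a hypothetical FX-conversion helper, then the implementation. The rounding convention and error behaviour are assumptions for illustration only:

```python
def convert_notional(amount, rate):
    """Convert a notional at a given FX rate, rounded to 2 dp (assumed convention)."""
    if rate <= 0:
        raise ValueError("FX rate must be positive")
    return round(amount * rate, 2)

# TDD-style test: written before the implementation, run against it
def test_convert_notional():
    assert convert_notional(100.0, 1.25) == 125.0
    assert convert_notional(0.0, 1.1) == 0.0
    try:
        convert_notional(10.0, 0.0)
    except ValueError:
        pass  # non-positive rates must be rejected
    else:
        raise AssertionError("expected ValueError for non-positive rate")

test_convert_notional()
print("all tests passed")
```

In practice such tests would live in a pytest suite wired into the team's CI, with the test cases agreed before the transformation logic is coded.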

Preferred Skills :

- Previous experience in a Tier 1 Investment Bank or Global Financial Services organization.

- Familiarity with Data Orchestration tools like Airflow or Control-M for managing complex job schedules.

- Knowledge of visualization tools (Tableau/Power BI) to present analytical findings to executive stakeholders.

- Understanding of CI/CD pipelines (Jenkins/Bitbucket) to support automated data deployment workflows.

- Ability to work as a self-starter in a high-pressure environment with a proven track record of meeting tight delivery deadlines.

- Certifications in Python for Data Science or Google Cloud/AWS Big Data specialities are an added advantage.

