Posted on: 29/10/2025
Description:
Key Responsibilities:
- Architect and maintain structured databases for tick-level and MFT data.
- Develop efficient ingestion & ETL pipelines using Python/C++, handling large volumes in near real time.
- Write optimised SQL queries and manage schemas for time-series and reference data.
- Implement automated data validation, reconciliation, and version control.
- Work with quants to expose clean datasets for research and live trading systems.
- Ensure data integrity across time zones, instruments, and asset classes.
- Manage historical data archives and set up policies for retention, compression, and retrieval.
- Collaborate with the execution team to align live feeds with stored data.
Requirements:
Skills & Qualifications:
- Strong command of Python (pandas, multiprocessing, asyncio) and SQL (Postgres/MySQL).
- Working knowledge of C++ for performance-critical modules or parsers.
- Comfort with Linux environments, shell scripting, and version control (Git).
- Experience handling large-scale time-series data.
- Understanding of data normalization, schema design, and storage optimization.
- Ability to work independently, manage priorities, and deliver with accountability.
- Exposure to financial or tick-data pipelines, FIX/FAST feeds, or exchange APIs.
- Familiarity with Redis and Kafka.
- Prior experience in an HFT, quant, or data-heavy product firm.
Functional Area: Data Engineering
Job Code: 1566863