
Senior Data Engineer - Financial Services & Trading Platform

SMC Group
Delhi
5 - 10 Years

Posted on: 17/12/2025

Job Description

About the Role:


We're seeking a Senior Data Engineer to join our team at SMC Global Securities. In this role, you will play a crucial part in building and maintaining the data infrastructure that powers our financial services and trading platforms.


You will work with a diverse tech stack to handle massive volumes of real-time and historical financial data, ensuring that our analytics, research, and business teams have access to high-quality, reliable information to support our brokerage, wealth management, and advisory services. You will be responsible for designing, governing, and evolving the core data infrastructure that powers our financial services, trading platforms, and strategic decision-making.

Responsibilities:

- Design, build, and maintain highly efficient and scalable real-time & batch data pipelines to ingest, process, and analyze financial data, including market data, trades, and client information.

- Own the high-level design and architecture for our Data Lakehouse environments, ensuring they align with business strategy and scalability requirements.

- Implement and enforce data modeling standards, drawing on a strong understanding of techniques such as Data Vault, Dimensional, and star/snowflake schema modeling, to create a robust and flexible data architecture for financial data.

- Build data pipelines using a Medallion Architecture, progressing data through Bronze (raw), Silver (cleansed), and Gold (analytics-ready) layers to ensure data quality, lineage, and auditability for regulatory compliance (see the pipeline sketch after this list).

- Develop and optimize composable data architectures, focusing on creating modular, reusable data components that can be easily combined to support different business needs, such as risk management, algorithmic trading, and research analytics.

- Develop and optimize data transformation processes for financial data using tools like DBT and data processing frameworks like Apache Spark on EMR.

- Manage and maintain data storage solutions across our Data Lake (S3), Data Warehouse (Redshift), and Data Lakehouse architectures, including Hive and Iceberg tables, to handle large-scale financial datasets.

- Write and optimize complex SQL queries and Python scripts for financial data extraction, transformation, and loading (ETL/ELT).

- Review code and designs from peers, providing constructive feedback to elevate team output, enforce best practices, and ensure system reliability.

- Implement and orchestrate data workflows for market data ingestion, trade processing, and reporting using DLT, DBT, PySpark, and Apache Airflow (see the DAG sketch after this list).

- Utilize AWS cloud services such as Lambda, Glue, S3, Athena, and Redshift to build robust data solutions.

- Collaborate with research analysts, quant traders, and business intelligence teams to understand data needs and build data models that support financial analysis, reporting (e.g., IPO reports), and the development of trading tools.

- Architect and enforce the implementation of a comprehensive Data Quality framework to ensure the integrity, accuracy, and reliability of all financial data products.

- Proactively manage stakeholders, translating complex business needs (e.g., for risk management, algorithmic trading, IPO reports) into clear technical specifications and data solutions.

- Document data pipelines, data models, and processes for maintainability and knowledge sharing.
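
To make the Medallion flow concrete, here is a minimal, illustrative PySpark sketch of a Bronze/Silver/Gold pipeline for trade data. The file paths, column names, and schema are hypothetical placeholders for illustration, not SMC's actual sources or storage layout.

```python
# Minimal Medallion-style pipeline sketch in PySpark.
# Paths and columns are assumptions for illustration only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: ingest raw trade data as-is, stamped with ingestion time for auditability.
bronze = (
    spark.read.option("header", True).csv("data/raw/trades.csv")
    .withColumn("ingested_at", F.current_timestamp())
)
bronze.write.mode("append").parquet("lake/bronze/trades")

# Silver: cleanse and conform types, drop malformed and duplicate rows.
silver = (
    spark.read.parquet("lake/bronze/trades")
    .withColumn("trade_ts", F.to_timestamp("trade_ts"))
    .withColumn("quantity", F.col("quantity").cast("long"))
    .withColumn("price", F.col("price").cast("double"))
    .dropna(subset=["trade_id", "symbol", "trade_ts", "price"])
    .dropDuplicates(["trade_id"])
)
silver.write.mode("overwrite").parquet("lake/silver/trades")

# Gold: analytics-ready aggregates, e.g. daily traded value per symbol.
gold = (
    silver.groupBy(F.to_date("trade_ts").alias("trade_date"), "symbol")
    .agg(
        F.sum(F.col("quantity") * F.col("price")).alias("traded_value"),
        F.count("trade_id").alias("trade_count"),
    )
)
gold.write.mode("overwrite").parquet("lake/gold/daily_traded_value")
```

In a production setup the three layers would more likely land in S3-backed Iceberg or Hive tables rather than local Parquet paths, so that lineage and time-travel queries are available for compliance.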
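
Similarly, a bare-bones Apache Airflow DAG could chain the ingestion, transformation, and reporting steps described above. The DAG id, schedule, and task callables below are purely illustrative assumptions, not an existing workflow.

```python
# Illustrative Airflow DAG sketch: ingest -> transform -> report.
# All names and the schedule are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_market_data(**context):
    print("pull raw market data into the Bronze layer")


def run_transformations(**context):
    print("run DBT / Spark models to build Silver and Gold tables")


def publish_reports(**context):
    print("refresh reporting datasets consumed by BI tools")


with DAG(
    dag_id="market_data_pipeline_sketch",
    start_date=datetime(2025, 1, 1),
    schedule="0 18 * * 1-5",  # assumption: weekday evenings after market close
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_market_data", python_callable=ingest_market_data)
    transform = PythonOperator(task_id="run_transformations", python_callable=run_transformations)
    report = PythonOperator(task_id="publish_reports", python_callable=publish_reports)

    ingest >> transform >> report
```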

Requirements:

- Strong proficiency in SQL and Python is a must, with a focus on data manipulation and analysis.

- Good understanding of data modeling concepts and experience with various modeling techniques in a financial context.

- Strong understanding of Medallion and composable data architectures.

- Solid understanding of data architecture concepts, including Data Lake, Data Warehouse, and Data Lakehouse.

- Hands-on experience with AWS cloud services, including but not limited to S3, Redshift, Athena, Glue, EMR, and Lambda.

- Experience with open-source data tools such as Airflow, DBT, DLT, and Airbyte.

- Familiarity with Hive & Iceberg tables is essential.

- Proven experience building and maintaining data pipelines, handling large volumes of financial data.

- Experience with reporting or business intelligence tools (e.g., Metabase).

- Excellent problem-solving skills and the ability to work independently or as part of a team.

- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.

- Prior experience in the financial services, trading, or fintech industry is highly preferred.

