Posted on: 20/11/2025
Description :
- Integrate with DLT nodes, RPC endpoints and indexers to capture transactions, blocks and smart-contract events into analytical stores (a minimal ingestion sketch follows this list).
- Implement data models and storage patterns (time-series, graph-friendly tables, partitioned fact tables) for analytics, reporting and ML training.
- Optimize pipeline performance and latency using Kafka, Spark and efficient partitioning; manage schema evolution and enforce data-quality checks (a streaming sketch also follows this list).
- Build and maintain pipeline orchestration, CI/CD, infrastructure-as-code and observability for production reliability and incident response.
- Collaborate with blockchain engineers, data scientists and product teams; mentor junior engineers and define engineering best practices for DLT data workloads.
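For illustration only, here is a minimal sketch of the block-and-transaction capture responsibility, using the standard Ethereum JSON-RPC method eth_getBlockByNumber. The endpoint URL, field names and flattened record layout are assumptions for the example, not part of the role description.

```python
import requests

# Hypothetical JSON-RPC endpoint; any Ethereum-compatible node or provider would work.
RPC_URL = "https://rpc.example.org"

def fetch_block(block_number: int) -> dict:
    """Fetch one block (with full transaction objects) via eth_getBlockByNumber."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_getBlockByNumber",
        "params": [hex(block_number), True],  # True = include full transaction objects
    }
    resp = requests.post(RPC_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["result"]

def block_to_records(block: dict) -> list[dict]:
    """Flatten a raw block into transaction rows suitable for an analytical store."""
    return [
        {
            "block_number": int(block["number"], 16),
            "block_timestamp": int(block["timestamp"], 16),
            "tx_hash": tx["hash"],
            "from_address": tx["from"],
            "to_address": tx.get("to"),  # None for contract-creation transactions
            # Decimal string: wei values can exceed 64-bit integers.
            "value_wei": str(int(tx["value"], 16)),
        }
        for tx in block["transactions"]
    ]

if __name__ == "__main__":
    records = block_to_records(fetch_block(19_000_000))
    print(f"captured {len(records)} transactions")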
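```

Likewise, a rough sketch of the Kafka/Spark bullet: a Structured Streaming job that reads transaction events from a Kafka topic and writes a date-partitioned Parquet fact table. The topic name, schema and storage paths are placeholders, not a prescribed design.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("tx_ingest").getOrCreate()

# Assumed schema of the transaction records produced by the ingestion step above.
tx_schema = StructType([
    StructField("block_number", LongType()),
    StructField("block_timestamp", LongType()),
    StructField("tx_hash", StringType()),
    StructField("from_address", StringType()),
    StructField("to_address", StringType()),
    StructField("value_wei", StringType()),
])

# Read transaction events from a Kafka topic ("chain.transactions" is a placeholder name).
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")
    .option("subscribe", "chain.transactions")
    .load()
)

# Parse JSON payloads, derive a date column, and append to a partitioned Parquet fact table.
parsed = (
    raw.select(F.from_json(F.col("value").cast("string"), tx_schema).alias("tx"))
    .select("tx.*")
    .withColumn("block_date", F.to_date(F.from_unixtime("block_timestamp")))
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3a://analytics/fact_transactions/")                      # placeholder path
    .option("checkpointLocation", "s3a://analytics/_checkpoints/fact_transactions/")
    .partitionBy("block_date")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```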
Skills & Qualifications :
Must-Have :
- Apache Spark.
- Apache Kafka.
- SQL.
- Apache Airflow.
- Distributed Ledger Technology.
Preferred :
- Ethereum.
- AWS.
Additional Qualifications :
- Experience with cloud data platforms and analytical stores (e.g., Redshift, BigQuery, Snowflake) and productionizing data pipelines.
- Bachelor's or Master's degree in Computer Science, Engineering or equivalent practical experience.
- Comfortable working remotely from India with occasional overlap across global time zones.
Benefits & Culture Highlights :
- Focused opportunity to work at the intersection of blockchain/DLT and data engineering, with a strong learning and career-growth path.
- Supportive consulting culture with mentorship, training budget and hands-on technical ownership.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1577809