
Curl Tech - Senior Data Engineer - Databricks

EUPHONIOUS INNOVATIONS PRIVATE LIMITED
8 - 13 Years
₹35-55 LPA
Bangalore

Posted on: 08/04/2026

Job Description


Location : Bangalore

Designation : Senior Data Engineer - Databricks

Company Profile :

Amphora Software Pvt. Ltd. is the premier software solution provider for energy trading, logistics and risk management in the global crude oil, refined products and energy derivatives marketplace. Our team includes some of the most experienced software designers, developers and business analysts in the commodities industry today.


Since our inception, our main goal has been to provide the trading community with the most robust, user-friendly, enterprise-wide software package available. We continue to launch new products that address customers' needs and adjust to dynamic market demands.

We at Amphora Bangalore work on developing products & solutions for commodity trading businesses. With a focus on applied research, we employ data-centric AI to tackle real-world challenges effectively, ensuring a seamless transition of our AI solutions from research labs to production environments.

Our unique technical expertise spans Design, Machine Learning, Artificial Intelligence, Blockchain, Distributed Ledger Technologies, Software Engineering & Cloud. This diverse experience enables us to navigate complex use cases involving data.

Job Description :

We are looking for a Senior Data Engineer with strong technical expertise in Databricks, data engineering, and cloud-native analytics platforms. You will contribute to the development and expansion of our global analytics platform supporting Front Office trading across commodities by building scalable, secure, and efficient data solutions.

You will work alongside data scientists, ML engineers, and business stakeholders to understand requirements, design and build robust data pipelines, and deliver end-to-end analytics and ML/AI capabilities.

Key Responsibilities :


- Design, build, and maintain scalable data pipelines and Delta Lake architectures in Databricks on AWS.

- Develop and enhance the Front Office data warehouse to ensure performance, reliability, and data quality for trading analytics.

- Partner with data scientists and quants to prepare ML-ready datasets and support the development of production-grade ML/AI pipelines.

- Implement and maintain CI/CD pipelines, testing frameworks, and observability tools for data engineering workflows.

- Contribute to MLOps practices, including model tracking, deployment, and monitoring using MLflow and Databricks tools.

- Participate in code reviews, data modeling sessions, and collaborative solutioning across cross-functional teams.

- Ensure compliance with data governance, security, and performance standards.

- Stay current with Databricks platform enhancements and cloud data technologies to apply best practices and recommend improvements.

Who are we looking for? - Technical Expertise :

- BS/MS in Computer Science, Software Engineering, or equivalent technical discipline.

- 8+ years of hands-on experience building large-scale distributed data pipelines and architectures.

- Expert-level knowledge of Apache Spark, PySpark, and Databricks, including experience with Delta Lake, Unity Catalog, MLflow, and Databricks Workflows.

- Deep proficiency in Python and SQL, with proven experience building modular, testable, reusable pipeline components.

- Strong experience with AWS cloud services including S3, Lambda, Glue, API Gateway, IAM, EC2, EKS, and integration of AWS-native components with Databricks.

- Advanced skills in Infrastructure as Code (IaC) using Terraform for provisioning data infrastructure, including permissions, clusters, jobs, and lakehouse resources.

- Proven experience in building MLOps pipelines, tracking model lifecycle, and integrating with modern ML frameworks (e.g., scikit-learn, XGBoost, TensorFlow).

- Exposure to streaming data pipelines (e.g., Kafka, Structured Streaming) and real-time analytics architectures is a strong plus.

- Experience implementing robust DevOps practices for data engineering : versioning, testing frameworks, deployment automation, monitoring.

- Familiarity with data governance, access control, and regulatory compliance requirements in financial or trading environments.

- Excellent communication and problem-solving skills, with a strong sense of ownership and the ability to work in agile, cross-functional teams.

- Certifications in Databricks, AWS, or relevant big data technologies are preferred.

