hirist

Epsilon - Data Engineer - Business Intelligence

Posted on: 25/07/2025

Job Description

The Architecture Team plays a pivotal role in the end-to-end design, governance, and strategic direction of product development within Epsilon People Cloud (EPC).

As a centre of technical excellence, the team ensures that every product feature is engineered to meet the highest standards of scalability, security, performance, and maintainability.

Their responsibilities span across architectural ownership of critical product features, driving techno-product leadership, enforcing architectural governance, and ensuring systems are built with scalability, security, and compliance in mind.

They design multi-cloud and hybrid-cloud solutions that support seamless integration across diverse environments and contribute significantly to interoperability between EPC products and the broader enterprise ecosystem.

The team fosters innovation and technical leadership while actively collaborating with key partners to align technology decisions with business goals.

Through this, the Architecture Team ensures the delivery of future-ready, enterprise-grade, efficient and performant, secure and resilient platforms that form the backbone of Epsilon People Cloud.

Why we are looking for you :

You have experience working as a Data Engineer with strong database fundamentals and ETL background.

You have experience working in a Data warehouse environment and dealing with data volume in terabytes and above.

You have experience working with relational data systems, preferably PostgreSQL and SparkSQL.

You have excellent design and coding skills and can mentor junior engineers on the team.

You have excellent written and verbal communication skills.

You are experienced and comfortable working with global clients.

You work well with teams and can coordinate with multiple collaborators, including clients, vendors, and delivery teams.

You are proficient with bug tracking and test management toolsets to support development processes such as CI/CD.

What you will enjoy in this role :

As part of the Epsilon Technology practice, the pace of the work matches the fast-evolving demands in the industry.

You will get to work on the latest tools and technologies and deal with data at petabyte scale.

You will work on homegrown frameworks built on Spark, Airflow, and similar tools.

Exposure to the digital marketing domain, where Epsilon is a market leader.

Understand and work closely with consumer data across different segments, gaining insights into consumer behaviours and patterns that inform digital ad strategies.

As part of the dynamic team, you will have opportunities to innovate and put your recommendations forward.

Apply existing standard methodologies and help define new ones as industry standards evolve.

Opportunity to work with Business, System, and Delivery teams to build a solid foundation in the digital marketing domain.

An open and transparent environment that values innovation and efficiency.


What will you do?

- Develop a deep understanding of the business context under which your team operates and present feature recommendations in an agile working environment.

- Lead, design, and code solutions on and off the database to ensure application access, enabling data-driven decision-making for the company's multi-faceted ad-serving operations.

- Work closely with Engineering resources across the globe to ensure enterprise data warehouse solutions and assets are actionable, accessible, and evolving in lockstep with the needs of the ever-changing business model.

- This role requires deep expertise in Spark and strong proficiency in ETL, SQL, and modern data engineering practices.

- Design, develop, and manage ETL/ELT pipelines in Databricks using PySpark/SparkSQL, integrating various data sources to support business operations.

- Lead in the areas of solution design, code development, quality assurance, data modelling, and business intelligence.

- Mentor junior engineers on the team.

- Stay abreast of developments in the data world in terms of governance, quality and performance optimization.

- Run effective client meetings, understand deliverables, and drive successful outcomes.
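The pipeline work described above centres on the extract-transform-load pattern. As a rough illustration of that pattern only, here is a minimal sketch using Python's stdlib sqlite3 as a stand-in (Databricks jobs would use PySpark/SparkSQL against lakehouse tables instead); the table and column names are invented for the example.

```python
import sqlite3

def run_pipeline(conn: sqlite3.Connection) -> int:
    """Extract raw ad events, aggregate per campaign, load a mart table.

    Hypothetical schema: raw_ad_events(campaign_id, clicked). SparkSQL
    would express the same aggregation almost verbatim.
    """
    cur = conn.cursor()
    # Transform + load: one summary row per campaign with a click-through rate.
    cur.execute("""
        CREATE TABLE campaign_mart AS
        SELECT campaign_id,
               COUNT(*)                      AS impressions,
               SUM(clicked)                  AS clicks,
               1.0 * SUM(clicked) / COUNT(*) AS ctr
        FROM raw_ad_events
        GROUP BY campaign_id
    """)
    conn.commit()
    return cur.execute("SELECT COUNT(*) FROM campaign_mart").fetchone()[0]

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_ad_events (campaign_id TEXT, clicked INTEGER)")
    conn.executemany(
        "INSERT INTO raw_ad_events VALUES (?, ?)",
        [("c1", 1), ("c1", 0), ("c2", 1)],
    )
    print(run_pipeline(conn))  # number of campaigns loaded into the mart
```

The same shape scales from this toy example to terabyte workloads; only the engine (Spark) and storage (lakehouse tables) change.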

Qualifications :

- Bachelor's degree in Computer Science or an equivalent degree is required.

- 5-8 years of data engineering experience with expertise in Apache Spark and databases (preferably Databricks), plus a technical understanding of marketing technologies and data management.

- Ability to monitor and tune Databricks workloads for high performance and scalability, adapting to business needs as required.

- Solid experience in basic and advanced SQL writing and tuning.

- Experience with Python.

- Solid understanding of CI/CD practices, with experience using Git for version control and integration in Spark data projects.

- Good understanding of Disaster Recovery and Business Continuity solutions.

- Experience with scheduling applications that have complex interdependencies, preferably Airflow.

- Good experience in working with geographically and culturally diverse teams.

- Understanding of data management concepts in both traditional relational databases and big data lakehouse solutions such as Apache Hive, AWS Glue, or Databricks.

- Excellent written and verbal communication skills.

- Ability to handle complex products.

- Good communication and problem-solving skills, with the ability to manage multiple priorities.

- Ability to diagnose and solve problems quickly.

- Diligent and able to multi-task, prioritize, and adapt quickly to changing priorities.

- Good time management.

- Good to have: knowledge of cloud platforms (including cloud security) and familiarity with Terraform or other infrastructure-as-code tools.

- Candidates must be based in Bangalore.
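On the scheduling qualification above: the "complex interdependencies" an Airflow DAG encodes are, at heart, a dependency graph that must run in topological order. A minimal sketch of that idea using Python's stdlib graphlib (task names are invented; real Airflow would declare operators and wire them with its `>>` syntax):

```python
from graphlib import TopologicalSorter

def execution_order(deps: dict[str, set[str]]) -> list[str]:
    """Return one valid run order for tasks given their upstream dependencies."""
    return list(TopologicalSorter(deps).static_order())

# Each task maps to the set of tasks that must finish before it starts.
# These task names are hypothetical, chosen to mirror a typical ELT flow.
dag = {
    "extract_events":   set(),
    "extract_profiles": set(),
    "transform":        {"extract_events", "extract_profiles"},
    "load_warehouse":   {"transform"},
    "refresh_reports":  {"load_warehouse"},
}

if __name__ == "__main__":
    print(execution_order(dag))  # extracts first, refresh_reports last
```

An orchestrator like Airflow adds retries, backfills, and sensors on top, but any valid schedule it produces respects exactly this ordering constraint.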

