
Kenvue - Data Engineer - Enterprise Data & Analytics

Kenvue
2 - 4 Years
Bangalore

Posted on: 03/03/2026

Job Description

Note : Women candidates preferred


Description :


Who we are :


At Kenvue, we realize the extraordinary power of everyday care. Built on over a century of heritage and rooted in science, we're the house of iconic brands - including Neutrogena, Aveeno, Tylenol, Listerine, Johnson's and BAND-AID Brand Adhesive Bandages - that you already know and love. Science is our passion; care is our talent. Our global team is made up of ~22,000 diverse and brilliant people, passionate about insights and innovation and committed to delivering the best products to our customers. With expertise and empathy, being a Kenvuer means having the power to impact the lives of millions of people every day. We put people first, care fiercely, earn trust with science, solve with courage, and have brilliant opportunities waiting for you! Join us in shaping our future, and yours.


What you will do :


A Data Engineer on the Enterprise Data & Analytics team focuses on unifying complex, siloed data from internal ERPs (such as SAP) and product lifecycle management systems (such as laboratory information systems) with external retail signals (POS data, shipment tracking) to optimize every stage from production to the shelf.


Core Responsibilities :


- Pipeline Development : Design and maintain scalable ETL/ELT pipelines using Azure Databricks to ingest data into Azure Data Lake Storage (ADLS).


- Data Transformation : Utilize PySpark and Spark SQL to clean, normalize, and aggregate raw data into "analytics-ready" formats (Medallion Architecture : Bronze, Silver, Gold layers).


- Orchestration & Monitoring : Configure and monitor data workflows using Azure Data Factory or Databricks Workflows (Lakeflow Jobs) to ensure daily retail reports are delivered accurately and on time.

- Governance & Security : Implement data access controls and cataloging using Unity Catalog to ensure secure and compliant data handling across the enterprise.


- Quality Assurance : Develop automated data quality checks (e.g., using Delta Live Tables) to prevent inaccurate metrics reporting.
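The Data Transformation responsibility above (raw data cleaned and aggregated through Bronze, Silver, and Gold layers) can be illustrated with a minimal sketch. In production this would be PySpark DataFrames on Azure Databricks; here is a pure-Python stand-in for the same flow, with entirely illustrative record fields and SKU names:

```python
# Minimal pure-Python sketch of the Medallion flow; a real pipeline would
# use PySpark DataFrames on Databricks. Field names are illustrative.

def to_silver(bronze_rows):
    """Clean and normalize raw (bronze) retail records."""
    silver = []
    for row in bronze_rows:
        if row.get("sku") is None or row.get("units") is None:
            continue  # drop malformed records at the silver boundary
        silver.append({
            "sku": row["sku"].strip().upper(),   # normalize identifiers
            "store": row.get("store", "UNKNOWN"),
            "units": int(row["units"]),
        })
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned records into analytics-ready totals per SKU."""
    totals = {}
    for row in silver_rows:
        totals[row["sku"]] = totals.get(row["sku"], 0) + row["units"]
    return totals

bronze = [
    {"sku": " band-aid-01 ", "store": "BLR-1", "units": "3"},
    {"sku": "band-aid-01", "store": "BLR-2", "units": "2"},
    {"sku": None, "units": "9"},  # malformed: dropped at silver
]
print(to_gold(to_silver(bronze)))  # {'BAND-AID-01': 5}
```

The same shape applies at scale: bronze keeps the raw ingest untouched, silver enforces schema and cleanliness, and gold serves the aggregated, report-ready view.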


Technical Requirements :


Education : Bachelor's degree in Computer Science, Data Science, or a related engineering field.


Programming :


- 2-4 years of work experience; proficiency in Python (PySpark) and SQL is essential for building distributed data processing logic.


- Databricks Ecosystem : Job clusters, Unity Catalog, Databricks Notebooks


- Familiarity with Delta Lake (ACID transactions, time travel) and basic knowledge of Spark core architecture (drivers, executors, tasks).
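Delta Lake's "time travel" mentioned above means that every committed write produces a new table version and older versions remain readable. A toy pure-Python model of that idea (class and field names are illustrative, not a real Delta API):

```python
# Toy sketch of Delta Lake time travel: each commit creates a new table
# version, and any old version stays readable. Names are illustrative.

class VersionedTable:
    def __init__(self):
        self._versions = []  # one snapshot per commit

    def commit(self, rows):
        """Append a new snapshot; return its version number."""
        self._versions.append(list(rows))
        return len(self._versions) - 1

    def read(self, version_as_of=None):
        """Read the latest snapshot, or an older one (like versionAsOf)."""
        if not self._versions:
            return []
        idx = len(self._versions) - 1 if version_as_of is None else version_as_of
        return self._versions[idx]

t = VersionedTable()
t.commit([{"sku": "A1", "units": 5}])
t.commit([{"sku": "A1", "units": 7}])
print(t.read())                 # latest version
print(t.read(version_as_of=0))  # time travel to the first version
```

In real Delta Lake the equivalent read is `spark.read.format("delta").option("versionAsOf", 0).load(path)`, backed by the transaction log that also provides the ACID guarantees.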

Azure Proficiency :


- Exposure to core Azure services, specifically ADLS Gen2, Azure Data Factory (ADF) and Key Vault.


Version Control :


- Experience using Git for collaborative code development and CI/CD processes.


Performance Tuning :


- Basic ability to identify bottlenecks in Spark jobs, such as optimizing joins for large-scale transaction tables.
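One common fix for the join bottleneck above is a broadcast hash join: copy the small dimension table to every executor so the large fact table never shuffles. A pure-Python sketch of the underlying idea (table contents are made up for illustration):

```python
# Illustrative sketch of a broadcast hash join: hash the small dimension
# table once, then probe it per fact row, so the large side is never
# shuffled. Data values here are invented for the example.

def broadcast_join(facts, dim, key):
    """Inner-join a large list of fact rows against a small dim table."""
    lookup = {row[key]: row for row in dim}  # "broadcast" the small side
    joined = []
    for fact in facts:
        dim_row = lookup.get(fact[key])
        if dim_row is not None:
            joined.append({**fact, **dim_row})
    return joined

facts = [{"sku": "A1", "units": 5}, {"sku": "B2", "units": 1}]
dim = [{"sku": "A1", "brand": "Neutrogena"}]
print(broadcast_join(facts, dim, "sku"))
```

In PySpark the same hint is `large_df.join(broadcast(small_df), "sku")` (with `broadcast` from `pyspark.sql.functions`), which avoids shuffling the large transaction table.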


- Agile Development : Experience working in a collaborative agile environment.


- Cloud Architecture : Understanding of cloud architecture principles and best practices.

- Experience designing and building end-to-end solutions that meet business requirements and adhere to scalability, reliability, and security standards.


What's in it for you :


- Competitive Total Rewards Package

- Paid Company Holidays, Paid Vacation, Volunteer Time & More!


- Learning & Development Opportunities


- Employee Resource Groups
