Posted on: 08/12/2025
Description :
About Servify :
Servify is a global product lifecycle management company operating across India, North America, Europe, and MENA. We focus on designing and administering custom product protection programs and exchange/upgrade programs for carriers, OEM brands, and retailers.
We have cultivated a diverse, global client portfolio that includes Fortune 100 companies, OEMs representing more than 87% of global mobile phone market share (including Apple and Samsung), and more than 75 other brands, supporting their product care solutions.
Servify protects tens of millions of devices across the globe and supports distribution of device protection products in more than 200,000 retail outlets worldwide.
POSITION SUMMARY :
We are seeking an experienced Senior Data Engineer to take operational ownership of our established data ecosystem.
If you have 5+ years of data engineering experience and 2+ years of recent, hands-on Databricks experience in a production environment, this is your chance to drive a critical platform transformation.
This role provides significant autonomy.
You will be responsible for the continuous health of our production pipelines, ensuring data quality, and leading the charge to consolidate our entire visualization layer by migrating dashboards from Tableau onto Databricks.
KEY RESPONSIBILITIES :
- Pipeline Stabilization & Management : Manage, monitor, and ensure the 24/7 operational reliability of our current suite of production Databricks (PySpark/Delta Lake) pipelines.
- ETL Architecture Fidelity : Maintain and iterate on complex ETL/ELT processes structured around the Medallion Architecture (a minimal sketch of this pattern follows this list).
- Visualization Migration Lead : Execute the end-to-end migration of all existing business intelligence dashboards from Tableau onto the native Databricks visualization platform.
- Source System Integration : Design and optimize ingestion logic for diverse data sources, with specific responsibility for extracting, transforming, and loading data efficiently from PostgreSQL and MongoDB.
- Partner Sharing & Security : Establish and govern secure, reliable mechanisms for sharing finalized Databricks visualizations and reports with both internal stakeholders and external partners.
- Cost & Performance Optimization : Actively tune Spark jobs, cluster configurations, and Delta tables to drive down cloud costs and reduce pipeline latency.
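For candidates unfamiliar with the pattern, the following is a minimal, hypothetical sketch of the kind of PySpark/Delta Lake work described above: raw rows are landed from an operational PostgreSQL source into a bronze layer, then promoted to a cleansed silver layer. Every hostname, credential, table name, and path is a placeholder, and the code assumes a Spark cluster with the PostgreSQL JDBC driver and Delta Lake available; it is not a description of Servify's actual pipelines.

```python
# Illustrative sketch only; hosts, credentials, tables, and paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze layer: land raw rows from a PostgreSQL source via JDBC,
# stamping each batch with an ingestion timestamp.
bronze = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://pg-host:5432/appdb")  # hypothetical source
    .option("dbtable", "public.claims")                     # hypothetical table
    .option("user", "reader")
    .option("password", "***")  # in production, resolve from a secret scope
    .load()
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").save("/lakehouse/bronze/claims")

# Silver layer: enforce basic quality rules and deduplicate before promotion.
silver = (
    spark.read.format("delta").load("/lakehouse/bronze/claims")
    .filter(F.col("claim_id").isNotNull())
    .dropDuplicates(["claim_id"])
)
silver.write.format("delta").mode("overwrite").save("/lakehouse/silver/claims")
```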
REQUIREMENTS :
- 5+ years of experience building and managing robust data infrastructure.
- Databricks Mastery : Minimum of 2 years of recent, hands-on production experience in managing jobs, clusters, and data assets within the Databricks environment.
- Expert proficiency in Python (PySpark) and advanced SQL.
- Database Expertise : Proven ability to connect to, query, and efficiently extract large datasets from PostgreSQL and MongoDB; an understanding of NoSQL schema design and extraction methods is key (see the sketch after this list).
- Architecture & Methodology : Practical experience implementing and maintaining the Medallion Architecture.
- BI Tool Knowledge : Prior exposure to Tableau is essential for understanding the migration scope, coupled with proven experience developing dashboards in Databricks SQL Analytics/Warehouse.
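To make the MongoDB expectation concrete, here is a hedged sketch of extracting a collection into Spark using the MongoDB Spark Connector (v10+). The URI, database, collection, field names, and output path are all hypothetical placeholders, not a description of Servify's systems.

```python
# Hypothetical sketch; assumes the MongoDB Spark Connector (v10+) is attached.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mongo-extract-sketch").getOrCreate()

# Read a MongoDB collection into a Spark DataFrame; the connector infers a
# schema by sampling documents, which is why NoSQL schema awareness matters.
devices = (
    spark.read.format("mongodb")
    .option("connection.uri", "mongodb://mongo-host:27017")  # placeholder URI
    .option("database", "appdb")
    .option("collection", "devices")
    .load()
)

# Flatten one level of nested document structure (assuming a struct field
# named "warranty" exists), then land the result to a Delta table.
(
    devices.selectExpr("_id", "status", "warranty.plan AS warranty_plan")
    .write.format("delta")
    .mode("append")
    .save("/lakehouse/bronze/devices")
)
```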
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1586665