Posted on: 21/01/2026
Note: If shortlisted, you will be invited to the initial rounds on 7th February 2026 (Saturday) in Gurugram
Responsibilities:
- Responsible for the design, deployment, configuration, and operation of a multi-node big data cluster, including working with open-source and/or commercial stacks to support the full SDLC
- Deploy, manage, and maintain development, test, and production environments for the big data platform
- Develop scripts to automate and streamline infrastructure operations and configuration (a minimal sketch follows this list)
- Specify, design, build, and support BI solutions, working closely with the data lake team
- Create dashboards and KPIs that present business performance to management
- Design and maintain data models used for reporting and analytics
- Identify infrastructure needs and provide support to developers and business users
- Research performance issues and optimize the platform for performance
- Troubleshoot and resolve issues in all operational environments
- Work with a cross-functional team to deliver software deployments
- Think ahead, continuously adopting new ideas and technologies to solve business problems
- Own the design and development of automated solutions for recurring reporting and in-depth analysis.
- Be a problem solver and critical thinker
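
To illustrate the automation scripting mentioned above, here is a minimal sketch of a Python health-check script for cluster services; the service names, hostnames, and ports are hypothetical placeholders, not details from this posting.

#!/usr/bin/env python3
"""Probe a list of big data services over TCP and report their status.

All hostnames and ports below are hypothetical placeholders.
"""
import socket

# Hypothetical cluster endpoints (host, port) to probe.
SERVICES = {
    "yarn-resourcemanager": ("rm.example.internal", 8088),
    "hdfs-namenode": ("nn.example.internal", 9870),
    "spark-history-server": ("spark.example.internal", 18080),
}

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in SERVICES.items():
        status = "UP" if is_reachable(host, port) else "DOWN"
        print(f"{name:25s} {host}:{port:<6d} {status}")

A script like this could run from cron or a CI job and feed the monitoring and on-call workflows listed under Skills.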
Skills and Qualifications:
Skills needed:
- Strong experience with data lake technologies: Spark, distributed file systems, YARN, and cloud services (preferably GCP / AWS).
- Strong experience with SQL engines such as Vertica, Dremio, or other big data SQL tools.
- Scripting knowledge: shell and Python.
- Experience with ETL and OLAP concepts in building highly scalable data pipelines (see the sketch after this list).
- Exposure to visualization systems such as Apache Superset or Tableau is a plus.
- Experience with Agile practices, data structures, and data analysis and wrangling tools and technologies.
- Familiarity with version control and relational databases.
- Strong experience in monitoring, debugging, and troubleshooting services.
- Experience in providing on-call support.
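
To illustrate the ETL pipeline work above, here is a minimal PySpark sketch of a daily extract-transform-load job; the input path, output path, and column names are hypothetical placeholders.

"""Minimal ETL sketch: read raw events from the data lake, roll them up
into daily KPIs, and write a reporting table a BI tool can query.
Paths and columns are hypothetical placeholders."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-kpi-etl").getOrCreate()

# Extract: raw event data from the data lake (hypothetical location).
events = spark.read.parquet("s3://example-datalake/raw/events/")

# Transform: aggregate to one row per day and event type
# (an OLAP-style rollup of the raw facts).
daily_kpis = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(
        F.count("*").alias("event_count"),
        F.countDistinct("user_id").alias("unique_users"),
    )
)

# Load: partitioned output that Superset or Tableau can point at.
(daily_kpis.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-datalake/reporting/daily_kpis/"))

spark.stop()

The same pattern extends naturally to the dashboards and KPIs listed under Responsibilities.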
Basic Qualifications:
- Bachelor's or Master's degree in Computer Science or a related field from a reputed institution
- 4 to 8 years of professional software experience, most of it at a product company
Preferred Qualifications:
- Proficient in one or more technologies such as:
- AWS, EMR
- Hadoop, Spark
- SQL
- Python
- Data Structures
- Experience working in a Linux-based environment
- Good communication and design skills
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1604875