Posted on: 25/11/2025
Description : Note : This position requires mandatory work from office, 5 days a week.
Job Location : Bangalore
Experience Required : 6 Years
Availability : Immediate Joiners Preferred
Key Technologies & Skills Required :
- Big Data Tools : Hadoop, Hive
- Programming Languages : Python, SQL
- Distributed Processing Frameworks : Apache Spark, PySpark
- Workflow Orchestration : Apache Airflow
- Version Control : Bitbucket
Job Description :
Key Responsibilities :
- Design and implement robust data pipelines using Spark and PySpark (a minimal pipeline sketch follows this list).
- Develop and optimize Hive queries for large-scale data processing.
- Automate workflows using Apache Airflow (an illustrative DAG sketch also follows this list).
- Write clean, maintainable, and efficient Python and SQL code.
- Collaborate with cross-functional teams to understand data requirements.
- Ensure data quality, integrity, and security across all systems.
- Manage code repositories using Bitbucket and follow best practices in version control.
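As an illustration of the pipeline work described above, here is a minimal PySpark sketch: read raw data, apply a simple transformation, and persist the result as a Hive table. The paths, database, table, and column names are assumptions for illustration only, not part of this posting.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal PySpark pipeline sketch: read raw data, clean and aggregate it,
# then write a partitioned Hive table. All paths, table names, and column
# names below are illustrative assumptions.
spark = (
    SparkSession.builder
    .appName("daily_orders_pipeline")   # hypothetical job name
    .enableHiveSupport()
    .getOrCreate()
)

# Read raw events from an assumed Parquet landing zone.
raw = spark.read.parquet("/data/landing/orders/")

# Basic cleansing and daily aggregation.
daily = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date", "region")
       .agg(
           F.sum("amount").alias("total_amount"),
           F.count("*").alias("order_count"),
       )
)

# Persist the result as a partitioned Hive table (assumed database/table name).
(daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.daily_orders"))
```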
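And a minimal Airflow sketch of the kind of workflow automation mentioned above: a daily DAG that submits the PySpark job. The DAG id, schedule, and spark-submit command are illustrative assumptions.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Minimal Airflow DAG sketch: run the PySpark job on a daily schedule.
# Owner, schedule, paths, and the spark-submit command are assumptions.
default_args = {
    "owner": "data-engineering",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit /opt/jobs/daily_orders_pipeline.py",
    )
```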
Preferred Qualifications :
- Strong understanding of distributed computing and data architecture.
- Hands-on experience with Hadoop ecosystem tools.
- Excellent problem-solving and debugging skills.
- Ability to work in a fast-paced, collaborative environment.
- Strong communication and documentation skills.
Posted in : Data Engineering
Functional Area : Big Data / Data Warehousing / ETL
Job Code : 1579486