Posted on: 27/11/2025
Description :
We are looking for an experienced Big Data Engineer with strong hands-on expertise in Big Data ecosystem tools, database engineering, and ETL pipeline development.
The ideal candidate should have strong analytical and problem-solving skills along with expertise in performance tuning and scheduling tools.
Key Responsibilities :
- Design, develop, and optimize scalable Big Data pipelines.
- Work closely with cross-functional teams on data acquisition, transformation, and processing.
- Build ETL workflows and perform data ingestion and processing using the Hadoop ecosystem.
- Build and maintain data solutions ensuring performance, scalability, and reliability.
- Monitor, troubleshoot, and tune data pipelines to ensure optimal performance.
Mandatory Skills :
- Big Data / Hadoop Technologies: Hive, HQL, HDFS
- Programming: Python, PySpark, SQL (strong query-writing skills)
- Schedulers: Control-M or equivalent scheduler
- Database & ETL: Strong experience with SQL Server, Oracle, or similar
- ETL pipeline development & performance tuning
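For context only, a minimal PySpark sketch of the kind of Hive-based ETL step this role involves. The table and column names (sales_raw, sales_daily, order_date, amount) are hypothetical placeholders, not part of this posting.

    # Minimal PySpark ETL sketch: read a raw Hive table, aggregate, write back.
    # All table/column names below are hypothetical examples.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("daily-sales-etl")
        .enableHiveSupport()  # enables reading/writing Hive tables via the metastore
        .getOrCreate()
    )

    # Ingest: load the raw Hive table.
    raw = spark.table("sales_raw")

    # Transform: drop malformed rows, then aggregate amounts per day.
    daily = (
        raw.filter(F.col("amount").isNotNull())
           .groupBy(F.to_date("order_date").alias("day"))
           .agg(F.sum("amount").alias("total_amount"))
    )

    # Load: overwrite the summary table in Hive.
    daily.write.mode("overwrite").saveAsTable("sales_daily")

    spark.stop()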
Preferred (Good to Have) :
- GCP Services: BigQuery, Cloud Composer, Dataproc, GCP cloud architecture
- Experience in Agile delivery methodology
- Terraform and infrastructure-as-code (IaC) knowledge
Posted in: Data Analytics & BI
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1581487