Big Data Engineer - Hadoop/Spark

BeGig
Bangalore
3 - 5 Years
Rating: 4.4 (11+ Reviews)

Posted on: 17/12/2025

Job Description

Role Overview:

As a Big Data Engineer, you will:

- Build ETL/ELT pipelines for large, streaming, or unstructured datasets.

- Design distributed data architectures using modern big-data stacks.

- Optimize query performance, data partitioning, and pipeline throughput.

- Work with data scientists and analysts to support ML workflows.

- Ensure reliability, monitoring, and scalability of data systems.

Technical Requirements & Skills:

- Experience: 3+ years in big-data engineering.

- Tools: Spark, Hadoop, Kafka, Flink, Airflow, dbt.

- Warehouses: BigQuery, Snowflake, Redshift, or Databricks.

- Languages: Python, SQL, Scala (bonus).

- Concepts: Distributed systems, data lakes, streaming pipelines.

What We're Looking For:

- Engineer who understands scaling challenges and distributed compute.

- Strong debugging and pipeline optimization skills.

- Comfortable working with massive datasets and complex pipelines.

Why Join Us:

- Impact: Build the data backbone for analytics and AI teams.

- Flexibility: Data engineering roles across enterprise and startup ecosystems.

- Network: Join a community of large-scale data infrastructure experts.



Posted by

Kishan

TRE at BeGig

Last Active: 18 Dec 2025

Job Views: 14
Applications: 20
Recruiter Actions: 0

Functional Area

Big Data / Data Warehousing / ETL

Job Code

1591620