Posted on: 17/12/2025
Description:
Role Overview:
As a Big Data Engineer, you will:
- Build ETL/ELT pipelines for large, streaming, or unstructured datasets.
- Design distributed data architectures using modern big-data stacks.
- Optimize query performance, data partitioning, and pipeline throughput.
- Work with data scientists and analysts to support ML workflows.
- Ensure reliability, monitoring, and scalability of data systems.
Technical Requirements & Skills:
- Experience: 3+ years in big-data engineering.
- Tools: Spark, Hadoop, Kafka, Flink, Airflow, dbt.
- Warehouses: BigQuery, Snowflake, Redshift, or Databricks.
- Languages: Python, SQL, Scala (bonus).
- Concepts: Distributed systems, data lakes, streaming pipelines.
What We're Looking For:
- An engineer who understands scaling challenges and distributed compute.
- Strong debugging and pipeline optimization skills.
- Comfortable working with massive datasets and complex pipelines.
Why Join Us:
- Impact: Build the data backbone for analytics and AI teams.
- Flexibility: Data engineering roles across enterprise and startup ecosystems.
- Network: Join a community of large-scale data infrastructure experts.
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1591620