Posted on: 13/07/2025
Role Overview:
As a Big Data Engineer, you will be responsible for building robust data pipelines, developing scalable data processing systems, and optimizing data workflows across distributed environments.
You will collaborate closely with data scientists, analysts, and software engineers to ensure efficient data flow and accessibility across platforms.
Key Responsibilities:
- Design, develop, and manage scalable data pipelines using big data technologies.
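As a rough illustration of the kind of pipeline work this involves, here is a minimal PySpark batch sketch; the storage paths and column names are hypothetical placeholders, not part of this role's actual stack:

# Minimal PySpark batch pipeline sketch: read raw events, clean them,
# aggregate, and write a partitioned Parquet output.
# Paths and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-events-pipeline").getOrCreate()

# Ingest raw JSON events from a data lake location.
raw = spark.read.json("s3://example-bucket/raw/events/")

# Basic cleaning: drop malformed rows and derive a date column.
clean = (
    raw.dropna(subset=["user_id", "event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Aggregate daily event counts per user.
daily = clean.groupBy("event_date", "user_id").agg(
    F.count("*").alias("event_count")
)

# Write output partitioned by date for efficient downstream queries.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"
)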
Required Skills & Qualifications:
- 3+ years of hands-on experience in Big Data engineering.
- Proficiency in technologies such as Hadoop, Spark, Hive, Kafka, Flink, or Presto.
- Strong programming/scripting skills in Python, Java, or Scala.
- Experience with cloud-based data platforms (AWS EMR, GCP BigQuery, Azure Data Lake).
- Familiarity with data warehousing and distributed computing systems.
- Solid understanding of SQL, data modeling, and query optimization.
- Experience with workflow orchestration tools like Airflow, Luigi, or Dagster (see the sketch after this list).
- Knowledge of version control (Git) and CI/CD pipelines for data deployments.
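To illustrate the orchestration experience noted above, a minimal Airflow sketch follows (Airflow 2.x API assumed; the DAG id, schedule, and spark-submit command are hypothetical):

# Minimal Airflow DAG sketch: schedule a daily Spark job.
# Assumes Airflow 2.4+ (which accepts the `schedule` argument).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Hypothetical job path; in practice this might be a SparkSubmitOperator.
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit /opt/jobs/daily_events_pipeline.py",
    )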
Preferred Skills:
- Experience with real-time streaming data pipelines using Kafka, Flink, or Kinesis.
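As a sketch of what such a streaming pipeline can look like, here is a minimal Spark Structured Streaming job consuming from Kafka; the broker address and topic are hypothetical, and the spark-sql-kafka connector package is assumed to be available:

# Minimal streaming sketch: consume events from Kafka with Spark
# Structured Streaming. Broker address and topic name are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Subscribe to a Kafka topic; record values arrive as raw bytes.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Decode the payload to a string column for downstream processing.
decoded = stream.select(F.col("value").cast("string").alias("payload"))

# Write to a console sink for inspection; a real job would target a
# durable sink (e.g. Kafka, Delta, or Parquet) with checkpointing.
query = decoded.writeStream.format("console").outputMode("append").start()
query.awaitTermination()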
What You'll Get:
- Work on high-volume data infrastructure projects with global companies.
- Flexible remote work and performance-based culture.
- Opportunity to architect data solutions for cutting-edge applications.
- Access to the HYI.AI network for collaboration, growth, and career advancement.
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1512252