Posted on: 24/08/2025
Job Summary:
We are looking for an experienced Big Data Engineer to design, build, and optimize large-scale data ingestion and transformation pipelines using frameworks such as Hadoop, Spark, and Kafka, and to deliver scalable, secure, and cost-effective data solutions on cloud platforms.
Key Responsibilities:
- Develop and optimize data ingestion, transformation, and integration workflows using technologies such as Hadoop, Spark, Kafka, and others.
- Collaborate with data scientists and analysts to understand data requirements and deliver accessible, clean, and well-organized data sets.
- Monitor, troubleshoot, and optimize performance of big data infrastructure and processes.
- Implement data security, privacy, and compliance standards.
- Work with cloud platforms like AWS, Azure, or GCP to build scalable and cost-effective data solutions.
- Automate data workflows and create reusable components to improve efficiency.
- Stay up to date with the latest big data technologies and best practices.
- Document data processes, pipeline architecture, and operational procedures.
Key Skills & Qualifications:
- 3+ years of experience in big data engineering or data pipeline development.
- Strong programming skills in Python, Java, or Scala.
- Hands-on experience with big data frameworks such as Apache Hadoop, Spark, Kafka, and Flink.
- Proficiency with data storage technologies such as HDFS, NoSQL databases (Cassandra, HBase), and relational databases.
- Experience with cloud services (AWS EMR, Azure HDInsight, Google Cloud Dataproc).
- Solid understanding of data modeling, ETL/ELT processes, and data warehousing concepts.
- Familiarity with containerization and orchestration tools like Docker and Kubernetes is a plus.
- Strong problem-solving skills and ability to work in a fast-paced environment.
- Excellent communication and teamwork skills.
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1534349