hirist

Job Description

Description:

Objectives of this role:

- Designing, developing, and implementing scalable and efficient data processing pipelines using Big Data technologies.

- Collaborating with data architects, data scientists, and business analysts to understand and translate data requirements into technical solutions.

- Developing and optimising ETL (Extract, Transform, Load) processes to ingest and transform large volumes of data from multiple sources.

- Implementing data integration solutions to ensure seamless data flow and accessibility across different platforms and systems.

- Building and maintaining data warehouses, data lakes, and data marts to store and organise structured and unstructured data.

- Designing and developing data models, schemas, and structures to support business analytics and reporting requirements.

- Monitoring and optimising the performance of Big Data applications and infrastructure to ensure reliability, scalability, and efficiency.

- Troubleshooting and resolving data processing, quality, and system performance issues.
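The pipeline responsibilities above follow the classic extract-transform-load pattern. A minimal sketch in plain Python, assuming in-memory records standing in for a real source and warehouse (in production this would typically be expressed with Spark DataFrames or a similar framework; all function, record, and field names here are hypothetical):

```python
# Illustrative ETL sketch only — field names and quality rules are invented.

def extract(rows):
    """Extract: pull raw records from a source (here, an in-memory list)."""
    return list(rows)

def transform(rows):
    """Transform: apply basic data-quality rules and derive new fields."""
    out = []
    for r in rows:
        if r.get("id") is None:
            continue  # drop records missing the key — a simple quality rule
        out.append({
            "id": r["id"],
            "full_name": f'{r.get("first", "").strip()} {r.get("last", "").strip()}'.strip(),
        })
    return out

def load(rows, warehouse):
    """Load: upsert transformed records into the target store (a dict
    standing in for a warehouse table keyed by id)."""
    for r in rows:
        warehouse[r["id"]] = r
    return warehouse

raw = [
    {"id": 1, "first": " Ada ", "last": "Lovelace"},
    {"id": None, "first": "ghost", "last": "row"},  # dropped by transform
    {"id": 2, "first": "Grace", "last": "Hopper"},
]
warehouse = load(transform(extract(raw)), {})
```

The same three-stage shape scales from this toy version to distributed jobs: each stage stays a pure function over batches of records, which keeps the pipeline testable and composable.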

Your tasks:

- Develop and deploy data processing applications using Big Data frameworks such as Hadoop, Spark, Kafka, or similar technologies.

- Write efficient and optimised code in programming languages like Java, Scala, Python, or SQL to manipulate and analyse data.

- Implement data security and privacy measures to protect sensitive information and comply with regulatory requirements.

- Collaborate with cross-functional teams to integrate data solutions with existing systems and applications.

- Conduct testing and validation of data pipelines and analytical solutions to ensure accuracy, reliability, and performance.

- Document technical specifications, deployment procedures, and operational guidelines for data solutions.

- Stay updated on industry trends, emerging technologies, and Big Data development and analytics best practices.
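The testing-and-validation task above can be sketched as a small batch validator that checks schema, types, and duplicates before data is published downstream. The expected fields, rules, and thresholds below are illustrative assumptions, not any specific framework's API:

```python
# Hypothetical pipeline-output validator — schema and rules are invented.

EXPECTED_FIELDS = {"id": int, "amount": float}

def validate_batch(records):
    """Return a list of (row_index, problem) tuples; an empty list means
    the batch passed validation."""
    problems = []
    seen_ids = set()
    for i, rec in enumerate(records):
        # Schema and type checks against the expected field spec.
        for field, ftype in EXPECTED_FIELDS.items():
            if field not in rec:
                problems.append((i, f"missing field {field!r}"))
            elif not isinstance(rec[field], ftype):
                problems.append((i, f"bad type for {field!r}"))
        # Uniqueness check on the key column.
        rid = rec.get("id")
        if rid in seen_ids:
            problems.append((i, f"duplicate id {rid!r}"))
        seen_ids.add(rid)
    return problems

good = [{"id": 1, "amount": 9.5}, {"id": 2, "amount": 0.0}]
bad = [{"id": 1, "amount": 9.5}, {"id": 1, "amount": "oops"}]
```

Running `validate_batch(good)` yields no problems, while the `bad` batch is flagged for both the type error and the duplicate key, so the pipeline can fail fast instead of propagating bad data.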

Required skills and qualifications:

- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.

- Demonstrable experience as a Big Data Developer, Data Engineer, or in a similar role, with at least 3 years working with Big Data technologies and platforms.

- Strong understanding of distributed computing principles and Big Data ecosystem components (e.g., Hadoop, Spark, Hive, HBase).

- Proficiency in programming languages and scripting (e.g., Java, Scala, Python, SQL) for data processing and analysis.

- Experience with cloud platforms and services for Big Data (e.g., AWS, Azure, Google Cloud).

- Solid understanding of database design, data warehousing, and data modelling concepts.

- Excellent problem-solving skills and analytical thinking, with the ability to troubleshoot complex data issues.

- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.

Preferred skills and qualifications:

- Master's degree in Data Science, Computer Science, or a related field.

- Relevant certification in Big Data technologies or related fields (e.g., Cloudera Certified Professional, AWS Certified Big Data, Hortonworks, Databricks).

- Experience with real-time data processing frameworks (e.g., Kafka, Flink).

- Knowledge of machine learning and data science concepts for Big Data analytics.

- Familiarity with DevOps practices and tools for continuous integration and deployment.

