Posted on: 06/04/2026
Company Overview:
Cerebre is a leading provider of advanced data analytics and AI solutions, empowering organizations across various sectors, including finance, healthcare, and retail, to unlock actionable insights from their data. We specialize in building robust data ecosystems that drive informed decision-making and optimize business performance. Our expertise lies in leveraging cutting-edge technologies to transform complex data into strategic assets.
Role Overview:
As a Graph Cypher Neo4j Data Modeling Data Engineer at Cerebre, you will be instrumental in designing, developing, and maintaining our graph database solutions using Neo4j.
You will collaborate closely with data scientists, business analysts, and other engineers to build scalable and efficient data models that address complex business challenges.
Your work will directly impact our ability to deliver innovative data-driven products and services to our clients, enabling them to gain a competitive edge in their respective industries.
Key Responsibilities:
- Design and implement robust and scalable graph data models in Neo4j to support various business requirements.
- Develop and maintain Cypher queries for data retrieval, analysis, and reporting, ensuring optimal performance.
- Build and optimize ETL pipelines to ingest, transform, and load data into Neo4j from diverse data sources.
- Collaborate with data scientists and business analysts to understand their needs and translate them into effective graph database solutions.
- Implement data governance and data quality measures to ensure the accuracy, consistency, and reliability of data in the graph database.
- Monitor and troubleshoot performance issues in the Neo4j environment, identifying and implementing solutions to improve efficiency.
- Contribute to the development of best practices and standards for graph data modeling and development within the organization.
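To give a concrete flavor of the Cypher and ETL work described above, here is a minimal, self-contained Python sketch. The `Employee` label, property names, and helper functions are illustrative assumptions for this posting, not a schema or codebase used at Cerebre:

```python
# Illustrative sketch only: the Employee label and properties below are
# hypothetical examples, not part of the job posting.

def build_merge_statement(label: str, key: str) -> str:
    """Build a parameterized Cypher MERGE for idempotent node upserts."""
    # MERGE matches on the key property; SET n += $props then applies
    # the remaining properties without overwriting the match key.
    return (
        f"MERGE (n:{label} {{{key}: $key}}) "
        f"SET n += $props"
    )

def rows_to_params(rows, key):
    """Transform raw ETL rows into (key, props) parameter pairs."""
    params = []
    for row in rows:
        props = {k: v for k, v in row.items() if k != key}
        params.append({"key": row[key], "props": props})
    return params

raw = [
    {"employee_id": 1, "name": "Ada", "dept": "Analytics"},
    {"employee_id": 2, "name": "Grace", "dept": "Engineering"},
]

query = build_merge_statement("Employee", "employee_id")
batch = rows_to_params(raw, "employee_id")
# query -> "MERGE (n:Employee {employee_id: $key}) SET n += $props"
```

In practice, each parameter pair would be executed against the database with the official `neo4j` Python driver, e.g. `session.run(query, key=p["key"], props=p["props"])`; using a parameterized MERGE rather than string-built CREATE statements keeps loads idempotent and avoids Cypher injection.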
Required Skillset:
- Demonstrated ability to design and implement graph data models using Neo4j, including experience with Cypher query language.
- Proven expertise in data warehousing concepts, data modeling techniques, and database design principles.
- Strong proficiency in SQL and experience with relational database systems.
- Hands-on experience with ETL processes and data integration tools.
- Ability to write clean, efficient, and well-documented Python code for data manipulation and automation.
- Solid understanding of data governance principles and data quality management practices.
- Excellent communication and collaboration skills, with the ability to effectively communicate technical concepts to both technical and non-technical audiences.
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Experience with cloud computing platforms (e.g., AWS, Azure, GCP) is a plus.