
Job Description

At Ennoventure, we are redefining the fight against counterfeit goods with our groundbreaking technology.

Backed by key investors like Fenice Investment Group and Tanglin Venture Partners, we are ready to embark on the next phase of our journey.

Our aim? To build a world where authenticity reigns, ensuring every product and experience is genuine.

Here, innovation moves fast, collaboration fuels success, and your growth isn't just encouraged; it's inevitable.

As a Data Engineer, you'll take the lead in designing, building, and optimizing data pipelines, storage solutions, and infrastructure for scalable data applications.

You will collaborate closely with cross-functional teams to ensure data quality, integration, and performance in modern cloud-based architectures, helping to create impactful, next-gen products.

Your role will also involve transforming ideas into action, optimizing systems, and pushing technical boundaries.

If you are ready to break new ground and revolutionize the tech/research landscape, this is the opportunity for you.

What will you do?

- Good understanding of the application of modern data architectures - Data Lake, Data Warehouse, and Data Lakehouse, as well as Data Fabric and Data Mesh concepts.

- In-depth expertise in cloud platforms - AWS, Azure (including IaaS/PaaS/SaaS service models).

- Proficient in multi-cloud and hybrid-cloud platforms.

- Good understanding of data storage, application integration, open file formats, and data processing.

- Experience in orchestrating end-to-end data engineering infrastructure for intricate and large-scale applications.

- Collaborate with data scientists to translate model requirements into optimized data pipelines, ensuring data quality, processing, and integration.

- Define and refine performance benchmarks, and optimize data infrastructure to achieve peak correctness, availability, cost efficiency, scalability, and robustness.

- Expertise in data engineering architecture and frameworks, including batch and stream processing with the Hadoop ecosystem, data warehouse/data lake platforms, and Python/PySpark programming.

- Data Pipelines & Infrastructure: Take full ownership of building and maintaining ETL/ELT pipelines to ensure data is collected, transformed, and made available for real-time analytics (a minimal sketch follows this list).

- Design and implement systems that power customer-facing analytics with strong performance and a good user experience.

- Build and manage large, complex data sets to meet functional and non-functional business needs.

- Analyze and integrate disparate systems to provide timely and accurate information to visualization teams and business stakeholders.

- Optimize and troubleshoot data pipelines for performance, reliability, and scalability.

- Ensure data integrity and quality by implementing validation and testing strategies.

- Create and implement internal process improvements, such as redesigning infrastructure for scalability, improving data delivery, and automating manual processes.

- Data Observability: Track and manage the lifecycle of data paths across systems using data observability tools to measure and ensure quality, reliability, and lineage.
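
The pipeline and validation responsibilities above follow a fairly standard ETL pattern. Below is a minimal PySpark sketch of that pattern; the S3 paths, column names, and the 5% bad-row tolerance are hypothetical assumptions for illustration, not a description of Ennoventure's actual stack.

```python
# Minimal ETL sketch in PySpark. All paths, columns, and thresholds
# are hypothetical placeholders, not a real production configuration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw events from an open file format (Parquet here).
raw = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: normalize types, derive a partition column, drop bad rows.
orders = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))
)

# Validate: a simple quality gate before publishing downstream
# (assumed tolerance: at most 5% of rows may be dropped as invalid).
total, kept = raw.count(), orders.count()
if total > 0 and kept / total < 0.95:
    raise ValueError(f"Quality gate failed: kept only {kept}/{total} rows")

# Load: write a partitioned, query-ready table for analytics.
(orders
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders/"))
```

In practice a job like this would run under an orchestrator and emit metrics to a data observability tool, so that quality, reliability, and lineage can be tracked as described in the last bullet above.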

What do we look for at Ennoventure?

- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.

- Strong proficiency in Python, SQL, and PySpark for data manipulation and transformation.

- In-depth knowledge of Big Data technologies such as Hadoop, MapReduce, and Spark.

- Experience with cloud platforms such as AWS and Azure (familiarity with services like Redshift).

- Solid understanding of ETL/ELT processes, data pipelines, and data integration tools.

- Experience with modern data warehousing technologies and architectures, such as Redshift, Snowflake, BigQuery, ClickHouse, etc.

- Strong knowledge of data modelling, including dimensional modelling, Hive, and star schema (a small example follows this list).

- Familiarity with data visualization tools such as Tableau, Power BI, Superset, and Metabase.

- Hands-on experience with data processing & orchestration frameworks: Spark, Temporal, Kafka, PySpark, Presto.

- Experience with StarRocks and NoSQL databases like MongoDB, Cassandra, or HBase is a plus.

- Familiarity with modern data lake concepts and tools (e.g., AWS Lake Formation, Azure Data Lake).

- Familiarity with containerization (e.g., Docker, Kubernetes) and CI/CD pipelines.

- Excellent problem-solving skills and analytical thinking.

- Ability to work in a fast-paced and constantly evolving environment.
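
As a small illustration of the dimensional modelling and star schema knowledge listed above, here is a PySpark sketch of a typical star-schema query. The tables (warehouse.fact_sales, warehouse.dim_product) and their columns are hypothetical examples, not references to any real dataset.

```python
# Star-schema query sketch in PySpark: join a fact table to a
# dimension table, then aggregate. Table and column names are
# hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star_schema_demo").getOrCreate()

# Fact table: one row per sale, linked to dimensions by surrogate keys.
fact_sales = spark.table("warehouse.fact_sales")
# Dimension table: descriptive attributes for each product.
dim_product = spark.table("warehouse.dim_product")

revenue_by_category = (
    fact_sales
    .join(dim_product, on="product_key", how="inner")
    .groupBy("category")
    .agg(
        F.sum("sale_amount").alias("total_revenue"),
        F.countDistinct("order_id").alias("order_count"),
    )
    .orderBy(F.desc("total_revenue"))
)

revenue_by_category.show()
```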


