hirist

Data Engineer - ETL Pipelines Implementation

Ace Recruit
4 - 6 Years
Multiple Locations

Posted on: 24/03/2026

Job Description

Company Overview :

Ace Recruit is a leading talent acquisition firm specializing in connecting top-tier professionals with high-growth companies across various sectors, including technology, finance, and consulting. We partner with organizations ranging from innovative startups to established enterprises, providing customized recruitment solutions to meet their unique hiring needs.


Our expertise lies in identifying and attracting exceptional candidates who can drive business success and contribute to a thriving company culture.

Role Overview :

As a Data Engineer at Ace Recruit, you will play a crucial role in building and maintaining the data infrastructure that powers our recruitment operations and provides valuable insights to our internal teams.


You will collaborate closely with data scientists, analysts, and software engineers to design, develop, and deploy scalable data pipelines, ensuring data quality and accessibility for informed decision-making.


Your work will directly impact our ability to efficiently source, assess, and place top talent, ultimately contributing to the success of our clients and candidates.

Key Responsibilities :

- Design and implement robust ETL pipelines to ingest, transform, and load data from various sources into our data warehouse.

- Develop and maintain scalable data infrastructure using cloud-based technologies such as AWS and GCP.

- Build and optimize data models and schemas to support analytical and reporting requirements.

- Monitor data quality and performance, identifying and resolving data-related issues proactively.

- Collaborate with data scientists and analysts to understand their data needs and provide solutions.

- Implement data security and governance policies to ensure data privacy and compliance.

- Automate data processing tasks to improve efficiency and reduce manual effort.
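As a rough illustration of the first responsibility above, the sketch below shows a minimal extract-transform-load flow in Python: candidate records are parsed from CSV, normalized, and loaded into a SQLite table standing in for the data warehouse. All names here (the `candidates` table, the CSV fields) are hypothetical, not from the posting.

```python
import csv
import io
import sqlite3

# Hypothetical source data: a CSV export of candidate records.
RAW_CSV = """name,experience_years
alice smith,5
bob jones,4
"""

def extract(raw: str):
    """Extract: parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Transform: title-case names, cast experience to int."""
    return [
        (row["name"].title(), int(row["experience_years"]))
        for row in rows
    ]

def load(records, conn):
    """Load: write transformed records into a warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS candidates "
        "(name TEXT, experience_years INTEGER)"
    )
    conn.executemany("INSERT INTO candidates VALUES (?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
rows = conn.execute(
    "SELECT name, experience_years FROM candidates ORDER BY name"
).fetchall()
print(rows)  # [('Alice Smith', 5), ('Bob Jones', 4)]
```

In a production pipeline the same three stages would typically be orchestrated and scheduled, with SQLite replaced by a warehouse such as BigQuery and the CSV replaced by the real upstream sources.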

Required Skillset :

- Proven ability to design, develop, and maintain ETL pipelines using Python and SQL.

- Demonstrated expertise in working with big data technologies such as Spark, Kafka, and Hadoop.

- Hands-on experience with cloud platforms like AWS and GCP, including services such as S3, EC2, and BigQuery.

- Strong understanding of database technologies, including PostgreSQL and MongoDB.

- Experience with the Apache HTTP Server.

- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.

- Bachelor's degree in Computer Science, Engineering, or a related field.

- Ability to adapt to a fast-paced, dynamic environment and learn new technologies quickly.
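To illustrate the kind of Python pipeline work and data-quality monitoring described above, here is a small, self-contained sketch of a pre-load validation step; the field names and rules are hypothetical, chosen only to show the pattern of separating valid rows from flagged ones.

```python
# Hypothetical data-quality check of the kind a pipeline might run
# before loading: flag rows with missing fields or out-of-range values.

def validate(rows):
    """Return (valid_rows, issues) for a batch of candidate records."""
    valid, issues = [], []
    for i, row in enumerate(rows):
        if not row.get("name"):
            issues.append((i, "missing name"))
        elif (not isinstance(row.get("experience_years"), int)
              or row["experience_years"] < 0):
            issues.append((i, "bad experience_years"))
        else:
            valid.append(row)
    return valid, issues

batch = [
    {"name": "Alice", "experience_years": 5},
    {"name": "", "experience_years": 3},
    {"name": "Bob", "experience_years": -1},
]
valid, issues = validate(batch)
print(len(valid), issues)
# 1 [(1, 'missing name'), (2, 'bad experience_years')]
```

Routing flagged rows to a quarantine table rather than dropping them silently is one common way such checks support the proactive issue-resolution mentioned in the responsibilities.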
