Data Engineer - Python/SQL/ETL

DigitalCube Consultancy
Noida
3 - 6 Years
Rating: 4.3 (2+ Reviews)

Posted on: 05/10/2025

Job Description :

Job Title : Data Engineer

Experience Required : 3-6 Years

Responsibilities :

- Design, develop, and maintain scalable data pipelines and ETL processes to collect, process, and store data from various sources.

- Work with Apache Spark to process large datasets in a distributed environment, ensuring optimal performance and scalability.

- Develop and optimize Spark jobs and data transformations using Scala for large-scale data processing (an illustrative sketch follows this list).

- Collaborate with data analysts and other stakeholders to ensure data pipelines meet business and technical requirements.

- Integrate data from different sources (databases, APIs, cloud storage, etc.) into a unified data platform.

- Ensure data quality, consistency, and accuracy by building robust data validation and cleansing mechanisms.

- Use cloud platforms (AWS, Azure, or GCP) to deploy and manage data processing and storage solutions.

- Automate data workflows and tasks using appropriate tools and frameworks.

- Monitor and troubleshoot data pipeline performance, optimizing for efficiency and cost-effectiveness.

- Implement data security best practices, ensuring data privacy and compliance with industry standards.

- Stay updated with new data engineering tools and technologies to continuously improve the data infrastructure.
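
For context on the day-to-day work, the Spark-with-Scala responsibility above might look like the following minimal batch ETL sketch. This is an illustrative assumption, not part of the role description: the S3 bucket, paths, and column names (order_id, amount, created_at) are hypothetical.

    import org.apache.spark.sql.{SparkSession, functions => F}

    object OrdersEtl {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("orders-etl")
          .getOrCreate()

        // Ingest raw CSV events from cloud storage (hypothetical bucket/path).
        val raw = spark.read
          .option("header", "true")
          .csv("s3://example-bucket/raw/orders/")

        // Validate and cleanse: drop rows missing keys, normalise types, deduplicate.
        val clean = raw
          .filter(F.col("order_id").isNotNull && F.col("amount").isNotNull)
          .withColumn("amount", F.col("amount").cast("double"))
          .dropDuplicates("order_id")

        // Transform: aggregate into a daily revenue summary for analysts.
        val daily = clean
          .groupBy(F.to_date(F.col("created_at")).as("order_date"))
          .agg(F.count(F.lit(1)).as("orders"), F.sum("amount").as("revenue"))

        // Load: write partitioned Parquet back to the data lake.
        daily.write
          .mode("overwrite")
          .partitionBy("order_date")
          .parquet("s3://example-bucket/curated/daily_orders/")

        spark.stop()
      }
    }

In practice, a job of this shape would point at the team's actual storage and be scheduled by an orchestrator such as Apache Airflow (mentioned under Preferred Qualifications below).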

Requirements :

- 3 to 6 years of experience as a Data Engineer or in an equivalent role.

- Strong experience with Apache Spark and Scala for distributed data processing and big data workloads.

- Basic knowledge of Python and its application in Spark (PySpark) for writing efficient data transformations and processing jobs.

- Proficiency in SQL for querying and manipulating large datasets.

- Experience with cloud data platforms, preferably AWS (e.g., S3, EC2, EMR, Redshift) or other cloud-based solutions.

- Strong knowledge of data modeling, ETL processes, and data pipeline orchestration.

- Familiarity with containerization (Docker) and cloud-native tools for deploying data solutions.

- Knowledge of data warehousing concepts and experience with tools like AWS Redshift, Google BigQuery, or Snowflake is a plus.

- Experience with version control systems such as Git.

- Strong problem-solving abilities and a proactive approach to resolving technical challenges.

- Excellent communication skills and the ability to work collaboratively within cross-functional teams.

Preferred Qualifications :

- Experience with additional programming languages beyond Python and Scala, such as Java, for data engineering tasks.

- Familiarity with orchestration tools like Apache Airflow, Luigi, or similar frameworks.

- Basic understanding of data governance, security practices, and compliance regulations.

