Job Description

About the Role:

We are looking for an experienced Data Engineer with 5-7 years of hands-on experience building and optimizing scalable data pipelines and architectures. The ideal candidate will have strong expertise in data wrangling, ETL/ELT processes, data warehousing, and cloud-based data platforms.

As a Data Engineer, you will collaborate closely with data scientists, analysts, and other engineers to ensure the availability, reliability, and accessibility of clean and structured data across the organization.

Key Responsibilities:

- Design, build, and maintain robust, scalable, and high-performance data pipelines to support data analytics, reporting, and machine learning workflows.

- Develop and optimize ETL/ELT processes for structured and unstructured data.

- Work with large datasets across various storage and processing systems including data lakes and data warehouses.

- Implement and manage data models, schemas, and data governance policies.

- Collaborate with stakeholders to understand data requirements and translate them into technical solutions.

- Monitor pipeline performance and troubleshoot data quality or latency issues.

- Use best practices for version control, testing, and deployment of data pipeline components.

- Document data flows, definitions, and technical architecture.

Required Qualifications:

- Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.

- 5-7 years of experience as a Data Engineer, with a strong portfolio of data pipeline and infrastructure work.

- Proficiency in Python or Scala for data processing.

- Strong SQL skills for querying and data modeling (preferably PostgreSQL, MySQL, or SQL Server).

- Hands-on experience with modern data processing and orchestration frameworks such as Apache Spark, Apache Airflow, or Databricks.

- Experience with cloud data platforms (e.g., Amazon Redshift, Azure Synapse, Google BigQuery, or Snowflake).

- Knowledge of data warehousing principles and data modeling (star/snowflake schemas).

- Experience with technologies such as Apache Kafka, Delta Lake, or Apache Parquet is a plus.

- Familiarity with CI/CD pipelines, Docker, and version control tools such as Git.

Preferred Qualifications:

- Experience with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.

- Exposure to data governance and data quality frameworks (e.g., Great Expectations).

- Understanding of data security, encryption, and compliance standards (GDPR, HIPAA, etc.).

- Cloud certification such as AWS Certified Data Analytics, Microsoft Azure Data Engineer Associate, or Google Cloud Professional Data Engineer.

