hirist

Job Description


Key Responsibilities:

- Design, develop, and maintain scalable ETL/ELT pipelines using Python and PySpark.

- Build and optimize data workflows on the Databricks platform.

- Develop and manage data solutions on AWS (S3, Glue, Redshift, EMR, Lambda, etc.).

- Implement efficient data models and transformations using SQL.

- Work with structured and unstructured data from multiple sources.

- Ensure data quality, integrity, and performance optimization.

- Collaborate with Data Scientists, Analysts, and cross-functional teams to support analytics and reporting needs.

- Implement CI/CD pipelines and follow DevOps best practices for data engineering.

- Monitor and troubleshoot data pipelines and production issues.

- Ensure compliance with data governance and security standards.
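The pipeline and data-quality responsibilities above can be sketched in miniature. This is an illustrative extract-transform-load flow in plain Python, with an in-memory SQLite database standing in for a warehouse such as Redshift; the table, field names, and sample records are hypothetical, not part of this role.

```python
import sqlite3

# Illustrative ETL sketch -- schema and data are hypothetical examples.

def extract():
    # Stand-in for reading raw records from an upstream source (e.g. S3).
    return [
        {"id": 1, "amount": "120.50", "region": "us-east"},
        {"id": 2, "amount": "n/a",    "region": "us-west"},  # malformed record
        {"id": 3, "amount": "75.00",  "region": "us-east"},
    ]

def transform(rows):
    # Basic data-quality gate: drop rows whose amount is not numeric.
    clean = []
    for row in rows:
        try:
            clean.append((row["id"], float(row["amount"]), row["region"]))
        except ValueError:
            continue
    return clean

def load(rows, conn):
    # Stand-in for writing to a warehouse table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
```

In a production setting the same extract/transform/load shape would typically be expressed as PySpark DataFrame operations scheduled as Databricks jobs, with the quality gate enforced by explicit validation rules rather than a bare exception handler.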

Required Skills & Qualifications:

- 6-9 years of experience in Data Engineering.

- Strong programming skills in Python.

- Hands-on experience with PySpark and distributed data processing.

- Experience with Databricks (workspace management, Delta Lake, notebooks, job scheduling).

- Strong knowledge of AWS services (S3, Glue, Redshift, EMR, IAM, Lambda, CloudWatch).

- Proficiency in SQL and database optimization techniques.

- Experience working with large-scale data systems and big data technologies.

- Knowledge of data warehousing concepts and dimensional modeling.

- Experience with version control systems (Git).

- Strong problem-solving and analytical skills.

- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

