Posted on: 01/04/2026
Description:
Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines using Python and PySpark.
- Build and optimize data workflows on the Databricks platform.
- Develop and manage data solutions on AWS (S3, Glue, Redshift, EMR, Lambda, etc.).
- Implement efficient data models and transformations using SQL.
- Work with structured and unstructured data from multiple sources.
- Ensure data quality, integrity, and performance optimization.
- Collaborate with Data Scientists, Analysts, and cross-functional teams to support analytics and reporting needs.
- Implement CI/CD pipelines and follow DevOps best practices for data engineering.
- Monitor and troubleshoot data pipelines and production issues.
- Ensure compliance with data governance and security standards.
Required Skills & Qualifications:
- 6-9 years of experience in Data Engineering.
- Strong programming skills in Python.
- Hands-on experience with PySpark and distributed data processing.
- Experience with Databricks (workspace management, Delta Lake, notebooks, job scheduling).
- Strong knowledge of AWS services (S3, Glue, Redshift, EMR, IAM, Lambda, CloudWatch).
- Proficiency in SQL and database optimization techniques.
- Experience working with large-scale data systems and big data technologies.
- Knowledge of data warehousing concepts and dimensional modeling.
- Experience with version control systems (Git).
- Strong problem-solving and analytical skills.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1625331