Posted on: 20/01/2026
Description :
Job Title : Data Engineer
Experience : 5 to 9 years
Location : Remote
Job Summary :
- We are looking for a skilled Data Engineer with strong expertise in Python, PySpark, AWS services (Glue, Lambda), and Snowflake.
- The ideal candidate will design, build, and maintain scalable data pipelines, ensure efficient data integration, and enable advanced analytics and reporting across the organization.
Key Responsibilities :
- Design, develop, and optimize ETL/ELT pipelines using Python, PySpark, and AWS Glue (a minimal PySpark sketch follows this list).
- Implement data ingestion, transformation, and integration from diverse structured and unstructured sources.
- Work extensively with Snowflake for data modeling, performance tuning, and query optimization.
- Automate workflows and data processing using AWS Lambda and other AWS-native services (a Lambda sketch also follows this list).
- Ensure data quality, consistency, and security across data platforms.
- Collaborate with data scientists, analysts, and business teams to deliver scalable data solutions.
- Monitor, troubleshoot, and improve the performance of data pipelines.
- Maintain proper documentation of data flows, processes, and best practices.
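A minimal sketch of the ETL/ELT responsibility above, in plain PySpark: the bucket names, paths, and column names are illustrative placeholders, not details from this posting, and a production Glue job would add the awsglue job wrapper, error handling, and job bookmarks.

```python
# Sketch of a batch ETL step: ingest raw CSV from S3, clean and type it,
# write curated Parquet for analytics. All S3 paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest a raw structured source landed in S3.
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: cast the amount to a numeric type, drop rows missing a key,
# and stamp each row with the load date.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
       .withColumn("load_date", F.current_date())
)

# Write curated Parquet, partitioned by load date for downstream queries.
clean.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3://example-curated-bucket/orders/"
)
```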
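A sketch of the Lambda-driven automation mentioned above, assuming an S3 put-event trigger that starts a downstream Glue job; the Glue job name and the --source_path argument are hypothetical.

```python
# Sketch of Lambda-based workflow automation: an object landing in the raw
# bucket triggers a downstream Glue ETL job run.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    for record in event.get("Records", []):
        # Standard S3 event notification shape: bucket name and object key.
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Start the Glue job, pointing it at the newly landed object.
        glue.start_job_run(
            JobName="orders-etl",  # hypothetical Glue job name
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
    return {"status": "triggered"}
```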
Required Skills & Qualifications :
- Strong programming skills in Python and hands-on experience with PySpark.
- Expertise in AWS services such as Glue, Lambda, S3, CloudWatch, and IAM.
- Proficiency in Snowflake: data modeling, warehouse design, and query optimization (a Snowflake load sketch follows this list).
- Solid understanding of ETL/ELT concepts, data warehousing, and big data processing.
- Strong knowledge of SQL and performance tuning.
- Experience with version control (Git), CI/CD pipelines, and deployment best practices.
- Knowledge of data governance, security, and compliance.
- Excellent problem-solving, communication, and collaboration skills.
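As a sketch of the Snowflake work described above, the following loads curated Parquet from an external stage into a typed table; the connection parameters, stage, and table names are placeholders, assuming credentials are supplied via the environment.

```python
# Minimal Snowflake load sketch: COPY curated Parquet from an external stage
# into a table. All identifiers and credentials below are placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",   # hypothetical warehouse
    database="ANALYTICS",       # hypothetical database
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # MATCH_BY_COLUMN_NAME maps Parquet columns onto the table by name,
    # so the table can be modeled independently of file column order.
    cur.execute("""
        COPY INTO orders
        FROM @curated_stage/orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
finally:
    conn.close()
```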
Posted in : Data Engineering
Functional Area : Data Engineering
Job Code : 1603881