Posted on: 14/11/2025
Job Title: Data Engineer
Experience: 5 to 9 years
Location: Remote
Job Summary:
The ideal candidate will design, build, and maintain scalable data pipelines, ensure efficient data integration, and enable advanced analytics and reporting across the organization.
Key Responsibilities:
- Design, develop, and optimize ETL/ELT pipelines using Python, PySpark, and AWS Glue.
- Implement data ingestion, transformation, and integration from diverse structured and unstructured sources.
- Work extensively with Snowflake for data modeling, performance tuning, and query optimization.
- Automate workflows and data processing using AWS Lambda and other AWS-native services.
- Ensure data quality, consistency, and security across data platforms.
- Collaborate with data scientists, analysts, and business teams to deliver scalable data solutions.
- Monitor, troubleshoot, and improve the performance of data pipelines.
- Maintain proper documentation of data flows, processes, and best practices.
Required Skills & Qualifications:
- 5 to 9 years of proven experience as a Data Engineer or in a similar role.
- Strong programming skills in Python and hands-on experience with PySpark.
- Expertise in AWS services such as Glue, Lambda, S3, CloudWatch, and IAM.
- Proficiency in Snowflake data modeling, warehouse design, and query optimization.
- Solid understanding of ETL/ELT concepts, data warehousing, and big data processing.
- Strong knowledge of SQL and performance tuning.
- Experience with version control (Git), CI/CD pipelines, and deployment best practices.
- Knowledge of data governance, security, and compliance.
- Excellent problem-solving, communication, and collaboration skills.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1575042