Posted on: 22/08/2025
Position: Big Data Engineer (Immediate Joiner)
Experience: 5+ Years
Location: Gurugram / Bangalore
Joining: Immediate
Job Summary:
We're seeking a Senior Big Data Engineer with 5+ years of experience designing, developing, and implementing large-scale data systems using Amazon Redshift, AWS, Apache Spark, and Scala.
The ideal candidate will have expertise in building data pipelines, data warehousing, and data processing applications.
Key Responsibilities:
Data Warehousing:
- Design, develop, and maintain large-scale data warehouses using Amazon Redshift.
- Optimize Redshift cluster performance, scalability, and cost-effectiveness.
Data Pipelines:
- Build and maintain data pipelines using Apache Spark, Scala, and AWS services like S3, Glue, and Lambda (a minimal sketch follows this list).
- Ensure data quality, integrity, and security across the data pipeline.
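To illustrate the kind of pipeline this role owns, here is a minimal Spark/Scala sketch that reads raw JSON events from S3, applies basic data-quality filtering, and writes partitioned Parquet to an S3 staging prefix from which a Redshift COPY can load. Bucket names, paths, and column names are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal illustrative pipeline: S3 JSON -> cleanse -> partitioned Parquet staging for Redshift COPY.
// All bucket names, paths, and columns below are hypothetical placeholders.
object EventsPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("events-pipeline")
      .getOrCreate()

    // Read raw JSON events from S3 (placeholder path).
    val raw = spark.read.json("s3a://example-raw-bucket/events/")

    // Basic data-quality filtering: drop records missing key fields and deduplicate.
    val cleaned = raw
      .filter(col("event_id").isNotNull && col("event_ts").isNotNull)
      .dropDuplicates("event_id")
      .withColumn("event_date", to_date(col("event_ts")))

    // Write partitioned Parquet to a staging prefix that a downstream Redshift COPY job can load.
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://example-staging-bucket/events_parquet/")

    spark.stop()
  }
}
```

Staging Parquet in S3 and loading it with COPY is one common Redshift ingestion pattern; a Spark-to-Redshift connector is another option depending on the team's setup.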
Data Processing:
- Develop and optimize data processing applications using Spark, Scala, and AWS services.
- Work with data scientists and analysts to develop predictive models and perform advanced analytics.
AWS Services:
- Leverage AWS services like S3, Glue, Lambda, and IAM to build scalable and secure data systems.
- Ensure data systems are highly available, scalable, and fault-tolerant.
Troubleshooting and Optimization:
- Troubleshoot and optimize data pipeline performance issues (see the tuning sketch below).
- Ensure data systems are optimized for cost, performance, and scalability.
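As a hedged example of the tuning work described above, the sketch below broadcasts a small dimension table to avoid a shuffle join and repartitions the output to control file sizes; the table names, join key, and partition count are assumptions for illustration only.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{broadcast, col}

// Illustrative tuning sketch; all paths, table shapes, and the join key are assumed.
object JoinTuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("join-tuning-sketch")
      .getOrCreate()

    val facts = spark.read.parquet("s3a://example-staging-bucket/events_parquet/") // large fact table
    val dims  = spark.read.parquet("s3a://example-staging-bucket/dim_users/")      // small dimension table

    // Broadcasting the small side turns a shuffle join into a map-side join,
    // avoiding movement of the large fact table across the network.
    val enriched = facts.join(broadcast(dims), Seq("user_id"))

    // Repartition before writing to balance task sizes and avoid a flood of small files.
    enriched
      .repartition(200, col("event_date"))
      .write
      .mode("overwrite")
      .parquet("s3a://example-analytics-bucket/enriched_events/")

    spark.stop()
  }
}
```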
Requirements:
Experience: 5+ years in big data engineering or a related field.
Technical Skills:
- Proficiency in Amazon Redshift, Apache Spark, and Scala.
- Experience with AWS services like S3, Glue, Lambda, and IAM.
- Knowledge of data processing frameworks like Spark and data storage solutions like S3 and Redshift.
Data Architecture: Strong understanding of data architecture principles and design patterns.
Problem-Solving: Excellent problem-solving skills and attention to detail.
Preferred Qualifications:
Certifications: AWS Certified Big Data – Specialty or similar certifications.
Machine Learning: Familiarity with machine learning frameworks like Spark MLlib or TensorFlow.
Agile Methodology: Experience working in agile development environments.
Data Governance: Experience with data governance, data quality, and data security.
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1533931