Job Summary:
We are looking for a Data Engineer with hands-on Hadoop and cloud experience to build, optimize, and maintain scalable data pipelines and ETL workflows on AWS, Azure, or GCP.
Key Responsibilities:
- Build and manage data pipelines and ETL workflows on cloud platforms such as AWS, Azure, or GCP.
- Develop and maintain scalable data solutions using cloud-native tools (e.g., AWS Glue, Azure Data Factory, GCP Dataflow).
- Optimize performance of data systems, including storage, processing, and querying.
- Collaborate with data scientists, analysts, and software engineers to deliver data solutions that meet business needs.
- Ensure data quality, security, and compliance with data governance standards.
- Monitor and troubleshoot data workflows and address production issues.
Required Skills & Qualifications:
- 3+ years of experience with the Hadoop ecosystem.
- Hands-on experience with one or more cloud platforms: AWS (preferred), Azure, or GCP.
- Strong programming skills in Python, Java, or Scala.
- Proficiency in SQL and working with relational and NoSQL databases.
- Experience with big data tools: Hive, Pig, Spark, HBase, Oozie, Kafka, etc.
- Familiarity with CI/CD pipelines, version control (Git), and DevOps practices.
Functional Area: Data Engineering
Job Code: 1512031