Posted on: 18/09/2025
Job Description:
Key Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL workflows.
- Develop and optimize data models for analytics and reporting.
- Collaborate with data scientists, analysts, and business teams to ensure data availability and quality.
- Work with structured and unstructured data across multiple sources.
- Implement best practices for data governance, security, and compliance.
- Optimize performance of large-scale data systems and troubleshoot data-related issues.
- Deploy and manage data workflows on cloud platforms (AWS, Azure, or GCP).
Required Qualifications:
- 7+ years of experience as a Data Engineer or in a similar role.
- Strong programming skills in Python, SQL, and/or Scala.
- Hands-on experience with big data tools (Spark, Hadoop, Kafka).
- Experience with cloud-based data services (AWS Redshift, Azure Data Factory, GCP BigQuery, etc.).
- Proficiency in relational and NoSQL databases.
- Strong knowledge of ETL frameworks, data warehousing, and data lake architectures.
- Excellent problem-solving skills and ability to work in Agile environments.
Preferred Qualifications:
- Experience with data orchestration tools (Airflow, Luigi, Prefect).
- Familiarity with containerization and DevOps tools (Docker, Kubernetes, CI/CD pipelines).
- Knowledge of streaming data pipelines and real-time processing.
- Exposure to machine learning data pipelines.
- Strong communication and collaboration skills.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1548193