Posted on: 25/09/2025
Responsibilities:
- Design, develop, and maintain scalable data platforms and pipelines.
- Build data ingestion frameworks to collect data from diverse sources (structured and unstructured).
- Ensure high availability and performance of data systems.
- Automate data quality checks and monitoring.
- Collaborate with cross-functional teams to define data architecture and strategy.
- Implement data governance, security, and compliance practices.
- Optimize data workflows for performance, scalability, and cost-efficiency.
Requirements:
- Strong programming skills in Python, Java, or Scala.
- Hands-on experience with big data technologies such as Apache Spark, Kafka, Flink, and Hadoop.
- Experience with data orchestration tools like Airflow, Dagster, or Prefect.
- Solid understanding of cloud platforms (e.g., AWS, GCP, or Azure) and cloud-native data services (e.g., S3, BigQuery, Redshift).
- Experience with CI/CD for data pipelines and infrastructure as code (e.g., Terraform).
- Strong knowledge of SQL, with experience in both relational and NoSQL databases.
- Understanding of data modeling, data warehousing, and ETL/ELT processes.
- Familiarity with containerization (Docker, Kubernetes) is a plus.
- Experience with monitoring and logging tools for data platforms (e.g., Prometheus, Grafana, ELK Stack).
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1551249