Posted on: 15/11/2025
About the Role :
We are seeking a highly skilled Data Engineer to design, build, and maintain scalable data pipelines and platforms. The ideal candidate has strong experience in Python, PySpark, AWS, Databricks, SQL, Kubernetes, and Jenkins, and a passion for working with large datasets and cloud-native technologies.
Key Responsibilities :
- Develop, optimize, and maintain ETL/ELT data pipelines using Python and PySpark (a minimal sketch follows this list).
- Work extensively on Databricks for data processing, notebook development, and workflow orchestration.
- Build and manage data workflows that ensure data quality, reliability, and performance.
- Write efficient and complex SQL queries for data transformation and analysis.
- Deploy and manage data applications using Kubernetes for container orchestration.
- Implement CI/CD pipelines using Jenkins to automate code deployment and testing.
- Collaborate with Data Scientists, Analysts, and Product teams to deliver high-quality data solutions.
- Ensure data security, governance, and best engineering practices across the data ecosystem.
- Troubleshoot performance issues and optimize workflows across big data systems.
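For a concrete sense of the pipeline work described above, the sketch below shows a minimal PySpark ETL job of the kind this role involves. The bucket path, column names, and table name are illustrative assumptions rather than details from this posting, and the Delta write assumes a Databricks-style environment where Delta Lake is available.

# Minimal ETL sketch: read raw events, clean them, and publish a
# partitioned summary table. All names and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_etl").getOrCreate()

# Extract: load raw JSON events from an assumed S3 landing zone.
raw = spark.read.json("s3://example-bucket/landing/events/")

# Transform: drop malformed rows and derive a partition date.
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# The same kind of transformation expressed in SQL, since the role
# calls for both DataFrame- and SQL-based work.
clean.createOrReplaceTempView("clean_events")
daily_counts = spark.sql("""
    SELECT event_date, event_type, COUNT(*) AS events
    FROM clean_events
    GROUP BY event_date, event_type
""")

# Load: write a partitioned Delta table (Delta is the default table
# format on Databricks).
(daily_counts.write.format("delta").mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("analytics.daily_event_counts"))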
Good to Have :
- Experience with data lakehouse architecture.
- Knowledge of workflow orchestration tools (Airflow, Azure Data Factory, etc.); a small Airflow sketch follows this list.
- Exposure to DevOps and monitoring tools (Prometheus, Grafana, etc.).
- Understanding of data governance and security best practices.
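As an illustration of the orchestration tools mentioned above, here is a minimal Airflow DAG sketch using the TaskFlow API (Airflow 2.x). The DAG id, schedule, and task bodies are assumptions for illustration only.

# Hypothetical daily pipeline: extract, validate, then load.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def daily_events_pipeline():
    @task
    def extract() -> str:
        # In practice this might trigger a Databricks job run and
        # hand a path or run id to downstream tasks.
        return "s3://example-bucket/landing/events/"

    @task
    def validate(path: str) -> str:
        # Placeholder data-quality gate; a real version might run
        # row-count or schema checks before loading.
        print(f"validating {path}")
        return path

    @task
    def load(path: str) -> None:
        print(f"loading {path} into the warehouse")

    load(validate(extract()))

daily_events_pipeline()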
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1575031