Posted on: 01/08/2025
Job Summary:
We are looking for a skilled Data Engineer with strong expertise in Python, PySpark, SQL, and Azure Databricks to design, build, and optimize scalable data pipelines. The ideal candidate will work closely with cross-functional teams to develop efficient data solutions, ensuring high data quality and supporting advanced analytics and reporting needs.
Key Responsibilities:
- Write efficient SQL queries for data extraction, transformation, and loading.
- Work with Python for data processing, automation, and integration tasks.
- Implement data ingestion from various sources (APIs, flat files, cloud storage, streaming platforms).
- Optimize large-scale data workflows to improve performance and cost efficiency.
- Collaborate with Data Architects and Analysts to define data models and business rules.
- Ensure data quality, validation, and governance across platforms.
- Deploy, monitor, and troubleshoot data pipelines in production.
- Leverage the Azure ecosystem (Data Lake, Synapse, Data Factory, Event Hubs, etc.) to deliver data solutions; a brief illustrative sketch follows this list.
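As a rough illustration of the day-to-day pipeline work described above, here is a minimal PySpark sketch in the style of an Azure Databricks job: it reads raw files from Azure Data Lake Storage, applies simple cleaning rules, and writes a Delta table. The storage account, paths, column names, and table names are hypothetical, not part of this posting.

```python
# Illustrative sketch only: storage account, paths, columns, and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists as `spark`; getOrCreate() reuses it.
spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Ingest raw CSV files from a (hypothetical) Azure Data Lake Storage path.
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://raw@examplestorage.dfs.core.windows.net/orders/")
)

# Basic cleaning and typing: drop rows missing the key, cast amounts, stamp a load date.
cleaned = (
    raw.dropna(subset=["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("load_date", F.current_date())
)

# Persist as a Delta table partitioned by load date for downstream analytics and reporting.
(
    cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("load_date")
    .saveAsTable("analytics.orders_clean")
)
```

In practice a job like this would typically run as a Databricks notebook or job task, orchestrated and scheduled through a tool such as Azure Data Factory.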
Required Skills & Qualifications:
- Strong programming skills in Python and PySpark for building and automating data pipelines.
- Proficiency in SQL (query optimization, stored procedures, performance tuning).
- Hands-on experience with Azure Databricks for big data processing (a short tuning example follows this list).
- Knowledge of Azure Data Lake, Azure Data Factory, and Synapse Analytics is a plus.
- Experience working with large, complex datasets in a distributed environment.
- Familiarity with CI/CD pipelines, version control (Git), and DevOps practices.
- Strong problem-solving and debugging skills.
- Excellent communication and collaboration abilities.
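To give context for the performance-tuning expectations above, the snippet below shows two common Spark optimization patterns on Databricks: filtering on a partition column early so Spark can prune partitions, and broadcasting a small dimension table to avoid a shuffle join. The table and column names are hypothetical and assume the kind of tables sketched earlier.

```python
# Illustrative sketch only: table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_report").getOrCreate()

# A large fact table and a small dimension table (assumed to exist).
orders = spark.table("analytics.orders_clean")
customers = spark.table("analytics.customers")

report = (
    orders
    .where(F.col("load_date") >= "2025-01-01")      # partition pruning on the partition column
    .join(F.broadcast(customers), "customer_id")    # broadcast hash join avoids a shuffle
    .groupBy("country")
    .agg(F.sum("amount").alias("total_amount"))
)

report.show()
```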
Posted in: Data Analytics & BI
Functional Area: Data Analysis / Business Analysis
Job Code: 1522974