Posted on: 25/09/2025
Roles & Responsibilities:
- Lead the design, development, and implementation of scalable big data solutions using Azure Databricks, Spark, and Azure Cloud services.
- Architect end-to-end data pipelines for ingestion, transformation, and storage of structured and unstructured data.
- Collaborate with Data Engineers, Data Scientists, and Business Analysts to translate business requirements into technical solutions.
- Optimize Spark jobs, Databricks clusters, and workflows for performance, cost, and scalability.
- Implement data governance, security, and compliance standards in cloud data platforms.
- Mentor junior team members and provide technical guidance across the team.
- Evaluate and recommend new tools, frameworks, and best practices for big data processing and cloud analytics.
Required Skills:
- Strong experience with Azure Databricks, PySpark, and Spark SQL.
- Hands-on experience with Azure Data Lake, Azure Synapse, Azure Data Factory, and other Azure cloud services.
- Expertise in data warehousing concepts, ETL processes, and big data architecture.
- Proficient in programming languages: Python, Scala, or Java.
- Knowledge of CI/CD pipelines, version control, and Agile methodology.
- Experience in performance tuning, cluster management, and job scheduling.
- Excellent problem-solving, communication, and leadership skills.
Desired Skills:
- Familiarity with Snowflake, Databricks Delta Lake, or Apache Kafka is a plus.
- Certification in Azure Data Engineering or Databricks is desirable.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1551547