Posted on: 06/10/2025
Description :
We are looking for highly skilled and motivated Big Data Engineers / Leads to join our team and drive the development of our next-generation data platform on Microsoft Azure. The ideal candidate will be an expert in Azure Databricks and have a strong background in big data processing, data warehousing, and modern DevOps practices. We are hiring for three positions across our major offices.
Role & Compensation Details :
Role : Big Data Engineer / Lead
Experience : 4 - 7 Years (3 Open Positions)
Locations : Bangalore | Chennai | Mumbai | Pune | Noida
Key Responsibilities :
Big Data Application Development :
- Build, deploy, and optimize scalable Big Data applications and pipelines primarily leveraging Azure Databricks.
- Design and implement robust data processing jobs using Apache Spark with proficiency in either Scala or Python (PySpark).
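As a rough illustration of the transformation shape such Spark jobs take (filter, group, aggregate), here is a plain-Python sketch using only the standard library; the record fields and values are invented, and a real job would express the same shape with PySpark DataFrame or RDD APIs on a Databricks cluster:

```python
from collections import defaultdict

# Invented sample records standing in for rows read from a data lake.
events = [
    {"user": "a", "amount": 30.0, "valid": True},
    {"user": "b", "amount": 12.5, "valid": False},
    {"user": "a", "amount": 7.5, "valid": True},
]

def total_valid_spend(records):
    """Filter out invalid rows, then sum amount per user --
    the filter / groupBy / sum shape of a typical Spark job."""
    totals = defaultdict(float)
    for row in records:
        if row["valid"]:
            totals[row["user"]] += row["amount"]
    return dict(totals)

print(total_valid_spend(events))  # prints {'a': 37.5}
```

In PySpark the equivalent would be a `filter` followed by `groupBy("user").sum("amount")`, distributed across the cluster rather than run in one process.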
Data Warehousing & ETL/ELT :
- Apply strong SQL expertise for complex data querying, analysis, and optimization within cloud data solutions.
- Demonstrate strong experience in designing and developing ETL/ELT processes, data ingestion patterns, and modern Data Warehousing concepts.
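A minimal end-to-end sketch of the ELT pattern (load raw data first, then transform with SQL inside the store), using the standard-library sqlite3 module; the table and column names are invented, and a production pipeline would target Databricks or Synapse rather than SQLite:

```python
import sqlite3

# Load step: land raw rows as-is.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?)",
    [("south", 10.0), ("north", 25.0), ("south", 5.0)],
)

# Transform step: aggregate raw data into a reporting table with SQL.
conn.execute(
    """CREATE TABLE orders_by_region AS
       SELECT region, SUM(amount) AS total
       FROM raw_orders
       GROUP BY region"""
)

for region, total in conn.execute(
    "SELECT region, total FROM orders_by_region ORDER BY region"
):
    print(region, total)  # prints: north 25.0, then south 15.0
```

The defining choice in ELT (versus ETL) is that the transformation runs inside the warehouse engine, in SQL, after loading, rather than in an external staging process before it.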
Infrastructure & Automation (DevOps Focus) :
- Work hands-on with infrastructure-as-code tools like Terraform for provisioning and managing Azure resources.
- Utilize workflow orchestration tools such as Airflow to design, schedule, and monitor complex data workflows.
- Familiarity with containerization and orchestration technologies like Kubernetes is highly desirable.
- Implement and manage robust CI/CD pipelines using tools like Git and Jenkins to ensure automated, reliable deployment of data applications.
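At its core, the orchestration work described above means running tasks only after their upstream dependencies complete. This toy runner (not Airflow's API; task names and graph are invented) sketches the idea in plain Python:

```python
# Toy DAG: each task maps to the list of tasks it depends on.
# In Airflow the same graph would be declared with operators and >>.
dag = {
    "extract": [],
    "transform": ["extract"],
    "load": ["transform"],
    "notify": ["load"],
}

def run(dag):
    """Execute tasks in topological order and return the run log.
    Assumes the graph is acyclic, as a DAG must be."""
    done, order = set(), []
    while len(done) < len(dag):
        for task, upstream in dag.items():
            if task not in done and all(u in done for u in upstream):
                order.append(task)  # a real runner would invoke the task here
                done.add(task)
    return order

print(run(dag))  # prints ['extract', 'transform', 'load', 'notify']
```

Airflow adds what this sketch omits: scheduling, retries, backfills, and monitoring, which is why it is listed as the orchestration tool of choice here.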
Data Lifecycle Management :
- Enable and support the complete data lifecycle, including data collection, efficient storage strategies, scalable data modeling, and timely analysis across various operational systems.
Code Quality & Performance :
- Ensure high standards of code quality, security, and performance optimization for all data processing jobs and infrastructure components.
Mandatory Skills :
- Cloud Data Platform : Expert-level proficiency in Azure Databricks.
- Data Processing : Strong hands-on experience with Apache Spark (using Scala or Python).
- Database : Advanced proficiency in SQL (including complex queries, stored procedures, and optimization).
Preferred Skills :
- Experience with Infrastructure as Code (e.g., Terraform).
- Experience with workflow orchestration tools (e.g., Apache Airflow).
- Familiarity with CI/CD tools (Git, Jenkins).
- Knowledge of Azure Data Services (e.g., Azure Data Factory, Azure Synapse, Azure Data Lake Storage).
Posted in
Data Engineering
Functional Area
Big Data / Data Warehousing / ETL
Job Code
1556544