Posted on: 30/11/2025
Note: If shortlisted, you will be invited for initial rounds on 6th December 2025 (Saturday) in Bangalore.
Key Responsibilities:
- Architect, design, and optimize SQL-based data models, ensuring scalability, performance, and reliability.
- Develop complex T-SQL scripts, stored procedures, and performance-tuned queries.
- Build and manage distributed data processing pipelines using PySpark.
- Implement and support solutions on Microsoft Fabric or Azure Synapse Analytics.
- Design and maintain Lakehouse architectures, leveraging Delta Lake for data versioning, ACID transactions, and scalable storage.
- Oversee data ingestion, transformation, and orchestration using Azure Data Lake, Azure Data Factory, and related Azure data services.
- Ensure adherence to data governance, quality, and security standards.
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into effective data solutions.
- Produce clear architectural documentation, design diagrams, and technical specifications.
- Continuously evaluate emerging technologies and recommend improvements to existing data infrastructure.
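To illustrate the Delta Lake responsibilities above (ACID transactions and data versioning), here is a minimal Spark SQL sketch; the table and column names are hypothetical, not part of the role description:

```sql
-- Create a Delta table (hypothetical schema); Delta provides ACID guarantees
CREATE TABLE sales_lakehouse.orders (
    order_id    BIGINT,
    customer_id BIGINT,
    amount      DECIMAL(18, 2),
    order_date  DATE
) USING DELTA;

-- Upsert newly ingested rows: MERGE runs as a single ACID transaction
MERGE INTO sales_lakehouse.orders AS tgt
USING staging.orders_incoming AS src
    ON tgt.order_id = src.order_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

-- Data versioning: time travel to an earlier snapshot of the table
SELECT * FROM sales_lakehouse.orders VERSION AS OF 3;
```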
Must-Have Technical Expertise:
- Strong proficiency in SQL, especially T-SQL, including query tuning and optimization.
- Hands-on experience with PySpark for large-scale, distributed data processing.
Practical experience implementing solutions on:
- Microsoft Fabric or Azure Synapse Analytics
Strong understanding of:
- Delta Lake
- Lakehouse architecture
- Data warehousing methodologies (Kimball, Inmon, Data Vault)
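As a small sketch of the T-SQL query-tuning skill listed above, the pair of queries below shows a common rewrite; the table, columns, and assumed index on order_date are hypothetical:

```sql
-- Non-sargable: applying a function to the column blocks an index seek,
-- forcing a scan of every row in dbo.Orders
SELECT order_id, amount
FROM dbo.Orders
WHERE YEAR(order_date) = 2025;

-- Sargable rewrite: an open-ended date range lets the optimizer
-- perform a seek on an index over order_date instead of a full scan
SELECT order_id, amount
FROM dbo.Orders
WHERE order_date >= '2025-01-01'
  AND order_date <  '2026-01-01';
```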
Posted in: Data Engineering
Functional Area: Technical / Solution Architect
Job Code: 1582762