Posted on: 21/09/2025
KEY RESPONSIBILITIES:
- Participate in requirements gathering, database design, testing and production deployments.
- Analyze system/application requirements and design innovative solutions.
- Translate business requirements into technical specifications: data streams, data integrations, data transformations, databases, data warehouses, and data validation rules.
- Design and develop SQL Server database objects (e.g., tables, views, stored procedures, indexes, triggers, constraints).
- Analyze and design data flow, data lineage mapping and data models.
- Optimize, scale and reduce the cost of analytics data platforms for multiple clients.
- Adhere to data management processes and capabilities.
- Enforce compliance with data governance and data security policies.
- Perform tuning and query optimization across all database objects.
- Perform unit and integration testing.
- Create technical documentation.
- Mentor and assist team members.
- Provide training to team members.
DESIRED PROFILE:
- Graduate (BE/B.Tech) or Master's (ME/M.Tech/MS) in Computer Science or equivalent from a premier institute (preferably NIT), with 3-4 years of Microsoft SQL Server experience.
- Experience in data modelling and database design.
- In-depth ETL development experience with Microsoft SQL Server Integration Services (SSIS), using all areas of the environment, including scripting.
- Development experience working with OLAP and Microsoft SQL Server Analysis Services (SSAS).
- Demonstrated experience building and maintaining reliable, scalable data pipelines on the Azure cloud for big data platforms.
- Experience with Azure services such as Azure Data Factory, Azure Data Lake Storage (ADLS), Azure Blob Storage, Azure Synapse Analytics (Azure Data Warehouse), Azure Analysis Services, Azure Databricks, and Power BI, or equivalent AWS services such as S3, Athena, and AWS Glue.
- Solid understanding of database design principles.
- Experience in data warehousing, including dimensional modelling and data lake concepts, with practical knowledge of data modelling and implementation.
- Experience working with multiple data sources, such as CSV, Parquet, JSON, REST APIs, and relational databases.
- Experience with varied forms of data infrastructure, including relational databases (MySQL, Oracle, etc.), column-oriented databases (Redshift, Teradata, etc.), and NoSQL databases (Elasticsearch, MongoDB, Cassandra, etc.), is good to have.
- Proficiency in at least one scripting language, such as Python, Scala, PowerShell, or Unix shell scripting, is also good to have.
- Experience analysing very large real-world datasets and a hands-on approach to data analytics is a plus.
- Experience with Test-Driven Development, Continuous Integration, and Continuous Deployment.
- Strong analytical and technical skills.
- Good verbal and written communication skills.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1549747