Posted on: 19/08/2025
Position Summary:
Role, Responsibilities, and Duties:
- Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals.
- Demonstrate technical and domain knowledge of relational and non-relational databases, Data Warehouses, Data lakes among other structured and unstructured storage options.
- Determine solutions that are best suited to develop a pipeline for a particular data source.
- Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development.
- Efficient in ELT/ETL development using Azure cloud services and Snowflake, including Testing and operational support (RCA, Monitoring, Maintenance).
- Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery.
- Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders.
- Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability).
- Build cross-platform data strategy to aggregate multiple sources and process development datasets.
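As a hedged illustration of the extract-transform-load responsibilities above, the sketch below shows a minimal pipeline in Python: raw records are normalized and loaded into a staging table. All record, table, and column names here are hypothetical stand-ins, and `sqlite3` stands in for a real warehouse target such as Snowflake or Azure SQL.

```python
import sqlite3

# Hypothetical raw records, standing in for an extracted source feed.
RAW_ORDERS = [
    {"order_id": "1001", "amount": "250.00", "region": " east "},
    {"order_id": "1002", "amount": "99.50", "region": "WEST"},
]

def transform(record):
    """Normalize types and casing before loading (illustrative rules)."""
    return (
        int(record["order_id"]),
        float(record["amount"]),
        record["region"].strip().lower(),
    )

def load(conn, rows):
    """Load transformed rows into a staging table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS stg_orders "
        "(order_id INTEGER PRIMARY KEY, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO stg_orders VALUES (?, ?, ?)", rows)
    conn.commit()

def run_pipeline():
    """Extract -> transform -> load, then return a row count and total for monitoring."""
    conn = sqlite3.connect(":memory:")
    load(conn, [transform(r) for r in RAW_ORDERS])
    return conn.execute("SELECT COUNT(*), SUM(amount) FROM stg_orders").fetchone()

print(run_pipeline())  # (2, 349.5)
```

In a production pipeline the extract and load steps would be orchestrated by a tool such as Azure Data Factory, but the shape of the work is the same.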
Required Skills and Qualifications:
- B.Tech/M.Tech in Computer Science or equivalent. An excellent understanding of data warehouses/data marts and dimensional data models.
- 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
- 3+ years of experience with setting up and operating data pipelines using Python or SQL
- 3+ years of advanced SQL Programming: PL/SQL, T-SQL
- 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization.
- Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
- 3+ years of extensive hands-on experience in Azure, preferably on data-heavy/analytics applications leveraging relational and NoSQL databases, Data Warehouse, and Big Data
- 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks/Spark, Blob Storage, Azure SQL DW/Synapse, and Azure Functions
- 3+ years of experience in defining and enabling data quality standards for auditing and monitoring
- Strong analytical abilities and intellectual curiosity.
- In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts
- Understanding of REST and good API design
- Experience working with Apache Iceberg, Delta tables and distributed computing frameworks
- Strong collaboration, teamwork skills, excellent written and verbal communications skills
- Self-starter and motivated with ability to work in a fast-paced development environment
- Agile experience highly desirable
- Proficiency in the development environment, including an IDE, database server, Git, continuous integration, unit-testing tools, and defect management tools
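The data-quality requirement above (defining standards for auditing and monitoring) can be sketched as a set of named rule checks applied row by row; the rules and field names below are illustrative assumptions, not part of the posting.

```python
def check_quality(rows, rules):
    """Apply named data-quality rules to each row; return (row index, rule name) failures for auditing."""
    failures = []
    for i, row in enumerate(rows):
        for name, rule in rules.items():
            if not rule(row):
                failures.append((i, name))
    return failures

# Hypothetical rules: primary key present, amount never negative.
RULES = {
    "order_id_present": lambda r: r.get("order_id") is not None,
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}

rows = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": None, "amount": -5.0},
]
print(check_quality(rows, RULES))
# [(1, 'order_id_present'), (1, 'amount_non_negative')]
```

The returned failure list is what a monitoring job would log or alert on; dedicated tools cover the same idea at scale.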
Preferred Skills:
- Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques
- Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks
- Working knowledge of DevOps processes (CI/CD), Git version control and Jenkins, Master Data Management (MDM), and Data Quality tools
- Strong experience in ETL/ELT development, QA, and operations/support processes (RCA of production issues, code/data fix strategy, monitoring, and maintenance)
- Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file storage (Blob Storage), and Python/Unix shell scripting
- ADF, Databricks, and Azure certifications are a plus.
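The support bullets above mention a code/data fix strategy; a common supporting pattern is an idempotent upsert, so that re-running a load after a fix overwrites rather than duplicates rows. A minimal sketch with hypothetical table and column names, using `sqlite3` as a stand-in target (real pipelines would use the warehouse's own MERGE):

```python
import sqlite3

def upsert(conn, rows):
    """Idempotent load: re-running with corrected data updates in place, never duplicates."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS dim_customer (id INTEGER PRIMARY KEY, name TEXT)"
    )
    conn.executemany(
        "INSERT INTO dim_customer (id, name) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name",
        rows,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
upsert(conn, [(1, "Acme"), (2, "Globex")])
upsert(conn, [(2, "Globex Corp")])  # data fix re-run: updates row 2, adds nothing
print(conn.execute("SELECT * FROM dim_customer ORDER BY id").fetchall())
# [(1, 'Acme'), (2, 'Globex Corp')]
```

Because each run converges to the same final state, a failed or corrected load can simply be re-executed, which is what makes RCA-driven data fixes safe.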
Preferred Technologies:
Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (Powershell, Bash), Git, Terraform, Power BI, Snowflake
What We Offer:
- Bootstrapped and financially stable with a high pre-money valuation.
- Above-industry remuneration.
- Additional compensation tied to Renewal and Pilot Project Execution.
- Additional lucrative business development compensation.
- Chance to work closely with industry experts driving strategy with data and analytics.
- Firm-building opportunities that offer a stage for holistic professional development, growth, and branding.
- An empathetic, excellence- and results-driven organization that believes in mentoring and growing its team, with constant emphasis on learning.
Posted in
Data Engineering
Job Code
1531041