
Junior Data Engineer - Microsoft Azure

Sky Systems, Inc.
Multiple Locations
6 - 8 Years

Posted on: 30/10/2025

Job Description

Role : Junior Data Engineer.

Position Type : Full-Time Contract (40 hrs/week).

Contract Duration : Long Term.

Work Schedule : 8 hours/day (Mon-Fri).

Work Hours : 9:30 AM to 6:30 PM IST.

Location : 100% Remote.

We are looking for a Junior Data Engineer who is passionate about building robust, scalable, and efficient data pipelines on Microsoft Azure. The ideal candidate will have strong hands-on experience in Python, SQL, and Azure Data Services, with a keen eye for performance optimization and data quality.

You will collaborate with data architects, analysts, and business stakeholders to design, implement, and maintain data pipelines that drive analytics, reporting, and data-driven decision-making across the organization.

Key Responsibilities :

- Design, develop, and maintain data ingestion, transformation, and integration pipelines using Python and Azure data services.

- Implement ETL/ELT workflows to extract, clean, and load data from multiple structured and unstructured sources.

- Ensure pipelines are scalable, efficient, and adhere to best practices in data engineering.

- Write and optimize SQL queries, stored procedures, and functions for high-performance data extraction and transformation.

- Analyze EXPLAIN plans to tune and improve query and cluster performance.

- Apply best practices for data quality, validation, and governance.

- Work with Azure Data Factory (ADF), Azure Synapse Analytics, and/or Databricks for data movement, orchestration, and transformation.

- Collaborate with the cloud team to manage data storage, compute, and security configurations in Azure.

- Monitor pipeline health and proactively resolve performance or data integrity issues.

- Work in an Agile environment using tools like Jira or Azure DevOps for task management and sprint planning.

- Participate in code reviews, follow version control best practices (e.g., Git), and ensure high-quality, maintainable code.

- Contribute to project documentation, data catalogs, and design specifications to support long-term maintainability.

- Identify opportunities to automate repetitive processes, enhance data pipeline efficiency, and reduce latency.

- Stay current with emerging trends in data engineering, cloud architecture, and DevOps practices.

- Actively contribute ideas to improve the reliability, performance, and scalability of data solutions.

Required Skills & Qualifications :

- Strong proficiency in Python (data manipulation, automation, API integration, error handling).

- Expertise in writing complex SQL queries, joins, and inserts, and in query optimization.

- Azure Data Services: Practical knowledge of Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Storage (Blob/Data Lake), and Azure Functions.

- Experience building and managing end-to-end ETL/ELT pipelines on cloud environments.

- Familiarity with Git or similar version control systems.

- Strong understanding of data modeling, data warehousing concepts, and database performance tuning.

- Ability to troubleshoot complex data issues and provide reliable, scalable solutions.

- Excellent written and verbal communication skills to collaborate with cross-functional teams.

- Proactive attitude with strong accountability and ownership of assigned tasks.

- Ability to work independently as well as part of a distributed agile team.

- Experience with Databricks notebooks, PySpark, or ADF pipelines.

- Exposure to DevOps CI/CD pipelines for deploying data solutions.

- Basic knowledge of data governance and metadata management practices.

- Familiarity with Power BI or other reporting tools for data validation and visualization.

- Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.

- Microsoft Azure certifications (e.g., Azure Data Engineer Associate, Azure Fundamentals) are a plus.

