Posted on: 19/11/2025
Description:
We are looking for a candidate with 4+ years of experience in a Data Engineer role, a bachelor's degree in technology, and experience with Agile/Scrum development processes and methodologies.
- Experience as a Data Engineer with a strong track record of designing and implementing data solutions.
- Experience with programming languages such as Python and PySpark, along with Airflow.
- Experience with cloud data warehousing technologies, such as Snowflake and Redshift.
- Experience with Cloud platforms such as AWS, Azure or Google Cloud Platform.
- Experience with AWS cloud services, such as S3, EC2, EMR, Glue, CloudWatch, Athena, Lambda.
- Experience with containerization and orchestration technologies, such as Docker and Kubernetes.
- Experience building CI/CD pipelines using tools such as GitLab and Bitbucket.
- Experience with data pipeline orchestration tools, such as Airflow and Jenkins.
- Knowledge of database concepts, data modelling, schemas, and query languages such as SQL and Hive.
- Knowledge of Informatica is good to have.
- Retail experience is a plus.
What roles and responsibilities will the selected candidate perform?
- Should be able to create ADF and data pipelines.
- Design, develop, and deploy end-to-end data solutions using various components of Microsoft Fabric, including Lakehouse, Data Warehouse, Data Factory, and Data Engineering.
- Utilize Power BI for data visualization and reporting, ensuring seamless integration with Fabric data assets.
- Contribute to the design and implementation of robust, scalable, and secure data architectures within the Microsoft Fabric platform.
What is the expectation from the candidate's current role/profile?
- 3 to 4 years of experience in a Data Developer role, with experience in ADF, SQL, and MS Fabric.
- Hands-on experience with Microsoft Fabric, including its core components (Lakehouse, Data Warehouse, Data Factory, Data Engineering).
- Strong expertise in Microsoft Azure data services such as Azure Data Factory (ADF) and Azure Data Lake Storage Gen2.
- Proven experience in designing, developing, and maintaining scalable data pipelines.
- Solid understanding of data warehousing concepts and data lakehouse architectures.
- Proficiency in SQL for data manipulation and querying.
- Experience with version control systems (e.g., Git, Azure Repos).
- Strong analytical and problem-solving skills with meticulous attention to detail.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1577477