Posted on: 25/09/2025
Overview:
A Data Engineer specializing in Snowflake, Azure, and AWS plays a crucial role in designing, developing, and maintaining scalable data solutions that meet the organization's evolving data needs. The role requires expertise in a range of data engineering tools and platforms for managing and transforming large volumes of structured and unstructured data.
Key Responsibilities:
- Design and develop data pipelines using Snowflake, Azure Data Lake Storage, Azure Data Factory, and Azure Synapse Analytics.
- Implement, maintain, and optimize data infrastructure on AWS, including Redshift, EMR, and Glue.
- Collaborate with data scientists and analysts to understand data requirements and implement the necessary data transformations and integrations.
- Ensure data quality and reliability by designing and implementing data cleansing and validation processes.
- Develop and maintain ETL processes using Informatica and DataStage to extract, transform, and load data from various sources.
- Optimize query performance and data distribution on Snowflake to ensure efficient data access and processing.
- Implement best practices for data security, encryption, and access control on Azure and AWS platforms.
- Participate in the evaluation and selection of new data management and analytics technologies to meet evolving business needs.
- Document data architecture, data flows, and data processes to facilitate knowledge sharing and collaboration.
- Provide technical guidance and support to junior data engineering team members.
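As an illustration of the data cleansing and validation responsibilities listed above, the following is a minimal sketch of one pipeline stage; the record schema (`user_id`, `amount`) and the rules are hypothetical examples, not part of this posting.

```python
# Minimal cleanse-and-validate pipeline stage (hypothetical schema and rules).
from dataclasses import dataclass

@dataclass
class Record:
    user_id: str
    amount: float

def cleanse(rows):
    """Drop rows with a missing key and normalize whitespace and types."""
    cleaned = []
    for row in rows:
        uid = (row.get("user_id") or "").strip()
        if not uid:
            continue  # reject row: missing key field
        cleaned.append(Record(user_id=uid, amount=float(row.get("amount", 0))))
    return cleaned

def validate(records):
    """Partition records into valid and rejected by a simple business rule."""
    valid = [r for r in records if r.amount >= 0]
    rejected = [r for r in records if r.amount < 0]
    return valid, rejected

raw = [
    {"user_id": " a1 ", "amount": "10.5"},
    {"user_id": "", "amount": "3"},     # dropped in cleansing (no key)
    {"user_id": "b2", "amount": "-4"},  # rejected in validation (negative)
]
valid, rejected = validate(cleanse(raw))
```

In a production pipeline the same pattern would run inside an orchestration tool such as Azure Data Factory or AWS Glue, with rejected records routed to a quarantine table for review.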
Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience in designing and implementing data solutions on Snowflake, Azure, and AWS platforms.
- Expertise in using Azure Data Lake Storage, Azure Data Factory, and Azure Synapse Analytics for data management and analytics.
- Demonstrated proficiency in working with AWS Redshift, EMR, and Glue for data warehousing and ETL processes.
- Strong understanding of data integration and transformation using Informatica and DataStage.
- In-depth knowledge of SQL and programming languages such as Python or Java for data manipulation and automation.
- Experience in optimizing Snowflake data warehouse performance and scalability.
- Familiarity with data modeling, schema design, and data governance best practices.
- Ability to collaborate effectively with cross-functional teams and communicate complex technical concepts to non-technical stakeholders.
- Strong problem-solving skills and the ability to troubleshoot and optimize complex data pipelines.
- Experience with version control systems and continuous integration/continuous deployment (CI/CD) processes for data engineering projects.
- Certifications in relevant cloud platforms (e.g., Microsoft Azure, Amazon Web Services) and data engineering tools.
- Excellent written and verbal communication skills.
- Ability to adapt to new technologies and learn continuously in a fast-paced, dynamic environment.
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1552111