Posted on: 16/07/2025
We are looking for a Senior Data Engineer with 5-7 years of hands-on experience in designing, developing, and deploying scalable, high-performance data engineering solutions. The ideal candidate will have deep technical expertise in data platforms such as Snowflake and Databricks, strong programming skills, and a passion for building robust data pipelines and integrations across complex systems and cloud environments.
Key Responsibilities:
- Design, build, and deploy scalable ETL/ELT pipelines for complex data integrations.
- Work extensively with Snowflake, Databricks, and PySpark to process and manage large datasets efficiently.
- Build and maintain data orchestration workflows using tools such as Azure Data Factory, Apache Airflow, or Databricks workflows.
- Implement data transformations using DBT and manage replication/migration workflows using tools like Qlik.
- Develop APIs using FastAPI for secure and efficient data access and services.
- Use Azure Functions and other serverless computing models to build scalable cloud-native data solutions.
- Manage version control and automation of data workflows using GitHub or Azure DevOps, including CI/CD pipeline implementation.
- Collaborate with cross-functional teams to understand business needs and translate them into efficient data models and solutions.
- Apply best practices in data warehouse design, including dimensional modeling and schema optimization for analytics use cases.
- Ensure code quality, performance optimization, and maintainability in all deliverables.
Required Skills & Expertise:
- 5 to 7 years of progressive experience in data engineering roles.
- Advanced skills in Python, SQL, and shell scripting.
- Expert-level knowledge in Snowflake, Databricks, and PySpark.
- Strong experience with Azure Data Factory, Apache Airflow, and Databricks orchestration tools.
- Hands-on experience with DBT and Qlik Replicate.
- Deep understanding of big data architectures, distributed data processing, and performance tuning.
- Proficiency in version control systems (GitHub, Azure DevOps) and implementation of CI/CD practices.
- Demonstrated ability to build cloud-native serverless applications using Azure Functions.
- Proficient in building REST APIs using FastAPI or equivalent frameworks.
- Extensive experience in designing robust, maintainable, and scalable ETL/ELT pipelines.
- Sound knowledge of data warehousing concepts, dimensional modeling, and analytics optimization.
Preferred Qualifications:
- Experience working in an Azure cloud environment.
- Exposure to CI/CD for data pipelines and data observability frameworks.
- Experience in cross-functional team leadership or mentoring junior data engineers.
Posted By: Bharath Arumugam, HR Lead (APAC/EMEA) at TECEZE CONSULTANCY SERVICES PRIVATE LIMITED
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1514059