Posted on: 23/10/2025
About the Role:
We are seeking an experienced Snowflake Python Developer with a strong background in Data Warehousing (DWH), ETL, and Big Data technologies. The ideal candidate will have deep hands-on experience in Python/PySpark, Snowflake Data Cloud, and SQL/PL-SQL, along with strong analytical and performance optimization skills.
You will play a key role in designing, developing, and maintaining scalable, efficient, and secure data pipelines and ETL processes to support analytics, business intelligence, and data science initiatives.
Key Responsibilities:
- Design, develop, and implement ETL pipelines to extract, transform, and load data from diverse data sources into Snowflake.
- Develop data ingestion and transformation scripts using Python and PySpark (a PySpark sketch follows this list).
- Create and optimize Snowflake SQL scripts, stored procedures, and user-defined functions (UDFs) for data transformation and validation.
- Implement and maintain data integration workflows across Snowflake, Oracle, and other source systems.
- Perform data modeling, schema design, and data partitioning for efficient querying and storage in Snowflake.
- Conduct performance tuning for SQL queries, ETL jobs, and data pipelines.
- Optimize Snowflake compute and storage costs through best practices and resource monitoring.
- Identify and resolve performance bottlenecks in ETL processes and database operations.
- Develop automation scripts using Python, Unix shell scripting, and orchestration tools for scheduling and monitoring ETL workflows (an Airflow sketch follows this list).
- Integrate Snowflake with external data sources, APIs, or third-party systems.
- Implement error handling, logging, and alerting mechanisms for pipeline monitoring.
- Work closely with data architects, business analysts, and BI developers to ensure data accuracy and consistency.
- Collaborate with cross-functional teams to design end-to-end data solutions.
- Maintain detailed documentation for ETL processes, data mappings, and database structures.
- Support production deployments, troubleshooting, and root cause analysis.
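By way of illustration only, below is a minimal PySpark-to-Snowflake ingestion sketch of the kind of pipeline this role involves. It assumes the Spark-Snowflake connector jars are on the classpath; the connection options, landing path, and table names (ORDERS_STG, etc.) are hypothetical placeholders, not part of this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical connection options; real values would come from a secrets manager.
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",  # placeholder account URL
    "sfUser": "ETL_USER",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "ETL_WH",
}

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Extract: read raw CSV files from a (placeholder) landing path.
raw = spark.read.option("header", True).csv("/data/landing/orders/")

# Transform: basic cleansing plus a derived load-date column.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("load_dt", F.current_date())
)

# Load: write to Snowflake via the Spark-Snowflake connector.
(clean.write
      .format("net.snowflake.spark.snowflake")
      .options(**sf_options)
      .option("dbtable", "ORDERS_STG")
      .mode("append")
      .save())
```

The write here uses mode("append") for simplicity; an idempotent production pipeline would more typically land into a transient table and MERGE downstream.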
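For the scheduling and monitoring responsibility, one common approach (Airflow is among the orchestration tools named under Required Skills) is a daily DAG like the sketch below. It assumes Airflow 2.4+; the DAG id, task id, and callable are illustrative only.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_orders_ingest(**context):
    # Placeholder: in practice this would launch the PySpark job above,
    # e.g. via spark-submit or a managed Spark service.
    print("running orders ingest for", context["ds"])


default_args = {
    "retries": 2,                          # simple retry-based error handling
    "retry_delay": timedelta(minutes=5),
    "email_on_failure": True,              # alerting hook; assumes SMTP is configured
}

with DAG(
    dag_id="orders_etl",                   # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    ingest = PythonOperator(
        task_id="ingest_orders",
        python_callable=run_orders_ingest,
    )
```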
Required Skills & Experience:
Technical Skills:
Programming Languages:
- Strong hands-on experience with Python and PySpark.
Data Warehousing:
- Deep understanding of DWH concepts, ETL methodologies, and data modeling techniques (star/snowflake schema).
- Proven expertise with Snowflake (SnowSQL, Streams, Tasks, Time Travel, Cloning, etc.; a short sketch follows this section).
- Advanced proficiency in Oracle SQL/PL-SQL and performance tuning of complex queries.
- Experience with Unix shell scripting for workflow automation.
- Familiarity with ETL orchestration or data pipeline tools such as Airflow, Informatica, Talend, or Azure Data Factory (nice to have).
- Proficiency with Git and CI/CD integration for data pipelines.
- Exposure to AWS, Azure, or GCP cloud data services.
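To make the Snowflake feature list above concrete, here is a hedged sketch using the snowflake-connector-python package to create a stream and a task and run a Time Travel query. The account, credentials, and object names are placeholders, not part of this posting.

```python
import snowflake.connector

# Placeholder credentials; real deployments would use key-pair auth or a vault.
conn = snowflake.connector.connect(
    account="myaccount",
    user="ETL_USER",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

cur = conn.cursor()
try:
    # Stream: capture change data (CDC) on the staging table.
    cur.execute("CREATE OR REPLACE STREAM ORDERS_STREAM ON TABLE ORDERS_STG")

    # Task: periodically move captured changes downstream.
    cur.execute("""
        CREATE OR REPLACE TASK MERGE_ORDERS
          WAREHOUSE = ETL_WH
          SCHEDULE  = '15 MINUTE'
        AS
          INSERT INTO ORDERS SELECT * FROM ORDERS_STREAM
    """)
    cur.execute("ALTER TASK MERGE_ORDERS RESUME")

    # Time Travel: query the table as it looked one hour ago.
    cur.execute("SELECT COUNT(*) FROM ORDERS_STG AT(OFFSET => -3600)")
    print(cur.fetchone())
finally:
    cur.close()
    conn.close()
```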
Posted in: Data Engineering
Functional Area: Big Data / Data Warehousing / ETL
Job Code: 1564102