Posted on: 29/10/2025
Description:
About the Role:
We are looking for a skilled and motivated Big Data Engineer with strong experience in Apache NiFi, Snowflake, Databricks, and Python (PySpark).
The ideal candidate will design, develop, and optimize data pipelines, ensuring efficient data ingestion, transformation, and delivery across cloud and analytics platforms.
You will work closely with data architects, analysts, and business teams to enable scalable, high-performance, and reliable data solutions supporting enterprise analytics and reporting.
Key Responsibilities:
- Design, build, and maintain end-to-end data pipelines using Apache NiFi for ingestion, processing, and movement of large-scale data.
- Develop ETL/ELT workflows and data integration processes from multiple structured and unstructured sources into Snowflake.
- Create data transformation logic using Databricks (PySpark) for data cleansing, enrichment, and aggregation.
- Automate data flows, monitoring, and alerting mechanisms to ensure robust pipeline performance.
- Implement parameterized and reusable NiFi templates for consistent data integration.
- Develop and optimize Snowflake SQL scripts, stored procedures, UDFs, and tasks for high-performance data processing.
- Build and manage data models, schemas, and data warehouses within Snowflake.
- Implement data loading strategies (Snowpipe, COPY commands, stages) with attention to performance tuning and cost optimization (see the COPY INTO sketch after this list).
- Monitor Snowflake resources (warehouses, storage, and queries) for efficiency and scalability.
- Integrate Snowflake with NiFi, Databricks, and BI tools to support analytical workloads.
- Develop and optimize PySpark-based transformation jobs within Databricks notebooks and workflows (a minimal transformation sketch follows this list).
- Perform data wrangling, cleansing, and feature engineering for analytics or machine learning models.
- Integrate Databricks workflows with NiFi and Snowflake for seamless end-to-end data processing.
- Apply performance tuning techniques in Spark jobs to optimize resource utilization and execution time (see the tuning sketch below).
- Ensure data accuracy, consistency, and integrity across ingestion and transformation layers.
- Implement error handling, logging, and exception management in NiFi and PySpark processes.
- Participate in data validation, reconciliation, and testing for quality assurance.
- Collaborate with data governance teams to define and enforce metadata management and security policies.
- Work with cross-functional teams including business analysts, data scientists, and platform engineers to deliver scalable data solutions.
- Participate in Agile/Scrum ceremonies, providing timely status updates and technical inputs.
- Document data flows, mappings, and pipeline configurations clearly in Confluence or equivalent tools.
- Support production deployments, monitor pipelines, and resolve operational issues proactively.
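For illustration, the sketch below shows the kind of Databricks (PySpark) cleansing, enrichment, and aggregation work described above, with basic logging and exception handling. The table and column names (raw_orders, curated.daily_order_totals, order_id, amount) are hypothetical placeholders, not part of this role description; treat it as a minimal sketch, not a prescribed implementation.

```python
# Minimal PySpark sketch: cleanse, enrich, and aggregate a raw dataset.
# All table and column names are illustrative placeholders.
import logging

from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("order_pipeline")

spark = SparkSession.builder.appName("order_pipeline").getOrCreate()

try:
    # Cleansing: drop rows missing a key, normalize strings, dedupe.
    raw = spark.read.table("raw_orders")
    clean = (
        raw.dropna(subset=["order_id", "order_ts"])
           .withColumn("country", F.upper(F.trim("country")))
           .dropDuplicates(["order_id"])
    )

    # Enrichment: derive a date column for downstream aggregation.
    enriched = clean.withColumn("order_date", F.to_date("order_ts"))

    # Aggregation: daily totals per country.
    daily = (
        enriched.groupBy("order_date", "country")
                .agg(F.sum("amount").alias("total_amount"),
                     F.count("order_id").alias("order_count"))
    )

    daily.write.mode("overwrite").saveAsTable("curated.daily_order_totals")
    log.info("Wrote daily aggregates successfully.")
except Exception:
    # Surface failures to the scheduler/alerting layer rather than
    # swallowing them silently.
    log.exception("Order pipeline failed")
    raise
```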
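Likewise, here is a minimal sketch of a Snowflake bulk load with COPY INTO from an external stage, using the snowflake-connector-python package. The connection settings, stage, and target table are assumptions for illustration; for continuous ingestion, Snowpipe wraps a similar COPY statement in a pipe object.

```python
# Minimal sketch of a Snowflake bulk load via COPY INTO from a stage.
# Connection values, stage, and table names are illustrative placeholders.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # COPY INTO reads staged files into a table; Snowflake tracks load
    # history, so already-loaded files are skipped on re-runs.
    cur.execute("""
        COPY INTO staging.orders
        FROM @orders_stage/daily/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    for row in cur.fetchall():
        print(row)  # per-file load status returned by COPY INTO
finally:
    conn.close()
```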
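Finally, a short sketch of two common Spark tuning levers referenced above: broadcasting a small dimension table to avoid shuffling the fact table, and right-sizing shuffle partitions. Table names are again placeholders.

```python
# Minimal sketch of two common Spark tuning levers.
# Table names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning_demo").getOrCreate()

# Fewer shuffle partitions for a modest dataset; the default of 200
# often over-partitions small jobs.
spark.conf.set("spark.sql.shuffle.partitions", "64")

facts = spark.read.table("curated.daily_order_totals")
dims = spark.read.table("reference.country_region")

# broadcast() ships the small table to every executor, turning the
# join into a map-side operation with no shuffle of the fact table.
joined = facts.join(F.broadcast(dims), on="country", how="left")
joined.groupBy("region").agg(F.sum("total_amount")).show()
```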
Education & Certifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related discipline.
Certifications (preferred):
- Snowflake SnowPro Certification
- Databricks Certified Data Engineer Associate/Professional
- Apache NiFi Certification
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1566354