
HYPR4 - Data Lead Engineer - ETL/Data Warehousing

HYPR4 CLOUD TECH PRIVATE LIMITED
Hyderabad
6 - 7 Years

Posted on: 14/10/2025

Job Description

Experience : 7+ Years

Location : Hyderabad

Job Type : Full-Time

Responsibilities :

- Data Infrastructure & Pipeline Development :

- Design, develop, and optimize scalable, efficient, and reliable data pipelines for large-scale data processing and transformation.

- Manage and maintain data architecture, ensuring high availability and performance using tools such as Snowflake, Dataproc, BigQuery, and other cloud technologies.

- Lead the integration of data sources from multiple systems, ensuring seamless data flow across various platforms.

- Build and optimize data pipelines using BigQuery, Snowflake, DBT Cloud, and Airflow (see the sketch at the end of this subsection).

- Apply expertise in data modelling to design and build data warehouses, data marts, and data lakes.

- Manage version control and workflows with GitHub.
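
For illustration, a minimal Airflow DAG of the kind this role would own might load a file into BigQuery and then trigger a dbt run. This is a sketch, not a prescribed implementation: the DAG id, bucket, dataset, table, and dbt project path are all hypothetical, and dbt is invoked through BashOperator rather than the DBT Cloud API to keep the example self-contained.

# Sketch: Airflow DAG that loads a file into BigQuery, then runs dbt.
# All identifiers (bucket, dataset, table, project path) are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="daily_sales_pipeline",        # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                    # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_sales",
        bucket="example-landing-bucket",  # placeholder GCS bucket
        source_objects=["sales/{{ ds }}/*.csv"],
        destination_project_dataset_table="analytics.raw_sales",
        source_format="CSV",
        write_disposition="WRITE_TRUNCATE",
    )

    run_dbt = BashOperator(
        task_id="transform_with_dbt",
        # Runs the transformation layer; with DBT Cloud this step would
        # instead trigger a job through its API or provider operator.
        bash_command="cd /opt/dbt/analytics && dbt run --select staging+",
    )

    load_raw >> run_dbt

Chaining the load ahead of the transform (load_raw >> run_dbt) is the core orchestration pattern the responsibilities above describe.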

- Performance & Optimization :

- Perform tuning and optimization of queries and data pipelines to ensure high-performance data systems (see the example below).

- Conduct regular performance reviews and recommend improvements or optimizations for system reliability, speed, and cost-efficiency.
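
As one concrete instance of this kind of tuning, BigQuery's dry-run mode lets an engineer estimate bytes scanned before a query ships; adding a partition filter and re-running the dry run makes the cost saving measurable. The project, table, and column names below are hypothetical.

# Sketch: use a BigQuery dry run to estimate scan cost before executing.
# Project, table, and column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT customer_id, SUM(amount) AS total
    FROM `my-project.analytics.orders`
    WHERE order_date >= '2025-01-01'   -- partition filter cuts bytes scanned
    GROUP BY customer_id
"""

job = client.query(
    sql,
    job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False),
)
# A dry run returns immediately with the estimated bytes the query would read.
print(f"Estimated bytes processed: {job.total_bytes_processed:,}")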

- DBT (Data Build Tool) Implementation :

- Implement and maintain DBT models for data transformation workflows.

- Collaborate with data analysts and data scientists to ensure high-quality, well-documented datasets for downstream analysis.

- Ensure the use of best practices for DBT testing, version control, and deployment.
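
A common way to enforce dbt testing and deployment discipline is to run dbt build (models plus their tests) as a CI gate; a minimal wrapper, assuming a hypothetical project directory, might look like this.

# Sketch: run dbt build (models + their tests) as a CI gate.
# The project directory is a placeholder.
import subprocess
import sys

result = subprocess.run(
    ["dbt", "build", "--project-dir", "/opt/dbt/analytics"],
    capture_output=True,
    text=True,
)
print(result.stdout)

# dbt exits non-zero if any model or test fails; propagate that to CI.
if result.returncode != 0:
    print(result.stderr, file=sys.stderr)
sys.exit(result.returncode)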

- Leadership & Mentorship :

- Lead and mentor a team of data engineers, ensuring that best practices are followed in development and deployment of data pipelines.

- Conduct code reviews, provide feedback, and ensure the implementation of high-quality data solutions.

- Drive collaboration with product teams and business stakeholders to understand data requirements and deliver scalable solutions.

Preferred Skills :

- 10+ years of experience in Data Engineering with a strong focus on data warehousing, ETL pipelines, and big data technologies.

- 3-5 years of hands-on experience with Snowflake or BigQuery, including setup, configuration, optimization, and maintenance.

- Proficiency in SQL for query optimization and performance tuning.

- In-depth experience with Dataproc for running large-scale data processing workflows (e.g., Spark, Hadoop); a sketch follows this list.

- Expertise with DBT or any other ELT tool for data transformation and model building.
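
To make the Dataproc point above concrete, a typical large-scale workflow is a PySpark job submitted to a cluster (for example via gcloud dataproc jobs submit pyspark). The GCS paths and column names below are placeholders.

# Sketch: PySpark job of the sort submitted to a Dataproc cluster.
# Bucket paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-events-rollup").getOrCreate()

events = spark.read.parquet("gs://example-bucket/events/date=2025-01-01/")

# Aggregate raw events into a daily per-user rollup.
rollup = (
    events.groupBy("user_id")
    .agg(F.count("*").alias("event_count"), F.sum("value").alias("total_value"))
)

rollup.write.mode("overwrite").parquet("gs://example-bucket/rollups/2025-01-01/")
spark.stop()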

Technical Skills :

- Strong experience in cloud platforms like AWS, GCP, or Azure, with a focus on data engineering tools and services.

- Proficient in programming/scripting languages such as Python, Java, or Scala for data processing.

- Experience with CI/CD pipelines and version control (Git, Jenkins, etc.).

- Knowledge of distributed computing frameworks (e.g., Spark, Hadoop) and related data processing concepts.

Data Architecture & Design :

- Experience with building and maintaining data warehouses and lakes.

- Strong understanding of data modelling concepts, data quality, and governance.

- Familiarity with Kafka, Airflow, or similar tools for orchestrating data workflows.
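
To illustrate the Kafka item, a minimal consumer loop that would sit at the front of such a workflow might look like the following, assuming the confluent-kafka client; the broker address, topic, and group id are placeholders.

# Sketch: minimal Kafka consumer loop using the confluent-kafka client.
# Broker address, topic, and group id are placeholders.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "orders-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(1.0)          # wait up to 1s for a message
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        # In a real pipeline this record would be validated and written
        # to a staging table rather than printed.
        print(msg.value().decode("utf-8"))
finally:
    consumer.close()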

Key skills: GCP/AWS, Python, Terraform, Data Governance & Data Modelling.
