
Job Description

We are hiring for our client - a GCC based in Hyderabad that is yet to establish its presence in India.

Job Summary :

We are looking for a Senior Data Engineer to join our growing team of analytics experts. As a data engineer, you will be responsible for designing and implementing our data pipeline architecture and for optimizing data flow and collection for cross-functional groups, keeping scalability in mind. Data engineering is about building the underlying infrastructure, so being able to pass the limelight to someone else is imperative.

Required Skills :

- Hands-on experience in Data Integration and Data Warehousing

Strong proficiency in :


1. Google BigQuery

2. Python

3. SQL

4. Airflow/Cloud Composer

5. Ascend or any modern ETL tool

- Experience with data quality frameworks or custom-built validations

Preferred Skills :


- Knowledge of dbt for data transformation and modeling

- Familiarity with Collibra for data cataloging and governance

Qualifications :

- Advanced working knowledge of SQL, experience with relational databases, and familiarity with a variety of database technologies.

- Strong analytic skills related to working with unstructured datasets.

- Experience building a serverless data warehouse in GCP or AWS.

- 5+ years of experience in a Data Engineer role, with a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.


Responsibilities :


- Create and maintain optimal data pipeline architecture.

- Assemble large, complex data sets that meet functional / non-functional business requirements.

- Identify, design, and implement internal process improvements : automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.


- Design, build, and optimize data pipelines using Google BigQuery, ensuring use of best practices such as query optimization, partitioning, clustering, and scalable data modeling (see the BigQuery sketch below).

- Develop robust ETL/ELT processes using Python and SQL, with an emphasis on reliability, performance, and maintainability.

- Create and manage data flows in Ascend or an equivalent tool, including :

- Setting up read/write connectors for various data sources.

- Implementing custom connectors using Python (see the connector sketch below).

- Managing scheduling, failure notifications, and data services within Ascend.
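
For illustration only, a minimal sketch of the partitioning and clustering practices this section refers to, using the google-cloud-bigquery Python client; the project, dataset, and schema are placeholder assumptions, not part of the role.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project id

# Date-partitioned, clustered table: queries filtering on event_ts and
# user_id scan only the relevant partitions/blocks, not the full table.
ddl = """
CREATE TABLE IF NOT EXISTS analytics.events (
  event_id STRING,
  user_id  STRING,
  event_ts TIMESTAMP
)
PARTITION BY DATE(event_ts)
CLUSTER BY user_id
"""
client.query(ddl).result()  # wait for the DDL job to finish
```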
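Likewise, a sketch of the custom-connector pattern mentioned above. This is not the actual Ascend SDK (its interfaces differ); it only shows the general shape of a Python read connector against a hypothetical paginated HTTP API.

```python
from typing import Iterator

import requests


class PagedApiReader:
    """Reads records from a hypothetical paginated HTTP API, page by page."""

    def __init__(self, base_url: str, page_size: int = 500):
        self.base_url = base_url
        self.page_size = page_size

    def read(self) -> Iterator[dict]:
        page = 0
        while True:
            resp = requests.get(
                self.base_url,
                params={"page": page, "size": self.page_size},
                timeout=30,
            )
            resp.raise_for_status()
            rows = resp.json()
            if not rows:
                return  # no more pages
            yield from rows
            page += 1
```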


- Implement data quality checks (technical and business level) and participate in defining data testing strategies to ensure data reliability (an illustrative validation sketch appears below).

- Perform incremental loads and merge operations in BigQuery (see the MERGE sketch below).

- Build and manage Airflow (Cloud Composer) DAGs, configure variables, and handle scheduling as part of orchestration (see the DAG sketch below).

- Work within a CI/CD (DevSecOps) setup to promote code efficiently across environments.
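
A minimal sketch of the kind of custom-built validation described above, assuming BigQuery as the warehouse; the table and column names are made up for the example.

```python
from google.cloud import bigquery

client = bigquery.Client()


def assert_no_nulls(table: str, column: str) -> None:
    """Technical check: fail if the column contains any NULLs."""
    sql = f"SELECT COUNT(*) AS bad FROM `{table}` WHERE {column} IS NULL"
    bad = next(iter(client.query(sql).result())).bad
    if bad:
        raise ValueError(f"{table}.{column}: {bad} NULL values found")


def assert_row_count_between(table: str, lo: int, hi: int) -> None:
    """Business-level check: fail if the daily volume looks implausible."""
    sql = f"SELECT COUNT(*) AS n FROM `{table}`"
    n = next(iter(client.query(sql).result())).n
    if not lo <= n <= hi:
        raise ValueError(f"{table}: row count {n} outside [{lo}, {hi}]")


assert_no_nulls("analytics.events", "event_id")
assert_row_count_between("analytics.events", 1, 10_000_000)
```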
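A sketch of the incremental MERGE (upsert) pattern referenced above; the staging and target tables are placeholders matching the earlier DDL sketch.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Upsert the latest delta into the target: update matched rows, insert new ones.
merge_sql = """
MERGE `analytics.events` AS t
USING `staging.events_delta` AS s
ON t.event_id = s.event_id
WHEN MATCHED THEN
  UPDATE SET user_id = s.user_id, event_ts = s.event_ts
WHEN NOT MATCHED THEN
  INSERT (event_id, user_id, event_ts)
  VALUES (s.event_id, s.user_id, s.event_ts)
"""
job = client.query(merge_sql)
job.result()  # block until the merge finishes
print(f"merged rows: {job.num_dml_affected_rows}")
```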
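And a minimal Cloud Composer (Airflow 2) DAG sketch showing variable configuration and scheduling; the DAG id, variable key, and task body are hypothetical.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.models import Variable
from airflow.operators.python import PythonOperator


def extract(**context):
    # Airflow Variables let the same DAG read per-environment configuration.
    source = Variable.get("events_source_url")  # hypothetical variable key
    print(f"extracting from {source}")


with DAG(
    dag_id="events_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    PythonOperator(task_id="extract", python_callable=extract)
```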

- Participate in technical solutioning :


- Translate business integration needs into technical user stories.

- Contribute to technical design documents and provide accurate estimations.

- Conduct and participate in code reviews, enforce standards, and mentor junior engineers.

- Collaborate with QA and business teams during UAT; troubleshoot and resolve issues in development, staging, and production environments.
