
Job Description

We are seeking a skilled and experienced Data Architect to join our team in Chennai. In this role, you will be responsible for designing, developing, and managing robust and scalable data solutions that support our analytical and business intelligence needs. You will work closely with various stakeholders to understand data requirements and translate them into effective architectural designs, ensuring data quality, accessibility, and performance.

Key Responsibilities:

- Design and implement scalable and efficient data architectures, including data models, data flows, and ETL processes, to meet business requirements.

- Work with stakeholders to gather and analyze data requirements, translating them into technical specifications and architectural designs.

- Develop and maintain data warehousing solutions, ensuring data integrity, consistency, and availability for reporting and analytics.

- Utilize your expertise in SQL to design complex queries, optimize database performance, and ensure data accuracy.

- Leverage Python and PySpark for data manipulation, transformation, and automation of data pipelines (see the illustrative sketch after this list).

- Contribute to the strategic planning and evolution of our data platforms and infrastructure.

- Collaborate with data engineers, analysts, and other teams to ensure seamless data integration and accessibility.

- Implement best practices for data governance, security, and compliance.

- Troubleshoot data-related issues and provide timely resolutions.
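
To illustrate the kind of work described above, here is a minimal, hypothetical sketch of a PySpark ETL step: extract raw records, apply transformations, and load a curated table for reporting. All paths, application names, and column names (orders, order_timestamp, amount, and so on) are illustrative placeholders, not references to any actual system used in this role.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Illustrative ETL sketch; source and target locations are hypothetical.
    spark = SparkSession.builder.appName("daily_revenue_etl").getOrCreate()

    # Extract: read raw transactional data.
    orders = spark.read.parquet("s3://example-bucket/raw/orders/")

    # Transform: keep completed orders, derive a date column, and aggregate.
    daily_revenue = (
        orders
        .filter(F.col("status") == "COMPLETED")
        .withColumn("order_date", F.to_date("order_timestamp"))
        .groupBy("order_date", "region")
        .agg(
            F.sum("amount").alias("total_revenue"),
            F.countDistinct("customer_id").alias("unique_customers"),
        )
    )

    # Load: write the curated result to the warehouse layer, partitioned by date.
    (
        daily_revenue.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-bucket/curated/daily_revenue/")
    )

    spark.stop()

Partitioning the curated output by date is one common choice for reporting workloads; the actual layout, storage format, and orchestration would depend on the warehouse design in place.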

Required Skills and Experience:

Proficiency expectations are based on candidate self-assessment; we are looking for the following levels:

- SQL: Strong proficiency (self-assessment score of 3.5), with extensive experience in writing complex queries, performance tuning, and database design.

- Python: Solid experience (self-assessment score of 3.2) in data processing, scripting, and developing data-centric applications.

- PySpark: Good understanding and practical experience (self-assessment score of 3.2) in using PySpark for big data processing and distributed computing.

- Big Data Concepts: Good understanding of big data principles, technologies, and distributed systems (self-assessment score of 3).

- ETL (Extract, Transform, Load): Solid experience (self-assessment score of 3) in designing, developing, and managing ETL processes and pipelines.

- Data Warehousing: Good understanding and practical experience (self-assessment score of 3) in data warehousing concepts, dimensional modeling, and data mart design.

Desired Skills (Not Required, but a Plus):

- Familiarity with cloud-based data platforms (e.g., AWS, Azure, GCP).

- Experience with data orchestration tools.

- Knowledge of data visualization tools.

