hirist

Data Architect - Dremio

MNM HIRETECH PVT LTD
Multiple Locations
5 - 15 Years

Posted on: 30/01/2026

Job Description

Location : Mumbai, Bengaluru, Hyderabad, Gurugram


Outstation Candidates : Allowed


Experience : 5 - 15 Years


Notice Period : Immediate to 30 Days


Mandatory Requirements :


- Strong Dremio / Lakehouse Data Architect profile


- 5+ years of experience in Data Architecture / Data Engineering, with a minimum of 3 years of hands-on experience in Dremio


- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems


- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, and Iceberg, along with distributed query planning concepts


- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)


- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics


- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices


- Excellent communication skills with the ability to work closely with BI, data science, and engineering teams; strong documentation discipline


- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies


Preferred :


- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments


Ideal Candidate :


- Bachelor's or Master's in Computer Science, Information Systems, or a related field.


- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.


- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).


- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.


- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).


- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).


- Excellent problem-solving, documentation, and stakeholder communication skills.


Role & Responsibilities :


You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


- Design and implement Dremio lakehouse architecture on cloud platforms (AWS/Azure, including Snowflake/Databricks ecosystems).


- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.


- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.


- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).


- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.


- Support self-service analytics by enabling governed data products and semantic layers.


- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.


- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.

