Aptus Data Labs - Enterprise Data Architect - AWS

Posted on: 27/10/2025

Job Description

Job Title : Enterprise Data Architect (AWS)

Experience : 12+ years

Location : Bangalore

Employment Type : Full-time

Notice : Candidates with a shorter notice period, or who can join within 30 days, are preferred.

About the Role :

We are seeking a seasoned Enterprise Data Architect with 12+ years of experience in designing, implementing, and optimizing enterprise data platforms.

The ideal candidate will have deep expertise in cloud-native data architecture (AWS) and data engineering frameworks (Databricks, Spark) to drive large-scale digital transformation and AI/analytics initiatives.

Key Responsibilities :

- Lead the enterprise data architecture strategy, ensuring scalability, performance, and alignment with business goals.

- Architect and implement data lakehouse solutions using Databricks on AWS for unified data management and analytics.

- Design end-to-end data pipelines, integration frameworks, and governance models across structured and unstructured data sources.

- Define data models, metadata management, and data quality frameworks for enterprise-wide adoption.

- Collaborate with data engineering, AI/ML, analytics, and business teams to enable real-time and batch data processing.

- Evaluate and integrate emerging technologies in data mesh, GenAI data pipelines, and automation frameworks.

- Provide technical leadership and mentorship to data engineering and architecture teams.

- Establish best practices for data security, lineage, compliance (GDPR, HIPAA), and cloud cost optimization.

- Partner with business stakeholders to define data modernization roadmaps and cloud migration strategies.

Required Skills and Experience :

- 10-15 years of IT experience, including 5-8 years as an Enterprise Data Architect.

- Strong experience in Data Architecture, Data Engineering, or related domains.

- Proven experience architecting enterprise-scale data platforms using AWS (S3, Glue, Lambda, Redshift, EMR, Athena, Lake Formation, etc.).

- Hands-on expertise in Databricks (Delta Lake, Spark, Unity Catalog, MLflow).

- Strong experience with data modeling (dimensional, canonical, semantic models) and ETL/ELT pipelines.

- Deep understanding of data governance, master data management (MDM), and data cataloging tools.

- Proficient in SQL, Python, PySpark, and API-based data integration.

- Experience with modern data stack (Snowflake, dbt, Airflow, Kafka, etc.) is a plus.

- Strong understanding of AI/ML data readiness, metadata design, and data observability frameworks.

- Excellent communication and leadership skills to collaborate with technical and business teams.

- Certifications in AWS (Data Analytics / Solutions Architect) or Databricks preferred.

Preferred Qualifications :

- Experience in enterprise data strategy, governance frameworks, and migration of legacy systems to cloud.

- Exposure to GenAI data pipelines or LLM-based data preparation workflows.

- Strong background in data security, IAM, and compliance standards.

Education :

- Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field.
