- We are looking for an experienced Data Architect with 6-10 years of expertise in data engineering, data management, and architecture.
- This role will define the strategy, design, and implementation of enterprise-scale data platforms, ensuring data availability, governance, security, and scalability across the organization.
- The Data Architect will collaborate with technology and business stakeholders to align data initiatives with strategic goals.
Key Responsibilities:
- Define the enterprise data architecture roadmap, covering data modeling, integration, quality, and governance.
- Architect and implement data platforms including data warehouses, lakes, and lakehouses (e.g., Snowflake, BigQuery, Redshift, and Databricks).
- Establish standards for data modeling, schema design, metadata management, and lineage tracking.
- Lead the design and development of data integration frameworks, covering ETL/ELT pipelines, APIs, streaming, and real-time data delivery (an illustrative pipeline sketch follows this list).
- Ensure data governance, compliance, privacy, and security frameworks are embedded in all data platforms.
- Partner with data engineering, analytics, and product teams to build scalable data products and pipelines.
- Optimize enterprise-wide data storage, retrieval, and processing to balance performance and cost.
- Collaborate with AI/ML and business intelligence teams to enable advanced analytics and AI readiness.
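For illustration only, here is a minimal sketch of the extract-transform-load pattern referenced in the responsibilities above. All table and column names (stg_orders, fct_orders) are hypothetical, and sqlite3 stands in for an enterprise warehouse connection so the example runs standalone:

```python
# Illustrative ETL sketch: extract raw rows, apply a simple transform,
# and load the result into a target table. Names are hypothetical.
import sqlite3  # stand-in for an enterprise warehouse connection

def extract(conn):
    """Pull raw order rows from a hypothetical staging table."""
    return conn.execute(
        "SELECT order_id, amount_cents, country FROM stg_orders"
    ).fetchall()

def transform(rows):
    """Normalize amounts to dollars and uppercase country codes."""
    return [
        (order_id, amount_cents / 100.0, country.upper())
        for order_id, amount_cents, country in rows
    ]

def load(conn, rows):
    """Write cleaned rows into a hypothetical reporting table."""
    conn.executemany(
        "INSERT INTO fct_orders (order_id, amount_usd, country) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE stg_orders (order_id INTEGER, amount_cents INTEGER, country TEXT)")
    conn.execute("CREATE TABLE fct_orders (order_id INTEGER, amount_usd REAL, country TEXT)")
    conn.execute("INSERT INTO stg_orders VALUES (1, 1999, 'us')")
    load(conn, transform(extract(conn)))
    print(conn.execute("SELECT * FROM fct_orders").fetchall())  # [(1, 19.99, 'US')]
```

In a production setting the same three stages would typically be scheduled by an orchestrator such as Airflow and target a platform like Snowflake, BigQuery, or Redshift.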
Required Skills & Qualifications:
- 6-10 years of proven experience in data architecture, engineering, or solution design.
- Strong expertise in building data lakes (e.g., Databricks Delta Lake, Snowflake, Azure Data Lake Storage).
- Strong expertise in relational databases (Oracle, PostgreSQL, MySQL) and NoSQL technologies (MongoDB, Cassandra, DynamoDB).
- Proficiency in SQL and programming (Python, Scala, or Java).
- Deep understanding of data modeling techniques (dimensional, relational, document, graph); a brief dimensional-modeling sketch follows this list.
- Experience with big data frameworks (Spark, Hadoop, Hive) and cloud-native data platforms (AWS Redshift, GCP BigQuery, Azure Synapse).
- Strong grounding in data governance, data quality, and metadata management.
- Familiarity with data orchestration tools (Airflow, Dagster, Luigi).
- Cloud experience in AWS, GCP, or Azure for building large-scale distributed systems.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
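As a reference point for the dimensional-modeling requirement above, here is a minimal star-schema sketch: one fact table joined to two dimension tables, plus a typical aggregate query. All table and column names are hypothetical; sqlite3 is used only so the example runs standalone.

```python
# Illustrative star schema (dimensional modeling): a fact table referencing
# two dimension tables. All names are hypothetical placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fct_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    units_sold INTEGER,
    revenue REAL
);
INSERT INTO dim_date VALUES (20240101, '2024-01-01', '2024-01');
INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware');
INSERT INTO fct_sales VALUES (20240101, 1, 3, 29.97);
""")

# A typical dimensional query: aggregate the fact table by dimension attributes.
for row in conn.execute("""
    SELECT d.month, p.category, SUM(f.revenue) AS revenue
    FROM fct_sales f
    JOIN dim_date d ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.month, p.category
"""):
    print(row)  # ('2024-01', 'Hardware', 29.97)
```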
Preferred Qualifications:
- Experience with real-time streaming frameworks (Kafka, Flink, Kinesis, Pub/Sub); see the sketch after this list.
- Knowledge of data security frameworks, compliance (GDPR, HIPAA, DPDPA), and role-based access control.
- Familiarity with containerization and orchestration (Docker, Kubernetes).
- Exposure to AI/ML pipelines and data readiness for MLOps.
- Contributions to data standards, open-source projects, or industry best practices.
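To illustrate the streaming item above: a tumbling-window aggregation is the kind of pattern these frameworks implement. The sketch below is a pure-Python stand-in with hypothetical event data and no broker; it is not Kafka or Flink API code.

```python
# Illustrative streaming sketch: a tumbling one-minute window aggregation
# over a simulated event stream. In production this logic would run on a
# framework such as Flink or Kafka Streams; a plain generator stands in
# for the broker so the example is self-contained.
from collections import defaultdict

def events():
    """Hypothetical (timestamp_seconds, user_id, amount) events."""
    yield (0, "u1", 5.0)
    yield (30, "u2", 7.5)
    yield (65, "u1", 2.5)   # falls into the second one-minute window

def tumbling_window_sums(stream, window_seconds=60):
    """Sum amounts per (window, user) — the core of a windowed aggregation."""
    sums = defaultdict(float)
    for ts, user, amount in stream:
        window_start = (ts // window_seconds) * window_seconds
        sums[(window_start, user)] += amount
    return dict(sums)

print(tumbling_window_sums(events()))
# {(0, 'u1'): 5.0, (0, 'u2'): 7.5, (60, 'u1'): 2.5}
```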
Soft Skills & Qualities:
- Strategic thinker with the ability to translate business needs into scalable data solutions.
- Strong leadership and mentoring skills to guide data teams.
- Excellent communication and stakeholder management abilities.
- Proactive, ownership-driven mindset with focus on data quality, security, and reliability.
- Growth mindset; stays ahead of evolving data and cloud technology landscapes.