
JMAN Group - Technology Solution Architect - ETL/Data Pipeline

JMAN DIGITAL SERVICES PRIVATE LIMITED
7 - 15 Years
Chennai

Posted on: 30/04/2026

Job Description

About JMAN :


JMAN Group is a growing technology-enabled management consultancy that empowers organizations to create value through data.


Founded in 2010, we are a team of 100+ consultants based in London, UK, and a team of 170+ engineers in Chennai, India. Having delivered multiple projects in the US, we are now opening a new office in New York to help us support and grow our US client base.


We approach business problems with the mindset of a management consultancy and the capabilities of a tech company. We work across all sectors and have in-depth experience in private equity, pharmaceuticals, government departments, and high-street chains.


Our team is as cutting edge as our work. We pride ourselves on being great to work with - no jargon or corporate-speak, flexible to change, and receptive to feedback.


We have a huge focus on investing in the training and professional development of our team, to ensure they can deliver high quality work and shape our journey to becoming a globally recognised brand. The business has grown quickly in the last 3 years with no signs of slowing down.


Why work at JMAN :


Our vision is to ensure JMAN Group is the passport to our team's future. We want our team to go on a fast-paced, high-growth journey with us - when our people want to do something else, the skills, training, exposure, and values that JMAN has instilled in them should open doors all over the world.


Current Benefits :


- Competitive annual bonus
- Market-leading private health insurance
- Regular company socials
- Annual company away days
- Extensive training opportunities


TECHNOLOGY ARCHITECT :


Technical specification :


- Strong experience in at least one ETL/ELT or data pipeline tool, such as AWS Glue, Azure Data Factory, Synapse, Matillion, or dbt.
- Ability to use Git for version control and to maintain versions of data models.
- Hands-on experience with at least one data warehouse/platform, such as Azure SQL Server, Redshift, BigQuery, Databricks, or Snowflake.
- Strong hands-on experience writing SQL queries, optimizing database queries, and building stored procedures.
- Familiarity with data visualization tools such as Power BI, Tableau, or Looker.
- Ability to integrate with SaaS-based CRM, accounting, and financial systems (HubSpot, Salesforce, NetSuite, Zoho, etc.).
- Experience with the end-to-end delivery process, from understanding a business requirement or idea through to implementation.
- Expertise in data solution architectures and the tools and techniques used for data management.
- Ability to construct and implement operational data stores and data marts.
- Strong proficiency in Python and PySpark.
- Knowledge of the SDLC and Agile/Scrum methodologies.
- Experience with budgeting, service costing, and product migration from legacy to modernized applications.
- Proficiency in SQL and experience working with relational databases such as MySQL and PostgreSQL.
- Experience with NoSQL databases such as MongoDB, Cassandra, or DynamoDB is a plus.
- Strong experience working with on-premises and cloud-based databases and data lakes.
- Proven experience as a Data Engineer with a focus on cloud platforms (AWS/Azure/GCP).
- Experience with full-stack technologies is a plus.
- Hands-on experience taking ownership of design patterns and best practices, data models, and implementation strategy.
- The role requires extensive internal interfacing, including with CoE leadership and business units.
- Identify opportunities for automation and implement tools to streamline data engineering workflows.
- Ensure compliance with data governance policies, data privacy regulations, and industry best practices.
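The extract-transform-load flow described in the specification above can be sketched in plain Python (a simplified illustration only; the source data, table names, and fields here are hypothetical, and a real pipeline would use tools such as AWS Glue, Azure Data Factory, or dbt against a warehouse):

```python
import sqlite3

# Minimal ETL sketch. "raw" records are extracted from a hypothetical source,
# transformed (type casting, text normalisation), and loaded into a table.

def extract():
    # In a real pipeline this would read from a source system (e.g. a SaaS CRM API).
    return [
        {"order_id": 1, "amount": "100.50", "region": " East "},
        {"order_id": 2, "amount": "250.00", "region": "West"},
    ]

def transform(rows):
    # The "T" step: cast amounts to float and normalise region names.
    return [
        (r["order_id"], float(r["amount"]), r["region"].strip().lower())
        for r in rows
    ]

def load(rows, conn):
    # The "L" step: write the cleaned rows into a warehouse-style table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS clean_orders "
        "(order_id INTEGER, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO clean_orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM clean_orders").fetchone()[0]
```

The same extract/transform/load separation carries over directly to the listed tools, where each step becomes a job, activity, or model rather than a function.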


Responsibilities :


- Manage and lead a team of data engineers, providing guidance and mentorship and fostering a collaborative environment to maximize team performance.
- Diagnose the existing architecture and data maturity, and help the organization identify gaps and possible solutions.
- Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Comply with coding standards and ensure test cases, queries, and validations are created for all developed features.
- Data governance: metadata management, data quality, etc.
- Evaluate and recommend the optimal cloud data platform for customer needs.
- Provide guidance to the team and monitor the end-to-end operational process.
- Dimensional modelling and business domain: convert data into business domain entities using dimensional modelling or the Data Vault design pattern.
- Oversee the development and maintenance of data pipelines, ensuring data quality, reliability, security, and scalability.
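The dimensional modelling responsibility above can be illustrated with a minimal Python sketch: flat records are split into a dimension table (one row per distinct business entity, with a surrogate key) and a fact table holding measures plus foreign keys. The table and field names are hypothetical; in practice this would be done in SQL or dbt against the warehouse:

```python
# Flat source records (illustrative data, not from the job description).
flat_sales = [
    {"region": "north", "amount": 10.0},
    {"region": "south", "amount": 20.0},
    {"region": "north", "amount": 5.0},
]

# Dimension table: one row per distinct region, assigned a surrogate key.
dim_region = {
    name: key
    for key, name in enumerate(sorted({r["region"] for r in flat_sales}), start=1)
}

# Fact table: measures plus foreign keys into the dimension.
fact_sales = [
    {"region_key": dim_region[r["region"]], "amount": r["amount"]}
    for r in flat_sales
]
```

A Data Vault design would instead separate these into hubs, links, and satellites, but the core move is the same: business entities get stable keys, and measures reference them.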


Competencies :


- Ready to learn, adopt, and implement state-of-the-art open-source technologies to provide path-breaking business solutions for top organisations globally.
- Build moderately complex prototypes to test new concepts, contribute ideas on reusable frameworks, components, and data products or solutions, and help promote adoption of new technologies.
- Promote a culture of continuous learning and knowledge sharing; keep abreast of emerging data engineering technologies and trends, and recommend their adoption to improve efficiency and productivity.
- Design and develop project proposals, technology architecture, delivery management, and career development strategies.
- Strong analytical and critical thinking skills.
- Excellent written and oral communication in English to collaborate with cross-functional teams.


Required Skillset :


PRIMARY SKILLSET :


- ETL or ELT : AWS Glue / Azure Data Factory / Synapse / Matillion / dbt (any one - mandatory).
- Data Warehouse : Azure SQL Server / Redshift / BigQuery / Databricks / Snowflake (any one - mandatory).
- Cloud Experience : AWS / Azure / GCP (any one - mandatory).
- Programming Language : Apache Spark / Python.
- Data Patterns.
- Data Modelling.
- Product Migration.


SECONDARY SKILLSET :


- Data Visualization : Power BI / Tableau / Looker (any one - good to have).
- Full-stack technologies (good to have).
