Posted on: 26/03/2026
Role: Senior Data Engineer
Location: Bangalore (client-facing role; travel across India / Southeast Asia possible)
Experience: 5–10 years
About Greyamp:
Greyamp Consulting is a boutique transformation consulting firm that works with large enterprises across Southeast Asia and India to turn strategy into execution.
Over the past decade, our work has focused on operating model transformation and large-scale program activation. Today, we are expanding our focus toward data and AI-led transformation, helping organizations move from analytics experimentation to production-grade data and AI systems that drive real business decisions.
Our teams work closely with enterprise clients and global partners to design, build, and operationalize AI-driven solutions across industries such as manufacturing, mining, and healthcare.
Joining Greyamp at this stage offers a unique opportunity to work on high-impact transformation programs, build production-grade data platforms, and collaborate with consulting teams and client stakeholders to solve complex business problems using data and AI.
Role Overview:
We are looking for a Senior Data Engineer who can build and scale modern data platforms that power analytics, machine learning, and AI applications.
This is not a pure infrastructure role: our engineers work closely with data scientists, consultants, and client stakeholders to translate business use cases into production-ready data pipelines, models, and data products.
You will be responsible for designing and implementing scalable data pipelines, enabling AI and analytics use cases, and ensuring that data systems are reliable, performant, and production-ready.
What You Will Work On:
You will contribute to enterprise data and AI programs such as:
- Building data platforms that power predictive analytics and ML use cases
- Developing pipelines that enable fraud detection, operational forecasting, or process optimization
- Supporting AI and LLM-based solutions through reliable data infrastructure
- Productionizing analytics models into real-world workflows
- Creating data visibility and monitoring systems for enterprise decision-making
Your work will move beyond building pipelines: you will help turn data into operational impact.
Key Responsibilities:
Data Platform & Pipeline Development:
- Design and build scalable data pipelines (batch and streaming) for analytics and AI workloads
- Develop robust ETL/ELT workflows to ingest, transform, and curate data from multiple enterprise systems
- Build data models and datasets optimized for analytics and machine learning
Production-Ready Data Systems:
- Ensure reliability, monitoring, and observability of data pipelines
- Implement data quality checks and governance mechanisms
- Optimize performance and cost of data processing pipelines
Enabling AI & Advanced Analytics:
- Support data scientists in building and deploying ML pipelines
- Build feature pipelines and data workflows required for model training and inference
- Enable production deployment of analytics and AI use cases
Collaboration & Delivery:
- Work closely with consultants, product owners, and client stakeholders to understand business requirements
- Translate business use cases into technical data solutions
- Contribute to architecture discussions and platform design decisions
Required Experience:
- 5–10 years of experience in data engineering, data platform development, or related roles
- Hands-on experience building production-grade data pipelines
- Experience working with modern cloud-based data platforms
Core Technical Skills:
Programming & Data Processing:
- Strong proficiency in Python and SQL
- Experience working with large-scale data processing frameworks (e.g., Spark)
Modern Data Platforms:
Experience with at least one modern data platform such as:
- Databricks
- Snowflake
- BigQuery
- Redshift
Cloud Data Services:
Experience working with cloud-native data services on AWS, Azure, or GCP, such as:
- Storage (S3 / Blob / GCS)
- Data orchestration tools
- Streaming or event processing services
Data Engineering Practices:
- Experience building reliable ETL/ELT pipelines
- Data modeling for analytics and machine learning workloads
- Pipeline orchestration tools (Airflow, Dagster, Prefect, or similar)
- Version control and CI/CD practices for data workflows
Preferred Experience:
- Exposure to ML pipelines or MLOps workflows
- Experience with streaming data architectures
- Experience supporting LLM or AI-driven applications
- Familiarity with data governance, lineage, and cataloging tools
- Experience working in consulting or client-facing environments
What Makes Someone Successful in This Role:
We look for engineers who:
- Think in business outcomes, not just pipelines
- Can translate ambiguous problems into structured data solutions
- Are comfortable working in consulting-led transformation programs
- Balance engineering rigor with pragmatic delivery
- Enjoy collaborating across engineering, consulting, and business teams
Why Join Greyamp:
- Work on real production AI and data systems for enterprise clients
- Collaborate with consulting teams solving complex transformation problems
- Exposure to global enterprise programs across multiple industries
- Opportunity to shape and grow Greyamp's data and AI practice
Posted in: Data Engineering
Functional Area: Data Engineering
Job Code: 1623887