Posted on: 11/07/2025
Data Engineer, Individual Contributor (IC)
Kritsnam Technologies | Hyderabad | On-site (AWS-centric)
Own the entire AWS data platform that powers Kritsnam's industrial-water intelligence.
Beyond ultrasonic-meter telemetry, you'll integrate data from manufacturing (ERP/MES), field-service operations, supply-chain, and finance systems, creating a single, trusted source of truth.
You are the sole data-engineering owner, with Step Functions as your orchestration backbone and AWS native services everywhere else.
- Design Step Functions (or MWAA) workflows to process IoT telemetry, ERP/MES, CRM exports, and financial data.
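As an illustration of the kind of Step Functions workflow described above, here is a minimal Amazon States Language sketch with retry and failure-notification handling. All names, ARNs, and job references are placeholders, not Kritsnam's actual resources:

```json
{
  "Comment": "Hypothetical telemetry pipeline; all resource names are placeholders.",
  "StartAt": "ValidateTelemetry",
  "States": {
    "ValidateTelemetry": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": { "FunctionName": "validate-telemetry" },
      "Retry": [
        { "ErrorEquals": ["States.TaskFailed"], "IntervalSeconds": 5, "MaxAttempts": 3, "BackoffRate": 2.0 }
      ],
      "Catch": [{ "ErrorEquals": ["States.ALL"], "Next": "NotifyFailure" }],
      "Next": "RunGlueJob"
    },
    "RunGlueJob": {
      "Type": "Task",
      "Resource": "arn:aws:states:::glue:startJobRun.sync",
      "Parameters": { "JobName": "transform-telemetry" },
      "Catch": [{ "ErrorEquals": ["States.ALL"], "Next": "NotifyFailure" }],
      "End": true
    },
    "NotifyFailure": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:REGION:ACCOUNT:pipeline-alerts",
        "Message": "Telemetry pipeline failed"
      },
      "End": true
    }
  }
}
```

The `.sync` suffix on the Glue integration makes the state machine wait for the job to finish, so failures surface in the workflow rather than silently downstream.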
Ingestion :
Must-have : AWS IoT Core (MQTT), AWS Glue Jobs / Workflows for ERP/CSV, AWS AppFlow or custom Lambda for SaaS (Salesforce, ServiceNow).
Nice-to-have : Amazon Kinesis Data Streams.
Orchestration :
Must-have : AWS Step Functions, EventBridge, CloudWatch Alarms, SNS.
Nice-to-have : Amazon MWAA (Managed Airflow).
Transformation :
Must-have : AWS Glue (Python/Spark), dbt in CodeBuild.
Nice-to-have : Glue Studio, Glue Streaming.
Quality & Governance :
Must-have : Glue Data Quality, Deequ, Lake Formation.
Nice-to-have : AWS DataZone.
Serving / Warehouse :
Must-have : Aurora Postgres (with pgvector).
Nice-to-have : Amazon Redshift Serverless.
Vector / LLM :
Must-have : pgvector, OpenSearch Serverless vector search.
Nice-to-have : Amazon Bedrock embeddings.
Infrastructure & CI/CD :
Must-have : AWS CDK / CloudFormation, CodePipeline + CodeBuild, ECR, Linux/Bash.
Nice-to-have : Systems Manager, CodeDeploy (blue/green).
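As an illustration of the CloudFormation skills listed above, a minimal template sketch that applies the S3 lifecycle tiering this role manages. The bucket name and day thresholds are invented for the example:

```yaml
# Hypothetical template fragment; bucket name and thresholds are placeholders.
Resources:
  TelemetryLakeBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: telemetry-lake-example
      LifecycleConfiguration:
        Rules:
          - Id: TierRawTelemetry
            Status: Enabled
            Transitions:
              - StorageClass: STANDARD_IA   # infrequent access after 30 days
                TransitionInDays: 30
              - StorageClass: GLACIER       # archive after 180 days
                TransitionInDays: 180
```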
Observability :
Must-have : CloudWatch Logs & Dashboards, X-Ray, AWS Cost Explorer.
Nice-to-have : QuickSight SPICE ops dashboards.
No Kafka or Hadoop: the architecture is sized for ~1 GB/day of telemetry plus structured business data.
You'll thrive here if you :
- Have 5–8 years building production data stacks exclusively on AWS.
- Turn scattered ERP/CRM/IoT feeds into contract-driven, test-gated pipelines.
- Love designing Step Functions and Glue workflows with bullet-proof error handling.
- Optimize Aurora & Redshift queries and manage storage costs via S3 tiering.
- Communicate trade-offs (e.g., AppFlow vs. custom ETL) to tech & business leaders.
- Enjoy being a team-of-one who owns prioritization, delivery, and ops.
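The "contract-driven, test-gated pipelines" idea above can be sketched with a minimal, library-free Python example. The schema fields and range rule are invented for illustration, not Kritsnam's actual data contract:

```python
# Minimal illustration of a contract-driven check on telemetry records.
# Field names, types, and bounds below are hypothetical.
CONTRACT = {
    "device_id": str,
    "flow_lpm": float,   # flow rate, litres per minute
    "ts": int,           # epoch seconds
}

def validate(record: dict) -> list[str]:
    """Return a list of contract violations; an empty list means the record passes."""
    errors = []
    for field, ftype in CONTRACT.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"bad type for {field}: {type(record[field]).__name__}")
    # Example range rule: flow readings must be non-negative.
    if isinstance(record.get("flow_lpm"), float) and record["flow_lpm"] < 0:
        errors.append("flow_lpm out of range")
    return errors

good = {"device_id": "dm-001", "flow_lpm": 12.5, "ts": 1730000000}
bad = {"device_id": "dm-002", "flow_lpm": -3.0}

print(validate(good))  # []
print(validate(bad))   # ['missing field: ts', 'flow_lpm out of range']
```

In a real pipeline this kind of gate would run as a Step Functions task before any load step, failing the workflow instead of letting bad records through.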
Bonus points :
- Migrated cron or Bash scripts into Step Functions or MWAA.
- Implemented vector search (pgvector/OpenSearch) for RAG use-cases.
- Integrated SAP, Oracle, or MES systems with AWS Glue.
- Deep IoT fleet-management knowledge (Device Shadow, Defender, OTA).
- AWS Certified Machine Learning – Specialty certification.
Functional Area : Data Engineering
Job Code : 1511903