Posted on: 05/03/2026
Description:
Working hours: 8-4 EST minimum, 9-5 EST preferred.
Background check: required upon selection; not needed for submission.
Timeframe: immediate start desired; a delay of up to ~2 weeks from selection is acceptable.
Data/GenAI Engineer (Contractor)
Position Overview:
We are seeking experienced Data/GenAI Engineers to join our Professional Services team on a contract basis. You will work directly on client engagements delivering production-grade Generative AI solutions, including conversational AI assistants, document processing automation, RAG (Retrieval-Augmented Generation) systems, and AI-powered data analytics platforms. This role requires hands-on technical execution, client interaction, and the ability to work independently within an agile delivery framework.
Primary Responsibilities:
GenAI Solution Development:
- Design and implement production-ready Generative AI applications using Amazon Bedrock, Anthropic Claude, and other foundation models (a minimal invocation sketch follows this list)
- Build and optimize RAG (Retrieval-Augmented Generation) pipelines with vector databases (Weaviate, OpenSearch, Pinecone)
- Develop AI agents and multi-agent orchestration systems using frameworks like LangChain, LlamaIndex, or custom implementations
- Create conversational AI interfaces with natural language understanding, intent detection, and context management
- Implement prompt engineering strategies, few-shot learning, and fine-tuning approaches for domain-specific applications
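To illustrate the kind of Bedrock work this covers, here is a minimal sketch of a single-turn Claude call through the Bedrock Converse API with boto3; the region, model ID, prompt, and inference settings are illustrative placeholders, not project specifics.

```python
import boto3

# Bedrock runtime client; region and credentials come from the environment
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask_claude(question: str) -> str:
    """Send a single-turn prompt to a Claude model via the Bedrock Converse API."""
    response = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model ID
        messages=[{"role": "user", "content": [{"text": question}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    # The assistant turn comes back under output.message.content as content blocks
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(ask_claude("Summarize the claims intake process in two sentences."))
```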
AWS Cloud Architecture & Development:
- Build serverless architectures using AWS Lambda, API Gateway, Step Functions, and EventBridge (a minimal handler sketch follows this list)
- Design and implement data pipelines for AI model training, inference, and feedback loops
- Develop RESTful APIs and WebSocket connections for real-time AI interactions
- Configure and optimize AWS services including S3, DynamoDB, RDS, SQS, SNS, and CloudWatch
- Implement infrastructure-as-code using CloudFormation, CDK, or Terraform
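As a rough illustration of the Lambda and API Gateway pattern above, the following is a minimal sketch of a Python Lambda handler behind an API Gateway proxy integration; the payload fields and the downstream call are placeholders.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration."""
    # Proxy integrations deliver the HTTP body as a JSON string (or None)
    payload = json.loads(event.get("body") or "{}")
    question = payload.get("question", "")

    # Placeholder for the real work: model invocation, retrieval, etc.
    answer = f"Received: {question}"

    # Proxy integrations expect statusCode / headers / body in the response
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"answer": answer}),
    }
```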
Data Engineering & ML Operations:
- Design and build data ingestion pipelines for structured and unstructured data sources
- Implement ETL/ELT workflows for data preparation, cleaning, and transformation
- Create vector embeddings and semantic search capabilities for knowledge retrieval (see the sketch after this list)
- Develop data validation, quality monitoring, and observability frameworks
- Optimize model inference performance, latency, and cost efficiency
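One possible sketch of the embedding and semantic-search work listed above, assuming Bedrock's Titan embedding model and a local FAISS index; the model ID, sample chunks, and query are illustrative.

```python
import json
import boto3
import numpy as np
import faiss  # pip install faiss-cpu

bedrock = boto3.client("bedrock-runtime")

def embed(text: str) -> np.ndarray:
    """Embed text with a Bedrock Titan embedding model (illustrative model ID)."""
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(resp["body"].read())["embedding"], dtype="float32")

# Index a few document chunks (placeholders)
chunks = ["Claims must be filed within 90 days.", "Prior authorization is required for imaging."]
vectors = np.stack([embed(c) for c in chunks])
faiss.normalize_L2(vectors)                  # normalize so inner product = cosine similarity
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(vectors)

# Retrieve the chunk most relevant to a query
query = embed("How long do I have to submit a claim?").reshape(1, -1)
faiss.normalize_L2(query)
scores, ids = index.search(query, 1)
print(chunks[ids[0][0]], float(scores[0][0]))
```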
Client Engagement & Delivery:
- Participate in sprint planning, daily standups, and client review sessions
- Translate business requirements into technical specifications and implementation plans
- Provide technical guidance and recommendations to clients on AI/ML best practices
- Document architecture decisions, code, and deployment procedures
- Troubleshoot production issues and implement solutions quickly
Required Technical Skills (Priority Order):
Tier 1 - Critical Must-Haves:
- Amazon Bedrock: Hands-on experience with foundation models (Claude, Nova, Llama, or others), model invocation, streaming responses, and guardrails
- Agent Frameworks & Orchestration: Production experience with LangChain, LlamaIndex, Bedrock Agents, or custom multi-agent orchestration systems
- Python: Advanced proficiency with modern Python (3.9+), including async/await, type hints, and testing frameworks (pytest, unittest)
- AWS Lambda & Serverless: Production experience building event-driven architectures, function optimization, and cold start mitigation
- Vector Databases: Practical experience with at least one of Weaviate, OpenSearch, Pinecone, Chroma, or FAISS for semantic search
- LLM Integration: Direct experience with LLM APIs (Anthropic, OpenAI, Cohere), prompt engineering, and response parsing
- API Development: RESTful API design and implementation using FastAPI, Flask, or similar frameworks (a minimal sketch follows)
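For the API layer, here is a minimal FastAPI sketch of the kind of endpoint this role builds; the request/response models and the echo placeholder stand in for a real retrieval and LLM call.

```python
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="GenAI service (sketch)")

class AskRequest(BaseModel):
    question: str
    session_id: Optional[str] = None  # illustrative field for conversation context

class AskResponse(BaseModel):
    answer: str

@app.post("/ask", response_model=AskResponse)
async def ask(req: AskRequest) -> AskResponse:
    # Placeholder for the real work: retrieval, Bedrock call, response parsing
    return AskResponse(answer=f"Echo: {req.question}")

# Local run: uvicorn app:app --reload
```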
Tier 2 - Highly Valuable:
- Amazon Bedrock AgentCore: Experience with AgentCore Runtime, Memory, Gateway, and Observability for building production agent systems
- AWS API Gateway: Configuration, authorization, throttling, and integration with Lambda/backend services
- DynamoDB: NoSQL data modeling, single-table design, GSI/LSI optimization, and DynamoDB Streams
- AWS Step Functions: Workflow orchestration for complex AI pipelines and multi-step processes
- Docker & Containers: Containerization, ECR, ECS/Fargate deployment for AI workloads
- Data Processing: Experience with Pandas, PySpark, AWS Glue, or similar data transformation tools
Tier 3 - Strong Differentiators:
- RAG Architecture: End-to-end RAG system design including chunking strategies, retrieval optimization, and context management
- Embedding Models: Working knowledge of text embeddings (Bedrock Titan, OpenAI, Cohere) and embedding optimization
- AWS S3 & Data Lakes: S3 event notifications, lifecycle policies, and data lake architecture patterns
- CloudWatch & Observability: Logging, metrics, alarms, and distributed tracing for AI applications
- IAM & Security: AWS security best practices, least privilege access, secrets management (Secrets Manager, Parameter Store)
- CI/CD Pipelines: Experience with CodePipeline, GitHub Actions, or GitLab CI for automated deployments
Tier 4 - Nice to Have:
- SageMaker: Model training, deployment, endpoints, and feature stores
- OpenSearch: Full-text search, vector search, and hybrid search implementations
- EventBridge: Event-driven architectures and cross-service integrations
- WebSockets: Real-time bidirectional communication for streaming AI responses
- AWS CDK: Infrastructure-as-code using Python or TypeScript CDK constructs
- Fine-tuning & Training: Experience with model fine-tuning, PEFT methods, or custom model training
Required Experience & Qualifications:
- 5+ years of software engineering experience, including 2+ years focused on AI/ML, data engineering, or cloud-native development
- 2+ years of hands-on AWS experience with production deployments
- 1+ years of direct Generative AI experience (LLMs, embeddings, RAG, agents)
- Proven track record delivering production AI applications from concept to deployment
- Strong understanding of software engineering best practices (version control, testing, code review, documentation)
- Experience working in agile/scrum environments with distributed teams
- Excellent problem-solving skills and ability to work independently with minimal supervision
- Strong written and verbal communication skills for client-facing interactions
Preferred Qualifications:
- AWS Certifications: Solutions Architect Associate/Professional, Machine Learning Specialty, or Developer Associate
- Background in healthcare, financial services, or regulated industries with understanding of compliance requirements (HIPAA, PCI-DSS, SOC 2)
- Contributions to open-source AI/ML projects or published technical content
- Experience with multi-tenant SaaS architectures and data isolation patterns
- Knowledge of cost optimization strategies for AI workloads (model selection, caching, batching)
- Familiarity with frontend frameworks (React, Angular) for building AI-powered UIs
Project Examples You May Work On:
- Building conversational AI assistants for customer service automation using Bedrock and Anthropic Claude
- Implementing RAG systems for document processing, classification, and intelligent search
- Developing AI-powered data extraction and validation pipelines for healthcare claims processing
- Creating multi-agent systems for complex workflow automation and decision support
- Building integration marketplaces connecting AI capabilities to third-party platforms
- Designing voice AI solutions using Amazon Connect and Polly for customer engagement
- Implementing AI-driven content recommendation and personalization engines