Posted on: 16/09/2025
DevOps Engineer
About Sheshi :
SHESHI.AI is a fast-growing SaaS platform at the forefront of applying artificial intelligence (AI) and automation to make financial reporting and analytics faster and more efficient.
Having completed an exceptional 0-to-1 journey, we are now moving into our growth phase and are looking to add exceptional talent across all verticals of the company.
Here are a few reasons why Sheshi is a great place to work :
- Challenge yourself and learn new technologies continuously.
- Solve unique problems that have never been solved before.
- Revolutionize the fintech sector with Artificial Intelligence & Machine Learning (AI/ML).
- Exhibit your creativity and make users fall in love with finance.
- Hang out, collaborate, and enjoy your work with vibrant teams.
We are seeking a skilled and experienced DevOps Engineer to design, develop, and maintain a robust and scalable platform supporting our applications and services.
You will collaborate with cross-functional teams, optimize infrastructure, and drive automation and observability. The ideal candidate will have a passion for continuous improvement, strong problem-solving abilities, and expertise in cloud technologies, containerization, and CI/CD pipelines.
Key Responsibilities :
Platform Development :
- Design, develop, and maintain scalable, reliable, and secure platforms that support our applications and services.
- Apply expertise in cloud platforms, with AWS being a must-have (experience with Azure or GCP is a plus).
- Use containerization technologies like Docker and Kubernetes (including EKS or GKE).
- Leverage serverless technologies such as AWS Lambda, Fargate, and ECS for scalability and efficiency (a minimal Lambda sketch follows this list).
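As a hedged illustration of the serverless work above, here is a minimal AWS Lambda handler in Python; the event shape and response body are placeholders, not details from this role.

```python
"""Minimal sketch of an AWS Lambda handler (illustrative only)."""
import json


def handler(event, context):
    # Echo the incoming event back; a real function would call downstream
    # services or write to S3/DynamoDB instead.
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }
```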
Infrastructure Management :
- Manage and optimize servers, networks, and storage systems.
- Provision and manage infrastructure with Infrastructure as Code (IaC) tools like Terraform, CloudFormation, Pulumi, or Ansible (see the Pulumi sketch below).
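For illustration, a minimal Pulumi program in Python (one of the IaC tools listed above) that provisions a tagged S3 bucket; the resource name and tags are assumptions, not part of Sheshi's actual stack.

```python
"""Sketch: provision a private, tagged S3 bucket with Pulumi's Python SDK."""
import pulumi
import pulumi_aws as aws

# Illustrative resource; the bucket name and tags are placeholders.
artifacts_bucket = aws.s3.Bucket(
    "platform-artifacts",
    acl="private",
    tags={"team": "platform", "managed-by": "pulumi"},
)

# Export the generated bucket name so pipelines or other stacks can reference it.
pulumi.export("artifacts_bucket_name", artifacts_bucket.id)
```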
Observability :
- Set up and manage monitoring, logging, and alerting infrastructure.
- Use tools like OpenTelemetry, the ELK stack, Prometheus, and Grafana (a Prometheus metrics sketch follows this list).
- (Optional) Work with distributed tracing tools like Jaeger or Zipkin and event-driven monitoring tools like Honeycomb.io.
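To make the metrics bullet concrete, a small sketch using the official Prometheus Python client to expose a request counter and a latency histogram; the metric names, labels, and port are illustrative assumptions.

```python
"""Sketch: expose Prometheus metrics from a Python service."""
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Metric names and labels are placeholders.
REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")


def handle_request(endpoint: str) -> None:
    """Simulate handling a request and record its count and latency."""
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(endpoint=endpoint).inc()


if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at /metrics on port 8000
    while True:
        handle_request("/health")
```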
Automation :
- Drive automation and continuous improvement of infrastructure orchestration, deployments, and upgrades (an example automation script follows this list).
- Utilize CI/CD tools such as Jenkins, GitLab CI/CD, and ArgoCD.
- Apply strong expertise in scripting languages like Python and Bash, and in configuration management tools like Ansible.
- Implement progressive delivery with tools like Flagger.
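As one hedged example of the routine automation scripting mentioned above, a boto3 script that reports running EC2 instances missing an "owner" tag; the tag policy and region are assumptions made for illustration.

```python
"""Sketch: report running EC2 instances that are missing a required tag."""
import boto3

REQUIRED_TAG = "owner"  # assumed tag policy, for illustration only


def find_untagged_instances(region: str = "us-east-1") -> list:
    """Return the IDs of running instances that lack the required tag."""
    ec2 = boto3.client("ec2", region_name=region)
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {tag["Key"] for tag in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    untagged.append(instance["InstanceId"])
    return untagged


if __name__ == "__main__":
    print(find_untagged_instances())
```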
CI/CD Pipelines :
- Develop and maintain CI/CD pipelines for efficient software delivery.
- Implement deployment methodologies like Blue/Green and Canary releases (a traffic-shifting sketch follows this list).
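For illustration, a hedged sketch of one way to shift a share of traffic to a canary deployment by updating an AWS ALB listener's weighted target groups with boto3; the ARNs and weights are placeholders.

```python
"""Sketch: route a percentage of ALB traffic to a canary target group."""
import boto3


def set_canary_weight(listener_arn: str, stable_tg: str, canary_tg: str,
                      canary_weight: int) -> None:
    """Send `canary_weight`% of traffic to the canary target group, the rest to stable."""
    elbv2 = boto3.client("elbv2")
    elbv2.modify_listener(
        ListenerArn=listener_arn,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": stable_tg, "Weight": 100 - canary_weight},
                    {"TargetGroupArn": canary_tg, "Weight": canary_weight},
                ]
            },
        }],
    )


# Example (ARNs are placeholders): shift 10% of traffic to the canary.
# set_canary_weight(listener_arn, stable_arn, canary_arn, 10)
```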
Monitoring and Troubleshooting :
- Continuously monitor platform performance and proactively identify and resolve issues.
- Experience with log aggregation tools like Fluentd and Logstash, and incident management platforms like PagerDuty or Opsgenie is a plus.
Collaboration :
- Collaborate with development teams to align solutions with their goals.
- Use collaboration tools such as Jira and Confluence to document and manage projects.
Security :
- Implement security best practices and protect platform and data integrity.
- Use tools like IAM roles, security groups, GuardDuty, Aqua Security, or Sysdig Secure.
- Manage secrets using AWS Secrets Manager or HashiCorp Vault (a Secrets Manager sketch follows this list).
- (Optional) Perform penetration testing and ensure compliance with industry standards.
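A minimal sketch of reading a secret from AWS Secrets Manager with boto3, as mentioned above; the secret name and JSON shape are assumptions.

```python
"""Sketch: read a JSON secret from AWS Secrets Manager."""
import json

import boto3


def get_db_credentials(secret_id: str) -> dict:
    """Fetch and parse a JSON-formatted secret."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])


# Example (the secret name and keys are placeholders):
# creds = get_db_credentials("prod/platform/postgres")
# print(creds["username"])
```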
Database Management :
- Work with SQL and NoSQL databases such as PostgreSQL, MongoDB, and Redis (optional); a connectivity-check sketch follows this list.
- Monitor databases using tools like Percona Monitoring and Management (PMM).
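As a hedged example of everyday database work, a quick connectivity check for PostgreSQL and Redis using psycopg2 and redis-py; the connection details are placeholders.

```python
"""Sketch: quick connectivity checks for PostgreSQL and Redis."""
import psycopg2
import redis


def check_postgres(dsn: str) -> bool:
    """Return True if a trivial query succeeds against the given DSN."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            return cur.fetchone() == (1,)


def check_redis(host: str, port: int = 6379) -> bool:
    """Return True if the Redis server answers PING."""
    return bool(redis.Redis(host=host, port=port).ping())


# Example (connection details are illustrative):
# check_postgres("postgresql://app:secret@db.internal:5432/appdb")
# check_redis("cache.internal")
```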
Requirements :
Mandatory Skills :
- Proven experience with AWS and containerization technologies (Docker, Kubernetes, EKS, or GKE).
- Proficiency in scripting languages such as Python and Bash, and in IaC tools like Terraform and CloudFormation.
- Experience with CI/CD tools (e.g., Jenkins, GitLab CI/CD, ArgoCD).
- Hands-on experience with monitoring tools like ELK, Prometheus, and Grafana.
Preferred Skills :
- Knowledge of Azure, GCP, and serverless technologies like AWS Lambda.
- Experience with distributed tracing (Jaeger, Zipkin) and event-driven monitoring tools (Honeycomb.io).
- Familiarity with database systems like PostgreSQL, MongoDB, Redis, and database monitoring tools.
- Exposure to progressive delivery (Flagger) and incident management systems (PagerDuty, Opsgenie).
Soft Skills :
- Strong problem-solving and troubleshooting skills.
- Ability to work collaboratively in a fast-paced, dynamic environment.
Posted By : Arpitha, People Success Partner at Sheshi AI
Posted in : DevOps / SRE
Functional Area : DevOps / Cloud
Job Code : 1547310