Posted on: 18/03/2026
Description:
Our Context.
At NeoXam, we are looking for a Performance Test Architect to build and lead our performance engineering practice from the ground up.
You will own the end-to-end performance strategy for the ARO platform, from designing the test lab infrastructure and building benchmarking frameworks to embedding performance validation into our CI/CD pipelines.
This is a high-impact, hands-on leadership role where you will shape how we measure, optimize, and guarantee the performance of a complex distributed system under real-world capital markets workloads.
Required Qualifications:
- 10+ years of hands-on experience in performance testing and engineering for large-scale, distributed enterprise applications.
- Deep expertise with load testing tools (JMeter, Gatling, and/or k6), including scripting complex scenarios, correlation, parameterization, and custom plugins.
- Strong experience with Kubernetes and Docker: understanding of pod scheduling, resource limits, HPA/VPA, and their impact on application performance.
- Proven track record of building performance test frameworks integrated into CI/CD pipelines (Jenkins, GitLab CI, or GitHub Actions).
- Hands-on experience with monitoring and observability stacks: Grafana, Prometheus, ELK (Elasticsearch, Logstash, Kibana), or similar APM tools.
- Solid understanding of database performance: SQL tuning, indexing strategies, connection pooling, and behavior under concurrent load (PostgreSQL, Oracle, or SQL Server preferred).
- Experience with message queue systems (RabbitMQ, Kafka, or ActiveMQ) and API performance testing (REST/gRPC).
- Proficiency in at least one programming language (Java, Python, or Go) for scripting, tooling, and automation.
- Experience leading or mentoring a team of 2+ engineers in performance testing or related disciplines.
Preferred Qualifications:
- Domain experience in capital markets, financial services back-office operations, or reconciliation systems.
- Experience performance testing shared-database architectures with multiple concurrent services.
- Familiarity with Infrastructure-as-Code (Terraform, Ansible) and GitOps practices.
- Experience with chaos engineering tools (Litmus, Chaos Monkey) for resilience testing.
- Certifications in performance engineering (e.g., ISTQB Performance Testing, AWS/GCP/Azure certifications).
- Experience with financial data volumes: millions of positions/trades per reconciliation cycle.
Key Responsibilities:
Performance Test Lab & Infrastructure:
- Design, build, and maintain a dedicated performance test lab that mirrors production Kubernetes environments (namespaces, resource quotas, network policies, storage classes).
- Provision and manage test infrastructure using Infrastructure-as-Code (Terraform, Helm charts) for repeatable, version-controlled environments.
- Set up isolated test databases with realistic data volumes (millions of reconciliation records) and configure MQ/API test harnesses for inter-module communication.
- Establish baseline hardware and configuration profiles for consistent benchmarking across releases.
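Seeding an isolated test database with realistic volumes usually starts from a deterministic synthetic-data generator. A minimal sketch in Python, with all field names and the mismatch rate purely illustrative (the actual ARO reconciliation schema is not specified here):

```python
import csv
import io
import random
import string

def generate_reconciliation_records(n, seed=42):
    """Yield synthetic reconciliation rows (all field names are illustrative)."""
    rng = random.Random(seed)  # fixed seed makes test runs repeatable
    for i in range(n):
        internal_qty = rng.randint(1, 1_000_000)
        # Inject a small, controlled mismatch rate so break-handling paths
        # get exercised under load, not just the happy path.
        external_qty = internal_qty if rng.random() > 0.02 else internal_qty + rng.randint(1, 10)
        yield {
            "trade_id": f"TRD{i:010d}",
            "instrument": "".join(rng.choices(string.ascii_uppercase, k=4)),
            "internal_qty": internal_qty,
            "external_qty": external_qty,
            "status": "MATCHED" if internal_qty == external_qty else "BREAK",
        }

def records_to_csv(records):
    """Serialize records to CSV, suitable for a bulk-load path such as PostgreSQL COPY."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf,
        fieldnames=["trade_id", "instrument", "internal_qty", "external_qty", "status"],
    )
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

Generating to CSV and bulk-loading (rather than inserting row by row) keeps seeding millions of records tractable between benchmark runs.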
Test Strategy & Benchmarking:
- Define the comprehensive performance test strategy covering load, stress, soak, spike, capacity, and scalability testing across all modules.
- Create workload models based on real production traffic patterns: peak reconciliation windows, batch processing cycles, concurrent user simulations, and MQ throughput scenarios.
- Establish KPIs and SLAs: response time percentiles (P50/P95/P99), throughput (TPS), resource utilization ceilings (CPU, memory, IOPS), reconciliation processing rates, and error thresholds.
- Design module-level and end-to-end integration performance test suites that validate the shared database model under concurrent multi-module access.
- Build automated benchmarking frameworks that produce comparable, versioned performance reports across releases.
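The percentile KPIs above (P50/P95/P99) can be computed from raw latency samples with the standard library; a minimal sketch, assuming latencies are collected in milliseconds:

```python
import statistics

def latency_kpis(samples_ms):
    """Compute P50/P95/P99 response-time percentiles from raw latency samples.

    statistics.quantiles with n=100 and the inclusive method interpolates
    between observed values. A production framework might prefer HDR
    histograms to bound memory on long soak runs.
    """
    cuts = statistics.quantiles(samples_ms, n=100, method="inclusive")
    # cuts[i] is the (i+1)-th percentile boundary
    return {"p50": cuts[49], "p95": cuts[94], "p99": cuts[98]}
```

Reporting percentiles rather than averages matters here: a mean response time can look healthy while the P99 tail, the figure that drives SLA breaches, degrades badly.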
CI/CD Integration & Shift-Left Performance:
- Embed performance test gates into Jenkins/GitLab CI/GitHub Actions pipelines so that every build is validated against performance baselines before promotion.
- Implement automated regression detection: flag builds that degrade response times or throughput beyond configurable thresholds.
- Create lightweight smoke performance tests for PR-level validation and full-scale suite execution for nightly/release pipelines.
- Integrate with Grafana/Prometheus/ELK for real-time test execution dashboards, trend analysis, and alerting on performance anomalies.
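The automated regression gate described above reduces to comparing current KPIs against a versioned baseline with a configurable tolerance. A minimal sketch, with KPI names, the default threshold, and the baseline format all illustrative:

```python
def check_regression(baseline, current, max_degradation_pct=10.0):
    """Compare current KPIs against a stored baseline and list regressions.

    An empty return value means the build passes the gate; in a pipeline,
    a non-empty list would fail the build before promotion. Higher is
    worse for latency KPIs; lower is worse for throughput.
    """
    failures = []
    for kpi in ("p95_ms", "p99_ms"):
        if current[kpi] > baseline[kpi] * (1 + max_degradation_pct / 100):
            failures.append(f"{kpi}: {baseline[kpi]} -> {current[kpi]}")
    if current["tps"] < baseline["tps"] * (1 - max_degradation_pct / 100):
        failures.append(f"tps: {baseline['tps']} -> {current['tps']}")
    return failures
```

Keeping the threshold configurable per pipeline stage lets PR-level smoke tests use a looser gate than the full nightly suite.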
Analysis, Optimization & Collaboration:
- Perform deep-dive root cause analysis of performance bottlenecks using profiling tools, APM traces, thread dumps, and database execution plans.
- Partner with development teams to recommend and validate optimization strategies (query tuning, caching, connection pooling, async processing, horizontal scaling).
- Produce executive-level performance reports with clear findings, risk assessments, and capacity projections for product leadership.
- Evaluate and benchmark database performance under the shared-DB model, including connection contention, lock analysis, and read/write throughput under concurrent module load.
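Capacity projections for a shared database often start from Little's Law (L = λW): the expected number of in-flight queries equals arrival rate times mean service time, which gives a first-order connection-pool size. A hedged sketch, with the headroom multiplier an assumption rather than a prescribed value:

```python
import math

def required_pool_size(tps, mean_latency_s, headroom=1.25):
    """Little's Law (L = lambda * W) estimate of concurrent DB connections.

    tps            -- sustained queries per second against the shared database
    mean_latency_s -- mean query service time in seconds
    headroom       -- burst-tolerance multiplier (illustrative default)
    """
    return math.ceil(tps * mean_latency_s * headroom)
```

For example, 500 queries/s at a 20 ms mean service time implies about 10 concurrent connections before headroom; comparing that figure against observed pool saturation is a quick sanity check on contention findings.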
Team Leadership:
- Recruit, mentor, and lead a team of 2-4 performance engineers, establishing best practices, coding standards, and review processes.
- Drive a culture of performance awareness across engineering: conduct training sessions and brown-bags, and create internal knowledge-base documentation.
- Define career growth paths and skill development plans for the performance engineering team.
Posted in: CyberSecurity
Functional Area: QA & Testing
Job Code: 1621690