Canonical Benchmark

This section presents results from a canonical benchmark that measures the end-to-end performance of Rumi's core message processing capabilities.

What Is Benchmarked

The canonical benchmark measures the Receive-Process-Send flow of a clustered microservice, which represents the fundamental operation pattern of most Rumi applications:

  1. Receive - The AEP Engine receives an inbound message

  2. Dispatch to Handler - The engine starts a transaction and dispatches the message to the microservice

  3. Process - The microservice's message handler executes:

    • 3.1. Read and write state POJOs - Business logic accesses and modifies state

    • 3.2. Send outbound messages - Business logic creates response messages (held by the engine)

  4. Return from Handler - Control returns to the engine

  5. Commit - The engine commits the transaction, which includes:

    • Replicating state changes and outbound messages to the backup

    • Persisting changes to disk (asynchronous, non-blocking)

    • Receiving a stability acknowledgment from the backup

  6. Send Outbound Messages - The engine releases the held messages once the commit completes

This flow exercises the complete Rumi stack, including messaging, state management, persistence, cluster replication, the consensus protocol, threading, and serialization.
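To make the flow concrete, the following is a minimal, self-contained Java sketch of these semantics. None of the classes or methods below are part of the Rumi API; they are stand-ins that illustrate how state mutation and held outbound messages relate to the transaction commit.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: models the Receive-Process-Send semantics described
// above. These types are stand-ins, not Rumi API.
public class FlowSketch {

    // State POJO mutated by the handler inside the "transaction" (step 3.1).
    static class CounterState {
        long messagesSeen;
    }

    public static void main(String[] args) {
        CounterState state = new CounterState();
        List<String> heldOutbound = new ArrayList<>();

        // 1. Receive: an inbound message arrives.
        String inbound = "order-42";

        // 2-4. Dispatch to handler: business logic reads/writes state and
        //      creates outbound messages, which the engine holds.
        state.messagesSeen++;                  // 3.1 state mutation
        heldOutbound.add("ack:" + inbound);    // 3.2 held, not yet sent

        // 5. Commit: in a real cluster this replicates the state delta and
        //    held messages to the backup and waits for its stability
        //    acknowledgment; here it is simulated as always successful.
        boolean backupAcked = replicateToBackup(state, heldOutbound);

        // 6. Send outbound messages only after the commit completes, so
        //    downstream consumers never see messages from a transaction
        //    that did not commit.
        if (backupAcked) {
            heldOutbound.forEach(m -> System.out.println("sent " + m));
        }
    }

    // Stand-in for replication plus stability acknowledgment (step 5).
    static boolean replicateToBackup(CounterState state, List<String> outbound) {
        return true;
    }
}
```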

Key Metrics

The benchmark reports two primary metrics:

Wire-to-Wire (w2w) Latency

  • Measures time from inbound message arrival to outbound message transmission

  • Reported as 50th, 99th, and 99.9th percentiles

  • Includes all processing, persistence, and replication overhead

  • Measured in microseconds (µs)
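As an illustration of how these percentile figures are derived, the sketch below applies a nearest-rank percentile to a set of recorded wire-to-wire samples. The sample values are invented for the example; the actual benchmark's collection and ranking mechanics are not shown here.

```java
import java.util.Arrays;

// Sketch: deriving p50 / p99 / p99.9 from raw w2w samples.
public class LatencyPercentiles {

    // Nearest-rank percentile over a sorted array of samples.
    static long percentile(long[] sorted, double p) {
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        // One w2w sample per message, in microseconds
        // (inbound arrival to outbound transmission).
        long[] w2wMicros = {18, 21, 19, 45, 22, 20, 19, 380, 23, 21};
        Arrays.sort(w2wMicros);

        System.out.println("p50   = " + percentile(w2wMicros, 50.0) + " µs");
        System.out.println("p99   = " + percentile(w2wMicros, 99.0) + " µs");
        System.out.println("p99.9 = " + percentile(w2wMicros, 99.9) + " µs");
    }
}
```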

Maximum Throughput

  • Measures messages processed per second under saturated load

  • Tests the platform's ability to handle high-volume scenarios

  • Measured in messages per second
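A maximum-throughput measurement reduces to counting completed messages over a saturated measurement window. The sketch below shows that calculation with a no-op stand-in for the actual message flow; the real benchmark driver and its load generation are not part of this example.

```java
// Sketch: throughput = messages completed / elapsed seconds under
// saturated load. The harness details here are illustrative.
public class ThroughputSketch {

    public static void main(String[] args) {
        long messages = 0;
        long start = System.nanoTime();
        long windowNanos = 1_000_000_000L; // 1-second measurement window

        // Saturated load: submit as fast as the system accepts work.
        while (System.nanoTime() - start < windowNanos) {
            processOneMessage(); // stand-in for the receive-process-send flow
            messages++;
        }

        double elapsedSec = (System.nanoTime() - start) / 1e9;
        System.out.printf("throughput = %.0f msgs/sec%n", messages / elapsedSec);
    }

    static void processOneMessage() {
        // No-op stand-in; a real run exercises the full Rumi stack.
    }
}
```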

Test Methodology

The benchmark is designed to provide:

  • Reproducibility - Consistent hardware, software configuration, and test parameters

  • Relevance - Tests real-world clustered microservice patterns

  • Transparency - Full disclosure of test configuration and methodology

  • Comparability - Consistent methodology across releases enables direct comparison

Results Organization

Performance results are organized by Rumi release. Each release page includes:

  • Complete latency and throughput results

  • Performance analysis and characteristics

  • Tuning recommendations

  • Comparisons with previous releases (where applicable)

