The Serialization module benchmarks message encoding and decoding performance using Rumi's Xbuf2 binary serialization format.
Message serialization/deserialization is a critical operation in messaging systems. This benchmark measures the overhead of:
- **Encoding**: converting POJO messages to wire format
- **Decoding**: converting wire format back to POJOs
The benchmark uses the same Car message model used in the AEP Module canonical benchmark.
**Class**: `com.neeve.perf.serialization.Driver`
The benchmark can be invoked through the Rumi Interactive CLI or directly.
### `xbuf2` / `xbuf2.serial` / `rumi.xbuf2.serial`
Tests serialization with sequential/predictable data:
```shell
java -cp "libs/*" com.neeve.perf.serialization.Driver --provider xbuf2.serial
```

Characteristics:

- Predictable data patterns
- Consistent serialized size
### `xbuf2.random` / `rumi.xbuf2.random`
Tests serialization with random data:
Characteristics:

- Random data in all fields
- More representative of real-world performance
The Car message contains:

**Simple fields**:

**Complex fields**:

**Repeated fields**:

- `performanceFigures` (array of objects)
- `fuelFigures` (array of objects)

**Typical size**: ~200 bytes serialized
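The Car model is not reproduced in full in this document, so the sketch below only suggests its shape as a plain POJO. Only `performanceFigures` and `fuelFigures` are named above; every other field and type name here (`serialNumber`, `modelYear`, `speed`, `mpg`, `octaneRating`) is an illustrative assumption, not the real generated Xbuf2 model.

```java
// Sketch of a Car-like message POJO. Repeated-field names come from the
// docs above; all other names are illustrative assumptions.
public class CarModel {
    static class FuelFigure {
        int speed;        // assumed field
        float mpg;        // assumed field
    }

    static class PerformanceFigure {
        int octaneRating; // assumed field
    }

    // Simple fields (assumed)
    private long serialNumber;
    private int modelYear;

    // Repeated fields (named in the benchmark docs)
    private FuelFigure[] fuelFigures = new FuelFigure[0];
    private PerformanceFigure[] performanceFigures = new PerformanceFigure[0];

    // Getter/setter pairs illustrate the "indirect access" pattern
    public long getSerialNumber() { return serialNumber; }
    public void setSerialNumber(long v) { serialNumber = v; }
    public FuelFigure[] getFuelFigures() { return fuelFigures; }
    public void setFuelFigures(FuelFigure[] v) { fuelFigures = v; }
}
```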
## Command-Line Parameters

| Parameter | Short | Default | Description |
|-----------|-------|---------|-------------|
| `--provider` | | | Serialization provider: `xbuf2`, `xbuf2.serial`, `xbuf2.random`, `rumi.xbuf2.serial`, `rumi.xbuf2.random` |
## Running the Benchmark

### Test with Random Data
## Interpreting Results
The benchmark outputs median and mean latencies for encoding and decoding operations.
The output columns are:

- **PROV**: serialization provider
- **RUN**: run number (multiple runs for consistency)
- **TYPE**: operation type (`ENC` = encode, `DEC` = decode)
- **SIZE**: serialized size in bytes
- **MED**: median latency in nanoseconds
- **MEAN**: mean latency in nanoseconds
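MED and MEAN are standard statistics over the per-operation latency samples. A minimal sketch of how such figures are derived (the Driver's actual accounting may differ):

```java
import java.util.Arrays;

public class LatencyStats {
    // Median of nanosecond samples: the middle element of the sorted
    // array, or the average of the two middle elements for even counts.
    static double median(long[] ns) {
        long[] s = ns.clone();
        Arrays.sort(s);
        int n = s.length;
        return (n % 2 == 1) ? s[n / 2] : (s[n / 2 - 1] + s[n / 2]) / 2.0;
    }

    // Arithmetic mean of nanosecond samples.
    static double mean(long[] ns) {
        double sum = 0;
        for (long v : ns) sum += v;
        return sum / ns.length;
    }
}
```

Note that for skewed latency distributions the mean sits above the median, which is why the benchmark reports both.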
## Typical Results (Linux x86-64)

| Operation | Sequential Data | Random Data | Size |
|-----------|-----------------|-------------|------|
**Encode vs Decode**:

- Encoding and decoding have similar overhead
- Both operations are highly optimized

**Sequential vs Random**:

- Random data is ~5-10% slower due to less predictable access patterns
- Sequential data represents best-case performance

**Message Size**:

- Overhead scales roughly linearly with message complexity
- The Car message is moderately complex
## Access Patterns
The benchmark demonstrates two message access patterns:

### Indirect Access (POJO)

Standard object-oriented access via getters/setters.

### Direct Access (Serializer/Deserializer)

Zero-copy access via serializers (shown in the benchmark code). Direct access is faster and is used in high-performance scenarios.
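The Xbuf2 serializer API itself is not shown in this document, so the contrast can only be sketched generically here with `java.nio.ByteBuffer`: indirect access materializes a POJO and reads it back through getters, while direct access writes field values straight into a reused wire buffer.

```java
import java.nio.ByteBuffer;

public class AccessPatterns {
    // Indirect: a POJO intermediary, accessed via getters/setters.
    static class Msg {
        private int id;
        public int getId() { return id; }
        public void setId(int id) { this.id = id; }
    }

    // Indirect encode: read the field back out of the POJO, then write it.
    static void encodeIndirect(Msg m, ByteBuffer wire) {
        wire.putInt(m.getId());
    }

    // Direct encode: write the value straight into the wire buffer,
    // skipping the intermediate object entirely (zero-copy in spirit).
    static void encodeDirect(int id, ByteBuffer wire) {
        wire.putInt(id);
    }
}
```

The real Xbuf2 serializer types will look different; the point is only the extra object hop that indirect access introduces.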
### For Lowest Latency

- Use direct serialization (serializer/deserializer)
- Reuse serializer/deserializer instances
- Minimize nested object depth
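The instance-reuse advice can be illustrated with a generic buffer-reuse sketch (plain `ByteBuffer`, not the actual Xbuf2 serializer): allocate the wire buffer once and clear it per message, instead of allocating a fresh one each time, to avoid per-message allocation and GC pressure.

```java
import java.nio.ByteBuffer;

public class ReusePattern {
    // Allocated once, reused for every message.
    private final ByteBuffer wire = ByteBuffer.allocate(256);

    byte[] encode(int id, long value) {
        wire.clear();          // reset position/limit, keep the capacity
        wire.putInt(id);
        wire.putLong(value);
        wire.flip();
        byte[] out = new byte[wire.remaining()];
        wire.get(out);
        return out;
    }
}
```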
### For Ease of Use

- Use indirect access (POJO getters/setters)
- Accept ~10-15% overhead in exchange for better code readability
- Good for most business applications
## Comparison with AEP Module

The AEP Module canonical benchmark includes serialization overhead as part of its end-to-end latency:

- **Serialization benchmark**: ~480 ns (encode + decode)
- **AEP benchmark**: ~27 µs (includes serialization plus all other operations)
- Serialization represents only ~1.7% of end-to-end latency
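As a sanity check, that share follows directly from the two figures above: 480 ns out of ~27,000 ns is just under 2%, consistent with the stated ~1.7% given the approximate inputs.

```java
public class ShareOfLatency {
    public static void main(String[] args) {
        double serializationNs = 480;   // encode + decode, from above
        double endToEndNs = 27_000;     // ~27 µs AEP end-to-end
        double sharePct = serializationNs / endToEndNs * 100;
        System.out.printf("serialization share: %.1f%%%n", sharePct);
        // prints "serialization share: 1.8%"
    }
}
```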
Best practices for message design:

- **Keep messages compact**: fewer fields mean faster serialization
- **Use primitives where possible**: avoid excessive nesting
- **Size arrays appropriately**: large arrays increase overhead
- **Consider field ordering**: group frequently-accessed fields together
Review the AEP Module to see serialization in its end-to-end context.