Below are the hardware, software, and Rhino configuration used for the Rhino benchmarks.
Hardware
The SIP High Availability (HA) benchmark tests were run using configurations ranging from one-node to four-node clusters.
Each Rhino node runs on its own host. Simulators each run on their own hosts, scaled out horizontally to fully load the Rhino nodes.
All machines used for benchmarking are provided by the Azure cloud platform. We exclusively use Standard_D8s_v5 instances, as described here.
The Standard_D8s_v5 instance type has 8 vCPUs (hardware threads) and 32 GiB of RAM, with up to 12.5 Gbps networking.
Previously published benchmarks were run on Amazon EC2.
Software
The SIP benchmark tests used the following software.
| Software | Version |
|---|---|
| Java | Microsoft OpenJDK 64-Bit Server VM 11.0.20+8-LTS |
| Rhino | Rhino 3.2.4 |
| | 3.1.6 |
Rhino configuration
For the SIP benchmark tests, we made the following changes to the Rhino 3.2 default configuration.
Rhino’s memory and garbage collection settings were adjusted as follows to gracefully handle the load required by this benchmark:
| Parameter | Value |
|---|---|
| Heap size | 10240m |
| Garbage collector | G1 |
The G1 garbage collector is now used by default in Rhino 3.2.
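For reference, these settings correspond to the standard HotSpot JVM options sketched below. This is illustrative only: in a Rhino deployment the heap size and garbage collector are normally applied through Rhino's own node configuration rather than passed directly on a java command line, and the pinned initial heap size is an assumption, not part of the benchmark configuration above.

```
# Illustrative HotSpot JVM options equivalent to the settings above.
# Rhino normally applies these via its node configuration, not a raw command line.
-Xmx10240m      # maximum heap size: 10240 MB
-Xms10240m      # assumption: initial heap pinned to the maximum to avoid resizing
-XX:+UseG1GC    # select the G1 garbage collector
```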
Staging queue size was adjusted to accommodate more than 2 seconds' worth of events in the queue without loss, as a reasonable protection against sudden traffic spikes coinciding with garbage collection pauses (see the rough sizing check after the table below).
| Parameter | Value |
|---|---|
| Staging queue size | 25000 |
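As a rough sizing check, the configured queue depth can be related to the sustained event rate it protects against. The figure of one staged event per processed message is an assumption for illustration; the actual ratio depends on the service.

```
# Rough headroom check (assumes roughly one staged event per processed message)
queue size       = 25000 events
target headroom  = 2 seconds
covered rate     = 25000 / 2 = 12500 events per second of sustained load
```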
These tests used Rhino’s default Savanna clustering mode rather than Pool clustering mode. Replicated state is not used in these tests, so performance is identical in either clustering mode.
SIP configuration
The High Availability (HA) benchmarks are configured with replication disabled. We use TCP transport throughout to avoid the retransmissions that can occur with UDP.
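A minimal sketch of how these two choices might appear as SIP resource adaptor entity configuration properties is shown below. The property names used here (Transports, ReplicationEnabled) are placeholders for illustration, not the documented SIP RA property names; consult the SIP resource adaptor documentation for the actual configuration interface.

```
# Hypothetical SIP RA entity configuration (property names are placeholders):
Transports=tcp            # use TCP only, avoiding UDP retransmissions
ReplicationEnabled=false  # HA benchmarks run with replication disabled
```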
Benchmark results
Please review the SIP benchmark results here.