Below are the hardware, software, and Rhino configuration used for the Rhino IN benchmarks.

Hardware

The IN benchmark tests were run using both one- and two-node Rhino clusters.

Each Rhino node ran on its own host. The MSC and HLR simulators ran on their own hosts, scaled out horizontally as needed to fully load the Rhino nodes.

Rhino’s CGIN resource adaptor requires an external TCAP stack to communicate with SS7 networks. Metaswitch’s OCSS7 stack was used for this benchmark, with two OCSS7 SGC clusters of two nodes each. Rhino was connected to one SGC cluster, and the simulators were connected to the other. Each OCSS7 SGC node ran on its own VM.

All machines used for benchmarking were provided by the Azure cloud platform. We used Standard_D8s_v5 instances exclusively, as described here. The Standard_D8s_v5 instance type has 8 vCPUs (hardware threads) and 32 GiB of RAM, with up to 12.5 Gbps networking.
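For illustration, a host of this type can be provisioned with the Azure CLI. This is a sketch only: the resource group, VM name, image, and admin user below are placeholders, not the names used for the benchmark.

    # Sketch: create one benchmark host (placeholder names throughout).
    az vm create \
        --resource-group rhino-bench-rg \
        --name rhino-node-1 \
        --size Standard_D8s_v5 \
        --image Ubuntu2204 \
        --admin-username benchadmin \
        --generate-ssh-keys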

Note

Previous published benchmarks were run on Amazon EC2 c5.2xlarge instances. While similar to Azure Standard_D8s_v5 instances, they are not identical, so direct comparison of results may be misleading.

Software

The IN benchmark tests used the following software.

Software   Version
Java       Microsoft OpenJDK 64-Bit Server VM 11.0.20+8-LTS
Rhino      3.2.4
CGIN       3.1.3
OCSS7      4.1.3

Rhino configuration

For the IN benchmark tests, we made the following changes to Rhino’s default configuration.

Rhino’s JVM memory and garbage collection settings were adjusted as follows to handle the load generated by this benchmark:

Table 1. JVM parameters

Parameter           Value
Heap size           8192M
Garbage collector   G1

The G1 garbage collector is now used by default in Rhino 3.2.
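Expressed as raw JVM options, these settings would look as shown below. This is a sketch for orientation only; in practice the values are applied through Rhino’s own node configuration rather than a hand-written java command line.

    # Heap fixed at 8192M; G1 collector enabled explicitly
    # (G1 is the JVM default on Java 11 and Rhino's default from 3.2).
    -Xms8192m -Xmx8192m -XX:+UseG1GC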

Rhino’s default configuration was adjusted as follows:

Table 2. Rhino parameters

Parameter              Value   Rationale
Staging queue size     42000   Accommodates one second’s worth of events in the queue without loss, as reasonable protection against sudden traffic spikes combining badly with garbage collection pauses.
Local Memory DB size   600M    Allows some contingency space in MemDB; normal usage stays below 340M, and the default is 400M.
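To make the staging queue sizing concrete, the arithmetic below shows one way a one-second buffer works out to 42,000 entries. The calls-per-second and events-per-call figures are illustrative assumptions, not measured benchmark numbers.

    # Illustrative only: the rates below are assumptions, not results.
    calls_per_second=14000
    events_per_call=3
    echo $(( calls_per_second * events_per_call ))   # 42000 events per second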

Note

These tests used Rhino’s default Savanna clustering mode rather than Pool clustering mode.

Replicated state is not used in these tests, so performance is identical in either clustering mode.

CGIN’s default configuration was adjusted as follows:

Table 3. CGIN parameters

Parameter               Value    Rationale
ocss7.trdpCapacity      850000   Increases the maximum number of open dialogs from the default of 100,000; this benchmark requires at least 300,000.
ocss7.schNodeListSize   850000   Increases the number of invoke and activity timers without depending on the autosizing option.
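These are configuration properties of the CGIN resource adaptor entity, so they can be updated through Rhino’s command-line console. The sketch below assumes an RA entity named cgin-ra (a placeholder); depending on the property, the entity may need to be deactivated before the change takes effect.

    # Placeholder entity name; property names and values are from Table 3.
    ./rhino-console updateraentityconfigproperties cgin-ra \
        ocss7.trdpCapacity=850000,ocss7.schNodeListSize=850000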

Results

Please review the IN benchmark results here.
