Below is an overview of how we test CGIN performance, followed by links to the benchmarks.

About the CGIN test scenario

To test CGIN performance, we use a Virtual Private Network (VPN) application, which:

  • provides number-translation and call-barring services for groups of subscribers

  • uses the CAPv3 and INAP CS1 protocols for call control, and MAPv3 for location information

  • uses OpenCloud’s CGIN APIs and resource adaptor to provide CAP and MAP support.

OpenCloud developed this VPN application to meet real business requirements from a Tier-1 network operator, so the benchmarks reflect a realistic workload rather than, for example, a trivial demo application.

This test application performs functions that are common to many real-world IN applications:

  • monitoring the entire call, and reliably maintaining state for the duration of the call

  • interacting with external systems (MSC, HLR, call detail records)

  • supporting a large subscriber database.

The VPN Application

The VPN application provides number-translation and call-barring services for groups of subscribers. It uses CAPv3 for call control and MAPv3 for location information. The table below summarises how it responds to an incoming InitialDP.

If the A party…                                                             | then VPN responds with
…has called a B party in their own VPN using a short code or public number | Connect with DRA=public B number, and AdCgPN=short code for A
…and the B party are not in the same VPN                                   | Continue
…dials a short code that doesn't exist                                     | ReleaseCall
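
The table above amounts to a simple decision over the subscriber's VPN data. The sketch below illustrates that logic only; the class name VpnResponseDecision, the map/set lookups, and the four-digit short-code check are assumptions for the example, not the actual CGIN VPN service code.

    import java.util.Map;
    import java.util.Set;

    // Illustrative sketch only: the data structures and helper logic below are
    // hypothetical and do not reflect the real CGIN VPN service implementation.
    public final class VpnResponseDecision {

        enum Action { CONNECT, CONTINUE, RELEASE_CALL }

        // DRA = destination routing address, AdCgPN = additional calling party number
        record Result(Action action, String dra, String adCgPn) {}

        /**
         * Decide how to respond to an InitialDP.
         *
         * @param ownVpnNumbers dialled digits (short code or public number) of members
         *                      of the A party's VPN, mapped to their public numbers
         * @param allShortCodes every short code defined in any VPN
         * @param aShortCode    the A party's own short code (returned as AdCgPN)
         * @param dialledDigits the digits the A party dialled
         */
        static Result decide(Map<String, String> ownVpnNumbers,
                             Set<String> allShortCodes,
                             String aShortCode,
                             String dialledDigits) {
            String publicB = ownVpnNumbers.get(dialledDigits);
            if (publicB != null) {
                // B party is in the A party's own VPN: connect with the translated number.
                return new Result(Action.CONNECT, publicB, aShortCode);
            }
            boolean looksLikeShortCode = dialledDigits.length() <= 4;   // assumption for this sketch
            if (looksLikeShortCode && !allShortCodes.contains(dialledDigits)) {
                // The dialled short code does not exist: release the call.
                return new Result(Action.RELEASE_CALL, null, null);
            }
            // A and B parties are not in the same VPN: let the call continue unchanged.
            return new Result(Action.CONTINUE, null, null);
        }
    }

In the real service this lookup is backed by the large subscriber database mentioned above.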

Call flow

The VPN application applies incoming and outgoing call-screening rules, some of which require location information obtained by a MAP query to the HLR.

[Figure: VPN call flow]
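
Some screening rules depend on where the caller currently is, which is why the application performs a MAP query to the HLR before deciding. The sketch below shows the shape of such a check; the HlrClient interface and its queryVlrNumber method are hypothetical placeholders, not the CGIN MAP API.

    import java.util.Optional;
    import java.util.Set;

    // Hypothetical sketch of a location-dependent barring rule. The HLR query is
    // represented by a placeholder interface; it is not the CGIN MAP API.
    public final class LocationScreening {

        /** Placeholder for a MAP location lookup (for example, anyTimeInterrogation). */
        interface HlrClient {
            Optional<String> queryVlrNumber(String imsi);   // hypothetical signature
        }

        /**
         * Returns true if the outgoing call should be barred. Only rules that depend on
         * the caller's location trigger the MAP query; the time spent in that query is
         * part of the measured call-setup time.
         */
        static boolean shouldBar(boolean ruleNeedsLocation,
                                 Set<String> barredVlrPrefixes,
                                 HlrClient hlr,
                                 String imsi) {
            if (!ruleNeedsLocation) {
                return false;   // location-independent rules are evaluated elsewhere
            }
            return hlr.queryVlrNumber(imsi)
                      .map(vlr -> barredVlrPrefixes.stream().anyMatch(vlr::startsWith))
                      .orElse(false);
        }
    }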
Note: Measuring call setup time

  • The switch simulator measures call-setup time (response time) from when it sends the InitialDP (1) until it receives the corresponding response from the application; in these tests, the Connect (2).

  • The response time includes the time for any MAP query to the HLR that the application may have performed.
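
A minimal sketch of this measurement is shown below, assuming hypothetical send/receive hooks keyed by a dialogue identifier; the class CallSetupTimer and its methods are examples, not the actual switch-simulator code.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch: timestamp each InitialDP when it is sent, and compute the
    // latency when the matching response arrives. Not the actual simulator code.
    public final class CallSetupTimer {

        private final Map<Long, Long> sentAtNanos = new ConcurrentHashMap<>();

        /** Call when the simulator sends an InitialDP on the given dialogue. */
        public void onInitialDpSent(long dialogueId) {
            sentAtNanos.put(dialogueId, System.nanoTime());
        }

        /**
         * Call when the simulator receives the response (Connect, Continue or ReleaseCall)
         * for the dialogue; returns the setup time in milliseconds. Any MAP query the
         * application made to the HLR is included in this time.
         */
        public double onResponseReceived(long dialogueId) {
            Long sent = sentAtNanos.remove(dialogueId);
            if (sent == null) {
                throw new IllegalStateException("No InitialDP recorded for dialogue " + dialogueId);
            }
            return (System.nanoTime() - sent) / 1_000_000.0;
        }
    }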

Configuration and results

For details, please see:

  • CGIN Benchmark Environment and Configuration

  • CGIN Benchmark Results

CGIN Benchmark Environment and Configuration

Below are the hardware, software, and Rhino configuration used for the IN benchmarks.

Hardware

The CGIN benchmark tests used the following hardware for the one-node and two-node configurations.

Single node (8,000 calls per second)

[Figure: single-node CGIN benchmark deployment]

Two nodes (15,000 calls per second)

[Figure: two-node CGIN benchmark deployment]

Hardware specifications

Machine          | CPUs                          | Hardware threads | RAM   | OS
Rhino node 1     | 2 x 6-core Xeon X5650 2.67GHz | 24               | 24 GB | RHEL 7
Rhino node 2     | 2 x 6-core Xeon X5650 2.67GHz | 24               | 24 GB | RHEL 7
SGC cluster 1    | 2 x 6-core Xeon X5660 2.8GHz  | 24               | 32 GB | CentOS 7
SGC cluster 2    | 2 x 6-core Xeon X5660 2.8GHz  | 24               | 32 GB | CentOS 7
Simulator host 1 | 2 x 6-core Xeon X5660 2.8GHz  | 24               | 36 GB | RHEL 6
Simulator host 2 | 2 x 6-core Xeon X5660 2.8GHz  | 24               | 36 GB | RHEL 6

Software

The CGIN benchmark tests used the following software.

Software      | Version
Java          | JDK 1.8.0_60
Rhino         | 2.5.0.1
CGIN          | 2.0.0.0
CGIN VPN      | 2.0.0.0
CGIN Back End |

Rhino configuration

For the CGIN benchmark tests, we made the following changes to the Rhino 2.5 default configuration.

Parameter                         | Value  | Note
TCAP stack                        | OCSS7  |
JVM Architecture                  | 64-bit | Enables larger heaps
Heap size                         | 8192M  |
New Gen size                      | 256M   | Increased from the default of 128M to achieve higher throughput without increasing latency
Staging queue size                | 5000   | Increased from the default of 3000 to allow burst traffic
Staging threads                   | 150    | Increased from the default of 30 for reduced latency at high throughput
CGIN RA tcap-fibers.max-pool-size | 10     | Increased from the default of 4 for reduced latency at high throughput
CGIN RA tcap-fibers.queue-size    | 5000   | Increased from the default of 250 to allow load spikes without processing falling back to the incoming thread
CGIN RA ocss7.trdpCapacity        | 850000 | Increased from the default of 100000 to allow sufficient in-flight calls
CGIN RA ocss7.schNodeListSize     | 850000 | Changed from the default (autosize) to match ocss7.trdpCapacity
Local Memory DB size              | 200M   | Increased from the default of 100M to allow more in-flight calls
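
As a rough illustration of why the in-flight-call parameters are so much larger than their defaults, the sketch below estimates the number of long-lived dialogs from the single-node call rate and the 60-second call duration used in the tests. The arithmetic and the class name InFlightEstimate are illustrative only; the configured values were not necessarily derived this way.

    // Back-of-the-envelope sizing sketch based on the published benchmark parameters
    // (8,000 calls/s on a single node, 60 s call duration, originating dialog monitored
    // for the whole call). Illustrative only: the configured values in the table also
    // cover shorter-lived terminating and MAP dialogs plus headroom.
    public final class InFlightEstimate {
        public static void main(String[] args) {
            int callsPerSecond = 8_000;        // single-node benchmark rate
            int callDurationSeconds = 60;      // each call lasts 60 s

            // Originating dialogs stay open for the whole call, so they are the long-lived ones.
            long concurrentOriginatingDialogs = (long) callsPerSecond * callDurationSeconds;

            System.out.println("Concurrent originating dialogs: " + concurrentOriginatingDialogs); // 480,000
            System.out.println("Default ocss7.trdpCapacity:     100,000 (far too small)");
            System.out.println("Configured ocss7.trdpCapacity:  850,000 (terminating/MAP dialogs and headroom)");
        }
    }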

Results

Please review the CGIN benchmark results.

CGIN Benchmark Results

Below is a summary of the results and some notes about the testing.

Summary

Cluster size | Call rate (calls/s) | Event rate      | TCAP message rate  | Results
One node     | 8,000               | 36,000 events/s | 64,000 messages/s  | 1 Node Results
Two nodes    | 15,000              | 67,500 events/s | 120,000 messages/s | 2 Node Results
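
As a quick consistency check, the per-call ratios implied by the summary can be computed by dividing the published rates. The sketch below (SummaryRatios is just an example name) does only that arithmetic and assumes nothing about the per-call message breakdown.

    // Derived per-call ratios from the summary table above (arithmetic only; the
    // message/event breakdown per call is not detailed in this document).
    public final class SummaryRatios {
        public static void main(String[] args) {
            long[][] rows = {
                // calls/s, events/s, TCAP messages/s
                {8_000, 36_000, 64_000},      // one node
                {15_000, 67_500, 120_000},    // two nodes
            };
            for (long[] r : rows) {
                System.out.printf("%,d calls/s -> %.1f events/call, %.1f TCAP messages/call%n",
                        r[0], (double) r[1] / r[0], (double) r[2] / r[0]);
            }
            // Both rows give 4.5 events and 8 TCAP messages per call, so the two-node
            // test is the same traffic mix at a higher rate.
        }
    }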

Notes

Below are notes about the test procedure, types of graphs in the results, and how to calculate the number of dialogs per second.

Test procedure

Test calls:

  • were evenly distributed between all available cluster nodes

  • had originating and terminating treatment

  • were monitored for the entire duration on the originating leg

  • lasted 60s.

10% of calls involved a MAP query.

Types of graphs

The individual test results include the following types of graphs:

Response Times

How the distribution of response times changes over the course of the test run.

Response Time Distribution

Overall distribution of response times from the entire test run.

CPU Utilization

Percentage of CPU used by Rhino nodes, as reported by the kernel.

Heap

JVM heap usage of the Rhino nodes during the test run.

Calculating number of dialogs

To calculate the number of dialogs (for example, using 8,000 calls per second):

  • All calls involve originating and terminating treatment (in our example, 16,000 dialogs per second).

  • 10% of calls involve a query to the HLR (800 dialogs per second).

…for a total of 16,800 dialogs per second.
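
The same arithmetic as a runnable example, using the 8,000 calls-per-second figure from above (the class name is just for the example):

    // Worked example of the dialogs-per-second calculation from the notes above.
    public final class DialogsPerSecond {
        public static void main(String[] args) {
            int callsPerSecond = 8_000;

            // Every call has originating and terminating treatment: two dialogs per call.
            int callControlDialogs = callsPerSecond * 2;          // 16,000

            // 10% of calls also perform a MAP query to the HLR.
            int mapDialogs = callsPerSecond / 10;                 // 800

            int total = callControlDialogs + mapDialogs;          // 16,800
            System.out.println(total + " dialogs per second");    // prints "16800 dialogs per second"
        }
    }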

1 Node Results

Below are the CGIN single-node benchmarks.

Call rate

  • 8,000 calls per second.

  • 60,180,030 total calls completed.

Response time (ms)

Call setup time

  • 50% of calls in 2.2ms

  • 75% of calls in 11.3ms

  • 90% of calls in 38.8ms

  • 95% of calls in 69.5ms

[Figure: response times over the test run at 8,000 calls/s]

[Figure: response-time distribution at 8,000 calls/s]

CPU usage

Raw CPU results in the graph below are not normalized to 100%.

  • Node 1 (normalized): 45%

[Figure: Rhino node 1 CPU utilization at 8,000 calls/s]

Heap

[Figure: Rhino node 1 heap usage at 8,000 calls/s]

2 Node Results

Below are the CGIN two-node benchmarks.

Call rate

  • 15,000 calls per second.

  • 112,837,136 total calls completed.

Response time (ms)

Call setup time

  • 50% of calls in 3.5ms

  • 75% of calls in 15.1ms

  • 90% of calls in 37.5ms

  • 95% of calls in 56.6ms

[Figure: response times over the test run at 15,000 calls/s]

[Figure: response-time distribution at 15,000 calls/s]

CPU usage

Raw CPU results in the graphs below are not normalized to 100%.

  • Node 1: 43%

  • Node 2: 40%

[Figure: Rhino node 1 CPU utilization at 15,000 calls/s]
[Figure: Rhino node 2 CPU utilization at 15,000 calls/s]

Heap

[Figure: Rhino node 1 heap usage at 15,000 calls/s]
[Figure: Rhino node 2 heap usage at 15,000 calls/s]