Below are the hardware, software, and Rhino configuration used for the IN benchmarks.

Hardware

The CGIN benchmark tests used the following hardware for the single-node and two-node configurations.

Single node (8,000 calls per second)

[Diagram: single-node CGIN deployment (cgin 1node mk3)]

Two nodes (15,000 calls per second)

[Diagram: two-node CGIN deployment (cgin 2node mk3)]

Hardware specifications

Machine           CPUs                            Hardware threads  RAM   OS
Rhino node 1      2 x 6-core Xeon X5650 2.67GHz   24                24G   RHEL 7
Rhino node 2      2 x 6-core Xeon X5650 2.67GHz   24                24G   RHEL 7
SGC cluster 1     2 x 6-core Xeon X5660 2.8GHz    24                32G   CentOS 7
SGC cluster 2     2 x 6-core Xeon X5660 2.8GHz    24                32G   CentOS 7
Simulator host 1  2 x 6-core Xeon X5660 2.8GHz    24                36G   RHEL 6
Simulator host 2  2 x 6-core Xeon X5660 2.8GHz    24                36G   RHEL 6

Software

The CGIN benchmark tests used the following software.

Software        Version
Java            JDK 1.8.0_60
Rhino           2.5.0.1
CGIN            2.0.0.0
CGIN VPN        2.0.0.0
CGIN Back End

Rhino configuration

For the CGIN benchmark tests, we made the following changes to the Rhino 2.5 default configuration.

Parameter                           Value    Note
TCAP stack                          OCSS7
JVM architecture                    64-bit   Enables larger heaps
Heap size                           8192M
New Gen size                        256M     Increased from the default of 128M to achieve higher throughput without increasing latency
Staging queue size                  5000     Increased from the default of 3000 to allow for burst traffic
Staging threads                     150      Increased from the default of 30 to reduce latency at high throughput
CGIN RA tcap-fibers.max-pool-size   10       Increased from the default of 4 to reduce latency at high throughput
CGIN RA tcap-fibers.queue-size      5000     Increased from the default of 250 to absorb load spikes without processing falling back to the incoming thread
CGIN RA ocss7.trdpCapacity          850000   Increased from the default of 100000 to allow sufficient in-flight calls
CGIN RA ocss7.schNodeListSize       850000   Changed from the default (autosize) to match ocss7.trdpCapacity
Local Memory DB size                200M     Increased from the default of 100M to allow more in-flight calls
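To illustrate how these values map onto a deployment, the sketch below shows the standard HotSpot JVM flags corresponding to the heap settings in the table, followed by the CGIN RA configuration properties written out in property=value form. This is an illustrative sketch only: in a Rhino installation the heap flags are normally set through the node's configuration files rather than on the command line, and the RA properties are applied to the CGIN RA entity through Rhino's management tools (for example rhino-console), whose exact command syntax should be taken from the product documentation.

  # JVM flags equivalent to the heap settings in the table above
  -Xmx8192m      # Heap size: 8192M
  -Xmn256m       # New Gen size: 256M (default 128M)

  # CGIN RA configuration properties from the table, in property=value form
  tcap-fibers.max-pool-size=10      # default 4
  tcap-fibers.queue-size=5000       # default 250
  ocss7.trdpCapacity=850000         # default 100000
  ocss7.schNodeListSize=850000      # default autosize; matches ocss7.trdpCapacity

The remaining rows (TCAP stack, JVM architecture, staging queue size, staging threads, and Local Memory DB size) are Rhino platform settings rather than RA properties, and are configured through Rhino itself.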

Results

Please review the CGIN benchmark results.
