Below are the hardware, software, and Rhino configuration used for the IN benchmarks.

Hardware

The IN benchmark tests used the following hardware for the single-node and two-node configurations.

Single node (8,000 calls per second)

[Deployment diagram: single-node CGIN benchmark setup]

Two nodes (15,000 calls per second)

[Deployment diagram: two-node CGIN benchmark setup]

Hardware specifications

Machine           CPUs                             Hardware threads   RAM     OS
Rhino node 1      2 x 6-core Xeon X5650 2.67 GHz   24                 24 GB   RHEL 7
Rhino node 2      2 x 6-core Xeon X5650 2.67 GHz   24                 24 GB   RHEL 7
SGC cluster 1     2 x 6-core Xeon X5660 2.8 GHz    24                 32 GB   CentOS 7
SGC cluster 2     2 x 6-core Xeon X5660 2.8 GHz    24                 32 GB   CentOS 7
Simulator host 1  2 x 6-core Xeon X5660 2.8 GHz    24                 36 GB   RHEL 6
Simulator host 2  2 x 6-core Xeon X5660 2.8 GHz    24                 36 GB   RHEL 6

Software

The IN benchmark tests used the following software.

Software        Version
Java            JDK 1.7.0_71 and JDK 1.8.0_60
Rhino           2.5.0.1
CGIN            1.5.4.1
CGIN VPN        1.5.4.1
CGIN Back End

Rhino configuration

For the IN benchmark tests, we made the following changes to the Rhino 2.5 default configuration.

Parameter                           Value    Note
TCAP stack                          OCSS7
JVM architecture                    64-bit   Enables larger heaps
Heap size                           8192M
New generation size                 256M     Increased from the default of 128M to achieve higher throughput without increasing latency
Staging queue size                  5000     Increased from the default of 3000 to accommodate traffic bursts
Staging threads                     150      Increased from the default of 30 to reduce latency at high throughput
CGIN RA tcap-fibers.max-pool-size   10       Increased from the default of 4 to reduce latency at high throughput
CGIN RA tcap-fibers.queue-size      5000     Increased from the default of 250 to absorb load spikes without processing falling back to the incoming thread
CGIN RA ocss7.trdpCapacity          850000   Increased from the default of 100000 to allow sufficient in-flight calls
CGIN RA ocss7.schNodeListSize       850000   Changed from the default (automatic sizing) to match ocss7.trdpCapacity
Local memory database size          200M     Increased from the default of 100M to allow more in-flight calls
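
For reference, a minimal sketch of how settings like these might be applied is shown below. The file name, the RA entity name "cginra", and the exact console syntax are assumptions for illustration rather than details taken from the benchmark setup; they vary between Rhino and CGIN versions, so consult the Rhino and CGIN administration guides for the authoritative procedure.

    # Illustrative sketch only. File names, the RA entity name "cginra", and
    # the exact console syntax are assumptions; check "help" in rhino-console
    # and the Rhino/CGIN administration guides before applying anything.

    # JVM sizing: assumed to be set in the per-node config_variables file.
    HEAP_SIZE=8192m
    # The new generation size (256M in the table above) is assumed to be
    # passed as an extra JVM option, for example -Xmn256m.

    # CGIN RA configuration properties, updated with rhino-console
    # (one property per invocation shown; repeat for the remaining properties):
    ./rhino-console updateraentityconfigproperties cginra tcap-fibers.max-pool-size 10
    ./rhino-console updateraentityconfigproperties cginra ocss7.trdpCapacity 850000

    # Staging queue size, staging thread count, and the local memory database
    # size are part of the node's Rhino configuration rather than RA properties.

The values in the sketch correspond to the Heap size, New generation size, and CGIN RA rows in the table above.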

Results

See the IN benchmark results for the measured throughput and latency figures.
