This section of the manual covers the installation, basic configuration, and control of the SGC component of OCSS7.

Tip The Rhino-side component of OCSS7, the TCAP stack, is automatically installed with CGIN and has no special installation procedure or requirements beyond those of Rhino and CGIN. Information on configuring the TCAP stack component can be found in TCAP Stack configuration.

Checking prerequisites

Before installing OCSS7, make sure you have the required Operating System, Hardware, and Software.

Before attempting a production installation please also see Network Architecture Planning.

Operating System

OS Version

Linux

RedHat Enterprise Linux 6.3 on x86-64 architectures (however, OCSS7 has also been successfully deployed on other versions of Linux)

Hardware

The general hardware requirements below are for an OCSS7 SGC production system, used for live deployment.

Hardware                                      Minimum                  Recommended

Type of machine                               Current commodity hardware and CPUs

Number of machines
(one cluster node per machine)                1                        2 or more

Number of CPU cores per machine               2                        2 - 16

RAM requirements per machine                  1 GB                     2+ GB

Network interface                             Switched ethernet

Network interface requirements per machine    2 interfaces at 100Mb    2 or more interfaces at 1Gb

Software

The OCSS7 SGC requires these versions of Java and SCTP.

Software Required

Java

Compliant with Sun/Oracle Java version 7 update 55, 32-bit and 64-bit.

Warning A regression in Java 7u60 and higher affects SCTP on Linux. OpenCloud products are NOT currently certified past Java 1.7.0_55.

SCTP

Compliant with the lksctp-tools package, version 1.0.10 or later.

To enable SCTP support on RedHat Enterprise Linux 6.3, please install the lksctp-tools package, using the command: yum install lksctp-tools
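To verify that SCTP support is available, you can, for example, load the SCTP kernel module and confirm that it is present (on most systems the module is loaded automatically the first time an SCTP socket is opened):

# modprobe sctp
# lsmod | grep sctp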

Configuring network features

Before installing OCSS7, please configure the following network features:

Feature What to configure

IP address

Make sure the system has an IPv4 or IPv6 address and is visible on the network.

Host names

Make sure that the system can resolve the localhost name to the loopback interface.

Multicast addresses
(firewall rules)

If the local system has a firewall installed, modify its rules to allow multicast UDP traffic:

  • Multicast addresses are, by definition, in the range 224.0.0.0/4 (224.0.0.0 - 239.255.255.255) or ff00::/8 for IPv6. OCSS7 by default uses address 224.2.2.3.

  • OCSS7 uses multicast UDP to discover cluster members. The default port number used is 54327.

  • Ensure that the firewall is configured to allow multicast messages through on the multicast address/port used by OCSS7 (see the example firewall rules after this table).

Unicast addresses
(firewall rules)

If the local system has a firewall installed, modify its rules to allow TCP traffic:

  • By default, OCSS7 listens on all available network interfaces for incoming TCP connections from other cluster members (this connection is used to distribute cluster configuration and run-time state information). By default, the port used is chosen dynamically, starting at port 5701 and incrementing by 1 until a free port is found.

  • Addresses and ports to be used for message traffic distribution between nodes and the SGC-Stack-to-TCAP-Stack communication are user-configurable. They are configured when defining a new node in the SGC Cluster, as described in General Configuration.
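For example, on a RedHat Enterprise Linux 6.3 system using iptables, rules similar to the following sketch would admit the default multicast discovery traffic and the inter-node TCP traffic. The TCP port range shown is an assumption based on the dynamic allocation starting at 5701; adapt the addresses, ports, and range to your own configuration:

# Allow cluster discovery via multicast UDP (default group 224.2.2.3, port 54327)
iptables -A INPUT -d 224.2.2.3 -p udp --dport 54327 -j ACCEPT

# Allow inter-node TCP connections (dynamically allocated from port 5701 upwards)
iptables -A INPUT -p tcp --dport 5701:5710 -j ACCEPT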

Cluster network configuration

OCSS7 network configuration related to cluster operation can be customized by providing a hazelcast.xml configuration file in the config subdirectory of the OCSS7 distribution (for example, rename config/hazelcast.xml.sample to config/hazelcast.xml). For a description of possible configuration options, and the format of the hazelcast.xml configuration file, please see the Hazelcast In-Memory Data Grid Documentation (Version 2.6).
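For example, to start from the provided sample file (run from the SGC installation directory) and then edit it as required:

mv config/hazelcast.xml.sample config/hazelcast.xml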

Tip
IPv6 considerations
When using IPv6 addressing, remember to configure the PREFER_IPV4 property in the SGC_HOME/config/sgcenv file. For details, please see Configuring SGC_HOME/config/sgcenv.
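For example, to enable IPv6 support, set the following in SGC_HOME/config/sgcenv:

PREFER_IPV4=false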

User process tuning

Ensure that the user account under which OCSS7 will run has a soft limit of no less than 4096 user processes.

Tip
The number of permitted user processes may be determined at runtime using the ulimit command; for example: ulimit -Su

This value may be changed by editing /etc/security/limits.conf as root, and adding (or altering) the line:

sgc_user soft nproc 4096

It may also be necessary to increase the hard limit:

sgc_user hard nproc 4096

SCTP tuning

For optimal performance, tune these kernel parameters:

Parameter Recommended value What it specifies

net.core.rmem_default

512000

Default receive buffer size (in bytes)

net.core.wmem_default

512000

Default send buffer size (in bytes)

net.core.rmem_max

2048000

Maximum receive buffer size (in bytes)

This value limits the so-rcvbuf parameter. For details, please see Listening for and establishing SCTP associations.

net.core.wmem_max

2048000

Maximum send buffer size (in bytes)

This value limits the so-sndbuf parameter. For details, please see Listening for and establishing SCTP associations.

net.sctp.rto_min

40 < rto_min < 100

Minimum retransmission timeout (in ms)

This should be greater than the sack_timeout of the remote SCTP endpoints.

net.sctp.sack_timeout

40 < sack_timeout < 100

Delayed acknowledgement (SACK) timeout (in ms)

This should be lower than the minimum retransmission timeout (rto_min) of the remote SCTP endpoints.

net.sctp.hb_interval

1000

SCTP heartbeat interval (in ms)

Tip
Kernel parameters can be changed
  • at runtime using the sysctl command; for example: sysctl -w net.core.rmem_max=2000000

  • or set permanently in the /etc/sysctl.conf file.
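For example, the recommended values above could be made permanent by adding the following entries to /etc/sysctl.conf and applying them with sysctl -p (the rto_min and sack_timeout values shown are illustrative choices from the recommended ranges; choose them with the settings of the peer SCTP endpoints in mind):

net.core.rmem_default = 512000
net.core.wmem_default = 512000
net.core.rmem_max = 2048000
net.core.wmem_max = 2048000
net.sctp.rto_min = 80
net.sctp.sack_timeout = 40
net.sctp.hb_interval = 1000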

SGC Installation

Unpack and configure

Note
SGC_HOME

The following instructions use SGC_HOME to represent the path to the SGC installation directory.

To begin the SGC installation and create the first node:

1

Unpack the SGC archive file

Run:

unzip ocss7-package-VERSION.zip

(replacing ocss7-package-VERSION.zip with the correct file name)

This creates the distribution directory, ocss7-X.X.X.X, in the current working directory.

2

Set JAVA_HOME

Make sure that the JAVA_HOME environment variable is set to the location of the Oracle JDK installation (for JDK installation instructions, see the Operating System documentation and/or JDK vendor documentation).
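For example (the JDK path below is hypothetical; substitute the location of your own installation):

export JAVA_HOME=/usr/java/jdk1.7.0_55
$JAVA_HOME/bin/java -version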

3

Configure basic cluster / node information

If your installation will use more than a single node in an SGC cluster, then:

  • Customize the ss7.instance property in SGC_HOME/config/SGC.properties to a value unique among all other nodes in the SGC cluster.

  • Define a cluster name through the hazelcast.group property in SGC_HOME/config/SGC.properties. The value of this property must be the same for all nodes that form a particular cluster.

If you are planning to use more than one SGC cluster in the same local network, then:

  • Set the hazelcast.group property in SGC_HOME/config/SGC.properties to a value unique among all other clusters in the same local network.
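For example, the first node of a two-node cluster might use the following entries in SGC_HOME/config/SGC.properties (the instance and cluster names shown are illustrative):

ss7.instance=node-1
hazelcast.group=my-sgc-cluster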

Creating additional nodes

After installing the first SGC node in a cluster, you can add more nodes by either:

  • copying the installation directory of an existing node, and changing the ss7.instance property in SGC_HOME/config/SGC.properties to a value unique among all the other nodes in the cluster.

or

  • repeating the installation steps for another node,

  • setting the ss7.instance property in SGC_HOME/config/SGC.properties to a value unique among all other nodes in the cluster,

  • setting the hazelcast.group in SGC_HOME/config/SGC.properties to the value chosen as cluster name, and

  • repeating any other installation customization steps.
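For example, to create a second node by copying an existing installation (the directory names and instance value shown are illustrative):

cp -r ocss7-X.X.X.X ocss7-node-2
# then edit ocss7-node-2/config/SGC.properties and set a unique instance name:
ss7.instance=node-2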

Layout of the SGC installation

Example 1. Installation directory contents

A typical SGC installation contains these subdirectories:

Directory Contents

.

(SGC installation directory)

  • main SGC Java Archive

bin

SGC management scripts

cli

command line interface installation, including start scripts, configuration, and logs

config

configuration files which may be edited by the user as required

doc

supplementary documentation included as a convenience, such as SNMP MIB files

lib

Java libraries used by the SGC

license

third-party software licenses

logs

log output from the SGC

var

persisted cluster configuration (sgc.dat) and temporary files used by the SGC management scripts

Running the SGC

SGC operations

Note
JAVA_HOME

The SGC script expects the JAVA_HOME environment variable to be set and to point to a valid JVM, version 7 or greater (it expects the executable file JAVA_HOME/bin/java).

The SGC is started and stopped using the SGC_HOME/bin/sgc script.

The sgc script runs the SGC under a watchdog: if the SGC process exits for an unrecognized reason, it is automatically restarted. Output from the SGC and from the watchdog script is redirected into a startup log file. The startup log files are in the SGC_HOME/logs directory and are named startup.<startup-time>. If startup fails for any reason, details about the failure should be available in the startup log file.

The sgc script is configured in SGC_HOME/config/sgcenv. The sgcenv file contains JVM parameters which cannot be provided in the SGC.properties file.

The sgc script can be run with the following arguments:

Command argument Optional arguments Description

start

--nowait
--jmxhost <host>
--jmxport <port>

Starts the SGC using the configuration from SGC_HOME/config/sgcenv.

  • The --nowait argument specifies that the startup script does not verify that the SGC has started successfully; it simply initiates startup and exits.

  • The --jmxhost and --jmxport arguments allow specifying a different JMX listening socket than the one defined in SGC_HOME/config/sgcenv.

stop

--immediate

Stops the SGC. Without the --immediate argument, a graceful shutdown is attempted. With --immediate, processes are killed.

restart

--nowait
--jmxhost <host>
--jmxport <port>
--immediate

Equivalent of stop, then start.

test

--jmxhost <host>
--jmxport <port>

Runs the SGC in test mode. In test mode, the SGC runs in the foreground and logging is configured by log4j.test.xml, which prints more information to the console.

foreground

--jmxhost <host>
--jmxport <port>

Runs the SGC in foreground mode. The SGC is not daemonized.

status

Prints the status of SGC and returns one of these LSB-compatible exit codes:

  • 0 = the SGC is running

  • 1 = the SGC is dead and the SGC_HOME/var/sgc.pid file exists

  • 3 = the SGC is not running

For example:

Start SGC

$SGC_HOME/bin/sgc start
$SGC_HOME/bin/sgc start --nowait --jmxport 50111 --jmxhost 127.0.0.1

Stop SGC

$SGC_HOME/bin/sgc stop
$SGC_HOME/bin/sgc stop --immediate

Check SGC status

$SGC_HOME/bin/sgc status
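
Run SGC in the foreground or in test mode

$SGC_HOME/bin/sgc foreground
$SGC_HOME/bin/sgc test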

Configuring SGC_HOME/config/sgcenv

The SGC_HOME/config/sgcenv file contains configuration parameters for the sgc script. The following settings are supported:

Variable name Description Valid Values Default

JAVA_HOME

Location of the JVM home directory.

JMX_HOST

Host that SGC should bind to in order to listen for incoming JMX connections.

IPv4 or IPv6 address

127.0.0.1

JMX_PORT

Port where SGC binds for incoming JMX connections.

1-65535

50111

JMX_SECURE

Whether or not the JMX connection should be secured with SSL/TLS.

true, false

false

JMX_NEED_CLIENT_AUTH

Whether or not the SGC should require a trusted client certificate for an SSL/TLS-secured JMX connection.

JMX_SECURE_CFG_FILE

Path to the configuration file with properties used to secure the JMX management connection.

DEFAULT_STORE_PASSWORD

Password used during generation of the key store and trust store used to secure the JMX management connection.

changeit

MAX_HEAP_SIZE

Maximum size of the JVM heap space.

MIN_HEAP_SIZE

Initial size of the JVM heap space.

MAX_PERM_SIZE

Maximum size of the JVM permgen space.

GCOPTIONS

Full override of the default garbage collection settings.

JVM_PROPS

Additional JVM parameters.

Modifications should add to the existing JVM_PROPS rather than overriding it. For example: JVM_PROPS="$JVM_PROPS -Dsome_option".

LOGGING_CONFIG

The log4j configuration file to be used in normal mode (start/restart/foreground).

LOGGING_TEST_CONFIG

The log4j configuration file to be used in test mode.

SGC_CFG_FILE

Location of the SGC properties file.

RUNONCE

Whether or not the watchdog is enabled. Disabling the watchdog may be required if the SGC is run under the control of some other HA system.

0 = watchdog is enabled
1 = start SGC in background without watchdog script

DEBUG

Enables additional script information.

0 = no additional debug information
1 = additional information enabled

PREFER_IPV4

Prefer the IPv4 protocol. Set the value to false to enable IPv6 support.

true = use only IPv4
false = use both IPv4 and IPv6

true

NUMAZONES

On NUMA architecture machines, this parameter allows selecting specific CPU and memory bindings for SGC.

CPUS

On non-NUMA architecture machines, this parameter may be used to set SGC affinity to specific CPUs.
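
For illustration, a minimal sgcenv excerpt might look like the following (the values shown are the documented defaults and the JVM_PROPS example from the table above; adjust them to your environment):

JMX_HOST=127.0.0.1
JMX_PORT=50111
JVM_PROPS="$JVM_PROPS -Dsome_option"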

Note
JMX Connector configuration variables

SGC_HOME/config/sgcenv contains additional variables used to configure a secure JMX management connection. For details, please see Securing the SGC JMX management connection with SSL/TLS.

Installing SGC as a service

To install SGC as a service, perform the following operations as user root:

1

Copy SGC_HOME/bin/sgcd to /etc/init.d:

# cp $SGC_HOME/bin/sgcd /etc/init.d

2

Grant execute permissions to /etc/init.d/sgcd:

# chmod a+x /etc/init.d/sgcd

3

Create the file /etc/sysconfig/sgcd. Assuming that the SGC is installed in /opt/sgc/current as user sgc, the file would have the following content:

SGC=/opt/sgc/current/bin/sgc
SGCUSER=sgc

4

Activate the service using the standard RedHat command:

# chkconfig --add sgcd
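
Once registered, the service can typically be started with the standard RedHat service command (the exact set of actions supported depends on the sgcd script):

# service sgcd start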