This guide describes how to deploy, manage, and maintain OCSS7, Metaswitch’s SS7 stack.

This document assumes a working knowledge of SS7 and SIGTRAN, as well as some familiarity with the administration of Linux systems and Java applications.

Topics

This document includes the following topics:

About OCSS7

An overview of OCSS7, including features and architecture

Quick Start

A step-by-step setup guide from packages through a test dialog, ignoring production-grade details

Installing the SGC

SGC installation in detail, including production-grade features and tuning

Configuring the OCSS7 SGC

Configuration options for startup and runtime operations

Operational State and Instance Management

Monitoring OCSS7’s operational state

Command-Line Management Console

Driving OCSS7 from the command line

SGC TCAP Stack

Use of OCSS7 with Metaswitch’s CGIN Connectivity Pack

Upgrading the SGC and TCAP stack

Upgrading the SGC and the CGIN TCAP stack

Appendix A: SGC Properties

Appendix describing the configuration properties that may be set in the SGC.properties file

Appendix B: Support Requests

Gathering information for support requests

Appendix C: Online Upgrade Support Matrix

The SGC versions that support online upgrade

Appendix D: Glossary

Glossary of acronyms

Other documentation for OCSS7 can be found on the OCSS7 product page.

About OCSS7

What is OCSS7?

OCSS7 provides SS7 network connectivity to Metaswitch products, including the Rhino CGIN RA and the Metaswitch Scenario Simulator. It does this by implementing a set of SS7 and related SIGTRAN protocols. The current version supports M3UA, ITU-T SCCP, ANSI SCCP, ITU-T TCAP and ANSI TCAP, and can act as an M3UA Application Server connected either to an M3UA Signalling Gateway or to another Application Server as an IP Server Process.

Main features

OCSS7 offers the following main features:

  • Provides IN network connectivity to Metaswitch’s CGIN and Scenario Simulator

  • SIGTRAN M3UA communication with Signalling Gateways and/or IPSP peers

  • Cluster deployment — SGC may be deployed in a cluster where multiple SGC instances cooperate and represent a single Signalling Point Code.

Supported specifications

  • RFC 4666: M3UA (ASP and IPSP functionality)

  • ITU-T Q.711-Q.714: ITU-T SCCP (excluding LUDT)

  • ATIS-1000112 (previously ANSI T1.112): ANSI SCCP (excluding LUDT, INS and ISNI)

  • ITU-T Q.771-Q.774: TCAP (constrained to features supported by the TCAP SPI interface provided by the CGIN RA)

  • ANSI T1.114-2000 and T1.114-1988: ANSI TCAP (constrained to features supported by the TCAP SPI interface provided by the CGIN RA)

For more details on supported specifications, see the OCSS7 Compliance Matrix.

Architecture

OCSS7 is composed of two main user-visible components:

  • the Signalling Gateway Client (SGC) TCAP Stack (frontend), automatically deployed as part of the Rhino CGIN RA, and

  • the SGC (backend), deployed as one or more separate processes.

After initialization, the TCAP Stack (frontend) registers with a preconfigured SSN at the SGC instance, and allows the CGIN RA to accept and initiate TCAP dialogs. Multiple TCAP Stack (frontend) instances representing the same SSN can be connected to the SGC instance (backend); in this case all incoming traffic will be load balanced among them.

The SGC provides IN connectivity for exactly one Point Code. Typically, the SGC is deployed as a cluster of at least two instances to meet standard carrier-grade, high-availability requirements. This manual primarily concerns itself with the OCSS7 SGC, as the TCAP stack component within CGIN RA is managed as part of CGIN.

Below is a high-level diagram followed by descriptions of the main components of the OCSS7 stack. The OCSS7 stack components are yellow. Components that are provided by the Rhino platform, CGIN, or the operating system environment are blue.

(Diagram: architectural overview)

SGC Subsystems overview

All SGC nodes in the cluster represent a single logical system with a single PC (SS7 Point Code). Each SGC cluster node instantiates all stack layers and subsystems, which coordinate with each other to provide a single logical view of the cluster.

The major SS7 SGC subsystems are:

  • Configuration Subsystem — plays a central role in managing the life cycle of all processing objects in the SGC Stack. The Configuration Subsystem is distributed, so an object on any cluster node may be managed from every other cluster node. Configuration is exposed through a set of JMX MBeans which allow manipulation of the underlying configuration objects

  • TCAP Layer — manages routing and load balancing of TCAP messages to appropriate Rhino CGIN RAs (through registered TCAP Stacks)

  • SCCP Layer — responsible for routing SCCP messages, GT translation, and managing internal and external subsystem availability states

  • M3UA Layer — establishes and manages the SG, IPSP connectivity, and AS state.

Internally, all layers use Cluster Communication subsystems to distribute management state and configuration data, to provide a single logical view of the entire cluster. Irrespective of which cluster node receives or originates an SS7 message, that message is transported to the appropriate cluster node for processing (by the Rhino CGIN RA) or for further sending using one of the nodes' established SCTP associations.

SGC Deployment model

The SGC cluster physical deployment model is independent of Rhino cluster deployment. That is, SGC nodes can be deployed on separate hosts from Rhino nodes, or they can share the same machines.

Management tools

The SGC installation package provides a CLI management console that exposes a set of CRUD commands used to configure the SGC cluster and observe its runtime state. SGC cluster configuration and runtime state can be managed from any node; there is no need to connect to each node separately. As the SGC exposes a standard Java JMX management interface, users are not constrained to the provided tools and are free to create custom management clients to serve their particular needs.

The SGC also provides an SNMP agent, exposing SGC-gathered statistics and Alarm notifications (Traps) through the SNMP protocol.

The TCAP stack component installed as part of the Rhino CGIN RA is managed using the usual Rhino and CGIN RA management facilities.

Quick Start

This section provides a step-by-step walk-through of basic OCSS7 setup, from unpacking software packages through running test traffic. The end result is a functional OCSS7 network suitable for basic testing. For production installations and installation reference material, please see Installing the SGC.

Introduction

In this walk-through we will be:

  • setting up two OCSS7 clusters, each with a single SGC node, and

  • running one of the example IN scenarios through the network using the Metaswitch Scenario Simulator.

To complete this walk-through you will need the OCSS7 SGC installation package, the Metaswitch Scenario Simulator, and the IN Scenario Pack for the Scenario Simulator.

These instructions should be followed on a test system which:

  • runs Linux,

  • has SCTP support, and

  • is unlikely to be hampered by local firewall or other security restrictions.

Finally, you will need to make sure that the JAVA_HOME environment variable is set to the location of your Oracle Java JDK installation.

The Plan

We will set up two clusters, each with a single node, both running on our single test system. At the M3UA level:

  • cluster 1 will use Point Code 1

  • cluster 2 will use Point Code 2

  • there will be one Application Server (AS) with routing context 2

  • there will be one SCTP association between the two nodes

We will test the network using two Metaswitch Scenario Simulators:

  • Simulator 1 will:

    • connect to cluster 1

    • use SSN 101

    • use GT 1234

  • Simulator 2 will:

    • connect to cluster 2

    • use SSN 102

    • use GT 4321

Routing between the two simulators will be via Global Title translation.

Once we think that the network is operational we will test it by running one of the example scenarios shipped with the IN Scenario Pack for the Scenario Simulator.

SGC installation

Naming Conventions

The cluster naming convention in this example uses PC followed by the point code. For example, a cluster whose point code is 1 will have a name of PC1.

The node naming convention lists the cluster name first, then a hyphen, followed by the node number within that SGC cluster. For example, PC1-1 or PC2-1. During this walk-through the number after the hyphen will always be 1, but this convention provides space to expand if you wish to add additional nodes after completing the walk-through.
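The convention above can be sketched as two tiny shell helpers (the function names are our own, for illustration only):

```shell
# Sketch of the naming convention described above:
# clusters are "PC<point code>", nodes are "<cluster>-<node number>".
cluster_name() { printf 'PC%s\n' "$1"; }          # $1 = point code
node_name()    { printf 'PC%s-%s\n' "$1" "$2"; }  # $1 = point code, $2 = node number

cluster_name 1    # -> PC1
node_name 2 1     # -> PC2-1
```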

Installation

We will now install two SGC clusters, each containing one node.

1

Create the root installation directory for the PC1 cluster and PC1-1 node:
mkdir -p PC1/PC1-1
Tip This installation structure follows the recommended Installation structure and creates an installation that is compatible with the automated upgrade tool, Orca.

2

Unpack the SGC archive file in the PC1/PC1-1 directory:
cd PC1/PC1-1
unzip ocss7-package-VERSION.zip
cd ../..

(replacing ocss7-package-VERSION.zip with the correct file name).

This creates the distribution directory, ocss7-X.X.X.X, in the current working directory.

Example:

$ unzip ocss7-package-3.0.0.0.zip
Archive:  ocss7-package-3.0.0.0.zip
   creating: ocss7-3.0.0.0/
  inflating: ocss7-3.0.0.0/CHANGELOG
  inflating: ocss7-3.0.0.0/README
   creating: ocss7-3.0.0.0/config/
   creating: ocss7-3.0.0.0/doc/
   creating: ocss7-3.0.0.0/license/
   creating: ocss7-3.0.0.0/logs/
   creating: ocss7-3.0.0.0/var/
  inflating: ocss7-3.0.0.0/config/SGC.properties
  inflating: ocss7-3.0.0.0/config/SGC_bundle.properties.sample
  inflating: ocss7-3.0.0.0/config/log4j.dtd
  inflating: ocss7-3.0.0.0/config/log4j.test.xml
  inflating: ocss7-3.0.0.0/config/log4j.xml
  inflating: ocss7-3.0.0.0/config/sgcenv
  inflating: ocss7-3.0.0.0/license/LICENSE.apache-log4j-extras.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.commons-cli.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.commons-collections.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.commons-lang.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.guava.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.hazelcast.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.jline.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.jsr305.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.log4j.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.netty.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.protobuf.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.slf4j.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.snmp4j.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.velocity.txt
   creating: ocss7-3.0.0.0/bin/
  inflating: ocss7-3.0.0.0/bin/generate-report.sh
  inflating: ocss7-3.0.0.0/bin/sgc
  inflating: ocss7-3.0.0.0/bin/sgcd
  inflating: ocss7-3.0.0.0/bin/sgckeygen
  inflating: ocss7-3.0.0.0/sgc.jar
   creating: ocss7-3.0.0.0/lib/
  inflating: ocss7-3.0.0.0/lib/apache-log4j-extras-1.2.17.jar
  inflating: ocss7-3.0.0.0/lib/guava-14.0.1.jar
  inflating: ocss7-3.0.0.0/lib/hazelcast-3.7.jar
  inflating: ocss7-3.0.0.0/lib/jsr305-1.3.9.jar
  inflating: ocss7-3.0.0.0/lib/log4j-1.2.17.jar
  inflating: ocss7-3.0.0.0/lib/netty-buffer-4.0.28.jar
  inflating: ocss7-3.0.0.0/lib/netty-codec-4.0.28.jar
  inflating: ocss7-3.0.0.0/lib/netty-codec-http-4.0.28.jar
  inflating: ocss7-3.0.0.0/lib/netty-common-4.0.28.jar
  inflating: ocss7-3.0.0.0/lib/netty-handler-4.0.28.jar
  inflating: ocss7-3.0.0.0/lib/netty-transport-4.0.28.jar
  inflating: ocss7-3.0.0.0/lib/protobuf-java-2.3.0.jar
  inflating: ocss7-3.0.0.0/lib/protobuf-library-2.3.0.1.jar
  inflating: ocss7-3.0.0.0/lib/slf4j-api-1.7.25.jar
  inflating: ocss7-3.0.0.0/lib/slf4j-log4j12-1.7.25.jar
  inflating: ocss7-3.0.0.0/lib/snmp4j-2.2.2.jar
  inflating: ocss7-3.0.0.0/lib/snmp4j-agent-2.0.10a.jar
   creating: ocss7-3.0.0.0/lib/upgrade-packs/
  inflating: ocss7-3.0.0.0/lib/upgrade-packs/ocss7-upgrade-pack-3.0.0.0.jar
   creating: ocss7-3.0.0.0/cli/
  inflating: ocss7-3.0.0.0/cli/sgc-cli.sh
   creating: ocss7-3.0.0.0/cli/conf/
   creating: ocss7-3.0.0.0/cli/lib/
  inflating: ocss7-3.0.0.0/cli/conf/cli.properties
  inflating: ocss7-3.0.0.0/cli/conf/log4j.xml
  inflating: ocss7-3.0.0.0/cli/lib/commons-cli-1.2.jar
  inflating: ocss7-3.0.0.0/cli/lib/commons-collections-3.2.1.jar
  inflating: ocss7-3.0.0.0/cli/lib/commons-lang-2.6.jar
  inflating: ocss7-3.0.0.0/cli/lib/jline-1.0.jar
  inflating: ocss7-3.0.0.0/cli/lib/log4j-1.2.17.jar
  inflating: ocss7-3.0.0.0/cli/lib/ocss7-cli.jar
  inflating: ocss7-3.0.0.0/cli/lib/ocss7-remote-3.0.0.0.jar
  inflating: ocss7-3.0.0.0/cli/lib/slf4j-api-1.7.25.jar
  inflating: ocss7-3.0.0.0/cli/lib/slf4j-log4j12-1.7.25.jar
  inflating: ocss7-3.0.0.0/cli/lib/velocity-1.7.jar
  inflating: ocss7-3.0.0.0/cli/sgc-cli.bat
   creating: ocss7-3.0.0.0/doc/mibs/
  inflating: ocss7-3.0.0.0/doc/mibs/COMPUTARIS-MIB.txt
  inflating: ocss7-3.0.0.0/doc/mibs/CTS-SGC-MIB.txt
  inflating: ocss7-3.0.0.0/doc/mibs/OPENCLOUD-OCSS7-MIB.txt
  inflating: ocss7-3.0.0.0/config/hazelcast.xml.sample

3

Create the root installation directory for the PC2 cluster and PC2-1 node:
mkdir -p PC2/PC2-1

4

Unpack the SGC archive file in the PC2/PC2-1 directory:
cd PC2/PC2-1
unzip ocss7-package-VERSION.zip
cd ../..

(replacing ocss7-package-VERSION.zip with the correct file name).

This creates the distribution directory, ocss7-X.X.X.X, in the current working directory.

We now have two SGC nodes with no configuration. The next step is to set up their cluster configuration.

SGC cluster membership configuration

We will now configure cluster membership for our two SGC nodes/clusters:

  • the node name is specified by the ss7.instance parameter, and

  • the cluster name is specified by the hazelcast.group parameter

Later on, during SS7 configuration, the ss7.instance value is used to specify which node in the cluster certain configuration elements (such as SCTP endpoints) are associated with.
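If you prefer to script the edits made in the next two steps, the sed sketch below rewrites the two properties in place. This is a minimal sketch: the helper name is our own, the keys are assumed to already exist in the file, and GNU sed is assumed for -i.

```shell
# set_identity: rewrite ss7.instance and hazelcast.group in an
# SGC.properties file in place (assumes GNU sed and existing keys).
set_identity() {
  file="$1"; node="$2"; cluster="$3"
  sed -i \
    -e "s/^ss7\.instance=.*/ss7.instance=${node}/" \
    -e "s/^hazelcast\.group=.*/hazelcast.group=${cluster}/" \
    "$file"
}

# Usage with the paths created earlier in this walk-through:
#   set_identity PC1/PC1-1/ocss7-3.0.0.0/config/SGC.properties PC1-1 PC1
#   set_identity PC2/PC2-1/ocss7-3.0.0.0/config/SGC.properties PC2-1 PC2
```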

1

Give node PC1-1 its identity

Edit the file PC1/PC1-1/ocss7-3.0.0.0/config/SGC.properties and set ss7.instance to PC1-1 and hazelcast.group to PC1. When you are done the file should look like this:

# SGC instance node name
ss7.instance=PC1-1
# Path to the Hazelcast config file
hazelcast.config.file=config/hazelcast.xml
# Default Hazelcast group name
hazelcast.group=PC1

#path where sgc data file should be stored
sgc.data.dir=var

2

Give node PC2-1 its identity

Edit the file PC2/PC2-1/ocss7-3.0.0.0/config/SGC.properties and set ss7.instance to PC2-1 and hazelcast.group to PC2. When you are done the file should look like this:

# SGC instance node name
ss7.instance=PC2-1
# Path to the Hazelcast config file
hazelcast.config.file=config/hazelcast.xml
# Default Hazelcast group name
hazelcast.group=PC2

#path where sgc data file should be stored
sgc.data.dir=var
Tip

For clusters with multiple nodes the hazelcast.group property must be set; see the installation reference. For single-node clusters this value is optional: if unconfigured, the node will create a unique group for itself. However, in order to add further members to the cluster later, the group must be set.

Starting the clusters

We will now start the two SGC clusters.

1

Check JAVA_HOME

Make sure your JAVA_HOME environment variable points to a supported Java installation. Example:

$ echo $JAVA_HOME
/opt/jdk1.8.0_60

2

Change the management port for node PC1-1

Edit the file PC1/PC1-1/ocss7-3.0.0.0/config/sgcenv and change the JMX_PORT setting to 10111.

The JMX_PORT is the management port to which the command line management console will connect. It is not normally necessary to change this setting, but we must since we are running multiple nodes on a single system and they cannot both bind to the same port.

Tip Throughout this walk-through we will need a number of ports for different things. Any unique port numbers may be used, but it is helpful if there is some structure or pattern in place to help remember or calculate which port should be used in each situation. In this walk-through port numbers will be created as a concatenation of a purpose-specific prefix followed by the cluster number and the node number within that cluster. For management ports we’re adopting the prefix 101, therefore node PC1-1 has management port 10111, and node PC2-1 has management port 10121. If we were to add a second node to cluster PC2 later on we would assign it 10122 to use as its JMX_PORT.
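The scheme in the tip above can be written down as a one-line helper (the function name is our own):

```shell
# sgc_port: concatenate purpose prefix, cluster number, and node number,
# per the port-numbering convention used in this walk-through.
sgc_port() { printf '%s%s%s\n' "$1" "$2" "$3"; }  # prefix, cluster, node

sgc_port 101 1 1   # JMX port for PC1-1 -> 10111
sgc_port 101 2 1   # JMX port for PC2-1 -> 10121
sgc_port 101 2 2   # JMX port for a future PC2-2 -> 10122
```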

3

Start node PC1-1
./PC1/PC1-1/ocss7-3.0.0.0/bin/sgc start

If all is well, you should see:

SGC starting - daemonizing ...
SGC started successfully

4

Change the management port for node PC2-1

Edit the file PC2/PC2-1/ocss7-3.0.0.0/config/sgcenv and change the JMX_PORT setting to 10121.

The JMX_PORT is the management port to which the command line management console will connect. It is not normally necessary to change this setting, but we must since we are running multiple nodes on a single system and they cannot both bind to the same port.

5

Start node PC2-1
./PC2/PC2-1/ocss7-3.0.0.0/bin/sgc start

If all is well, you should see:

SGC starting - daemonizing ...
SGC started successfully

If the SGC start command reported any errors please double check your JAVA_HOME environment variable and make sure that nothing has already bound the management ports 10111 and 10121. If these ports are already in use on your system you may simply change them to something else and make a note of the values for later use.
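One way to check whether the chosen ports are already bound is to filter the output of a socket-listing tool. The sketch below assumes iproute2's ss utility; the helper name is our own, and the demonstration uses a canned sample rather than live output:

```shell
# free_ports: reads `ss -ltn`-style output on stdin and prints each
# requested port that is NOT currently in the listening list.
free_ports() {
  listening=$(awk 'NR > 1 { sub(/.*:/, "", $4); print $4 }')
  for p in "$@"; do
    printf '%s\n' "$listening" | grep -qx "$p" || printf '%s free\n' "$p"
  done
}

# Live check (requires iproute2):  ss -ltn | free_ports 10111 10121
# Canned demonstration: 10111 is bound, 10121 is not.
printf 'State Recv-Q Send-Q Local Address:Port Peer\nLISTEN 0 128 0.0.0.0:10111 *\n' \
  | free_ports 10111 10121   # prints: 10121 free
```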

Connect the management console

We now have two running OCSS7 clusters with blank configuration. The configuration we have done so far was done on a per-node basis using configuration files, but this does no more than give a node the minimal configuration it needs to boot and become a cluster member. The rest of our SGC configuration will now be done using the Command-Line Management Console. Configuration done in this manner becomes cluster-wide configuration which is automatically propagated to and saved by every other cluster node, although for our single-node clusters that detail will not be particularly relevant.

It is recommended that you start one management console per node for this walk-through; however, if your system is low on RAM you may wish to start and stop these consoles as required.

1

Start the management console for PC1-1
./PC1/PC1-1/ocss7-3.0.0.0/cli/sgc-cli.sh

Example:

$ ./PC1/PC1-1/ocss7-3.0.0.0/cli/sgc-cli.sh
Preparing to start SGC CLI ...
Checking environment variables
[JAVA_HOME]=[/opt/jdk1.8.0_60]
[CLI_HOME]=[/home/ocss7/quick-start/PC1/PC1-1/ocss7-3.0.0.0/cli]
Environment is OK!
Determining SGC home and JMX configuration
[SGC_HOME]=/home/ocss7/quick-start/PC1/PC1-1/ocss7-3.0.0.0
[JMX_HOST]=127.0.0.1
[JMX_PORT]=10111
Done
+---------------------------Environment--------------------------------+
CLI_HOME: /home/ocss7/quick-start/PC1/PC1-1/ocss7-3.0.0.0/cli
JAVA: /opt/jdk1.8.0_60
JAVA_OPTS:  -Dlog4j.configuration=file:/home/ocss7/quick-start/PC1/PC1-1/ocss7-3.0.0.0/cli/conf/log4j.xml -Dsgc.home=/home/ocss7/quick-start/PC1/PC1-1/ocss7-3.0.0.0/cli
+----------------------------------------------------------------------+
127.0.0.1:10111 PC1-1>

Here we can see the management console’s prompt, which identifies the node to which it is connected by host and port.

Tip The host and port settings were determined automatically by the CLI, which is possible because it is currently part of an SGC installation and can read the SGC configuration files. The CLI can also be copied out to elsewhere and run from another location or another host. When the CLI is not part of an SGC installation it is necessary to provide host and port options to the CLI on the command line.

2

Start the management console for PC2-1
./PC2/PC2-1/ocss7-3.0.0.0/cli/sgc-cli.sh

Example:

$ ./PC2/PC2-1/ocss7-3.0.0.0/cli/sgc-cli.sh
Preparing to start SGC CLI ...
Checking environment variables
[JAVA_HOME]=[/opt/jdk1.8.0_60/]
[CLI_HOME]=[/home/ocss7/quick-start/PC2/PC2-1/ocss7-3.0.0.0/cli]
Environment is OK!
Determining SGC home and JMX configuration
[SGC_HOME]=/home/ocss7/quick-start/PC2/PC2-1/ocss7-3.0.0.0
[JMX_HOST]=127.0.0.1
[JMX_PORT]=10121
Done
+---------------------------Environment--------------------------------+
CLI_HOME: /home/ocss7/quick-start/PC2/PC2-1/ocss7-3.0.0.0/cli
JAVA: /opt/jdk1.8.0_60/
JAVA_OPTS:  -Dlog4j.configuration=file:/home/ocss7/quick-start/PC2/PC2-1/ocss7-3.0.0.0/cli/conf/log4j.xml -Dsgc.home=/home/ocss7/quick-start/PC2/PC2-1/ocss7-3.0.0.0/cli
+----------------------------------------------------------------------+
127.0.0.1:10121 PC2-1>
Tip

The management console supports tab completion and suggestions. If you hit tab while in the console it will complete the command, parameter, or value as best it can. If the console is unable to complete the command, parameter, or value entirely because there are multiple completion choices then it will display the available choices.

Tip You can exit the management console either by hitting ctrl-d or entering the quit command.

General configuration

General Configuration is that which is fundamental to the cluster and the nodes within it. For our purposes this means:

  • setting the local Point Codes for the clusters,

  • setting the basic communication attributes of each node.

The basic communication attributes of each node are used to control:

  • payload message transfer between SGCs within the cluster; and

  • communication with client TCAP stacks running in Rhino or the Scenario Simulator.

Tip The distinction between clusters and nodes is about to become apparent: each cluster has exactly one local Point Code for which it provides services, set once for the entire cluster. In contrast, each node must be defined and given its own basic communication configuration.

1a

Set the Point Code for PC1-1’s cluster to 1

Within the management console for PC1-1 run:

modify-parameters: sp=1

Example:

127.0.0.1:10111 PC1-1> modify-parameters: sp=1
OK parameters updated.

1b

Set the Point Code for PC2-1’s cluster to 2

Within the management console for PC2-1 run:

modify-parameters: sp=2

Example:

127.0.0.1:10121 PC2-1> modify-parameters: sp=2
OK parameters updated.

2a

Configure node PC1-1’s basic communication attributes

Within the management console for PC1-1 run:

create-node: oname=PC1-1, switch-local-address=127.0.0.1, switch-port=11011, stack-data-port=12011, stack-http-port=13011, enabled=true

Example:

127.0.0.1:10111 PC1-1> create-node: oname=PC1-1, switch-local-address=127.0.0.1, switch-port=11011, stack-data-port=12011, stack-http-port=13011, enabled=true
OK node created.
Warning The value given to oname above must exactly match the value of ss7.instance which was set in SGC cluster membership configuration. If the values are different the running node will assume this configuration is not intended for it, but for some other cluster node.

This command configures network communication for:

  • message passing between SGC cluster members, through the attributes starting with switch-; and

  • communication with client TCAP stacks such as Rhino or the Scenario Simulator, through the attributes starting with stack-.

Tip There are two attributes not specified in this command, stack-data-address and stack-http-address, which control the addresses used for client TCAP stack communications. These have been left to default to the value of switch-local-address because we are running everything on a single system. A typical production installation would partition the two traffic types by having one value for switch-local-address, and a different value shared between stack-data-address and stack-http-address. See Network Architecture Planning and the node configuration attributes for details.

2b

Configure node PC2-1’s basic communication attributes

Within the management console for PC2-1 run:

create-node: oname=PC2-1, switch-local-address=127.0.0.1, switch-port=11021, stack-data-port=12021, stack-http-port=13021, enabled=true
Tip This command differs from the previous command in that the ports have been carefully chosen not to conflict with those of the other SGC process. This is only necessary because all the SGC processes are running on a single test system in this walk-through. Typical production installations, where each host runs only one SGC node, could omit the ports entirely and use the default values on every node.

Example:

127.0.0.1:10121 PC2-1> create-node: oname=PC2-1, switch-local-address=127.0.0.1, switch-port=11021, stack-data-port=12021, stack-http-port=13021, enabled=true
OK node created.
Warning The value given to oname above must exactly match the value of ss7.instance which was set in SGC cluster membership configuration. If the values are different the running node will assume this configuration is not intended for it, but for some other cluster node.

Of the attributes we set above only the switch-local-address and stack-data-port settings are required for future configuration; we’ll use them when we get to the Scenario Simulator configuration section.

Tip The create-node command is discussed above in the context of configuring the basic communication attributes to be used, but it also creates a node configuration object which can be enabled or disabled, and whose current state can be seen using the display-node command. Some per-node configuration must always be provided in this way; if it did not have to be, the SGC could simply detect and add cluster nodes automatically as they come online.
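At this point the node objects just created can be sanity-checked from either management console with the display-node command mentioned above (output omitted here, as the exact columns depend on the SGC version):

```
display-node:
```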

M3UA configuration

We will now begin configuring the M3UA layer of our network. There are a number of ways this can be done, but for the purposes of this walk-through we will use:

  • a single Application Server (AS) between the two instances,

  • the cluster for Point Code 1 as a client (in IPSP mode),

  • the cluster for Point Code 2 as a server (in IPSP mode), and

  • one SCTP association between the two nodes.

At a high level the procedure we’re about to follow will:

  • define the Application Server (AS) on each cluster,

  • define routes to our destination Point Codes through the defined AS,

  • define the SCTP connection on each node, and

  • associate the SCTP connection with the Application Server.

All the steps below are in two parts, the part "a" commands must be run on the management console connected to node PC1-1 and the part "b" commands must be run on the management console connected to node PC2-1. If this becomes confusing please check the examples given, which will indicate the correct management console by the port number in the prompt.

Tip Those familiar with M3UA will note that Single Exchange is used. The SGC does not support Double Exchange.

1a

Define the Application Server for PC1-1

Create the AS with traffic-maintenance-role=ACTIVE and Routing Context 2:

create-as: oname=PC2, traffic-maintenance-role=ACTIVE, rc=2, enabled=true

Example:

127.0.0.1:10111 PC1-1> create-as: oname=PC2, traffic-maintenance-role=ACTIVE, rc=2, enabled=true
OK as created.

1b

Define the Application Server for PC2-1

On PC2-1 note that traffic-maintenance-role=PASSIVE:

create-as: oname=PC2, traffic-maintenance-role=PASSIVE, rc=2, enabled=true

Example:

127.0.0.1:10121 PC2-1> create-as: oname=PC2, traffic-maintenance-role=PASSIVE, rc=2, enabled=true
OK as created.

2a

Define the local SCTP association’s endpoint for PC1-1
create-local-endpoint: oname=PC1-1-PC2-1, node=PC1-1, port=21121

Example:

127.0.0.1:10111 PC1-1> create-local-endpoint: oname=PC1-1-PC2-1, node=PC1-1, port=21121
OK local-endpoint created.

This defines a local endpoint which will be bound to SCTP port 21121.

Tip

Naming and oname

Every configuration object has an object name field called oname. These onames serve both as user documentation and the method of referring to other configuration objects, as seen here, where the node attribute refers to our node’s oname.

It is a good idea to plan a consistent and informative naming scheme before starting. In this walk-through several strategies are used:

  • the owning cluster name is used where a cluster as a whole owns an object (the AS is named PC2 because the cluster for PC 2 is acting as the server, and therefore has the greatest claim to ownership);

  • the owning node name is used where a specific node owns an object (each node is named for itself, for example);

  • where an object connects two things (like an SCTP association end point which will be used for outgoing connections) the name is the client node followed by the server node (for example oname=PC1-1-PC2-1 above)

2b

Define the local SCTP association’s endpoint for PC2-1
create-local-endpoint: oname=PC2-1, node=PC2-1, port=22100

This defines a local endpoint which will be bound to SCTP port 22100.

Example:

127.0.0.1:10121 PC2-1> create-local-endpoint: oname=PC2-1, node=PC2-1, port=22100
OK local-endpoint created.
Tip The oname here is the same as the node’s oname because the node owns this configuration object. In this walk-through we will only be using it to connect to PC1-1, but because we intend to have this node acting in the server role for SCTP we could reasonably expect more nodes from our peer cluster to connect to it, so it would be misleading to name it PC2-1-PC1-1 in the style used for step 2a.

3a

Define the local SCTP endpoint IP addresses for PC1-1

We will now define the IP address to be used by our SCTP association.

create-local-endpoint-ip: oname=PC1-1-PC2-1, ip=127.0.0.1, local-endpoint-name=PC1-1-PC2-1

Example:

127.0.0.1:10111 PC1-1> create-local-endpoint-ip: oname=PC1-1-PC2-1, ip=127.0.0.1, local-endpoint-name=PC1-1-PC2-1
OK local-endpoint-ip created.

As you can see above, a local-endpoint-ip associates itself with one particular local-endpoint by setting local-endpoint-name to the oname value of the intended local-endpoint. This step is necessary because SCTP supports "multi-homing", meaning that one association can be bound to multiple local IP addresses. Typically these IP addresses would be associated with resilient physical network paths, allowing multi-homing to provide protection against network failure.
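For illustration only (this is not part of the walk-through), a second, hypothetical address could be added to the same local endpoint before it is enabled; the oname and the 192.0.2.1 documentation address below are our own inventions:

```
create-local-endpoint-ip: oname=PC1-1-PC2-1-2, ip=192.0.2.1, local-endpoint-name=PC1-1-PC2-1
```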

3b

Define the local SCTP endpoint IP addresses for PC2-1

Similar to 3a, above:

create-local-endpoint-ip: oname=PC2-1, ip=127.0.0.1, local-endpoint-name=PC2-1

Example:

127.0.0.1:10121 PC2-1> create-local-endpoint-ip: oname=PC2-1, ip=127.0.0.1, local-endpoint-name=PC2-1
OK local-endpoint-ip created.

4a

Enable the local endpoint for PC1-1

The local endpoint was created in its default state, enabled=false, to allow us to add local endpoint IP addresses to it. The SGC does not allow changes to enabled local endpoints, to avoid unexpected service interruptions while it tears down the connection and re-establishes it with new configuration. We are now done modifying this configuration, so it is time to enable the local endpoint:

enable-local-endpoint: oname=PC1-1-PC2-1

Example:

127.0.0.1:10111 PC1-1> enable-local-endpoint: oname=PC1-1-PC2-1
OK local-endpoint enabled.

4b

Enable the local endpoint for PC2-1
enable-local-endpoint: oname=PC2-1

Example:

127.0.0.1:10121 PC2-1> enable-local-endpoint: oname=PC2-1
OK local-endpoint enabled.

5a

Define the client connection for PC1-1 to PC2-1

We will now define the SCTP association used by PC1-1, as well as some M3UA settings for the connection:

create-connection: oname=PC1-1-PC2-1, port=22100, local-endpoint-name=PC1-1-PC2-1, conn-type=CLIENT, state-maintenance-role=ACTIVE, is-ipsp=true, enabled=true

Example:

127.0.0.1:10111 PC1-1> create-connection: oname=PC1-1-PC2-1, port=22100, local-endpoint-name=PC1-1-PC2-1, conn-type=CLIENT, state-maintenance-role=ACTIVE, is-ipsp=true, enabled=true
OK connection created.

The port here is the remote SCTP port to which the SGC should connect; the local IP and port information comes from the local-endpoint-name. This connection will act in all ways as a "client" connection, in that it will initiate the connection and begin the conversation. If you’re interested in the exact details please see the connection reference documentation.

5b

Define the server connection for PC2-1 from PC1-1

Similar to the above, this defines the server side of the connection on PC2-1, which will accept the connection initiated by PC1-1:

create-connection: oname=PC1-1-PC2-1, port=21121, local-endpoint-name=PC2-1, conn-type=SERVER, state-maintenance-role=PASSIVE, is-ipsp=true, enabled=true

Example:

127.0.0.1:10121 PC1-1> create-connection: oname=PC1-1-PC2-1, port=21121, local-endpoint-name=PC2-1, conn-type=SERVER, state-maintenance-role=PASSIVE, is-ipsp=true, enabled=true
OK connection created.

The port in this command is the remote port from which the connection will be initiated. It must match the configuration in node PC1-1 or the connection will not be accepted.

6a

Define the connection IP addresses for PC1-1 to PC2-1

Just as we had to define local endpoint IP addresses earlier, we must now define the remote connection IP addresses to which the node should connect:

create-conn-ip: oname=PC1-1-PC2-1, ip=127.0.0.1, conn-name=PC1-1-PC2-1

Example:

127.0.0.1:10111 PC1-1> create-conn-ip: oname=PC1-1-PC2-1, ip=127.0.0.1, conn-name=PC1-1-PC2-1
OK conn-ip created.

Again, this extra step is because SCTP supports multi-homing.

6b

Define the connection IP addresses for PC2-1 from PC1-1

The complement of step 6a, above: PC2-1 needs to know which IP addresses to expect a connection from:

create-conn-ip: oname=PC1-1-PC2-1, ip=127.0.0.1, conn-name=PC1-1-PC2-1

Example:

127.0.0.1:10121 PC1-1> create-conn-ip: oname=PC1-1-PC2-1, ip=127.0.0.1, conn-name=PC1-1-PC2-1
OK conn-ip created.

The IP address here must match the local-endpoint-ip address from PC1-1 or the connection will not be accepted by PC2-1.

7a

Connect the AS to the connection on PC1-1

We must now tell the SGC that our AS should use the connection we have defined:

create-as-connection: oname=PC1-1-PC2-1, as-name=PC2, conn-name=PC1-1-PC2-1

Example:

127.0.0.1:10111 PC1-1> create-as-connection: oname=PC1-1-PC2-1, as-name=PC2, conn-name=PC1-1-PC2-1
OK as-connection created.

This as-connection is necessary because one AS may use many connections, and a connection may serve many Application Servers, in a "many-to-many" relationship.
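As an illustration of that many-to-many relationship, a second, hypothetical Application Server (here called PC3) could be attached to the very same connection with one more as-connection. None of the names below are part of this walk-through:

```
create-as-connection: oname=PC3-PC1-1-PC2-1, as-name=PC3, conn-name=PC1-1-PC2-1
```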

7b

Connect the AS to the connection on PC2-1
create-as-connection: oname=PC1-1-PC2-1, as-name=PC2, conn-name=PC1-1-PC2-1

Example:

127.0.0.1:10121 PC1-1> create-as-connection: oname=PC1-1-PC2-1, as-name=PC2, conn-name=PC1-1-PC2-1
OK as-connection created.

8a

Define the route on PC1-1 to Point Code 2

In this final step, we must define which Destination Point Codes can be reached via our Application Server. Define a Destination Point Code for PC=2 and a route to it via our AS with the following commands:

create-dpc: oname=PC2, dpc=2
create-route: oname=PC2, as-name=PC2, dpc-name=PC2

Example:

127.0.0.1:10111 PC1-1> create-dpc: oname=PC2, dpc=2
OK dpc created.
127.0.0.1:10111 PC1-1> create-route: oname=PC2, as-name=PC2, dpc-name=PC2
OK route created.

8b

Define the route on PC2-1 to Point Code 1

Define a Destination Point Code for PC=1 and a route to it via our AS with the following commands:

create-dpc: oname=PC1, dpc=1
create-route: oname=PC1, as-name=PC2, dpc-name=PC1

Example:

127.0.0.1:10121 PC1-1> create-dpc: oname=PC1, dpc=1
OK dpc created.
127.0.0.1:10121 PC1-1> create-route: oname=PC1, as-name=PC2, dpc-name=PC1
OK route created.

General and M3UA configuration is now complete. In the next section we will check that everything is working correctly.

M3UA state inspection

You should now have two SGCs which are connected to each other at the M3UA layer. Before we move on to the upper layers of configuration we should check that everything is working as expected up to this point. If you are confident of your setup and in a hurry you can skip this section.

Please note that it is not normally necessary to check state in such an exhaustive manner; we are doing it in this step-by-step fashion to provide some familiarization with the SGC state inspection facilities and to assist with troubleshooting.

Tip Most of the commands shown below display both the definition and the state of the various configuration objects they examine, and are intended for those modifying, or considering modifying, the configuration of the SGC. If you are interested strictly in state rather than configuration, there is a related family of commands, starting with display-info-, which show extended state information without any configuration details.

1

Check the display-active-alarm command for problems

The display-active-alarm command can show problems from any aspect of the SGC’s operation. If you check it now you should see:

PC1-1

127.0.0.1:10111 PC1-1> display-active-alarm:
Found 0 objects.

PC2-1

127.0.0.1:10121 PC1-1> display-active-alarm:
Found 0 objects.

If, instead, you see one or more alarms, don’t worry, we’ll step through the diagnostics one by one.

2

Check the node state

If something is wrong with the node state or configuration then nothing will work. Run

display-node

on both nodes. Both nodes should say that the active state is true. If your active state is not true then either:

  • the enabled attribute is set to false, and you need to use the enable-node command to enable it; or

  • the ss7.instance value does not match the node’s oname value.

3

Check the local endpoint state

The local endpoint must be enabled and active before the connection between the nodes will work. Run:

display-local-endpoint

on both nodes. Both nodes should say that the active state is true. If the active state is not true then:

  • the enabled attribute is set to false, which can be corrected with the enable-local-endpoint command.

4

Check the connection state

The next thing to check, working up the stack, is the SCTP association. Run

display-connection

on both nodes. Both nodes should say that the active state is true. If the active state is not true then either:

  • the enabled attribute is set to false on one or the other of the nodes, and you need to use the enable-connection command to enable it;

  • there is a configuration mismatch between the nodes with regard to ports or IP addresses; or

  • the ports selected in this walk-through may not work correctly due to your local operating system configuration, in which case you may need to consult Configuring network features.

It is often helpful to consult either the active alarms list or the logs when diagnosing connection issues, but that is outside the scope of this walk-through.

5

Check the AS state

The AS should be active on both nodes. Run:

display-as

on both nodes to check. The state should be listed as ACTIVE. If the state is not ACTIVE then:

  • there is a problem at a lower layer;

  • the two clusters disagree about the rc value; or

  • one of the create-as-connection commands was omitted.

6

Check the SCCP state

SCCP is the next layer up, and we have not yet configured it, but it should be able to activate and communicate with its peer at this point. Run this command on both nodes to check:

display-info-remotessninfo

This should show the following output on both nodes:

127.0.0.1:10111 PC1-1> display-info-remotessninfo
Found 2 object(s):
+----------+----------+---------------+
|dpc       |ssn       |status         |
+----------+----------+---------------+
|1         |1         |ALLOWED        |
+----------+----------+---------------+
|2         |1         |ALLOWED        |
+----------+----------+---------------+

This output shows that the SCCP layers on each node are communicating with each other.

Tip SSN=1 is the SCCP management SubSystem Number. If the status of SSN=1 is not ALLOWED then the SCCP layers are unable to communicate with each other and no other SSN will be reachable.

If the status shown above is PROHIBITED for the remote Destination Point Code, there is most likely a problem at a lower layer; re-check the node, local endpoint, connection, and AS state as described in the preceding steps.

SCCP configuration

In The Plan we can see that the two Scenario Simulators expect to refer to each other by their global titles as follows:

  • 1234: PC=1,SSN=101

  • 4321: PC=2,SSN=102

Several inbound and outbound global title translation (GTT) rules are required to allow this to happen, which we will create now.

Also, while not technically necessary, we will configure Concerned Point Codes for each of the two nodes, so that they will inform each other about changes to the state of interesting SSNs.

All the steps below are in two parts, the part "a" commands must be run on the management console connected to node PC1-1 and the part "b" commands must be run on the management console connected to node PC2-1. If this becomes confusing please check the examples given, which will indicate the correct management console by the port number in the prompt.

1a

Outbound GTT setup on PC1-1

Run the following commands to set up outbound global title translation on PC1-1:

create-outbound-gt: oname=4321, addrinfo=4321
create-outbound-gtt: oname=4321, gt=4321, dpc=2, priority=5

Example:

127.0.0.1:10111 PC1-1> create-outbound-gt: oname=4321, addrinfo=4321
OK outbound-gt created.
127.0.0.1:10111 PC1-1> create-outbound-gtt: oname=4321, gt=4321, dpc=2, priority=5
OK outbound-gtt created.

This defines a Global Title and then creates a translation rule which will cause messages with that GT in the Called Party Address to be routed to our peer at PC=2.

1b

Outbound GTT setup on PC2-1

Run the following commands to set up outbound global title translation on PC2-1:

create-outbound-gt: oname=1234, addrinfo=1234
create-outbound-gtt: oname=1234, gt=1234, dpc=1, priority=5

Example:

127.0.0.1:10121 PC1-1> create-outbound-gt: oname=1234, addrinfo=1234
OK outbound-gt created.
127.0.0.1:10121 PC1-1> create-outbound-gtt: oname=1234, gt=1234, dpc=1, priority=5
OK outbound-gtt created.

This defines a Global Title and then creates a translation rule which will cause messages with that GT in the Called Party Address to be routed to our peer at PC=1.

2a

Inbound GTT setup on PC1-1

Run the following to set up inbound GTT on PC1-1:

create-inbound-gtt: oname=1234, addrinfo=1234, ssn=101
create-outbound-gt: oname=1234, addrinfo=1234
create-outbound-gtt: oname=1234, gt=1234, dpc=1, priority=5

Example

127.0.0.1:10111 PC1-1> create-inbound-gtt: oname=1234, addrinfo=1234, ssn=101
OK inbound-gtt created.
127.0.0.1:10111 PC1-1> create-outbound-gt: oname=1234, addrinfo=1234
OK outbound-gt created.
127.0.0.1:10111 PC1-1> create-outbound-gtt: oname=1234, gt=1234, dpc=1, priority=5
OK outbound-gtt created.

The first command creates an inbound GTT rule for the Global Title on which we expect to accept traffic. The second and third commands may look somewhat surprising, as they create an outbound GTT rule. This is the correct configuration for our network: SCCP’s service messages (UDTS and XUDTS) may be generated locally in response to traffic we are attempting to send, and these service messages are routed as outbound messages.

2b

Inbound GTT setup on PC2-1

Run the following to set up inbound GTT on PC2-1:

create-inbound-gtt: oname=4321, addrinfo=4321, ssn=102
create-outbound-gt: oname=4321, addrinfo=4321
create-outbound-gtt: oname=4321, gt=4321, dpc=2, priority=5

Example:

127.0.0.1:10121 PC1-1> create-inbound-gtt: oname=4321, addrinfo=4321, ssn=102
OK inbound-gtt created.
127.0.0.1:10121 PC1-1> create-outbound-gt: oname=4321, addrinfo=4321
OK outbound-gt created.
127.0.0.1:10121 PC1-1> create-outbound-gtt: oname=4321, gt=4321, dpc=2, priority=5
OK outbound-gtt created.

3a

Create the Concerned Point Code on PC1-1

Run the following to configure PC1-1 to announce SSN changes for SSN=101 to the PC2 cluster:

create-cpc: oname=PC2-101, dpc=2, ssn=101

Example:

127.0.0.1:10111 PC1-1> create-cpc: oname=PC2-101, dpc=2, ssn=101
OK cpc created.

3b

Create the Concerned Point Code on PC2-1

Run the following to configure PC2-1 to announce SSN changes for SSN=102 to the PC1 cluster:

create-cpc: oname=PC1-102, dpc=1, ssn=102

Example:

127.0.0.1:10121 PC1-1> create-cpc: oname=PC1-102, dpc=1, ssn=102
OK cpc created.

This completes our SCCP configuration, which we will check in the next section.

SCCP state inspection

We now have two fully configured SCCP layers. Next, we will check their state to make sure they will work as expected.

1

Check the outbound GTT state on PC1-1

The following command will show the current state of configured outbound GTT rules:

display-info-ogtinfo: column=addrInfo, column=connId, column=rc, column=dpc

Example on PC1-1

127.0.0.1:10111 PC1-1> display-info-ogtinfo: column=addrInfo, column=connId, column=rc, column=dpc
Found 2 object(s):
+---------------+---------------+----------+----------+
|addrInfo       |connId         |rc        |dpc       |
+---------------+---------------+----------+----------+
|1234           |               |-1        |1         |
+---------------+---------------+----------+----------+
|4321           |PC1-1-PC2-1    |2         |2         |
+---------------+---------------+----------+----------+

For GT 1234 we can see that:

  • it has no associated connection, and no valid routing context (-1), and

  • the DPC to be used is 1, which is our local PC.

This GT will be routed to the local SGC.

For GT 4321 we can see that:

  • it has an associated connection and Routing Context, and

  • the DPC to be used is 2, which is the remote PC.

This GT will be routed to PC2-1 using the specified connection and Routing Context.

Example on PC2-1

127.0.0.1:10121 PC1-1> display-info-ogtinfo: column=addrInfo, column=connId, column=rc, column=dpc
Found 2 object(s):
+---------------+---------------+----------+----------+
|addrInfo       |connId         |rc        |dpc       |
+---------------+---------------+----------+----------+
|1234           |PC1-1-PC2-1    |2         |1         |
+---------------+---------------+----------+----------+
|4321           |               |-1        |2         |
+---------------+---------------+----------+----------+

Tip All display- commands can be given a list of column= arguments to restrict the columns listed. This feature has been used in the examples above to reduce clutter and make the output more readable.

2

Check the local SSN state

The command:

display-info-localssninfo

will list the state of all SSNs which are either:

  • declared in Concerned Point Code configuration, or

  • known from current or previous SSN connections.

Example on PC1-1

127.0.0.1:10111 PC1-1> display-info-localssninfo: column=ssn, column=status
Found 2 object(s):
+----------+---------------+
|ssn       |status         |
+----------+---------------+
|1         |ALLOWED        |
+----------+---------------+
|101       |PROHIBITED     |
+----------+---------------+

Example on PC2-1

127.0.0.1:10121 PC1-1> display-info-localssninfo: column=ssn, column=status
Found 2 object(s):
+----------+---------------+
|ssn       |status         |
+----------+---------------+
|1         |ALLOWED        |
+----------+---------------+
|102       |PROHIBITED     |
+----------+---------------+

The local SSNs (101 and 102) are shown as PROHIBITED because nothing has yet connected to the SGCs to provide those subsystems. This is expected at this stage; they will become ALLOWED once the Scenario Simulators connect.

Scenario Simulator installation

This quick start walk-through will use the OC Scenario Simulator to test the network, rather than Rhino with CGIN, for simplicity.

For this quick start we assume that your Scenario Simulator package shipped with an IN Scenario Pack which does not support OCSS7 (true for Scenario Simulator 2.2.0.x), or with an obsolete version of the IN Scenario Pack. If you know that your Scenario Simulator already contains a suitable IN Scenario Pack, complete steps 1 and 2 and then skip the remainder of this section.

1

Unpack the Scenario Simulator archive file
unzip scenario-simulator-package-VERSION.zip

(replacing scenario-simulator-package-VERSION.zip with the correct file name).

This creates the distribution directory, scenario-simulator-VERSION, in the current working directory.

Example:

$ unzip scenario-simulator-package-2.3.0.6.zip
Archive:  scenario-simulator-package-2.3.0.6.zip
   creating: scenario-simulator-2.3.0.6/
   creating: scenario-simulator-2.3.0.6/licenses/
  inflating: scenario-simulator-2.3.0.6/licenses/LICENSE-XPathOverSchema.txt
  inflating: scenario-simulator-2.3.0.6/licenses/LICENSE-antlr.txt
[...]

2

Change directory into the Scenario Simulator directory
cd scenario-simulator-VERSION

(replacing scenario-simulator-VERSION with the correct directory name).

Example:

$ cd scenario-simulator-2.3.0.6/

3

Install the new IN Scenario Pack

We want to replace the old IN Scenario Pack with the new one, which can be done with the following commands. Please ensure that you are in the Scenario Simulator’s installation directory before running these commands.

rm -r in-examples/ protocols/in-scenario-pack-*
unzip -o ../in-scenario-pack-VERSION.zip

(replacing ../in-scenario-pack-VERSION.zip with the correct file name). The -o option is being used here to automatically overwrite existing files without prompting, which is desirable in this case since we expect to replace certain files which were not explicitly removed with the rm command above.

Example:

$ rm -r in-examples/ protocols/in-scenario-pack-*
$ unzip -o ../in-scenario-pack-2.0.0.0.zip
Archive:  ../in-scenario-pack-2.0.0.0.zip
  inflating: protocols/in-scenario-pack-1.5.3.jar
   creating: in-examples/
   creating: in-examples/2sims/
   creating: in-examples/2sims/config/
   creating: in-examples/2sims/config/loopback/
   creating: in-examples/2sims/config/mach7/
   creating: in-examples/2sims/config/ocss7/
   creating: in-examples/2sims/config/signalware/
   creating: in-examples/2sims/scenarios/
   creating: in-examples/3sims/
   creating: in-examples/3sims/config/
   creating: in-examples/3sims/config/loopback/
   creating: in-examples/3sims/config/mach7/
   creating: in-examples/3sims/config/ocss7/
   creating: in-examples/3sims/config/signalware/
   creating: in-examples/3sims/scenarios/
  inflating: CHANGELOGS/CHANGELOG-in.txt
  inflating: README/README-in.txt
  inflating: in-examples/2sims/config/loopback/cgin-tcapsim-endpoint1.properties
  inflating: in-examples/2sims/config/loopback/cgin-tcapsim-endpoint2.properties
  inflating: in-examples/2sims/config/loopback/setup-sim1.commands
  inflating: in-examples/2sims/config/loopback/setup-sim2.commands
  inflating: in-examples/2sims/config/loopback/tcapsim-gt-table.txt
  inflating: in-examples/2sims/config/mach7/mach7-endpoint1.properties
  inflating: in-examples/2sims/config/mach7/mach7-endpoint2.properties
  inflating: in-examples/2sims/config/mach7/setup-mach7-endpoint1.commands
  inflating: in-examples/2sims/config/mach7/setup-mach7-endpoint2.commands
  inflating: in-examples/2sims/config/ocss7/ocss7-endpoint1.properties
  inflating: in-examples/2sims/config/ocss7/ocss7-endpoint2.properties
  inflating: in-examples/2sims/config/ocss7/setup-sim-endpoint1.commands
  inflating: in-examples/2sims/config/ocss7/setup-sim-endpoint2.commands
  inflating: in-examples/2sims/config/setup-examples-sim1.commands
  inflating: in-examples/2sims/config/setup-examples-sim2.commands
  inflating: in-examples/2sims/config/signalware/setup-signalware-endpoint1.commands
  inflating: in-examples/2sims/config/signalware/setup-signalware-endpoint2.commands
  inflating: in-examples/2sims/config/signalware/signalware-endpoint1.properties
  inflating: in-examples/2sims/config/signalware/signalware-endpoint2.properties
  inflating: in-examples/2sims/scenarios/CAPv3-Demo-ContinueRequest.scen
  inflating: in-examples/2sims/scenarios/CAPv3-Demo-ReleaseCallRequest.scen
  inflating: in-examples/2sims/scenarios/INAP-SSP-SCP.scen
  inflating: in-examples/3sims/config/loopback/cgin-tcapsim-endpoint1.properties
  inflating: in-examples/3sims/config/loopback/cgin-tcapsim-endpoint2.properties
  inflating: in-examples/3sims/config/loopback/cgin-tcapsim-endpoint3.properties
  inflating: in-examples/3sims/config/loopback/setup-sim1.commands
  inflating: in-examples/3sims/config/loopback/setup-sim2.commands
  inflating: in-examples/3sims/config/loopback/setup-sim3.commands
  inflating: in-examples/3sims/config/loopback/tcapsim-gt-table.txt
  inflating: in-examples/3sims/config/mach7/mach7-endpoint1.properties
  inflating: in-examples/3sims/config/mach7/mach7-endpoint2.properties
  inflating: in-examples/3sims/config/mach7/mach7-endpoint3.properties
  inflating: in-examples/3sims/config/mach7/setup-mach7-endpoint1.commands
  inflating: in-examples/3sims/config/mach7/setup-mach7-endpoint2.commands
  inflating: in-examples/3sims/config/mach7/setup-mach7-endpoint3.commands
  inflating: in-examples/3sims/config/ocss7/ocss7-endpoint1.properties
  inflating: in-examples/3sims/config/ocss7/ocss7-endpoint2.properties
  inflating: in-examples/3sims/config/ocss7/ocss7-endpoint3.properties
  inflating: in-examples/3sims/config/ocss7/setup-sim-endpoint1.commands
  inflating: in-examples/3sims/config/ocss7/setup-sim-endpoint2.commands
  inflating: in-examples/3sims/config/ocss7/setup-sim-endpoint3.commands
  inflating: in-examples/3sims/config/setup-examples-sim1.commands
  inflating: in-examples/3sims/config/setup-examples-sim2.commands
  inflating: in-examples/3sims/config/setup-examples-sim3.commands
  inflating: in-examples/3sims/config/signalware/setup-signalware-endpoint1.commands
  inflating: in-examples/3sims/config/signalware/setup-signalware-endpoint2.commands
  inflating: in-examples/3sims/config/signalware/setup-signalware-endpoint3.commands
  inflating: in-examples/3sims/config/signalware/signalware-endpoint1.properties
  inflating: in-examples/3sims/config/signalware/signalware-endpoint2.properties
  inflating: in-examples/3sims/config/signalware/signalware-endpoint3.properties
  inflating: in-examples/3sims/scenarios/CAPv2-Relay.scen
  inflating: in-examples/3sims/scenarios/INAP-SSP-SCP-HLR.scen
  inflating: in-examples/3sims/scenarios/MAP-MT-SMS-DeliveryAbsentSubscriber.scen
  inflating: in-examples/3sims/scenarios/MAP-MT-SMS-DeliveryPresentSubscriber.scen
  inflating: in-examples/README-in-examples.txt
  inflating: licenses/LICENSE-netty.txt
  inflating: licenses/LICENSE-slf4j.txt
  inflating: licenses/README-LICENSES-in-scenario-pack.txt

Scenario Simulator configuration

We will now configure two Scenario Simulator instances and connect them to the cluster. This work should be done in the Scenario Simulator installation directory, which is where the steps from the previous section left us.

Tip The Scenario Simulator and CGIN use identical configuration properties and values when using OCSS7; the only difference between the two is the procedure used for setup and configuration.

1

Set the OCSS7 connection properties for Simulator 1

Edit the file in-examples/2sims/config/ocss7/ocss7-endpoint1.properties and make the following changes:

local-sccp-address = type=C7,ri=gt,ssn=101,digits=1234,national=true

and

ocss7.sgcs = 127.0.0.1:12011

The port in the ocss7.sgcs property is the stack-data-port we configured earlier for node PC1-1.

2

Set the OCSS7 connection properties for Simulator 2

Edit the file in-examples/2sims/config/ocss7/ocss7-endpoint2.properties and make the following changes:

local-sccp-address = type=C7,ri=gt,ssn=102,digits=4321,national=true

and

ocss7.sgcs = 127.0.0.1:12021

The port in the ocss7.sgcs property is the stack-data-port we configured earlier for node PC2-1.

3

Set the Scenario Simulator endpoint addresses

Edit the following files:

  • in-examples/2sims/config/ocss7/setup-sim-endpoint1.commands

  • in-examples/2sims/config/ocss7/setup-sim-endpoint2.commands

and replace the two lines beginning:

set-endpoint-address endpoint1
set-endpoint-address endpoint2

with

set-endpoint-address endpoint1 type=c7,ri=gt,pc=1,ssn=101,digits=1234,national=true
set-endpoint-address endpoint2 type=c7,ri=gt,pc=2,ssn=102,digits=4321,national=true
Tip

We’ve given the PC and SSN information in the above endpoint addresses, and you may be wondering why, and whether global title translation is actually happening. The reason is that the Scenario Simulator is just that, a simulator: it needs the PC and SSN information to work out roles and endpoints correctly for scenario validation. If you wish, you can change the SSN given to the simulator in set-endpoint-address to some other value without changing the value used by the TCAP stack in local-sccp-address, and our test scenario will still work correctly, because global title translation is actually happening and the inbound SSN is ignored by the receiving SGC.

The Scenario Simulators are now fully configured and ready to test our network.

Test the network

We will now test the network using the Metaswitch Scenario Simulator and one of the example IN scenarios included with it.

1

Start the scenario simulators

We need two Scenario Simulator instances for this test, one to initiate our test traffic, and one to respond. Start them with these two commands:

./scenario-simulator.sh -f in-examples/2sims/config/ocss7/setup-sim-endpoint1.commands -f in-examples/2sims/config/setup-examples-sim1.commands

and

./scenario-simulator.sh -f in-examples/2sims/config/ocss7/setup-sim-endpoint2.commands -f in-examples/2sims/config/setup-examples-sim2.commands

Example for Simulator 1:

$ ./scenario-simulator.sh -f in-examples/2sims/config/ocss7/setup-sim-endpoint1.commands -f in-examples/2sims/config/setup-examples-sim1.commands
Starting JVM...
Processing commands from file at in-examples/2sims/config/ocss7/setup-sim-endpoint1.commands
Processing command: set-endpoint-address endpoint1 type=C7,ri=gt,digits=1234
Processing command: set-endpoint-address endpoint2 type=C7,ri=gt,digits=4321
Processing command: create-local-endpoint endpoint1 cgin -propsfile in-examples/2sims/config/ocss7/ocss7-endpoint1.properties
Initializing local endpoint "endpoint1" ...
Local endpoint initialized.
Finished reading commands from file
Processing commands from file at in-examples/2sims/config/setup-examples-sim1.commands
Processing command: bind-role SSP-Loadgen endpoint1
Processing command: bind-role SCP-Rhino endpoint2
Processing command: wait-until-operational 60000
Simulator is operational
Processing command: load-scenario in-examples/2sims/scenarios/INAP-SSP-SCP.scen
Playing role "SSP-Loadgen" in initiating scenario "INAP-SSP-SCP" with dialogs [SSP-SCP]
Processing command: load-scenario in-examples/2sims/scenarios/CAPv3-Demo-ContinueRequest.scen
Playing role "SSP-Loadgen" in initiating scenario "CAPv3-Demo-ContinueRequest" with dialogs [SSP-SCP]
Processing command: load-scenario in-examples/2sims/scenarios/CAPv3-Demo-ReleaseCallRequest.scen
Playing role "SSP-Loadgen" in initiating scenario "CAPv3-Demo-ReleaseCallRequest" with dialogs [SSP-SCP]
Finished reading commands from file
Ready to start

Please type commands... (type "help" <ENTER> for command help)
>

Example for Simulator 2:

$ ./scenario-simulator.sh -f in-examples/2sims/config/ocss7/setup-sim-endpoint2.commands -f in-examples/2sims/config/setup-examples-sim2.commands
Starting JVM...
Processing commands from file at in-examples/2sims/config/ocss7/setup-sim-endpoint2.commands
Processing command: set-endpoint-address endpoint1 type=C7,ri=gt,digits=1234
Processing command: set-endpoint-address endpoint2 type=C7,ri=gt,digits=4321
Processing command: create-local-endpoint endpoint2 cgin -propsfile in-examples/2sims/config/ocss7/ocss7-endpoint2.properties
Initializing local endpoint "endpoint2" ...
Local endpoint initialized.
Finished reading commands from file
Processing commands from file at in-examples/2sims/config/setup-examples-sim2.commands
Processing command: bind-role SSP-Loadgen endpoint1
Processing command: bind-role SCP-Rhino endpoint2
Processing command: wait-until-operational 60000
Simulator is operational
Processing command: load-scenario in-examples/2sims/scenarios/INAP-SSP-SCP.scen
Playing role "SCP-Rhino" in receiving scenario "INAP-SSP-SCP" with dialogs [SSP-SCP]
Processing command: load-scenario in-examples/2sims/scenarios/CAPv3-Demo-ContinueRequest.scen
Playing role "SCP-Rhino" in receiving scenario "CAPv3-Demo-ContinueRequest" with dialogs [SSP-SCP]
Processing command: load-scenario in-examples/2sims/scenarios/CAPv3-Demo-ReleaseCallRequest.scen
Playing role "SCP-Rhino" in receiving scenario "CAPv3-Demo-ReleaseCallRequest" with dialogs [SSP-SCP]
Finished reading commands from file
Ready to start

Please type commands... (type "help" <ENTER> for command help)
>

2

Check the remote SSN information

Before running a test session let’s pause to check the

display-info-remotessninfo

command, which should now show the following on both nodes:

127.0.0.1:10111 PC1-1> display-info-remotessninfo
Found 4 object(s):
+----------+----------+---------------+
|dpc       |ssn       |status         |
+----------+----------+---------------+
|1         |1         |ALLOWED        |
+----------+----------+---------------+
|1         |101       |ALLOWED        |
+----------+----------+---------------+
|2         |1         |ALLOWED        |
+----------+----------+---------------+
|2         |102       |ALLOWED        |
+----------+----------+---------------+

From this we can see that both SGCs have registered the connected simulators and informed the Concerned Point Codes about the state change for the SSN used by the simulator.

3

Run a test session

On Simulator 1 run:

run-session CAPv3-Demo-ContinueRequest

This will run the test scenario, which is a basic CAPv3 IDP / CON scenario.

Example:

> run-session CAPv3-Demo-ContinueRequest
Send -->  OpenRequest to endpoint2
Send -->  InitialDP (Request) to endpoint2
Send -->  Delimiter to endpoint2
Recv <--  OpenAccept from endpoint2
Recv <--  Continue (Request) from endpoint2
Recv <--  Close from endpoint2
Outcome of "CAPv3-Demo-ContinueRequest" session: Matched scenario definition "CAPv3-Demo-ContinueRequest"

Tip This scenario has a 10-second delay between the OpenRequest and the OpenAccept, so do not be concerned if it seems to take a little while to get a reply.

Installing the SGC

This section of the manual covers the installation, basic configuration, and control of the SGC component of OCSS7.

Tip The Rhino-side component of OCSS7, the TCAP stack, is automatically installed with CGIN and has no special installation procedure or requirements beyond those of Rhino and CGIN. Information on configuring the TCAP stack component can be found in TCAP Stack configuration.

Checking prerequisites

Before installing OCSS7, make sure you have the required prerequisites in place.

Before attempting a production installation please also see Network Architecture Planning.

Configuring network features

Before installing OCSS7, please configure the following network features:

Feature What to configure

IP address

Make sure the system has an IPv4 or IPv6 address and is visible on the network.

Host names

Make sure that the system can resolve localhost to the loopback interface.

Firewall

Ensure that any installed firewall is configured to permit OCSS7-related traffic. The rules required will depend on the Hazelcast network configuration chosen. See the OCSS7 manual section Hazelcast cluster configuration and the Hazelcast 3.7 Reference Manual section Setting Up Clusters for more details on Hazelcast network configurations.

In its default configuration the SGC uses multicast UDP to discover other cluster members:

  • The default UDP port number is 14327

  • Multicast addresses are, by definition, in the range 224.0.0.0/4 (224.0.0.0 - 239.255.255.255) or ff00::/8 for IPv6. OCSS7 by default uses address 224.2.2.3.

Additionally, Hazelcast uses TCP to form connections between cluster members. By default it listens on all available network interfaces for incoming TCP connections. This uses ports in the range 5701 through 5799.

TCP is also used for intra-node communication (comm switch) and SGC to TCAP stack communication. The ports and addresses for these are user configurable and described in General Configuration.

SCTP is used by M3UA associations. See M3UA Configuration.
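As a sketch only, assuming a Linux host running firewalld and the default Hazelcast settings described above, the corresponding openings might look like the following. The commands are hypothetical and must be adapted to your actual Hazelcast network configuration and firewall zones:

```shell
# Hypothetical firewalld rules for the default Hazelcast configuration.
firewall-cmd --permanent --add-port=14327/udp      # multicast discovery
firewall-cmd --permanent --add-port=5701-5799/tcp  # inter-member connections
firewall-cmd --reload                              # apply the permanent rules
```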

Tip
IPv6 considerations
When using IPv6 addressing, remember to configure the PREFER_IPV4 property in the SGC_HOME/config/sgcenv file. For details, please see Configuring SGC_HOME/config/sgcenv.

User process tuning

Ensure that the user account OCSS7 will run under has a soft user-process limit of no less than 4096.

Tip
The number of permitted user processes may be determined at runtime using the ulimit command; for example ulimit -Su

This value may be changed by editing /etc/security/limits.conf as root, and adding (or altering) the line:

sgc_user soft nproc 4096

It may also be necessary to increase the hard limit:

sgc_user hard nproc 4096
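The check can be scripted; the following sketch uses the same ulimit -Su query described above to warn when the soft limit is below the recommended 4096:

```shell
# Warn if the current user's soft nproc limit is below the SGC minimum.
required=4096
current=$(ulimit -Su)
if [ "$current" = "unlimited" ]; then
    echo "OK: soft nproc limit is unlimited"
elif [ "$current" -ge "$required" ] 2>/dev/null; then
    echo "OK: soft nproc limit is $current"
else
    echo "WARNING: soft nproc limit is $current; raise it to at least $required"
fi
```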

SCTP tuning

For optimal performance, tune these kernel parameters:

Parameter Recommended value What it specifies

net.core.rmem_default

512000

Default receive buffer size (in bytes)

net.core.wmem_default

512000

Default send buffer size (in bytes)

net.core.rmem_max

2048000

Maximum receive buffer size (in bytes)

This value limits the so-rcvbuf parameter. For details, please see Listening for and establishing SCTP associations.

net.core.wmem_max

2048000

Maximum send buffer size (in bytes)

This value limits the so-sndbuf parameter. For details, please see Listening for and establishing SCTP associations.

net.sctp.rto_min

40 < rto_min < 100

Minimum retransmission timeout (in ms)

This should be greater than the sack_timeout of the remote SCTP endpoints.

net.sctp.sack_timeout

40 < sack_timeout < 100

Delayed acknowledgement (SACK) timeout (in ms)

Should be lower than the retransmission timeout of the remote SCTP endpoints.

net.sctp.hb_interval

1000

SCTP heartbeat interval (in ms)

Tip
Kernel parameters can be changed
  • at runtime using the sysctl command; for example: sysctl -w net.core.rmem_max=2000000

  • or set permanently in the /etc/sysctl.conf file.
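Collected into /etc/sysctl.conf form, the recommendations above might look like the following sketch. The rto_min and sack_timeout values shown are merely examples from within the recommended ranges; as noted in the table, the correct values depend on the sack_timeout and retransmission timeout of the remote SCTP endpoints.

```
# /etc/sysctl.conf — example values within the recommended ranges
net.core.rmem_default = 512000
net.core.wmem_default = 512000
net.core.rmem_max = 2048000
net.core.wmem_max = 2048000
net.sctp.rto_min = 80
net.sctp.sack_timeout = 60
net.sctp.hb_interval = 1000
```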

SGC Installation

Recommended Installation Structure

To allow the SGC cluster to be upgraded in the future using the automated upgrade tool, the SGC should be installed following a specific file structure.

The SGC package should be unpacked in the location corresponding to:

$BASE_DIR/cluster_name/node_name/

This results in the following structure:

$BASE_DIR/cluster_name/node_name/ocss7-3.0.0.0/...

A symbolic link current should be created such that:

$BASE_DIR/cluster_name/node_name/current -> ocss7-3.0.0.0

$BASE_DIR may be any location. For example, /home/sentinel/ocss7/
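The structure above can be created with ordinary shell commands. The following sketch uses /tmp/ocss7-example as BASE_DIR and cluster1/node1 as illustrative cluster and node names; in a real installation the release directory comes from unpacking the SGC package rather than mkdir:

```shell
# Illustrative layout for one SGC node (names and BASE_DIR are examples).
BASE_DIR=/tmp/ocss7-example
NODE_DIR="$BASE_DIR/cluster1/node1"
mkdir -p "$NODE_DIR"
cd "$NODE_DIR"
# In a real installation this directory is produced by unpacking the package:
#   unzip ocss7-package-3.0.0.0.zip
mkdir -p ocss7-3.0.0.0
# Point the 'current' symbolic link at the unpacked release.
ln -sfn ocss7-3.0.0.0 current
ls -ld current
```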

All customizable configuration file locations (sgcenv, SGC.properties, hazelcast.xml, sgc.dat) must meet the following requirements:

  • Must be specified using a relative path, not an absolute path.

  • The path provided must be wholly located within the SGC installation directory or a sub-directory thereof.

  • Symbolic links are not permitted.

  • Paths that step outside of the SGC installation directory are not permitted.

Examples:

  • config/SGC.properties — OK, a relative path

  • ../ocss7-3.0.0.0/config/SGC.properties — not permitted, leaves the installation directory

  • obfuscated/path/config/SGC.properties where path is a symbolic link — not permitted

  • /home/ocss7/cluster/node/ocss7-3.0.0.0/config/SGC.properties — absolute paths are not permitted

The default location for the named configuration files meets these requirements.

Orca does not support customization of the location of any other configuration files, including config/log4j.xml. Log files themselves may be located anywhere on the filesystem.

Warning If the SGC installation structure does not meet these requirements it may not be possible to perform an automated online upgrade. In this case a manual online upgrade may be performed instead.

Unpack and configure

Note
SGC_HOME

The following instructions use SGC_HOME to represent the path to the SGC installation directory.

To begin the SGC installation and create the first node:

1

Unpack the SGC archive file

Run:

unzip ocss7-package-VERSION.zip

(replacing ocss7-package-VERSION.zip with the correct file name)

This creates the distribution directory, ocss7-X.X.X.X, in the current working directory.

2

Make sure that the JAVA_HOME environment variable is set to the location of the Oracle JDK installation (for JDK installation, see the operating system documentation and/or JDK vendor documentation).

3

Configure basic cluster / node information

If your installation will use more than a single node in an SGC cluster, then:

  • Customize the ss7.instance property in SGC_HOME/config/SGC.properties to a value unique among all other nodes in the SGC cluster.

  • Define a cluster name through the hazelcast.group property in SGC_HOME/config/SGC.properties. The value of this property must be the same for all nodes that form a particular cluster.

If you are planning to use more than one SGC cluster in the same local network then:

  • Set the hazelcast.group property in SGC_HOME/config/SGC.properties to a value unique among all other clusters in the same local network.
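For example, the relevant SGC_HOME/config/SGC.properties entries for the first node of a two-node cluster might look like the following (the property names are as described above; the values node1 and cluster1 are purely illustrative):

```
# SGC_HOME/config/SGC.properties — illustrative values.
# ss7.instance must be unique for each node in the cluster:
ss7.instance=node1
# hazelcast.group must be identical on every node in this cluster:
hazelcast.group=cluster1
```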

Creating additional nodes

After installing the first SGC node in a cluster, you can add more nodes by either:

  • copying the installation directory of an existing node, and changing the ss7.instance property in SGC_HOME/config/SGC.properties to a value unique among all the other nodes in the cluster.

or

  • repeating the installation steps for another node,

  • setting the ss7.instance property in SGC_HOME/config/SGC.properties to a value unique among all other nodes in the cluster,

  • setting the hazelcast.group in SGC_HOME/config/SGC.properties to the value chosen as cluster name, and

  • repeating any other installation customization steps.

Layout of the SGC installation

Example 1. Installation directory contents

A typical SGC installation contains these subdirectories:

Directory Contents

.

(SGC installation directory)

  • main SGC Java Archive

bin

SGC management scripts

cli

command line interface installation, including start scripts, configuration, and logs

config

configuration files which may be edited by the user as required

doc

supplementary documentation included as a convenience, such as SNMP MIB files

lib

Java libraries used by the SGC

license

third-party software licenses

logs

log output from the SGC

var

persisted cluster configuration (sgc.dat) and temporary files used by the SGC management scripts

Running the SGC

SGC operations

Note
JAVA_HOME

The SGC script expects the JAVA_HOME environment variable to be set up and point to a valid JVM version 7 or greater (expects executable file JAVA_HOME/bin/java).

The SGC is started and stopped using the SGC_HOME/bin/sgc script.

The sgc script runs the SGC under a watchdog: if the SGC process exits for an unrecognized reason it is automatically restarted. Output from the SGC and from the watchdog script is redirected into a startup log file. The startup log files are in the SGC_HOME/logs directory and are named startup.<startup-time>. If startup fails for any reason, details about the failure should be available in the startup log file.

The sgc script is configured in SGC_HOME/config/sgcenv. The sgcenv file contains JVM parameters which cannot be provided in the SGC.properties file.

The sgc script can be run with the following arguments:

Command argument Optional arguments Description

start

--nowait
--jmxhost <host>
--jmxport <port>

Starts the SGC using the configuration from SGC_HOME/config/sgcenv.

  • The --nowait argument specifies that the startup script does not verify that the SGC has started successfully; it simply initiates startup and exits.

  • The --jmxhost and --jmxport arguments allow specifying a different JMX listening socket than the one defined in SGC_HOME/config/sgcenv.

stop

--immediate

Stops the SGC. Without the --immediate argument, a graceful shutdown is attempted. With --immediate, processes are killed.

restart

--nowait
--jmxhost <host>
--jmxport <port>
--immediate

Equivalent of stop, then start.

test

--jmxhost <host>
--jmxport <port>

Runs the SGC in test mode. In test mode, the SGC runs in the foreground, and logging is configured by log4j.test.xml, which prints more information to the console.

foreground

--jmxhost <host>
--jmxport <port>

Runs the SGC in foreground mode. The SGC is not daemonized.

status

Prints the status of SGC and returns one of these LSB-compatible exit codes:

  • 0 = the SGC is running

  • 1 = the SGC is dead and the SGC_HOME/var/sgc.pid file exists

  • 3 = the SGC is not running

For example:

Start SGC

$SGC_HOME/bin/sgc start
$SGC_HOME/bin/sgc start --nowait --jmxport 10111 --jmxhost 127.0.0.1

Stop SGC

$SGC_HOME/bin/sgc stop
$SGC_HOME/bin/sgc stop --immediate

Check SGC status

$SGC_HOME/bin/sgc status

Configuring SGC_HOME/config/sgcenv

The SGC_HOME/config/sgcenv file contains configuration parameters for the sgc script. The following settings are supported:

Variable name Description Valid Values Default

JAVA_HOME

Location of the JVM home directory.

JMX_HOST

Host that SGC should bind to in order to listen for incoming JMX connections.

IPv4 or IPv6 address

127.0.0.1

JMX_PORT

Port where SGC binds for incoming JMX connections.

It is not recommended to use a port in the ephemeral range, as these ports are used for short-lived TCP connections and the SGC may fail to start if the port is in use by another application. The ephemeral port range may be queried with cat /proc/sys/net/ipv4/ip_local_port_range.

1-65535

10111

JMX_SECURE

Whether or not the JMX connection should be secured with SSL/TLS.

true, false

false

JMX_NEED_CLIENT_AUTH

Whether or not the SGC should require a trusted client certificate for an SSL/TLS-secured JMX connection.

JMX_SECURE_CFG_FILE

Path to the configuration file with properties used to secure the JMX management connection.

DEFAULT_STORE_PASSWORD

Password used during generation of the key store and trust store used to secure the JMX management connection.

changeit

MAX_HEAP_SIZE

Maximum size of the JVM heap space.

For details, please see Configuring the Java Heap.

MIN_HEAP_SIZE

Initial size of the JVM heap space.

MAX_PERM_SIZE

Maximum size of the JVM permgen space.

GCOPTIONS

Full override of the default garbage collection settings.

JVM_PROPS

Additional JVM parameters.

Modifications should add to the existing JVM_PROPS rather than overriding it. For example: JVM_PROPS="$JVM_PROPS -Dsome_option".

LOGGING_CONFIG

The log4j configuration file to be used in normal mode (start/restart/foreground).

LOGGING_TEST_CONFIG

The log4j configuration file to be used in test mode.

SGC_CFG_FILE

Location of the SGC properties file.

RUNONCE

Whether or not the watchdog is enabled. Disabling the watchdog may be required if the SGC is run under the control of some other HA system.

0 = watchdog is enabled
1 = start SGC in background without watchdog script

DEBUG

Enables additional script information.

0 = no additional debug information
1 = additional information enabled

PREFER_IPV4

Prefers using IPv4 protocol. Set value to false to enable IPv6 support.

true = use only IPv4
false = use both IPv4 and IPv6

true

NUMAZONES

On NUMA architecture machines, this parameter allows selecting specific CPU and memory bindings for SGC.

CPUS

On non-NUMA architecture machines, this parameter may be used to set SGC affinity to specific CPUs.

Note
JMX Connector configuration variables

SGC_HOME/config/sgcenv contains additional variables used to configure a secure JMX management connection. For details, please see Securing the SGC JMX management connection with SSL/TLS.

Configuring the Java Heap

The Java Virtual Machine’s MAX_HEAP_SIZE must be appropriately configured for the SGC. If insufficient heap is configured, then the result may be:

  • Frequent and/or prolonged garbage collections, which may have a negative impact on the SGC’s performance.

  • SGC restarts caused by the JVM throwing OutOfMemoryError.

The main factors affecting the selection of an appropriate value are:

  • The base SGC requires a certain amount of heap.

  • The size of the configuration MML.

  • The configured maximum concurrent TCAP transactions (sgc.tcap.maxTransactions)

MAX_HEAP_SIZE is no longer dependent on the number of connected TCAP peers or migrated prefixes.

Recommendations
Factor Recommendation

Base SGC

1024MB

This value allows for an SGC configured for 1 million sgc.tcap.maxTransactions with a bare minimum MML configuration, with only a node configured and no signalling links or global title configuration.

The amount required is platform and JVM dependent. An estimation of the base SGC requirements may be obtained by loading a minimal MML configuration into a test SGC, and then using the jmap tool to print a heap summary. The resulting value should then be doubled or even tripled to allow for efficient garbage collection.

Configuration MML Size

Variable

Each create-X configuration MML instruction requires heap storage, both to store the configuration data and to enable the configuration.

As each configuration differs substantially it is not possible to provide generic guidelines. An estimation of a configuration’s minimum requirements may be obtained by loading the full MML configuration into a test SGC, and then using the jmap tool to print a heap summary.

Maximum concurrent TCAP transactions

1 million transactions requires no additional heap allowance as this is included in the base SGC figure above.

10 million transactions requires approximately 250MB additional heap.

100 million transactions requires approximately 2.5GB additional heap.

The maximum number of concurrent TCAP transactions is configured in sgc.tcap.maxTransactions. See Static SGC configuration.
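As a rough illustration of the recommendations above, an SGC configured for 10 million concurrent TCAP transactions might combine the 1024MB base figure, the approximately 250MB transaction allowance, and some headroom for the MML configuration in SGC_HOME/config/sgcenv as follows (the values are illustrative, not prescriptive):

```
# SGC_HOME/config/sgcenv — illustrative heap sizing for
# sgc.tcap.maxTransactions=10000000: 1024MB base + ~250MB + headroom
MAX_HEAP_SIZE=1536m
MIN_HEAP_SIZE=1536m
```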

Installing SGC as a service

To install SGC as a service, perform the following operations as user root:

1

Copy SGC_HOME/bin/sgcd to /etc/init.d .

# cp $SGC_HOME/bin/sgcd /etc/init.d

2

Grant execute permissions to /etc/init.d/sgcd:

# chmod a+x /etc/init.d/sgcd

3

Create the file /etc/sysconfig/sgcd. Assuming that the SGC is installed in /opt/sgc/PC1-1/PC1/current as user sgc, the file would have the following content:

SGC=/opt/sgc/PC1-1/PC1/current/bin/sgc
SGCUSER=sgc

4

Activate the service using the standard RedHat command:

# chkconfig --add sgcd

Network Architecture Planning

Warning

Ensure you are familiar with the OCSS7 architecture before going further. In particular, ensure you have read the following:

Network planning

When planning an OCSS7 deployment, Metaswitch recommends preparing IP subnets that logically separate different kinds of traffic:

Subnet Description

SS7 network

dedicated for incoming/outgoing SIGTRAN traffic; should provide access to the operator’s SS7 network

SGC interconnect network

internal SGC cluster network with failover support (provided by interface bonding mechanism); used by Hazelcast and communication switch

Rhino traffic network

used for traffic exchanged between SGC and Rhino nodes

Management network

dedicated for managing tools and interfaces (JMX, HTTP)

SGC Stack network communication overview

The SS7 SGC uses multiple logical communication channels that can be separated into two broad categories:

SGC directly managed connections

The following table describes network connections managed directly by the SGC configuration.

Protocol Subsystem Subnet Defined by Usage

TCP

TCAP

Rhino traffic network

stack-http-address and stack-http-port attributes of the node configuration object

Used in the first phase of communication establishment between the TCAP Stack (CGIN RA) and the SGC cluster.

The communication channel is established during startup of the TCAP Stack (CGIN RA activation), and closed after a single HTTP request / response.

stack-data-address and stack-data-port attributes of the node configuration object

Used in the second phase of communication establishment between the TCAP Stack (CGIN RA) and the SGC cluster.

The communication channel is established and kept open until either the SGC Node or the TCAP Stack (CGIN RA) is shut down (deactivated). This connection is used to exchange TCAP messages between the SGC Node and the TCAP Stack using a custom protocol. The level of expected traffic is directly related to the number of expected SCCP messages originated by and destined for the SSN represented by the connected TCAP Stack.

Note When multiple TCAP Stacks representing the same SSN are connected to the SGC cluster, then message traffic is load-balanced between them.

Communication Switch

SGC interconnect network

switch-local-address and switch-port attributes of the node configuration object

Used by the communication switch (inter-node message transfer module) to exchange message traffic between nodes of the SGC cluster.

The communication channel is established between nodes of the SGC cluster during startup, and kept open until the node is shut down. During startup, the node establishes connections to all other nodes that are already part of the SGC cluster. The level of expected traffic depends on the deployment model, and can vary anywhere between none and all traffic destined and originated by the SGC cluster.

Note To avoid unnecessary resource use, minimise communication switch traffic when planning the deployment model. For details please see Inter-node message transfer.

SCTP

M3UA

SS7 Network

local-endpoint and connection M3UA configuration objects

Used by SGC nodes to exchange M3UA traffic with Signalling Gateways and/or Application Servers.

The communication channel lifecycle depends directly on the SGC cluster configuration; that is, the enabled attribute of the connection configuration object and the state of the remote system with which SGC is to communicate. The level of traffic should be assessed based on business requirements.

JMX over TCP

Configuration

Management network

Used for managing the SGC cluster.

Established by the management client Command-Line Management Console, for the duration of the management session. The level of traffic is negligible.

Hazelcast managed connections

Hazelcast uses a two-phase cluster-join procedure:

  1. Discover other nodes that are part of the same cluster.

  2. Establish one-to-one communication with each node found.

Depending on the configuration, the first step of the cluster-join procedure can be based either on UDP multicast or on direct TCP connections. In the latter case, the Hazelcast configuration must contain the IP address of at least one other node in the cluster. The second phase always uses direct TCP connections established between all the nodes in the Hazelcast cluster.

Traffic exchanged over SGC interconnect network by Hazelcast connections is mainly related to:

  • SGC runtime state changes

  • SGC configuration state changes

  • Hazelcast heartbeat messages.

During normal SGC cluster operation, the amount of traffic is negligible and consists mainly of messages distributing SGC statistics updates.

Inter-node message transfer

The communication switch (inter-node message transfer module) is responsible for transferring data traffic messages between nodes of the SGC cluster. After the initial handshake message exchange, the communication switch does not originate any network communication by itself. It is driven by requests of the TCAP or M3UA layers.

Usage of the communication switch involves additional message-processing overhead, consisting of:

  • CPU processing time to encode and later decode the message — this overhead is negligible

  • network latency to transfer the message between nodes of the SGC cluster — overhead depends on the type and layout of the physical network between communicating SGC nodes.

This overhead is unnecessary in normal SGC cluster operation, and can be avoided during deployment-model planning.

Below are outlines of scenarios involving communication switch usage: Outgoing message inter-node transfer and Incoming message inter-node transfer; followed by tips for Avoiding communication switch overhead.

Outgoing message inter-node transfer

A message originated by the TCAP stack (CGIN RA) is sent over the TCP-based data-transfer connection to the SGC node (node A). It is processed within that node up to the moment when the actual bytes should be written to the SCTP connection through which the required DPC is reachable. If that SCTP connection is established on a different SGC node (node B), then the communication switch is used: the outgoing message is transferred through the communication switch to the node where the SCTP connection is established (from node A to node B). After the message is received on the destination node (node B), it is sent over the locally established SCTP connection.

Incoming message inter-node transfer

A message received by an M3UA connection, with a remote Signalling Gateway or other Application Server, is processed within the SGC node where the connection is established (node A). If the processed message is a TCAP message addressed to an SSN available within the SGC cluster, the processing node is responsible for selection of a TCAP Stack (CGIN RA) corresponding to that SSN. The TCAP Stack (CGIN RA) selection process gives preference to TCAP Stacks (CGIN RAs) that are directly connected to the SGC node which is processing the incoming message. If a suitable locally connected TCAP Stack (CGIN RA) is not available, then a TCAP stack connected to another SGC node (node B) in the SGC cluster is selected. After the selection process is finished, the incoming TCAP message is sent either directly to the TCAP Stack (locally connected TCAP Stack), or first transferred through the communication switch to the appropriate SGC node (transferred from node A to node B) and later sent by the receiving node (node B) to the TCAP Stack.

Note
TCAP Stack (CGIN RA) selection

TCAP Stack selection is invoked for messages that start a new transaction (TCAP BEGIN) or are not a part of any transaction (TCAP UNIDIRECTIONAL). Messages that are a part of an existing transaction are directed to the TCAP Stack serving that transaction; that is, the TCAP Stack that received the initial TCAP BEGIN message.

TCAP Stack selection is described by the following algorithm:

  1. Acquire a set of locally connected TCAP Stacks serving the appropriate SSN.

  2. If the set of locally connected TCAP Stacks is empty, acquire a set of TCAP Stacks serving the appropriate SSN, cluster-wide.

  3. Load balance incoming messages among the TCAP Stacks in the acquired set, in a round-robin fashion.

Avoiding communication switch overhead

A review of the preceding communication-switch usage scenarios suggests a set of rules for deployment, to help avoid communication-switch overhead during normal SGC cluster operation.

Scenario Avoidance Rule Configuration Recommendation

If an SSN is available within the SGC cluster, at least one TCAP Stack serving that particular SSN must be connected to each SGC node in the cluster.

The number of TCAP Stacks (CGIN RAs) serving a particular SSN should be at least the number of SGC nodes in the cluster.

Note Keep in mind that a single CGIN RA entity deployed within a Rhino cluster is instantiated on each Rhino node; this translates to the number of TCAP Stacks being equal to the number of Rhino nodes for each CGIN RA entity.

If the SGC Stack is to communicate with a remote PC (another node in the SS7 network), that PC must be reachable through an M3UA connection established locally on each node in the SGC cluster.

When configuring remote PC availability within the SGC Cluster, the PC must be reachable through at least one connection on each SGC node.

SGC cluster membership and split-brain scenario

The SS7 SGC Stack is a distributed system. It is designed to run across multiple computers connected across an IP network. The set of connected computers running SGC is known as a cluster. The SS7 SGC Stack cluster is managed as a single system image. SGC Stack clustering uses an n-way, active-cluster architecture, where all the nodes are fully active (as opposed to an active-standby design, which employs a live but inactive node that takes over if needed).

SGC cluster membership state is determined by Hazelcast based on network reachability of nodes in the cluster. Nodes can become isolated from each other if a networking failure causes network segmentation. This carries the risk of a "split brain" scenario, where nodes on both sides of the segment act independently, assuming nodes on the other segment have failed. Avoiding a split-brain scenario depends on the availability of a redundant network connection. For this reason, network interface bonding MUST be employed to serve connections established by Hazelcast.

Usage of a communication switch subsystem within the SGC cluster depends on the cluster membership state, which is managed by Hazelcast. Network connectivity as seen by the communication switch subsystem MUST be consistent with the cluster membership state managed by Hazelcast. To fulfil this requirement, the communication switch subsystem MUST be configured to use the same redundant network connection as Hazelcast.

Note
Network connection redundancy delivery method
Neither Hazelcast nor the communication switch currently supports network interface failover. OS-level network interface bonding must therefore be used to provide a single logical network interface delivering redundant network connectivity.
Warning
Network Path Redundancy
The entire network path between nodes in the cluster must be redundant (including routers and switches).

Recommended physical deployment model

In order to take full advantage of the fault-tolerant and high-availability modes supported by the OCSS7 stack, Metaswitch recommends using at least two dedicated machines with multicore CPUs and two or more Network Interface Cards.

recommended physical deployment model

Each SGC node should be deployed on one dedicated machine. However hardware resources can be also shared with nodes of Rhino Application Server.

shared resources
Note The OCSS7 stack also supports less complex deployment modes which can still satisfy high-availability requirements.
Tip To avoid single points of failure at the network and hardware levels, provide redundant connections for each kind of traffic. The SCTP protocol used for SS7 traffic itself provides an IP multi-homing mechanism. For other kinds of traffic, an interface-bonding mechanism should be provided. Below is an example assignment of different kinds of traffic among network interface cards on one physical machine.
Network Interface Card 1 Network Interface Card 2

port 1

SS7 IP addr 1

SS7 IP addr 2

port 2

SGC Interconnect IP addr (bonded)

SGC Interconnect IP addr (bonded)

port 3

Rhino IP addr

port 4

Management IP addr

Warning While not required, bonding Management and Rhino traffic connections can provide better reliability.

Securing the SGC JMX management connection with SSL/TLS

Default configuration of the JMX management connection

The default JMX configuration allows for unsecured JMX management connections from the local machine only. That is, the SGC SS7 stack by default listens for management connections on a local loopback interface. This allows for any JMX management client running on the same machine as the SGC stack instance to connect and manage that instance with no additional configuration.

Securing the JMX management connection with SSL/TLS

Note
SGC_HOME
SGC_HOME in the following instructions represents the path to the SGC Stack installation directory.

SGC stack secure configuration

The SGC SS7 stack can be configured to secure JMX management connections using the SSL/TLS protocol. The default installation package provides a helper shell script (SGC_HOME/bin/sgckeygen) that generates:

  • SGC_HOME/config/sgc-server.keystore — a JKS repository of security certificates containing two entries: an SGC JMX server private key and a trust entry for the SGC JMX client certificate

  • SGC_HOME/config/sgc-client.keystore — a JKS repository of security certificates containing two entries: an SGC JMX client private key and a trust entry for the SGC JMX server certificate

  • SGC_HOME/config/netssl.properties — a Java properties file containing the configuration the SGC Stack uses during start-up (properties in this file point to the generated sgc-server.keystore)

  • SGC_HOME/config/sgc-trust.cert — the SGC JMX server certificate, which can be imported to any pre-existing KeyStore to establish a trust relation.

To enable a secure JMX management connection:

  1. Generate appropriate server / client private keys and certificates: run the SGC_HOME/bin/sgckeygen script.

  2. Change the SGC stack configuration to enable the secure connection: edit the configuration file SGC_HOME/config/sgcenv, changing the JMX_SECURE variable value to true.
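After step 2, the relevant portion of SGC_HOME/config/sgcenv might read as follows (JMX_NEED_CLIENT_AUTH and JMX_SECURE_CFG_FILE are shown at their default values):

```
# SGC_HOME/config/sgcenv — enable an SSL/TLS-secured JMX connector
JMX_SECURE=true
JMX_NEED_CLIENT_AUTH=true
JMX_SECURE_CFG_FILE=config/netssl.properties
```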

Tip By default, the SGC stack is configured to require client authorization with a trusted client certificate. The straightforward approach is to use the generated SGC_HOME/config/sgc-client.keystore as part of the JMX management client configuration.
Note
  • For detailed information about creating a KeyStore, please see the Java Virtual Machine vendor documentation on the Oracle JDK Tools and Utilities page.

  • For general information about SSL/TLS support, see the JSSE Reference Guide.

  • Configuration changes take effect on SGC stack instance restart.

Example client configuration for a JMX management secure connection

You can configure the JMX management connection from the command line or using a JDK tool.

Configuring from the command line

To configure a secure JMX connection for the SGC Stack using a command-line management console, please see Command-Line Management Console.

Configuring with a generic JMX management tool

The Command-Line Management Console is a dedicated tool for operating and configuring the SGC stack; but there are many tools that support the JMX standard. Below are tips for letting them communicate with the SGC stack.

The SGC stack is equipped with scripts that enable the JMX connector and provide a simple way to prepare all the necessary keys and certificates used during the SSL/TLS authentication process.

Warning In order to connect to the SGC stack with an external tool, first complete the SGC stack secure configuration described above, then follow the tool’s own instructions for configuring an SSL/TLS connection.

For example, for Java VisualVM (part of the Sun/Oracle JDK):

  1. Generate the appropriate server / client private keys and certificates.

  2. Copy the SGC_HOME/config/sgc-client.keystore to the machine where you want to start the Java VisualVM.

  3. Start the Java VisualVM with parameters pointing to the relevant KeyStore file. For example: jvisualvm -J-Djavax.net.ssl.keyStore=sgc-client.keystore -J-Djavax.net.ssl.keyStorePassword=changeit -J-Djavax.net.ssl.trustStore=sgc-client.keystore -J-Djavax.net.ssl.trustStorePassword=changeit

Warning The connection is secured only when using a remote/local JMX connector. Java VisualVM uses the "Attach API" to connect to locally running Java Virtual Machines, in effect bypassing the secure connection. In this case, client setup of a secure JMX connection is not required.

SGC stack JMX configuration properties

During SGC Stack instance startup, Java system properties are interrogated to derive the configuration of the JMX RMI connector. Values of relevant properties can be configured using variables in the SGC_HOME/config/sgcenv configuration file.

Properties configurable using the sgcenv configuration file

The following JMX connector settings are supported in the SGC_HOME/config/sgcenv configuration file:

Variable What it specifies Values Default

JMX_SECURE

whether to secure the JMX connection with SSL/TLS

true/false

false

JMX_NEED_CLIENT_AUTH

whether the SGC Stack requires a trusted client certificate for an SSL/TLS-secured JMX connection

true/false

true

JMX_SECURE_CFG_FILE

path to the configuration file with properties used to secure the JMX management connection

SGC_HOME/config/netssl.properties

DEFAULT_STORE_PASSWORD

password used to secure the KeyStore and TrustStore when generating them using the SGC_HOME/bin/sgckeygen script

changeit
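For illustration, a sgcenv fragment that enables an SSL/TLS-secured JMX connection with client certificate authentication might look like the following. This is a sketch only: the values shown are the documented defaults, and SGC_HOME is assumed to be set by the surrounding script.

```shell
# Secure the JMX connection and require a trusted client certificate
JMX_SECURE=true
JMX_NEED_CLIENT_AUTH=true
# Properties file holding KeyStore/TrustStore locations and passwords
JMX_SECURE_CFG_FILE=$SGC_HOME/config/netssl.properties
# Password applied by sgckeygen when generating the stores
DEFAULT_STORE_PASSWORD=changeit
```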

The file specified by JMX_SECURE_CFG_FILE should be in the Java properties file format (as described in the Javadoc for the java.util.Properties class). The properties configurable using JMX_SECURE_CFG_FILE relate to the location and security of the Java KeyStores containing the SGC stack private key, certificate, and trusted client certificate. Here are the properties configurable using JMX_SECURE_CFG_FILE:

Key What it specifies

javax.net.ssl.keyStore

path to the Java KeyStore file containing the SGC Stack private key

javax.net.ssl.keyStorePassword

password protecting the KeyStore denoted by the javax.net.ssl.keyStore property

javax.net.ssl.trustStore

path to the Java KeyStore file containing the trusted client certificate

javax.net.ssl.trustStorePassword

password protecting the KeyStore denoted by the javax.net.ssl.trustStore property

Example JMX_SECURE_CFG_FILE properties file

The JMX_SECURE_CFG_FILE generated by the SGC_HOME/bin/sgckeygen script looks like this:

#This is a SSL configuration file.
#A properties file that can be used to supply the KeyStore
#and truststore location and password settings thus avoiding
#to pass them as cleartext in the command-line.

javax.net.ssl.keyStore=./config/sgc-server.keystore
javax.net.ssl.keyStorePassword=changeit

javax.net.ssl.trustStore=./config/sgc-server.keystore
javax.net.ssl.trustStorePassword=changeit

SGC stack JMX connector configuration details

The details presented above should be sufficient to secure the SGC JMX management connection. However, for a customized solution (for example, using other start-up scripts), see the following JMX connector parameters supported by the SGC stack.

Warning Usually there is no need to customize the operation of the SGC stack JMX RMI connector, as relevant configuration is exposed through SGC start-up scripts.

Here are the Java system properties used to configure the SGC stack JMX RMI connector:

Key What it specifies

Values

com.cts.ss7.management.jmxremote.host

host that SGC should bind to in order to listen for incoming JMX connections

resolvable host name or IP address
(0.0.0.0 to listen on all local interfaces)

com.cts.ss7.management.jmxremote.port

port where SGC binds for incoming JMX connections

Valid port value 0..65535 inclusive
(0 = system assigned)

com.cts.ss7.management.jmxremote.ssl

whether to enable secure monitoring using SSL (if false, then SSL is not used)

true/false
Default is false.

com.cts.ss7.management.jmxremote.ssl.enabled.cipher.suites

a comma-delimited list of SSL/TLS cipher suites to enable; used in conjunction with com.cts.ss7.management.jmxremote.ssl

default SSL/TLS cipher suites

com.cts.ss7.management.jmxremote.ssl.enabled.protocols

a comma-delimited list of SSL/TLS protocol versions to enable; used in conjunction with com.cts.ss7.management.jmxremote.ssl

default SSL/TLS protocol version

com.cts.ss7.management.jmxremote.ssl.need.client.auth

whether to perform client-based certificate authentication, if both this property and com.cts.ss7.management.jmxremote.ssl are true

true/false
Default is true.

com.cts.ss7.management.jmxremote.ssl.config.file

path to the configuration file with properties used to secure the JMX management connection (should be in Java properties file format)

no default path
(SGC_HOME/config/netssl.properties is assigned by the SGC start scripts)

javax.net.ssl.keyStore

KeyStore location *

no default path

javax.net.ssl.keyStorePassword

KeyStore password *

no default

javax.net.ssl.trustStore

truststore location *

no default path

javax.net.ssl.trustStorePassword

truststore password *

no default

* Can be defined in the com.cts.ss7.management.jmxremote.ssl.config.file configuration file
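To illustrate how these system properties fit together, a custom start-up script might pass them directly to the JVM. This is a sketch only: the port number is a placeholder, the remaining JVM arguments and main class are elided, and in a normal installation the SGC start scripts set all of this for you.

```
java \
  -Dcom.cts.ss7.management.jmxremote.host=0.0.0.0 \
  -Dcom.cts.ss7.management.jmxremote.port=10111 \
  -Dcom.cts.ss7.management.jmxremote.ssl=true \
  -Dcom.cts.ss7.management.jmxremote.ssl.need.client.auth=true \
  -Dcom.cts.ss7.management.jmxremote.ssl.config.file=./config/netssl.properties \
  ...
```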

SGC Backups

Backup Requirements

Selecting and applying an appropriate backup strategy (software, number of backups, frequency, etc) is outside of the scope of this guide.

However, the chosen strategy must be able to preserve the SGC’s critical files. Provided that a copy of these files from before any catastrophic failure is available, it should be possible to restore the failed SGC node or nodes.

Possible options include:

Warning Backing up an entire VM from the VM host is not recommended, because it is likely to cause significant whole-VM pauses. Such a pause can cause Hazelcast to detect a cluster failure, which may result in node restarts.

Backup Critical Files Only

This option involves taking a backup of the critical files and, where file locations have been customized, noting which have changed and where they should be located.

Restoration of the SGC component requires:

  • Installing the SGC from the original ocss7-${version}.zip package

  • Copying the configuration files from the backup to the new installation, honouring any original custom locations

Warning Restoration following a whole-OS failure also requires reinstatement of any SCTP tuning parameters, user process tuning and network interfaces as originally described in Installing the SGC.

Backup Whole SGC Installation

This option involves taking a backup of the entire SGC installation directory. Note that if any file locations have been customized to live outside of the SGC installation directory, these must also be included.

Restoration of the SGC component requires:

  • Extracting the entire SGC installation from the backup

Warning Restoration following a whole-OS failure also requires reinstatement of any SCTP tuning parameters, user process tuning and network interfaces as originally described in Installing the SGC.

Critical Files

The following files contain installation-specific configuration and must be included in any backup regimen. Their default paths, relative to the OCSS7 installation root, are:

  • config/sgcenv

  • config/SGC.properties

  • config/log4j.xml

  • config/hazelcast.xml — if it exists

  • var/sgc.dat

In addition, the files in the following locations must be preserved for at least 30 days, preferably longer:

  • logs/

  • cli/log/

This ensures that logging remains available should a support request be needed for an incident that occurred just before a catastrophic failure of an SGC host.
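The critical-files backup can be sketched as a small shell function. This is illustrative only: backup_sgc and its arguments are not part of the OCSS7 distribution, and the file list assumes the default locations given above.

```shell
# Sketch: archive the SGC critical files from an installation root.
# Usage: backup_sgc <sgc-install-root> <output-tarball>
backup_sgc() {
    root="$1"
    out="$2"
    files="config/sgcenv config/SGC.properties config/log4j.xml var/sgc.dat"
    # config/hazelcast.xml only exists when the Hazelcast config was customized
    [ -f "$root/config/hazelcast.xml" ] && files="$files config/hazelcast.xml"
    # -C changes into the installation root so archived paths stay relative
    tar czf "$out" -C "$root" $files
}
```

Remember that the logs/ and cli/log/ directories should additionally be preserved for at least 30 days, as noted above.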

Configuring the SS7 SGC Stack

Configuration data

Configuration data can be separated into two main groups:

  • static — configuration properties loaded from a file during SGC instance start-up, influencing the behaviour of that single instance

  • managed — dynamic runtime configuration managed by and distributed within the SGC cluster.

Note
SGC_HOME

In the following instructions, SGC_HOME represents the path to the SGC Stack installation directory.

Static SGC configuration

Static configuration is loaded during SGC instance startup; any configuration changes take effect after SGC Stack instance restart. Static configuration consists of:

Static SGC instance configuration

During SGC instance start-up, the SGC_HOME/config/SGC.properties configuration file is loaded.

The SGC.properties file may be modified using any standard text editor. This is a standard Java properties file containing one or more key-value pairs.

A full description of all of the configuration properties that may be set in SGC.properties may be found in Appendix A: SGC Properties.

An SGC restart is required if any of these properties are changed on a running SGC.

Configuration Properties of Particular Note

The majority of the configuration properties have default values that should not require changing. However, there are some whose values should be considered for each installation:

ss7.instance mandatory The name of the SGC instance. Must be unique amongst instances in the cluster.

hazelcast.group

mandatory for a cluster of 2+ nodes
highly recommended for a single-node cluster due to the small possibility of collisions

The name of the SGC cluster.

sgc.tcap.maxTransactions

optional

The default value is sufficient for most installations, but installations expecting close to or greater than one million concurrent transactions may need to increase this value.
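As a sketch, the SGC.properties entries for the first node of a two-node cluster might look like the following. The instance name, cluster name and transaction count shown are illustrative values, not defaults.

```
ss7.instance=sgc-node-1
hazelcast.group=example-ss7-cluster
# Only needed when approaching or exceeding one million concurrent transactions
# sgc.tcap.maxTransactions=1500000
```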

Hazelcast cluster configuration

Hazelcast is an open-source in-memory data grid. Hazelcast provides a set of distributed abstractions (such as data collections, locks, and task execution) that are used by subsystems of the SS7 SGC Stack to provide a single logical view of the entire cluster. SGC cluster membership state is directly based on the Hazelcast cluster and node lifecycle.

The SGC stack deployment package uses a custom Hazelcast configuration, which is available in SGC_HOME/config/hazelcast.xml.sample.

The official Hazelcast documentation covering setting up clusters can be found in the Hazelcast 3.7 Reference Manual section Setting Up Clusters.

Customizing Hazelcast Configuration

Hazelcast configuration can be customized by providing a hazelcast.xml configuration file in the config subdirectory of the SGC Stack distribution (for example, by renaming config/hazelcast.xml.sample to config/hazelcast.xml). For a description of possible configuration options, and the format of the hazelcast.xml configuration file, please see the Hazelcast 3.7 Reference Manual section Understanding Configuration.

Hazelcast Heartbeat Configuration

The default Hazelcast configuration used by the SGC customises some Hazelcast default values to support rapid detection of cluster member failure and faster cluster merges following failure recovery. This configuration can be further refined as necessary.

Property What it specifies Default

hazelcast.heartbeat.interval.seconds

How frequently the hazelcast heartbeat algorithm is run.

1s

hazelcast.max.no.heartbeat.seconds

How long to wait before considering a remote hazelcast peer unreachable.

If this value is set too small, then there is an increased risk of very short network outages or extended Java garbage collection triggering heartbeat failure detection.

If this value is set too large, then there is an increased risk of SGC features becoming temporarily unresponsive due to blocking on necessary cluster-wide operations.

It is important to balance the need to rapidly detect genuinely failed nodes with the need to protect against unnecessarily splitting and reforming the cluster as the split and merge operation is not instant and some SGC features may be temporarily unavailable during this process.

5s

hazelcast.merge.first.run.delay.seconds

How long hazelcast will wait to attempt a cluster merge immediately following a node failure.

10s

hazelcast.merge.next.run.delay.seconds

How long hazelcast will wait to attempt a cluster merge following a node failure after the first merge attempt.

10s

The hazelcast heartbeat mechanism is used to detect cluster member failures; either network failures between members or actual process failures.
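In hazelcast.xml these values are set in the top-level properties section. The fragment below shows the defaults listed above; treat it as a sketch to merge into your existing configuration rather than a replacement for the whole file.

```
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
  <properties>
    <property name="hazelcast.heartbeat.interval.seconds">1</property>
    <property name="hazelcast.max.no.heartbeat.seconds">5</property>
    <property name="hazelcast.merge.first.run.delay.seconds">10</property>
    <property name="hazelcast.merge.next.run.delay.seconds">10</property>
  </properties>
</hazelcast>
```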

Hazelcast Cluster With Three Or More Members

The default Hazelcast configuration used by the SGC is optimized for two cluster members. In the case where a larger cluster is required the backup-count parameter must be configured to the total number of cluster members, minus one. This provides maximum resiliency in the case where more than one node may fail or be split from the cluster simultaneously.

Warning If the backup-count is too low the cluster may suffer catastrophic data loss. This can lead to undefined behaviours up to and including total loss of service.

There are multiple locations within hazelcast.xml where this parameter must be configured: under <queue name="default">, <map name="default">, <multimap name="default">, <list name="default">, <set name="default">, <semaphore name="default"> and <ring-buffer name="default">. Each of these must be configured for correct behaviour.

For example, in a three node cluster:

<queue name="default">
    <max-size>10000</max-size>
    <backup-count>2</backup-count>
...

<map name="default">
    <in-memory-format>BINARY</in-memory-format>
    <backup-count>2</backup-count>
...

<multimap name="default">
    <backup-count>2</backup-count>
...

<list name="default">
    <backup-count>2</backup-count>
...

<set name="default">
    <backup-count>2</backup-count>
...

<semaphore name="default">
    <initial-permits>0</initial-permits>
    <backup-count>2</backup-count>
...

<ring-buffer name="default">
    <capacity>10000</capacity>
    <backup-count>2</backup-count>
...
Hazelcast Network Interface Selection

On a host with multiple network interfaces it is necessary to manually specify the network interface(s) to bind to in the network/interfaces section of hazelcast.xml.

If this is not manually specified then Hazelcast will select an arbitrary interface at boot. This may result in the node starting up as a singleton — unable to communicate with the other cluster members.

For example:

<network>
  <interfaces enabled="true">
    <interface>192.168.1.*</interface>
  </interfaces>
</network>

Logging configuration

Note For a description of the logging subsystem, please see Logging.

Managed SGC cluster configuration

The SS7 SGC Stack is built around a configuration subsystem that plays a central role in managing the lifecycle of all configuration objects in the SGC. Configuration data handled by this subsystem is called "managed configuration". During normal cluster operation, configuration data and management state are shared between all cluster nodes. Each cluster node persists configuration data in the local file system (as an XML file).

Managed configuration can be divided based on its cluster or node-level applicability:

  • per node — this configuration is stored cluster-wide but relevant only for a given node (it references just the particular node for which it is relevant; for example, different SCTP associations may be created on specific nodes).

  • cluster wide — this configuration is relevant and the same for each node (general configuration parameters, SCCP configuration, parts of M3UA)

Configuration is represented as a set of configuration objects. These configuration objects are managed through a set of CRUD commands exposed by the Command-Line Management Console distributed with the SGC SS7 Stack.

Configuration objects

Each configuration object within the SS7 SGC Stack is an instance of a particular configuration object type. The type of configuration object defines a set of attributes. For example, configuration objects of type connection are defined by attributes such as port, conn-type, and others. A particular configuration object is identified by its oname attribute, which must be unique among all other configuration objects of that type.

Note Configuration objects may not be created, removed or altered when the SGC cluster is in an upgrade or reversion mode.

Common configuration object attributes

These attributes are common to some or all configuration objects.

oname

Every configuration object has an oname attribute that specifies its Object Name. It is an identifier that must be unique among all other objects of that particular type.

Whenever an attribute is a reference to a configuration object, its value must be equal to the oname attribute of that referenced configuration object.

dependencies

A configuration object depends on another configuration object when any of its attributes reference that other configuration object. That is, the attribute value is equal to the oname attribute of the other configuration object. The configuration system keeps track of such references, and provides the dependencies attribute, which is a counter of how many other configuration objects depend on the current one.

If the dependencies value is greater than 0, the configuration object cannot be removed (to remove it, first all other configuration objects that depend on it must be removed).

enabled

Some configuration objects must be enabled before the configuration layer changes the related runtime state. All such objects expose the enabled attribute with values of true or false.

active

Some configuration objects with the enabled attribute also expose the active attribute, which indicates whether the object was successfully instantiated and is used in processing (for example, whether a connection is established). Possible values are true or false.

General Configuration

Below are attributes for configuring an SGC Stack instance and the entire cluster.

Note
Attribute modification restrictions
  • Only attributes that are "Reconfigurable" can be modified after a configuration object is created.

  • Attributes that do not support "Active Reconfiguration" can be changed only when the configuration object is disabled (the value of its enabled attribute is false).

node

The node configuration object configures an SGC Stack instance. Every SGC instance that is to be part of the cluster must be represented by a node configuration object. During startup, the SGC instance property ss7.instance is matched against the oname of existing nodes. If a match is found, the SGC instance will connect to the cluster, acting as that matching node.

Attribute name Attribute description Default

oname

object name

dependencies
Read only

number of items which depend on this object

enabled
Reconfigurable / Active reconfiguration

is object enabled

false

active
Read only

is object active

switch-local-address
Reconfigurable

local address for CommSwitch to bind to

switch-port
Reconfigurable

local port for CommSwitch to bind to

9701

stack-data-address
Reconfigurable

interface where stack can connect for data connection

The TCAP Stack that is bundled with CGIN RA will use this connection to originate and receive TCAP messages.

value of stack-http-address if defined, otherwise value of switch-local-address

stack-data-port
Reconfigurable

port where stack can connect for data connection

The TCAP Stack that is bundled with CGIN RA will use this connection to originate and receive TCAP messages.

9703

stack-http-address
Reconfigurable

interface where stack can get balancing information

The TCAP Stack that is bundled with CGIN RA will use this connection to register with the SGC node.

value of stack-data-address if defined, otherwise value of switch-local-address

stack-http-port
Reconfigurable

port where stack can get balancing information

The TCAP Stack that is bundled with CGIN RA will use this connection to register with the SGC node.

9702

local-configuration
Read only

node configuration

Specifies values of properties that can be defined in the SGC.properties file.

Note The JMX Object name for node is SGC:type=general,category=node

parameters

The parameters category specifies cluster-wide configuration (for a single cluster).

Attribute name Attribute description Default

sccp-variant
Reconfigurable

SCCP variant to be used: itu or ansi

The following conditions must be met in order to reconfigure this property:

  • Own signalling point code must be set to 0

  • No outbound GTT rules may be configured

  • No DPCs may be configured

  • No CPCs may be configured

  • No connections may be configured

itu

class1-resend-delay
Reconfigurable

delay (in milliseconds) before a class1 message is re-sent when route failure is detected

1000

sp
Reconfigurable

local signalling point code; format dependent on sccp-variant configuration

  • ITU; a 14-bit integer; supported formats: simple integer (4106) or int(3bit)-int(8bit)-int(3bit) (2-1-2)

  • ANSI; a 24-bit integer; supported formats: simple integer (34871) or int(8bit)-int(8bit)-int(8bit) in Network-Cluster-Member order (3-125-44). This simple integer is the network byte order integer derived from the ANSI MTP/M3UA transmission order of the network, cluster and member components, i.e. [00000000 NNNNNNNN CCCCCCCC MMMMMMMM] where N=network, C=cluster and M=member.

0

sp-state
Reconfigurable

local signalling point state (OK/RESTRICTED/DOWN/CONG0/CONG1/CONG2)

OK

ni
Reconfigurable

network indicator used in M3UA messages sent by this node (INTERNATIONAL/SPARE/NATIONAL/RESERVED)

INTERNATIONAL

national
Reconfigurable

the national indicator bit in the address indicator octet of SCCP management messages sent by this node (AUTO/NATIONAL/INTERNATIONAL)

The meaning of AUTO depends on the SCCP variant configured:

  • ITU: AUTO means INTERNATIONAL

  • ANSI: AUTO means NATIONAL

AUTO

sccp-sst-interval
Reconfigurable

interval (in milliseconds) between sent SST messages

10000

sccp-timer-td
Reconfigurable

decay timer (in milliseconds) for SCCP congestion procedure
(Q.714 section 5.2.8 SCCP management congestion reports procedure)

10000

sccp-timer-ta
Reconfigurable

attack timer (in milliseconds) for SCCP congestion procedure
(Q.714 section 5.2.8 SCCP management congestion reports procedure)

600

sccp-max-rsl-m
Reconfigurable

maximum restriction sublevel per restriction level (Q.714 section 5.2.8 SCCP management congestion reports procedure)

4

sccp-max-rl-m
Reconfigurable

maximum restriction level for each affected SP
(Q.714 section 5.2.8 SCCP management congestion reports procedure)

8

sccp-timer-tconn
Reconfigurable

timer (in milliseconds) for congestion abatement

10000

ntfy-m3ua-err
Reconfigurable

time interval (in seconds) between successive notifications sent
when the M3UA layer is unable to process incoming messages

600

ntfy-sccp-err
Reconfigurable

time interval (in seconds) between successive notifications sent
when the SCCP layer is unable to process incoming messages

60

ntfy-tcap-err
Reconfigurable

time interval (in seconds) between successive notifications sent
when the TCAP layer is unable to process incoming messages

60

sccp-timer-reassembly
Reconfigurable

reassembly timeout for SCCP reassembly procedure in milliseconds

2000

sccp-max-reassembly-processes
Reconfigurable

maximum number of concurrent SCCP reassembly processes that may be active.
0 = reassembly not supported.

10000

default-ansi-tcap-version
Reconfigurable

ANSI TCAP version to assume where this cannot be derived from the dialog’s initial Query or Uni message.
Permitted options: 1988 and 2000.

See also the OCSS7 TCAP stack configuration property: default-tcap-version

2000

Note The JMX Object name for parameters is SGC:type=general,category=parameters.
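As a worked example of the ANSI point code encoding used by the sp attribute above (the point code 3-125-44 is illustrative, not a default), the simple-integer form is obtained by packing the network, cluster and member components into a single 24-bit value:

```shell
# ANSI point code 3-125-44: network=3, cluster=125, member=44
# packed as [00000000 NNNNNNNN CCCCCCCC MMMMMMMM]
echo $(( (3 << 16) | (125 << 8) | 44 ))   # prints 228652
```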

M3UA Configuration

The SS7 SGC Stack acts as one or more Application Servers when connected to the Signalling Gateway or another Application Server (in IPSP mode). M3UA configuration can be separated into two related domains:

Note
Attribute modification restrictions

Only attributes that are "Reconfigurable" can be modified after a configuration object is created. Attributes that do not support "Active Reconfiguration" can be changed only when the configuration object is disabled (the value of its enabled attribute is false).

Application Server and routes

After an SCTP association with the SG is established, the as-connection mapping of connections to Application Servers is checked. This allows the SGC to inform the SG which routing contexts should be active for that SCTP association. The SGC also uses this mapping internally to decide which Point Codes are reachable through the SCTP association (the chain of dependencies is: connection - as-connection - as - route - dpc).

as

Represents an Application Server — a logical entity serving a specific Routing Key. Defines the Application Server that the SGC Stack will represent after the connection to the SG/IPSP peer is established.

Attribute name
Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

enabled

is object enabled

false

Reconfigurable using active reconfiguration

traffic-maintenance-role

what kind of AS this is; whether it should handle
sending (ACTIVE) or reception (PASSIVE) of ASP_(IN)ACTIVE

ACTIVE

rc

routing context NOTE: This attribute is optional, but only one as with an unspecified rc can be used per association.

state

state of the AS (DOWN, PENDING, ACTIVE, INACTIVE).

Read-only

pending-size

maximum number of pending messages, per node

Applicable only to AS mode.

0

Reconfigurable

pending-time

maximum pending time (in milliseconds)

Applicable only to AS mode.

0

Reconfigurable

Note The JMX Object name for as is SGC:type=m3ua,category=as.

route

Mapping of DPCs (dpc) accessible when a particular Application Server (as) is active.

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

dpc-name

reference to DPC

as-name

reference to AS

priority

priority of the route (larger value is higher priority)

0

Note The JMX Object name for route is SGC:type=m3ua,category=route.

dpc

A DPC defines a remote point code which is accessible from this SGC cluster. DPCs are used to define routes (route) which bind DPCs to specific Application Server (as) definitions and SCTP associations (connections).

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

dpc

destination point code; format dependent on sccp-variant configuration:

  • ITU; a 14-bit integer; supported formats: simple integer (4106) or int(3bit)-int(8bit)-int(3bit) (2-1-2)

  • ANSI; a 24-bit integer; supported formats: simple integer (34871) or int(8bit)-int(8bit)-int(8bit) in Network-Cluster-Member order (3-125-44). This simple integer is the network byte order integer derived from the ANSI MTP/M3UA transmission order of the network, cluster and member components, i.e. [00000000 NNNNNNNN CCCCCCCC MMMMMMMM]

Reconfigurable

na

network appearance

Attribute is optional

Reconfigurable

mss

maximum user data length per segment to send to this destination, supported values are sccp-variant dependent:

  • ITU: 0-247 (0=do not segment)

  • ANSI: 0-245 (0=do not segment)

245

Reconfigurable

muss

maximum unsegmented SCCP message size to send to this destination as a single unsegmented message, supported values are sccp-variant dependent:

  • ITU: 160-254 inclusive

  • ANSI: 160-252 inclusive

252

Reconfigurable

pudt

SCCP message type that this destination prefers to receive when there is a choice available, supported values: UDT, XUDT

UDT

Reconfigurable

congestion-notification-timeout

ANSI SCCP only: the time in milliseconds for which a congestion notification from M3UA should be considered valid. Supported values: 0+ (0=disable)

600000

Reconfigurable

Note The JMX Object name for dpc is SGC:type=m3ua,category=dpc.

as-precond

Before the Application Server (as) becomes active, the SGC Stack may require certain TCAP Stacks representing particular SSNs (CGIN RAs) to register with the SGC. The Application Server will be activated after ALL defined preconditions (as-precond) are satisfied.

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

as-name

affected AS name

ssn

subsystem number which must be connected

Note The JMX Object name for as-precond is SGC:type=m3ua,category=as-precond.

as-connection

Mapping of Application Server (as) to SCTP association (connection), defining which Application Servers should be active on a particular connection.

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

as-name

reference to AS

conn-name

reference to SGConnection

daud-on-asp-ac

sending DAUD on asp-active

true

Reconfigurable

Note The JMX Object name for as-connection is SGC:type=m3ua,category=as-connection.

Listening for and establishing SCTP associations

Below are attributes for configuration directly related to establishing SCTP associations to M3UA peers.

local-endpoint

local-endpoint, together with local-endpoint-ip, defines the IP address where the SGC Node should listen for incoming SCTP associations. The local-endpoint configuration is also used as the source address for outgoing SCTP associations. local-endpoint by itself defines the port and M3UA configuration for all connections that are associated with it.

Note Each SGC Node can use multiple local endpoints.
Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

enabled

is object enabled

false

Reconfigurable using active reconfiguration

active

is object active

Read-only

node

SGC Node where the object is used

port

local SCTP port

Reconfigurable

max-in-streams

maximum number of input streams requested by the local endpoint
during association initialization

OS default

Reconfigurable

max-out-streams

maximum number of output streams requested by the local endpoint
during association initialization

OS default

Reconfigurable

so-sndbuf

size of the socket send buffer

OS default

Reconfigurable

so-rcvbuf

size of the socket receive buffer

OS default

Reconfigurable

max-asp-active-count

maximum asp-active sending count

100

Reconfigurable

max-asp-inactive-count

maximum asp-inactive sending count

100

Reconfigurable

max-asp-up-count

maximum asp-up sending count

100

Reconfigurable

max-asp-down-count

maximum asp-down sending count

100

Reconfigurable

no-delay

enables or disables a Nagle-like algorithm

true

Reconfigurable

so-linger

linger on close if data is present

OS default

Reconfigurable

Note The JMX Object name for local-endpoint is SGC:type=m3ua,category=local-endpoint.

local-endpoint-ip

Configuration of IP addresses for a local-endpoint; the SCTP multi-homing feature allows multiple IPs to be defined for a single local endpoint.

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

ip

IPv4 or IPv6 address

local-endpoint-name

name of the referenced local endpoint

Note The JMX Object name for local-endpoint-ip is SGC:type=m3ua,category=local-endpoint-ip.
Tip
IPv6 considerations

When using IPv6 addressing, remember to configure the PREFER_IPV4 property in the sgcenv file.
For details, please see configuring SGC_HOME/config/sgcenv

connection

connection, together with conn-ip, defines the remote IP address of the SCTP association. The SGC Node will either try to establish that association (CLIENT mode) or expect a connection attempt from a remote peer (SERVER mode).

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

enabled

is object enabled

false

Reconfigurable using active reconfiguration

active

is object active

Read-only

port

remote SCTP port

Reconfigurable

local-endpoint-name

reference name to local endpoint

conn-type

specifies whether this end initiates the SCTP association (CLIENT) or waits for the remote peer to connect (SERVER)

Reconfigurable

t-ack

specifies how often (in seconds) the ASP attempts to send ASP-UP

2

Reconfigurable using active reconfiguration

t-daud

specifies how often (in seconds) the ASP attempts to send DAUD

60

Reconfigurable using active reconfiguration

t-reconnect

specifies time interval (in seconds) between connection attempts

6

Reconfigurable using active reconfiguration

state-maintenance-role

specifies whether the SGC will send ASP-UP (ACTIVE) or expects to receive ASP-UP (PASSIVE) on connection establishment.

ACTIVE

Reconfigurable

asp-id

asp identifier

Attribute is optional

Reconfigurable

is-ipsp

specifies whether connection works in IPSP mode

true

out-queue-size

maximum number of messages waiting to be written into the SCTP association

1000

Reconfigurable

Note The JMX Object name for connection is SGC:type=m3ua,category=connection.

conn-ip

Configuration of IP addresses for a connection; the SCTP multi-homing feature allows multiple IPs to be defined for a single connection.

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

ip

IPv4 or IPv6 address

conn-name

name of the referenced connection

Note The JMX Object name for conn-ip is SGC:type=m3ua,category=conn-ip.
Tip
IPv6 considerations

When using IPv6 addressing, remember to configure the PREFER_IPV4 property in the SGC_HOME/config/sgcenv file. For details, please see configuring SGC_HOME/config/sgcenv

SCCP Configuration

Global title translation

Tip

Any number of attributes (gtencoding, trtype, natofaddr, numplan) can be left empty to match any value in a GT. More specific rules, defining specific attribute values, are matched first.

To create a rule matching all GTs, also set is-prefix = true, and leave addrinfo empty.

To configure GT translation, see the following sections:

Note
Attribute modification restrictions
  • Only attributes that are "Reconfigurable" can be modified after a configuration object is created.

  • Attributes that do not support "Active Reconfiguration" can be changed only when the configuration object is disabled (the value of its enabled attribute is false).

Incoming GT translation

Use the following attributes to configure GT translation for incoming SCCP messages. Whenever the incoming called address for an SCCP message is a GT, it is matched against this configuration to determine whether it should be accepted and, optionally, which SSN should receive it.

inbound-gtt

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

trtype

translation type

Attribute optional; when unspecified matches ANY value

Reconfigurable

numplan

numbering plan

Attribute optional; when unspecified matches ANY value

Reconfigurable

natofaddr

nature of address

Attribute optional; when unspecified matches ANY value

Reconfigurable

addrinfo

address string

Attribute optional; when unspecified and is-prefix is true, matches ANY value

Reconfigurable

is-prefix

specifies if address string contains prefix

false

Reconfigurable

ssn

local SSN that handles the traffic

Attribute optional; when unspecified the SSN from the Called Party Address will be used

Reconfigurable

Note The JMX Object name for inbound-gtt is SGC:type=sccp,category=inbound-gtt.
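As an illustration, a rule accepting traffic for a specific GT and directing it to a local subsystem might be created like this. The create-inbound-gtt command name is an assumption based on the create-<category> CRUD pattern described under Configuration Subsystem Details, and the address digits and SSN are examples only:

```
create-inbound-gtt oname=in-hlr addrinfo=48123456789 is-prefix=false ssn=6
```

With trtype, numplan, and natofaddr left unspecified, this rule matches any value for those fields, and matched messages are delivered to local SSN 6.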

Outgoing GT translation

GT translation configuration used for outgoing SCCP messages. Whenever the called address of an outgoing SCCP message is a GT, it is matched against this configuration to derive the destination PC and, optionally, the SSN. After translation, the SCCP message’s called address may be modified according to the replace-gt configuration.

outbound-gt

Use the following attributes to configure GT translation in an outgoing direction. An outgoing SCCP message that contains a GT in the called address parameter will be matched against the outbound-gt definitions.

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

trtype

translation type

Attribute optional; when unspecified matches ANY value

Reconfigurable

numplan

numbering plan

Attribute optional; when unspecified matches ANY value

Reconfigurable

natofaddr

nature of address

Attribute optional; when unspecified matches ANY value

Reconfigurable

addrinfo

address string

Attribute optional; when unspecified and is-prefix is true matches ANY value

Reconfigurable

is-prefix

specifies if the address string contains a prefix

false

Reconfigurable

Note The JMX Object name for outbound-gt is SGC:type=sccp,category=outbound-gt.

outbound-gtt

Use these attributes to represent a Destination Point Code where the SCCP message with a matching GT (referenced through the gt attribute) will be sent.

Tip

Multiple outbound-gtt definitions can reference a single outbound-gt. In such cases, a load-balancing procedure is invoked.

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

gt

reference to outbound-gt

Reconfigurable

dpc

destination point code; supported format is sccp-variant dependent:

  • ITU; a 14-bit integer; supported formats: simple integer (4106) or int(3bit)-int(8bit)-int(3bit) (2-1-2)

  • ANSI; a 24-bit integer; supported formats: simple integer (34871) or int(8bit)-int(8bit)-int(8bit) in Network-Cluster-Member order (3-125-44). This simple integer is the network byte order integer derived from the ANSI MTP/M3UA transmission order of the network, cluster and member components, i.e. [00000000 NNNNNNNN CCCCCCCC MMMMMMMM]

Reconfigurable

replace-gt

reference to replace-gt

Reconfigurable

priority

priority of this translation (larger value is higher priority)

Reconfigurable

Note The JMX Object name for outbound-gtt is SGC:type=sccp,category=outbound-gtt.
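The relationship between the two dpc formats can be shown with a short sketch. This is illustrative only (not part of the SGC); it packs the dash-separated forms into the simple-integer equivalents described above:

```python
# Illustrative: converting the human-readable point code formats into the
# simple-integer form accepted by the dpc attribute.

def itu_pc_to_int(msb3: int, mid8: int, lsb3: int) -> int:
    """Pack an ITU 3bit-8bit-3bit point code into a 14-bit integer."""
    assert 0 <= msb3 < 8 and 0 <= mid8 < 256 and 0 <= lsb3 < 8
    return (msb3 << 11) | (mid8 << 3) | lsb3

def ansi_pc_to_int(network: int, cluster: int, member: int) -> int:
    """Pack an ANSI Network-Cluster-Member point code into a 24-bit integer,
    matching the [00000000 NNNNNNNN CCCCCCCC MMMMMMMM] layout."""
    assert all(0 <= c < 256 for c in (network, cluster, member))
    return (network << 16) | (cluster << 8) | member

print(itu_pc_to_int(2, 1, 2))      # 4106, matching the ITU example above
print(ansi_pc_to_int(3, 125, 44))  # 228652
```

Either form may be supplied in configuration; the SGC treats them as the same point code.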

replace-gt

These attributes may be used to modify the SCCP message’s called address parameter, after a matching GT is found through the use of outbound-gt and outbound-gtt.

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

route-on

what will be inserted in the SCCP called address; allowed values are:

  • SSN — route on SSN; include translation-derived DPC

  • GT — route on GT

  • RI — leave intact.

RI

Reconfigurable

gtencoding

new encoding of the address

Attribute optional

Reconfigurable

trtype

new translation type

Attribute optional

Reconfigurable

numplan

new numbering plan

Attribute optional

Reconfigurable

natofaddr

new nature of address

Attribute optional

Reconfigurable

addrinfo

new address string in hex/bcd format

Attribute optional

Reconfigurable

ssn

specify new SSN to add to GT

Attribute optional

Reconfigurable

gti

new global title indicator; allowed values are: 0 (no GT), 1, 2, 3, or 4

Attribute optional

Reconfigurable

Note The JMX Object name for replace-gt is SGC:type=sccp,category=replace-gt.

Concerned Point Codes

cpc

CPC configuration stores information about remote SCCP nodes that should be informed about the local subsystem availability state.

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

dpc

concerned point code which is notified about status changes; format depends on sccp-variant configuration:

  • ITU; a 14-bit integer; supported formats: simple integer (4106) or int(3bit)-int(8bit)-int(3bit) (2-1-2)

  • ANSI; a 24-bit integer; supported formats: simple integer (34871) or int(8bit)-int(8bit)-int(8bit) in Network-Cluster-Member order (3-125-44). This simple integer is the network byte order integer derived from the ANSI MTP/M3UA transmission order of the network, cluster and member components, i.e. [00000000 NNNNNNNN CCCCCCCC MMMMMMMM]

Note The JMX Object name for cpc is SGC:type=sccp,category=cpc.

Load balancing

Global Title translation may be used to provide load balancing and high availability functions. If more than one outbound-gtt references a single outbound-gt, the SCCP layer is responsible for routing the message to one of the available SCCP nodes (destination point codes). If the SCCP message is a subsequent message in stream class 1 (sequenced connectionless) and the previously selected SCCP node (PC) is still reachable, then that previously used PC is selected. For any other message where GT translation results in multiple reachable point codes, messages are load balanced among the available PCs with the highest priority.

The pseudo-algorithm is:

  1. outbound-gtts referencing the same GT (outbound-gt) are grouped according to their priority (larger value is higher priority).

  2. Unreachable PCs are filtered out.

  3. Unreachable destination subsystems (for which SSP has been received) are filtered out.

  4. If there is more than one PC of highest priority, then messages are load balanced using a round robin algorithm between those PCs.

Whenever the prefer-local attribute of outbound-gtt is set to value true, routes local to the node are used in that algorithm (if such routes are currently available; otherwise prefer-local is ignored). Routes local to the node are those that are available through an SCTP association that was established by that particular node.
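The pseudo-algorithm above can be sketched as follows. This is a hypothetical illustration only — the function name and data shapes are not the SGC's internal API; it simply mirrors the documented steps (prefer-local handling is omitted for brevity):

```python
# Illustrative sketch of GT-translation load balancing (not SGC internals).

def select_pc(gtts, reachable_pcs, unreachable_ssns, rr_state):
    """gtts: list of (pc, ssn, priority) tuples for outbound-gtts sharing one GT.
    rr_state: single-element list holding the round-robin counter."""
    # Steps 2 and 3: drop unreachable PCs and destination subsystems
    # for which SSP has been received.
    candidates = [(pc, pri) for pc, ssn, pri in gtts
                  if pc in reachable_pcs and (pc, ssn) not in unreachable_ssns]
    if not candidates:
        return None
    # Step 1: group by priority and keep the highest (larger value wins).
    top = max(pri for _, pri in candidates)
    pcs = [pc for pc, pri in candidates if pri == top]
    # Step 4: round robin between the PCs of equal, highest priority.
    pc = pcs[rr_state[0] % len(pcs)]
    rr_state[0] += 1
    return pc
```

For example, two outbound-gtts at priority 2 and one at priority 1 would alternate traffic between the two priority-2 point codes, falling back to the priority-1 PC only when both become unreachable.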

SNMP Configuration

Interoperability with SNMP-aware management clients

The SS7 SGC stack includes an SNMP agent, for interoperability with external SNMP-aware management clients. The SGC SNMP agent provides a read-only view of SGC statistics and alarms (through SNMP polling), and supports sending SNMP notifications related to SGC alarms and notifications to an external monitoring system.

In a clustered environment, individual SGC nodes may run their own instances of the SNMP agent, so that statistics and notifications can still be accessed in the event of node failure. Each node is capable of running multiple SNMP agents with different user-defined configuration.

Tip For detailed information about SGC exposed statistics, please see Statistics. For details about SGC alarms and notifications, please see Alarms and Notifications.
Note
Attribute modification restrictions
  • Only attributes that are "Reconfigurable" can be modified after a configuration object is created.

  • Attributes that do not support "Active Reconfiguration" can be changed only when the configuration object is disabled (the value of its enabled attribute is false).

SNMP configuration

Each snmp-node configuration object represents an SNMP agent running as part of a particular SGC node. The exposed configuration allows a single SNMP agent to support a single version of the SNMP protocol. Currently supported SNMP versions are 2c and 3. Multiple snmp-nodes (SNMP agents) can run within a single SGC node. In a clustered environment, each newly created snmp-node is automatically connected to the existing target-address and usm-user configuration objects.

snmp-node

snmp-node represents an SNMP agent running as part of a particular SGC node.

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

enabled

is object enabled

false

Reconfigurable using active reconfiguration

active

is object active

Read-only

node

SGC node where the object is used

transport-type

Comma separated list of SNMP address type(s) to configure this node for (supported values: UDP, TCP)

UDP

Reconfigurable

port

local SNMP listening port

Reconfigurable

host

local SNMP listening bind address

127.0.0.1

Reconfigurable

community

community for read operations

public

Reconfigurable

snmp-version

SNMP version (supported values: v3, v2c)

v2c

Reconfigurable

extended-traps

whether extended traps (and informs) should be generated by this node. Extended traps/informs include a longer description field, plus cause, effect and action fields which can result in an SNMP PDU up to 1400 octets long. A value of false here will exclude these fields, resulting in traps under 484 octets long.

true

Reconfigurable

Note The JMX Object name for snmp-node is SGC:type=snmp,category=snmp-node.
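As an illustration, an SNMP v2c agent for one node might be configured from the Command-Line Management Console like this. The create-snmp-node and enable-snmp-node command names are assumptions based on the create-<category> CRUD pattern described under Configuration Subsystem Details, and the node name, bind address, and port are examples only:

```
create-snmp-node oname=snmp1 node=CLUSTER-NODE1 host=0.0.0.0 port=11100 snmp-version=v2c
enable-snmp-node oname=snmp1
```

Once enabled, this agent answers SNMP polls for statistics and alarms on the configured port of that node.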

target-address

target-address is cluster wide and represents an SNMP notification target (defines where SNMP notifications will be sent and which protocol version is used).

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

transportDomain

SNMP transport domain

(SNMP transport protocol supported values: TcpIpv4, UDP, TcpIpv6, UdpDns, UdpIpv6, UdpIpv4, TcpDns)

In order for notifications to be emitted, the chosen transport domain must be compatible with one of the selected transport-type values in snmp-node; e.g. UdpIpv4 cannot be used if the transport-type is set to TCP, but may be used if the transport-type is UDP,TCP or UDP.

UDP

Reconfigurable

target-host

target host address

Reconfigurable

target-port

target port

162

Reconfigurable

timeout

timeout value (in units of 0.01 seconds) after which unacknowledged SNMP notifications (type inform) will be retransmitted

200

Reconfigurable

retries

number of retransmissions of unacknowledged SNMP notifications (type inform)

1

Reconfigurable

community

community name definition

public

Reconfigurable

notifyType

SNMP notification mechanism

(supported values: trap — asynchronous unacknowledged notification,
inform — asynchronous acknowledged notification)

trap

Reconfigurable

snmp-version

SNMP version

(supported values: v3, v2c)

v2c

Reconfigurable

Note The JMX Object name for target-address is SGC:type=snmp,category=target-address.
Warning target-address can be created, deleted, and its attributes reconfigured, only when all referenced snmp-nodes are disabled.

usm-user

usm-user is cluster wide and represents the SNMP v3 USM user and authentication details.

Attribute name Attribute description Default Modification

oname

object name

dependencies

number of items which depend on this object

Read-only

authProto

authentication protocol

(supported values: SHA, MD5, NONE)

SHA

Reconfigurable

authPassphrase

authentication protocol pass phrase

Reconfigurable

privProto

privacy protocol

(supported values: AES192, DES, DES3, AES256, NONE, AES128)

DES

Reconfigurable

privPassphrase

privacy protocol pass phrase

Reconfigurable

community

specifies community

public

Reconfigurable

Note The JMX Object name for usm-user is SGC:type=snmp,category=usm-user.
Warning usm-user can be created, deleted, and its attributes reconfigured only when all referenced snmp-nodes are disabled.

SGC Stack MIB definitions

MIB definitions for the SGC stack are separated into three files:

  • COMPUTARIS-MIB — basic OID definitions used by the SGC stack

  • OPENCLOUD-OCSS7-MIB — the Metaswitch enterprise MIB definition and OCSS7 System OIDs

  • CTS-SGC-MIB — SNMP managed objects and SNMP notifications used by the SGC stack.

Tip MIB definitions are also included in the SGC Stack release package under ./doc/mibs/

SNMP managed objects

Managed objects defined in CTS-SGC-MIB can be separated into two groups:

Statistics managed objects

Here are the managed objects representing SGC statistics:

Symbolic OID Numerical OID Equivalent Statistics attribute

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcObjects.sgcAssociationTable

.1.3.6.1.4.1.35787.1.1.1

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcObjects.sgcAsStateTable

.1.3.6.1.4.1.35787.1.1.2

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcObjects.sgcLocalSsnTable

.1.3.6.1.4.1.35787.1.1.3

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcObjects.sgcTcapConnectionTable

.1.3.6.1.4.1.35787.1.1.4

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcObjects.sgcDpcStatusTable

.1.3.6.1.4.1.35787.1.1.5

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcObjects.sgcGtRoutingTable

.1.3.6.1.4.1.35787.1.1.6

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcObjects.sgcPcRoutingTable

.1.3.6.1.4.1.35787.1.1.7

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcObjects.sgcRemoteSsnTable

.1.3.6.1.4.1.35787.1.1.8

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcObjects.sgcHealthTable

.1.3.6.1.4.1.35787.1.1.9

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcObjects.sgcSccpStatsTable

.1.3.6.1.4.1.35787.1.1.10

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcObjects.sgcSccpErrorStatsTable

.1.3.6.1.4.1.35787.1.1.11

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcObjects.sgcTcapStatsTable

.1.3.6.1.4.1.35787.1.1.12

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcObjects.sgcTcapErrorStatsTable

.1.3.6.1.4.1.35787.1.1.13
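These tables can be polled with any standard SNMP client. For example, using the net-snmp snmpwalk tool against a local agent (the host, port, and community here are examples and must match your snmp-node configuration):

```
# Walk the SCTP association status table (sgcAssociationTable)
snmpwalk -v2c -c public 127.0.0.1:11100 .1.3.6.1.4.1.35787.1.1.1
```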

Alarms managed objects

Here are the managed objects representing SGC alarms:

Symbolic OID Numerical OID Equivalent Alarms MBean

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcEvents.sgcActiveAlarmsTable

.1.3.6.1.4.1.35787.1.2.5

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcEvents.sgcEventHistoryTable

.1.3.6.1.4.1.35787.1.2.6

SNMP notifications

The MIB defined in CTS-SGC-MIB specifies two notification types that can be emitted by SGC:

  • SGC Alarm — emitted whenever an SGC Alarm is raised

  • SGC Notification — emitted whenever an SGC Notification is emitted.

Notifications can be raised in either basic format or extended format.

Here are the notification types emitted by SGC:

Symbolic OID Numerical OID

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcEvents.sgcAlarm

.1.3.6.1.4.1.35787.1.2.2

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcEvents.sgcNotification

.1.3.6.1.4.1.35787.1.2.3

Here is the content of SGC-emitted SNMP notifications:

Symbolic OID Numerical OID Details

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcEvents. sgcAlarmObjects.sgcEventId

.1.3.6.1.4.1.35787.1.2.4.2

unique SGC notification / alarm identifier

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcEvents. sgcAlarmObjects.sgcTimestamp

.1.3.6.1.4.1.35787.1.2.4.3

time of the SGC alarm / notification

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcEvents. sgcAlarmObjects.sgcAlarmName

.1.3.6.1.4.1.35787.1.2.4.4

name of the SGC alarm / notification type

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcEvents. sgcAlarmObjects.sgcAlarmSeverity

.1.3.6.1.4.1.35787.1.2.4.5

SGC alarm / notification severity

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcEvents. sgcAlarmObjects.sgcAdditionalInfo

.1.3.6.1.4.1.35787.1.2.4.6

comma-separated list of SGC Alarm / Notification type specific attribute=value pairs;
this set of attributes depends on SGC Alarm / Notification type and is described in
Alarm Types and Notification Types

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcEvents. sgcAlarmObjects.sgcAlarmMessage

.1.3.6.1.4.1.35787.1.2.4.7

short description of the alarm or notification

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcEvents. sgcAlarmObjects.sgcAlarmDescription

.1.3.6.1.4.1.35787.1.2.4.8

(extended format only)
long description of the alarm

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcEvents. sgcAlarmObjects.sgcAlarmCause

.1.3.6.1.4.1.35787.1.2.4.9

(alarms only)
(extended format only)
possible causes of the alarm

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcEvents. sgcAlarmObjects.sgcAlarmEffect

.1.3.6.1.4.1.35787.1.2.4.10

(alarms only)
(extended format only)
possible consequences of the alarm

.iso.org.dod.internet.private.enterprises. ctsRegMIB.ctsObjects.sgcEvents. sgcAlarmObjects.sgcAlarmAction

.1.3.6.1.4.1.35787.1.2.4.11

(alarms only)
(extended format only)
remedial action to be taken

Configuration Procedure

This is a high-level description of the usual SGC configuration procedure, listing the commands involved for each of the three layers:

It is up to the user to provide the details required by the commands; links to the configuration reference material for each command have been provided to make this easier.

General configuration

Step

Operation

1

Set the cluster-wide SCCP variant.

Configure the SCCP variant to the required value. Supported variants are:

  • itu (default)

  • ansi

2

Set the cluster-wide local SS7 Point Code address.

Modify the sp attribute to the desired point code. The format depends on the value of the sccp-variant configuration parameter.

  • ITU: 14 bits

    • simple integer, such as 4106

    • 3bit-8bit-3bit format (3 most significant bits, 8 middle bits, 3 least significant bits) as decimal integer values, such as 2-1-2.

  • ANSI: 24 bits

    • simple integer, such as 34871. This simple integer is the network byte order integer derived from the ANSI MTP/M3UA transmission order of the network, cluster and member components. i.e. [00000000 NNNNNNNN CCCCCCCC MMMMMMMM]

    • 8bit-8bit-8bit format in Network-Cluster-Member order as decimal integer values, such as 3-125-44.

3

Create two new nodes for the cluster.

The oname node attribute must be equal to the ss7.instance property value of the SGC Stack instance that will act as that particular node.
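As an illustration, the general configuration steps might be expressed as console commands like this. The create-node command follows the pattern documented under Configuration Subsystem Details; the command for setting cluster-wide parameters (sccp-variant and sp) is an assumption based on the special parameters MBean described later, and the names and point code are examples — verify all of these against the configuration reference for your release:

```
modify-parameters sccp-variant=itu sp=2-1-2
create-node oname=CLUSTER-NODE1
create-node oname=CLUSTER-NODE2
```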

SCCP configuration

Below are the steps for configuring outgoing and incoming GTs to be translated, and CPCs.

Note Configuring SCCP GT Translation and Concerned Point Codes is optional.

Outgoing GT

For each outgoing GT to be translated, repeat these steps:

Step

Operation

1

Create the GT definition for outbound use.

2

Create the address definition that should be the result of matching the previously defined GT.

Be sure that the gt attribute is equal to the oname of the outbound-gt definition from the previous step.

3

Optionally, as a result of matching a particular GT, modify the called address before sending.

After creating the replace-gt, be sure to modify the replace-gt attribute of the outbound-gtt created in the previous step
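The three steps above might be expressed as console commands like this. The command names are assumptions based on the create-<category>/modify-<category> CRUD pattern described under Configuration Subsystem Details, and the digits, point code, and SSN are examples only:

```
create-outbound-gt oname=gt-intl addrinfo=48 is-prefix=true
create-outbound-gtt oname=gtt-intl gt=gt-intl dpc=2-1-2 priority=5
create-replace-gt oname=rgt-intl route-on=SSN ssn=8
modify-outbound-gtt oname=gtt-intl replace-gt=rgt-intl
```

Here any called address GT beginning with 48 is routed to point code 2-1-2, with the called address rewritten to route on SSN 8.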

Incoming GT

For each incoming GT to be translated, repeat these steps:

Step Operation

1

Create a GT and address definition (SSN) that should be the result of translation.

2

Create the GT definition for outbound use, making sure it matches the inbound GTT correctly.

3

Create the address definition that should be the result of matching the previously defined GT.

Be sure that the gt attribute is equal to the oname of the outbound-gt definition from the previous step.

Tip The second and third commands may look somewhat surprising, as they create an outbound GTT rule. SCCP’s service messages (UDTS and XUDTS) may be generated locally in response to traffic we are attempting to send, and these service messages are routed as outbound messages. It is safest to create outbound GTT rules mirroring your inbound GTT rules in case they are needed by your network configuration.

Concerned Point Code

For each remote signalling node that is concerned about local SSN availability, repeat this step:

Step Operation

1

Define the PC that should be notified about subsystem availability changes.

M3UA configuration

Below are instructions for configuring M3UA in AS, IPSP Client, and IPSP Server modes.

Step Operation

1

If not done previously, define a local-endpoint for the existing node.

2

Define IPs for the local-endpoint.

3

If you are:

  • connecting to an SG, set the:

    • is-ipsp attribute to false,

    • conn-type attribute to CLIENT, and

    • state-maintenance-role to ACTIVE.

  • using IPSP mode as a client, set the:

    • is-ipsp attribute to true,

    • conn-type attribute to CLIENT, and

    • state-maintenance-role to ACTIVE.

  • using IPSP mode as a server, set the:

    • is-ipsp attribute to true,

    • conn-type attribute to SERVER, and

    • state-maintenance-role to PASSIVE.

4

Define one or more IP addresses for connection.

5

Define one or more Application Servers (Routing Contexts).

Set the traffic-maintenance-role attribute to ACTIVE.

6

Define one or more Destination Point Codes that will be available for the AS.

7

Define one or more routes that associate previously created DPC(s) and AS(s).

8

Define one or more associations for AS(s) that are available through particular connection(s).

9

Enable the node, as, local-endpoint, and connection.
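An end-to-end sketch of these M3UA steps, for a node connecting to an SG, might look like the following. The command names follow the create-<category>/enable-<category> CRUD pattern described under Configuration Subsystem Details, but the attribute names for as, dpc, route, and as-connection in particular are illustrative assumptions, as are all object names, addresses, and the point code — verify against the configuration reference for your release:

```
create-local-endpoint oname=le1 node=CLUSTER-NODE1 port=2905
create-local-endpoint-ip oname=le1-ip1 ip=10.0.0.1 local-endpoint-name=le1
create-connection oname=conn1 port=2905 local-endpoint-name=le1 conn-type=CLIENT is-ipsp=false state-maintenance-role=ACTIVE
create-conn-ip oname=conn1-ip1 ip=10.0.0.2 conn-name=conn1
create-as oname=as1 traffic-maintenance-role=ACTIVE
create-dpc oname=dpc1 dpc=2-1-2
create-route oname=route1 as-name=as1 dpc-name=dpc1
create-as-connection oname=asconn1 as-name=as1 conn-name=conn1
enable-node oname=CLUSTER-NODE1
enable-as oname=as1
enable-local-endpoint oname=le1
enable-connection oname=conn1
```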

Configuration Subsystem Details

Stack configuration and cluster joining

The main functions of the configuration subsystem are:

  • managing, distributing, and persisting SGC Stack configuration

  • performing the cluster-join procedure.

Below are details of the cluster-join procedure (which is part of SGC Node initialization) and a basic description of the JMX MBeans exposed by the configuration subsystem that may be of use when developing custom management tools.

Note
SGC_HOME

In the following instructions, SGC_HOME represents the path to the SGC Stack installation directory.

Cluster-join procedure

During startup, if the SGC cluster already exists, a node instance initiates a cluster-join procedure. The configuration system loads a locally persisted configuration and compares its version vector against the current cluster configuration.

IF THEN

Versions are equal, or the local version is stale.

The SGC node uses the cluster configuration state. It joins the cluster and instantiates a cluster-wide and per-node state.

The local version is greater.

The local instance first instantiates all configuration objects which are not present in the cluster. Then it updates all configuration objects which are defined both in the cluster and in the local configuration. Finally, it deletes all configuration objects which are not present in the local configuration.

There is a version conflict.

The node aborts the cluster-join procedure, outputs a failure log message, and aborts start up.
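The decision table above can be sketched as follows. This is a simplified illustration only — the SGC's version vectors are internal, and here one is merely modelled as a dict mapping a node name to an update counter:

```python
# Illustrative sketch of the cluster-join decision (not SGC internals).

def join_action(local: dict, cluster: dict) -> str:
    """Compare two version vectors and return the documented join action."""
    nodes = set(local) | set(cluster)
    cluster_covers = all(cluster.get(n, 0) >= local.get(n, 0) for n in nodes)
    local_covers = all(local.get(n, 0) >= cluster.get(n, 0) for n in nodes)
    if cluster_covers:            # versions equal, or the local version is stale
        return "use-cluster-config"
    if local_covers:              # the local version is strictly greater
        return "push-local-config"
    return "abort-startup"        # version conflict: concurrent histories
```

The conflict branch corresponds to the case where each side has changes the other has not seen, which is exactly the situation resolved by the reconciliation procedure described next.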

Version conflict reconciliation

The local node stores a copy of the configuration in the SGC_HOME/var/sgc.dat file in the SS7 SGC node working directory. This is an XML file containing the entire cluster configuration as last known by that node. The usual approach to configuration reconciliation is for the node to join the cluster and use the current cluster configuration to initialize (dropping its local state).

Tip To force this behaviour, remove or rename the SGC_HOME/var/sgc.dat file containing persisted configuration.
Note
Configuration backup

After each configuration change, a new version of the sgc.dat file is created; beforehand, the current version is renamed to SGC_HOME/var/sgc.dat.old.

Factory MBeans for configuration objects

The SS7 SGC Stack is built around a configuration subsystem that plays a central role in managing the lifecycle of all configuration objects in SGC. SS7 SGC configuration is exposed through a set of JMX MBeans on each node. The configuration subsystem exposes a set of "factory" MBeans that are responsible for creating configuration objects of a certain type.

Each factory MBean exposes either one or two create- operations used to create new configuration objects (a create operation that accepts more parameters allows for defining optional configuration object attributes during creation). The creation of a configuration object results in the creation of an MBean representing the state of that configuration object. Attributes can be modified directly through that MBean, which also exposes a remove operation that allows removal of the configuration object (and associated MBeans).

Note
Command-Line Management Console

These processes are abstracted away by the Command-Line Management Console and exposed as a set of CRUD commands.

Configuration MBean naming

SGC Configuration MBean names use the common domain SGC and a set of properties:

  • type — represents a subsystem / layer (general, m3ua, sccp, or snmp)

  • category — represents the name of the factory in the subsystem

  • id — represents that instance of the processing object.

For example, the cluster node factory MBean has the name SGC:type=general,category=node, which exposes the create-node operation. Invoking the create-node operation creates a processing object representing a node in the cluster with the object name SGC:type=general,category=node,id=NODE_NAME. The id property is set based on the oname (object name) parameter of the create-node operation.

Most GUI-based JMX-Management tools represent the naming space as a tree of MBean objects, like this:

(figure: tree of SGC configuration MBeans, grouped by type and category)
Note There is a single special MBean object named SGC:type=general,category=parameters that is neither a factory MBean nor a processing object. This MBean stores cluster-wide system parameters.

Operational State and Instance Management

Operational state

SS7 SGC Stack operational state information is exposed cluster wide. That is, the same operational state information is exposed on each cluster node, independent of the particular source of information (particular node). It is enough to connect to a single cluster node to observe the operational state of all other nodes in the cluster.

The same operational state is exposed through:

  • commands exposed by the Command-Line Management Console that is distributed with the SGC SS7 Stack

  • the SNMP protocol when an SNMP agent is configured to run within a particular SGC node.

Warning
Notification propagation

An important distinction related to SS7 SGC Stack notification support is that notifications (both through JMX and SNMP) are sent only through the node that actually observes a related event (for example when a connection goes down).

Operational state is exposed through Alarms, Notifications, and Statistics. Operating details of the SS7 SGC stack can be observed using the Logging subsystem.

Static SGC instance configuration

Each instance (node) of the SS7 SGC Stack exposes information that can be used to check its current instance configuration properties, which are queried using the display-local command of the Command-Line Management Console.

Alarms

What are SS7 SGC alarms?

Alarms in the SS7 SGC stack alert the administrator to exceptional conditions. Subsystems in the SS7 SGC stack raise them upon detecting an error condition or an event of high importance. The SS7 SGC stack clears alarms automatically when the error conditions are resolved; an administrator can clear any alarm at any time. When an alarm is raised or cleared, the SS7 SGC stack generates a notification that is sent as a JMX Notification and an SNMP trap/notification.

The SS7 SGC stack defines multiple alarm types. Each alarm type corresponds to a type of error condition or important event (such as "SCTP association down"). The SGC stack can raise multiple alarms of any type (for example, multiple "SCTP association down" alarms, one for each disconnected association).

Alarms are inspected and managed through a set of commands exposed by the Command-Line Management Console, which is distributed with SGC SS7 Stack.

Active alarms and event history

The SS7 SGC Stack stores and exposes two types of alarm-related information:

  • active alarms — a list of alarms currently active

  • event history — a list of alarms and notifications that were raised or emitted in the last 24 hours (this is the default value — see Configuring the SS7 SGC Stack).

At any time, an administrator can clear all or selected alarms.

Generic alarm attributes

Alarm attributes represent information about events that result in an alarm being raised. Each alarm type has the following generic attributes, plus a group of attributes specific to that alarm type (described in the following sections).

There are two types of generic attribute: basic and extended.

Basic attributes:

  • Are displayed by default in the SGC’s CLI.

  • Are always included in full in SNMP traps.

  • Are returned in full by SNMP queries.

Extended attributes:

  • Are not displayed by default in the SGC’s CLI. This behaviour may be overridden by specifying additional columns using the column attribute in the display-active-alarm or display-event-history CLI commands.

  • Are included in an SNMP trap or inform only if the SNMP agent is configured for extended-traps; otherwise they are omitted.

  • Are returned in full by SNMP queries.

The full set of attributes is described in the following table:

Attribute Type Description

id

basic

A unique alarm instance identifier, presented as a number. This identifier can be used to track alarms, for example by using it to identify the raise and clear event entries for an alarm in the event history, or to refer to a specific alarm in the commands which can be used to manipulate alarms.

name

basic

The name of the alarm type. A catalogue of alarm types is given below.

severity

basic

The alarm severity:

  • CRITICAL — application encountered an error which prevents it from continuing (it can no longer provide services)

  • MAJOR — application encountered an event which significantly impacts delivered services; some services may no longer be available

  • MINOR — application reports an event which does not have significant impact on delivered services

  • INFO — application reports an information event which does not have any impact on delivered services

  • CLEARED — alarm has been cleared.

timestamp

basic

The date and time at which the event occurred.

parameters

basic

A comma-separated list of key=value pairs specific to the alarm. The catalogue below documents the parameters each alarm may have.

description

basic

A short description of the alarm.

longDescription

extended

A longer description of the alarm.

cause

extended

A guide to some possible causes of the alarm. The described causes should not be considered exhaustive.

effect

extended

The possible consequences of the condition that caused the alarm to be raised.

action

extended

Actions that can be taken to remedy the alarm. Note that not all remedies can be described within the constraints of an alarm text. Refer back to this guide or contact support for more assistance.
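
A monitoring script can split the parameters attribute mechanically. A minimal sketch follows; the sample alarm string is invented for illustration, and the naive comma split may not survive values that themselves contain commas (such as some failureDescription texts):

```python
def parse_alarm_parameters(parameters: str) -> dict:
    """Split a comma-separated key=value list into a dict.

    Assumes values contain no commas; real failureDescription
    values may need more careful handling.
    """
    result = {}
    for pair in parameters.split(","):
        # partition splits on the first "=" only, so values may contain "=".
        key, _, value = pair.partition("=")
        result[key.strip()] = value.strip()
    return result

# Sample string modelled on the commswitchbindfailure parameters; the value is invented.
params = parse_alarm_parameters("nodeId=node-1,failureDescription=address in use")
```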

Alarm types

This section describes all alarm types that can be raised in an SGC cluster.

General Alarms

This section describes alarms raised concerning the general operational state of the SGC or SGC cluster.

commswitchbindfailure

The CommSwitch is unable to bind to the configured switch-local-address and switch-port. This alarm is cleared when the CommSwitch is able to successfully bind the configured address and port.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

commswitchbindfailure

severity

alarm severity

CRITICAL

timestamp

timestamp when the event occurred

description

short alarm description

The CommSwitch was unable to bind its listen port

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

nodeId

affected node

failureDescription

the cause of the bind failure

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

The CommSwitch is unable to bind to the configured switch-local-address and switch-port. This alarm is cleared when the CommSwitch is able to successfully bind the configured address and port.

cause

possible alarm causes

This is typically caused by misconfiguration; the administrator must ensure that the CommSwitch is configured to use a host and port pair that is always available for the SGC’s exclusive use.

effect

potential consequences

SGC nodes in the same cluster are unable to route messages to each other.

action

summary of remedial action

Correct the SGC’s CommSwitch configuration or locate and terminate the process that is bound to the SGC’s CommSwitch address and port.

configSaveFailed

This alarm is raised when the SGC is unable to save its configuration. This alarm is cleared when the configuration file is next successfully saved.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

configSaveFailed

severity

alarm severity

MAJOR

timestamp

timestamp when the event occurred

description

short alarm description

Failed to save SGC configuration

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

nodeId

affected node

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised when the SGC is unable to save its configuration. This alarm is cleared when the configuration file is next successfully saved.

cause

possible alarm causes

Insufficient storage space, or changes to the read/write permissions of any previously saved configuration files.

effect

potential consequences

The SGC configuration may be out of date or not saved at all.

action

summary of remedial action

Examine SGC logs to determine cause of save failure and rectify.

distributedDataInconsistency

This alarm is raised when a distributed data inconsistency is detected. This alarm must be cleared manually since it indicates a problem that may result in undefined behaviour within the SGC, and requires a restart of the SGC cluster to correct. When restarting the cluster it is necessary to fully stop all SGC nodes and only then begin restarting them to properly correct the problem detected by this alarm.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

distributedDataInconsistency

severity

alarm severity

CRITICAL

timestamp

timestamp when the event occurred

description

short alarm description

a distributed data inconsistency has been detected

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

nodeId

affected node

source

the location where the data inconsistency was detected

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised when a distributed data inconsistency is detected. This alarm must be cleared manually since it indicates a problem that may result in undefined behaviour within the SGC, and requires a restart of the SGC cluster to correct. When restarting the cluster it is necessary to fully stop all SGC nodes and only then begin restarting them to properly correct the problem detected by this alarm.

cause

possible alarm causes

A distributed data inconsistency has been detected; the most likely cause of this is a misconfigured backup-count parameter in hazelcast.xml

effect

potential consequences

Undefined behaviour from the SGC is possible at any time

action

summary of remedial action

Fully terminate the cluster, correct the underlying issue, then restart the whole cluster

illegalClusterEntry

This alarm is raised when a node that doesn’t support the current cluster version enters the cluster. This alarm must be cleared manually.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

illegalClusterEntry

severity

alarm severity

CRITICAL

timestamp

timestamp when the event occurred

description

short alarm description

A node entered the cluster with an unsupported distributed data version.

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

nodeInfo

information about the illegally entering node

mode

current cluster mode

clusterVersion

current cluster version

targetVersion

target cluster version

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised when a node that doesn’t support the current cluster version enters the cluster. This alarm must be cleared manually.

cause

possible alarm causes

A node that doesn’t support the current cluster version entered the cluster.

effect

potential consequences

Potential for cluster data corruption and instability.

action

summary of remedial action

Terminate the node that doesn’t support the current cluster version. Evaluate cluster status.

mapdatalosspossible

This alarm is raised when the number of SGC nodes present in the cluster exceeds 1 plus the backup-count configured for Hazelcast map data structures. See Hazelcast cluster configuration for information on how to fix this. This alarm must be cleared manually since it indicates a configuration error requiring correction and a restart of the SGC.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

mapdatalosspossible

severity

alarm severity

MAJOR

timestamp

timestamp when the event occurred

description

short alarm description

Hazelcast data loss possible due to mismatch between configured map backup count and actual cluster size

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

backupCount

the currently configured backup count

nodeCount

the largest number of nodes that has been present in the cluster

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised when the number of SGC nodes present in the cluster exceeds 1 plus the backup-count configured for Hazelcast map data structures. See Hazelcast cluster configuration for information on how to fix this. This alarm must be cleared manually since it indicates a configuration error requiring correction and a restart of the SGC.

cause

possible alarm causes

Cluster nodes are misconfigured, or unexpected nodes have entered the cluster.

effect

potential consequences

Potential for distributed data loss.

action

summary of remedial action

See Hazelcast cluster configuration for information on how to correct this.

migrationErrors

This alarm is raised when errors are encountered during the data migration phase of an SGC cluster upgrade or revert. This alarm must be cleared manually since it indicates a potentially critical error during the cluster upgrade which may have an impact on cluster stability.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

migrationErrors

severity

alarm severity

CRITICAL

timestamp

timestamp when the event occurred

description

short alarm description

Errors were encountered during data migration

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

sourceVersion

data version before migration

targetVersion

data version being migrated to

migrationErrors

detailed information about the migration errors

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised when errors are encountered during the data migration phase of an SGC cluster upgrade or revert. This alarm must be cleared manually since it indicates a potentially critical error during the cluster upgrade which may have an impact on cluster stability.

cause

possible alarm causes

One or more errors were encountered during the data migration phase of an SGC cluster upgrade or revert.

effect

potential consequences

The SGC cluster’s behaviour may be undefined, either now or in the future.

action

summary of remedial action

Run bin/generate-report.sh on each cluster member, then terminate the whole cluster. Reinstate the previous cluster version from backups and start the old cluster. Submit a support request.

nodeManagerBindFailure

This alarm is raised when the legacy node manager is unable to bind to the configured stack-http-address and stack-http-port for any reason. This is typically caused by misconfiguration; the administrator must ensure that the node manager is configured to use a host and port pair which is always available for the SGC’s exclusive use. This alarm is cleared when the node manager is able to successfully bind the configured socket.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

nodeManagerBindFailure

severity

alarm severity

CRITICAL

timestamp

timestamp when the event occurred

description

short alarm description

The legacy node manager was unable to bind its listen socket

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

nodeId

affected node

failureDescription

additional information about the failure

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised when the legacy node manager is unable to bind to the configured stack-http-address and stack-http-port for any reason. This is typically caused by misconfiguration; the administrator must ensure that the node manager is configured to use a host and port pair which is always available for the SGC’s exclusive use. This alarm is cleared when the node manager is able to successfully bind the configured socket.

cause

possible alarm causes

The configured stack-http-address and stack-http-port cannot be bound.

effect

potential consequences

Legacy TCAP stacks (those using the urlList method) will not be able to connect to the affected SGC.

action

summary of remedial action

Ensure that stack-http-address and stack-http-port are correctly configured and that no other applications have bound the configured address and port.

nodefailure

This alarm is raised whenever a node configured in the cluster is down. It is cleared when an SGC instance acting as that particular node becomes active.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

nodefailure

severity

alarm severity

MAJOR

timestamp

timestamp when the event occurred

description

short alarm description

A node has left the cluster.

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

nodeId

affected node

failureDescription

additional information about the node failure

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised whenever a node configured in the cluster is down. It is cleared when an SGC instance acting as that particular node becomes active.

cause

possible alarm causes

A configured node is not running. This may be due to administrative action, or the node may have exited abnormally.

effect

potential consequences

Any remaining cluster nodes will continue to provide service.

action

summary of remedial action

Determine why the node is not running, resolve any issues and restart the stopped node.

poolCongestion

This alarm is raised whenever over 80% of a pool’s pooled objects are in use. This is typically caused by misconfiguration; see Static SGC instance configuration. It is cleared when less than 50% of pooled objects are in use.

Note
What is a task pool?

A task pool is a pool of objects used during message processing, where each allocated object represents a message that may be processing or waiting to be processed. Each SGC node uses separate task pools for outgoing and incoming messages.
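
The raise/clear thresholds form a hysteresis band: the alarm is raised above 80% utilisation and only cleared once utilisation drops below 50%. The class below is an illustrative model of that behaviour, not part of the SGC; the same pattern applies to the workgroupCongestion and associationCongested alarms with their respective queues:

```python
class PoolCongestionMonitor:
    """Models the poolCongestion raise/clear hysteresis: raise above 80%
    utilisation, clear below 50%. Thresholds mirror the documentation."""

    RAISE_THRESHOLD = 0.80
    CLEAR_THRESHOLD = 0.50

    def __init__(self, pool_size: int):
        self.pool_size = pool_size
        self.alarm_active = False

    def update(self, in_use: int) -> bool:
        """Report current usage; returns whether the alarm is active."""
        utilisation = in_use / self.pool_size
        if not self.alarm_active and utilisation > self.RAISE_THRESHOLD:
            self.alarm_active = True   # raise poolCongestion
        elif self.alarm_active and utilisation < self.CLEAR_THRESHOLD:
            self.alarm_active = False  # clear poolCongestion
        return self.alarm_active
```

Note that between 50% and 80% the alarm keeps its previous state, so it does not flap as usage oscillates around a single threshold.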

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

poolCongestion

severity

alarm severity

MAJOR

timestamp

timestamp when the event occurred

description

short alarm description

Task Pool congestion occurred

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

nodeId

affected node

poolName

name of the affected task pool

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised whenever over 80% of a pool’s pooled objects are in use. This is typically caused by misconfiguration; see Static SGC instance configuration. It is cleared when less than 50% of pooled objects are in use.

cause

possible alarm causes

Misconfiguration

effect

potential consequences

None unless poolExhaustion alarm is also raised

action

summary of remedial action

Examine the SGC’s sizing requirements

poolExhaustion

This alarm is raised whenever a task allocation request is made on a pool whose objects are all already allocated. This is typically caused by misconfiguration; see Static SGC instance configuration. This alarm must be cleared manually.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

poolExhaustion

severity

alarm severity

MAJOR

timestamp

timestamp when the event occurred

description

short alarm description

Task Pool exhaustion occurred

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

nodeId

affected node

poolName

name of the affected task pool

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised whenever a task allocation request is made on a pool whose objects are all already allocated. This is typically caused by misconfiguration; see Static SGC instance configuration. This alarm must be cleared manually.

cause

possible alarm causes

A task allocation request is made on a task pool whose members are all in use

effect

potential consequences

Delays processing messages and/or messages being dropped

action

summary of remedial action

Examine the SGC’s sizing requirements.

workgroupCongestion

This alarm is raised when a worker’s work queue is over 80% occupied. It is cleared when the queue is less than 50% occupied.

Note
What is a worker group?

A worker group is a group of workers (threads) that are responsible for processing tasks (incoming/outgoing messages). Each worker has a separate work queue.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

workgroupCongestion

severity

alarm severity

MAJOR

timestamp

timestamp when the event occurred

description

short alarm description

Workgroup thread is congested

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

nodeId

affected node

threadIndex

affected worker index

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised when a worker’s work queue is over 80% occupied. It is cleared when the queue is less than 50% occupied.

cause

possible alarm causes

The queue of tasks waiting to be processed is larger than 80% of the configured maximum worker queue size

effect

potential consequences

Tasks may fail to be queued if the workgroup congestion hits 100%

action

summary of remedial action

Examine the SGC’s sizing requirements.

M3UA

This section describes alarms raised concerning the M3UA layer of the SGC cluster.

asConnDown

This alarm is raised when an AS connection which was active becomes inactive. This alarm can be caused either by misconfiguration at one or both ends of the M3UA association used, such as by a disagreement on the routing context to be used, or by network failure. It is cleared when the Application Server becomes active on the connection.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

asConnDown

severity

alarm severity

MAJOR

timestamp

timestamp when the event occurred

description

short alarm description

AS is down on selected connection

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

asId

the affected AS

connectionId

name of affected connection

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised when an AS connection which was active becomes inactive. This alarm can be caused either by misconfiguration at one or both ends of the M3UA association used, such as by a disagreement on the routing context to be used, or by network failure. It is cleared when the Application Server becomes active on the connection.

cause

possible alarm causes

Misconfiguration of one or both ends of the M3UA association or network failure.

effect

potential consequences

The affected AS connection cannot be used for message send or receive.

action

summary of remedial action

Correct configuration or resolve network failure.

asDown

This alarm is raised whenever a configured M3UA Application Server is not active. This alarm is typically caused by either a misconfiguration at one or both ends of an M3UA association or by network failure. It is cleared when the Application Server becomes active again.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

asDown

severity

alarm severity

MAJOR

timestamp

timestamp when the event occurred

description

short alarm description

AS identified by asId is DOWN

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

asId

the affected AS

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised whenever a configured M3UA Application Server is not active. This alarm is typically caused by either a misconfiguration at one or both ends of an M3UA association or by network failure. It is cleared when the Application Server becomes active again.

cause

possible alarm causes

Misconfiguration of one or both ends of the M3UA association or network failure.

effect

potential consequences

The Application Server is down and messages cannot be sent or received on that AS

action

summary of remedial action

Correct configuration or resolve network failure.

associationCongested

This alarm is raised whenever an SCTP association becomes congested. An association is considered congested if the outbound queue size grows to more than 80% of the configured out-queue-size for the connection. This alarm is cleared when the outbound queue size drops below 50% of the configured out-queue-size.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

associationCongested

severity

alarm severity

MINOR

timestamp

timestamp when the event occurred

description

short alarm description

Association is congested

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

connectionId

name of affected connection

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised whenever an SCTP association becomes congested. An association is considered congested if the outbound queue size grows to more than 80% of the configured out-queue-size for the connection. This alarm is cleared when the outbound queue size drops below 50% of the configured out-queue-size.

cause

possible alarm causes

The association’s outbound queue size has grown to more than 80% of the configured out-queue-size.

effect

potential consequences

Higher latency sending M3UA messages; if the queue becomes full, message sends will fail.

action

summary of remedial action

Examine the SGC’s sizing requirements

associationDown

This alarm is raised whenever a configured connection is not active. This alarm is typically caused either by a misconfiguration at one or both ends of the M3UA association or by network failure. It is cleared when an association becomes active again.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

associationDown

severity

alarm severity

MAJOR

timestamp

timestamp when the event occurred

description

short alarm description

Association is not established

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

connectionId

name of affected connection

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised whenever a configured connection is not active. This alarm is typically caused either by a misconfiguration at one or both ends of the M3UA association or by network failure. It is cleared when an association becomes active again.

cause

possible alarm causes

Misconfiguration at one or both ends of the M3UA association or network failure.

effect

potential consequences

The SCTP association will not be used.

action

summary of remedial action

Correct configuration or resolve network failure.

associationPathDown

This alarm is raised whenever a network path within an association becomes unreachable but the association as a whole remains functional because at least one other path remains available. This alarm is only raised for associations using SCTP’s multi-homing feature (i.e. having multiple connection IP addresses assigned to a single connection). Association path failure is typically caused by either misconfiguration at one or both ends or by network failure. This alarm will be cleared when SCTP signals that the path is available again, or when all paths have failed, in which case a single associationDown alarm will be raised to replace all the former associationPathDown alarms.

Note This alarm is also raised briefly during association establishment, for every path within the association that SCTP does not consider primary, while SCTP tests the alternative paths.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

associationPathDown

severity

alarm severity

MINOR

timestamp

timestamp when the event occurred

description

short alarm description

SCTP association path is down

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

connectionId

name of affected connection

pathId

the peer address which has become unreachable

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised whenever a network path within an association becomes unreachable but the association as a whole remains functional because at least one other path remains available. This alarm is only raised for associations using SCTP’s multi-homing feature (i.e. having multiple connection IP addresses assigned to a single connection). Association path failure is typically caused by either misconfiguration at one or both ends or by network failure. This alarm will be cleared when SCTP signals that the path is available again, or when all paths have failed, in which case a single associationDown alarm will be raised to replace all the former associationPathDown alarms.

cause

possible alarm causes

A network path within the SCTP association has become unreachable.

effect

potential consequences

Other path(s) within the association will be used.

action

summary of remedial action

Correct configuration or resolve network failure.

associationUnresolvable

This alarm is raised whenever an association is detected to be configured with an unresolvable remote address. This alarm will be cleared whenever a connect attempt is made using any address on the association and the unresolvable address has since become resolvable. It may also be cleared if the connection is disabled and the address has become resolvable.

Since automatic clearing of the alarm depends on association activity (reconnect attempts or disabling), this may not happen for some time, for example if there are alternate addresses and one of those was used for a successful connect. In this case the user may prefer to clear the alarm manually.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

associationUnresolvable

severity

alarm severity

MINOR

timestamp

timestamp when the event occurred

description

short alarm description

SCTP association address could not be resolved

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

connectionId

name of affected connection

address

the peer address which could not be resolved

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised whenever an association is detected to be configured with an unresolvable remote address. This alarm will be cleared whenever a connect attempt is made using any address on the association and the unresolvable address has since become resolvable. It may also be cleared if the connection is disabled and the address has become resolvable.

cause

possible alarm causes

An association has been configured with an unresolvable remote address.

effect

potential consequences

If this is the only address on the association, then the association will be DOWN. If other resolvable addresses exist, one of these will be used to establish the association.

action

summary of remedial action

Correct configuration or resolve network failure (e.g. DNS lookup).

dpcRestricted

This alarm is raised when the SGC receives a Destination Restricted message from its remote SGP or IPSP peer for a remote destination point code. It is cleared when the DPC restricted state abates on a particular SCTP association.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

dpcRestricted

severity

alarm severity

MINOR

timestamp

timestamp when the event occurred

description

short alarm description

DPC is restricted on this connection in the context of AS

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

asId

the affected AS

dpcId

the affected DPC

connectionId

name of affected connection

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised when the SGC receives a Destination Restricted message from its remote SGP or IPSP peer for a remote destination point code. It is cleared when the DPC restricted state abates on a particular SCTP association.

cause

possible alarm causes

The remote SGP or IPSP peer has sent a Destination Restricted message to the SGC.

effect

potential consequences

The SGC will route traffic to the affected DPC via an alternate route if possible.

action

summary of remedial action

None at the SGC.

dpcUnavailable

This alarm is raised when a configured DPC is unreachable through a particular SCTP association. It is cleared when a DPC becomes reachable again through the particular SCTP association.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

dpcUnavailable

severity

alarm severity

MINOR

timestamp

timestamp when the event occurred

description

short alarm description

Dpc is not reachable on this connection in the context of AS

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

asId

the affected AS

dpcId

the affected DPC

connectionId

name of affected connection

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised when a configured DPC is unreachable through a particular SCTP association. It is cleared when a DPC becomes reachable again through the particular SCTP association.

cause

possible alarm causes

Network failure or misconfiguration.

effect

potential consequences

The DPC cannot be reached through the affected connection and affected AS.

action

summary of remedial action

Correct configuration or resolve network failure.

mtpCongestion

This alarm is raised whenever a remote MTP reports congestion for an association and a specific destination point code normally reachable through that association. It is cleared when the remote MTP reports that congestion has abated.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

mtpCongestion

severity

alarm severity

MINOR

timestamp

timestamp when the event occurred

description

short alarm description

MTP congestion reported (SCON)

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

connectionId

name of affected connection

dpcId

the affected DPC

mtpCongestionLevel

the reported MTP congestion level (ANSI only)

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised whenever a remote MTP reports congestion for an association and a specific destination point code normally reachable through that association. It is cleared when the remote MTP reports that congestion has abated.

cause

possible alarm causes

The remote MTP has reported congestion.

effect

potential consequences

Standard MTP congestion procedures are followed.

action

summary of remedial action

None; the alarm is automatically cleared when the remote MTP reports abatement.

SCCP

This section describes Alarms raised by the SCCP subsystem.

sccpLocalSsnProhibited

This alarm is raised whenever all previously connected TCAP stacks (such as the CGIN RA) using a particular SSN become disconnected. This is typically caused by either network failure or administrative action (such as deactivating an RA entity in Rhino). It is cleared when at least one TCAP stack configured for the affected SSN connects.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

sccpLocalSsnProhibited

severity

alarm severity

MAJOR

timestamp

timestamp when the event occurred

description

short alarm description

SSN that is prohibited

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

ssn

affected SubSystem

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised whenever all previously connected TCAP stacks (such as the CGIN RA) using a particular SSN become disconnected. This is typically caused by either network failure or administrative action (such as deactivating an RA entity in Rhino). It is cleared when at least one TCAP stack configured for the affected SSN connects.

cause

possible alarm causes

All TCAP stacks registered for the affected SSN have disconnected.

effect

potential consequences

Messages received for the affected SSN will follow SCCP return procedures.

action

summary of remedial action

Correct network failure or resolve administrative action.

sccpRemoteNodeCongestion

This alarm is raised whenever a remote SCCP node reports congestion. It is cleared when the congestion abates. This alarm will only be emitted when the sccp-variant in General Configuration is configured for ITU. See mtpCongestion for information on MTP-level congestion notification (SCON/MTP-STATUS) alarms.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

sccpRemoteNodeCongestion

severity

alarm severity

MINOR

timestamp

timestamp when the event occurred

description

short alarm description

Remote DPC reports congestion

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

dpc

affected PointCode

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised whenever a remote SCCP node reports congestion. It is cleared when the congestion abates. This alarm will only be emitted when the sccp-variant in General Configuration is configured for ITU. See mtpCongestion for information on MTP-level congestion notification (SCON/MTP-STATUS) alarms.

cause

possible alarm causes

The remote SCCP has reported congestion.

effect

potential consequences

ITU-T SCCP congestion algorithms will be applied to the specified DPC.

action

summary of remedial action

None; this alarm is automatically cleared when congestion abates.

sccpRemoteNodeNotAvailable

This alarm is raised whenever a remote SCCP node becomes unavailable. It is cleared when the remote node becomes available.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

sccpRemoteNodeNotAvailable

severity

alarm severity

MINOR

timestamp

timestamp when the event occurred

description

short alarm description

Remote DPC is no longer available

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

dpc

affected PointCode

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised whenever a remote SCCP node becomes unavailable. It is cleared when the remote node becomes available.

cause

possible alarm causes

The remote SCCP node has become unavailable.

effect

potential consequences

The remote SCCP will not have messages sent to it.

action

summary of remedial action

None; this alarm is cleared when the remote SCCP becomes available.

sccpRemoteSsnProhibited

This alarm is raised whenever a remote SCCP node reports that a particular SSN is prohibited. It is cleared when the remote SCCP node reports that the affected SSN is available.

The following table shows the basic attributes raised as part of this alarm.

Attribute Description Values of constants

id

unique alarm identifier

name

name of alarm type

sccpRemoteSsnProhibited

severity

alarm severity

MINOR

timestamp

timestamp when the event occurred

description

short alarm description

Remote subsystem is prohibited

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following alarm-specific parameters:

Attribute Description

dpc

affected PointCode

ssn

affected SubSystem

This alarm’s extended attributes have the following fixed values:

Attribute Description Value

longDescription

long alarm description

This alarm is raised whenever a remote SCCP node reports that a particular SSN is prohibited. It is cleared when the remote SCCP node reports that the affected SSN is available.

cause

possible alarm causes

The remote SCCP has reported the affected SSN to be prohibited.

effect

potential consequences

The affected SSN at the affected point code will not have messages sent to it.

action

summary of remedial action

None; this alarm is cleared when the remote SCCP reports that the affected SSN is available.

Notifications

What are SS7 SGC notifications?

Notifications in the SS7 SGC stack alert the administrator about infrequent major events. Subsystems in the SS7 SGC stack emit notifications upon detecting an error condition or an event of high importance.

Management clients may use either Java JMX MBean or SNMP trap/notifications to receive notifications emitted by the SS7 SGC stack.

Below are descriptions of the generic notification attributes and the notification types that the SGC may emit.

You can review the history of emitted notifications using commands exposed by the Command-Line Management Console, which is distributed with the SGC SS7 stack.

How are notifications different from alarms?

Notifications are very similar to Alarms in the SGC stack:

  • Notifications have attributes (general and notification type-specific attributes).

  • The SGC stack stores a history of emitted notifications.

Warning The difference is that notifications are emitted (sent) whenever a particular event occurs; and there is no notion of active notifications or a notification being cleared.

Generic notification attributes

Notification attributes represent information about events that result in a notification being emitted. Each notification type has the following generic attributes, plus a group of attributes specific to that notification type (described in the following sections).

Attribute Description

id

unique notification identifier

name

name of notification type

severity

notification severity:

  • CRITICAL — application encountered an error which prevents it from continuing (it can no longer provide services)

  • MAJOR — application encountered an event which significantly impacts delivered services; some services may no longer be available

  • MINOR — application reports an event which does not have significant impact on delivered services

  • INFO — application reports an information event which does not have any impact on delivered services

  • CLEARED — alarm has been cleared.

timestamp

timestamp when the event occurred

parameters

comma-separated list of key=value pairs specific to the notification. The catalogue below documents the parameters each notification may have.

description

short description of the notification

longDescription

longer description of the notification.
Will be omitted if the SNMP node is not configured for extended-traps

Notification types

This section describes all notification types that can be emitted by the SGC cluster:

mtpDecodeErrors

This notification contains information about the number of badly formatted messages at the MTP layer and the number of messages directed to an unsupported MTP user. This notification is emitted periodically, with a summary of the number of errors in that period. It is not emitted if there are no errors.

The following table shows the basic attributes emitted with this notification:

Attribute Description Values of constants

id

unique notification identifier

name

name of notification type

mtpDecodeErrors

severity

notification severity

INFO

timestamp

timestamp when the event occurred

description

short description of the notification

Periodic report on M3UA decode errors

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following notification-specific parameters:

Attribute Description

nodeId

the affected node

connectionId

name of affected connection

decodeFailuresLastTime

number of message decode failures during report interval

unsupportedUserFailuresTime

number of messages with unsupported user part during report interval

This notification’s extended parameters have the following fixed values:

Attribute Description Value

longDescription

long description of the notification

This notification contains information about the number of badly formatted messages at the MTP layer and the number of messages directed to an unsupported MTP user. This notification is emitted periodically, with a summary of the number of errors in that period. It is not emitted if there are no errors.

sccpDecodeErrors

This notification contains information about the number of badly formatted messages at the SCCP layer and the number of messages directed to a prohibited SSN. This notification is emitted periodically, with a summary of the number of errors in that period. It is not emitted if there are no errors.

The following table shows the basic attributes emitted with this notification:

Attribute Description Values of constants

id

unique notification identifier

name

name of notification type

sccpDecodeErrors

severity

notification severity

INFO

timestamp

timestamp when the event occurred

description

short description of the notification

Errors in SCCP processing notification

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following notification-specific parameters:

Attribute Description

nodeId

affected node

decodeFailuresLastTime

number of message decode failures during report interval

ssnFailuresLastTime

number of messages directed to prohibited SSN during report interval

This notification’s extended parameters have the following fixed values:

Attribute Description Value

longDescription

long description of the notification

This notification contains information about the number of badly formatted messages at the SCCP layer and the number of messages directed to a prohibited SSN. This notification is emitted periodically, with a summary of the number of errors in that period. It is not emitted if there are no errors.

tcapDecodeErrors

This notification contains information about the number of badly formatted TCAP messages and the number of messages that the SGC is unable to forward to a TCAP stack (CGIN RA). This notification is emitted periodically, with a summary of the number of errors in that period. It is not emitted if there are no errors.

The following table shows the basic attributes emitted with this notification:

Attribute Description Values of constants

id

unique notification identifier

name

name of notification type

tcapDecodeErrors

severity

notification severity

INFO

timestamp

timestamp when the event occurred

description

short description of the notification

Periodic report on TCAP decode errors

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following notification-specific parameters:

Attribute Description

nodeId

affected node

decodeFailuresLastTime

number of message decode failures during report interval

missingPeerLastTime

number of messages that the SGC was unable to forward to a TCAP stack

This notification’s extended parameters have the following fixed values:

Attribute Description Value

longDescription

long description of the notification

This notification contains information about the number of badly formatted TCAP messages and the number of messages that the SGC is unable to forward to a TCAP stack (CGIN RA). This notification is emitted periodically, with a summary of the number of errors in that period. It is not emitted if there are no errors.

tcapStackRegister

This notification is emitted whenever a TCAP stack registers with the SGC.

The following table shows the basic attributes emitted with this notification:

Attribute Description Values of constants

id

unique notification identifier

name

name of notification type

tcapStackRegister

severity

notification severity

INFO

timestamp

timestamp when the event occurred

description

short description of the notification

TCAP stack registered

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following notification-specific parameters:

Attribute Description

ssn

registered SSN

nodeId

affected node

stackIp

IP address of the stack

prefix

allocated transaction prefix

This notification’s extended parameters have the following fixed values:

Attribute Description Value

longDescription

long description of the notification

This notification is emitted whenever a TCAP stack registers with the SGC.

tcapStackUnregister

This notification is emitted whenever a TCAP stack deregisters from the SGC.

The following table shows the basic attributes emitted with this notification:

Attribute Description Values of constants

id

unique notification identifier

name

name of notification type

tcapStackUnregister

severity

notification severity

INFO

timestamp

timestamp when the event occurred

description

short description of the notification

TCAP stack unregistered

Additionally, the parameters attribute is a basic attribute consisting of a comma-separated list of key=value pairs containing the following notification-specific parameters:

Attribute Description

ssn

registered SSN

nodeId

affected node

stackIp

IP address of the stack

prefix

allocated transaction prefix

This notification’s extended parameters have the following fixed values:

Attribute Description Value

longDescription

long description of the notification

This notification is emitted whenever a TCAP stack deregisters from the SGC.

Statistics

What are SS7 SGC statistics?

Statistics are usage, status, and health information exposed by the SS7 SGC stack. Each layer of the SS7 SGC stack exposes a set of statistics that provide insight into the current operational state of the cluster. The statistics data is available cluster-wide, independent of which particular node happens to be responsible for gathering a particular set of statistics.

Warning
Statistics update interval

To maximize performance, statistics exposed by SGC cluster nodes are updated periodically (every 1 second), so any management tool polling SGC statistics should use a polling interval greater than 1 second.

The SGC statistics subsystem collects multiple types of statistical data, which can be separated into two broad categories:

  • frequently updated statistic data types:

    • counter — monotonically increasing integer values; for example the number of received messages

    • gauge — the amount of a particular object or item, which can increase and decrease within some arbitrary bound, typically between 0 and some positive number; for example, the number of processing threads in a worker group.

  • character values that can change infrequently or not at all; these include:

    • identifying information — usually a name of an object for which a set of statistics is gathered; for example, the name of a node

    • status information — information about the state of a given configuration object; for example, the SCTP association state ESTABLISHED.
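Because counters are monotonically increasing, a management tool typically derives rates (for example, messages per second) by differencing two successive samples taken more than one second apart, in line with the statistics update interval above. A minimal sketch, with illustrative names and sample values:

```python
def counter_rate(prev_value: int, prev_ts: float,
                 curr_value: int, curr_ts: float) -> float:
    """Derive a per-second rate from two samples of a monotonically
    increasing counter (e.g. RXCount). The sampling interval should be
    greater than 1 second to match the SGC statistics update period."""
    dt = curr_ts - prev_ts
    if dt <= 0:
        raise ValueError("samples must be taken at increasing timestamps")
    return (curr_value - prev_value) / dt

# Two RXCount samples taken 5 seconds apart: 1000, then 1500
rate = counter_rate(1000, 0.0, 1500, 5.0)  # 100.0 messages/second
```

Gauges, by contrast, are read directly at each sample since they can move in either direction.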

The SGC statistics subsystem is queried using commands exposed by the Command-Line Management Console, which is distributed with the SGC SS7 stack.

Statistics

Below are details of the exposed subsystem-specific statistics:

M3UA

The M3UA layer exposes statistical data about AS, Association, and DPC.

Note The JMX Object name for M3UA statistics is SGC:module=m3ua,type=info
AsInfo

The set of configured Application Servers currently associated with connections.

Statistic Description

RXCount

number of received messages directed to this AS through a particular connection

TXCount

number of sent messages originating from this AS through a particular connection

asId

name of the AS

connectionId

name of the connection

nodeId

name of the node

status

status of this AS as seen through a particular connection
(possible values: INACTIVE, ACTIVE)

Note This information is also exposed through multiple MBeans named SGC:type=info,module=m3ua,elem=AsInfo,id=CONNECTION_NAME&AS_NAME, where id is a concatenation of the relevant connection and AS names.
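The id portion of these per-entry MBean names can be assembled from the connection and AS names. A sketch of the string construction, using illustrative connection and AS names:

```python
def as_info_object_name(connection_name: str, as_name: str) -> str:
    """Build the JMX ObjectName string for an AsInfo entry, where id is
    the connection name and the AS name joined with '&'. Assumes the
    names contain no characters that are special in JMX ObjectNames
    (',', '=', ':', '"', '*', '?')."""
    return ("SGC:type=info,module=m3ua,elem=AsInfo,id=%s&%s"
            % (connection_name, as_name))

# Example: AS "as1" as seen through connection "conn1"
name = as_info_object_name("conn1", "as1")
```

The same id convention (values joined with `&` where a name has multiple components) applies to the other per-entry MBeans described in this section.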
AssociationInfo

The set of SCTP associations defined within the cluster.

Statistic Description

RXDataCount

number of received M3UA Payload Data Messages

RXErrorCount

number of received M3UA Management Error Messages

RXSctpCount

number of received M3UA Messages

TXDataCount

number of transmitted M3UA Payload Data Messages

TXErrorCount

number of transmitted M3UA Management Error Messages

TXSctpCount

number of transmitted M3UA Messages

connectionId

name of connection

nodeId

name of node

numberOfInStreams

number of input streams

numberOfOutStreams

number of output streams

outQSize

number of messages waiting to be written into SCTP association

peerAddresses

peer IP addresses

sctpOptions

socket options associated with an SCTP channel

status

status of this connection
(possible values: INACTIVE, ESTABLISHED)

Note This information is also exposed through multiple MBeans named SGC:type=info,module=m3ua,elem=AssociationInfo,id=CONNECTION_NAME, where id is the name of the relevant connection.
DpcInfo

The set of Destination Point Codes that are associated with connections at this point in time.

Statistic Description

ConnectionId

name of connection

Dpc

destination point code

DpcId

name of DPC

Status

status of this DPC
(possible values: ACTIVE, CONGESTED, INACTIVE, RESTRICTED)

MtpCongestionLevel

ANSI SCCP only: the MTP congestion level (possible values: 0 - 3)

Note This information is also exposed through multiple MBeans named SGC:type=info,module=m3ua,elem=DpcInfo,id=CONNECTION_NAME&DPC, where id is a concatenation of the relevant connection name and DPC.

SCCP

The SCCP layer exposes statistical data about local SSNs, GT translation rules (outgoing), PC routing, remote SSNs, SCCP messages, and SCCP errors.

Note The JMX Object name for SCCP statistics is SGC:module=sccp,type=info
LocalSsnInfo

The set of local SSNs; that is, the TCAP stacks (CGIN RAs, scenario simulators) connected to the cluster, each representing a particular SSN.

Statistic Description

RXCount

number of messages received by the SSN

TXCount

number of messages originated by the SSN

ssn

local SSN

status

status of this SSN
(possible values: PROHIBITED, CONGESTED, ALLOWED)

NUnitdataReqCount

number of N-UNITDATA req primitives received by the SSN

segmentationFailureCount

number of segmentation failures at this SSN

NUnitdataIndCount

number of N-UNITDATA ind primitives received by this SSN

reassembledNUnitdataIndCount

number of reassembled N-UNITDATA ind primitives received by this SSN

reassemblyFailureCount

number of reassembly failures at this SSN

Note This information is also exposed through multiple MBeans named SGC:type=info,module=sccp,elem=LocalSsnInfo,ssn=SSN_NUMBER, where ssn is the value of the relevant local SSN.
OgtInfo

The set of Global Title translation rules currently used for outgoing messages. Different translation rules might be in effect at the same time on different nodes, depending on the prefer-local attribute of the outgoing-gtt configuration object.

The set of currently used translation rules depends on:

  • the priority attribute of the outgoing-gtt configuration object

  • the priority attribute of the route configuration object

  • DPC reachability status (GT translations that result in an unreachable DPC are ignored).

Note
Local cluster PC

Entries with an empty connId attribute and an rc attribute value of -1 represent GT translations that result in a PC of the SGC Cluster itself. Messages that are to be routed to a local PC are load balanced among TCAP Stacks within the local cluster.

Statistic Description

addrInfo

address string

addrType

values of attributes trtype, numplan, natofaddr, and isPrefix, as defined for this rule

connId

name of connection on which messages with matching GT will be transmitted

dpc

destination point code to which messages with matching GT will be transmitted

nodeId

name of the node on which translation rule is used

rc

routing context which will be used when transmitting messages with matching GT

routeNodeId

name of node where SCTP connection is established

PcInfo

The set of associations and RCs across which traffic for a given DPC is currently load balanced by each node.

Note
Local cluster PC

Entries with an empty connId attribute and an rc attribute value of -1 represent nodes of the SGC Cluster itself. Messages that are to be routed to the local DPC are load balanced among TCAP Stacks within the local cluster.

Statistic Description

dpcId

name of DPC

connId

name of connection on which messages with matching DPC will be transmitted

dpc

destination point code

nodeId

name of node where connection is established

rc

routing context which will be used when transmitting messages with matching DPC

MTPCongestionLevel

(ANSI SCCP only) MTP congestion level

RemoteSsnInfo

The set of remote SSNs whose status is known.

Statistic Description

dpc

destination point code

ssn

remote SSN

status

status of this SSN
(possible values: PROHIBITED, CONGESTED, ALLOWED)

SccpStats

Statistics for SCCP messages received and sent by each local SSN.

Statistic Description

ssn

local SSN

udtSent

number of UDT messages sent by this SSN

udtReceived

number of UDT messages received by this SSN

xudtSent

number of XUDT messages sent by this SSN

xudtReceived

number of XUDT messages received by this SSN

ludtReceived

number of LUDT messages received by this SSN

udtsSent

number of UDTS messages sent by this SSN

udtsReceived

number of UDTS messages received by this SSN

xudtsSent

number of XUDTS messages sent by this SSN

xudtsReceived

number of XUDTS messages received by this SSN

ludtsSent

number of LUDTS messages sent by this SSN

ludtsReceived

number of LUDTS messages received by this SSN

xudtSegmentationSent

number of XUDT messages sent by this SSN with a segmentation parameter

xudtSegmentationReceived

number of XUDT messages received by this SSN with a segmentation parameter

xudtNoSegmentationSent

number of XUDT messages sent by this SSN without a segmentation parameter

xudtNoSegmentationReceived

number of XUDT messages received by this SSN without a segmentation parameter

SccpErrorStats

Statistics for SCCP return causes received and sent by each local SSN.

Statistic Description

ssn

local SSN

noTranslationForAddressOfSuchNatureSent

number of NO_TRANSLATION_FOR_ADDR_OF_SUCH_NATURE SCCP return causes sent by this SSN

noTranslationForThisSpecificAddressSent

number of NO_TRANSLATION_FOR_SPECIFIC_ADDR SCCP return causes sent by this SSN

subsystemCongestionSent

number of SUBSYSTEM_CONGESTION SCCP return causes sent by this SSN

subsystemFailureSent

number of SUBSYSTEM_FAILURE SCCP return causes sent by this SSN

unequippedUserSent

number of UNEQUIPPED_USER SCCP return causes sent by this SSN

mtpFailureSent

number of MTP_FAILURE SCCP return causes sent by this SSN

networkCongestionSent

number of NETWORK_CONGESTION SCCP return causes sent by this SSN

unqualifiedSent

number of UNQUALIFIED SCCP return causes sent by this SSN

errorInMessageTransportSent

number of ERROR_IN_MESSAGE_TRANSPORT SCCP return causes sent by this SSN

destinationCannotPerformReassemblySent

number of DESTINATION_CANNOT_PERFORM_REASSEMBLY SCCP return causes sent by this SSN

sccpFailureSent

number of SCCP_FAILURE SCCP return causes sent by this SSN

hopCounterViolationSent

number of HOP_COUNTER_VIOLATION SCCP return causes sent by this SSN

segmentationNotSupportedSent

number of SEGMENTATION_NOT_SUPPORTED SCCP return causes sent by this SSN

segmentationFailureSent

number of SEGMENTATION_FAILURE SCCP return causes sent by this SSN

otherSent

number of other SCCP return causes sent by this SSN

noTranslationForAddressOfSuchNatureReceived

number of NO_TRANSLATION_FOR_ADDR_OF_SUCH_NATURE SCCP return causes received by this SSN

noTranslationForThisSpecificAddressReceived

number of NO_TRANSLATION_FOR_SPECIFIC_ADDR SCCP return causes received by this SSN

subsystemCongestionReceived

number of SUBSYSTEM_CONGESTION SCCP return causes received by this SSN

subsystemFailureReceived

number of SUBSYSTEM_FAILURE SCCP return causes received by this SSN

unequippedUserReceived

number of UNEQUIPPED_USER SCCP return causes received by this SSN

mtpFailureReceived

number of MTP_FAILURE SCCP return causes received by this SSN

networkCongestionReceived

number of NETWORK_CONGESTION SCCP return causes received by this SSN

unqualifiedReceived

number of UNQUALIFIED SCCP return causes received by this SSN

errorInMessageTransportReceived

number of ERROR_IN_MESSAGE_TRANSPORT SCCP return causes received by this SSN

destinationCannotPerformReassemblyReceived

number of DESTINATION_CANNOT_PERFORM_REASSEMBLY SCCP return causes received by this SSN

sccpFailureReceived

number of SCCP_FAILURE SCCP return causes received by this SSN

hopCounterViolationReceived

number of HOP_COUNTER_VIOLATION SCCP return causes received by this SSN

segmentationNotSupportedReceived

number of SEGMENTATION_NOT_SUPPORTED SCCP return causes received by this SSN

segmentationFailureReceived

number of SEGMENTATION_FAILURE SCCP return causes received by this SSN

otherReceived

number of other SCCP return causes received by this SSN

TCAP

The TCAP layer exposes statistics data about connected TCAP Stacks.

Note The JMX Object name for TCAP statistics is SGC:module=tcap,type=info
TcapConnInfo

The set of connected TCAP stacks (such as the CGIN RA).

Statistic Description

QSize

number of messages waiting to be written into the TCP connection

RXCount

number of messages directed from SGC to TCAP stack

TXCount

number of messages directed from TCAP stack to SGC

connectTime

time and date when connection was established

localIp

local SGC IP used by connection

localPort

local SGC port used by connection

prefix

transaction prefix assigned to TCAP stack

remoteIp

IP from which TCAP stack originated connection to SGC

remotePort

port from which TCAP stack originated connection to SGC

serviceNodeId

node name to which TCAP stack is connected

ssn

SSN that TCAP stack is representing

status

current connection status: ACTIVE, STOPPED, SHUTDOWN_INITIATED or SHUTDOWN_RECEIVED

tcapStackId

the identity of the connected TCAP stack

migratedPrefixes

the migrated prefixes that this stack is handling in addition to its own

Note This information is also exposed through multiple MBeans named SGC:type=info,module=tcap,elem=TcapConnInfo,ssn=SSN_NUMBER,prefix=ASSIGNED_PREFIX, where ssn is the SSN represented by the connected TCAP stack, and prefix is the transaction-ID prefix assigned to the connected TCAP stack.
ITUTcapStats

Statistics for ITU-T (Q.77x) TCAP messages sent and received by the local SSN.

Statistic Description

ssn

local SSN

beginSent

number of TC-BEGIN messages sent by the SSN

beginReceived

number of TC-BEGIN messages received by the SSN

continueSent

number of TC-CONTINUE messages sent by the SSN

continueReceived

number of TC-CONTINUE messages received by the SSN

endSent

number of TC-END messages sent by the SSN

endReceived

number of TC-END messages received by the SSN

abortSent

number of TC-ABORT messages sent by the SSN

abortReceived

number of TC-ABORT messages received by the SSN

uniSent

number of TC-UNI messages sent by the SSN

uniReceived

number of TC-UNI messages received by the SSN

ANSITcapStats

Statistics for ANSI (T1.114) TCAP messages sent and received by the local SSN.

Statistic Description

ssn

local SSN

queryWithPermissionSent

number of QueryWithPermission messages sent by the SSN

queryWithPermissionReceived

number of QueryWithPermission messages received by the SSN

queryWithoutPermissionSent

number of QueryWithoutPermission messages sent by the SSN

queryWithoutPermissionReceived

number of QueryWithoutPermission messages received by the SSN

conversationWithPermissionSent

number of ConversationWithPermission messages sent by the SSN

conversationWithPermissionReceived

number of ConversationWithPermission messages received by the SSN

conversationWithoutPermissionSent

number of ConversationWithoutPermission messages sent by the SSN

conversationWithoutPermissionReceived

number of ConversationWithoutPermission messages received by the SSN

responseSent

number of Response messages sent by the SSN

responseReceived

number of Response messages received by the SSN

abortSent

number of Abort messages sent by the SSN

abortReceived

number of Abort messages received by the SSN

uniSent

number of Uni messages sent by the SSN

uniReceived

number of Uni messages received by the SSN

TcapErrorStats

Statistics for TCAP (ITU-T Q.77x and ANSI T1.114) processing errors by SSN.

Statistic Description

ssn

local SSN

unrecognizedTransactionId

number of UNRECOGNIZED_TRANSACTION_ID errors generated within the SGC’s TCAP stack for this SSN

ssnNotFound

number of SSN_NOT_FOUND errors generated within the SGC’s TCAP stack for this SSN

resourceLimitation

number of RESOURCE_LIMITATION errors generated within the SGC’s TCAP stack for this SSN

decodeFailure

number of DECODE_FAILURE errors generated within the SGC’s TCAP stack for this SSN

destinationTransactionIdDecodeFailure

number of DESTINATION_TRANSACTION_ID_DECODE_FAILURE errors generated within the SGC’s TCAP stack for this SSN

TOP

The SGC node processing-load statistics information.

Note The JMX Object name for TOP statistics is SGC:module=top,type=info
HealthInfo

The set of current load-processing statistics for nodes in the cluster.

Statistic Description

allocatedIndTasks

number of allocated task objects, each of which represents an incoming message that is being processed or waiting to be processed

allocatedReqTasks

number of allocated task objects, each of which represents an outgoing message that is being processed or waiting to be processed

contextExecutionCount

cumulative number of tasks representing incoming or outgoing messages that have finished processing

contextExecutionTime

cumulative wall-clock time in milliseconds that tasks representing incoming or outgoing messages were in the allocated state

executorQueueSize

number of background tasks, mainly related to the SCCP management state, queued for execution

nodeId

name of the node

workerExecutionCount

cumulative number of tasks (for incoming or outgoing messages) processed by worker threads

workerExecutionTime

cumulative wall-clock time in milliseconds spent by worker threads processing tasks (for incoming or outgoing messages)

workerGroupSize

number of worker threads used to process tasks (for incoming or outgoing messages)

forceAllocatedIndTasks

cumulative number of task objects that had to be force allocated (i.e. not from the task pool) in order to process incoming messages

forceAllocatedReqTasks

cumulative number of task objects that had to be force allocated (i.e. not from the task pool) in order to process outgoing messages
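
Since workerExecutionCount and workerExecutionTime are cumulative, their ratio gives the average per-task processing time. The sketch below uses hypothetical sampled values, not output from a real SGC node:

```java
public class WorkerLoadExample {
    public static void main(String[] args) {
        // Hypothetical values sampled from the HealthInfo statistics
        long workerExecutionCount = 120000; // tasks processed so far
        long workerExecutionTime  = 60000;  // cumulative milliseconds

        // Average worker processing time per task, in milliseconds
        double avgMs = (double) workerExecutionTime / workerExecutionCount;
        System.out.println(avgMs); // 0.5
    }
}
```

In practice you would sample the counters twice and divide the deltas, which yields the average over a recent interval rather than since node start-up.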

ClusterVersionInfo

The statistics showing the current cluster version and state.

Statistic Description

clusterId

the name of the SGC cluster

clusterMode

the current cluster mode: NORMAL, UPGRADE_MULTI_VERSION, UPGRADE_CONVERTING_DATA, REVERT_MULTI_VERSION or REVERT_CONVERTING_DATA

currentClusterFormat

the current cluster data format

targetClusterFormat

the target cluster format, relevant only in UPGRADE_CONVERTING_DATA and REVERT_* modes

originalClusterFormat

the original cluster format, relevant only when not in NORMAL mode

NodeVersionInfo

The statistics showing each node’s version.

Statistic Description

nodeName

the name of the SGC node

uuid

the SGC’s UUID

sgcVersion

the SGC version running on the node

nativeFormat

the SGC’s native distributed data format

supportedFormats

the distributed data formats supported by the SGC

Logging

About the Logging subsystem

The Logging subsystem in the SGC Stack is based on the Simple Logging Facade for Java (SLF4J), which serves as a simple facade or abstraction for various logging frameworks, such as java.util.logging, logback, and log4j. SLF4J allows the end user to plug in the desired logging framework at deployment time. The standard SGC Stack distribution uses SLF4J backed by the Apache Log4J logging framework (version 1.x).

By default, the SGC stack outputs a minimal set of information to standard output; this output is used before the logging subsystem has been initialized, or when a severe error involves a logging subsystem malfunction. The default start-up script for the SGC stack redirects the standard output to SGC_HOME/logs/startup.<timestamp>, where <timestamp> denotes the SGC start time. This file is rolled over after reaching a size of 100 MB, with at most 10 backups.

Note
SGC_HOME
SGC_HOME here represents the path to the SGC Stack installation directory.

Logger names, levels and appenders

Log4J logging architecture includes logger names, log levels, and log appenders.

Logger names

Subsystems within the SGC stack send log messages to specific loggers. For example, the alarming.log logger receives messages about alarms being raised.

Examples of logger names include:

  • root — the root logger, from which all loggers are derived (can be used to change the log level for all loggers at once)

  • com.cts.ss7.SGCShutdownHook — for log messages related to the SGC shutdown process

  • com.cts.ss7.sccp.layer.scmg.SccpManagement — for log messages related to incoming SCCP management messages.

Log levels

Log levels can be assigned to individual loggers to filter how much information the SGC produces:

Log level Information sent

OFF

no messages sent to logs (not recommended)

FATAL

error messages for unrecoverable errors only (not recommended)

ERROR

error messages (not recommended)

WARN

warning messages

INFO

informational messages (the default)

DEBUG

messages containing useful debugging information

TRACE

messages containing verbose debugging information

Each log level logs all messages for that level and above. For example, if a logger is set to the WARN level, messages logged at the WARN, ERROR, and FATAL levels will all be logged.

If a logger is not assigned a log level, it inherits its parent’s. For example, if the com.cts.ss7.SGCShutdownHook logger has not been assigned a log level, it will have the same effective log level as the com.cts.ss7 logger.

The root logger is a special logger that is considered the parent of all other loggers. By default, the root logger is configured with the DEBUG log level, but the com logger (the parent of all loggers used by the SGC) overrides this with the WARN log level. As a result, all other SGC loggers output log messages at the WARN level or above unless explicitly configured otherwise.

Log appenders

Log appenders append log messages to destinations such as the console, a log file, socket, or Unix syslog daemon. At runtime, when SGC logs a message (as permitted by the log level of the associated logger), SGC sends the message to the log appender for writing. Types of log appenders include:

  • file appenders — which append messages to files (and may be rolling file appenders)

  • console appenders — which send messages to the standard out

  • custom appenders — which you configure to receive only messages for particular loggers.

Rolling file appenders

Typically, to manage disk usage, administrators are interested in sending log messages to a set of rolling files. They do this by setting up rolling file appenders which:

  • create new log files based on file size / daily / hourly

  • rename old log files as numbered backups

  • delete old logs when a certain number of them have been archived

Note You can configure the size and number of rolled-over log files.
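
As an illustration, a size-based rolling file appender could be defined in Log4J 1.x XML configuration as follows. This is a sketch only: the appender name, file path, and layout pattern are hypothetical and not part of the default SGC configuration.

```xml
<!-- Hypothetical rolling file appender: rolls at 100 MB, keeps 10 backups -->
<appender name="exampleRollingAppender" class="org.apache.log4j.RollingFileAppender">
  <param name="File" value="logs/example.log"/>
  <param name="MaxFileSize" value="100MB"/>
  <param name="MaxBackupIndex" value="10"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
  </layout>
</appender>
```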

Default logging configuration

The default SGC Stack logging configuration is an XML file, in a format supported by Log4J, loaded during SGC startup. The default configuration is stored in SGC_HOME/config/log4j.xml. The related DTD file is stored in SGC_HOME/config/log4j.dtd.

Tip

For more information on Log4J and Log4J XML-based configuration, please see the Apache Log4J documentation.

Default appenders

The SGC Stack comes configured with the following appenders:

Appender Where it sends messages Logger name Type of appender

fileAppender

the SGC logs directory (SGC_HOME/logs/ss7.log); file is rolled over every hour; number of rolled over log files is not limited (by default, no log files are removed)

root

a rolling file appender

stdoutAppender

the SGC console where a node is running (to standard output stream)

Warning By default, the value of the threshold attribute is set to OFF (which effectively disables logging to the console).

root

a console appender

JMX MBean Naming Structure

Note
MBeans exposing operational state

SS7 SGC Stack operational state is exposed through a set of JMX MBeans. A description of these MBeans and their naming conventions may be of interest to users creating a custom management tool for the SGC stack.

Instance Management MBean

Each instance (node) of the SS7 SGC Stack exposes information that can be used to check the current instance configuration properties.

Note The JMX Object name for the MBean exposing this information is SGC:type=local.

This bean exposes the following attribute and operations.

Properties attribute

The properties attribute is a collection of records. Each record represents the runtime properties of the currently connected SGC Stack instance (that is, the instance to which the JMX management client is connected). For more about instance configuration, please see Configuring the SS7 SGC Stack. The fields of a properties record are:

Name Description

key

property key

description

property description

defaultValue

value used if configuredValue is not provided

Operations

The results of JMX MBean operations invoked on the instance-management bean are local to the currently connected SGC Stack instance (that is, the instance to which the JMX management client is connected).

Name Description

checkAlive

tests if the JMX RMI connector of the currently connected SGC Stack responds to the invocation of a JMX MBean operation

shutdown

attempts to shut down the currently connected SGC Stack instance

Warning When this operation succeeds, the JMX management client will be disconnected and will report an error, as the SGC Stack instance has been shut down.

Alarm MBean naming structure

Every type of alarm used by the SS7 SGC stack is represented by a separate MBean. The names of MBeans for SGC alarms use the common domain SGC and this set of properties: type, subtype, name. The values of type and subtype are the same for all alarm MBeans; the value of the name property represents a type of alarm. Whenever an actual alarm of a given type is raised, a new MBean representing that instance of the alarm is created. The name of that MBean has an additional property, id, which uniquely identifies that alarm instance in the SGC cluster.

For example, an alarm type representing information about an SCTP association (connection) going down has the MBean name SGC:type=alarming,subtype=alarm,name=associationDown. Whenever a particular association is down, the MBean representing the error condition for that particular association is created with a name such as SGC:type=alarming,subtype=alarm,name=associationDown,id=302, where the id property value is unique among all alarms in the cluster.
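The naming pattern above can be explored with the standard javax.management.ObjectName API. The sketch below is illustrative only, using the example names from this section:

```java
import javax.management.ObjectName;

public class AlarmNameExample {
    public static void main(String[] args) throws Exception {
        // MBean representing the alarm *type*
        ObjectName type = new ObjectName(
                "SGC:type=alarming,subtype=alarm,name=associationDown");

        // MBean representing one *instance* of that alarm;
        // id is unique among all alarms in the cluster
        ObjectName instance = new ObjectName(
                "SGC:type=alarming,subtype=alarm,name=associationDown,id=302");

        System.out.println(type.getDomain());                // SGC
        System.out.println(instance.getKeyProperty("name")); // associationDown
        System.out.println(instance.getKeyProperty("id"));   // 302
    }
}
```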

Most GUI-based JMX Management tools represent the naming space as a tree of MBean objects, like this:

AlarmMbeanBrowser

Alarm raised and cleared notifications

Whenever SGC raises or clears an alarm, a JMX notification is emitted. To be notified of such events, each MBean representing an alarm type supports alarm.onset and alarm.clear notifications. An administrator can use a JMX management tool and subscribe to such events for each alarm type.
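One way to subscribe to only selected events is a standard JMX notification filter. The sketch below assumes only the alarm.onset and alarm.clear type strings described above; in a real management tool the filter would be passed to MBeanServerConnection.addNotificationListener along with a listener:

```java
import javax.management.Notification;
import javax.management.NotificationFilterSupport;

public class AlarmFilterExample {
    public static void main(String[] args) {
        // Pass only alarm.onset events through to the listener
        NotificationFilterSupport filter = new NotificationFilterSupport();
        filter.enableType("alarm.onset");

        Notification onset = new Notification("alarm.onset", "SGC", 1L);
        Notification clear = new Notification("alarm.clear", "SGC", 2L);

        System.out.println(filter.isNotificationEnabled(onset)); // true
        System.out.println(filter.isNotificationEnabled(clear)); // false
    }
}
```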

Here’s an example view of a JMX Management tool reporting received notifications:

AlarmNotification

Active alarms and alarm history

Besides alarm type-specific MBeans, the SS7 SGC stack exposes two generic MBeans that enable review of active alarms and the history of alarms that were raised and subsequently cleared during cluster operation:

  • SGC:type=alarming,subtype=alarm-table —  currently active alarms

  • SGC:type=alarming,subtype=alarm-history —  history of alarms

Alarm Table

The Alarm Table MBean contains a single attribute, EventList, which is a list of records representing attributes of currently active alarms. Alarm type-specific attributes are represented within a single parameters column.

Here’s an example view of Alarm Table MBean attributes in a JMX management tool:

AlarmTableAttributes

The Alarm Table MBean exposes a clearAlarm operation that can be used to clear any raised alarm based on its id.

Alarm History

The Alarm History MBean contains a single attribute, AlarmHistory, which is a list of records representing attributes of raised and optionally cleared alarms within the last 24 hours (this period is customizable, as described in Configuring the SS7 SGC Stack). Alarm type-specific attributes are represented within a single parameters column. For each alarm that was raised and subsequently cleared, there are two records in the list: one generated when the alarm was raised (with a severity value other than CLEARED), and a second generated when the alarm was cleared (with a severity value of CLEARED).

Information concerning notifications is also recorded in the Alarm History MBean.

The Alarm History MBean exposes a clear operation that can be used to clear all entries stored in the alarm history.

Notification MBean naming structure

Every type of notification used by the SS7 SGC stack is emitted by a separate MBean. The names of MBeans for SGC notifications use the common domain SGC and this set of properties: type, subtype, name. The values of type and subtype are the same for all notification MBeans; the value of the name property represents a type of notification.

For example, the MBean type emitting notifications about incoming SCCP-message decoding errors is represented by this MBean name: SGC:type=alarming,subtype=notif,name=sccpDecodeErrors. Whenever there is an error during decoding of an SCCP message, a JMX notification is emitted by this MBean. (Actually, for this specific notification, a single notification summarizes multiple errors over a specific time interval.)

Most GUI-based JMX Management tools represent the notification naming space as a tree of MBean objects, like this:

NotificationMBeanTree

Statistics MBean naming structure

SGC statistics are exposed through a set of JMX Management MBeans. This section describes the MBean naming structure of subsystem statistics and processing object-specific statistics.

Subsystem statistics

SGC stack subsystems expose multiple JMX MBeans containing statistical information. SGC statistics MBeans names use the common domain SGC and this set of properties: module, type. The value of type is the same for all statistics MBeans; the value of the module property represents a subsystem. Attributes of these beans are arrays of records; each record contains statistical information related to a processing object configured within the SGC cluster (such as a particular SCTP association).

For example, the statistics MBean representing information about the M3UA subsystem (layer) is represented by the MBean named SGC:module=m3ua,type=info. This MBean has (among others) the attribute DpcInfo, which contains information about DPC (destination point code) reachability through a particular connection; for example:

Attribute Value

connectionId

N1-LE1-CSG

dpc

4114

dpcId

2-2-2

status

DOWN

Most GUI-based JMX Management tools represent the naming space as a tree of MBean objects, like this:

InfoMBeanBrowser

Processing object-specific statistics

The SGC stack also exposes processing object-specific MBeans, such as an MBean containing statistics created for each connection. Information exposed through such MBeans is exactly equivalent to that exposed through subsystem statistics MBeans. SGC processing object-specific MBean names use the common domain SGC and this set of properties: module, type, elem, id.

  • The value of type is the same for all statistics MBeans.

  • The value of module represents a subsystem.

  • The value of elem identifies a group of processing objects of the same type (such as connections or nodes).

  • The value of id identifies a particular processing object (such as a particular connection or particular node).

For example, the statistics MBean representing information about network reachability for a particular DPC through a particular connection in the M3UA subsystem (layer) is represented by the MBean named SGC:type=info,module=m3ua,elem=dpcInfo,id=N1-LE1-CSG&4114. The information exposed through this MBean is exactly the same as in the example above:

Attribute Value

connectionId

N1-LE1-CSG

dpc

4114

dpcId

2-2-2

status

DOWN
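
These names can also be matched with ObjectName patterns, for example to find every dpcInfo MBean regardless of connection and DPC. A sketch using the example name above (such a pattern would typically be passed to MBeanServerConnection.queryNames):

```java
import javax.management.ObjectName;

public class DpcInfoNameExample {
    public static void main(String[] args) throws Exception {
        // Name of one DPC-reachability statistics MBean (from the example above)
        ObjectName dpc = new ObjectName(
                "SGC:type=info,module=m3ua,elem=dpcInfo,id=N1-LE1-CSG&4114");

        // Pattern matching every dpcInfo MBean in the m3ua module
        ObjectName pattern = new ObjectName(
                "SGC:type=info,module=m3ua,elem=dpcInfo,id=*");

        System.out.println(pattern.apply(dpc));       // true
        System.out.println(dpc.getKeyProperty("id")); // N1-LE1-CSG&4114
    }
}
```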

Most GUI-based JMX Management tools represent the naming space as a tree of MBean objects, like this:

DpcSpecMBeanBrowser

Command-Line Management Console

What is the SGC Stack Command-Line Console?

The SGC Stack Command-Line Management Console is a CLI tool, working in interactive and non-interactive modes, for managing and configuring the SGC Stack cluster. The Command-Line Console uses JMX Beans exposed by the SGC Stack to manage cluster configuration. The command syntax is based on ITU-T MML Recommendations Z.315 and Z.316.

Installation and requirements

To install the Command-Line Console, unpack the tar.gz archive file in any location. The unpacked folder structure is:

File/Directory Description

.

SGC CLI installation directory

./conf

logging configuration and CLI settings file

./lib

Java libraries used by the SGC CLI

./log

log file

./sgc-cli.sh

CLI start-up script

Note
JAVA_HOME

The SGC CLI start-up script expects the JAVA_HOME environment variable to be set and point to a valid Java Virtual Machine, version 7 or greater (it expects the executable JAVA_HOME/bin/java).

Working with the SGC CLI

The SGC CLI should be started by executing the sgc-cli.sh script. Default connection settings point to the SGC JMX Server exposed at:

host: 127.0.0.1
port: 10111

Here is an example of starting the CLI with the default IP and port:

./sgc-cli.sh

Here is an example of starting the CLI with an alternative IP and port setup:

./sgc-cli.sh -h 192.168.1.100 -p 10700

The SGC CLI supports other configuration parameters, which you can display by executing the startup script with the -? or --help options:

usage: sgc-cli.sh [-?] [-b <FILE>] [-h <HOST>] [-P <PASSWORD>] [-p <PORT>]
       [-ssl <true/false>] [-stopOnError <true/false>] [-U <USER>] [-x
       <FILE>]
+-----------------------------Options list-----------------------------+
 -?,--help                                 Displays usage
 -b,--batch <FILE>                         Batch file
 -h,--host <HOST>                          JMX server host
 -P,--pass <PASSWORD>                      JMX password
 -p,--port <PORT>                          JMX server port
 -ssl,--ssl <true/false>                   JMX connection SSL enabled
 -stopOnError,--stopOnError <true/false>   Stop when any error occurs in
                                           batch command
 -U,--user <USER>                          JMX user
 -x,--export <FILE>                        File where the configuration
                                           dump will be saved
+----------------------------------------------------------------------+

Enabling secured JMX server connection

The CLI supports secured JMX server connections using the SSL protocol. SSL can be enabled in two ways:

  • by specifying the ssl=true property in conf/cli.properties,

  • by adding the configuration parameter -ssl true to the start script.

Warning The configuration parameter value takes precedence over the value defined in conf/cli.properties.

In both cases, the SSL connection also requires specifying additional properties in the conf/cli.properties file: the locations and passwords of the trustStore and keyStore. Below is a sample configuration:

#######################
###SSL configuration###
#######################

#File location relative to conf folder
javax.net.ssl.keyStore=sgc-client.keystore
javax.net.ssl.keyStorePassword=changeit

#File location relative to conf folder
javax.net.ssl.trustStore=sgc-client.keystore
javax.net.ssl.trustStorePassword=changeit

Basic command format

The command syntax is based on ITU-T MML Recommendations Z.315 and Z.316:

command: [parameter-name1=parameter-value1][,parameter-name2=value2]…​;

Where:

  • command is the name of the command to be executed

  • parameter-name is the name of command parameter

  • parameter-value is the value of the command parameter.

Note

Command parameters are separated by commas (,).

When specifying a command with no parameters, the colon (:) is optional.

The ending semicolon (;) is optional.
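
Putting these rules together, a create command for a conn-ip object could look like this (the object and connection names here are hypothetical):

```
create-conn-ip: oname=EXAMPLE-IP-1, ip=192.168.1.10, conn-name=A-CONN-1;
```

Here create-conn-ip is the command name, and oname, ip, and conn-name are its parameters.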

MML syntax auto-completion

The CLI supports MML syntax auto-completion at the command-name and command-parameter levels. Pressing the <TAB> key after the prompt displays all available command names:

127.0.0.1:10111 I1><TAB_PRESSED>
Display all 111 possibilities? (y or n)
abort-revert                      abort-upgrade                     batch
clear-active-alarm                clear-all-alarms                  complete-revert
complete-upgrade                  create-as                         create-as-connection
create-as-precond                 create-conn-ip                    create-connection
create-cpc                        create-dpc                        create-inbound-gtt
create-local-endpoint             create-local-endpoint-ip          create-node
create-outbound-gt                create-outbound-gtt               create-replace-gt
create-route                      create-snmp-node                  create-target-address
create-usm-user                   disable-as                        disable-connection
disable-local-endpoint            disable-node                      disable-snmp-node
display-active-alarm              display-as                        display-as-connection
display-as-precond                display-conn-ip                   display-connection
display-cpc                       display-dpc                       display-event-history
display-inbound-gtt               display-info-ansitcapstats        display-info-asinfo
display-info-associationinfo      display-info-clusterversioninfo   display-info-dpcinfo
display-info-healthinfo           display-info-itutcapstats         display-info-localssninfo
display-info-nodeversioninfo      display-info-ogtinfo              display-info-pcinfo
display-info-remotessninfo        display-info-sccperrorstats       display-info-sccpstats
display-info-tcapconninfo         display-info-tcaperrorstats       display-local
display-local-endpoint            display-local-endpoint-ip         display-node
display-outbound-gt               display-outbound-gtt              display-parameters
display-replace-gt                display-route                     display-snmp-node
display-target-address            display-usm-user                  enable-as
enable-connection                 enable-local-endpoint             enable-node
enable-snmp-node                  export                            help
modify-as                         modify-as-connection              modify-connection
modify-dpc                        modify-inbound-gtt                modify-local-endpoint
modify-node                       modify-outbound-gt                modify-outbound-gtt
modify-parameters                 modify-replace-gt                 modify-snmp-node
modify-target-address             modify-usm-user                   quit
remove-as                         remove-as-connection              remove-as-precond
remove-conn-ip                    remove-connection                 remove-cpc
remove-dpc                        remove-inbound-gtt                remove-local-endpoint
remove-local-endpoint-ip          remove-node                       remove-outbound-gt
remove-outbound-gtt               remove-replace-gt                 remove-route
remove-snmp-node                  remove-target-address             remove-usm-user
sleep                             start-revert                      start-upgrade
127.0.0.1:10111 I1>

When you press the <TAB> key after providing a command name, the CLI displays all available parameters for that command.

Pressing <TAB> after providing command parameters displays legal values (for enumeration, reference, and boolean parameters). For example:

127.0.0.1:10111 I1> modify-connection: <TAB_PRESSED>
oname=A-CONN-1        oname=B-CONN-1
127.0.0.1:10111 I1> modify-connection: oname=

Help mode

You can access the SGC CLI’s built in help system by either:

  • executing the command help: topic=topicName

  • switching to help mode, by executing the help command with no parameters.

Help mode displays the topics that you can access. (Alternatively, you can press the <TAB> key to display the available values of the topic command parameter.) The list of available topics in manual help mode looks like this:

127.0.0.1:10111 I1> help
Executing help manual...
Use <TAB> to show topic list. Write 'topic name' to see its description. Use exit command if you want to quit the manual.
Hint: 'create-, display-, remove-, modify-, enable-, disable-' operations are described by single topic for given MBean name.
Available topics:

abort-revert                      abort-upgrade                     as
as-connection                     as-precond                        batch
clear-active-alarm                clear-all-alarms                  complete-revert
complete-upgrade                  conn-ip                           connection
cpc                               disable-local                     display-active-alarm
display-event-history             display-info-ansitcapstats        display-info-asinfo
display-info-associationinfo      display-info-clusterversioninfo   display-info-dpcinfo
display-info-healthinfo           display-info-itutcapstats         display-info-localssninfo
display-info-nodeversioninfo      display-info-ogtinfo              display-info-pcinfo
display-info-remotessninfo        display-info-sccperrorstats       display-info-sccpstats
display-info-tcapconninfo         display-info-tcaperrorstats       dpc
exit                              export                            help
inbound-gtt                       local-endpoint                    local-endpoint-ip
node                              outbound-gt                       outbound-gtt
parameters                        replace-gt                        route
sleep                             snmp-node                         start-revert
start-upgrade                     target-address                    usm-user
help>

The result of executing a help topic command looks like this:

127.0.0.1:10111 I1> help: topic=<TAB_PRESSED>
topic=abort-revert                      topic=abort-upgrade                     topic=as
topic=as-connection                     topic=as-precond                        topic=batch
topic=clear-active-alarm                topic=clear-all-alarms                  topic=complete-revert
topic=complete-upgrade                  topic=conn-ip                           topic=connection
topic=cpc                               topic=disable-local                     topic=display-active-alarm
topic=display-event-history             topic=display-info-ansitcapstats        topic=display-info-asinfo
topic=display-info-associationinfo      topic=display-info-clusterversioninfo   topic=display-info-dpcinfo
topic=display-info-healthinfo           topic=display-info-itutcapstats         topic=display-info-localssninfo
topic=display-info-nodeversioninfo      topic=display-info-ogtinfo              topic=display-info-pcinfo
topic=display-info-remotessninfo        topic=display-info-sccperrorstats       topic=display-info-sccpstats
topic=display-info-tcapconninfo         topic=display-info-tcaperrorstats       topic=dpc
topic=exit                              topic=export                            topic=help
topic=inbound-gtt                       topic=local-endpoint                    topic=local-endpoint-ip
topic=node                              topic=outbound-gt                       topic=outbound-gtt
topic=parameters                        topic=replace-gt                        topic=route
topic=sleep                             topic=snmp-node                         topic=start-revert
topic=start-upgrade                     topic=target-address                    topic=usm-user
127.0.0.1:10111 I1> help: topic=conn-ip;
Configuration of IPs for "connection", to make use of SCTP multi-homed feature define multiple IPs for single "connection".
Object conn-ip is defined by the following parameters:
        oname: object name
        ip: IP address
        conn-name: Name of the referenced connection
Supported operations on conn-ip are listed below ({param=x} - mandatory parameter, [param=x] - optional parameter):
        display-conn-ip: [oname=x],[column=x];
        create-conn-ip: {oname=x}, {ip=x}, {conn-name=x};
        remove-conn-ip: {oname=x};
127.0.0.1:10111 I1>

Interactive mode

By default, the CLI starts in interactive mode, which lets the System Administrator execute commands and observe their results through the system terminal. For example, here’s a successfully executed command:

127.0.0.1:10111 I1> display-info-healthinfo: ;
Found 1 object(s):
+---------------+----------+----------+---------------+---------------+---------------+---------------+----------+---------------+---------------+----------+
|nodeId         |allocatedI|allocatedR|forceAllocatedI|forceAllocatedR|workerExecution|workerExecution|workerGrou|contextExecutio|contextExecutio|executorQu|
|               |ndTasks   |eqTasks   |ndTasks        |eqTasks        |Count          |Time           |pSize     |nCount         |nTime          |eueSize   |
+---------------+----------+----------+---------------+---------------+---------------+---------------+----------+---------------+---------------+----------+
|PC1-1          |0         |0         |0              |0              |0              |0              |0         |0              |0              |18        |
+---------------+----------+----------+---------------+---------------+---------------+---------------+----------+---------------+---------------+----------+
127.0.0.1:10111 I1>

Here’s an example of a command that failed:

127.0.0.1:10111 I1> remove-conn-ip: oname=invalidName;
ERROR REMOVE_OBJECT_FAILED: Instance 'SGC:category=conn-ip,type=m3ua,id=invalidName' doesn't exist.
127.0.0.1:10111 I1>

Command result truncation

The CLI supports truncating command results (cell data) to keep the output readable when cell data is large. The truncation limit is defined by the following property in the conf/cli.properties file:

table.format.maxCellContentLength=40
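
As an illustration, the effect of such a limit can be sketched in Python. This is only a model of the behaviour (the constant mirrors table.format.maxCellContentLength; the real CLI's formatting code may differ):

```python
# Illustrative model of cell-content truncation; not the CLI's own code.
MAX_CELL_CONTENT_LENGTH = 40  # mirrors table.format.maxCellContentLength

def truncate_cell(value: str, limit: int = MAX_CELL_CONTENT_LENGTH) -> str:
    """Return the cell value, cut to `limit` characters with a trailing
    '...' marker when truncation occurred."""
    if len(value) <= limit:
        return value
    return value[:limit - 3] + "..."

short_cell = truncate_cell("nodeId='Node101'")  # fits, left unchanged
long_cell = truncate_cell("x" * 100)            # cut to 40 characters
```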

Supported CLI Operations

SGC CLI commands

SGC CLI commands can be grouped into six sets of commands:

Warning Most CLI operations are executed on the SGC Cluster using JMX management beans. Therefore the CLI requires successful establishment of a JMX connection to the SGC Cluster.

Management of SGC Stack processing objects

Most processing objects that can be managed within the SGC cluster support a set of CRUD commands. Some processing objects can also be enabled or disabled. Command names are a concatenation of the operation name and processing object name; for example: create-node, display-as, or remove-conn-ip. Generic commands are described below. For a detailed description of operations available for a particular processing object type, please see the SGC CLI built-in help system. The commands for managing processing objects are:

Operation

What it’s for

Sample use case

Examples

create

Creating a new instance of a processing object within the SGC cluster.

Warning The create operation requires the mandatory parameters for the given SGC processing object to be provided; otherwise a validation error is displayed on the terminal.

Creating a new connection object, which together with conn-ip defines the remote IP address of an SCTP association.

127.0.0.1:10111 I1> create-connection: ;
ERROR VALIDATION_FAILED: Mandatory parameter 'oname' was not set.
ERROR VALIDATION_FAILED: Mandatory parameter 'port' was not set.
ERROR VALIDATION_FAILED: Mandatory parameter 'local-endpoint-name' was not set.
ERROR VALIDATION_FAILED: Mandatory parameter 'conn-type' was not set.

remove

Removes an instance of a processing object within the SGC Cluster.

Removing the conn-ip associated with a connection.

127.0.0.1:10111 I1> remove-conn-ip: oname=ip1ForConnectionA;
OK conn-ip removed.

modify

Updates the state of a processing object within the SGC Cluster.

Warning Can cause validation/operation errors for processing objects that must be disabled and deactivated before update.

Updating the application server configuration (modify-as) by increasing the maximum number of pending messages and the maximum pending time.

Executing the modify operation on an active application server (as) processing object results in a validation error:

127.0.0.1:10111 I1> modify-as: oname=AS-RC-1, pending-size=1000;
ERROR MODIFY_OBJECT_FAILED: com.cts.ss7.common.SGCException: Parameter pending-size cannot be modified when bean AS-RC-1 is enabled or active

display

Display the configuration of SGC Cluster processing objects.

Note The column parameter can be specified multiple times to restrict displayed columns.

Displaying the attribute values of connection-processing objects.

Successfully executed display operation for all defined connections:

127.0.0.1:10111 I1> display-connection:;
Found 2 object(s):
+---------------+----------+--------+--------+----------+---------------+---------------+----------+----------+----------+---------------+----------+--------+----------+
|oname          |dependenci|enabled |active  |port      |local-endpoint-|conn-type      |t-ack     |t-daud    |t-reconnec|state-maintenan|asp-id    |is-ipsp |out-queue-|
|               |es        |        |        |          |name           |               |          |          |t         |ce-role        |          |        |size      |
+---------------+----------+--------+--------+----------+---------------+---------------+----------+----------+----------+---------------+----------+--------+----------+
|C-CL2-N1       |0         |true    |false   |30115     |N1-E           |SERVER         |2         |60        |6         |ACTIVE         |null      |true    |1000      |
+---------------+----------+--------+--------+----------+---------------+---------------+----------+----------+----------+---------------+----------+--------+----------+
|C-CL2-N2       |1         |true    |false   |30105     |N1-E           |CLIENT         |10        |60        |5         |ACTIVE         |1         |true    |1000      |
+---------------+----------+--------+--------+----------+---------------+---------------+----------+----------+----------+---------------+----------+--------+----------+

Successfully executed display operation for a single connection (oname parameter specified):

127.0.0.1:10111 I1> display-connection: oname=C-CL2-N2;
Found 1 object(s):
+---------------+----------+--------+--------+----------+---------------+---------------+----------+----------+----------+---------------+----------+--------+----------+
|oname          |dependenci|enabled |active  |port      |local-endpoint-|conn-type      |t-ack     |t-daud    |t-reconnec|state-maintenan|asp-id    |is-ipsp |out-queue-|
|               |es        |        |        |          |name           |               |          |          |t         |ce-role        |          |        |size      |
+---------------+----------+--------+--------+----------+---------------+---------------+----------+----------+----------+---------------+----------+--------+----------+
|C-CL2-N2       |1         |true    |false   |30105     |N1-E           |CLIENT         |10        |60        |5         |ACTIVE         |1         |true    |1000      |
+---------------+----------+--------+--------+----------+---------------+---------------+----------+----------+----------+---------------+----------+--------+----------+

Successfully executed display operation for a single connection with a restricted number of columns (oname and column parameters specified):

127.0.0.1:10111 I1> display-connection: oname=C-CL2-N2,column=oname, column=enabled, column=conn-type;
Found 1 object(s):
+---------------+--------+---------------+
|oname          |enabled |conn-type      |
+---------------+--------+---------------+
|C-CL2-N2       |true    |CLIENT         |
+---------------+--------+---------------+

enable
disable

Change the "enabled" state of: as, connection, local-endpoint, node, and snmp-node processing object types.

Warning These processing objects can be considered as fully functional only when enabled. Most configuration parameters of such objects cannot be changed when the processing object is enabled.

Enabling and disabling a connection:

127.0.0.1:10111 I1> enable-<TAB_PRESSED>
enable-as               enable-connection       enable-local-endpoint   enable-node             enable-snmp-node
127.0.0.1:10111 I1> enable-connection: oname=
oname=A-CONN        oname=B-CONN
127.0.0.1:10111 I1> enable-connection: oname=A-CONN;
OK connection enabled.
127.0.0.1:10111 I1> disable-connection: oname=A-CONN;
OK connection disabled.

Alarms and event history

The SGC CLI provides commands for displaying and clearing alarms generated by the SGC Cluster. Alarms that were raised and subsequently cleared, and notifications that were emitted, can be reviewed using a separate operation: display-event-history. Display operations for alarms accept filter expressions for id, name, and severity. Filter expressions can include the % wildcard character at the start, middle, or end of the expression. The column parameter can also be specified multiple times to restrict the displayed columns. Below are some examples:

Displaying all active alarms (no filtering criteria specified):
127.0.0.1:10111 I1> display-active-alarm:;
Found 2 object(s):
+---------------+----------+---------------+---------------+---------------+--------------------+
|description    |id        |name           |parameters     |severity       |timestamp           |
+---------------+----------+---------------+---------------+---------------+--------------------+
|The node in the|36        |nodefailure    |nodeId='Node101|MAJOR          |2014-02-10 10:47:35 |
| cluster disapp|          |               |',failureDescri|               |                    |
|eared          |          |               |ption='Mis...  |               |                    |
+---------------+----------+---------------+---------------+---------------+--------------------+
|The node in the|37        |nodefailure    |nodeId='Node102|MAJOR          |2014-02-10 10:47:37 |
| cluster disapp|          |               |',failureDescri|               |                    |
|eared          |          |               |ption='Mis...  |               |                    |
+---------------+----------+---------------+---------------+---------------+--------------------+
Displaying all active alarms with filters:
127.0.0.1:10111 I1> display-active-alarm: id=36, severity=M%, name=%failure;
Found 2 object(s):
+---------------+----------+---------------+---------------+---------------+--------------------+
|description    |id        |name           |parameters     |severity       |timestamp           |
+---------------+----------+---------------+---------------+---------------+--------------------+
|The node in the|36        |nodefailure    |nodeId='Node101|MAJOR          |2014-02-10 10:47:35 |
| cluster disapp|          |               |',failureDescri|               |                    |
|eared          |          |               |ption='Mis...  |               |                    |
+---------------+----------+---------------+---------------+---------------+--------------------+
Displaying all active alarms with filters and column parameters:
127.0.0.1:10111 I1> display-active-alarm: severity=M%, name=%failure, column=id, column=description, column=timestamp;
Found 1 object(s):
+----------+---------------+--------------------+
|id        |description    |timestamp           |
+----------+---------------+--------------------+
|36        |The node in the|2014-02-10 10:47:35 |
|          | cluster disapp|                    |
|          |eared          |                    |
+----------+---------------+--------------------+
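
The % wildcard matching used by these filters can be modelled by translating a filter expression into a regular expression. This sketch illustrates the matching rule described above; it is not the CLI's actual implementation:

```python
import re

def matches(filter_expr: str, value: str) -> bool:
    """Model of the alarm filter: '%' matches any run of characters and
    may appear at the start, middle, or end of the expression."""
    # Escape the expression literally, then turn each '%' into '.*'.
    pattern = re.escape(filter_expr).replace("%", ".*")
    return re.fullmatch(pattern, value) is not None

# Mirrors the filters from the examples above.
severity_hit = matches("M%", "MAJOR")          # wildcard at the end
name_hit = matches("%failure", "nodefailure")  # wildcard at the start
miss = matches("%failure", "linkdown")         # no match
```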

Clearing an active alarm:

Note
Clearing alarms

Any alarm can be cleared by the System Administrator. There are two clear operations exposed by the CLI:

  • clear-active-alarm — clears an active alarm as defined by the mandatory id parameter

  • clear-all-alarms — clears all active alarms.

127.0.0.1:10111 I1> clear-active-alarm: id=36;
OK alarm cleared.

Displaying the event history:

Note
Displaying event history

The display-event-history operation displays alarm and notification history. The output can be filtered by specifying any of the last, time-from, and time-to parameters. The column parameter can also be specified multiple times to restrict the displayed columns.

127.0.0.1:10111 I1> display-event-history: ;
Found 1 object(s):
+---------------+----------+---------------+---------------+---------------+--------------------+
|description    |id        |name           |parameters     |severity       |timestamp           |
+---------------+----------+---------------+---------------+---------------+--------------------+
|The node in the|36        |nodefailure    |nodeId='Node102|MAJOR          |2014-02-10 10:47:35 |
| cluster disapp|          |               |',failureDescri|               |                    |
|eared          |          |               |ption='Mis...  |               |                    |
+---------------+----------+---------------+---------------+---------------+--------------------+

Statistics (Info)

A set of commands allows interrogation of the statistics exposed by the SGC Stack. For details, please see Statistics. The available statistics are:

Module    Statistics

M3UA      asinfo, associationinfo, dpcinfo

SCCP      localssninfo, ogtinfo, pcinfo, remotessninfo, sccpstats, sccperrorstats

TCAP      tcapconninfo, itutcapstats, ansitcapstats, tcaperrorstats

TOP       healthinfo, clusterversioninfo, nodeversioninfo

Note
Filtering statistical information

Commands displaying statistical information support filtering. Statistics are filtered by exact equality between the filter value and the corresponding statistic column value. The column parameter can also be specified multiple times to restrict the displayed columns.

Below are some examples.

Displaying statistic without filters:

127.0.0.1:10111 I1> display-info-asinfo:;
Found 1 object(s):
+---------------+---------------+---------------+---------------+---------------+---------------+
|connectionId   |asId           |TXCount        |RXCount        |status         |nodeId         |
+---------------+---------------+---------------+---------------+---------------+---------------+
|C-CL2-N1       |AS-RC-1        |0              |0              |INACTIVE       |CL1-N1         |
+---------------+---------------+---------------+---------------+---------------+---------------+

Displaying asinfo statistics with filters on the nodeId:

127.0.0.1:10111 I1> display-info-asinfo:<TAB_PRESSED>

RXCount        TXCount        asId           column         connectionId   nodeId         status
127.0.0.1:10111 I1> display-info-asinfo: nodeId=CL1-N1;
Found 1 object(s):
+---------------+---------------+---------------+---------------+---------------+---------------+
|connectionId   |asId           |TXCount        |RXCount        |status         |nodeId         |
+---------------+---------------+---------------+---------------+---------------+---------------+
|C-CL2-N1       |AS-RC-1        |0              |0              |INACTIVE       |CL1-N1         |
+---------------+---------------+---------------+---------------+---------------+---------------+
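
The equality-based filtering and column restriction can be modelled in a few lines of Python. The sample rows below are invented for illustration; this is not the CLI's implementation:

```python
def display_info(rows, filters=None, columns=None):
    """Model of a display-info-* command: keep rows whose values are
    exactly equal to every filter value, then restrict each remaining
    row to the requested columns."""
    filters = filters or {}
    selected = [row for row in rows
                if all(row.get(key) == value for key, value in filters.items())]
    if columns:
        selected = [{col: row[col] for col in columns} for row in selected]
    return selected

# Invented sample data shaped like display-info-asinfo output.
asinfo = [
    {"connectionId": "C-CL2-N1", "asId": "AS-RC-1", "status": "INACTIVE", "nodeId": "CL1-N1"},
    {"connectionId": "C-CL2-N2", "asId": "AS-RC-1", "status": "ACTIVE", "nodeId": "CL1-N2"},
]
result = display_info(asinfo, filters={"nodeId": "CL1-N1"},
                      columns=["connectionId", "status"])
```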

Export / Import

The following commands allow exporting and importing the SGC Stack configuration to and from a text file.

Operation

What it’s for

Examples

export

Produces a dump of the current configuration in a format that is directly usable by the batch command.

Warning When the file argument is not provided, the output will be printed to standard output.
Tip The export operation can be used also in non-interactive mode, if the sgc-cli.sh script is executed with the -x, --export parameter.

Non-interactive mode:

./sgc-cli.sh -x config-backup.mml

Interactive mode:

127.0.0.1:10111 I1> export: file=config-backup.mml
OK configuration exported.

batch

Loads a text file containing a set of MML commands (in the format produced by the export command) and executes them after connecting to the SGC Cluster.

Tip Batch operation can also be used in non-interactive mode, if the sgc-cli.sh script is executed with the -b, --batch parameter.
Note The batch command supports an additional flag: -stopOnError, --stopOnError. If this flag is set to false, it won’t stop processing commands even if execution of a command results in an error. (The default behaviour is to stop processing commands after the first error.)

Non-interactive mode:

./sgc-cli.sh -b config-backup.mml -stopOnError false

Interactive mode:

127.0.0.1:10111 I1> batch: file=set-of-displays.mml
display-conn-ip: oname=conn-ip1;
Found 1 object(s):
+---------------+----------+---------------+---------------+
|oname          |dependenci|ip             |conn-name      |
|               |es        |               |               |
+---------------+----------+---------------+---------------+
|conn-ip1       |0         |192.168.1.101  |C-CL2-N1       |
+---------------+----------+---------------+---------------+
display-conn-ip: oname=conn-ip2;
Found 1 object(s):
+---------------+----------+---------------+---------------+
|oname          |dependenci|ip             |conn-name      |
|               |es        |               |               |
+---------------+----------+---------------+---------------+
|conn-ip2       |0         |192.168.1.192  |C-CL2-N1       |
+---------------+----------+---------------+---------------+

Upgrading the SGC

The following commands are used to manage the SGC cluster during the upgrade and reversion processes:

  • start-upgrade

  • complete-upgrade

  • abort-upgrade

  • start-revert

  • complete-revert

  • abort-revert

  • display-info-nodeversioninfo

  • display-info-clusterversioninfo

For more information on how to perform an upgrade (or reversion) of the SGC cluster please refer to Automated Upgrade of the SGC.

Miscellaneous

The following commands are miscellaneous commands that don’t fall into the previous categories.

Operation

What it’s for

Examples

sleep

Sleeps the CLI for the specified number of milliseconds.

127.0.0.1:10111 I1> sleep: millis=1134
OK requested=1134 start=1554155345880 end=1554155347014 actual=1134

SGC TCAP Stack

What is the SGC TCAP Stack?

The SGC TCAP Stack is a Java component that is embedded in the CGIN Unified RA and the IN Scenario Pack to provide OCSS7 support. The TCAP stack communicates with the SGC via a proprietary protocol.

Tip For a general description of the TCAP Stack Interface defined by the CGIN Unified RA, please see Inside the CGIN Connectivity Pack.

TCAP Stack and SGC Stack Connectivity

After the CGIN Unified RA is activated, the SGC TCAP stack uses one of two procedures to establish communication with the SGC stack:

  1. The newer, and recommended, ocss7.sgcs registration process, introduced in release 1.1.0.

  2. The legacy ocss7.urlList registration process, supported by releases up to and including 3.0.0.

ocss7.sgcs Connection Method

When this connection method is correctly configured, all TCAP stacks in the Rhino cluster are connected to all available SGCs in the OCSS7 cluster simultaneously, with every connection servicing traffic. This is sometimes referred to as the Meshed Connection Manager because of the fully meshed network topology it uses.

Load Balancing

Dialogs are load balanced across all active connections between the SGCs and the TCAP stacks. An SGC that receives traffic from the SS7 network will round-robin new dialogs between all TCAP stacks that have an active connection to it. Similarly, each TCAP stack will round-robin new outgoing dialogs between the SGCs it is connected to.
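
The round-robin rule described above can be sketched as follows (an illustrative model with invented connection names, not the SGC's scheduler):

```python
import itertools

class RoundRobinBalancer:
    """Assign each new dialog to the next active connection in turn."""

    def __init__(self, connections):
        self._cycle = itertools.cycle(connections)

    def assign_dialog(self):
        return next(self._cycle)

# A TCAP stack with active connections to two SGCs.
balancer = RoundRobinBalancer(["SGC-1", "SGC-2"])
assignments = [balancer.assign_dialog() for _ in range(4)]
```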

If a configured connection is down for any reason the TCAP stack will attempt to re-establish it at regular intervals until successful.

In some OA&M or failure situations an SGC may have no TCAP stack connections. When this happens the SGC will route traffic to the other SGCs in the cluster for forwarding to the TCAP stacks. Traffic will remain balanced across the TCAP stacks.

If a new SGC needs to be added to the cluster with a new connection address then the TCAP stack configuration will need to be adjusted. This adjustment can be done either before or after the SGC is installed and configured, and the cluster will behave as described above. Updates to the TCAP stack ocss7.sgcs list can be done while the system is active and processing traffic — existing connections will not be affected unless they are removed from the list, and the TCAP stack will attempt to establish any newly configured connections.

Dialog Failover

Dialog failover occurs for in-progress dialogs when an SGC becomes unavailable to a TCAP stack.

Warning This feature does not provide failover between Rhino nodes when a Rhino node becomes unavailable. Failover between Rhino nodes is not available in CGIN 2.0.0 with any TCAP stack because CGIN itself does not provide replication facilities.

Under normal operating conditions a TCAP dialog continues to use the same connection between the TCAP stack and the SGC for the lifetime of that dialog. If that connection becomes unavailable (i.e. the SGC is unreachable or down) then the Dialog Failover feature is used to migrate the dialogs associated with that connection (and by extension, the SGC) to an alternative SGC in the same cluster. In this way it is possible to continue sending and receiving messages on those dialogs.

For example, TCAP stack A is connected to SGCs 1 and 2 with connections C1 and C2 respectively. If SGC 2 terminates unexpectedly, all dialogs associated with connection C2 will begin to use connection C1 instead. All traffic received by the SGCs for those dialogs will now be sent to the TCAP stack via connection C1, hosted on SGC 1.

It is important to note that dialogs fail over between SGCs, not TCAP stacks. For dialog failover to work there must be an existing connection between the original TCAP stack and at least one substitute SGC capable of accepting a failed-over dialog. Dialogs which cannot be failed over to a replacement SGC will be dropped and any in-flight messages lost.

Note Failure recovery can never be perfect. Network messages being processed at the time of failure will be lost, so some dialog failure must be expected.
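
The failover rule can be summarised in a short sketch: dialogs on a failed connection migrate to a surviving connection from the same TCAP stack, and are dropped when no substitute SGC is available. Connection and dialog names are invented; this is an illustrative model only:

```python
def fail_over(dialogs_by_connection, failed, survivors):
    """Move the failed connection's dialogs to the first surviving
    connection; return the list of dialogs that had to be dropped."""
    orphaned = dialogs_by_connection.pop(failed, [])
    if survivors:
        dialogs_by_connection[survivors[0]].extend(orphaned)
        return []        # every dialog found a substitute SGC
    return orphaned      # no substitute SGC: these dialogs are lost

# TCAP stack A: connection C1 to SGC 1, connection C2 to SGC 2.
table = {"C1": ["d1"], "C2": ["d2", "d3"]}
dropped = fail_over(table, failed="C2", survivors=["C1"])
```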

Legacy ocss7.urlList Connection Method

Warning This connection method is deprecated and should not be used for new installations. Existing installations are strongly recommended to migrate to the ocss7.sgcs connection method.

In this method the TCAP stack:

  1. Is configured with the ocss7.urlList property populated with the address of each SGC’s node manager (stack-http-address:stack-http-port).

  2. When activated:

    1. The TCAP stack connects to the first SGC node manager in that list.

    2. On successful connection to a node manager the node manager responds with the address of an SGC’s data socket.

    3. The TCAP stack then connects to the indicated data socket.

    4. If the connection attempt to a node manager or to the indicated data socket fails, or the established connection later fails, the process is repeated with the next node manager in the list.
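
The procedure above can be sketched as a small Python model. The query_node_manager and open_data_socket callables are hypothetical stand-ins for the real network I/O:

```python
def connect_via_url_list(url_list, query_node_manager, open_data_socket):
    """Try each node manager in order: ask it for a data socket address,
    then connect to that address; on any failure move to the next one."""
    for manager_addr in url_list:
        try:
            data_addr = query_node_manager(manager_addr)
            return open_data_socket(data_addr)
        except ConnectionError:
            continue  # failed at either step: try the next node manager
    return None  # no node manager could be used

# Simulated environment: the first node manager is down, the second works.
def fake_query(addr):
    if addr == "sgc1:10101":
        raise ConnectionError("node manager unreachable")
    return "sgc2:10201"  # address of that SGC's data socket

conn = connect_via_url_list(["sgc1:10101", "sgc2:10201"],
                            fake_query, lambda addr: "connected:" + addr)
```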

This connection method is not recommended because:

  • It does not provide dialog failover capability.

  • It only has a partial load-balancing implementation: in the event that one SGC fails, the TCAP stacks will reconnect to another SGC in the cluster. Over time this can result in all TCAP stacks being connected to a single SGC. Should this occur, manual rebalancing of the TCAP stack connections will be required.

Rebalancing TCAP Stack Data Transfer Connections
Tip The need to rebalance TCAP stack connections can be completely avoided by using the recommended ocss7.sgcs connection method.

When an SGC node joins a running SGC cluster (for example, after a planned outage or failure), the existing TCAP stack connections are not affected. That is, the TCAP stack connections remain connected to whichever SGCs they were previously connected to. The newly joined SGC will have no TCAP stack connections until the next time a TCAP stack reconnects.

In the worst case scenario this can result in a single SGC servicing all of the TCAP stacks whilst the other SGCs service none.

This can be mitigated by performing a rebalancing procedure as described next.

Example rebalancing procedure
Note This example uses a basic production deployment of an SGC Cluster, composed of two SGC Nodes cooperating with the CGIN Unified RA, deployed within a two-node Rhino cluster (as depicted above).

Imagine the following scenario:

  • Due to hardware failure, SGC Stack Node 2 was not operational. This resulted in both instances of the CGIN Unified RA being connected to SGC Stack Node 1.

  • That hardware failure is removed, and SGC Stack Node 2 is again fully operational and part of the cluster.

  • To rebalance the data transfer connections of the CGIN Unified RA entity running within Rhino cluster, that entity must be deactivated and activated again on one of the Rhino nodes. (Deactivation is a graceful procedure where the CGIN Unified RA waits until all dialogs that it services are finished; at the same time all new dialogs are directed to the CGIN Unified RA running within the other node.)

You would do the following:

  1. Make sure that the current traffic level is low enough so that it can be handled by a CGIN Unified RA on a single Rhino node.

  2. Deactivate the CGIN Unified RA entity on one of the Rhino nodes.

  3. Wait for the deactivated CGIN Unified RA entity to become STOPPED.

  4. Activate the previously deactivated CGIN Unified RA entity.

Tip
Per-node activation state

For details of managing per-node activation state in a Rhino Cluster, please see Per-Node Activation State in the Rhino Administration and Deployment Guide.

TCAP Stack configuration

General description and configuration of the CGIN RA is detailed in the CGIN RA Installation and Administration Guide. Below are configuration properties specific to the SGC TCAP Stack.

Parameter

Usage and description

Active
Reconfig?

Values

Default

stack

Short name of the TCAP stack to use.

For the SGC TCAP Stack, this value must be ocss7.

tcap-variant

TCAP variant to use.

ITU or ANSI

ITU

default-tcap-version

The TCAP version to use where this has not been specified (outbound dialogs) or cannot be automatically derived from the ProtocolVersion field of the DialoguePortion (inbound dialogs).

This affects some encoding choices, such as use of ParamSet when encoding Reject components, bounds on private operation and error codes, and whether to use a Response or an Abort package when a dialog must be aborted.

1988 (ANSI TCAP only) or 2000

2000

ocss7.schThreads

Maximum number of threads used by the scheduler. Number of threads used to trigger timeout events.

in the range 1 - 100

10

ocss7.schNodeListSize

Number of events that may be scheduled on a single scheduler thread.

in the range 1000 - 41943040 or 0 (autosize)

ocss7.schNodeListSize * ocss7.schThreads should be directly proportional to
(maxExpectedConcurrentInvokeTimers + 1) * ocss7.trdpCapacity (the maximum number of open dialogs).

If this value is set too low, then some invoke and activity timers may not fire, potentially leading to hung dialogs.

The higher this value is, the higher the memory requirements of the TCAP stack.

0

ocss7.taskdpCapacity

Maximum number of inbound messages and timeout events that may be waiting to be processed.

in the range 1000 - 41943040 or 0 (autosize)

ocss7.taskdpCapacity should be directly proportional to
ocss7.trdpCapacity (the maximum number of open dialogs)

0

ocss7.trdpCapacity

Maximum number of opened transactions (dialogs).

in the range 1000 - 2097152

100000

ocss7.wgQueues

Number of threads used by the worker group to process timeout events and inbound messages.

in the range 1 - 100

100

ocss7.wgQueuesSize

Maximum number of tasks in one worker queue.

in the range 1000 - 41943040 or 0 (autosize)

ocss7.wgQueues * ocss7.wgQueuesSize should be directly proportional to
2 * ocss7.taskdpCapacity (the maximum number of triggered timeout and inbound messages)

0

ocss7.senderQueueSize

Maximum number of outbound messages in the sender queue.

in the range 1000 - 41943040 or 0 (autosize)

(if set to less than the default value, the default value is silently used)

ocss7.senderQueueSize is directly proportional to
ocss7.trdpCapacity (the maximum number of open dialogs)

0

ocss7.sgcs

Comma-separated list of SGCs to connect to. Only one of ocss7.sgcs or ocss7.urlList may be configured at a time, and this is the recommended one; see TCAP Stack and SGC Stack Connectivity for details.

If both ocss7.sgcs and ocss7.urlList are unset, then a configuration of ocss7.sgcs is assumed with no configured connections.

*

comma-separated list in the format address:port,
pointing to the listening addresses of SGC data ports within the SGC stack cluster

unset

ocss7.urlList

Comma-separated list of URLs used to connect to legacy TCAP Manager(s). Only one of ocss7.sgcs or ocss7.urlList may be configured at a time, and ocss7.sgcs is the recommended one; see TCAP Stack and SGC Stack Connectivity for details.

*

comma-separated list in the format address:port,
pointing to the listening addresses of TCAP Managers within the SGC Stack cluster

unset

ocss7.urlList.retrySleep

Wait interval in milliseconds between subsequent connection attempts for the ocss7.urlList.

in the range 0 - 600000

applies to the whole list, not to individual URLs on it

1000

ocss7.local-ssn

SSN number used when the local-sccp-address property does not provide an SSN (for example, if set to auto).

in the range 2 - 255

ocss7.heartbeatEnabled

v1.0.0 protocol variant only - Enables or disables the OCSS7 to SGC connection heartbeat. N.B. if the heartbeat is enabled on the SGC it must also be enabled here. v2.0.0 protocol variant - heartbeat is always enabled.

true or false

true

ocss7.heartbeatPeriod

The period between heartbeat sends in seconds. Heartbeat timeout is also a function of this value, as described in Stack Originated Heartbeat Configuration. The value configured here must be smaller than the value of com.cts.ss7.commsp.server.recvTimeout given to the SGC.

2 or larger

5

* If a TCAP data transfer connection is established, changing this property has no effect until the data transfer connection fails and the TCAP Stack repeats the registration process.
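
The proportionality guidance given for the queue-sizing properties above can be turned into a rough helper. This is an illustrative reading of those guidelines with the proportionality constant taken as 1, not an official sizing rule; leaving the properties at 0 (autosize) remains the simplest option:

```python
def suggested_sch_node_list_size(trdp_capacity, sch_threads,
                                 max_concurrent_invoke_timers):
    """Per-thread scheduler list size so that
    schNodeListSize * schThreads covers
    (maxExpectedConcurrentInvokeTimers + 1) * trdpCapacity events."""
    total_events = (max_concurrent_invoke_timers + 1) * trdp_capacity
    # Ceiling division so the total capacity is never undersized.
    return -(-total_events // sch_threads)

# With the defaults trdpCapacity=100000 and schThreads=10, and one
# concurrent invoke timer per dialog:
size = suggested_sch_node_list_size(trdp_capacity=100000,
                                    sch_threads=10,
                                    max_concurrent_invoke_timers=1)
```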

ANSI TCAP

Starting with SGC version 2.2.0 and CGIN version 2.0.0, the SGC and OCSS7 TCAP stack support ANSI TCAP (T1.114-2000, T1.114-1988).

Requirements for use of ANSI TCAP:

  • The TCAP stack must be configured to use the ocss7.sgcs connectivity method.

  • The SGC must be at least version 2.2.0.0. Older SGCs do not support ANSI TCAP.

The SGC does not require any specific configuration to use ANSI TCAP; it is able to support both ITU TCAP and ANSI TCAP clients simultaneously.

To configure the TCAP stack to use ANSI TCAP:

  • Set the CGIN configuration property tcap-variant. A value of ITU indicates ITU TCAP, and a value of ANSI indicates ANSI TCAP.

  • Set the CGIN configuration property default-tcap-version. This specifies the TCAP version to use where CGIN is unable to determine this from the ProtocolVersion field of the DialoguePortion (inbound dialogs), or where it has not been specified for outbound dialogs.

See Configuring the TCAP stack for further information.

ANSI SCCP

From SGC version 2.1.0.x and CGIN version 1.5.4.3, the SGC and OCSS7 TCAP stack support both ITU and ANSI SCCP.

SCCP Variant

The TCAP stack’s choice of SCCP variant is configured via the local-sccp-address CGIN configuration property. A value of C7 indicates ITU SCCP, and a value of A7 indicates ANSI SCCP.

To configure the SGC, please see SCCP variant configuration.

The TCAP stack and the SGC must be configured for the same SCCP variant. Furthermore, use of ANSI SCCP also requires use of the ocss7.sgcs connectivity method as the legacy ocss7.urlList method does not support the advanced feature negotiation required to successfully configure for ANSI SCCP.

The table below shows the supported configurations. Each cell lists the connection methods supported for that combination; mismatched variants are not supported.

                        TCAP Stack local-sccp-address type
SGC sccp-variant        C7                           A7

ITU                     ocss7.urlList, ocss7.sgcs    (not supported)

ANSI                    (not supported)              ocss7.sgcs

ANSI Point Codes

Tip
ANSI SCCP point code configuration recommendations

Care must be taken when configuring ANSI SCCP point codes via the simple integer format supported by CGIN and the SGC: CGIN parses this value according to ANSI SCCP encoding standards, whereas the SGC parses it according to ANSI MTP/M3UA standards. The two standards differ in the order in which they process the network, cluster, and member fields.

Both CGIN and the SGC support specification of ANSI SCCP point codes in N-C-M (network-cluster-member) format, and it is recommended that this format be used in preference to the simple integer format.
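The ambiguity can be illustrated with a short sketch (Python, not OCSS7 code; the two field orderings shown are assumptions chosen for demonstration and are not claimed to be the exact encodings defined by the standards):

```python
# Illustrative only: the same 24-bit integer point code can decode to two
# different N-C-M values depending on the assumed field order, whereas the
# explicit N-C-M text format is unambiguous.

def parse_ncm(text):
    """Parse an ANSI point code given in explicit N-C-M format."""
    network, cluster, member = (int(part) for part in text.split("-"))
    return network, cluster, member

def int_as_ncm(value):
    """One possible reading of the integer: network in the high octet."""
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

def int_as_mcn(value):
    """The opposite field order: member in the high octet."""
    return value & 0xFF, (value >> 8) & 0xFF, (value >> 16) & 0xFF

pc = 0x010203  # 66051 as a simple integer
print(parse_ncm("1-2-3"))  # unambiguous: (1, 2, 3)
print(int_as_ncm(pc))      # (1, 2, 3) under one ordering
print(int_as_mcn(pc))      # (3, 2, 1) under the other
```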

Connection Heartbeat

The data connection between the TCAP stack and the SGC supports a configurable heartbeat consisting of:

  1. A heartbeat request/response message pair

  2. TCAP stack side timeout detection

  3. SGC side timeout detection

The heartbeat configuration method and behaviour depend on the combination of CGIN and SGC versions in use. The table below summarizes which method must be used for each combination:

                        CGIN 1.5.2.x    CGIN 1.5.3.x and newer
SGC 1.0.1.x             legacy          legacy
SGC 1.1.0.x and newer   legacy          stack

stack

This heartbeat configuration method requires CGIN 1.5.3 or newer, plus SGC 1.1.0 or newer.

The heartbeat period is configured in the TCAP stack by setting the ocss7.heartbeatPeriod property to the desired value (in milliseconds). This value is automatically communicated to the SGC. As a result the heartbeat is configured per RA entity and may differ between entities.

The TCAP stack sends a heartbeat ping message at the configured interval. On receipt of a ping the SGC will respond with a pong.

If the SGC does not receive a ping within 2 * ocss7.heartbeatPeriod ms of the previous ping (or of connection establishment), it will mark the connection as failed and close it.

If the TCAP stack does not receive a pong before sending its next ping (i.e. within ocss7.heartbeatPeriod ms), it will mark the connection as failed and close it.
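The two timeout rules above can be sketched as follows (illustrative Python, not OCSS7 code; the function and variable names are ours):

```python
# Sketch of the heartbeat timeout rules; all times are in milliseconds.

HEARTBEAT_PERIOD_MS = 10_000  # example value for ocss7.heartbeatPeriod

def sgc_marks_failed(now_ms, last_ping_ms, period_ms=HEARTBEAT_PERIOD_MS):
    """SGC side: the connection fails if no ping arrives within
    2 * heartbeatPeriod of the previous ping (or connection establishment)."""
    return (now_ms - last_ping_ms) > 2 * period_ms

def tcap_marks_failed(pong_seen_since_last_ping):
    """TCAP stack side: the connection fails if no pong was received
    before the next ping is due (i.e. within heartbeatPeriod)."""
    return not pong_seen_since_last_ping

print(sgc_marks_failed(now_ms=15_000, last_ping_ms=0))  # False: within 2 * period
print(sgc_marks_failed(now_ms=25_000, last_ping_ms=0))  # True: ping overdue
```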

legacy

This heartbeat configuration method must be used if either of CGIN 1.5.2 or SGC 1.0.1 is being used.

The heartbeat request/response message pair is enabled by setting the TCAP stack ocss7.heartbeatEnabled property to true. This configures the TCAP stack to send a heartbeat request. The SGC will respond to any heartbeat request with a heartbeat response message, regardless of SGC configuration. The frequency with which the heartbeat request message is sent (and therefore also the heartbeat response) is configured with the TCAP stack’s ocss7.heartbeatPeriod configuration parameter.

If heartbeats are enabled in the TCAP stack, then the TCAP stack will automatically perform timeout detection. If a heartbeat response message hasn’t been seen within a period of time equal to twice the heartbeat period (2 * ocss7.heartbeatPeriod) the connection will be marked as defective and closed, allowing the TCAP stack to select a new SGC data connection.

The SGC stack may also perform timeout detection; this is controlled by the SGC properties com.cts.ss7.commsp.heartbeatEnabled and com.cts.ss7.commsp.server.recvTimeout. When the heartbeatEnabled property is set to true the SGC will close a connection if a heartbeat request hasn’t been received from that TCAP stack in the last recvTimeout seconds.

Warning If the SGC stack is configured to perform timeout detection then every TCAP stack connecting to it must also be configured to generate heartbeats. If a heartbeat-disabled TCAP stack connects to a heartbeat-enabled SGC the SGC will close the connection after recvTimeout seconds, resulting in an unstable TCAP stack to SGC connection.
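For example, timeout detection on the SGC side might be enabled with an SGC.properties fragment along these lines (the recvTimeout value shown is illustrative, not a recommended or default setting):

```
# Enable legacy heartbeat timeout detection on the SGC side
com.cts.ss7.commsp.heartbeatEnabled=true
# Close the data connection if no heartbeat request has been received
# in the last recvTimeout seconds (value here is illustrative only)
com.cts.ss7.commsp.server.recvTimeout=30
```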

Upgrading the SGC and TCAP Stack

This section of the manual covers upgrading the SGC and the TCAP stack.

Upgrading the SGC

Up to four upgrade options are available depending on the SGC releases being upgraded from and to. The options provide differing levels of automation, service continuity and implementation complexity.

  • Automated Online Upgrade — automated upgrade using a single point code

  • Manual Online Upgrade — manual upgrade using a single point code

  • STP Redirection — manual upgrade method requiring two point codes and support from the network

  • Offline Upgrade — manual upgrade method requiring a single point code plus either an alternate site that traffic may be routed to, or a tolerance of complete loss of service during the upgrade window

Upgrading from OCSS7 3.0.0.x to OCSS7 3.0.0.x

Notable Changes

None — no patch release is available yet.

Upgrading from OCSS7 2.2.0.x to OCSS7 3.0.0.x

Notable Changes

Upgrading from OCSS7 2.1.0.x to OCSS7 2.2.0.x

Notable Changes

Support for ANSI T1.114 TCAP has been added:

  • Existing configurations will continue to work as previously.

  • No configuration changes are required to the SGC to use ANSI TCAP.

  • The TCAP stack must be reconfigured if ANSI TCAP is required. See ANSI TCAP for further information.

Upgrade Options

Upgrading from OCSS7 2.0.0.x to OCSS7 2.1.0.x

Notable Changes

Support for ATIS 1000112 (formerly ANSI T1.112) SCCP has been added:

  • Existing configurations will continue to work as previously.

  • Added sccp-variant and national configuration parameters to 'General Configuration'.

  • Default values for mss and muss in the create-dpc MML command have changed. See DPC configuration for details.

The TCAP stack may be configured to use ANSI SCCP by configuring an SCCP address of type A7 instead of C7. Both the SGC and the TCAP stack must be configured for the same SCCP variant. See ANSI SCCP for further information.

Upgrade Options

Upgrading from OCSS7 1.1.0.x to OCSS7 2.0.0.x

Notable Changes

Hazelcast has been upgraded from version 2.6.5 to version 3.7. This fixes a number of cluster stability issues. Members of a Hazelcast 3.7 cluster are unable to communicate with members of a Hazelcast 2.6.x cluster; it is therefore necessary to ensure that all cluster members are running either SGC 2.0.0.x or SGC 1.1.0.x, not a mix of the two.

The configuration file hazelcast.xml has changed substantially. The configuration should be reviewed, particularly for clusters with more than two members; see Hazelcast cluster configuration for details.

New SCCP and TCAP message statistics are available. Some statistics from the LocalSsnInfo JMX MBean (and associated display-info-localssninfo command line client command and SNMP counters) have been moved to the new SccpStats JMX MBean. See Statistics for details.

Upgrade Options

Automated Online Upgrade

This section of the documentation describes how to use Orca to perform upgrades and reversions of the SGC.

The documentation is broken down into several parts, as follows:

General Upgrade and Reversion Requirements

General requirements for performing cluster upgrade or reversion

Introduction to the SGC Upgrade Bundle

An introduction to the SGC upgrade bundle

Viewing the SGC Cluster State

How to view the state of the SGC cluster(s) installed on one or more hosts

Upgrading the SGC Cluster

How to perform an online upgrade of the SGC cluster

Reverting the SGC Cluster

How to perform an online revert of the SGC cluster

SGC Command Reference

The command reference for SGC related Orca operations

SGC Command Argument Reference

The command argument reference for SGC related Orca operations

General Upgrade and Reversion Requirements

The following are general requirements for both online upgrade and online reversion of the SGC.

  • The cluster must consist of at least two nodes.

    Note Single node clusters can still be upgraded or reverted via the online upgrade method, but there will be a period of service loss during the process.
  • Each cluster member must be configured such that it can operate as the only cluster member in the event of failure or administrative shutdown of all other cluster members. This is a standard SGC clustering requirement and is not specific to the upgrade or reversion process.

  • The backup-count must be correctly configured in the existing cluster’s hazelcast.xml configuration files. If this is not correctly specified, then the risk of catastrophic cluster failure exists. This risk is not specific to the upgrade or reversion process, but must be corrected before an online upgrade or reversion is attempted.

In addition, the following general requirements apply:

  • The newest (highest) SGC version must support online upgrade from the oldest (lowest) SGC version. This applies regardless of whether an upgrade or a revert is being performed: if it is possible to online upgrade from 3.0.0.0 to 3.0.0.1, it will also be possible to perform an online revert from 3.0.0.1 to 3.0.0.0. See the Online Upgrade Support Matrix for information on which SGC release combinations support this process.

  • The old and new installations must comply with the Recommended Installation Structure as documented in the OCSS7 Installation and Administration Guide.

  • Configuration files (such as sgcenv, SGC.properties) must be contained within the OCSS7 installation directory or sub-directory, not located elsewhere on the filesystem. The default SGC installation structure meets this requirement.

    Note The SGC’s log files, configured via the LOG_BASE property in sgcenv, may be located anywhere on the filesystem, including outside of the OCSS7 installation directory.

Introduction to the SGC Upgrade Bundle

The SGC upgrade bundle contains the components required to perform an upgrade of the OCSS7 SGC:

  • orca — the upgrade tool

  • packages/* — the packages required to upgrade the SGC

The upgrade bundle must be extracted prior to use.

Extracting the SGC Upgrade Bundle

The upgrade bundle must be extracted prior to first use:

$ unzip sgc-upgrade-bundle-3.0.0.1.zip
Archive:  sgc-upgrade-bundle-3.0.0.1.zip
 extracting: sgc-upgrade-bundle-3.0.0.1/README
 extracting: sgc-upgrade-bundle-3.0.0.1/core/__init__.py
 extracting: sgc-upgrade-bundle-3.0.0.1/core/command.py
 extracting: sgc-upgrade-bundle-3.0.0.1/core/constants.py
 extracting: sgc-upgrade-bundle-3.0.0.1/core/exceptions.py
 extracting: sgc-upgrade-bundle-3.0.0.1/core/host.py
 extracting: sgc-upgrade-bundle-3.0.0.1/core/logger.py
 extracting: sgc-upgrade-bundle-3.0.0.1/core/terminal.py
 extracting: sgc-upgrade-bundle-3.0.0.1/core/upgrade_info.py
 extracting: sgc-upgrade-bundle-3.0.0.1/core/utils.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/__init__.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/common.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/feature_script_diff.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/orca_migrate_helper.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/rem_helper.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/sgc/__init__.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/sgc/sgc_api.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/sgc/sgc_backup.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/sgc/sgc_common.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/sgc/sgc_config.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/sgc/sgc_hazelcast_xml.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/sgc/sgc_node.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/sgc/sgc_package.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/sgc/sgc_views.py
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/slee-data-migration-package.zip
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/slee-data-transformation-standalone.jar
 extracting: sgc-upgrade-bundle-3.0.0.1/helpers/standardize_paths.py
 extracting: sgc-upgrade-bundle-3.0.0.1/licenses/third-party-licenses.txt
 extracting: sgc-upgrade-bundle-3.0.0.1/orca
 extracting: sgc-upgrade-bundle-3.0.0.1/packages/ocss7-3.0.0.1.zip
 extracting: sgc-upgrade-bundle-3.0.0.1/packages/packages.cfg
 extracting: sgc-upgrade-bundle-3.0.0.1/resources/orca-version.properties
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/__init__.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/apply_patch.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/cleanup.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/cleanup_rem.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/import_feature_scripts.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/major_upgrade.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/migrate.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/minor_upgrade.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/multi_stage_operation.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/orca_workflow.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/patch_common.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/prepare.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/prepare_new_rhino.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/revert_patch.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/rhino_only_upgrade.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/rollback.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/rollback_rem.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/run.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sentinel_upgrade.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/__init__.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_abort_revert.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_abort_upgrade.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_backup.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_commands.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_complete_revert.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_complete_upgrade.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_constants.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_install.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_prepare.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_prune_backups.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_revert_cluster.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_revert_node.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_rollback_revert.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_rollback_upgrade.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_start_node.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_start_revert.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_start_upgrade.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_status.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_stop_node.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_upgrade_cluster.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_upgrade_node.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_utils.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_validators.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/sgc/sgc_workflow.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/standardize_paths.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/status.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/upgrade_common.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/upgrade_rem.py
 extracting: sgc-upgrade-bundle-3.0.0.1/workflows/workflow_decorator.py

Following extraction, change to the extracted directory:

$ cd sgc-upgrade-bundle-3.0.0.1/

Orca Basics

The basic pattern that all orca commands follow is:

$ ./orca --hosts <host_list> <subcommand> <subcommand_arguments>

The <host_list> argument is a mandatory comma-separated list of hosts containing SGC nodes. --hosts can also be replaced with the short form -H.

The <subcommand> argument is the SGC-specific subcommand to execute.

<subcommand_arguments> are zero or more arguments specific to the subcommand being executed.

Viewing the SGC Cluster State

The sgc-status subcommand may be used to display a summary of the SGC installations on one or more hosts. For example, to view the SGC installations on hosts vm1, vm2 and vm3:

$ ./orca -H vm1,vm2,vm3 sgc-status

Example output:

Host vm1
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-1
        [Stopped] PC1-1 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.0]
      * [Running] PC1-1 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.1]

Host vm2
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-2
        [Stopped] PC1-2 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.0]
      * [Running] PC1-2 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.1]

Host vm3
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-3
        [Stopped] PC1-3 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.0]
      * [Running] PC1-3 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.1]

The above example shows an installation containing one cluster, PC1, consisting of three nodes, PC1-1, PC1-2 and PC1-3. Each node has two SGC versions installed: 3.0.0.0 and 3.0.0.1. The asterisk next to version 3.0.0.1 indicates that this is the active installation.

Upgrading the SGC Cluster

Orca supports online upgrade of the SGC from version 3.0.0.x to select newer versions.

The OCSS7 installation to be upgraded must meet the documented general requirements.

The sgc-upgrade-cluster command is used to upgrade the SGC cluster.

There are two basic options for an upgrade:

  • Install a new SGC version from an installation package and copy the configuration files from the existing installation to the new. The installation package may either be a standalone installation package, or the package provided as part of the Orca SGC upgrade bundle.

  • Upgrade to a pre-installed and pre-configured SGC installation.

The automated upgrade process performs the following steps:

  • It makes a backup of the current running installation’s critical configuration files.

  • It places the cluster into UPGRADE_MULTI_VERSION mode.

  • (New SGC installation only) Installs the new SGC version on each node.

  • (New SGC installation only) Copies key configuration files from the old SGC to the new on each node.

  • Shuts down the old SGC and starts the new SGC. This is performed one node at a time, with a wait period between nodes in order to allow the cluster to perform the operations required to maintain cluster integrity.

  • Marks the upgrade as completed and places the cluster into NORMAL mode.

Upgrading to a New SGC Installation

Performing an upgrade where the new SGC is installed and configured as part of the upgrade requires either the --package-directory or the --sgc-package command line argument.

For example, to use the Orca SGC upgrade bundle supplied installation packages:

$ ./orca -H vm1,vm2,vm3 sgc-upgrade-cluster --package-directory packages/

This process installs the new SGC following the recommended installation structure and copies configuration files from the currently running SGC to the new installation. Then, a single node at a time, it stops the currently running node, updates the current symbolic link to point to the new SGC and starts the new node.

Alternatively, the user may use a standalone OCSS7 installation package:

$ ./orca -H vm1,vm2,vm3 sgc-upgrade-cluster --sgc-package /path/to/ocss7-3.0.0.1.zip
Tip If more than one cluster is installed on the target hosts, it will be necessary to specify the cluster to upgrade using the --cluster argument.

Upgrading to a Pre-Existing SGC Installation

To perform an upgrade that uses a pre-existing SGC installation, the --target-version command line argument is required.

The pre-existing SGC installation must be fully configured, as configuration files are not copied from the old installation to the new during this process.

For example:

$ ./orca -H vm1,vm2,vm3 sgc-upgrade-cluster --target-version 3.0.0.1

This process skips all installation and configuration steps. One by one, each node is stopped, has its current symbolic link updated to point to the target version, and is restarted.

Example Upgrade Using the Orca SGC Upgrade Bundle

This example is for a 3-node SGC cluster (PC1) consisting of:

  • Host vm1: PC1-1

  • Host vm2: PC1-2

  • Host vm3: PC1-3

Before starting, check the current status of the SGC cluster:

$ ./orca -H vm1,vm3,vm2 sgc-status
Host vm1
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-1
      * [Running] PC1-1 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.0]

Host vm3
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-3
      * [Running] PC1-3 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.0]

Host vm2
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-2
      * [Running] PC1-2 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.0]

If satisfied with the current cluster state, issue the upgrade command:

$ ./orca -H vm3,vm1,vm2 sgc-upgrade-cluster --package-directory packages/
Getting status for cluster PC1 from hosts [vm3, vm1, vm2]
No nodes specified with --nodes <nodes>, using auto-detected nodes: [u'PC1-3', u'PC1-1', u'PC1-2']
Backing up SGC cluster on nodes [u'PC1-3', u'PC1-1', u'PC1-2']
Running command on host vm3: orca_migrate_helper.py tmpzcmRnu=>{"function": "sgc_backup", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-3", "cluster_name": "PC1", "host": "vm3", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm3

Running command on host vm1: orca_migrate_helper.py tmpQ5lQ39=>{"function": "sgc_backup", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-1", "cluster_name": "PC1", "host": "vm1", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm1

Running command on host vm2: orca_migrate_helper.py tmp13Xs3g=>{"function": "sgc_backup", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-2", "cluster_name": "PC1", "host": "vm2", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm2

Starting SGC upgrade process on host PC1-3
Running command on host vm3: orca_migrate_helper.py tmpJDOH4d=>{"function": "sgc_start_upgrade", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-3", "cluster_name": "PC1", "host": "vm3", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm3

Refreshing cluster view post-operation
Getting status for cluster PC1 from hosts [vm3, vm1, vm2]
Preparing new SGC cluster on nodes [u'PC1-3', u'PC1-1', u'PC1-2']
Running command on host vm3: orca_migrate_helper.py tmpmKVEid=>{"function": "sgc_prepare", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-3", "cluster_name": "PC1", "host": "vm3", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm3

Running command on host vm1: orca_migrate_helper.py tmpVBwfZS=>{"function": "sgc_prepare", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-1", "cluster_name": "PC1", "host": "vm1", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm1

Running command on host vm2: orca_migrate_helper.py tmpfAjL3n=>{"function": "sgc_prepare", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-2", "cluster_name": "PC1", "host": "vm2", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm2

Refreshing cluster view post-operation
Getting status for cluster PC1 from hosts [vm3, vm1, vm2]
Upgrading SGC nodes in turn
Upgrading SGC node PC1-3.  This may take a couple of minutes.
Running command on host vm3: orca_migrate_helper.py tmpdmdejS=>{"function": "sgc_upgrade_node", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-3", "cluster_name": "PC1", "host": "vm3", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm3

Waiting 60 seconds for SGC cluster to redistribute data before upgrading the next node
Upgrading SGC node PC1-1.  This may take a couple of minutes.
Running command on host vm1: orca_migrate_helper.py tmp4ZuKGY=>{"function": "sgc_upgrade_node", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-1", "cluster_name": "PC1", "host": "vm1", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm1

Waiting 60 seconds for SGC cluster to redistribute data before upgrading the next node
Upgrading SGC node PC1-2.  This may take a couple of minutes.
Running command on host vm2: orca_migrate_helper.py tmpKDmSkk=>{"function": "sgc_upgrade_node", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-2", "cluster_name": "PC1", "host": "vm2", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm2

Refreshing cluster view post-operation
Getting status for cluster PC1 from hosts [vm3, vm1, vm2]
Completing SGC upgrade process on node PC1-3
Running command on host vm3: orca_migrate_helper.py tmpKpeaK7=>{"function": "sgc_complete_upgrade", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-3", "cluster_name": "PC1", "host": "vm3", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm3

Refreshing cluster view post-operation
Getting status for cluster PC1 from hosts [vm3, vm1, vm2]
Available actions:
  - sgc-backup
  - sgc-backup-prune
  - sgc-upgrade-start
  - sgc-upgrade-cluster
  - sgc-revert-start
  - sgc-revert-cluster
  - sgc-install
  - sgc-node-start
  - sgc-node-stop
  - sgc-status

And finally, re-check the cluster status:

$ ./orca -H vm1,vm3,vm2 sgc-status
Host vm1
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-1
        [Stopped] PC1-1 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.0]
      * [Running] PC1-1 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.1]

Host vm3
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-3
        [Stopped] PC1-3 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.0]
      * [Running] PC1-3 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.1]

Host vm2
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-2
        [Stopped] PC1-2 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.0]
      * [Running] PC1-2 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.1]

Reverting the SGC Cluster

Orca supports online reversion of the SGC to version 3.0.0.x from select newer versions.

The OCSS7 installation to be reverted must meet the documented general requirements.

The sgc-revert-cluster command is used to revert the SGC cluster.

There are two options for a reversion:

  • Install the replacement SGC version from an installation package and copy the configuration files from the existing installation to the replacement. The installation package may either be a standalone installation package, or the package provided as part of the Orca SGC upgrade bundle.

  • Revert to a pre-installed and pre-configured SGC installation.

The reversion process performs some pre-checks to ensure that the cluster is in an appropriate state and then reverts the cluster. This process includes:

  • Making a backup of the current running installation.

  • Placing the cluster into REVERT_MULTI_VERSION mode.

  • Optionally, installing the replacement SGC version on each node.

  • Optionally, copying key configuration files from the existing SGC to the replacement node.

  • Shutting down the current SGC and starting the replacement SGC. This is performed one node at a time, with a wait period between nodes in order to allow the cluster to perform the operations required to maintain normal operation.

  • Marking the reversion as completed and placing the cluster into NORMAL mode.

The reversion process takes several minutes per node to complete.

Reverting to a New SGC Installation

Performing a reversion where the replacement SGC is installed and configured as part of the reversion requires either the --package-directory or the --sgc-package command line argument.

For example, to use the Orca SGC upgrade bundle supplied installation packages:

$ ./orca -H vm1,vm2,vm3 sgc-revert-cluster --package-directory packages/

This process installs the replacement SGC following the recommended installation structure and copies configuration files from the currently running SGC to the replacement. Then, a single node at a time, it stops the currently running node, updates the current symbolic link to point to the replacement SGC and starts the replacement node.

Alternatively, the user may use a standalone OCSS7 installation package:

$ ./orca -H vm1,vm2,vm3 sgc-revert-cluster --sgc-package ocss7-3.0.0.0.zip
Tip If there is more than one SGC cluster installed on the target hosts, it will be necessary to specify the cluster name using the --cluster argument.

Reverting to a Pre-Existing SGC Installation

To perform a reversion that uses a pre-existing SGC installation, the --target-version command line argument is required.

The pre-existing SGC installation must be fully configured, as configuration files are not copied from the old installation to the replacement during this process.

For example:

$ ./orca -H vm1,vm2,vm3 sgc-revert-cluster --target-version 3.0.0.0

This process skips all installation and configuration steps. One by one, each node is stopped, has its current symbolic link updated to point to the target version, and is restarted.

Example Reversion

This example is for a 3-node SGC cluster (PC1) consisting of:

  • Host vm1: PC1-1

  • Host vm2: PC1-2

  • Host vm3: PC1-3

Before starting, check the current status of the SGC cluster:

$ ./orca -H vm1,vm3,vm2 sgc-status
Host vm1
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-1
        [Stopped] PC1-1 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.0]
      * [Running] PC1-1 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.1]

Host vm3
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-3
        [Stopped] PC1-3 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.0]
      * [Running] PC1-3 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.1]

Host vm2
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-2
        [Stopped] PC1-2 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.0]
      * [Running] PC1-2 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.1]

If satisfied with the current cluster state, issue the revert command:

$ ./orca -Hvm1,vm2,vm3 sgc-revert-cluster --target-version 3.0.0.0
Getting status for cluster PC1 from hosts [vm1, vm2, vm3]
No nodes specified with --nodes <nodes>, using auto-detected nodes: [u'PC1-1', u'PC1-2', u'PC1-3']
Backing up SGC cluster on nodes [u'PC1-1', u'PC1-2', u'PC1-3']
Running command on host vm1: orca_migrate_helper.py tmpT_chqd=>{"function": "sgc_backup", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-1", "cluster_name": "PC1", "host": "vm1", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm1

Running command on host vm2: orca_migrate_helper.py tmpi_qCVS=>{"function": "sgc_backup", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-2", "cluster_name": "PC1", "host": "vm2", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm2

Running command on host vm3: orca_migrate_helper.py tmpyvwQKT=>{"function": "sgc_backup", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-3", "cluster_name": "PC1", "host": "vm3", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm3

Starting SGC revert process on node PC1-1
Running command on host vm1: orca_migrate_helper.py tmpyAbsSv=>{"function": "sgc_start_revert", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-1", "cluster_name": "PC1", "host": "vm1", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm1

Refreshing cluster view post-operation
Getting status for cluster PC1 from hosts [vm1, vm2, vm3]
Using existing SGC installation version 3.0.0.0
Reverting SGC nodes in turn
Reverting SGC node PC1-1.  This may take a couple of minutes.
Running command on host vm1: orca_migrate_helper.py tmpClz8rt=>{"function": "sgc_revert_node", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-1", "cluster_name": "PC1", "host": "vm1", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm1

Waiting 60 seconds for SGC cluster to redistribute data before reverting the next node
Reverting SGC node PC1-2.  This may take a couple of minutes.
Running command on host vm2: orca_migrate_helper.py tmpxOS3vv=>{"function": "sgc_revert_node", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-2", "cluster_name": "PC1", "host": "vm2", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm2

Waiting 60 seconds for SGC cluster to redistribute data before reverting the next node
Reverting SGC node PC1-3.  This may take a couple of minutes.
Running command on host vm3: orca_migrate_helper.py tmpnslp63=>{"function": "sgc_revert_node", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-3", "cluster_name": "PC1", "host": "vm3", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm3

Refreshing cluster view post-operation
Getting status for cluster PC1 from hosts [vm1, vm2, vm3]
Completing SGC reversion process on node PC1-1
Running command on host vm1: orca_migrate_helper.py tmpERJGq2=>{"function": "sgc_complete_revert", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-1", "cluster_name": "PC1", "host": "vm1", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm1

Refreshing cluster view post-operation
Getting status for cluster PC1 from hosts [vm1, vm2, vm3]
Available actions:
  - sgc-backup
  - sgc-backup-prune
  - sgc-upgrade-start
  - sgc-upgrade-cluster
  - sgc-revert-start
  - sgc-revert-cluster
  - sgc-install
  - sgc-node-start
  - sgc-node-stop
  - sgc-status

And finally, re-check the cluster status:

$ ./orca -H vm1,vm3,vm2 sgc-status
Host vm1
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-1
      * [Running] PC1-1 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.0]
        [Stopped] PC1-1 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.1]

Host vm3
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-3
      * [Running] PC1-3 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.0]
        [Stopped] PC1-3 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.1]

Host vm2
SGC Clusters:
  Cluster PC1
    mode=NORMAL format=30000
    Node PC1-2
      * [Running] PC1-2 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.0]
        [Stopped] PC1-2 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.1]

SGC Command Reference

Upgrade related commands:

  • sgc-upgrade-start — starts the SGC upgrade process

  • sgc-upgrade-node — upgrades an SGC node or nodes to a pre-prepared target version

  • sgc-upgrade-complete — marks the SGC upgrade as completed

  • sgc-upgrade-abort — aborts an in-progress SGC upgrade

  • sgc-upgrade-rollback — rolls back an in-progress SGC upgrade

  • sgc-upgrade-cluster — performs a complete SGC upgrade in a single command

Reversion related commands:

  • sgc-revert-start — starts the SGC reversion process

  • sgc-revert-node — reverts an SGC node or nodes to a pre-prepared target version

  • sgc-revert-complete — completes the SGC reversion process

  • sgc-revert-abort — aborts an in-progress SGC reversion

  • sgc-revert-rollback — rolls back an in-progress SGC reversion

  • sgc-revert-cluster — performs a complete SGC revert in a single command

Maintenance:

  • sgc-install — installs an SGC node or nodes from a given OCSS7 installation package

  • sgc-prepare — prepares an SGC node or nodes for upgrade or reversion

  • sgc-backup — creates a backup of the essential configuration data of an SGC node or nodes

  • sgc-backup-prune — prunes the number of SGC backups on a host

  • sgc-node-start — starts an SGC node or nodes

  • sgc-node-stop — stops an SGC node or nodes

  • sgc-status — displays a summary of the status of SGC clusters and nodes on the given hosts

sgc-backup

Creates a backup of an SGC installation’s critical configuration files. This is not a full backup and should not replace a rigorous backup regime.

Mandatory Parameters

None.

Optional Parameters

  • --cluster  — the cluster to backup

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

  • --version  — the version to backup, otherwise the current version will be backed up

Example Usage

To backup the entire example cluster:

$ ./orca -H vm1,vm2,vm3 sgc-backup --cluster PC1 --nodes PC1-1,PC1-2,PC1-3

To create a backup of just PC1-1 and its version 3.0.0.0 installation:

$ ./orca -H vm1 sgc-backup --cluster PC1 --nodes PC1-1 --version 3.0.0.0

To backup all nodes belonging to the only cluster installed on vm2:

$ ./orca -H vm2 sgc-backup

sgc-backup-prune

Remove all but the most recent backups from the specified cluster, node(s) and version. The number of backups retained is controlled by the --retain parameter.

Mandatory Parameters

None.

Optional Parameters

  • --cluster  — the cluster to prune the backups on

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

  • --retain  — the number of backups to retain, if unspecified uses the default value (5)

Example Usage

To leave just the default number of backups for each node in the example cluster:

$ ./orca -H vm1,vm2,vm3 sgc-backup-prune --cluster PC1 --nodes PC1-1,PC1-2,PC1-3

To remove all backups from nodes PC1-1 and PC1-3 in the example cluster:

$ ./orca -H vm1,vm3 sgc-backup-prune --cluster PC1 --nodes PC1-1,PC1-3 --retain 0
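
The pruning behaviour can be illustrated with a small shell sketch. This is illustrative only, not how orca implements sgc-backup-prune, and the directory layout is invented:

```shell
# Illustrative sketch of backup pruning (not orca internals): keep the
# RETAIN most recent backup directories and delete the rest.
BACKUP_DIR=$(mktemp -d)
for n in 1 2 3 4 5 6 7; do
    mkdir "$BACKUP_DIR/backup-$n"   # seven example backups
done

RETAIN=5
# ls -1t lists newest first; tail skips the RETAIN newest entries
ls -1t "$BACKUP_DIR" | tail -n +$((RETAIN + 1)) | while read -r old; do
    rm -rf "$BACKUP_DIR/$old"
done

ls -1 "$BACKUP_DIR" | wc -l   # five backups remain
```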

sgc-install

Creates a new SGC installation with a minimal configuration.

The installation has the following parameters set in SGC.properties:

  • hazelcast.group

  • ss7.instance

The default hazelcast.xml is installed, with backup-count set to the value provided by the --backup-count parameter.

All other parameters and configuration files are set to the default values.

Warning sgc-install is only used to create a completely new SGC installation. To prepare an upgraded SGC instance for an online upgrade see sgc-prepare.

Mandatory Parameters

  • --cluster  — the name of the cluster that the new node(s) belong to

  • --nodes  — the name of the new nodes

  • One of the following parameters must be supplied:

    • --package-directory  — the path to the packages directory from an Orca SGC upgrade bundle containing a packages.cfg file.

    • --sgc-package  — the local path to the SGC installation package

Optional Parameters

  • --backup-count  — the value to be used for backup-count in hazelcast.xml. If unspecified backup-count will be set to one less than the number of nodes provided in the --nodes argument, or to one if only one node is provided.

  • --overwrite — overwrite any existing SGC installation at the target location. Default behaviour is to return an error if an installation already exists with this cluster, node and version.

Warning The default value of backup-count should only be used if the entire cluster is installed in a single sgc-install operation. In all other cases this value should be manually specified to be one less than the final cluster size.

Example Usage

To install the entire example cluster (cluster=PC1, nodes=PC1-1,PC1-2 and PC1-3) from scratch using OCSS7 3.0.0.1:

$ ./orca -H vm1,vm2,vm3 sgc-install --cluster PC1 --nodes PC1-1,PC1-2,PC1-3 --sgc-package /path/to/ocss7-3.0.0.1.zip

Or to install just node PC1-1 into cluster PC1 that will eventually contain 3 nodes:

$ ./orca -H vm1,vm2,vm3 sgc-install --cluster PC1 --nodes PC1-1 --sgc-package /path/to/ocss7-3.0.0.1.zip --backup-count 2

Tip The backup-count in hazelcast.xml can be altered manually later if required. Doing so requires a cluster restart if the cluster is running.
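
The default backup-count rule described above can be sketched as follows (the function name is illustrative, not part of orca):

```shell
# Sketch of the default backup-count rule: one less than the number of
# nodes being installed, with a minimum of one. Illustrative only.
default_backup_count() {
    node_count=$1
    if [ "$node_count" -le 1 ]; then
        echo 1
    else
        echo $((node_count - 1))
    fi
}

default_backup_count 3   # a three-node install defaults to backup-count 2
default_backup_count 1   # a single-node install defaults to backup-count 1
```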

sgc-node-start

Starts one or more SGC nodes.

This command pauses for 120s after starting each node to allow the node to join the SGC cluster and complete data redistribution.

Mandatory Parameters

None.

Optional Parameters

  • --cluster  — the cluster containing the node(s) to start

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

  • --ignore-state — do not perform the pre-checks associated with this operation

Example Usage

To start PC1-1 in the example cluster:

$ ./orca -H vm1 sgc-node-start --cluster PC1 --nodes PC1-1

To start all nodes in the example cluster:

$ ./orca -H vm1,vm2,vm3 sgc-node-start --cluster PC1

sgc-node-stop

Stops one or more SGC nodes.

Mandatory Parameters

None.

Optional Parameters

  • --cluster  — the cluster containing the node(s) to stop

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

  • --ignore-state — do not perform the pre-checks associated with this operation

Example Usage

To stop PC1-1 in the example cluster:

$ ./orca -H vm1 sgc-node-stop --cluster PC1 --nodes PC1-1

To stop all nodes in the example cluster:

$ ./orca -H vm1,vm2,vm3 sgc-node-stop --cluster PC1

sgc-prepare

Creates one or more new SGC installations and copies configuration from the existing active SGC instance(s) of the same nodes and cluster to the new installation.

sgc-prepare is typically used over sgc-install when performing an online upgrade from an older to a newer SGC version.

Mandatory Parameters

  • One of the following parameters must be supplied:

    • --package-directory  — the path to the packages directory from an Orca SGC upgrade bundle containing a packages.cfg file.

    • --sgc-package  — the local path to the SGC installation package

Optional Parameters

  • --cluster  — the name of the cluster that the new node(s) belong to

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

  • --overwrite — overwrite any existing SGC installation at the target location. Default behaviour is to return an error if an installation already exists with this cluster, node and version.

Example Usage

To prepare OCSS7 3.0.0.1 for the entire example cluster:

$ ./orca -H vm1,vm2,vm3 sgc-prepare --cluster PC1 --nodes PC1-1,PC1-2,PC1-3 --sgc-package /path/to/ocss7-3.0.0.1.zip

To re-run the prepare operation for just node PC1-3 in the example cluster:

$ ./orca -H vm3 sgc-prepare --cluster PC1 --nodes PC1-3 --sgc-package /path/to/ocss7-3.0.0.1.zip --overwrite

sgc-revert-abort

Aborts an in-progress SGC reversion.

Pre-requisites:

  • The cluster must be in REVERT_MULTI_VERSION mode.

  • All cluster members must be running the version of the SGC that was running prior to the sgc-revert-start command being issued.

  • At least one cluster member must be running.

Once complete the SGC cluster will be in NORMAL mode.

Mandatory Parameters

None.

Optional Parameters

  • --cluster  — the cluster to abort the reversion on

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

Example Usage

To abort the reversion of the example cluster:

$ ./orca -H vm1,vm2,vm3 sgc-revert-abort --cluster PC1 --nodes PC1-1,PC1-2,PC1-3

sgc-revert-cluster

Performs a complete SGC revert from the current version to the specified older version in a single command.

This command performs the operations of the following commands: sgc-backup, sgc-revert-start, sgc-revert-node (once per node) and sgc-revert-complete, plus sgc-prepare when an installation package is supplied.

Pre-requisites:

  • The cluster must be in NORMAL mode.

  • All cluster members must be specified using the --nodes parameter.

  • All cluster members must be running.

  • (Optional) If --target-version is specified, the target installation must exist on all nodes.

The reversion process takes several minutes per node.

Once completed, the cluster will be running the version specified with either --sgc-package or --target-version and be in NORMAL mode.

Mandatory Parameters

  • One of the following parameters must be specified:

    • --package-directory  — the path to the packages directory from an Orca SGC upgrade bundle containing a packages.cfg file.

    • --sgc-package — the local path to the SGC installation package.

    • --target-version — the pre-installed target version to revert to.

Optional Parameters

  • --cluster  — the cluster to perform the reversion on

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

  • --overwrite — overwrite any existing installation when the --sgc-package argument is provided.

Example Usage

To revert the cluster from version 3.0.0.1 to a pre-installed 3.0.0.0 version:

$ ./orca -H vm1,vm3,vm2 sgc-revert-cluster --cluster PC1 --target-version 3.0.0.0

Or to revert the cluster from version 3.0.0.1 to version 3.0.0.0 when 3.0.0.0 is not already installed, installing it from a package:

$ ./orca -H vm1,vm2,vm3 sgc-revert-cluster --cluster PC1 --sgc-package /path/to/ocss7-3.0.0.0.zip

sgc-revert-complete

Completes the reversion process on the specified cluster and nodes, placing the cluster into NORMAL mode.

Pre-requisites:

  • The cluster must be in REVERT_MULTI_VERSION mode.

  • All nodes must be running the target version.

  • All cluster members must be running.

Mandatory Parameters

None.

Optional Parameters

  • --cluster  — the cluster to perform the operation on

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

Example Usage

To complete the reversion of the example cluster:

$ ./orca -H vm1,vm2,vm3 sgc-revert-complete --cluster PC1

sgc-revert-node

Stops the specified nodes, replaces them with the older SGC installation, and starts the replacement node(s).

Pre-requisites:

  • The cluster must be in REVERT_MULTI_VERSION mode.

  • The nodes to be reverted must have a pre-prepared installation of the target version available.

  • The target version must be older than the current version.

  • All cluster members must be running.

If multiple nodes are to be reverted these will be processed one at a time, with a pause in between reversions. This wait period is necessary to allow time for the reverted node to rejoin the cluster and for the cluster to repartition itself.

Mandatory Parameters

  • --target-version  — the pre-installed target version to revert to

Optional Parameters

  • --cluster  — the cluster to perform the operation on

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

Example Usage

To revert the whole example cluster from OCSS7 3.0.0.1 to OCSS7 3.0.0.0:

$ ./orca -H vm1,vm2,vm3 sgc-revert-node --cluster PC1 --target-version 3.0.0.0

To revert just node PC1-2 in the example cluster:

$ ./orca -H vm2 sgc-revert-node --cluster PC1 --nodes PC1-2 --target-version 3.0.0.0

sgc-revert-rollback

Rolls back an in-progress SGC reversion, reinstating the specified SGC version.

Warning The user must take care to ensure that the specified version is the original version prior to the revert. The revert process does not 'remember' the previous version.

By default the rollback process is performed one node at a time to minimise the chance of service loss. If the --hard parameter is specified the entire cluster will be stopped, rolled back, and then restarted. This will result in a service outage.

Once complete the SGC cluster will be running the specified SGC version and be operating in NORMAL mode.

Mandatory Parameters

  • --target-version  — the SGC version to reinstate; this must be the version that was running before the reversion began

Optional Parameters

  • --cluster  — the cluster to roll back the reversion on

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

  • --hard — perform a hard rollback resulting in service loss instead of an online rollback

Example Usage

To rollback an in-progress revert (to version 3.0.0.0) to the original version (3.0.0.1):

$ ./orca -H vm1,vm2,vm3 sgc-revert-rollback --cluster PC1 --nodes PC1-1,PC1-2,PC1-3 --target-version=3.0.0.1

sgc-revert-start

Starts the SGC reversion process.

Once completed the cluster will be in REVERT_MULTI_VERSION mode.

Mandatory Parameters

  • --target-version  — the target SGC version to revert to. This SGC version must already be installed and configured.

Optional Parameters

  • --cluster  — the cluster to perform the operation on

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

Example Usage

To start the reversion process of the example cluster to SGC version 3.0.0.0 (from 3.0.0.1):

$ ./orca -H vm1,vm2,vm3 sgc-revert-start --cluster PC1 --target-version=3.0.0.0

sgc-status

Displays a summary of the SGC installations and their current status on the hosts given by the primary orca -H (--hosts) parameter.

Mandatory Parameters

None.

Optional Parameters

  • --cluster  — only display clusters with this name

  • --nodes  — only display this node/these nodes

Example Usage

Display the status of all clusters and nodes on hosts vm1, vm2 and vm3:

$ ./orca -H vm1,vm2,vm3 sgc-status

Display the status of nodes PC1-2 and PC1-3 (but not PC1-1) in cluster PC1:

$ ./orca -H vm1,vm2,vm3 sgc-status --cluster PC1 --nodes PC1-2,PC1-3

sgc-upgrade-abort

Aborts an in-progress SGC upgrade.

Pre-requisites:

  • The cluster must be in UPGRADE_MULTI_VERSION mode.

  • All cluster members must be running a version of the SGC whose native data format is equal to the current native format of the cluster.

  • At least one cluster member must be running.

Once complete the SGC cluster will be in NORMAL mode.

Mandatory Parameters

None.

Optional Parameters

  • --cluster  — the cluster to abort the upgrade on

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

Example Usage

To abort the upgrade of the example cluster:

$ ./orca -H vm1,vm2,vm3 sgc-upgrade-abort --cluster PC1

sgc-upgrade-cluster

Performs a complete SGC upgrade from the current version to the specified target version in a single command.

This command performs the operations of the following commands: sgc-backup, sgc-upgrade-start, sgc-upgrade-node (once per node) and sgc-upgrade-complete, plus sgc-prepare when an installation package is supplied.

Pre-requisites:

  • The cluster must be in NORMAL mode.

  • All cluster members must be specified using the --nodes parameter.

  • All cluster members must be running.

  • (Optional) If --target-version is specified, the target installation must exist on all nodes.

The upgrade process takes several minutes per node.

Once completed, the cluster will be running the version specified with either --sgc-package or --target-version and be in NORMAL mode.

Mandatory Parameters

  • One of the following parameters must be specified:

    • --package-directory  — the path to the packages directory from an Orca SGC upgrade bundle containing a packages.cfg file.

    • --sgc-package — the local path to the SGC installation package.

    • --target-version — the pre-installed target version to upgrade to.

Optional Parameters

  • --cluster  — the cluster to start the upgrade on

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

  • --overwrite — overwrite any existing installation when the --sgc-package argument is provided.

  • --ignore-state — do not perform the sanity checks associated with the node stopping and starting parts of this operation

Example Usage

To upgrade cluster PC1 from version 3.0.0.0 to 3.0.0.1 using the Orca SGC upgrade package, installing the new 3.0.0.1 nodes and copying the configuration from the existing nodes:

$ ./orca -H vm1,vm3,vm2 sgc-upgrade-cluster --cluster PC1 --package-directory packages/

To upgrade cluster PC1 from version 3.0.0.0 to 3.0.0.1 using the standalone OCSS7 installation ZIP file, again installing the new 3.0.0.1 nodes and copying the configuration from the existing nodes:

$ ./orca -H vm3,vm2,vm1 sgc-upgrade-cluster --cluster PC1 --sgc-package /path/to/ocss7-3.0.0.1.zip

Alternatively, to upgrade cluster PC1 from version 3.0.0.0 to a pre-installed and pre-configured version 3.0.0.1:

$ ./orca -H vm1,vm2,vm3 sgc-upgrade-cluster --cluster PC1 --target-version 3.0.0.1

sgc-upgrade-complete

Marks the SGC upgrade as completed.

Pre-requisites:

  • The cluster must be in UPGRADE_MULTI_VERSION mode.

  • All nodes must be running the same SGC version.

  • All cluster members must be running.

Once completed the SGC cluster will be in NORMAL mode.

Mandatory Parameters

None.

Optional Parameters

  • --cluster  — the cluster to perform the operation on

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

Example Usage

To complete the upgrade of the example cluster:

$ ./orca -H vm1,vm2,vm3 sgc-upgrade-complete --cluster PC1 --nodes PC1-1,PC1-2,PC1-3

sgc-upgrade-node

Stops the specified nodes, replaces them with the pre-prepared upgraded SGC installation, and starts the replacement node(s).

Pre-requisites:

  • The cluster must be in UPGRADE_MULTI_VERSION mode.

  • The nodes to be upgraded must have a pre-prepared installation of the target version available.

  • The target version must be newer than the current version.

  • All cluster members must be running.

If multiple nodes are to be upgraded these will be processed one at a time, with a wait in between upgrades. This wait period is necessary to allow time for the upgraded node to rejoin the cluster and for the cluster to repartition itself.

Mandatory Parameters

  • --target-version  — the pre-installed target version to upgrade to

Optional Parameters

  • --cluster  — the cluster to perform the operation on

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

Example Usage

To upgrade the whole example cluster from OCSS7 3.0.0.0 to OCSS7 3.0.0.1:

$ ./orca -H vm1,vm2,vm3 sgc-upgrade-node --cluster PC1 --target-version 3.0.0.1

To upgrade just node PC1-2 in the example cluster:

$ ./orca -H vm2 sgc-upgrade-node --cluster PC1 --nodes PC1-2 --target-version 3.0.0.1

sgc-upgrade-rollback

Rolls back an in-progress SGC upgrade, reinstating the specified SGC version.

Warning The user must take care to ensure that the specified version is the original version prior to the upgrade. The upgrade process does not 'remember' the previous version.

By default the rollback process is performed one node at a time to minimise the chance of service loss. If the --hard parameter is specified the entire cluster will be stopped, rolled back, and then restarted. This will result in a service outage.

Once complete the SGC cluster will be running the specified SGC version and be operating in NORMAL mode.

Mandatory Parameters

  • --target-version  — the SGC version to reinstate; this must be the version that was running before the upgrade began

Optional Parameters

  • --cluster  — the cluster to roll back the upgrade on

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

  • --hard — perform a hard rollback resulting in service loss instead of an online rollback

Example Usage

To rollback an in-progress upgrade of the example cluster to SGC version 3.0.0.0:

$ ./orca -H vm1,vm2,vm3 sgc-upgrade-rollback --cluster PC1 --nodes PC1-1,PC1-2,PC1-3 --target-version=3.0.0.0

sgc-upgrade-start

Begins the SGC upgrade process.

Pre-requisites:

  • The cluster must be in NORMAL mode.

  • All cluster members must be specified using the --nodes parameter.

  • At least one cluster member must be running.

  • One backup matching the current SGC configuration must have been taken using the sgc-backup command.

Once complete the SGC cluster will be in UPGRADE_MULTI_VERSION mode.

Mandatory Parameters

None.

Optional Parameters

  • --cluster  — the cluster to start the upgrade on

  • --nodes  — the nodes in the cluster to apply this operation to. If unspecified the nodes will be auto-detected.

Example Usage

$ ./orca -H vm1,vm2,vm3 sgc-upgrade-start --cluster PC1

SGC Command Argument Reference

--backup-count <count>

Use a backup-count of <count> instead of the automatically calculated value. <count> must be an integer value between 1 and 6. For example:

--backup-count 2

--cluster <name>

Specifies the cluster that the operation should be carried out on.

For commands that treat this parameter as optional, the cluster is auto-detected if it is not supplied. This means:

  • sgc-status: all clusters

  • Other commands: the only installed cluster. If there is more than one installed cluster the --cluster parameter must be included.

For example, to get the status of the cluster named test:

./orca -H vm1,vm2,vm3 sgc-status --cluster test

Or to stop the node(s) on the cluster test on host vm1:

./orca -H vm1 sgc-node-stop --cluster test

And to start all nodes belonging to the only cluster installed on host vm2:

./orca -H vm2 sgc-node-start
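
The auto-detection rule can be sketched as a small shell function (illustrative only; pick_cluster is not part of orca):

```shell
# Sketch of the --cluster auto-detection rule: if exactly one cluster is
# installed on the host, use it; otherwise an explicit --cluster is
# required. Illustrative only.
pick_cluster() {
    if [ "$#" -eq 1 ]; then
        echo "$1"
    else
        echo "error: more than one cluster installed; supply --cluster" >&2
        return 1
    fi
}

pick_cluster PC1               # prints PC1
pick_cluster PC1 PC2 || true   # fails: --cluster must be supplied
```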

--hard

By default the sgc-upgrade-rollback and sgc-revert-rollback commands perform an online rollback. Supplying the --hard option to either of these commands will result in the entire cluster being stopped, rolled back, and then restarted. Service will be lost during this process.

Use of the --hard option is only recommended where the cluster is unstable following the upgrade or rollback and a complete cluster restart is required.

--ignore-state

Causes the SGC start or stop operations to ignore the current SGC node state.

--nodes <nodes>

Specifies the nodes that the operation should be carried out on. For some commands this parameter is optional, and when not supplied by the user is taken to mean all nodes belonging to the specified cluster on the hosts specified by the accompanying -H (--hosts) parameter.

This is a comma-separated list of SGC node names, in the same order as the hosts (-H or --hosts) list. If multiple nodes exist on a single host, this can be specified as a quoted nested comma-separated list of node names.

For example, a configuration with:

  • Host vm1 has node PC1-1

  • Host vm2 has node PC1-2

  • Host vm3 has node PC1-3

will specify both the -H (--hosts) and --nodes parameters as:

-H vm1,vm2,vm3 --nodes PC1-1,PC1-2,PC1-3

Supposing that host vm3 gains a second node, PC1-4, such that the cluster now looks like:

  • Host vm1 has node PC1-1

  • Host vm2 has node PC1-2

  • Host vm3 has nodes PC1-3 and PC1-4

Then both the -H (--hosts) and --nodes parameters are specified as:

-H vm1,vm2,vm3 --nodes PC1-1,PC1-2,"PC1-3,PC1-4"

Note the quotation marks (") around the nodes specified for vm3.
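
The effect of the quoting can be demonstrated with a trivial shell function standing in for orca's argument handling (illustrative only):

```shell
# Each positional argument stands for the node list of one host; the
# quotes make "PC1-3,PC1-4" arrive as a single argument for vm3.
print_node_lists() {
    i=1
    for nodes in "$@"; do
        echo "host $i nodes: $nodes"
        i=$((i + 1))
    done
}

print_node_lists PC1-1 PC1-2 "PC1-3,PC1-4"
```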

--overwrite

If provided to the sgc-install, sgc-prepare, sgc-upgrade-cluster or sgc-revert-cluster command, allows any existing installation to be overwritten.

--package-directory <directory>

The local path to the packages directory from the Orca SGC upgrade pack. This directory must contain the Orca supplied packages.cfg file and the OCSS7 installation ZIP. For example:

--package-directory packages/

--retain <count>

Used only by the sgc-backup-prune command. Specifies the number of backups to retain following the prune operation. <count> must be an integer greater than or equal to 0. For example:

--retain 3

--sgc-package <package>

The local path to the package containing the SGC installation to use as the target installation for a revert, upgrade or install action. For example:

--sgc-package /path/to/ocss7-3.0.0.1.zip

--target-version <version>

The pre-installed SGC version to upgrade or revert to. For example:

--target-version 3.0.0.1

This SGC must have been installed following the recommended installation structure.

--version <version>

The SGC version to apply an operation to, such as sgc-backup or sgc-backup-prune. For example:

--version 3.0.0.0

Manual Online Upgrade

Warning This procedure applies to upgrading from SGC 3.0.0.x to a newer SGC. It cannot be used to upgrade to SGC 3.0.0.x from either the 1.x or 2.x series of SGCs. See the Online Upgrade Support Matrix for the exact release combinations that support online upgrade.

This section describes the process required to perform an online manual upgrade. During this process the cluster remains in service and, provided that the connected TCAP stacks are using the ocss7.sgcs connection method, calls will fail over from one SGC to another as required.

Note Failover cannot be guaranteed to be 100% successful as failover for any given dialog or message is highly dependent on timing. For example, a message queued in an SGC at the SCTP layer will be lost if that SGC is terminated prior to transmission.

Manual Upgrade Procedure

  1. Backup the cluster.

  2. Prepare the replacement nodes:

    1. Install each replacement cluster member following the recommended installation structure.

    2. Do not copy configuration files from the existing to the new installation yet.

  3. Issue the CLI command: start-upgrade. This checks pre-requisites and places the cluster into UPGRADE mode. In this mode:

    • Calls continue to be processed.

    • Newer SGC versions may join the cluster, provided they are backwards compatible with the current cluster version.

    • Configuration changes will be rejected.

  4. Copy configuration (config/* and var/*) from the original cluster members to the replacement cluster members. This step must be carried out after executing start-upgrade to provide full resilience during the upgrade procedure.

  5. Upgrade the first cluster member:

    1. Stop the original node: $ORIGINAL_SGC_HOME/bin/sgc stop

    2. Verify that the original node has come to a complete stop by checking its logs and the process list.

    3. Start the replacement node: $REPLACEMENT_SGC_HOME/bin/sgc start

    4. Verify that the replacement node has started and successfully joined the cluster. The CLI command display-info-nodeversioninfo can be used to view the current cluster members.

    5. Wait for 2-3 minutes to allow the cluster to redistribute shared data amongst all of the members.

  6. Repeat the previous step for each of the remaining cluster members. This must be performed one node at a time.

  7. Issue the CLI command: complete-upgrade. This checks pre-requisites, then performs the actions required to leave UPGRADE mode.

  8. Verify that the cluster has completed the upgrade. The CLI commands display-info-nodeversioninfo and display-info-clusterversioninfo may be used to verify this.
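
Steps 5 and 6 above can be sketched as a per-node loop. This is a sketch only: host names and paths are examples, RUN=echo gives a dry run, and the verification steps from the procedure are reduced to comments:

```shell
# One-node-at-a-time upgrade loop from the manual procedure above.
# RUN defaults to ssh; set RUN=echo to dry-run the commands locally.
RUN="${RUN:-ssh}"

upgrade_node() {
    host=$1; original_home=$2; replacement_home=$3
    $RUN "$host" "$original_home/bin/sgc stop"
    # Real procedure: verify the stop via the logs and the process list
    $RUN "$host" "$replacement_home/bin/sgc start"
    # Real procedure: verify cluster membership with
    # display-info-nodeversioninfo, then wait 2-3 minutes for data
    # redistribution before moving to the next node
}

# Dry run across the example cluster (example paths):
RUN=echo
for host in vm1 vm2 vm3; do
    upgrade_node "$host" "$HOME/ocss7/old" "$HOME/ocss7/new"
done
```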

Rolling Back An In-Progress Manual Upgrade

  • Before start-upgrade was issued:

    • (Optional) Delete the installation directories for the (unused) replacement cluster members.

  • After complete-upgrade:

    • This is a revert operation, not a rollback.

  • After start-upgrade and before complete-upgrade:

    1. For every cluster member that is running the replacement SGC version, ONE AT A TIME:

      1. Stop the SGC: $REPLACEMENT_SGC_HOME/bin/sgc stop

      2. Verify that the node has come to a complete halt by checking its logs and the process list.

      3. Start the original SGC: $ORIGINAL_SGC_HOME/bin/sgc start

      4. Verify that the original SGC has started and successfully joined the cluster. The CLI command display-info-nodeversioninfo can be used to view the current cluster members.

      5. Wait for 2-3 minutes to allow the cluster to redistribute shared data before proceeding to the next node.

    2. Once all nodes are running the original pre-upgrade version, complete the rollback by issuing the abort-upgrade CLI command.

Manual Online Revert of the Cluster

Warning This procedure applies to reverting from SGC 3.0.0.x to an older SGC 3.0.0.x release. It cannot be used to revert to a release prior to SGC 3.0.0.0.

This section describes the process required to perform an online manual revert. During this process the cluster remains in service and, provided that the connected TCAP stacks are using the ocss7.sgcs connection method, calls will fail over from one SGC to another as required.

Note Failover cannot be guaranteed to be 100% successful as failover for any given dialog or message is highly dependent on timing. For example, a message queued in an SGC at the SCTP layer will be lost if that SGC is terminated prior to transmission.

Manual Revert Procedure

  1. Backup the cluster.

  2. Prepare the replacement nodes. Note that each node must conform to the recommended installation structure. Either:

    1. Use the previous installation of the SGC version being reverted to;

    2. Restore a backup of the previous installation of the SGC version being reverted to; or

    3. Create a fresh installation of the SGC version being reverted to.

    Then, whichever option was used:

    4. Ensure that the current installation supports the version being reverted to. This can be verified by running the CLI command display-info-nodeversioninfo and verifying that the target version is listed in the supportedFormats column.

    5. Do not copy configuration files yet.

  3. Issue the CLI command: start-revert: target-format=$TARGET_CLUSTER_FORMAT. This command:

    • Checks pre-requisites and places the cluster into REVERT mode.

    • Converts live cluster data to the specified target data format.

    • Saves SGC configuration in the target data format (sgc.dat).

      While in this mode:

    • Calls continue to be processed.

    • Older SGC versions may join the cluster, provided that they are compatible with the target cluster version.

    • Configuration changes will be rejected.

  4. Ensure that the current cluster format is set to the target format. This can be verified by running the CLI command display-info-clusterversioninfo and ensuring that the currentClusterFormat column is set to the target version.

  5. Copy configuration (config/* and var/*) from the original cluster members to the replacement cluster members. This ensures that in the event of cluster failure the first node to restart initializes the correct configuration and not an empty configuration. This step may only be carried out once the cluster is in REVERT mode, as prior to this the configuration files may be saved in a newer format not understood by the target nodes.

  6. Revert the first cluster member:

    1. Stop the original node: $ORIGINAL_SGC_HOME/bin/sgc stop

    2. Verify that the original node has come to a complete stop by checking its logs and the process list.

    3. Start the replacement node: $REPLACEMENT_SGC_HOME/bin/sgc start

    4. Verify that the replacement node has started and successfully joined the cluster. The CLI command display-info-nodeversioninfo can be used to view the current cluster members.

    5. Wait for 2-3 minutes to allow the cluster to redistribute shared data amongst all of the members.

  7. Repeat the previous step for each of the remaining cluster members. This must be performed one node at a time.

  8. Issue the CLI command: complete-revert. This checks pre-requisites, then performs the actions required to leave REVERT mode.

  9. Verify that the cluster has completed the revert. The CLI commands display-info-nodeversioninfo and display-info-clusterversioninfo may be used to verify this.

Rolling Back An In-Progress Manual Revert

  • Before start-revert was issued:

    • (Optional) Delete the installation directories for the (unused) replacement cluster members.

  • After complete-revert:

    • Rollback is no longer possible. Returning to the newer version must be treated as a new upgrade operation, not a rollback.

  • After start-revert and before complete-revert:

    1. For every cluster member that is running the replacement SGC version, ONE AT A TIME:

      1. Stop the SGC: $REPLACEMENT_SGC_HOME/bin/sgc stop

      2. Verify that the node has come to a complete halt by checking its logs and the process list.

      3. Start the original SGC: $ORIGINAL_SGC_HOME/bin/sgc start

      4. Verify that the original SGC has started and successfully joined the cluster. The CLI command display-info-nodeversioninfo can be used to view the current cluster members.

      5. Wait for 2-3 minutes to allow the cluster to redistribute shared data before proceeding to the next node.

    2. Once all nodes are running the original pre-revert version, complete the rollback by issuing the abort-revert CLI command.

STP Redirection

This approach manages the upgrade externally to Rhino and OCSS7, and requires support from the STP and surrounding network, and in some configurations, support in the existing service. It can be used for all types of upgrade.

Prerequisites

Before upgrading using STP redirection, make sure that:

  • Inbound TC-BEGINs are addressed to a "logical" GT. The STP translates this GT to one-of-N "physical" addresses using a load-balancing mechanism. These "physical" addresses route to a particular cluster.

  • Optionally, the STP may rewrite the "logical" called party address in the TC-BEGIN to the "physical" address.

  • The STP is able to reconfigure the translation addresses in use at runtime.

  • The old and new clusters are assigned different "physical" addresses.

  • If the STP did not rewrite the "logical" called party address in the TC-BEGIN to the "physical" address, then the service must ensure that the initial responding TC-CONTINUE provides an SCCP Calling Party Address that is the "physical" address for the cluster that is responding.

  • Subsequent traffic is addressed to the "physical" address, following normal TCAP procedures.

Upgrade process

To upgrade using STP redirection:

  1. Set up the new clusters (Rhino and SGC) with a new "physical" address. Ensure that the new Rhino cluster has a different clusterID to the existing Rhino cluster. Similarly, the new SGC cluster must have a different Hazelcast cluster ID to the existing SGC cluster.

  2. Configure and activate the new clusters.

  3. Reconfigure the STP to include the new cluster’s physical address when translating the logical GT.

  4. Verify that traffic is processed by the new cluster correctly.

  5. Reconfigure the STP to exclude the old cluster’s physical address when translating the logical GT.

  6. Wait for all existing dialogs to drain from the old clusters.

  7. Halt the old clusters.

Offline Upgrade

Warning The offline upgrade process involves a period of complete outage for the cluster being upgraded.

The offline upgrade process allows for the upgrade of a cluster without the use of STP redirection or a second point code. This process involves terminating the existing cluster and replacing it with a new cluster.

Consequences of this approach include:

  • A complete service outage at the site being upgraded during the upgrade window.

  • In-progress dialogs will be terminated, unless the operator is able to switch new traffic to an alternate site and allow existing calls to drain prior to starting the upgrade.

This upgrade involves two phases, carried out sequentially: preparation and execution.

Preparation

The preparatory phase of the upgrade may be carried out in advance of the upgrade window, provided that no further configuration changes are made to the existing cluster between the start of the preparation phase and the completion of the execution phase.

Warning Any configuration changes applied to the SGC after preparation has started will not be migrated to the upgraded cluster.

The following operations should be carried out in the listed order:

1. Backup the Existing Cluster

Create a backup of the existing cluster. This ensures that it will be possible to reinstate the original cluster in the event that files from the original cluster are inadvertently modified or removed and it becomes necessary to revert or abort the upgrade.
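For a standard installation, a backup can be as simple as archiving each node's installation directory. The following is a minimal sketch only; a scratch directory with demo content stands in for the real $EXISTING_SGC_HOME, which in practice must point at the node's actual installation:

```shell
# Minimal backup sketch. In practice set EXISTING_SGC_HOME to the real
# node installation directory; a scratch directory stands in for it here.
EXISTING_SGC_HOME=$(mktemp -d)
mkdir -p "$EXISTING_SGC_HOME/config" "$EXISTING_SGC_HOME/var"
echo 'demo' > "$EXISTING_SGC_HOME/var/sgc.dat"

# Archive the whole installation directory, timestamped for later reference.
BACKUP="/tmp/sgc-backup-$(date +%Y%m%d-%H%M%S).tar.gz"
tar -czf "$BACKUP" -C "$(dirname "$EXISTING_SGC_HOME")" \
    "$(basename "$EXISTING_SGC_HOME")"
```

Keep the resulting archive somewhere outside the installation tree so that it survives any changes made during the upgrade.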

2. Install the Replacement Cluster

The following requirements apply to the installation of the replacement cluster:

  • The installation should follow the recommended installation structure.

  • The nodes in the new cluster must have the same names as the nodes in the original cluster.

  • It is strongly recommended that the new cluster has a different name to the old cluster.

Warning Failure to follow the recommended installation structure will result in a cluster that cannot be upgraded in future using the automated online upgrade method.
Warning Failure to keep the node names the same in both clusters will result in the replacement cluster having one or more unconfigured nodes.
Warning If the existing and replacement clusters have the same name and both clusters are allowed to run at the same time, there is a very high chance of node instability and data corruption.

3. Copy Configuration from the Existing Cluster to the Replacement Cluster

Note This guide assumes that the locations of the SGC’s configuration files have not been customized. If any locations have been customized these customizations must be honoured when copying the files.

For each node in the cluster:

  1. Copy config/sgcenv from the existing installation to the new:

    cp $EXISTING_SGC_HOME/config/sgcenv $OCSS7_ROOT/CLUSTER_NAME/NODE_NAME/ocss7-3.0.0.0/config/
  2. Copy config/SGC.properties from the existing installation to the new:

    cp $EXISTING_SGC_HOME/config/SGC.properties $OCSS7_ROOT/CLUSTER_NAME/NODE_NAME/ocss7-3.0.0.0/config/
  3. Copy config/log4j.xml from the existing installation to the new:

    cp $EXISTING_SGC_HOME/config/log4j.xml $OCSS7_ROOT/CLUSTER_NAME/NODE_NAME/ocss7-3.0.0.0/config/
  4. If present, copy config/hazelcast.xml from the existing installation to the new:

    cp $EXISTING_SGC_HOME/config/hazelcast.xml $OCSS7_ROOT/CLUSTER_NAME/NODE_NAME/ocss7-3.0.0.0/config/
  5. Copy var/sgc.dat from the existing installation to the new:

    cp $EXISTING_SGC_HOME/var/sgc.dat $OCSS7_ROOT/CLUSTER_NAME/NODE_NAME/ocss7-3.0.0.0/var/
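The five copy steps above can be consolidated into a single loop. The following is illustrative only: scratch directories with demo content stand in for the real existing and replacement installations, and hazelcast.xml is copied only when present, as described above:

```shell
# Illustrative consolidation of the per-file copy steps. Scratch
# directories stand in for the real existing and replacement installations.
EXISTING_SGC_HOME=$(mktemp -d)   # stands in for the old node install
NEW_SGC_HOME=$(mktemp -d)        # stands in for .../ocss7-3.0.0.0
mkdir -p "$EXISTING_SGC_HOME/config" "$EXISTING_SGC_HOME/var" \
         "$NEW_SGC_HOME/config" "$NEW_SGC_HOME/var"
for f in config/sgcenv config/SGC.properties config/log4j.xml var/sgc.dat; do
    echo demo > "$EXISTING_SGC_HOME/$f"
done

# Copy the mandatory files; hazelcast.xml is copied only when present.
for f in config/sgcenv config/SGC.properties config/log4j.xml \
         config/hazelcast.xml var/sgc.dat; do
    if [ -f "$EXISTING_SGC_HOME/$f" ]; then
        cp "$EXISTING_SGC_HOME/$f" "$NEW_SGC_HOME/$f"
    fi
done
```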

4. Verify the Configuration of the Replacement Cluster

a) Check that the Configuration Files Copied Correctly
Note If automated upgrade is required in the future certain requirements must be met in relation to the locations of sgc.dat, log4j.xml, hazelcast.xml, sgcenv and SGC.properties. If necessary, the locations of these files can be modified to meet these requirements now.

Ensure that the destination SGC installation contains the correct version of the copied files. This is best performed by examining the contents of each file via less:

$ less $OCSS7_ROOT/CLUSTER_NAME/NODE_NAME/ocss7-3.0.0.0/var/sgc.dat

Alternatively if the copied files have not been manually adjusted, md5sum can be used to verify that the destination file has the same checksum as the source file:

$ md5sum $EXISTING_SGC_HOME/var/sgc.dat
2f765f325db744986958ce20ccd9f162  $EXISTING_SGC_HOME/var/sgc.dat
$ md5sum $OCSS7_ROOT/CLUSTER_NAME/NODE_NAME/ocss7-3.0.0.0/var/sgc.dat
2f765f325db744986958ce20ccd9f162  $OCSS7_ROOT/CLUSTER_NAME/NODE_NAME/ocss7-3.0.0.0/var/sgc.dat
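The per-file checksum comparison can be scripted to cover all copied files at once. The following is a sketch only; scratch directories with identical demo content stand in for the real source and destination installations:

```shell
# Compare checksums of every copied file between the two installations.
# Scratch directories with demo content stand in for the real paths.
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir -p "$SRC/config" "$SRC/var" "$DST/config" "$DST/var"
for f in config/sgcenv config/SGC.properties config/log4j.xml var/sgc.dat; do
    echo demo > "$SRC/$f"
    cp "$SRC/$f" "$DST/$f"
done

# Report any file whose destination checksum differs from the source.
STATUS=ok
for f in config/sgcenv config/SGC.properties config/log4j.xml var/sgc.dat; do
    a=$(md5sum "$SRC/$f" | awk '{print $1}')
    b=$(md5sum "$DST/$f" | awk '{print $1}')
    if [ "$a" != "$b" ]; then
        STATUS=mismatch
        echo "MISMATCH: $f"
    fi
done
echo "$STATUS"
```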
b) Verify hazelcast.xml and backup-count

If $OCSS7_ROOT/CLUSTER_NAME/NODE_NAME/ocss7-3.0.0.0/config/hazelcast.xml did not exist, it should be installed and customized according to Hazelcast cluster configuration.

Warning Hazelcast’s backup-count property must be correctly set for the size of the cluster. Failure to adhere to this requirement may result in cluster failure.
c) Update SGC.properties

The sgc.tcap.maxPeers and sgc.tcap.maxMigratedPrefixes configuration properties have been removed. These should be removed from the replacement node’s SGC.properties file.

A new configuration property, sgc.tcap.maxTransactions, is available to configure the maximum number of concurrent transactions that may be handled by a single SGC. The default value should be reviewed and changed if necessary.
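The property changes above can be applied mechanically. The following sketch operates on a demo SGC.properties; the sed expressions assume the standard key=value layout, and the appended maxTransactions line simply makes the documented default of 1000000 explicit for review:

```shell
# Demo SGC.properties containing the two removed properties.
PROPS=$(mktemp)
cat > "$PROPS" <<'EOF'
sgc.tcap.maxPeers=100
sgc.tcap.maxMigratedPrefixes=50
sgc.worker.threads=32
EOF

# Drop the removed properties in place.
sed -i -e '/^sgc\.tcap\.maxPeers=/d' \
       -e '/^sgc\.tcap\.maxMigratedPrefixes=/d' "$PROPS"

# Make sgc.tcap.maxTransactions explicit so it can be reviewed and
# adjusted; 1000000 is the documented default.
grep -q '^sgc\.tcap\.maxTransactions=' "$PROPS" || \
    echo 'sgc.tcap.maxTransactions=1000000' >> "$PROPS"
```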

5. Backup the Replacement Cluster

Create a backup of the replacement cluster prior to starting the execution phase.

Execution

The execution phase should be carried out during a scheduled upgrade window. The preparation phase must have been completed prior to starting this phase.

Warning The execution phase involves a period of complete outage for the cluster being upgraded.

The execution phase is comprised of the following actions:

1. (Optional) Switch Traffic to an Alternate Site

Optionally, traffic may be switched to an alternate site.

How to do this is site specific and out of the scope of this guide.

2. Terminate the Existing Cluster

For each node in the existing cluster execute sgc stop:

$OCSS7_HOME/bin/sgc stop
Stopping processes:
    SGC:7989
    DAEMON:7974
Initiating graceful shutdown for [7989] ...
Sleeping for max 32 sec waiting for graceful shutdown to complete.
Graceful shutdown successful
Shutdown complete (graceful)
Note If the node has active calls the graceful shutdown may become a forced shutdown, resulting in active calls being terminated. This is a normal and expected consequence of an offline upgrade when calls have not been redirected and/or drained from the site to be upgraded.

And validate the state of the node using sgc status:

$OCSS7_HOME/bin/sgc status
SGC is down

3. Start the Replacement Cluster

Start each node in the replacement cluster using sgc start:

$OCSS7_ROOT/CLUSTER_NAME/NODE_NAME/ocss7-3.0.0.0/bin/sgc start
SGC starting - daemonizing ...
SGC started successfully

And validate the state of the node using sgc status:

$OCSS7_ROOT/CLUSTER_NAME/NODE_NAME/ocss7-3.0.0.0/bin/sgc status
SGC is alive

The CLI’s display-info-nodeversioninfo and display-info-clusterversioninfo commands may also be used to view the node and cluster status respectively. Also, display-node may be used to view configured nodes that are in the active state.

Note display-info-nodeversioninfo and display-info-clusterversioninfo are OCSS7 3.0.0.0+ only commands.

4. Verify Cluster Operation

It is strongly recommended that correct cluster operation is verified with either test calls or a very small number of live calls prior to resuming full operation.

The process of generating test calls or sending a small number of live calls to the cluster is unique to the site and therefore out of the scope of this guide.

5. (Optional) Resume Normal Traffic

If traffic was switched to an alternate site this traffic should be resumed.

The procedure for this is site specific and outside the scope of this guide.

Appendix A: SGC Properties

The following table contains a description of the configuration properties that may be set in SGC.properties:

Property What it specifies Default

com.cts.ss7.commsp.heartbeatEnabled

enables or disables the heartbeat timeout mechanism in the SGC. If this is enabled in the SGC, the TCAP stack must also be configured to send heartbeats; otherwise the connection between the TCAP stack and SGC will be marked as timed out after com.cts.ss7.commsp.server.recvTimeout seconds, resulting in an unstable TCAP stack to SGC connection. See Data Connection Heartbeat Mechanism for details

true

com.cts.ss7.commsp.server.handshake.recvTimeout

timeout (in seconds) waiting for a handshake between the SGC and TCAP stack

2

com.cts.ss7.commsp.server.recvTimeout

how many seconds the Peer waits without receiving any data before closing the connection

Value must be greater than the heartbeat period configured in the TCAP stack, see Data Connection Heartbeat Mechanism for details.

11

com.cts.ss7.commsp.server.sendQueueCapacity

capacity of the Peer sending queue

1024

com.cts.ss7.commsp.tcpNoDelay

whether to disable the Nagle algorithm for the connection between the SGC and TCAP stack

true

com.cts.ss7.commsw.client.letSystemChooseSourceAddress

whether to ignore the configured switch-local-address when establishing a client intra-cluster-communication (comm switch module) connection; if true, the OS is responsible for choosing an appropriate source address

switch-local-address is an attribute of the node configuration object

false

com.cts.ss7.commsw.client.tcpNoDelay

whether to disable the Nagle algorithm in intra-cluster communication client mode (comm switch module)

true

com.cts.ss7.commsw.client.threads

number of threads serving connections (client mode) from other SGC nodes; for intra-cluster communication (comm switch module)

Each thread requires three File Descriptors. The recommended value is one less than the number of nodes in the cluster.

1

com.cts.ss7.commsw.server.acceptor.threads

number of threads accepting connections from other SGC nodes; for intra-cluster communication (comm switch module)

Each thread requires three File Descriptors.

1

com.cts.ss7.commsw.server.tcpNoDelay

whether to disable the Nagle algorithm in intra-cluster communication server mode (comm switch module)

true

com.cts.ss7.commsw.server.threads

number of threads serving connections (server mode) from other SGC nodes; for intra-cluster communication (comm switch module)

Each thread requires three File Descriptors. The recommended value is one less than the number of nodes in the cluster.

1

com.cts.ss7.commsw.so_rcvbuf

size of the socket receive buffer (in bytes) used by intra-cluster communication (comm switch module)

Use a value <= 0 for OS default.

-1

com.cts.ss7.commsw.so_sndbuf

size of the socket send buffer (in bytes) used by intra-cluster communication (comm switch module)

Use a value <= 0 for OS default.

-1

com.cts.ss7.config.save-delay

how long to wait, in milliseconds, after a configuration update has been made before the updated config is saved.

This delay improves the speed of batched config imports. A value of 0 or less will disable this feature.

1000

com.cts.ss7.peer.data.acceptor.threads

number of threads accepting data connections from TCAP stack instances

Each thread requires three File Descriptors.

2

com.cts.ss7.peer.data.service.threads

number of threads serving data connections from TCAP stack instances

Each thread requires three File Descriptors. The recommended value should be equal to the number of TCAP stack instances served by the cluster.

2

com.cts.ss7.peer.http.acceptor.threads

number of threads accepting connections from TCAP stack instances requesting balancing information

1

com.cts.ss7.peer.http.service.threads

number of threads serving accepted connections from TCAP stack instances requesting balancing information

1

com.cts.ss7.shutdown.gracefulOnUncaughtEx

whether to try a graceful shutdown first (value true) when shutting down because of an uncaught exception

true

com.cts.ss7.shutdown.gracefulWait

how long to wait (in milliseconds) during graceful shutdown, before forcing a shutdown

30000

com.cts.ss7.shutdown.ignoreUncaughtEx

whether an uncaught exception should result in instance shutdown (value false), or just be logged (value true)

Severe errors (java.lang.Error) always result in shutdown without trying a graceful shutdown first.

false

hazelcast.config.file

optional path to the Hazelcast config file

none, but the default SGC.properties file sets this to config/hazelcast.xml

hazelcast.group

cluster group to which this instance belongs

STANDALONE_{random hexadecimal string}

sgc.alarming.factory.class

implementation of the alarming factory

com.cts.ss7.alarming.impl.AlarmingFactoryImpl

sgc.alarming.maxHistoryAge

maximum alarming history age (in minutes)

1440

sgc.data.dir

the directory path in which the SGC data file is stored

current working directory at the time of startup

sgc.data.file

the SGC data file name

sgc.dat

sgc.ignore_config_tampering

whether the SGC should ignore validation of the MD5 signature of the XML configuration file

false

sgc.ind.pool_size

maximum number of inbound messages that may be processing or waiting to be processed

10000

sgc.peer.pendingConnectionTTL

how long (in seconds) the SGC should consider a connection pending after peer node allocation, but before actual connection (after this time, the SGC assumes that the connection will not happen)

This value is used only by the Legacy 'ocss7.urlList' Connection Method to assist with balancing TCAP stacks amongst SGCs.

If it is set too small then under certain failure conditions it may result in a TCAP stack continually trying to reconnect to an SGC data port that it cannot reach.

The suggested value for this property is (total_TCAP_stacks_connections * (total_SGCs_in_cluster - 1) + 1) * 4 + safety_factor.

The default value allows for 13 TCAP stacks and 2 SGCs under worst case failure conditions with a 4 second safety factor. The safety factor allows for network latency and timer precision.

60
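The suggested formula can be checked with a quick calculation. Using the worst-case figures quoted above (13 TCAP stacks, 2 SGCs, and a 4-second safety factor), it reproduces the default value of 60:

```shell
# Worked example of the suggested sgc.peer.pendingConnectionTTL formula:
# (total_TCAP_stacks_connections * (total_SGCs_in_cluster - 1) + 1) * 4
#   + safety_factor
TCAP_STACKS=13   # total TCAP stack connections (worst case from the text)
SGCS=2           # SGCs in the cluster
SAFETY=4         # seconds of safety factor
TTL=$(( (TCAP_STACKS * (SGCS - 1) + 1) * 4 + SAFETY ))
echo "$TTL"      # prints 60, matching the default
```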

sgc.req.pool_size

maximum number of outbound messages that may be processing or waiting to be processed

10000

sgc.tcap.maxTransactions

maximum number of concurrent transactions that this SGC may process

The MAX_HEAP_SIZE parameter must be configured accordingly. For details, please see Configuring SGC_HOME/config/sgcenv.

1000000

sgc.upgradepacks.dir

The path where the SGC upgrade packs are stored. If the provided path is relative, then this is relative to SGC_HOME.

lib/upgrade-packs

sgc.worker.queue

maximum number of tasks (messages to be processed) in one worker queue

256

sgc.worker.threads

number of threads used by the worker group to process inbound and outbound messages

32

snmp.data.bc.file

boot counter file name for snmp4j

sgc-snmp-bc.dat

snmp.data.file

file name for snmp4j persistent storage

sgc-snmp.dat

ss7.http.tcp_no_delay

whether the TCAP manager sets TCP_NODELAY (disabling the Nagle algorithm) for incoming connections

false

ss7.instance

name of the SGC instance within the cluster

I1

Appendix B: Support Requests

This document provides guidance on material to include with OCSS7 support requests.

Gathering Support Data

OCSS7 support requests should include the information described in the following sections.

Gathering Support Data from the SGC

The SGC installation includes a convenience script $SGC_HOME/bin/generate-report.sh that should be used to gather information for a support request. The information gathered by this script includes:

  • Information about the SGC version being used.

  • General system information, such as network interfaces, running processes, mounted filesystems.

  • The SGC’s general configuration files: $SGC_HOME/config/*

  • The SGC startup scripts: $SGC_HOME/bin/*

  • The SGC’s log files: $SGC_HOME/logs/*

  • The current state of the SGC as indicated by the various display-* command line client commands, if the SGC is running.

  • A thread dump from the SGC process, if the SGC is running.

  • The current saved MML configuration: $SGC_HOME/var/sgc.dat

This is the preferred method of gathering information for a support request, and this script is considered safe to use on a live SGC.

Configuring the generate-report.sh script

The generate-report.sh script is configured with some default values that may require changing for non-standard installations.

These values may be changed by opening the script in your preferred text editor and following the instructions in that script.

Warning Do not edit anything after the section marked "End of user-configurable section".
Property What it specifies Default

LOGLEVEL

The level of logging that the generate-report.sh script should emit.

0 = silent, 1 = basic information (default), 2 = debug, 3 = more debug

1

SGC_HOME

Where the SGC is installed.

The parent directory of the generate-report.sh script.

LOG_BASE

Where the SGC’s ss7.log and startup.log are saved.

$SGC_HOME/logs

JMX_HOST

The JMX hostname to use to connect to the SGC to gather runtime status information.

Auto-detected from $SGC_HOME/config/sgcenv

JMX_PORT

The JMX port to use to connect to the SGC to gather runtime status information.

Auto-detected from $SGC_HOME/config/sgcenv

NUM_DAYS

How many days of SGC logging to retrieve.

generate-report.sh will retrieve up to this number of days of logging, provided that there is an acceptable amount of disk space available.

The script is conservative and will always ensure that there is at least 1GB or 10% (whichever is the larger) disk space remaining after gathering logs.

30

tar

The tar command to use to package up the generated report.

By default tar is run at a low priority to minimise any possible impact on system performance.

nice -n 19 tar
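The disk-space guard described under NUM_DAYS (keep at least 1GB or 10% of the filesystem free, whichever is larger) amounts to the following calculation. This is a sketch of the policy only, not the script's actual code:

```shell
# Compute the minimum free space (in KiB) that log gathering should
# preserve: the larger of 1 GB and 10% of the filesystem's total size.
total_kb=$(df -Pk . | awk 'NR==2 {print $2}')
ten_percent_kb=$(( total_kb / 10 ))
one_gb_kb=$(( 1024 * 1024 ))
if [ "$ten_percent_kb" -gt "$one_gb_kb" ]; then
    min_free_kb=$ten_percent_kb
else
    min_free_kb=$one_gb_kb
fi
echo "$min_free_kb"
```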

Executing the generate-report.sh script

One node at a time, for each node in the cluster, execute the script (this may take a few minutes):

cd $SGC_HOME
./bin/generate-report.sh

Example output where LOGLEVEL is set to 1 (default) and the SGC is running looks like:

ocss7@badak:~/testing/PC2-1$ ./bin/generate-report.sh
    Initializing...
      - Using SGC_HOME /home/ocss7/testing/PC2-1 (if this is not correct, override in generate-report.sh)
    Generating report...
      - Getting general system information
      - Getting info from /proc subsystem for SGC pid 28229
      - Getting configuration files
      - Getting SGC scripts
      - Getting SGC logs
    tar: Removing leading `/' from member names
      - Getting runtime state from JMX
      - Getting thread dump from SGC PC2-1 (pid=28229)
      - Getting runtime SGC configuration
    Cleaning up...

    Report written to /home/ocss7/testing/PC2-1/ocss7-report-PC2-1-2017-12-18_103720.tar
    *** Note that this report is not compressed.  You may compress it with bzip2 or xz if you wish. ***

If the SGC is not running, the output may look like this:

ocss7@badak:~/testing/PC1-1$ ./bin/generate-report.sh
Initializing...
  - Using SGC_HOME /home/ocss7/testing/PC1-1 (if this is not correct, override in generate-report.sh)
[WARN]: The SGC is not running, limited reports will be generated.
Generating report...
  - Getting general system information
  - Not getting /proc/ info for process (SGC is not running)
  - Getting configuration files
  - Getting SGC scripts
  - Getting SGC logs
tar: Removing leading `/' from member names
  - Not getting runtime state from JMX (SGC is not running)
  - Not getting thread dump (SGC is not running)
  - Getting runtime SGC configuration
Cleaning up...

Report written to /home/ocss7/testing/PC1-1/ocss7-report-PC1-1-2017-12-18_090911.tar
*** Note that this report is not compressed.  You may compress it with bzip2 or xz if you wish. ***
Tip As indicated by the script, it may be desirable to compress the resulting tar file. We recommend using nice if performing this on the SGC host so as not to compromise node performance.

Please verify the contents of this file before uploading, especially if SGC components are in non-standard locations. In particular, ensure that ss7.log- and startup.log- files are present.
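The verification can be scripted by listing the archive contents. The following sketch runs against a demo archive it builds itself; in practice, substitute the real report file, which follows the ocss7-report-<node>-<timestamp>.tar naming shown in the examples above:

```shell
# Build a demo report archive, then verify it contains the expected logs.
WORK=$(mktemp -d)
mkdir -p "$WORK/logs"
touch "$WORK/logs/ss7.log-2017-12-18" "$WORK/logs/startup.log-2017-12-18"
REPORT="$WORK/ocss7-report-demo.tar"
tar -cf "$REPORT" -C "$WORK" logs

# Both log name patterns must appear in the archive listing.
tar -tf "$REPORT" | grep -q 'ss7\.log-' && \
tar -tf "$REPORT" | grep -q 'startup\.log-' && \
echo "report looks complete"
```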

Gathering Support Data from the OCSS7 TCAP Stack

The OCSS7 TCAP stack is an integral component of the CGIN RA. Please refer to Appendix C. Support Requests in the CGIN documentation for further details.

Appendix C: Online Upgrade Support Matrix

This appendix details the SGC versions that online upgrade and reversion may be applied to.

Online upgrade and reversion are symmetric operations. If it is possible to upgrade from release A to release B it will also be possible to revert from release B to release A.

                 Target Version
Source Version   1.x           2.x           3.0.0.x

1.x              Unsupported   Unsupported   Unsupported

2.x              Unsupported   Unsupported   Unsupported

3.0.0.x          Unsupported   Unsupported   Supported

Appendix D: Glossary of Acronyms

AS

Application Server

CLI

Command Line Interface

CPC

Concerned Point Code

CRUD

Create Remove Update Display

DPC

Destination Point Code

GT

Global Title

IPSP

IP Server Process

JDK

Java Development Kit

JKS

Java KeyStore

MML

Man Machine Language

SNMP

Simple Network Management Protocol

SG

Signalling Gateway

SPC

Signalling Point Code

SS7 SGC

Signalling System No. 7 Signalling Gateway Client

SSL

Secure Sockets Layer

SSN

SubSystem Number

USM

User Security Model