This section provides a step-by-step walk-through of basic OCSS7 setup, from unpacking the software packages through to running test traffic. The end result is a functional OCSS7 network suitable for basic testing. For production installations and installation reference material, please see Installing the SGC.

Introduction

In this walk-through we will be:

  • setting up two OCSS7 clusters, each with a single SGC node, and

  • running one of the example IN scenarios through the network using the Metaswitch Scenario Simulator.

To complete this walk-through you will need:

  • the OCSS7 SGC package,

  • the Metaswitch Scenario Simulator package, and

  • the IN Scenario Pack for the Scenario Simulator.

These instructions should be followed on a test system which:

  • runs Linux,

  • has SCTP support, and

  • is unlikely to be hampered by local firewall or other security restrictions.

Finally, you will need to make sure that the JAVA_HOME environment variable is set to the location of your Oracle Java JDK installation.
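If you are unsure whether your test system meets these requirements, a quick check along the following lines can help. This is only a sketch assuming a typical Linux distribution; SCTP may be built into your kernel rather than loaded as a module, in which case lsmod will show nothing even though SCTP is available.

$ lsmod | grep -i sctp        # look for kernel SCTP support (may be built in)
$ sudo modprobe sctp          # attempt to load the SCTP module if it is not already present
$ echo $JAVA_HOME             # should print the path of your JDK installation
$ $JAVA_HOME/bin/java -version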

The Plan

We will set up two clusters, each with a single node, both running on our single test system. At the M3UA level:

  • cluster 1 will use Point Code 1

  • cluster 2 will use Point Code 2

  • there will be one Application Server (AS) with routing context 2

  • there will be one SCTP association between the two nodes

We will test the network using two Metaswitch Scenario Simulators:

  • Simulator 1 will:

    • connect to cluster 1

    • use SSN 101

    • use GT 1234

  • Simulator 2 will:

    • connect to cluster 2

    • use SSN 102

    • use GT 4321

Routing between the two simulators will be via Global Title translation.

Once we think that the network is operational we will test it by running one of the example scenarios shipped with the IN Scenario Pack for the Scenario Simulator.

SGC installation

Naming Conventions

The cluster naming convention in this example uses PC followed by the point code. For example, a cluster whose point code is 1 will have a name of PC1.

The node naming convention lists the cluster name first, then a hyphen, followed by the node number within that SGC cluster. For example, PC1-1 or PC2-1. During this walk-through the number after the hyphen will always be 1, but this convention provides space to expand if you wish to add additional nodes after completing the walk-through.

Installation

We will now install two SGC clusters, each containing one node.

1

Create the root installation directory for the PC1 cluster and PC1-1 node:
mkdir -p PC1/PC1-1
Tip This installation structure follows the recommended Installation structure and creates an installation that is compatible with the automated upgrade tool, Orca.

2

Unpack the SGC archive file in the PC1/PC1-1 directory:
unzip ocss7-package-VERSION.zip

(replacing ocss7-package-VERSION.zip with the correct file name).

This creates the distribution directory, ocss7-X.X.X.X, in the current working directory.

Example:

$ unzip ocss7-package-3.0.0.0.zip
Archive:  ocss7-package-3.0.0.0.zip
   creating: ocss7-3.0.0.0/
  inflating: ocss7-3.0.0.0/CHANGELOG
  inflating: ocss7-3.0.0.0/README
   creating: ocss7-3.0.0.0/config/
   creating: ocss7-3.0.0.0/doc/
   creating: ocss7-3.0.0.0/license/
   creating: ocss7-3.0.0.0/logs/
   creating: ocss7-3.0.0.0/var/
  inflating: ocss7-3.0.0.0/config/SGC.properties
  inflating: ocss7-3.0.0.0/config/SGC_bundle.properties.sample
  inflating: ocss7-3.0.0.0/config/log4j.dtd
  inflating: ocss7-3.0.0.0/config/log4j.test.xml
  inflating: ocss7-3.0.0.0/config/log4j.xml
  inflating: ocss7-3.0.0.0/config/sgcenv
  inflating: ocss7-3.0.0.0/license/LICENSE.apache-log4j-extras.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.commons-cli.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.commons-collections.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.commons-lang.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.guava.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.hazelcast.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.jline.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.jsr305.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.log4j.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.netty.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.protobuf.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.slf4j.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.snmp4j.txt
  inflating: ocss7-3.0.0.0/license/LICENSE.velocity.txt
   creating: ocss7-3.0.0.0/bin/
  inflating: ocss7-3.0.0.0/bin/generate-report.sh
  inflating: ocss7-3.0.0.0/bin/sgc
  inflating: ocss7-3.0.0.0/bin/sgcd
  inflating: ocss7-3.0.0.0/bin/sgckeygen
  inflating: ocss7-3.0.0.0/sgc.jar
   creating: ocss7-3.0.0.0/lib/
  inflating: ocss7-3.0.0.0/lib/apache-log4j-extras-1.2.17.jar
  inflating: ocss7-3.0.0.0/lib/guava-14.0.1.jar
  inflating: ocss7-3.0.0.0/lib/hazelcast-3.7.jar
  inflating: ocss7-3.0.0.0/lib/jsr305-1.3.9.jar
  inflating: ocss7-3.0.0.0/lib/log4j-1.2.17.jar
  inflating: ocss7-3.0.0.0/lib/netty-buffer-4.0.28.jar
  inflating: ocss7-3.0.0.0/lib/netty-codec-4.0.28.jar
  inflating: ocss7-3.0.0.0/lib/netty-codec-http-4.0.28.jar
  inflating: ocss7-3.0.0.0/lib/netty-common-4.0.28.jar
  inflating: ocss7-3.0.0.0/lib/netty-handler-4.0.28.jar
  inflating: ocss7-3.0.0.0/lib/netty-transport-4.0.28.jar
  inflating: ocss7-3.0.0.0/lib/protobuf-java-2.3.0.jar
  inflating: ocss7-3.0.0.0/lib/protobuf-library-2.3.0.1.jar
  inflating: ocss7-3.0.0.0/lib/slf4j-api-1.7.25.jar
  inflating: ocss7-3.0.0.0/lib/slf4j-log4j12-1.7.25.jar
  inflating: ocss7-3.0.0.0/lib/snmp4j-2.2.2.jar
  inflating: ocss7-3.0.0.0/lib/snmp4j-agent-2.0.10a.jar
   creating: ocss7-3.0.0.0/lib/upgrade-packs/
  inflating: ocss7-3.0.0.0/lib/upgrade-packs/ocss7-upgrade-pack-3.0.0.0.jar
   creating: ocss7-3.0.0.0/cli/
  inflating: ocss7-3.0.0.0/cli/sgc-cli.sh
   creating: ocss7-3.0.0.0/cli/conf/
   creating: ocss7-3.0.0.0/cli/lib/
  inflating: ocss7-3.0.0.0/cli/conf/cli.properties
  inflating: ocss7-3.0.0.0/cli/conf/log4j.xml
  inflating: ocss7-3.0.0.0/cli/lib/commons-cli-1.2.jar
  inflating: ocss7-3.0.0.0/cli/lib/commons-collections-3.2.1.jar
  inflating: ocss7-3.0.0.0/cli/lib/commons-lang-2.6.jar
  inflating: ocss7-3.0.0.0/cli/lib/jline-1.0.jar
  inflating: ocss7-3.0.0.0/cli/lib/log4j-1.2.17.jar
  inflating: ocss7-3.0.0.0/cli/lib/ocss7-cli.jar
  inflating: ocss7-3.0.0.0/cli/lib/ocss7-remote-3.0.0.0.jar
  inflating: ocss7-3.0.0.0/cli/lib/slf4j-api-1.7.25.jar
  inflating: ocss7-3.0.0.0/cli/lib/slf4j-log4j12-1.7.25.jar
  inflating: ocss7-3.0.0.0/cli/lib/velocity-1.7.jar
  inflating: ocss7-3.0.0.0/cli/sgc-cli.bat
   creating: ocss7-3.0.0.0/doc/mibs/
  inflating: ocss7-3.0.0.0/doc/mibs/COMPUTARIS-MIB.txt
  inflating: ocss7-3.0.0.0/doc/mibs/CTS-SGC-MIB.txt
  inflating: ocss7-3.0.0.0/doc/mibs/OPENCLOUD-OCSS7-MIB.txt
  inflating: ocss7-3.0.0.0/config/hazelcast.xml.sample

3

Create the root installation directory for the PC2 cluster and PC2-1 node:
mkdir -p PC2/PC2-1

4

Unpack the SGC archive file in the PC2/PC2-1 directory:
unzip ocss7-package-VERSION.zip

(replacing ocss7-package-VERSION.zip with the correct file name).

This creates the distribution directory, ocss7-X.X.X.X, in the current working directory.

We now have two SGC nodes with no configuration. The next step is to set up their cluster configuration.
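At this point the directory layout should look something like the following (assuming version 3.0.0.0, as in the example output above):

$ find PC1 PC2 -maxdepth 2 -type d
PC1
PC1/PC1-1
PC1/PC1-1/ocss7-3.0.0.0
PC2
PC2/PC2-1
PC2/PC2-1/ocss7-3.0.0.0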

SGC cluster membership configuration

We will now configure cluster membership for our two SGC nodes/clusters:

  • the node name is specified by the ss7.instance parameter, and

  • the cluster name is specified by the hazelcast.group parameter.

Later on, during SS7 configuration, the ss7.instance value is used to specify which node in the cluster certain configuration elements (such as SCTP endpoints) are associated with.

1

Give node PC1-1 its identity

Edit the file PC1/PC1-1/ocss7-3.0.0.0/config/SGC.properties and set ss7.instance to PC1-1 and hazelcast.group to PC1. When you are done the file should look like this:

# SGC instance node name
ss7.instance=PC1-1
# Path to the Hazelcast config file
hazelcast.config.file=config/hazelcast.xml
# Default Hazelcast group name
hazelcast.group=PC1

#path where sgc data file should be stored
sgc.data.dir=var

2

Give node PC2-1 its identity

Edit the file PC2/PC2-1/ocss7-3.0.0.0/config/SGC.properties and set ss7.instance to PC2-1 and hazelcast.group to PC2. When you are done the file should look like this:

# SGC instance node name
ss7.instance=PC2-1
# Path to the Hazelcast config file
hazelcast.config.file=config/hazelcast.xml
# Default Hazelcast group name
hazelcast.group=PC2

#path where sgc data file should be stored
sgc.data.dir=var
Tip

For clusters with multiple nodes the hazelcast.group property must be set; see the installation reference. For single-node clusters this value is optional; if unconfigured, the node will create a unique group for itself. However, in order to add further members to the cluster later, the group must be set.

Starting the clusters

We will now start the two SGC clusters.

1

Check JAVA_HOME

Make sure your JAVA_HOME environment variable points to a supported Java installation. Example:

$ echo $JAVA_HOME
/opt/jdk-11

2

Change the management port for node PC1-1

Edit the file PC1/PC1-1/ocss7-3.0.0.0/config/sgcenv and change the JMX_PORT setting to 10111.

The JMX_PORT is the management port to which the command line management console will connect. It is not normally necessary to change this setting, but we must since we are running multiple nodes on a single system and they cannot both bind to the same port.

Tip Throughout this walk-through we will need a number of ports for different things. Any unique port numbers may be used, but it is helpful if there is some structure or pattern in place to help remember or calculate which port should be used in each situation. In this walk-through port numbers will be created as a concatenation of a purpose-specific prefix followed by the cluster number and the node number within that cluster. For management ports we’re adopting the prefix 101, therefore node PC1-1 has management port 10111, and node PC2-1 has management port 10121. If we were to add a second node to cluster PC2 later on we would assign it 10122 to use as its JMX_PORT.
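A simple way to confirm the JMX_PORT change for PC1-1 is to grep the file; assuming sgcenv uses the JMX_PORT=<port> assignment form shown in the shipped file, this should now report the new value:

$ grep JMX_PORT PC1/PC1-1/ocss7-3.0.0.0/config/sgcenv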

3

Start node PC1-1
./PC1/PC1-1/ocss7-3.0.0.0/bin/sgc start

If all is well, you should see:

SGC starting - daemonizing ...
SGC started successfully

4

Change the management port for node PC2-1

Edit the file PC2/PC2-1/ocss7-3.0.0.0/config/sgcenv and change the JMX_PORT setting to 10121.

The JMX_PORT is the management port to which the command line management console will connect. It is not normally necessary to change this setting, but we must since we are running multiple nodes on a single system and they cannot both bind to the same port.

5

Start node PC2-1
./PC2/PC2-1/ocss7-3.0.0.0/bin/sgc start

If all is well, you should see:

SGC starting - daemonizing ...
SGC started successfully

If the SGC start command reported any errors, please double-check your JAVA_HOME environment variable and make sure that nothing else has already bound the management ports 10111 and 10121. If these ports are already in use on your system you may simply change them to something else and make a note of the values for later use.
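One way to check whether anything is already listening on these ports is with ss (or netstat on older systems); no output from the following means the ports are free:

$ ss -ltn | grep -E ':(10111|10121) '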

Connect the management console

We now have two running OCSS7 clusters with blank configuration. The configuration we have done so far was done on a per-node basis using configuration files, but this does no more than give a node the minimal configuration it needs to boot and become a cluster member. The rest of our SGC configuration will now be done using the Command-Line Management Console. Configuration done in this manner becomes cluster-wide configuration which is automatically propagated to and saved by every other cluster node, although for our single-node clusters that detail will not be particularly relevant.

It is recommended that you start one management console per node for this walk-through; however, if your system is low on RAM you may wish to start and stop these consoles as required.

1

Start the management console for PC1-1
./PC1/PC1-1/ocss7-3.0.0.0/cli/sgc-cli.sh

Example:

$ ./PC1/PC1-1/ocss7-3.0.0.0/cli/sgc-cli.sh
Preparing to start SGC CLI ...
Checking environment variables
[JAVA_HOME]=[/opt/jdk-11]
[CLI_HOME]=[/home/ocss7/quick-start/PC1/PC1-1/ocss7-3.0.0.0/cli]
Environment is OK!
Determining SGC home and JMX configuration
[SGC_HOME]=/home/ocss7/quick-start/PC1/PC1-1/ocss7-3.0.0.0
[JMX_HOST]=127.0.0.1
[JMX_PORT]=10111
Done
+---------------------------Environment--------------------------------+
CLI_HOME: /home/ocss7/quick-start/PC1/PC1-1/ocss7-3.0.0.0/cli
JAVA: /opt/jdk-11
JAVA_OPTS:  -Dlog4j.configuration=file:/home/ocss7/quick-start/PC1/PC1-1/ocss7-3.0.0.0/cli/conf/log4j.xml -Dsgc.home=/home/ocss7/quick-start/PC1/PC1-1/ocss7-3.0.0.0/cli
+----------------------------------------------------------------------+
127.0.0.1:10111 PC1-1>

Here we can see the management console’s prompt, which identifies the node to which it is connected by host and port.

Tip The host and port settings were determined automatically by the CLI, which is possible because it is currently part of an SGC installation and can read the SGC configuration files. The CLI can also be copied out to elsewhere and run from another location or another host. When the CLI is not part of an SGC installation it is necessary to provide host and port options to the CLI on the command line.

2

Start the management console for PC2-1
./PC2/PC2-1/ocss7-3.0.0.0/cli/sgc-cli.sh

Example:

$ ./PC2/PC2-1/ocss7-3.0.0.0/cli/sgc-cli.sh
Preparing to start SGC CLI ...
Checking environment variables
[JAVA_HOME]=[/opt/jdk-11/]
[CLI_HOME]=[/home/ocss7/quick-start/PC2/PC2-1/ocss7-3.0.0.0/cli]
Environment is OK!
Determining SGC home and JMX configuration
[SGC_HOME]=/home/ocss7/quick-start/PC2/PC2-1/ocss7-3.0.0.0
[JMX_HOST]=127.0.0.1
[JMX_PORT]=10121
Done
+---------------------------Environment--------------------------------+
CLI_HOME: /home/ocss7/quick-start/PC2/PC2-1/ocss7-3.0.0.0/cli
JAVA: /opt/jdk-11/
JAVA_OPTS:  -Dlog4j.configuration=file:/home/ocss7/quick-start/PC2/PC2-1/ocss7-3.0.0.0/cli/conf/log4j.xml -Dsgc.home=/home/ocss7/quick-start/PC2/PC2-1/ocss7-3.0.0.0/cli
+----------------------------------------------------------------------+
127.0.0.1:10121 PC2-1>
Tip

The management console supports tab completion and suggestions. If you hit tab while in the console it will complete the command, parameter, or value as best it can. If the console is unable to complete the command, parameter, or value entirely because there are multiple completion choices then it will display the available choices.

Tip You can exit the management console either by pressing Ctrl-D or by entering the quit command.

General configuration

General configuration covers the settings that are fundamental to the cluster and the nodes within it. For our purposes this means:

  • setting the local Point Codes for the clusters,

  • setting the basic communication attributes of each node.

The basic communication attributes of each node are used to control:

  • payload message transfer between SGCs within the cluster; and

  • communication with client TCAP stacks running in Rhino or the Scenario Simulator.

Tip The distinction between clusters and nodes is about to become apparent because each cluster has exactly one local Point Code for which it provides services and which is set once for the entire cluster. In contrast, each node must be defined and given its own basic communication configuration.

1a

Set the Point Code for PC1-1’s cluster to 1

Within the management console for PC1-1 run:

modify-parameters: sp=1

Example:

127.0.0.1:10111 PC1-1> modify-parameters: sp=1
OK parameters updated.

1b

Set the Point Code for PC2-1’s cluster to 2

Within the management console for PC2-1 run:

modify-parameters: sp=2

Example:

127.0.0.1:10121 PC2-1> modify-parameters: sp=2
OK parameters updated.

2a

Configure node PC1-1’s basic communication attributes

Within the management console for PC1-1 run:

create-node: oname=PC1-1, switch-local-address=127.0.0.1, switch-port=11011, stack-data-port=12011, stack-http-port=13011, enabled=true

Example:

127.0.0.1:10111 PC1-1> create-node: oname=PC1-1, switch-local-address=127.0.0.1, switch-port=11011, stack-data-port=12011, stack-http-port=13011, enabled=true
OK node created.
Warning The value given to oname above must exactly match the value of ss7.instance which was set in SGC cluster membership configuration. If the values are different the running node will assume this configuration is not intended for it, but for some other cluster node.

This command configures network communication for:

  • message passing between SGC cluster members, through the attributes starting with switch-; and

  • communication with client TCAP stacks such as Rhino or the Scenario Simulator, through the attributes starting with stack-.

Tip There are two attributes not specified in this command, stack-data-address and stack-http-address, which control the addresses used for client TCAP stack communications. These have been left to default to the value of switch-local-address because we are running everything on a single system. A typical production installation would partition the two traffic types by having one value for switch-local-address, and a different value shared between stack-data-address and stack-http-address. See Network Architecture Planning and the node configuration attributes for details.
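For illustration only, a production-style variant of the command above might look like the following, with hypothetical addresses partitioning inter-node switch traffic from TCAP stack traffic (do not run this in the walk-through):

create-node: oname=PC1-1, switch-local-address=10.0.1.1, switch-port=11011, stack-data-address=10.0.2.1, stack-http-address=10.0.2.1, stack-data-port=12011, stack-http-port=13011, enabled=true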

2b

Configure node PC2-1’s basic communication attributes

Within the management console for PC2-1 run:

create-node: oname=PC2-1, switch-local-address=127.0.0.1, switch-port=11021, stack-data-port=12021, stack-http-port=13021, enabled=true
Tip This command differs from the previous command in that the ports have been carefully chosen not to conflict with those of the other SGC process. This is only necessary because all the SGC processes are running on a single test system in this walk-through. Typical production installations, where each host has only one SGC node running on it, could omit the ports entirely and use the default values on every node.

Example:

127.0.0.1:10121 PC2-1> create-node: oname=PC2-1, switch-local-address=127.0.0.1, switch-port=11021, stack-data-port=12021, stack-http-port=13021, enabled=true
OK node created.
Warning The value given to oname above must exactly match the value of ss7.instance which was set in SGC cluster membership configuration. If the values are different the running node will assume this configuration is not intended for it, but for some other cluster node.

Of the attributes we set above only the switch-local-address and stack-data-port settings are required for future configuration; we’ll use them when we get to the Scenario Simulator configuration section.

Tip The create-node command is discussed above in the context of configuring the basic communication attributes to be used, but it also creates a node configuration object which can be enabled or disabled, and whose current state can be seen using the display-node command. It has been presented this way because some per-node configuration must always be provided; if that were not the case, the SGC could simply detect and add cluster nodes automatically as they come online.

M3UA configuration

We will now begin configuring the M3UA layer of our network. There are a number of ways this can be done, but for the purposes of this walk-through we will use:

  • a single Application Server (AS) between the two instances,

  • the cluster for Point Code 1 as a client (in IPSP mode),

  • the cluster for Point Code 2 as a server (in IPSP mode), and

  • one SCTP association between the two nodes.

At a high level the procedure we’re about to follow will:

  • define the Application Server (AS) on each cluster,

  • define routes to our destination Point Codes through the defined AS,

  • define the SCTP connection on each node, and

  • associate the SCTP connection with the Application Server.

All the steps below are in two parts, the part "a" commands must be run on the management console connected to node PC1-1 and the part "b" commands must be run on the management console connected to node PC2-1. If this becomes confusing please check the examples given, which will indicate the correct management console by the port number in the prompt.

Tip Those familiar with M3UA will note that Single Exchange is used. The SGC does not support double exchange.

1a

Define the Application Server for PC1-1

Create the AS with traffic-maintenance-role=ACTIVE and Routing Context 2:

create-as: oname=PC2, traffic-maintenance-role=ACTIVE, rc=2, enabled=true

Example:

127.0.0.1:10111 PC1-1> create-as: oname=PC2, traffic-maintenance-role=ACTIVE, rc=2, enabled=true
OK as created.

1b

Define the Application Server for PC2-1

On PC2-1 note that traffic-maintenance-role=PASSIVE:

create-as: oname=PC2, traffic-maintenance-role=PASSIVE, rc=2, enabled=true

Example:

127.0.0.1:10121 PC2-1> create-as: oname=PC2, traffic-maintenance-role=PASSIVE, rc=2, enabled=true
OK as created.

2a

Define the local SCTP association’s endpoint for PC1-1
create-local-endpoint: oname=PC1-1-PC2-1, node=PC1-1, port=21121

Example:

127.0.0.1:10111 PC1-1> create-local-endpoint: oname=PC1-1-PC2-1, node=PC1-1, port=21121
OK local-endpoint created.

This defines a local endpoint which will be bound to SCTP port 21121.

Tip

Naming and oname

Every configuration object has an object name field called oname. These onames serve both as user documentation and the method of referring to other configuration objects, as seen here, where the node attribute refers to our node’s oname.

It is a good idea to plan a consistent and informative naming scheme before starting. In this walk-through several strategies are used:

  • the owning cluster name is used where a cluster as a whole owns an object (the AS is named PC2 because the cluster for PC 2 is acting as the server, and therefore has the greatest claim to ownership);

  • the owning node name is used where a specific node owns an object (each node is named for itself, for example);

  • where an object connects two things (like an SCTP association end point which will be used for outgoing connections) the name is the client node followed by the server node (for example oname=PC1-1-PC2-1 above)

2b

Define the local SCTP association’s endpoint for PC2-1
create-local-endpoint: oname=PC2-1, node=PC2-1, port=22100

This defines a local endpoint which will be bound to SCTP port 22100.

Example:

127.0.0.1:10121 PC2-1> create-local-endpoint: oname=PC2-1, node=PC2-1, port=22100
OK local-endpoint created.
Tip The oname here is the same as the node’s oname because the node owns this configuration object. In this walk-through we will only be using it to connect to PC1-1, but because we intend to have this node acting in the server role for SCTP we could reasonably expect more nodes from our peer cluster to connect to it, so it would be misleading to name it PC2-1-PC1-1 in the style used for step 2a.

3a

Define the local SCTP endpoint IP addresses for PC1-1

We will now define the IP address to be used by our SCTP association.

create-local-endpoint-ip: oname=PC1-1-PC2-1, ip=127.0.0.1, local-endpoint-name=PC1-1-PC2-1

Example:

127.0.0.1:10111 PC1-1> create-local-endpoint-ip: oname=PC1-1-PC2-1, ip=127.0.0.1, local-endpoint-name=PC1-1-PC2-1
OK local-endpoint-ip created.

As you can see above, a local-endpoint-ip associates itself with one particular local-endpoint by setting local-endpoint-name to the oname value of the intended local-endpoint. This step is necessary because SCTP supports "multi-homing", meaning that one association can be bound to multiple local IP addresses. Typically these IP addresses would be associated with resilient physical network paths, allowing multi-homing to provide protection against network failure.
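For example, a multi-homed endpoint would simply have a second local-endpoint-ip created against the same local-endpoint. The following hypothetical command (not needed in this walk-through, where everything uses 127.0.0.1) illustrates the idea:

create-local-endpoint-ip: oname=PC1-1-PC2-1-b, ip=192.168.1.10, local-endpoint-name=PC1-1-PC2-1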

3b

Define the local SCTP endpoint IP addresses for PC2-1

Similar to 3a, above:

create-local-endpoint-ip: oname=PC2-1, ip=127.0.0.1, local-endpoint-name=PC2-1

Example:

127.0.0.1:10121 PC2-1> create-local-endpoint-ip: oname=PC2-1, ip=127.0.0.1, local-endpoint-name=PC2-1
OK local-endpoint-ip created.

4a

Enable the local endpoint for PC1-1

The local endpoint was created in its default enabled=false state so that we could add local endpoint IP addresses to it. The SGC does not allow changes to enabled local endpoints; this avoids the unexpected service interruption of tearing down the connection and re-establishing it with new configuration. We are now done modifying this configuration, so it is time to enable the local endpoint:

enable-local-endpoint: oname=PC1-1-PC2-1

Example:

127.0.0.1:10111 PC1-1> enable-local-endpoint: oname=PC1-1-PC2-1
OK local-endpoint enabled.

4b

Enable the local endpoint for PC2-1
enable-local-endpoint: oname=PC2-1

Example:

127.0.0.1:10121 PC2-1> enable-local-endpoint: oname=PC2-1
OK local-endpoint enabled.

5a

Define the client connection for PC1-1 to PC2-1

We will now define the SCTP association used by PC1-1, as well as some M3UA settings for the connection:

create-connection: oname=PC1-1-PC2-1, port=22100, local-endpoint-name=PC1-1-PC2-1, conn-type=CLIENT, state-maintenance-role=ACTIVE, is-ipsp=true, enabled=true

Example:

127.0.0.1:10111 PC1-1> create-connection: oname=PC1-1-PC2-1, port=22100, local-endpoint-name=PC1-1-PC2-1, conn-type=CLIENT, state-maintenance-role=ACTIVE, is-ipsp=true, enabled=true
OK connection created.

The port here is the remote SCTP port to which the SGC should connect; the local IP and port information comes from the local-endpoint-name. This connection will act in all ways as a "client" connection, in that it will initiate the connection and begin the conversation. If you're interested in the exact details please see the connection reference documentation.

5b

Define the server connection for PC2-1 from PC1-1

Similar to the above, this defines the same association as seen from PC2-1, which accepts the connection from PC1-1 and therefore acts as the server:

create-connection: oname=PC1-1-PC2-1, port=21121, local-endpoint-name=PC2-1, conn-type=SERVER, state-maintenance-role=PASSIVE, is-ipsp=true, enabled=true

Example:

127.0.0.1:10121 PC2-1> create-connection: oname=PC1-1-PC2-1, port=21121, local-endpoint-name=PC2-1, conn-type=SERVER, state-maintenance-role=PASSIVE, is-ipsp=true, enabled=true
OK connection created.

The port in this command is the remote port from which the connection will be initiated. It must match the configuration in node PC1-1 or the connection will not be accepted.

6a

Define the connection IP addresses for PC1-1 to PC2-1

Just as we had to define local endpoint IP addresses earlier, we must now define the remote connection IP addresses to which the node should connect:

create-conn-ip: oname=PC1-1-PC2-1, ip=127.0.0.1, conn-name=PC1-1-PC2-1

Example:

127.0.0.1:10111 PC1-1> create-conn-ip: oname=PC1-1-PC2-1, ip=127.0.0.1, conn-name=PC1-1-PC2-1
OK conn-ip created.

Again, this extra step is because SCTP supports multi-homing.

6b

Define the connection IP addresses for PC2-1 from PC1-1

The complement of step 6a, above: PC2-1 needs to know which IP addresses to expect a connection from:

create-conn-ip: oname=PC1-1-PC2-1, ip=127.0.0.1, conn-name=PC1-1-PC2-1

Example:

127.0.0.1:10121 PC2-1> create-conn-ip: oname=PC1-1-PC2-1, ip=127.0.0.1, conn-name=PC1-1-PC2-1
OK conn-ip created.

The IP address here must match the local-endpoint-ip address from PC1-1 or the connection will not be accepted by PC2-1.

7a

Connect the AS to the connection on PC1-1

We must now tell the SGC that our AS should use the connection we have defined:

create-as-connection: oname=PC1-1-PC2-1, as-name=PC2, conn-name=PC1-1-PC2-1

Example:

127.0.0.1:10111 PC1-1> create-as-connection: oname=PC1-1-PC2-1, as-name=PC2, conn-name=PC1-1-PC2-1
OK as-connection created.

This as-connection is necessary because one AS may use many connections, and a connection may serve many Application Servers, in a "many-to-many" relationship.

7b

Connect the AS to the connection on PC2-1
create-as-connection: oname=PC1-1-PC2-1, as-name=PC2, conn-name=PC1-1-PC2-1

Example:

127.0.0.1:10121 PC2-1> create-as-connection: oname=PC1-1-PC2-1, as-name=PC2, conn-name=PC1-1-PC2-1
OK as-connection created.

8a

Define the route on PC1-1 to Point Code 2

In this final step we define which Destination Point Codes can be reached via our Application Server. Define a Destination Point Code for PC=2 and a route to it via our AS with the following commands:

create-dpc: oname=PC2, dpc=2
create-route: oname=PC2, as-name=PC2, dpc-name=PC2

Example:

127.0.0.1:10111 PC1-1> create-dpc: oname=PC2, dpc=2
OK dpc created.
127.0.0.1:10111 PC1-1> create-route: oname=PC2, as-name=PC2, dpc-name=PC2
OK route created.

8b

Define the route on PC2-1 to Point Code 1

Define a Destination Point Code for PC=1 and a route to it via our AS with the following commands:

create-dpc: oname=PC1, dpc=1
create-route: oname=PC1, as-name=PC2, dpc-name=PC1

Example:

127.0.0.1:10121 PC2-1> create-dpc: oname=PC1, dpc=1
OK dpc created.
127.0.0.1:10121 PC2-1> create-route: oname=PC1, as-name=PC2, dpc-name=PC1
OK route created.

General and M3UA configuration is now complete. In the next section we will check that everything is working correctly.

M3UA state inspection

You should now have two SGCs which are connected to each other at the M3UA layer. Before we move on to the upper layers of configuration we should check that everything is working as expected up to this point. If you are confident of your setup and in a hurry you can skip this section.

Please note that it is not normally necessary to check state in such an exhaustive manner; we are doing it in this step-by-step fashion to provide some familiarity with the SGC state inspection facilities and to assist with troubleshooting.

Tip Most of the commands shown below show both the definition and the state of the various configuration objects they examine, and are intended for those modifying or considering modifying the configuration of the SGC. If you are interested strictly in state rather than configuration, there is a related family of commands which start with display-info- which will show extended state information without any configuration details.

1

Check the display-active-alarm command for problems

The display-active-alarm command can show problems from any aspect of the SGC's operation. If you check it now you should see:

PC1-1

127.0.0.1:10111 PC1-1> display-active-alarm:
Found 0 objects.

PC2-1

127.0.0.1:10121 PC2-1> display-active-alarm:
Found 0 objects.

If, instead, you see one or more alarms, don’t worry, we’ll step through the diagnostics one by one.

2

Check the node state

If something is wrong with the node state or configuration then nothing will work. Run

display-node

on both nodes. Both nodes should say that the active state is true. If your active state is not true then either:

  • the enabled attribute is set to false, and you need to use the enable-node command to enable it; or

  • the ss7.instance value does not match the node's oname value.

3

Check the local endpoint state

The local endpoint must be enabled and active before the connection between the nodes will work. Run:

display-local-endpoint

on both nodes. Both nodes should say that the active state is true. If the active state is not true then:

  • the enabled attribute is set to false, which can be corrected with the enable-local-endpoint command.

4

Check the connection state

The next thing to check, working up the stack, is the SCTP association. Run

display-connection

on both nodes. Both nodes should say that the active state is true. If the active state is not true then either:

  • the enabled attribute is set to false on one or the other of the nodes, and you need to use the enable-connection command to enable it;

  • there is a configuration mismatch between the nodes with regard to ports or IP addresses; or

  • the ports selected in this walk-through may not work correctly due to your local operating system configuration, in which case you may need to consult Configuring network features.

It is often helpful to consult either the active alarms list or the logs when diagnosing connection issues, but that is outside the scope of this walk-through.
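If you suspect a problem below the SGC itself, it can also help to check whether the SCTP association exists at the operating-system level. Assuming a reasonably recent iproute2, something like the following should show the association on ports 21121 and 22100 once it is established:

$ ss --sctp -a | grep -E '21121|22100'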

5

Check the AS state

The AS should be active on both nodes. Run:

display-as

on both nodes to check. The state should be listed as ACTIVE. If the state is not ACTIVE then:

  • there is a problem at a lower layer;

  • the two clusters disagree about the rc value; or

  • one of the create-as-connection commands was omitted.

6

Check the SCCP state

SCCP is the next layer up, and we have not yet configured it, but it should be able to activate and communicate with its peer at this point. Run this command on both nodes to check:

display-info-remotessninfo

This should show the following output on both nodes:

127.0.0.1:10111 PC1-1> display-info-remotessninfo
Found 2 object(s):
+----------+----------+---------------+
|dpc       |ssn       |status         |
+----------+----------+---------------+
|1         |1         |ALLOWED        |
+----------+----------+---------------+
|2         |1         |ALLOWED        |
+----------+----------+---------------+

This output shows that the SCCP layers on each node are communicating with each other.

Tip SSN=1 is the SCCP management SubSystem Number. If the status of SSN=1 is not ALLOWED then the SCCP layers are unable to communicate with each other and no other SSN will be reachable.

If the status shown above is PROHIBITED for the remote Destination Point Code then there is most likely a problem at the M3UA layer or below; recheck the node, connection, and AS state as described in the previous steps.

SCCP configuration

In The Plan we can see that the two Scenario Simulators expect to refer to each other by their global titles as follows:

  • 1234: PC=1,SSN=101

  • 4321: PC=2,SSN=102

Several inbound and outbound global title translation (GTT) rules are required to allow this to happen, which we will create now.

Also, while not technically necessary, we will configure Concerned Point Codes for each of the two nodes, so that they will inform each other about changes to the state of interesting SSNs.

All the steps below are in two parts, the part "a" commands must be run on the management console connected to node PC1-1 and the part "b" commands must be run on the management console connected to node PC2-1. If this becomes confusing please check the examples given, which will indicate the correct management console by the port number in the prompt.

1a

Outbound GTT setup on PC1-1

Run the following commands to set up outbound global title translation on PC1-1:

create-outbound-gt: oname=4321, addrinfo=4321
create-outbound-gtt: oname=4321, gt=4321, dpc=2, priority=5

Example:

127.0.0.1:10111 PC1-1> create-outbound-gt: oname=4321, addrinfo=4321
OK outbound-gt created.
127.0.0.1:10111 PC1-1> create-outbound-gtt: oname=4321, gt=4321, dpc=2, priority=5
OK outbound-gtt created.

This defines a Global Title and then creates a translation rule which will cause messages with that GT in the Called Party Address to be routed to our peer at PC=2.

1b

Outbound GTT setup on PC2-1

Run the following commands to set up outbound global title translation on PC2-1:

create-outbound-gt: oname=1234, addrinfo=1234
create-outbound-gtt: oname=1234, gt=1234, dpc=1, priority=5

Example:

127.0.0.1:10121 PC2-1> create-outbound-gt: oname=1234, addrinfo=1234
OK outbound-gt created.
127.0.0.1:10121 PC2-1> create-outbound-gtt: oname=1234, gt=1234, dpc=1, priority=5
OK outbound-gtt created.

This defines a Global Title and then creates a translation rule which will cause messages with that GT in the Called Party Address to be routed to our peer at PC=1.

2a

Inbound GTT setup on PC1-1

Run the following to set up inbound GTT on PC1-1:

create-inbound-gtt: oname=1234, addrinfo=1234, ssn=101
create-outbound-gt: oname=1234, addrinfo=1234
create-outbound-gtt: oname=1234, gt=1234, dpc=1, priority=5

Example

127.0.0.1:10111 PC1-1> create-inbound-gtt: oname=1234, addrinfo=1234, ssn=101
OK inbound-gtt created.
127.0.0.1:10111 PC1-1> create-outbound-gt: oname=1234, addrinfo=1234
OK outbound-gt created.
127.0.0.1:10111 PC1-1> create-outbound-gtt: oname=1234, gt=1234, dpc=1, priority=5
OK outbound-gtt created.

The first command creates an inbound GTT rule for the Global Title on which we expect to accept traffic. The second and third commands may look somewhat surprising, as they create an outbound GTT rule. This is the correct configuration for our network, as SCCP's service messages (UDTS and XUDTS) may be generated locally in response to traffic we are attempting to send, and these service messages are routed as outbound messages.

2b

Inbound GTT setup on PC2-1

Run the following to set up inbound GTT on PC2-1:

create-inbound-gtt: oname=4321, addrinfo=4321, ssn=102
create-outbound-gt: oname=4321, addrinfo=4321
create-outbound-gtt: oname=4321, gt=4321, dpc=2, priority=5

Example:

127.0.0.1:10121 PC2-1> create-inbound-gtt: oname=4321, addrinfo=4321, ssn=102
OK inbound-gtt created.
127.0.0.1:10121 PC2-1> create-outbound-gt: oname=4321, addrinfo=4321
OK outbound-gt created.
127.0.0.1:10121 PC2-1> create-outbound-gtt: oname=4321, gt=4321, dpc=2, priority=5
OK outbound-gtt created.

3a

Create the Concerned Point Code on PC1-1

Run the following to configure PC1-1 to announce SSN changes for SSN=101 to the PC2 cluster:

create-cpc: oname=PC2-101, dpc=2, ssn=101

Example:

127.0.0.1:10111 PC1-1> create-cpc: oname=PC2-101, dpc=2, ssn=101
OK cpc created.

3b

Create the Concerned Point Code on PC2-1

Run the following to configure PC2-1 to announce SSN changes for SSN=102 to the PC1 cluster:

create-cpc: oname=PC1-102, dpc=1, ssn=102

Example:

127.0.0.1:10121 PC2-1> create-cpc: oname=PC1-102, dpc=1, ssn=102
OK cpc created.

This completes our SCCP configuration, which we will check in the next section.

SCCP state inspection

We now have two fully configured SCCP layers. We will now check their state to make sure they will work as expected.

1

Check the outbound GTT state on PC1-1

The following command will show the current state of configured outbound GTT rules:

display-info-ogtinfo: column=addrInfo, column=connId, column=rc, column=dpc

Example on PC1-1

127.0.0.1:10111 PC1-1> display-info-ogtinfo: column=addrInfo, column=connId, column=rc, column=dpc
Found 2 object(s):
+---------------+---------------+----------+----------+
|addrInfo       |connId         |rc        |dpc       |
+---------------+---------------+----------+----------+
|1234           |               |-1        |1         |
+---------------+---------------+----------+----------+
|4321           |PC1-1-PC2-1    |2         |2         |
+---------------+---------------+----------+----------+

For GT 1234 we can see that:

  • it has no associated connection, and no valid routing context (-1), and

  • the DPC to be used is 1, which is our local PC.

This GT will be routed to the local SGC.

For GT 4321 we can see that:

  • it has an associated connection and Routing Context, and

  • the DPC to be used is 2, which is the remote PC.

This GT will be routed to PC2-1 using the specified connection and Routing Context.

Example on PC2-1

127.0.0.1:10121 PC2-1> display-info-ogtinfo: column=addrInfo, column=connId, column=rc, column=dpc
Found 2 object(s):
+---------------+---------------+----------+----------+
|addrInfo       |connId         |rc        |dpc       |
+---------------+---------------+----------+----------+
|1234           |PC1-1-PC2-1    |2         |1         |
+---------------+---------------+----------+----------+
|4321           |               |-1        |2         |
+---------------+---------------+----------+----------+
Tip All display- commands can be given a list of column= arguments to restrict the columns listed. This feature has been used in the examples above to reduce clutter and make the output more readable.

2

Check the local SSN state

The command:

display-info-localssninfo

will list the state of all SSNs which are either:

  • declared in Concerned Point Code configuration, or

  • known from current or previous SSN connections.

Example on PC1-1

127.0.0.1:10111 PC1-1> display-info-localssninfo: column=ssn, column=status
Found 2 object(s):
+----------+---------------+
|ssn       |status         |
+----------+---------------+
|1         |ALLOWED        |
+----------+---------------+
|101       |PROHIBITED     |
+----------+---------------+

Example on PC2-1

127.0.0.1:10121 PC2-1> display-info-localssninfo: column=ssn, column=status
Found 2 object(s):
+----------+---------------+
|ssn       |status         |
+----------+---------------+
|1         |ALLOWED        |
+----------+---------------+
|102       |PROHIBITED     |
+----------+---------------+

The local simulator SSNs (101 and 102) are shown as PROHIBITED because no TCAP stack has connected to either SGC yet; this will change once the Scenario Simulators are started and connected.

Scenario Simulator installation

For simplicity, this quick-start walk-through will use the Metaswitch Scenario Simulator to test the network, rather than Rhino with CGIN.

For this quick start we assume that your Scenario Simulator package ships with an IN Scenario Pack which does not support OCSS7 (true for Scenario Simulator 2.2.0.x), or with an obsolete version of the IN Scenario Pack. If you know that your Scenario Simulator already contains a suitable IN Scenario Pack, you need only complete steps 1 and 2 of this section and may skip step 3.

1

Unpack the Scenario Simulator archive file
unzip scenario-simulator-package-VERSION.zip

(replacing scenario-simulator-package-VERSION.zip with the correct file name).

This creates the distribution directory, scenario-simulator-VERSION, in the current working directory.

Example:

$ unzip scenario-simulator-package-2.3.0.6.zip
Archive:  scenario-simulator-package-2.3.0.6.zip
   creating: scenario-simulator-2.3.0.6/
   creating: scenario-simulator-2.3.0.6/licenses/
  inflating: scenario-simulator-2.3.0.6/licenses/LICENSE-XPathOverSchema.txt
  inflating: scenario-simulator-2.3.0.6/licenses/LICENSE-antlr.txt
[...]

2

Change directory into the Scenario Simulator directory
cd scenario-simulator-VERSION

(replacing scenario-simulator-VERSION with the correct directory name).

Example:

$ cd scenario-simulator-2.3.0.6/

3

Install the new IN Scenario Pack

We want to replace the old IN Scenario Pack with the new one, which can be done with the following commands. Please ensure that you are in the Scenario Simulator's installation directory before running these commands.

rm -r in-examples/ protocols/in-scenario-pack-*
unzip -o ../in-scenario-pack-VERSION.zip

(replacing ../in-scenario-pack-VERSION.zip with the correct file name). The -o option is being used here to automatically overwrite existing files without prompting, which is desirable in this case since we expect to replace certain files which were not explicitly removed with the rm command above.

Example:

$ rm -r in-examples/ protocols/in-scenario-pack-*
$ unzip -o ../in-scenario-pack-2.0.0.0.zip
Archive:  ../in-scenario-pack-2.0.0.0.zip
  inflating: protocols/in-scenario-pack-1.5.3.jar
   creating: in-examples/
   creating: in-examples/2sims/
   creating: in-examples/2sims/config/
   creating: in-examples/2sims/config/loopback/
   creating: in-examples/2sims/config/mach7/
   creating: in-examples/2sims/config/ocss7/
   creating: in-examples/2sims/config/signalware/
   creating: in-examples/2sims/scenarios/
   creating: in-examples/3sims/
   creating: in-examples/3sims/config/
   creating: in-examples/3sims/config/loopback/
   creating: in-examples/3sims/config/mach7/
   creating: in-examples/3sims/config/ocss7/
   creating: in-examples/3sims/config/signalware/
   creating: in-examples/3sims/scenarios/
  inflating: CHANGELOGS/CHANGELOG-in.txt
  inflating: README/README-in.txt
  inflating: in-examples/2sims/config/loopback/cgin-tcapsim-endpoint1.properties
  inflating: in-examples/2sims/config/loopback/cgin-tcapsim-endpoint2.properties
  inflating: in-examples/2sims/config/loopback/setup-sim1.commands
  inflating: in-examples/2sims/config/loopback/setup-sim2.commands
  inflating: in-examples/2sims/config/loopback/tcapsim-gt-table.txt
  inflating: in-examples/2sims/config/mach7/mach7-endpoint1.properties
  inflating: in-examples/2sims/config/mach7/mach7-endpoint2.properties
  inflating: in-examples/2sims/config/mach7/setup-mach7-endpoint1.commands
  inflating: in-examples/2sims/config/mach7/setup-mach7-endpoint2.commands
  inflating: in-examples/2sims/config/ocss7/ocss7-endpoint1.properties
  inflating: in-examples/2sims/config/ocss7/ocss7-endpoint2.properties
  inflating: in-examples/2sims/config/ocss7/setup-sim-endpoint1.commands
  inflating: in-examples/2sims/config/ocss7/setup-sim-endpoint2.commands
  inflating: in-examples/2sims/config/setup-examples-sim1.commands
  inflating: in-examples/2sims/config/setup-examples-sim2.commands
  inflating: in-examples/2sims/config/signalware/setup-signalware-endpoint1.commands
  inflating: in-examples/2sims/config/signalware/setup-signalware-endpoint2.commands
  inflating: in-examples/2sims/config/signalware/signalware-endpoint1.properties
  inflating: in-examples/2sims/config/signalware/signalware-endpoint2.properties
  inflating: in-examples/2sims/scenarios/CAPv3-Demo-ContinueRequest.scen
  inflating: in-examples/2sims/scenarios/CAPv3-Demo-ReleaseCallRequest.scen
  inflating: in-examples/2sims/scenarios/INAP-SSP-SCP.scen
  inflating: in-examples/3sims/config/loopback/cgin-tcapsim-endpoint1.properties
  inflating: in-examples/3sims/config/loopback/cgin-tcapsim-endpoint2.properties
  inflating: in-examples/3sims/config/loopback/cgin-tcapsim-endpoint3.properties
  inflating: in-examples/3sims/config/loopback/setup-sim1.commands
  inflating: in-examples/3sims/config/loopback/setup-sim2.commands
  inflating: in-examples/3sims/config/loopback/setup-sim3.commands
  inflating: in-examples/3sims/config/loopback/tcapsim-gt-table.txt
  inflating: in-examples/3sims/config/mach7/mach7-endpoint1.properties
  inflating: in-examples/3sims/config/mach7/mach7-endpoint2.properties
  inflating: in-examples/3sims/config/mach7/mach7-endpoint3.properties
  inflating: in-examples/3sims/config/mach7/setup-mach7-endpoint1.commands
  inflating: in-examples/3sims/config/mach7/setup-mach7-endpoint2.commands
  inflating: in-examples/3sims/config/mach7/setup-mach7-endpoint3.commands
  inflating: in-examples/3sims/config/ocss7/ocss7-endpoint1.properties
  inflating: in-examples/3sims/config/ocss7/ocss7-endpoint2.properties
  inflating: in-examples/3sims/config/ocss7/ocss7-endpoint3.properties
  inflating: in-examples/3sims/config/ocss7/setup-sim-endpoint1.commands
  inflating: in-examples/3sims/config/ocss7/setup-sim-endpoint2.commands
  inflating: in-examples/3sims/config/ocss7/setup-sim-endpoint3.commands
  inflating: in-examples/3sims/config/setup-examples-sim1.commands
  inflating: in-examples/3sims/config/setup-examples-sim2.commands
  inflating: in-examples/3sims/config/setup-examples-sim3.commands
  inflating: in-examples/3sims/config/signalware/setup-signalware-endpoint1.commands
  inflating: in-examples/3sims/config/signalware/setup-signalware-endpoint2.commands
  inflating: in-examples/3sims/config/signalware/setup-signalware-endpoint3.commands
  inflating: in-examples/3sims/config/signalware/signalware-endpoint1.properties
  inflating: in-examples/3sims/config/signalware/signalware-endpoint2.properties
  inflating: in-examples/3sims/config/signalware/signalware-endpoint3.properties
  inflating: in-examples/3sims/scenarios/CAPv2-Relay.scen
  inflating: in-examples/3sims/scenarios/INAP-SSP-SCP-HLR.scen
  inflating: in-examples/3sims/scenarios/MAP-MT-SMS-DeliveryAbsentSubscriber.scen
  inflating: in-examples/3sims/scenarios/MAP-MT-SMS-DeliveryPresentSubscriber.scen
  inflating: in-examples/README-in-examples.txt
  inflating: licenses/LICENSE-netty.txt
  inflating: licenses/LICENSE-slf4j.txt
  inflating: licenses/README-LICENSES-in-scenario-pack.txt

Scenario Simulator configuration

We will now configure two Scenario Simulator instances and connect them to the cluster. This work should be done in the Scenario Simulator installation directory, which is where the steps from the previous section left us.

Tip The Scenario Simulator and CGIN use identical configuration properties and values when using OCSS7; the only difference between the two is the procedure used for setup and configuration.

1

Set the OCSS7 connection properties for Simulator 1

Edit the file in-examples/2sims/config/ocss7/ocss7-endpoint1.properties and make the following changes:

local-sccp-address = type=C7,ri=gt,ssn=101,digits=1234,national=true

and

ocss7.sgcs = 127.0.0.1:12011

The port in the ocss7.sgcs property is the stack-data-port we configured earlier for node PC1-1.

2

Set the OCSS7 connection properties for Simulator 2

Edit the file in-examples/2sims/config/ocss7/ocss7-endpoint2.properties and make the following changes:

local-sccp-address = type=C7,ri=gt,ssn=102,digits=4321,national=true

and

ocss7.sgcs = 127.0.0.1:12021

The port in the ocss7.sgcs property is the stack-data-port we configured earlier for node PC2-1.

3

Set the Scenario Simulator endpoint addresses

Edit the following files:

  • in-examples/2sims/config/ocss7/setup-sim-endpoint1.commands

  • in-examples/2sims/config/ocss7/setup-sim-endpoint2.commands

and replace the two lines beginning:

set-endpoint-address endpoint1
set-endpoint-address endpoint2

with

set-endpoint-address endpoint1 type=c7,ri=gt,pc=1,ssn=101,digits=1234,national=true
set-endpoint-address endpoint2 type=c7,ri=gt,pc=2,ssn=102,digits=4321,national=true
Tip

We've given the PC and SSN information in the above endpoint addresses, and you may be wondering why, and whether global title translation is actually happening. The reason is that the Scenario Simulator is just that, a simulator: it needs the PC and SSN information to work out roles and endpoints correctly for scenario validation. If you wish, you can change the SSN given to the simulator in set-endpoint-address to some other value without changing the value used by the TCAP stack in local-sccp-address, and our test scenario will still work correctly, because global title translation is actually happening and the inbound SSN is ignored by the receiving SGC.

The Scenario Simulators are now fully configured and ready to test our network.
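Before starting the simulators it can be worth a final check that both SGC nodes are listening on their stack data ports (12011 and 12021). Assuming these are exposed as ordinary TCP listeners, something like the following should show both ports:

$ ss -ltn | grep -E ':(12011|12021) '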

Test the network

We will now test the network using the Metaswitch Scenario Simulator and one of the example IN scenarios included with it.

1

Start the scenario simulators

We need two Scenario Simulator instances for this test, one to initiate our test traffic, and one to respond. Start them with these two commands:

./scenario-simulator.sh -f in-examples/2sims/config/ocss7/setup-sim-endpoint1.commands -f in-examples/2sims/config/setup-examples-sim1.commands

and

./scenario-simulator.sh -f in-examples/2sims/config/ocss7/setup-sim-endpoint2.commands -f in-examples/2sims/config/setup-examples-sim2.commands

Example for Simulator 1:

$ ./scenario-simulator.sh -f in-examples/2sims/config/ocss7/setup-sim-endpoint1.commands -f in-examples/2sims/config/setup-examples-sim1.commands
Starting JVM...
Processing commands from file at in-examples/2sims/config/ocss7/setup-sim-endpoint1.commands
Processing command: set-endpoint-address endpoint1 type=c7,ri=gt,pc=1,ssn=101,digits=1234,national=true
Processing command: set-endpoint-address endpoint2 type=c7,ri=gt,pc=2,ssn=102,digits=4321,national=true
Processing command: create-local-endpoint endpoint1 cgin -propsfile in-examples/2sims/config/ocss7/ocss7-endpoint1.properties
Initializing local endpoint "endpoint1" ...
Local endpoint initialized.
Finished reading commands from file
Processing commands from file at in-examples/2sims/config/setup-examples-sim1.commands
Processing command: bind-role SSP-Loadgen endpoint1
Processing command: bind-role SCP-Rhino endpoint2
Processing command: wait-until-operational 60000
Simulator is operational
Processing command: load-scenario in-examples/2sims/scenarios/INAP-SSP-SCP.scen
Playing role "SSP-Loadgen" in initiating scenario "INAP-SSP-SCP" with dialogs [SSP-SCP]
Processing command: load-scenario in-examples/2sims/scenarios/CAPv3-Demo-ContinueRequest.scen
Playing role "SSP-Loadgen" in initiating scenario "CAPv3-Demo-ContinueRequest" with dialogs [SSP-SCP]
Processing command: load-scenario in-examples/2sims/scenarios/CAPv3-Demo-ReleaseCallRequest.scen
Playing role "SSP-Loadgen" in initiating scenario "CAPv3-Demo-ReleaseCallRequest" with dialogs [SSP-SCP]
Finished reading commands from file
Ready to start

Please type commands... (type "help" <ENTER> for command help)
>

Example for Simulator 2:

$ ./scenario-simulator.sh -f in-examples/2sims/config/ocss7/setup-sim-endpoint2.commands -f in-examples/2sims/config/setup-examples-sim2.commands
Starting JVM...
Processing commands from file at in-examples/2sims/config/ocss7/setup-sim-endpoint2.commands
Processing command: set-endpoint-address endpoint1 type=c7,ri=gt,pc=1,ssn=101,digits=1234,national=true
Processing command: set-endpoint-address endpoint2 type=c7,ri=gt,pc=2,ssn=102,digits=4321,national=true
Processing command: create-local-endpoint endpoint2 cgin -propsfile in-examples/2sims/config/ocss7/ocss7-endpoint2.properties
Initializing local endpoint "endpoint2" ...
Local endpoint initialized.
Finished reading commands from file
Processing commands from file at in-examples/2sims/config/setup-examples-sim2.commands
Processing command: bind-role SSP-Loadgen endpoint1
Processing command: bind-role SCP-Rhino endpoint2
Processing command: wait-until-operational 60000
Simulator is operational
Processing command: load-scenario in-examples/2sims/scenarios/INAP-SSP-SCP.scen
Playing role "SCP-Rhino" in receiving scenario "INAP-SSP-SCP" with dialogs [SSP-SCP]
Processing command: load-scenario in-examples/2sims/scenarios/CAPv3-Demo-ContinueRequest.scen
Playing role "SCP-Rhino" in receiving scenario "CAPv3-Demo-ContinueRequest" with dialogs [SSP-SCP]
Processing command: load-scenario in-examples/2sims/scenarios/CAPv3-Demo-ReleaseCallRequest.scen
Playing role "SCP-Rhino" in receiving scenario "CAPv3-Demo-ReleaseCallRequest" with dialogs [SSP-SCP]
Finished reading commands from file
Ready to start

Please type commands... (type "help" <ENTER> for command help)
>

2

Check the remote SSN information

Before running a test session let’s pause to check the

display-info-remotessninfo

command, which should now show the following on both nodes:

127.0.0.1:10111 PC1-1> display-info-remotessninfo
Found 4 object(s):
+----------+----------+---------------+
|dpc       |ssn       |status         |
+----------+----------+---------------+
|1         |1         |ALLOWED        |
+----------+----------+---------------+
|1         |101       |ALLOWED        |
+----------+----------+---------------+
|2         |1         |ALLOWED        |
+----------+----------+---------------+
|2         |102       |ALLOWED        |
+----------+----------+---------------+

From this we can see that both SGCs have registered the connected simulators and informed their Concerned Point Codes about the state changes for the SSNs used by the simulators.

3

Run a test session

On Simulator 1 run:

run-session CAPv3-Demo-ContinueRequest

This will run the test scenario, which is a basic CAPv3 IDP / CON scenario.

Example:

> run-session CAPv3-Demo-ContinueRequest
Send -->  OpenRequest to endpoint2
Send -->  InitialDP (Request) to endpoint2
Send -->  Delimiter to endpoint2
Recv <--  OpenAccept from endpoint2
Recv <--  Continue (Request) from endpoint2
Recv <--  Close from endpoint2
Outcome of "CAPv3-Demo-ContinueRequest" session: Matched scenario definition "CAPv3-Demo-ContinueRequest"
Tip This scenario has a 10-second delay between the OpenRequest and the OpenAccept, so do not be concerned if it seems to take a little while to get a reply.