This document applies to upgrading from SGC 3.0.0.x to a newer SGC. It cannot be used to upgrade to SGC 3.0.0.x from either the 1.x or 2.x series of SGCs. See the Online Upgrade Support Matrix for the exact release combinations that support online upgrade.

This section describes the process required to perform an online manual upgrade. During this process the cluster remains in service and, provided that the connected TCAP stacks are using the ocss7.sgcs connection method, calls will fail over from one SGC to another as required.

Failover cannot be guaranteed to be 100% successful, as failover for any given dialog or message is highly dependent on timing. For example, a message queued in an SGC at the SCTP layer will be lost if that SGC is terminated prior to transmission.
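For illustration only: the ocss7.sgcs connection method is selected by listing the data endpoints of every SGC in the cluster in the TCAP stack's ocss7.sgcs property, so that the stack can reconnect to a surviving node when one SGC is taken out of service. The placeholder hosts and ports below are assumptions, not values from this document; check the TCAP stack documentation for the exact syntax:

    ocss7.sgcs=<sgc1-host>:<sgc1-port>,<sgc2-host>:<sgc2-port>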
Manual Upgrade Procedure
- Prepare the replacement nodes:
  - Install each replacement cluster member following the recommended installation structure.
  - Do not copy configuration files from the existing to the new installation yet.
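  For illustration only, one possible layout on a cluster member, with the replacement installation unpacked alongside the existing one; the package name, paths, and environment variable definitions below are assumptions, not values from this document:

    # Unpack the replacement SGC next to the existing installation (hypothetical names)
    cd /opt/sgc
    unzip ~/ocss7-sgc-<new-version>.zip

    # Convenience variables matching the names used in the steps below
    export ORIGINAL_SGC_HOME=/opt/sgc/ocss7-sgc-<current-version>
    export REPLACEMENT_SGC_HOME=/opt/sgc/ocss7-sgc-<new-version>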
- Issue the CLI command start-upgrade. This checks pre-requisites and places the cluster into UPGRADE mode. In this mode:
  - Calls continue to be processed.
  - Newer SGC versions may join the cluster, provided they are backwards compatible with the current cluster version.
  - Configuration changes will be rejected.
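  For illustration, this could be issued from the command-line management console of any running cluster member; the console location shown here is an assumption and may differ between installations:

    # Connect the management console to a running SGC, then place the cluster in UPGRADE mode
    $ORIGINAL_SGC_HOME/cli/sgc-cli.sh
    start-upgrade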
- Copy configuration (config/* and var/*) from the original cluster members to the replacement cluster members. This step must be carried out after executing start-upgrade to provide full resilience during the upgrade procedure.
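  A minimal sketch of this copy for a replacement member installed on the same host as the original (paths as above; use scp or rsync instead if the replacement member lives on a different host):

    cp -a $ORIGINAL_SGC_HOME/config/. $REPLACEMENT_SGC_HOME/config/
    cp -a $ORIGINAL_SGC_HOME/var/.    $REPLACEMENT_SGC_HOME/var/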
- Upgrade the first cluster member:
  - Stop the original node:
    $ORIGINAL_SGC_HOME/bin/sgc stop
  - Verify that the original node has come to a complete stop by checking its logs and the process list.
  - Start the replacement node:
    $REPLACEMENT_SGC_HOME/bin/sgc start
  - Verify that the replacement node has started and successfully joined the cluster. The CLI command display-info-nodeversioninfo can be used to view the current cluster members.
  - Wait for 2-3 minutes to allow the cluster to redistribute shared data amongst all of the members.
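  A sketch of the stop/verify/start sequence on the member being upgraded; the process-list and log checks are illustrative and the log location is an assumption:

    $ORIGINAL_SGC_HOME/bin/sgc stop
    ps -ef | grep sgc                           # confirm no process from the original installation remains
    tail -n 50 $ORIGINAL_SGC_HOME/logs/*.log    # hypothetical log path; confirm a clean shutdown
    $REPLACEMENT_SGC_HOME/bin/sgc start
    $REPLACEMENT_SGC_HOME/cli/sgc-cli.sh        # then run display-info-nodeversioninfo to confirm the node has joined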
- Repeat the previous step for each of the remaining cluster members. This must be performed one node at a time.
- Issue the CLI command complete-upgrade. This checks pre-requisites, then performs the actions required to leave UPGRADE mode.
- Verify that the cluster has completed the upgrade. The CLI commands display-info-nodeversioninfo and display-info-clusterversioninfo may be used to verify this.
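  For illustration, the last two steps as they might look in the management console (connected to any cluster member, console location as assumed earlier; command output is omitted as it varies by release):

    complete-upgrade
    display-info-nodeversioninfo
    display-info-clusterversioninfo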
Rolling Back An In-Progress Manual Upgrade
- Before start-upgrade was issued:
  - (Optional) Delete the installation directories for the (unused) replacement cluster members.
- After complete-upgrade:
  - This is a revert operation, not a rollback.
- After start-upgrade and before complete-upgrade:
  - For every cluster member that is running the replacement SGC version, ONE AT A TIME:
    - Stop the SGC:
      $REPLACEMENT_SGC_HOME/bin/sgc stop
    - Verify that the node has come to a complete halt by checking its logs and the process list.
    - Start the original SGC:
      $ORIGINAL_SGC_HOME/bin/sgc start
    - Verify that the original SGC has started and successfully joined the cluster. The CLI command display-info-nodeversioninfo can be used to view the current cluster members.
    - Wait for 2-3 minutes to allow the cluster to redistribute shared data before proceeding to the next node.
  - Once all nodes are running the original pre-upgrade version, complete the rollback by issuing the abort-upgrade CLI command.
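    For illustration, once every node is back on its original version, the rollback could be completed from the management console of any running cluster member (console location as assumed earlier):

      abort-upgrade
      display-info-nodeversioninfo    # confirm the versions reported by each cluster member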