Warning: This document applies only to upgrading from SGC 3.0.0.x to a newer SGC. It cannot be used to upgrade to SGC 3.0.0.x from either the 1.x or 2.x series of SGCs. See the Online Upgrade Support Matrix for the exact release combinations that support online upgrade.

This section describes the process required to perform an online manual upgrade. During this process the cluster remains in service; provided that the connected TCAP stacks are using the ocss7.sgcs connection method, calls will fail over from one SGC to another as required.

Note: Failover cannot be guaranteed to be 100% successful, as the outcome for any given dialog or message is highly dependent on timing. For example, a message queued at the SCTP layer in an SGC will be lost if that SGC is terminated before transmission.

Manual Upgrade Procedure

  1. Backup the cluster.
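
    For example, a minimal filesystem-level snapshot of one node's configuration and runtime state might look like the following; this is only a sketch, assuming the standard config/ and var/ layout under the SGC installation directory, and does not replace any backup procedure your deployment already has.

      # Illustrative only: archive one node's configuration and runtime state.
      # $ORIGINAL_SGC_HOME is assumed to point at the existing installation.
      tar -czf "sgc-backup-$(hostname)-$(date +%Y%m%d).tar.gz" \
          -C "$ORIGINAL_SGC_HOME" config var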

  2. Prepare the replacement nodes:

    1. Install each replacement cluster member following the recommended installation structure.

    2. Do not copy configuration files from the existing to the new installation yet.
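
    For illustration, the steps below assume the original and replacement installations sit side by side on each host and are referenced through two environment variables; the paths shown here are hypothetical and should be replaced with the actual install locations.

      # Hypothetical paths; substitute the real locations on each host.
      export ORIGINAL_SGC_HOME=/opt/ocss7/sgc-3.0.0.x
      export REPLACEMENT_SGC_HOME=/opt/ocss7/sgc-new
      # Do not copy config/ or var/ into $REPLACEMENT_SGC_HOME yet (see step 4).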

  3. Issue the CLI command: start-upgrade. This checks prerequisites and places the cluster into UPGRADE mode (a sketch of the CLI session follows this list). In this mode:

    • Calls continue to be processed.

    • Newer SGC versions may join the cluster, provided they are backwards compatible with the current cluster version.

    • Configuration changes will be rejected.
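
    A minimal sketch of this step, assuming the CLI is started with its bundled script (shown here as cli/sgc-cli.sh; the exact path and prompt may differ in your installation):

      # Connect the command-line client to any running cluster member,
      # then place the cluster into UPGRADE mode.
      $ORIGINAL_SGC_HOME/cli/sgc-cli.sh
      start-upgrade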

  4. Copy configuration (config/* and var/*) from the original cluster members to the replacement cluster members. This step must be carried out after executing start-upgrade to provide full resilience during the upgrade procedure.
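
    For example, when an original node and its replacement share a host, the copy could be as simple as the following (use scp or rsync instead if the replacement runs on a different host):

      # Copy runtime configuration and state into the replacement installation.
      cp -a "$ORIGINAL_SGC_HOME"/config/* "$REPLACEMENT_SGC_HOME"/config/
      cp -a "$ORIGINAL_SGC_HOME"/var/*    "$REPLACEMENT_SGC_HOME"/var/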

  5. Upgrade the first cluster member:

    1. Stop the original node: $ORIGINAL_SGC_HOME/bin/sgc stop

    2. Verify that the original node has come to a complete stop by checking its logs and the process list.

    3. Start the replacement node: $REPLACEMENT_SGC_HOME/bin/sgc start

    4. Verify that the replacement node has started and successfully joined the cluster. The CLI command display-info-nodeversioninfo can be used to view the current cluster members.

    5. Wait for 2-3 minutes to allow the cluster to redistribute shared data amongst all of the members.
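
    Putting sub-steps 1 to 4 together, the swap on a single host might look like the sketch below; the ps check is illustrative, and the version check uses the CLI command named above.

      # Stop the original node and confirm its process has exited (also check the logs).
      $ORIGINAL_SGC_HOME/bin/sgc stop
      ps -ef | grep -i sgc

      # Start the replacement node, then verify cluster membership from the CLI
      # with: display-info-nodeversioninfo
      $REPLACEMENT_SGC_HOME/bin/sgc start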

  6. Repeat the previous step for each of the remaining cluster members. This must be performed one node at a time.

  7. Issue the CLI command: complete-upgrade. This checks prerequisites, then performs the actions required to leave UPGRADE mode.

  8. Verify that the cluster has completed the upgrade. The CLI commands display-info-nodeversioninfo and display-info-clusterversioninfo may be used to verify this.
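
  As a sketch, the final two steps from a CLI session connected to any cluster member (output is omitted, as its format varies by release):

    # Leave UPGRADE mode, then confirm that every node and the cluster as a
    # whole report the expected versions.
    complete-upgrade
    display-info-nodeversioninfo
    display-info-clusterversioninfo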

Rolling Back An In-Progress Manual Upgrade

  • Before start-upgrade has been issued:

    • No change has been made to the running cluster, so no rollback action is required. (Optional) Delete the installation directories for the (unused) replacement cluster members.

  • After complete-upgrade has been issued:

    • The upgrade can no longer be rolled back; returning to the previous version is a revert operation, not a rollback.

  • After start-upgrade and before complete-upgrade:

    1. For every cluster member that is running the replacement SGC version, ONE AT A TIME:

      1. Stop the SGC: $REPLACEMENT_SGC_HOME/bin/sgc stop

      2. Verify that the node has come to a complete halt by checking its logs and the process list.

      3. Start the original SGC: $ORIGINAL_SGC_HOME/bin/sgc start

      4. Verify that the original SGC has started and successfully joined the cluster. The CLI command display-info-nodeversioninfo can be used to view the current cluster members.

      5. Wait for 2-3 minutes to allow the cluster to redistribute shared data before proceeding to the next node.

    2. Once all nodes are running the original pre-upgrade version, complete the rollback by issuing the abort-upgrade CLI command.
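
    As with the upgrade itself, the per-node swap back follows the same pattern; a minimal sketch for one node, using the commands named above (abort-upgrade is issued once, from the CLI, after every node is back on the original version):

      # On each node, one at a time:
      $REPLACEMENT_SGC_HOME/bin/sgc stop
      ps -ef | grep -i sgc                   # confirm the replacement node has exited
      $ORIGINAL_SGC_HOME/bin/sgc start       # verify with display-info-nodeversioninfo
      # ...wait 2-3 minutes for data redistribution before moving to the next node...

      # Once all nodes run the original version, from the CLI:
      abort-upgrade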
