Verify that Rhino has no duplicate OIDs

This can be done prior to the maintenance window. For each node type that runs Rhino, SSH into one of that node type's VMs.

Run the following command:

last_seen=0
rhino-console listsnmpoidmappings | while read line; do
  array=($line)
  if [ "${array[0]}" == "$last_seen" ]; then
    echo "Duplicate ${array[0]}"
  fi
  last_seen=${array[0]}
done

If there are any duplicates, please contact your Metaswitch Customer Care representative.

Disable scheduled Rhino restarts

If you have configured scheduled Rhino restarts, then these should be disabled before running an upgrade. This can be done by commenting out the scheduled-rhino-restarts section in the VM pool YAML config files. An example is shown below.

  virtual-machines:
    - vm-id: vm01
      rhino-node-id: 101
#      scheduled-rhino-restarts:
#        day-of-week: Saturday
#        time-of-day: 03:00
    - vm-id: vm02
      rhino-node-id: 102
#      scheduled-rhino-restarts:
#        day-of-week: Saturday
#        time-of-day: 04:00

Then upload the edited configuration with rvtconfig upload-config so that the disabled scheduled restarts take effect on the VMs, as sketched below.
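A representative invocation follows; the flag names here are an assumption and may differ between versions, so confirm them with rvtconfig upload-config --help:

# Upload the edited pool configuration for one node type; repeat per node type
./rvtconfig upload-config -c <cds-address> -t <node-type> -i <path-to-config-directory> --vm-version <downlevel-version>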

Verify that HTTPS certificates are valid

This step is only required if the downlevel VMs are on version 4.0.0-22-1.0.0 or earlier. It may still be useful for later versions, but is optional.

The HTTPS certificates on the VMs must be valid for more than 30 days, and must remain valid during the upgrade for the whole deployment. For example, your upgrade will fail if your certificate is valid for 32 days and it takes more than 1 day to upgrade all of the VMs for all node types.

Using your own certificates

If you are using your own generated certificates, check their expiry dates using:

openssl x509 -in <certificate file> -enddate -noout

If any certificate expires within 30 days, or will expire before the upgrade completes, you must first upload new certificates using rvtconfig upload-config before upgrading. The 30-day requirement can also be tested directly, as shown below.
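openssl can check the requirement in one step with its -checkend option (2592000 seconds is 30 days):

# Exits non-zero and prints "Certificate will expire" if the certificate
# is not valid for at least another 30 days
openssl x509 -in <certificate file> -checkend 2592000 -noout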

Using VM generated certificates

If you did not provide certificates to the VMs, each VM generates its own certificates, which are valid for 5 years. If the current VMs were deployed less than 5 years ago, there is nothing further to do. If they were deployed more than 5 years ago, please contact your Metaswitch Customer Care representative.

Verify all VMs are healthy

All the VMs in the deployment need to be healthy. To check this, run the common health checks for the VMs by following Verify the state of the nodes and processes. Then run the per-node checks by following each page under Per-node checks.

Collect diagnostics from all of the VMs

The diagnostics from all the VMs should be collected. To do this, follow the instructions in RVT Diagnostics Gatherer. After generating the diagnostics, transfer them from the VMs to a local machine, for example as sketched below.
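A typical way to pull an archive off a VM is scp; the username and archive path below are placeholders rather than values from the product documentation (the diagnostics gatherer reports where it wrote the archive):

scp <username>@<vm management address>:<path to diagnostics archive> .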

Disable TSN housekeeping tasks

Before upgrading TSN VMs, run the following on all downlevel TSN VMs to list the active timers:

sudo systemctl list-timers

If cassandra-repair-daily.timer is present, do the following:

sudo systemctl disable cassandra-repair-daily.timer
sudo systemctl stop cassandra-repair-daily.timer

Then run the following command again to verify that cassandra-repair-daily.timer is no longer listed:

sudo systemctl list-timers
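For a quick non-interactive verification (a convenience sketch, not a command from the product documentation):

# Warns only if the timer is still active
sudo systemctl list-timers | grep -q cassandra-repair-daily && echo "WARNING: cassandra-repair-daily.timer is still active" || echo "OK: timer not present"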

Disable SNMP on SMO VMs if SNMPv3 is enabled

Warning

Omitting this step on the SMO VMs when SNMPv3 is configured will result in Initconf failing to converge on the uplevel VMs.

SNMP needs to be disabled on the SMO VMs only when all of the following apply:

  • Performing a rolling upgrade or rollback of the SMO; and

  • SNMPv3 is enabled (even if SGC notifications are disabled); and

  • the downlevel VM version is 4.0.0-23-1.0.0 or older; and

  • the uplevel VM version is 4.0.0-24-1.0.0 or newer.

The complete process for doing this is documented in Reconfiguring the SGC’s SNMP subsystem.

Download CDRs from all VMs

If your deployment is configured to generate CDRs on the Rhino VMs, these CDRs are stored on the local disk of the VMs and will be lost when the VMs are upgraded.

Therefore, if you need to keep a record of all CDRs generated by the platform, you must download any CDRs not yet downloaded before the upgrade. Any CDRs not downloaded from a VM before that VM is upgraded will be permanently lost. An example transfer is sketched below.
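One way to copy outstanding CDR files off a VM is scp; the CDR directory below is a placeholder for whatever path your deployment is configured to write CDRs to:

# Copy the VM's CDR directory to a per-VM folder on the local machine
scp -r <username>@<vm management address>:<CDR directory> ./cdrs-<vm-id>/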

Upload the CSAR EFIX patches to the SIMPL VM

If not already done, transfer the CSAR EFIX patches onto the SIMPL VM. For each CSAR EFIX patch, run:

csar efix <node type>/<version> <path to CSAR EFIX>

<path to CSAR EFIX> is the full path to the CSAR EFIX patch. <node type>/<version> identifies the downlevel unpacked CSAR, located under ~/.local/share/csar/.

For example, if a ShCM CSAR is being patched by a CSAR EFIX patch called my_patch.tar:

csar efix shcm/4.0.0-14-1.0.0 my_patch.tar
Note If you are not sure of the exact version string to use, run csar list to view the list of installed CSARs.

This applies the EFIX patch to the downlevel CSAR.

Note The new patched CSAR is now the uplevel CSAR referenced in the following steps.
Warning Don’t apply the same CSAR EFIX patch to the same CSAR target more than once. If a previous attempt to run the csar efix command failed, be sure to remove the created CSAR before re-attempting, as the csar efix command requires a clean target directory to work with.

Upload the uplevel SDF to the SIMPL VM

If the uplevel SDF for the CSAR EFIX patch was not written on the SIMPL VM, transfer it onto the SIMPL VM now.

Note Ensure the version in each node type’s vnfcs section of the uplevel SDF is set to <downlevel-version>-<patch-version>. For example: 4.0.0-14-1.0.0-patch123, where 4.0.0-14-1.0.0 is the downlevel version and patch123 is the patch version. A fragment illustrating this is shown below.
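For illustration, a vnfcs entry might look like the following abbreviated sketch (the surrounding SDF fields are omitted, and the node type name is only an example):

  vnfcs:
    - name: shcm
      version: 4.0.0-14-1.0.0-patch123
      # ... other fields unchanged from the downlevel SDF ...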

Upload uplevel RVT configuration

Upload the uplevel configuration for all of the node types to the CDS. This is required for the rolling upgrade using a CSAR EFIX patch to complete.

The scheduled Rhino restarts must still be commented out in the uplevel VM pool YAML files; only re-enable them after the upgrade has completed.

Note As configuration is stored against a specific version, you need to re-upload the uplevel configuration even if it is identical to the downlevel configuration.

The uplevel version for a CSAR EFIX patch has the format <downlevel-version>-<patch-version>. For example: 4.0.0-14-1.0.0-patch123, where 4.0.0-14-1.0.0 is the downlevel version and patch123 is the patch version. The upload therefore targets the patched version string, as sketched below.
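Reusing the illustrative rvtconfig invocation from earlier (the flag names remain an assumption; confirm them with rvtconfig upload-config --help):

# Upload the uplevel configuration against the patched version string
./rvtconfig upload-config -c <cds-address> -t <node-type> -i <path-to-uplevel-config> --vm-version 4.0.0-14-1.0.0-patch123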

When performing a rolling upgrade, some elements of the uplevel configuration must remain identical to those in the downlevel configuration. These elements, and the remedy if such a configuration change was made and the cluster upgrade process started, are described below:

  • Node type: All
    Disallowed configuration change: The secrets-private-key in the SDF may not be altered.
    Remedy: Roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade.

  • Node type: All
    Disallowed configuration change: The ordering of the VM instances in the SDF may not be altered.
    Remedy: Roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade.

  • Node type: SMO
    Disallowed configuration change: All SGC-related configuration.
    Remedy: Follow the instructions in Reconfiguring the SGC to either restore the original SGC configuration or apply the updated configuration. Alternatively, roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade.

  • Node type: SMO
    Disallowed configuration change: SNMP notification targets, if SNMP was previously enabled for the SGC.
    Remedy: Follow the instructions in Reconfiguring the SGC’s SNMP subsystem to reconfigure SNMP. Alternatively, roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade.

  • Node type: SMO
    Disallowed configuration change: The SGC SNMP configuration cannot be disabled if it was previously enabled.
    Remedy: Follow the instructions in Reconfiguring the SGC’s SNMP subsystem to reconfigure SNMP. Alternatively, roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade.

See Example configuration YAML files for example configuration files.

An in-depth description of RVT YAML configuration can be found in the Rhino VoLTE TAS Configuration and Management Guide.
