This page describes the actions to take after the upgrade is completed and all relevant validation tests have passed.

Check connections to REM and Rhino clusters

On each host that was upgraded, log in to the REM web application (or refresh the page if already logged in), and connect to one or more of your Rhino clusters within REM.

You should be able to access the Rhino cluster using the original connection configured in the previous version of REM.

When refreshing the REM web application, do a "hard refresh" (Ctrl+F5 in most browsers) so that the browser retrieves up-to-date information from the REM server rather than reloading the page from its cache.

Archive the REM backup

On each REM host, before starting the upgrade, orca will have generated a backup of the Rhino Element Manager installation at the downlevel version. This backup can be found in an auto-generated subdirectory within the backups directory. Unless otherwise specified using orca's --backup-dir option, the backups directory is the rem-backup directory under the HOME directory. Each backup is named with:

  • the timestamp at which the upgrade was performed, in the format YYYYMMDD-HHMMSS

  • a unique number to refer to it during rollback or cleanup, preceded by a # symbol.

For example, assuming that the user account is sentinel, that this is the second backup in the directory, and that the backup was created on May 3rd 2018 at 1:44pm, the backup would be located at /home/sentinel/rem-backup/20180503-134400#2.
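
To see which backups are present on a host, you can also simply list the backups directory (a sketch assuming the default location and the sentinel account from the example above; the entries shown are illustrative and will differ on your system):

> ls /home/sentinel/rem-backup
20180117-123500#1  20180503-134400#2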

You can view a list of backups and their contents using the status command:

> ./orca --hosts remhost1 status
...
REM:
general =
  Server version: Apache Tomcat/8.5.29
...
backups =
  (#2) 20180503-134400 contains REM:2.6.1.1 Plugins:sentinel-gaa-em-2.8.0.3,sentinel-volte-element-manager-2.8.0.3
...

Copy (for example using rsync) the new backup directory with all its contents to your backup storage, if you have one. orca creates a backup on every REM host it upgrades, but normally these backups all contain the same software, so it is only necessary to archive one of them.
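
As a sketch, assuming a hypothetical backup storage host named backuphost that is reachable over SSH, and the default backups directory from the example above:

> rsync -av '/home/sentinel/rem-backup/20180503-134400#2' backuphost:/backup-storage/rem/

Quoting the path avoids any surprises with the # character in the directory name in some shells.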

Archive the upgrade logs

In the directory from which orca was run, there will be a logs directory containing many subdirectories with log files. Copy (for example using rsync) this logs directory with all its contents to your backup storage, if you have one. These logs can be useful to Metaswitch Customer Care in the event of a problem with the upgrade.
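
For example, again assuming the hypothetical backuphost used above:

> rsync -av logs backuphost:/backup-storage/rem-upgrade-logs/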

If required, clean up unneeded older backups

Once the upgrade is confirmed to be working, you may wish to clean up older downlevel Tomcat backups to save disk space.

Tip
Retain one old backup

Keep the most recent downlevel backup in place as a fallback.

Be sure you have an external copy of any backup directories you plan to delete, unless you are absolutely certain that you will not need them in the future.

Repeat the following steps on each host (each host has to be cleaned up individually). In the following commands, replace the example hostname remhost1 with the hostname of each REM node in turn.

First, use the status command to obtain a list of backups.

> ./orca --hosts remhost1 status
...
REM:
general =
  Server version: Apache Tomcat/8.5.29
...
backups =
  (#1) 20180117-123500 contains REM:2.6.1.0 Plugins:sentinel-gaa-em-2.8.0.1,sentinel-volte-element-manager-2.8.0.1
  (#2) 20180503-134400 contains REM:2.6.1.1 Plugins:sentinel-gaa-em-2.8.0.3,sentinel-volte-element-manager-2.8.0.3
  (#3) 20180926-093700 contains REM:2.6.1.2 Plugins:sentinel-gaa-em-2.8.0.4,sentinel-volte-element-manager-2.8.0.4
...

For example, given the above output, you may decide to delete backups #1 and #2, leaving #3 in place. The number (e.g. #1) before each backup is called that backup’s ID.
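
If you have several REM nodes, you can gather the backup lists from all of them in one pass with a simple shell loop before deciding what to delete on each host (a sketch; remhost1 and remhost2 are example hostnames):

> for host in remhost1 remhost2; do ./orca --hosts $host status; done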

Next, use the cleanup-rem command to remove any unwanted backups.

> ./orca --hosts remhost1 cleanup-rem --backups 1,2

The backups are specified using the --backups parameter, as a comma-separated list of backup IDs (without the # symbol or any spaces). Be sure to pay attention to the ID numbering of each host’s backups, which may differ from one host to the next.
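
Once the cleanup has finished, you can re-run the status command to confirm that only the intended backups remain. Continuing the example above, only backup #3 would be listed (this sketch assumes orca keeps the original IDs of the remaining backups; check the actual output on your system):

> ./orca --hosts remhost1 status
...
backups =
  (#3) 20180926-093700 contains REM:2.6.1.2 Plugins:sentinel-gaa-em-2.8.0.4,sentinel-volte-element-manager-2.8.0.4
...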
