This page describes the actions to take after the upgrade is completed and all relevant validation tests have passed.
Clean up REM connections
You should be able to access the entire Rhino cluster using the original connection configured in REM. If you created an additional connection to monitor the new cluster during the upgrade, delete it now.
Note that when refreshing the REM web application, be sure to do a "hard refresh" (Ctrl+F5 in most browsers) so that the browser retrieves up-to-date information from the REM server rather than reloading from its cache.
Check service state and alarms
Symmetric activation state mode
Symmetric activation state mode must be enabled prior to the upgrade. If your deployment normally operates with it disabled, you will need to manually disable it at this point. See Symmetric Activation State in the Rhino documentation. You may also need to update RA entity activation states.
Log into REM and access the connection to the Rhino cluster. Check that all services are active on all nodes. Also check that the RA entity activation state is as expected on all nodes. Check for unexpected alarms.
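You can also perform these checks from the command line with the Rhino console. As a rough sketch, assuming the standard Rhino client layout (the path and command names below are assumptions and may vary between Rhino versions; run rhino-console help to confirm them on your installation):
<rhino-install-dir>/client/bin/rhino-console listactivealarms
<rhino-install-dir>/client/bin/rhino-console listraentities
The first command lists any alarms currently raised; the second lists the RA entities so you can confirm their state.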
Verify SAS configuration
If your deployment is now running Rhino 2.6.0 or later and includes a MetaView Service Assurance Server (SAS), verify the SAS configuration in Rhino is as expected. See SAS Tracing in the Rhino documentation.
Check that calls made in your validation tests appear in SAS.
Restore external network element configuration
If you made any changes to external network elements (for example to timeout INVITEs more quickly or to redirect traffic), then restore the original configuration on those elements.
Archive the downlevel export generated during the upgrade, and generate a new export
On the first node, orca will have generated an export of the Rhino installation at the downlevel version, taken prior to any migration or upgrade steps. This can be found in the ~/export directory, labelled with the version and cluster ID, and is a useful "restore point" in case problems with the upgrade are encountered later that cannot simply be undone with the rollback command. Copy (for example using rsync) the downlevel export directory with all its contents to your backup storage, if you have one.
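For example, to copy the downlevel export for cluster 42 to a backup host (the destination host and path here are placeholders; substitute your own backup storage):
rsync -av ~/export/ipsmgw-2.7.1.4-cluster-42 backup@<backup-host>:/backup/rhino-exports/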
Follow the Rhino documentation to generate a post-upgrade export and archive this too.
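As a sketch, the Rhino client distribution includes a rhino-export tool; the exact path and any connection options depend on your installation, and the output directory name here is illustrative:
<rhino-install-dir>/client/bin/rhino-export ~/export/post-upgrade-export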
Uplevel exports generated during major upgrade
Note that during a major upgrade, orca also generates exports of the uplevel cluster, as well as a transformed copy of the downlevel export. These exports can be ignored, and deleted at a later date.
Archive the upgrade logs
In the directory from which orca was run, there will be a directory logs containing many subdirectories with log files. Copy (for example using rsync) this logs directory with all its contents to your backup storage, if you have one. These logs can be useful for Metaswitch Customer Care in the event of a problem with the upgrade.
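For example, to bundle the logs into a single archive and copy it off-host (the backup destination is a placeholder):
tar czf orca-logs-$(date +%Y%m%d).tar.gz logs/
rsync -av orca-logs-*.tar.gz backup@<backup-host>:/backup/orca-logs/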
Save the install.properties file
It is a good idea to save the install.properties file for use in a future upgrade. Remember that the file is currently on the machine that you chose to run orca from, and that may not be the same machine chosen by someone doing a future upgrade. A good location for the file is a /home/sentinel/install directory on whatever you regard as the main Rhino host, for example the one you always provide first in the list of hosts, or the one originally used to install Rhino.
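For example, assuming sentinel is the administrative user on your chosen main Rhino host (substitute your own user and host names):
ssh sentinel@<main-rhino-host> mkdir -p /home/sentinel/install
scp install.properties sentinel@<main-rhino-host>:/home/sentinel/install/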
Configure new features
If the uplevel software introduced new features that you plan to use in your deployment, you may wish to configure and verify them here. Refer to the Sentinel IP-SM-GW manuals and changelogs for more information.
If required, clean up downlevel clusters and unneeded exports
Once the upgrade is confirmed to be working, you may wish to clean up old downlevel cluster(s) to save disk space.
Run the status command to view existing clusters and exports
Run the status command and observe the clusters and exports sections of the output.
./orca --hosts <host1,host2,host3,…> status
For upgrades, exports are generated only on the first node.
Identify any clusters or exports you no longer wish to keep. Note their cluster IDs; the ID is the last part of the name. For example, given this output:
Status of host host1
Clusters:
- ipsmgw-2.7.1.2-cluster-41
- ipsmgw-2.7.1.4-cluster-42
- ipsmgw-2.8.0.2-cluster-43 - LIVE
[...]
Exports:
- ipsmgw-2.7.1.2-cluster-41
- ipsmgw-2.7.1.4-cluster-42
- ipsmgw-2.7.1.4-cluster-42-transformed-for-2.8.0.2
- ipsmgw-2.8.0.2-cluster-43
[...]
Status of host host2
Clusters:
- ipsmgw-2.7.1.2-cluster-41
- ipsmgw-2.7.1.4-cluster-42
- ipsmgw-2.8.0.2-cluster-43 - LIVE
[...]
you may decide to delete cluster 41 and exports 41 and 42.
Retain one old cluster
You are advised to always leave the most recent downlevel cluster in place as a fallback. Be sure you have an external backup of any export directories you plan to delete, unless you are absolutely sure that you will not need them in the future.
Run the cleanup command to delete clusters and exports
Run the cleanup command, specifying the clusters and exports to delete as comma-separated lists of IDs (without whitespace). For example:
./orca --hosts <host1,host2,host3,...> cleanup --clusters 41
./orca --hosts <host1> cleanup --exports 41,42
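Once the cleanup has completed, you can re-run the status command to confirm that the deleted clusters and exports are no longer listed:
./orca --hosts <host1,host2,host3,...> status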
Update Java version of other applications running on the hosts
If performing a major upgrade with a new version of Java, orca will install the new Java, but it will only be applied to the new Rhino installation. Global environment variables such as JAVA_HOME, and any other applications that use Java, will not be updated.
The new Java installation can be found at ~/java/<version>, where <version> is the JDK version that orca installed, e.g. 8u162. If you want other applications running on the node to use this new Java installation, update the appropriate environment variables and/or configuration files to reference this directory.
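For example, to point a shell session (or an application's startup script) at the new JDK, using the 8u162 version directory mentioned above:
export JAVA_HOME=~/java/8u162
export PATH="$JAVA_HOME/bin:$PATH"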