This section documents the upgrade process. Make sure you are also familiar with the new features and new configuration introduced in the uplevel Sentinel IP-SM-GW software.
In general an upgrade of a Rhino cluster running Sentinel IP-SM-GW can be completed in a single maintenance window, though for operational reasons you may wish to use more than one. The maintenance window should be of adequate length: at least 30 minutes for the first node and 10 minutes for each subsequent node, plus additional time for traffic draining and for validation tests according to your test plan.
Manual reconfiguration steps
For deployments utilizing any or all of:
it is strongly recommended that the upgrade is trialled in a lab deployment first. This will allow you to identify any breaking issues with the upgrade, and to note any customizations to feature scripts and other configuration that need to be reapplied after the upgrade.
Traffic capacity will be reduced by 1/N, where N is the number of nodes in the cluster being upgraded. That is to say, at any given point during the upgrade one node will be unavailable, but the rest will still be able to handle traffic. As such, the maintenance window should be planned for a period when call traffic volumes are quieter.
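As a quick sanity check on sizing, the remaining capacity during the upgrade can be computed directly. The four-node cluster here is a hypothetical example:

```shell
# Remaining capacity while one node of a hypothetical N=4 cluster is down:
# (N-1)/N of nominal capacity.
awk 'BEGIN { n = 4; printf "%.0f%% of nominal capacity remains\n", (n - 1) / n * 100 }'
# → 75% of nominal capacity remains
```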
Product upgrade order
A Sentinel deployment may consist of multiple products: REM, GAA, VoLTE, and IPSMGW. The clusters must be upgraded in the following order: REM (with plugins), then GAA, then VoLTE, then IPSMGW. (For clarity, all GAA nodes must be upgraded before any VoLTE nodes, and all VoLTE before any IPSMGW.)
For major upgrades, you will need to upgrade all products in close succession: once REM has been upgraded to a new version, upgrade the other products soon after, so that the REM plugins for those products retain maximum compatibility and can provide the best management interface.
The first node in each Rhino cluster is handled separately from the rest, since the software upgrade only needs to be performed on one node; the remaining nodes pick up the upgraded software by joining the new cluster (see Cluster Migration Workflow).
Always specify the hosts in reverse order of their node IDs, i.e. from highest to lowest. This is necessary because certain Rhino features use the node ID as a tie-breaker when they cannot otherwise determine which node has preference; processing the highest-numbered node first avoids problems.
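One way to guarantee this ordering is to derive the `--hosts` argument from your node-ID-to-host mapping rather than typing it by hand. The node IDs and hostnames below are hypothetical; the `status` invocation at the end is the one shown later in this section:

```shell
# Hypothetical mapping of Rhino node IDs to hosts: sort by node ID,
# highest first, then join into the comma-separated --hosts argument.
hosts=$(printf '%s\n' '101 rhino1' '102 rhino2' '103 rhino3' \
  | sort -rn | awk '{ print $2 }' | paste -sd, -)
echo "$hosts"                        # → rhino3,rhino2,rhino1
# ./orca --hosts "$hosts" status
```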
orca takes a list of hosts. The hosts are upgraded in the order specified, and only the nodes specified are upgraded. As such, orca supports splitting the upgrade across multiple maintenance windows: specify only the subset of hosts to be upgraded in each window.
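For example, a six-node cluster (hostnames hypothetical) could be split into two windows of three nodes each, keeping the highest-node-ID-first ordering across the whole list:

```shell
# Split an ordered host list into batches of three, one batch per
# maintenance window; hostnames are hypothetical.
all="rhino6,rhino5,rhino4,rhino3,rhino2,rhino1"
echo "$all" | tr ',' '\n' | paste -d, - - -
# → rhino6,rhino5,rhino4
# → rhino3,rhino2,rhino1
# Window 1: pass rhino6,rhino5,rhino4 to orca
# Window 2: pass rhino3,rhino2,rhino1 to orca
```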
Limitation of one cluster
orca only supports working with a list of hosts that are all part of the same cluster. If your deployment has multiple clusters to be upgraded, these upgrades must be done separately. This does mean that (if such an approach is suitable for your overall deployment architecture) you can use multiple machines to run orca against the different clusters in parallel.
Your Metaswitch Customer Care Representative should have provided you with an upgrade bundle, normally named
ipsmgw-<uplevel-version>-upgrade-bundle.zip or similar.
unzip the bundle to a directory of your choice
cd to it.
orca working directory
orca must always be run from the directory where it resides. It will fail to operate correctly if run from any other location.
Check that the orca script is present and executable by running ./orca --help. You should see usage information.
Check that orca can contact the hosts you wish to upgrade by running ./orca --hosts <host1,host2,…> status. You should see status information about all the hosts specified, such as the current live cluster and lists of services.
SSH access to hosts
orca requires passwordless SSH access to hosts, i.e. all hosts in the cluster must be accessible from the machine running orca using SSH keys. If this is not the case, orca will throw an error saying it is unable to contact one or more of the hosts. Ensure SSH key-based access is set up to all such hosts and retry the status command until it works.
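A quick way to verify key-based access before running orca is an ssh loop with BatchMode enabled, which makes ssh fail instead of prompting for a password. The hostnames below are hypothetical; substitute your cluster's hosts. (This check is environment-dependent, so no fixed output is shown.)

```shell
# Verify passwordless SSH to each upgrade host. BatchMode=yes makes ssh
# fail rather than prompt, so a failure here means keys are not set up.
for h in rhino3 rhino2 rhino1; do              # hypothetical hostnames
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" true; then
    echo "$h: ok"
  else
    echo "$h: no key access - set up SSH keys (e.g. ssh-copy-id) and retry"
  fi
done
```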
Upgrading requires an install.properties file. Such a file was generated by the installer when the product was installed manually (the "non-interactive mode"). When doing an upgrade, copy the install.properties file into the same directory as orca in the unzipped bundle, so as to associate it with this upgrade.
If you have run a previous upgrade for this product, you can reuse the version of the file from that previous upgrade; if that upgrade followed the recommendations in this document, the file will be located on whichever of the remote Rhino hosts is regarded as the main one.
However, if this is an installation that has never been upgraded, you may be able to find the file in the hidden /home/sentinel/.install directory on the host that the install was done on (where, if it exists, it will be prefixed with the product name).
Note that in both cases this remote Rhino host is probably not the machine that you are trying to run orca on. You will need to copy the file from the remote machine to the one where you have orca, renaming it in the process from the product-prefixed name to install.properties if necessary.
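The copy and rename can be done in one step with scp. The username, hostname, and filename glob below are assumptions, so adjust them to your deployment:

```shell
# Fetch the product-prefixed properties file from the Rhino host and save
# it locally as install.properties, next to orca in the unzipped bundle.
# "sentinel@rhino1" and the glob are hypothetical - adjust as needed.
scp 'sentinel@rhino1:/home/sentinel/.install/*install.properties' ./install.properties
```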
If you do not have an install.properties file from a previous installation or upgrade, you can regenerate it by running the installer on a scratch VM with Rhino installed, following the product's installation instructions.
When the installer asks "Install now?", you can select no; the installation procedure will terminate early but still write the install.properties file. As above, copy the file from the scratch VM to the machine where you have unzipped the upgrade bundle and will run orca.