Minor upgrade bundle
A minor upgrade bundle is a self-contained package with:
- the orca bundle
- the product SDK with offline repositories
- the optional packages
- the README
A minor upgrade bundle has the name
<product name>-<product version>-minor-upgrade.zip
Example: volte-2.7.0.4-minor-upgrade.zip
Item | Description |
---|---|
README | Contains information about the minor upgrade: how to use it, what components it changes, and what problems it fixes |
orca | The orca tool |
helpers directory | Contains a set of scripts used by orca |
core directory | Contains a set of scripts used by orca |
workflows directory | Contains a set of scripts used by orca |
resources directory | Contains the properties used by orca |
licenses directory | License information for third-party libraries used by the patch runner |
packages directory | Contains the package files and the packages.cfg |
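Unpacked, a bundle therefore looks roughly like this (the SDK file name is illustrative):

volte-2.7.0.4-minor-upgrade/
    README
    orca
    helpers/
    core/
    workflows/
    resources/
    licenses/
    packages/
        packages.cfg
        volte-2.7.0.4-offline-sdk.zip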
The packages directory contains:
- the packages.cfg file
- the SDK in offline mode
- a Rhino install (optional)
- a Rhino config JSON file (optional)
- a license (optional)
- a new JDK (Java) (optional)
- the post-install package or custom package (optional)
- the post-configure package (optional)
The packages.cfg file contains the names of the packages to apply. For example:
[packages]
sdk=sentinel-ipsmgw-2.6.0.17-offline-sdk.zip
rhino_package=rhino-install.tar
license=new.license
rhino_config_json=sentinel-volte-upgrade-rhino-config.json
post_install_package=custom-package-2.6.0.17.zip
post_configure_package=after-configuration-package-2.6.0.17.zip

[versions]
sdk=2.6.0.17
rhino=2.5.0.5
Requirements
A minor upgrade uses the orca tool, so orca's requirements apply here. Applying a minor upgrade requires administrative access to the Rhino installation running the product, so be sure the credentials used for the trusted SSH connections are valid.
By default orca assumes the $HOME directory of the remote hosts as the base directory. Use the option --remote-home-dir or -r if the path is different.
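As a quick sanity check before starting, you can confirm trusted SSH access from the management host to each cluster host. This is a minimal sketch, assuming the remote user is rhino and the host names host1, host2, host3 used in the examples below:

# From the management host: verify passwordless SSH access to each cluster host
for host in host1 host2 host3; do
    ssh -o BatchMode=yes rhino@"$host" true && echo "$host: ok" || echo "$host: ssh FAILED"
done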
The upgrade requires at least 1.5GB of free disk space on the first node:
- 0.5GB for the upgrade bundle
- 0.5GB for the installer to run
- 0.5GB for logs
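A quick way to check the available space on the first node; a minimal sketch, assuming the remote user is rhino and the default $HOME base directory:

ssh rhino@host1 'df -h ~'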
The hosts order should always be from the host with the highest node ID to the host with the lowest. If some nodes are not cleanly shut down, the nodes with the lowest node IDs will become primary in the cluster, to avoid split-brain situations. See Rhino Cluster Membership.
A valid install.properties
The install.properties file is necessary to install the correct components from the product. Ideally this install.properties should be the same one used to do the current installation. Each customer installation has different options chosen during installation, and these options define which components are installed. One example is the choice of CAP charging in VoLTE, which includes the IM-SSF service.
The important properties to check in the install.properties are:
- rhinoclientdir - points to the remote path of the Rhino client
- doinstall - set to true
- deployrhino - set to false
- any property that does not have a default value needs to be set. See the product documentation for the version being installed for the full list of properties.
The properties above are also checked by orca. If they are not properly set, orca will raise an error.
The other properties will be temporarily set to default values by the product SDK installer, which populates the profile tables. However, those properties will then be restored to the existing values from the product version that was upgraded from.
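A minimal sketch of the relevant install.properties entries, assuming the Rhino client lives at /home/rhino/rhino/client (the path is illustrative; use your installation's actual client path):

rhinoclientdir=/home/rhino/rhino/client
doinstall=true
deployrhino=false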
Consider this example for a VoLTE installation:
The current installation was done with the property home.domain set to mydomain.com.
When doing the minor upgrade, the user sets that property in the install.properties to otherdomain.com.
The product SDK installer will install the new version of the software and set the profiles that require information about the home domain to otherdomain.com.
After installing the new version, orca proceeds to recover the previous customer configuration present in the rhino-export.
This process will remove all tables from the new installation and will create the ones from the old installation.
At the end, the value mydomain.com will be restored.
A custom package
If the current installation has custom components on top of the product, a zip package containing an executable named install is required.
See Applying upgrade for customers with custom installations.
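Such a package might be laid out like the sketch below; only the executable install at the top level is required, and the du/ directory is an illustrative assumption about where a customisation might keep its supporting files:

custom-package-2.6.0.17.zip
    install    (executable entry point, run by orca)
    du/        (supporting files, e.g. custom deployable units - illustrative)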
Applying the minor upgrade
In order to get the concept across, this explanation starts with a simplified example that applies an upgrade to all hosts at once. You are strongly discouraged from doing this - a much better process is to do an upgrade in multiple stages: first upgrade just a single host, then perform testing to determine that the upgrade is working as expected, before finally continuing to roll out the upgrade to the rest of your Rhino hosts.
Applying a minor upgrade to a production environment is simple. The normal steps are:
- download the minor upgrade bundle to the management host
- ensure the management host has ssh access to the Rhino cluster hosts
- decompress the upgrade bundle and cd into the upgrade bundle directory
- read the README for instructions and for details of the upgrade
- prepare the install.properties file for use by the product installer: use the one used to do the current installation, or create a new one with the expected values
- verify that you have a Rhino running with the installed product, e.g. Sentinel VoLTE 2.7.0.9

Using just the product SDK, with or without customisations
Use the command
./orca --hosts host1,host2,host3 minor-upgrade <the packages path> <path to the install.properties> --no-pause
for example:
./orca --hosts host1,host2,host3 minor-upgrade packages install.properties --no-pause
orca needs to be called from inside the upgrade bundle directory, otherwise it will fail due to the use of relative paths for some actions.
Using just the product SDK plus a custom package
The command is the same as above, but orca will retrieve the custom package automatically from the packages directory according to the entries in the packages.cfg.
You can insert the custom package manually and add this line to the packages.cfg under the [packages] section:
post_install_package=<package file name>
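For example, the [packages] section might then look like this (both file names are illustrative):

[packages]
sdk=sentinel-volte-2.7.0.9-offline-sdk.zip
post_install_package=my-custom-package-2.7.0.9.zip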
In summary, orca will:
- check the connection to the hosts
- clone the cluster on all the hosts
- back up the installation
- retrieve the necessary configuration
- migrate the node on the first host in the given list to the new cluster
- install the new version on that first host
- copy the configuration from the old installation to the new installation
- clean up the temporary files
- optionally pause to allow testing of the upgraded node
- migrate the other nodes of the cluster
To roll back the installation:
./orca --hosts host1,host2,host3 rollback -f
To delete an old installation:
./orca --hosts host1,host2,host3 cleanup --cluster <cluster id>
The recommended multi-stage upgrade process
Applying an upgrade in multiple stages is not much more complex than the simplified example already given.
Instead of using the --no-pause option, specify the --pause option, thus:
./orca --hosts host1,host2,host3 minor-upgrade packages install.properties --pause
This will cause all of the hosts listed (in this case host1, host2, and host3) to be prepared for the upgrade, but only the first host in the list (here host1) will actually have the upgrade applied to it.
The process will then pause, to allow the upgraded installation to be tested. Note that only the first host (host1) has the new code on it, so you will need to arrange your testing appropriately to route traffic to this specific host.
Once you are satisfied that the upgrade is working as required, you can run the exact same command that you previously did, only with the --pause changed to --continue. In particular, you should not change the list of hosts given to these two commands, since the continuation process needs to access the same first host, and the same set of prepared hosts, as before.
In our example, the continuation command is therefore:
./orca --hosts host1,host2,host3 minor-upgrade packages install.properties --continue
This will then migrate the other nodes of the cluster, so they are all using the upgraded product.
Applying upgrade for customers with custom installations
Currently orca supports two ways of applying a minor upgrade for customised installations: a self-contained customised SDK, or the product SDK plus a custom package.
Self-contained customised SDK
This is equivalent to the product SDK, but includes the customisations. It is a self-contained package that uses the SDK installer. The SDK installer will install all the necessary components, including custom components and custom profile tables. The installation will be equivalent to the previous one, but with new-version components.
Product SDK plus custom package
This type of minor upgrade is done by installing customisations on top of the product SDK.
The custom package needs to contain an executable called install.
Specify the custom package name in the packages.cfg and put the custom package in the packages directory.
For example:
./orca --hosts host1,host2 minor-upgrade packages install.properties --pause
If there is a custom package specified in the packages.cfg, orca will install the product SDK first, then call this install script, which will do all the necessary operations to install the customised components into Rhino.
After this script finishes, orca will restore the configuration from the last installation.
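As an illustration only, a minimal install script might look like the sketch below. The rhino-console call and the deployable unit path are assumptions about what a particular customisation needs, not part of the minor upgrade contract:

#!/bin/sh
# Minimal sketch of a custom package's executable "install" entry point.
# orca runs this after installing the product SDK; exit non-zero on failure
# so that the upgrade step can be halted.
set -e

# Deploy a custom deployable unit via the Rhino client
# (the client path and DU name are illustrative)
"$HOME"/rhino/client/bin/rhino-console install file:du/my-custom-component.jar

echo "custom components installed"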
Handling Feature Scripts
A minor upgrade release might include Feature Scripts fixes.
By default, the minor upgrade process keeps the Feature Scripts from the currently installed version (the old version).
If the new versions of the Feature Scripts are the ones that should be present in the system after the upgrade, you need to specify the flag --skip-feature-scripts when running orca.
./orca --hosts host1,host2,host3 minor-upgrade packages install.properties --pause --skip-feature-scripts
Use this option carefully. If there are custom changes in the Feature Scripts, they will be lost and the system will not work as expected. Make sure to manually reapply all the custom changes if you use this option.
Troubleshooting
See Troubleshooting.
Example of minor upgrade execution on a 3 node cluster
This example shows a minor upgrade from Sentinel VoLTE 2.7.1.0 to Sentinel VoLTE 2.7.1.5.
Applying the minor upgrade with the command:
rhino@rhino-rem:~/install/sentinel-volte-sdk-2.7.1.5-upgrade$ ./orca --hosts rhino-vm1,rhino-vm2,rhino-vm3 minor-upgrade packages install.properties --no-pause
Check for prepared clusters
Starting on host rhino-vm1
Checking for prepared Rhino clusters
Done on rhino-vm1
Starting on host rhino-vm2
Checking for prepared Rhino clusters
Done on rhino-vm2
Starting on host rhino-vm3
Checking for prepared Rhino clusters
Done on rhino-vm3
Check the sdk install.properties script
Validating install.properties
Updating rhinoclientdir property in install.properties to /home/rhino/rhino/client
Prepare a new cluster and export the configuration
Applying minor upgrade package in directory 'packages' to hosts rhino-vm1,rhino-vm2,rhino-vm3
Deleting previous export /home/rhino/export/volte-2.7.1.0-cluster-108
Exporting configuration to directory /home/rhino/export/volte-2.7.1.0-cluster-108
Done on rhino-vm1
Exporting profile table and RA configuration
Done on rhino-vm1
Doing Prepare
Initializing the database
Done on rhino-vm1
Starting on host rhino-vm2
Doing Prepare
Done on rhino-vm2
Starting on host rhino-vm3
Doing Prepare
Done on rhino-vm3
Migrate the first node in the hosts list
Doing Migrate
Stopping node 101 in cluster 108
Waiting up to 120 seconds for calls to drain and SLEE to stop on node 101
Rhino has exited.
Successfully shut down Rhino on node 101. Now waiting for sockets to close...
Starting node 101 in cluster 109
Started Rhino. State is now: Waiting to go primary
Attempting to make cluster primary for 20 seconds
Successfully made cluster primary
Waiting for Rhino to be ready for 230 seconds
Started node 101 successfully
Done on rhino-vm1
Import basic product configuration
Importing pre-install configuration
Done on rhino-vm1
Install the new product version
Installing upgrade volte-2.7.1.5-offline-sdk.zip - this will take a while
Unpacking the upgrade package
Copying the install.properties
Installing the new product. This will take a while.
You can check the progress in /home/rhino/install/sdk-2.7.1.5/build/target/log/installer.log on the remote host
Done on rhino-vm1
Restore the customer configuration
Importing profile table and RA configuration
Done on rhino-vm1
Importing profiles from export directory volte-2.7.1.0-cluster-108
Done on rhino-vm1
Transforming the service object pools configuration
Getting the services
Selecting files
Applying changes
Done on rhino-vm1
Importing post-install configuration
Done on rhino-vm1
Finish the upgrade and migrate the other nodes
Starting on host rhino-vm2
Doing Migrate
Stopping node 102 in cluster 108
Waiting up to 120 seconds for calls to drain and SLEE to stop on node 102
Rhino has exited.
Successfully shut down Rhino on node 102. Now waiting for sockets to close...
Starting node 102 in cluster 109
Started Rhino. State is now: Running
Waiting for Rhino to be ready for 75 seconds
Started node 102 successfully
Done on rhino-vm2
Starting on host rhino-vm3
Doing Migrate
Stopping node 103 in cluster 108
Waiting up to 120 seconds for calls to drain and SLEE to stop on node 103
Rhino has exited.
Successfully shut down Rhino on node 103. Now waiting for sockets to close...
Starting node 103 in cluster 109
Started Rhino. State is now: Running
Waiting for Rhino to be ready for 40 seconds
Started node 103 successfully
Done on rhino-vm3
Show the cluster status
Status of host rhino-vm1
Clusters:
- volte-2.7.1.0-cluster-108
- volte-2.7.1.5-cluster-109 - LIVE
Live Node:
Rhino node 101: found process with id 18550
Node 101 is Running
Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e'
Exports:
- volte-2.7.1.0-cluster-108
- volte-2.7.1.0-cluster-108-transformed-for-2.8.0.3
- volte-2.8.0.3-cluster-109
License information:
1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018
Licensed Rhino version(s): 2.*, Development
Java:
found = live cluster
java_home = /opt/java/jdk1.7.0_79
version = java version "1.7.0_79", Java(TM) SE Runtime Environment (build 1.7.0_79-b15), Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
OS:
python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609]
version = Linux version 4.4.0-130-generic (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018
Services:
name=IM-SSF vendor=OpenCloud version=1.4.7
name=sentinel.registrar vendor=OpenCloud version=2.7.1.5-copy#1
name=sentinel.registrar vendor=OpenCloud version=2.7.1.5
name=sentinel.registrar vendor=OpenCloud version=2.7.1
name=sentinel.registrar vendor=OpenCloud version=current
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5-copy#1
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1
name=volte.sentinel.sip vendor=OpenCloud version=current
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5-copy#1
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1
name=volte.sentinel.ss7 vendor=OpenCloud version=current

Status of host rhino-vm2
Clusters:
- volte-2.7.1.0-cluster-108
- volte-2.7.1.5-cluster-109 - LIVE
Live Node:
Rhino node 102: found process with id 20976
Node 102 is Running
Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e'
Exports:
License information:
1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018
Licensed Rhino version(s): 2.*, Development
Java:
found = live cluster
java_home = /opt/java/jdk1.7.0_79
version = java version "1.7.0_79", Java(TM) SE Runtime Environment (build 1.7.0_79-b15), Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
OS:
python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609]
version = Linux version 4.4.0-128-generic (buildd@lcy01-amd64-019) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018
Services:
name=IM-SSF vendor=OpenCloud version=1.4.7
name=sentinel.registrar vendor=OpenCloud version=2.7.1.5-copy#1
name=sentinel.registrar vendor=OpenCloud version=2.7.1.5
name=sentinel.registrar vendor=OpenCloud version=2.7.1
name=sentinel.registrar vendor=OpenCloud version=current
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5-copy#1
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1
name=volte.sentinel.sip vendor=OpenCloud version=current
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5-copy#1
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1
name=volte.sentinel.ss7 vendor=OpenCloud version=current

Status of host rhino-vm3
Clusters:
- volte-2.7.1.0-cluster-108
- volte-2.7.1.5-cluster-109 - LIVE
Live Node:
Rhino node 103: found process with id 1493
Node 103 is Running
Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e'
Exports:
License information:
1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018
Licensed Rhino version(s): 2.*, Development
Java:
found = live cluster
java_home = /opt/java/jdk1.7.0_79
version = java version "1.7.0_79", Java(TM) SE Runtime Environment (build 1.7.0_79-b15), Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
OS:
python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609]
version = Linux version 4.4.0-128-generic (buildd@lcy01-amd64-019) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018
Services:
name=IM-SSF vendor=OpenCloud version=1.4.7
name=sentinel.registrar vendor=OpenCloud version=2.7.1.5-copy#1
name=sentinel.registrar vendor=OpenCloud version=2.7.1.5
name=sentinel.registrar vendor=OpenCloud version=2.7.1
name=sentinel.registrar vendor=OpenCloud version=current
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5-copy#1
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1
name=volte.sentinel.sip vendor=OpenCloud version=current
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5-copy#1
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1
name=volte.sentinel.ss7 vendor=OpenCloud version=current

Available actions:
- prepare
- prepare-new-rhino
- cleanup --clusters
- cleanup --exports
- rollback