Before you begin the upgrade, ensure you have completed the pre-upgrade checklist.

Upgrade process

Important
Specify all hosts

Unless otherwise specified, in all orca commands, you should specify all the hosts in the cluster. orca will automatically handle the split between the first node and the others.

Hosts are specified as a comma-separated list (without whitespace), e.g. --hosts host1,host2,host3. They can be specified as IP addresses or hostnames. Specify the nodes in descending order of node ID.
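
For example, for a three-node cluster whose nodes have IDs 103, 102 and 101 on hosts 10.0.0.3, 10.0.0.2 and 10.0.0.1 respectively (these addresses and IDs are purely illustrative), the hosts would be specified as:

--hosts 10.0.0.3,10.0.0.2,10.0.0.1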

Note that the following process describes a major upgrade. Minor and major upgrades are very similar; the differences for a minor upgrade are listed inline.

Set session ownership configuration

Session ownership in Sentinel VoLTE is handled by the Session Ownership facility in Rhino 2.6.1 rather than by the Session Tracking to Own Device feature. Upgrading from Sentinel VoLTE 2.7.1 therefore requires manual action during the installation to enable the Session Ownership facility in Rhino.

Get the Cassandra configuration

Get the cassandraContactPoints or cassandraHosts and policy.protocol.port properties from the cassandra-external-session-tracking Resource Adaptor.

Using REM, connect to a Sentinel VoLTE node. Click the menu Management → Resources and select the cassandra-external-session-tracking Resource Adaptor. Take note of (or copy) the value containing the Cassandra hosts, which appears in either cassandraHosts or cassandraContactPoints, and the value of the policy.protocol.port property.

You can also use the rhino-console command listraentityconfigproperties cassandra-external-session-tracking to list these properties.
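
For example, assuming rhino-console is run from the bin directory of the Rhino client (the exact path depends on your installation):

./rhino-console listraentityconfigproperties cassandra-external-session-tracking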

Create property file used by orca while installing Rhino

Now create a file called rhino.properties in the directory where you unzipped the upgrade bundle. Add the following properties to the file:

DEFAULT_SESSION_OWNERSHIP_FACILITY_ENABLED=True
DEFAULT_REPLICATED_STORAGE_RESOURCE=DomainedMemoryDatabase
DEFAULT_CASSANDRA_CONTACT_POINTS=
DEFAULT_CASSANDRA_PORT=

Assign the value present in cassandraContactPoints or cassandraHosts to DEFAULT_CASSANDRA_CONTACT_POINTS, and the value present in policy.protocol.port to DEFAULT_CASSANDRA_PORT (normally 9042).

Example:

DEFAULT_SESSION_OWNERSHIP_FACILITY_ENABLED=True
DEFAULT_REPLICATED_STORAGE_RESOURCE=DomainedMemoryDatabase
DEFAULT_CASSANDRA_CONTACT_POINTS=volte_tas_01,volte_tas_02,volte_tas_03
DEFAULT_CASSANDRA_PORT=9042

Save the file.

Use the parameter --installer-overrides rhino.properties as part of the orca major-upgrade or minor-upgrade command.

Upgrade the first node

Start the upgrade using the following command. For a minor upgrade, replace major-upgrade with minor-upgrade.

./orca --hosts <host1,host2,…> major-upgrade --stop-timeout <timeout> --pause --installer-overrides rhino.properties packages <install.properties>

where

  • <timeout> is the maximum amount of time you wish to allow for all calls to stop on each node

  • <install.properties> is the path to the install.properties file

  • packages is a literal name, and should not be changed.

If your upgrade includes a new Rhino version and you have been given a separate license file, specify this here by appending the parameter --license <path to license file>.
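
For example, the complete command with a license file specified would look like:

./orca --hosts <host1,host2,…> major-upgrade --stop-timeout <timeout> --pause --installer-overrides rhino.properties packages <install.properties> --license <path to license file>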

This will take approximately 30 minutes. At the end of the upgrade process, you will be prompted to continue the operation when ready:

Major upgrade has been paused after applying it to just host1.
You should now test that the major upgrade has worked.
Once this is verified, use the following command to complete the major upgrade:
  ./orca --hosts host1,host2,host3 major-upgrade --continue packages install.properties

If the upgrade failed, refer to Troubleshooting to resolve the problem. You will normally need to roll back to the original cluster in order to try again.

Verify the new node is visible in REM

Log into the REM web application. Create a new connection to the first host and connect to it. You should be able to see information about the first node. Ensure there are no unexpected alarms.

Edit the connection to include the other hosts, by updating the address field from a single host address to a list of all the required hosts.
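
As a purely hypothetical illustration (match the address format that REM already shows for the existing connection), the address field might change from a single entry such as:

host1

to a comma-separated list such as:

host1,host2,host3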

Merge and import feature scripts

If the output from orca prompted you to merge feature scripts, follow the instructions in Feature Scripts conflicts and resolution to merge and import the feature scripts into the uplevel installation. Note that when running orca’s import-feature-scripts command, you need only specify the first host.
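
For example, a minimal sketch of that invocation (the exact arguments are described in Feature Scripts conflicts and resolution) would be:

./orca --hosts <host1> import-feature-scripts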

Perform first node validation tests

If your test plan includes validating the first node after upgrade, run these tests now.

Upgrade the rest of the nodes

Run the same command as in the "Upgrade the first node" section, but replace --pause with --continue. For a minor upgrade, replace major-upgrade with minor-upgrade.

./orca --hosts <host1,host2,…> major-upgrade --stop-timeout <timeout> --continue packages <install.properties>

If the output from orca reports Given hosts are not in the correct state to continue, this probably indicates that you have not issued the correct command to continue. In particular, ensure that the first host listed in the --hosts list is the one representing the first node that was upgraded, and that all the other hosts were also present in the original list, since those are the only ones that have been correctly prepared for the upgrade. There are two simple ways to get the correct command:

  • either copy the command from the orca output, which tells you the exact command to use

  • or find the original command in your terminal command history, and simply replace the --pause with --continue.

During this stage, each remaining host will be migrated to the new cluster one by one. If you remain logged into REM viewing the connection you created earlier, each node will become visible in REM as it is migrated into the new cluster. Note that some unexpected alarms may appear temporarily on these nodes until they have fully deployed the new software and configuration, because a mixture of old and new components is in place during that time. Once the new software has fully deployed, alarms caused in this way will disappear; their temporary appearance is nothing to worry about.

Tip
Migrating in two or more maintenance windows

For the purposes of example, suppose there are five hosts named node41 through node45, each hosting a node whose ID follows the same order as the host names: host node41 has node 41, host node42 has node 42, and so on through host node45 with node 45. The hosts must be specified in descending node ID order, so we list node45 down to node41. Suppose you wish to upgrade three hosts in the first maintenance window and two in the second.

In the first maintenance window, you prepare all the hosts and upgrade the first one using the --pause option:

./orca --hosts node45,node44,node43,node42,node41 major-upgrade --pause packages <install.properties>

Once you have tested that the first node is performing as expected, issue the --continue form of the command, giving just the set of hosts that you want to upgrade in this window:

./orca --hosts node45,node44,node43 major-upgrade --continue packages <install.properties>

In the second maintenance window, the command you use must keep the first host the same (since that is the host holding the node whose upgrade is being duplicated to the other nodes), but change the rest of the list to the remaining hosts that need to be upgraded:

./orca --hosts node45,node42,node41 major-upgrade --continue packages <install.properties>

Perform post-upgrade validation tests

Perform your post-upgrade test plan at this stage.

The upgrade is now complete.
