Before you begin the upgrade, ensure you have completed the pre-upgrade checklist.
Upgrade process
Specify all hosts
Unless otherwise specified, in all commands hosts are specified as a comma-separated list (without whitespace), e.g. host1,host2,host3.
Note that the following process is for a major upgrade. Minor and major upgrades are very similar; the differences for a minor upgrade are noted inline.
Set session ownership configuration
Session ownership in Sentinel VoLTE is handled by the Session Ownership facility in Rhino 2.6.1, rather than by the Session Tracking to Own Device feature. When upgrading from Sentinel VoLTE 2.7.1, manual action is required during the installation to enable the Session Ownership facility in Rhino.
Get the cassandra configuration
Get the cassandraContactPoints (or cassandraHosts) and policy.protocol.port properties from the cassandra-external-session-tracking Resource Adaptor.
Using REM, connect to a Sentinel VoLTE node, then select Management → Resources from the menu.
Select the cassandra-external-session-tracking Resource Adaptor.
Take note of the Cassandra hosts present in either cassandraHosts or cassandraContactPoints, and the value of the policy.protocol.port property.
You can also list these properties using the rhino-console command listraentityconfigproperties cassandra-external-session-tracking.
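For example, the following could be run on one of the Sentinel VoLTE nodes (the client/bin path shown is an assumption; adjust it to match your Rhino installation):

client/bin/rhino-console listraentityconfigproperties cassandra-external-session-tracking

The output lists the configuration properties of the resource adaptor entity, including cassandraContactPoints (or cassandraHosts) and policy.protocol.port.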
Create the property file used by orca when installing Rhino
Now create a file called rhino.properties in the directory where you unzipped the upgrade bundle, and add the following properties to it:
DEFAULT_SESSION_OWNERSHIP_FACILITY_ENABLED=True
DEFAULT_REPLICATED_STORAGE_RESOURCE=DomainedMemoryDatabase
DEFAULT_CASSANDRA_CONTACT_POINTS=
DEFAULT_CASSANDRA_PORT=
Assign the values present in cassandraContactPoints or cassandraHosts to DEFAULT_CASSANDRA_CONTACT_POINTS.
Assign the value present in policy.protocol.port to DEFAULT_CASSANDRA_PORT; normally this value is 9042.
Example:
DEFAULT_SESSION_OWNERSHIP_FACILITY_ENABLED=True
DEFAULT_REPLICATED_STORAGE_RESOURCE=DomainedMemoryDatabase
DEFAULT_CASSANDRA_CONTACT_POINTS=volte_tas_01,volte_tas_02,volte_tas_03
DEFAULT_CASSANDRA_PORT=9042
Save the file.
Use the parameter --installer-overrides rhino.properties as part of the orca command major-upgrade or minor-upgrade.
Upgrade the first node
Start the upgrade using the following command. For a minor upgrade, replace major-upgrade with minor-upgrade.
./orca --hosts <host1,host2,…> major-upgrade --stop-timeout <timeout> --pause --installer-overrides rhino.properties packages <install.properties>
where:
- <timeout> is the maximum amount of time you wish to allow for all calls to stop on each node
- <install.properties> is the path to the install.properties file
- packages is a literal name, and should not be changed.
If your upgrade includes a new Rhino version and you have been given a separate license file, specify this here by appending the parameter --license <path to license file>.
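For example, appending this parameter to the command shown above gives the following (the parameter position is an assumption; orca may also accept --license elsewhere on the command line):

./orca --hosts <host1,host2,…> major-upgrade --stop-timeout <timeout> --pause --installer-overrides rhino.properties packages <install.properties> --license <path to license file>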
This will take approximately 30 minutes. At the end of the upgrade process, you will be prompted to continue the operation when ready:
Major upgrade has been paused after applying it to just host1.
You should now test that the major upgrade has worked.
Once this is verified, use the following command to complete the major upgrade:
./orca --hosts host1,host2,host3 major-upgrade --continue packages install.properties
If the upgrade failed, refer to orca troubleshooting to resolve the problem. You will normally need to roll back to the original cluster in order to try again.
Verify the new node is visible in REM
Log into the REM web application. Create a new connection to the first host and connect to it. You should be able to see information about the first node. Ensure there are no unexpected alarms.
Edit the connection to include the other hosts, by updating the address field from a single host address to a list of all the required hosts.
Merge and import feature scripts
If the output from orca prompted you to merge feature scripts, follow the instructions in Feature Scripts conflicts and resolution to merge and import the feature scripts into the uplevel installation. Note that when running orca’s import-feature-scripts command, you need only specify the first host.
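As a hedged sketch only (the remaining arguments to import-feature-scripts are covered in Feature Scripts conflicts and resolution and are not shown here), the command takes the form:

./orca --hosts <host1> import-feature-scripts …

where <host1> is the host holding the first node that was upgraded.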
Perform first node validation tests
If your test plan includes validating the first node after upgrade, run these tests now.
Add failover support for charging timers
In the MMTel_Post_SipAccess_ServiceTimer execution point, the SuppressSdpCdr feature needs to be added, because this execution point triggers on charging timers for failover support. To support continuation of CDR reporting after failover, VolteInterimCdr needs to run when the session uses interim CDRs; add it conditionally, so that it executes only when interim CDRs are in use. The diff snippet below shows the changes to apply to the feature script.
MMTel_Post_SipAccess_ServiceTimer:
  FeatureScriptSrc:
    featurescript SipAccessServiceTimer-SysPost-MMTel {
        run DiameterMMTelInfo
        run DetermineCauseCode
        run DiameterServiceInfo
        run DiameterPerLegInfo mode "outbound"
        if not session.MonitorCallOnly {
            run B2BUAScurPostFeature
        }
+       run SuppressSdpCdr
+       if session.InterimCdrsEnabled {
+           run VolteInterimCdr
+       }
        run RemoveHeadersFromOutgoingMessages
        run ExternalSessionTracking
    }
Upgrade the rest of the nodes
Run the same command as in the "Upgrade the first node" section, but replace --pause with --continue. For a minor upgrade, replace major-upgrade with minor-upgrade.
./orca --hosts <host1,host2,…> major-upgrade --stop-timeout <timeout> --continue packages <install.properties>
If the output from orca reports Given hosts are not in the correct state to continue, this probably indicates that you have not issued the correct command to continue. In particular, you must ensure that the first host listed in the --hosts list is the one representing the first node that was upgraded, and that all the other hosts were also present in the original list, since those are the only ones that have been correctly prepared for the upgrade. There are two simple ways to get the correct command:
- copy the command from the orca output, which tells you the exact command to use, or
- find the original command in your terminal command history and simply replace --pause with --continue.
During this stage, each remaining host will be migrated to the new cluster one by one. If you remain logged into REM viewing the connection you created earlier, each node will become visible in REM as it is migrated into the new cluster. Note that some unexpected alarms may temporarily appear on these new nodes until they have fully deployed the new software and configuration, because a mixture of old and new components is in place. Once the new software has fully deployed, alarms caused in this way will disappear; their temporary appearance is nothing to worry about.
Migrating in two or more maintenance windows
For the purposes of example, suppose there are five hosts. In the first maintenance window, you prepare all the hosts and upgrade the first one using the major-upgrade command with the --pause option. Once you have tested that the first node is performing as expected, you issue the --continue command for the hosts you wish to upgrade in that window. In the second maintenance window, the command you use must keep the first host the same (since that is the host holding the node whose upgrade is being duplicated to the other nodes), but you change the rest of the list to be the remainder of the hosts that need to be upgraded.
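As an illustrative sketch only (the host names host1 through host5 and the split of hosts between the two windows are hypothetical), the commands might look like this:

First maintenance window:

./orca --hosts host1,host2,host3,host4,host5 major-upgrade --stop-timeout <timeout> --pause --installer-overrides rhino.properties packages <install.properties>
./orca --hosts host1,host2,host3 major-upgrade --stop-timeout <timeout> --continue packages <install.properties>

Second maintenance window (host1 remains first in the list; the rest of the list is the remaining hosts):

./orca --hosts host1,host4,host5 major-upgrade --stop-timeout <timeout> --continue packages <install.properties>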
Next steps
Follow the post-upgrade checklist.