Major upgrade bundle

A Major upgrade bundle is a self-contained package with:

  • the orca bundle

  • the product SDK with offline repositories

  • the README

A Major upgrade bundle has the name

<product name>-<product version>-major-upgrade.zip

Example: volte-2.7.0.4-major-upgrade.zip

Item                            Description
README                          Contains information about the major upgrade: how to use it, what components it changes, and what problems it fixes
orca                            The orca tool
helpers directory               Contains the set of scripts used by orca
core directory                  Contains the set of scripts used by orca
workflows directory             Contains the set of scripts used by orca
resources directory             Contains the properties used by orca
licenses directory              License information for third party libraries used by the patch runner
packages directory              Contains the package files and the packages.cfg
transformation-rules            Contains the transformation rules jar
required-installer-properties   Contains the required SDK properties

The packages directory contains:

  • the packages.cfg file

  • the SDK in offline mode

  • the post install package or custom package (optional)

  • the post configure package (optional)

  • the rhino installer (optional)

  • the rhino-config.json with the required Rhino properties when a new Rhino is required

  • the Java JDK (optional)

The packages.cfg file contains the names of the packages to apply. For example:

[files]
sdk=volte-2.8.0.3-offline-sdk.zip
post_install_package=post-install-package.zip
post_configure_package=post-configure-package.zip
rhino_package=rhino-install.tar
rhino_config_json=sentinel-volte-upgrade-rhino-config.json

[versions]
sdk=2.8.0.3
rhino_package=2.6.1.2

[additional_data]

Requirements

A major upgrade uses the orca tool, so orca's requirements apply here. Applying a major upgrade requires administrative access to the Rhino installation running the product, so make sure the credentials used for the trusted ssh connections are valid.

By default orca assumes the $HOME directory of the remote hosts is the base directory; use the --remote-home-dir or -r option if the path is different. The upgrade requires at least 1.5GB of free disk space on the first node (see the example after this list):

  • 0.5GB for the upgrade bundle

  • 0.5GB for the installer to run

  • 0.5GB for logs
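For example, a quick pre-flight check from the management host could look like the following. The host names, ssh user and alternative base directory are placeholders, and the exact placement of --remote-home-dir should be confirmed against orca's built-in help; this is only a sketch.

# confirm ssh access and free disk space (at least 1.5GB) on each Rhino host
ssh rhino@host1 'hostname && df -h $HOME'

# if the Rhino installation does not live under $HOME, tell orca where it is
./orca --hosts host1,host2,host3 --remote-home-dir /srv/rhino major-upgrade packages install.properties --no-pause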

Important The host order should always be from the host with the highest node ID to the host with the lowest. If some nodes are not shut down cleanly, the nodes with the lowest node IDs become primary in the cluster, which avoids split-brain situations. See Rhino Cluster Membership.

A valid install.properties

The install.properties file is necessary to install the correct components from the product, and it should conform to what the SDK installer for the new product version expects. It should contain the same properties as the previous installation, plus any new required properties set according to the product requirements. Each customer installation has different options chosen during installation, and these options define which components are installed. One example is the choice of CAP charging in VoLTE, which includes the IM-SSF service.

The important properties to check in the install.properties are:

  • rhinoclientdir to point to the remote path of the Rhino client

  • doinstall set to true

  • deployrhino set to false

  • any property that does not have a default value needs to be set. See the product documentation for the version being installed for the full list of properties.

The properties above are also checked by orca. If they are not set properly, orca raises an error.
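As an illustration, the relevant fragment of an install.properties could look like this. The client path and domain value are examples only (the path here matches the one shown in the example output later on this page), and the full set of required properties depends on the product version being installed.

rhinoclientdir=/home/rhino/rhino/client
doinstall=true
deployrhino=false
# product-specific properties without defaults must also be set, for example in VoLTE:
home.domain=mydomain.com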

Important The other properties will be temporarily set to default values by the product SDK installer, which populates the profile tables. Those properties will then be restored to the existing values from the product version being upgraded from.

Consider this example for a VoLTE installation:

The current installation was done with the property home.domain set to mydomain.com. When doing the major upgrade, the user sets that property in the install.properties to otherdomain.com. The product SDK installer will install the new version of the software and set the profiles that require information about the home domain to otherdomain.com. After installing the new version, orca proceeds to recover the previous customer configuration present in the rhino-export. This process removes all tables from the new installation and recreates the ones from the old installation, so at the end the value mydomain.com is restored.

A custom package

If the current installation has custom components on top of the product, a zip package containing an executable named install is required. See Applying upgrade for customers with custom installations.

Applying the major upgrade

Important In order to get the concept across, this explanation starts with a simplified example that applies an upgrade to all hosts at once. You are strongly discouraged from doing this - a much better process is to do an upgrade in multiple stages: first upgrade just a single host, then perform testing to determine that the upgrade is working as expected, before finally continuing to roll out the upgrade to the rest of your Rhino hosts.

Applying a major upgrade to a production environment is simple. The normal steps are:

  • download the major upgrade bundle to the management host

  • ensure the management host has ssh access to the Rhino cluster hosts

  • decompress the upgrade bundle and cd to the upgrade bundle

  • read the README for instructions and for details of the upgrade

  • prepare the install.properties file for use by the product installer; use the one from the current installation or create a new one with the expected values

  • verify that you have a Rhino running with the installed product, e.g. Sentinel VoLTE 2.7.0.9

Using just the product SDK with or without customisations

  • use the command

./orca --hosts host1,host2,host3 major-upgrade <the packages path> <path to the install.properties> --no-pause

for example:

./orca --hosts host1,host2,host3 major-upgrade packages install.properties --no-pause
Note orca needs to be called from inside the upgrade bundle directory, otherwise it will fail due to the use of relative paths for some actions.

Using just the product SDK plus a custom package

The command is the same as above, but orca will retrieve the custom package automatically from the packages directory according to the entries in the packages.cfg. You can also insert the custom package manually and add this line to the packages.cfg under the [files] section:

post_install_package=<package file name>

In summary orca will:

  • check the connection to the hosts

  • clone the cluster on all the hosts

  • back up the installation

  • retrieve the necessary configuration

  • prepare a cluster on the hosts

    • install new Java if required

    • install new Rhino if required

  • migrate the node on the first host in the given list to the new cluster

  • install the new product version on that first host

  • copy the configuration from the old installation to the new installation

  • clean up the temporary files

  • optionally pause to allow testing of the upgraded node

  • migrate the other nodes of the cluster

To rollback the installation

./orca --hosts host1,host2,host3 rollback -f

To delete an old installation

./orca --hosts host1,host2,host3 cleanup --cluster <cluster id>

Applying an upgrade in multiple stages is not that much more complex than the simplified example already given.

Instead of using the --no-pause option, specify a --pause option thus:

./orca --hosts host1,host2,host3 major-upgrade packages install.properties --pause

This will cause all of the hosts listed (in this case host1, host2, and host3) to be prepared for the upgrade, but only the first host in the list (here host1) will actually have the upgrade applied to it.

The process will then pause to allow the upgraded installation to be tested. Note that only the first host (host1) has the new software version installed, so you will need to arrange your testing appropriately to route traffic to this specific host.

Once you are satisfied that the upgrade is working as required, you can run the exact same command that you previously did, only with the --pause changed to be --continue. In particular, you should not change the list of hosts given to these two commands, since the continuation process needs to access the same first host, and same set of prepared hosts, as before.

In our example, the continuation command is therefore

./orca --hosts host1,host2,host3 major-upgrade packages install.properties --continue

This will then migrate the other nodes of the cluster, so they are all using the upgraded product.

Limitations

Rhino tracers

The tracers applied to your current installation are not retained after an upgrade. You’ll need to manually reapply any custom tracers after the upgrade.

Deleted SNMP OIDs, and deleted counters

SNMP OIDs and counters which exist in both the old and new versions involved in a major upgrade are properly preserved during that major upgrade. However, there is no protection against a future patch or upgrade reusing OIDs that the major upgrade stopped using.

For example, if a feature is deleted in a major upgrade, then the OIDs associated with that feature are also deleted. If someone is monitoring those OIDs, then after the upgrade none of those OIDs will be triggered, so the UI will not show any changes.

If a later patch or upgrade adds a new feature, then there is a possibility that Rhino will automatically allocate a previously-used OID to this new feature. In that case, the existing stats monitor starts to see new data, without realizing that the stat now being recorded is totally different.

A similar problem occurs if a major update deletes a named counter, but a subsequent patch or upgrade adds a counter that gets allocated the same counter index as was previously used.

Transformation rules

The upgrade won’t work if the correct transformation rules are not chosen. Currently, upgrades are designed to go from one version to the next, e.g. VoLTE 2.6.0 to 2.7.0. There are currently no rules that allow a direct upgrade that skips major versions, e.g. VoLTE 2.6.8 to 2.8.0.

Permgen configuration change

Some products using Java 7 require special configuration for permgen. Changing this configuration during an upgrade that requires a new Rhino is not supported. If the upgrade does not require a new Java version, first change the permgen configuration in the current cluster and then apply the upgrade; the upgrade will preserve the permgen configuration. If upgrading to Java 8, this configuration is irrelevant because permgen no longer exists.
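For reference, on Java 7 the permanent generation is typically sized with HotSpot JVM options such as the ones below. The values are illustrative only, and where they are set depends on how your current Rhino cluster defines its JVM arguments.

-XX:PermSize=128m -XX:MaxPermSize=256m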

Rollback command requires an active Rhino

The rollback command won’t work if Rhino on a host is not active. You will need to start Rhino manually and then run the rollback command.

Applying upgrade for customers with custom installations

Currently orca supports two ways of applying a major upgrade to customers: a self-contained customised SDK, or the product SDK plus a custom package.

Self contained customised SDK

This is equivalent to the product SDK, but including the customisations. It is a self-contained package that uses the SDK installer. The SDK installer will install all the necessary components, including custom components and custom profile tables. The installation will be equivalent to the previous one, but with new version components.

Product SDK plus custom package

This type of major upgrade is done by installing customisations on top of the product SDK. The custom package needs to contain an executable called install. Specify the custom package name in the packages.cfg and put the custom package in the packages directory, e.g.

./orca --hosts host1,host2 major-upgrade packages install.properties --pause

If a custom package is specified in the packages.cfg, orca will install the product SDK first and then call this install script, which performs all operations necessary to install the customised components into Rhino. After this script finishes, orca restores the configuration from the previous installation.
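A minimal sketch of the install entry point inside such a custom package is shown below. The deployment commands themselves are specific to each customisation, and the assumption that the script runs from the unpacked package directory on the upgraded host should be verified for your environment.

#!/bin/sh
# install - executable entry point that orca runs after installing the product SDK
set -e
echo "Installing custom components..."
# deploy the custom components to the live Rhino here, for example by driving
# your own ant targets or rhino-console scripts shipped inside this package:
# ./deploy-custom-components.sh   (hypothetical helper bundled in this package)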

Feature Scripts merge

The major upgrade process includes a 3-way merge of the feature scripts. Before explaining the algorithm, some terms need clarification:

  • Downlevel: the original product, without customization

  • Installed: the current product in production, with or without customization

  • Uplevel: the new product version

  • D, I, U: abbreviation for Downlevel, Installed and Uplevel, respectively.

The merge is based on the following rules:

Columns: whether the script is present in the Downlevel (D), Installed (I) and Uplevel (U) versions; whether the script differs between those versions (D ≠ I, I ≠ U, D ≠ U); the action orca takes; and a description. A "-" marks a cell that is not specified for that case.

Case | D present | I present | U present | D ≠ I | I ≠ U | D ≠ U | Action | Description
1 | - | - | No | - | - | - | Prompt for customer decision | Script not present in uplevel
2 | No | No | Yes | - | - | - | Use uplevel | Script introduced in uplevel
3 | Yes | No | Yes | - | - | - | Use uplevel | Script got deleted in the installed version
4 | No | Yes | Yes | - | No | - | Use installed | Possible "backport" of feature
5 | No | Yes | Yes | - | Yes | - | Prompt for customer decision | Ditto, except now there are functional changes too
6 | Yes | Yes | Yes | Yes | Yes | Yes | Prompt for customer decision | Three-way merge case
7 | Yes | Yes | Yes | No | Yes | Yes | Use uplevel | No customizations but script updated in new version
8 | Yes | Yes | Yes | Yes | No | Yes | Use installed | Possible "backport" of feature
9 | Yes | Yes | Yes | Yes | Yes | No | Use installed | Customization but no change in new version, preserve customization
10 | Yes | Yes | Yes | Yes | No | No | Use installed | Standard case, no changes

Rules 1, 5 and 6 cannot be resolved without human intervention, which has to happen after the first node is migrated.

orca will post the warning:

 The feature script tool was unable to resolve differences between some scripts.

 <the list of Feature Scripts>

Once the major upgrade is complete, review the scripts in <path>/feature-scripts and make changes if required
When ready, run "orca --hosts <host> import-feature-scripts" to import the scripts into the Rhino installation.

For details on how to resolve Feature Script conflicts and import the scripts into the current system, see Feature Scripts conflicts and resolution.

New Rhino installation

If a Rhino package is present and the --skip-new-rhino parameter is not given, orca will install a new Rhino with the same configuration as the currently installed one, then update the configuration according to the Rhino config JSON file.

If a totally new configuration is required, first use the prepare-new-rhino command with the --installer-overrides option.
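For example (hypothetical invocations; check orca's built-in help for the exact form of the --skip-new-rhino and --installer-overrides options):

# keep the currently installed Rhino instead of installing the bundled one
./orca --hosts host1,host2,host3 major-upgrade packages install.properties --pause --skip-new-rhino

# prepare the new Rhino cluster with a fully new configuration before the upgrade
./orca --hosts host1,host2,host3 prepare-new-rhino --installer-overrides <overrides>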

New Java installation

If a Java package is present, orca will install it in the $HOME directory of all hosts listed in the --hosts parameter and will change the Rhino configuration to point to this newly installed version.

Important orca WILL NOT update the JAVA_HOME environment variable, because other applications might require it to be set to a specific version. Updating it is a manual step.
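If you do decide that all Java applications on a host should use the new JDK, updating the environment is a small manual step, for example (the path matches the example output below; adjust it for your installation):

# e.g. in ~/.bashrc of the rhino user on each host
export JAVA_HOME=/home/rhino/java/jdk1.8.0_172
export PATH="$JAVA_HOME/bin:$PATH"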

Troubleshooting

Example of major upgrade execution on a 3 node cluster

This example shows a major upgrade from Sentinel VoLTE 2.7.1.0 to Sentinel VoLTE 2.8.0.3.

Applying the major upgrade with the command

rhino@rhino-rem:~/install/sentinel-volte-sdk-2.8.0.3-upgrade$ ./orca --hosts rhino-vm1,rhino-vm2,rhino-vm3 major-upgrade --pause packages install.properties

Check for prepared clusters

Starting on host rhino-vm1
Checking for prepared Rhino clusters
Done on rhino-vm1

Starting on host rhino-vm2
Checking for prepared Rhino clusters
Done on rhino-vm2

Starting on host rhino-vm3
Checking for prepared Rhino clusters
Done on rhino-vm3

Check the sdk install.properties script

Validating install.properties
Updating rhinoclientdir property in install.properties to /home/rhino/rhino/client

Export the configuration

Applying major upgrade package in directory 'packages' to hosts rhino-vm1,rhino-vm2,rhino-vm3
Exporting configuration to directory /home/rhino/export/volte-2.7.1.0-cluster-108
Done on rhino-vm1

Exporting profile table and RA configuration
Done on rhino-vm1

Install Java

Installing new JDK on hosts [rhino-vm1, rhino-vm2, rhino-vm3]
Starting on host rhino-vm1
Installing New Java
JDK 1.8.0_172 successfully installed at /home/rhino/java/jdk1.8.0_172
PLEASE NOTE that the newly installed JDK will only be used by the SLEE, all other Java applications will continue to use the previously installed Java version. Please manually update the system's JAVA_HOME environment variable if you
 would like all Java applications to use the newly installed JDK.
Done on rhino-vm1

Starting on host rhino-vm2
Installing New Java
JDK 1.8.0_172 successfully installed at /home/rhino/java/jdk1.8.0_172
PLEASE NOTE that the newly installed JDK will only be used by the SLEE, all other Java applications will continue to use the previously installed Java version. Please manually update the system's JAVA_HOME environment variable if you
 would like all Java applications to use the newly installed JDK.
Done on rhino-vm2

Starting on host rhino-vm3
Installing New Java
JDK 1.8.0_172 successfully installed at /home/rhino/java/jdk1.8.0_172
PLEASE NOTE that the newly installed JDK will only be used by the SLEE, all other Java applications will continue to use the previously installed Java version. Please manually update the system's JAVA_HOME environment variable if you
 would like all Java applications to use the newly installed JDK.
Done on rhino-vm3

Prepare the new cluster

Preparing new cluster with new Rhino on hosts [rhino-vm1, rhino-vm2, rhino-vm3]
Starting on host rhino-vm1
Preparing New Rhino
Done on rhino-vm1

Starting on host rhino-vm2
Preparing New Rhino
Done on rhino-vm2

Starting on host rhino-vm3
Preparing New Rhino
Done on rhino-vm3

Migrate the first node in the hosts list

Doing Migrate
Stopping node 101 in cluster 108
Waiting up to 120 seconds for calls to drain and SLEE to stop on node 101
Rhino has exited.
Successfully shut down Rhino on node 101. Now waiting for sockets to close...
Starting node 101 in cluster 109
Started Rhino. State is now: Waiting to go primary
Attempting to make cluster primary for 20 seconds
Successfully made cluster primary
Waiting for Rhino to be ready for 228 seconds
Started node 101 successfully
Done on rhino-vm1

Import basic product configuration

Importing pre-install configuration
Done on rhino-vm1

Install the new product version

Installing upgrade volte-2.8.0.3-offline-sdk.zip - this will take a while
Unpacking the upgrade package
Copying the install.properties
Installing the new product. This will take a while. You can check the progress in /home/rhino/install/sdk-2.8.0.3/build/target/log/installer.log on the remote host
Done on rhino-vm1

Export the new installed version and do the data transformation

Exporting configuration to directory /home/rhino/export/volte-2.8.0.3-cluster-109
Done on rhino-vm1

Exporting profile table and RA configuration
Done on rhino-vm1

Transforming Export Data
Done on rhino-vm1

Do the post install step

Applying post-install package post-install-package.zip, this can take a while
Unzipping package post-install-package.zip
Running script install
Done on rhino-vm1

Restore the customer configuration

Importing profile table and RA configuration
Done on rhino-vm1

Importing profiles from export directory volte-2.7.1.0-cluster-108-transformed-for-2.8.0.3
Done on rhino-vm1

Transforming the service object pools configuration
Getting the services
Selecting files
Applying changes
Done on rhino-vm1

Importing post-install configuration
Done on rhino-vm1

Applying post-configure package post-configure-package.zip, this can take a while
Unzipping package post-configure-package.zip
Running script install
Done on rhino-vm1

Merge feature scripts

Merging feature scripts
Merging scripts
Loading feature scripts from file packages/FeatureExecutionScriptTable.xml
Loading feature scripts from file /home/rhino/install/sentinel-volte-sdk-2.8.0.3-upgrade/volte-2.7.1.0-cluster-108.xml
Loading feature scripts from file /home/rhino/install/sentinel-volte-sdk-2.8.0.3-upgrade/volte-2.8.0.3-cluster-109.xml
Once the major upgrade is complete, review the scripts in /home/rhino/install/sentinel-volte-sdk-2.8.0.3-upgrade/feature-scripts and make changes if required.
When ready, run "orca --hosts <host> import-feature-scripts" to import the scripts into the Rhino installation.

Finish the upgrade on the first node and show the status

Querying status on hosts [rhino-vm1, rhino-vm2, rhino-vm3]
Global info:
Symmetric activation state mode is currently enabled

Status of host rhino-vm1

Clusters:
 - volte-2.7.1.0-cluster-108
 - volte-2.8.0.3-cluster-109 - LIVE

Live Node:
Rhino node 101: found process with id 10352
Node 101 is Running
Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e'

Exports:
 - volte-2.7.1.0-cluster-108
 - volte-2.7.1.0-cluster-108-transformed-for-2.8.0.3
 - volte-2.8.0.3-cluster-109

License information:
1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018
Licensed Rhino version(s): 2.*, Development

Java:
found = live cluster
java_home = /home/rhino/java/jdk1.8.0_172
version = java version "1.8.0_172", Java(TM) SE Runtime Environment (build 1.8.0_172-b11), Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)

OS:
python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609]
version = Linux version 4.4.0-130-generic (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018

Services:
name=IM-SSF vendor=OpenCloud version=2.6.1
name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424-copy#1
name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424
name=sentinel.registrar vendor=OpenCloud version=2.8.0
name=sentinel.registrar vendor=OpenCloud version=current
name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601-copy#1
name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601
name=sentinel.volte.sip vendor=OpenCloud version=2.8.0
name=sentinel.volte.sip vendor=OpenCloud version=current
name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150-copy#1
name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150
name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0
name=sentinel.volte.ss7 vendor=OpenCloud version=current

Status of host rhino-vm2

Clusters:
 - volte-2.7.1.0-cluster-108 - LIVE
 - volte-2.8.0.3-cluster-109

Live Node:
Rhino node 102: found process with id 26136
Node 102 is Running
Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e'

Exports:

License information:
1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018
Licensed Rhino version(s): 2.*, Development

Java:
found = live cluster
java_home = /opt/java/jdk1.7.0_79
version = java version "1.7.0_79", Java(TM) SE Runtime Environment (build 1.7.0_79-b15), Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

OS:
python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609]
version = Linux version 4.4.0-128-generic (buildd@lcy01-amd64-019) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018

Services:
name=IM-SSF vendor=OpenCloud version=1.4.7
name=sentinel.registrar vendor=OpenCloud version=2.7.1.0-copy#1
name=sentinel.registrar vendor=OpenCloud version=2.7.1.0
name=sentinel.registrar vendor=OpenCloud version=2.7.1
name=sentinel.registrar vendor=OpenCloud version=current
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.0-copy#1
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.0
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1
name=volte.sentinel.sip vendor=OpenCloud version=current
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.0-copy#1
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.0
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1
name=volte.sentinel.ss7 vendor=OpenCloud version=current

Status of host rhino-vm3

Clusters:
 - volte-2.7.1.0-cluster-108 - LIVE
 - volte-2.8.0.3-cluster-109

Live Node:
Rhino node 103: found process with id 19814
Node 103 is Running
Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e'

Exports:

License information:
1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018
Licensed Rhino version(s): 2.*, Development

Java:
found = live cluster
java_home = /opt/java/jdk1.7.0_79
version = java version "1.7.0_79", Java(TM) SE Runtime Environment (build 1.7.0_79-b15), Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

OS:
python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609]
version = Linux version 4.4.0-128-generic (buildd@lcy01-amd64-019) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018

Services:
name=IM-SSF vendor=OpenCloud version=1.4.7
name=sentinel.registrar vendor=OpenCloud version=2.7.1.0-copy#1
name=sentinel.registrar vendor=OpenCloud version=2.7.1.0
name=sentinel.registrar vendor=OpenCloud version=2.7.1
name=sentinel.registrar vendor=OpenCloud version=current
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.0-copy#1
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.0
name=volte.sentinel.sip vendor=OpenCloud version=2.7.1
name=volte.sentinel.sip vendor=OpenCloud version=current
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.0-copy#1
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.0
name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1
name=volte.sentinel.ss7 vendor=OpenCloud version=current

Available actions:
  - prepare
  - prepare-new-rhino
  - cleanup --clusters
  - cleanup --exports
  - rollback


Major upgrade has been paused after applying it to just rhino-vm1.
You should now test that the major upgrade has worked.
Once this is verified, use the following command to complete the major upgrade:
  ./orca --hosts rhino-vm1,rhino-vm2,rhino-vm3 major-upgrade --continue packages install.properties

The following output lines are considered particularly significant.
They are repeated here, labelled by host, but you can view their full context above.
rhino-vm1: PLEASE NOTE that the newly installed JDK will only be used by the SLEE, all other Java applications will continue to use the previously installed Java version. Please manually update the system's JAVA_HOME environment variable if you would like all Java applications to use the newly installed JDK.
rhino-vm2: PLEASE NOTE that the newly installed JDK will only be used by the SLEE, all other Java applications will continue to use the previously installed Java version. Please manually update the system's JAVA_HOME environment variable if you would like all Java applications to use the newly installed JDK.
rhino-vm3: PLEASE NOTE that the newly installed JDK will only be used by the SLEE, all other Java applications will continue to use the previously installed Java version. Please manually update the system's JAVA_HOME environment variable if you would like all Java applications to use the newly installed JDK.

Import feature scripts

Now that the feature scripts have been merged, it is time to check them, resolve any conflicts, and import them into the newly installed system.

rhino@rhino-rem:~/install/sentinel-volte-sdk-2.8.0.3-upgrade$ ls feature-scripts/
ERSVCCRegistration_Store_Subscriber_Data_Start  MMTel_Post_SipAccess_PartyResponse                 SCCTermAnchor_SipAccess_SubscriberCheck    default_Post_SipAccess_ChargingReauth
MMTelConf_HLR_SipAccess_SubscriberCheck         MMTel_Post_SipAccess_ServiceTimer                  SCCTermTads_HLR_SipAccess_PartyResponse    default_Post_SipAccess_ControlNotRequiredPostCC
MMTelConf_Post_SubscriptionSipRequest           MMTel_Post_SipAccess_SubscriberCheck               SCCTermTads_HLR_SipAccess_SubscriberCheck  default_Post_SipAccess_CreditAllocatedPostCC
MMTelConf_Post_SubscriptionSipResponse          MMTel_Post_SipEndSession                           SCCTermTads_SipAccess_PartyResponse        default_Post_SipAccess_CreditLimitReachedPostCC
MMTelConf_SipAccess_SubscriberCheck             MMTel_Post_SipInstructionExecutionFailure          SCCTermTads_SipAccess_SubscriberCheck      default_Post_SipAccess_NetworkCheck
MMTelOrig_HLR_SipAccess_SubscriberCheck         MMTel_Post_SipLegEnd                               SCCTerm_Cassandra_SipAccess_SessionCheck   default_Post_SipAccess_OCSFailurePostCC
MMTelOrig_Post_SubscriptionSipRequest           MMTel_Post_SipMidSession_ChargingAbort             SCCTerm_HLR_SipAccess_PartyResponse        default_Post_SipAccess_PartyRequest
MMTelOrig_Post_SubscriptionSipResponse          MMTel_Post_SipMidSession_ChargingReauth            SCCTerm_HLR_SipAccess_ServiceTimer         default_Post_SipAccess_PartyResponse
MMTelOrig_SipAccess_PartyRequest                MMTel_Post_SipMidSession_CreditAllocatedPostCC     SCCTerm_HLR_SipAccess_SubscriberCheck      default_Post_SipAccess_ServiceTimer
MMTelOrig_SipAccess_PartyResponse               MMTel_Post_SipMidSession_CreditLimitReachedPostCC  SCCTerm_SipAccess_PartyRequest             default_Post_SipAccess_SubscriberCheck
MMTelOrig_SipAccess_SubscriberCheck             MMTel_Post_SipMidSession_OCSFailurePostCC          SCCTerm_SipAccess_PartyResponse            default_Post_SipEndSession
MMTelOrig_SipMidSession_PartyRequest            MMTel_Post_SipMidSession_PartyRequest              SCCTerm_SipAccess_ServiceTimer             default_Post_SipLegEnd
MMTelOrig_SipMidSession_PartyResponse           MMTel_Post_SipMidSession_PartyResponse             SCCTerm_SipAccess_SessionCheck             default_Post_SipMidSession_ChargingReauth
MMTelTerm_HLR_SipAccess_SubscriberCheck         MMTel_Pre_SipAccess_PartyRequest                   SCCTerm_SipAccess_SubscriberCheck          default_Post_SipMidSession_CreditAllocatedPostCC
MMTelTerm_SipAccess_PartyRequest                MMTel_Pre_SipAccess_PartyResponse                  SCCTerm_SipMidSession_PartyResponse        default_Post_SipMidSession_CreditLimitReachedPostCC
MMTelTerm_SipAccess_PartyResponse               MMTel_Pre_SipAccess_SessionStart                   SCC_Post_SipAccess_PartyRequest            default_Post_SipMidSession_OCSFailurePostCC
MMTelTerm_SipAccess_SubscriberCheck             MMTel_Pre_SipInstructionExecutionFailure           SCC_Post_SipAccess_PartyResponse           default_Post_SipMidSession_PartyRequest
MMTelTerm_SipLegEnd                             MMTel_Pre_SipMidSession_CreditLimitReachedPostCC   SCC_Post_SipAccess_ServiceTimer            default_Post_SipMidSession_PartyResponse
MMTelTerm_SipMidSession_PartyRequest            MMTel_Pre_SipMidSession_PartyRequest               SCC_Post_SipAccess_SubscriberCheck         default_Post_SipTransaction_SubscriberCheck
MMTelTerm_SipMidSession_PartyResponse           MMTel_Pre_SipMidSession_PartyResponse              SCC_Post_SipEndSession                     default_Post_SubscriptionNetworkCheck
MMTel_Cassandra_SipAccess_SessionCheck          Registrar_Store_Subscriber_Data                    SCC_Post_SipInstructionExecutionFailure    default_Post_SubscriptionSipRequest
MMTel_HLR_Cassandra_SipAccess_SessionCheck      SCCOrig_Cassandra_SipAccess_SessionCheck           SCC_Post_SipMidSession_PartyRequest        default_Post_SubscriptionSipResponse
MMTel_Post_CreditFinalised                      SCCOrig_SipAccess_PartyRequest                     SCC_Post_SipMidSession_PartyResponse       default_Post_SubscriptionSubscriberCheck
MMTel_Post_SipAccess_ChargingAbort              SCCOrig_SipAccess_PartyResponse                    SCC_Pre_SipAccess_PartyRequest             default_Pre_SipAccess_SessionStart
MMTel_Post_SipAccess_ChargingReauth             SCCOrig_SipAccess_SessionCheck                     SCC_Pre_SipAccess_PartyResponse            default_SipTransaction_SubscriberCheck
MMTel_Post_SipAccess_ControlNotRequiredPostCC   SCCOrig_SipAccess_SubscriberCheck                  SCC_Pre_SipAccess_SessionStart             default_SubscriptionStart
MMTel_Post_SipAccess_CreditAllocatedPostCC      SCCOrig_SipMidSession_PartyResponse                SCC_Pre_SipMidSession_PartyRequest         default_call_Cassandra_DirectAccess_NetworkPreCreditCheck
MMTel_Post_SipAccess_CreditLimitReachedPostCC   SCCTermAnchor_SipAccess_PartyRequest               SCC_Pre_SipMidSession_PartyResponse        default_sf_Post_SubscriptionStart
MMTel_Post_SipAccess_OCSFailurePostCC           SCCTermAnchor_SipAccess_PartyResponse              SCC_SipAccess_PartyRequest                 default_sf_Pre_SubscriptionStart
MMTel_Post_SipAccess_PartyRequest               SCCTermAnchor_SipAccess_ServiceTimer               SCC_SipMidSession_PartyRequest
rhino@rhino-rem:~/install/sentinel-volte-sdk-2.8.0.3-upgrade$ ./orca --hosts rhino-vm1 import-feature-scripts
Importing feature scripts into Rhino on host rhino-vm1
Done on rhino-vm1

Importing scripts
Importing into profile table 'Rocket_FeatureExecutionScriptTable'.
Importing feature script MMTel_Post_SipMidSession_PartyRequest...
Importing feature script MMTel_Post_SipAccess_CreditAllocatedPostCC...
Importing feature script MMTelTerm_HLR_SipAccess_SubscriberCheck...
Importing feature script MMTelOrig_HLR_SipAccess_SubscriberCheck...
Importing feature script MMTelOrig_Post_SubscriptionSipResponse...
Importing feature script MMTel_Post_SipAccess_PartyResponse...
Importing feature script MMTel_Pre_SipMidSession_PartyResponse...
Importing feature script MMTel_Post_SipAccess_ServiceTimer...
Importing feature script MMTelConf_Post_SubscriptionSipRequest...
Importing feature script MMTelConf_Post_SubscriptionSipResponse...
Importing feature script SCCTermAnchor_SipAccess_PartyResponse...
Importing feature script SCCTerm_SipAccess_SubscriberCheck...
Importing feature script MMTelOrig_SipAccess_SubscriberCheck...
Importing feature script default_Post_SipAccess_ServiceTimer...
Importing feature script default_Post_SipTransaction_SubscriberCheck...
Importing feature script SCC_Post_SipAccess_PartyResponse...
Importing feature script MMTelOrig_Post_SubscriptionSipRequest...
Importing feature script SCC_Post_SipMidSession_PartyResponse...
Importing feature script SCCOrig_SipAccess_PartyResponse...
Importing feature script MMTel_Post_SipAccess_ChargingReauth...
Importing feature script default_Post_SipMidSession_OCSFailurePostCC...
Importing feature script MMTelConf_SipAccess_SubscriberCheck...
Importing feature script MMTel_Post_SipEndSession...
Importing feature script SCCTerm_HLR_SipAccess_PartyResponse...
Importing feature script MMTel_Post_SipAccess_PartyRequest...
Importing feature script MMTel_Post_SipAccess_ControlNotRequiredPostCC...
Importing feature script SCCTerm_SipAccess_PartyRequest...
Importing feature script SCC_Post_SipMidSession_PartyRequest...
Importing feature script SCCTerm_SipMidSession_PartyResponse...
Importing feature script MMTel_Post_SipInstructionExecutionFailure...
Importing feature script default_SubscriptionStart...
Importing feature script MMTelOrig_SipAccess_PartyRequest...
Importing feature script MMTel_Pre_SipAccess_PartyRequest...
Importing feature script MMTel_Pre_SipMidSession_PartyRequest...
Importing feature script MMTel_Pre_SipAccess_SessionStart...
Importing feature script Registrar_Store_Subscriber_Data...
Importing feature script MMTelOrig_SipMidSession_PartyRequest...
Importing feature script MMTel_Post_SipAccess_OCSFailurePostCC...
Importing feature script SCC_Pre_SipMidSession_PartyRequest...
Importing feature script SCC_Pre_SipAccess_PartyRequest...
Importing feature script default_Post_SipAccess_SubscriberCheck...
Importing feature script SCCTerm_Cassandra_SipAccess_SessionCheck...
Importing feature script default_Post_SipAccess_PartyResponse...
Importing feature script SCC_SipAccess_PartyRequest...
Importing feature script MMTelConf_HLR_SipAccess_SubscriberCheck...
Importing feature script SCCOrig_SipAccess_SessionCheck...
Importing feature script MMTelTerm_SipAccess_SubscriberCheck...
Importing feature script MMTelTerm_SipAccess_PartyRequest...
Importing feature script SCCTerm_HLR_SipAccess_ServiceTimer...
Importing feature script default_SipTransaction_SubscriberCheck...
Importing feature script SCCTermTads_HLR_SipAccess_PartyResponse...
Importing feature script default_Post_SipAccess_OCSFailurePostCC...
Importing feature script MMTel_Post_SipMidSession_CreditAllocatedPostCC...
Importing feature script default_call_Cassandra_DirectAccess_NetworkPreCreditCheck...
Importing feature script default_sf_Pre_SubscriptionStart...
Importing feature script SCCTermAnchor_SipAccess_PartyRequest...
Importing feature script MMTel_Post_SipMidSession_ChargingReauth...
Importing feature script default_Post_SipLegEnd...
Importing feature script ERSVCCRegistration_Store_Subscriber_Data_Start...
Importing feature script default_Post_SipMidSession_ChargingReauth...
Importing feature script MMTel_Post_SipMidSession_OCSFailurePostCC...
Importing feature script SCCTerm_SipAccess_ServiceTimer...
Importing feature script default_Post_SipMidSession_PartyResponse...
Importing feature script default_Post_SubscriptionSubscriberCheck...
Importing feature script MMTel_Pre_SipMidSession_CreditLimitReachedPostCC...
Importing feature script MMTel_Cassandra_SipAccess_SessionCheck...
Importing feature script default_Post_SipAccess_ChargingReauth...
Importing feature script MMTel_Post_SipMidSession_ChargingAbort...
Importing feature script SCC_SipMidSession_PartyRequest...
Importing feature script SCCOrig_SipAccess_PartyRequest...
Importing feature script SCCTermTads_SipAccess_PartyResponse...
Importing feature script default_Post_SubscriptionNetworkCheck...
Importing feature script default_Post_SipAccess_ControlNotRequiredPostCC...
Importing feature script MMTel_Post_SipAccess_SubscriberCheck...
Importing feature script MMTelOrig_SipAccess_PartyResponse...
Importing feature script default_Post_SipAccess_CreditLimitReachedPostCC...
Importing feature script default_Post_SipAccess_PartyRequest...
Importing feature script SCC_Post_SipAccess_PartyRequest...
Importing feature script SCCTermAnchor_SipAccess_SubscriberCheck...
Importing feature script MMTel_Post_SipMidSession_PartyResponse...
Importing feature script MMTel_Post_SipLegEnd...
Importing feature script default_Post_SipMidSession_CreditAllocatedPostCC...
Importing feature script MMTel_Post_CreditFinalised...
Importing feature script MMTel_Post_SipAccess_ChargingAbort...
Importing feature script SCCTerm_SipAccess_SessionCheck...
Importing feature script MMTel_HLR_Cassandra_SipAccess_SessionCheck...
Importing feature script MMTelOrig_SipMidSession_PartyResponse...
Importing feature script MMTel_Pre_SipInstructionExecutionFailure...
Importing feature script MMTelTerm_SipMidSession_PartyRequest...
Importing feature script SCC_Post_SipAccess_ServiceTimer...
Importing feature script SCC_Post_SipEndSession...
Importing feature script SCC_Pre_SipAccess_PartyResponse...
Importing feature script SCC_Pre_SipMidSession_PartyResponse...
Importing feature script SCCOrig_SipAccess_SubscriberCheck...
Importing feature script SCCTermTads_SipAccess_SubscriberCheck...
Importing feature script SCCTerm_HLR_SipAccess_SubscriberCheck...
Importing feature script MMTel_Post_SipMidSession_CreditLimitReachedPostCC...
Importing feature script MMTel_Pre_SipAccess_PartyResponse...
Importing feature script SCCTermAnchor_SipAccess_ServiceTimer...
Importing feature script SCCOrig_Cassandra_SipAccess_SessionCheck...
Importing feature script SCC_Post_SipInstructionExecutionFailure...
Importing feature script MMTelTerm_SipAccess_PartyResponse...
Importing feature script SCCOrig_SipMidSession_PartyResponse...
Importing feature script default_Post_SipEndSession...
Importing feature script SCC_Pre_SipAccess_SessionStart...
Importing feature script SCC_Post_SipAccess_SubscriberCheck...
Importing feature script default_Post_SipMidSession_PartyRequest...
Importing feature script SCCTermTads_HLR_SipAccess_SubscriberCheck...
Importing feature script default_Post_SubscriptionSipRequest...
Importing feature script default_Post_SipMidSession_CreditLimitReachedPostCC...
Importing feature script default_Post_SipAccess_NetworkCheck...
Importing feature script default_Post_SubscriptionSipResponse...
Importing feature script default_Pre_SipAccess_SessionStart...
Importing feature script default_sf_Post_SubscriptionStart...
Importing feature script default_Post_SipAccess_CreditAllocatedPostCC...
Importing feature script MMTelTerm_SipLegEnd...
Importing feature script SCCTerm_SipAccess_PartyResponse...
Importing feature script MMTelTerm_SipMidSession_PartyResponse...
Importing feature script MMTel_Post_SipAccess_CreditLimitReachedPostCC...
All scripts imported successfully.
Done on rhino-vm1

Migrate other nodes

rhino@rhino-rem:~/install/sentinel-volte-sdk-2.8.0.3-upgrade$ ./orca --hosts rhino-vm1,rhino-vm2,rhino-vm3 major-upgrade --continue packages install.properties
Continuing major upgrade package in directory 'packages' to hosts rhino-vm1,rhino-vm2,rhino-vm3
Starting on host rhino-vm2
Doing Migrate
Stopping node 102 in cluster 108
Waiting up to 120 seconds for calls to drain and SLEE to stop on node 102
Rhino has exited.
Successfully shut down Rhino on node 102. Now waiting for sockets to close...
Starting node 102 in cluster 109
Started Rhino. State is now: Running
Waiting for Rhino to be ready for 104 seconds
Started node 102 successfully
Done on rhino-vm2

Starting on host rhino-vm3
Doing Migrate
Stopping node 103 in cluster 108
Waiting up to 120 seconds for calls to drain and SLEE to stop on node 103
Rhino has exited.
Successfully shut down Rhino on node 103. Now waiting for sockets to close...
Starting node 103 in cluster 109
Started Rhino. State is now: Running
Waiting for Rhino to be ready for 82 seconds
Started node 103 successfully
Done on rhino-vm3

Querying status on hosts [rhino-vm1, rhino-vm2, rhino-vm3]
Global info:
Symmetric activation state mode is currently enabled

Status of host rhino-vm1

Clusters:
 - volte-2.7.1.0-cluster-108
 - volte-2.8.0.3-cluster-109 - LIVE

Live Node:
Rhino node 101: found process with id 10352
Node 101 is Running
Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e'

Exports:
 - volte-2.7.1.0-cluster-108
 - volte-2.7.1.0-cluster-108-transformed-for-2.8.0.3
 - volte-2.8.0.3-cluster-109

License information:
1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018
Licensed Rhino version(s): 2.*, Development

Java:
found = live cluster
java_home = /home/rhino/java/jdk1.8.0_172
version = java version "1.8.0_172", Java(TM) SE Runtime Environment (build 1.8.0_172-b11), Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)

OS:
python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609]
version = Linux version 4.4.0-130-generic (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018

Services:
name=IM-SSF vendor=OpenCloud version=2.6.1
name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424-copy#1
name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424
name=sentinel.registrar vendor=OpenCloud version=2.8.0
name=sentinel.registrar vendor=OpenCloud version=current
name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601-copy#1
name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601
name=sentinel.volte.sip vendor=OpenCloud version=2.8.0
name=sentinel.volte.sip vendor=OpenCloud version=current
name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150-copy#1
name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150
name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0
name=sentinel.volte.ss7 vendor=OpenCloud version=current

Status of host rhino-vm2

Clusters:
 - volte-2.7.1.0-cluster-108
 - volte-2.8.0.3-cluster-109 - LIVE

Live Node:
Rhino node 102: found process with id 7349
Node 102 is Running
Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e'

Exports:

License information:
1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018
Licensed Rhino version(s): 2.*, Development

Java:
found = live cluster
java_home = /home/rhino/java/jdk1.8.0_172
version = java version "1.8.0_172", Java(TM) SE Runtime Environment (build 1.8.0_172-b11), Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)

OS:
python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609]
version = Linux version 4.4.0-128-generic (buildd@lcy01-amd64-019) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018

Services:
name=IM-SSF vendor=OpenCloud version=2.6.1
name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424-copy#1
name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424
name=sentinel.registrar vendor=OpenCloud version=2.8.0
name=sentinel.registrar vendor=OpenCloud version=current
name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601-copy#1
name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601
name=sentinel.volte.sip vendor=OpenCloud version=2.8.0
name=sentinel.volte.sip vendor=OpenCloud version=current
name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150-copy#1
name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150
name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0
name=sentinel.volte.ss7 vendor=OpenCloud version=current

Status of host rhino-vm3

Clusters:
 - volte-2.7.1.0-cluster-108
 - volte-2.8.0.3-cluster-109 - LIVE

Live Node:
Rhino node 103: found process with id 24048
Node 103 is Running
Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e'

Exports:

License information:
1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018
Licensed Rhino version(s): 2.*, Development

Java:
found = live cluster
java_home = /home/rhino/java/jdk1.8.0_172
version = java version "1.8.0_172", Java(TM) SE Runtime Environment (build 1.8.0_172-b11), Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)

OS:
python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609]
version = Linux version 4.4.0-128-generic (buildd@lcy01-amd64-019) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018

Services:
name=IM-SSF vendor=OpenCloud version=2.6.1
name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424-copy#1
name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424
name=sentinel.registrar vendor=OpenCloud version=2.8.0
name=sentinel.registrar vendor=OpenCloud version=current
name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601-copy#1
name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601
name=sentinel.volte.sip vendor=OpenCloud version=2.8.0
name=sentinel.volte.sip vendor=OpenCloud version=current
name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150-copy#1
name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150
name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0
name=sentinel.volte.ss7 vendor=OpenCloud version=current

Available actions:
  - prepare
  - prepare-new-rhino
  - cleanup --clusters
  - cleanup --exports
  - rollback