This document describes the operational tools to support upgrades and patches for the Sentinel products.
The tools are composed of:
- orca, a tool to implement workflows and handle Rhino clusters
- slee-patch-generator and slee-patch-runner, used to generate and apply patches for Sentinel products
- slee-data-migration and slee-data-transformation, used to handle the SLEE configuration for Sentinel upgrades
For a quick overview of the tools architecture see Operational Tools Architecture.
To understand Sentinel upgrades see Upgrades Overview.
If you want to generate and apply patches see Patches.
If you want detailed information about how to generate and apply minor and major upgrades see Minor Upgrade and Major Upgrade.
If you want detailed information about how to generate and apply a Rhino only upgrade see Rhino Only Upgrade.
Operational Tools Architecture
This document describes the high-level view and architecture of the patching and upgrade tools.
General overview
A patch is a deployable unit (DU), or set of DUs, which replace existing DUs of the same name in a Rhino node in order to fix a bug where it is infeasible for the operator to wait for the next release. See Patching tools overview for more information about the tools and how they work.
A minor upgrade is a new revision of the product software, indicated by an increment to the final part of the version number (e.g. 2.7.0.6 → 2.7.0.7). It contains bugfixes, new features and new configuration for those features, but the configuration is always backwards compatible.
A major upgrade is a new version of the product software, indicated by a change to any of the first three parts of the version number (e.g. 2.7.0.6 → 2.8.0.0). It contains bugfixes and new features. Unlike minor upgrades or patches, it may also contain new configuration models (profiles and profile tables) and/or changes to existing profile and table schemas. This makes the upgrade process more complex than a minor upgrade.
For an overview of upgrades, see Upgrades Overview.
Since Rhino uses the cluster’s database to replicate DUs between all members of the cluster, it is only necessary to actually apply the patch or upgrade on one node; but all nodes must be migrated to the new cluster which will be created when the patch or upgrade is applied to the first node.
A rolling patch or upgrade is the procedure that allows the current Sentinel product to continue handling telecom signaling while it is being patched or upgraded.
orca is the tool that enables rolling patches and upgrades; it does this by implementing the Cluster Migration Workflow.
All operations can be rolled back (undone), but patches differ from upgrades in that they can also be reverted. To revert a patch is to replace the DUs with the originals (the original DUs are included in the patch file), but without migrating back to the downlevel cluster. This can be useful, for example, to preserve configuration changes that were required as part of the patch in the case where the patch will be applied again in the near future.
Patching tools overview
The patching tools consist of:
- slee-patch-generator, which is used to create a patch
- slee-patch-runner, which is used to apply or revert a patch
- several scripts for managing cluster migration
- orca, a command-line tool used to distribute upgrade/patching tasks to multiple hosts, such as all hosts in a Rhino cluster; it implements the cluster migration workflow.
The Patch Generator requires a running Rhino with the released software that will be patched. It takes as its main input the SLEE DUs or SLEE Jars (SLEE deployable units) to be patched. The tool queries the live Rhino to capture the dependencies on services and other components. With that information it generates a file that captures the dependencies and a set of actions to be taken in a precise order.
The Patch Runner tool is responsible for applying the patch. It takes as input the patch and the Rhino client for the product to be patched. The tool runs a checklist to check that the system is in the expected state; checks include component names, versions and the dependency tree. If the system is in a state that can be patched, the tool installs the new components according to the list of actions generated by the Patch Generator.
In general it is expected that the user will only ever run orca, and will do so from a dedicated Linux management host (normally the REM node, or the operator’s Linux workstation).
orca must be able to access all hosts in the cluster over SSH without requiring a password. This can be accomplished by setting up SSH key-based authentication. orca also uses SSH to verify connectivity to the hosts before running a task.
orca must be run as the same user that the Rhino system runs as, since it needs to stop and start Rhino, and to create and delete files in the directory where Rhino is installed.
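For example, key-based SSH access from the management host could be set up along the following lines (illustrative only; the host names are hypothetical and your security policy may require a different key type or distribution method):
ssh-keygen -t rsa
ssh-copy-id sentinel@host1
ssh-copy-id sentinel@host2
ssh-copy-id sentinel@host3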
Upgrades Overview
To better understand the steps involved in a Sentinel product upgrade it is necessary to understand:
- the SLEE components
- the Sentinel components
The information below gives a general overview of those concepts; the following sections provide more detail.
Basic SLEE components
SLEE stands for Service Logic Execution Environment and is part of the Java APIs for Integrated Networks (JAIN).
Extracted from the SLEE specification 1.1.
A SLEE is an application server. This application server is a container for software components. The SLEE specification is designed and optimized for event driven applications. An event driven application typically does not have an active thread of execution. Typically, an event driven application defines methods that are invoked when events are delivered to the application. These methods contain application code that inspect the event and perform additional processing to handle the event. The application code may interact with the resource that emitted the event or other resources, fire new events or update the application state.
The SLEE architecture defines how an application can be composed of components. These components are known as Service Building Block (SBB) components. Each SBB component identifies the event types accepted by the component and has event handler methods that contain application code that processes events of these event types. In addition, an SBB component may have an interface for synchronous method invocations. At runtime, the SLEE creates instances of these components to process events and deletes components that are no longer eligible for event processing.
The SLEE architecture defines the following core abstractions and concepts:
- Event and event type
- SBB component, SBB component graph, and root SBB component
- SBB entity, SBB entity tree, and root SBB entity
- Cascading removal of SBB entity sub-tree
- SBB abstract class and SBB object
- SBB local interface and SBB local object
- Activity, Activity object, and Activity Context
- How the SLEE reclaims an SBB entity tree and its associated descendent SBB entities
- Activity Context Interface interface and Activity Context Interface object
- Profile, Profile Table and Profile Specification
- Service
- Resource Adaptor type and Resource Adaptor components
- Library component
- Deployable unit
- Management interface
The core abstractions above are implemented as Java components (Jars), configuration artifacts and metadata represented in XML format.
Rhino and SLEE
Rhino is the platform that implements SLEE; it is an application server that supports the development of telecommunications applications. Concretely, Rhino offers a set of APIs to implement SLEE applications. A simple application is composed of:
- Resource adaptors (RAs) that define Events and Activities
- SBBs and SBB parts composing a SLEE service that handles events
- Configuration data:
  - Profile tables and profiles
  - RA configuration data
These components are present in Rhino as Jars and deployable units (DUs). A deployable unit (DU) is a set of Jars plus metadata that defines a SLEE component. The profiles are present in memory and persisted in the Rhino management database.
Rhino has a set of tools to handle those components, including rhino-export, rhino-import and rhino-console. In a nutshell, rhino-export dumps a SLEE application, including its components and configuration, to disk, and rhino-import restores the application into Rhino.
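As an illustrative sketch only (the script locations and arguments vary between Rhino versions, and the paths below are hypothetical), an export followed by an import might look like:
rhino/client/bin/rhino-export ~/export/volte-2.7.0.6-cluster-50
rhino/client/bin/rhino-import ~/export/volte-2.7.0.6-cluster-50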
Sentinel Product Components
Sentinel is a services platform and framework that includes, out of the box, several SLEE components ready to be used in order to compose a SLEE application.
It includes main facilities for:
- SIP, SS7 and Diameter signaling
- Services and features to handle signaling events that implement Telecom services
Sentinel VoLTE, Sentinel IPSMGW and Sentinel GAA add new components, and use and extend existing ones.
Each of these products runs signaling flows according to the Telecom service it implements, and each product has a different set of SLEE components. Each component has a set of dependencies on other components, and together they compose the product.
Those components, when extracted from Rhino as described above, are Jars, DUs and XML files.
What is a Sentinel Product upgrade?
A Sentinel product upgrade is the act of installing the product components in Rhino (DUs, Jars and configuration) and making sure that the configuration and customizations are maintained in order to guarantee that the product works as expected with the new components.
The difficulties faced during an upgrade are:
- Minimize service disruption
- Maintain the current configuration
- Guarantee safe rollback
The upgrade procedure overview
The procedure, in overview, is:
- sanity check the parameters
- check the connection to the hosts
- clone the cluster on all the hosts
- back up the installation
- retrieve the current system configuration
- migrate the node on the first host in the given list to the new cluster
- install the new product version on that first host
- copy the configuration from the old installation to the new installation
- clean up the temporary files
- optionally pause to allow testing of the upgraded node
- migrate the other nodes of the cluster
For more information see Major Upgrade.
Cluster migration workflow
About cluster migration
A normal production system has several hosts, each running a Rhino node. These nodes together form a cluster. A cluster runs only one product; for example, you may have a VoLTE cluster and an IP-SM-GW cluster. Each cluster has a unique ID (an integer); for example, the VoLTE cluster may be cluster 50 and the IP-SM-GW cluster may be cluster 200.
In order to apply a patch or an upgrade to a Rhino cluster, the tools create a new cluster to avoid service disruption and to allow easy rollback in case any problems are encountered.
They do this by:
- taking a copy of the original cluster and its database to form a new cluster with a different ID
- applying any required modifications to the new cluster
- stopping the first node of the original cluster
- starting the first node of the new cluster
- applying the patch or the upgrade
- stopping each of the remaining nodes of the original cluster and starting the corresponding nodes of the new cluster
The migration order should always be from the highest node ID to the lowest. If some nodes are not shut down cleanly, the nodes with the lowest node IDs will become primary in the cluster, to avoid split-brain situations. See Rhino Cluster Membership.
The original cluster is referred to as the downlevel cluster, and the new cluster is the uplevel cluster. Which cluster is which is determined by looking at the IDs - the uplevel cluster has a higher ID number. Because the ID numbers differ between the two clusters, they are completely distinct; however, only one may be running on a particular host at a given time (as a running node requires exclusive access to resources such as network ports). As such whether a node is running in the uplevel or downlevel cluster determines whether it has, or has not yet, been patched or upgraded.
To illustrate the process we consider the example below with 3 hosts (physical or virtual machines - VMs) and a cluster of 3 nodes, one node per host.
Step 1
The original installation has just one cluster active, with ID 50. In Step 1 the tool checks that it can connect to the hosts and, if so, clones the existing cluster 50 to a new cluster 51, leaving the new cluster inactive.
The clone process involves:
- back up the existing cluster with a full Rhino export
- create a new database rhino_51
- copy the Rhino installation to a new path according to the standardized path structure
- configure the copied Rhino to connect to the database rhino_51
- configure the copied Rhino to run with cluster ID 51
- clean the node state for each node in cluster 51
The hosts now have 2 installations, but only cluster 50 is active, meaning all traffic is still handled by cluster 50.
The database name shown here just contains the cluster ID. In a production environment the database name for the new cluster is derived from the old one, but with the new cluster number. The expected database name is rhino_<cluster ID>, e.g. rhino_50; consequently the new database will be rhino_51.
Step 2
Here the tool stops node 103 in cluster 50 and starts node 103 in cluster 51. If node 103 has active traffic, the tool will wait for up to 120 seconds to allow those calls to drain before forcing the node to stop (killing the node). This timeout can be configured using the --stop-timeout parameter.
For patches, it is also possible to configure timeouts for the service and RA deactivation when applying a patch in the uplevel cluster - see Patch timeouts for service deactivation.
Now we have 2 clusters active:
- cluster 50 with nodes 101 and 102
- cluster 51 with node 103
At this point the tool will apply the specified patch or upgrade to node 103 in cluster 51. During this process the service in cluster 51 will stay inactive, but cluster 50 is still handling traffic with nodes 101 and 102. After applying the patch or the upgrade, node 103 is active and able to handle traffic.
Step 3
If the patch or the upgrade was successful, the tool proceeds to stop node 102 in cluster 50 and start node 102 in cluster 51. Node 102 in cluster 51 will connect to the database rhino_51, retrieve the patched or upgraded product, and join cluster 51.
There are still 2 clusters active:
- cluster 50 with node 101
- cluster 51 with nodes 102 and 103
Cluster 51 can now handle two-thirds of the traffic.
Step 4
If the migration of node 102 was successful, the tool proceeds to migrate node 101 in the same way as node 102. Now we have 1 cluster active:
- cluster 51 with nodes 101, 102 and 103
Standard migration procedure
Service outage
Migrating a node involves a service outage for that node. We recommend that all operations be carried out in a dedicated maintenance window.
The operator may wish to migrate all nodes in one maintenance window, or spread the work across two or more windows. They may also wish to run verification tests against the first migrated node, for example to verify that the patch has indeed fixed whatever problem it was created for.
orca allows for this - specifically, the user specifies only the hosts they want orca to modify, rather than all hosts in the cluster.
Note that the high-level orca commands such as patching and upgrades use the migrate procedure internally.
A Rhino cluster has a concept of a "primary component" which is a node, or set of nodes, which are in service. In normal operation, all nodes are part of the primary component; only in a network partition situation would some nodes become non-primary. For more information see Cluster segmentation.
Adding a node to an existing cluster with a primary node will cause it to become part of the primary automatically, but the first node in a new cluster must be explicitly told to become primary. This is indicated to orca by use of the --special-first-host command-line option to the relevant commands. It is important to ensure this flag is specified or omitted correctly when invoking orca, or else it will fail to complete the task and leave the node out of service.
Some examples of how to use the --special-first-host flag:
- Rolling back a patch or upgrade:
  - If there are any nodes still running in the original (downlevel) cluster, then do not specify the flag. The nodes being rolled back will join the original cluster’s primary component automatically.
  - Otherwise, specify it for the first (or only) invocation of orca rollback.
- Migrating to an existing cluster:
  - This may be the case if, after a rollback, it is decided to "roll forward" again, i.e. to the cluster that was rolled back from. The usage of the flag here is identical to the rollback command.
Standardized path structure
In order for orca to work correctly, it requires the Rhino installation to be set up according to a standardized path structure. There must be a user configured with the username sentinel, and the following path layout set up in their home directory:
rhino - a link to a <product>-<version>-cluster-<id>
export
|- <product>-<version>-cluster-<id>
install
|- patches
<product>-<version>-cluster-<id>
Here:
- <product> is the name of the product installed on top of the Rhino, as a single word in lowercase, such as 'volte' or 'ipsmgw'
- <version> is the version of the product installed, such as '2.7.0.6'
- <id> is the cluster ID.
Rhino and the product installed with it are located in the <product>-<version>-cluster-<id> directory. For example, the Rhino installation directory may be named volte-2.7.0.6-cluster-100.
In addition, the database used by Rhino must be named rhino_<id>, e.g. rhino_50. The database must be a PostgreSQL database, but there are no restrictions on the database’s location (local or remote to any particular node).
The export/<product>-<version>-cluster-<id> directory is used to hold exports taken as a backup prior to a patch or upgrade being applied. Patch files are copied to the install/patches directory prior to installation, and the vm-tools directory contains any miscellaneous tools and scripts required by the patching tools.
If an existing Rhino installation is not located in a directory with a name in the <product>-<version>-cluster-<id> format, orca can be used to rename it (and create the ancillary directories). See the standardize-paths command in orca for details. Note that this command cannot change the username or database name.
The current "live" cluster is indicated by a symlink named rhino
in the user’s home directory, that points to the <product>-<version>-cluster-<id>
installation directory.
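As an illustration, the home directory of a host that has been prepared for a minor upgrade from 2.7.0.6 to 2.7.0.7 (hypothetical product, versions and cluster IDs) might look like this:
rhino -> volte-2.7.0.7-cluster-101
volte-2.7.0.6-cluster-100
volte-2.7.0.7-cluster-101
export
|- volte-2.7.0.6-cluster-100
install
|- patches
with corresponding PostgreSQL databases rhino_100 and rhino_101.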
orca Architecture
orca is a command-line tool used to distribute upgrade/patching tasks to multiple hosts, such as all hosts in a Rhino cluster, and it implements the cluster migration workflow.
It consists of:
- the orca script, which is run from a Linux management host (for example a Linux workstation or one of the cluster’s REM nodes)
- several sub-scripts and tools which are invoked by orca (never invoked directly by the user):
  - common modules
  - workflow modules
  - slee-data-migration
  - slee-data-transformation
  - slee-patch-runner
orca requires SSH connections to the remote hosts: it copies the required modules and tools to the hosts, triggers the command from the management host, and the transferred modules run on the remote hosts. Control is done over the SSH connection, and orca stores logs on the management host.
The picture below shows how orca interacts with the tools.
Upon a valid command, orca will transfer the required module to the remote host. Some commands include packages that will be used on the remote host, such as a minor upgrade package or a patch package. The packages are transferred automatically, and then kept in the install path on the remote host.
While the command is executing, the module running on the remote host will send messages back to the main process on the management host. All messages are persisted in the log files on the management host for debugging purposes. Some messages are shown in the console for the user; others are used internally by orca.
Running orca
Standardized path structure
When orca operates on Rhino installations, it depends on the installation on every host being set up according to the standardized path structure documented above. It can also be used to migrate an existing Rhino installation to this standardized path structure (with temporary loss of service) - see the standardize-paths command details below. Do not attempt to run any other orca commands against an installation that has not been set up with this structure.
Limitation of one node per host
The tools assume one Rhino node per host.
The command-line syntax for invoking orca is:
orca --hosts <host1,host2,…> [--skip-verify-connections] [--remote-home-dir DIR] <command> [args…]
where:
- host1,host2,… is a comma-separated list of all hosts on which the <command> is to be performed. This will normally be all hosts in the cluster, but sometimes you may want to use a selected host or set of hosts; for example, you may want to do some operations in batches when the cluster is large. Some commands, such as upgrade-rem, can operate on different types of hosts, which may not be running Rhino.
- the --skip-verify-connections option, or -k for short, instructs orca not to test connectivity to the hosts before running a command. The default is to test connectivity with every command, which reduces the risk of a command being only partially completed due to a network issue.
- the --remote-home-dir or -r option is used to optionally specify a home path other than /home/sentinel on the remote Rhino hosts. It should not be given when operating on other types of host, such as REM nodes.
- command is the command to run: one of status, prepare, prepare-new-rhino, migrate, rollback, cleanup, apply-patch, revert-patch, minor-upgrade, major-upgrade, upgrade-rem, standardize-paths, import-feature-scripts, rhino-only-upgrade or run.
- args is one or more command-specific arguments.
For further information on command and args see the documentation for each command below.
orca writes output to the terminal and also to log files on the hosts. Once the command is complete, orca may also copy additional log files off the hosts and store them locally under the log directory.
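For example, to check the current state of a three-node cluster (hypothetical host names):
./orca --hosts host1,host2,host3 status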
Running a migrate, rollback, apply-patch or revert-patch command will typically take between 5 and 10 minutes per node, depending on the timeouts set to allow time for active calls to drain.
At the end of many commands, orca will list which commands can be run given the current node/cluster status, for example:
Available actions:
- prepare
- cleanup --clusters
- rollback
Arguments common to multiple commands
Some arguments are common to multiple orca commands. They must always be specified after the command.
SLEE drain timeout
Rhino will not shut down until all activities (traffic) on the node have concluded. To achieve this, the operator must redirect traffic away from the node before starting a procedure such as a migration or upgrade.
The argument --stop-timeout N controls how long orca will wait for active calls to drain and for the SLEE to stop gracefully. The value is specified in seconds, or 0 for no timeout (wait forever). If calls have not drained in the specified time, and thus the SLEE has not stopped, the node will be forcibly killed.
This option applies to the following commands:
- migrate
- rollback
- apply-patch
- revert-patch
- minor-upgrade
- major-upgrade
- rhino-only-upgrade
It is optional, with a default value of 120 seconds.
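For example, to migrate a node while allowing up to five minutes for active calls to drain (hypothetical host name; add --special-first-host only if required, as described under the migrate command):
./orca --hosts host1 migrate --stop-timeout 300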
Dual-stage operations
The --pause, --continue and --no-pause options control behaviour during long operations which apply to multiple hosts, where the user may want the operation to pause after the first host to run validation tests. They apply to the following commands:
- apply-patch
- revert-patch
- minor-upgrade
- major-upgrade
- rhino-only-upgrade
--pause is used to stop orca after the first host, --continue is used when re-running the command for the remaining hosts, and --no-pause means that orca will patch/upgrade all hosts in one go. These options are mutually exclusive and exactly one must be specified when using the above-listed commands.
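For example, a patch might be applied in two stages, pausing after the first host for validation (hypothetical host names and patch file name; this assumes the patch file argument is repeated on the second invocation):
./orca --hosts host1,host2,host3 apply-patch volte-patch.zip --pause
./orca --hosts host1,host2,host3 apply-patch volte-patch.zip --continue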
REM installation information
- The --backup-dir option informs orca where it can locate, or should store, REM backups created during upgrades.
- The --remote-tomcat-home option informs orca where the REM Tomcat installation is located on the host (though orca will try to autodetect this based on environment variables, running processes and searching the home directory). It corresponds to the CATALINA_HOME environment variable.
- In complex Tomcat setups you may also need to specify the --remote-tomcat-base option, which corresponds to the CATALINA_BASE environment variable.
These options apply to the following commands:
- status, when used on a host with REM installed
- upgrade-rem
- rollback-rem
- cleanup-rem
All these options are optional and any number of them can be specified at once. When exactly one of --remote-tomcat-home and --remote-tomcat-base is specified, the other defaults to the same value.
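For example, to upgrade REM on a host where Tomcat is installed in a non-default location (hypothetical host name and paths):
./orca --hosts remhost1 upgrade-rem packages --remote-tomcat-home /opt/apache-tomcat --backup-dir /var/backups/rem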
orca commands
Be sure you are familiar with the concepts and patching process overview described in Cluster migration workflow.
status
The status command prints the status of the nodes on the specified hosts:
- which cluster directories are present
- which cluster directory is live
- the SLEE state of the live node
- a list of export (backup) directories present on the node, if any.
It will also output some global status information, which at present consists of a list of nodes that use per-node service activation state.
The status command will display much of its information even if Rhino is not running, though there may be additional information it can include when it can contact a running Rhino.
standardize-paths
The standardize-paths command renames and reconfigures an existing Rhino installation so it conforms to the standard path structure described here. This command requires three arguments:
- --product <product>, where <product> is the name of the product installed in the Rhino installation. Specify the name as a single word in lowercase, e.g. volte or ipsmgw.
- --version <version>, where <version> is the version of the product installed in the Rhino installation. Specify the version as a set of numbers separated by dots, e.g. 2.7.0.10.
- --sourcedir <sourcedir>, where <sourcedir> is the path to the existing Rhino installation, relative to the rhino user’s home directory.
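For example (hypothetical source directory; the product and version should match what is installed on the hosts):
./orca --hosts host1,host2,host3 standardize-paths --product volte --version 2.7.0.10 --sourcedir RhinoSDK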
Note that the standardize-paths command can only perform filesystem-level manipulation. In particular, it cannot:
- change the user under whose home directory Rhino is installed (this should be the rhino user)
- change the name of the database (this should be in the form rhino_<cluster ID>)
- change any init.d, systemctl or similar scripts that start Rhino automatically on boot, because editing these requires root privileges.
prepare
The prepare command prepares for a migration by creating one node in the uplevel cluster.
It can take three arguments:
- The --copy-db argument will copy the management database of the live cluster to the new cluster. This means that the new cluster will contain the same deployments as the live cluster.
- The --init-db argument will initialize an empty management database in the new cluster. This will allow a different product version to be installed in the new cluster.
- The -n/--new-version argument takes a string that will be used as the version in the name of the new cluster, e.g. "2.7.0.6".
Note that the --copy-db and --init-db arguments cannot be used together.
- The new cluster will have the next sequential ID to the current live cluster. For example, if the current live cluster has cluster ID 100, the new one will have cluster ID 101.
- The configuration of the new cluster is adjusted automatically; there is no need to manually change any configuration files.
- When preparing the first set of nodes in the uplevel cluster, use the --copy-db or --init-db option, which will cause orca to also prepare the new cluster’s database.
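For example, to prepare an uplevel cluster whose database starts as a copy of the live cluster’s (hypothetical host names and version string):
./orca --hosts host1,host2,host3 prepare --copy-db --new-version 2.7.0.7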
prepare-new-rhino
The prepare-new-rhino command performs the same actions as the prepare command, except that it clones the current cluster to a new cluster with a new, specified Rhino.
The arguments are:
- The -n/--new-version argument takes a string that will be used as the version in the name of the new cluster, e.g. "2.7.0.6". The version number here is for the Sentinel product, not the Rhino version.
- The -r/--rhino-package argument takes the Rhino install package to use, e.g. "rhino-install-2.6.1.2.tar".
- The -o/--installer-overrides argument takes a properties file used by the rhino-install.sh script.
Use --installer-overrides with care. If specified, it will override the existing properties used by the currently running system. For more details about Rhino installation see Install Unattended.
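For example (hypothetical host names, product version and Rhino package name):
./orca --hosts host1,host2,host3 prepare-new-rhino -n 2.8.0.0 -r rhino-install-2.6.1.2.tar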
migrate
The migrate command runs a migration from a downlevel to an uplevel cluster. You must have first prepared the uplevel cluster using the prepare command.
To perform the migration, orca will:
- if the --special-first-host option was passed to the command, export the current live node configuration (as a backup)
- stop the live node, and wait to ensure all sockets are closed cleanly
- edit the rhino symlink to point at the uplevel cluster
- start Rhino in the uplevel cluster (if the --special-first-host option is used, then this Rhino node will be explicitly made into a primary node - use this if and only if the uplevel cluster is empty or has no nodes currently running)
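For example, the first node might be migrated on its own and the remaining nodes migrated once it has been verified (hypothetical host names):
./orca --hosts host1 migrate --special-first-host
./orca --hosts host2,host3 migrate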
rollback
The rollback command will move a node back from the uplevel cluster to the downlevel cluster - the reverse of migrate. If there is more than one old cluster, the most recent (highest ID number) is assumed to be the target downlevel cluster.
The process is:
- stop the live node, and wait to ensure all sockets are closed cleanly
- edit the rhino symlink to point at the downlevel cluster
- start Rhino in the downlevel cluster (if the --special-first-host option is used, then this Rhino node will be explicitly made into a primary node - use this if and only if the downlevel cluster is empty or has no nodes currently running).
The usage of --special-first-host (or -f) can change depending on the cluster states. If there are 2 clusters active (50 and 51, for example) and the user wants to roll back a node from cluster 51, do not use the --special-first-host option, because cluster 50 already has a primary member.
If the only active cluster is cluster 51 and the user wants to roll back one node or all of the nodes, then include the --special-first-host option, because cluster 50 is inactive and the first node moved back has to be set as part of the primary group before other nodes join the cluster.
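For example, to roll back a single node while the downlevel cluster still has running nodes (hypothetical host name; see the notes above for when --special-first-host is required):
./orca --hosts host3 rollback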
cleanup
The cleanup command will delete a set of specified cluster or export (backup) directories on the specified nodes.
It takes one or two arguments:
- --clusters <id1,id2,…> and/or
- --exports <id1,id2,…>
where id1,id2,… is a comma-separated list of cluster IDs.
For example, cleanup --clusters 102 --exports 103,104 will delete the cluster directory numbered 102 and the export directories corresponding to clusters 103 and 104.
This command will reject any attempt to remove the live cluster directory.
In general this command should only be used to delete cluster directories:
- once any patching or upgrading is fully completed, and the uplevel version has passed all acceptance tests and been accepted as live configuration, or
- if it is determined that the patch or upgrade is faulty, and rollback to the downlevel version has been fully completed on all nodes.
When cleaning up a cluster directory, the corresponding cluster database is also deleted.
apply-patch
The apply-patch command will perform a prepare and migrate to apply a given patch to the specified nodes in a single step. It requires one argument: <file>, where <file> is the path (full or relative) to the patch .zip file.
Specifically, the apply-patch command does the following:
- on the first host in the specified list of hosts, prepare the uplevel cluster and migrate to it, using the same processes as in the prepare and migrate commands
- on the other hosts, prepare the uplevel cluster
- copy the patch to the first host
- apply the patch to the first host (using the apply-patch.sh script provided in the patch .zip file)
- once the first host has successfully recovered, migrate the other hosts. They will pick up the patch automatically as a consequence of being in the same new Rhino cluster.
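For example, to patch all three nodes of a cluster in one pass (hypothetical host names and patch file name):
./orca --hosts host1,host2,host3 apply-patch volte-patch-VOLTE-1234.zip --no-pause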
revert-patch
The revert-patch command operates identically to the apply-patch command, but reverts the patch instead of applying it. Like apply-patch, it takes a single <file> argument.
Since apply-patch does a migration, it would be most common to revert a patch using the rollback command. However, a rollback will lose any configuration changes made after applying the patch, whereas this command performs a second migration and hence preserves configuration changes.
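For example (hypothetical host names and patch file name):
./orca --hosts host1,host2,host3 revert-patch volte-patch-VOLTE-1234.zip --no-pause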
minor-upgrade
The minor-upgrade command will perform a prepare (creating a new cluster), clear the management database for the new cluster, install the new software version, and migrate the nodes, all in a single step. It requires the following arguments:
- <the packages path>, a path that contains the compressed product SDK with the offline repositories
- <install properties file>, the install properties file for the product
To run a custom package during the installation, specify either or both of the following in packages.cfg:
- post_install_package, to specify a custom package to be applied after the new software is installed but before the configuration is restored
- post_configure_package, to specify a custom package to be applied after all configuration and data migration is done on the first node, but before the other nodes are migrated
The workflow is as follows:
- validate the specified install.properties, and check it is not empty
- check the connectivity to the hosts
- prepare a new cluster; if a new Rhino and/or Java is specified, install them as part of the process
- clean the new cluster database
- create a Rhino export from the current installation
- persist RA configuration and profile table definitions (their table names and profile specifications)
- migrate the node as explained in Cluster migration workflow
- import licenses from the old cluster into the new cluster
- copy the upgrade pack to the first node with the specified install.properties
- install the new version of the product into the new cluster using the product’s installer
- if a post-install package is present, copy the custom package to the first node and run the install executable from that package
- recreate profile tables in the new cluster as needed, to match the set of profile tables from the previous cluster
- restore the Rhino configuration from the customer export (access, logging, SNMP, object pools, etc.)
- restore the RA configuration
- restore the profiles from the customer export (maintaining the current customer configuration)
- if a post-configure package is present, copy the post-configuration custom package to the first node and run the install executable from that package
- migrate the other nodes as explained in Cluster migration workflow
A command example is:
./orca --hosts host1,host2,host3 minor-upgrade packages $HOME/install/install.properties
major-upgrade
The major-upgrade command will perform a prepare (creating a new cluster), clear the management database for the new cluster, install the new software version, and migrate the nodes, all in a single step. It requires the following arguments:
- <the packages path>, a path that contains the compressed product SDK with the offline repositories
- <install properties file>, the install properties file for the product
- optional --skip-new-rhino, which indicates not to install a new Rhino present as part of the upgrade package
- optional --installer-overrides, which takes a properties file used by the rhino-install.sh script
- optional --license, a license file to install with the new Rhino. If no license is specified, orca will check for a license in the upgrade package; if no license is present there, it will check whether the currently installed license is supported for the new Rhino version.
Use --installer-overrides with care. If specified, it will override the existing properties used by the currently running system. For more details about Rhino installation see Install Unattended.
To run a custom package during the installation, specify either or both of the following in packages.cfg:
- post_install_package, to specify a custom package to be applied after the new software is installed but before the configuration is restored
- post_configure_package, to specify a custom package to be applied after all configuration and data migration is done on the first node, but before the other nodes are migrated
The workflow is as follows:
- validate the specified install.properties, and check it is not empty
- check that the packages defined in packages.cfg exist: sdk, rhino, java, post install, post configure
- check the connectivity to the hosts
- prepare a new cluster; if a new Rhino and/or Java is specified, install them as part of the process
- clean the new cluster database
- create a Rhino export from the current installation
- persist RA configuration and profile table definitions (their table names and profile specifications)
- migrate the node as explained in Cluster migration workflow
- import licenses from the old cluster into the new cluster
- copy the upgrade pack to the first node with the specified install.properties
- install the new version of the product into the new cluster using the product’s installer
- generate an export from the new version
- apply data transformation rules on the downlevel export
- recreate profile tables in the new cluster as needed, to match the set of profile tables from the previous cluster
- if a post-install package is present, copy the custom package to the first node and run the install executable from that package
- restore the RA configuration after transformation
- restore the profiles from the transformed export (maintaining the current customer configuration)
- restore the Rhino configuration from the customer export (access, logging, SNMP, object pools, etc.)
- if a post-configure package is present, copy the post-configuration custom package to the first node and run the install executable from that package
- do a 3-way merge of the Feature Scripts
- manually import the Feature Scripts after checking they are correct; see Feature Scripts conflicts and resolution
- migrate the other nodes as explained in Cluster migration workflow
A command example is:
./orca --hosts host1,host2,host3 major-upgrade packages $HOME/install/install.properties
upgrade-rem
The upgrade-rem command upgrades Rhino Element Manager hosts to new versions.
Like the other commands, it takes a list of hosts which the upgrade should be performed on, but these hosts are likely to be specific to Rhino Element Manager, and not actually running Rhino itself.
The command can be used to update both the main REM package, and also plugin modules.
A command example is:
./orca --hosts remhost1,remhost2 upgrade-rem packages
The information about which plugins to upgrade is present in the packages.cfg file.
As part of this command orca generates a backup. This is stored in the backup directory, which (if not overridden by the --backup-dir option) defaults to ~/rem-backup on the REM host. The backup takes the form of a directory named <timestamp>#<number>, where <timestamp> is the time the backup was created in the form YYYYMMDD-HHMMSS, and <number> is a unique integer (starting at 1). For example:
20180901-114400#2
indicates a backup created at 11:44am on 1st September 2018, labelled as backup number 2. The backup number can be used to refer to the backup in the rollback-rem and cleanup-rem commands described below.
The backup contains a copy of:
- the rem_home directory (which contains the plugins)
- the rem.war web application archive
- the rem-rmi.jar file
rollback-rem
The rollback-rem command reverts a REM installation to a previous backup, by stopping Tomcat, copying the files in the backup into place, and restarting Tomcat. Its syntax is
./orca --hosts remhost1 rollback-rem [--target N]
The --target parameter is optional and specifies the number of the backup (the number that appears in the backup’s name after the # symbol) to roll back to. If not specified, orca defaults to the highest-numbered backup, which is likely to be the most recent one.
Because REM is not clustered in the same way as Rhino nodes are, installations and backups may differ between REM nodes. As such, to avoid unexpected results, the --target parameter can only be used if exactly one host is specified.
cleanup-rem
The cleanup-rem command deletes unwanted REM backup directories from a host. Its syntax is
./orca --hosts remhost1 cleanup-rem --backups N[,N,N...]
There is one mandatory parameter, --backups, which specifies the backup(s) to clean up by their number(s) (the numbers that appear in the backups' names after the # symbol), comma-separated without any spaces.
Because REM is not clustered in the same way as Rhino nodes are, installations and backups may differ between REM nodes. As such, to avoid unexpected results, the cleanup-rem command can only be used on one host at a time.
run
The run command allows the user to run a custom command on each host. It takes one mandatory argument <command>, where <command> is the command to run on each host (if it contains spaces or other characters that may be interpreted by the shell, be sure to quote it). It also takes one optional argument --source <file>, where <file> is a script or file to upload before executing the command. (Files specified in this way are uploaded to a temporary directory, and so the command may need to move them to the correct location.)
For example:
./orca --hosts VM1,VM2 run "ls -lrt /home/rhino"
./orca --hosts VM1,VM2 run "mv file.zip /home/rhino; cd /home/rhino; unzip file.zip" --source /path/to/file.zip
Note on symmetric activation state for upgrades
Symmetric activation state mode must be enabled before starting a major or minor upgrade. This means that all services will be forced to have the same state across all nodes in the cluster, which ensures that all services start correctly on all nodes after the upgrade process completes. If your cluster normally operates with symmetric activation state mode disabled, the mode will need to be manually disabled again after the upgrade and related maintenance operations are complete.
See Activation State in the Rhino documentation.
Note on Rhino 2.6.x export
The Rhino export format changed between Rhino 2.5.x and Rhino 2.6.x. More specifically, Rhino 2.6.x exports now include data for each SLEE namespace, and include SAS bundle configuration. orca only restores configuration for the default namespace. Sentinel products are expected to have just the default namespace; if there is more than one namespace, orca will raise an error.
orca does not restore the SAS configuration when doing a minor upgrade; the new product version being installed as part of the minor upgrade includes all the necessary SAS bundles. For Rhino documentation about namespaces see Namespaces. For Rhino SAS configuration see MetaView Service Assurance Server (SAS) Tracing.
Troubleshooting
See Troubleshooting.
Patches
This document covers the procedure and the tools for applying patches for Sentinel products.
General information
The Patch Generator requires a running Rhino with the released software that will be patched. It takes as its main input the SLEE DUs or SLEE Jars (SLEE deployable units) to be patched. The tool queries the live Rhino to capture the dependencies on services and other components. With that information it generates a file that captures the dependencies and a set of actions to be taken in a precise order.
The Patch Runner tool is responsible for applying the patch. It takes as input the patch and the Rhino client for the product to be patched. The tool runs a checklist to assert that the system is in the expected state; checks include component names, versions and the dependency tree. If the system is in a state that can be patched, the tool installs the new components according to the list of actions generated by the Patch Generator.
Patch Generator
Architecture
The slee-patch-generator tool is intended to create patches for SLEE applications. Internally it uses the Rhino Management Interface to get information about the components installed in Rhino.
The tool receives a list of SLEE components that will be installed, checks the dependencies of other components on them, and creates a set of actions that will be used by the slee-patch-runner.
For how to create a patch, see Creating a patch. To apply the patch, see Applying patches to non live system.
The high-level algorithm is:
- Detect the target components to be updated
- Assert that:
  - the replacement components have the same SLEE component IDs
  - the replacement DUs have the same set structure as those they’re replacing
  - any relevant bindings are only for SBBs / SBB parts
- Detect all Rhino deployment state which depends on those target components
- Detect all the dependencies between components and other state
- Calculate a dependency graph
- Convert the dependency graph into a list (ordered from most downstream to most upstream)
- Create a set of pre-conditions to assert, based on the target and downstream components and state
- Create a series of actions from downstream to upstream which disassembles / uninstalls
- Create a series of actions from upstream to downstream which re-assembles / installs
- Create a set of post-conditions to assert, based on the target and downstream components and state
- Create the metadata that will hold the patch history
- Generate a reverse patch to do the reverse of the above
For each step the tool generates logs used to identify problems and causes of failure.
SLEE component dependencies
SLEE components have a hierarchy of dependencies, shown in the picture below.
In order to install a component it is necessary to remove the dependencies between the patched component and the other components. This is equivalent to the rhino-console command "unverify".
To patch profile specifications it is also necessary to remove any profile table that depends on the profile specification being patched.
The patch.yaml
The patch.yaml file contains the information used by the slee-patch-runner to apply the patch. The model contains a set of actions, some of which are directional: forwardAction or reverseAction. The former is used when applying the patch, the latter when reverting (uninstalling) it.
The file is divided into the following high-level sections:
commonConditions
Common conditions are conditions that are checked as both pre-conditions and post-conditions - for example, asserting the service state.
- AssertServiceExists
- AssertInstallLevel
preconditions
Pre-conditions are actions to be executed before changing any state in the target system.
- AssertDeployableUnitInstalled
disassemblyActions
These are actions that remove the target component from the system before the new one is installed. The actions are:
- SaveServiceActivationState
- SaveTraceLevels
- DeactivateService
- AwaitServiceDeactivation
- RemoveBindings
- UninstallDeployableUnit
The metadata file
When generating a patch, the user can specify a metadata file containing information that will be propagated to the patch documentation.
The properties are:
- patchName
- productName
- productVersion
- patchVersion
- ticket
- description
The file is in yaml format. Example:
patchName: 'customer-name-VOLTE-1234'
productName: 'Volte'
productVersion: '2.7.0.7'
patchVersion: '1'
ticket: 'VOLTE-1234'
description: 'Fix crash on call teardown'
The document template
The patch generator also generates the README.txt included in each patch. The file is generated based on a template file that is part of the slee-patch-generator package. Each section of the README.txt is based on token substitution, where a token has the format !word!.
.
The current template is:
1. PATCH DETAILS
Patch name: !patch.name!
Date: !patch.date!
Product to apply: !product.name!
Product version: !product.version!
Reference ticket: !ticket!
Patch description: !description!
It will install new versions of the following components:
!components!

2. HOW TO APPLY THE PATCH
The patch requires:
- A running Rhino with an installed product and version expected by the patch.
- A client Rhino able to connect to the running rhino.
- JAVA_HOME variable set
To apply the patch run the command
apply-patch.sh -c <absolute-path-to-rhino-client>
NOTE: The patch checks for the correct component based on the component name and the checksum. To force the patch to be applied use the option --allow-checksum-mismatch. Beware that it will skip the checksum and will remove the installed component. The patch DOES NOT MAKE a copy of the component. The original component is shipped with the patch, meaning that if the patch is reverted the shipped component is installed.

3. LOGS GENERATED
The patch generates logs of the actions executed and also collects the rhino logs. In case of failure the logs will give the indication of the problem.

4. UNINSTALL THE PATCH
To revert the patch execute the following command
apply-patch.sh -c <absolute-path-to-rhino-client> -r
NOTE: the revert patch will reinstall the expected original component shipped with the patch. The component will be the same as the previous installed one, unless the patch was applied with the option --allow-checksum-mismatch.

5. ACTIONS EXECUTED BY THE PATCH
5.1. Pre conditions to check
!pre.conditions!
5.2. Actions
!actions!
5.3. Post conditions to check
!post.conditions!
Creating a patch
Patch development guideline
The general steps in creating a patch are:
- a bug is identified and the system needs a fix
- create a branch from the same release version of the product
- replicate the problem by creating an integration test that simulates the bug
- make the proper code modifications to solve the bug
- if necessary, create more integration tests
- check the problem is fixed by running the system against the specific integration tests
- verify that all integration tests pass, to guarantee the fix didn't break any other behaviour
- create the patch
- test the patch against the same installed version of the destination system
- test the reverse patch against the system with the patch installed
- review and add the proper information in the patch README.txt
- release the patch to be applied
The steps above compose the recommended guideline for creating a patch; the patch generator is just one step in the whole procedure.
Creating the patch
Creating a patch after the code changes have been applied is simple:
- build the product module that will be patched and check the jar is in the local cache or in the target/artifacts directory
- have a Rhino running with the product to be patched
- get the slee-patch-generator-package.zip from the Artifactory and save it to the local machine
- define a metadata file with the following format:
patchName: 'customer-name-VOLTE-1234'
productName: 'Volte'
productVersion: '2.7.0.7'
patchVersion: '1'
ticket: 'VOLTE-1234'
description: 'Fix crash on call teardown'
buildInfo:
  sentinel-volte: 45476adc
  sentinel-core: 33397cc
The recommended filename is metadata.yaml, but this is not important.
Use --dev-mode to avoid specifying the metadata. This should not be used for production patches.
- unzip slee-patch-generator-package.zip
- run generate-patch.sh:
generate-patch.sh \
  -c <rhino client dir> \
  -o <destination patch dir> \
  -m <metadata file> \
  <replacement jars>
This will create (a zip file containing) the patch in the <destination patch dir>.
The patch is composed of:
Item | Description
---|---
README.txt | Contains the information about the patch: how to use it, what components it changes, and what problems it fixes
apply-patch.sh | The script that wraps the slee-patch-runner
patch.yaml | Contains the instructions used to apply the patch
log4j.properties | The log level definition for the patch runner
third-party-licenses.txt | License information for third party libraries used by the patch runner
artifacts directory | Contains the original and patched components that will be applied to the system
lib directory | Contains the tools used to apply the patch to the system
profilespec directory | Contains database definitions
You can apply the patch locally to check it is working. For how to apply a patch see Applying patches to non live system.
Production patch
The production patch contains the patch generated above plus the orca tool. To pack the production patch:
- get the latest version of orca-bundler.zip from operational-tools
- decompress it
- run:
./generate-orca-bundle \
  patch \
  --patch-path <the absolute path to the patch zip file> \
  --out <the zip bundle name> \
  --product-name=<product name> \
  --product-version=<product version> \
  --ticket=<a ticket number related to the patch> \
  --description="<description of the patch>" \
  --patch-name=<name of the patch> \
  --patch-version=<a number>
The generate-orca-bundle options for the patch command are:
usage: generate-orca-bundle patch [-h] --patch-path PATCH_PATH --patch-name PATCH_NAME
                                  --patch-version PATCH_VERSION --ticket TICKET --out OUT
                                  --product-name PRODUCT_NAME --product-version PRODUCT_VERSION
                                  --description DESCRIPTION

optional arguments:
  -h, --help            show this help message and exit
  --patch-path PATCH_PATH
                        Path of patch zip to be bundled
  --patch-name PATCH_NAME
                        Name of the patch, for the README
  --patch-version PATCH_VERSION
                        Version of the patch, for the README
  --ticket TICKET       A JIRA ticket, for the README
  --out OUT             Path of orca bundle to generate
  --product-name PRODUCT_NAME
                        Product name, for the README
  --product-version PRODUCT_VERSION
                        Product version, for the README
  --description DESCRIPTION
                        Description, for the README
What the patch generator does
The patch generator detects all the SLEE dependencies on the component being patched and creates a set of actions that the patch runner will execute.
The actions are:
- check the SLEE component to be patched is installed and has the correct version and checksum
- check all the SLEE service dependencies
- stop the SLEE services and remove the dependencies
- uninstall the component
- install the new component
- add the component dependencies and start the SLEE services
It also generates a basic README.txt with information about how to execute the patch and the components being patched.
For more information see Patch Generator.
What can be patched
Any type of SLEE component can be patched:
- Service building block (SBB)
- Service building block part (SBB part)
- Library
- Event type
- Profile specification
- Resource Adaptor Type
- Resource Adaptor
Constraints on what can be patched
- Components cannot be added.
- Components cannot be removed.
- Components are always packaged in deployable units. When replacing a SLEE component with a particular deployable unit, all the components in that DU must be patched.
- The patched component must have exactly the same component ID as the original component.
  - Services are an exception to this rule: they can differ only in the fourth part of their version. That is, when the version is of the form a.b.c.d, the d part can change.
  - Products whose component IDs change between release builds (such as SIS 2.5.* and earlier) would require patch-specific builds, in order to create a patched component with a matching version.
Tool parameters
generate-patch.sh --help
Usage: java ...PatchGeneratorMain [OPTION]... <INPUT_FILE>...
 INPUT_FILE                         : One or more SLEE components / deployable unit jars / service XML files containing the components to patch
 --dev-mode                         : Skip metadata validation. Do not use to create production patches.
 --nozip                            : Skip zip file generation. (default: false)
 -c (--rhino-client-dir) CLIENT_DIR : Client directory of the Rhino to base the patch on
 -g (--generator-dir) GENERATOR_DIR : Base directory of the patch generator
 -h (--help)                        : Print usage information (default: false)
 -l (--log-file) LOG_FILE           : Log file
 -m (--metadata-file) METADATA_FILE : YAML file with metadata used by the patch generator
 -o (--output-dir) OUTPUT_DIR       : Output directory: where to put the generated patch
Example of patch generation
Example of generating a patch for the Volte MMTel communication diversion and MMTel Call waiting feature:
generate-patch.sh -c rhino/client -o $HOME/patch -m metadata.yaml mmtel-cdiv.jar
12:42:36,625 INFO [patch.generator] Log file for SLEE patch generator. 12:42:36,638 INFO [patch.generator] Using Rhino client home: /home/user/amauriala/rhino_client_2.7.0.6 12:42:36,638 INFO [patch.generator] Using SLEE component, service XML, or DU jar file(s) as input: [../volte-2_7_0_7/units/mmtel-cdiv-2.7.0.7.jar] 12:42:36,638 INFO [patch.generator] Using output patch dir: /home/user/amauriala/patches/test-all/cdiv 12:42:36,638 INFO [patch.generator] Using generator dir: /home/user/amauriala/patches/test-all 12:42:36,638 INFO [patch.generator] Using log file: /home/user/amauriala/patches/test-all/logs/patch-generator.log 12:42:36,638 INFO [patch.generator] Connecting to Rhino... 12:42:37,052 INFO [patch.generator] Connected to Rhino. 12:42:37,115 DEBUG [patch.generator] Preparing temp dir at '/home/user/amauriala/patches/test-all/tmp/patch-creation-original-artifacts3728006358016587614', to be deleted when the JVM exits. 12:42:37,115 DEBUG [patch.generator] Preparing temp dir at '/home/user/amauriala/patches/test-all/tmp/patch-creation-patched-artifacts6911442179420888524', to be deleted when the JVM exits. 12:42:37,116 DEBUG [patch.generator] Ensuring that directory 'cdiv' exists, creating it if necessary. 12:42:37,116 INFO [patch.generator] Building patch model... 12:42:37,299 INFO [patch.generator] Detected that '../volte-2_7_0_7/units/mmtel-cdiv-2.7.0.7.jar' is a deployable unit jar. 12:42:37,302 INFO [patch.generator] Component IDs in du jar file at '../volte-2_7_0_7/units/mmtel-cdiv-2.7.0.7.jar': - SbbPartID[name=mmtel-cdiv,vendor=OpenCloud,version=2.7.0] 12:42:39,709 INFO [patch.generator] Detected deployable unit to replace for du jar file at '../volte-2_7_0_7/units/mmtel-cdiv-2.7.0.7.jar': - DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar] 12:42:39,858 INFO [patch.generator] Building patch model based on input file(s): - du jar file at '../volte-2_7_0_7/units/mmtel-cdiv-2.7.0.7.jar' 12:42:39,859 INFO [patch.generator] Calculating component dependencies. (This usually takes a few seconds) 12:42:39,863 INFO [patch.generator] Querying Rhino for currently deployed components... 12:42:39,863 INFO [patch.generator] Gathering all installed component IDs 12:42:40,006 INFO [patch.generator] Mapping components to descriptors... 12:42:41,111 INFO [patch.generator] Querying Rhino for component dependencies... 12:42:41,111 INFO [patch.generator] Filtering installed components... 12:42:41,120 INFO [patch.generator] Building outgoing components map... 12:42:43,254 INFO [patch.generator] Calculating DUs from components... 12:42:43,255 INFO [patch.generator] targetDus:[DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar]] 12:42:43,847 INFO [patch.generator] componentSetsDownstreamToUpstream: [SbbPartID[name=mmtel-cdiv,vendor=OpenCloud,version=2.7.0]] 12:42:44,435 INFO [patch.generator] Detected service to update: ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] 12:42:44,503 INFO [patch.generator] affected services:[ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current]] 12:42:44,503 INFO [patch.generator] affected bindings:[BindingDescriptorID[name=mmtel-cdiv-volte.sentinel.sip-bindings,vendor=opencloud,version=2.7.0]] 12:42:44,684 INFO [patch.generator] All affected services: - ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] 12:42:44,685 INFO [patch.generator] Building patch model based on component dependencies. 12:42:44,694 INFO [patch.generator] Finished building patch model. 
12:42:44,697 INFO [patch.generator] Built patch model: PatchModel{metadata=Metadata{patchName='CDIV-patch', patchVersion='1', productName='VoLTE', productVersion='2.7.0.7', ticket='OCS-1234', description='Fix fix for call forward unconditional', originalComponentIds=[SbbPart[mmtel-cdiv/OpenCloud/2.7.0]], patchedComponentIds=[SbbPart[mmtel-cdiv/OpenCloud/2.7.0]], patchToolsVersion='1.0.0-TRUNK.0-SNAPSHOT.r117-9bfb922', patchToolsCommit='9bfb922', buildInfo={sentinel-volte=123abc}}, commonConditions=[AssertServiceExists{componentId=Service[volte.sentinel.sip/OpenCloud/current]}, AssertInstallLevel{componentId=SbbPart[mmtel-cdiv/OpenCloud/2.7.0], installLevel=DEPLOYED}], preconditions=[DirectionalAction{forwardAction=AssertDeployableUnitInstalled{deployableUnitID=DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar], componentIdsToChecksums={SbbPart[mmtel-cdiv/OpenCloud/2.7.0]=f43407c155c4cc4ca5ee379428cc256261c30f0b}}, reverseAction=AssertDeployableUnitInstalled{deployableUnitID=DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6-patched-CDIV-patch-1.jar], componentIdsToChecksums={SbbPart[mmtel-cdiv/OpenCloud/2.7.0]=c5a6bbe4726448fd1868442948b09b5d5dd0d166}}}], disassemblyActions=[SaveServiceActivationState{componentId=Service[volte.sentinel.sip/OpenCloud/current]}, SaveTraceLevels{componentId=Service[volte.sentinel.sip/OpenCloud/current]}, DeactivateService{componentId=Service[volte.sentinel.sip/OpenCloud/current]}, AwaitServiceDeactivation{componentId=Service[volte.sentinel.sip/OpenCloud/current], timeoutSeconds=null}, RemoveBindings{componentId=Service[volte.sentinel.sip/OpenCloud/current], bindingDescriptors=[BindingDescriptorID[name=mmtel-cdiv-volte.sentinel.sip-bindings,vendor=opencloud,version=2.7.0]]}, DirectionalAction{forwardAction=UninstallDeployableUnit{deployableUnitID=DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar], componentIds=[SbbPart[mmtel-cdiv/OpenCloud/2.7.0]]}, reverseAction=UninstallDeployableUnit{deployableUnitID=DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6-patched-CDIV-patch-1.jar], componentIds=[SbbPart[mmtel-cdiv/OpenCloud/2.7.0]]}}], reassemblyActions=[DirectionalAction{forwardAction=InstallDeployableUnit{deployableUnitJarName=mmtel-cdiv-2.7.0.7.jar, url=file:modules/opencloud/mmtel-cdiv-2.7.0.6-patched-CDIV-patch-1.jar, isPatched=true}, reverseAction=InstallDeployableUnit{deployableUnitJarName=mmtel-cdiv-2.7.0.6.jar, url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar, isPatched=false}}, AddBindings{componentId=Service[volte.sentinel.sip/OpenCloud/current], bindingDescriptors=[BindingDescriptorID[name=mmtel-cdiv-volte.sentinel.sip-bindings,vendor=opencloud,version=2.7.0]]}, DeployService{componentId=Service[volte.sentinel.sip/OpenCloud/current]}, RestoreServiceActivationState{componentId=Service[volte.sentinel.sip/OpenCloud/current]}, RestoreTraceLevels{componentId=Service[volte.sentinel.sip/OpenCloud/current]}], postconditions=[DirectionalAction{forwardAction=AssertDeployableUnitInstalled{deployableUnitID=DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6-patched-CDIV-patch-1.jar], componentIdsToChecksums={SbbPart[mmtel-cdiv/OpenCloud/2.7.0]=c5a6bbe4726448fd1868442948b09b5d5dd0d166}}, reverseAction=AssertDeployableUnitInstalled{deployableUnitID=DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar], componentIdsToChecksums={SbbPart[mmtel-cdiv/OpenCloud/2.7.0]=f43407c155c4cc4ca5ee379428cc256261c30f0b}}}]}. 12:42:44,700 INFO [patch.generator] Writing patch... 
12:42:44,700 DEBUG [patch.generator] Assembling patch YAML at cdiv/patch.yaml 12:42:44,739 DEBUG [patch.generator] Creating artifacts dir 12:42:44,740 DEBUG [patch.generator] Ensuring that directory 'cdiv/artifacts' exists, creating it if necessary. 12:42:44,740 DEBUG [patch.generator] Ensuring that directory 'cdiv/artifacts/original' exists, creating it if necessary. 12:42:44,740 DEBUG [patch.generator] Ensuring that directory 'cdiv/artifacts/patched' exists, creating it if necessary. 12:42:44,740 DEBUG [patch.generator] Copying deployable unit jars to artifacts dir. 12:42:44,740 DEBUG [patch.generator] Copying 'apply-patch.sh' script to output dir. 12:42:44,740 DEBUG [patch.generator] Copying file 'apply-patch.sh' from '/home/user/amauriala/patches/test-all/resources' to 'cdiv', 12:42:44,746 DEBUG [patch.generator] Copying 'log4j.properties' file to output dir. 12:42:44,746 DEBUG [patch.generator] Copying file 'log4j.properties' from '/home/user/amauriala/patches/test-all/resources' to 'cdiv', 12:42:44,748 DEBUG [patch.generator] Copying library jars to output dir. 12:42:44,748 DEBUG [patch.generator] Ensuring that directory 'cdiv/lib' exists, creating it if necessary. 12:42:44,748 DEBUG [patch.generator] Copying file 'rhino-remote-2.7.0-TRUNK.0-M5-SNAPSHOT.r3482-f3a5e42.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,748 DEBUG [patch.generator] Copying file 'slee-patch-common.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,749 DEBUG [patch.generator] Copying file 'slee-patch-history-profile-1.0.0-TRUNK.0-SNAPSHOT.r117-9bfb922.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,749 DEBUG [patch.generator] Copying file 'log4j-1.2.17.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,759 DEBUG [patch.generator] Copying file 'guava-17.0.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,771 DEBUG [patch.generator] Copying file 'sdk-common-2.9.0-TRUNK.0-SNAPSHOT.r1353-301b05c.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,799 DEBUG [patch.generator] Copying file 'slf4j-api-1.7.7.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,799 DEBUG [patch.generator] Copying file 'args4j-2.32.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,808 DEBUG [patch.generator] Copying file 'snakeyaml-1.19.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,809 DEBUG [patch.generator] Copying file 'core-util-2.4.5-M1.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,820 DEBUG [patch.generator] Copying file 'slee-patch-runner.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,824 DEBUG [patch.generator] Copying file 'jline-2.12.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,829 DEBUG [patch.generator] Copying file 'gson-2.2.4.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,831 DEBUG [patch.generator] Copying file 'slee-1.1.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,832 DEBUG [patch.generator] Copying file 'slf4j-log4j12-1.7.7.jar' from '/home/user/amauriala/patches/test-all/lib' to 'cdiv/lib', 12:42:44,832 DEBUG [patch.generator] Copying history profile spec jar to output dir. 12:42:44,832 DEBUG [patch.generator] Ensuring that directory 'cdiv/profilespec' exists, creating it if necessary. 
12:42:44,843 DEBUG [patch.generator] Copying file 'slee-patch-history-profile-1.0.0-TRUNK.0-SNAPSHOT.r117-9bfb922.du.jar' from '/home/user/amauriala/patches/test-all/profilespec' to 'cdiv/profilespec', 12:42:44,848 INFO [patch.generator] Patch written to '/home/user/amauriala/patches/test-all/cdiv'. 12:42:44,848 INFO [patch.generator] Reading in the YAML file to check that the model remains unchanged... 12:42:44,912 INFO [patch.generator] Preview of actions to be executed by the patch... 12:42:44,915 INFO [patch.generator] Dry-run actions against prototype install... 12:42:45,121 INFO [patch.rhinoactionapplier] Updated installed services: 12:42:45,121 INFO [patch.rhinoactionapplier] - ServiceID[name=volte.sentinel.ss7,vendor=OpenCloud,version=2.7.0.6] 12:42:45,121 INFO [patch.rhinoactionapplier] - ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=2.7.0.6] 12:42:45,121 INFO [patch.rhinoactionapplier] - ServiceID[name=IM-SSF,vendor=OpenCloud,version=1.4.6] 12:42:45,121 INFO [patch.rhinoactionapplier] - ServiceID[name=sentinel.registrar,vendor=OpenCloud,version=2.7.0.7] 12:42:45,122 INFO [patch.generator] Starting to run 3 pre-patch checks. 12:42:45,134 INFO [patch.generator] Running pre-patch check 1 of 3: Assert that 'Service[volte.sentinel.sip/OpenCloud/current]' is present in the target Rhino 12:42:45,159 INFO [patch.rhinoactionapplier] Found service ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] 12:42:45,159 INFO [patch.generator] Finished pre-patch check 1 in 25ms 12:42:45,160 INFO [patch.generator] Running pre-patch check 2 of 3: Assert that 'SbbPart[mmtel-cdiv/OpenCloud/2.7.0]' is installed, and is at the DEPLOYED install level. 12:42:45,163 INFO [patch.rhinoactionapplier] Component SbbPartID[name=mmtel-cdiv,vendor=OpenCloud,version=2.7.0] is in state DEPLOYED 12:42:45,163 INFO [patch.generator] Finished pre-patch check 2 in 3ms 12:42:45,163 INFO [patch.generator] Running pre-patch check 3 of 3: Assert deployable unit 'DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar]' (or equivalent) is installed in the SLEE with the following components and SHA-1 checksums: - SbbPart[mmtel-cdiv/OpenCloud/2.7.0] -> f43407c155c4cc4ca5ee379428cc256261c30f0b 12:42:45,169 INFO [patch.rhinoactionapplier] Found installed deployable unit DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar] 12:42:45,171 INFO [patch.rhinoactionapplier] - Found matching checksum for component SbbPartID[name=mmtel-cdiv,vendor=OpenCloud,version=2.7.0] 12:42:45,173 INFO [patch.generator] Finished pre-patch check 3 in 10ms 12:42:45,173 INFO [patch.generator] Writing patch documentation... 12:42:45,173 INFO [patch.generator] Generating the patch documentation ... 12:42:45,176 INFO [patch.generator] Writing documentation to cdiv/README.txt 12:42:45,182 INFO [patch.generator] Patch documentation generated. 12:42:45,182 INFO [patch.generator] Finished. 12:42:45,182 INFO [patch.generator] Patch is ready, written to 'cdiv'.
Applying patches to a non-live system
Patches
A patch is encapsulated in a zip file and should follow the naming convention:
<product name>-<product version>-patch-<date and time>-<english code name>.zip
Example: volte-2.7.0.4-patch-20180517-1403-cdiv-timers.zip
The patch file is self-contained: it includes the artifacts and the tools necessary to apply the patch. For consistency, all patch contents are contained within a directory called "patch" within the zip file.
The contents of a patch file are:
Item | Description |
---|---|
README.txt | Contains the information about the patch: how to use it, what components it changes, and what problems it fixes |
apply-patch.sh | The script that wraps the slee-patch-runner |
patch.yaml | Contains the instructions used to apply the patch |
log4j.properties | The log level definition for the patch runner |
third-party-licenses.txt | License information for third party libraries used by the patch runner |
artifacts directory | Contains the original and patched components that will be applied to the system |
lib directory | Contains the tools used to apply the patch to the system |
profilespec directory | Contains database definitions |
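For illustration, here is a minimal sketch of unpacking a patch and inspecting its contents; the zip file name and directory paths are hypothetical, and all contents sit under the top-level "patch" directory inside the zip:
# Unpack the patch into a dedicated directory (file name is illustrative)
mkdir -p ~/patches/cdiv && cd ~/patches/cdiv
unzip ~/volte-2.7.0.4-patch-20180517-1403-cdiv-timers.zip
# List the contents described in the table above
ls patch
# README.txt  apply-patch.sh  artifacts  lib  log4j.properties  patch.yaml  profilespec  third-party-licenses.txt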
Getting the available patches
Released patches are published in the Metaswitch Artifactory; the appropriate credentials are required to access them.
Applying the patch in test environments
Applying a patch to a non-production environment is straightforward. The normal steps are:
-
do a Rhino backup using the rhino-export script
-
decompress the patch
-
read the README.txt for instructions and for details of the patch:
-
what the patch fixes
-
how to install and uninstall it
-
the actions the patch performs
-
verify that you have a Rhino running with the installed product, e.g. Sentinel VoLTE
-
run the patch runner (see the sketch after this list):
*./apply-patch.sh -c <path to the Rhino client>*
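A minimal sketch of the sequence above, assuming the Rhino client is installed at ~/rhino_client and that the rhino-export script takes a target directory for the backup; all paths and file names are illustrative:
# 1. Back up the current Rhino configuration with rhino-export
~/rhino_client/bin/rhino-export ~/backups/pre-patch-export

# 2. Decompress the patch into a dedicated directory
mkdir -p ~/patches/cdiv && cd ~/patches/cdiv
unzip ~/volte-2.7.0.4-patch-20180517-1403-cdiv-timers.zip

# 3. Read the patch details before applying
less patch/README.txt

# 4. Apply the patch against the running Rhino
cd patch
./apply-patch.sh -c ~/rhino_client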
In summary, the patch runner:
-
checks that the patch can be applied (pre-conditions)
-
stops the SLEE services bound to the components being patched
-
uninstalls the previous components
-
installs the new components, adding a suffix "patched-<version>" to the component URL
-
restarts the SLEE services
-
checks that the new components are present
Applying a patch requires administrative access to the Rhino running the product to be patched.
To keep the patches organized we recommend creating a specific directory in which to decompress the patches.
The patch has to stop all the SLEE services that the component is bound to. In a cluster environment this means the services will be deactivated cluster-wide.
What the patch runner does
The patch runner executes several steps. The major actions are:
Check the pre-conditions determined in the patch.yaml
Several pre-conditions are checked:
-
product name and product version
-
component name, version and checksum
-
services and RAs related to the component
Deactivate the services the component is bound to
A component can be bound to more than one service. The patch runner deactivates the services and waits until they all move to the INACTIVE state. If one of the services does not transition to the INACTIVE state, the patch runner will time out and try to force a service deactivation. See Timeouts for service deactivation.
Remove the bindings related to the component
All service bindings to components that have dependencies on the patched component have to be removed and later restored. The dependencies are listed in the patch.yaml and are known up front when the patch is created.
Unverify the dependent components
Unverify all the components that have dependencies on the patched component.
Install the new component
Install the new component. The component URL is changed to the format: file:<path>/<component-name>-patched-<version>.jar
Activate the services
Activate the services related to the patched component and check that they are active.
Write the patch history
The patch runner writes the patch history to the PatchHistoryProfileTable. Each patch creates an entry in the table with the patch name and date. Some information is extracted from the metadata provided when generating the patch; other information is extracted dynamically when applying the patch.
The contents are:
Item | Description |
---|---|
PatchAppliedDate | The date and time the patch was applied |
PatchName | The patch name as provided when generating the patch |
PatchVersion | The patch version as provided when generating the patch |
ProductName | The product name the patch was created for |
ProductVersion | The product version the patch was created for |
PatchTicket | The reference ticket as provided when generating the patch |
Description | The patch description as provided when generating the patch |
ComponentInfo | The components the patch changed |
BuildInfo | The project build information as provided when generating the patch |
IsReversePatch | If the patch-runner was used to uninstall the patch with the --revert option |
IsForced | If the patch-runner was used with the --allow-checksum-mismatch option |
IsAppliedSuccessfully | If the patch was applied or reverted successfully |
ExtensionData | Generic name/value pair field for extra information |
PatchToolsVersion | The patch tool version used to generate the patch |
PatchToolsCommit | The last source code commit from the tool used to create the patch |
An example of patch history:
$rhino-console listprofileattributes PatchHistoryProfileTable 2018-04-04-12:43:32-CDIV-patch BuildInfo={"sentinel-volte":"123abc"} ComponentInfo={"originalComponents":[{"name":"mmtel-cdiv","vendor":"OpenCloud","version":"2.7.0","componentType":"SbbPartID","hash":805159952}],"patchedComponents":[{"name":"mmtel-cdiv","vendor":"OpenCloud","version":"2.7.0","componentType":"SbbPartID","hash":805159952}]} Description=Fix cdiv for call forward unconditional ExtensionData={null} IsAppliedSuccessfully=true IsForced=false IsReversePatch=false PatchAppliedDate=Wed Apr 04 12:43:32 NZST 2018 PatchName=CDIV-patch PatchTicket=OCS-1234 PatchToolsCommit=9bfb922 PatchToolsVersion=1.0.0-TRUNK.0-SNAPSHOT.r117-9bfb922 PatchVersion=1 ProductName=VoLTE ProductVersion=2.7.0.7
Reverting a patch
Every delivered patch can be reverted. The procedure is the same as applying a patch, except that the original components are re-installed. The restored components are the expected originals, which are shipped with the patch.
To revert a patch, run:
*./apply-patch.sh -c <path to the Rhino client> --revert*
Applying a patch with the --allow-checksum-mismatch option
There might be situations where forcing a patch installation is required. The --allow-checksum-mismatch option allows the patch to continue even if it encounters checksum mismatches for the components being patched. It will still check that the name, vendor and version are equivalent, but will allow builds to be installed which differ from the expected builds. WARNING: be aware that when using --allow-checksum-mismatch, the exact builds of the components in the system are not recoverable except by restoring via rhino-import. That is, reversing the patch will install the expected original components, rather than the components which were actually installed.
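For example, to force the installation despite checksum differences (the Rhino client path is illustrative):
# Apply the patch even if the installed components' checksums differ from the expected ones
./apply-patch.sh -c ~/rhino_client --allow-checksum-mismatch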
Timeouts for service deactivation
The slee-patch-runner has two main timeouts:
-
the timeout waiting for a service or RA to transition to the stopped state before the force-stop procedure starts
-
the timeout waiting for a service or RA to transition to the stopped state after the force-stop procedure starts
The default value for both is 30 seconds, but they can be specified on the command line using the following arguments (an example follows the table):
Argument | Description |
---|---|
--deactivate-time | Time in seconds to allow all services or RAs to deactivate before failing or forcing deactivation |
--force-deactivate | How to handle service or RA deactivation timeouts |
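For example, to extend the deactivation timeout to 60 seconds (only --deactivate-time is shown, since the accepted values of --force-deactivate are not listed here; the Rhino client path is illustrative):
# Allow up to 60 seconds for services and RAs to deactivate before the force-stop procedure starts
./apply-patch.sh -c ~/rhino_client --deactivate-time 60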
Troubleshooting
See Troubleshooting.
Besides the information on the console, the patch runner writes detailed output of the actions taken to a log file. By default the log file is located under the logs directory where the patch is stored, with the name patch-runner.log.
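For example, to follow the patch runner log while a patch is being applied (assuming the patch was decompressed to ~/patches/cdiv/patch):
# Watch the patch runner's detailed log output as the patch is applied
tail -f ~/patches/cdiv/patch/logs/patch-runner.log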
Running out of Perm Gen space
Installing a patch is a complex operation as far as the Java virtual machine is concerned, and with patches that affect large parts of the system there is a chance that the operation will halt with an exception reporting that the system has run out of Perm Gen space. This is particularly likely if a number of patches are applied without restarting Rhino between each one, as the loss of Perm Gen space builds up over time.
Unfortunately there is no recovery from this situation (beyond restoring the backup you took before starting the patch operation). To avoid it, it is advisable to stop and restart Rhino before starting the patch operation, which ensures that it has the maximum headroom of Perm Gen space available during patching.
Example
Here is an example of applying a patch to the mmtel-cdiv feature.
Using JAVA_HOME '/opt/jdk1.7.0_79'. 12:43:30,884 INFO [patch.runner] Log file for SLEE patch runner. 12:43:30,888 INFO [patch.runner] Using patch file: /home/user/amauriala/patches/test-all/cdiv/patch.yaml 12:43:30,888 INFO [patch.runner] Using log file: /home/user/amauriala/patches/test-all/cdiv/logs/patch-runner.log 12:43:30,888 INFO [patch.runner] Using default deactivation timeout: 0 12:43:30,888 INFO [patch.runner] Using Rhino client home: /home/user/amauriala/rhino_client_2.7.0.6 12:43:30,888 INFO [patch.runner] Using original artifacts dir: /home/user/amauriala/patches/test-all/cdiv/artifacts/original 12:43:30,888 INFO [patch.runner] Using patched artifacts dir: /home/user/amauriala/patches/test-all/cdiv/artifacts/patched 12:43:30,888 INFO [patch.runner] Connecting to Rhino... 12:43:31,171 INFO [patch.runner] Connection established. 12:43:31,765 INFO [patch.runner] JVM.GarbageCollector.heapUsed 1217.64M 12:43:31,765 INFO [patch.runner] JVM.GarbageCollector.heapCommitted 4294.77M 12:43:31,766 INFO [patch.runner] JVM.GarbageCollector.heapInitial 4294.97M 12:43:31,766 INFO [patch.runner] JVM.GarbageCollector.heapMaximum 4294.77M 12:43:31,767 INFO [patch.runner] JVM.GarbageCollector.nonHeapUsed 398.43M 12:43:31,767 INFO [patch.runner] JVM.GarbageCollector.nonHeapCommitted 545.19M 12:43:31,768 INFO [patch.runner] JVM.GarbageCollector.nonHeapInitial 539.43M 12:43:31,768 INFO [patch.runner] JVM.GarbageCollector.nonHeapMaximum 587.20M 12:43:31,769 INFO [patch.runner] JVM.GarbageCollector.classesCurrentLoaded 47.0K 12:43:31,769 INFO [patch.runner] JVM.GarbageCollector.classesTotalLoaded 47.0K 12:43:31,769 INFO [patch.runner] JVM.GarbageCollector.classesTotalUnloaded 0 12:43:31,770 INFO [patch.runner] JVM.GarbageCollector.ConcurrentMarkSweep.collectionCount 0 12:43:31,770 INFO [patch.runner] JVM.GarbageCollector.ConcurrentMarkSweep.collectionTime 0 12:43:31,770 INFO [patch.runner] JVM.GarbageCollector.ParNew.collectionCount 525 12:43:31,770 INFO [patch.runner] JVM.GarbageCollector.ParNew.collectionTime 5781 12:43:31,771 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.collectionUsageInitial 4261.41M 12:43:31,771 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.collectionUsageUsed 0 12:43:31,772 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.collectionUsageMax 4261.41M 12:43:31,772 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.collectionUsageCommitted 0 12:43:31,772 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.collectionUsageThreshold 0 12:43:31,773 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.collectionUsageThresholdCount 0 12:43:31,773 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.peakUsageInitial 4261.41M 12:43:31,774 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.peakUsageUsed 1217.30M 12:43:31,774 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.peakUsageMax 4261.41M 12:43:31,775 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.peakUsageCommitted 4261.41M 12:43:31,775 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.usageThreshold 0 12:43:31,775 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.usageThresholdCount 0 12:43:31,776 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.usageInitial 4261.41M 12:43:31,776 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.usageUsed 1217.30M 12:43:31,777 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.usageMax 4261.41M 12:43:31,777 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.usageCommitted 4261.41M 12:43:31,778 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.collectionUsageInitial 536.87M 12:43:31,778 INFO [patch.runner] JVM.MemoryPool.CMS Perm 
Gen.collectionUsageUsed 0 12:43:31,779 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.collectionUsageMax 536.87M 12:43:31,779 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.collectionUsageCommitted 0 12:43:31,779 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.collectionUsageThreshold 0 12:43:31,779 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.collectionUsageThresholdCount 0 12:43:31,780 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.peakUsageInitial 536.87M 12:43:31,780 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.peakUsageUsed 390.32M 12:43:31,781 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.peakUsageMax 536.87M 12:43:31,781 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.peakUsageCommitted 536.87M 12:43:31,782 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.usageThreshold 0 12:43:31,782 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.usageThresholdCount 0 12:43:31,782 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.usageInitial 536.87M 12:43:31,783 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.usageUsed 390.32M 12:43:31,783 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.usageMax 536.87M 12:43:31,784 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.usageCommitted 536.87M 12:43:31,784 INFO [patch.runner] JVM.MemoryPool.Code Cache.collectionUsageInitial 0 12:43:31,784 INFO [patch.runner] JVM.MemoryPool.Code Cache.collectionUsageUsed 0 12:43:31,785 INFO [patch.runner] JVM.MemoryPool.Code Cache.collectionUsageMax 0 12:43:31,785 INFO [patch.runner] JVM.MemoryPool.Code Cache.collectionUsageCommitted 0 12:43:31,785 INFO [patch.runner] JVM.MemoryPool.Code Cache.collectionUsageThreshold 0 12:43:31,786 INFO [patch.runner] JVM.MemoryPool.Code Cache.collectionUsageThresholdCount 0 12:43:31,786 INFO [patch.runner] JVM.MemoryPool.Code Cache.peakUsageInitial 2555.9K 12:43:31,787 INFO [patch.runner] JVM.MemoryPool.Code Cache.peakUsageUsed 8.11M 12:43:31,787 INFO [patch.runner] JVM.MemoryPool.Code Cache.peakUsageMax 50.33M 12:43:31,788 INFO [patch.runner] JVM.MemoryPool.Code Cache.peakUsageCommitted 8.32M 12:43:31,788 INFO [patch.runner] JVM.MemoryPool.Code Cache.usageThreshold 0 12:43:31,788 INFO [patch.runner] JVM.MemoryPool.Code Cache.usageThresholdCount 0 12:43:31,789 INFO [patch.runner] JVM.MemoryPool.Code Cache.usageInitial 2555.9K 12:43:31,789 INFO [patch.runner] JVM.MemoryPool.Code Cache.usageUsed 8.11M 12:43:31,790 INFO [patch.runner] JVM.MemoryPool.Code Cache.usageMax 50.33M 12:43:31,790 INFO [patch.runner] JVM.MemoryPool.Code Cache.usageCommitted 8.32M 12:43:31,791 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.collectionUsageInitial 33.16M 12:43:31,791 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.collectionUsageUsed 0 12:43:31,791 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.collectionUsageMax 33.16M 12:43:31,792 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.collectionUsageCommitted 33.16M 12:43:31,792 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.collectionUsageThreshold 0 12:43:31,792 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.collectionUsageThresholdCount 0 12:43:31,793 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.peakUsageInitial 33.16M 12:43:31,793 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.peakUsageUsed 33.16M 12:43:31,794 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.peakUsageMax 33.16M 12:43:31,794 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.peakUsageCommitted 33.16M 12:43:31,794 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.usageThreshold 0 12:43:31,794 INFO [patch.runner] JVM.MemoryPool.Par Eden 
Space.usageThresholdCount 0 12:43:31,795 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.usageInitial 33.16M 12:43:31,795 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.usageUsed 356.3K 12:43:31,795 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.usageMax 33.16M 12:43:31,796 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.usageCommitted 33.16M 12:43:31,796 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.collectionUsageInitial 196.6K 12:43:31,796 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.collectionUsageUsed 0 12:43:31,797 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.collectionUsageMax 196.6K 12:43:31,797 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.collectionUsageCommitted 196.6K 12:43:31,797 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.collectionUsageThreshold 0 12:43:31,797 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.collectionUsageThresholdCount 0 12:43:31,798 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.peakUsageInitial 196.6K 12:43:31,798 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.peakUsageUsed 1048 12:43:31,798 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.peakUsageMax 196.6K 12:43:31,798 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.peakUsageCommitted 196.6K 12:43:31,799 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.usageThreshold 0 12:43:31,799 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.usageThresholdCount 0 12:43:31,799 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.usageInitial 196.6K 12:43:31,799 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.usageUsed 0 12:43:31,799 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.usageMax 196.6K 12:43:31,800 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.usageCommitted 196.6K 12:43:31,800 INFO [patch.runner] Actions to be executed when applying this patch: 12:43:31,814 INFO [patch.runner] Patch information: Patch name: CDIV-patch Patch version: 1 Product name: VoLTE Product version: 2.7.0.7 Ticket: OCS-1234 Description: Fix fix for call forward unconditional Original components: - SbbPart[mmtel-cdiv/OpenCloud/2.7.0] Patched components: - SbbPart[mmtel-cdiv/OpenCloud/2.7.0] Patch tools version: 1.0.0-TRUNK.0-SNAPSHOT.r117-9bfb922 Patch tools commit: 9bfb922 Build info: - sentinel-volte=123abc Common conditions (checked before and after): - Assert that 'Service[volte.sentinel.sip/OpenCloud/current]' is present in the target Rhino - Assert that 'SbbPart[mmtel-cdiv/OpenCloud/2.7.0]' is installed, and is at the DEPLOYED install level. 
Preconditions: - Assert deployable unit 'DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar]' (or equivalent) is installed in the SLEE with the following components and SHA-1 checksums: - SbbPart[mmtel-cdiv/OpenCloud/2.7.0] -> f43407c155c4cc4ca5ee379428cc256261c30f0b DisassemblyActions: - Save service activation state for Service[volte.sentinel.sip/OpenCloud/current] - Save trace levels for Service[volte.sentinel.sip/OpenCloud/current] - Deactivate Service[volte.sentinel.sip/OpenCloud/current] - Wait until Service[volte.sentinel.sip/OpenCloud/current] is in the Inactive state - Remove bindings from Service[volte.sentinel.sip/OpenCloud/current]: - BindingDescriptorID[name=mmtel-cdiv-volte.sentinel.sip-bindings,vendor=opencloud,version=2.7.0] - Uninstall deployable unit DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar] or equivalent with these components, removing copied components as necessary: - SbbPart[mmtel-cdiv/OpenCloud/2.7.0] ReassemblyActions: - Install patched deployable unit mmtel-cdiv-2.7.0.7.jar with URL file:modules/opencloud/mmtel-cdiv-2.7.0.6-patched-CDIV-patch-1.jar - Add bindings to Service[volte.sentinel.sip/OpenCloud/current]: - BindingDescriptorID[name=mmtel-cdiv-volte.sentinel.sip-bindings,vendor=opencloud,version=2.7.0] - Deploy Service[volte.sentinel.sip/OpenCloud/current] - Restore service activation state for Service[volte.sentinel.sip/OpenCloud/current] - Restore trace levels for Service[volte.sentinel.sip/OpenCloud/current] Postconditions: - Assert deployable unit 'DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6-patched-CDIV-patch-1.jar]' (or equivalent) is installed in the SLEE with the following components and SHA-1 checksums: - SbbPart[mmtel-cdiv/OpenCloud/2.7.0] -> c5a6bbe4726448fd1868442948b09b5d5dd0d166 12:43:32,170 INFO [patch.rhinoactionapplier] Updated installed services: 12:43:32,170 INFO [patch.rhinoactionapplier] - ServiceID[name=volte.sentinel.ss7,vendor=OpenCloud,version=2.7.0.6] 12:43:32,170 INFO [patch.rhinoactionapplier] - ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=2.7.0.6] 12:43:32,170 INFO [patch.rhinoactionapplier] - ServiceID[name=IM-SSF,vendor=OpenCloud,version=1.4.6] 12:43:32,170 INFO [patch.rhinoactionapplier] - ServiceID[name=sentinel.registrar,vendor=OpenCloud,version=2.7.0.7] 12:43:32,173 INFO [patch.runner] Starting to apply patch. 12:43:32,173 INFO [patch.runner] Starting to run 3 pre-patch checks. 12:43:32,187 INFO [patch.runner] Running pre-patch check 1 of 3: Assert that 'Service[volte.sentinel.sip/OpenCloud/current]' is present in the target Rhino 12:43:32,215 INFO [patch.rhinoactionapplier] Found service ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] 12:43:32,215 INFO [patch.runner] Finished pre-patch check 1 in 28ms 12:43:32,215 INFO [patch.runner] Running pre-patch check 2 of 3: Assert that 'SbbPart[mmtel-cdiv/OpenCloud/2.7.0]' is installed, and is at the DEPLOYED install level. 
12:43:32,221 INFO [patch.rhinoactionapplier] Component SbbPartID[name=mmtel-cdiv,vendor=OpenCloud,version=2.7.0] is in state DEPLOYED 12:43:32,221 INFO [patch.runner] Finished pre-patch check 2 in 6ms 12:43:32,222 INFO [patch.runner] Running pre-patch check 3 of 3: Assert deployable unit 'DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar]' (or equivalent) is installed in the SLEE with the following components and SHA-1 checksums: - SbbPart[mmtel-cdiv/OpenCloud/2.7.0] -> f43407c155c4cc4ca5ee379428cc256261c30f0b 12:43:32,232 INFO [patch.rhinoactionapplier] Found installed deployable unit DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar] 12:43:32,235 INFO [patch.rhinoactionapplier] - Found matching checksum for component SbbPartID[name=mmtel-cdiv,vendor=OpenCloud,version=2.7.0] 12:43:32,236 INFO [patch.runner] Finished pre-patch check 3 in 14ms 12:43:32,237 INFO [patch.runner] Creating patch history profile 12:43:32,321 INFO [patch.runner] Installing deployable unit /home/user/amauriala/patches/test-all/cdiv/profilespec/slee-patch-history-profile-1.0.0-TRUNK.0-SNAPSHOT.r117-9bfb922.du.jar with URL file:slee-patch-history-profile-1.0.0-TRUNK.0-SNAPSHOT.r117-9bfb922.du.jar 12:43:35,188 INFO [patch.runner] Creating profile table PatchHistoryProfileTable 12:43:35,508 INFO [patch.runner] Creating profile 2018-04-04-12:43:32-CDIV-patch 12:43:35,999 INFO [patch.runner] Finished creating patch history profile. 12:43:36,000 INFO [patch.runner] Starting to run 6 disassembly steps. 12:43:36,002 INFO [patch.runner] Running disassembly step 1 of 6: Save service activation state for Service[volte.sentinel.sip/OpenCloud/current] 12:43:36,034 INFO [patch.rhinoactionapplier] Service ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] is in state Active 12:43:36,034 INFO [patch.runner] Finished disassembly step 1 in 32ms 12:43:36,036 INFO [patch.runner] Running disassembly step 2 of 6: Save trace levels for Service[volte.sentinel.sip/OpenCloud/current] 12:43:36,048 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:43:36,137 INFO [patch.rhinoactionapplier] 1 tracer for SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=mmtel-conf-sbb,vendor=OpenCloud,version=2.7.0-copy#1]]:root at level Finest 1 tracer for SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=registrar.subscribers.cassandra,vendor=OpenCloud,version=2.7.0-copy#2]]:root at level Info 1 tracer for SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=registrar.subscribers.hsscache,vendor=OpenCloud,version=2.7.0-copy#2]]:root at level Info 1 tracer for SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=scc-fetch-msrn-feature,vendor=OpenCloud,version=2.7.0-copy#1]]:root at level Info 1 tracer for SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=scc-send-request-to-anchor-sbb,vendor=OpenCloud,version=2.7.0-copy#1]]:root at level Finest 1 tracer for SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=scc-tads-data-lookup-sbb,vendor=OpenCloud,version=2.7.0-copy#1]]:root at level Finest 2 tracers for 
SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=sentinel-core-subscriber-data-lookup-feature,vendor=OpenCloud,version=2.7.0-copy#1]]:root at level Finest, sentinel.fsm at level Info 1 tracer for SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=sentinel-sip-sbb-fsm-feature-sbb,vendor=OpenCloud,version=2.7.0-copy#1]]:root at level Info 1 tracer for SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte-example-sbb-feature,vendor=OpenCloud,version=2.7.0-copy#1]]:root at level Info 1 tracer for SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte-hss-subscriber-data-lookup,vendor=OpenCloud,version=2.7.0-copy#1]]:root at level Finest 1 tracer for SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte-hss-subscriber-data-lookup-2,vendor=OpenCloud,version=2.7.0-copy#1]]:root at level Finest 1 tracer for SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte-imsid-lookup,vendor=OpenCloud,version=2.7.0-copy#1]]:root at level Finest 1 tracer for SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte-imsid-lookup-with-realm,vendor=OpenCloud,version=2.7.0-copy#1]]:root at level Info 2 tracers for SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte.sentinel.ro.ocs,vendor=OpenCloud,version=2.7.0]]:root at level Finest, sentinel.fsm at level Info 3 tracers for SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte.sentinel.sip,vendor=OpenCloud,version=2.7.0-copy#1]]:root at level Finest, sentinel.fsm at level Info, sentinel.fsm.DiameterMediation at level Finest 12:43:36,137 INFO [patch.runner] Finished disassembly step 2 in 101ms 12:43:36,137 INFO [patch.runner] Running disassembly step 3 of 6: Deactivate Service[volte.sentinel.sip/OpenCloud/current] 12:43:36,161 INFO [patch.rhinoactionapplier] Service ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] is in state Active 12:43:36,161 INFO [patch.rhinoactionapplier] Deactivating service ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] 12:43:36,298 INFO [patch.runner] Finished disassembly step 3 in 160ms 12:43:36,300 INFO [patch.runner] Running disassembly step 4 of 6: Wait until Service[volte.sentinel.sip/OpenCloud/current] is in the Inactive state 12:43:36,301 INFO [patch.rhinoactionapplier] Waiting forever for ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] to deactivate 12:43:37,029 INFO [patch.rhinoactionapplier] Confirmed that ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] is now inactive 12:43:37,030 INFO [patch.runner] Finished disassembly step 4 in 730ms 12:43:37,032 INFO [patch.runner] Running disassembly step 5 of 6: Remove bindings from Service[volte.sentinel.sip/OpenCloud/current]: - BindingDescriptorID[name=mmtel-cdiv-volte.sentinel.sip-bindings,vendor=opencloud,version=2.7.0] 12:43:58,984 INFO [patch.runner] Finished disassembly step 5 in 21952ms 12:43:58,986 INFO [patch.runner] Running disassembly step 6 of 6: Uninstall deployable unit DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar] or equivalent with these components, removing 
copied components as necessary: - SbbPart[mmtel-cdiv/OpenCloud/2.7.0] 12:43:58,994 INFO [patch.rhinoactionapplier] Removing copied components of deployable unit DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar] 12:43:59,018 INFO [patch.rhinoactionapplier] Uninstalling deployable unit DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6.jar] 12:43:59,537 INFO [patch.rhinoactionapplier] Updated installed services: 12:43:59,537 INFO [patch.rhinoactionapplier] - ServiceID[name=volte.sentinel.ss7,vendor=OpenCloud,version=2.7.0.6] 12:43:59,537 INFO [patch.rhinoactionapplier] - ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=2.7.0.6] 12:43:59,537 INFO [patch.rhinoactionapplier] - ServiceID[name=IM-SSF,vendor=OpenCloud,version=1.4.6] 12:43:59,537 INFO [patch.rhinoactionapplier] - ServiceID[name=sentinel.registrar,vendor=OpenCloud,version=2.7.0.7] 12:43:59,537 INFO [patch.runner] Finished disassembly step 6 in 551ms 12:43:59,537 INFO [patch.runner] Starting to run 5 reassembly steps. 12:43:59,538 INFO [patch.runner] Running reassembly step 1 of 5: Install patched deployable unit mmtel-cdiv-2.7.0.7.jar with URL file:modules/opencloud/mmtel-cdiv-2.7.0.6-patched-CDIV-patch-1.jar 12:43:59,538 INFO [patch.rhinoactionapplier] Installing deployable unit mmtel-cdiv-2.7.0.7.jar with URL file:modules/opencloud/mmtel-cdiv-2.7.0.6-patched-CDIV-patch-1.jar 12:44:00,230 INFO [patch.rhinoactionapplier] Updated installed services: 12:44:00,230 INFO [patch.rhinoactionapplier] - ServiceID[name=volte.sentinel.ss7,vendor=OpenCloud,version=2.7.0.6] 12:44:00,230 INFO [patch.rhinoactionapplier] - ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=2.7.0.6] 12:44:00,230 INFO [patch.rhinoactionapplier] - ServiceID[name=IM-SSF,vendor=OpenCloud,version=1.4.6] 12:44:00,230 INFO [patch.rhinoactionapplier] - ServiceID[name=sentinel.registrar,vendor=OpenCloud,version=2.7.0.7] 12:44:00,230 INFO [patch.runner] Finished reassembly step 1 in 692ms 12:44:00,232 INFO [patch.runner] Running reassembly step 2 of 5: Add bindings to Service[volte.sentinel.sip/OpenCloud/current]: - BindingDescriptorID[name=mmtel-cdiv-volte.sentinel.sip-bindings,vendor=opencloud,version=2.7.0] 12:44:15,270 INFO [patch.runner] Finished reassembly step 2 in 15038ms 12:44:15,271 INFO [patch.runner] Running reassembly step 3 of 5: Deploy Service[volte.sentinel.sip/OpenCloud/current] 12:45:17,963 INFO [patch.runner] Finished reassembly step 3 in 62691ms 12:45:17,965 INFO [patch.runner] Running reassembly step 4 of 5: Restore service activation state for Service[volte.sentinel.sip/OpenCloud/current] 12:45:18,013 INFO [patch.rhinoactionapplier] Current state for service is Inactive 12:45:18,013 INFO [patch.rhinoactionapplier] Target state for service is Active 12:45:18,013 INFO [patch.rhinoactionapplier] Activating service 12:46:06,340 INFO [patch.runner] Finished reassembly step 4 in 48375ms 12:46:06,347 INFO [patch.runner] Running reassembly step 5 of 5: Restore trace levels for Service[volte.sentinel.sip/OpenCloud/current] 12:46:06,350 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:06,441 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:06,529 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any 
installed service 12:46:06,604 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:06,691 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:06,779 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:06,865 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:06,952 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:07,041 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:07,125 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:07,210 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:07,294 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:07,370 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:07,457 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:07,563 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:07,650 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:07,741 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:07,827 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:07,915 INFO [patch.rhinoactionapplier] ServiceID ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] wasn't able to be mapped to any installed service 12:46:08,001 INFO [patch.rhinoactionapplier] Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=mmtel-conf-sbb,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:root to Finest (was Info) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=registrar.subscribers.cassandra,vendor=OpenCloud,version=2.7.0-copy#2]] Tracer:root to Info (was Info) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=registrar.subscribers.hsscache,vendor=OpenCloud,version=2.7.0-copy#2]] Tracer:root to Info (was Info) Restored 
NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=scc-fetch-msrn-feature,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:root to Info (was Info) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=scc-send-request-to-anchor-sbb,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:root to Finest (was Info) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=scc-tads-data-lookup-sbb,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:root to Finest (was Info) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=sentinel-core-subscriber-data-lookup-feature,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:root to Finest (was Info) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=sentinel-core-subscriber-data-lookup-feature,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:sentinel.fsm to Info (was Finest) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=sentinel-sip-sbb-fsm-feature-sbb,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:root to Info (was Info) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte-example-sbb-feature,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:root to Info (was Info) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte-hss-subscriber-data-lookup,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:root to Finest (was Info) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte-hss-subscriber-data-lookup-2,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:root to Finest (was Info) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte-imsid-lookup,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:root to Finest (was Info) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte-imsid-lookup-with-realm,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:root to Info (was Info) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte.sentinel.ro.ocs,vendor=OpenCloud,version=2.7.0]] Tracer:root to Finest (was Info) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte.sentinel.ro.ocs,vendor=OpenCloud,version=2.7.0]] Tracer:sentinel.fsm to Info (was Finest) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte.sentinel.sip,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:root to Finest (was Info) Restored NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte.sentinel.sip,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:sentinel.fsm to Info (was Finest) Restored 
NotificationSource:SbbNotification[service=ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current],sbb=SbbID[name=volte.sentinel.sip,vendor=OpenCloud,version=2.7.0-copy#1]] Tracer:sentinel.fsm.DiameterMediation to Finest (was Info) 12:46:08,001 INFO [patch.runner] Finished reassembly step 5 in 1654ms 12:46:08,008 INFO [patch.runner] Starting to run 3 post-patch checks. 12:46:08,009 INFO [patch.runner] Running post-patch check 1 of 3: Assert that 'Service[volte.sentinel.sip/OpenCloud/current]' is present in the target Rhino 12:46:08,038 INFO [patch.rhinoactionapplier] Found service ServiceID[name=volte.sentinel.sip,vendor=OpenCloud,version=current] 12:46:08,039 INFO [patch.runner] Finished post-patch check 1 in 29ms 12:46:08,039 INFO [patch.runner] Running post-patch check 2 of 3: Assert that 'SbbPart[mmtel-cdiv/OpenCloud/2.7.0]' is installed, and is at the DEPLOYED install level. 12:46:08,046 INFO [patch.rhinoactionapplier] Component SbbPartID[name=mmtel-cdiv,vendor=OpenCloud,version=2.7.0] is in state DEPLOYED 12:46:08,046 INFO [patch.runner] Finished post-patch check 2 in 6ms 12:46:08,047 INFO [patch.runner] Running post-patch check 3 of 3: Assert deployable unit 'DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6-patched-CDIV-patch-1.jar]' (or equivalent) is installed in the SLEE with the following components and SHA-1 checksums: - SbbPart[mmtel-cdiv/OpenCloud/2.7.0] -> c5a6bbe4726448fd1868442948b09b5d5dd0d166 12:46:08,059 INFO [patch.rhinoactionapplier] Found installed deployable unit DeployableUnitID[url=file:modules/opencloud/mmtel-cdiv-2.7.0.6-patched-CDIV-patch-1.jar] 12:46:08,063 INFO [patch.rhinoactionapplier] - Found matching checksum for component SbbPartID[name=mmtel-cdiv,vendor=OpenCloud,version=2.7.0] 12:46:08,066 INFO [patch.runner] Finished post-patch check 3 in 18ms 12:46:08,066 INFO [patch.runner] Setting patch as successfully applied in profile 2018-04-04-12:43:32-CDIV-patch 12:46:08,563 INFO [patch.runner] JVM.GarbageCollector.heapUsed 934.13M DELTA -283.52M 12:46:08,563 INFO [patch.runner] JVM.GarbageCollector.nonHeapUsed 470.54M DELTA 72.10M 12:46:08,564 INFO [patch.runner] JVM.GarbageCollector.nonHeapCommitted 553.71M DELTA 8.52M 12:46:08,565 INFO [patch.runner] JVM.GarbageCollector.classesCurrentLoaded 53.9K DELTA 6870 12:46:08,565 INFO [patch.runner] JVM.GarbageCollector.classesTotalLoaded 77.7K DELTA 30.6K 12:46:08,566 INFO [patch.runner] JVM.GarbageCollector.classesTotalUnloaded 23.8K DELTA 23.8K 12:46:08,566 INFO [patch.runner] JVM.GarbageCollector.ConcurrentMarkSweep.collectionCount 2 DELTA 2 12:46:08,566 INFO [patch.runner] JVM.GarbageCollector.ConcurrentMarkSweep.collectionTime 309 DELTA 309 12:46:08,567 INFO [patch.runner] JVM.GarbageCollector.ParNew.collectionCount 1559 DELTA 1034 12:46:08,567 INFO [patch.runner] JVM.GarbageCollector.ParNew.collectionTime 17.6K DELTA 11.8K 12:46:08,568 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.collectionUsageUsed 713.50M DELTA 713.50M 12:46:08,569 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.collectionUsageCommitted 4261.41M DELTA 4261.41M 12:46:08,569 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.peakUsageUsed 2064.03M DELTA 846.73M 12:46:08,570 INFO [patch.runner] JVM.MemoryPool.CMS Old Gen.usageUsed 911.92M DELTA -305.38M 12:46:08,570 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.collectionUsageUsed 423.60M DELTA 423.60M 12:46:08,571 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.collectionUsageCommitted 536.87M DELTA 536.87M 12:46:08,572 INFO [patch.runner] JVM.MemoryPool.CMS Perm 
Gen.peakUsageUsed 502.83M DELTA 112.51M 12:46:08,572 INFO [patch.runner] JVM.MemoryPool.CMS Perm Gen.usageUsed 454.06M DELTA 63.73M 12:46:08,573 INFO [patch.runner] JVM.MemoryPool.Code Cache.peakUsageUsed 16.54M DELTA 8.43M 12:46:08,574 INFO [patch.runner] JVM.MemoryPool.Code Cache.peakUsageCommitted 16.84M DELTA 8.52M 12:46:08,574 INFO [patch.runner] JVM.MemoryPool.Code Cache.usageUsed 16.48M DELTA 8.37M 12:46:08,575 INFO [patch.runner] JVM.MemoryPool.Code Cache.usageCommitted 16.84M DELTA 8.52M 12:46:08,576 INFO [patch.runner] JVM.MemoryPool.Par Eden Space.usageUsed 22.22M DELTA 21.87M 12:46:08,577 INFO [patch.runner] JVM.MemoryPool.Par Survivor Space.peakUsageUsed 65.6K DELTA 64.5K 12:46:08,577 INFO [patch.runner] Patch was applied successfully.
Applying patches to a production system
Production Patches
For live systems the patch is shipped with the orca tool, which handles the cluster migration workflow. Internally, orca calls the slee-patch-runner, so the actions are the same as explained in What the patch runner does. The difference is that the patch is applied against the first node of the new cluster created by orca.
A production patch is encapsulated in a zip file and should follow the naming convention:
<product name>-<product version>-patch-bundle-<date and time>-<english code name>.zip
Example: volte-2.7.0.4-patch-bundle-20180517-1403-cdiv-timers.zip
The patch bundle file is self-contained: it includes the artifacts and the tools necessary to apply the patch. For consistency, all contents are contained within a directory called "patch" within the zip file.
The contents of a patch bundle file are:
Item | Description |
---|---|
README | Contains the information about the patch: how to use it, what components it changes, and what problems it fixes |
orca | The orca tool |
helpers directory | Contains the set of scripts used by orca |
resources directory | Contains the properties used by orca |
licenses directory | License information for third party libraries used by the patch runner |
the patch zip file | The patch package |
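For illustration, a minimal sketch of unpacking a patch bundle on the management host; the bundle and inner patch file names are hypothetical:
# Unpack the patch bundle into a dedicated directory (file name is illustrative)
mkdir -p ~/patch-bundles && cd ~/patch-bundles
unzip ~/volte-2.7.0.4-patch-bundle-20180517-1403-cdiv-timers.zip
# List the bundle contents described in the table above
ls patch
# README  orca  helpers  resources  licenses  volte-2.7.0.4-patch-20180517-1403-cdiv-timers.zip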
Getting the available patches
Released patches are published in the Metaswitch Artifactory; the appropriate credentials are required to access them.
Applying the patch in production environments
Applying a patch to a production environment is straightforward. The normal steps are:
-
download the patch bundle to the management host
-
ensure the management host has ssh access to the cluster hosts
-
decompress the patch bundle
-
read the README for instructions and for details of the patch:
-
what the patch fixes
-
how to install and uninstall it
-
the actions the patch performs
-
verify that you have a Rhino running with the installed product, e.g. Sentinel VoLTE
-
run orca (see the sketch after this list):
*./orca --hosts host1,host2,host3 apply-patch <path to the patch zip file>*
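A minimal sketch of the sequence above, assuming the bundle has already been downloaded to the management host and that the cluster hosts are rhino-vm1, rhino-vm2 and rhino-vm3; all file names, hosts and paths are illustrative:
# 1. Decompress the patch bundle into a dedicated directory
mkdir -p ~/patch-bundles && cd ~/patch-bundles
unzip ~/volte-2.7.0.4-patch-bundle-20180517-1403-cdiv-timers.zip
cd patch

# 2. Confirm the management host has ssh access to each cluster host
ssh rhino-vm1 true && ssh rhino-vm2 true && ssh rhino-vm3 true

# 3. Read the patch details, then apply the patch across the cluster
less README
./orca --hosts rhino-vm1,rhino-vm2,rhino-vm3 apply-patch volte-2.7.0.4-patch-20180517-1403-cdiv-timers.zip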
In summary, orca will:
-
check the connection to the hosts
-
clone the cluster
-
migrate the first node
-
apply the patch against the first node by calling the slee-patch-runner, executing the same actions as explained here
-
migrate the other nodes of the cluster
Applying a patch requires administrative access to the Rhino running the product to be patched.
To keep the patches organized we recommend creating a specific directory in which to decompress the patches.
The patch has to stop all the SLEE services that the component is bound to. In a cluster environment this means the services will be deactivated cluster-wide.
To revert the patch, run:
*./orca --hosts host1,host2,host3 revert-patch <path to the patch zip file>*
To roll back the installation, run:
*./orca --hosts host1,host2,host3 rollback -f*
To delete an old installation, run:
*./orca --hosts host1,host2,host3 cleanup --cluster <cluster id>*
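For example, on the same three-host cluster (host names, file name and cluster id are illustrative):
# Revert the patch: re-installs the original components without migrating back
./orca --hosts host1,host2,host3 revert-patch volte-2.7.0.4-patch-20180517-1403-cdiv-timers.zip

# Or roll the installation back to the previous (downlevel) cluster
./orca --hosts host1,host2,host3 rollback -f

# Once satisfied with the patched cluster, delete the old installation
./orca --hosts host1,host2,host3 cleanup --cluster 109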
Troubleshooting
See Troubleshooting.
Running out of Perm Gen space
Installing a patch is actually a pretty complex operation as far as the Java virtual machine is concerned, and with patches that impact lots of the system, there is a chance that the operation will halt with an exception that describes the system as having run out of Perm Gen space. This is particularly likely if a number of patches are applied without restarting the Rhino between each one, as the loss of Perm Gen space can build up over time.
Unfortunately there is no recovery from this situation (beyond restoring the backup you took before starting the patch operation). To avoid this, it is advisable to stop and restart Rhino before starting the patch operation, which ensures that it has the maximum headroom of Perm Gen space available during the patching operation.
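If you do restart the nodes first, a hedged sketch of the per-node restart is shown below; the script names and the node directory layout vary between Rhino versions and installations, so treat them as placeholders and prefer your site's normal restart procedure:

# on each Rhino host, from the node directory of the live cluster
$ ./stop-rhino.sh        # or your usual controlled shutdown procedure
$ ./start-rhino.sh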
Example of patch execution on a 3 node cluster
Applying the patch with the command
./orca --hosts rhino-vm1,rhino-vm2,rhino-vm3 apply-patch volte-2.7.1.5-patch-20181017-0553-time-stamps-fix.zip --no-pause
Starting on host rhino-vm1 Checking for prepared Rhino clusters Done on rhino-vm1 Starting on host rhino-vm2 Checking for prepared Rhino clusters Done on rhino-vm2 Starting on host rhino-vm3 Checking for prepared Rhino clusters Done on rhino-vm3 Doing Prepare Copying the database Done on rhino-vm1 Starting on host rhino-vm2 Doing Prepare Done on rhino-vm2 Starting on host rhino-vm3 Finished running 236 post-patch checks. Patch was applied successfully. Done on rhino-vm1 Starting on host rhino-vm2 Doing Migrate Stopping node 102 in cluster 109 Waiting up to 120 seconds for calls to drain and SLEE to stop on node 102 Rhino has exited. Successfully shut down Rhino on node 102. Now waiting for sockets to close... Starting node 102 in cluster 110 Started Rhino. State is now: Running Waiting for Rhino to be ready for 75 seconds Started node 102 successfully Done on rhino-vm2 Starting on host rhino-vm3 Doing Migrate Stopping node 103 in cluster 109 Waiting up to 120 seconds for calls to drain and SLEE to stop on node 103 Rhino has exited. Successfully shut down Rhino on node 103. Now waiting for sockets to close... Starting node 103 in cluster 110 Started Rhino. State is now: Running Waiting for Rhino to be ready for 41 seconds Started node 103 successfully Done on rhino-vm3 Querying status on hosts [rhino-vm1, rhino-vm2, rhino-vm3] Global info: Symmetric activation state mode is currently enabled Status of host rhino-vm1 Clusters: - volte-2.7.1.5-cluster-109 - volte-2.7.1.5-cluster-110 - LIVE Live Node: Rhino node 101: found process with id 23202 Node 101 is Running Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e' Exports: - volte-2.7.1.5-cluster-109 License information: 1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018 Licensed Rhino version(s): 2.*, Development Java: found = live cluster java_home = /opt/java/jdk1.7.0_79 version = java version "1.7.0_79", Java(TM) SE Runtime Environment (build 1.7.0_79-b15), Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode) OS: python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609] version = Linux version 4.4.0-130-generic (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 Services: name=IM-SSF vendor=OpenCloud version=1.4.7 name=sentinel.registrar vendor=OpenCloud version=2.7.1.5-copy#1 name=sentinel.registrar vendor=OpenCloud version=2.7.1.5 name=sentinel.registrar vendor=OpenCloud version=2.7.1 name=sentinel.registrar vendor=OpenCloud version=current name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5-copy#1 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1 name=volte.sentinel.sip vendor=OpenCloud version=current name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5-copy#1 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1 name=volte.sentinel.ss7 vendor=OpenCloud version=current Status of host rhino-vm2 Clusters: - volte-2.7.1.5-cluster-109 - volte-2.7.1.5-cluster-110 - LIVE Live Node: Rhino node 102: found process with id 13391 Node 102 is Running Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e' Exports: License information: 1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018 Licensed Rhino version(s): 2.*, Development Java: found = live cluster java_home = /opt/java/jdk1.7.0_79 version = java version "1.7.0_79", 
Java(TM) SE Runtime Environment (build 1.7.0_79-b15), Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode) OS: python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609] version = Linux version 4.4.0-128-generic (buildd@lcy01-amd64-019) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018 Services: name=IM-SSF vendor=OpenCloud version=1.4.7 name=sentinel.registrar vendor=OpenCloud version=2.7.1.5-copy#1 name=sentinel.registrar vendor=OpenCloud version=2.7.1.5 name=sentinel.registrar vendor=OpenCloud version=2.7.1 name=sentinel.registrar vendor=OpenCloud version=current name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5-copy#1 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1 name=volte.sentinel.sip vendor=OpenCloud version=current name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5-copy#1 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1 name=volte.sentinel.ss7 vendor=OpenCloud version=current Status of host rhino-vm3 Clusters: - volte-2.7.1.5-cluster-109 - volte-2.7.1.5-cluster-110 - LIVE Live Node: Rhino node 103: found process with id 31276 Node 103 is Running Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e' Exports: License information: 1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018 Licensed Rhino version(s): 2.*, Development Java: found = live cluster java_home = /opt/java/jdk1.7.0_79 version = java version "1.7.0_79", Java(TM) SE Runtime Environment (build 1.7.0_79-b15), Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode) OS: python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609] version = Linux version 4.4.0-128-generic (buildd@lcy01-amd64-019) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018 Services: name=IM-SSF vendor=OpenCloud version=1.4.7 name=sentinel.registrar vendor=OpenCloud version=2.7.1.5-copy#1 name=sentinel.registrar vendor=OpenCloud version=2.7.1.5 name=sentinel.registrar vendor=OpenCloud version=2.7.1 name=sentinel.registrar vendor=OpenCloud version=current name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5-copy#1 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1 name=volte.sentinel.sip vendor=OpenCloud version=current name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5-copy#1 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1 name=volte.sentinel.ss7 vendor=OpenCloud version=current Available actions: - prepare - prepare-new-rhino - cleanup --clusters - cleanup --exports - rollback
Inspecting the applied patches
Stored information
When a patch is applied, a history of the actions and of the components that were changed is stored inside Rhino in a profile table.
The profile table name is PatchHistoryProfileTable and it contains the following information:
Item | Description |
---|---|
PatchAppliedDate | The date and time the patch was applied |
PatchName | The patch name as provided when generating the patch |
PatchVersion | The patch version as provided when generating the patch |
ProductName | The product name the patch was created for |
ProductVersion | The product version the patch was created for |
PatchTicket | The reference ticket as provided when generating the patch |
Description | The patch description as provided when generating the patch |
ComponentInfo | The components the patch changed |
BuildInfo | The project build information as provided when generating the patch |
IsReversePatch | Whether the patch-runner was used to uninstall (revert) the patch |
IsForced | Whether the patch-runner was run with the force option |
IsAppliedSuccessfully | Whether the patch was applied or reverted successfully |
ExtensionData | Generic name/value pair field for extra information |
PatchToolsVersion | The patch tool version used to generate the patch |
PatchToolsCommit | The last source code commit from the tool used to create the patch |
If the table is not present the patch runner will create it. That will happen for systems that have not yet been patched.
Checking the applied patches
You can see the applied patches in REM by inspecting the contents of the profile table PatchHistoryProfileTable.
Another way is to use rhino-console to check the patch history, by listing the contents of the PatchHistoryProfileTable profiles:
Listing the patch history profiles
$rhino-console listprofiles PatchHistoryProfileTable
2018-04-04-12:43:32-CDIV-patch
2018-04-04-13:07:09-CDIV-patch
Checking the contents of both profiles
The first profile shows the patch history when applying the CDIV-patch and the second shows when uninstalling the same patch.
$rhino-console listprofileattributes PatchHistoryProfileTable 2018-04-04-12:43:32-CDIV-patch
BuildInfo={"sentinel-volte":"123abc"}
ComponentInfo={"originalComponents":[{"name":"mmtel-cdiv","vendor":"OpenCloud","version":"2.7.0","componentType":"SbbPartID","hash":805159952}],"patchedComponents":[{"name":"mmtel-cdiv","vendor":"OpenCloud","version":"2.7.0","componentType":"SbbPartID","hash":805159952}]}
Description=Fix cdiv for call forward unconditional
ExtensionData={null}
IsAppliedSuccessfully=true
IsForced=false
IsReversePatch=false
PatchAppliedDate=Wed Apr 04 12:43:32 NZST 2018
PatchName=CDIV-patch
PatchTicket=OCS-1234
PatchToolsCommit=9bfb922
PatchToolsVersion=1.0.0-TRUNK.0-SNAPSHOT.r117-9bfb922
PatchVersion=1
ProductName=VoLTE
ProductVersion=2.7.0.7
$rhino-console listprofileattributes PatchHistoryProfileTable 2018-04-04-13:07:09-CDIV-patch
BuildInfo={"sentinel-volte":"123abc"}
ComponentInfo={"originalComponents":[{"name":"mmtel-cdiv","vendor":"OpenCloud","version":"2.7.0","componentType":"SbbPartID","hash":805159952}],"patchedComponents":[{"name":"mmtel-cdiv","vendor":"OpenCloud","version":"2.7.0","componentType":"SbbPartID","hash":805159952}]}
Description=Fix cdiv for call forward unconditional
ExtensionData={null}
IsAppliedSuccessfully=true
IsForced=false
IsReversePatch=true
PatchAppliedDate=Wed Apr 04 13:07:09 NZST 2018
PatchName=CDIV-patch
PatchTicket=OCS-1234
PatchToolsCommit=9bfb922
PatchToolsVersion=1.0.0-TRUNK.0-SNAPSHOT.r117-9bfb922
PatchVersion=1
ProductName=VoLTE
ProductVersion=2.7.0.7
Minor Upgrade
This document covers the procedure and the tools for applying minor upgrade across the range of Sentinel products.
General information
A minor upgrade of a Sentinel product is when a customer wants a new version of the product that differs only in the fourth component of the version number: for example, VoLTE 2.7.0.4 to 2.7.0.5, or 2.8.0.0 to 2.8.0.2.
As a policy there is full backwards compatibility between minor releases. There might be new features, new configuration and bug fixes, but no architecture changes.
The products supported by minor upgrade are:
-
Sentinel VoLTE
-
Sentinel IP-SM-GW
-
Sentinel GAA
-
Rhino Element Manager (REM)
-
Rhino
For details on how to apply a minor upgrade to Sentinel products, see Minor Upgrade Sentinel Product.
For details on how to upgrade REM, see Upgrade Rhino Element Manager.
Requirements for a minor upgrade
To apply a minor upgrade for Sentinel successfully, verify the checklist below:
check connectivity
Check the connectivity between the management node and the Rhino nodes. From the management node you must be able to open an SSH connection to each Rhino node without specifying a password.
check the upgrade bundle
The upgrade bundle for a Sentinel product consists of:
Item | Description |
---|---|
README | Contains information about the minor upgrade: how to use it, what components it changes, and what problems it fixes |
orca | The orca tool |
helpers directory | Contains the set of scripts used by orca |
core directory | Contains the set of scripts used by orca |
workflows directory | Contains the set of scripts used by orca |
resources directory | Contains the properties used by orca |
licenses directory | License information for third party libraries used by the patch runner |
packages directory | Contains the package files and the packages.cfg |
The package files directory contains:
- the packages.cfg file
- the SDK in offline mode
- a rhino install (optional)
- a rhino config JSON file (optional)
- a license (optional)
- a new JDK (java) (optional)
- the post install package or custom package (optional)
- the post configure package (optional)
The offline SDK must contain the product SDK with an offline repository and the product installer.
See Creating a minor upgrade bundle for more information.
check the path structure
For Sentinel products, the requirements are that the Rhino nodes have the expected path structure:
enable symmetric activation state mode
Symmetric activation state mode must be enabled prior to starting the upgrade. To check that it is enabled, use orca's status command. At the top, under the Global info header, there will be the text Symmetric activation state mode is currently enabled if it is enabled, or Nodes with per-node activation state: <node list> if it is disabled. Alternatively you can use the rhino-console command getsymmetricactivationstatemode.
To enable it, follow the instructions in the Rhino documentation.
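For example, either of the following checks can be used; the status invocation assumes the same --hosts pattern as the other orca commands on this page, and the rhino-console path is illustrative:

$ ./orca --hosts host1,host2,host3 status | grep -i "activation state"
Symmetric activation state mode is currently enabled

# or directly against the live Rhino on one of the hosts
$ /home/rhino/rhino/client/bin/rhino-console getsymmetricactivationstatemode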
Creating a minor upgrade bundle
Intended audience
This page is aimed at developers who wish to publish an upgrade package for a new release of a Sentinel product.
Creating the bundle
Creating a minor upgrade bundle requires:
- the product SDK
- the orca-bundler tool
The orca-bundler will:

- put the product SDK in offline mode with an offline repository
- zip the product SDK
- combine the orca tools with the product SDK to create the minor upgrade bundle
To create the minor upgrade bundle:
- get the latest version of orca-bundler.zip from operational-tools
- decompress it
- run generate-orca-bundle minor-upgrade to bundle the offline SDK with orca
$ unzip orca-bundler.zip
$ ./generate-orca-bundle minor-upgrade --release-sdk <product sdk>.zip --out <the upgrade bundle>.zip
The upgrade package will contain:
Item | Description |
---|---|
README | Contains information about the minor upgrade: how to use it, what components it changes, and what problems it fixes |
orca | The orca tool |
helpers directory | Contains the set of scripts used by orca |
core directory | Contains the set of scripts used by orca |
workflows directory | Contains the set of scripts used by orca |
resources directory | Contains the properties used by orca |
licenses directory | License information for third party libraries used by the patch runner |
packages directory | Contains the package files and the packages.cfg |
The package files directory contains:
- the packages.cfg file
- the SDK in offline mode
- a rhino install (optional)
- a rhino config JSON file (optional)
- a license (optional)
- a new JDK (java) (optional)
- the post install package or custom package (optional)
- the post configure package (optional)
The orca-bundler will create the zip file specified in <the upgrade bundle>.zip with the offline product SDK, the orca tool, the optional packages and a README file explaining how to install the minor upgrade. It is important to review this file and add or change any information before handing the upgrade to the customer.
Adding a new Rhino package to the upgrade bundle
To also update Rhino during a minor update, include a Rhino install tar in the bundle. To update any of the configuration properties of the new Rhino cluster, include a Rhino config json file in the bundle.
$ unzip orca-bundler.zip
$ ./generate-orca-bundle minor-upgrade --release-sdk <product sdk>.zip \
    --out <the upgrade bundle>.zip \
    --rhino-package <new Rhino package> \
    --rhino-config-json <Rhino config properties to apply>
The Rhino package
The Rhino package is the tar file used to distribute Rhino. It contains the Rhino binaries, tools and the installation script.
rhino-install.tar
The Rhino config json file
This file contains configuration that will be applied to Rhino during its installation. The file is formatted as JSON and includes the Rhino config file destination, the properties and the values. Rhino has several configuration attributes in several files with different formats. Currently orca applies the changes to the config_variables and rhino-config.xml files.
The json file format is:
[ { "filename": "file name with path relative to Rhino node installation", "filetype:" "properties", "settings:" [{settings 1},{settings 2},...,{settings n}] }, { "filename": "file name with path relative to Rhino node installation", "filetype:" "xml", "settings:" [{settings 1},{settings 2},...,{settings n}] }, .... { "filename": "file name with path relative to Rhino node installation", "filetype:" "sh", "settings:" [{settings 1},{settings 2},...,{settings n}] } ]
The filename path is relative to the Rhino node installation, e.g. for rhino-config.xml it should be config/rhino-config.xml.
The filetype attribute accepts the following values:

- properties to deal with Rhino key=value config properties
- xml to deal with Rhino xml config files, e.g. rhino-config.xml
- sh to deal with the read-config-variables file
The parameters in the settings attribute should match the expected format defined by the filetype:

- a property if filetype is properties or sh
- an XPath if filetype is xml
Example from sentinel-volte-upgrade-rhino-config.json
[ { "filename": "config/config_variables", "filetype": "properties", "settings": [ { "name": "HEAP_SIZE", "type": "minimum", "units": "m", "value": 6144 } ] }, { "filename": "config/rhino-config.xml", "filetype": "xml", "settings": [ { "xpath": ".//memdb[jndi-name='ManagementDatabase']/committed-size", "type": "minimum", "units": "M", "value": 400 } ] } ]
Currently the only supported values for type are value and minimum.
For the example above, orca will change the value of the HEAP_SIZE property to 6144m if the current value is lower than that. It will also change the committed-size for the memdb element whose jndi-name is 'ManagementDatabase'.
Concretely it will change
<memdb> <jndi-name>ManagementDatabase</jndi-name> <committed-size>128M</committed-size> </memdb>
to
<memdb> <jndi-name>ManagementDatabase</jndi-name> <committed-size>400M</committed-size> </memdb>
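After the upgrade you can confirm the setting was applied by inspecting the file on the node; a quick check might look like this (the node path is illustrative):

$ grep -A 2 "<jndi-name>ManagementDatabase</jndi-name>" ~/rhino/node-101/config/rhino-config.xml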
Use the flag --skip-rhino-version-check in the bundler while creating test upgrade bundles against non-released versions of Rhino.
Adding a new License to the upgrade bundle
A new license can be installed during a minor update by including it in the bundle.
$ unzip orca-bundler.zip
$ ./generate-orca-bundle minor-upgrade --release-sdk <product sdk>.zip \
    --out <the upgrade bundle>.zip \
    --rhino-package <new Rhino package> \
    --rhino-config-json <Rhino config properties to apply> \
    --license <license file>
Adding a new JDK to the upgrade bundle
To also update the JDK (Java) used to run rhino, include a new JDK package in the bundle.
$ unzip orca-bundler.zip
$ ./generate-orca-bundle minor-upgrade --release-sdk <product sdk>.zip \
    --out <the upgrade bundle>.zip \
    --rhino-package <new Rhino package> \
    --rhino-config-json <Rhino config properties to apply> \
    --license <new-license.license> \
    --java-package <jdk-package>
Adding custom packages to the upgrade bundle
In case the minor upgrade requires a custom package, the best solution is to include it with the upgrade bundle. Use the options --post-install-package and/or --post-configure-package to add a custom module and a post-configuration module, respectively.
$ unzip orca-bundler.zip
$ ./generate-orca-bundle minor-upgrade --release-sdk <product sdk>.zip \
    --out <the upgrade bundle>.zip \
    --post-install-package <custom package> \
    --post-configure-package <configure package>
You can also generate the upgrade package in two stages by taking the SDK offline first and then using it to create the upgrade package.
$ unzip orca-bundler.zip
$ ./generate-orca-bundle prepare-sdk --release-sdk <product sdk>.zip \
    --out <the offline SDK>.zip
$ ./generate-orca-bundle minor-upgrade --offline-sdk <the offline SDK>.zip \
    --out <the upgrade bundle>.zip \
    --post-install-package <custom package> \
    --post-configure-package <configure package>
Post-install and post-configure packages
During the upgrade procedure the custom packages will be installed if they are present in the packages.cfg.
"Post-install" and "post-configuration" customizations
Note that the custom packages must conform to the following specification: a self-contained zip file containing an executable named install (with no extension) in its top-level directory.
It is recommended that the package developer edits the README in the upgrade bundle to include the extra details on the custom package(s).
You can also manually add the post-install and post-configure packages to the upgrade bundle, but it is not recommended. To do so, add the optional packages to the upgrade bundle zip file after running the above commands. The optional packages have to be in the packages directory, and the packages.cfg needs to list the package names.
Example of the contents of a packages.cfg:
[packages]
sdk=sentinel-ipsmgw-2.6.0.17-offline-sdk.zip
rhino_package=rhino-install.tar
license=new.license
rhino_config_json=sentinel-volte-upgrade-rhino-config.json
post_install_package=custom-package-2.6.0.17.zip
post_configure_package=after-configuration-package-2.6.0.17.zip

[versions]
sdk=2.6.0.17
rhino=2.5.0.5
Minor Upgrade Sentinel Product
Minor upgrade bundle
A minor upgrade bundle is a self-contained package with:
-
the orca bundle
-
the product SDK with offline repositories
-
the optional packages
-
the README
A minor upgrade bundle has the name
<product name>-<product version>-minor-upgrade.zip
Example: volte-2.7.0.4-minor-upgrade.zip
Item | Description |
---|---|
README | Contains information about the minor upgrade: how to use it, what components it changes, and what problems it fixes |
orca | The orca tool |
helpers directory | Contains the set of scripts used by orca |
core directory | Contains the set of scripts used by orca |
workflows directory | Contains the set of scripts used by orca |
resources directory | Contains the properties used by orca |
licenses directory | License information for third party libraries used by the patch runner |
packages directory | Contains the package files and the packages.cfg |
The package files directory contains:
- the packages.cfg file
- the SDK in offline mode
- a rhino install (optional)
- a rhino config JSON file (optional)
- a license (optional)
- a new JDK (java) (optional)
- the post install package or custom package (optional)
- the post configure package (optional)
The packages.cfg file contains the name of the packages to apply. For example:
[packages]
sdk=sentinel-ipsmgw-2.6.0.17-offline-sdk.zip
rhino_package=rhino-install.tar
license=new.license
rhino_config_json=sentinel-volte-upgrade-rhino-config.json
post_install_package=custom-package-2.6.0.17.zip
post_configure_package=after-configuration-package-2.6.0.17.zip

[versions]
sdk=2.6.0.17
rhino=2.5.0.5
Requirements
A minor upgrade uses the orca tool and its requirements apply here. Applying a minor upgrade requires administrative access to the Rhino running the product, so be sure the credentials used in ssh trusted connections are valid.
By default orca assumes the $HOME directory of the remote hosts as the base directory. Use the option --remote-home-dir or -r if the path is different. The upgrade requires at least 1.5GB of free disk space on the first node:
-
0.5GB for the upgrade bundle
-
0.5GB for the installer to run
-
0.5GB for logs
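A quick way to confirm the space is available is to check the base directory on the first host before starting, for example (the host name and path are illustrative):

$ ssh rhino@host1 df -h /home/rhino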
The hosts order should always be from the host with the highest node ID to the host with the lowest node ID. If some nodes are not cleanly shut down, the nodes with the lowest node ID will become primary for the cluster, to avoid split-brain situations. See Rhino Cluster Membership.
A valid install.properties
The install.properties file is necessary to install the correct components from the product. Ideally this install.properties should be the same one used for the current installation. Each customer installation has different options chosen during installation, which define which components are installed. One example is the choice of CAP charging in VoLTE, which includes the IMSSF service.
The important properties to check in the install.properties are:
- rhinoclientdir to point to the remote path of the rhino client
- doinstall set to true
- deployrhino set to false
- any property that does not have a default value needs to be set. See the product documentation for the version being installed for the full list of properties.
The properties above are also checked by orca. In case they are not properly set, orca will raise an error.
The other properties will be temporarily set to default values by the product SDK installer, which populates the profile tables. However, those properties will then be restored to the existing values from the product version that was upgraded from.
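As an illustration only (every value below is site-specific and must come from your own installation; only the three property names above are taken from this page), an install.properties fragment might look like:

rhinoclientdir=/home/rhino/rhino/client
doinstall=true
deployrhino=false
# ...plus every product property that has no default value, as listed in the product documentation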
Consider this example for a VoLTE installation:
The current installation was done using the property home.domain set to mydomain.com. When doing the minor upgrade the user specifies that property in the install.properties as otherdomain.com. The product SDK installer will install the new version of the software and set the profiles that require information about the home domain to otherdomain.com. After installing the new version, orca proceeds to recover the previous customer configuration present in the rhino-export. This process will remove all tables from the new installation and will create the ones from the old installation. At the end the value mydomain.com will be restored.
A custom package
If the current installation has custom components on top of the product, a zip package containing an executable install is required. See Applying upgrade for customers with custom installations.
Applying the minor upgrade
In order to get the concept across, this explanation starts with a simplified example that applies an upgrade to all hosts at once. You are strongly discouraged from doing this - a much better process is to do an upgrade in multiple stages: first upgrade just a single host, then perform testing to determine that the upgrade is working as expected, before finally continuing to roll out the upgrade to the rest of your Rhino hosts.
Applying a minor upgrade to a production environment is simple. The normal steps are:
-
download the minor upgrade bundle to the management host
-
ensure the management host has ssh access to the Rhino cluster hosts
-
decompress the upgrade bundle and cd to the upgrade bundle
-
read the README for instructions and for details of the upgrade
-
prepare the install.properties file for use by the product installer, use the one used to do the current installation or create a new one with the expected values
-
verify that you have a Rhino running with the installed product, e.g. Sentinel VoLTE 2.7.0.9
using just the product SDK with or without customisations
- use the command ./orca --hosts host1,host2,host3 minor-upgrade <the packages path> <path to the install.properties> --no-pause
for example:
./orca --hosts host1,host2,host3 minor-upgrade packages install.properties --no-pause
orca needs to be called from inside the upgrade bundle directory, otherwise it will fail due to the use of relative paths for some actions.
using just the product SDK plus a custom package
The command is the same as above, but orca will retrieve the custom package automatically from the packages directory according to the entries in the packages.cfg. You can insert the custom package manually and add this line to the packages.cfg under the [packages] section:
post_install_package=<package file name>
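For instance, adding a custom package by hand might look like the following sketch (the package file name is illustrative):

$ cp custom-package-2.7.0.9.zip packages/
$ vi packages/packages.cfg    # add post_install_package=custom-package-2.7.0.9.zip under the [packages] section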
In summary, orca will:
-
check the connection to the hosts
-
clone the cluster on all the hosts
-
backup the installation
-
retrieve the necessary configuration
-
migrate the node on the first host in the given list to the new cluster
-
install the new version on that first host
-
copy the configuration from the old installation to the new installation
-
clean up the temporary files
-
optionally pause to allow testing of the upgraded node
-
migrate the other nodes of the cluster
To rollback the installation
./orca --hosts host1,host2,host3 rollback -f
To delete an old installation
./orca --hosts host1,host2,host3 cleanup --cluster <cluster id>
The recommended multi-stage upgrade process
Applying an upgrade in multiple stages is not that much more complex than the simplified example already given.
Instead of using the --no-pause option, specify a --pause option, thus:
./orca --hosts host1,host2,host3 minor-upgrade packages install.properties --pause
This will cause all of the hosts listed (in this case host1, host2, and host3) to be prepared for the upgrade, but only the first host in the list (here host1) will actually have the upgrade applied to it.
The process will then pause, to allow the upgraded installation to be tested. Note that only the first host (host1) has the new code on it, so you will need to arrange your testing appropriately to route traffic to this specific host.
Once you are satisfied that the upgrade is working as required, you can run the exact same command that you previously did, only with the --pause changed to be --continue. In particular, you should not change the list of hosts given to these two commands, since the continuation process needs to access the same first host, and same set of prepared hosts, as before.
In our example, the continuation command is therefore
./orca --hosts host1,host2,host3 minor-upgrade packages install.properties --continue
This will then migrate the other nodes of the cluster, so they are all using the upgraded product.
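Putting the whole multi-stage flow together, with the same illustrative host names used above:

# stage 1: prepare all hosts, upgrade only host1, then pause
$ ./orca --hosts host1,host2,host3 minor-upgrade packages install.properties --pause

# stage 2: route test traffic to host1 and verify the upgraded installation

# stage 3: once satisfied, run the same command with --continue to migrate the remaining hosts
$ ./orca --hosts host1,host2,host3 minor-upgrade packages install.properties --continue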
Applying upgrade for customers with custom installations
Currently orca supports two ways of applying a minor upgrade for customers: a self-contained customised SDK, or the product SDK plus a custom package.
self contained customised SDK
This is equivalent to the product SDK, but including the customisations. It is a self-contained package that uses the SDK installer. The SDK installer will install all the necessary components, including custom components and custom profile tables. The installation will be equivalent to the previous one but with new-version components.
product SDK plus custom package
This type of minor upgrade is done by installing customizations on top of the product SDK. The custom package needs to contain an executable called install. Specify the custom package name in the packages.cfg and put the custom package in the packages directory, e.g.
./orca --hosts host1,host2 minor-upgrade packages install.properties --pause
If there is a custom package specified in the packages.cfg, orca will install the product SDK first, then call this install script, which will perform all necessary operations to install the customised components into Rhino. After this script finishes, orca will restore the configuration from the previous installation.
Handling Feature Scripts
A minor upgrade release might include Feature Script fixes. By default the minor upgrade process keeps the Feature Scripts from the currently installed version (the old version). If the new version of the Feature Scripts are the ones that should be present in the system after the upgrade, you need to specify the flag --skip-feature-scripts while running orca.
./orca --hosts host1,host2,host3 minor-upgrade packages install.properties --pause --skip-feature-scripts
Use this option carefully. If there are custom changes in the Feature Scripts they will be lost and the system will not work as expected. Make sure to manually reapply all the custom changes if you are using this option.
Troubleshooting
See Troubleshooting.
Example of minor upgrade execution on a 3 node cluster
This example shows a minor upgrade from Sentinel VoLTE 2.7.1.0 to Sentinel VoLTE 2.7.1.5.
Applying the minor upgrade with the command
rhino@rhino-rem:~/install/sentinel-volte-sdk-2.7.1.5-upgrade$ ./orca --hosts rhino-vm1,rhino-vm2,rhino-vm3 minor-upgrade packages install.properties --no-pause
Check for prepared clusters
Starting on host rhino-vm1
Checking for prepared Rhino clusters
Done on rhino-vm1
Starting on host rhino-vm2
Checking for prepared Rhino clusters
Done on rhino-vm2
Starting on host rhino-vm3
Checking for prepared Rhino clusters
Done on rhino-vm3
Check the sdk install.properties script
Validating install.properties
Updating rhinoclientdir property in install.properties to /home/rhino/rhino/client
Prepare a new cluster and export the configuration
Applying minor upgrade package in directory 'packages' to hosts rhino-vm1,rhino-vm2,rhino-vm3
Deleting previous export /home/rhino/export/volte-2.7.1.0-cluster-108
Exporting configuration to directory /home/rhino/export/volte-2.7.1.0-cluster-108
Done on rhino-vm1
Exporting profile table and RA configuration
Done on rhino-vm1
Doing Prepare
Initializing the database
Done on rhino-vm1
Starting on host rhino-vm2
Doing Prepare
Done on rhino-vm2
Starting on host rhino-vm3
Doing Prepare
Done on rhino-vm3
Migrate the first node in the hosts list
Doing Migrate
Stopping node 101 in cluster 108
Waiting up to 120 seconds for calls to drain and SLEE to stop on node 101
Rhino has exited.
Successfully shut down Rhino on node 101. Now waiting for sockets to close...
Starting node 101 in cluster 109
Started Rhino. State is now: Waiting to go primary
Attempting to make cluster primary for 20 seconds
Successfully made cluster primary
Waiting for Rhino to be ready for 230 seconds
Started node 101 successfully
Done on rhino-vm1
Import basic product configuration
Importing pre-install configuration
Done on rhino-vm1
Install the new product version
Installing upgrade volte-2.7.1.5-offline-sdk.zip - this will take a while
Unpacking the upgrade package
Copying the install.properties
Installing the new product. This will take a while.
You can check the progress in /home/rhino/install/sdk-2.7.1.5/build/target/log/installer.log on the remote host
Done on rhino-vm1
Restore the customer configuration
Importing profile table and RA configuration
Done on rhino-vm1
Importing profiles from export directory volte-2.7.1.0-cluster-108
Done on rhino-vm1
Transforming the service object pools configuration
Getting the services
Selecting files
Applying changes
Done on rhino-vm1
Importing post-install configuration
Done on rhino-vm1
Finish the upgrade and migrate the other nodes
Starting on host rhino-vm2
Doing Migrate
Stopping node 102 in cluster 108
Waiting up to 120 seconds for calls to drain and SLEE to stop on node 102
Rhino has exited.
Successfully shut down Rhino on node 102. Now waiting for sockets to close...
Starting node 102 in cluster 109
Started Rhino. State is now: Running
Waiting for Rhino to be ready for 75 seconds
Started node 102 successfully
Done on rhino-vm2
Starting on host rhino-vm3
Doing Migrate
Stopping node 103 in cluster 108
Waiting up to 120 seconds for calls to drain and SLEE to stop on node 103
Rhino has exited.
Successfully shut down Rhino on node 103. Now waiting for sockets to close...
Starting node 103 in cluster 109
Started Rhino. State is now: Running
Waiting for Rhino to be ready for 40 seconds
Started node 103 successfully
Done on rhino-vm3
Show the cluster status
Status of host rhino-vm1 Clusters: - volte-2.7.1.0-cluster-108 - volte-2.7.1.5-cluster-109 - LIVE Live Node: Rhino node 101: found process with id 18550 Node 101 is Running Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e' Exports: - volte-2.7.1.0-cluster-108 - volte-2.7.1.0-cluster-108-transformed-for-2.8.0.3 - volte-2.8.0.3-cluster-109 License information: 1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018 Licensed Rhino version(s): 2.*, Development Java: found = live cluster java_home = /opt/java/jdk1.7.0_79 version = java version "1.7.0_79", Java(TM) SE Runtime Environment (build 1.7.0_79-b15), Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode) OS: python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609] version = Linux version 4.4.0-130-generic (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 Services: name=IM-SSF vendor=OpenCloud version=1.4.7 name=sentinel.registrar vendor=OpenCloud version=2.7.1.5-copy#1 name=sentinel.registrar vendor=OpenCloud version=2.7.1.5 name=sentinel.registrar vendor=OpenCloud version=2.7.1 name=sentinel.registrar vendor=OpenCloud version=current name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5-copy#1 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1 name=volte.sentinel.sip vendor=OpenCloud version=current name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5-copy#1 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1 name=volte.sentinel.ss7 vendor=OpenCloud version=current Status of host rhino-vm2 Clusters: - volte-2.7.1.0-cluster-108 - volte-2.7.1.5-cluster-109 - LIVE Live Node: Rhino node 102: found process with id 20976 Node 102 is Running Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e' Exports: License information: 1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018 Licensed Rhino version(s): 2.*, Development Java: found = live cluster java_home = /opt/java/jdk1.7.0_79 version = java version "1.7.0_79", Java(TM) SE Runtime Environment (build 1.7.0_79-b15), Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode) OS: python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609] version = Linux version 4.4.0-128-generic (buildd@lcy01-amd64-019) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018 Services: name=IM-SSF vendor=OpenCloud version=1.4.7 name=sentinel.registrar vendor=OpenCloud version=2.7.1.5-copy#1 name=sentinel.registrar vendor=OpenCloud version=2.7.1.5 name=sentinel.registrar vendor=OpenCloud version=2.7.1 name=sentinel.registrar vendor=OpenCloud version=current name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5-copy#1 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1 name=volte.sentinel.sip vendor=OpenCloud version=current name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5-copy#1 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1 name=volte.sentinel.ss7 vendor=OpenCloud version=current Status of host rhino-vm3 Clusters: - volte-2.7.1.0-cluster-108 - volte-2.7.1.5-cluster-109 - LIVE Live Node: Rhino node 103: found process with id 1493 Node 103 is Running Rhino version='2.6', release='1.2', 
build='201807050952', revision='c5bfb8e' Exports: License information: 1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018 Licensed Rhino version(s): 2.*, Development Java: found = live cluster java_home = /opt/java/jdk1.7.0_79 version = java version "1.7.0_79", Java(TM) SE Runtime Environment (build 1.7.0_79-b15), Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode) OS: python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609] version = Linux version 4.4.0-128-generic (buildd@lcy01-amd64-019) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018 Services: name=IM-SSF vendor=OpenCloud version=1.4.7 name=sentinel.registrar vendor=OpenCloud version=2.7.1.5-copy#1 name=sentinel.registrar vendor=OpenCloud version=2.7.1.5 name=sentinel.registrar vendor=OpenCloud version=2.7.1 name=sentinel.registrar vendor=OpenCloud version=current name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5-copy#1 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.5 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1 name=volte.sentinel.sip vendor=OpenCloud version=current name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5-copy#1 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.5 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1 name=volte.sentinel.ss7 vendor=OpenCloud version=current Available actions: - prepare - prepare-new-rhino - cleanup --clusters - cleanup --exports - rollback
Major Upgrade
This document covers the procedure and the tools for applying major upgrade across the range of Sentinel products.
General information
A major upgrade of a Sentinel product is when a customer wants a new version of the product that differs in the first, second or third component of the version number: for example, VoLTE 2.7.0.4 to 2.7.1.1, or 2.7.1.1 to 2.8.0.2.
Backwards compatibility between major versions is not guaranteed. The new version might introduce new features, new configuration, configuration changes, new modules or even architectural changes. Therefore the major upgrade process is more complex than a minor upgrade.
The products supported by major upgrade are:
-
Sentinel VoLTE
-
Sentinel IP-SM-GW
-
Sentinel GAA
-
Rhino Element Manager (REM)
-
Rhino
For details on how to apply a major upgrade to Sentinel products, see Major Upgrade Sentinel Product.
For details on how to upgrade REM, see Upgrade Rhino Element Manager.
Requirements for a major upgrade
To apply a major upgrade for Sentinel successfully, verify the checklist below:
read the upgrade documentation and the product change log
The upgrade documentation contains the instructions on how to apply the upgrade. The product changelog contains the changes from the previous versions and indicates new features and configuration changes. It is important to be aware of those changes because new features will need configuration, which is not always captured during the upgrade. Also there might be changes in the product behavior that might affect the current session flows.
check connectivity
Check the connectivity between the management node and the Rhino nodes. From the management node you must be able to open an SSH connection to each Rhino node without specifying a password.
check the upgrade bundle
The upgrade bundle for a Sentinel product consists of:
Item | Description |
---|---|
README | Contains information about the major upgrade: how to use it, what components it changes, and what problems it fixes |
orca | The orca tool |
helpers directory | Contains the set of scripts used by orca |
core directory | Contains the set of scripts used by orca |
workflows directory | Contains the set of scripts used by orca |
resources directory | Contains the properties used by orca |
licenses directory | License information for third party libraries used by the patch runner |
packages directory | Contains the package files and the packages.cfg |
transformation-rules | Contains the transformation rules jar |
required-installer-properties | Contains the required SDK properties |
The package files directory contains:
- the packages.cfg file
- the SDK in offline mode
- the post install package or custom package (optional)
- the post configure package (optional)
- the rhino installer (optional)
- the rhino-config.json with the required Rhino properties when a new Rhino is required
- the Java JDK (optional)

The packages path must contain at least the product SDK with an offline repository and the product installer.
See Creating a major upgrade bundle for more information.
check the path structure
For Sentinel products, the requirements are that the Rhino nodes have the expected path structure:
enable symmetric activation state mode
Symmetric activation state mode must be enabled prior to starting the upgrade. To check that it is enabled, use orca's status command. At the top, under the Global info header, there will be the text Symmetric activation state mode is currently enabled if it is enabled, or Nodes with per-node activation state: <node list> if it is disabled. Alternatively you can use the rhino-console command getsymmetricactivationstatemode.
To enable it, follow the instructions in the Rhino documentation.
Creating a major upgrade bundle
Intended audience
This page is aimed at developers who wish to publish a major upgrade package for a new release of a Sentinel product.
Creating the bundle
Creating a major upgrade bundle requires:
- the product SDK
- the orca-bundler tool
- the data transformation rules, present in the product upgrade module

and the optional packages:

- the new Rhino
- the new Java JDK
- the post-install package
- the post-configure package
- the required Rhino configuration and the required SDK install properties, both present in the product upgrade module
The orca-bundler will:
-
put the product SDK in offline mode with an offline repository
-
zip the product SDK
-
combine the orca tools with the product SDK and the optional packages to create the major upgrade bundle
To create the major upgrade bundle without the optional packages:
- get the latest version of orca-bundler.zip from operational-tools
- decompress it
- run generate-orca-bundle major-upgrade to bundle the SDK with orca
$ unzip orca-bundler.zip
$ ./generate-orca-bundle major-upgrade --release-sdk <the sdk>.zip \
    --out <the upgrade bundle>.zip \
    --transformation-rules <transformation rules jar>
Optionally you can generate your SDK offline first if you need a customized SDK:

- run generate-orca-bundle prepare-sdk to convert the release SDK to offline mode
- run generate-orca-bundle major-upgrade to bundle the offline SDK with orca
$ unzip orca-bundler.zip
$ ./generate-orca-bundle prepare-sdk --release-sdk <product sdk>.zip \
    --out <the offline SDK>.zip
$ ./generate-orca-bundle major-upgrade --offline-sdk <the offline sdk>.zip \
    --out <the upgrade bundle>.zip \
    --transformation-rules <transformation rules jar>
The upgrade package will contain:
Item | Description |
---|---|
README | Contains information about the major upgrade: how to use it, what components it changes, and what problems it fixes |
orca | The orca tool |
helpers directory | Contains the set of scripts used by orca |
core directory | Contains the set of scripts used by orca |
workflows directory | Contains the set of scripts used by orca |
resources directory | Contains the properties used by orca |
licenses directory | License information for third party libraries used by the patch runner |
packages directory | Contains the package files and the packages.cfg |
transformation-rules | Contains the transformation rules jar |
required-installer-properties | Contains the required SDK properties |
The package files directory contains:
- the packages.cfg file
- the SDK in offline mode
- the post install package or custom package (optional)
- the post configure package (optional)
- the rhino installer (optional)
- the rhino-config.json with the required Rhino properties when a new Rhino is required
- the Java JDK (optional)
The orca-bundler will create the zip file specified in <the upgrade bundle>.zip with the offline product SDK, the orca tool, the optional packages and a README file explaining how to install the major upgrade. It is important to review this file and add or change any information before handing the upgrade to the customer.
Adding custom packages to the upgrade bundle
In case the major upgrade requires a custom package, the best solution is to include it with the upgrade bundle.
To create the major upgrade bundle with the optional packages:
- get the latest version of orca-bundler.zip from operational-tools
- get the new Rhino version to be installed
- get the new Java JDK to be installed
- get the post-install package
- get the post-configure package
- decompress the orca bundler
- run generate-orca-bundle major-upgrade to bundle the SDK and the optional packages with orca
Use the options:

- --post-install-package to add a custom module
- --post-configure-package to add a post-configuration module
- --rhino-package to add a new Rhino to install
- --license to specify a new Rhino license
- --java-package to add the new Java JDK
- --required-properties to specify the minimum required product SDK install.properties
- --rhino-config-json to specify extra Rhino configuration
- --feature-scripts to specify the original Feature Scripts to be used in the three-way merge of feature scripts
A new license option is only available when installing a new Rhino version. Use the flag --skip-rhino-version-check in the bundler while creating test upgrade bundles against non-released versions of Rhino.
$ unzip orca-bundler.zip
$ ./generate-orca-bundle major-upgrade --release-sdk <product sdk>.zip \
    --out <the upgrade bundle>.zip \
    --post-install-package <custom package> \
    --post-configure-package <configure package> \
    --rhino-package <new Rhino package> \
    --rhino-config-json <Rhino config properties to apply> \
    --required-properties <product required properties> \
    --feature-scripts <the feature scripts xml file from rhino export> \
    --transformation-rules <the transformations rules jar> \
    --java-package <new Java JDK>
Post-install and post-configure packages
During the upgrade procedure the custom packages will be installed if they are present in the packages.cfg.
"Post-install" and "post-configuration" customizations
Note that the custom packages must conform to the specification described in The post-install and post-configure packages below: a self-contained zip file containing an executable named install (with no extension) in its top-level directory.
It is recommended that the package developer edits the README in the upgrade bundle to include the extra details on the custom package(s).
You can also manually add the post-install and post-configure packages to the upgrade bundle, but it is not recommended. To do so, add the optional packages to the major upgrade bundle zip file after running the above commands. The optional packages have to be in the packages directory, and the packages.cfg needs to list the package names.
Example of the contents of a packages.cfg:
[files]
sdk=volte-2.8.0.3-offline-sdk.zip
post_install_package=post-install-package.zip
post_configure_package=post-configure-package.zip
rhino_package=rhino-install.tar
rhino_config_json=sentinel-volte-upgrade-rhino-config.json

[versions]
sdk=2.8.0.3
rhino_package=2.6.1.2

[additional_data]
The packages, properties and feature scripts format
The Feature Scripts
The major upgrade process includes a three-way merge process that requires the original Feature Scripts from the currently installed product. The objective is to minimize the manual work to adjust the Feature Scripts and to preserve the custom changes per customer.
The merge feature script tool receives three inputs:

1. the feature scripts from the product without customizations
2. the current feature scripts present in the installed system
3. the feature scripts from the new product after installation
Item 1 should be provided while creating the upgrade package; it is important to know which version is currently installed at the customer site and which deployment module was chosen. Items 2 and 3 are acquired during the upgrade process.
The feature script format is the XML format generated by the rhino-export process.
The tool supports just one platform operator name per installation. It ignores the operator name while extracting the Feature Scripts file from the production system.
The post-install and post-configure packages
The post-install and post-configure packages should be of the following format:

- a self-contained zip file (included inside the upgrade bundle in the top-level directory)
- an executable file named install (with no extension; it can be of any format, e.g. bash script or binary) in the top-level directory of said zip file

orca will pass the directory of the current live Rhino installation as an argument to the executable install.
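As a hedged illustration of this contract (not a real product package), a minimal install script could look like the sketch below; the only behaviour taken from this page is that the live Rhino installation directory is passed as the first argument:

#!/bin/bash
# install - entry point of a post-install or post-configure package (illustrative sketch only)
set -e

RHINO_DIR="$1"    # orca passes the directory of the current live Rhino installation
if [ -z "$RHINO_DIR" ]; then
    echo "usage: $0 <live-rhino-installation-dir>" >&2
    exit 1
fi

echo "Installing custom components against the Rhino installation at $RHINO_DIR"
# ...run the deployment steps the custom components need here, e.g. rhino-console or ant commands...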
The Rhino package
The Rhino package is the tar file used to distribute Rhino. It contains the Rhino binaries, tools and the installation script.
rhino-install.tar
The Rhino config json file
This file contains configuration that will be applied to Rhino during its installation. The file is formatted as JSON and includes the Rhino config file destination, the properties and the values. Rhino has several configuration attributes in several files with different formats. Currently orca applies the changes to the config_variables and rhino-config.xml files.
The json file format is:
[ { "filename": "file name with path relative to Rhino node installation", "filetype:" "properties", "settings:" [{settings 1},{settings 2},...,{settings n}] }, { "filename": "file name with path relative to Rhino node installation", "filetype:" "xml", "settings:" [{settings 1},{settings 2},...,{settings n}] }, .... { "filename": "file name with path relative to Rhino node installation", "filetype:" "sh", "settings:" [{settings 1},{settings 2},...,{settings n}] } ]
The filename path is relative to the Rhino node installation, e.g. for rhino-config.xml it should be config/rhino-config.xml.
The filetype attribute accepts the following values:

- properties to deal with Rhino key=value config properties
- xml to deal with Rhino xml config files, e.g. rhino-config.xml
- sh to deal with the read-config-variables file
The parameters in the settings attribute should match the expected format defined by the filetype:

- a property if filetype is properties or sh
- an XPath if filetype is xml
Example from sentinel-volte-upgrade-rhino-config.json
[ { "filename": "config/config_variables", "filetype": "properties", "settings": [ { "name": "HEAP_SIZE", "type": "minimum", "units": "m", "value": 6144 } ] }, { "filename": "config/rhino-config.xml", "filetype": "xml", "settings": [ { "xpath": ".//memdb[jndi-name='ManagementDatabase']/committed-size", "type": "minimum", "units": "M", "value": 400 } ] } ]
Currently the only supported values for type are value and minimum.
For the example above, orca will change the value of the HEAP_SIZE property to 6144m if the current value is lower than that. It will also change the committed-size for the memdb element whose jndi-name is 'ManagementDatabase'.
Concretely it will change
<memdb> <jndi-name>ManagementDatabase</jndi-name> <committed-size>128M</committed-size> </memdb>
to
<memdb> <jndi-name>ManagementDatabase</jndi-name> <committed-size>400M</committed-size> </memdb>
The Java package
The Java package has the format jdk-<version>-<platform>.tar.gz, e.g. jdk-8u172-linux-x64.tar.gz.
If a Java package is specified a new section in the packages.cfg will include the jdk details.
[files]
sdk=volte-2.8.0.2-offline-sdk.zip
rhino_package=rhino-install.tar
java_package=jdk-8u172-linux-x64.tar.gz
rhino_config_json=sentinel-volte-upgrade-rhino-config.json
feature_scripts=downlevel_feature_scripts.xml

[versions]
sdk=2.8.0.2
rhino_package=2.6.1.0
java_package=1.8.0_172

[additional_data]
java_package=jdk1.8.0_172
The transformation rule jar
The transformation rule jar is a compiled file including the classes to make the data transformation according to the data transformation API. The rules live in the product repository.
For more details see SLEE Data Transformation and Data Transformation API.
Major Upgrade Sentinel Product
Major upgrade bundle
A Major upgrade bundle is a self-contained package with:
-
the orca bundle
-
the product SDK with offline repositories
-
the README
A Major upgrade bundle has the name
<product name>-<product version>-major-upgrade.zip
Example: volte-2.7.0.4-major-upgrade.zip
Item | Description |
---|---|
README | Contains information about the major upgrade: how to use it, what components it changes, and what problems it fixes |
orca | The orca tool |
helpers directory | Contains the set of scripts used by orca |
core directory | Contains the set of scripts used by orca |
workflows directory | Contains the set of scripts used by orca |
resources directory | Contains the properties used by orca |
licenses directory | License information for third party libraries used by the patch runner |
packages directory | Contains the package files and the packages.cfg |
transformation-rules | Contains the transformation rules jar |
required-installer-properties | Contains the required SDK properties |
The package files directory contains:
- the packages.cfg file
- the SDK in offline mode
- the post install package or custom package (optional)
- the post configure package (optional)
- the rhino installer (optional)
- the rhino-config.json with the required Rhino properties when a new Rhino is required
- the Java JDK (optional)
The packages.cfg file contains the name of the packages to apply. For example:
[files]
sdk=volte-2.8.0.3-offline-sdk.zip
post_install_package=post-install-package.zip
post_configure_package=post-configure-package.zip
rhino_package=rhino-install.tar
rhino_config_json=sentinel-volte-upgrade-rhino-config.json

[versions]
sdk=2.8.0.3
rhino_package=2.6.1.2

[additional_data]
Requirements
A major upgrade uses the orca tool, and orca's requirements apply here. Applying a major upgrade requires administrative access to the Rhino installation running the product, so be sure the credentials used for ssh trusted connections are valid (a quick connectivity check is sketched after the disk space list below).
By default orca assumes the $HOME directory of the remote hosts is the base directory. Use the option --remote-home-dir or -r if the path is different. The upgrade requires at least 1.5GB of free disk space on the first node:
-
0.5GB for the upgrade bundle
-
0.5GB for the installer to run
-
0.5GB for logs
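As a quick sanity check before starting, you can verify passwordless ssh access and free disk space from the management host. This is generic shell, not part of orca; the host names are placeholders for your Rhino hosts.
# Check ssh trusted connections and free space in $HOME on each Rhino host
for h in host1 host2 host3; do
  echo "== $h =="
  ssh -o BatchMode=yes "$h" 'hostname && df -h "$HOME"'
done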
The host order should always be from the host with the highest node ID to the host with the lowest node ID. If some nodes are not cleanly shut down, the nodes with the lowest node IDs become primary in the cluster, avoiding split-brain situations. See Rhino Cluster Membership.
A valid install.properties
The install.properties file is necessary to install the correct components from the product, and it must conform to what the SDK installer of the new product version expects. It should contain the same properties as the previous installation, with any new required properties set according to the product requirements. Each customer installation has different options chosen at install time, which define which components are installed. One example is the choice of CAP charging in VoLTE, which includes the IM-SSF service.
The important properties to check in the install.properties are:
-
rhinoclientdir to point to the remote path of the Rhino client
-
doinstall set to true
-
deployrhino set to false
-
any property that does not have a default value needs to be set. See the product documentation for the version being installed for the full list of properties.
The properties above are also checked by orca; if they are not properly set, orca raises an error.
The other properties are temporarily set to default values by the product SDK installer, which populates the profile tables. Those properties are then restored to the existing values from the product version being upgraded from.
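For illustration only, a minimal install.properties fragment might look like the following. The first three properties are the ones orca checks; the rhinoclientdir path and home.domain value match the examples in this document, and any other properties your product requires must be added as described above.
# Illustrative fragment - the full property set is product- and version-specific
rhinoclientdir=/home/rhino/rhino/client
doinstall=true
deployrhino=false
home.domain=mydomain.com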
Consider this example for a VoLTE installation:
The current installation was done with the property home.domain set to mydomain.com. When doing the major upgrade, the user sets that property in the install.properties to otherdomain.com. The product SDK installer installs the new version of the software and sets the profiles that require information about the home domain to otherdomain.com. After installing the new version, orca proceeds to recover the previous customer configuration present in the rhino-export. This process removes all tables from the new installation and recreates the ones from the old installation. At the end, the value mydomain.com is restored.
A custom package
If the current installation has custom components on top of the product, a zip package containing an executable install
is required. See Applying upgrade for customers with custom installations.
Applying the major upgrade
In order to get the concept across, this explanation starts with a simplified example that applies an upgrade to all hosts at once. You are strongly discouraged from doing this; a much better process is to do an upgrade in multiple stages: first upgrade just a single host, then perform testing to determine that the upgrade is working as expected, before finally continuing to roll out the upgrade to the rest of your Rhino hosts.
Applying a major upgrade to a production environment is simple. The normal steps are:
-
download the major upgrade bundle to the management host
-
ensure the management host has ssh access to the Rhino cluster hosts
-
decompress the upgrade bundle and cd to the upgrade bundle
-
read the README for instructions and for details of the upgrade
-
prepare the install.properties file for use by the product installer: use the one from the current installation, or create a new one with the expected values
-
verify that you have a Rhino running with the installed product, e.g. Sentinel VoLTE 2.7.0.9
using just the product SDK with or without customisations
-
use the command
./orca --hosts host1,host2,host3 major-upgrade <the packages path> <path to the install.properties> --no-pause
for example:
./orca --hosts host1,host2,host3 major-upgrade packages install.properties --no-pause
orca needs to be called from inside the upgrade bundle directory, otherwise it will fail due to the use of relative paths for some actions.
using the product SDK plus a custom package
The command is the same as above, but orca will retrieve the custom package automatically from the packages directory according to the entries in the packages.cfg. You can also insert the custom package manually and add this line to the packages.cfg under the [files] section:
post_install_package=<package file name>
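For example, relative to the packages.cfg shown earlier (other entries omitted, and the custom package file name is a placeholder):
[files]
sdk=volte-2.8.0.3-offline-sdk.zip
post_install_package=<your custom package>.zip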
In summary orca will:
-
check the connection to the hosts
-
clone the cluster on all the hosts
-
backup the installation
-
retrieve the necessary configuration
-
prepare a cluster on the hosts
-
install new Java if required
-
install new Rhino if required
-
migrate the node on the first host in the given list to the new cluster
-
install the new product version on that first host
-
copy the configuration from the old installation to the new installation
-
clean up the temporary files
-
optionally pause to allow testing of the upgraded node
-
migrate the other nodes of the cluster
To roll back the installation:
./orca --hosts host1,host2,host3 rollback -f
To delete an old installation:
./orca --hosts host1,host2,host3 cleanup --cluster <cluster id>
The recommended multi-stage upgrade process
Applying an upgrade in multiple stages is not that much more complex than the simplified example already given.
Instead of using the --no-pause
option, specify a --pause
option thus:
./orca --hosts host1,host2,host3 major-upgrade packages install.properties --pause
This will cause all of the hosts listed (in this case host1
, host2
, and host3
) to be prepared for the upgrade, but only the first host in the list (here host1
) will actually have the upgrade applied to it.
The process will then pause to allow the upgraded installation to be tested. Note that only the first host (host1) has the new software version installed, so you will need to arrange your testing appropriately to route traffic to this specific host.
Once you are satisfied that the upgrade is working as required, you can run the exact same command that you previously did, only with the --pause
changed to be --continue
. In particular, you should not change the list of hosts given to these two commands, since the continuation process needs to access the same first host, and same set of prepared hosts, as before.
In our example, the continuation command is therefore
./orca --hosts host1,host2,host3 major-upgrade packages install.properties --continue
This will then migrate the other nodes of the cluster, so they are all using the upgraded product.
Limitations
Rhino tracers
The tracers applied to your current installation are not retained after an upgrade. You’ll need to manually reapply any custom tracers after the upgrade.
SNMP OIDs
The SNMP OIDs are not preserved during a major upgrade. The OIDs are created based on the SLEE component ID, which contains the component version. In a major upgrade the component version changes and a new set of OIDs is created. We will address this issue in a future release.
Transformation rules
The upgrade won’t work if the correct transformation rules are not chosen. Currently, upgrades are designed to go from one version to the next, e.g. VoLTE 2.6.0 to 2.7.0. There are currently no rules that allow a direct upgrade that skips a major version, e.g. VoLTE 2.6.8 to 2.8.0.
Permgen configuration change
Some products using Java 7 require special configuration for permgen
. Changing this configuration during an upgrade that requires a new Rhino is not supported. If the upgrade does not require a new Java version, then first change the permgen
configuration in the current cluster and apply the upgrade. The upgrade will maintain the permgen
configuration. If upgrading to Java 8, this configuration is irrelevant.
Rollback command requires an active Rhino
The rollback command won’t work if Rhino on a host is not active. You will need to start Rhino manually and then use the rollback command.
Applying upgrade for customers with custom installations
Currently orca supports two ways of applying a major upgrade for customers: a self-contained customised SDK, or the product SDK plus a custom package.
Self contained customised SDK
This is equivalent to the product SDK, but with the customisations included. It is a self-contained package that uses the SDK installer. The SDK installer will install all the necessary components, including custom components and custom profile tables. The installation will be equivalent to the previous one, but with new-version components.
Product SDK plus custom package
This type of major upgrade is done by installing customizations on top of the product SDK. The custom package needs to contain an executable called install
. Specify the custom package name in the packages.cfg and put the custom package in the packages directory. e.g.
./orca --hosts host1,host2 major-upgrade packages install.properties --pause
If a custom package is specified in the packages.cfg, orca will install the product SDK first, then call this install script, which performs all necessary operations to install the customised components into Rhino. After this script finishes, orca restores the configuration from the previous installation.
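The contents of the install script are entirely customer-specific. Purely as an illustration (the du/ directory, the Rhino client path and the rhino-console invocation below are assumptions, not something orca mandates), such a script might look like:
#!/bin/bash
# Hypothetical custom package 'install' script.
# Assumes the package ships its custom deployable units in a du/ directory
# and that the Rhino client tools are available under $HOME/rhino/client.
set -e
RHINO_CLIENT=${RHINO_CLIENT:-$HOME/rhino/client}

for du in du/*.jar; do
  echo "Installing custom deployable unit $du"
  # The exact deployment commands depend on your customisation;
  # installing the DU via rhino-console is shown here as an assumption.
  "$RHINO_CLIENT/bin/rhino-console" installlocaldu "$PWD/$du"
done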
Feature Scripts merge
The major upgrade process includes a 3-way merge of the feature scripts. Before explaining the algorithm, some terms need clarification:
-
Downlevel: the original product without customization
-
Installed: the current product in production, with or without customization
-
Uplevel: the new product version
-
D, I, U: abbreviations for Downlevel, Installed and Uplevel, respectively.
The merge is based on the following rules:
Case | Downlevel present? | Installed present? | Uplevel present? | D Not Equal I | I Not Equal U | D Not Equal U | Action | Description |
---|---|---|---|---|---|---|---|---|
1 | | | No | | | | Prompt for customer decision | Script not present in uplevel |
2 | No | No | Yes | | | | Use uplevel | Script introduced in uplevel |
3 | Yes | No | Yes | | | | Use uplevel | Script got deleted in the installed version |
4 | No | Yes | Yes | | No | | Use installed | Possible "backport" of feature |
5 | No | Yes | Yes | | Yes | | Prompt for customer decision | Ditto, except now there are functional changes too |
6 | Yes | Yes | Yes | Yes | Yes | Yes | Prompt for customer decision | Three-way merge case |
7 | Yes | Yes | Yes | No | Yes | Yes | Use uplevel | No customizations but script updated in new version |
8 | Yes | Yes | Yes | Yes | No | Yes | Use installed | Possible "backport" of feature |
9 | Yes | Yes | Yes | Yes | Yes | No | Use installed | Customization but no change in new version, preserve customization |
10 | Yes | Yes | Yes | No | No | No | Use installed | Standard case, no changes |
Rules 1, 5 and 6 cannot be resolved without human intervention, which has to happen after the first node is migrated.
Orca will post the warning
The feature script tool was unable to resolve differences between some scripts. <the list of Feature Scripts> Once the major upgrade is complete, review the scripts in <path>/feature-scripts and make changes if required When ready, run "orca --hosts <host> import-feature-scripts" to import the scripts into the Rhino installation.
On how to resolve Feature Scripts conflicts and import them to the current system see Feature Scripts conflicts and resolution.
New Rhino installation
If a Rhino package is present and the parameter --skip-new-rhino is not given, orca will install a new Rhino with the same configuration as the currently installed one and update that configuration according to the Rhino config JSON file.
If a completely new configuration is required, first use the prepare-new-rhino command with the --installer-overrides option.
New Java installation
If a Java package is present, orca will install the Java package in the $HOME directory of all hosts listed in the --hosts
parameter and will change the Rhino configuration to point to this new installed version.
orca WILL NOT update the JAVA_HOME environment variable, because there might be other applications that require this property to be set to a specific version. This should be a manual process.
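If you do want other applications on the host to use the newly installed JDK, a manual update might look like the following. The JDK path is taken from the example output later in this section; adjust the path and the shell profile file to your environment.
# Append JAVA_HOME to the rhino user's shell profile (example path, adjust as needed)
echo 'export JAVA_HOME=$HOME/java/jdk1.8.0_172' >> ~/.bashrc
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bashrc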
Troubleshooting
See Troubleshooting.
Example of major upgrade execution on a 3 node cluster
This example shows a major upgrade from Sentinel VoLTE 2.7.1.0 to Sentinel VoLTE 2.8.0.3.
Applying the major upgrade with the command
rhino@rhino-rem:~/install/sentinel-volte-sdk-2.8.0.3-upgrade$ ./orca --hosts rhino-vm1,rhino-vm2,rhino-vm3 major-upgrade --pause packages install.properties
Check for prepared clusters
Starting on host rhino-vm1 Checking for prepared Rhino clusters Done on rhino-vm1 Starting on host rhino-vm2 Checking for prepared Rhino clusters Done on rhino-vm2 Starting on host rhino-vm3 Checking for prepared Rhino clusters Done on rhino-vm3
Check the sdk install.properties script
Validating install.properties Updating rhinoclientdir property in install.properties to /home/rhino/rhino/client
Export the configuration
Applying major upgrade package in directory 'packages' to hosts rhino-vm1,rhino-vm2,rhino-vm3 Exporting configuration to directory /home/rhino/export/volte-2.7.1.0-cluster-108 Done on rhino-vm1 Exporting profile table and RA configuration Done on rhino-vm1
Install Java
Installing new JDK on hosts [rhino-vm1, rhino-vm2, rhino-vm3] Starting on host rhino-vm1 Installing New Java JDK 1.8.0_172 successfully installed at /home/rhino/java/jdk1.8.0_172 PLEASE NOTE that the newly installed JDK will only be used by the SLEE, all other Java applications will continue to use the previously installed Java version. Please manually update the system's JAVA_HOME environment variable if you would like all Java applications to use the newly installed JDK. Done on rhino-vm1 Starting on host rhino-vm2 Installing New Java JDK 1.8.0_172 successfully installed at /home/rhino/java/jdk1.8.0_172 PLEASE NOTE that the newly installed JDK will only be used by the SLEE, all other Java applications will continue to use the previously installed Java version. Please manually update the system's JAVA_HOME environment variable if you would like all Java applications to use the newly installed JDK. Done on rhino-vm2 Starting on host rhino-vm3 Installing New Java JDK 1.8.0_172 successfully installed at /home/rhino/java/jdk1.8.0_172 PLEASE NOTE that the newly installed JDK will only be used by the SLEE, all other Java applications will continue to use the previously installed Java version. Please manually update the system's JAVA_HOME environment variable if you would like all Java applications to use the newly installed JDK. Done on rhino-vm3
Prepare the new cluster
Preparing new cluster with new Rhino on hosts [rhino-vm1, rhino-vm2, rhino-vm3] Starting on host rhino-vm1 Preparing New Rhino Done on rhino-vm1 Starting on host rhino-vm2 Preparing New Rhino Done on rhino-vm2 Starting on host rhino-vm3 Preparing New Rhino Done on rhino-vm3
Migrate the first node in the hosts list
Doing Migrate Stopping node 101 in cluster 108 Waiting up to 120 seconds for calls to drain and SLEE to stop on node 101 Rhino has exited. Successfully shut down Rhino on node 101. Now waiting for sockets to close... Starting node 101 in cluster 109 Started Rhino. State is now: Waiting to go primary Attempting to make cluster primary for 20 seconds Successfully made cluster primary Waiting for Rhino to be ready for 228 seconds Started node 101 successfully Done on rhino-vm1
Import basic product configuration
Importing pre-install configuration Done on rhino-vm1
Install the new product version
Installing upgrade volte-2.8.0.3-offline-sdk.zip - this will take a while Unpacking the upgrade package Copying the install.properties Installing the new product. This will take a while. You can check the progress in /home/rhino/install/sdk-2.8.0.3/build/target/log/installer.log on the remote host Done on rhino-vm1
Export the new installed version and do the data transformation
Exporting configuration to directory /home/rhino/export/volte-2.8.0.3-cluster-109 Done on rhino-vm1 Exporting profile table and RA configuration Done on rhino-vm1 Transforming Export Data Done on rhino-vm1
Do the post install step
Applying post-install package post-install-package.zip, this can take a while Unzipping package post-install-package.zip Running script install Done on rhino-vm1
Restore the customer configuration
Importing profile table and RA configuration Done on rhino-vm1 Importing profiles from export directory volte-2.7.1.0-cluster-108-transformed-for-2.8.0.3 Done on rhino-vm1 Transforming the service object pools configuration Getting the services Selecting files Applying changes Done on rhino-vm1 Importing post-install configuration Done on rhino-vm1 Applying post-configure package post-configure-package.zip, this can take a while Unzipping package post-configure-package.zip Running script install Done on rhino-vm1
Merge feature scripts
Merging feature scripts Merging scripts Loading feature scripts from file packages/FeatureExecutionScriptTable.xml Loading feature scripts from file /home/rhino/install/sentinel-volte-sdk-2.8.0.3-upgrade/volte-2.7.1.0-cluster-108.xml Loading feature scripts from file /home/rhino/install/sentinel-volte-sdk-2.8.0.3-upgrade/volte-2.8.0.3-cluster-109.xml Once the major upgrade is complete, review the scripts in /home/rhino/install/sentinel-volte-sdk-2.8.0.3-upgrade/feature-scripts and make changes if required. When ready, run "orca --hosts <host> import-feature-scripts" to import the scripts into the Rhino installation.
Finish the upgrade on the first node and show the status
Querying status on hosts [rhino-vm1, rhino-vm2, rhino-vm3] Global info: Symmetric activation state mode is currently enabled Status of host rhino-vm1 Clusters: - volte-2.7.1.0-cluster-108 - volte-2.8.0.3-cluster-109 - LIVE Live Node: Rhino node 101: found process with id 10352 Node 101 is Running Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e' Exports: - volte-2.7.1.0-cluster-108 - volte-2.7.1.0-cluster-108-transformed-for-2.8.0.3 - volte-2.8.0.3-cluster-109 License information: 1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018 Licensed Rhino version(s): 2.*, Development Java: found = live cluster java_home = /home/rhino/java/jdk1.8.0_172 version = java version "1.8.0_172", Java(TM) SE Runtime Environment (build 1.8.0_172-b11), Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode) OS: python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609] version = Linux version 4.4.0-130-generic (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 Services: name=IM-SSF vendor=OpenCloud version=2.6.1 name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424-copy#1 name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424 name=sentinel.registrar vendor=OpenCloud version=2.8.0 name=sentinel.registrar vendor=OpenCloud version=current name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601-copy#1 name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601 name=sentinel.volte.sip vendor=OpenCloud version=2.8.0 name=sentinel.volte.sip vendor=OpenCloud version=current name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150-copy#1 name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150 name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0 name=sentinel.volte.ss7 vendor=OpenCloud version=current Status of host rhino-vm2 Clusters: - volte-2.7.1.0-cluster-108 - LIVE - volte-2.8.0.3-cluster-109 Live Node: Rhino node 102: found process with id 26136 Node 102 is Running Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e' Exports: License information: 1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018 Licensed Rhino version(s): 2.*, Development Java: found = live cluster java_home = /opt/java/jdk1.7.0_79 version = java version "1.7.0_79", Java(TM) SE Runtime Environment (build 1.7.0_79-b15), Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode) OS: python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609] version = Linux version 4.4.0-128-generic (buildd@lcy01-amd64-019) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018 Services: name=IM-SSF vendor=OpenCloud version=1.4.7 name=sentinel.registrar vendor=OpenCloud version=2.7.1.0-copy#1 name=sentinel.registrar vendor=OpenCloud version=2.7.1.0 name=sentinel.registrar vendor=OpenCloud version=2.7.1 name=sentinel.registrar vendor=OpenCloud version=current name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.0-copy#1 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.0 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1 name=volte.sentinel.sip vendor=OpenCloud version=current name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.0-copy#1 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.0 name=volte.sentinel.ss7 vendor=OpenCloud 
version=2.7.1 name=volte.sentinel.ss7 vendor=OpenCloud version=current Status of host rhino-vm3 Clusters: - volte-2.7.1.0-cluster-108 - LIVE - volte-2.8.0.3-cluster-109 Live Node: Rhino node 103: found process with id 19814 Node 103 is Running Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e' Exports: License information: 1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018 Licensed Rhino version(s): 2.*, Development Java: found = live cluster java_home = /opt/java/jdk1.7.0_79 version = java version "1.7.0_79", Java(TM) SE Runtime Environment (build 1.7.0_79-b15), Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode) OS: python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609] version = Linux version 4.4.0-128-generic (buildd@lcy01-amd64-019) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018 Services: name=IM-SSF vendor=OpenCloud version=1.4.7 name=sentinel.registrar vendor=OpenCloud version=2.7.1.0-copy#1 name=sentinel.registrar vendor=OpenCloud version=2.7.1.0 name=sentinel.registrar vendor=OpenCloud version=2.7.1 name=sentinel.registrar vendor=OpenCloud version=current name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.0-copy#1 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1.0 name=volte.sentinel.sip vendor=OpenCloud version=2.7.1 name=volte.sentinel.sip vendor=OpenCloud version=current name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.0-copy#1 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1.0 name=volte.sentinel.ss7 vendor=OpenCloud version=2.7.1 name=volte.sentinel.ss7 vendor=OpenCloud version=current Available actions: - prepare - prepare-new-rhino - cleanup --clusters - cleanup --exports - rollback Major upgrade has been paused after applying it to just rhino-vm1. You should now test that the major upgrade has worked. Once this is verified, use the following command to complete the major upgrade: ./orca --hosts rhino-vm1,rhino-vm2,rhino-vm3 major-upgrade --continue packages install.properties The following output lines are considered particularly significant. They are repeated here, labelled by host, but you can view their full context above. rhino-vm1: PLEASE NOTE that the newly installed JDK will only be used by the SLEE, all other Java applications will continue to use the previously installed Java version. Please manually update the system's JAVA_HOME environment variable if you would like all Java applications to use the newly installed JDK. rhino-vm2: PLEASE NOTE that the newly installed JDK will only be used by the SLEE, all other Java applications will continue to use the previously installed Java version. Please manually update the system's JAVA_HOME environment variable if you would like all Java applications to use the newly installed JDK. rhino-vm3: PLEASE NOTE that the newly installed JDK will only be used by the SLEE, all other Java applications will continue to use the previously installed Java version. Please manually update the system's JAVA_HOME environment variable if you would like all Java applications to use the newly installed JDK.
Import feature scripts
Now that the feature scripts have been merged, it is time to check them, resolve any conflicts, and import them into the newly installed system.
rhino@rhino-rem:~/install/sentinel-volte-sdk-2.8.0.3-upgrade$ ls feature-scripts/ ERSVCCRegistration_Store_Subscriber_Data_Start MMTel_Post_SipAccess_PartyResponse SCCTermAnchor_SipAccess_SubscriberCheck default_Post_SipAccess_ChargingReauth MMTelConf_HLR_SipAccess_SubscriberCheck MMTel_Post_SipAccess_ServiceTimer SCCTermTads_HLR_SipAccess_PartyResponse default_Post_SipAccess_ControlNotRequiredPostCC MMTelConf_Post_SubscriptionSipRequest MMTel_Post_SipAccess_SubscriberCheck SCCTermTads_HLR_SipAccess_SubscriberCheck default_Post_SipAccess_CreditAllocatedPostCC MMTelConf_Post_SubscriptionSipResponse MMTel_Post_SipEndSession SCCTermTads_SipAccess_PartyResponse default_Post_SipAccess_CreditLimitReachedPostCC MMTelConf_SipAccess_SubscriberCheck MMTel_Post_SipInstructionExecutionFailure SCCTermTads_SipAccess_SubscriberCheck default_Post_SipAccess_NetworkCheck MMTelOrig_HLR_SipAccess_SubscriberCheck MMTel_Post_SipLegEnd SCCTerm_Cassandra_SipAccess_SessionCheck default_Post_SipAccess_OCSFailurePostCC MMTelOrig_Post_SubscriptionSipRequest MMTel_Post_SipMidSession_ChargingAbort SCCTerm_HLR_SipAccess_PartyResponse default_Post_SipAccess_PartyRequest MMTelOrig_Post_SubscriptionSipResponse MMTel_Post_SipMidSession_ChargingReauth SCCTerm_HLR_SipAccess_ServiceTimer default_Post_SipAccess_PartyResponse MMTelOrig_SipAccess_PartyRequest MMTel_Post_SipMidSession_CreditAllocatedPostCC SCCTerm_HLR_SipAccess_SubscriberCheck default_Post_SipAccess_ServiceTimer MMTelOrig_SipAccess_PartyResponse MMTel_Post_SipMidSession_CreditLimitReachedPostCC SCCTerm_SipAccess_PartyRequest default_Post_SipAccess_SubscriberCheck MMTelOrig_SipAccess_SubscriberCheck MMTel_Post_SipMidSession_OCSFailurePostCC SCCTerm_SipAccess_PartyResponse default_Post_SipEndSession MMTelOrig_SipMidSession_PartyRequest MMTel_Post_SipMidSession_PartyRequest SCCTerm_SipAccess_ServiceTimer default_Post_SipLegEnd MMTelOrig_SipMidSession_PartyResponse MMTel_Post_SipMidSession_PartyResponse SCCTerm_SipAccess_SessionCheck default_Post_SipMidSession_ChargingReauth MMTelTerm_HLR_SipAccess_SubscriberCheck MMTel_Pre_SipAccess_PartyRequest SCCTerm_SipAccess_SubscriberCheck default_Post_SipMidSession_CreditAllocatedPostCC MMTelTerm_SipAccess_PartyRequest MMTel_Pre_SipAccess_PartyResponse SCCTerm_SipMidSession_PartyResponse default_Post_SipMidSession_CreditLimitReachedPostCC MMTelTerm_SipAccess_PartyResponse MMTel_Pre_SipAccess_SessionStart SCC_Post_SipAccess_PartyRequest default_Post_SipMidSession_OCSFailurePostCC MMTelTerm_SipAccess_SubscriberCheck MMTel_Pre_SipInstructionExecutionFailure SCC_Post_SipAccess_PartyResponse default_Post_SipMidSession_PartyRequest MMTelTerm_SipLegEnd MMTel_Pre_SipMidSession_CreditLimitReachedPostCC SCC_Post_SipAccess_ServiceTimer default_Post_SipMidSession_PartyResponse MMTelTerm_SipMidSession_PartyRequest MMTel_Pre_SipMidSession_PartyRequest SCC_Post_SipAccess_SubscriberCheck default_Post_SipTransaction_SubscriberCheck MMTelTerm_SipMidSession_PartyResponse MMTel_Pre_SipMidSession_PartyResponse SCC_Post_SipEndSession default_Post_SubscriptionNetworkCheck MMTel_Cassandra_SipAccess_SessionCheck Registrar_Store_Subscriber_Data SCC_Post_SipInstructionExecutionFailure default_Post_SubscriptionSipRequest MMTel_HLR_Cassandra_SipAccess_SessionCheck SCCOrig_Cassandra_SipAccess_SessionCheck SCC_Post_SipMidSession_PartyRequest default_Post_SubscriptionSipResponse MMTel_Post_CreditFinalised SCCOrig_SipAccess_PartyRequest SCC_Post_SipMidSession_PartyResponse default_Post_SubscriptionSubscriberCheck MMTel_Post_SipAccess_ChargingAbort 
SCCOrig_SipAccess_PartyResponse SCC_Pre_SipAccess_PartyRequest default_Pre_SipAccess_SessionStart MMTel_Post_SipAccess_ChargingReauth SCCOrig_SipAccess_SessionCheck SCC_Pre_SipAccess_PartyResponse default_SipTransaction_SubscriberCheck MMTel_Post_SipAccess_ControlNotRequiredPostCC SCCOrig_SipAccess_SubscriberCheck SCC_Pre_SipAccess_SessionStart default_SubscriptionStart MMTel_Post_SipAccess_CreditAllocatedPostCC SCCOrig_SipMidSession_PartyResponse SCC_Pre_SipMidSession_PartyRequest default_call_Cassandra_DirectAccess_NetworkPreCreditCheck MMTel_Post_SipAccess_CreditLimitReachedPostCC SCCTermAnchor_SipAccess_PartyRequest SCC_Pre_SipMidSession_PartyResponse default_sf_Post_SubscriptionStart MMTel_Post_SipAccess_OCSFailurePostCC SCCTermAnchor_SipAccess_PartyResponse SCC_SipAccess_PartyRequest default_sf_Pre_SubscriptionStart MMTel_Post_SipAccess_PartyRequest SCCTermAnchor_SipAccess_ServiceTimer SCC_SipMidSession_PartyRequest rhino@rhino-rem:~/install/sentinel-volte-sdk-2.8.0.3-upgrade$ ./orca --hosts rhino-vm1 import-feature-scripts Importing feature scripts into Rhino on host rhino-vm1 Done on rhino-vm1 Importing scripts Importing into profile table 'Rocket_FeatureExecutionScriptTable'. Importing feature script MMTel_Post_SipMidSession_PartyRequest... Importing feature script MMTel_Post_SipAccess_CreditAllocatedPostCC... Importing feature script MMTelTerm_HLR_SipAccess_SubscriberCheck... Importing feature script MMTelOrig_HLR_SipAccess_SubscriberCheck... Importing feature script MMTelOrig_Post_SubscriptionSipResponse... Importing feature script MMTel_Post_SipAccess_PartyResponse... Importing feature script MMTel_Pre_SipMidSession_PartyResponse... Importing feature script MMTel_Post_SipAccess_ServiceTimer... Importing feature script MMTelConf_Post_SubscriptionSipRequest... Importing feature script MMTelConf_Post_SubscriptionSipResponse... Importing feature script SCCTermAnchor_SipAccess_PartyResponse... Importing feature script SCCTerm_SipAccess_SubscriberCheck... Importing feature script MMTelOrig_SipAccess_SubscriberCheck... Importing feature script default_Post_SipAccess_ServiceTimer... Importing feature script default_Post_SipTransaction_SubscriberCheck... Importing feature script SCC_Post_SipAccess_PartyResponse... Importing feature script MMTelOrig_Post_SubscriptionSipRequest... Importing feature script SCC_Post_SipMidSession_PartyResponse... Importing feature script SCCOrig_SipAccess_PartyResponse... Importing feature script MMTel_Post_SipAccess_ChargingReauth... Importing feature script default_Post_SipMidSession_OCSFailurePostCC... Importing feature script MMTelConf_SipAccess_SubscriberCheck... Importing feature script MMTel_Post_SipEndSession... Importing feature script SCCTerm_HLR_SipAccess_PartyResponse... Importing feature script MMTel_Post_SipAccess_PartyRequest... Importing feature script MMTel_Post_SipAccess_ControlNotRequiredPostCC... Importing feature script SCCTerm_SipAccess_PartyRequest... Importing feature script SCC_Post_SipMidSession_PartyRequest... Importing feature script SCCTerm_SipMidSession_PartyResponse... Importing feature script MMTel_Post_SipInstructionExecutionFailure... Importing feature script default_SubscriptionStart... Importing feature script MMTelOrig_SipAccess_PartyRequest... Importing feature script MMTel_Pre_SipAccess_PartyRequest... Importing feature script MMTel_Pre_SipMidSession_PartyRequest... Importing feature script MMTel_Pre_SipAccess_SessionStart... Importing feature script Registrar_Store_Subscriber_Data... 
Importing feature script MMTelOrig_SipMidSession_PartyRequest... Importing feature script MMTel_Post_SipAccess_OCSFailurePostCC... Importing feature script SCC_Pre_SipMidSession_PartyRequest... Importing feature script SCC_Pre_SipAccess_PartyRequest... Importing feature script default_Post_SipAccess_SubscriberCheck... Importing feature script SCCTerm_Cassandra_SipAccess_SessionCheck... Importing feature script default_Post_SipAccess_PartyResponse... Importing feature script SCC_SipAccess_PartyRequest... Importing feature script MMTelConf_HLR_SipAccess_SubscriberCheck... Importing feature script SCCOrig_SipAccess_SessionCheck... Importing feature script MMTelTerm_SipAccess_SubscriberCheck... Importing feature script MMTelTerm_SipAccess_PartyRequest... Importing feature script SCCTerm_HLR_SipAccess_ServiceTimer... Importing feature script default_SipTransaction_SubscriberCheck... Importing feature script SCCTermTads_HLR_SipAccess_PartyResponse... Importing feature script default_Post_SipAccess_OCSFailurePostCC... Importing feature script MMTel_Post_SipMidSession_CreditAllocatedPostCC... Importing feature script default_call_Cassandra_DirectAccess_NetworkPreCreditCheck... Importing feature script default_sf_Pre_SubscriptionStart... Importing feature script SCCTermAnchor_SipAccess_PartyRequest... Importing feature script MMTel_Post_SipMidSession_ChargingReauth... Importing feature script default_Post_SipLegEnd... Importing feature script ERSVCCRegistration_Store_Subscriber_Data_Start... Importing feature script default_Post_SipMidSession_ChargingReauth... Importing feature script MMTel_Post_SipMidSession_OCSFailurePostCC... Importing feature script SCCTerm_SipAccess_ServiceTimer... Importing feature script default_Post_SipMidSession_PartyResponse... Importing feature script default_Post_SubscriptionSubscriberCheck... Importing feature script MMTel_Pre_SipMidSession_CreditLimitReachedPostCC... Importing feature script MMTel_Cassandra_SipAccess_SessionCheck... Importing feature script default_Post_SipAccess_ChargingReauth... Importing feature script MMTel_Post_SipMidSession_ChargingAbort... Importing feature script SCC_SipMidSession_PartyRequest... Importing feature script SCCOrig_SipAccess_PartyRequest... Importing feature script SCCTermTads_SipAccess_PartyResponse... Importing feature script default_Post_SubscriptionNetworkCheck... Importing feature script default_Post_SipAccess_ControlNotRequiredPostCC... Importing feature script MMTel_Post_SipAccess_SubscriberCheck... Importing feature script MMTelOrig_SipAccess_PartyResponse... Importing feature script default_Post_SipAccess_CreditLimitReachedPostCC... Importing feature script default_Post_SipAccess_PartyRequest... Importing feature script SCC_Post_SipAccess_PartyRequest... Importing feature script SCCTermAnchor_SipAccess_SubscriberCheck... Importing feature script MMTel_Post_SipMidSession_PartyResponse... Importing feature script MMTel_Post_SipLegEnd... Importing feature script default_Post_SipMidSession_CreditAllocatedPostCC... Importing feature script MMTel_Post_CreditFinalised... Importing feature script MMTel_Post_SipAccess_ChargingAbort... Importing feature script SCCTerm_SipAccess_SessionCheck... Importing feature script MMTel_HLR_Cassandra_SipAccess_SessionCheck... Importing feature script MMTelOrig_SipMidSession_PartyResponse... Importing feature script MMTel_Pre_SipInstructionExecutionFailure... Importing feature script MMTelTerm_SipMidSession_PartyRequest... Importing feature script SCC_Post_SipAccess_ServiceTimer... 
Importing feature script SCC_Post_SipEndSession... Importing feature script SCC_Pre_SipAccess_PartyResponse... Importing feature script SCC_Pre_SipMidSession_PartyResponse... Importing feature script SCCOrig_SipAccess_SubscriberCheck... Importing feature script SCCTermTads_SipAccess_SubscriberCheck... Importing feature script SCCTerm_HLR_SipAccess_SubscriberCheck... Importing feature script MMTel_Post_SipMidSession_CreditLimitReachedPostCC... Importing feature script MMTel_Pre_SipAccess_PartyResponse... Importing feature script SCCTermAnchor_SipAccess_ServiceTimer... Importing feature script SCCOrig_Cassandra_SipAccess_SessionCheck... Importing feature script SCC_Post_SipInstructionExecutionFailure... Importing feature script MMTelTerm_SipAccess_PartyResponse... Importing feature script SCCOrig_SipMidSession_PartyResponse... Importing feature script default_Post_SipEndSession... Importing feature script SCC_Pre_SipAccess_SessionStart... Importing feature script SCC_Post_SipAccess_SubscriberCheck... Importing feature script default_Post_SipMidSession_PartyRequest... Importing feature script SCCTermTads_HLR_SipAccess_SubscriberCheck... Importing feature script default_Post_SubscriptionSipRequest... Importing feature script default_Post_SipMidSession_CreditLimitReachedPostCC... Importing feature script default_Post_SipAccess_NetworkCheck... Importing feature script default_Post_SubscriptionSipResponse... Importing feature script default_Pre_SipAccess_SessionStart... Importing feature script default_sf_Post_SubscriptionStart... Importing feature script default_Post_SipAccess_CreditAllocatedPostCC... Importing feature script MMTelTerm_SipLegEnd... Importing feature script SCCTerm_SipAccess_PartyResponse... Importing feature script MMTelTerm_SipMidSession_PartyResponse... Importing feature script MMTel_Post_SipAccess_CreditLimitReachedPostCC... All scripts imported successfully. Done on rhino-vm1
Migrate other nodes
rhino@rhino-rem:~/install/sentinel-volte-sdk-2.8.0.3-upgrade$ ./orca --hosts rhino-vm1,rhino-vm2,rhino-vm3 major-upgrade --continue packages install.properties Continuing major upgrade package in directory 'packages' to hosts rhino-vm1,rhino-vm2,rhino-vm3 Starting on host rhino-vm2 Doing Migrate Stopping node 102 in cluster 108 Waiting up to 120 seconds for calls to drain and SLEE to stop on node 102 Rhino has exited. Successfully shut down Rhino on node 102. Now waiting for sockets to close... Starting node 102 in cluster 109 Started Rhino. State is now: Running Waiting for Rhino to be ready for 104 seconds Started node 102 successfully Done on rhino-vm2 Starting on host rhino-vm3 Doing Migrate Stopping node 103 in cluster 108 Waiting up to 120 seconds for calls to drain and SLEE to stop on node 103 Rhino has exited. Successfully shut down Rhino on node 103. Now waiting for sockets to close... Starting node 103 in cluster 109 Started Rhino. State is now: Running Waiting for Rhino to be ready for 82 seconds Started node 103 successfully Done on rhino-vm3 Querying status on hosts [rhino-vm1, rhino-vm2, rhino-vm3] Global info: Symmetric activation state mode is currently enabled Status of host rhino-vm1 Clusters: - volte-2.7.1.0-cluster-108 - volte-2.8.0.3-cluster-109 - LIVE Live Node: Rhino node 101: found process with id 10352 Node 101 is Running Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e' Exports: - volte-2.7.1.0-cluster-108 - volte-2.7.1.0-cluster-108-transformed-for-2.8.0.3 - volte-2.8.0.3-cluster-109 License information: 1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018 Licensed Rhino version(s): 2.*, Development Java: found = live cluster java_home = /home/rhino/java/jdk1.8.0_172 version = java version "1.8.0_172", Java(TM) SE Runtime Environment (build 1.8.0_172-b11), Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode) OS: python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609] version = Linux version 4.4.0-130-generic (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 Services: name=IM-SSF vendor=OpenCloud version=2.6.1 name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424-copy#1 name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424 name=sentinel.registrar vendor=OpenCloud version=2.8.0 name=sentinel.registrar vendor=OpenCloud version=current name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601-copy#1 name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601 name=sentinel.volte.sip vendor=OpenCloud version=2.8.0 name=sentinel.volte.sip vendor=OpenCloud version=current name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150-copy#1 name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150 name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0 name=sentinel.volte.ss7 vendor=OpenCloud version=current Status of host rhino-vm2 Clusters: - volte-2.7.1.0-cluster-108 - volte-2.8.0.3-cluster-109 - LIVE Live Node: Rhino node 102: found process with id 7349 Node 102 is Running Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e' Exports: License information: 1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018 Licensed Rhino version(s): 2.*, Development Java: found = live cluster java_home = /home/rhino/java/jdk1.8.0_172 version 
= java version "1.8.0_172", Java(TM) SE Runtime Environment (build 1.8.0_172-b11), Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode) OS: python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609] version = Linux version 4.4.0-128-generic (buildd@lcy01-amd64-019) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018 Services: name=IM-SSF vendor=OpenCloud version=2.6.1 name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424-copy#1 name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424 name=sentinel.registrar vendor=OpenCloud version=2.8.0 name=sentinel.registrar vendor=OpenCloud version=current name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601-copy#1 name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601 name=sentinel.volte.sip vendor=OpenCloud version=2.8.0 name=sentinel.volte.sip vendor=OpenCloud version=current name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150-copy#1 name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150 name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0 name=sentinel.volte.ss7 vendor=OpenCloud version=current Status of host rhino-vm3 Clusters: - volte-2.7.1.0-cluster-108 - volte-2.8.0.3-cluster-109 - LIVE Live Node: Rhino node 103: found process with id 24048 Node 103 is Running Rhino version='2.6', release='1.2', build='201807050952', revision='c5bfb8e' Exports: License information: 1 valid license (of 1 installed), expiry date Fri Nov 09 17:08:54 UTC 2018 Licensed Rhino version(s): 2.*, Development Java: found = live cluster java_home = /home/rhino/java/jdk1.8.0_172 version = java version "1.8.0_172", Java(TM) SE Runtime Environment (build 1.8.0_172-b11), Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode) OS: python = 2.7.12 (default, Nov 20 2017, 18:23:56) , [GCC 5.4.0 20160609] version = Linux version 4.4.0-128-generic (buildd@lcy01-amd64-019) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9) ) #154-Ubuntu SMP Fri May 25 14:15:18 UTC 2018 Services: name=IM-SSF vendor=OpenCloud version=2.6.1 name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424-copy#1 name=sentinel.registrar vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181005161424 name=sentinel.registrar vendor=OpenCloud version=2.8.0 name=sentinel.registrar vendor=OpenCloud version=current name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601-copy#1 name=sentinel.volte.sip vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154601 name=sentinel.volte.sip vendor=OpenCloud version=2.8.0 name=sentinel.volte.sip vendor=OpenCloud version=current name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150-copy#1 name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0.3-SNAPSHOT.r20181010154150 name=sentinel.volte.ss7 vendor=OpenCloud version=2.8.0 name=sentinel.volte.ss7 vendor=OpenCloud version=current Available actions: - prepare - prepare-new-rhino - cleanup --clusters - cleanup --exports - rollback
Rhino Only Upgrade
A customer can update the version of Rhino they are using by:
-
Creating a Rhino only upgrade bundle
-
Using orca to apply the Rhino only upgrade bundle
Creating a Rhino-only upgrade bundle
A Rhino-only upgrade bundle is a self-contained package named:
-
rhino-<rhino-version>-upgrade-bundle.zip
For example, a bundle to upgrade to Rhino 2.6.0.1 would be called rhino-2.6.0.1-upgrade-bundle.zip
A Rhino-only upgrade bundle includes the orca
bundle and also:
Bundle element | Description |
---|---|
A Rhino install package | The new version of Rhino to be installed |
(optionally) a new JDK package | A new JDK for the new Rhino may be required or recommended |
(optionally) a new license | A new license may be required to use the new Rhino |
(optionally) a Rhino configuration update JSON file | This file defines any configuration changes to apply to the default Rhino installation. For example, to change the size of the Java heap, or increase the size of the Rhino management database. |
The Rhino package
The Rhino package is the tar file used to distribute Rhino. It contains the Rhino binaries, tools and the installation script.
rhino-install.tar
The Rhino config json file
This file contains configuration that will be applied to Rhino during its installation. The file is formatted as JSON and includes the destination Rhino config file and the properties and values to set. Rhino has configuration attributes spread over several files with different formats. Currently orca applies changes to the config_variables and rhino-config.xml files.
The json file format is:
[ { "filename": "file name with path relative to Rhino node installation", "filetype:" "properties", "settings:" [{settings 1},{settings 2},...,{settings n}] }, { "filename": "file name with path relative to Rhino node installation", "filetype:" "xml", "settings:" [{settings 1},{settings 2},...,{settings n}] }, .... { "filename": "file name with path relative to Rhino node installation", "filetype:" "sh", "settings:" [{settings 1},{settings 2},...,{settings n}] } ]
The filename path is relative to the Rhino node installation, e.g. for rhino-config.xml it should be config/rhino-config.xml.
The filetype
attribute accepts the following values:
-
properties to deal with Rhino key=value config properties
-
xml to deal with Rhino XML config files, e.g. rhino-config.xml
-
sh to deal with the read-config-variables file
The parameters in the settings attribute should match the expected format defined by the filetype:
-
a property if filetype is properties or sh
-
an XPath if filetype is xml
Example from sentinel-volte-upgrade-rhino-config.json
[ { "filename": "config/config_variables", "filetype": "properties", "settings": [ { "name": "HEAP_SIZE", "type": "minimum", "units": "m", "value": 6144 } ] }, { "filename": "config/rhino-config.xml", "filetype": "xml", "settings": [ { "xpath": ".//memdb[jndi-name='ManagementDatabase']/committed-size", "type": "minimum", "units": "M", "value": 400 } ] } ]
Currently the only supported values for type are value and minimum.
For the example above, orca will change the value of the HEAP_SIZE property to 6144m if the current value is lower than that. It will also change the committed-size of the memdb element whose jndi-name is 'ManagementDatabase'.
Concretely it will change
<memdb>
  <jndi-name>ManagementDatabase</jndi-name>
  <committed-size>128M</committed-size>
</memdb>
to
<memdb>
  <jndi-name>ManagementDatabase</jndi-name>
  <committed-size>400M</committed-size>
</memdb>
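The examples above cover the properties and xml filetypes. A settings entry for an sh file such as read-config-variables would take the same property-style form described earlier; the fragment below is an assumption for illustration only, and both the variable name and the file path are hypothetical placeholders.
[
  {
    "filename": "read-config-variables",
    "filetype": "sh",
    "settings": [
      { "name": "EXAMPLE_VARIABLE", "type": "value", "value": "example-value" }
    ]
  }
]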
Creating the bundle
To create a Rhino only upgrade bundle:
-
get the latest version of orca-bundler.zip from operational-tools
-
decompress it
-
run generate-orca-bundle rhino-upgrade to bundle a Rhino install with orca
$ unzip orca-bundler.zip
$ ./generate-orca-bundle rhino-upgrade --out <the upgrade bundle>.zip \
    --rhino-package <new Rhino package> \
    --rhino-config-json <Rhino config properties to apply>
A new license can be installed during a Rhino update by including it in the bundle.
$ unzip orca-bundler.zip
$ ./generate-orca-bundle rhino-upgrade --out <the upgrade bundle>.zip \
    --rhino-package <new Rhino package> \
    --rhino-config-json <Rhino config properties to apply> \
    --license <new-license.license>
To also update the JDK (Java) used to run rhino, include a new JDK package in the bundle.
$ unzip orca-bundler.zip
$ ./generate-orca-bundle rhino-upgrade --out <the upgrade bundle>.zip \
    --rhino-package <new Rhino package> \
    --rhino-config-json <Rhino config properties to apply> \
    --license <new-license.license> \
    --java-package <jdk-package>
The orca-bundler will create the zip file specified in <the upgrade bundle>.zip
with the Rhino install package, the orca tool, the optional packages and a README file explaining how to install the Rhino only upgrade. It is important to review this file and add or change any information before handing the upgrade to the customer.
Use the flag --skip-rhino-version-check in the bundler while creating test upgrade bundles against non-released versions of Rhino.
Applying a Rhino only upgrade bundle
Use the orca tool to apply a Rhino only upgrade (see: orca requirements).
Applying a Rhino only upgrade requires administrative access to hosts running Rhino. Be sure the credentials used in ssh trusted connections are valid.
Orca assumes the $HOME directory of the remote hosts is the base directory. Use the option --remote-home-dir or -r if the path is different. The upgrade requires at least 1.5GB of free disk space on the first node:
-
0.5GB for the upgrade bundle
-
0.5GB for the installer to run
-
0.5GB for logs
Apply a Rhino only upgrade
The steps for applying a Rhino only upgrade are:
-
download the Rhino only upgrade bundle to the management host
-
ensure the management host has ssh access to the cluster hosts running Rhino
-
decompress the upgrade bundle and cd to the upgrade bundle
-
read the README for instructions and for details of the upgrade
-
verify Rhino is running
Upgrade Rhino on all nodes with the orca command:
./orca --hosts host1,host2,host3 rhino-only-upgrade --no-pause
The recommended multi-stage upgrade process
Apply a Rhino only upgrade in multiple stages by using the --pause
option.
./orca --hosts host1,host2,host3 rhino-only-upgrade --pause
All of the hosts (in this case host1
, host2
, and host3
) are prepared for the upgrade. Only the first host (host1
) will have the new Rhino version active. You need to arrange your testing appropriately to route traffic to this specific host.
Once you are satisfied that the upgrade is working as required, continue the upgrade by:
./orca --hosts host1,host2,host3 rhino-only-upgrade --continue
Use the same list of hosts, as the continuation process needs to access the same first host and the same set of prepared hosts as before.
This will then migrate the other nodes of the cluster, so they are all using the new version of Rhino.
Upgrade Rhino Element Manager
Upgrade bundle for REM
An upgrade bundle for REM is a self-contained package with:
-
the orca bundle
-
the new version of Rhino Element Manager
-
the Sentinel plugins if necessary
-
the README
An upgrade bundle has the name
rem-<version>-upgrade.zip
Example: rem-1.5.0.5-upgrade.zip
Item | Description |
---|---|
README | Contains information about the REM upgrade: how to use it, what components it changes, and what problems it fixes |
orca | The orca tool |
helpers directory | Contains the set of scripts used by orca |
core directory | Contains the set of scripts used by orca |
workflows directory | Contains the set of scripts used by orca |
resources directory | Contains the properties used by orca |
licenses directory | License information for third party libraries used by the patch runner |
packages directory | Contains the package files and the packages.cfg |
The package files directory contains:
-
the packages.cfg file
-
the REM product zip, e.g. rhino-element-manager-1.5.0.5.zip
-
the REM plugins, e.g. volte-sentinel-element-manager-2.7.0.7.em.jar, sentinel-gaa-em-2.7.0.7.em.jar
Requirements
An upgrade to REM uses the orca tool but since it is controlling REM nodes rather than Rhino, it has different path requirements. See requirements. Applying any upgrade requires administrative access to the hosts, so be sure the credentials used in ssh trusted connections are valid.
Disk space requirements when upgrading REM are minimal: no more than 100 MB.
The CATALINA_HOME environment variable should be set correctly on the host running Rhino Element Manager.
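A quick way to confirm this on each REM host is shown below; the rem.war location matches the Tomcat paths in the example output later in this section and may differ in your environment. Note that CATALINA_HOME must be visible to non-interactive shells for this check to work.
# Check that CATALINA_HOME is set and that REM is deployed under it
ssh remhost1 'echo "CATALINA_HOME=$CATALINA_HOME" && ls "$CATALINA_HOME/webapps/rem.war"'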
What should be present in a REM upgrade
REM contains the Rhino Element Manager used to manage Rhino cluster installations, and it can also contain plugins. Currently the plugins for the VoLTE solution are:
-
sentinel-volte-element-manager
-
sentinel-gaa, also known as the NAF filter
-
sis-em
The sentinel-volte-element-manager contains:
-
Provisioning REST API
-
XCAP Server
-
Specific functionalities to manage and configure Sentinel Features
A REM upgrade can be used to upgrade any or all of those. The only requirement is that they should be compatible among themselves and the REM package should be compatible with the installed Rhino version.
Creating a REM upgrade package
Before starting:
-
get the new REM version for which you want to create the upgrade package
-
get the required plugins
-
make sure the plugins are compatible by testing them together
-
make sure the plugins are compatible with the installed products (sentinel-volte-element-manager and the NAF filter).
The orca bundler provides the ability to create a REM upgrade package including REM and any optional plugins.
To create an upgrade package do:
-
get the latest version of orca-bundler.zip from operational-tools
-
decompress it
-
run
generate-orca-bundle rem-upgrade
to bundle REM and plugins together.
$ unzip orca-bundler.zip
$ ./generate-orca-bundle rem-upgrade --rem-package <the REM package> --plugin <plugin one> --plugin <plugin two>
The --rem-package and --plugin parameters are optional, but at least one of them must be given to generate a bundle, and the --plugin parameter can be repeated as many times as necessary.
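For example, using the package and plugin file names mentioned earlier in this section (illustrative only; use the versions you actually have):
$ ./generate-orca-bundle rem-upgrade --rem-package rhino-element-manager-1.5.0.5.zip \
    --plugin volte-sentinel-element-manager-2.7.0.7.em.jar \
    --plugin sentinel-gaa-em-2.7.0.7.em.jar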
Applying the upgrade to REM
Applying an upgrade to a production environment is simple. The normal steps are:
-
create the upgrade bundle for REM
-
copy the upgrade bundle for REM to the management host
-
ensure the management host has ssh access to the cluster hosts on which Rhino Element Manager is installed
-
decompress the upgrade bundle
-
read the README for instructions and for details of the upgrade
Running an upgrade for REM
Run
./orca --hosts remhost1,remhost2 upgrade-rem packages
In summary orca will:
-
check the connection to the hosts
-
For each host
-
check for the existence of any plugins, match them against the specified ones and ask for confirmation to proceed
-
backup any files to be changed by the upgrade (package and/or plugins)
-
stop the web-server on that host
-
upgrade the package and plugins as appropriate
-
restart the web-server on that host
-
You can skip the prompt questions by passing the --no-prompt
parameter.
./orca --hosts remhost1,remhost2 upgrade-rem packages --no-prompt
The default backup path is $HOME/rem-backup
. You can specify another one using the parameter --backup-dir
./orca --hosts remhost1,remhost2 upgrade-rem packages --no-prompt --backup-dir <other backup dir>
To roll back the upgrade
To roll back to the most recent backup:
./orca --hosts remhost1,remhost2 rollback-rem
To rollback to a specific backup (must be done on each host separately):
-
First identify the backup ID to roll back to by examining the status output.
./orca -H remhost1 status ... backups = (#1) 20180117-123500 contains REM:2.6.1.0 Plugins:sentinel-gaa-em-2.8.0.1,sentinel-volte-element-manager-2.8.0.1 ...
The backup ID is the number in brackets, without the #
(so 1
in the above example).
-
Then, run the rollback-rem command with the --target <backup ID> parameter:
./orca --hosts remhost1 rollback-rem --target 1
To cleanup old backups
-
First identify the backup ID(s) to clean up by examining the status output.
./orca --hosts remhost1 status ... backups = (#1) 20180117-123500 contains REM:2.6.1.0 Plugins:sentinel-gaa-em-2.8.0.1,sentinel-volte-element-manager-2.8.0.1 ...
The backup ID is the number in brackets, without the #
(so 1
in the above example).
-
Then, run the
cleanup-rem
command:
./orca --hosts remhost1 cleanup-rem --backups 1,2
The backups are specified as a comma-separated list of integers without any spaces.
This procedure must be done on each host separately.
Example of rhino-element-manager upgrade
This example shows an upgrade from rhino-element-manager-1.5.0.4 to rhino-element-manager-1.5.0.5
$ ./orca --hosts vm-makara-01 upgrade-rem --rem-package rhino-element-manager-1.5.0.5.zip
Verify connections
Verifying connection to vm-makara-01 Applying REM upgrade rhino-element-manager-1.5.0.5.zip to host vm-makara-01 Running command on host vm-makara-01: upgrade-rem 'tmpj2xCvO'
Check for specific plugin and ask for confirmation
These plugins are installed but no replacement appears to be specified:
volte-sentinel-element-manager-2.7.0.9.em.jar
Continue anyway? (y/N): y
Unzip the package and backup the existing installation
Archive: /opt/tomcat/apache-tomcat-8.5.29/webapps/rem.war
inflating: /tmp/remupg.OOPq5R/old/WEB-INF/classes/log4j.properties
Archive: /tmp/remupg.OOPq5R/rhino-element-manager-1.5.0.5/admin/resources/rem.war
inflating: /tmp/remupg.OOPq5R/new/WEB-INF/classes/log4j.properties
Stopping Tomcat...
Using CATALINA_BASE: /opt/tomcat/apache-tomcat-8.5.29
Using CATALINA_HOME: /opt/tomcat/apache-tomcat-8.5.29
Using CATALINA_TMPDIR: /opt/tomcat/apache-tomcat-8.5.29/temp
Using JRE_HOME: /opt/java/jdk1.8.0_162
Using CLASSPATH: /opt/tomcat/apache-tomcat-8.5.29/bin/rem-rmi.jar:/opt/tomcat/apache-tomcat-8.5.29/bin/bootstrap.jar:/opt/tomcat/apache-tomcat-8.5.29/bin/tomcat-juli.jar
Using CATALINA_PID: /opt/tomcat/apache-tomcat-8.5.29/tomcat.pid
Tomcat stopped.
Backing up previous version of REM to /home/rhino/rem-backup/20180618-135604...
Install the new version
Installing new version of REM...
Installation complete.
Restarting Tomcat...
Using CATALINA_BASE: /opt/tomcat/apache-tomcat-8.5.29
Using CATALINA_HOME: /opt/tomcat/apache-tomcat-8.5.29
Using CATALINA_TMPDIR: /opt/tomcat/apache-tomcat-8.5.29/temp
Using JRE_HOME: /opt/java/jdk1.8.0_162
Using CLASSPATH: /opt/tomcat/apache-tomcat-8.5.29/bin/rem-rmi.jar:/opt/tomcat/apache-tomcat-8.5.29/bin/bootstrap.jar:/opt/tomcat/apache-tomcat-8.5.29/bin/tomcat-juli.jar
Using CATALINA_PID: /opt/tomcat/apache-tomcat-8.5.29/tomcat.pid
Tomcat started.
REM upgrade completed.
Done on vm-makara-01
Example of rhino-element-manager plugin upgrade
This example shows an upgrade from volte-sentinel-element-manager-2.7.0.8.em.jar to volte-sentinel-element-manager-2.7.0.9.em.jar
$ ./orca --hosts vm-makara-01 upgrade-rem --plugin upgrade/volte-sentinel-element-manager-2.7.0.9.em.jar
Verify connections
Verifying connection to vm-makara-01
Applying REM upgrade Plugins:volte-sentinel-element-manager-2.7.0.9.em.jar to host vm-makara-01
Running command on host vm-makara-01: upgrade-rem 'tmpOMDvj0'
Backup the current installation and install the new version
Stopping Tomcat...
Using CATALINA_BASE: /opt/tomcat/apache-tomcat-8.5.29
Using CATALINA_HOME: /opt/tomcat/apache-tomcat-8.5.29
Using CATALINA_TMPDIR: /opt/tomcat/apache-tomcat-8.5.29/temp
Using JRE_HOME: /opt/java/jdk1.8.0_162
Using CLASSPATH: /opt/tomcat/apache-tomcat-8.5.29/bin/rem-rmi.jar:/opt/tomcat/apache-tomcat-8.5.29/bin/bootstrap.jar:/opt/tomcat/apache-tomcat-8.5.29/bin/tomcat-juli.jar
Using CATALINA_PID: /opt/tomcat/apache-tomcat-8.5.29/tomcat.pid
Tomcat stopped.
Installing plugins...
Installation complete.
Restarting Tomcat...
Using CATALINA_BASE: /opt/tomcat/apache-tomcat-8.5.29
Using CATALINA_HOME: /opt/tomcat/apache-tomcat-8.5.29
Using CATALINA_TMPDIR: /opt/tomcat/apache-tomcat-8.5.29/temp
Using JRE_HOME: /opt/java/jdk1.8.0_162
Using CLASSPATH: /opt/tomcat/apache-tomcat-8.5.29/bin/rem-rmi.jar:/opt/tomcat/apache-tomcat-8.5.29/bin/bootstrap.jar:/opt/tomcat/apache-tomcat-8.5.29/bin/tomcat-juli.jar
Using CATALINA_PID: /opt/tomcat/apache-tomcat-8.5.29/tomcat.pid
Tomcat started.
REM upgrade completed.
Done on vm-makara-01
SLEE Configuration and Sentinel upgrades
This document explains what configurations exist inside the Sentinel products and makes references to the tools that operate on those in the scope of minor upgrade and major upgrade.
General information
The SLEE specification defines 3 types of configuration:
-
Profiles
-
Resource Adaptor (RA) properties
-
Log tracing levels.
Profiles are defined as tables and require a profile specification, which can have several tables associated with it. The attributes defined in a profile specification can be of complex types, and the way the data is persisted and retrieved is defined by Java code which is loaded into Rhino and referenced by SBBs or SBB parts. The Java code for profiles implements the profile interfaces defined in the SLEE specification. Profiles can be configured via the rhino-console or using MBean operations, which under the hood call the methods defined in the profile interface that SLEE defines.
The RA properties are defined as part of the RA code and SLEE defines a different interface to set and read those properties. Each RA entity from the same RA can have different property values for the same properties.
Rhino also has utilities that allow the data to be exported to the file system and later re-imported to the same or another system. This data is in the form of XML files. Each profile table has a file with the profile contents and the profile default values, but it does not indicate which profile spec that profile table was created from. The RA properties are in different files and are set when the RA entity is created. This difference between RAs and profile tables requires complex handling of the data in the scope of major upgrades.
The transformation workflow
Transformation is only required for major upgrade, since by definition the only configuration changes allowed in a minor upgrade are ones that do not give rise to a need to perform transformation.
The image below shows the high level overview of how orca
handles the SLEE configuration between two versions.
The system with version A has a set of configuration schemas and configuration data. Version B changes the configuration schema because new features require it.
Schema changes can:
-
add new profile attributes and RA properties
-
rename existing profile attributes and/or RA properties
-
change the type of profile attributes and/or RA properties
-
remove profile attributes and/or RA properties
-
provide new default values.
When trying to import the data from system version A into system version B, some configuration won’t apply due to schema changes, for example removing a profile attribute. The system version B will have the new schema in place and expects the data to conform to it. In order to import the version A configuration into system version B, it is necessary to first adjust that data.
orca
persists the version A configuration and, via a set of transformation rules, changes the version A configuration to be compatible with the version B schema. The rules are expressed in Java using a transformation API and are applied during the upgrade procedure. Once the configuration is compatible with system version B, it is imported and system B will have configuration equivalent to system version A.
Two tools are used to persist the configuration: rhino-export
and slee-data-migration.
rhino-export
will persist the profile tables and their profiles, while slee-data-migration will persist the RA configuration, profile specifications and log settings.
To give the transformation process as much information as possible, these two tools are actually used to package up the configuration data both from the original system (version A) and from the uplevel system (version B) after the new version has been installed, but before it has been configured.
Thus there are four data files used as input to the slee-data-transformation
process. The slee-data-transformation
will create a transformed copy of the profile table attributes and the data produced by the rhino-export of version A. It will also create a transformed copy of the data produced from version A by the slee-data-migration
. Transformation is not applied to the data persisted from version B, but the transformation rules can consult it to decide how to modify the version A data.
Once the transformation is done, orca
will import the transformed profiles and call the slee-data-migration
to import the transformed RAs configuration and the (unmodified) log settings.
Certain parts of the configuration, specifically the feature-scripts, are too complex to be handled by the general data transformation process, and are therefore covered by a separate process.
SLEE Data Migration
Description
The slee-data-migration
tool is responsible for persisting some parts of the system configuration:
-
profile specs
-
profile table names
-
RA properties
-
log settings.
We need to persist this configuration from the old installation in both minor and major upgrades.
In a major upgrade, the slee-data-migration
tool is also used to extract this same information from the newly installed but not yet configured uplevel system, to provide as additional information to the transformation process. The slee-data-transformation
is then passed the 2 sets of output from the slee-data-migration
tool, and it uses the profile specs and RA properties in that output to make a transformed configuration that conforms to the schema of the new installed system.
In both minor and major upgrades, the slee-data-migration
tool is then called again to apply the appropriate configuration back to the new installed system:
-
recreate the profile tables that are not in the new system but are present in the configuration data
-
apply the RA configuration
-
set the log settings back.
Usage options
The tool is meant to be used as part of minor and major upgrades rather than as a standalone tool; nevertheless, there may be cases where other tools need to invoke it, so its usage is described here. The package has a shell script that wraps the Java calls.
Option "-c (--client-dir)" is required Usage: data-migration.sh -c CLIENT_DIR <-e FILE | -i FILE [-r]> -c (--client-dir) CLIENT_DIR : Rhino client directory -e (--export) FILE : Save profile tables to the given file -h (--help) : Print usage information (default: false) -i (--import) FILE : Load profile tables from the given file -l (--log-file) LOG_FILE : File to log to -r (--recreate) : Destroy and recreate profile tables when importing (default: false) -s (--skip-feature-scripts) : Skip deleting the feature point and feature scripts tables (default: false)
The tool uses Rhino Management MBean interfaces to retrieve and restore the data from a running Rhino. The configuration is persisted in JSON format. The normal usage is:
retrieve existing configuration
data-migration.sh --client-dir <path to the Rhino client> --export <file to persist the configuration> --log-file <optional log file>
restore the configuration
data-migration.sh --client-dir <path to the Rhino client> --import <file to restore the configuration> --log-file <optional log file>
The --recreate
and --skip-feature-scripts
options are set differently depending on whether the tool is being used during a minor upgrade or a major upgrade.
For minor upgrade, orca
calls the tool with --recreate
. This has the effect of destroying and recreating the existing profile tables, so the resulting configuration is exactly the same as before. It is done this way because schema changes are not expected during a minor upgrade.
For major upgrade, orca
uses --skip-feature-scripts
. This way the new versions of the feature scripts are kept, because they are the correct ones for that version.
SLEE Data Transformation
Description
The slee-data-transformation
tool is responsible for converting the configuration data between major versions of Sentinel products.
The tool is an engine that executes a set of transformation steps defined in rules and operates on Profiles and RA properties. The rules are Java classes that use the transformation API and are executed against the profiles exported via rhino-export
and the files generated by the slee-data-migration tool.
As explained in SLEE Configuration and Sentinel upgrades the configuration data are retrieved from the rhino-export
and from the slee-data-migration
.
The transformation API can be found in Data Transformation API
Usage options
The tool was built to be used as part of Sentinel Upgrades, but can be used in isolation if required, and indeed developers who are writing rules will probably make extensive use of it whilst perfecting their rules.
For convenience, the tool is packaged as a single jar file slee-data-transformation-standalone.jar
with no need for any supporting libraries or shell scripts. It is run using a command:
java -jar slee-data-transformation-standalone.jar <required files, plus options>
There are 3 files which must be provided
-
the rules jar file, which holds the compiled version of the rules written using the transformation API
-
the path to the directory that holds the result of
rhino-export
from the system being upgraded
-
the pathname of the json file produced by
slee-data-migration
from the system being upgraded
A further 2 files are optional (when doing a major upgrade orca
always passes them, but when developing rules they may be unnecessary)
-
the path to the directory that holds the result of
rhino-export
from the uplevel system
-
the pathname of the json file produced by
slee-data-migration
from the uplevel system
The command options are:
-
-l LOG_FILE
or --log-file LOG_FILE
- pathname of a file to write logging information to
-
-o OUTPUT_DIR
or --output-dir OUTPUT_DIR
- path of a directory to write the transformed data to. If this is not provided then the entire transformation will still be done, producing the same errors and warnings as it goes, but no data files will be written out, thus providing the ability to do a dry-run of the transformation
-
-v
or --verbose
- causes verbose output, detailing all the warnings that occur during the transformation.
As a convenience (principally for the orca-bundler tool), the data-transformation tool will also allow just a single input rules jar file to be passed, without passing any export files. In this mode the transformation is meaningless (and thus you cannot combine this single-parameter option with the -o option to write out the result), but the tool will still verify that the rules files can be loaded, and set the exit code to a non-zero value if that is not possible.
The Transformation API
The Transformation API is contained in an abstract Java class called TransformationApi
, and a developer uses the API by providing a concrete Java class that extends TransformationApi
, and providing their implementation of the rules() method.
public class SimpleTransformation extends TransformationApi { ... }
By extending the API class in this way, all the methods of the API are readily available as local methods.
It is important to understand how the transformation process makes use of the rules class provided by the developer. The engine first loads in the data files from rhino-export
and slee-data-migration
, and then makes a call into the rules() method of the developer’s class. This method returns an array of rules that are later processed and interpreted by the engine. The rules are applied as a whole, not necessarily sequentially in the order they were written, and the rules express the intent of what is to be done, rather than being direct calls to API methods.
As a concrete example of this, the rules may express actions that end up both changing and deleting a particular piece of data. The net effect of this will always be that the data ends up being deleted - it does not matter what order the rules are presented in, and the change action will always be ineffective. (Yes, you should get warnings that tell you that the change rule was ignored in this case, but it’s entirely possible that with a slightly different input configuration both rules could have applied to different pieces of data, and so not have been in conflict).
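For illustration, a fragment along these lines (the table and attribute names are invented for the example) contains both a change and a delete for the same attribute; whichever order the two action rules are written in, the net effect is that the attribute is removed, and the setAttribute rule is reported as ignored.
profileTable("exampleTable",
  setAttribute("exampleAttribute", "new value"),
  doNotImportAttribute("exampleAttribute")
),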
As another example, rules are always expressed using the original name of something, even if there is another rule present that applies renaming to that object.
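For example, in a sketch like the following (names invented for the example), the second rule still refers to the table by its original name, even though another rule renames it:
profileTable("originalTableName",
  renameProfileTable("newTableName")
),
profileTable("originalTableName",
  setAttribute("someAttribute", "someValue")
),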
The API consists of a number of parts:
-
the
rules()
method, which every user of the API must override, and which returns the array of rules that drive the transformation
-
factory methods for various object classes used as parameters by the API, used for generating or matching purposes
-
methods that define rules that provide contexts that control when their child actions apply
-
methods that define rules that express actions that are performed on the data triggered by their enclosing contexts
-
creation methods
-
query methods that allow reading values from the original data, or from the uplevel data
-
some miscellaneous methods
-
the
ValueFunction
class whose subclasses can be used to provide dynamic value evaluation to rules
-
some pre-written
ValueFunction
implementations that provide common manipulations used in transformations, such as adding or removing array items
Many of the methods in the API, especially the factory methods and the rule definitions, have multiple overloaded implementations of the same named method, taking different numbers or types of parameters. This allows for flexibility in how the API is used, whilst still giving compile time checks that the appropriate type of data is being provided.
The various parts are used together as follows:
-
the engine calls the
rules()
method, which returns an array of rules
-
each rule in the array is a context rule, containing further context and action rules
-
rules have varying parameters, which are either inbuilt Java types, or objects that the factory methods create
-
the choice of rules to return can be influenced by the results of the query methods
-
some rules take
ValueFunction
parameters, which allow a value to be calculated dynamically
-
the dynamic function has access to the context of the rule it was being used as, and can also call the query methods
-
pre-written functions are provided for common manipulation requirements
A note about code layout
The way that rules are represented in the API is as methods that take an arbitrary number of parameters, corresponding to the arbitrary number of children contained in a context rule.
All examples given in the documentation follow these conventions, which you are recommended to follow in your own use of the API:
-
a context rule is written on multiple lines
-
the first line has the context rule method name, the opening parenthesis, and the fixed parameters that describe the context, ending with a trailing comma
-
each child rule is given its own line, and indented by 2 spaces from the enclosing context rule
-
each child rule line, except the last, ends in a comma (since these are really just separate parameters, and Java does not allow a trailing comma after the last parameter)
-
child contexts of course cause their own child rules to be indented by a further 2 spaces
-
after all the child rules, the final closing parenthesis gets a line to itself, lined up with the first character of the context rule name
-
The examples in these documents have been pulled from a working example of a use of the API. As such, the rules are part of an array of such rules, and the various child rules may not always include the last child rule for that particular context. Thus many of the fragments shown here may end with a trailing comma, which is not strictly part of the fragment when viewed in isolation.
The rules() method
This is the method that all API users must implement. Its purpose is to return the rules that the engine will execute to drive the transformation.
The return value is an array, which makes the common use case very simple, since the Java language has special syntax for declaring arrays.
@Override
public Rule[] rules(RulesContext rulesContext) {
  Rule[] rules = {
    profileTable("example",
      doNotImportProfileTable()
    )
  };
  return rules;
}
For more complex cases, where you may want to build up the exact list of rules that are returned in a dynamic fashion, an alternative approach is to use a collection such as an ArrayList, and then to convert that to an array using toRulesArray()
when you are ready to return it.
@Override
public Rule[] rules(RulesContext rulesContext) {
  List<Rule> list = new ArrayList<>();
  list.add(
    profileTable("example",
      doNotImportProfileTable()
    )
  );
  return toRulesArray(list);
}
The rules can be built using information about the exported data by using the various query methods in the Transformation API. In addition, the rules()
method is passed the RulesContext object, which provides access to information that details the full set of components that are part of the downlevel and uplevel installations.
The RulesContext
The RulesContext
object has methods getDownlevelComponentsByDU()
and getUplevelComponentsByDU()
that give access to mappings that detail the full set of components that are part of the downlevel and uplevel installations. These mappings can be used to determine exactly which versions of a given component are present, and thus to provide rules that are specific to the actual versions present.
Since the mappings provide ComponentID
objects, which provide the version numbers as a multi-component dotted string, the Version
class can be used to make comparing such strings easy.
@Override
public Rule[] rules(RulesContext rulesContext) {
  List<Rule> list = new ArrayList<>();

  final String componentOfInterest = "SentinelDiameterMediationFeatureSPI";
  Version downlevelVersion = findVersion(rulesContext.getDownlevelComponentsByDU(), componentOfInterest);
  Version uplevelVersion = findVersion(rulesContext.getUplevelComponentsByDU(), componentOfInterest);

  if (downlevelVersion.greaterThanOrEqual(new Version("1.2.3"))
      && downlevelVersion.lessThanOrEqual(new Version("1.3.5"))
      && uplevelVersion.greaterThanOrEqual(new Version("2"))) {
    list.add(
      profileTable("example",
        doNotImportProfileTable()
      )
    );
  }

  return toRulesArray(list);
}

private Version findVersion(Map<DeployableUnitID, Set<ComponentID>> dus, String componentOfInterest) {
  String versionString = null;
  for (DeployableUnitID duId : dus.keySet()) {
    // Each deployable unit maps to a set of components, so iterate over that set
    for (ComponentID componentID : dus.get(duId)) {
      if (componentID.getName().equals(componentOfInterest)) {
        versionString = componentID.getVersion();
      }
    }
  }
  return new Version(versionString);
}
A first look at real rules
In this simple example, the engine will find every profile whose name ends with "MySuffix" in profile table "myTable", and set the value of its "myAttribute" attribute to "myNewValue".
profileTable("myTable", profileName(regex(".*MySuffix"), setAttribute("myAttribute", "myNewValue") ) )
How each of the API features shown in this small fragment work together is explained in more detail below.
The factory methods
The context rules in the API take matching parameters, and many of the action rules take a generator parameter - which is an object that can be asked for the exact value to be used when the action is performed. These matching and generator objects are implemented by the various XXXMatcher
classes and ObjectGenerator
inner classes respectively, and spreading such objects throughout your code can make it hard to read.
To simplify things, there are a number of factory methods that take simple parameters and return the appropriate matcher or generator, and you will normally never have to deal with the real classes themselves.
array()
The array()
factory methods are suitable to pass to rules that deal with attributes that have an array type.
Overrides are provided that construct an array from:
-
nothing at all (an empty array)
-
an arbitrary number of strings
-
an arbitrary number of objects
-
an arbitrary number of other generators
setAttribute("names", array()), setAttribute("names", array("just one value")), setAttribute("names", array("hello", "world")),
anyString()
The anyString()
factory method produces a matcher object that matches any string. It is a better alternative than using a regex matcher that can also be constructed to match anything.
profileSpec("Name", anyString(), "1.0.5", renameProfileSpec("z") ),
regex()
The regex()
factory method creates a StringMatcher
object which matches string values against the given regular expression. Note: in order to match, the regular expression must match the full value, rather than just a substring.
profileTable(regex("any[thing|one]"), setAttribute("a", "b") ),
Note that regular expressions are used for 'matching', not merely for 'finding' - the pattern given needs to define the whole of the name that it will match, not just part of it. Matching is done in a case sensitive manner.
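As a hypothetical illustration, regex("Post") will not match a table named "default_Post_Example" because the pattern only covers part of the name; a pattern that covers the whole name is needed:
// Will not match a table named "default_Post_Example": the pattern must cover the full name
profileTable(regex("Post"),
  setAttribute("a", "b")
),
// Matches any table whose full name contains "Post"
profileTable(regex(".*Post.*"),
  setAttribute("a", "b")
),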
component()
The component()
factory methods produce ComponentMatcher
objects used by ProfileSpec
context rules.
Each SLEE component is identified by name, vendor and version, and there are variants of this factory method that allow each of these to be specified as a constant string, or as a regex
or anyString
object.
profileSpec(component("Name", "Vendor", regex("1\\.0\\.\\d*")), renameProfileSpec("z") ),
However, use of the component
method is optional - all rules that take a component
can also take the name, vendor, and version values as individual parameters.
The context rules
The context rules all specify some matching requirement, and then contain the list of child rules that are applied when that matching requirement is met. The child rules may themselves be any mixture of action rules and further context rules (though not all nested contexts make sense). Whether the child rules get executed is controlled by whether the context can be matched, but child action rules may sometimes take parameters that allow them to affect items that are themselves outside of their immediately enclosing context.
profileSpec()
The profileSpec()
context rule is used to identify which profile specs need to be matched for the child rules to be applied.
The spec can be given either using the result of a call to the component()
factory method, or the three parameters that would have been passed to that method can be passed to this rule directly.
profileSpec(component("a", "b", "c"), renameProfileSpec("z"), changeAttributeType("a", "type2"), changeAttributeType("b", String.class) ), profileSpec(component("a", "b", regex("ccc")), renameProfileSpec("z") ), profileSpec(component("a", regex("bbb"), "c"), renameProfileSpec("z") ), profileSpec(component("a", regex("bbb"), regex("ccc")), renameProfileSpec("z") ), profileSpec(component(regex("aaa"), "b", "c"), renameProfileSpec("z") ), profileSpec(component(regex("aaa"), "b", regex("ccc")), renameProfileSpec("z") ), profileSpec(component(regex("aaa"), regex("bbb"), "c"), renameProfileSpec("z") ), profileSpec(component(regex("aaa"), regex("bbb"), regex("ccc")), renameProfileSpec("z") ), profileSpec("a", "b", "c", renameProfileSpec("z") ), profileSpec("a", "b", regex("ccc"), renameProfileSpec("z") ), profileSpec("a", regex("bbb"), "c", renameProfileSpec("z") ), profileSpec("a", regex("bbb"), regex("ccc"), renameProfileSpec("z") ), profileSpec(regex("aaa"), "b", "c", renameProfileSpec("z") ), profileSpec(regex("aaa"), "b", regex("ccc"), renameProfileSpec("z") ), profileSpec(regex("aaa"), regex("bbb"), "c", renameProfileSpec("z") ), profileSpec(regex("aaa"), regex("bbb"), regex("ccc"), renameProfileSpec("z") ),
profileTable()
The profileTable()
context rule is used to identify which profile table needs to be matched for the child rules to be applied.
The name of the table can be specified directly as a string, or via a matcher such as returned by the regex factory.
A profileTable
context rule can be nested inside a profileSpec
context if desired.
profileTable("aa", renameProfileSpec("z") ), profileTable(regex("xx"), renameProfileSpec("z") ),
profileName()
The profileName()
context rule is used to identify which profile needs to be matched for the child rules to be applied.
The name of the profile can be specified directly as a string, or via a matcher such as returned by the regex factory.
A profileName
context rule can be nested inside profileSpec
or profileTable
contexts if desired.
profileTable("table", profileName("any", doNotImportProfile() ), profileName(regex("[Aa][Nn][Yy]"), doNotImportProfile() ) ),
ifAttributeValue()
The ifAttributeValue()
context rule will have its child rules applied only if the specified profile attribute matches the specified value.
The name of the attribute is always specified as a string, the value can be specified directly as a string, or via a matcher such as returned by the regex factory.
An ifAttributeValue
context rule can be nested inside any of the profile…
or ifAttribute…
contexts. In fact the way to match on multiple attributes at once is to nest ifAttributeValue
rules.
ifAttributeValue("a", "99", setAttribute("p", "q") ), ifAttributeValue("a", regex("99|100|101"), setAttribute("p", "q") ),
ifAttributeValueIsNull()
The ifAttributeValueIsNull()
context rule is a specialized form of the ifAttributeValue()
rule that matches on a value being null.
ifAttributeValueIsNull("x", setAttributeToNull("y") )
raEntity()
The raEntity()
context rule is used to select on entity names.
The name of the entity can be specified directly as a string, or via a matcher such as returned by the regex factory.
raEntity("a", doNotImportRaEntity(), ),
ifPropertyValue()
The ifPropertyValue()
context rule will have its child rules applied only if the specified RA configuration property matches the specified value.
The name of the property is always specified as a string, the value can be specified directly as a string, or via a matcher such as returned by the regex factory.
An ifPropertyValue
context rule can be nested inside an raEntity
context, or another ifPropertyValue
one (which allows matching on multiple properties at once).
ifPropertyValue("property", "value", setProperty("any", 99) ), ifPropertyValue("property", regex("hello*"), setProperty("any", 100) ),
The action rules
addAttribute()
The addAttribute()
action rule adds a new attribute by specifying its name and type, and optionally a default value to be used when no other more specific value is given, and when an attribute value was not already present within the export data.
The name is always given as a string, the type can be a string, or preferably a Java class whose type will be used automatically (which avoids the possibility of mistyping the string).
The optional default value can be almost any type that is found convenient, and no checking is done that it actually matches the type that you specify. A particularly powerful feature of the API is being able to pass a ValueFunction
object as the default value, which allows the value to be calculated when the rule is actually applied.
By default, new attributes are expected to be non-serialised, which is the case for the common types such as strings, integers and the like. For custom types which need to be marked as serialised in the profile export XML, use one of the addAttribute()
methods which take a serialisation version parameter, all of which create a serialised attribute.
profileTable("any", // The type parameter may be spelt out as a string addAttribute("a", "java.lang.String", "c"), addAttribute("array", "java.lang.String[]", array("c")), addAttribute("a", "java.lang.String", exampleFunction()), addAttribute("a", "java.lang.boolean", true), addAttribute("a", "java.lang.int", 99), // The type parameter can also be passed as a class rather than as a string addAttributeAsNull("a", String.class), addAttribute("a", String.class, "c"), addAttribute("array", String[].class, array("c")), addAttribute("a", String.class, exampleFunction()), addAttribute("a", boolean.class, true), addAttribute("a", int.class, 99), // Serialised attributes must explicitly be marked as such addAttributeAsNull("a", "com.example.CustomType", "2.4.0"), addAttribute("a", "com.example.CustomType", "2.4.0", exampleFunction()), addAttribute("a", "com.example.CustomType", "2.4.0", "\01\02\03\04"), addAttribute("array", "com.example.CustomType[]", "2.4.0", array("\01", "\02")), // For serialised attributes which don't have an explicit serialisation version, pass null as the version addAttributeAsNull("a", "com.example.CustomType", null), addAttribute("a", "com.example.CustomType", null, exampleFunction()), addAttribute("a", "com.example.CustomType", null, "\01\02\03\04"), addAttribute("array", "com.example.CustomType[]", null, array("\01", "\02")) ),
addAttributeAsNull()
These rules are specialized forms of the addAttribute()
rule, where the default value is a null.
profileTable("any", addAttributeAsNull("a", "java.lang.String"), addAttributeAsNull("a", String.class) ),
doNotImportAttribute()
The doNotImportAttribute()
action deletes a named attribute from the exported data for a profile table.
profileTable("any", doNotImportAttribute("a") ),
setAttribute()
The setAttribute()
action rule sets a named attribute to a particular value.
Overrides on this method allow the value to be specified in a large variety of ways, and no checking is done that the value is of the specific type that the profile stores. As with addAttribute
, the value can also be passed via a ValueFunction
object, which allows the value to be calculated when the rule is actually applied.
profileTable("table", profileName(regex("qq"), setAttribute("names", array()), setAttribute("names", array("just one value")), setAttribute("names", array("hello", "world")), setAttribute("name", exampleFunction()), setAttribute("name", false), setAttribute("name", 42), setAttribute("names", new String[]{"1","2","3"}), setAttribute("names", new Integer[]{1,2,3}), setAttribute("names", new int[]{1,2,3}), setAttribute("names", new long[]{1,2,3}), setAttribute("names", new Object[]{1,false,"third",this}), // As well as defining our own functions, there are some utility ones setAttribute("names", functionArrayAppend("extra")), setAttribute("names", functionArrayRemove("excess")), setAttribute("name", functionUplevelProfileAttribute()), setAttribute("names", functionUplevelProfileAttributeArray()), setAttribute("name", functionForDefaultProfileAndOthers("value for default profile", "value for all other profiles")), setAttribute("name", functionForDefaultProfileAndOthers("value for default profile", exampleFunction())), setAttribute("name", functionForDefaultProfileAndOthers(exampleFunction(), "value for all other profiles")), setAttribute("name", functionForDefaultProfileAndOthers(exampleFunction(), exampleFunction())), // These two rules are equivalent, so the second is an artificial example setAttribute("name", 99), setAttribute("name", functionForConstant(99)) ) ),
setAttributeToNull()
This is a specialized form of the setAttribute
action rule that uses a value of null.
setAttributeToNull("y")
renameAttribute()
The renameAttribute()
rule changes the name of an attribute, without affecting its type or value. Both original name and new name are given as strings.
renameAttribute("s", "t"),
changeAttributeType()
The changeAttributeType()
rule changes the type of the named attribute.
The name is specified as a string, and the type as either a string or the Java class of the type.
changeAttributeType("a", "type2"), changeAttributeType("b", String.class)
doNotImportProfile()
The doNotImportProfile()
rule deletes the profile from the export data.
Deleting the profile from the export data does not mean that it is always deleted from the system - data that is not in the export can still be present in the uplevel system if the installation of the new version recreates the data that is not provided by the imported export.
profileTable("table", profileName("profile", doNotImportProfile() ) ),
doNotImportProfileTable()
The doNotImportProfileTable()
rule removes an entire table from the export data.
Like deleting a profile, removing a table from the export may not mean that it is deleted from the system if the uplevel installation creates a new version of the table.
profileTable("table", profileName("any", doNotImportProfileTable() ) ),
renameProfile()
The renameProfile()
rule renames a profile.
The new name can be a constant value, or can be a ValueFunction
which, since it is passed the GeneratorContext, allows the existing name to be used in the production of the new name.
renameProfile("x"), renameProfile(new ValueFunction() { public Object evaluate(GeneratorContext context) { return "y"; } })
renameProfileTable()
Renames a profile table. The new name is only supported as a constant string.
renameProfileTable("y"),
renameProfileSpec()
Renames a profile spec. The new name is only supported as a constant string.
renameProfileSpec("z")
addProperty()
Adds a property to an RA entity.
Takes the name of the new property, its type (either as a string, or better as the class of the type), and the default value to be used when no other value is provided, and when the property is not already set in the export data. That value can be provided as whatever type is most convenient, which does not need to match the actual defined type of the property (just to be compatible with it, so for example you can pass a numeric parameter to a property that is stored as a string, and the string representation of the number will be used).
This action rule needs to be inside an raEntity context.
addProperty("name", "type", true), addProperty("name", "type", -101), addProperty("name", "type", "hello world"), addProperty("name", boolean.class, true), addProperty("name", long.class, -101), addProperty("name", String.class, "hello world"),
addPropertyAsNull()
This is a specialized form of addProperty
that has null as the default value.
addPropertyAsNull("name", "type"), addPropertyAsNull("name", int.class),
doNotImportProperty()
This action rule deletes an RA configuration property from the exported data.
Deleting the property from the export data does not mean that it is always deleted from the system - data that is not in the export can still be present in the uplevel system if the installation of the new version recreates the data that is not provided by the imported export.
doNotImportProperty("name"),
setProperty()
This action sets the value of an RA configuration property, where that value can be specified in a variety of different types (which do not need to match, just be compatible with the actual type of the property).
setProperty("name", false), setProperty("name", 666), setProperty("name", "ceci n'est pas un string"),
setPropertyToNull()
This is a specialized form of the setProperty
action, where the value to be set is null.
setPropertyToNull("name"),
doNotImportRaEntity()
Deletes an RA entity from the exported data.
Deleting the RA entity from the export data does not mean that it is always deleted from the system - data that is not in the export can still be present in the uplevel system if the installation of the new version recreates the data that is not provided by the imported export.
doNotImportRaEntity(),
The creation rules
createTable()
The createTable()
action rule is unique in that it is the one action rule that does not need to be enclosed in a context rule.
Its purpose is to create a new profile table with a specified name, that uses the given profile spec. Note that the profile spec to use is specified using individual name, vendor, and version values - it is not possible to pass the result of the component() factory method to this rule, since that rule produces a Matcher object that is used to match a profile spec, not to define one (in particular matches may include regular expressions as any of the individual parts).
createTable("tableName", "specName", "vendor", "1.0"), profileTable("tableName", addAttribute("first", String.class, "use this for default"), createProfile("extra profile") ),
The query methods
getRaProperty()
Get the current (i.e. prior to upgrade) value of a property.
String raProperty = getRaProperty("diameter-sentinel-internal", "SessionTimeout");
getUplevelRaProperty()
Get the uplevel (i.e. present in the upgraded version) value of a property.
String uplevelRaProperty = getUplevelRaProperty("diameter-sentinel-internal", "SessionTimeout");
getProfileAttribute()
Get the current (i.e. prior to upgrade) value of a profile attribute.
If the particular attribute being queried is of an array type, this method will return null, and give a warning. To get the value of an array attribute, you must use getProfileAttributeArray
which has a return type which can correctly express the array contents.
String value = getProfileAttribute("dbquery-config", "", "maxSyncTransactionAge");
getProfileAttributeArray()
Get the current (i.e. prior to upgrade) value of a profile attribute which has an array type.
If the particular attribute being queried is not of an array type, this method will return null, and give a warning. To get the value of a non-array attribute, you must use getProfileAttribute
which has a return type which can correctly express the simple contents.
String[] array = getProfileAttributeArray("dbquery-config","postgres-Config", "dataSourceProfileIDs");
getUplevelProfileAttribute()
Get the uplevel (i.e. present in the upgraded version) value of a profile attribute.
If the particular attribute being queried is of an array type, this method will return null, and give a warning. To get the value of an array attribute, you must use getUplevelProfileAttributeArray
which has a return type which can correctly express the array contents.
String uplevelValue = getUplevelProfileAttribute("dbquery-config", "", "maxSyncTransactionAge");
getUplevelProfileAttributeArray()
Get the uplevel (i.e. present in the upgraded version) value of a profile attribute which has an array type.
If the particular attribute being queried is not of an array type, this method will return null, and give a warning. To get the value of a non-array attribute, you must use getUplevelProfileAttribute
which has a return type which can correctly express the simple contents.
String[] uplevelArray = getUplevelProfileAttributeArray("dbquery-config", "postgres-Config", "dataSourceProfileIDs");
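As a sketch of how the query methods can drive which rules are returned (the RA entity name, property name, and version logic here are invented for the example), a rules() implementation might only emit a setProperty rule when the operator had customised the downlevel value:
@Override
public Rule[] rules(RulesContext rulesContext) {
  List<Rule> list = new ArrayList<>();

  // Hypothetical example: keep the downlevel timeout if it differs from the uplevel default
  String downlevelTimeout = getRaProperty("example-ra-entity", "SessionTimeout");
  String uplevelTimeout = getUplevelRaProperty("example-ra-entity", "SessionTimeout");

  if (downlevelTimeout != null && !downlevelTimeout.equals(uplevelTimeout)) {
    list.add(
      raEntity("example-ra-entity",
        setProperty("SessionTimeout", downlevelTimeout)
      )
    );
  }

  return toRulesArray(list);
}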
Miscellaneous methods
toRulesArray()
The toRulesArray()
method converts an iterable object containing rules to the array required by the API, for cases where it’s easier to build up the rules dynamically.
@Override
public Rule[] rules(RulesContext rulesContext) {
  List<Rule> list = new ArrayList<>();
  list.add(
    profileTable("example",
      doNotImportProfileTable()
    )
  );
  return toRulesArray(list);
}
issueWarning()
The primary use for the issueWarning()
method is so that code that uses the API can issue a warning, for example if it discovers that the data it is transforming is too complex to be automatically handled.
A secondary use is as "print statements" for use as a temporary diagnostic aid.
issueWarning("Danger, do not reverse polarity");
Using the ValueFunction
When the data transformation rules require a 'value' to be assigned, that value can be given explicitly in the rule, or can be provided via a ValueFunction
object.
The ValueFunction
object provides an evaluate method that is called when the rule is being used, and allows the function to calculate the value based on the context where it has been invoked.
The context holds:
/**
 * A GeneratorContext is passed to the {@link ValueFunction#evaluate} and
 * {@link ObjectGenerator#generate(GeneratorContext)} methods.<br>
 * It contains information about the context in which the value is being generated.
 */
public class GeneratorContext {

  /**
   * Information about the transformation rule in which the evaluation is taking place.
   */
  public static class RuleInfo {
    /**
     * The name of the rule.
     */
    public String name;

    /**
     * A description of the rule.
     */
    public String description;
  }

  /**
   * Information about the attribute which is being generated, where appropriate.
   */
  public static class Attribute {
    /**
     * The name of the profile attribute.
     */
    public String name;

    /**
     * The attribute value, expressed as a string, where available. Not appropriate for use with array types.
     */
    public String value;

    /**
     * The attribute value, expressed as a string array, where available. Only appropriate for use with array types.
     */
    public String[] array;
  }

  /**
   * Information about the profile in which the evaluation is taking place, where appropriate.
   */
  public static class Profile {
    /**
     * The name of the profile.
     */
    public String name;
  }

  /**
   * Information about the profile table in which the evaluation is taking place, where appropriate.
   */
  public static class Table {
    /**
     * The name of the profile table.
     */
    public String name;
  }

  /**
   * Information about the transformation rule in which the evaluation is taking place.
   */
  public RuleInfo rule;

  /**
   * Information about the attribute which is being generated, where appropriate.
   * This field will be null if there is no attribute present in this context.
   */
  public Attribute attribute;

  /**
   * Information about the profile in which the evaluation is taking place, where appropriate.
   * This field will be null if there is no profile present in this context.
   */
  public Profile profile;

  /**
   * Information about the profile table in which the evaluation is taking place, where appropriate.
   * This field will be null if there is no profile table present in this context.
   */
  public Table table;
In addition, the method has full access to the whole of the data that is being transformed, via the query functions in the API.
new ValueFunction() {
  @Override
  public Object evaluate(GeneratorContext context) throws TransformationException {
    return String.format("hello %s, I see that your value was %s", context.attribute.name, context.attribute.value);
  }

  @Override
  public String toString() {
    return "exampleFunction()";
  }
};
The evaluate method must be provided. The toString() override is optional, but helps make warning messages that relate to rules that use the ValueFunction easier to follow.
A ValueFunction
can provide either a single value or an array as its return value. The same ValueFunction
can be passed to multiple rules, where the logic can use the context information to act appropriately.
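As a sketch of that reuse (the table and attribute values here are invented), a single function can inspect the GeneratorContext to return a different value depending on which profile table it is being applied to, and can then be passed to setAttribute rules in several different contexts:
ValueFunction perTableValue = new ValueFunction() {
  @Override
  public Object evaluate(GeneratorContext context) throws TransformationException {
    // Hypothetical logic: choose the value based on the enclosing profile table
    if (context.table != null && "exampleTableA".equals(context.table.name)) {
      return "value for table A";
    }
    return "value for all other tables";
  }

  @Override
  public String toString() {
    return "perTableValue()";
  }
};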
Of interest, there is one advantage to omitting the toString() override: if you do, and you have your IDE set up appropriately, the function may be shown to you using the Java 8 lambda syntax, which is easier to follow:
(context) -> { return String.format("hello %s, I see that your value was %s", context.attribute.name, context.attribute.value); }
The provided ValueFunction implementations
Given that ValueFunction
objects are so powerful, a number of useful functions have been packaged as part of the API. They follow the simple naming convention of all having the word 'function' at the start of their method name!
Array manipulation
When an attribute is of an array type, common operations are to add or remove items from that array. Both of these are supported via provided ValueFunctions.
Appending adds to the end of the array. Removing removes the first item that matches.
setAttribute("names", functionArrayAppend("extra")), setAttribute("names", functionArrayRemove("excess")),
Providing a different value to the default profile
The addAttribute rule allows a "default" value to be provided - this is the value to be used for profiles that are added that do not specify a particular value for that attribute.
That is quite different from providing a value for the default profile.
A set of functions functionForDefaultProfileAndOthers()
are provided that allow you to specify one value for the default profile, and a different value for all other profiles.
The variations allow the values to be specified as either a string value, or a further ValueFunction.
setAttribute("name", functionForDefaultProfileAndOthers("value for default profile", "value for all other profiles")), setAttribute("name", functionForDefaultProfileAndOthers("value for default profile", exampleFunction())), setAttribute("name", functionForDefaultProfileAndOthers(exampleFunction(), "value for all other profiles")), setAttribute("name", functionForDefaultProfileAndOthers(exampleFunction(), exampleFunction())),
Providing a constant value
The functionForConstant()
ValueFunction at first sight appears redundant - every method in the API that accepts a ValueFunction can also be passed a constant. However, if your logic is more complex, you may find wrapping a constant up as a ValueFunction
is very useful, since you can then provide common code that does not have to distinguish a constant from other possible inputs.
// These two rules are equivalent, so the second is an artificial example setAttribute("name", 99), setAttribute("name", functionForConstant(99))
Documenting what the transformation rules do
There’s no doubt about it, writing transformation rules can be tricky, so you are likely to want to include copious comments in the Java file, to help you keep track of what you did, and to guide future maintainers of the code. That is business as usual for programmers.
However, you also need to document what your rules do for a second set of people - those who know nothing about programming, but who are doing an upgrade, and need to understand, in general, human readable terms, what config is being migrated, and what is not, in the specific upgrade.
Since the best way to keep documentation current is to keep it close to the code itself, this second set of documentation is placed in the same Java file as the code that produces the rules. The build process will pull this documentation out of the Java file, and put it in the user facing upgrade document for the appropriate product.
To mark the bits of the Java file that should be included in the docs in this way, they are tagged using AsciiDoc tags, as illustrated here.
/* This is a java file, so we put documentation inside multi-line comments The next line is the special marker to know what text to include // tag::ocdoc[] Here we document what the rules do... The text that we write here will be concatenated with any other text fragments further on in the file that are also tagged "ocdoc", so that way we can intersperse the code and the documentation of that code. // end::ocdoc[] The previous line holds the end marker of the text to be included. */
The tag uses the name ocdoc
, to indicate that the text fragment it wraps will be pulled out into the OpenCloud upgrade docs.
You can add markup to multiple fragments of text, throughout the file, and the build process will pull them all into the same document.
The fragments that you write are in AsciiDoc markup - at its simplest that means that you can write your description across multiple lines, and it will all end up joined together into the same paragraph. Leave a blank line between paragraphs. However, you can do lots more with this formatting if you wish.
You are also not limited to just including text from inside of Java comments - there may well be times when you want to include text taken from the actual code in the documentation. For example, if you have a lot of similar rules, you may be accumulating them dynamically based on a list of names used in a loop. That same list can be usefully included in the documentation by placing tags around the relevant lines of code.
/* // tag::ocdoc[] The following profile tables are not imported from the 2.6 system, instead using the default values from the 2.7 install: // We want the list that follows to be treated as pre-formatted by the docs [listing,indent=0] // end::ocdoc[] */ String[] do_not_import = { // tag::ocdoc[] "sip-sis-ra-address-subscriptions", "sip-sis-ra-compositions", "sip-sis-ra-experimental-features", "sip-sis-ra-ext-platforms", "sip-sis-ra-extension-refs" // end::ocdoc[] }; for (String tableName : do_not_import) { rules.add(profileTable(tableName, doNotImportProfileTable())); }
Note also the use of other AsciiDoc markup, specifically [listing,indent=0]
to make the document processing display the list of strings in a clean way. The line that precedes that line, starting //
is an AsciiDoc comment!
You may have split your code that produces the transformation rules across several Java files. If you have done that, then you may find it easiest to put all the documentation markup in just one of those files. If you do choose to spread it around within more than one Java file, the build process will find the markup in all the Java files, but will process the fragments in alphabetical order of the filenames, which may not be what you wanted.
Debugging the transformation rules
-
When you run the transformation tool, it may output a number of warnings and errors. These are your first port of call when trying to debug your rules.
-
If you have made use of
ValueFunction
, then these warnings will be more readable if you make sure that you include an override of the toString()
method in these.
-
Consider adding some temporary
issueWarning
calls as you build up your rules, or within your ValueFunction
implementations, so that you can get an insight into the values your rules are processing (see the sketch after this list).
-
A very useful way to check what your rules have done is to do a textual comparison of the input and output data - the processing tries very hard to ensure that the only differences you see in the xml files should be due to the transformation, and not be down to other processing artifacts. (The xml produced by
rhino-export
is not written in canonical XML format, and this behaviour is preserved by the transformations).
-
The transformation process produces detailed logs of its processing, though these were designed for debugging the engine rather than debugging the rules that it is running. That means that the output is perhaps more verbose than needed, and may not be intelligible without access to the engine source code.
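As a sketch of the temporary-diagnostics idea mentioned above (the attribute handling shown is invented, and it assumes the function is declared inside your TransformationApi subclass so that issueWarning is in scope), an issueWarning call can be dropped into a ValueFunction while developing the rules and removed once they behave as expected:
new ValueFunction() {
  @Override
  public Object evaluate(GeneratorContext context) throws TransformationException {
    // Temporary diagnostic: report what the rule is actually seeing
    issueWarning("Transforming attribute " + context.attribute.name + " with value " + context.attribute.value);
    return context.attribute.value;
  }
};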
Feature Scripts conflicts and resolution
Resolving the Feature Scripts conflicts
After orca
finishes the major upgrade on the first node, there might be Feature Script conflicts which need to be resolved and the resolved scripts applied to the system for correct operation.
The files in the feature-scripts
path will contain the scripts with:
-
the proposed merged version
-
the installed version
-
the new version (uplevel version)
-
the original downlevel version
The cases that require full manual intervention are ones where the file presents this line:
<merge conflict: some message>
, e.g. <merge conflict: all three versions differ>
In order to show how to solve conflicts in the Feature Scripts, consider the examples below.
Example of default_Post_SipAccess_SubscriberCheck
:
<merge conflict: all three versions differ> ### Write your script above. This line, and anything below it, will be removed === Currently installed version of script featurescript SipAccessSubscriberCheck-SysPost-Default { if not session.MonitorCallOnly { run B2BUAScurPostFeature } run SDPRewriter run SDPMonitor mode "post" run RecordTimestamps mode "inbound" run ExtractNetworkInfo run RemoveHeadersFromOutgoingMessages } >>> New version of script featurescript SipAccessSubscriberCheck-SysPost-Default { if not session.MonitorCallOnly { run B2BUAScurPostFeature } run SDPRewriter run SDPMonitor mode "post" run RecordTimestamps mode "outbound" run ExtractNetworkInfo run ExternalSessionTracking run RemoveHeadersFromOutgoingMessages } <<< Original version of script featurescript SipAccessSubscriberCheck-SysPost-Default { if not session.MonitorCallOnly { run B2BUAScurPostFeature } run SDPRewriter run SDPMonitor mode "post" run ExtractNetworkInfo run RemoveHeadersFromOutgoingMessages }
This case shows all 3 versions are different, specifically run RecordTimestamps mode "inbound"
changed to run RecordTimestamps mode "outbound"
and run ExternalSessionTracking
was added.
One correct solution would be to keep the new version of the script. The file after editing would have:
featurescript SipAccessSubscriberCheck-SysPost-Default { if not session.MonitorCallOnly { run B2BUAScurPostFeature } run SDPRewriter run SDPMonitor mode "post" run RecordTimestamps mode "outbound" run ExtractNetworkInfo run ExternalSessionTracking run RemoveHeadersFromOutgoingMessages } ### Write your script above. This line, and anything below it, will be removed === Currently installed version of script featurescript SipAccessSubscriberCheck-SysPost-Default { if not session.MonitorCallOnly { run B2BUAScurPostFeature } run SDPRewriter run SDPMonitor mode "post" run RecordTimestamps mode "inbound" run ExtractNetworkInfo run RemoveHeadersFromOutgoingMessages } >>> New version of script featurescript SipAccessSubscriberCheck-SysPost-Default { if not session.MonitorCallOnly { run B2BUAScurPostFeature } run SDPRewriter run SDPMonitor mode "post" run RecordTimestamps mode "outbound" run ExtractNetworkInfo run ExternalSessionTracking run RemoveHeadersFromOutgoingMessages } <<< Original version of script featurescript SipAccessSubscriberCheck-SysPost-Default { if not session.MonitorCallOnly { run B2BUAScurPostFeature } run SDPRewriter run SDPMonitor mode "post" run ExtractNetworkInfo run RemoveHeadersFromOutgoingMessages }
Example file MMTelTerm_SipAccess_PartyRequest
:
featurescript SipAccessPartyRequest-User-MmtelTerm { if session.ICBBarredWithAnnouncement or session.PlayCDIVAnnouncement or session.PlayCWAnnouncement or session.EndSessionWithAnnouncement { run SipPlayAnnouncement } if not session.FlexibleAlertingMode.NONE { if session.FlexibleAlertingMode.PARALLEL { run MMTelParallelFA } else { run MMTelSequentialFA } } run MMTelOIP run MMTelECT run MMTelStodProcessHandover run DetermineChargeableLeg if session.AccessLegTrackingActive { run AccessLegTracking } } ### Write your script above. This line, and anything below it, will be removed === Currently installed version of script featurescript SipAccessPartyRequest-User-MmtelTerm { if session.ICBBarredWithAnnouncement or session.PlayCDIVAnnouncement or session.PlayCWAnnouncement or session.EndSessionWithAnnouncement { run SipPlayAnnouncement } if not session.FlexibleAlertingMode.NONE { if session.FlexibleAlertingMode.PARALLEL { run MMTelParallelFA } else { run MMTelSequentialFA } } run MMTelOIP run MMTelECT run MMTelStodProcessHandover run DetermineChargeableLeg } >>> New version of script featurescript SipAccessPartyRequest-User-MmtelTerm { if session.ICBBarredWithAnnouncement or session.PlayCDIVAnnouncement or session.PlayCWAnnouncement or session.EndSessionWithAnnouncement { run SipPlayAnnouncement } if not session.FlexibleAlertingMode.NONE { if session.FlexibleAlertingMode.PARALLEL { run MMTelParallelFA } else { run MMTelSequentialFA } } run MMTelOIP run MMTelECT run MMTelStodProcessHandover run DetermineChargeableLeg if session.AccessLegTrackingActive { run AccessLegTracking } } <<< Original version of script featurescript SipAccessPartyRequest-User-MmtelTerm { if session.ICBBarredWithAnnouncement or session.PlayCDIVAnnouncement or session.PlayCWAnnouncement or session.EndSessionWithAnnouncement { run SipPlayAnnouncement } if not session.FlexibleAlertingMode.NONE { if session.FlexibleAlertingMode.PARALLEL { run MMTelParallelFA } else { run MMTelSequentialFA } } run MMTelOIP run MMTelECT run MMTelStodProcessHandover run DetermineChargeableLeg }
The change is that the new version introduces
if session.AccessLegTrackingActive { run AccessLegTracking }
This matches case 2, so the uplevel version is the correct one to use and no changes to the file are required.
Importing the Feature Scripts after resolving the conflicts
After the conflicts are resolved, run the following command from the same path you used to run the major upgrade:
./orca --hosts <first host> import-feature-scripts
The output should be:
Importing feature script SCCTerm_HLR_SipAccess_ServiceTimer...
Importing feature script default_Post_SipMidSession_ChargingReauth...
Importing feature script MMTelOrig_SipMidSession_PartyRequest...
Importing feature script MMTelConf_SipAccess_SubscriberCheck...
Importing feature script SCCTermAnchor_SipAccess_ServiceTimer...
Importing feature script SCC_Post_SipEndSession...
Importing feature script MMTel_Pre_SipAccess_SessionStart...
Importing feature script SCC_SipAccess_PartyRequest...
... other feature scripts ...
Done on localhost
If some Feature Script files are not correct, orca will print warnings:
5 scripts could not be imported (see above for errors):
 - MMTel_Pre_SipMidSession_PartyRequest
 - SCC_Post_SipAccess_PartyRequest
 - MMTel_Post_SipMidSession_PartyRequest
 - default_Post_SipMidSession_PartyResponse
 - MMTelOrig_Post_SubscriptionSipResponse
You can fix them and then repeat the import procedure described above.
Troubleshooting
Besides the information on the console, orca provides detailed output of the actions taken in its log file. By default the log file is located on the host that executed the command, under the logs directory.
orca can’t connect to the remote hosts
Check if the trusted connection via ssh is working. The command ssh <the host to connect to> should work without asking for a password.
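For a quick non-interactive check, you can run ssh with BatchMode enabled (a standard OpenSSH option) so the command fails immediately instead of prompting for a password:
ssh -o BatchMode=yes <the host to connect to> echo ok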
You can add a trusted connection by executing the steps below:
-
Create an SSH key using the default location and an empty passphrase. Just hit Enter until you’re done:
ssh-keygen -t rsa
-
Copy the SSH key onto all VMs that need to be accessed, including the node you are on. You will have to enter the user’s password:
ssh-copy-id -i $HOME/.ssh/id_rsa.pub sentinel@<VM_ADDRESS>
where VM_ADDRESS is the host name you want the key to be copied to.
To check, run:
ssh VM_ADDRESS
It should return a shell of the remote host (VM_ADDRESS).
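If there are several hosts, the copy can be repeated in a short loop. A minimal shell sketch, where host1, host2 and host3 are placeholders for the hosts in your deployment:
# copy the key to every host, entering the sentinel password once per host
for h in host1 host2 host3; do
    ssh-copy-id -i $HOME/.ssh/id_rsa.pub sentinel@$h
done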
orca failed to create the management database
orca uses the psql client to connect to and perform operations on the PostgreSQL database. Check that the first host in the --hosts list has psql installed.
Get the database hostname from the file /home/sentinel/rhino/node-xxx/config/config_variables. Search for the properties MANAGEMENT_DATABASE_NAME, MANAGEMENT_DATABASE_HOST, MANAGEMENT_DATABASE_PORT, MANAGEMENT_DATABASE_USER and MANAGEMENT_DATABASE_PASSWORD. For example:
MANAGEMENT_DATABASE_NAME=rhino_50
MANAGEMENT_DATABASE_HOST=postgresql
MANAGEMENT_DATABASE_PORT=5432
MANAGEMENT_DATABASE_USER=rhino_user
MANAGEMENT_DATABASE_PASSWORD=rhino_password
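One quick way to list these values, assuming a standard shell on the Rhino host (replace node-xxx with your node id):
grep MANAGEMENT_DATABASE /home/sentinel/rhino/node-xxx/config/config_variables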
Test the connection:
psql -h postgresql -U rhino_user -p 5432 rhino_50
Enter the password and you should see:
Password for user rhino_user:
psql (9.5.14)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

rhino_50=>
If the connection fails, try the database name from the previous cluster. If it still fails, the procedure below is a guide to granting the database permission to accept remote connections.
-
Log in to the database host.
-
Open the pg_hba.conf file, using sudo:
sudo vim /var/lib/pgsql/data/pg_hba.conf
-
Replace the line that looks like this:
host rhino-xxx sentinel xxx.xxx.xxx.xxx/xx password
with a line that looks like this:
host all sentinel xxx.xxx.xxx.xxx/xx password
where 'rhino-xxx' is the name of cluster xxx’s postgres database, and xxx.xxx.xxx.xxx/xx covers the signalling addresses of the nodes.
-
Reload the config:
/usr/bin/pg_ctl reload
-
Or, failing that, try this command:
sudo service postgresql restart
Installer runs but does not install
The recommendation is to roll back the node to the previous cluster, clean up the prepared cluster, and start again:
./orca --hosts host1 rollback
./orca --hosts host1,host2,host3 cleanup --cluster <cluster id>
./orca --hosts host1,host2,host3 major-upgrade <path to the product sdk zip> <path to the install.properties> --pause
Rhino nodes shut down or reboot during migration
orca requires the node being migrated to allow management commands via rhino-console. If the node is not active, orca will fail.
If, while executing orca --hosts <list of hosts> major-upgrade packages install.properties --continue, the node currently being migrated shuts down or reboots, there are three options:
1- Skip the node, proceed with the other nodes, and migrate the failed node later
Keep the first host name, but remove from the orca command the hosts that were already migrated as well as the host that failed, then execute orca --hosts <list of hosts> major-upgrade packages install.properties --continue again to continue with the upgrade.
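For example, assuming the original command listed host1 through host5, host1 and host2 have already been migrated, and host3 is the node that failed (the host names are illustrative), the resumed command would be:
./orca --hosts host1,host4,host5 major-upgrade packages install.properties --continue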
2- Restart the node manually and continue with the upgrade
If the node shut down but had not rebooted, execute the rhino.sh start script. If the node rebooted, check that the rhino-console command works:
<path to the old cluster>/client/bin/rhino-console
Keep the first host name, but remove from the orca command the hosts that were already migrated, then execute orca --hosts <list of hosts> major-upgrade packages install.properties --continue again to continue with the upgrade.
3- Do the migration manually
This is an extreme case, but the procedure is simple.
For each node that was not migrated:
Identify the new cluster
cd $HOME
ls -ls rhino
lrwxrwxrwx 1 sentinel sentinel 24 Nov 21 21:48 rhino -> volte-2.7.1.2-cluster-50
This will show the current cluster name in the format <product>-<version>-cluster-<id>.
The new cluster will have a similar name, <product>-<new version>-cluster-<id + 1>, where the cluster id is one number higher. For example: volte-2.8.0.3-cluster-51
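To list all cluster directories present on the node (a quick way to spot the uplevel one), a minimal sketch assuming they are installed under $HOME:
ls -d $HOME/*-cluster-*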
Kill the current node
rhino/rhino.sh kill
Link the rhino path to the new cluster
rm rhino
ln -s <new cluster> rhino
where <new cluster> is the path identified above. Example: volte-2.8.0.3-cluster-51
Start the node and check for connection
./rhino/rhino.sh start
Wait a few minutes and check that you can connect to the node using REM or rhino-console.
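Putting the manual migration steps together for a single node, a minimal sketch assuming the new cluster directory identified above is volte-2.8.0.3-cluster-51:
cd $HOME
# stop the node in the downlevel cluster
rhino/rhino.sh kill
# repoint the rhino symlink at the uplevel cluster directory
rm rhino
ln -s volte-2.8.0.3-cluster-51 rhino
# start the node in the uplevel cluster
./rhino/rhino.sh start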