Orca supports online upgrade of the SGC from version 3.0.0.x to select newer versions. The OCSS7 installation to be upgraded must meet the documented general requirements. The sgc-upgrade-cluster command is used to upgrade the SGC cluster.
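As a rough synopsis, assembled purely from the arguments that appear in the examples in this section (the exact option grammar may vary between Orca versions):
$ ./orca -H <host1,host2,...> sgc-upgrade-cluster [--package-directory <dir> | --sgc-package <file> | --target-version <version>] [--cluster <name>]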
There are two basic options for an upgrade:
- Install a new SGC version from an installation package and copy the configuration files from the existing installation to the new one. The installation package may either be a standalone installation package, or the package provided as part of the Orca SGC upgrade bundle.
- Upgrade to a pre-installed and pre-configured SGC installation.
The automated upgrade process performs the following steps:
- Makes a backup of the currently running installation's critical configuration files.
- Places the cluster into UPGRADE_MULTI_VERSION mode.
- (New SGC installation only) Installs the new SGC version on each node.
- (New SGC installation only) Copies key configuration files from the old SGC to the new on each node.
- Shuts down the old SGC and starts the new SGC. This is performed one node at a time, with a wait period between nodes to allow the cluster to perform the operations required to maintain cluster integrity.
- Marks the upgrade as completed and places the cluster back into NORMAL mode (the cluster mode can be checked at any point; see the sketch after this list).
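The cluster mode is reported by the sgc-status action, shown in full in the example later in this section; a quick way to watch it during an upgrade is a plain shell filter over that output. The grep here is illustrative only, not an Orca feature; while the upgrade is in progress the same filter would report mode=UPGRADE_MULTI_VERSION:
$ ./orca -H vm1,vm2,vm3 sgc-status | grep mode=
    mode=NORMAL format=30000
    mode=NORMAL format=30000
    mode=NORMAL format=30000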
Upgrading to a New SGC Installation
To perform an upgrade where the new SGC is installed and configured as part of the upgrade, either the --package-directory or the --sgc-package command line argument is required.
For example, to use the installation packages supplied with the Orca SGC upgrade bundle:
$ ./orca -H vm1,vm2,vm3 sgc-upgrade-cluster --package-directory packages/
This process installs the new SGC following the recommended installation structure and copies configuration files from the currently running SGC to the new installation. Then, one node at a time, it stops the currently running node, updates the current symbolic link to point to the new SGC, and starts the new node.
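To confirm which installation a node's current symbolic link selects, it can be inspected directly. The path below is an assumption based on the installation directories visible in the sgc-status output later in this section:
$ ls -l /home/sentinel/ocss7/PC1/PC1-1/current
... current -> /home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.1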
Alternatively, a standalone OCSS7 installation package may be used:
$ ./orca -H vm1,vm2,vm3 sgc-upgrade-cluster --sgc-package /path/to/ocss7-3.0.0.1.zip
If more than one cluster is installed on the target hosts, it will be necessary to specify the cluster to upgrade using the --cluster argument.
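For example (a sketch only; whether --cluster precedes or follows the other arguments is an assumption, so check orca's usage text for the exact grammar):
$ ./orca -H vm1,vm2,vm3 sgc-upgrade-cluster --cluster PC1 --package-directory packages/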
Upgrading to a Pre-Existing SGC Installation
To perform an upgrade that uses a pre-existing SGC installation, the --target-version command line argument is required. The pre-existing SGC installation must be fully configured, as configuration files are not copied from the old installation to the new one during this process.
For example:
$ ./orca -H vm1,vm2,vm3 sgc-upgrade-cluster --target-version 3.0.0.1
This process skips all installation and configuration steps. One by one, each node is stopped, has its current symbolic link updated to point to the target version, and is restarted.
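Since this mode installs and copies nothing, it is prudent to confirm first that the fully configured target installation is present on every node. A minimal spot check, assuming the directory layout shown in the status output below:
$ ssh vm1 ls -d /home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.1
/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.1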
Example Upgrade Using the Orca SGC Upgrade Bundle
This example is for a 3-node SGC cluster (PC1) consisting of:
- Host vm1: node PC1-1
- Host vm2: node PC1-2
- Host vm3: node PC1-3
Before starting, check the current status of the SGC cluster:
$ ./orca -H vm1,vm3,vm2 sgc-status
Host vm1
SGC Clusters:
Cluster PC1
mode=NORMAL format=30000
Node PC1-1
* [Running] PC1-1 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.0]
Host vm3
SGC Clusters:
Cluster PC1
mode=NORMAL format=30000
Node PC1-3
* [Running] PC1-3 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.0]
Host vm2
SGC Clusters:
Cluster PC1
mode=NORMAL format=30000
Node PC1-2
* [Running] PC1-2 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.0]
If satisfied with the current cluster state, issue the upgrade command:
$ ./orca -H vm3,vm1,vm2 sgc-upgrade-cluster --package-directory packages/
Getting status for cluster PC1 from hosts [vm3, vm1, vm2]
No nodes specified with --nodes <nodes>, using auto-detected nodes: [u'PC1-3', u'PC1-1', u'PC1-2']
Backing up SGC cluster on nodes [u'PC1-3', u'PC1-1', u'PC1-2']
Running command on host vm3: orca_migrate_helper.py tmpzcmRnu=>{"function": "sgc_backup", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-3", "cluster_name": "PC1", "host": "vm3", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm3
Running command on host vm1: orca_migrate_helper.py tmpQ5lQ39=>{"function": "sgc_backup", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-1", "cluster_name": "PC1", "host": "vm1", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm1
Running command on host vm2: orca_migrate_helper.py tmp13Xs3g=>{"function": "sgc_backup", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-2", "cluster_name": "PC1", "host": "vm2", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm2
Starting SGC upgrade process on host PC1-3
Running command on host vm3: orca_migrate_helper.py tmpJDOH4d=>{"function": "sgc_start_upgrade", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-3", "cluster_name": "PC1", "host": "vm3", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm3
Refreshing cluster view post-operation
Getting status for cluster PC1 from hosts [vm3, vm1, vm2]
Preparing new SGC cluster on nodes [u'PC1-3', u'PC1-1', u'PC1-2']
Running command on host vm3: orca_migrate_helper.py tmpmKVEid=>{"function": "sgc_prepare", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-3", "cluster_name": "PC1", "host": "vm3", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm3
Running command on host vm1: orca_migrate_helper.py tmpVBwfZS=>{"function": "sgc_prepare", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-1", "cluster_name": "PC1", "host": "vm1", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm1
Running command on host vm2: orca_migrate_helper.py tmpfAjL3n=>{"function": "sgc_prepare", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-2", "cluster_name": "PC1", "host": "vm2", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm2
Refreshing cluster view post-operation
Getting status for cluster PC1 from hosts [vm3, vm1, vm2]
Upgrading SGC nodes in turn
Upgrading SGC node PC1-3. This may take a couple of minutes.
Running command on host vm3: orca_migrate_helper.py tmpdmdejS=>{"function": "sgc_upgrade_node", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-3", "cluster_name": "PC1", "host": "vm3", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm3
Waiting 60 seconds for SGC cluster to redistribute data before upgrading the next node
Upgrading SGC node PC1-1. This may take a couple of minutes.
Running command on host vm1: orca_migrate_helper.py tmp4ZuKGY=>{"function": "sgc_upgrade_node", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-1", "cluster_name": "PC1", "host": "vm1", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm1
Waiting 60 seconds for SGC cluster to redistribute data before upgrading the next node
Upgrading SGC node PC1-2. This may take a couple of minutes.
Running command on host vm2: orca_migrate_helper.py tmpKDmSkk=>{"function": "sgc_upgrade_node", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-2", "cluster_name": "PC1", "host": "vm2", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm2
Refreshing cluster view post-operation
Getting status for cluster PC1 from hosts [vm3, vm1, vm2]
Completing SGC upgrade process on node PC1-3
Running command on host vm3: orca_migrate_helper.py tmpKpeaK7=>{"function": "sgc_complete_upgrade", "target_version": null, "sgc_package": "ocss7-3.0.0.1.zip", "node_name": "PC1-3", "cluster_name": "PC1", "host": "vm3", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm3
Refreshing cluster view post-operation
Getting status for cluster PC1 from hosts [vm3, vm1, vm2]
And finally, re-check the cluster status:
$ ./orca -H vm1,vm3,vm2 sgc-status
Host vm1
SGC Clusters:
Cluster PC1
mode=NORMAL format=30000
Node PC1-1
[Stopped] PC1-1 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.0]
* [Running] PC1-1 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.1]
Host vm3
SGC Clusters:
Cluster PC1
mode=NORMAL format=30000
Node PC1-3
[Stopped] PC1-3 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.0]
* [Running] PC1-3 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.1]
Host vm2
SGC Clusters:
Cluster PC1
mode=NORMAL format=30000
Node PC1-2
[Stopped] PC1-2 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.0]
* [Running] PC1-2 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.1]
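Each node is now running 3.0.0.1 from the new installation directory, with the old 3.0.0.0 installation left in place in the [Stopped] state. As a final spot check, the running versions can be filtered out of the status output with plain shell (illustrative only):
$ ./orca -H vm1,vm2,vm3 sgc-status | grep Running
 * [Running] PC1-1 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.1]
 * [Running] PC1-2 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.1]
 * [Running] PC1-3 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.1]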