Orca supports online reversion of the SGC to version 3.0.0.x from select newer versions. The OCSS7 installation to be reverted must meet the documented general requirements. The sgc-revert-cluster command is used to revert the SGC cluster.
There are two options for a reversion:
- Install the replacement SGC version from an installation package and copy the configuration files from the existing installation to the replacement. The installation package may be either a standalone installation package or the package provided as part of the Orca SGC upgrade bundle.
- Revert to a pre-installed and pre-configured SGC installation.
The reversion process performs some pre-checks to ensure that the cluster is in an appropriate state, and then reverts the cluster. This process includes:
- Making a backup of the current running installation.
- Placing the cluster into REVERT_MULTI_VERSION mode.
- Optionally, installing the replacement SGC version on each node.
- Optionally, copying key configuration files from the existing SGC to the replacement node.
- Shutting down the current SGC and starting the replacement SGC. This is performed one node at a time, with a wait period between nodes to allow the cluster to perform the operations required to maintain normal operation.
- Marking the reversion as completed and placing the cluster into NORMAL mode.
The reversion process takes several minutes per node to complete.
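While the reversion is in progress, sgc-status is expected to report the intermediate cluster mode rather than NORMAL. A minimal, illustrative sketch of the relevant lines, assuming the same output format as the sgc-status examples later in this section (only the mode string differs):
$ ./orca -H vm1 sgc-status
Host vm1
SGC Clusters:
Cluster PC1
mode=REVERT_MULTI_VERSION format=30000
...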
Reverting to a New SGC Installation
Performing a reversion where the replacement SGC is installed and configured as part of the reversion requires either the --package-directory or the --sgc-package command line argument.
For example, to use the Orca SGC upgrade bundle supplied installation packages:
$ ./orca -H vm1,vm2,vm3 sgc-revert-cluster --package-directory packages/
This process installs the replacement SGC following the recommended installation structure and copies configuration files from the currently running SGC to the replacement. Then, one node at a time, it stops the currently running node, updates the current symbolic link to point to the replacement SGC, and starts the replacement node.
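Based on the installation paths visible in the sgc-status output later in this section, the per-node directory layout after the installation step might look like the following sketch (illustrative; until the node is switched over, the current symlink still points at the running version):
$ ls /home/sentinel/ocss7/PC1/PC1-1/
current  ocss7-3.0.0.0  ocss7-3.0.0.1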
Alternatively, a standalone OCSS7 installation package may be used:
$ ./orca -H vm1,vm2,vm3 sgc-revert-cluster --sgc-package ocss7-3.0.0.0.zip
If there is more than one SGC cluster installed on the target hosts, it will be necessary to specify the cluster name using the --cluster argument.
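For example, a sketch combining the command above with the --cluster argument (PC1 is the cluster name used in the example reversion below):
$ ./orca -H vm1,vm2,vm3 sgc-revert-cluster --sgc-package ocss7-3.0.0.0.zip --cluster PC1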
Reverting to a Pre-Existing SGC Installation
To perform a reversion that uses a pre-existing SGC installation, the --target-version command line argument is required. The pre-existing SGC installation must be fully configured, as configuration files are not copied from the old installation to the replacement during this process.
For example:
$ ./orca -H vm1,vm2,vm3 sgc-revert-cluster --target-version 3.0.0.1
This process skips all installation and configuration steps. One by one, each node is stopped, has its current symbolic link updated to point to the target version, and is restarted.
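The switch-over on each node can be checked by reading the current symlink before and after the reversion; a sketch using the installation paths from the example below:
$ readlink /home/sentinel/ocss7/PC1/PC1-1/current
ocss7-3.0.0.1
$ # ...after node PC1-1 has been reverted:
$ readlink /home/sentinel/ocss7/PC1/PC1-1/current
ocss7-3.0.0.0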
Example Reversion
This example is for a 3-node SGC cluster (PC1) consisting of:
- Host vm1: PC1-1
- Host vm2: PC1-2
- Host vm3: PC1-3
Before starting, check the current status of the SGC cluster:
$ ./orca -H vm1,vm3,vm2 sgc-status
Host vm1
SGC Clusters:
Cluster PC1
mode=NORMAL format=30000
Node PC1-1
[Stopped] PC1-1 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.0]
* [Running] PC1-1 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.1]
Host vm3
SGC Clusters:
Cluster PC1
mode=NORMAL format=30000
Node PC1-3
[Stopped] PC1-3 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.0]
* [Running] PC1-3 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.1]
Host vm2
SGC Clusters:
Cluster PC1
mode=NORMAL format=30000
Node PC1-2
[Stopped] PC1-2 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.0]
* [Running] PC1-2 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.1]
Each node lists its installed SGC versions; the entry marked with an asterisk is the running installation. If satisfied with the current cluster state, issue the revert command:
$ ./orca -H vm1,vm2,vm3 sgc-revert-cluster --target-version 3.0.0.0
Getting status for cluster PC1 from hosts [vm1, vm2, vm3]
No nodes specified with --nodes <nodes>, using auto-detected nodes: [u'PC1-1', u'PC1-2', u'PC1-3']
Backing up SGC cluster on nodes [u'PC1-1', u'PC1-2', u'PC1-3']
Running command on host vm1: orca_migrate_helper.py tmpT_chqd=>{"function": "sgc_backup", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-1", "cluster_name": "PC1", "host": "vm1", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm1
Running command on host vm2: orca_migrate_helper.py tmpi_qCVS=>{"function": "sgc_backup", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-2", "cluster_name": "PC1", "host": "vm2", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm2
Running command on host vm3: orca_migrate_helper.py tmpyvwQKT=>{"function": "sgc_backup", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-3", "cluster_name": "PC1", "host": "vm3", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm3
Starting SGC revert process on node PC1-1
Running command on host vm1: orca_migrate_helper.py tmpyAbsSv=>{"function": "sgc_start_revert", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-1", "cluster_name": "PC1", "host": "vm1", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm1
Refreshing cluster view post-operation
Getting status for cluster PC1 from hosts [vm1, vm2, vm3]
Using existing SGC installation version 3.0.0.0
Reverting SGC nodes in turn
Reverting SGC node PC1-1. This may take a couple of minutes.
Running command on host vm1: orca_migrate_helper.py tmpClz8rt=>{"function": "sgc_revert_node", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-1", "cluster_name": "PC1", "host": "vm1", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm1
Waiting 60 seconds for SGC cluster to redistribute data before reverting the next node
Reverting SGC node PC1-2. This may take a couple of minutes.
Running command on host vm2: orca_migrate_helper.py tmpxOS3vv=>{"function": "sgc_revert_node", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-2", "cluster_name": "PC1", "host": "vm2", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm2
Waiting 60 seconds for SGC cluster to redistribute data before reverting the next node
Reverting SGC node PC1-3. This may take a couple of minutes.
Running command on host vm3: orca_migrate_helper.py tmpnslp63=>{"function": "sgc_revert_node", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-3", "cluster_name": "PC1", "host": "vm3", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm3
Refreshing cluster view post-operation
Getting status for cluster PC1 from hosts [vm1, vm2, vm3]
Completing SGC reversion process on node PC1-1
Running command on host vm1: orca_migrate_helper.py tmpERJGq2=>{"function": "sgc_complete_revert", "target_version": "3.0.0.0", "sgc_package": null, "node_name": "PC1-1", "cluster_name": "PC1", "host": "vm1", "remote_home": "/home/sentinel", "overwrite": false}
Done on vm1
Refreshing cluster view post-operation
Getting status for cluster PC1 from hosts [vm1, vm2, vm3]
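Note that the log above shows the node names being auto-detected ("No nodes specified with --nodes <nodes>"). Where auto-detection is not wanted, the nodes may be listed explicitly; a sketch, assuming the comma-separated form suggested by the auto-detection message:
$ ./orca -H vm1,vm2,vm3 sgc-revert-cluster --target-version 3.0.0.0 --nodes PC1-1,PC1-2,PC1-3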
And finally, re-check the cluster status:
$ ./orca -H vm1,vm3,vm2 sgc-status
Host vm1
SGC Clusters:
Cluster PC1
mode=NORMAL format=30000
Node PC1-1
* [Running] PC1-1 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.0]
[Stopped] PC1-1 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-1/ocss7-3.0.0.1]
Host vm3
SGC Clusters:
Cluster PC1
mode=NORMAL format=30000
Node PC1-3
* [Running] PC1-3 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.0]
[Stopped] PC1-3 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-3/ocss7-3.0.0.1]
Host vm2
SGC Clusters:
Cluster PC1
mode=NORMAL format=30000
Node PC1-2
* [Running] PC1-2 3.0.0.0 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.0]
[Stopped] PC1-2 3.0.0.1 [/home/sentinel/ocss7/PC1/PC1-2/ocss7-3.0.0.1]