- Planning for the procedure
- Method of procedure
- Deployments with clustered Rhino using SIMPL 6.7.0
- Backout procedure
- Deployments with unclustered Rhino using SIMPL 6.7.0
- Backout procedure
- Deployments with clustered Rhino using SIMPL 6.6.x
- Backout procedure
- Deployments with unclustered Rhino using SIMPL 6.6.x
- Backout procedure
- Next Step
Planning for the procedure
Background knowledge
This procedure assumes that:
- you are installing into an existing VMware vSphere deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware vSphere deployment from scratch
- you are upgrading an existing downlevel deployment of the custom VMs
- you have deployed a SIMPL VM, and have followed all the pre-upgrade steps.
Method of procedure
Refer to the SIMPL VM Documentation for details on the commands mentioned in the procedure.
Deployments with clustered Rhino using SIMPL 6.7.0
If you have specified the custom VMs to use clustered Rhino in the node-parameters.yaml file, follow the instructions in this section to upgrade them.
Step 1 - Upgrade the initial downlevel custom VMs
The VM with the Rhino node that has the lowest ID must be upgraded last.
Upgrade all of the other VMs using the following command:
csar update --vnf <image name> --index-range <range of indexes>
The indexes start from 0, therefore 0 is the first VM.
The --index-range option accepts ranges as well as comma-separated indexes (e.g. 1-3,7,9).
To upgrade the VMs in stages, run the command multiple times using the appropriate --index-range values.
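For example, a staged upgrade of a five-VM deployment, where VM index 0 hosts the Rhino node with the lowest ID and so is upgraded last, might look as follows (the image name custom and the index values are illustrative only):
csar update --vnf custom --index-range 1-2
csar update --vnf custom --index-range 3-4
csar update --vnf custom --index-range 0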
The following will occur one custom node at a time:
- The downlevel node will be quiesced.
- The uplevel node will be created and boot up.
- The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.
- Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point the csar update command will move on to the next custom VM, or report that the upgrade of the custom VMs was successful if all nodes have now been upgraded.
- Once the upgrade is complete, place calls and run any additional validation tests to verify the uplevel VMs are working as expected.
Backout procedure
If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To roll back, repeat the steps above with the downlevel custom CSAR and downlevel SDF. The lowest uplevel VM must be rolled back last. For example, if VMs 2-5 are in the uplevel, you must roll back VMs 3-5, then roll back VM 2.
You may need to use the --skip pre-update-checks flag as part of the csar update command. The --skip pre-update-checks flag allows rollbacks when a node is unhealthy.
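For example, to roll back uplevel VMs 2-5 when a node is unhealthy, where VM 2 hosts the lowest Rhino node ID and so is rolled back last (the image name custom and the index values are illustrative only):
csar update --vnf custom --index-range 3-5 --skip pre-update-checks
csar update --vnf custom --index-range 2 --skip pre-update-checks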
If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. Run csar redeploy --vnf <image name>.
Diagnostics during the quiesce stage
When the downlevel VMs are quiesced, they upload some diagnostics to the CDS. These may be useful if the upgrade or rollback fails.
To get these diagnostics, follow instructions from Retrieving deployment TAS audit logs with export-audit-history and Retrieving Initconf and Rhino logs with export-log-history.
Deployments with unclustered Rhino using SIMPL 6.7.0
If you have specified the custom VMs to use unclustered Rhino in the node-parameters.yaml file, follow the instructions in this section to upgrade them.
Step 1 - Upgrade the downlevel custom VMs
Run csar update --vnf <image name>.
<image name> is the image name you specified in the node-parameters.yaml file that was provided to the VM Build Container.
This will validate the uplevel SDF, generate the uplevel Terraform template, upload the uplevel image, and then it will start the upgrade.
To perform a canary upgrade, run csar update --vnf <image name> --index-range <range of indexes>. The indexes start from 0, therefore 0 is the first VM. The --index-range option accepts ranges as well as comma-separated indexes (e.g. 1-3,7,9). Only the nodes specified in the index range will be upgraded.
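As an illustration, a canary upgrade that updates the first two VMs and, once they are verified, the remaining two VMs of a four-VM deployment (the image name custom and the index values are illustrative only):
csar update --vnf custom --index-range 0-1
csar update --vnf custom --index-range 2-3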
The following will occur one custom node at a time:
- The downlevel node will be quiesced.
- The uplevel node will be created and boot up.
- The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.
- Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point the csar update command will move on to the next custom VM, or report that the upgrade of the custom VMs was successful if all nodes have now been upgraded.
- Once the upgrade is complete, place calls and run any additional validation tests to verify the uplevel VMs are working as expected.
Backout procedure
If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To roll back, repeat the steps above with the downlevel custom CSAR and downlevel SDF.
You may need to use the --skip pre-update-checks flag as part of the csar update command. The --skip pre-update-checks flag allows rollbacks when a node is unhealthy.
If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. Run csar redeploy --vnf <image name>.
Diagnostics during the quiesce stage
When the downlevel VMs are quiesced, they upload some diagnostics to the CDS. These may be useful if the upgrade or rollback fails.
To get these diagnostics, follow instructions from Retrieving deployment TAS audit logs with export-audit-history and Retrieving Initconf and Rhino logs with export-log-history.
Deployments with clustered Rhino using SIMPL 6.6.x
If you have specified the custom VMs to use clustered Rhino in the node-parameters.yaml file, follow the instructions in this section to upgrade them.
Step 1 - Validate the SDF
Run csar validate-vsphere --sdf <path to SDF>.
This will validate the uplevel SDF.
Step 2 - Generate the Terraform Template
Run csar generate --vnf <image name>.
This will generate the Terraform template.
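For example, assuming the uplevel SDF is at /home/admin/uplevel/sdf-rvt.yaml and the image name specified in node-parameters.yaml is custom (both values are illustrative only):
csar validate-vsphere --sdf /home/admin/uplevel/sdf-rvt.yaml
csar generate --vnf custom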
Step 3 - Upgrade the downlevel custom VMs
The VM with the Rhino node that has the lowest ID must be upgraded last.
Upgrade all of the other VMs using the following command:
csar update --vnf <image name> --index-range <range of indexes>
The indexes start from 0, therefore 0 is the first VM.
The --index-range option accepts ranges as well as comma-separated indexes (e.g. 1-3,7,9).
To upgrade the VMs in stages, run the command multiple times using the appropriate --index-range values.
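For example, a staged upgrade of a five-VM deployment, where VM index 0 hosts the Rhino node with the lowest ID and so is upgraded last, might look as follows (the image name custom and the index values are illustrative only):
csar update --vnf custom --index-range 1-2
csar update --vnf custom --index-range 3-4
csar update --vnf custom --index-range 0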
The following will occur one custom node at a time:
- The downlevel node will be quiesced.
- The uplevel node will be created and boot up.
- The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.
- Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point the csar update command will move on to the next custom VM, or report that the upgrade of the custom VMs was successful if all nodes have now been upgraded.
- Once the upgrade is complete, place calls and run any additional validation tests to verify the uplevel VMs are working as expected.
Backout procedure
If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To roll back, repeat the steps above with the downlevel custom CSAR and downlevel SDF. The lowest uplevel VM must be rolled back last. For example, if VMs 2-5 are in the uplevel, you must roll back VMs 3-5, then roll back VM 2.
You may need to use the --skip-pre-update-checks flag as part of the csar update command. The --skip-pre-update-checks flag allows rollbacks when a node is unhealthy.
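For example, to roll back uplevel VMs 2-5 when a node is unhealthy, where VM 2 hosts the lowest Rhino node ID and so is rolled back last (the image name custom and the index values are illustrative only):
csar update --vnf custom --index-range 3-5 --skip-pre-update-checks
csar update --vnf custom --index-range 2 --skip-pre-update-checks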
If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. Run csar deploy --redeploy --vnf <image name>.
Diagnostics during the quiesce stage
When the downlevel VMs are quiesced, they upload some diagnostics to the CDS. These may be useful if the upgrade or rollback fails.
To get these diagnostics, follow instructions from Retrieving deployment TAS audit logs with export-audit-history and Retrieving Initconf and Rhino logs with export-log-history.
Deployments with unclustered Rhino using SIMPL 6.6.x
If you have specified the custom VMs to use unclustered Rhino in the node-parameters.yaml file, follow the instructions in this section to upgrade them.
Step 1 - Validate the SDF
Run csar validate-vsphere --sdf <path to SDF>.
This will validate the uplevel SDF.
Step 2 - Generate the Terraform Template
Run csar generate --vnf <image name>.
This will generate the Terraform template.
Step 3 - Upgrade the downlevel custom nodes
Run csar update --vnf <image name>.
<image name> is the image name you specified in the node-parameters.yaml file that was provided to the VM Build Container.
This will upload the uplevel image, then it will start the upgrade.
To perform a canary upgrade, run csar update --vnf <image name> --index-range <range of indexes>. The indexes start from 0, therefore 0 is the first VM. The --index-range option accepts ranges as well as comma-separated indexes (e.g. 1-3,7,9). Only the nodes specified in the index range will be upgraded.
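Putting Steps 1 to 3 together, an illustrative end-to-end sequence (the SDF path /home/admin/uplevel/sdf-rvt.yaml and the image name custom are illustrative only) would be:
csar validate-vsphere --sdf /home/admin/uplevel/sdf-rvt.yaml
csar generate --vnf custom
csar update --vnf custom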
The following will occur one custom node at a time:
- The downlevel node will be quiesced.
- The uplevel node will be created and boot up.
- The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.
- Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point the csar update command will move on to the next custom VM, or report that the upgrade of the custom VMs was successful if all nodes have now been upgraded.
- Once the upgrade is complete, place calls and run any additional validation tests to verify the uplevel VMs are working as expected.
Backout procedure
If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To roll back, repeat the steps above with the downlevel custom CSAR and downlevel SDF.
You may need to use the --skip-pre-update-checks flag as part of the csar update command. The --skip-pre-update-checks flag allows rollbacks when a node is unhealthy.
If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. Run csar deploy --redeploy --vnf <image name>.
Diagnostics during the quiesce stage
When the downlevel VMs are quiesced, they upload some diagnostics to the CDS. These may be useful if the upgrade or rollback fails.
To get these diagnostics, follow instructions from Retrieving deployment TAS audit logs with export-audit-history and Retrieving Initconf and Rhino logs with export-log-history.
Next Step
Follow the post-upgrade instructions here: Post rolling upgrade steps