Planning for the procedure

Background knowledge

This procedure assumes that:

  • you are installing into an existing VMware vSphere deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware vSphere deployment from scratch

  • you are upgrading an existing downlevel deployment of the custom VMs.

  • you have deployed a SIMPL VM, and have followed all the pre-upgrade steps.

Reserve maintenance period

This procedure requires a maintenance period. When integrating into a live network, we recommend that you implement measures to mitigate any unforeseen events.

Plan for service impact

Misconfiguration could disrupt services for existing network elements.

People

You must be a system operator to perform the MOP steps.

Tools and access

You must have access to the SIMPL VM, and the SIMPL VM must have the right permissions on the VMware vSphere deployment.

Method of procedure

Note Refer to the SIMPL VM Documentation for details on the commands mentioned in the procedure.

Deployments with clustered Rhino using SIMPL 6.7.0

If you have specified the custom VMs to use clustered Rhino in the node-parameters.yaml file, follow the instructions in this section to upgrade them.

Step 1 - Upgrade the initial downlevel custom VMs

The VM with the Rhino node that has the lowest ID must be upgraded last.

Upgrade all of the other VMs using the following command: csar update --vnf --sites <site> --service-group <service_group> --index-range <range> --sdf <path to SDF>.

Indexes start from 0, so 0 refers to the first VM. The --index-range option accepts ranges as well as comma-separated indexes (e.g. 1-3,7,9). To upgrade the VMs in stages, run the command multiple times with the appropriate --index-range values.
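As a sketch of the staged approach, suppose there are six custom VMs at indexes 0-5, where index 0 hosts the Rhino node with the lowest ID. The site name, service group, and SDF path below are placeholder values, not values from this document:

```shell
# Placeholder values -- substitute your own site, service group, and uplevel SDF path.
site=dc1
service_group=custom
sdf=/home/admin/uplevel/sdf-custom.yaml

# Upgrade indexes 1-3 in the first stage, then 4 and 5 in the second.
# Index 0 (lowest Rhino node ID) is deliberately excluded; it is upgraded last.
stage1="csar update --vnf --sites $site --service-group $service_group --index-range 1-3 --sdf $sdf"
stage2="csar update --vnf --sites $site --service-group $service_group --index-range 4,5 --sdf $sdf"
echo "$stage1"
echo "$stage2"
```

Each stage completes (all its nodes Green in MDM) before you run the next one, so you can validate a subset of uplevel VMs before committing the rest.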

The following will occur one custom node at a time:

  • The downlevel node will be quiesced.

  • The uplevel node will be created and boot up.

  • The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.

  • Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point the csar update command will move on to the next custom VM, or report that the upgrade of the custom VMs was successful if all nodes have now been upgraded.

  • Once the upgrade is complete, place calls and run any additional validation tests to verify the uplevel VMs are working as expected.

Step 2 - Upgrade the final downlevel custom VM

Upgrade the VM with the Rhino node that has the lowest ID.

Run the following command: csar update --vnf --sites <site> --service-group <service_group> --index-range <index> --sdf <path to SDF>.
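For illustration, if the Rhino node with the lowest ID runs on the VM at index 0, the final invocation would look like this (the site name, service group, and SDF path are placeholders):

```shell
# Placeholder values -- substitute your own.
final="csar update --vnf --sites dc1 --service-group custom --index-range 0 --sdf /home/admin/uplevel/sdf-custom.yaml"
echo "$final"
```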

Backout procedure

If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To roll back, repeat the steps above with the downlevel custom CSAR and downlevel SDF. The lowest uplevel VM must be rolled back last. For example, if VMs 2-5 are in the uplevel, you must roll back VMs 3-5, then roll back VM 2.

You may need to use the --skip pre-update-checks flag as part of the csar update command. The --skip pre-update-checks flag allows rollbacks when a node is unhealthy.
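Continuing the example above, with VMs 2-5 in the uplevel, a sketch of the rollback sequence would be as follows. The site name, service group, and SDF path are placeholders, and the skip flag is only needed when a node is unhealthy:

```shell
# Placeholder values -- substitute your own; note this is the *downlevel* SDF.
site=dc1
service_group=custom
sdf=/home/admin/downlevel/sdf-custom.yaml

# Roll back the higher-indexed uplevel VMs first, the lowest uplevel VM (index 2) last.
rollback1="csar update --vnf --sites $site --service-group $service_group --index-range 3-5 --sdf $sdf"
# If a node is unhealthy, append the skip flag so the rollback can proceed.
rollback2="csar update --vnf --sites $site --service-group $service_group --index-range 2 --sdf $sdf --skip pre-update-checks"
echo "$rollback1"
echo "$rollback2"
```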

If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. Run csar redeploy --vnf --sites <site> --sdf <path to SDF>.

Diagnostics during the quiesce stage

When the downlevel VMs are quiesced, they upload some diagnostics to the CDS. These may be useful if the upgrade or rollback fails.

Deployments with unclustered Rhino using SIMPL 6.7.0

If you have specified the custom VMs to use unclustered Rhino in the node-parameters.yaml file, follow the instructions in this section to upgrade them.

Step 1 - Upgrade the downlevel custom VMs

Run csar update --vnf <image name> --sdf <path to SDF>.

Note <image name> is the image-name you specified in the node-parameters.yaml file that was provided to the VM Build Container.
Note To perform a canary upgrade, run csar update --vnf --sites <site> --service-group <service_group> --index-range <range> --sdf <path to SDF>. Indexes start from 0, so 0 refers to the first VM. The --index-range option accepts ranges as well as comma-separated indexes (e.g. 1-3,7,9). Only the nodes specified in the range will be upgraded.
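As an illustration, a canary upgrade of just the first VM (index 0) might look like this, with a placeholder site name, service group, and SDF path:

```shell
# Placeholder values -- substitute your own.
canary="csar update --vnf --sites dc1 --service-group custom --index-range 0 --sdf /home/admin/uplevel/sdf-custom.yaml"
echo "$canary"
```

Once the canary node is Green in MDM and validated, repeat the command with the remaining indexes (or run the full csar update without --index-range).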

This will validate the uplevel SDF, generate the uplevel Terraform template, upload the uplevel image, and then it will start the upgrade.

The following will occur one custom node at a time:

  • The downlevel node will be quiesced.

  • The uplevel node will be created and boot up.

  • The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.

  • Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point the csar update command will move on to the next custom VM, or report that the upgrade of the custom VMs was successful if all nodes have now been upgraded.

  • Once the upgrade is complete, place calls and run any additional validation tests to verify the uplevel VMs are working as expected.

Backout procedure

If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To roll back, repeat the steps above with the downlevel custom CSAR and downlevel SDF.

You may need to use the --skip pre-update-checks flag as part of the csar update command. The --skip pre-update-checks flag allows rollbacks when a node is unhealthy.

If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. Run csar redeploy --vnf --sites <site> --sdf <path to SDF>.

Diagnostics during the quiesce stage

When the downlevel VMs are quiesced, they upload some diagnostics to the CDS. These may be useful if the upgrade or rollback fails.

Deployments with clustered Rhino using SIMPL 6.6.x

If you have specified the custom VMs to use clustered Rhino in the node-parameters.yaml file, follow the instructions in this section to upgrade them.

Step 1 - Validate the SDF

Run csar validate-vsphere --sdf <path to SDF>.

This will validate the uplevel SDF.

Step 2 - Generate the Terraform Template

Run csar generate --vnf --sdf <path to SDF>.

This will generate the terraform template.
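Steps 1 and 2 can be sketched together as follows; the SDF path is a placeholder:

```shell
# Placeholder uplevel SDF path -- substitute your own.
sdf=/home/admin/uplevel/sdf-custom.yaml

validate="csar validate-vsphere --sdf $sdf"   # Step 1: validate the uplevel SDF
generate="csar generate --vnf --sdf $sdf"     # Step 2: generate the Terraform template
echo "$validate"
echo "$generate"
```

Run the validate step first and resolve any reported SDF errors before generating the template.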

Step 3 - Upgrade the downlevel custom VMs

The VM with the Rhino node that has the lowest ID must be upgraded last.

Upgrade all of the other VMs using the following command: csar update --vnf --sites <site> --service-group <service_group> --index-range <range>.

Indexes start from 0, so 0 refers to the first VM. The --index-range option accepts ranges as well as comma-separated indexes (e.g. 1-3,7,9). To upgrade the VMs in stages, run the command multiple times with the appropriate --index-range values.

The following will occur one custom node at a time:

  • The downlevel node will be quiesced.

  • The uplevel node will be created and boot up.

  • The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.

  • Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point the csar update command will move on to the next custom VM, or report that the upgrade of the custom VMs was successful if all nodes have now been upgraded.

  • Once the upgrade is complete, place calls and run any additional validation tests to verify the uplevel VMs are working as expected.

Step 4 - Upgrade the final custom VM

Upgrade the VM with the Rhino node that has the lowest ID.

Run the following command: csar update --vnf --sites <site> --service-group <service_group> --index-range <index>.

Backout procedure

If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To roll back, repeat the steps above with the downlevel custom CSAR and downlevel SDF. The lowest uplevel VM must be rolled back last. For example, if VMs 2-5 are in the uplevel, you must roll back VMs 3-5, then roll back VM 2.

You may need to use the --skip-pre-update-checks flag as part of the csar update command. The --skip-pre-update-checks flag allows rollbacks when a node is unhealthy.

If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. Run csar deploy --redeploy --vnf --sites <site>.

Diagnostics during the quiesce stage

When the downlevel VMs are quiesced, they upload some diagnostics to the CDS. These may be useful if the upgrade or rollback fails.

Deployments with unclustered Rhino using SIMPL 6.6.x

If you have specified the custom VMs to use unclustered Rhino in the node-parameters.yaml file, follow the instructions in this section to upgrade them.

Step 1 - Validate the SDF

Run csar validate-vsphere --sdf <path to SDF>.

This will validate the uplevel SDF.

Step 2 - Generate the Terraform Template

Run csar generate --vnf --sdf <path to SDF>.

This will generate the terraform template.

Step 3 - Upgrade the downlevel custom nodes

Run csar update --vnf <image name>.

Note <image name> is the image-name you specified in the node-parameters.yaml file that was provided to the VM Build Container.
Note To perform a canary upgrade, run csar update --vnf --sites <site> --service-group <service_group> --index-range <range>. Indexes start from 0, so 0 refers to the first VM. The --index-range option accepts ranges as well as comma-separated indexes (e.g. 1-3,7,9). Only the nodes specified in the range will be upgraded.

This will upload the uplevel image, then it will start the upgrade.

The following will occur one custom node at a time:

  • The downlevel node will be quiesced.

  • The uplevel node will be created and boot up.

  • The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.

  • Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point the csar update command will move on to the next custom VM, or report that the upgrade of the custom VMs was successful if all nodes have now been upgraded.

  • Once the upgrade is complete, place calls and run any additional validation tests to verify the uplevel VMs are working as expected.

Backout procedure

If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To roll back, repeat the steps above with the downlevel custom CSAR and downlevel SDF.

You may need to use the --skip-pre-update-checks flag as part of the csar update command. The --skip-pre-update-checks flag allows rollbacks when a node is unhealthy.

If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. Run csar deploy --redeploy --vnf --sites <site>.

Diagnostics during the quiesce stage

When the downlevel VMs are quiesced, they upload some diagnostics to the CDS. These may be useful if the upgrade or rollback fails.

Next Step

Follow the post-upgrade instructions here: Post rolling upgrade steps

VM Build Container Version 1.0.0