Planning for the procedure

Background knowledge

This procedure assumes that:

  • you are installing into an existing OpenStack deployment

    • The OpenStack deployment must be set up with support for Heat templates.

  • you are using an OpenStack version from Icehouse through to Train inclusive

  • you are thoroughly familiar with working with OpenStack machines and know how to set up tenants, users, roles, client environment scripts, and so on.
    (For more information, refer to the appropriate OpenStack installation guide for the version that you are using. A sample client environment script is sketched after this list.)

  • you are upgrading an existing downlevel SGC deployment.

  • you have deployed a SIMPL VM, unpacked the uplevel CSAR, and prepared an uplevel SDF.
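
A client environment script exports the credentials that the openstack commands in this procedure rely on. The following is a minimal sketch for an Identity v3 deployment; every value (auth URL, project, user, domains, password) is hypothetical and must be replaced with the details of your own deployment.

  # Hypothetical client environment (openrc) script, for illustration only.
  export OS_AUTH_URL=https://openstack.example.com:5000/v3
  export OS_IDENTITY_API_VERSION=3
  export OS_PROJECT_NAME=sgc-project
  export OS_PROJECT_DOMAIN_NAME=Default
  export OS_USERNAME=sgc-operator
  export OS_USER_DOMAIN_NAME=Default
  export OS_PASSWORD=<password>

Source the script (for example, . sgc-openrc.sh) before running the openstack commands in the steps below.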

Reserve maintenance period

This procedure requires a maintenance period. When integrating into a live network, we recommend that you implement measures to mitigate any unforeseen events.

Plan for service impact

Misconfiguration could disrupt service for existing network elements.

People

You must be a system operator to perform the MOP steps.

Tools and access

You must have access to the SIMPL VM, and the SIMPL VM must have the right permissions on the OpenStack deployment.

Method of procedure

Note Refer to the SIMPL VM Documentation for details on the commands mentioned in the procedure.

Step 1 - Check OpenStack quotas

The SIMPL VM creates one server group per VM, and one security group per interface on each VM. For example, a site with three SGC VMs, each with three interfaces, needs three server groups and nine security groups. OpenStack limits the number of server groups and security groups through quotas.

View the quotas by running openstack quota show <project ID> on the OpenStack Controller node. This shows the maximum number of each resource type.

You can view the existing server groups by running openstack server group list. Similarly, you can find the existing security groups by running openstack security group list.
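
As a quick sanity check, you can compare the current usage against the quota before deploying the uplevel VMs. This is a minimal sketch, assuming the OpenStack CLI is available and your client environment script has been sourced; the -f value option prints one row per line without headers, so the rows are easy to count.

  # Show the quota values for the project (substitute your own project ID).
  openstack quota show <project ID>
  # Count the server groups and security groups already in use.
  openstack server group list -f value | wc -l
  openstack security group list -f value | wc -l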

If the quota is too small to accommodate the new VMs that will be deployed, increase it by running
openstack quota set --<quota field to increase> <new quota value> <project ID>. For example:
openstack quota set --server-groups 100 125610b8bf424e61ad2aa5be27ad73bb

See CSAR EFIX patches to learn more about the CSAR EFIX patching process.

Step 2 - Generate the uplevel heat template

Run csar generate --vnf sgc --sdf <path to SDF>.

This command validates the uplevel SDF and generates the uplevel heat template. If any errors occur, check the documentation on preparing the SDF, fix the uplevel SDF, and rerun the command.
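
For example, a concrete invocation might look like the following; the SDF path is hypothetical and should point at the uplevel SDF you prepared on the SIMPL VM.

  csar generate --vnf sgc --sdf /home/admin/uplevel-config/sdf-rvt.yaml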

Step 3 - Upgrade the downlevel SGC nodes using the uplevel heat template

Run csar update --vnf sgc. This will upload the uplevel image.

The following occurs for one SGC node at a time (an optional monitoring sketch follows this list):

  • The downlevel node will be quiesced.

  • The uplevel node will be created and boot up.

  • The VM will automatically start applying configuration from the files you previously uploaded to CDS. During this phase, the status of the VM in MDM will be Orange.

  • Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point the csar update command will move on to the next SGC VM, or report that the upgrade of the SGC was successful if all nodes have now been upgraded.
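
While csar update runs, you can optionally watch the nodes being replaced from the OpenStack side. This is a minimal sketch, assuming your SGC VM names contain "sgc" and that your client environment script has been sourced; it supplements, but does not replace, the status reported by csar update and MDM.

  # Refresh the server list every 30 seconds; the name filter is an assumption
  # about your VM naming and may need adjusting.
  watch -n 30 "openstack server list --name sgc"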

Note To perform a canary upgrade, run csar update --vnf sgc --sites <site> --service-group <service_group> --index-range <range>. The range accepts a comma-separated list of node indices, starting from 0. Only the nodes at the specified indices will be upgraded.
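
For example, the following invocation upgrades only the first two SGC nodes; the site name "SiteA" and service group name "sgc" are hypothetical and must match the values in your SDF.

  csar update --vnf sgc --sites SiteA --service-group sgc --index-range 0,1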

Backout procedure

If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To roll back, repeat steps 2 and 3 with the downlevel SGC CSAR and downlevel SDF, appending the --skip-pre-update-checks flag to the csar update command in step 3. The --skip-pre-update-checks flag allows rollbacks when a node is unhealthy.
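
Put together, a rollback might look like the following sketch; the downlevel SDF path is hypothetical and must point at the SDF that matches the downlevel CSAR.

  # Step 2 with the downlevel SDF: regenerate the heat template.
  csar generate --vnf sgc --sdf /home/admin/downlevel-config/sdf-rvt.yaml
  # Step 3 with the pre-update checks skipped, so unhealthy nodes can be rolled back.
  csar update --vnf sgc --skip-pre-update-checks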

If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. Repeat step 2 with the downlevel CSAR and SDF, then run csar deploy --redeploy --vnf sgc --sites <site>.
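
For example (the site name "SiteA" is hypothetical and must match your SDF):

  csar deploy --redeploy --vnf sgc --sites SiteA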

Diagnostics during the quiesce stage

When the downlevel VMs are quiesced, they upload some diagnostics to CDS. These may be useful if the upgrade or rollback fails. To retrieve these diagnostics, follow the instructions in Retrieving deployment TAS audit logs with export-audit-history and Retrieving Initconf and Rhino logs with export-log-history.

Next Step

To verify your SGC upgrade, see Verify the state of the nodes and processes.
