Planning for the procedure

Background knowledge

This procedure assumes that:

  • Downlevel cluster is already deployed with Cloudify.

Note All Cloudify CLI commands in this procedure can also be executed through the Cloudify UI.

Reserve maintenance period

This procedure does not require a maintenance period. However, if you are integrating into a live network, we recommend putting measures in place to mitigate any unforeseen events.

Plan for service impact

During an upgrade there is one fewer node available. However, this should not affect availability, provided that the deployment is sized appropriately. We recommend sizing the deployment for N+k nodes, where N is the number of nodes needed for peak load and k is the redundancy margin. We also recommend that k = ceil(N*0.1).
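As a worked example of the N+k sizing rule above (the formula is taken from the recommendation; the helper function name is ours):

```python
import math

def recommended_cluster_size(n_peak: int) -> int:
    """Return N + k, where k = ceil(N * 0.1), per the sizing recommendation."""
    k = math.ceil(n_peak * 0.1)
    return n_peak + k

# For a peak load needing 5 nodes, k = ceil(0.5) = 1, so deploy 6 nodes.
print(recommended_cluster_size(5))   # 6
print(recommended_cluster_size(20))  # 22
```

With this sizing, the cluster still has at least N nodes available while one node is being upgraded.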

People

You must be a system operator to perform the MOP steps.

Tools and access

  • You must have access to the VMware vCloud Director UI, or have ovftool (4.3.0 build-14746126 or greater) installed on your machine.

  • You must have access to the Cloudify UI or CLI.

Installation Questions

Question: Do you have the correct REM node CSAR?

More information: All REM virtual appliances use the naming convention rem-<full-version>-vmware-csar.zip. For example, rem-2.0.0-vmware-csar.zip, where 2.0.0 is the software version. In particular, ensure you have the VMware CSAR (which supports both VMware vSphere and VMware vCloud).
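A quick sanity check of a CSAR filename against the naming convention above can be scripted; this is a sketch of ours, and the exact regular expression (in particular, the shape of <full-version>) is an assumption, not part of the product:

```python
import re
from typing import Optional

# Matches the documented convention: rem-<full-version>-vmware-csar.zip.
# The version pattern (dotted numbers) is our assumption.
CSAR_PATTERN = re.compile(r"^rem-(?P<version>[0-9]+(?:\.[0-9]+)*)-vmware-csar\.zip$")

def rem_csar_version(filename: str) -> Optional[str]:
    """Return the software version embedded in a REM VMware CSAR filename, or None."""
    m = CSAR_PATTERN.match(filename)
    return m.group("version") if m else None

print(rem_csar_version("rem-2.0.0-vmware-csar.zip"))      # 2.0.0
print(rem_csar_version("rem-2.0.0-openstack-csar.zip"))   # None (wrong platform)
```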

Method of procedure

Step 1 - Create the uplevel SDF

The uplevel SDF can be created from the downlevel SDF if there have been no schema changes between the versions.

You can use the rvtconfig dump-config command to retrieve the downlevel SDF if the VM cluster has been previously configured.

Note Passwords and keys may have been removed from the dumped SDF. Fill these fields in with the actual values before using it.
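Restoring the stripped secrets can also be done mechanically once the SDF has been parsed into nested dicts/lists (for example with a YAML parser). This is a minimal sketch of ours, not part of rvtconfig: the key-matching heuristic, the function names, and the cds-password field are all assumptions for illustration.

```python
def fill_secrets(node, secrets, path=""):
    """Recursively walk a parsed SDF (nested dicts/lists) and fill empty
    password/key fields from a lookup of known secret values, keyed by path."""
    if isinstance(node, dict):
        for k, v in node.items():
            child_path = f"{path}.{k}" if path else k
            looks_secret = any(s in k.lower() for s in ("password", "key", "secret"))
            if v in ("", None) and looks_secret:
                if child_path in secrets:
                    node[k] = secrets[child_path]
            else:
                fill_secrets(v, secrets, child_path)
    elif isinstance(node, list):
        for i, item in enumerate(node):
            fill_secrets(item, secrets, f"{path}[{i}]")
    return node

# Hypothetical fragment of a dumped SDF with a blanked-out secret field.
sdf = {"vnfcs": [{"type": "rem", "cds-password": "", "image": "rem-uplevel"}]}
filled = fill_secrets(sdf, {"vnfcs[0].cds-password": "s3cret"})
print(filled["vnfcs"][0]["cds-password"])  # s3cret
```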

Step 2 - Update uplevel SDF

Edit the vnfcs section for rem in the uplevel SDF with the following changes:

  • image: The uplevel image template to be stored in VMware vCloud.

  • ip: Remove all IP addresses and replace with [].

  • count: Change to 0.

Example:

...
    vnfcs:
      - type: rem
        image: rem-uplevel
        name: mytestrem
        cluster-configuration:
          count: 0
          instances: []
        networks:
        - ip-addresses:
            ip: []
          name: Management
          subnet: mgmt-subnet
          traffic-types:
          - management
...
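The Step 2 edits can be applied mechanically once the SDF has been loaded into Python dicts/lists (e.g. with a YAML parser). A sketch, assuming the vnfc structure shown in the example above; the function name is ours:

```python
def prepare_uplevel_rem_vnfc(vnfc, uplevel_image):
    """Apply the Step 2 edits to a rem vnfc entry: set the uplevel image,
    zero the instance count, and clear all IP addresses."""
    assert vnfc["type"] == "rem"
    vnfc["image"] = uplevel_image
    vnfc["cluster-configuration"]["count"] = 0
    vnfc["cluster-configuration"]["instances"] = []
    for network in vnfc["networks"]:
        network["ip-addresses"]["ip"] = []
    return vnfc

# Hypothetical downlevel vnfc entry, mirroring the example structure above.
vnfc = {
    "type": "rem",
    "image": "rem-downlevel",
    "name": "mytestrem",
    "cluster-configuration": {"count": 3, "instances": []},
    "networks": [
        {"ip-addresses": {"ip": ["10.0.0.1", "10.0.0.2", "10.0.0.3"]},
         "name": "Management", "subnet": "mgmt-subnet",
         "traffic-types": ["management"]},
    ],
}
prepare_uplevel_rem_vnfc(vnfc, "rem-uplevel")
print(vnfc["cluster-configuration"]["count"],
      vnfc["networks"][0]["ip-addresses"]["ip"])  # 0 []
```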

Step 3 - Deploy the uplevel CSAR

Follow steps 1-6 of the CSAR deployment procedure, but use the uplevel CSAR and SDF.

Ensure that the VM image template name, blueprint name, and Cloudify deployment name are distinct from the downlevel names.

Step 4 - Ensure the downlevel deployment is healthy in Cloudify

Run the following Cloudify command to verify that the downlevel deployment is healthy:

cfy executions start ensure_healthy_deployment -d <downlevel-cloudify-deployment-name>

The following message will be seen in the command console or Cloudify UI console:

Task succeeded 'metaswitch_deployment_plugin.software.is_green' (True)

Step 5 - Upload uplevel configuration to the CDS

Upload the REM configuration to the CDS with rvtconfig. Configuration examples and instructions can be found on the configuration page.

Note Use the uplevel VM full version for --vm-version when executing the rvtconfig upload-config command.

Step 6 - Rolling Upgrade in Cloudify

The following two workflows must be run simultaneously, in separate SSH terminals. Their status can also be followed in the Cloudify UI console.

To start the Scale-In workflow for the downlevel deployment, run the command:

cfy executions start upgrade_scale_in -d <downlevel-deployment-name> \
                                      -p scaling_out_deployment_name=<uplevel-deployment-name> \
                                      -p clear_state=False \
                                      -p upgrade_version=<uplevel-version> \
                                      -p scale_in_quantity=<number-of-VMs-to-upgrade> \
                                      -p scaling_group_name=rem_node_group

In a second SSH terminal, start the Scale-Out workflow for the uplevel deployment by running the command:

cfy executions start upgrade_scale_out -d <uplevel-deployment-name> \
                                       -p scaling_in_deployment_name=<downlevel-deployment-name> \
                                       -p scaling_group_name=rem_node_group

The following message will be seen in the first command console or Cloudify UI console:

'upgrade_scale_in' workflow execution succeeded

The following message will be seen in the second command console or Cloudify UI console:

'upgrade_scale_out' workflow execution succeeded

Backout procedure

Rollback is performed by swapping the uplevel/downlevel versions and Cloudify deployment names in the upgrade commands, as shown below.

The following two workflows must be run simultaneously, in separate SSH terminals. Their status can also be followed in the Cloudify UI console.

To start the Scale-In workflow for the uplevel deployment, run the command:

cfy executions start upgrade_scale_in -d <uplevel-deployment-name> \
                                      -p scaling_out_deployment_name=<downlevel-deployment-name> \
                                      -p clear_state=False \
                                      -p scale_in_quantity=<number-of-VMs-to-rollback> \
                                      -p scaling_group_name=rem_node_group

In a second SSH terminal, start the Scale-Out workflow for the downlevel deployment by running the command:

cfy executions start upgrade_scale_out -d <downlevel-deployment-name> \
                                       -p scaling_in_deployment_name=<uplevel-deployment-name> \
                                       -p scaling_group_name=rem_node_group

The following message will be seen in the first command console or Cloudify UI console:

'upgrade_scale_in' workflow execution succeeded

The following message will be seen in the second command console or Cloudify UI console:

'upgrade_scale_out' workflow execution succeeded
Rhino VoLTE TAS VMs Version 4.0.0