Planning for the procedure

Background knowledge

This procedure assumes that:

  • The downlevel cluster is already deployed with Cloudify.

Note All Cloudify CLI commands can also be executed in the Cloudify UI.

Reserve maintenance period

This procedure does not require a maintenance period. However, if you are integrating into a live network, it is recommended that you put measures in place to mitigate any unforeseen events.

Plan for service impact

During an upgrade, one fewer node is available. However, this should not affect availability, provided that the deployment is sized appropriately. We recommend sizing the deployment for N+k, where N is the number of nodes needed to handle peak load and k is the number of redundant nodes. We also recommend that k = ceil(N * 0.1).
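The sizing rule above can be sketched as a short Python helper. The function name is illustrative and not part of any product tooling:

```python
import math

def recommended_cluster_size(n_peak: int) -> int:
    """Return the recommended VM count for an N+k deployment.

    n_peak: the number of nodes (N) needed to handle peak load.
    The recommended redundancy is k = ceil(N * 0.1).
    """
    k = math.ceil(n_peak * 0.1)
    return n_peak + k

# For example, a cluster sized for a peak load of 12 nodes should be
# deployed with 12 + ceil(1.2) = 12 + 2 = 14 nodes.
```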

People

You must be a system operator to perform the MOP steps.

Tools and access

  • You must have access to the VMware vCloud Director UI, or have ovftool (4.3.0 build-14746126 or greater) installed on your machine.

  • You must have access to the Cloudify UI or CLI.

Installation questions

Question: Do you have the correct custom node CSAR?

More information: All custom virtual appliances use the naming convention [image name]-<full-version>-vcloud-csar.zip. For example, [image name]-2.0.0-vcloud-csar.zip, where 2.0.0 is the software version. In particular, ensure that you have the VMware vCloud CSAR.
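As a quick sanity check, the naming convention can be validated with a regular expression. This snippet is illustrative and not part of the product tooling:

```python
import re

# Matches <image-name>-<full-version>-vcloud-csar.zip,
# e.g. mynode-2.0.0-vcloud-csar.zip. The version is assumed to be
# dotted-numeric (e.g. 2.0.0), as in the example above.
CSAR_NAME = re.compile(
    r"^(?P<image>.+)-(?P<version>\d+(?:\.\d+)+)-vcloud-csar\.zip$"
)

def parse_csar_name(filename: str):
    """Return (image name, version) if the filename follows the
    vCloud CSAR naming convention, or None if it does not."""
    m = CSAR_NAME.match(filename)
    return (m.group("image"), m.group("version")) if m else None
```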

Method of procedure

Step 1 - Create the uplevel SDF

The uplevel SDF can be created from the downlevel SDF if there have been no schema changes between the versions.

If the VM cluster has previously been configured, the rvtconfig dump-config command can be used to retrieve the downlevel SDF.

Note Passwords and keys may have been removed from the dumped SDF. These fields must be filled in with the actual values before the SDF is used.
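Because secrets may be stripped from the dumped SDF, a quick scan for blanked-out secret-like fields can help catch omissions before the SDF is reused. This sketch operates on an SDF that has already been parsed into Python dictionaries (for example with a YAML parser); the set of field names treated as secrets is illustrative:

```python
SECRET_KEYS = {"password", "private-key", "secret"}  # illustrative names

def find_empty_secrets(node, path=""):
    """Recursively collect the paths of secret-like fields whose values
    were blanked out (None or empty string) in the dumped SDF."""
    missing = []
    if isinstance(node, dict):
        for key, value in node.items():
            child = f"{path}/{key}"
            if key in SECRET_KEYS and value in (None, ""):
                missing.append(child)
            else:
                missing.extend(find_empty_secrets(value, child))
    elif isinstance(node, list):
        for i, item in enumerate(node):
            missing.extend(find_empty_secrets(item, f"{path}[{i}]"))
    return missing
```

Any path returned by the scan points at a field that must be filled in by hand.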

Step 2 - Update uplevel SDF

Edit the vnfcs section for [image name] in the uplevel SDF with the following changes:

  • image: Change to the uplevel image template stored in VMware vCloud.

  • ip: Remove all IP addresses and replace with [].

  • count: Change to 0.

Example:

...
    vnfcs:
      - type: [image name]
        image: [image name]-uplevel
        name: mytest[image name]
        cluster-configuration:
          count: 0
          instances: []
        networks:
        - ip-addresses:
            ip: []
          name: Management
          subnet: mgmt-subnet
          traffic-types:
          - management
...
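The three edits above can also be applied programmatically once the SDF has been parsed (for example with a YAML library). This sketch works on plain Python dictionaries whose structure mirrors the example above; the function name is illustrative:

```python
def prepare_uplevel_vnfc(vnfc, uplevel_image):
    """Apply the uplevel edits to one parsed vnfc entry:
    point it at the uplevel image, clear all IP addresses,
    and set the instance count to 0."""
    vnfc["image"] = uplevel_image
    vnfc["cluster-configuration"]["count"] = 0
    vnfc["cluster-configuration"]["instances"] = []
    for network in vnfc.get("networks", []):
        network["ip-addresses"]["ip"] = []
    return vnfc
```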

Step 3 - Deploy the uplevel CSAR

Follow steps 1-6 of the CSAR deployment procedure, but use the uplevel CSAR and SDF file.

Ensure that the VM image template name, blueprint name, and deployment name are distinct from the downlevel names.

Step 4 - Ensure healthy downlevel deployment in Cloudify

Run the following Cloudify command to verify that the downlevel deployment is healthy:

cfy executions start ensure_healthy_deployment -d <downlevel-deployment-name>

The following message will be seen in the command console or Cloudify UI console:

Task succeeded 'metaswitch_deployment_plugin.software.is_green' (True)
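If the health check is scripted, the captured console output can be checked for the success marker above. This trivial helper is illustrative:

```python
def deployment_is_green(console_output: str) -> bool:
    """Return True if the ensure_healthy_deployment execution reported
    the is_green task succeeding with a True result."""
    return ("Task succeeded" in console_output
            and "is_green" in console_output
            and "(True)" in console_output)
```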

Step 5 - Upload uplevel configuration to the CDS

Upload the custom configuration to the CDS with rvtconfig. Instructions for this can be found on the Initial configuration page.

Note Use the uplevel VM's full version for the --vm-version argument when executing the rvtconfig initial-configure command.

Step 6 - Rolling upgrade in Cloudify

The following workflows must be run simultaneously, in separate terminals. Their status can also be followed in the Cloudify UI console.

Run the following command to start the Scale-In workflow for the downlevel deployment:

cfy executions start upgrade_scale_in -d <downlevel-deployment-name> \
                                      -p scaling_out_deployment_name=<uplevel-deployment-name> \
                                      -p clear_state=False \
                                      -p upgrade_version=<uplevel-version> \
                                      -p scale_in_quantity=<number-of-VMs-to-upgrade> \
                                      -p scaling_group_name=[image name]_node_group

In a second SSH terminal, run the following command to start the Scale-Out workflow for the uplevel deployment:

cfy executions start upgrade_scale_out -d <uplevel-deployment-name> \
                                       -p scaling_in_deployment_name=<downlevel-deployment-name> \
                                       -p scaling_group_name=[image name]_node_group

The following message will be seen in the first command console or Cloudify UI console:

'upgrade_scale_in' workflow execution succeeded

The following message will be seen in the second command console or Cloudify UI console:

'upgrade_scale_out' workflow execution succeeded
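When the rolling upgrade is scripted, the two cfy invocations above can be assembled from the deployment parameters. This sketch only builds the argument lists and does not execute anything; the helper names, and the idea of driving cfy from a script, are illustrative:

```python
def scale_in_args(downlevel, uplevel, version, quantity, group):
    """Build the argv for the Scale-In workflow on the downlevel deployment."""
    return [
        "cfy", "executions", "start", "upgrade_scale_in",
        "-d", downlevel,
        "-p", f"scaling_out_deployment_name={uplevel}",
        "-p", "clear_state=False",
        "-p", f"upgrade_version={version}",
        "-p", f"scale_in_quantity={quantity}",
        "-p", f"scaling_group_name={group}",
    ]

def scale_out_args(uplevel, downlevel, group):
    """Build the argv for the Scale-Out workflow on the uplevel deployment."""
    return [
        "cfy", "executions", "start", "upgrade_scale_out",
        "-d", uplevel,
        "-p", f"scaling_in_deployment_name={downlevel}",
        "-p", f"scaling_group_name={group}",
    ]

# The two resulting commands must still be run simultaneously, for
# example with subprocess.Popen in two processes or in two terminals.
```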

Backout procedure

A rollback is performed by swapping the uplevel and downlevel versions and deployment names in the upgrade workflows.

The following workflows must be run simultaneously, in separate terminals. Their status can also be followed in the Cloudify UI console.

Run the following command to start the Scale-In workflow for the uplevel deployment:

cfy executions start upgrade_scale_in -d <uplevel-deployment-name> \
                                      -p scaling_out_deployment_name=<downlevel-deployment-name> \
                                      -p clear_state=False \
                                      -p scale_in_quantity=<number-of-VMs-to-rollback> \
                                      -p scaling_group_name=[image name]_node_group

In a second SSH terminal, run the following command to start the Scale-Out workflow for the downlevel deployment:

cfy executions start upgrade_scale_out -d <downlevel-deployment-name> \
                                       -p scaling_in_deployment_name=<uplevel-deployment-name> \
                                       -p scaling_group_name=[image name]_node_group

The following message will be seen in the first command console or Cloudify UI console:

'upgrade_scale_in' workflow execution succeeded

The following message will be seen in the second command console or Cloudify UI console:

'upgrade_scale_out' workflow execution succeeded
VM Build Container Version 1.0.0