1. Check prerequisites

Before starting the upgrade, check the following:

  • The SMO nodes must be running version 4.0.0-34-1.0.0 or later. If they are not, upgrade the SMO nodes to 4.0.0-34-1.0.0 first, following the RVT 4.0.0 VM Install Guide. This requires a separate maintenance window, planned before the upgrade to RVT 4.1 begins.

  • DNS changes are required for this upgrade; see the section "Update the DNS entry for the vertical service codes feature" below for details. These changes must be made before the upgrade commences, so ensure they are in place and tested with sufficient time to spare.

  • A new RVT license must be installed before you commence the upgrade to RVT 4.1. Contact your Customer Care Representative to obtain the updated license.

  • The TSN nodes must be running one of the following versions:

    • 4.0.0-9-1.0.0

    • 4.0.0-14-1.0.0

    • 4.0.0-22-1.0.0

    • 4.0.0-23-1.0.0

    • 4.0.0-24-1.0.0

    • 4.0.0-28-1.0.0

    • 4.0.0-34-1.0.0

      If they are on a different version, contact your Customer Care Representative.

  • All other nodes must be running version 4.0.0-9-1.0.0 or later.

  • You have deployed a SIMPL VM, version 6.13.3 or later. Output shown in this document is correct for version 6.13.3 of the SIMPL VM; it may differ slightly on later versions.

    If your SIMPL VM is still on a lower version, upgrade it as per the SIMPL VM Documentation. SIMPL VM upgrades are out of scope for this document.

  • If you want to use RVT SIMon dashboards, you will need SIMon version 13.5.0 or later, and you must ensure the community string is set correctly. Contact your Customer Care Representative for more information.

  • You have access to the SSH keys used to access the SIMPL VM.

  • You have access to the SIMPL and MDM documentation.

2. Prepare for breaking interface changes

  • From RVT 4.1 onwards, all deployments use the same static set of SNMP OIDs. In RVT 4.0.0, the OIDs differed per deployment (but were preserved across upgrades). This means that during an upgrade to 4.1, your deployment will be switched over to the new, static set. Ensure all monitoring systems are updated to accommodate this change. Contact your Customer Care Representative for Management Information Bases (MIBs) detailing all the new SNMP OIDs.

  • The rhinoInstanceId for the HSS Data and Data Configuration REST API has changed. In RVT 4.0.0, the request URI was of the form /rem/sentinel/api/hssdata/subscriberdata?rhinoInstanceId=Local&selectionKey=Metaswitch::::. In RVT 4.1, the request URI is of the form /rem/sentinel/api/hssdata/subscriberdata?rhinoInstanceId=RVT-mag.<site ID>-<hostname>&selectionKey=Metaswitch::::. If you use this API, all calls must be made to the new URI once the MAG nodes have been upgraded. Prepare for this prior to starting the upgrade; an illustrative example follows.
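
As a minimal sketch of the client-side change, the same query before and after the upgrade might look as follows. The REM host and any authentication are placeholders and depend on your deployment; only the rhinoInstanceId parameter differs.

  # Hypothetical example; <rem host>, <site ID> and <hostname> are placeholders.
  # RVT 4.0.0 (old URI):
  curl "https://<rem host>/rem/sentinel/api/hssdata/subscriberdata?rhinoInstanceId=Local&selectionKey=Metaswitch::::"

  # RVT 4.1 (new URI):
  curl "https://<rem host>/rem/sentinel/api/hssdata/subscriberdata?rhinoInstanceId=RVT-mag.<site ID>-<hostname>&selectionKey=Metaswitch::::"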

3. Upload uplevel CSARs

Your Customer Care Representative will have provided you with the uplevel TSN, MAG, ShCM, MMT GSM, and SMO CSARs. Use scp to copy these to /csar-volume/csar/ on the SIMPL VM.

Once the copy is complete, for each CSAR, run csar unpack /csar-volume/csar/<filename> on the SIMPL VM (replacing <filename> with the filename of the CSAR, which will end with .zip).

The csar unpack command may fail if there is insufficient disk space available. If this occurs, the SIMPL VM will report the failure together with instructions to remove some CSARs to free up disk space. You can list all unpacked CSARs with csar list and remove a CSAR with csar remove <node type>/<version>.
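
As an illustration, the upload and unpack sequence for one CSAR might look as follows; the filename, version, and user shown are placeholders for your actual uplevel CSARs, and the same steps are repeated for each node type.

  # From the machine holding the CSARs (filename is an illustrative placeholder):
  scp tsn-4.1-7-1.0.0-csar.zip admin@<SIMPL VM IP>:/csar-volume/csar/

  # On the SIMPL VM:
  csar unpack /csar-volume/csar/tsn-4.1-7-1.0.0-csar.zip
  csar list    # confirm the unpacked CSAR appears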

Backout procedure

Remove any unpacked CSARs using csar remove <node type>/<version>. Remove any uploaded CSARs from /csar-volume/csar/ using rm /csar-volume/csar/<filename>.

4. Update the configuration files for RVT 4.1

4.1. Prepare the downlevel config directory

If you keep the configuration hosted on the SIMPL VM, the existing config should already be located in /home/admin/current-config. (The folder name is not policed, so yours may have a different name, for example rvt-config; if so, rename it to current-config.) Verify this is the case by running ls /home/admin/current-config and checking that the directory contains:

  • The downlevel configuration files

  • The Rhino license.

  • The current SDF for the deployment (in the format used by SIMPL 6.6 and SIMPL 6.7). This is the SDF named sdf-rvt.yaml, which you will previously have used to manage the RVT 4.0.0 VMs.

  • Any certificates and private key files for REM, BSF, and/or XCAP: <type>-cert.crt and <type>-cert.key, where <type> is one of rem, bsf, or xcap.

If the directory is not present, or you prefer to keep your configuration outside of the SIMPL VM, then create this directory on the SIMPL VM:

mkdir /home/admin/current-config

Use scp to upload the files described above to this directory.
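
For example, from the machine holding the downlevel configuration (the filenames and user shown are illustrative; upload whichever of the files listed above apply to your deployment):

  scp sdf-rvt.yaml common-config.yaml snmp-config.yaml <Rhino license file> \
      admin@<SIMPL VM IP>:/home/admin/current-config/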

4.2. Create directories for RVT 4.1 configuration and for rollbacks

To create the directory for holding the uplevel configuration files, on the SIMPL VM, run:

mkdir /home/admin/uplevel-config

Then run

cp /home/admin/current-config/* /home/admin/uplevel-config

to copy the configuration, which you will edit in place in the steps below.

In addition, create a directory to contain a specially tailored copy of the SDF, which you will use if a rollback is required:

mkdir /home/admin/rvt-rollback-sdf

Note

At this point you should have the following directories on the SIMPL VM:
  • /home/admin/current-config, containing the downlevel configuration files (i.e. the unmodified files you copied off of the downlevel SIMPL VM).

  • /home/admin/uplevel-config, containing a copy of the current-config files. These files will be modified and used for the RVT 4.1 upgrade.

  • /home/admin/rvt-rollback-sdf, an empty directory that will contain a copy of the sdf-rvt.yaml file, to be used should a VM rollback be required.
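
You can confirm that the directories are in place and that the copy succeeded with a quick listing, for example:

  ls -d /home/admin/current-config /home/admin/uplevel-config /home/admin/rvt-rollback-sdf
  ls /home/admin/uplevel-config    # should mirror the contents of current-config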

4.3. Make product-independent changes to the SDF for SIMPL 6.13.3

SIMPL 6.13.3 (used by RVT 4.1) has major changes in the SDF format compared to SIMPL 6.6/6.7 (used by RVT 4.0.0). Most notably, secrets are now stored in QSG.

Updating the SDF is independent of the RVT upgrade and as such is not described in this document. Refer to the SIMPL VM Documentation for more details. You can also refer to the list of "Deprecated SDF fields" described in https://community.metaswitch.com/support/solutions/articles/76000042844-simpl-vm-release-notes for all versions from your current SIMPL version up to and including SIMPL 6.13.3. Make sure to make the changes to /home/admin/uplevel-config/sdf-rvt.yaml only. As per the SIMPL VM Documentation, product-specific changes are described in product documentation; these are covered below in Make product-specific changes to the SDF for RVT 4.1.

If you are upgrading a deployment on OpenStack, ensure you specify your OpenStack release under the openstack section in vim-configuration.

Important

Do NOT yet update the version in /home/admin/uplevel-config/sdf-rvt.yaml to the uplevel version, but instead keep it as the downlevel version until instructed otherwise.

4.4. Generate SSH keys for the RVT nodes

In RVT 4.1, SSH access to the VMs is available only using SSH keys, whereas in RVT 4.0.0, SSH access was possible using both passwords and SSH keys. You will therefore need to provision a key to access the RVT 4.1 VMs.

For 4.1, the SSH key must be in PEM format; it must not be an OpenSSH formatted key (the default format of keys created by ssh-keygen).

If your existing key is in OpenSSH format, or if you did not use SSH keys for access to RVT 4.0.0 VMs, generate a new one. You can create a PEM formatted SSH key pair using the command ssh-keygen -b 4096 -m PEM -f /home/admin/rvt-ssh-key. This will prompt for a passphrase; we recommend setting one for security reasons. Keep the file rvt-ssh-key safe, as it will be used to connect to the RVT 4.1 VMs.
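
A minimal sketch of generating the key and checking that it is PEM formatted (the first line of a PEM formatted RSA private key reads -----BEGIN RSA PRIVATE KEY-----):

  ssh-keygen -b 4096 -m PEM -f /home/admin/rvt-ssh-key
  head -1 /home/admin/rvt-ssh-key    # expect: -----BEGIN RSA PRIVATE KEY-----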

Note

This key is intended for people who need to access the VMs directly. Keep it safe and share it only with those who need such access.

4.5. Create a copy of the SDF for rollback purposes

As any rollback to RVT 4.0.0 will need to be done using the upgraded SIMPL VM, you need an updated copy of the SDF to perform rollbacks. Before you make further updates to the SDF for RVT 4.1, create a copy:

cp /home/admin/uplevel-config/sdf-rvt.yaml /home/admin/rvt-rollback-sdf

4.6. Make product-specific changes to the SDF for RVT 4.1

In Make product-independent changes to the SDF for SIMPL 6.13.3, you updated the SDF for SIMPL 6.13.3. We now make further changes to the SDF to support RVT 4.1.

Some of these changes are due to secrets now being stored securely. We will first set the secret identifiers in the SDF, and then provide instructions on how to store their values in the secrets store.

Open the /home/admin/uplevel-config/sdf-rvt.yaml file using vi. Find the vnfcs section, and within it every RVT VNFC (tsn, mag, shcm, mmt-gsm, or smo). For each of them, make the following changes:

  • Update the product-options as follows:

    • secrets-private-key has been replaced by secrets-private-key-id, with the value being stored in the secrets store. Make a note of the current value of the secrets-private-key line, as you will need it at a later stage, then replace this line with secrets-private-key-id: rvt-secrets-private-key.

    • If the primary-user-password line exists, make a note of its value, as you will need it at a later stage, then remove the line. Regardless of whether primary-user-password was previously present, you must now insert the line primary-user-password-id: rvt-primary-user-password. This field is mandatory.

    • For the tsn VNFC, add the appropriate Cassandra version option (cassandra_version_3_11) under the custom-options section, as shown below:

            product-options:
              tsn:
                cds-addresses:
                - 172.18.1.10
                - 172.18.1.11
                - 172.18.1.12
                custom-options:
                - log-passwords
                - cassandra_version_3_11
      Important

      Failure to add the cassandra_version_3_11 custom option to the SDF when performing a major TSN upgrade from 4.0.0 to 4.1 will result in the TSN 4.1 nodes being deployed with Cassandra version 4.1.3, and therefore being unable to join the existing Cassandra cluster.

  • Skip this step if using OpenStack; it is only applicable to vSphere-based deployments. For each VNFC type, except the SMO nodes, under the networks section find the entry which has a traffic-type of cluster. Remove the entry if present. If it is not present, move on to the next VNFC type.

For example, if the current networks section looks like this:

  networks:
    - ip-addresses:
        ip:
          - 172.16.0.11
      name: Management
      subnet: management
      traffic-types:
        - management
    - ip-addresses:
        ip:
          - 172.17.0.11
      name: Cluster
      subnet: cluster
      traffic-types:
        - cluster
    - ip-addresses:
        ip:
          - 172.18.0.11
      name: Signaling
      subnet: signaling
      traffic-types:
        - internal
        - diameter
        - sip
        - ss7

you would remove the second list entry, and end up with this:

  networks:
    - ip-addresses:
        ip:
          - 172.16.0.11
      name: Management
      subnet: management
      traffic-types:
        - management
    - ip-addresses:
        ip:
          - 172.18.0.11
      name: Signaling
      subnet: signaling
      traffic-types:
        - internal
        - diameter
        - sip
        - ss7
  • Under cluster-configuration, find the instances section. For every instance, ensure there is an ssh section as follows, where your public key is either the contents of your pre-existing public key, or the contents of /home/admin/rvt-ssh-key.pub if you generated one above:

    ssh:
      authorized-keys:
        - <your public key>
      private-key-id: rvt-simpl-private-key-id
  • Update the VM versions for all the VM types (tsn, mag, shcm, mmt-gsm, and smo). Find the vnfcs section, and within each VNFC, locate the version field and change its value to the uplevel version, for example 4.1-7-1.0.0.

       type: mag
-      version: 4.0.0-14-1.0.0
+      version: 4.1-7-1.0.0
       vim-configuration:

Save and close the file.

Next, run csar secrets auto-create-keys --sdf /home/admin/uplevel-config/sdf-rvt.yaml. This will generate the SSH key with ID rvt-simpl-private-key-id. This key will be used by the SIMPL VM to connect to the RVT VMs, so it should not be shared or kept elsewhere.

Then, generate a template secrets_input_file.yaml file by running:

csar secrets create-input-file --sdf /home/admin/uplevel-config/sdf-rvt.yaml

Open the file secrets_input_file.yaml using vi and, using the secrets you noted down in the previous steps, fill in the values as follows:

  • rvt-secrets-private-key: The value of secrets-private-key in /home/admin/current-config/sdf-rvt.yaml. Note that there are multiple occurrences of secrets-private-key in sdf-rvt.yaml, but they should all be equal. If this is not the case, contact your Customer Care Representative.

  • rvt-primary-user-password: What you want the password of the sentinel user to be. This password is used when logging into the VM through the VNFI console, when SSH connectivity can’t be established.

Run the command csar secrets add secrets_input_file.yaml to add the secrets to the secret store.

4.7. Provision SIMPL SSH key on the RVT 4.0.0 nodes

In the previous step, you generated an SSH key for the SIMPL VM to use to connect to the RVT VMs. During the upgrade to RVT 4.1 (or later), this SSH key will automatically be installed onto the VMs. However, the SIMPL VM also needs to connect to the RVT 4.0.0 VMs as part of the upgrade process. To allow this, the newly generated key must be manually copied to the RVT 4.0.0 VMs. Ensure you copy the key generated in the previous step, not the key you generated in step 4.4.

First, run csar secrets get-value rvt-simpl-private-key-id. From the output, copy-paste from the line -----BEGIN RSA PRIVATE KEY----- up to (and including) the line -----END RSA PRIVATE KEY-----. Create the file /home/admin/rvt-simpl-private-key using vi, and paste the private key. Save and close the file. Then run chmod 600 /home/admin/rvt-simpl-private-key to change the permissions.

Next, run ssh-keygen -y -f /home/admin/rvt-simpl-private-key > /home/admin/rvt-simpl-private-key.pub to generate the public key.
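
If you prefer to script the extraction rather than copy and paste by hand, the following sketch achieves the same result, assuming the get-value output prints the key between the BEGIN and END lines as described above:

  csar secrets get-value rvt-simpl-private-key-id \
    | sed -n '/-----BEGIN RSA PRIVATE KEY-----/,/-----END RSA PRIVATE KEY-----/p' \
    > /home/admin/rvt-simpl-private-key
  chmod 600 /home/admin/rvt-simpl-private-key
  ssh-keygen -y -f /home/admin/rvt-simpl-private-key > /home/admin/rvt-simpl-private-key.pub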

Finally, provision this public key on all the RVT 4.0.0 VMs by running, for the management IP of every RVT VM: ssh-copy-id -i /home/admin/rvt-simpl-private-key sentinel@<management IP>, entering the current VM password when prompted. The output will look similar to the following:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'sentinel@<management IP>'"
and check to make sure that only the key(s) you wanted were added.
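
If you have many VMs, you may find it convenient to loop over their management IPs; the addresses below are placeholders, and you will still be prompted for each VM's current password:

  for ip in 172.16.0.11 172.16.0.12 172.16.0.13; do
    ssh-copy-id -i /home/admin/rvt-simpl-private-key sentinel@$ip
  done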

4.8. Make configuration changes for RVT 4.1

Some fields in the configuration files have been removed, deprecated or added. Open the following files inside the directory /home/admin/uplevel-config in vi, edit them as instructed, and then save them.

  • common-config.yaml: If present, remove the field shcm-domain.

  • mag-vmpool-config.yaml: Ensure every entry in the list xcap-domains starts with xcap. (including a period).

  • mmt-gsm-vmpool-config.yaml: If present, remove the field cluster-dns-name.

  • naf-filter-config.yaml: If present, remove the section cassandra-connectivity and the fields nonce-cassandra-keyspace, storage-mechanism, cache-capacity and intercept-tomcat-errors.

  • sentinel-ipsmgw-config.yaml: If present, remove the field notification-host.

  • smo-vmpool-config.yaml: If not using Sentinel IPSMGW (i.e. sentinel-ipsmgw-enabled is set to false),

    • Remove the field diameter-ro-origin-host from every entry in the virtual-machines list.

    • Remove the file sentinel-ipsmgw-config.yaml.

  • sentinel-volte-gsm-config.yaml or sentinel-volte-cdma-config.yaml: If present, under scc.service-continuity remove the field atu-sti, and under sis remove the field originating-address (do NOT remove it under hlr-connectivity-origin!). Under xcap-data-update, if present, remove the fields port, use-https, base-uri, auid and document.

  • shcm-vmpool-config.yaml: Underneath every vm-id, add a field

    rhino-node-id: 10x

    where the first entry gets node ID 101, the second entry node ID 102, and so on.

  • smo-vmpool-config.yaml: If present, remove the field cluster-dns-name.

  • snmp-config.yaml: Under notifications, check that rhino-notifications-enabled, system-notifications-enabled and sgc-notifications-enabled are all present. If any of them are missing, add them with a value of false.
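
After saving the files, a few quick checks can confirm the most error-prone edits took effect, for example:

  cd /home/admin/uplevel-config
  grep -n 'shcm-domain' common-config.yaml            # expect no output
  grep -n 'rhino-node-id' shcm-vmpool-config.yaml     # expect 101, 102, ... in order
  grep -nE 'rhino-notifications-enabled|system-notifications-enabled|sgc-notifications-enabled' snmp-config.yaml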

4.9. Identify if any non-RVT nodes need access to ShCM

To improve security of ShCM, from RVT 4.1 onwards only nodes on an allowlist are allowed to connect to ShCM. This allowlist automatically includes all RVT nodes. However, if for any reason a non-RVT node needs to connect to ShCM directly to integrate with the ShCM API, edit the file /home/admin/uplevel-config/shcm-service-config.yaml with vi, and add an additional-client-addresses section under deployment-config:shcm-service:

deployment-config:shcm-service:
  additional-client-addresses:
    - <IP 1>
    - <IP 2>
    - <IP 3>

(add or remove lines to match the number of IPs required)

4.10. Identify if any RVT nodes are misordered

Inside /home/admin/uplevel-config, check the files mag-vmpool-config.yaml, mmt-gsm-vmpool-config.yaml and smo-vmpool-config.yaml. Within each of these files, confirm that the first occurrence of rhino-node-id has the smallest value of all occurrences of rhino-node-id in that particular file. If not, contact your Customer Care Representative to adjust the upgrade steps in this MOP.
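
One way to see the ordering at a glance (the node IDs shown will differ per deployment):

  cd /home/admin/uplevel-config
  grep -n 'rhino-node-id' mag-vmpool-config.yaml mmt-gsm-vmpool-config.yaml smo-vmpool-config.yaml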

4.11. Backout procedure

To undo the changes in this section, remove the created configuration directories:

rm -rf /home/admin/uplevel-config
rm -rf /home/admin/rvt-rollback-sdf

5. Update the DNS entry for the vertical service codes feature

The vertical service codes (VSC) feature on the MMT nodes uses the XCAP server to assist in the handling of vertical service codes. If you do not use this feature, this step can be skipped.

Previously, the DNS generation tool generated an entry with the prefix internal-xcap. This is not a valid XCAP domain and is no longer accepted by RVT. It therefore needs to be updated to use the prefix xcap.internal.

On the SIMPL VM, open the file /home/admin/uplevel-config/sentinel-volte-gsm-config.yaml and find the value for host under xcap-data-update. Replace the prefix internal-xcap with xcap.internal.

Then, change to the home directory by running cd /home/admin, followed by

csar create-dns-entries --sdf /home/admin/uplevel-config/sdf-rvt.yaml --dns-ip <IP address of your primary DNS server> --domain <ims-domain-name>

where <ims-domain-name> can be found as the value of ims-domain-name in /home/admin/uplevel-config/sdf-rvt.yaml.

This will write a BIND file db.<ims-domain-name>. Either provision it to the customer’s DNS server, or open this file in a text editor and manually verify all DNS entries in this file are present in the customer’s DNS server. In particular, ensure the presence of the new xcap.internal domain.
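
Once the DNS server has been updated, you can spot-check the new entries; for example, the following sketch (placeholders as in the command above, and the exact record names depend on your deployment) inspects the generated BIND file and queries the DNS server:

  grep -n 'xcap.internal' db.<ims-domain-name>     # entry should be present in the BIND file
  dig @<IP address of your primary DNS server> xcap.internal.<ims-domain-name>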

6. Validate the new configuration

6.1. Validate the configuration

We now check that the uplevel configuration files are correctly formatted, contain valid values, and are self-consistent.

For each node type tsn, mag, shcm, mmt-gsm, or smo, run the command /home/admin/.local/share/csar/<node type>/<uplevel version>/resources/rvtconfig validate -t <node type> -i /home/admin/uplevel-config

For example: /home/admin/.local/share/csar/mag/4.1-2-1.0.0/resources/rvtconfig validate -t mag -i /home/admin/uplevel-config
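
If you prefer, you can run the validation for all node types in one go with a small loop, for example (assuming all node types share the same uplevel version; set the version to your actual uplevel version, and replace mmt-gsm with mmt-cdma if your deployment uses CDMA):

  UPLEVEL=4.1-7-1.0.0    # illustrative; use your actual uplevel version
  for nt in tsn mag shcm mmt-gsm smo; do
    /home/admin/.local/share/csar/$nt/$UPLEVEL/resources/rvtconfig validate -t $nt -i /home/admin/uplevel-config
  done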

A successful validation with no errors or warnings produces the following output.

Validating node type against the schema: <node type>
YAML for node type(s) ['<node type>'] validates against the schema

If the output contains validation errors, refer to the previous steps and fix the configuration in the /home/admin/uplevel-config directory.

If the output contains validation warnings, consider whether you wish to address them before performing the upgrade. The VMs will accept configuration that has validation warnings, but certain functions may not work.

6.2. Carry out a csar update dry run

The csar update dry run command carries out more extensive validation of the SDF and VM states than rvtconfig validate does.

Carrying out this step now, before the upgrades are due to take place, ensures problems with the SDF files are identified early and can be rectified beforehand.

Note

The --dry-run operation will not make any changes to your VMs. It is safe to run at any time, although we always recommend running it during a maintenance window if possible.

Please run the following command (replacing mmt-gsm with mmt-cdma if your deployment uses CDMA) to execute the dry run.

csar update --sdf /home/admin/uplevel-config/sdf-rvt.yaml --vnf smo,shcm,tsn,mmt-gsm,mag --skip force-in-series-update-with-l3-permission --dry-run --use-target-version-csar-info

Confirm the output does not flag any problems or errors. The end of the command output should look similar to this.

You are about to update VMs as follows:

- VNF smo:
    - For site grsite1:
      - update all VMs in VNFC service group smo/4.1-5-1.0.0:
        - smo-1 (index 0)
        - smo-2 (index 1)
        - smo-3 (index 2)

- VNF shcm:
    - For site grsite1:
      - update all VMs in VNFC service group shcm/4.1-5-1.0.0:
        - shcm-1 (index 0)
        - shcm-2 (index 1)

- VNF tsn:
    - For site grsite1:
      - update all VMs in VNFC service group tsn/4.1-5-1.0.0:
        - tsn-1 (index 0)
        - tsn-2 (index 1)
        - tsn-3 (index 2)

- VNF mmt-gsm:
    - For site grsite1:
      - update all VMs in VNFC service group mmt-gsm/4.1-5-1.0.0:
        - mmt-1 (index 0)
        - mmt-2 (index 1)
        - mmt-3 (index 2)

- VNF mag:
    - For site grsite1:
      - update all VMs in VNFC service group mag/4.1-5-1.0.0:
        - mag-1 (index 0)
        - mag-2 (index 1)
        - mag-3 (index 2)

Please confirm that the set of nodes you are upgrading looks correct, and that the software version shown against each service group is the version you are planning to upgrade to.

If you see any errors, please address them, then re-run the dry run command until it indicates success.
