Planning for the procedure
Background knowledge
This procedure assumes that:
- you are installing into an existing OpenStack deployment
- the OpenStack deployment is set up with support for Heat templates
- you are using an OpenStack version from Icehouse through Train inclusive
- you are thoroughly familiar with working with OpenStack machines and know how to set up tenants, users, roles, client environment scripts, and so on (for more information, refer to the OpenStack installation guide for the version that you are using)
- you have deployed a SIMPL VM, unpacked the CSAR, and prepared an SDF.
Method of procedure
Refer to the SIMPL VM Documentation for details on the commands mentioned in this procedure.
Step 1 - Check OpenStack quotas
The SIMPL VM creates one server group per VM, and one security group per interface on each VM. OpenStack sets limits on the number of server groups and security groups through quotas.
View the quotas by running openstack quota show <project ID> on the OpenStack Controller node. This shows the maximum number of each resource.
You can view the existing server groups by running openstack server group list, and the existing security groups by running openstack security group list.
If the quota is too small to accommodate the new VMs that will be deployed, increase it by running openstack quota set --<quota field to increase> <new quota value> <project ID>. For example:

openstack quota set --server-groups 100 125610b8bf424e61ad2aa5be27ad73bb
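The quota arithmetic implied above can be checked before deploying: a deployment of N VMs, each with M interfaces, needs N free server-group slots and N × M free security-group slots. A minimal sketch, where the VM and interface counts are hypothetical placeholders (substitute the values from your SDF):

```shell
# Hypothetical deployment size; take the real values from your SDF.
VM_COUNT=3          # number of MAG VMs to deploy (one server group each)
IFACES_PER_VM=4     # interfaces per VM (one security group each)

# One server group per VM, one security group per interface.
echo "server groups needed: ${VM_COUNT}"
echo "security groups needed: $((VM_COUNT * IFACES_PER_VM))"
```

Compare these figures against the headroom left under the quota shown by openstack quota show, given the existing counts from openstack server group list and openstack security group list.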
Step 2 - Validate MAG RVT configuration
Validate the configuration for the MAG nodes to ensure that each MAG node can properly self-configure.
To validate the configuration after creating the YAML files, run rvtconfig validate -t mag -i <yaml-config-file-directory> on the SIMPL VM from the resources subdirectory of the MAG CSAR.
Step 3 - Upload MAG RVT configuration
Upload the configuration for the MAG nodes to the CDS. This enables each MAG node to self-configure when it is deployed in the next step.
To upload the configuration after creating and validating the YAML files as described above, run rvtconfig upload-config -c <tsn-mgmt-addresses> -t mag -i <yaml-config-file-directory> (--vm-version-source this-rvtconfig | --vm-version <version>) on the SIMPL VM from the resources subdirectory of the MAG CSAR.
See Example configuration YAML files for example configuration files.
An in-depth description of RVT YAML configuration can be found in the Rhino VoLTE TAS Configuration and Management Guide.
Step 4 - Deploy the OVA
Run csar deploy --vnf mag --sdf <path to SDF>.

This validates the SDF and generates the Heat template. After successful validation, it uploads the image and deploys the number of MAG nodes specified in the SDF.
Only one node type should be deployed at a time; that is, when deploying these MAG nodes, do not deploy other node types in parallel.
Backout procedure
To delete the deployed VMs, run csar delete --vnf mag --sdf <path to SDF>.
You must also delete the MDM state for each VM. To do this, first SSH into one of the MDM VMs and get the instance IDs by running mdmhelper --deployment-id <deployment ID> instance list.
Then for each MAG VM, run the following command:
curl -X DELETE -k \
--cert /etc/certs-agent/upload/mdm-cert.crt \
--cacert /etc/certs-agent/upload/mdm-cas.crt \
--key /etc/certs-agent/upload/mdm-key.key \
https://127.0.0.1:4000/api/v1/deployments/<deployment ID>/instances/<instance ID>
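With several MAG VMs to clean up, the per-instance request can be scripted. A hedged sketch, assuming placeholder deployment and instance IDs (take the real ones from the mdmhelper output); it prints each request target as a dry run, so replace the echo with the curl invocation above to perform the actual deletions:

```shell
# Placeholder IDs; take the real ones from `mdmhelper ... instance list`.
DEPLOYMENT_ID="my-deployment"
INSTANCE_IDS="mag-1 mag-2 mag-3"

for INSTANCE_ID in ${INSTANCE_IDS}; do
  # Dry run: print the URL that the DELETE request would target.
  echo "DELETE https://127.0.0.1:4000/api/v1/deployments/${DEPLOYMENT_ID}/instances/${INSTANCE_ID}"
done
```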
Verify that the deletion worked by running mdmhelper --deployment-id <deployment ID> instance list again.
You may now log out of the MDM VM.
You must also delete state for this node type and version from the CDS prior to re-deploying the VMs.
To delete the state, run rvtconfig delete-node-type --cassandra-contact-point <any TSN IP> --deployment-id <deployment ID> --site-id <site ID> --node-type mag (--vm-version-source [this-vm | this-rvtconfig] | --vm-version <vm_version>).
Next Step
If you are upgrading a full set of VMs, go to Deploy MMT CDMA nodes on OpenStack. Otherwise, follow the verification instructions in Verify the state of the nodes and processes.