This document explains how to install Sh Cache Microservice (ShCM) nodes as virtual machines on VMware or OpenStack.
Notices
Copyright © 2014-2019 Metaswitch Networks. All rights reserved
This manual is issued on a controlled basis to a specific person on the understanding that no part of the Metaswitch Networks product code or documentation (including this manual) will be copied or distributed without prior agreement in writing from Metaswitch Networks.
Metaswitch Networks reserves the right to, without notice, modify or revise all or part of this document and/or change product features or specifications and shall not be responsible for any loss, cost, or damage, including consequential damage, caused by reliance on these materials.
Metaswitch and the Metaswitch logo are trademarks of Metaswitch Networks. Other brands and products referenced herein are the trademarks or registered trademarks of their respective holders.
Introduction
Sh Cache Microservice is a service that provides HTTP access to the HSS via Diameter Sh, as well as caching some of that data to reduce round trips to the HSS.
This manual provides instructions for configuring and deploying ShCM nodes on VMware or OpenStack virtual machine hosts.
It assumes you are familiar with Sh Cache Microservice and have planned your network as described in the Top Level Architecture.
For configuration of your deployment following the initial installation and day-to-day management of the Sh Cache Microservice, refer to the Sh Cache Microservice Configuration Guide.
The acronym ShCM is pronounced "shook-um", and thus we refer to "a ShCM", not "an ShCM".
Glossary
The following acronyms and abbreviations are used throughout the Sh Cache Microservice documentation.
Term | Definition |
---|---|
CDS | Configuration Data Store. Database used to store configuration data for the VMs. |
CSAR | Cloud Service ARchive. File type used by the Commissioning VM. |
Deployment ID | Uniquely identifies a deployment, which can consist of many sites, each with many groups of VMs. |
HSS | Home Subscriber Server |
HTTP | Hypertext Transfer Protocol |
OVA | Open Virtual Appliance. File type used by VMware. |
OVF | Open Virtualization Format. File type used by VMware. |
QCOW2 | QEMU Copy on Write 2. File type used by OpenStack. |
RVT | Rhino VoLTE TAS |
SAS | Service Assurance Server |
SDF | Solution Definition File. Describes the deployment, for consumption by the Commissioning VM. |
Sh | Diameter Sh protocol |
ShCM | Sh Cache Microservice. The abbreviated form ShCM is pronounced "shook-um". |
Site ID | Uniquely identifies one site within the deployment, normally a geographic site (e.g. one data center). |
TAS | Telecom Application Server |
TSN | TAS Storage Node. TSNs provide Cassandra databases and CDS services to the other VMs. |
VM | Virtual Machine |
YANG | Yet Another Next Generation. Schemas used for verifying YAML files. |
Initial Setup
This section describes how to install the nodes that make up your initial Sh Cache Microservice deployment on VMware or OpenStack virtual machines.
In a production system, you will need at least two Sh Cache Microservice nodes per site.
In a lab trial deployment, you can have just one ShCM.
For information on architecture and deployment options please refer to the Architecture guide.
Live migration of an Sh Cache Microservice node to a new VMware host or a new OpenStack compute node is not supported. To move such a node to a new host, remove it from the old host and re-add it to the new host.
Preparation
Task | More information |
---|---|
Set up your OpenStack or VMware deployment. | The installation procedures assume that you are installing ShCM into an existing OpenStack or VMware deployment. |
Obtain a valid Sh Cache Microservice license. | ShCM requires a valid license for Rhino, HTTP and Diameter Resource Adaptors. This should be obtained from a Metaswitch representative. |
Install the TSN nodes. | The TSN node cluster provides the Cassandra database used by the caching function of ShCM, and the CDS used to configure ShCM. |
Installing your ShCM deployment
The following table sets out the steps you need to take to install and commission your ShCM deployment.
There are two ways to deploy the ShCM nodes: in an automated way using the Commissioning VM, or manually. It is recommended to perform an automated installation using the Commissioning VM.
Step | Task |
---|---|
Automated installation (on VMware) | Prepare the SDF for an automated ShCM deployment |
 | Deploy the Commissioning VM into VMware |
 | Deploy ShCM nodes from a VMware CSAR |
Automated installation (on OpenStack) | Prepare the SDF for an automated ShCM deployment |
 | Deploy the Commissioning VM into OpenStack |
 | Create the OpenStack flavor |
 | Deploy ShCM nodes from an OpenStack CSAR |
Manual installation (on VMware) | Deploy ShCM nodes from a VMware OVA |
Manual installation (on OpenStack) | Create flavor |
 | Install image |
 | Determine networks and security group |
 | Deploy ShCM nodes |
Service configuration | Ensure bootstrap completes successfully, then upload configuration files to CDS. |
Verification | Run some simple tests to verify that your ShCM nodes are working as expected. |
Set up the rest of your deployment | Refer to the Sh Cache Microservice Configuration Guide for information and procedures to set up other components and features in your ShCM deployment. |
Installing ShCM on OpenStack
It is recommended to use the Commissioning VM to deploy the ShCM nodes automatically. However, it is also possible to deploy the ShCM nodes manually.
Automated ShCM installation on OpenStack
It is possible to use the Commissioning VM to automate the deployment of ShCM nodes. The steps for this are as follows.
Prepare the SDF for an automated ShCM deployment
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing OpenStack deployment
-
you are using an OpenStack version from Icehouse through to Train inclusive
-
you have some knowledge of VMs and familiarity with OpenStack’s host software
-
if you require information on deploying on OpenStack, see Manual ShCM installation on OpenStack for links to OpenStack documentation
-
you have read the installation guidelines at Initial Setup and have everything you need to carry out the installation.
Reserve maintenance period
This procedure does not require a maintenance period. However, if you are integrating into a live network, it is recommended to implement measures to mitigate any unforeseen events.
Tools and access
This document references an external document: the Commissioning VM Documentation. Ensure you have a copy available before proceeding.
Installation Questions
Question | More information |
---|---|
Do you have a list of the IP addresses that you intend to give to each node? | Each node requires an IP address for each interface. |
Do you have DNS and NTP Server information? | It is expected that the deployed nodes will integrate with the IMS Core NTP and DNS servers. |
Method of procedure
Write the SDF
The Solution Definition File (SDF) contains all the information required to set up your ShCM cluster. It is therefore crucial to ensure all information in the SDF is correct before beginning the deployment.
Documentation on how to write an SDF is available in the 'Write an SDF' section of the Commissioning VM Documentation.
The format of the SDF is common to all Metaswitch products, so information can be reused as necessary. However, one element only applies to Rhino VoLTE TAS node types: cds-addresses
. This element lists all the Config Data Store addresses. In almost all circumstances, this should be set to all the signaling IPs of the TSN nodes in the deployment.
It is recommended to start from a template SDF and edit it as desired instead of writing an SDF from scratch. An example SDF is included in the CSAR, or can be found here.
Deploy the Commissioning VM into OpenStack
Note that one Commissioning VM can be used to deploy multiple node types. Thus, this step only needs to be performed once for all node types. |
The supported version of the Commissioning VM is version 1.16.3. Prior versions cannot be used. |
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing OpenStack deployment
-
you are using a supported OpenStack version, as described in the 'Hardware Requirements' section of the Commissioning VM Documentation
-
you have some knowledge of VMs and familiarity with OpenStack’s host software
-
if you require information on deploying on OpenStack, see Manual ShCM installation on OpenStack for links to OpenStack documentation
-
you know the IP networking information (IP address, subnet mask in CIDR notation, and default gateway) for the Commissioning VM.
Reserve maintenance period
This procedure does not require a maintenance period. However, if you are integrating into a live network, it is recommended to implement measures to mitigate any unforeseen events.
Tools and access
You must have:
-
access to a local computer with a network connection and browser access to the OpenStack Dashboard
-
administrative access to the OpenStack host machine
-
the OpenStack privileges required to deploy VMs from an image (see OpenStack documentation for specific details).
This document references an external document: the Commissioning VM Documentation. Ensure you have a copy available before proceeding.
Installation Questions
Question | More information |
---|---|
Do you have the correct Commissioning VM QCOW2? | All Commissioning VM virtual appliances use the naming convention - |
Do you know the IP address that you intend to give to the Commissioning VM? | The Commissioning VM requires one IP address, for management traffic. |
Have you created and do you know the names of the networks and security group for ShCM? | Use the OpenStack documentation to create the management and signaling networks and a subnet for each. Follow Determine Network Names and Security Group to get the names of the networks and check the security group is created. |
Method of procedure
Deploy and configure the Commissioning VM
Follow the Commissioning VM Documentation on how to deploy the Commissioning VM and set up the initial configuration.
Create a ShCM OpenStack flavor
About this task
This task creates the ShCM flavor that you will need when installing your deployment on OpenStack virtual machines.
You must complete this procedure before you begin the installation of the first ShCM node on OpenStack, but will not need to carry it out again for subsequent node installations. |
Create your ShCM flavor
Detailed procedure
-
Run the following command to create the ShCM OpenStack flavor, replacing <flavor name> with a name that will help you identify the flavor in future, for example ShCM.
nova flavor-create <flavor name> auto <ram_mb> <disk_gb> <vcpu_count>
where:
- <ram_mb> is the amount of RAM, in megabytes
- <disk_gb> is the amount of hard disk space, in gigabytes
- <vcpu_count> is the number of virtual CPUs.
Specify the parameters as pure numbers, without units. The values for RAM, hard disk space, and virtual CPU count can be determined from the following table:
-
Spec | Use case | Resources |
---|---|---|
ShCM | All deployments - this is the only supported deployment size | |
-
Make note of the flavor ID value provided in the command output because you will need it when installing your OpenStack deployment.
-
Run the following command to check that the flavor you have just created has the correct values.
nova flavor-list
-
Run the following command if you need to remove an incorrectly-configured flavor, replacing <flavor name> with the name of the flavor.
nova flavor-delete <flavor name>
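As a worked illustration, the complete flavor workflow might look like the following. This is a sketch only: ShCM is just the example flavor name suggested above, and the sizing placeholders must be replaced with the values from the resources table.
nova flavor-create ShCM auto <ram_mb> <disk_gb> <vcpu_count>   # create the flavor and note the ID in the output
nova flavor-list                                               # confirm the flavor has the correct values
nova flavor-delete ShCM                                        # only if the flavor was created with incorrect values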
Deploy ShCM nodes from an OpenStack CSAR
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing OpenStack deployment
-
the OpenStack deployment must be set up with support for Heat templates
-
you are using an OpenStack version from Icehouse through to Train inclusive
-
you have some knowledge of VMs and familiarity with OpenStack’s host software
-
if you require information on deploying on OpenStack, see Manual ShCM installation on OpenStack for links to OpenStack documentation
-
you have prepared an SDF and deployed a Commissioning VM.
Reserve maintenance period
This procedure does not require a maintenance period. However, if you are integrating into a live network, it is recommended to implement measures to mitigate any unforeseen events.
Tools and access
You must have access to the Commissioning VM, and the Commissioning VM must have the right permissions on the OpenStack deployment.
Installation Questions
Question | More information |
---|---|
Do you have the correct ShCM CSAR? | All ShCM virtual appliances use the naming convention - |
Method of procedure
Step 1 - Check OpenStack quotas
The Commissioning VM creates one server group per VM, and one security group per interface on each VM. OpenStack sets limits on the number of server groups and security groups through quotas.
View the quota by running openstack quota show <project id>
. This shows the maximum number of various resources.
The existing server groups can be seen by running openstack server group list
Similarly, the security groups can be found by running openstack security group list
If the quotas are too small to accommodate the new VMs that will be deployed, increase them by running openstack quota set --<quota field to increase> <new quota value> <project ID>
. For example: openstack quota set --server-groups 100 125610b8bf424e61ad2aa5be27ad73bb
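For example, the following commands (a sketch that reuses the example project ID from above) give a quick view of current usage against the quotas; the exact field names in the quota output vary between OpenStack releases:
openstack quota show 125610b8bf424e61ad2aa5be27ad73bb | grep -i group   # quota limits relating to server groups and security groups
openstack server group list -f value | wc -l                            # current number of server groups
openstack security group list -f value | wc -l                          # current number of security groups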
Step 2 - Upload your CSAR to the Commissioning VM
Transfer the CSAR onto the Commissioning VM and run csar unpack <path to CSAR>
, where <path to CSAR>
is the full path to the transferred CSAR.
This will unpack the CSAR to ~/.local/share/csar/
.
Step 3 - Upload your SDF to the Commissioning VM
Transfer the previously written SDF onto the Commissioning VM.
Step 4 - Generate the Heat template
Run csar generate --vnf shcm --sdf <path to SDF> --orchestrator heat
. This will validate the SDF, and generate the Heat template. If any errors occur, check the documentation on preparing the SDF and fix the SDF.
Next steps
Once all ShCM VMs have been created using the procedure above, configure them by uploading configuration to CDS. How to do this can be found on the Initial configuration page.
Example SDF for ShCM
Example SDF
---
site-params: &site-params
timezone: Europe/London
deployment-id: sample
fixed-ips: true
networking:
subnets:
- cidr: 172.16.0.0/24
name: mgmt-subnet
network-name: mgmt-network
default-gateway: 172.16.0.1
dns-servers:
- 2.3.4.5
- 3.4.5.6
- cidr: 173.16.0.0/24
name: signaling-subnet
network-name: signaling-network
default-gateway: 173.16.0.1
- cidr: 174.16.0.0/24
name: access-subnet
network-name: access-network
default-gateway: 174.16.0.1
- cidr: 175.16.0.0/24
name: cluster-subnet
network-name: cluster-network
default-gateway: 175.16.0.1
services:
ntp-servers:
- 1.2.3.4
- 1.2.3.5
msw-deployment:deployment:
sites:
- name: DC1
site-parameters:
<<: *site-params
vim-configuration:
type: openstack
openstack:
connection:
username: asd
password: asd
auth-url: ads
keystone-v2:
tenant-name: asd
availability-zone: nonperf
vnfcs:
- name: mytestshcm
type: msw-vnf-rvt:shcm
image: SHCM-template
cluster-configuration:
count: 2
instances:
- name: shcm1
- name: shcm2
vnfc-openstack-options:
flavor: lab
networks:
- name: management
subnet: mgmt-subnet
ip-addresses:
- 172.16.0.8
- 172.16.0.9
- name: signaling
subnet: signaling-subnet
ip-addresses:
- 173.16.0.8
- 173.16.0.9
product-options:
rvt:
cds-addresses:
- 1.2.3.4
ssh-authorized-keys:
- ssh-rsa XXXXXXXXXXXXXXXXXXXX
Manual ShCM installation on OpenStack
Before you begin
-
You must be familiar with working with OpenStack machines and know how to set up tenants, users, roles, client environment scripts, and so on. For more information, refer to the appropriate OpenStack installation guide for the version that you are using here.
-
You must have set up your OpenStack server and environment permissions.
Preparation for running OpenStack commands
All OpenStack install instructions use the OpenStack CLI. Start a shell on the OpenStack host and source the client environment script for the tenant that will contain the Rhino VoLTE TAS VMs. Run all commands given in the subsequent pages linked below in this shell.
Installation procedure
The installation procedure for OpenStack consists of up to four steps.
-
Create a ShCM OpenStack flavor (first time installation only)
-
Install ShCM Virtual Appliance as an OpenStack image (first time installation only)
-
Determine Network Names and Security Group
-
Deploy ShCM nodes from an OpenStack image
Create a ShCM OpenStack flavor
About this task
This task creates the ShCM flavor that you will need when installing your deployment on OpenStack virtual machines.
You must complete this procedure before you begin the installation of the first ShCM node on OpenStack, but will not need to carry it out again for subsequent node installations. |
Create your ShCM flavor
Detailed procedure
-
Run the following command to create the ShCM OpenStack flavor, replacing <flavor name> with a name that will help you identify the flavor in future, for example ShCM.
nova flavor-create <flavor name> auto <ram_mb> <disk_gb> <vcpu_count>
where:
- <ram_mb> is the amount of RAM, in megabytes
- <disk_gb> is the amount of hard disk space, in gigabytes
- <vcpu_count> is the number of virtual CPUs.
Specify the parameters as pure numbers, without units. The values for RAM, hard disk space, and virtual CPU count can be determined from the following table:
-
Spec | Use case | Resources |
---|---|---|
ShCM | All deployments - this is the only supported deployment size | |
-
Make note of the flavor ID value provided in the command output because you will need it when installing your OpenStack deployment.
-
Run the following command to check that the flavor you have just created has the correct values.
nova flavor-list
-
Run the following command if you need to remove an incorrectly-configured flavor, replacing <flavor name> with the name of the flavor.
nova flavor-delete <flavor name>
Install ShCM Virtual Appliance as an OpenStack image
Planning for the procedure
Background knowledge
You must create the OpenStack image for each node type using this procedure because OpenStack does not provide a mechanism to deploy instances directly from a QCOW2 virtual appliance.
You should have some knowledge of VMs and familiarity with OpenStack’s host software. You must also have set up your OpenStack environment permissions before you start this procedure. See Manual ShCM installation on OpenStack if you need more information on configuring your OpenStack deployment.
Method of procedure
This procedure requires you to enter complex and detailed OpenStack commands which also require the input of lengthy parameters. We strongly recommend that you create these commands in a text editor and then copy and paste them into the SSH session for execution. |
Create the OpenStack Image
Download the QCOW2 image to the OpenStack host
Transfer the QCOW2 file to the host platform’s /tmp
directory.
Create the OpenStack image
Carry out the following steps from a command line interface on the OpenStack host platform.
Detailed procedure
-
Create an OpenStack image from the QCOW2 virtual appliance using the following command, replacing
<full-version>
with the correct version number for the node you are creating.
glance image-create \
  --name shcm-<full-version> \
  --disk-format qcow2 \
  --visibility public \
  --container-format bare \
  --file /tmp/shcm-<full-version>.qcow2
-
List the image for verification purposes:
openstack image show shcm-<full-version>
The image
status
should be active. -
Clean up disk space by removing the QCOW2 virtual appliance:
sudo rm -f /tmp/shcm-<full-version>.qcow2
Backout procedure
If necessary, you can delete an OpenStack image.
Delete images as required
Delete the image
-
Run the following command to display a list of image identifiers to ensure that you have the correct image id for the image you want to delete.
openstack image list
-
Run the following command to delete an OpenStack image, replacing <image id> with the identifier of the image you want to delete.
glance image-delete <image id>
Determine Network Names and Security Group
In this step you will determine the names of networks and the unrestricted security group that will be associated with the created VMs. These names will be required during the VM creation step.
Determine network names
It is assumed you have created the required networks for the Rhino VoLTE TAS VMs. If not, follow the OpenStack documentation to create them.
Method of procedure
-
Log into the OpenStack server.
-
Source the client environment script for the tenant that will contain the Rhino VoLTE TAS VMs.
-
List the available networks using
openstack network list
. -
Find the networks that will be used for the VMs and note their names in the second column of the output. You can find a list of required networks for each VM here.
-
Record these names for later use.
Ensure there is an unrestricted security group
If you have previously created an unrestricted security group for another VM type, and you know the name under which it was created, this section can be skipped. You can check using the command openstack security group list
on OpenStack Pike or later, and neutron security-group-list
on OpenStack Ocata and earlier.
Rhino VoLTE TAS VMs require an unrestricted security group. If you have not created one yet, then you can create one using the following commands:
neutron security-group-create open
neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol icmp --remote-ip-prefix 0.0.0.0/0 open
neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --remote-ip-prefix 0.0.0.0/0 open
neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol udp --remote-ip-prefix 0.0.0.0/0 open
This creates a security group named open
.
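On OpenStack Pike or later, where the unified openstack client replaces the neutron client, an equivalent unrestricted group can be created as follows (a sketch, assuming the same group name open):
openstack security group create open
for proto in icmp tcp udp; do
  openstack security group rule create --ingress --ethertype IPv4 \
    --protocol "$proto" --remote-ip 0.0.0.0/0 open
done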
Backout procedure
If you wish to delete the security group, use the command neutron security-group-delete open
(replace open
with the name of your security group, if different).
Next step
You can now create ports and deploy VMs.
Deploy ShCM nodes from an OpenStack image
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing OpenStack deployment
-
you are using an OpenStack version from Icehouse through to Train inclusive
-
you have some knowledge of VMs and familiarity with OpenStack’s host software
-
if you require information on deploying on OpenStack, see Installing ShCM on OpenStack for links to OpenStack documentation
-
you have read the installation guidelines at Initial Setup and have everything you need to carry out the installation.
Reserve maintenance period
This procedure does not require a maintenance period. However, if you are integrating into a live network, it’s recommended to implement measures to mitigate any unforeseen events.
Tools and access
You must have:
-
access to a local computer with a network connection and browser access to the OpenStack Dashboard
-
administrative access to the OpenStack host machine
-
the OpenStack privileges required to deploy VMs from an image (see OpenStack documentation for specific details).
Installation Questions
Question | More information |
---|---|
Do you have the OpenStack flavor for ShCM? | You should have created this by following the instructions in Create a ShCM OpenStack flavor. |
Have you installed the OpenStack image for ShCM? | You should have installed the image by following the instructions in Install ShCM Virtual Appliance as an OpenStack image. |
Have you created and do you know the names of the networks and security group for ShCM? | Use the OpenStack documentation to create the management and signaling networks and a subnet for each. Follow Determine Network Names and Security Group to get the names of the networks and check the security group is created. |
Do you have network information for HSS endpoints and HTTP Resource Adaptor? | Each node connects to HSS/SLF via Diameter Sh and the Cassandra cluster. It also exposes an HTTP address for API connectivity. |
Do you have a list of the IP addresses that you intend to give to each node? | Each node requires two IP addresses - one for signaling, which will support Diameter, HTTP and CQL traffic, and one for management traffic. |
Do you have DNS and NTP Server information? | It’s expected that the system will integrate with the IMS Core NTP and DNS servers. |
Installation Steps
Repeat the following steps for each ShCM VM that you are creating.
Step | Information |
---|---|
1 - Create ports | Create a port for each network interface on each VM, to determine the MAC addresses for that VM. |
2 - Create userdata YAML document | The userdata provides the VM with some basic configuration, such as IP address, DNS servers, and so on. |
3 - Deploy VM | Use |
This procedure requires you to enter complex and detailed OpenStack commands which also require the input of lengthy parameters. We strongly recommend that you create these commands in a text editor and then copy and paste them into the SSH session for execution. If you have a large number of VMs, you may want to write a script to automate the commands. |
Method of procedure
Step 1 - Create ports
This procedure describes how to create ports for the VM to use. A port associates a VM’s network interface (NIC) with one of the OpenStack host’s networks and subnets, and assigns the NIC a MAC address and IP address.
Prerequisites
-
You must have created the networks and security group used by the VMs, and know their names. See Determine Network Names and Security Group.
-
You must know the IP addresses to be associated with this VM.
Detailed procedure
-
For each network required by this VM, create a port using the following command:
For OpenStack Pike and later
openstack port create \
  --network <network-name> \
  --description <description> \
  --fixed-ip ip-address=<IPv4 address> \
  --security-group <security-group-name> \
  <name>
For OpenStack Ocata and earlier
neutron port-create \
  <network-name> \
  --fixed-ip ip_address=<IPv4 address> \
  --security-group <security-group-name> \
  --name <name>
where:
- <network-name> is the name of the network for this port
- <description> is a short description of the port - a suggested format is <hostname> <network name>, e.g. shcm-1 management
- <IPv4 address> is the IP address of this VM for this network
- <security-group-name> is the name of the security group, normally open
- <name> is a name for the port - a suggested format is <hostname>-<network name>, e.g. shcm-1-management.
-
In the output, note the ID and MAC address that the OpenStack host has assigned to the port. The port ID will be required when using
nova boot
to deploy the VM instance. The MAC address will be required in the YAML userdata for the VM.
Ensure you repeat steps 2 and 3 for each network required by the VM (between two and four times in total).
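For illustration, creating the management port for a node named shcm-1 on OpenStack Pike or later might look like this. The network name, IP address and security group name below are hypothetical examples (reusing values that appear elsewhere in this guide) and must be replaced with your own:
openstack port create \
  --network mgmt-network \
  --description "shcm-1 management" \
  --fixed-ip ip-address=192.168.10.10 \
  --security-group open \
  shcm-1-management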
Step 2 - Create userdata YAML document
Prerequisites
-
You must have created the ports for this VM and noted their MAC addresses.
-
You must know the values for all the bootstrap parameters.
-
You must know the signaling addresses of all the TSN nodes in your deployment.
Detailed procedure
Follow the instructions at Writing the YAML userdata document for OpenStack to create a file containing the userdata for this VM on the OpenStack server.
Set the CDS addresses parameter to a list of the signaling addresses of all the TSN nodes in your deployment.
Step 3 - Deploy VM
This procedure describes how to deploy a ShCM VM from the OpenStack image.
Prerequisites
-
You must have created the ports that the VM(s) will use.
-
You must have created the YAML userdata file which specifies the bootstrap parameters, including the MAC addresses of the ports you created for this VM.
Detailed procedure
-
If you haven’t done so already, log into the OpenStack host and source the client environment script for the tenant that will contain the Rhino VoLTE TAS VMs.
-
Determine the image ID of the node type being deployed.
-
You created this when following the procedures in Install ShCM Virtual Appliance as an OpenStack image.
-
You can use openstack image list to display the list of images and locate the appropriate image id.
-
Determine the flavor name for the node type. You created this in Create a ShCM OpenStack flavor.
-
Create the virtual machine by running nova boot:
nova boot \
  --image <image id> \
  --flavor <flavor name> \
  --security-groups <security-group-name> \
  --availability-zone nova \
  --nic port-id=<mgmt-port-id> \
  --nic port-id=<sig-port-id> \
  --config-drive true \
  --user-data <path-to-userdata-file> \
  <hostname>
where:
- <image id> is the ID of the image for this node type
- <flavor name> is the name of the flavor for this node type
- <security-group-name> is the name of the unrestricted security group, normally open
- <mgmt-port-id> is the ID of the port for this VM’s management interface
- <sig-port-id> is the ID of the port for this VM’s signaling interface
- <path-to-userdata-file> is the relative or absolute path to the YAML userdata file
- <hostname> is the VM’s hostname, which must match that specified in the YAML userdata file.
Make a note of the ID of the newly created virtual machine because you will need to use this (as <VM id>) in subsequent commands.
Previous versions of the ShCM VM, and the 3.1 series of the MMT, MAG and SMO VMs, use the --meta option to nova boot to provide the properties. With userdata, --meta is no longer used.
-
Run the command
nova show <VM id>
. View the status.
-
If you see ACTIVE, the VM has been created successfully and is powered on.
-
If you see BUILD, the VM is still being created and you should wait a few moments before running the command again.
-
If you see ERROR, you should terminate the VM, using the procedure described in the Backout Procedure.
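If you prefer not to re-run the command by hand, a simple polling loop such as the following can be used (a sketch using the unified openstack client, which reports the same status field as nova show):
# Poll every 10 seconds until the VM leaves the BUILD state
while [ "$(openstack server show <VM id> -f value -c status)" = "BUILD" ]; do
  sleep 10
done
openstack server show <VM id> -f value -c status   # expect ACTIVE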
Results
A ShCM VM has been created and powered on. The bootstrap script will run automatically once the VM has booted.
Repeat the above steps for each ShCM VM you are creating.
Next step
Once all ShCM VMs have been created using the procedure above, configure them by uploading configuration to CDS. Follow the links from this page.
Backout procedure
If you need to delete a VM which has failed to create, bootstrap or configure properly, use the following steps.
-
Run
nova delete <VM id>
, where VM id is the ID of the VM that was created above. If you don’t know the VM id, you can determine it using nova list.
-
Run nova list to verify that the server has been deleted.
-
Delete the ports you created for this VM.
For OpenStack Pike and later: For each port you created for this VM, run openstack port delete <port ID>, where <port ID> is the ID of the port. Then verify using openstack port list that there are no more ports remaining.
For OpenStack Ocata and earlier: For each port you created for this VM, run neutron port-delete <port ID>, where <port ID> is the ID of the port. Then verify using neutron port-list that there are no more ports remaining.
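As an illustration, cleaning up a failed VM named shcm-1 on OpenStack Pike or later might look like this (a hypothetical example that assumes the port naming format suggested earlier):
nova delete <VM id>
nova list | grep shcm-1                                    # should eventually return nothing
openstack port delete shcm-1-management shcm-1-signaling
openstack port list | grep shcm-1                          # should also return nothing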
Installing ShCM on VMware
It is recommended to use the Commissioning VM to deploy the ShCM nodes automatically. However, it is also possible to deploy the ShCM nodes manually.
Automated ShCM installation on VMware
It is possible to use the Commissioning VM to automate the deployment of ShCM nodes. The steps for this are as follows.
Prepare the SDF for an automated ShCM deployment
Planning for the procedure
Background knowledge
This procedure assumes that:
-
You are installing into an existing VMware deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware deployment from scratch.
-
You know the IP networking information (IP address, subnet mask in CIDR notation, and default gateway) for the ShCM nodes.
Reserve maintenance period
This procedure does not require a maintenance period. However, if you are integrating into a live network, it is recommended to implement measures to mitigate any unforeseen events.
Tools and access
This document references an external document: the Commissioning VM Documentation. Ensure you have a copy available before proceeding.
Installation Questions
Question | More information |
---|---|
Do you have a list of the IP addresses that you intend to give to each node? | Each node requires an IP address for each interface. |
Do you have DNS and NTP Server information? | It is expected that the deployed nodes will integrate with the IMS Core NTP and DNS servers. |
Method of procedure
Write the SDF
The Solution Definition File (SDF) contains all the information required to set up your ShCM cluster. It is therefore crucial to ensure all information in the SDF is correct before beginning the deployment.
Documentation on how to write an SDF is available in the 'Write an SDF' section of the Commissioning VM Documentation.
The format of the SDF is common to all Metaswitch products, so information can be reused as necessary. However, one element only applies to Rhino VoLTE TAS node types: cds-addresses
. This element lists all the Config Data Store addresses. In almost all circumstances, this should be set to all the signaling IPs of the TSN nodes in the deployment.
It is recommended to start from a template SDF and edit it as desired instead of writing an SDF from scratch. An example SDF is included in the CSAR, or can be found here.
Deploy the Commissioning VM into VMware
Note that one Commissioning VM can be used to deploy multiple node types. Thus, this step only needs to be performed once for all node types. |
The supported version of the Commissioning VM is version 1.16.3. Prior versions cannot be used. |
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are using a supported VMware version, as described in the 'Hardware Requirements' section of the Commissioning VM Documentation
-
you are installing into an existing VMware deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware deployment from scratch.
-
you know the IP networking information (IP address, subnet mask in CIDR notation, and default gateway) for the Commissioning VM.
Reserve maintenance period
This procedure does not require a maintenance period. However, if you are integrating into a live network, it is recommended to implement measures to mitigate any unforeseen events.
Tools and access
You must have access to a local computer (referred to in this procedure as the local computer) with a network connection and the VMware vSphere Client Integration Plug-in installed. You can use various versions of VMware vSphere, including V5.5 through to V6.7.
For more information on installing the VMware vSphere Client Integration Plug-in, see the VMware documentation.
This document references an external document: the Commissioning VM Documentation. Ensure you have a copy available before proceeding.
Installation Questions
Question | More information |
---|---|
Do you have the correct Commissioning VM OVA? | All Commissioning VM virtual appliances use the naming convention - |
Do you know the IP address that you intend to give to the Commissioning VM? | The Commissioning VM requires one IP address, for management traffic. |
Method of procedure
Deploy and configure the Commissioning VM
Follow the Commissioning VM Documentation on how to deploy the Commissioning VM and set up the initial configuration.
Deploy ShCM nodes from a VMware CSAR
Planning for the procedure
Background knowledge
This procedure assumes that:
-
You are installing into an existing VMware deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware deployment from scratch.
-
You have prepared an SDF and deployed a Commissioning VM.
Reserve maintenance period
This procedure does not require a maintenance period. However, if you are integrating into a live network, it is recommended to implement measures to mitigate any unforeseen events.
Tools and access
You must have access to the Commissioning VM, and the Commissioning VM must have the right permissions on the VMware deployment.
Installation Questions
Question | More information |
---|---|
Do you have the correct ShCM CSAR? | All ShCM virtual appliances use the naming convention - |
Method of procedure
Step 1 - Upload your CSAR to the Commissioning VM
Transfer the CSAR onto the Commissioning VM and run csar unpack <path to CSAR>
, where <path to CSAR>
is the full path to the transferred CSAR.
This will unpack the CSAR to ~/.local/share/csar/
.
Step 2 - Upload your SDF to the Commissioning VM
Transfer the previously written SDF onto the Commissioning VM.
Step 3 - Generate the Terraform template
Run csar generate --vnf shcm --sdf <path to SDF> --orchestrator terraform
. This will validate the SDF, and generate the Terraform template. If any errors occur, check the documentation on preparing the SDF and fix the SDF.
Next steps
Once all ShCM VMs have been created using the procedure above, configure them by uploading configuration to CDS. How to do this can be found on the Initial configuration page.
Example SDF for ShCM
Example SDF
---
site-params: &site-params
timezone: Europe/London
deployment-id: sample
fixed-ips: true
networking:
subnets:
- cidr: 172.16.0.0/24
name: mgmt-subnet
network-name: mgmt-network
default-gateway: 172.16.0.1
dns-servers:
- 2.3.4.5
- 3.4.5.6
- cidr: 173.16.0.0/24
name: signaling-subnet
network-name: signaling-network
default-gateway: 173.16.0.1
- cidr: 174.16.0.0/24
name: access-subnet
network-name: access-network
default-gateway: 174.16.0.1
- cidr: 175.16.0.0/24
name: cluster-subnet
network-name: cluster-network
default-gateway: 175.16.0.1
services:
ntp-servers:
- 1.2.3.4
- 1.2.3.5
msw-deployment:deployment:
sites:
- name: DC1
site-parameters:
<<: *site-params
vim-configuration:
type: vmware
vmware:
connection:
server: 172.18.12.228
username: VSPHERE.LOCAL\cloudify
password: cloudify
allow-insecure: true
datacenter: Automation
vnfcs:
- name: mytestshcm
type: msw-vnf-rvt:shcm
image: SHCM-template
cluster-configuration:
count: 2
instances:
- name: shcm-1
node-vim-options:
resource-pool-name: Resources
datastore: "data:storage1"
host: "esxi.hostname"
- name: shcm-2
node-vim-options:
resource-pool-name: Resources
datastore: "data:storage1"
host: "esxi.hostname"
vnfc-vmware-options:
deployment-size: lab
networks:
- name: management
subnet: mgmt-subnet
ip-addresses:
- 172.16.0.8
- 172.16.0.9
- name: signaling
subnet: signaling-subnet
ip-addresses:
- 173.16.0.8
- 173.16.0.9
product-options:
rvt:
cds-addresses:
- 1.2.3.4
ssh-authorized-keys:
- ssh-rsa XXXXXXXXXXXXXXXXXXXX
Manual ShCM installation on VMware
The VMware installation procedures assume that you are familiar with the VMware vSphere Web Client. You can use various versions of VMware vSphere, including V5.5 through to V6.7.
The procedures for installing ShCM nodes on VMware are found below.
Deploy ShCM nodes from a VMware OVA
Planning for the procedure
Background knowledge
This procedure assumes that:
-
You are installing into an existing VMware deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware deployment from scratch.
-
You have knowledge of VMs and familiarity with the VMware vSphere Web Client.
-
You know the IP networking information (IP address, subnet mask in CIDR notation, and default gateway) for the ShCM nodes.
Reserve maintenance period
This procedure does not require a maintenance period. However, if you are integrating into a live network, it’s recommended to implement measures to mitigate any unforeseen events.
Tools and access
You must have access to a local computer (referred to in this procedure as the local computer) with a network connection and the VMware vSphere Client Integration Plug-in installed. You can use various versions of VMware vSphere, including V5.5 through to V6.7.
For more information on installing the VMware vSphere Client Integration Plug-in, see the VMware documentation.
Installation Questions
Question | More information |
---|---|
Do you have the correct ShCM OVA? | All ShCM virtual appliances use the naming convention |
Do you have a list of the IP addresses that you intend to give to each node? | Each node requires two IP addresses - one for signaling, which will support Diameter, HTTP and CQL traffic, and one for management traffic. |
Do you have DNS and NTP Server information? | It’s expected that the system will integrate with the IMS Core NTP and DNS servers. |
Installation Steps
Repeat the following steps for each ShCM VM that you are creating.
Step | Information |
---|---|
1 - Deploy your ShCM OVA | Deploy the OVA into VMware |
Method of procedure
Step 1 - Deploy your ShCM OVA
This description tells you how to deploy a ShCM VM using the vSphere GUI. There are equivalent mechanisms that can be used to deploy the same VM via command-line tools, as would be used for scripted deployment scenarios.
-
Start vSphere client.
-
Navigate to the datacenter in which you want to deploy the VMs.
-
Select [Actions] → [Deploy OVF Template].
-
Check [local file] and select the OVA template.
-
Click
Next
to continue. -
Enter the hostname for the virtual machine, for example
shcm-1
, and select the datacenter or VM folder to create the new virtual machine in. -
Click
Next
to continue. -
Select a compute resource to run the new virtual machine.
-
Click
Next
to continue. -
Review the settings and click
Next
to continue -
Select a deployment configuration.
-
Click
Next
to continue. -
Set the Select virtual disk format: field to
Thick Provision Lazy Zeroed
(for a lab trial deployment, you can use thin provision instead). -
Select the desired datastore.
-
Click
Next
to continue. -
Ensure that the correct network connections are selected to allow your ShCM node to connect to your management and signaling networks.
-
Click
Next
to continue. -
Customize the deployment properties. See Bootstrap parameters for detailed information. Optional properties can be left blank.
Note: The Sh Cache Microservice installation will fail to bootstrap if an incorrect property is used, rendering the VM unusable. Refer to the bootstrap troubleshooting section for more information in the event that the Sh Cache Microservice doesn’t bootstrap successfully.
-
Click
Next
to continue. -
The next panel summarizes the deployment settings. Review the settings to check you have entered all settings accurately.
-
Click
Finish
to complete the process. -
Start the node by right-clicking on the VM and selecting
Power On
.
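For reference, a scripted deployment of the same OVA could use VMware's ovftool. The following is only a rough sketch: the OVA file name, datastore, source network names and target locator are all hypothetical, and each bootstrap property would additionally need to be supplied with a --prop:<key>=<value> argument (see Bootstrap parameters).
ovftool \
  --name=shcm-1 \
  --datastore=datastore1 \
  --net:"Management"="mgmt-network" \
  --net:"Signaling"="signaling-network" \
  --powerOn \
  shcm.ova \
  'vi://root@vcenter.example.com/Datacenter/host/Cluster1'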
Results
A ShCM VM has been created and powered on. The bootstrap script will run automatically once the VM has booted.
Repeat the above steps for each ShCM VM you are creating.
Troubleshooting
In some systems, the above procedure may fail with the following error message:
"Transfer failed: The OVF descriptor is not available"
If this happens, take the following steps:
-
Create a temporary directory on the local computer on which you are running the browser.
-
Copy the OVA template to this directory.
-
Restart the procedure, selecting the copy of the OVA template from this temporary location.
-
Once the OVA template has been successfully uploaded, delete the temporary directory.
In the event of any other failure, refer to the Backout Procedure.
Next step
Once all ShCM VMs have been created using the procedure above, configure them by uploading configuration to CDS. Follow the links from this page.
Backout Procedure
If you need to delete a VM that failed to create properly or has the wrong bootstrap parameters, follow these steps.
-
Find the VM you wish to delete under its datacenter / host.
-
If the VM is powered on, power it off by right-clicking on its entry in the list of VMs and choosing [Power] → [Power Off].
-
Delete the VM and its associated resources by right-clicking on its entry in the list of VMs and choosing [Delete from Disk].
VM Configuration
This section describes details of the VM configuration of the ShCM.
-
If manually deploying the VMs, you will need to specify the bootstrap parameters, either as vApp parameters or as OpenStack userdata. These parameters are derived from the SDF in an automatic deployment.
-
After the VMs boot up, they will automatically perform bootstrap. You then need to upload configuration to CDS for the initial configuration step.
-
You may wish to refer to the Services and Components page for information about the ShCM’s components, directory structure, and the like.
Network interfaces
The following table lists the network interfaces present on the ShCM VMs.
The various IP addresses for the interfaces must each be on a separate subnet. In addition, each cluster of VMs must share a subnet for each applicable traffic type (e.g. all signaling addresses for the ShCM VMs must be on the same subnet). The recommended configuration is to use one subnet per traffic type. If your deployment has multiple sites then use one subnet per traffic type per site. |
Interface | Description | Examples of use |
---|---|---|
Management | Used for managing the node. | |
Signaling | Used for signaling messages. | |
No cluster interface is required for ShCM. Each ShCM node operates independently and is automatically configured to have cluster traffic routed over a local loopback address. |
Bootstrap parameters
Bootstrap parameters are provided to the VM when the VM is created. They are used by the bootstrap process to configure various settings in the VM’s operating system.
On VMware, the bootstrap parameters are provided as vApp parameters. On OpenStack, the bootstrap parameters are provided as userdata in YAML format.
When using the Commissioning VM to deploy the VM in an automated way, the bootstrap parameters are automatically derived from the SDF. It is therefore not necessary to specify them manually, and in automated deployments this section is for reference only. |
Note on network interface parameters
The MAC address (OpenStack only), IP address, subnet mask and gateway parameters are specified per interface. Refer to Network Interfaces for a list of network interfaces found on each ShCM VM.
For VMware vApp parameters, each network interface property is prefixed by the name of the interface and a full stop (.) - for example, management.ip_address
.
For OpenStack userdata, the network interface properties for each interface are grouped into a YAML dictionary, whose key in the userdata is the name of the interface. For example:
management:
mac_address: AA:BB:CC:DD:EE:FF
ip_address: 192.168.10.10
subnet: 192.168.0.0/16
gateway: 192.168.1.1
The names of the interfaces as used in the bootstrap parameters are as follows:
Interface | Name used in vApp parameters or userdata |
---|---|
Management | management |
Signaling | signaling |
See Network Interfaces for more information.
Writing the YAML userdata document for OpenStack
Userdata is a mechanism via which properties are provided to VM instances on OpenStack. For the ShCM and TSN VMs, userdata is provided in the form of a YAML document. You can find more information on YAML at https://yaml.org/spec/1.2/spec.html
In the list of bootstrap parameters, you can find the names of the parameters together with their format. Note that where a vApp parameter is specified as a comma-separated list for VMware, it should be converted to a YAML list for OpenStack. With the exception of network interfaces, all parameters are placed at the top level of the YAML document.
To provide the userdata to the OpenStack instances:
-
construct the YAML document in a single file, either directly on the OpenStack server, or by creating it on another machine and copying it to the OpenStack server; then
-
provide the
--user-data <file>
parameter tonova boot
during instance creation, where<file>
is the path to the file created in the previous step.
Since the files for different instances will be similar, but have different contents (such as IP addresses), you may want to construct one file for each instance and name them correspondingly. If you have a large number of VMs, you may find it easier to write a script to generate the files. |
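Before booting a VM, you may also wish to check that its file parses as valid YAML. A minimal sketch, assuming Python with the PyYAML module is available on the machine where you wrote the file; the file name shcm-1-userdata.yaml is a hypothetical example:
python -c "import yaml, sys; yaml.safe_load(open(sys.argv[1])); print('OK')" shcm-1-userdata.yaml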
Below is a full example of a userdata YAML document. In particular note how each element of a list is preceded by a hyphen (-
), and that comments can be included by using the #
character (anything between the #
and the end of that line is treated as a comment).
hostname: shcm-1
dns_servers:
- 8.8.8.8
- 8.8.4.4
ntp_servers:
- ntp1.telco.com
- ntp2.telco.com
timezone: Pacific/Auckland
cds_addresses:
- 192.168.10.10
- 192.168.10.11
- 192.168.10.12
deployment_id: telco1
site_id: DC01
# Network interface settings
management:
mac_address: AA:BB:CC:DD:EE:F0
ip_address: 192.168.10.10
subnet: 192.168.10.0/24
gateway: 192.168.10.1
signaling:
mac_address: AA:BB:CC:DD:EE:F1
ip_address: 192.168.20.10
subnet: 192.168.20.0/24
gateway: 192.168.20.1
List of bootstrap parameters
Property | Description | Format and Example |
---|---|---|
|
Required. The hostname of the server. |
A string consisting of letters A-Z, a-z, digits 0-9, and hyphens (-). Maximum length is 27 characters. Example: |
|
Required. List of DNS servers. |
For VMware, a comma-separated list of IPv4 addresses. For OpenStack, a list of IPv4 addresses. Example: |
|
Required. List of NTP servers. |
For VMware, a comma-separated list of IPv4 addresses or FQDNs. For OpenStack, a list of IPv4 addresses or FQDNs. Example: |
|
Optional. The system time zone in POSIX format. Defaults to UTC. |
Example: |
|
Required. The list of signaling addresses of Config Data Store (CDS) servers which will provide initial configuration for the cluster. CDS is provided by the TSN nodes. Refer to the Initial Configuration section of the documentation for more information. |
For VMware, a comma-separated list of IPv4 addresses. For OpenStack, a list of IPv4 addresses. Example: |
|
Required. An identifier for this deployment. A deployment consists of one or more sites, each of which consists of several clusters of nodes. |
A string consisting of letters A-Z, a-z, digits 0-9, and hyphens (-). Maximum length is 15 characters. Example: |
|
Required. A unique identifier (within the deployment) for this site. |
A string of the form |
|
Required only when there are multiple clusters of the same type in the same site. A suffix to distinguish between clusters of the same node type within a particular site. For example, when deploying the MaX product, a second TSN cluster may be required. |
A string consisting of letters A-Z, a-z, and digits 0-9. Maximum length is 8 characters. Example: |
|
Optional. A list of SSH public keys. Machines configured with the corresponding private key will be allowed to access the node over SSH as the |
For VMware, a comma-separated list of SSH public key strings, including the For OpenStack, a list of SSH public key strings. Example: |
|
Required only on Openstack. The MAC address of the interface. |
Six pairs of hexadecimal digits separated by colons. Example: |
|
Required. The IPv4 address of the interface. |
IPv4 address in dotted-decimal notation. Example: |
|
Required. The subnet mask of the interface. |
IPv4 subnet mask in CIDR notation. Example: |
|
Required. The default gateway for traffic on the interface. |
IPv4 address in dotted-decimal notation. Example: |
Bootstrap and initial configuration
Bootstrap
Bootstrap is the process whereby, after a VM is started for the first time, it is configured with key system-level configuration such as IP addresses, DNS and NTP server addresses, a hostname, and so on. This process runs automatically on the first boot of the VM. In order for bootstrap to succeed it is crucial that all entries in the SDF (or in the case of a manual deployment, all the bootstrap parameters) are correct.
Successful bootstrap
Once the VM has booted into multi-user mode, bootstrap normally takes about one minute.
SSH access to the VM is not possible until bootstrap has completed. If you want to monitor bootstrap from the console, log in as the sentinel
user with password !sentinel
and examine the log file bootstrap/bootstrap.log
. Successful completion is indicated by the line Bootstrap complete
.
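For example, the following commands (run from the console, or over SSH once it becomes available) show the end of the log and any reported errors:
tail -n 50 bootstrap/bootstrap.log      # look for the line "Bootstrap complete"
grep -i error bootstrap/bootstrap.log   # any lines mentioning errors during bootstrap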
Troubleshooting bootstrap
If bootstrap fails, an exception will be written to the log file. If the network-related portion of bootstrap succeeded but a failure occurred afterwards, the VM will be accessible over SSH and logging in will display a warning Automatic bootstrap failed
.
Examine the log file bootstrap/bootstrap.log
to see why bootstrap failed. In the majority of cases it will be down to an incorrect SDF or a missing or invalid bootstrap parameter. Destroy the VM and recreate it with the correct SDF or bootstrap parameters (it is not possible to run bootstrap more than once).
If you are sure you have the SDF or bootstrap parameters correct, or it is not obvious what is wrong, then contact your Customer Care Representative.
Initial configuration
Initial configuration occurs after bootstrap. It sets up product-level configuration such as:
-
configuring Rhino and the relevant Sentinel products (on systems that run Rhino);
-
SNMP-based monitoring; and
-
SSH key exchange to allow access from other VMs in the cluster to this VM.
To perform this initial configuration, the process retrieves its configuration in the form of YAML files from the CDS (Config Data Store). The CDS to contact is determined using the 'CDS addresses' parameter from the SDF or bootstrap parameters. Currently CDS services are provided by the TSN nodes.
The YAML files describing the initial configuration should be prepared in advance, and uploaded to the CDS after spinning up the VMs using the process described below.
Configuration files
The initial configuration process reads settings from YAML files. Each YAML file refers to a particular set of configuration options, for example SNMP settings. The YAML files are validated against a YANG schema. The YANG schema is human-readable and lists all the possible options, together with a description. It is therefore recommended to reference the Initial configuration YANG schema while preparing the YAML files.
Some YAML files are shared between different node types. If a file with the same file name is required for two different node types, then the same file must be used in both cases.
A set of sample YAML files for ShCM nodes can be found on the Example initial configuration YAML files page.
Since TSN nodes provide the CDS services, boot all TSN nodes and wait for them to complete bootstrap before booting a node of any other type. |
When uploading configuration files, you must also include Solution Definition Files for all RVT nodes in the deployment (see below). Furthermore, for any VM which runs Rhino, you must also include a valid Rhino license. |
Solution Definition Files
If performing an automated deployment, you will already have written Solution Definition Files (SDFs) as part of the creation of the VMs. However, as the initial configuration process discovers other RVT nodes using the SDFs, even when performing a manual deployment, you are still required to write SDFs. Read the page on preparing an SDF for more details on how to write an SDF.
When writing an SDF for a manual deployment, it is not necessary to fill out any connectivity information, including credentials, for the VMware or OpenStack deployment. However, all other fields, in particular hostnames, the deployment ID, the site ID and all the IPs must be filled out correctly. |
The SDF for this node type should be in a file named |
The rvtconfig
tool
Initial configuration YAML files can be validated and uploaded to CDS using the rvtconfig
tool. It is possible to run the rvtconfig
tool either from the Commissioning VM, or from any Rhino VoLTE TAS (RVT) VM. On the Commissioning VM, the command can be found in the resources
subdirectory of any RVT CSAR, e.g.: /home/admin/.local/share/csar/tsn-vmware-csar/resources/rvtconfig
. On any RVT VM, the rvtconfig
tool is on the path for the sentinel
user and can be run directly by running rvtconfig
.
The tool currently supports the following three commands:
-
rvtconfig validate
can be used to validate the configuration, even before booting any RVT VMs by using the Commissioning VM. -
rvtconfig initial-configure
can be used to upload the configuration to CDS. -
rvtconfig delete-deployment
can be used to delete a deployment from CDS - only use this when advised to do so by your Customer Care Representative.
For more information, use the built-in help, i.e. run rvtconfig --help
.
Uploading configuration to CDS
To upload the YAML files to CDS:
-
Create a directory to hold the initial configuration YAML files:
mkdir yamls
-
Upload all the files to this directory: the initial configuration YAML files, the Solution Definition Files (SDFs) for all node types, and the Rhino license for nodes running Rhino. Ensure all the files are in the directory itself - do not create any subdirectories. Also ensure the file names match the example YAML files.
-
Validate the YAML files using the
rvtconfig
tool:
rvtconfig validate -l -t shcm -i ~/yamls
The
rvtconfig
command can be run from the Commissioning VM, or alternatively you can SSH into any TSN VM and run the command from there (see the previous section for more details). If there are any errors, fix them, upload the fixed files to the yamls
directory, and then re-run the above rvtconfig validate
command.
-
Once the files pass validation, store the YAML files in CDS using the
rvtconfig
tool:
rvtconfig initial-configure -c <Any TSN VM’s management IP address> -d <deployment ID> -t shcm -i ~/yamls
If the CDS is not yet available, this will retry every 30 seconds for up to 15 minutes. As a large TSN cluster can take up to one hour to form, this means the command could time out before it completes. If the command still fails after several attempts over an hour, troubleshoot the TSN nodes.
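For illustration only, with a hypothetical TSN management IP address of 192.168.10.10 and a deployment ID of mydeployment, the above command would be:
rvtconfig initial-configure -c 192.168.10.10 -d mydeployment -t shcm -i ~/yamls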
If using the Commissioning VM, the |
Successful initial configuration
The initial configuration process on the VMs starts after bootstrap completes. It is constantly listening for configuration to be written to CDS (see above). Once it detects configuration has been uploaded, it will automatically download and validate it. Assuming everything passes validation, the configuration will then be applied automatically. This can take up to 20 minutes depending on node type.
You can monitor the initial configuration process by using the report-initconf status
tool. Success is indicated by status=vm_converged
.
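If you prefer to wait for convergence from a script, a minimal sketch (assuming it is run as the sentinel user, for whom report-initconf is on the path) is:
# Poll the initial configuration status every 30 seconds until the VM has converged.
while ! report-initconf status | grep -q 'status=vm_converged'; do
  sleep 30
done
echo "Initial configuration complete"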
Troubleshooting initial configuration
As with bootstrap, errors are reported to the log file (~/initconf/initconf.log).
<file> failed to validate against YANG schemas
: This indicates something in one of the YAML files was invalid. Refer to the output to check which field was invalid, and fix the problem. For initial configuration validation issues you do not have to destroy and recreate the VM, but can simply upload the fixed configuration using rvtconfig
as per the instructions above. The initial configuration process will automatically try again once it detects the uploaded configuration has been updated.
Task <name> caused an error
: This indicates that initial configuration has irrecoverably failed. Contact your Customer Care Representative for next steps.
Other errors: If these relate to invalid field values or a missing license then it is normally safe to fix the configuration and try again. Otherwise, contact your Customer Care Representative.
ShCM Services and Components
This section describes details of components and services running on the ShCM.
Systemd Services
ShCM Rhino Process
ShCM runs on the Rhino platform which is managed via the rhino.service
systemd service. To start Rhino, run sudo systemctl start rhino.service
and to stop it, run sudo systemctl stop rhino.service
.
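To check whether Rhino is currently running, standard systemd tooling can be used, for example:
sudo systemctl status rhino.service   # shows whether the service is active
journalctl -u rhino -l                # shows the Rhino service logs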
Linkerd
Linkerd is a transparent proxy that is used by ShCM for outbound communication. The proxy runs inside a Docker container. To check whether the process is running, run docker ps --filter name=linkerd
.
SNMP service monitor
The SNMP service monitor process is responsible for raising SNMP alarms when a disk partition gets too full.
The SNMP service monitor alarms are compatible with Rhino alarms and can be accessed in the same way. Refer to Accessing SNMP Statistics and Notifications for more information about this.
Alarms are sent to SNMP targets as configured through the initial configuration YAML files.
The following partitions are monitored:
-
the root partition (
/
)
-
the log partition (
/var/log
)
There are two thresholds for disk monitoring, expressed as a percentage of the total partition size. When disk usage exceeds the lower threshold, a warning (MINOR severity) alarm will be raised. When disk usage exceeds the upper threshold, a MAJOR severity alarm will be raised, and (except for the root partition) files will be automatically cleaned up where possible.
Once disk space has returned to a non-alarmable level, the SNMP service monitor will clear the associated alarm on the next check. By default it checks disk usage once per day. Running the command sudo systemctl reload disk-monitor
will force an immediate check of the disk space, for example if an alarm was raised and you have since cleaned up the appropriate partition and want to clear the alarm.
Configuring the SNMP service monitor
The default monitoring settings should be appropriate for the vast majority of deployments.
Should your Metaswitch Customer Care Representative advise you to reconfigure the disk monitor, you can do so by editing the file /etc/disk_monitor.yaml
(you will need to use sudo
when editing this file due to its permissions):
global:
check_interval_seconds: 86400
log:
lower_threshold: 80
max_files_to_delete: 10
upper_threshold: 90
root:
lower_threshold: 90
upper_threshold: 95
snmp:
enabled: true
notification_type: trap
targets:
- address: 192.168.50.50
port: 162
version: 2c
The file is in YAML format, and specifies the alarm thresholds for each disk partition (as a percentage), the interval between checks in seconds, and the SNMP targets.
-
Supported SNMP versions are
2c
and 3.
-
Supported notification types are
trap
and notify.
-
Supported values for the upper and lower thresholds are:
Partition |
Lower threshold range |
Upper threshold range |
Minimum difference between thresholds |
log (/var/log) |
50% to 80% |
60% to 90% |
10% |
root (/) |
50% to 90% |
60% to 99% |
5% |
-
check_interval_seconds
must be in the range 60 to 86400 seconds inclusive. It is recommended to keep the interval as long as possible to minimise performance impact.
After editing the file you can apply the configuration by running sudo systemctl reload disk-monitor
. Verify that the service has accepted the configuration by running sudo systemctl status disk-monitor
. If it shows an error, run journalctl -u disk-monitor
for more detailed information. Correct the errors in the configuration and apply it again.
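Putting the above steps together, a typical reconfiguration session might look like the following (the editor is your choice; sudo is required because of the file’s permissions):
sudo vi /etc/disk_monitor.yaml        # adjust thresholds, check interval or SNMP targets
sudo systemctl reload disk-monitor    # apply the new configuration
sudo systemctl status disk-monitor    # confirm the service accepted it
journalctl -u disk-monitor            # inspect detailed errors if the status shows a failure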
Systemd Timers
Cleanup Timer
The ShCM node contains a daily timer that cleans up stale Rhino SLEE activities and SBB instances which are created as part of ShCM transactions. This timer runs every night at 02:00 (in the system’s timezone), with a random delay of up to 15 minutes so that not all nodes run the cleanup at the same time, minimizing the chance of any service impact.
This timer consists of two systemd units: cleanup-sbbs-activities.timer
, which is the actual timer, and cleanup-sbbs-activities.service
, which is the service that the timer activates. The service in turn calls the manage-sbbs-activities
tool. This tool can also be run manually to investigate if there are any stale activities or SBB instances. Run it with the -h
option to get help about its command line options.
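To inspect the timer and run the tool manually, for example:
systemctl list-timers cleanup-sbbs-activities.timer   # show when the next cleanup is due
systemctl status cleanup-sbbs-activities.service      # show the result of the most recent run
manage-sbbs-activities -h                             # list the tool’s command line options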
Partitions
The ShCM VMs contain three partitions:
-
/boot
, with a size of 100MB. This contains the kernel and bootloader.
-
/var/log
, with a size of 7000MB. This is where the OS and Rhino store their logfiles. The Rhino logs are within thetas
subdirectory, and within that each cluster has its own directory.
-
/
, which uses up the rest of the disk. This is the root filesystem.
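To confirm the partition layout and current usage on a running VM, standard Linux tooling is sufficient, for example:
df -h / /boot /var/log   # show size and usage of the root, boot and log partitions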
PostgreSQL Configuration
On the ShCM node, there are default restrictions on who may access the PostgreSQL instance. These are defined in the root-restricted file /var/lib/pgsql/9.6/data/pg_hba.conf
. The default trusted authenticators are as follows:
Type of authenticator |
Database |
User |
Address |
Authentication method |
Local |
All |
All |
|
Trust unconditionally |
Host |
All |
All |
127.0.0.1/32 |
MD5 encrypted password |
Host |
All |
All |
::1/128 |
MD5 encrypted password |
Host |
All |
Sentinel |
127.0.0.1/32 |
Unencrypted password |
In addition, the instance will listen on the localhost interface only. This is recorded in /var/lib/pgsql/9.6/data/postgresql.conf
in the listen_addresses
field.
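You can confirm these settings (read-only) with, for example:
sudo grep listen_addresses /var/lib/pgsql/9.6/data/postgresql.conf   # shows the localhost-only listen setting
sudo cat /var/lib/pgsql/9.6/data/pg_hba.conf                         # shows the access rules listed above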
Monitoring
Each ShCM VM contains a Prometheus exporter, which monitors statistics about the VM’s health (such as CPU usage, RAM usage, etc). These statistics can be retrieved using SIMon by connecting it to port 9100 on the VM’s management interface.
If you prefer to use SNMP walking to retrieve system health statistics, they are available via the standard UCD-SNMP-MIB
OIDs with prefix 1.3.6.1.4.1.2021
.
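As a quick manual check of either interface (the management IP address and SNMP community shown are hypothetical, and the exporter is assumed to serve the standard Prometheus /metrics endpoint):
# Prometheus exporter on the management interface:
curl http://192.168.10.20:9100/metrics
# SNMP walk of the UCD-SNMP-MIB system statistics:
snmpwalk -v 2c -c public 192.168.10.20 1.3.6.1.4.1.2021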
Schema for shcm-vmpool-config.yaml
YANG schema
module shcm-vm-pool {
yang-version 1.1;
namespace "http://metaswitch.com/yang/tas-vm-build/shcm-vm-pool";
prefix "shcm-vm-pool";
import ietf-inet-types {
prefix "ietf-inet";
}
import vm-types {
prefix "vmt";
revision-date 2019-11-29;
}
organization "Metaswitch Networks";
contact "rvt-schemas@metaswitch.com";
description "ShCM VM pool configuration schema.";
revision 2019-11-29 {
description
"Initial revision";
reference
"Metaswitch Deployment Definition Guide";
}
grouping shcm-virtual-machine-pool {
container rhino-cluster {
uses vmt:rhino-cluster-grouping;
description "Rhino cluster configuration.";
}
leaf deployment-id {
type vmt:deployment-id-type;
mandatory true;
description "The deployment identifier. Used to form a unique VM identifier within the"
+ " VM host.";
}
leaf site-id {
type vmt:site-id-type;
mandatory true;
description "Site ID for the site that this VM pool is a part of.";
}
leaf node-type-suffix {
type vmt:node-type-suffix-type;
default "";
description "Suffix to add to the node type when deriving the group identifier. Should"
+ " normally be left blank.";
}
list cassandra-contact-points {
key "management.ipv4 signaling.ipv4";
uses vmt:base-interfaces;
description "Cassandra contact points.";
}
list virtual-machines {
key "vm-id";
leaf vm-id {
type string;
mandatory true;
description "The unique virtual machine identifier.";
}
unique diameter-sh-origin-host;
leaf diameter-sh-origin-host {
type ietf-inet:domain-name;
mandatory true;
description "Diameter Sh origin host";
}
container sas-instance-configuration {
uses vmt:sas-instance-configuration-grouping {
refine "system-name" {
default "ShCM";
}
}
when "../../../shcm-service-config/sas-configuration/enabled = 'true'";
description "SAS instance configuration.";
}
description "Configured virtual machines.";
}
description "ShCM virtual machine pool.";
}
}
Schema for shcm-service-config.yaml
YANG schema
module shcm-service-configuration {
yang-version 1.1;
namespace "http://metaswitch.com/yang/tas-vm-build/shcm-service-configuration";
prefix "shcm-service";
import vm-types {
prefix "vmt";
revision-date 2019-11-29;
}
organization "Metaswitch Networks";
contact "rvt-schemas@metaswitch.com";
description "ShCM service configuration schema.";
revision 2019-11-29 {
description
"Initial revision";
reference
"Metaswitch Deployment Definition Guide";
}
typedef cache-strategy-type {
type enumeration {
enum no-cache {
description "Do not use a cache.";
}
enum simple-cache {
description "Use a simple cache.";
}
enum subscription-cache {
description "Use a subscription cache.";
}
}
description "The type used to define the caching strategy.";
}
typedef cache-validity-time-type {
type uint32 {
range "1..172800";
}
description "The type used to define cache validity time.";
}
grouping shcm-service-configuration-grouping {
container sas-configuration {
uses vmt:sas-configuration-grouping {
refine "sas-connection/system-type" {
default "ShCM";
}
}
description "SAS configuration.";
}
container diameter-sh {
uses vmt:diameter-configuration-grouping;
description "Diameter SH configuration.";
}
leaf health-check.user-identity {
type vmt:sip-uri-type;
mandatory true;
description "The health check user identity.";
}
leaf diameter.request-timeout {
type uint32 {
range "909 .. 27273";
}
default 5000;
description "The Diameter request timeout.";
}
container cassandra-locking-configuration {
leaf backoff-time {
type uint32 {
range "50 .. 5000";
}
default 5000;
description "The backoff time in milliseconds.";
}
leaf backoff-limit {
type uint32 {
range "1 .. 10";
}
default 5;
description "The backoff limit.";
}
leaf hold-time {
type uint32 {
range "1000 .. 30000";
}
default 12000;
description "The hold time in milliseconds.";
}
description "Cassandra locking configuration.";
}
grouping cache-parameters-group {
description "Parameters describing the configuration for this cache.";
leaf cache-validity-time-seconds {
type cache-validity-time-type;
mandatory true;
description "Cache validity time in seconds.";
}
}
container caching-configuration {
list service-indications {
key "service-indication";
leaf service-indication {
type string;
mandatory true;
description "Service indication.";
}
leaf cache-strategy {
type cache-strategy-type;
default "subscription-cache";
description "Cache strategy.";
}
container cache-parameters {
when "../cache-strategy != 'no-cache'";
uses "cache-parameters-group";
description "Parameters describing the configuration for this cache.";
}
description "Service indications.";
}
list data-references-subscription-allowed {
key "data-reference";
leaf data-reference {
type enumeration {
enum ims-public-identity {
description "IMS public identity";
}
enum s-cscfname {
description "S-CSCF Name";
}
enum initial-filter-criteria {
description "Initial filter criteria";
}
enum service-level-trace-info {
description "Service level trace info";
}
enum ip-address-secure-binding-information {
description "IP address secure binding information";
}
enum service-priority-level {
description "Service priority level";
}
enum extended-priority {
description "Extended priority";
}
}
mandatory true;
description "The data reference.";
}
leaf cache-strategy {
type cache-strategy-type;
default "subscription-cache";
description "The cache strategy.";
}
container cache-parameters {
when "../cache-strategy != 'no-cache'";
uses "cache-parameters-group";
description "Parameters describing the configuration for this cache.";
}
description "List of data references for which subscription is permitted, and"
+ " their caching strategy configuration";
}
list data-references-subscription-not-allowed {
key "data-reference";
leaf data-reference {
type enumeration {
enum charging-information {
description "Charging information";
}
enum msisdn {
description "MS-ISDN";
}
enum psiactivation {
description "PSI activation";
}
enum dsai {
description "DSAI";
}
enum sms-registration-info {
description "SMS registration info";
}
enum tads-information {
description "TADS information";
}
enum stn-sr {
description "STN SR";
}
enum ue-srvcc-capability {
description "UE SRV CC capability";
}
enum csrn {
description "CSRN";
}
enum reference-location-information {
description "Reference location information";
}
}
mandatory true;
description "The data reference.";
}
leaf cache-strategy {
type enumeration {
enum no-cache {
description "Do not use a cache.";
}
enum simple-cache {
description "Use a simple cache.";
}
}
default "simple-cache";
description "The cache strategy.";
}
container cache-parameters {
when "../cache-strategy != 'no-cache'";
uses "cache-parameters-group";
description "Parameters describing the configuration for this cache.";
}
description "List of data references for which subscription is not permitted,"
+ " and their caching strategy configuration.";
}
description "Caching configuration.";
}
description "ShCM service configuration.";
}
}
Common types
YANG schema
module vm-types {
yang-version 1.1;
namespace "http://metaswitch.com/yang/tas-vm-build/vm-types";
prefix "vm-types";
import ietf-inet-types {
prefix "ietf-inet";
}
organization "Metaswitch Networks";
contact "rvt-schemas@metaswitch.com";
description "Types used by the various virtual machine schemas.";
revision 2019-11-29 {
description
"Initial revision";
reference
"Metaswitch Deployment Definition Guide";
}
// If you change this range, please update MIN_NODE_ID in tasvmcommon/components.py.
typedef rhino-node-id-type {
type uint8 {
range "1 .. 254";
}
description "The Rhino node identifier type.";
}
typedef rhino-cluster-id-type {
type uint16 {
range "1 .. 32767";
}
description "The Rhino cluster identifier type.";
}
typedef sgc-cluster-name-type {
type string;
description "The SGC cluster name type.";
}
typedef deployment-id-type {
type string {
pattern "[a-zA-Z0-9-]{1,15}";
}
description "Deployment identifier type. May only contain upper and lower case letters 'a'"
+ " through 'z', the digits '0' through '9' and hyphens. Must be between 1 and"
+ " 15 characters in length, inclusive.";
}
typedef site-id-type {
type string {
pattern "[a-zA-Z0-9]+";
}
description "Site identifier type. May only contain upper and lower case letters 'a'"
+ " through 'z' and the digits '0' through '9'. Minimum length of 1.";
}
typedef node-type-suffix-type {
type string {
pattern "[a-zA-Z0-9]*";
}
description "Node type suffix type. May only contain upper and lower case letters 'a'"
+ " through 'z' and the digits '0' through '9'. May be empty.";
}
typedef trace-level-type {
type enumeration {
enum debug {
description "The 'debug' trace level.";
}
enum info {
description "The 'info' trace level.";
}
enum warn {
description "The 'warn' trace level.";
}
enum error {
description "The 'error' trace level.";
}
}
description "The Rhino trace level type";
}
typedef sip-uri-type {
type string {
pattern 'sip:.*';
}
description "The SIP URI type.";
}
typedef tel-uri-type {
type string {
pattern "tel:.*";
}
description "The Tel URI type.";
}
typedef sip-or-tel-uri-type {
type union {
type sip-uri-type;
type tel-uri-type;
}
description "A type allowing either a SIP URI or a Tel URI.";
}
typedef number-string {
type string {
pattern "[0-9]+";
}
description "A type that permits a non-negative integer value.";
}
typedef sccp-address-type {
type string {
pattern "^(.*,)*type=(A|C|CH|J)7.*";
pattern "^(.*,)*ri=(gt|ssn|pcssn).*";
pattern "^.*=.*(,.*=.*)*$";
}
description "The SCCP address string type, as used by some Rhino RAs, such as CGIN and"
+ " SIS.";
}
typedef ss7-address-string-type {
type string {
pattern "^(.*,)*address=.*";
pattern "^.*=.*(,.*=.*)*$";
}
description "The SS7 address string type.";
}
grouping rhino-cluster-grouping {
leaf cluster-id {
type rhino-cluster-id-type;
mandatory true;
description "The Rhino cluster identifier.";
}
description "Rhino cluster configuration.";
}
grouping base-interfaces {
leaf management.ipv4 {
type ietf-inet:ipv4-address-no-zone;
mandatory true;
description "The IPv4 address of the management interface.";
}
leaf signaling.ipv4 {
type ietf-inet:ipv4-address-no-zone;
mandatory true;
description "The IPv4 address of the signaling interface.";
}
description "Base network interfaces: management and signaling";
}
grouping diameter-configuration-grouping {
leaf origin-realm {
type ietf-inet:domain-name;
mandatory true;
description "The Diameter origin realm.";
}
leaf destination-realm {
type ietf-inet:domain-name;
mandatory true;
description "The Diameter destination realm.";
}
list destination-peers {
key "destination-hostname";
min-elements 1;
leaf protocol {
type enumeration {
enum aaa {
description "The Authentication, Authorization and Accounting (AAA)"
+ " transport protocol.";
}
enum aaas {
description "The Authentication, Authorization and Accounting with Secure"
+ " Transport (AAAS) transport protocol.";
}
}
default aaa;
description "The Diameter transport protocol.";
}
leaf destination-hostname {
type ietf-inet:domain-name;
mandatory true;
description "The destination hostname.";
}
leaf port {
type ietf-inet:port-number;
default 3868;
description "The destination port number.";
}
description "Diameter destination peer(s).";
}
description "Diameter configuration.";
}
grouping sas-configuration-grouping {
leaf enabled {
type boolean;
default true;
description "'true' enables the use of SAS, 'false' disables.";
}
container sas-connection {
when "../enabled = 'true'";
leaf system-type {
type string;
description "The SAS system type.";
}
leaf system-version {
type string;
description "The SAS system version. "
+ "Will use the service version if not specified.";
}
leaf-list servers {
type ietf-inet:ipv4-address-no-zone;
min-elements 1;
description "The list of SAS servers to send records to.";
}
description "Configuration for connecting to SAS.";
}
description "SAS configuration.";
}
grouping sas-instance-configuration-grouping {
leaf system-name {
type string;
description "The SAS system name.";
}
description "SAS per-instance configuration.";
}
grouping plmn-id-grouping {
leaf mcc {
type number-string {
length "3";
}
mandatory true;
description "The Mobile Country Code (MCC).";
}
leaf mnc {
type number-string {
length "2..3";
}
mandatory true;
description "The Mobile Network Code (MNC).";
}
description "The Public Land Mobile Network (PLMN) identity.";
}
}
ShCM example initial configuration YAML files
The ShCM initial configuration process requires the following YAML files:
Example shcm-vmpool-config.yaml
Example YAML
# This file describes the pool of Virtual Machines that comprise a "ShCM group"
deployment-config:shcm-virtual-machine-pool:
# needs to match the deployment_id vapp parameter
deployment-id: example
# needs to match the site_id vApp parameter
site-id: DC1
# rhino cluster of one is only true for the ShCM Virtual Machine
rhino-cluster:
cluster-id: 202
virtual-machines:
- vm-id: example-shcm-1
diameter-sh-origin-host: shcm1.shcm.site1.mnc123.mcc530.3gppnetwork.org
# Only specify this if SAS is enabled in the ShCM service configuration.
# sas-instance-configuration:
# system-name: shcm-1
- vm-id: example-shcm-2
diameter-sh-origin-host: shcm2.shcm.site1.mnc123.mcc530.3gppnetwork.org
# Only specify this if SAS is enabled in the ShCM service configuration.
# sas-instance-configuration:
# system-name: shcm-2
Example shcm-service-config.yaml
Example YAML
# Service configuration for the Sh Cache Microservice
deployment-config:shcm-service-config:
##
## SAS Configuration
##
sas-configuration:
# Whether SAS is enabled ('true') or disabled ('false')
enabled: false
# # Parameters for connecting to SAS
# sas-connection:
# # List of SAS servers. Needs to be specified in a non-MDM deployment.
# servers:
# - 127.0.0.1
#
# # SAS system type
# system-type: ShCM
##
## Diameter Sh Configuration
##
diameter-sh:
# The origin realm to use when sending messages.
origin-realm: opencloud.com
# The value to use as the destination realm.
destination-realm: opencloud
# The HSS destination peers.
destination-peers:
- destination-hostname: hss.opencloud.com
port: 3868
protocol: aaa
# The user identity that is put in the diameter message to the HSS when a health check is performed
health-check.user-identity: sip:shcm-health-check@example.com
##
## Advanced settings - don't change unless specifically instructed
## by a Metaswitch engineer
##
# The request timeout (ms) the Sh RA should use
diameter.request-timeout: 5000
##
## Cassandra locking configuration
##
cassandra-locking-configuration:
# The time in milliseconds to wait before retrying to acquire the cassandra lock. Limited to 50-5000.
backoff-time: 200
# The number of times to retry to acquire the cassandra lock. Limited to 1-10.
backoff-limit: 5
# The time in milliseconds to hold the cassandra lock. Limited to 1000-30000.
hold-time: 12000
##
## Caching strategy
## Every setting has both no-cache or simple-cache options, and for most settings
## subscription-cache is also available.
##
## no-cache:
## The cache functionality will not be used; every read and write will
## always query the HSS for the requested information. Subscription is
## not applicable.
## simple-cache:
## Results from HSS queries will be cached. Updates will always write
## through to the HSS. The cache will not receive updates from the HSS.
## subscription-cache:
## Results from HSS queries will be cached. Updates will always write
## through to the HSS. ShCM will subscribe to data changes in the HSS and
## cache entries will be updated if the data is modified in the HSS.
##
## Recommendation:
## Don't change the default settings below.
## However, some HSS's don't support subscriptions and for these simple-cache
## should be used.
##
## If a Cassandra database isn't available for caching then no-cache can be
## used for test purposes.
##
caching-configuration:
##
## Caching strategy: one of `no-cache, simple-cache, subscription-cache`
##
service-indications:
# Caching configuration for MMTel-Services
- service-indication: mmtel-services
cache-strategy: simple-cache
cache-parameters:
cache-validity-time-seconds: 86400
# Caching configuration for ims-odb-information
- service-indication: ims-odb-information
cache-strategy: subscription-cache
cache-parameters:
cache-validity-time-seconds: 86400
# Caching configuration for opencloud-3rd-party-registrar
- service-indication: opencloud-3rd-party-registrar
cache-strategy: subscription-cache
cache-parameters:
cache-validity-time-seconds: 86400
# Caching configuration for metaswitch-tas-services
- service-indication: metaswitch-tas-services
cache-strategy: subscription-cache
cache-parameters:
cache-validity-time-seconds: 86400
data-references-subscription-allowed:
# Caching configuration for ims-public-identity
- data-reference: ims-public-identity
cache-strategy: subscription-cache
cache-parameters:
cache-validity-time-seconds: 86400
# Caching configuration for s-cscfname
- data-reference: s-cscfname
cache-strategy: subscription-cache
cache-parameters:
cache-validity-time-seconds: 86400
# Caching configuration for initial-filter-criteria
- data-reference: initial-filter-criteria
cache-strategy: subscription-cache
cache-parameters:
cache-validity-time-seconds: 86400
# Caching configuration for service-level-trace-info
- data-reference: service-level-trace-info
cache-strategy: subscription-cache
cache-parameters:
cache-validity-time-seconds: 86400
# Caching configuration for ip-address-secure-binding-information
- data-reference: ip-address-secure-binding-information
cache-strategy: subscription-cache
cache-parameters:
cache-validity-time-seconds: 86400
# Caching configuration for service-priority-level
- data-reference: service-priority-level
cache-strategy: subscription-cache
cache-parameters:
cache-validity-time-seconds: 86400
# Caching configuration for extended-priority
- data-reference: extended-priority
cache-strategy: subscription-cache
cache-parameters:
cache-validity-time-seconds: 86400
##
## Caching strategy: one of `no-cache, simple-cache`
##
data-references-subscription-not-allowed:
# Caching configuration for charging-information
- data-reference: charging-information
cache-strategy: simple-cache
cache-parameters:
cache-validity-time-seconds: 3600
# Caching configuration for msisdn
- data-reference: msisdn
cache-strategy: simple-cache
cache-parameters:
cache-validity-time-seconds: 3600
# Caching configuration for psi-activation
- data-reference: psiactivation
cache-strategy: simple-cache
cache-parameters:
cache-validity-time-seconds: 3600
# Caching configuration for dsai
- data-reference: dsai
cache-strategy: simple-cache
cache-parameters:
cache-validity-time-seconds: 3600
# Caching configuration for sms-registration-info
- data-reference: sms-registration-info
cache-strategy: simple-cache
cache-parameters:
cache-validity-time-seconds: 3600
# Caching configuration for tads-information
- data-reference: tads-information
cache-strategy: simple-cache
cache-parameters:
cache-validity-time-seconds: 3600
# Caching configuration for stn-sr
- data-reference: stn-sr
cache-strategy: simple-cache
cache-parameters:
cache-validity-time-seconds: 3600
# Caching configuration for ue-srvcc-capability
- data-reference: ue-srvcc-capability
cache-strategy: simple-cache
cache-parameters:
cache-validity-time-seconds: 3600
# Caching configuration for csrn
- data-reference: csrn
cache-strategy: simple-cache
cache-parameters:
cache-validity-time-seconds: 3600
# Caching configuration for reference-location-information
- data-reference: reference-location-information
cache-strategy: simple-cache
cache-parameters:
cache-validity-time-seconds: 3600
Verify the state of the ShCM nodes and processes
VNF validation tests
The VNF validation tests can only be run from the Commissioning VM, when performing an automated deployment.
What are VNF validation tests?
The VNF validation tests can be used to run some basic checks on deployed VMs to ensure they have been deployed correctly. Tests include:
-
Checking that the management IP can be reached;
-
Checking that the management gateway can be reached;
-
Checking that
sudo
works on the VM;
-
Checking that the VM has converged to its initial configuration.
Running the VNF validation tests
After deploying the VMs and performing the initial configuration, you can run the VNF validation tests from the Commissioning VM.
First, generate an 'Ansible hosts' file from the SDF:
csar ansible-hosts --sdf <path to SDF>
Then, run the validation tests using the Ansible hosts file just generated:
csar validate --vnf shcm hosts_ansible.yaml
If any of the tests fail, refer to the troubleshooting section.
Rhino Console Checks
Check via the rhino-console that various ShCM components are active.
Check | Actions | Expected Result |
---|---|---|
Check ShCM is running |
login as sentinel |
The SLEE should be in started state |
Check ShCM microservice is active |
rhino-console |
Both sh-cache-microservice and sh-cache-microservice-notification-service should be active |
Check ShCM Resource Adaptors are active |
rhino-console |
cassandra-cql-ra, diameter-sh-ra and http-ra should be active. |
List Active Alarms |
rhino-console |
Check for any active alarms. Further information about alarms can be found in |
Health Check API
If the curl commands fail with a connection exception, check that the correct port is being used.
Check | Actions | HTTP Result |
---|---|---|
Check the microservice is working correctly. |
curl -G http://localhost:8088/shcache/v1/infra/up -v |
204 |
Check that the microservice is in service and ready to receive requests on this API. |
curl -G http://localhost:8088/shcache/v1/infra/ready -v |
204 |
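If you only want the HTTP status code rather than curl’s full verbose output, it can be printed directly, for example:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8088/shcache/v1/infra/up     # expect 204
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8088/shcache/v1/infra/ready  # expect 204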
Troubleshooting Node Installation
Sh Cache Microservice not running after installation
If the Sh Cache Microservice is not running after the installation steps, first check that bootstrap and initial configuration were successful.
[sentinel@shcm1 ~]$ grep 'Bootstrap complete' ~/bootstrap/bootstrap.log
2019-10-28 13:53:54,226 INFO bootstrap.main Bootstrap complete
[sentinel@shcm1 ~]$
If the bootstrap.log
does not contain that string, examine the log for any exceptions or errors.
[sentinel@shcm1 ~]$ report-initconf status
status=vm_converged
[sentinel@shcm1 ~]$
If the status is different, examine the output from report-initconf
for any problems. If that is not sufficient, examine the ~/initconf/initconf.log
file for any exceptions or errors.
If bootstrap and initial configuration were successful, check the Rhino journalctl logs.
[sentinel@shcm1 ~]$ journalctl -u rhino -l
Further information can be found in the Sh Cache Microservice logs under /var/log/tas.
Rhino Alarms
Not Connected to Cassandra
Alarm 101:230435607984642 [CassandraCQLRA.ConnectToCluster]
Rhino node : 101
Level : Critical
InstanceID : cassandra-cql-ra
Source : (RA Entity) cassandra-cql-ra
Timestamp : 20190207 23:15:14 (active 8m 3s)
Message : Not connected to Cassandra. Attempting to connect each 10s
-
Check that the Cassandra server is active.
-
Check the network connectivity to the Cassandra host and port. To identify the hostname check via
rhino-console> listraentityconfigproperties cassandra-cql-ra
-
For further information turn on ra tracing via
rhino-console> settracerlevel resourceadaptorentity cassandra-cql-ra root Finest
Tracing is logged in the file /var/log/tas/[shcm-release]/rhino.log
.
Connection to SAS server is down
Alarm 101:230435607984641 [rhino.sas.connection.lost]
Rhino node : 101
Level : Major
InstanceID : Facility=SasFacility, host=localhost, port=6761
Source : (Subsystem) SasFacility
Timestamp : 20190207 23:15:14 (active 8m 4s)
Message : Connection to SAS server at localhost:6761 is down
-
Check that SAS is active.
-
Check the network connectivity to the SAS server host and port.
Diameter Peer is down
Alarm 101:230435607984643 [diameter.peer.connectiondown]
Rhino node : 101
Level : Warning
InstanceID : diameter.peer.localhost
Source : (RA Entity) diameter-sh-ra
Timestamp : 20190207 23:15:15 (active 8m 3s)
Message : Connection to localhost:3888 is down
-
Check the network connectivity to the Diameter peer host and port. To identify the peer hostname and port, check via
rhino-console> listraentityconfigproperties diameter-sh-ra
-
For further information turn on ra tracing via
rhino-console> settracerlevel resourceadaptorentity diameter-sh-ra root Finest
Tracing is logged in the file /var/log/tas/[shcm-release]/rhino.log
.
Cassandra Storage
This section describes details of the Cassandra Database used by ShCM nodes.
Overview
The Sh Cache Microservice uses Cassandra to cache the HSS Data.
As of Sentinel version 3.1, the Cassandra database is now provided by the TSN (TAS Storage Node). In Sentinel version 3.0, the Cassandra database was normally provided by the MAG node.
Database Schema
The schema below is provided for information only. During the initial configuration process the schema is automatically created on the TSNs. There is no need to run any CQL commands manually.
Cassandra’s tables exist in a 'keyspace', which, for illustrative purposes, can be thought of as a 'database' in SQL implementations.
-
The keyspace must be named
opencloud_sh_cache_microservice
.
-
For a production environment, the keyspace is created with replication (that is, each row in a table is replicated to many of the Cassandra nodes in the cluster, rather than just one). A sample CQL command for creating such a keyspace is:
CREATE KEYSPACE IF NOT EXISTS opencloud_sh_cache_microservice WITH REPLICATION={'class' : 'SimpleStrategy', 'replication_factor' : 3};
-
For testing purposes, replication may not be necessary. A sample CQL command for creating such a keyspace is:
CREATE KEYSPACE IF NOT EXISTS opencloud_sh_cache_microservice WITH REPLICATION={'class' : 'SimpleStrategy', 'replication_factor' : 1};
Once a keyspace is created, the required tables can be created. The following CQL statements provide both the structure and insight into the tables that are required.
USE opencloud_sh_cache_microservice;
CREATE TABLE IF NOT EXISTS repository_data_cache (
public_id text,
service_ind text,
sequence_number int,
user_data blob,
valid_until timestamp,
PRIMARY KEY (public_id, service_ind)
)
WITH gc_grace_seconds = 518400; //6 days
CREATE TABLE IF NOT EXISTS repository_data_lock (
public_id text,
service_ind text,
owner_id uuid,
count int,
held boolean,
held_until timestamp,
PRIMARY KEY (public_id, service_ind)
)
WITH gc_grace_seconds = 864000; //10 days
CREATE TABLE IF NOT EXISTS repository_data_subscription (
public_id text,
service_ind text,
expires timestamp,
PRIMARY KEY (public_id, service_ind)
)
WITH gc_grace_seconds = 518400; //6 days
USE opencloud_sh_cache_microservice;
CREATE TABLE IF NOT EXISTS dataref_cache (
public_id text,
dataref_id int,
private_id text,
user_data blob,
valid_until timestamp,
PRIMARY KEY (public_id, dataref_id, private_id)
)
WITH gc_grace_seconds = 518400; //6 days
CREATE TABLE IF NOT EXISTS dataref_lock (
public_id text,
dataref_id int,
private_id text,
owner_id uuid,
count int,
held boolean,
held_until timestamp,
PRIMARY KEY (public_id, dataref_id, private_id)
)
WITH gc_grace_seconds = 864000; //10 days
CREATE TABLE IF NOT EXISTS dataref_subscription (
public_id text,
dataref_id int,
private_id text,
expires timestamp,
PRIMARY KEY (public_id, dataref_id, private_id)
)
WITH gc_grace_seconds = 518400; //6 days
USE opencloud_sh_cache_microservice;
CREATE TABLE IF NOT EXISTS as_dataref_cache (
public_id text,
dataref_id int,
as_name text,
dsai_tags frozen<set<text>>,
user_data blob,
valid_until timestamp,
PRIMARY KEY (public_id, dataref_id, as_name, dsai_tags)
)
WITH gc_grace_seconds = 518400; //6 days
CREATE TABLE IF NOT EXISTS as_dataref_lock (
public_id text,
dataref_id int,
as_name text,
dsai_tags frozen<set<text>>,
owner_id uuid,
count int,
held boolean,
held_until timestamp,
PRIMARY KEY (public_id, dataref_id, as_name, dsai_tags)
)
WITH gc_grace_seconds = 864000; //10 days
CREATE TABLE IF NOT EXISTS as_dataref_subscription (
public_id text,
dataref_id int,
as_name text,
dsai_tags frozen<set<text>>,
expires timestamp,
PRIMARY KEY (public_id, dataref_id, as_name, dsai_tags)
)
WITH gc_grace_seconds = 518400; //6 days
USE opencloud_sh_cache_microservice;
CREATE TABLE IF NOT EXISTS impu_cache (
public_id text,
identity_sets frozen<set<int>>,
user_data blob,
valid_until timestamp,
PRIMARY KEY (public_id, identity_sets)
)
WITH gc_grace_seconds = 518400; //6 days
CREATE TABLE IF NOT EXISTS impu_lock (
public_id text,
identity_sets frozen<set<int>>,
owner_id uuid,
count int,
held boolean,
held_until timestamp,
PRIMARY KEY (public_id, identity_sets)
)
WITH gc_grace_seconds = 864000; //10 days
CREATE TABLE IF NOT EXISTS impu_subscription (
public_id text,
identity_sets frozen<set<int>>,
expires timestamp,
PRIMARY KEY (public_id, identity_sets)
)
WITH gc_grace_seconds = 518400; //6 days
USE opencloud_sh_cache_microservice;
CREATE TABLE IF NOT EXISTS subscribe_ue_reachability (
public_id text,
private_id text,
notification_url text,
notification_data text,
subscription_id uuid,
span_id uuid,
PRIMARY KEY (public_id, private_id, subscription_id)
)
WITH gc_grace_seconds = 172800; //2 days
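If you wish to confirm that the schema has been created (for information only; no manual CQL is required), a minimal check from a TSN node might look like the following, assuming cqlsh is available there and substituting a real node address:
cqlsh <TSN address> -k opencloud_sh_cache_microservice -e 'DESCRIBE TABLES'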