This manual is a guide for configuring and upgrading the REM nodes as virtual machines on OpenStack, VMware vSphere, or VMware vCloud.
- Notices
- Changelogs
- Introduction
- Setting up CDS
- VM types
- Installation and upgrades
- Installation and upgrades overview
- Installation or upgrades on OpenStack
- Installation or upgrades on VMware vSphere
- Installation or upgrades on VMware vCloud
- Verify the state of the nodes and processes
- VM configuration
- Troubleshooting node installation
- Glossary
Notices
Copyright © 2014-2022 Metaswitch Networks. All rights reserved.
This manual is issued on a controlled basis to a specific person on the understanding that no part of the Metaswitch Networks product code or documentation (including this manual) will be copied or distributed without prior agreement in writing from Metaswitch Networks.
Metaswitch Networks reserves the right to, without notice, modify or revise all or part of this document and/or change product features or specifications and shall not be responsible for any loss, cost, or damage, including consequential damage, caused by reliance on these materials.
Metaswitch and the Metaswitch logo are trademarks of Metaswitch Networks. Other brands and products referenced herein are the trademarks or registered trademarks of their respective holders.
Changelogs
4.0.0-36-1.0.0
- rvtconfig has been updated so that it ignores specific files that may be in the rvt-config directory unnecessarily. (#386665)
- Fully qualified table names in cqlsh queries, and replaced prepared statements with parameterised simple statements. (#340635)
- An error message, rather than a lengthy stack trace, is now output when incorrectly formatted override YAML files are provided. (#381281)
- Updated the MAG nginx config to add an X-Ua-OpenSSL-Cipher-Suite header to XCAP server requests containing the UE-nginx SSL connection cipher. (#340633)
- Disabled reverse-DNS lookups for SSH logins on the VM. (#398999)
- The override.yaml files for the mmt-gsm and mmt-cdma node types are now included in the compare-config and upload-config comparisons. (#371373)
- The --vm-version-source argument now takes the option sdf-version, which uses the version in the SDF for a given node. There is now a check that the inputted version matches the SDF version, and an optional argument --skip-version-check that skips this check. (#380063)
4.0.0-34-1.0.0
- Updated system package versions of rsync and open-vm-tools to address security vulnerabilities.
- Updated system package versions of bpftool, kernel, perf, python and xz to address security vulnerabilities.
- Fixed an issue where VMs would send DNS queries for the localhost hostname. (#206220)
- Fixed an issue that meant rvtconfig upload-config would fail when running in an environment where the input device is not a TTY. When this case is detected, upload-config defaults to non-interactive confirmation (-y). This preserves the behaviour of 4.0.0-26-1.0.0 (and earlier versions) in environments where an appropriate input device is not available. (#258542)
- Fixed an issue where scheduled tasks could incorrectly trigger on a reconfiguration of their schedules. (#167317)
- Added the rvtconfig compare-config command and made rvtconfig upload-config check config differences and request confirmation before upload. There is a new -f flag that can be used with upload-config to bypass the configuration comparison. The -y flag can now be used with upload-config to provide non-interactive confirmation in the case that the comparison shows differences. (OPT-4517)
- Added the rvt-gather_diags script to all node types. (#94043)
- Increased the bootstrap timeout from 5 to 15 minutes to allow time (10 minutes) to establish connectivity to NTP servers. (OPT-4917)
- Made rvtconfig validate not fail if fields it does not recognize are present in the SDF. (OPT-4699)
- Added 3 new traffic schemes: "all signaling together except SIP", "all signaling together except HTTP", and "all traffic types separated". (#60997)
- Fixed an issue where updated routing rules with the same target were not correctly applied. (#169195)
- Scheduled tasks can now be configured to run more than once per day, week or month, and at different frequencies on different nodes. (OPT-4373)
- Updated subnet validation to be done per-site rather than across the entire SDF deployment. (OPT-4412)
- Fixed an issue where unwanted notification categories could be sent to SNMP targets. (OPT-4543)
- Hardened linkerd by closing the Prometheus stats port and changing the proxy port to listen on localhost only. (OPT-4840)
- Added an optional node types field in the routing rules YAML configuration. This ensures a routing rule is only applied to VMs of the specified node types. (OPT-4079)
- initconf will not exit on invalid configuration. The VM will be allowed to quiesce or upload new configuration. (OPT-4389)
- rvtconfig now only uploads a single group’s configuration to that group’s entry in CDS. This means that initconf no longer fails if some other node type has invalid configuration. (OPT-4392)
- Fixed a race condition that could result in the quiescence tasks failing to run. (OPT-4468)
- The rvtconfig upload-config command now displays leader seed information as part of the printed config version summary. (OPT-3962)
- Added the rvtconfig print-leader-seed command to display the current leader seed for a deployment and group. (OPT-3962)
- Enum types stored in CDS cross-level state were refactored to string types to enable backwards compatibility. (OPT-4072)
- Updated system package versions of bind, dhclient, dhcp, bpftool, libX11, linux-firmware, kernel, nspr, nss, openjdk and perf to address security vulnerabilities. (OPT-4332)
- Made the ip-address.ip field optional during validation for non-RVT VNFCs. RVT and Custom VNFCs will still require the field. (OPT-4532)
- Fixed the SSH daemon configuration to reduce system log sizes due to error messages. (OPT-4538)
- Allowed the primary user’s password to be configured in the product options in the SDF. (OPT-4448)
- Updated the system package version of glib2 to address security vulnerabilities. (OPT-4198)
- Updated NTP services to ensure the system time is set correctly on system boot. (OPT-4204)
- Included deletion of leader-node state in rvtconfig delete-node-type, resolving an issue where the first node deployed after running that command wouldn’t deploy until the leader was re-deployed. (OPT-4213)
- Rolled back SIMPL support to 6.6.3. (OPT-43176)
- Disk and service monitor notification targets that use SNMPv3 are now configured correctly if both SNMPv2c and SNMPv3 are enabled. (OPT-4054)
- Fixed an issue where initconf would exit (and restart 15 minutes later) if it received a 400 response from the MDM. (OPT-4106)
- The Sentinel GAA Cassandra keyspace is now created with a replication factor of 3. (OPT-4080)
- snmptrapd is now enabled even if no targets are configured for system monitor notifications, in order to log any notifications that would have been sent. (OPT-4102)
- Fixed a bug where the SNMPv3 user’s authentication and/or privacy keys could not be changed. (OPT-4102)
- Making SNMPv3 queries to the VMs now requires encryption. (OPT-4102)
- Fixed a bug where system monitor notification traps would not be sent if SNMPv3 is enabled but v2c is not. Note that these traps are still sent as v2c only, even when v2c is not otherwise in use. (OPT-4102)
- Removed support for the signaling and signaling2 traffic type names. All traffic types should now be specified using the more granular names, such as ss7. Refer to the page Traffic types and traffic schemes in the Install Guide for a list of available traffic types. (OPT-3820)
- Ensured ntpd is in slew mode, but always steps the time on boot before Cassandra, Rhino and OCSS7 start. (OPT-4131, OPT-4143)
4.0.0-14-1.0.0
- Changed the rvtconfig delete-node-type command to also delete OID mappings, as well as all virtual machine events for the specified version, from cross-level group state. (OPT-3745)
- Fixed systemd units so that systemd does not restart Java applications after a systemctl kill. (OPT-3938)
- Added additional validation rules for traffic types in the SDF. (OPT-3834)
- Increased the severity of SNMP alarms raised by the disk monitor. (OPT-3987)
- Added --cds-address and --cds-addresses aliases for the -c parameter in rvtconfig. (OPT-3785)
4.0.0-13-1.0.0
- Added support for separation of traffic types onto different network interfaces. (OPT-3818)
- Improved the validation of SDF and YAML configuration files, and the errors reported when validation fails. (OPT-3656)
- Added logging of the instance ID of the leader while waiting during initconf. (OPT-3558)
- Stopped using YAML anchors/aliases in the example SDFs. (OPT-3606)
- Fixed a race condition that could cause initconf to hang indefinitely. (OPT-3742)
- Improved error reporting in rvtconfig.
- Updated SIMPL VM dependency to 6.6.1. (OPT-3857)
- Adjusted the linkerd OOM score so it will no longer be terminated by the OOM killer. (OPT-3780)
- Disabled all yum repositories. (OPT-3781)
- Disabled the TLSv1 and TLSv1.1 algorithms for Java. (OPT-3781)
- Changed initconf to treat the reload-resource-adaptors flag passed to rvtconfig as an intrinsic part of the configuration when determining if the configuration has been updated. (OPT-3766)
- Updated system package versions of bind, bpftool, kernel, nettle, perf and screen to address security vulnerabilities. (OPT-3874)
- Added an option to rvtconfig dump-config to dump the config to a specified directory. (OPT-3876)
- Fixed the confirmation prompt for the rvtconfig delete-node-type and rvtconfig delete-deployment commands when run on the SIMPL VM. (OPT-3707)
- Corrected a regression and a race condition that prevented configuration being reapplied after a leader seed change. (OPT-3862)
4.0.0-9-1.0.0
- All SDFs are now combined into a single SDF named sdf-rvt.yaml. (OPT-2286)
- Added the ability to set certain OS-level (kernel) parameters via YAML configuration. (OPT-3403)
- Updated to SIMPL 6.5.0. (OPT-3358, OPT-3545)
- Made the default gateway optional for the clustering interface. (OPT-3417)
- initconf will no longer block startup of a configured VM if MDM is unavailable. (OPT-3206)
- Enforced a single secrets-private-key in the SDF. (OPT-3441)
- Made the message logged when waiting for config more detailed about which parameters are being used to determine which config to retrieve. (OPT-3418)
- Removed the image name from example SDFs, as this is derived automatically by SIMPL. (OPT-3485)
- Made systemctl status output for containerised services not print benign errors. (OPT-3407)
- Added a delete-node-type command to facilitate re-deploying a node type after a failed deployment. (OPT-3406)
- Updated system package versions of glibc, iwl1000-firmware, net-snmp and perl to address security vulnerabilities. (OPT-3620)
4.0.0-8-1.0.0
- Fixed a bug (affecting 4.0.0-7-1.0.0 only) where rvtconfig reported the internal build version rather than the public version string. (OPT-3268)
- Updated the sudo package to address the CVE-2021-3156 vulnerability. (OPT-3497)
- Validate the product-options for each node type in the SDF. (OPT-3321)
- Clustered MDM installations are now supported. initconf will fail over across multiple configured MDMs. (OPT-3181)
4.0.0-7-1.0.0
- If YAML validation fails, the filename where an error was found is now printed alongside the error. (OPT-3108)
- Improved support for backwards compatibility with future CDS changes. (OPT-3274)
- Changed the report-initconf script to check for convergence since the last time config was received. (OPT-3341)
- Improved exception handling when CDS is not available. (OPT-3288)
- Changed rvtconfig upload-config and rvtconfig initial-configure to read the deployment ID from the SDFs rather than a command line argument. (OPT-3111)
- Published imageless CSARs for all node types. (OPT-3410)
- Added a message to initconf.log explaining that some Cassandra errors are expected. (OPT-3081)
- Updated system package versions of bpftool, dbus, kernel, nss, openssl and perf to address security vulnerabilities.
4.0.0-6-1.0.0
- Updated to SIMPL 6.4.3. (OPT-3254)
- When using a release version of rvtconfig, the correct this-rvtconfig version is now used. (OPT-3268)
- All REM setup is now completed before restarting REM, to avoid unnecessary restarts. (OPT-3189)
- Updated system package versions of bind-*, curl, kernel, perf and python-* to address security vulnerabilities. (OPT-3208)
- Added support for routing rules on the Signaling2 interface. (OPT-3191)
- Configured routing rules are now ignored if a VM does not have that interface. (OPT-3191)
- Added support for absolute paths in the rvtconfig CSAR container. (OPT-3077)
- The existing Rhino OIDs are now always imported for the current version. (OPT-3158)
- Changed the behaviour of initconf to not restart resource adaptors by default, to avoid an unexpected outage. A restart can be requested using the --reload-resource-adaptors parameter to rvtconfig upload-config. (OPT-2906)
- Changed the SAS resource identifier to match the provided SAS resource bundles. (OPT-3322)
- Added information about MDM and SIMPL to the documentation. (OPT-3074)
4.0.0-4-1.0.0
- Added list-config and describe-config operations to rvtconfig to list configurations already in CDS and describe the meaning of the special this-vm and this-rvtconfig values. (OPT-3064)
- Renamed rvtconfig initial-configure to rvtconfig upload-config, with the old command remaining as a synonym. (OPT-3064)
- Fixed rvtconfig pre-upgrade-init-cds to create a necessary table for upgrades from 3.1.0. (OPT-3048)
- Fixed a crash due to missing Cassandra tables when using rvtconfig pre-upgrade-init-cds. (OPT-3094)
- rvtconfig pre-upgrade-init-cds and rvtconfig push-pre-upgrade-state now support absolute paths in arguments. (OPT-3094)
- Reduced the timeout for DNS server failover. (OPT-2934)
- Updated the rhino-node-id maximum to 32767. (OPT-3153)
- Diagnostics at the top of initconf.log now include the system version and CDS group ID. (OPT-3056)
- Random passwords for the Rhino client and server keystores are now generated and stored in CDS. (OPT-2636)
- Updated to SIMPL 6.4.0. (OPT-3179)
- Increased the healthcheck and decommission timeouts to 20 minutes and 15 minutes respectively. (OPT-3143)
- Updated example SDFs to work with MDM 2.28.0, which is now the supported MDM version. (OPT-3028)
- Added support to report-initconf for handling rolled-over initconf-json.log files. The script can now read historic log files when building a report if necessary. (OPT-1440)
- Fixed potential data loss in Cassandra when doing an upgrade or rollback. (OPT-3004)
Introduction
This manual describes the configuration and upgrade of custom Rhino application VMs.
Introduction to the custom Rhino application product
The custom Rhino application solution is designed to allow you to create and deploy virtual machines that run custom-developed applications based on the Rhino Telecoms Application Server platform. Starting from an export or package of your application, you can create VM images with the VM Build Container (VMBC) tool, and deploy them to an OpenStack, VMware vSphere, or VMware vCloud host.
In addition, accompanying REM and SGC VMs are available that provide monitoring and SS7 functionality.
Installation
Installation is the process of deploying VMs onto your host. The custom Rhino application VMs must be installed using the SIMPL VM, which you will need to deploy manually first, using instructions from the SIMPL VM Documentation.
The SIMPL VM allows you to deploy VMs in an automated way. By writing a Solution Definition File (SDF), you describe to the SIMPL VM the number of VMs in your deployment and their properties such as hostnames and IP addresses. Software on the SIMPL VM then communicates with your VM host to create and power on the VMs.
The SIMPL VM deploys images from packages known as CSARs (Cloud Service Archives), which contain a VM image in the format the host would recognize, such as .ova
for VMware vSphere, as well as ancillary tools and data files.
The VMBC tool creates CSARs suitable for the platform(s) you specify when invoking it.
See the Installation and upgrades overview page for detailed installation instructions.
Note that all nodes in a deployment must be configured before any of them will start to serve live traffic.
Upgrades
The custom Rhino application nodes are designed to allow rolling upgrades with little or no service outage time. One at a time, each downlevel node is destroyed and replaced by an uplevel node. This is repeated until all nodes have been upgraded.
Configuration for the uplevel node is uploaded in advance. As nodes are recreated, they immediately pick up the uplevel configuration and resume service application.
If an upgrade goes wrong, rollback to the previous version is also supported.
As with installation, upgrades and rollbacks use the SIMPL VM.
See the Installation and upgrades overview page for detailed instructions on how to perform an upgrade.
CSAR EFIX patches
CSAR EFIX patches, also known as VM patches, are based on the SIMPL VM’s csar efix command. The command combines a CSAR EFIX file (a tar file containing some metadata and files to update) with an existing unpacked CSAR on the SIMPL VM, creating a new, patched CSAR on the SIMPL VM. It does not patch any VMs in-place; instead it patches the CSAR itself, offline, on the SIMPL VM. A normal rolling upgrade is then used to migrate to the patched version.
Once a CSAR has been patched, the newly created CSAR is entirely separate, with no linkage between them. Applying patch EFIX_1 to the original CSAR creates a new CSAR with the changes from patch EFIX_1.
In general:
- Applying patch EFIX_2 to the original CSAR will yield a new CSAR without the changes from EFIX_1.
- Applying EFIX_2 to the already patched CSAR will yield a new CSAR with the changes from both EFIX_1 and EFIX_2.
VM patches which target SLEE components (e.g. a service or feature change) contain the full deployment state of Rhino, including all SLEE components. As such, if applying multiple patches of this type, only the last such patch will take effect, because the last patch contains all the SLEE components. In other words, a patch to SLEE components should contain all the desired SLEE component changes, relative to the original release of the VM. For example, if patch EFIX_1 contains a fix for the HTTP RA SLEE component X and patch EFIX_2 contains a fix for a SLEE service component Y, then when EFIX_2 is generated it will contain the component X and Y fixes for the VM.
However, it is possible to apply a specific patch with a generic CSAR EFIX patch that only contains files to update. For example, patch EFIX_1 contains a specific patch that contains a fix for the HTTP RA SLEE component, and patch EFIX_2 contains an update to the linkerd config file. We can apply patch EFIX_1 to the original CSAR, then patch EFIX_2 to the patched CSAR.
We can also apply EFIX_2 first then EFIX_1.
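The layering rules above can be sketched in a few lines of Python. This is an illustrative model only, not the SIMPL VM's csar efix implementation; the patch and CSAR structures (dicts with invented keys such as rhino-state) exist purely for the example.

```python
# Illustrative model of CSAR EFIX layering; NOT the real SIMPL VM tooling.
# A CSAR is modelled as a dict of file -> content. A "slee" patch carries the
# full Rhino deployment state (so the last one applied wins), while a
# "generic" patch only updates individual files (so such patches accumulate).

def apply_patch(csar, patch):
    """Return a new patched CSAR; the input CSAR is left untouched."""
    patched = dict(csar)
    if patch["type"] == "slee":
        patched["rhino-state"] = patch["state"]
    else:
        patched.update(patch["files"])
    return patched

original = {"rhino-state": "v1.0", "linkerd.cfg": "default"}
efix_1 = {"type": "slee", "state": "v1.0 + fix for component X"}
efix_2 = {"type": "generic", "files": {"linkerd.cfg": "tuned"}}

# Applying EFIX_2 to the original CSAR yields a CSAR without EFIX_1's changes.
only_efix_2 = apply_patch(original, efix_2)

# Applying EFIX_2 on top of the EFIX_1-patched CSAR yields both sets of changes.
both = apply_patch(apply_patch(original, efix_1), efix_2)
```

Note that apply_patch never mutates its input, mirroring the fact that a patched CSAR is entirely separate from the CSAR it was created from.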
When a CSAR EFIX patch is applied, a new CSAR is created whose version combines the version of the target CSAR and the CSAR EFIX version.
Configuration
The configuration model is "declarative" - to change the configuration, you upload a complete set of files containing the entire configuration for all nodes, and the VMs will attempt to alter their configuration ("converge") to match. This allows for integration with GitOps (keeping configuration in a source control system), as well as ease of generating configuration via scripts.
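The declarative model can be illustrated with a minimal sketch. This is not the actual initconf convergence code, and the configuration keys shown are invented; it only demonstrates the principle that the uploaded files are the complete desired state and each node applies whatever differs from its active state.

```python
# Minimal sketch of declarative convergence; NOT the actual initconf code.
# The uploaded configuration is the complete desired state; a node converges
# by computing the difference from its currently active state.

def converge(active, desired):
    """Return (new active config, set of keys that had to change)."""
    changed = {k for k in desired if active.get(k) != desired[k]}
    # Keys absent from the desired state are removed, so they count as changes.
    changed |= {k for k in active if k not in desired}
    return dict(desired), changed

active = {"ntp-servers": ["10.0.0.1"], "log-level": "info"}
desired = {"ntp-servers": ["10.0.0.1", "10.0.0.2"], "log-level": "info"}

active, changed = converge(active, desired)
```

Because the desired state is always complete, the same upload applied twice converges to the same result, which is what makes GitOps-style workflows practical.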
Configuration is stored in a database called CDS, which is a set of tables in a Cassandra database. These tables contain version information, so that you can upload configuration in preparation for an upgrade without affecting the live system.
For the custom Rhino application, the CDS database must be provided by the customer. See Setting up CDS for a guide on how to create the required tables.
Configuration files are written in YAML format. Using the rvtconfig tool, their contents can be syntax-checked and verified for validity and self-consistency before uploading them to CDS.
See VM configuration for detailed information about writing configuration files and the (re)configuration process.
Setting up CDS
What is CDS?
CDS, or Configuration Data Store, is a Cassandra server that the custom VMs use to distribute configuration, and to coordinate their actions. Before deploying any custom VMs, the operator needs to set up a Cassandra server with the right keyspaces and tables, as described on this page.
Planning for the procedure
Background knowledge
This procedure assumes that:
- you already have a Cassandra server running. Metaswitch does not provide Cassandra support.
- you know how to use the cqlsh tool.
Method of procedure
Create keyspace and tables
Use the cqlsh tool to create the CDS keyspace and tables as follows.
First create the keyspace. If your Cassandra cluster has 1 or 2 nodes, use the following statement.
CREATE KEYSPACE IF NOT EXISTS metaswitch_tas_deployment_info
WITH REPLICATION={'class' : 'SimpleStrategy', 'replication_factor' : 1};
If your Cassandra cluster has 3 or more nodes, use the following statement instead.
CREATE KEYSPACE IF NOT EXISTS metaswitch_tas_deployment_info
WITH REPLICATION={'class' : 'SimpleStrategy', 'replication_factor' : 3};
Regardless of the number of nodes, create five tables as follows.
CREATE TABLE IF NOT EXISTS
metaswitch_tas_deployment_info.initial_config_namespaced (
deployment_id text, group_id text, namespace text, config blob, config_metadata blob,
PRIMARY KEY (deployment_id, group_id, namespace)
);
CREATE TABLE IF NOT EXISTS
metaswitch_tas_deployment_info.cas_group_state (
deployment_id text, group_id text, namespace text, state blob, seq int,
PRIMARY KEY (deployment_id, group_id, namespace)
);
CREATE TABLE IF NOT EXISTS
metaswitch_tas_deployment_info.cas_instance_state (
deployment_id text, group_id text, namespace text, instance_id text, state blob, seq int,
PRIMARY KEY (deployment_id, group_id, namespace, instance_id)
);
CREATE TABLE IF NOT EXISTS
metaswitch_tas_deployment_info.seeds_allocation (
deployment_id text, group_id text, namespace text, purpose text, instance_id set<text>, seq int,
PRIMARY KEY (deployment_id, group_id, namespace, purpose)
);
CREATE TABLE IF NOT EXISTS
metaswitch_tas_deployment_info.audit_history (
deployment_id text, group_id text, namespace text, instance_id text, history blob,
PRIMARY KEY (deployment_id, group_id, namespace, instance_id)
);
Your CDS is now ready for use.
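As a quick check, you can list the keyspace's tables from within cqlsh. DESCRIBE is a cqlsh shell command (available in all recent cqlsh versions); the output should include the five tables created above.

```
cqlsh> USE metaswitch_tas_deployment_info;
cqlsh:metaswitch_tas_deployment_info> DESCRIBE TABLES;
```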
VM types
This page describes the different custom Rhino application VM type(s) documented in this manual.
It also describes the ancillary nodes used to deploy and manage those VMs.
A REM node is a VM that runs the Rhino Element Manager (REM) application. REM is a web-based console for monitoring, configuring, and managing a Rhino SLEE. The REM node comes pre-installed with the Sentinel Express and SIS plugins, to simplify management of Sentinel Express and SIS-based applications.
Refer to the Flavors section for information on the REM VM’s sizing: number of vCPUs, RAM, and virtual disk.
Ancillary node types
The SIMPL VM
The SIMPL Virtual Appliance provides orchestration software to create, verify, configure, destroy and upgrade instances of your custom VM and the optional supporting REM and SGC VMs. Following the initial deployment, you will only need the SIMPL VM to perform configuration changes, patching or upgrades - it is not required for normal operation of the deployment.
Metaswitch Deployment Manager (MDM)
The custom application solution uses Metaswitch Deployment Manager (MDM) to co-ordinate installation, upgrades, scale and healing (replacement of failed instances). MDM is a virtual appliance that provides state monitoring, DNS and NTP services to the deployment. It is deployed as a pool of at least three virtual machines.
Upgrade
If you are upgrading from a deployment which already has MDM, ensure all MDM instances are upgraded before starting the upgrade of your custom application solution nodes. Your Customer Care Representative can provide guidance on upgrading MDM.
If you are upgrading from a deployment which does not have MDM, you must deploy MDM before upgrading any of your custom application nodes.
As REM nodes are non-clustered and not part of the call path, it is possible to deploy only one REM node. However, if redundant monitoring is desired, more nodes should be deployed.
Flavors
The REM nodes can be installed using one of the following flavors, which must be selected in the SDF. The selected flavor determines the values for RAM, hard disk space, and virtual CPU count.
Spec | Use case | Resources
---|---|---
| Lab and small-size production environments |
| Mid and large-size production environments |
Installation and upgrades
- Installation and upgrades overview
- Installation or upgrades on OpenStack
- Installation or upgrades on VMware vSphere
- Installation or upgrades on VMware vCloud
Installation and upgrades overview
The steps below describe how to upgrade the nodes that make up your deployment. Select the steps that are appropriate for your VM host: OpenStack, VMware vSphere, or VMware vCloud.
The supported versions for the platforms are listed below:
Platform | Supported versions
---|---
OpenStack | Newton to Wallaby
VMware vSphere | 6.7 and 7.0
Live migration of a node to a new VMware vSphere host or a new OpenStack compute node is not supported. To move such a node to a new host, remove it from the old host and add it again to the new host.
Preparing for an upgrade
Task | More information
---|---
Set up and/or verify your OpenStack, VMware vSphere, or VMware vCloud deployment | The installation procedures assume that you are upgrading VMs on existing OpenStack, VMware vSphere, or VMware vCloud host(s). Ensure the host(s) have sufficient vCPU, RAM and disk space capacity for the VMs. Note that for upgrades, you will temporarily need approximately one more VM’s worth of vCPU and RAM, and potentially more than double the disk space, than your existing deployment currently uses. You can later clean up older images to save disk space once you are happy that the upgrade was successful. Perform health checks on your host(s), such as checking for active alarms, to ensure they are in a suitable state to perform VM lifecycle operations. Ensure the VM host credentials that you will use in your SDF are valid and have sufficient permission to create/destroy VMs, power them on and off, change their properties, and access a VM’s terminal via the console.
Set up your CDS deployment | The installation procedures assume that CDS has been set up, as instructed in the Setting up CDS section.
Prepare service configuration | VM configuration information can be found at VM Configuration.
Installation
The following table sets out the steps you need to take to install and commission your VM deployment.
Be sure you know the number of VMs you need in your deployment. At present it is not possible to change the size of your deployment after it has been created.
Step | Task | Link
---|---|---
Installation (on OpenStack) | Prepare the SDF for the deployment |
| Deploy SIMPL VM into OpenStack |
| Prepare configuration files for the deployment |
| Create the OpenStack flavors |
| Install MDM |
| Prepare SIMPL VM for deployment |
| Deploy REM nodes on OpenStack |
Installation (on VMware vSphere) | Prepare the SDF for the deployment |
| Deploy SIMPL VM into VMware vSphere |
| Prepare configuration files for the deployment |
| Install MDM |
| Prepare SIMPL VM for deployment |
| Deploy REM nodes on VMware vSphere |
Installation (on VMware vCloud) | Prepare the SDF for the deployment |
| Deploy SIMPL VM into VMware vCloud |
| Prepare configuration files for the deployment |
| Install MDM |
| Prepare SIMPL VM for deployment |
| Deploy REM nodes on VMware vCloud |
Verification | Run some simple tests to verify that your VMs are working as expected |
Upgrades
The following table sets out the steps you need to execute a rolling upgrade of an existing VM deployment.
Step | Task | Link
---|---|---
Rolling upgrade (on OpenStack) | Setting up for a rolling upgrade |
| Rolling upgrade REM nodes on OpenStack |
Rolling upgrade (on OpenStack) | Post rolling upgrade steps |
Rolling upgrade (on VMware vSphere) | Setting up for a rolling upgrade |
| Rolling upgrade REM nodes on VMware vSphere |
Rolling upgrade (on VMware vSphere) | Post rolling upgrade steps |
Rolling upgrade (on VMware vCloud) | Setting up for a rolling upgrade |
| Rolling upgrade REM nodes on VMware vCloud |
Rolling upgrade (on VMware vCloud) | Post rolling upgrade steps |
Verification | Run some simple tests to verify that your VMs are working as expected |
Patches
The following table sets out the steps you need to execute a patch of an existing VM deployment.
Step | Task | Link
---|---|---
Rolling upgrade using CSAR EFIX patch (on OpenStack) | Setting up for a rolling upgrade using CSAR EFIX patch |
| Rolling CSAR EFIX patch REM nodes on OpenStack |
Rolling upgrade using CSAR EFIX patch (on OpenStack) | Post rolling upgrade using CSAR EFIX patch steps |
Rolling upgrade using CSAR EFIX patch (on VMware vSphere) | Setting up for a rolling upgrade using CSAR EFIX patch |
| Rolling CSAR EFIX patch REM nodes on VMware vSphere |
Rolling upgrade using CSAR EFIX patch (on VMware vSphere) | Post rolling upgrade using CSAR EFIX patch steps |
Installation or upgrades on OpenStack
These pages describe how to install or upgrade the REM nodes on OpenStack.
Prepare the SDF for the deployment
Planning for the procedure
Background knowledge
This procedure assumes that:
- you are installing into an existing OpenStack deployment
- you are using an OpenStack version from Icehouse through to Train inclusive
- you are thoroughly familiar with working with OpenStack machines and know how to set up tenants, users, roles, client environment scripts, and so on. (For more information, refer to the appropriate OpenStack installation guide for the version that you are using.)
- you have read the installation guidelines at Installation and upgrades overview and have everything you need to carry out the installation.
Reserve maintenance period
This procedure does not require a maintenance period. However, if you are integrating into a live network, we recommend that you implement measures to mitigate any unforeseen events.
Tools and access
This page references an external document: SIMPL VM Documentation. Ensure you have a copy available before proceeding.
Installation Questions
Question | More information
---|---
Do you have the correct CSARs? | All virtual appliances use the naming convention -
Do you have a list of the IP addresses that you intend to give to each node of each node type? | Each node requires an IP address for each interface. You can find a list of the VM’s interfaces on the Network Interfaces page.
Do you have DNS and NTP Server information? | It is expected that the deployed nodes will integrate with the IMS Core NTP and DNS servers.
Method of procedure
Step 1 - Extract the CSAR
This can either be done on your local Linux machine or on a SIMPL VM.
Option A - Running on a local machine
If you plan to do all operations from your local Linux machine instead of the SIMPL VM, Docker must be installed to run the rvtconfig tool in a later step.
To extract the CSAR, run the command: unzip <path to CSAR> -d <new directory to extract CSAR to>.
Option B - Running on an existing SIMPL VM
For this step, the SIMPL VM does not need to be running on the OpenStack deployment where the deployment takes place. It is sufficient to use a SIMPL VM on a lab system to prepare for a production deployment.
Transfer the CSAR onto the SIMPL VM and run csar unpack <path to CSAR>, where <path to CSAR> is the full path to the transferred CSAR.
This will unpack the CSAR to ~/.local/share/csar/.
.
Step 2 - Write the SDF
The Solution Definition File (SDF) contains all the information required to set up your cluster. It is therefore crucial to ensure all information in the SDF is correct before beginning the deployment. One SDF should be written per deployment.
It is recommended that the SDF is written before starting the deployment. The SDF must be named sdf-rvt.yaml
.
See Writing an SDF for more detailed information.
Each deployment needs a unique |
Example SDFs are included in every CSAR and can also be found at Example SDFs. We recommend that you start from a template SDF and edit it as desired instead of writing an SDF from scratch.
Deploy SIMPL VM into OpenStack
Note that one SIMPL VM can be used to deploy multiple node types. Thus, this step only needs to be performed once for all node types. |
The supported version of the SIMPL VM is |
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing OpenStack deployment
-
you are using a supported OpenStack version, as described in the 'OpenStack requirements' section of the SIMPL VM Documentation
-
you are thoroughly familiar with working with OpenStack machines and know how to set up tenants, users, roles, client environment scripts, and so on
(For more information, refer to the appropriate OpenStack installation guide for the version that you are using here.)
-
you know the IP networking information (IP address, subnet mask in CIDR notation, and default gateway) for the SIMPL VM.
Reserve maintenance period
This procedure does not require a maintenance period. However, if you are integrating into a live network, we recommend that you implement measures to mitigate any unforeseen events.
Tools and access
You must have:
-
access to a local computer with a network connection and browser access to the OpenStack Dashboard
-
administrative access to the OpenStack host machine
-
the OpenStack privileges required to deploy VMs from an image (see OpenStack documentation for specific details).
This page references an external document: the SIMPL VM Documentation. Ensure you have a copy available before proceeding.
Installation Questions
Question | More information
---|---
Do you have the correct SIMPL VM QCOW2? | All SIMPL VM virtual appliances use the naming convention -
Do you know the IP address that you intend to give to the SIMPL VM? | The SIMPL VM requires one IP address, for management traffic.
Have you created and do you know the names of the networks and security group for the nodes? | The SIMPL VM requires a management network with an unrestricted security group.
Method of procedure
Deploy and configure the SIMPL VM
Follow the SIMPL VM Documentation on how to deploy the SIMPL VM and set up the configuration.
Prepare configuration files for the deployment
To deploy nodes, you need to prepare configuration files that will be uploaded to the VMs.
Method of procedure
Step 1 - Create configuration YAML files
Create configuration YAML files relevant for your node type on the SIMPL VM. Store these files in the same directory as your prepared SDF.
See Example configuration YAML files for example configuration files.
Create the OpenStack flavors
About this task
This task creates the node flavor(s) that you will need when installing your deployment on OpenStack virtual machines.
You must complete this procedure before you begin the installation of the first node on OpenStack, but will not need to carry it out again for subsequent node installations. |
Create your node flavor(s)
Detailed procedure
-
Run the following command to create the OpenStack flavor, replacing
<flavor name>
with a name that will help you identify the flavor in future.
nova flavor-create <flavor name> auto <ram_mb> <disk_gb> <vcpu_count>
where:
-
<ram_mb> is the amount of RAM, in megabytes
-
<disk_gb> is the amount of hard disk space, in gigabytes
-
<vcpu_count> is the number of virtual CPUs.
Specify the parameters as pure numbers without units.
-
You can find the possible flavors in the Flavors section, and it is recommended to use the same flavor name as described there.
Some node types share flavors. If the same flavor is to be used for multiple node types, only create it once.
-
Make note of the flavor ID value provided in the command output because you will need it when installing your OpenStack deployment.
-
To check that the flavor you have just created has the correct values, run the command:
nova flavor-list
-
If you need to remove an incorrectly-configured flavor (replacing <flavor name> with the name of the flavor), run the command:
nova flavor-delete <flavor name>
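As a concrete sketch of the steps above, the commands below assemble a flavor-creation command with hypothetical sizing values. The flavor name and resource sizes are illustrative only; take the real values from the Flavors section.

```shell
# Hypothetical flavor sizing for illustration; use values from the Flavors section.
FLAVOR_NAME=rem
RAM_MB=16384     # RAM in megabytes, as a pure number without units
DISK_GB=30       # hard disk space in gigabytes
VCPU_COUNT=4     # number of virtual CPUs

# Assemble the creation command (run it on a host with the nova client configured):
CREATE_CMD="nova flavor-create $FLAVOR_NAME auto $RAM_MB $DISK_GB $VCPU_COUNT"
echo "$CREATE_CMD"

# Verify the flavor, and delete it if misconfigured:
echo "nova flavor-list"
echo "nova flavor-delete $FLAVOR_NAME"
```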
Install MDM
Before deploying any nodes, you will need to first install Metaswitch Deployment Manager (MDM).
Prerequisites
-
The MDM CSAR
-
A deployed and powered-on SIMPL virtual machine
-
The MDM deployment parameters (hostnames; management and signaling IP addresses)
-
Addresses for NTP, DNS and SNMP servers that the MDM instances will use
The minimum supported version of MDM is |
Method of procedure
Your Customer Care Representative can provide guidance on using the SIMPL VM to deploy MDM. Follow the instructions in the SIMPL VM Documentation.
As part of the installation, you will add MDM to the Solution Definition File (SDF) with the following data:
-
certificates and keys
-
custom topology
Generation of certificates and keys
MDM requires the following certificates and keys. Refer to the MDM documentation for more details.
-
An SSH key pair (for logging into all instances in the deployment, including MDM, which does not allow SSH access using passwords)
-
A CA (certificate authority) certificate and private key (used for the server authentication side of mutual TLS)
-
A "static", also called "client", certificate and private key (used for the client authentication side of mutual TLS)
The CA private key is unused, but should be kept safe in order to generate a new static certificate and private key in the future. Add the other credentials to the SDF sdf-rvt.yaml
as described in MDM service group.
Prepare SIMPL VM for deployment
Before deploying the VMs, the following files must be uploaded onto the SIMPL VM.
Deploy REM nodes on OpenStack
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing OpenStack deployment
-
the OpenStack deployment is set up with support for Heat templates
you are using an OpenStack version from Icehouse through to Train inclusive
-
you are thoroughly familiar with working with OpenStack machines and know how to set up tenants, users, roles, client environment scripts, and so on.
(For more information, refer to the appropriate OpenStack installation guide for the version that you are using here.)
-
you have deployed a SIMPL VM, unpacked the CSAR, and prepared an SDF.
Method of procedure
Refer to the SIMPL VM Documentation for details on the commands mentioned in the procedure. |
Step 1 - Check OpenStack quotas
The SIMPL VM creates one server group per VM, and one security group per interface on each VM. OpenStack sets limits on the number of server groups and security groups through quotas.
View the quota by running openstack quota show <project id>
on the OpenStack Controller node. This shows the maximum number of various resources.
You can view the existing server groups by running openstack server group list
. Similarly, you can find the security groups by running openstack security group list.
If the quota is too small to accommodate the new VMs that will be deployed, increase it by running
openstack quota set --<quota field to increase> <new quota value> <project ID>
. For example:
openstack quota set --server-groups 100 125610b8bf424e61ad2aa5be27ad73bb
Step 2 - Validate REM RVT configuration
Validate the configuration for the REM nodes to ensure that each REM node can properly self-configure.
To validate the configuration after creating the YAML files, run
rvtconfig validate -t rem -i <yaml-config-file-directory>
on the SIMPL VM from the resources
subdirectory of the REM CSAR.
Step 3 - Upload REM RVT configuration
Upload the configuration for the REM nodes to the CDS. This will enable each REM node to self-configure when it is deployed in the next step.
To upload configuration after creating the YAML files and validating them as described above, run
rvtconfig upload-config -c <cds-mgmt-addresses> -t rem -i <yaml-config-file-directory> (--vm-version-source this-rvtconfig | --vm-version <version>)
on the SIMPL VM from the resources
subdirectory of the REM CSAR.
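The validate and upload steps above can be sketched as assembled commands. The CDS management addresses, configuration directory, and choice of --vm-version-source are hypothetical illustrations; substitute the values for your deployment and run the commands from the resources subdirectory of the REM CSAR on the SIMPL VM.

```shell
# Hypothetical CDS management addresses and config directory for illustration.
CDS_ADDRESSES="10.0.0.10,10.0.0.11,10.0.0.12"
CONFIG_DIR=/home/admin/rvt-config

# Validate first, then upload to CDS using the version of this rvtconfig.
VALIDATE_CMD="rvtconfig validate -t rem -i $CONFIG_DIR"
UPLOAD_CMD="rvtconfig upload-config -c $CDS_ADDRESSES -t rem -i $CONFIG_DIR --vm-version-source this-rvtconfig"
echo "$VALIDATE_CMD"
echo "$UPLOAD_CMD"
```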
See Example configuration YAML files for example configuration files.
Step 4 - Deploy the OVA
Run csar deploy --vnf rem --sdf <path to SDF>
.
This will validate the SDF and generate the Heat template. After successful validation, it will upload the image and deploy the number of REM nodes specified in the SDF.
Only one node type should be deployed at a time. That is, when deploying these REM nodes, do not deploy other node types in parallel. |
Backout procedure
To delete the deployed VMs, run csar delete --vnf rem --sdf <path to SDF>
.
You must also delete the MDM state for each VM. To do this, you must first SSH into one of the MDM VMs. Get the instance IDs by running: mdmhelper --deployment-id <deployment ID> instance list
. Then for each REM VM, run the following command:
curl -X DELETE -k \
--cert /etc/certs-agent/upload/mdm-cert.crt \
--cacert /etc/certs-agent/upload/mdm-cas.crt \
--key /etc/certs-agent/upload/mdm-key.key \
https://127.0.0.1:4000/api/v1/deployments/<deployment ID>/instances/<instance ID>
Verify that the deletion worked by running mdmhelper --deployment-id <deployment ID> instance list
again. You may now log out of the MDM VM.
You must also delete state for this node type and version from the CDS prior to re-deploying the VMs. To delete the state, run rvtconfig delete-node-type --cassandra-contact-point <any CDS IP> --deployment-id <deployment ID> --site-id <site ID> --node-type rem (--vm-version-source [this-vm | this-rvtconfig] | --vm-version <vm_version>)
.
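A fully expanded form of the delete-node-type command can be sketched as below. The CDS contact point, deployment ID, and site ID are hypothetical placeholders; the command is assembled and printed rather than executed here.

```shell
# Hypothetical CDS contact point, deployment ID and site ID for illustration.
CDS_IP=10.0.0.10
DEPLOYMENT_ID=mydeployment
SITE_ID=DC1

# Assemble the command, using the rvtconfig's own version as the VM version source:
DELETE_CMD="rvtconfig delete-node-type --cassandra-contact-point $CDS_IP --deployment-id $DEPLOYMENT_ID --site-id $SITE_ID --node-type rem --vm-version-source this-rvtconfig"
echo "$DELETE_CMD"
```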
Next Step
Follow the verification instructions here: Verify the state of the nodes and processes
Automatic rolling upgrades and patches with SIMPL VM on OpenStack
This section provides information on Upgrades and CSAR EFIX patches.
Before running a rolling upgrade or patch, ensure that all node types in the deployment pass validation. See Verify the state of the nodes and processes for instructions on how to do this.
All uplevel CSARs or CSAR EFIX patches must be uploaded to SIMPL for all upgraded node types before installation. In addition, the uplevel SDF must contain the uplevel CSAR versions for all upgraded node types.
Rolling upgrades with SIMPL VM
To upgrade all node types, refer to the following pages in the order below.
Setting up for a rolling upgrade
Before running a rolling upgrade, some steps must be completed first.
Verify that HTTPS certificates are valid
The HTTPS certificates on the VMs must be valid for more than 30 days, and must remain valid during the upgrade for the whole deployment. For example, your upgrade will fail if your certificate is valid for 32 days and it takes more than 1 day to upgrade all of the VMs for all node types.
Using your own certificates
If using your own generated certificates, check their expiry dates using:
openssl x509 -in <certificate file> -enddate -noout
If the certificates are expiring, you must first upload the new certificates using rvtconfig upload-config
before upgrading.
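A quick way to confirm a certificate remains valid for at least 30 more days is openssl's -checkend option, which takes a threshold in seconds. The sketch below generates a throwaway self-signed certificate purely to demonstrate the check; on a real deployment, point the commands at your actual certificate file instead.

```shell
# Generate a throwaway self-signed certificate (90-day validity) purely to
# demonstrate the check; substitute your real certificate file in practice.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 90 \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null

# Show the expiry date:
openssl x509 -in /tmp/demo.crt -enddate -noout

# Exit status 0 means the certificate is still valid 30 days from now
# (30 days = 2592000 seconds):
if openssl x509 -in /tmp/demo.crt -checkend 2592000 -noout; then
  echo "certificate valid for at least 30 more days"
fi
```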
Using VM generated certificates
If you did not provide certificates to the VMs, each VM will generate its own certificates, which are valid for 5 years. If the current VMs were deployed less than 5 years ago, there is nothing further to do. If it has been over 5 years, please contact your Metaswitch Customer Care representative.
Verify all VMs are healthy
All the VMs in the deployment need to be healthy. To check this, run the common health checks for the VMs by following: Verify the state of the nodes and processes. The per-node checks should also be run by following each page under: Per-node checks.
Upload the uplevel CSARs to the SIMPL VM
If not already done, transfer the uplevel CSARs onto the SIMPL VM. For each CSAR, run csar unpack <path to CSAR>
, where <path to CSAR>
is the full path to the transferred uplevel CSAR.
This will unpack the uplevel CSARs to ~/.local/share/csar/
.
Upload the uplevel SDF to SIMPL VM
If the CSAR uplevel SDF was not created on the SIMPL VM, transfer the previously written CSAR uplevel SDF onto the SIMPL VM.
Ensure that each version in the vnfcs section of the uplevel SDF matches each node type’s CSAR version. |
Upload uplevel RVT configuration
Upload the uplevel configuration for all of the node types to the CDS. This is required for the rolling upgrade to complete.
As configuration is stored against a specific version, you need to re-upload the uplevel configuration even if it is identical to the downlevel configuration. |
When performing a rolling upgrade some elements of the uplevel configuration must remain identical to those in the downlevel configuration. These elements (and the remedy if that configuration change was made and the cluster upgrade process started) are described in the following table:
Node Type | Disallowed Configuration Change | Remedy
---|---|---
All | The | Roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade.
All | The ordering of the VM instances in the SDF may not be altered. | Roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade.
See Example configuration YAML files for example configuration files.
Rolling upgrade REM nodes on OpenStack
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing OpenStack deployment
-
the OpenStack deployment is set up with support for Heat templates
you are using an OpenStack version from Icehouse through to Train inclusive
-
you are thoroughly familiar with working with OpenStack machines and know how to set up tenants, users, roles, client environment scripts, and so on.
(For more information, refer to the appropriate OpenStack installation guide for the version that you are using here.)
-
you are upgrading an existing downlevel deployment for REM.
Method of procedure
Refer to the SIMPL VM Documentation for details on the commands mentioned in the procedure. |
Step 1 - Upgrade the downlevel REM VMs
Run csar update --vnf rem --sdf <path to SDF>
.
To perform a canary upgrade, run csar update --vnf rem --sites <site> --service-group <service_group> --index-range <range> --sdf <path to SDF> . The indexes start from 0, so 0 is the first VM. The range accepts ranges as well as comma-separated indexes (e.g. 1-3,7,9 ). Only the nodes specified in the index range will be upgraded. |
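A canary update command can be sketched with the placeholders filled in. The site name, service group, index range, and SDF path below are hypothetical illustrations; the command is assembled and printed rather than executed here.

```shell
# Hypothetical site, service group, index range and SDF path for illustration.
SITE=DC1
SERVICE_GROUP=rem
RANGE="1-3,7,9"    # upgrades VMs 1, 2, 3, 7 and 9 (indexes start from 0)
SDF=/home/admin/sdf-rvt.yaml

CANARY_CMD="csar update --vnf rem --sites $SITE --service-group $SERVICE_GROUP --index-range $RANGE --sdf $SDF"
echo "$CANARY_CMD"
```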
This will validate the uplevel SDF, generate the uplevel Terraform template, upload the uplevel image, and then it will start the upgrade.
The following will occur one REM node at a time:
-
The downlevel node will be quiesced.
-
The uplevel node will be created and boot up.
-
The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.
-
Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point the
csar update
command will move on to the next REM VM, or report that the upgrade of the REM was successful if all nodes have now been upgraded. -
Once the upgrade is complete, place calls and run any additional validation tests to verify the uplevel VMs are working as expected.
Backout procedure
If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To rollback, repeat the steps above with the downlevel REM CSAR and downlevel SDF.
You may need to use the --skip pre-update-checks
flag as part of the csar update
command. The --skip pre-update-checks
flag allows rollbacks when a node is unhealthy.
If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. To do so, run csar redeploy --vnf rem --sites <site> --sdf <path to SDF>
.
Next Step
Follow the post upgrade instructions here: Post rolling upgrade steps
Post rolling upgrade steps
After a rolling upgrade, some steps must be completed.
Verify all VMs are healthy
All the VMs in the deployment need to be healthy. To check this, run the common health checks for the VMs by following: Verify the state of the nodes and processes. The per-node checks should also be run by following each page under: Per-node checks.
Rolling upgrades using CSAR EFIX patch with SIMPL VM
To patch all node types, refer to the following pages in the order below.
Setting up for a rolling upgrade using CSAR EFIX patch
Before running a rolling upgrade, some steps must be completed first.
Verify that HTTPS certificates are valid
The HTTPS certificates on the VMs must be valid for more than 30 days, and must remain valid during the upgrade for the whole deployment. For example, your upgrade will fail if your certificate is valid for 32 days and it takes more than 1 day to upgrade all of the VMs for all node types.
Using your own certificates
If using your own generated certificates, check their expiry dates using:
openssl x509 -in <certificate file> -enddate -noout
If the certificates are expiring, you must first upload the new certificates using rvtconfig upload-config
before upgrading.
Using VM generated certificates
If you did not provide certificates to the VMs, each VM will generate its own certificates, which are valid for 5 years. If the current VMs were deployed less than 5 years ago, there is nothing further to do. If it has been over 5 years, please contact your Metaswitch Customer Care representative.
Verify all VMs are healthy
All the VMs in the deployment need to be healthy. To check this, run the common health checks for the VMs by following: Verify the state of the nodes and processes. The per-node checks should also be run by following each page under: Per-node checks.
Upload the CSAR EFIX patches to the SIMPL VM
If not already done, transfer the CSAR EFIX patches onto the SIMPL VM. For each CSAR EFIX patch, run:
csar efix <node type>/<version> <path to CSAR EFIX>
where <path to CSAR EFIX>
is the full path to the CSAR EFIX patch, and <node type>/<version>
is the downlevel unpacked CSAR located at ~/.local/share/csar/
.
If you are not sure of the exact version string to use, run csar list to view the list of installed CSARs. |
This will apply the EFIX patch to the downlevel CSAR.
The new patched CSAR is now the uplevel CSAR referenced in the following steps. |
Don’t apply the same CSAR EFIX patch to the same CSAR target more than once. If a previous attempt to run the csar efix command failed, be sure to remove the created CSAR before re-attempting, as the csar efix command requires a clean target directory to work with. |
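An assembled csar efix invocation can be sketched as below. The node type, downlevel version, and patch file path are hypothetical illustrations; check csar list for the exact installed version string before running the real command.

```shell
# Hypothetical downlevel version and patch file for illustration;
# run `csar list` to find the exact installed version string.
NODE_TYPE=rem
DOWNLEVEL_VERSION=4.0.0-14-1.0.0
PATCH_FILE=/home/admin/rem-4.0.0-14-1.0.0-patch123.zip

EFIX_CMD="csar efix $NODE_TYPE/$DOWNLEVEL_VERSION $PATCH_FILE"
echo "$EFIX_CMD"
```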
Upload the uplevel SDF to SIMPL VM
If the CSAR EFIX patch uplevel SDF was not created on the SIMPL VM, transfer the previously written CSAR EFIX patch uplevel SDF onto the SIMPL VM.
Ensure the version in each node type’s vnfcs section of the uplevel SDF is set to <downlevel-version>-<patch-version> . For example: 4.0.0-14-1.0.0-patch123 , where 4.0.0-14-1.0.0 is the downlevel version and patch123 is the patch version. |
Upload uplevel RVT configuration
Upload the uplevel configuration for all of the node types to the CDS. This is required for the rolling upgrade using CSAR EFIX patch to complete.
As configuration is stored against a specific version, you need to re-upload the uplevel configuration even if it is identical to the downlevel configuration. |
The uplevel version for a CSAR EFIX patch is the format <downlevel-version>-<patch-version>
. For example: 4.0.0-14-1.0.0-patch123
, where 4.0.0-14-1.0.0
is the downlevel version and patch123
is the patch version.
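The uplevel version string is a straightforward concatenation, which can be illustrated as:

```shell
# Illustration of the uplevel version format <downlevel-version>-<patch-version>,
# using the example values from the text above.
DOWNLEVEL_VERSION=4.0.0-14-1.0.0
PATCH_VERSION=patch123
UPLEVEL_VERSION="${DOWNLEVEL_VERSION}-${PATCH_VERSION}"
echo "$UPLEVEL_VERSION"   # 4.0.0-14-1.0.0-patch123
```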
When performing a rolling upgrade some elements of the uplevel configuration must remain identical to those in the downlevel configuration. These elements (and the remedy if that configuration change was made and the cluster upgrade process started) are described in the following table:
Node Type | Disallowed Configuration Change | Remedy
---|---|---
All | The | Roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade.
All | The ordering of the VM instances in the SDF may not be altered. | Roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade.
See Example configuration YAML files for example configuration files.
Rolling CSAR EFIX patch REM nodes on OpenStack
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing OpenStack deployment
-
the OpenStack deployment is set up with support for Heat templates
you are using an OpenStack version from Icehouse through to Train inclusive
-
you are thoroughly familiar with working with OpenStack machines and know how to set up tenants, users, roles, client environment scripts, and so on.
(For more information, refer to the appropriate OpenStack installation guide for the version that you are using here.)
-
you are upgrading an existing downlevel deployment for REM.
Method of procedure
Refer to the SIMPL VM Documentation for details on the commands mentioned in the procedure. |
Step 1 - Check OpenStack quotas
The SIMPL VM creates one server group per VM, and one security group per interface on each VM. OpenStack sets limits on the number of server groups and security groups through quotas.
View the quota by running openstack quota show <project id>
on the OpenStack Controller node. This shows the maximum number of various resources.
You can view the existing server groups by running openstack server group list
. Similarly, you can find the security groups by running openstack security group list.
If the quota is too small to accommodate the new VMs that will be deployed, increase it by running
openstack quota set --<quota field to increase> <new quota value> <project ID>
. For example:
openstack quota set --server-groups 100 125610b8bf424e61ad2aa5be27ad73bb
See CSAR EFIX patches to learn more on the CSAR EFIX patching process.
Step 2 - Upgrade the downlevel REM VMs
Run csar update --vnf rem --sdf <path to SDF>
.
To perform a canary upgrade, run csar update --vnf rem --sites <site> --service-group <service_group> --index-range <range> --sdf <path to SDF> . The indexes start from 0, so 0 is the first VM. The range accepts ranges as well as comma-separated indexes (e.g. 1-3,7,9 ). Only the nodes specified in the index range will be upgraded. |
This will validate the uplevel SDF, generate the uplevel Terraform template, upload the uplevel image, and then it will start the upgrade.
The following will occur one REM node at a time:
-
The downlevel node will be quiesced.
-
The uplevel node will be created and boot up.
-
The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.
-
Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point the
csar update
command will move on to the next REM VM, or report that the upgrade of the REM was successful if all nodes have now been upgraded. -
Once the upgrade is complete, place calls and run any additional validation tests to verify the uplevel VMs are working as expected.
Backout procedure
If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To rollback, repeat the steps above with the downlevel REM CSAR and downlevel SDF.
You may need to use the --skip pre-update-checks
flag as part of the csar update
command. The --skip pre-update-checks
flag allows rollbacks when a node is unhealthy.
If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. To do so, run csar redeploy --vnf rem --sites <site> --sdf <path to SDF>
.
Next Step
Follow the post upgrade instructions here: Post rolling upgrade using CSAR EFIX patch steps
Post rolling upgrade using CSAR EFIX patch steps
After a rolling upgrade using CSAR EFIX patch, some steps must be completed.
Verify all VMs are healthy
All the VMs in the deployment need to be healthy. To check this, run the common health checks for the VMs by following: Verify the state of the nodes and processes. The per-node checks should also be run by following each page under: Per-node checks.
Installation or upgrades on VMware vSphere
These pages describe how to install or upgrade the REM nodes on VMware vSphere.
Prepare the SDF for the deployment
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing VMware vSphere deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware vSphere deployment from scratch
-
you know the IP networking information (IP address, subnet mask in CIDR notation, and default gateway) for the nodes.
-
you have read the installation guidelines at Installation and upgrades overview and have everything you need to carry out the installation.
Reserve maintenance period
This procedure does not require a maintenance period. However, if you are integrating into a live network, we recommend that you implement measures to mitigate any unforeseen events.
Tools and access
This page references an external document: SIMPL VM Documentation. Ensure you have a copy available before proceeding.
Installation Questions
Question | More information
---|---
Do you have the correct CSARs? | All virtual appliances use the naming convention -
Do you have a list of the IP addresses that you intend to give to each node of each node type? | Each node requires an IP address for each interface. You can find a list of the VM’s interfaces on the Network Interfaces page.
Do you have DNS and NTP Server information? | It is expected that the deployed nodes will integrate with the IMS Core NTP and DNS servers.
Method of procedure
Step 1 - Extract the CSAR
This can either be done on your local Linux machine or on a SIMPL VM.
Option A - Running on a local machine
If you plan to do all operations from your local Linux machine instead of SIMPL, Docker must be installed to run the rvtconfig tool in a later step. |
To extract the CSAR, run the command: unzip <path to CSAR> -d <new directory to extract CSAR to>
Option B - Running on an existing SIMPL VM
For this step, the SIMPL VM does not need to be running on the VMware vSphere where the deployment takes place. It is sufficient to use a SIMPL VM on a lab system to prepare for a production deployment.
Transfer the CSAR onto the SIMPL VM and run csar unpack <path to CSAR>
, where <path to CSAR>
is the full path to the transferred CSAR.
This will unpack the CSAR to ~/.local/share/csar/
.
Step 2 - Write the SDF
The Solution Definition File (SDF) contains all the information required to set up your cluster. It is therefore crucial to ensure all information in the SDF is correct before beginning the deployment. One SDF should be written per deployment.
It is recommended that the SDF is written before starting the deployment. The SDF must be named sdf-rvt.yaml
.
See Writing an SDF for more detailed information.
Each deployment needs a unique |
Example SDFs are included in every CSAR and can also be found at Example SDFs. We recommend that you start from a template SDF and edit it as desired instead of writing an SDF from scratch.
Deploy SIMPL VM into VMware vSphere
Note that one SIMPL VM can be used to deploy multiple node types. Thus, this step only needs to be performed once for all node types. |
The supported versions of the SIMPL VM are |
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are using a supported VMware vSphere version, as described in the 'VMware requirements' section of the SIMPL VM Documentation
-
you are installing into an existing VMware vSphere deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware vSphere deployment from scratch
-
you know the IP networking information (IP address, subnet mask in CIDR notation, and default gateway) for the SIMPL VM.
Reserve maintenance period
This procedure does not require a maintenance period. However, if you are integrating into a live network, we recommend that you implement measures to mitigate any unforeseen events.
Tools and access
You must have access to a local computer (referred to in this procedure as the local computer) with a network connection and access to the vSphere client.
This page references an external document: the SIMPL VM Documentation. Ensure you have a copy available before proceeding.
Installation Questions
Question | More information
---|---
Do you have the correct SIMPL VM OVA? | All SIMPL VM virtual appliances use the naming convention -
Do you know the IP address that you intend to give to the SIMPL VM? | The SIMPL VM requires one IP address, for management traffic.
Method of procedure
Deploy and configure the SIMPL VM
Follow the SIMPL VM Documentation on how to deploy the SIMPL VM and set up the configuration.
Prepare configuration files for the deployment
To deploy nodes, you need to prepare configuration files that will be uploaded to the VMs.
Method of procedure
Step 1 - Create configuration YAML files
Create configuration YAML files relevant for your node type on the SIMPL VM. Store these files in the same directory as your prepared SDF.
See Example configuration YAML files for example configuration files.
Install MDM
Before deploying any nodes, you will need to first install Metaswitch Deployment Manager (MDM).
Prerequisites
-
The MDM CSAR
-
A deployed and powered-on SIMPL virtual machine
-
The MDM deployment parameters (hostnames; management and signaling IP addresses)
-
Addresses for NTP, DNS and SNMP servers that the MDM instances will use
The minimum supported version of MDM is |
Method of procedure
Your Customer Care Representative can provide guidance on using the SIMPL VM to deploy MDM. Follow the instructions in the SIMPL VM Documentation.
As part of the installation, you will add MDM to the Solution Definition File (SDF) with the following data:
-
certificates and keys
-
custom topology
Generation of certificates and keys
MDM requires the following certificates and keys. Refer to the MDM documentation for more details.
-
An SSH key pair (for logging into all instances in the deployment, including MDM, which does not allow SSH access using passwords)
-
A CA (certificate authority) certificate and private key (used for the server authentication side of mutual TLS)
-
A "static", also called "client", certificate and private key (used for the client authentication side of mutual TLS)
The CA private key is unused, but should be kept safe in order to generate a new static certificate and private key in the future. Add the other credentials to the SDF sdf-rvt.yaml
as described in MDM service group.
Prepare SIMPL VM for deployment
Before deploying the VMs, the following files must be uploaded onto the SIMPL VM.
Deploy REM nodes on VMware vSphere
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing VMware vSphere deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware vSphere deployment from scratch
-
you have deployed a SIMPL VM, unpacked the CSAR, and prepared an SDF.
Method of procedure
Refer to the SIMPL VM Documentation for details on the commands mentioned in the procedure. |
Step 1 - Validate REM RVT configuration
Validate the configuration for the REM nodes to ensure that each REM node can properly self-configure.
To validate the configuration after creating the YAML files, run
rvtconfig validate -t rem -i <yaml-config-file-directory>
on the SIMPL VM from the resources
subdirectory of the REM CSAR.
Step 2 - Upload REM RVT configuration
Upload the configuration for the REM nodes to the CDS. This enables each REM node to self-configure when it is deployed in the next step.
To upload configuration after creating the YAML files and validating them as described above, run
rvtconfig upload-config -c <cds-mgmt-addresses> -t rem -i <yaml-config-file-directory> (--vm-version-source this-rvtconfig | --vm-version <version>)
on the SIMPL VM from the resources
subdirectory of the REM CSAR.
See Example configuration YAML files for example configuration files.
Step 3 - Deploy the OVA
Run csar deploy --vnf rem --sdf <path to SDF>
.
This will validate the SDF and generate the Terraform template. After successful validation, it will upload the image and deploy the number of REM nodes specified in the SDF.
Deploy only one node type at a time. That is, when deploying these REM nodes, do not deploy other node types in parallel. |
Backout procedure
To delete the deployed VMs, run csar delete --vnf rem --sdf <path to SDF>
.
You must also delete the MDM state for each VM. To do this, you must first SSH into one of the MDM VMs. Get the instance IDs by running: mdmhelper --deployment-id <deployment ID> instance list
. Then for each REM VM, run the following command:
curl -X DELETE -k \
--cert /etc/certs-agent/upload/mdm-cert.crt \
--cacert /etc/certs-agent/upload/mdm-cas.crt \
--key /etc/certs-agent/upload/mdm-key.key \
https://127.0.0.1:4000/api/v1/deployments/<deployment ID>/instances/<instance ID>
Verify that the deletion worked by running mdmhelper --deployment-id <deployment ID> instance list
again. You may now log out of the MDM VM.
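The per-VM deletion can be scripted in a small loop. This is a sketch to run on the MDM VM; the deployment ID and instance IDs are placeholders that you must substitute with the values returned by mdmhelper.

```shell
# Run on the MDM VM. DEPLOYMENT_ID and the instance IDs are
# placeholders - take them from "mdmhelper ... instance list".
DEPLOYMENT_ID="<deployment ID>"
for INSTANCE_ID in "<instance ID 1>" "<instance ID 2>"; do
  curl -X DELETE -k \
    --cert /etc/certs-agent/upload/mdm-cert.crt \
    --cacert /etc/certs-agent/upload/mdm-cas.crt \
    --key /etc/certs-agent/upload/mdm-key.key \
    "https://127.0.0.1:4000/api/v1/deployments/${DEPLOYMENT_ID}/instances/${INSTANCE_ID}"
done
```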
You must also delete state for this node type and version from the CDS prior to re-deploying the VMs. To delete the state, run:
rvtconfig delete-node-type --cassandra-contact-point <any CDS IP> --deployment-id <deployment ID> --site-id <site ID> --node-type rem (--vm-version-source [this-vm | this-rvtconfig] | --vm-version <vm_version>)
Next Step
Follow the verification instructions here: Verify the state of the nodes and processes
Automatic rolling upgrades and patches with SIMPL VM on VMware vSphere
This section provides information on Upgrades and CSAR EFIX patches.
Before running a rolling upgrade or patch, ensure that all node types in the deployment pass validation. See Verify the state of the nodes and processes for instructions on how to do this.
All uplevel CSARs or CSAR EFIX patches must be uploaded to SIMPL for all upgraded node types before installation. In addition, the uplevel SDF must contain the uplevel CSAR versions for all upgraded node types.
Rolling upgrades with SIMPL VM
To upgrade all node types, refer to the following pages in the order below.
Setting up for a rolling upgrade
Before running a rolling upgrade, some steps must be completed first.
Verify that HTTPS certificates are valid
The HTTPS certificates on the VMs must be valid for more than 30 days, and must remain valid during the upgrade for the whole deployment. For example, your upgrade will fail if your certificate is valid for 32 days and it takes more than 1 day to upgrade all of the VMs for all node types.
Using your own certificates
If you are using your own generated certificates, check their expiry dates using:
openssl x509 -in <certificate file> -enddate -noout
If the certificates are expiring, you must first upload the new certificates using rvtconfig upload-config
before upgrading.
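In addition to reading the end date, openssl can test the 30-day requirement directly with its -checkend flag, which takes a number of seconds (30 days = 2592000 seconds). This is a convenience sketch; the certificate file name is a placeholder.

```shell
# Exit status 0 means the certificate is still valid 30 days from now.
if openssl x509 -in <certificate file> -checkend 2592000 -noout; then
  echo "Certificate is valid for more than 30 days"
else
  echo "Certificate expires within 30 days - upload new certificates first"
fi
```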
Using VM generated certificates
If you did not provide certificates to the VMs, each VM generates its own certificates, which are valid for 5 years. If the current VMs were deployed less than 5 years ago, there is nothing further to do. If they were deployed more than 5 years ago, contact your Metaswitch Customer Care representative.
Verify all VMs are healthy
All the VMs in the deployment need to be healthy. To check this, run the common health checks for the VMs by following: Verify the state of the nodes and processes. The per-node checks should also be run by following each page under: Per-node checks.
Upload the uplevel CSARs to the SIMPL VM
If not already done, transfer the uplevel CSARs onto the SIMPL VM. For each CSAR, run csar unpack <path to CSAR>
, where <path to CSAR>
is the full path to the transferred uplevel CSAR.
This will unpack the uplevel CSARs to ~/.local/share/csar/
.
Upload the uplevel SDF to SIMPL VM
If the uplevel SDF was not created on the SIMPL VM, transfer the previously written uplevel SDF onto the SIMPL VM.
Ensure that each version in the vnfcs section of the uplevel SDF matches each node type’s CSAR version. |
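One quick way to cross-check is to list the versions declared in the SDF alongside the unpacked CSARs. This assumes, as in the example SDFs, that versions appear under version: keys; the SDF file name is the standard sdf-rvt.yaml.

```shell
# List every version declared in the uplevel SDF...
grep -n 'version:' sdf-rvt.yaml

# ...and compare against the unpacked CSAR directories on the SIMPL VM.
ls ~/.local/share/csar/
```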
Upload uplevel RVT configuration
Upload the uplevel configuration for all of the node types to the CDS. This is required for the rolling upgrade to complete.
As configuration is stored against a specific version, you need to re-upload the uplevel configuration even if it is identical to the downlevel configuration. |
When performing a rolling upgrade some elements of the uplevel configuration must remain identical to those in the downlevel configuration. These elements (and the remedy if that configuration change was made and the cluster upgrade process started) are described in the following table:
Node Type |
Disallowed Configuration Change |
Remedy |
All |
The |
Roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade. |
All |
The ordering of the VM instances in the SDF may not be altered. |
Roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade. |
See Example configuration YAML files for example configuration files.
Rolling upgrade REM nodes on VMware vSphere
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing VMware vSphere deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware vSphere deployment from scratch
-
you are upgrading an existing downlevel deployment for REM.
Method of procedure
Refer to the SIMPL VM Documentation for details on the commands mentioned in the procedure. |
Deployments using SIMPL 6.7.3
Step 1 - Upgrade the downlevel REM VMs
Run csar update --vnf rem --sdf <path to SDF>
.
To perform a canary upgrade, run csar update --vnf rem --sites <site> --service-group <service_group> --index-range <range> --sdf <path to SDF>. The indexes start from 0, so 0 is the first VM. The range accepts ranges as well as comma-separated indexes (e.g. 1-3,7,9). Only the nodes specified in the range will be upgraded. |
This will validate the uplevel SDF, generate the uplevel Terraform template, upload the uplevel image, and then it will start the upgrade.
The following will occur one REM node at a time:
-
The downlevel node will be quiesced.
-
The uplevel node will be created and boot up.
-
The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.
-
Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point, the csar update command will move on to the next REM VM, or report that the upgrade of the REM nodes was successful if all nodes have now been upgraded.
-
Once the upgrade is complete, place calls and run any additional validation tests to verify that the uplevel VMs are working as expected.
Backout procedure
If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To roll back, repeat the steps above with the downlevel REM CSAR and downlevel SDF.
You may need to use the --skip pre-update-checks
flag as part of the csar update
command. The --skip pre-update-checks
flag allows rollbacks when a node is unhealthy.
If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. Run csar redeploy --vnf rem --sites <site> --sdf <path to SDF>
.
Deployments using SIMPL 6.6.x
Step 1 - Validate the SDF
Run csar validate-vsphere --sdf <path to SDF>
.
This will validate the uplevel SDF.
Step 2 - Generate the Terraform Template
Run csar generate --vnf rem --sdf <path to SDF>
.
This will generate the Terraform template.
Step 3 - Upgrade the downlevel REM nodes
Run csar update --vnf rem
.
To perform a canary upgrade, run csar update --vnf rem --sites <site> --service-group <service_group> --index-range <range>. The indexes start from 0, so 0 is the first VM. The range accepts ranges as well as comma-separated indexes (e.g. 1-3,7,9). Only the nodes specified in the range will be upgraded. |
This will upload the uplevel image, then it will start the upgrade.
The following will occur one REM node at a time:
-
The downlevel node will be quiesced.
-
The uplevel node will be created and boot up.
-
The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.
-
Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point, the csar update command will move on to the next REM VM, or report that the upgrade of the REM nodes was successful if all nodes have now been upgraded.
-
Once the upgrade is complete, place calls and run any additional validation tests to verify that the uplevel VMs are working as expected.
Backout procedure
If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To roll back, repeat the steps above with the downlevel REM CSAR and downlevel SDF.
You may need to use the --skip-pre-update-checks
flag as part of the csar update
command. The --skip-pre-update-checks
flag allows rollbacks when a node is unhealthy.
If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. Run csar deploy --redeploy --vnf rem --sites <site>
.
Next Step
Follow the post upgrade instructions here: Post rolling upgrade steps
Post rolling upgrade steps
After a rolling upgrade, some steps must be completed.
Verify all VMs are healthy
All the VMs in the deployment need to be healthy. To check this, run the common health checks for the VMs by following: Verify the state of the nodes and processes. The per-node checks should also be run by following each page under: Per-node checks.
Rolling upgrades using CSAR EFIX patch with SIMPL VM
To patch all node types, refer to the following pages in the order below.
Setting up for a rolling upgrade using CSAR EFIX patch
Before running a rolling upgrade, some steps must be completed first.
Verify that HTTPS certificates are valid
The HTTPS certificates on the VMs must be valid for more than 30 days, and must remain valid during the upgrade for the whole deployment. For example, your upgrade will fail if your certificate is valid for 32 days and it takes more than 1 day to upgrade all of the VMs for all node types.
Using your own certificates
If you are using your own generated certificates, check their expiry dates using:
openssl x509 -in <certificate file> -enddate -noout
If the certificates are expiring, you must first upload the new certificates using rvtconfig upload-config
before upgrading.
Using VM generated certificates
If you did not provide certificates to the VMs, each VM generates its own certificates, which are valid for 5 years. If the current VMs were deployed less than 5 years ago, there is nothing further to do. If they were deployed more than 5 years ago, contact your Metaswitch Customer Care representative.
Verify all VMs are healthy
All the VMs in the deployment need to be healthy. To check this, run the common health checks for the VMs by following: Verify the state of the nodes and processes. The per-node checks should also be run by following each page under: Per-node checks.
Upload the CSAR EFIX patches to the SIMPL VM
If not already done, transfer the CSAR EFIX patches onto the SIMPL VM. For each CSAR EFIX patch, run:
csar efix <node type>/<version> <path to CSAR EFIX>
where <path to CSAR EFIX> is the full path to the CSAR EFIX patch, and <node type>/<version> identifies the downlevel unpacked CSAR located under ~/.local/share/csar/
.
If you are not sure of the exact version string to use, run csar list to view the list of installed CSARs. |
This will apply the EFIX patch to the downlevel CSAR.
The new patched CSAR is now the uplevel CSAR referenced in the following steps. |
Don’t apply the same CSAR EFIX patch to the same CSAR target more than once. If a previous attempt to run the csar efix command failed, be sure to remove the created CSAR before re-attempting, as the csar efix command requires a clean target directory to work with. |
Upload the uplevel SDF to SIMPL VM
If the uplevel SDF for the CSAR EFIX patch was not created on the SIMPL VM, transfer the previously written uplevel SDF onto the SIMPL VM.
Ensure the version in each node type’s vnfcs section of the uplevel SDF is set to <downlevel-version>-<patch-version>. For example: 4.0.0-14-1.0.0-patch123, where 4.0.0-14-1.0.0 is the downlevel version and patch123 is the patch version. |
Upload uplevel RVT configuration
Upload the uplevel configuration for all of the node types to the CDS. This is required for the rolling upgrade using CSAR EFIX patch to complete.
As configuration is stored against a specific version, you need to re-upload the uplevel configuration even if it is identical to the downlevel configuration. |
The uplevel version for a CSAR EFIX patch is the format <downlevel-version>-<patch-version>
. For example: 4.0.0-14-1.0.0-patch123
, where 4.0.0-14-1.0.0
is the downlevel version and patch123
is the patch version.
When performing a rolling upgrade some elements of the uplevel configuration must remain identical to those in the downlevel configuration. These elements (and the remedy if that configuration change was made and the cluster upgrade process started) are described in the following table:
Node Type |
Disallowed Configuration Change |
Remedy |
All |
The |
Roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade. |
All |
The ordering of the VM instances in the SDF may not be altered. |
Roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade. |
See Example configuration YAML files for example configuration files.
Rolling CSAR EFIX patch REM nodes on VMware vSphere
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing VMware vSphere deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware vSphere deployment from scratch
-
you are upgrading an existing downlevel deployment for REM.
Method of procedure
Refer to the SIMPL VM Documentation for details on the commands mentioned in the procedure. |
See CSAR EFIX patches to learn more on the CSAR EFIX patching process.
Deployments using SIMPL 6.7.3
Step 1 - Upgrade the downlevel REM VMs
Run csar update --vnf rem --sdf <path to SDF>
.
To perform a canary upgrade, run csar update --vnf rem --sites <site> --service-group <service_group> --index-range <range> --sdf <path to SDF>. The indexes start from 0, so 0 is the first VM. The range accepts ranges as well as comma-separated indexes (e.g. 1-3,7,9). Only the nodes specified in the range will be upgraded. |
This will validate the uplevel SDF, generate the uplevel Terraform template, upload the uplevel image, and then it will start the upgrade.
The following will occur one REM node at a time:
-
The downlevel node will be quiesced.
-
The uplevel node will be created and boot up.
-
The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.
-
Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point, the csar update command will move on to the next REM VM, or report that the upgrade of the REM nodes was successful if all nodes have now been upgraded.
-
Once the upgrade is complete, place calls and run any additional validation tests to verify that the uplevel VMs are working as expected.
Backout procedure
If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To roll back, repeat the steps above with the downlevel REM CSAR and downlevel SDF.
You may need to use the --skip pre-update-checks
flag as part of the csar update
command. The --skip pre-update-checks
flag allows rollbacks when a node is unhealthy.
If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. Run csar redeploy --vnf rem --sites <site> --sdf <path to SDF>
.
Deployments using SIMPL 6.6.x
Step 1 - Validate the SDF
Run csar validate-vsphere --sdf <path to SDF>
.
This will validate the uplevel SDF.
Step 2 - Generate the Terraform Template
Run csar generate --vnf rem --sdf <path to SDF>
.
This will generate the Terraform template.
Step 3 - Upgrade the downlevel REM VMs
Run csar update --vnf rem
.
To perform a canary upgrade, run csar update --vnf rem --sites <site> --service-group <service_group> --index-range <range>. The indexes start from 0, so 0 is the first VM. The range accepts ranges as well as comma-separated indexes (e.g. 1-3,7,9). Only the nodes specified in the range will be upgraded. |
This will upload the uplevel image, then it will start the upgrade.
The following will occur one REM node at a time:
-
The downlevel node will be quiesced.
-
The uplevel node will be created and boot up.
-
The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.
-
Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point, the csar update command will move on to the next REM VM, or report that the upgrade of the REM nodes was successful if all nodes have now been upgraded.
-
Once the upgrade is complete, place calls and run any additional validation tests to verify that the uplevel VMs are working as expected.
Backout procedure
If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To roll back, repeat the steps above with the downlevel REM CSAR and downlevel SDF.
You may need to use the --skip-pre-update-checks
flag as part of the csar update
command. The --skip-pre-update-checks
flag allows rollbacks when a node is unhealthy.
If the upgrade has failed to bring up the uplevel VMs, or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs. Run csar deploy --redeploy --vnf rem --sites <site>
.
Next Step
Follow the post upgrade instructions here: Post rolling upgrade using CSAR EFIX patch steps
Post rolling upgrade using CSAR EFIX patch steps
After a rolling upgrade using a CSAR EFIX patch, some steps must be completed.
Verify all VMs are healthy
All the VMs in the deployment need to be healthy. To check this, run the common health checks for the VMs by following: Verify the state of the nodes and processes. The per-node checks should also be run by following each page under: Per-node checks.
Installation or upgrades on VMware vCloud
These pages describe how to install or upgrade the REM nodes on VMware vCloud.
Prepare the SDF for the deployment
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing VMware vCloud deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware vCloud deployment from scratch
-
you know the IP networking information (IP address, subnet mask in CIDR notation, and default gateway) for the nodes.
-
you have read the installation guidelines at Installation and upgrades overview and have everything you need to carry out the installation.
Reserve maintenance period
This procedure does not require a maintenance period. However, if you are integrating into a live network, we recommend that you implement measures to mitigate any unforeseen events.
Tools and access
This page references an external document: SIMPL VM Documentation. Ensure you have a copy available before proceeding.
Installation Questions
Question | More information |
---|---|
Do you have the correct CSARs? |
All virtual appliances use the naming convention - |
Do you have a list of the IP addresses that you intend to give to each node of each node type? |
Each node requires an IP address for each interface. You can find a list of the VM’s interfaces on the Network Interfaces page. |
Do you have DNS and NTP Server information? |
It is expected that the deployed nodes will integrate with the IMS Core NTP and DNS servers. |
Method of procedure
Step 1 - Extract the CSAR
This can either be done on your local Linux machine or on a SIMPL VM.
Option A - Running on a local machine
If you plan to do all operations from your local Linux machine instead of SIMPL, Docker must be installed to run the rvtconfig tool in a later step. |
To extract the CSAR, run the command: unzip <path-to-csar> -d <new-directory-to-extract-csar-to>
Option B - Running on an existing SIMPL VM
For this step, the SIMPL VM does not need to be running on the VMware vCloud where the deployment takes place. It is sufficient to use a SIMPL VM on a lab system to prepare for a production deployment.
Transfer the CSAR onto the SIMPL VM and run csar unpack <path to CSAR>
, where <path to CSAR>
is the full path to the transferred CSAR.
This will unpack the CSAR to ~/.local/share/csar/
.
Step 2 - Write the SDF
The Solution Definition File (SDF) contains all the information required to set up your cluster. It is therefore crucial to ensure all information in the SDF is correct before beginning the deployment. One SDF should be written per deployment.
It is recommended that the SDF is written before starting the deployment. The SDF must be named sdf-rvt.yaml
.
See Writing an SDF for more detailed information.
Each deployment needs a unique |
Example SDFs are included in every CSAR and can also be found at Example SDFs. We recommend that you start from a template SDF and edit it as desired instead of writing an SDF from scratch.
Deploy SIMPL VM into VMware vCloud
Note that one SIMPL VM can be used to deploy multiple node types. Thus, this step only needs to be performed once for all node types. |
The supported version of the SIMPL VM is |
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are using a supported VMware vCloud version, as described in the 'VMware requirements' section of the SIMPL VM Documentation
-
you are installing into an existing VMware vCloud deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware vCloud deployment from scratch
-
you know the IP networking information (IP address, subnet mask in CIDR notation, and default gateway) for the SIMPL VM.
Reserve maintenance period
This procedure does not require a maintenance period. However, if you are integrating into a live network, we recommend that you implement measures to mitigate any unforeseen events.
Tools and access
You must have access to a local computer (referred to in this procedure as the local computer) with a network connection and access to the vCloud client.
This page references an external document: the SIMPL VM Documentation. Ensure you have a copy available before proceeding.
Installation Questions
Question | More information |
---|---|
Do you have the correct SIMPL VM OVA? |
All SIMPL VM virtual appliances use the naming convention - |
Do you know the IP address that you intend to give to the SIMPL VM? |
The SIMPL VM requires one IP address, for management traffic. |
Method of procedure
Deploy and configure the SIMPL VM
Follow the SIMPL VM Documentation on how to deploy the SIMPL VM and set up the configuration.
Prepare configuration files for the deployment
To deploy nodes, you need to prepare configuration files that will be uploaded to the VMs.
Method of procedure
Step 1 - Create configuration YAML files
Create configuration YAML files relevant for your node type on the SIMPL VM. Store these files in the same directory as your prepared SDF.
See Example configuration YAML files for example configuration files.
Install MDM
Before deploying any nodes, you will need to first install Metaswitch Deployment Manager (MDM).
Prerequisites
-
The MDM CSAR
-
A deployed and powered-on SIMPL virtual machine
-
The MDM deployment parameters (hostnames; management and signaling IP addresses)
-
Addresses for NTP, DNS and SNMP servers that the MDM instances will use
The minimum supported version of MDM is |
Method of procedure
Your Customer Care Representative can provide guidance on using the SIMPL VM to deploy MDM. Follow the instructions in the SIMPL VM Documentation.
As part of the installation, you will add MDM to the Solution Definition File (SDF) with the following data:
-
certificates and keys
-
custom topology
Generation of certificates and keys
MDM requires the following certificates and keys. Refer to the MDM documentation for more details.
-
An SSH key pair (for logging into all instances in the deployment, including MDM, which does not allow SSH access using passwords)
-
A CA (certificate authority) certificate and private key (used for the server authentication side of mutual TLS)
-
A "static", also called "client", certificate and private key (used for the client authentication side of mutual TLS)
The CA private key is unused, but should be kept safe in order to generate a new static certificate and private key in the future. Add the other credentials to the SDF sdf-rvt.yaml
as described in MDM service group.
Prepare SIMPL VM for deployment
Before deploying the VMs, the following files must be uploaded onto the SIMPL VM.
Deploy REM nodes on VMware vCloud
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing VMware vCloud deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware vCloud deployment from scratch
-
you have deployed a SIMPL VM, unpacked the CSAR, and prepared an SDF.
Method of procedure
Refer to the SIMPL VM Documentation for details on the commands mentioned in the procedure. |
Step 1 - Validate REM RVT configuration
Validate the configuration for the REM nodes to ensure that each REM node can properly self-configure.
To validate the configuration after creating the YAML files, run
rvtconfig validate -t rem -i <yaml-config-file-directory>
on the SIMPL VM from the resources
subdirectory of the REM CSAR.
Step 2 - Upload REM RVT configuration
Upload the configuration for the REM nodes to the CDS. This enables each REM node to self-configure when it is deployed in the next step.
To upload configuration after creating the YAML files and validating them as described above, run
rvtconfig upload-config -c <cds-mgmt-addresses> -t rem -i <yaml-config-file-directory> (--vm-version-source this-rvtconfig | --vm-version <version>)
on the SIMPL VM from the resources
subdirectory of the REM CSAR.
See Example configuration YAML files for example configuration files.
Step 3 - Deploy the OVA
Run csar deploy --vnf rem --sdf <path to SDF>
.
This will validate the SDF and generate the Terraform template. After successful validation, it will upload the image and deploy the number of REM nodes specified in the SDF.
Deploy only one node type at a time. That is, when deploying these REM nodes, do not deploy other node types in parallel. |
Backout procedure
To delete the deployed VMs, run csar delete --vnf rem --sdf <path to SDF>
.
You must also delete the MDM state for each VM. To do this, you must first SSH into one of the MDM VMs. Get the instance IDs by running: mdmhelper --deployment-id <deployment ID> instance list
. Then for each REM VM, run the following command:
curl -X DELETE -k \
--cert /etc/certs-agent/upload/mdm-cert.crt \
--cacert /etc/certs-agent/upload/mdm-cas.crt \
--key /etc/certs-agent/upload/mdm-key.key \
https://127.0.0.1:4000/api/v1/deployments/<deployment ID>/instances/<instance ID>
Verify that the deletion worked by running mdmhelper --deployment-id <deployment ID> instance list
again. You may now log out of the MDM VM.
You must also delete state for this node type and version from the CDS prior to re-deploying the VMs. To delete the state, run:
rvtconfig delete-node-type --cassandra-contact-point <any CDS IP> --deployment-id <deployment ID> --site-id <site ID> --node-type rem (--vm-version-source [this-vm | this-rvtconfig] | --vm-version <vm_version>)
Next Step
Follow the verification instructions here: Verify the state of the nodes and processes
Automatic rolling upgrades with SIMPL VM on VMware vCloud
This section provides information on Upgrades.
Before running a rolling upgrade, ensure that all node types in the deployment pass validation. See Verify the state of the nodes and processes for instructions on how to do this.
All uplevel CSARs must be uploaded to SIMPL for all upgraded node types before installation. In addition, the uplevel SDF must contain the uplevel CSAR versions for all upgraded node types.
Rolling upgrades with SIMPL VM
To upgrade all node types, refer to the following pages in the order below.
Setting up for a rolling upgrade
Before running a rolling upgrade, some steps must be completed first.
Verify that HTTPS certificates are valid
The HTTPS certificates on the VMs must be valid for more than 30 days, and must remain valid during the upgrade for the whole deployment. For example, your upgrade will fail if your certificate is valid for 32 days and it takes more than 1 day to upgrade all of the VMs for all node types.
Using your own certificates
If you are using your own generated certificates, check their expiry dates using:
openssl x509 -in <certificate file> -enddate -noout
If the certificates are expiring, you must first upload the new certificates using rvtconfig upload-config
before upgrading.
Using VM generated certificates
If you did not provide certificates to the VMs, each VM generates its own certificates, which are valid for 5 years. If the current VMs were deployed less than 5 years ago, there is nothing further to do. If they were deployed more than 5 years ago, contact your Metaswitch Customer Care representative.
Verify all VMs are healthy
All the VMs in the deployment need to be healthy. To check this, run the common health checks for the VMs by following: Verify the state of the nodes and processes. The per-node checks should also be run by following each page under: Per-node checks.
Upload the uplevel CSARs to the SIMPL VM
If not already done, transfer the uplevel CSARs onto the SIMPL VM. For each CSAR, run csar unpack <path to CSAR>
, where <path to CSAR>
is the full path to the transferred uplevel CSAR.
This will unpack the uplevel CSARs to ~/.local/share/csar/
.
Upload the uplevel SDF to SIMPL VM
If the uplevel SDF was not created on the SIMPL VM, transfer the previously written uplevel SDF onto the SIMPL VM.
Ensure that each version in the vnfcs section of the uplevel SDF matches each node type’s CSAR version. |
Upload uplevel RVT configuration
Upload the uplevel configuration for all of the node types to the CDS. This is required for the rolling upgrade to complete.
As configuration is stored against a specific version, you need to re-upload the uplevel configuration even if it is identical to the downlevel configuration. |
When performing a rolling upgrade some elements of the uplevel configuration must remain identical to those in the downlevel configuration. These elements (and the remedy if that configuration change was made and the cluster upgrade process started) are described in the following table:
Node Type | Disallowed Configuration Change | Remedy
All | The | Roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade.
All | The ordering of the VM instances in the SDF may not be altered. | Roll back the affected VM(s) to restore the original configuration, then correct the uplevel configuration and re-run the upgrade.
See Example configuration YAML files for example configuration files.
Rolling upgrade REM nodes on VMware vCloud
Planning for the procedure
Background knowledge
This procedure assumes that:
-
you are installing into an existing VMware vCloud deployment which has pre-configured networks and VLANs; this procedure does not cover setting up a VMware vCloud deployment from scratch
-
you are upgrading an existing downlevel deployment for REM.
Method of procedure
Refer to the SIMPL VM Documentation for details on the commands mentioned in the procedure. |
Step 1 - Upgrade the downlevel REM VMs
Run csar update --vnf rem --sdf <path to SDF>
.
To perform a canary upgrade, run csar update --vnf rem --sites <site> --service-group <service_group> --index-range <range> --sdf <path to SDF>. Indexes start from 0, so 0 is the first VM. The --index-range option accepts both ranges and comma-separated indexes (e.g. 1-3,7,9). Only the nodes specified will be upgraded. |
This will validate the uplevel SDF, generate the uplevel Terraform template, upload the uplevel image, and then it will start the upgrade.
The following will occur one REM node at a time:
-
The downlevel node will be quiesced.
-
The uplevel node will be created and boot up.
-
The VM will automatically start applying configuration from the files you uploaded to CDS in the above steps. During this phase, the status of the VM in MDM will be Orange.
-
Once configuration is complete, the status will change to Green, and the node will be ready for service. At this point the
csar update
command will move on to the next REM VM, or report that the upgrade was successful once all nodes have been upgraded.
-
Once the upgrade is complete, place calls and run any additional validation tests to verify that the uplevel VMs are working as expected.
Backout procedure
If the upgrade has brought up uplevel VMs to replace the downlevel VMs, then the uplevel VMs can be rolled back to the downlevel VMs. To rollback, repeat the steps above with the downlevel REM CSAR and downlevel SDF.
You may need to use the --skip pre-update-checks
flag as part of the csar update
command. The --skip pre-update-checks
flag allows rollbacks when a node is unhealthy.
If the upgrade has failed to bring up the uplevel VMs or the rollback has failed to bring up the downlevel VMs, then you must redeploy the downlevel VMs: run csar redeploy --vnf rem --sites <site> --sdf <path to SDF>
.
Next Step
Follow the post upgrade instructions here: Post rolling upgrade steps
Post rolling upgrade steps
After a rolling upgrade, some steps must be completed.
Verify all VMs are healthy
All the VMs in the deployment need to be healthy. To check this, run the common health checks for the VMs by following: Verify the state of the nodes and processes. The per-node checks should also be run by following each page under: Per-node checks.
Verify the state of the nodes and processes
VNF validation tests
What are VNF validation tests?
The VNF validation tests can be used to run some basic checks on deployed VMs to ensure they have been deployed correctly. Tests include:
-
checking that the management IP can be reached
-
checking that the management gateway can be reached
-
checking that
sudo
works on the VM
-
checking that the VM has converged to its configuration.
Running the VNF validation tests
After deploying the VMs for a given VM type, and performing the configuration for those VMs, you can run the VNF validation tests for those VMs from the SIMPL VM.
Run the validation tests: csar validate --vnf <node-type> --sdf <path to SDF>
Here, <node-type>
is one of rem
.
If any of the tests fail, refer to the troubleshooting section.
An MDM CSAR must be unpacked on the SIMPL VM before running the csar validate command. Run csar list on the SIMPL VM to verify whether an MDM CSAR is already installed. |
REM checks
Verify REM is running
Log in to the VM with the default credentials.
Run systemctl status rhino-element-manager
to view the status of the REM service. It should be listed as active (running)
.
You can also check the jps
command to ensure that the Tomcat process has started. It is listed in the output as Bootstrap
.
Verify you can connect to REM
From a PC which is on or can reach the same subnet as the REM node’s management interface, connect to https://<management IP address>:8443/rem/
with a web browser. You should be presented with a login page. From here you can use the credentials set up in the rem-vmpool-config.yaml file to log in.
VM configuration
This section describes details of the VM configuration of the nodes.
-
An overview of the configuration process is described in declarative configuration.
-
The bootstrap parameters are derived from the SDF and supplied as either vApp parameters or as OpenStack userdata automatically.
-
After the VMs boot up, they will automatically perform bootstrap. You then need to upload configuration to the CDS for the configuration step.
-
The rvtconfig tool is used to upload configuration to the CDS.
-
You may wish to refer to the Services and Components page for information about each node’s components, directory structure, and the like.
Declarative Configuration
Overview
This section describes how to configure the custom Rhino application VMs - that is, the process of making and applying configuration changes.
It is not intended as a full checklist of the steps to take during an upgrade or full installation - for example, business level change-control processes are not discussed.
The configuration process is based on modifying configuration files, which are validated and sent to a central configuration data store (CDS) using the rvtconfig
tool. The custom Rhino application VMs will poll the CDS, and will pull down and apply any changes.
Initial setup
The initial configuration process starts with the example YAML files distributed alongside the custom Rhino application VMs, as described in Example configuration YAML files.
Metaswitch strongly recommends that the configuration files are stored in a version control system (VCS). A VCS allows version control, rollback, traceability, and reliable storage of the system’s configuration. |
If a VCS is not a viable option for you, you must take backups of the configuration before making any changes. The configuration backups are your responsibility and must be made every time a change is required. In this case, we recommend that you store the full set of configuration files in a reliable cloud storage system (for example, OneDrive) and keep the backups in different folders named with a progressive number and a timestamp of the backup date (for example, v1-20210310T1301).
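If following the folder-naming scheme above, a conforming backup folder name can be composed from a running version number and the current UTC time; a minimal sketch:

```shell
# Sketch: compose a backup folder name in the recommended
# "v<number>-<timestamp>" style, for example v1-20210310T1301.
version_num=1
backup_dir="v${version_num}-$(date -u +%Y%m%dT%H%M)"
echo "$backup_dir"
```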
The rest of the guide is written assuming the use of a VCS to manage the configuration files.
Initially, add the full set of example YAMLs into your VCS as a baseline, alongside the solution definition files (SDFs) described in the custom Rhino application VM install guides. You should store all files (including the SDFs for all nodes) in a single directory yamls
with no subdirectories.
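If git is your VCS, establishing this baseline might look like the following sketch. The files are created empty here only to keep the example self-contained; in practice you would copy the real example YAMLs and SDFs into the yamls directory first:

```shell
# Sketch: record the configuration baseline in a git repository.
# The touched file names are placeholders for the real example files.
mkdir -p yamls && cd yamls
touch rem-vmpool-config.yaml sdf-rvt.yaml
git init
git add .
# user.name/user.email are supplied inline here; normally they come
# from your git configuration.
git -c user.name="config-admin" -c user.email="config@example.invalid" \
    commit -m "Baseline: example YAML configuration and SDFs"
```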
Making changes
To change the system configuration, the first step is to edit the configuration files, making the desired changes (as described in this guide). You can do this on any machine using a text editor (one with YAML support is recommended). After you have made the changes, record them in the VCS.
Validating the changes
On the SIMPL VM, as the admin user, change to the directory /home/admin/
. Check out (or copy) your yamls
directory to this location, as /home/admin/yamls/
.
If network access allows, we recommend that you retrieve the files directly from the VCS into this directory, rather than copying them. Having a direct VCS connection means that changes made at this point in the process are more likely to be committed back into the VCS, a critical part of maintaining the match between live and stored configuration. |
At this point, use the rvtconfig
tool to validate the configuration used for all relevant nodes.
For more information on the rvtconfig tool, see rvtconfig. |
The relevant nodes depend on which configuration files have been changed. To determine the mapping between configuration files and nodes, consult Example configuration YAML files.
The rvtconfig
tool is delivered as part of the VM image CSAR file, and unpacked into /home/admin/.local/share/csar/<csar name>/<version>/resources/rvtconfig
.
It is important that the rvtconfig binary used to validate a node’s configuration is from a matching release. That is, if the change is being made to a node that is at version x.y.z-p1 , the rvtconfig binary must be from a version x.y.z CSAR. |
For example, assume a change has been made to the rem-vmpool-config.yaml
file in the custom Rhino application network. This would require reconfiguration of the rem
node at version 4.0.0
. To validate this change, use the following command from the /home/admin/
directory.
./.local/share/csar/rem/4.0.0/resources/rvtconfig validate -t rem -i ./yamls
If the node fails validation, update the files to fix the errors as reported, and record the changes in your VCS.
Uploading the changes
Once the file is validated, record the local changes in your VCS.
Next, use the rvtconfig upload-config
command to upload the changes to the CDS. As described in Uploading configuration to CDS with upload-config, the upload-config
command requires a number of command line arguments.
The full syntax to use for this use case is:
rvtconfig upload-config -c <cds-ip-addresses> -t <node type> -i <config-path> --vm-version <vm_version>
where:
-
<cds-ip-addresses>
is the signaling IP address of a CDS node. -
<deployment-id>
can be found in the relevant SDF. -
<node type>
is the node being configured, as described above. -
<config-path>
is the path of the directory containing the YAML and SDFs. -
<vm_version>
is the version string of the node being configured.
As with validation, the rvtconfig
executable must match the version of software being configured. Take the example of a change to the rem-vmpool-config.yaml
as above, on a custom Rhino application network with nodes at version 4.0.0
, a deployment ID of prod
, and a CDS at IP 192.0.0.1
. In this environment the configuration could be uploaded with the following commands (from /home/admin/
):
./.local/share/csar/rem/4.0.0/resources/rvtconfig upload-config -c 192.0.0.1 -t rem -i ./yamls --vm-version 4.0.0
rvtconfig
rvtconfig
tool
Configuration YAML files can be validated and uploaded to the CDS using the rvtconfig
tool. The rvtconfig
tool can be run either on the SIMPL VM or any custom Rhino application VM.
On the SIMPL VM, you can find the command in the resources
subdirectory of any custom Rhino application (rem
) CSAR, after it has been extracted using csar unpack
.
/home/admin/.local/share/csar/<csar name>/<version>/resources/rvtconfig
On any custom Rhino application VM, the rvtconfig
tool is in the PATH
for the sentinel
user and can be run directly by running:
rvtconfig <command>
The available rvtconfig
commands are:
-
rvtconfig validate
validates the configuration; this can be done from the SIMPL VM even before booting any VMs. -
rvtconfig upload-config
validates, encrypts, and uploads the configuration to the CDS. -
rvtconfig delete-deployment
deletes a deployment from the CDS. Only use this when advised to do so by a Customer Care Representative. -
rvtconfig delete-node-type
deletes state and configuration for a given node type from the CDS. Only use this after deleting all VMs for a given node type. -
rvtconfig list-config
displays a summary of the configurations stored in the CDS. -
rvtconfig dump-config
dumps the current configuration from the CDS. -
rvtconfig print-leader-seed
prints the current leader seed as stored in the CDS. -
rvtconfig split-sdf
splits an SDF definition into separate ones, one for each instance. -
rvtconfig generate-private-key
generates a new private key for use in the SDF. -
rvtconfig export-log-history
exports the quiesce log history from the CDS. -
rvtconfig describe-versions
prints the current values of the versions of the VM found in the config and in the SDF. -
rvtconfig compare-config
compares currently uploaded config with a given set of configuration.
Commands that read or modify the CDS state take a --cds-address
parameter (which is also aliased as --cds-addresses
, --cassandra-contact-point
, --cassandra-contact-points
, or simply -c
). For this parameter, specify the management address(es) of at least one machine hosting the CDS database. Separate multiple addresses with a space, for example --cds-address 1.2.3.4 1.2.3.5
.
For more information, run rvtconfig --help
or rvtconfig upload-config --help
.
Verifying and uploading configuration
-
Create a directory to hold the configuration YAML files.
mkdir yamls
-
Ensure the directory contains the following:
-
configuration YAML files
-
the Solution Definition File (SDF)
-
Rhino license for nodes running Rhino.
-
Do not create any subdirectories. Ensure the file names match the example YAML files. |
Verifying configuration with validate
To validate configuration, run the command:
rvtconfig validate -t <node type> -i ~/yamls
where <node type>
is the node type you want to verify, which can be rem
. If there are any errors, fix them, move the fixed files to the yamls
directory, and then re-run the above rvtconfig validate
command on the yamls
directory.
Once the files pass validation, store the YAML files in the CDS using the rvtconfig upload-config
command.
If using the SIMPL VM, the |
Uploading configuration to the CDS with upload-config
To upload the YAML files to the CDS, run the command:
rvtconfig upload-config -c <cds-mgmt-addresses> -t <node type> -i ~/yamls
(--vm-version-source [this-vm | this-rvtconfig | sdf-version] | --vm-version <vm_version>) [--reload-resource-adaptors]
The |
-
--vm-version
specifies the version of the VM to target (as configuration can differ across a VM upgrade). -
--vm-version-source
automatically derives the VM version from the given source. Failure to determine the version will result in an error. -
Use
this-rvtconfig
when running thervtconfig
tool included in the CSAR for the target VM, to extract the version information packaged intorvtconfig
. -
Use
this-vm
if running thervtconfig
tool directly on the VM being configured, to extract the version information from the VM. -
Option
sdf-version
extracts the version value written in the SDF for the given node.
-
Whatever way you enter the version, the value obtained must match the version in the SDF. Otherwise, the upload will fail. |
Any YAML configuration values which are specified as secrets are marked as such in the YAML files' comments. These values will be encrypted, using the private key created by rvtconfig generate-private-key
, prior to uploading. In other words, the secrets should be entered in plain text in the SDF, and the upload-config
command takes care of encrypting them. Currently this applies to the following:
-
Rhino users' passwords
-
REM users' passwords
-
SSH keys for accessing the VM
-
the HTTPS key and certificate for REM.
Use the |
If the CDS is not yet available, this will retry every 30 seconds for up to 15 minutes. As a large Cassandra cluster can take up to one hour to form, this means the command could time out if run before the cluster is fully formed. If the command still fails after several attempts over an hour, troubleshoot Cassandra on the machines hosting the CDS database.
This command first compares the configuration files currently uploaded for the target version with those in the input directory. It summarizes which files are different and how many lines differ. If any files are different, it will prompt the user to confirm the differences are as expected before continuing with the upload.
If the upload is canceled and --output-dir
is specified, then full details of any files with differences will be put into the given output directory, which gets created by the command.
Changes to secrets and non-YAML files cannot be detected due to encryption; they will not appear in the summary or detailed output. Any such changes will still be uploaded.
This pre-check on config can be disabled by using the -f
flag.
Restarting resource adaptors
Specify the --reload-resource-adaptors flag if you want rvtconfig to restart the relevant resource adaptors once the configuration has been applied. If you apply configuration changes that don’t include changes to any fields marked as needing an RA restart, then you do not need to specify the flag. If you apply configuration changes that include changes to such fields, and do not specify the flag, the changed fields will not take effect until the affected resource adaptors are restarted. |
Comparing existing configuration in the CDS with compare-config
Compare the configuration in an input directory with the currently uploaded configuration in the CDS using the command:
rvtconfig compare-config -c <cds-mgmt-addresses> -t <node type> -i ~/yamls --output-dir <output-directory>
[(--vm-version-source [this-vm | this-rvtconfig | sdf-version] | --vm-version <vm_version>)]
This will compare the currently uploaded configuration in the CDS with the configuration in the local input directory.
The version of configuration to look up will be automatically taken from the SDF. If the optional --vm-version-source
or --vm-version
parameter is provided, then this is used instead. This can be used to check what has changed just before running an upgrade, where the version in the SDF differs.
The files that have differences will be displayed, along with the number of different lines. The full contents of each version of these files will be put in the output directory, along with the differences found. When doing so, secrets and non-YAML files are ignored.
The files in this output directory use the suffix .local
for a redacted version of the input file, .live
for a redacted version of the live file, and .diff
for a diff command run against the two showing the differences.
The contents of the files in the output directory are reordered and no longer have comments; these won’t match the formatting of the original input files, but contain the same information. |
Deleting configuration from the CDS with delete-deployment
Delete all deployment configuration from the CDS by running the command:
rvtconfig delete-deployment -c <cds-mgmt-addresses> -d <deployment-id> [--delete-audit-history]
Only use this when advised to do so by a Customer Care Representative. |
Deleting state and configuration for a node type from the CDS with delete-node-type
Delete all state and configuration for a given node type and version from the CDS by running the command:
rvtconfig delete-node-type -c <cds-mgmt-addresses> -d <deployment-id> --site-id <site-id> --node-type <node type>
(--vm-version-source [this-vm | this-rvtconfig | sdf-version -i ~/yamls] | --vm-version <vm_version>) [-y]
The argument -i ~/yamls
is only needed if sdf-version
is used.
Only use this after deleting all VMs of this node type within the specified site. Functionality of all nodes of this type within the given site will be lost. These nodes will have to be redeployed to restore functionality. |
Listing configurations available in the CDS with list-config
List all currently available configurations in the CDS by running the command:
rvtconfig list-config -c <cds-mgmt-addresses> -d <deployment-id>
This command will print a short summary of the configurations uploaded, the VM version they are uploaded for, and which VMs are commissioned in that version.
Retrieving configuration from the CDS with dump-config
Retrieve the VM group configuration from the CDS by running the command:
rvtconfig dump-config -c <cds-mgmt-addresses> -d <deployment-id> --group-id <group-id>
(--vm-version-source [this-vm | this-rvtconfig | sdf-version -i ~/yamls -t <node type>] | --vm-version <vm_version>)
[--output-dir <output-dir>]
Group ID syntax: RVT-<node type>.<site_id>
Example: RVT-tsn.DC1
Here, <node type> can be rem. |
If the optional --output-dir <directory>
argument is specified, then the configuration will be dumped as individual files in the given directory. The directory can be expressed as either an absolute or relative path. It will be created if it doesn’t exist.
If the --output-dir
argument is omitted, then the configuration is printed to the terminal.
The arguments -i ~/yamls
and -t <node type>
are only needed if sdf-version
is used.
Displaying the current leader seed with print-leader-seed
Display the current leader seed by running the command:
rvtconfig print-leader-seed -c <cds-mgmt-addresses> -d <deployment-id> --group-id <group-id>
(--vm-version-source [this-vm | this-rvtconfig | sdf-version -i ~/yamls -t <node type>] | --vm-version <vm_version>)
Group ID syntax: RVT-<node type>.<site_id>
Example: RVT-tsn.DC1
Here, <node type> can be rem. |
The command will display the current leader seed for the specified deployment, group, and VM version. A leader seed may not always exist, in which case the output will include No leader seed found
. Conditions where a leader seed may not exist include:
-
No deployment exists with the specified deployment, group, and VM version.
-
A deployment exists, but initconf has not yet initialized.
-
A deployment exists, but the previous leader seed has quiesced and a new leader seed has not yet been selected.
The arguments -i ~/yamls
and -t <node type>
are only needed if sdf-version
is used.
Splitting an SDF by product type with split-sdf
Create partial SDFs for each VM by running the command:
rvtconfig split-sdf -i <input-directory> -o <output-directory> <sdf>
Generating a private-key for encrypting passwords with generate-private-key
Rhino TAS and REM require the configuration to supply passwords that are encrypted with a private key. rvtconfig
can generate a private-key to encrypt a password with the following command:
rvtconfig generate-private-key
The SDF can be updated with the generated private key: set it as the value of the secrets-private-key field in the SDF.
Retrieving VM logs with export-log-history
During upgrade, when a downlevel VM is removed, it uploads Initconf and Rhino logs to the CDS. The log files are stored as encrypted data in the CDS.
Only the portions of the logs written during quiesce are stored. |
Retrieve the VM logs for a deployment from the CDS by running the command:
rvtconfig export-log-history -c <cds-mgmt-addresses> -d <deployment-id> --zip-destination-dir <directory>
--private-key <private-key>
The --private-key must match the key used in the SDF (secrets-private-key ). |
The Initconf and Rhino logs are exported in unencrypted zip files. The zip file names will consist of VM hostname, version, and type of log. |
Viewing the values associated with the special sdf-version
, this-vm
, and this-rvtconfig
versions with describe-versions
Some commands, upload-config
for example, can be used with the special version values sdf-version
, this-vm
, and this-rvtconfig
.
-
Calling
sdf-version
extracts the version from the value given in the SDF for the given node. -
The
this-vm
option takes the version of the VM the commands are being run from. This can only be used when running commands on a node VM. -
Using
this-rvtconfig
extracts the version from the rvtconfig found in the directory the command is being run from. This can only be used on a SIMPL VM.
To view the real version strings associated with each of these special values:
rvtconfig describe-versions [-i ~/yamls -t <node type(s)>]
Both optional arguments -i ~/yamls
and -t <node type(s)>
are required for the sdf-version
value to be given. Multiple node types can be taken as arguments.
If a special version value cannot be found (for example, if this-vm
is used on a SIMPL VM, or neither of the optional arguments is given), the describe-versions
command will print N/A for that special version.
Overview and structure of SDF
SDF overview and terminology
A Solution Definition File (SDF) contains information about all Metaswitch products in your deployment. It is a plain-text file in YAML format.
-
The deployment is split into
sites
. Note that multiple sites act as independent deployments, e.g. there is no automatic georedundancy. -
Within each site you define one or more
service groups
of virtual machines. A service group is a collection of virtual machines (nodes) of the same type. -
The collection of all virtual machines of the same type is known as a
VNFC
(Virtual Network Function Component). For example, you may have a SAS VNFC and an MDM VNFC. -
The VMs in a VNFC are also known as
VNFCIs
(Virtual Network Function Component Instances), or justinstances
for short.
Some products may support a VNFC being split into multiple service groups. However, for custom Rhino application VMs, all VMs of a particular type must be in a single service group. |
The format of the SDF is common to all Metaswitch products, and in general it is expected that you will have a single SDF containing information about all Metaswitch products in your deployment.
This section describes how to write the parts of the SDF specific to the custom Rhino application product. It includes how to configure the MDM and REM VNFCs, how to configure subnets and traffic schemes, and some example SDF files to use as a starting point for writing your SDF.
Further documentation on how to write an SDF is available in the 'Creating an SDF' section of the SIMPL VM Documentation.
For the custom Rhino application solution, the SDF must be named sdf-rvt.yaml
when uploading configuration.
Structure of a site
Each site in the SDF has a name
, site-parameters
and vnfcs
.
-
The site
name
can be any unique human-readable name. -
The
site-parameters
has multiple sub-sections and sub-fields. Only some are described here. -
The
vnfcs
is where you list your service groups.
Site parameters
Under site-parameters
, all of the following are required for the custom Rhino application product:
-
deployment-id
: The common identifier for a SDF and set of YAML configuration files. It can be any name consisting of up to 20 characters. Valid characters are alphanumeric characters and underscores. -
site-id
: The identifier for this site. Must be in the formDC1
toDC32
. -
fixed-ips
: Must be set totrue
. -
vim-configuration
: VNFI-specific configuration (see below) that describes how to connect to your VNFI and the backing resources for the VMs. -
services:
→ntp-servers
must be a list of NTP servers. At least one NTP server is required; at least two is recommended. These must be specified as IP addresses, not hostnames. -
networking
: Subnet definitions. See Subnets and traffic schemes. -
timezone
: Timezone, in POSIX format such asEurope/London
. -
mdm
: MDM options. See MDM service group.
Structure of a service group
Under the vnfcs
section in each site, you list that site’s service groups. For REM VMs, each service group consists of the following fields:
-
name
: A unique human-readable name for the service group. -
type
: Must be one ofrem
. -
version
: Must be set to the version of the CSAR.The version can be found in the CSAR filename, e.g. if the filename is
rem-4.0.0-12-1.0.0-vsphere-csar.zip
then the version is4.0.0-12-1.0.0
. Alternatively, inside each CSAR is a manifest file with a.mf
extension, whose content lists the version under the keyvnf_package_version
, for examplevnf_package_version: 4.0.0-12-1.0.0
.Specifying the version in the SDF is mandatory for custom Rhino application service groups, and strongly recommended for other products in order to disambiguate between CSARs in the case of performing an upgrade.
-
cluster-configuration:
→count
: The number of VMs in this service group. -
cluster-configuration:
→instances
: A list of instances. Each instance has aname
(the VM’s hostname) and, on VMware vSphere, a list ofvnfci-vim-options
(see below). -
networks
: A list of networks used by this service group. See Subnets and traffic schemes. -
vim-configuration
: The VNFI-specific configuration for this service group (see below).
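As a sanity check, the version string described above can be recovered from a CSAR filename with plain shell parameter expansion; a minimal sketch using the example filename from this section:

```shell
# Sketch: recover the version from a CSAR filename, e.g.
# rem-4.0.0-12-1.0.0-vsphere-csar.zip -> 4.0.0-12-1.0.0
csar_file="rem-4.0.0-12-1.0.0-vsphere-csar.zip"
version="${csar_file#rem-}"              # drop the node-type prefix
version="${version%-vsphere-csar.zip}"   # drop the VNFI suffix
echo "$version"
```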
VNFI-specific options
The SDF includes VNFI-specific options at both the site and service group levels. At the site level, you specify how to connect to your VNFI and give the top-level information about the deployment’s backing resources, such as datastore locations on vSphere, or availability zone on OpenStack. At the VNFC level, you can assign the VMs to particular sub-hosts or storage devices (for example vSphere hosts within a vCenter), and specify the flavor of each VM.
Options required for REM VMs
For each service group, include a vim-configuration
section with the flavor information, which varies according to the target VNFI type:
-
VMware vSphere:
vim-configuration:
→vsphere:
→deployment-size: <flavor name>
-
OpenStack:
vim-configuration:
→openstack:
→flavor: <flavor name>
When deploying to VMware vSphere, include a vnfci-vim-options
section for each instance with the following fields set:
-
vnfci-vim-options:
→vsphere:
→folder
May be any valid folder name on the VMware vSphere instance, or""
(i.e. an empty string) if the VMs are not organised into folders. -
vnfci-vim-options:
→vsphere:
→datastore
-
vnfci-vim-options:
→vsphere:
→host
-
vnfci-vim-options:
→vsphere:
→resource-pool-name
For example:
vnfcs:
- name: rem
  cluster-configuration:
    count: 3
    instances:
    - name: rem-1
      vnfci-vim-options:
        vsphere:
          folder: production
          datastore: datastore1
          host: esxi1
          resource-pool-name: Resources
    - name: rem-2
      ...
  vim-configuration:
    vsphere:
      deployment-size: medium
For OpenStack, no vnfci-vim-options
section is required.
MDM service group
MDM site-level configuration
In the site-parameters
, include the MDM credentials that you generated when installing MDM:
-
the CA certificate, static certificate, and static private key go into an
mdm
section of thesite-parameters
under the keysmdm:
→ca-certificate
,mdm:
→static-certificate
andmdm:
→private-key
respectively -
the public key from the SSH key pair goes into the
ssh
section of thesite-parameters
.
Include the option mdm:
→ ssl-certificate-management
with the value static
.
Copy certificates and keys to the SDF in their plain-text Base64 format, including the BEGIN
and END
lines, and as a multi-line string using YAML’s |-
block-scalar style that keeps all newlines except the final one.
Overall, it should look like this:
site-parameters:
  mdm:
    static-certificate: |-
      -----BEGIN CERTIFICATE-----
      AAAA.....
      -----END CERTIFICATE-----
    ca-certificate: |-
      -----BEGIN CERTIFICATE-----
      BBBB.....
      -----END CERTIFICATE-----
    private-key: |-
      -----BEGIN PRIVATE KEY-----
      CCCC.....
      -----END PRIVATE KEY-----
    ssl-certificate-management: static
MDM service group
Define one service group containing details of all the MDM VMs.
Networks for the MDM service group
MDM requires two traffic types: management
and signaling
, which must be on separate subnets.
MDM v3.0 or later only requires the management traffic type. Refer to the MDM Overview Guide for further information. |
Each MDM instance needs one IP address on each subnet. The management
subnet does not necessarily have to be the same as the management subnet that the REM VMs are assigned to, but the network firewalling and topology must allow communication between the REM VMs' management addresses and the MDM instances' management addresses; as such, it is simplest to use the same subnet.
REM service groups
REM service groups
Note that whilst SDFs include all VNFCs in the deployment, this section only covers the custom Rhino application VMs (REM).
Define one service group for each REM node type (rem).
Product options for REM service groups
The following is a list of REM-specific product options in the SDF. All listed product options must be included in a product-options: → <node type> section, for example:
product-options:
rem:
cds-addresses:
- 1.2.3.4
etc.
- cds-addresses: Required by all node types. This element lists all the CDS addresses. Must be set to all the signaling IPs of the CDS nodes.
- secrets-private-key: Required by all node types. Contains the private key used to encrypt and decrypt passwords generated for configuration. The rvtconfig tool should be used to generate this key; more details can be found on the rvtconfig page. The same key must be used for all VMs in a deployment.
Subnets and traffic schemes
The SDF defines subnets. Each subnet corresponds to a virtual NIC on the VMs, which in turn maps to a physical NIC on the VNFI. The mapping from subnets to VMs' vNICs is one-to-one, but the mapping from vNICs to physical NICs can be many-to-one.
A traffic scheme is a mapping of traffic types (such as management or SIP traffic) to these subnets. The list of traffic types required by each VM, and the possible traffic schemes, can be found in Traffic types and traffic schemes.
Defining subnets
Networks are defined in the site-parameters: → networking: → subnets section. For each subnet, define the following parameters:
- cidr: The subnet mask in CIDR notation, for example 172.16.0.0/24. All IP addresses assigned to the VMs must be congruent with the subnet mask.
- default-gateway: The default gateway IP address. Must be congruent with the subnet mask.
- identifier: A unique identifier for the subnet, for example management. This identifier is used when assigning traffic types to the subnet (see below).
- vim-network: The name of the corresponding VNFI physical network, as configured on the VNFI.
The subnet that carries management traffic must include a dns-servers option, which specifies a list of DNS server IP addresses. These DNS servers must be reachable from the management subnet.
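The congruence requirement above can be checked mechanically before editing the SDF. The following is a minimal sketch (not part of the product tooling) using Python's standard ipaddress module; the subnet, gateway, and VM addresses are illustrative values taken from the examples in this manual:

```python
import ipaddress

def check_subnet(cidr, default_gateway, vm_ips):
    """Return a list of problems found with a subnet definition.

    Checks that the gateway and every VM IP fall within the CIDR range,
    mirroring the SDF rule that addresses must be congruent with the mask.
    """
    network = ipaddress.ip_network(cidr)
    problems = []
    if ipaddress.ip_address(default_gateway) not in network:
        problems.append(f"gateway {default_gateway} not in {cidr}")
    for ip in vm_ips:
        if ipaddress.ip_address(ip) not in network:
            problems.append(f"VM address {ip} not in {cidr}")
    return problems

# 172.18.0.13 is deliberately outside the 172.16.0.0/24 range
print(check_subnet("172.16.0.0/24", "172.16.0.1",
                   ["172.16.0.11", "172.16.0.12", "172.18.0.13"]))
# → ['VM address 172.18.0.13 not in 172.16.0.0/24']
```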
Physical network requirements
Each physical network attached to the VNFI must be at least 100Mb/s Ethernet (1Gb/s or better is preferred).
As a security measure, we recommend that you set up network firewalls to prevent traffic flowing between subnets. Note however that the VMs' software will send traffic over a particular subnet only when the subnet includes the traffic’s destination IP address; if the destination IP address is not on any of the VM’s subnets, it will use the management subnet as a default route.
If configuring routing rules for every destination is not possible, then an acceptable, but less secure, workaround is to firewall all interfaces except the management interface.
Allocating IP addresses and traffic types
Within each service group, define a networks section: a list of subnets on which the VMs in the service group will be assigned addresses. Define the following fields for each subnet:
- name: A human-readable name for the subnet.
- subnet: The subnet identifier of a subnet defined in the site-parameters section as described above.
- ip-addresses:
  - ip: A list of IP addresses, in the same order as the instances that will be assigned those IP addresses. Note that while, in general, the SDF supports various formats for specifying IP addresses, for REM VMs the ip list form must be used.
- traffic-types: A list of traffic types to be carried on this subnet.
Examples
Example 1
The following example shows a partial service group definition, describing three VMs with IPs allocated on two subnets - one for management traffic, and one for SIP and internal signaling traffic.
The order of the IP addresses on each subnet matches the order of the instances, so the first VM (vm01) will be assigned IP addresses 172.16.0.11 for management traffic and 172.18.0.11 for sip and internal traffic; the next VM (vm02) is assigned 172.16.0.12 and 172.18.0.12; and so on.
Ensure that each VM in the service group has an IP address: each list of IP addresses must have the same number of elements as there are VM instances.
vnfcs:
- name: rem
cluster-configuration:
count: 3
instances:
- name: vm01
- name: vm02
- name: vm03
networks:
- name: Management network
ip-addresses:
ip:
- 172.16.0.11
- 172.16.0.12
- 172.16.0.13
subnet: management-subnet
traffic-types:
- management
- name: Core Signaling network
ip-addresses:
ip:
- 172.18.0.11
- 172.18.0.12
- 172.18.0.13
subnet: core-signaling-subnet
traffic-types:
- sip
- internal
...
Example 2
The order of the IP addresses on each subnet matches the order of the instances, so the first VM (vm01) will be assigned IP addresses 172.16.0.11 for management traffic, 172.17.0.11 for cluster traffic, and so on; the next VM (vm02) will be assigned 172.16.0.12, 172.17.0.12, etc. Ensure that each VM in the service group has an IP address: each list of IP addresses must have the same number of elements as there are VM instances.
vnfcs:
- name: rem
cluster-configuration:
count: 3
instances:
- name: vm01
- name: vm02
- name: vm03
networks:
- name: Management network
ip-addresses:
ip:
- 172.16.0.11
- 172.16.0.12
- 172.16.0.13
subnet: management-subnet
traffic-types:
- management
- name: Cluster
ip-addresses:
ip:
- 172.17.0.11
- 172.17.0.12
- 172.17.0.13
subnet: cluster
traffic-types:
- cluster
- name: Core Signaling network
ip-addresses:
ip:
- 172.18.0.11
- 172.18.0.12
- 172.18.0.13
subnet: core-signaling-subnet
traffic-types:
- diameter
- internal
...
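The rule that each IP list must have one address per VM instance can be checked before deployment. Below is a minimal sketch (not part of the product tooling); the service-group dict mirrors the shape of the SDF examples above, as you would get from loading the YAML with a standard parser:

```python
def check_ip_counts(vnfc):
    """Verify that every network's ip list has one address per instance."""
    count = vnfc["cluster-configuration"]["count"]
    problems = []
    for network in vnfc.get("networks", []):
        ips = network["ip-addresses"]["ip"]
        if len(ips) != count:
            problems.append(
                f"network '{network['name']}' has {len(ips)} addresses "
                f"for {count} instances")
    return problems

# Deliberately broken: three instances but only two management addresses
vnfc = {
    "name": "rem",
    "cluster-configuration": {"count": 3, "instances": [
        {"name": "vm01"}, {"name": "vm02"}, {"name": "vm03"}]},
    "networks": [
        {"name": "Management network",
         "subnet": "management-subnet",
         "ip-addresses": {"ip": ["172.16.0.11", "172.16.0.12"]},
         "traffic-types": ["management"]},
    ],
}
print(check_ip_counts(vnfc))
# → ["network 'Management network' has 2 addresses for 3 instances"]
```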
Traffic type assignment restrictions
For all REM service groups in a site, where two or more service groups use a particular traffic type, this traffic type must be assigned to the same subnet throughout the site. For example, it is not permitted to use one subnet for management traffic on the REM VMs and a different subnet for management traffic on another VM type.
Within each site, traffic types must each be assigned to a different subnet.
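The site-wide restriction above is straightforward to verify across the vnfcs list of an SDF. A minimal sketch follows (not part of the product tooling; the "mag" service group is a hypothetical second VNFC used only to show a violation):

```python
def check_traffic_type_subnets(vnfcs):
    """Flag traffic types assigned to different subnets in different VNFCs."""
    seen = {}   # traffic type -> (subnet, name of first VNFC using it)
    problems = []
    for vnfc in vnfcs:
        for network in vnfc.get("networks", []):
            subnet = network["subnet"]
            for traffic_type in network.get("traffic-types", []):
                if traffic_type in seen and seen[traffic_type][0] != subnet:
                    problems.append(
                        f"traffic type '{traffic_type}' uses subnet "
                        f"'{subnet}' in '{vnfc['name']}' but "
                        f"'{seen[traffic_type][0]}' in '{seen[traffic_type][1]}'")
                else:
                    seen.setdefault(traffic_type, (subnet, vnfc["name"]))
    return problems

problems = check_traffic_type_subnets([
    {"name": "rem", "networks": [
        {"subnet": "management-subnet", "traffic-types": ["management"]}]},
    {"name": "mag", "networks": [
        {"subnet": "other-subnet", "traffic-types": ["management"]}]},
])
print(problems)
# → ["traffic type 'management' uses subnet 'other-subnet' in 'mag' but 'management-subnet' in 'rem'"]
```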
Traffic types and traffic schemes
About traffic types, network interfaces and traffic schemes
A traffic type is a particular classification of network traffic. It may include more than one protocol, but generally all traffic of a particular traffic type serves exactly one purpose, such as Diameter signaling or VM management.
A network interface is a virtual NIC (vNIC) on the VM. These are mapped to physical NICs on the host, normally one vNIC to one physical NIC, but sometimes many vNICs to one physical NIC.
A traffic scheme is an assignment of each of the traffic types that a VM uses to one of the VM’s network interfaces. For example:
- First interface: Management
- Second interface: Cluster
- Third interface: Diameter signaling and Internal signaling
- Fourth interface: SS7 signaling
Applicable traffic types
Traffic type | Name in SDF | Description
---|---|---
Management | management | Used by Administrators for managing the node.
Internal signaling | internal | Used for signaling traffic between a site's custom Rhino application nodes.
Defining a traffic scheme
Traffic schemes are defined in the SDF. Specifically, within the vnfcs section of the SDF there is a VNFC entry for each node type, and each VNFC has a networks section. Within each network interface defined in the networks section of the VNFC, there is a list named traffic-types, where you list the traffic type(s) (using the Name in SDF from the table above) that are assigned to that network interface.
Traffic type names use lowercase letters and underscores only. Specify traffic types as a YAML list, not a comma-separated list.
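For example, drawing on the service-group examples later in this document, the list form looks like this (the subnet name and traffic types are illustrative):

```yaml
networks:
  - name: Core Signaling network
    subnet: core-signaling-subnet
    traffic-types:
      - diameter
      - internal
```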
When defining the traffic scheme in the SDF, for each node type (VNFC), be sure to include only the relevant traffic types for that VNFC. If an interface in your chosen traffic scheme has no traffic types applicable to a particular VNFC, then do not specify the corresponding network in that VNFC.
Currently only one traffic scheme is supported for REM nodes.
Traffic scheme description | First interface | Second interface
---|---|---
Standard traffic scheme | management | internal
Example SDF for VMware vSphere
---
msw-deployment:deployment:
sites:
- name: my-site-1
site-parameters:
deployment-id: example
fixed-ips: true
mdm:
ca-certificate: |-
-----BEGIN CERTIFICATE-----
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
-----END CERTIFICATE-----
private-key: |-
-----BEGIN RSA PRIVATE KEY-----
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
-----END RSA PRIVATE KEY-----
ssl-certificate-management: static
static-certificate: |-
-----BEGIN CERTIFICATE-----
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
-----END CERTIFICATE-----
networking:
subnets:
- cidr: 172.16.0.0/24
default-gateway: 172.16.0.1
dns-servers:
- 2.3.4.5
- 3.4.5.6
identifier: management
vim-network: management-network
- cidr: 173.16.0.0/24
default-gateway: 173.16.0.1
identifier: core-signaling
vim-network: core-signaling-network
services:
ntp-servers:
- 1.2.3.4
- 1.2.3.5
site-id: DC1
ssh:
authorized-keys:
- ssh-rsa XXXXXXXXXXXXXXXXXXXX
timezone: Europe/London
vim-configuration:
vsphere:
connection:
allow-insecure: true
password: vsphere
server: 172.1.1.1
username: VSPHERE.LOCAL\vsphere
datacenter: Automation
folder: ''
reserve-resources: false
resource-pool-name: Resources
vnfcs:
- cluster-configuration:
count: 3
instances:
- name: example-mdm-1
vnfci-vim-options:
datastore: data:storage1
host: esxi.hostname
resource-pool-name: Resources
- name: example-mdm-2
vnfci-vim-options:
datastore: data:storage1
host: esxi.hostname
resource-pool-name: Resources
- name: example-mdm-3
vnfci-vim-options:
datastore: data:storage1
host: esxi.hostname
resource-pool-name: Resources
name: mdm
networks:
- ip-addresses:
ip:
- 172.16.0.135
- 172.16.0.136
- 172.16.0.137
name: Management
subnet: management
traffic-types:
- management
- ip-addresses:
ip:
- 173.16.0.135
- 173.16.0.136
- 173.16.0.137
name: Core Signaling
subnet: core-signaling
traffic-types:
- signaling
product-options:
mdm:
consul-token: ABCdEfgHIJkLmNOp-MS-MDM
custom-topology: |-
{
"member_groups": [
{
"group_name": "DNS",
"neighbors": []
},
{
"group_name": "RVT-rem.DC1",
"neighbors": [
"SAS-DATA"
]
}
]
}
type: mdm
version: 2.31.0
vim-configuration:
vsphere:
deployment-size: medium
- cluster-configuration:
count: 1
instances:
- name: example-rem-1
vnfci-vim-options:
datastore: data:storage1
host: esxi.hostname
resource-pool-name: Resources
name: rem
networks:
- ip-addresses:
ip:
- 172.16.0.10
name: Management
subnet: management
traffic-types:
- management
- ip-addresses:
ip:
- 173.16.0.10
name: Core Signaling
subnet: core-signaling
traffic-types:
- internal
product-options:
rem:
cds-addresses:
- 1.2.3.4
primary-user-password: ooooooooooooo
secrets-private-key: ooooooooooooooooooooooooooooooooo
type: rem
version: 4.0.0-99-1.0.0
vim-configuration:
vsphere:
deployment-size: medium
Example SDF for OpenStack
---
msw-deployment:deployment:
sites:
- name: my-site-1
site-parameters:
deployment-id: example
fixed-ips: true
mdm:
ca-certificate: |-
-----BEGIN CERTIFICATE-----
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
-----END CERTIFICATE-----
private-key: |-
-----BEGIN RSA PRIVATE KEY-----
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
-----END RSA PRIVATE KEY-----
ssl-certificate-management: static
static-certificate: |-
-----BEGIN CERTIFICATE-----
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
-----END CERTIFICATE-----
networking:
subnets:
- cidr: 172.16.0.0/24
default-gateway: 172.16.0.1
dns-servers:
- 2.3.4.5
- 3.4.5.6
identifier: management
vim-network: management-network
- cidr: 173.16.0.0/24
default-gateway: 173.16.0.1
identifier: core-signaling
vim-network: core-signaling-network
services:
ntp-servers:
- 1.2.3.4
- 1.2.3.5
site-id: DC1
ssh:
keypair-name: key-pair
timezone: Europe/London
vim-configuration:
openstack:
availability-zone: nonperf
connection:
auth-url: http://my-openstack-server:5000/v3
keystone-v3:
project-id: 0102030405060708090a0b0c0d0e0f10
user-domain-name: Default
password: openstack-password
username: openstack-user
vnfcs:
- cluster-configuration:
count: 3
instances:
- name: example-mdm-1
- name: example-mdm-2
- name: example-mdm-3
name: mdm
networks:
- ip-addresses:
ip:
- 172.16.0.135
- 172.16.0.136
- 172.16.0.137
name: Management
subnet: management
traffic-types:
- management
- ip-addresses:
ip:
- 173.16.0.135
- 173.16.0.136
- 173.16.0.137
name: Core Signaling
subnet: core-signaling
traffic-types:
- signaling
product-options:
mdm:
consul-token: ABCdEfgHIJkLmNOp-MS-MDM
custom-topology: |-
{
"member_groups": [
{
"group_name": "DNS",
"neighbors": []
},
{
"group_name": "RVT-rem.DC1",
"neighbors": [
"SAS-DATA"
]
}
]
}
type: mdm
version: 2.31.0
vim-configuration:
openstack:
flavor: medium
- cluster-configuration:
count: 1
instances:
- name: example-rem-1
name: rem
networks:
- ip-addresses:
ip:
- 172.16.0.10
name: Management
subnet: management
traffic-types:
- management
- ip-addresses:
ip:
- 173.16.0.10
name: Core Signaling
subnet: core-signaling
traffic-types:
- internal
product-options:
rem:
cds-addresses:
- 1.2.3.4
primary-user-password: ooooooooooooo
secrets-private-key: ooooooooooooooooooooooooooooooooo
type: rem
version: 4.0.0-99-1.0.0
vim-configuration:
openstack:
flavor: medium
Example SDF for VMware vCloud
---
msw-deployment:deployment:
sites:
- name: my-site-1
site-parameters:
deployment-id: example
fixed-ips: true
mdm:
ca-certificate: |-
-----BEGIN CERTIFICATE-----
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
-----END CERTIFICATE-----
private-key: |-
-----BEGIN RSA PRIVATE KEY-----
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
-----END RSA PRIVATE KEY-----
ssl-certificate-management: static
static-certificate: |-
-----BEGIN CERTIFICATE-----
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
-----END CERTIFICATE-----
networking:
subnets:
- cidr: 172.16.0.0/24
default-gateway: 172.16.0.1
dns-servers:
- 2.3.4.5
- 3.4.5.6
identifier: management
vim-network: management-network
- cidr: 173.16.0.0/24
default-gateway: 173.16.0.1
identifier: core-signaling
vim-network: core-signaling-network
services:
ntp-servers:
- 1.2.3.4
- 1.2.3.5
site-id: DC1
ssh:
authorized-keys:
- ssh-rsa XXXXXXXXXXXXXXXXXXXX
timezone: Europe/London
vim-configuration:
vcloud:
catalog: mycatalog
connection:
allow-insecure: true
password: admin
sysadmin-privileges: true
url: https://vcloud-server
username: admin
org: MyOrg
vdc: My VDC
vnfcs:
- cluster-configuration:
count: 3
instances:
- name: example-mdm-1
- name: example-mdm-2
- name: example-mdm-3
name: mdm
networks:
- ip-addresses:
ip:
- 172.16.0.135
- 172.16.0.136
- 172.16.0.137
name: Management
subnet: management
traffic-types:
- management
- ip-addresses:
ip:
- 173.16.0.135
- 173.16.0.136
- 173.16.0.137
name: Core Signaling
subnet: core-signaling
traffic-types:
- signaling
product-options:
mdm:
consul-token: ABCdEfgHIJkLmNOp-MS-MDM
custom-topology: |-
{
"member_groups": [
{
"group_name": "DNS",
"neighbors": []
},
{
"group_name": "RVT-rem.DC1",
"neighbors": [
"SAS-DATA"
]
}
]
}
type: mdm
version: 2.31.0
vim-configuration:
vcloud:
deployment-size: medium
- cluster-configuration:
count: 1
instances:
- name: example-rem-1
name: rem
networks:
- ip-addresses:
ip:
- 172.16.0.10
name: Management
subnet: management
traffic-types:
- management
- ip-addresses:
ip:
- 173.16.0.10
name: Core Signaling
subnet: core-signaling
traffic-types:
- internal
product-options:
rem:
cds-addresses:
- 1.2.3.4
primary-user-password: ooooooooooooo
secrets-private-key: ooooooooooooooooooooooooooooooooo
type: rem
version: 4.0.0-99-1.0.0
vim-configuration:
vcloud:
deployment-size: medium
Bootstrap parameters
Bootstrap parameters are provided to the VM when the VM is created. They are used by the bootstrap process to configure various settings in the VM’s operating system.
On VMware vSphere, the bootstrap parameters are provided as vApp parameters. On OpenStack, the bootstrap parameters are provided as userdata in YAML format.
Configuration of bootstrap parameters is handled automatically by the SIMPL VM. This page is only relevant if you are deploying VMs manually or using an orchestrator other than the SIMPL VM, in consultation with your Metaswitch Customer Care Representative.
List of bootstrap parameters
Property | Description | Format and Example |
---|---|---|
|
Required. The hostname of the server. |
A string consisting of letters A-Z, a-z, digits 0-9, and hyphens (-). Maximum length is 27 characters. Example: |
|
Required. List of DNS servers. |
For VMware vSphere, a comma-separated list of IPv4 addresses. For OpenStack, a list of IPv4 addresses. Example: |
|
Required. List of NTP servers. |
For VMware vSphere, a comma-separated list of IPv4 addresses or FQDNs. For OpenStack, a list of IPv4 addresses or FQDNs. Example: |
|
Optional. The system time zone in POSIX format. Defaults to UTC. |
Example: |
|
Required. The list of signaling addresses of Config Data Store (CDS) servers which will provide configuration for the cluster. CDS is provided by the TSN nodes. Refer to the Configuration section of the documentation for more information. |
For VMware vSphere, a comma-separated list of IPv4 addresses. For OpenStack, a list of IPv4 addresses. Example: |
|
Required. This is only for TSN VMs. The IP address of the leader node of the CDS cluster. This should only be set in the "node heal" case, not when doing the initial deployment of a cluster. |
A single IPv4 address. Example: |
|
Required. An identifier for this deployment. A deployment consists of one or more sites, each of which consists of several clusters of nodes. |
A string consisting of letters A-Z, a-z, digits 0-9, and hyphens (-). Maximum length is 15 characters. Example: |
|
Required. A unique identifier (within the deployment) for this site. |
A string of the form |
|
Required only when there are multiple clusters of the same type in the same site. A suffix to distinguish between clusters of the same node type within a particular site. For example, when deploying the MaX product, a second TSN cluster may be required. |
A string consisting of letters A-Z, a-z, and digits 0-9. Maximum length is 8 characters. Example: |
|
Optional. A list of SSH public keys. Machines configured with the corresponding private key will be allowed to access the node over SSH as the |
For VMware vSphere, a comma-separated list of SSH public key strings, including the For OpenStack, a list of SSH public key strings. Example: |
|
Optional. An identifier for the VM to use when communicating with MDM, provided by the orchestrator. Supply this only for an MDM-managed deployment. |
Free form string Example: |
|
Optional. The list of management addresses of Metaswitch Deployment Manager (MDM) servers which will manage this cluster. Supply this only for an MDM-managed deployment. |
For VMware vSphere, a comma-separated list of IPv4 addresses. For OpenStack, a list of IPv4 addresses. Example: |
|
Optional. The static certificate for connecting to MDM. Supply this only for an MDM-managed deployment. |
The static certificate as a string Example: |
|
Optional. The CA certificate for connecting to MDM. Supply this only for an MDM-managed deployment. |
The CA certificate as a string Example: |
|
Optional. The private key for connecting to MDM. Supply this only for an MDM-managed deployment. |
The private key as a string Example: |
|
Required. The private Fernet key used to encrypt and decrypt secrets used by this deployment. A Fernet key may be generated for the deployment using the |
The private key as a string Example: |
|
Required. The primary user’s password. The primary user is the |
The password as a string. Minimum length is 8 characters. Be sure to quote it if it contains special characters. Example: |
|
Required. The IP address information for the VM. |
An encoded string. Example: |
The ip_info parameter
For all network interfaces on a VM, the assigned traffic types, MAC address (OpenStack only), IP address, and subnet mask are encoded in a single parameter called ip_info. Refer to Traffic types and traffic schemes for a list of traffic types found on each VM and how to assign them to network interfaces.
The names of the traffic types as used in the ip_info parameter are:
Traffic type | Name used in ip_info
---|---
Management | management
Internal signaling | internal
Constructing the ip_info parameter
- Choose a traffic scheme.
- For each interface in the traffic scheme which has traffic types relevant to your VM, note down the values of the parameters for that interface: traffic types, MAC address, IP address, subnet mask, and default gateway address.
- Construct a string for each parameter using these prefixes:

Parameter | Prefix | Format
---|---|---
Traffic types | t= | A comma-separated list (without spaces) of the names given above. Example: t=diameter,sip,internal
MAC address | m= | Six pairs of hexadecimal digits, separated by colons. Case is unimportant. Example: m=01:23:45:67:89:AB
IP address | i= | IPv4 address in dotted-decimal notation. Example: i=172.16.0.11
Subnet mask | s= | CIDR notation. Example: s=172.16.0.0/24
Default gateway address | g= | IPv4 address in dotted-decimal notation. Example: g=172.16.0.1
-
Join all the parameter strings together with an ampersand (&) between each.
Example: t=diameter,sip,internal&m=01:23:45:67:89:AB&i=172.16.0.11&s=172.16.0.0/24&g=172.16.0.1
-
Repeat for every other network interface.
-
Finally, join the resulting strings for each interface together with a semicolon (;) between each.
The individual strings for each network interface must not contain a trailing ampersand or semicolon. When including the string in a YAML userdata document, be sure to quote the string. Do not include details of any interfaces which haven’t been assigned any traffic types. |
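Putting the steps together, the following shell sketch assembles an ip_info value for a VM with a management interface and an internal signaling interface. The addresses here are hypothetical examples only; substitute the values from your own traffic scheme.

```shell
# Interface 1: management traffic, with a MAC address (OpenStack only).
iface1="t=management&m=01:23:45:67:89:AB&i=172.16.0.11&s=172.16.0.0/24&g=172.16.0.1"
# Interface 2: internal signaling traffic.
iface2="t=internal&i=172.17.0.11&s=172.17.0.0/24&g=172.17.0.1"
# Join the per-interface strings with a semicolon between each.
ip_info="${iface1};${iface2}"
echo "$ip_info"
```

Remember to quote the resulting string when placing it in a YAML userdata document.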
Bootstrap and configuration
Bootstrap
Bootstrap is the process whereby, after a VM is started for the first time, it is configured with key system-level configuration such as IP addresses, DNS and NTP server addresses, a hostname, and so on. This process runs automatically on the first boot of the VM. For bootstrap to succeed it is crucial that all entries in the SDF (or in the case of a manual deployment, all the bootstrap parameters) are correct.
Successful bootstrap
Once the VM has booted into multi-user mode, bootstrap normally takes about one minute.
SSH access to the VM is not possible until bootstrap has completed. If you want to monitor bootstrap from the console, log in as the sentinel user and examine the log file bootstrap/bootstrap.log. Successful completion is indicated by the line Bootstrap complete.
Troubleshooting bootstrap
If bootstrap fails, an exception will be written to the log file. If the network-related portion of bootstrap succeeded but a failure occurred afterwards, the VM will be accessible over SSH and logging in will display a warning Automatic bootstrap failed.
Examine the log file bootstrap/bootstrap.log to see why bootstrap failed. In the majority of cases it will be down to an incorrect SDF or a missing or invalid bootstrap parameter. Destroy the VM and recreate it with the correct SDF or bootstrap parameters (it is not possible to run bootstrap more than once).
If you are sure you have the SDF or bootstrap parameters correct, or it is not obvious what is wrong, contact your Customer Care Representative.
Configuration
Configuration occurs after bootstrap. It sets up product-level configuration such as:
-
configuring Rhino and the relevant products (on systems that run Rhino)
-
SNMP-based monitoring, and
-
SSH key exchange to allow access from other VMs in the cluster to this VM.
To perform this configuration, the process retrieves its configuration in the form of YAML files from the CDS. The CDS to contact is determined using the cds-addresses parameter from the SDF or bootstrap parameters.
The configuration process constantly looks for new configuration, and reconfigures the system if new configuration has been uploaded to the CDS.
The YAML files describing the configuration should be prepared in advance.
Configuration files
The configuration process reads settings from YAML files. Each YAML file refers to a particular set of configuration options, for example, SNMP settings. The YAML files are validated against a YANG schema. The YANG schema is human-readable and lists all the possible options, together with a description. It is therefore recommended to reference the Configuration YANG schema while preparing the YAML files.
Some YAML files are shared between different node types. If a file with the same file name is required for two different node types, the same file must be used in both cases.
The CDS nodes should be ready for service before booting any other nodes. |
When uploading configuration files, you must also include a Solution Definition File containing all nodes in the deployment (see below). Furthermore, for any VM which runs Rhino, you must also include a valid Rhino license. |
Solution Definition File
You will already have written a Solution Definition File (SDF) as part of the creation of the VMs. As the configuration process discovers other RVT nodes using the SDF, this SDF needs to be uploaded as part of the configuration.
The SDF must be named |
Successful configuration
The configuration process on the VMs starts after bootstrap completes. It is constantly listening for configuration to be written to CDS (via rvtconfig upload-config). Once it detects configuration has been uploaded, it will automatically download and validate it. Assuming everything passes validation, the configuration will then be applied automatically. This can take up to 20 minutes depending on node type.
The configuration process can be monitored using the report-initconf status tool. The tool can be run via an SSH session to the VM. Success is indicated by status=vm_converged.
Troubleshooting configuration
As with bootstrap, errors are reported to the log file, located at initconf/initconf.log in the default user’s home directory.
initconf initialization failed due to an error: This indicates that initconf initialization has irrecoverably failed. Contact a Customer Care Representative for next steps.
Task <name> marked as permanently failed: This indicates that configuration has irrecoverably failed. Contact a Customer Care Representative for next steps.
<file> failed to validate against YANG schemas: This indicates something in one of the YAML files was invalid. Refer to the output to check which field was invalid, and fix the problem. For configuration validation issues, the VM doesn’t need to be destroyed and recreated. The fixed configuration can be uploaded using rvtconfig upload-config. The configuration process will automatically try again once it detects the uploaded configuration has been updated.
If there is a configuration validation error on the VM, initconf will NOT run tasks until new configuration has been validated and uploaded to the CDS. |
Other errors: If these relate to invalid field values or a missing license, it is normally safe to fix the configuration and try again. Otherwise, contact a Customer Care Representative.
Configuration alarms
The configuration process can raise the following SNMP alarms, which are sent to the configured notification targets (all with OID prefix 1.3.6.1.4.1.19808.2):
OID | Description |
---|---|
12355 |
Initconf warning. This alarm is raised if a task has failed to converge after 30 tries. If this alarm does not eventually clear, refer to Troubleshooting configuration to troubleshoot the issue. |
12356 |
Initconf failed. This alarm is raised if the configuration process irrecoverably failed. Refer to Troubleshooting configuration to troubleshoot the issue. |
12361 |
Initconf unexpected exception. This alarm is raised if the configuration process encountered an unexpected exception. Initconf will attempt to retry the task up to five times, and might eventually succeed. However, the configuration of the node after this recovery attempt might not match the desired configuration exactly. It is therefore recommended to troubleshoot this issue. This alarm must be administratively cleared as it indicates an issue that requires manual intervention. |
12363 |
Configuration validation warning. This alarm is raised if the VM’s configuration contains items that require attention, such as expired or expiring REM certificates. The configuration will be applied, but some services may not be fully operational. Further information regarding the configuration warning may be found in the initconf log. |
REM certificates
About HTTPS certificates for REM
On the REM VMs, REM runs on Apache Tomcat, where the Tomcat webserver is configured to only accept traffic over HTTPS. As such, Tomcat requires a server-side certificate, which is presented to the user’s browser to prove the server’s identity when a user accesses REM.
Certificates are generated and signed by a known and trusted Certificate Authority (CA). This is done by having a chain of certificates, starting from the CA’s root certificate, where each certificate signs the next in the chain - creating a chain of trust from the CA to the end user’s webserver.
Each certificate is associated with a private key. The certificate itself contains a public key which matches the private key, and these keys are used to encrypt and decrypt the traffic flowing over the HTTPS connection. While the certificate can be safely shared publicly, the private key must be kept safe and not revealed to anyone.
Using rvtconfig, you can upload certificates and private keys to the REM nodes, and initconf will automatically set up Tomcat to use them. Alternatively, you can opt to have initconf generate self-signed certificates.
REM, being a tool for network operators and available only over the management interface, should not be exposed to the public Internet. As such, public CAs such as Let’s Encrypt will not be able to issue a certificate for it. To avoid any browser warnings for users accessing REM, you will need to set up a private CA, issue a certificate from it, and add the CA’s root certificate to the browser’s built-in list of trusted root certificates, for example, by using group policy settings. If you do not have an in-house CA, use of a self-signed certificate is the recommended approach. |
Self-signed certificates
If no certificate is uploaded for REM, initconf creates a self-signed certificate. This will be entirely functional, though users trying to log in to REM will see a browser warning stating that the certificate is self-signed, and will have to add a security exception in order to use REM.
HTTPS certificate specification
If you have an in-house Certificate Authority, they can issue you with a signed certificate for your REM domain(s) and/or IP address(es). To ensure your certificate is compatible with initconf, it should conform to RFC 2818; that is, each domain name and/or IP address through which users will log in to REM must be specified in the certificate as a Subject Alternative Name (SAN), and not as the Common Name (CN). SANs must be of DNS (also known as IA5 dNSName) type for hostnames and IP (IA5 iPAddress) type for IP addresses.
If users are to connect to REM via hostname(s) rather than IP address(es), be sure the DNS entry for each hostname resolves to only one node. This ensures that all REM requests made in a single session are directed to a single node. |
For the subject, specify at least the Country (C), Organisation (O), Organisational Unit (OU) and Common Name (CN) fields to match the details of your deployment.
Here is an example set of field values for a certificate request:
C = NZ
O = SomeTelco
OU = SomeCity Network Operations Center
CN = REM
SAN = DNS:rem.sometelco.com
SAN = IP:192.168.10.10
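As an illustrative sketch only (the -addext option requires OpenSSL 1.1.1 or later, and the file paths here are hypothetical), a private key and certificate signing request matching the example fields above could be generated with:

```shell
# Generate a 2048-bit RSA key and a CSR whose subject and SANs match the
# example above. The -nodes flag leaves the key without a passphrase.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout /tmp/rem.key -out /tmp/rem.csr \
    -subj "/C=NZ/O=SomeTelco/OU=SomeCity Network Operations Center/CN=REM" \
    -addext "subjectAltName=DNS:rem.sometelco.com,IP:192.168.10.10" \
    2>/dev/null
# Inspect the request to confirm the SANs are present.
openssl req -in /tmp/rem.csr -noout -text | grep -A1 "Subject Alternative Name"
```

A key generated without a passphrase, as here, satisfies the requirement below that the private key must not have a passphrase.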
Ensure that the CA issues your certificate in PEM (Privacy-Enhanced Mail) format. In addition, the private key must not have a passphrase (even an empty one).
A certificate bundle issued by a CA generally contains your certificate, your private key, their root certificate, and possibly one or more intermediate certificates. All certificates in the chain need to be merged into a single file in order to be uploaded for use with Tomcat. Follow the steps below:
-
Ensure the files are in PEM format. You can do this by first checking that the contents of each file begins with the line
-----BEGIN CERTIFICATE-----
and ends with the line
-----END CERTIFICATE-----
Then check the certificates are valid and not expired by using openssl:
openssl x509 -in <filename> -inform pem -text -noout
If the certificate is indeed in PEM format, this command will display the certificate details. You can check that for your certificate, the subject details (the C, OU and so on) match those you specified on the certificate request. Look at the Validity fields to ensure all certificates in the bundle are valid. For initconf to accept them, they must all be valid for at least 30 days from the day you upload them.
-
Work out the order of the certificates. To take an example of a bundle containing your certificate, the root certificate and one intermediate certificate: your certificate is signed by the intermediate, and the intermediate certificate is signed by the root. If there is more than one intermediate certificate then the CA can tell you which certificate is signed by which.
-
Construct the chain by concatenating the files together in the correct order such that each certificate is signed by the next, starting with your certificate and ending with the root certificate. For example, this can be done using the Linux cat utility as follows:
cat my_certificate.crt intermediate_certificate.crt root_certificate.crt >chain.crt
This creates a file chain.crt containing the entire certificate chain, suitable for uploading to the REM nodes.
-
Keep the private key safe - you should not reveal the contents of the file to anyone outside of your organisation, not even Metaswitch. You will however need to upload it to the REM nodes alongside the certificate chain. If you have multiple HTTPS certificates and private keys, ensure you can associate each private key with the certificate it refers to.
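The 30-day validity requirement can also be checked in one step with openssl's -checkend option, which fails with a non-zero exit status if the certificate expires within the given number of seconds. The sketch below demonstrates it on a throwaway self-signed certificate; substitute your own .crt files.

```shell
# Create a throwaway 90-day self-signed certificate for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
    -out /tmp/demo.crt -days 90 -subj "/CN=demo" 2>/dev/null
# -checkend succeeds only if the certificate remains valid for the
# given number of seconds; here, 30 days.
openssl x509 -in /tmp/demo.crt -noout -checkend $((30*24*3600)) \
    && echo "valid for at least 30 days"
```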
Uploading a certificate chain and private key for REM during configuration
When uploading the YAML configuration files using rvtconfig, you can also include the certificate chain and private key and upload those at the same time.
To do this, place the certificate chain and private key files in the directory containing the YAML files before running rvtconfig.
-
For REM, the certificate chain file must have the filename rem-cert.crt, and the private key file must have the filename rem-cert.key.
No additional rvtconfig arguments are required; rvtconfig will locate the files through the known filenames given above. It will then run a few basic checks on the files, such as checking whether the private key matches the certificate, and that the certificate is not due to expire in less than 30 days. If all checks pass, then the certificates will be uploaded to the CDS and installed by initconf. Otherwise, rvtconfig will inform you of any errors. Correct these and try again.
Note that you must provide either both the certificate chain and private key, or neither (in which case initconf will generate a self-signed certificate). If you provide only one, rvtconfig will fail.
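The key-matches-certificate check can also be performed locally before running rvtconfig. One common approach, sketched here on a throwaway self-signed pair (substitute your rem-cert.crt and rem-cert.key), is to compare the public-key digests extracted from the certificate and from the private key:

```shell
# Create a throwaway certificate and key for demonstration purposes.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/rem-demo.key \
    -out /tmp/rem-demo.crt -days 30 -subj "/CN=REM" 2>/dev/null
# Extract the public key from each file and hash it; the digests match
# if and only if the private key belongs to the certificate.
cert_pub=$(openssl x509 -in /tmp/rem-demo.crt -noout -pubkey | openssl sha256)
key_pub=$(openssl pkey -in /tmp/rem-demo.key -pubout 2>/dev/null | openssl sha256)
[ "$cert_pub" = "$key_pub" ] && echo "key matches certificate"
```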
Changing the certificate
Once a certificate and key have been successfully uploaded to the nodes, there is no need to upload them again on subsequent reconfigurations. The node will continue to use the same certificate.
If you are using a self-signed certificate, then subsequent reconfigurations will not recreate it. Self-signed certificates generated by initconf are valid for 5 years. If the certificate expires or you need to refresh it for some other reason (such as the private key being compromised), contact your Metaswitch Customer Care representative.
You can replace a CA-issued certificate at any time by following the same steps above with a new certificate chain file and private key file. Providing a CA-issued certificate this way will also override any self-signed certificate currently in use.
Services and components
This section describes details of components and services running on the REM nodes.
systemd services
Rhino Element Manager
REM runs as a 'webapp' inside Apache Tomcat. This runs as a systemd service called rhino-element-manager. REM comes equipped with the SIS EM and Sentinel Express plugins, to simplify management of SIS and Sentinel based services.
You can examine the state of the REM service by running sudo systemctl status rhino-element-manager.service. This is an example of a healthy status:
[sentinel@mag-1 ~]$ sudo systemctl status rhino-element-manager.service
● rhino-element-manager.service - Rhino Element Manager (REM)
Loaded: loaded (/etc/systemd/system/rhino-element-manager.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2021-01-11 05:43:10 NZDT; 3s ago
Docs: https://docs.opencloud.com/ocdoc/books/devportal-documentation/1.0/documentation-index/platforms/rhino-element-manager-rem.html
Process: 4659 ExecStop=/home/sentinel/apache-tomcat/bin/systemd_relay.sh stop (code=exited, status=0/SUCCESS)
Process: 4705 ExecStart=/home/sentinel/apache-tomcat/bin/systemd_relay.sh start (code=exited, status=0/SUCCESS)
Main PID: 4713 (catalina.sh)
Tasks: 89
Memory: 962.1M
CGroup: /system.slice/rhino-element-manager.service
├─4713 /bin/sh bin/catalina.sh start
└─4715 /home/sentinel/java/current/bin/java -Djava.util.logging.config.file=/home/sentinel/apache-tomcat-8.5.38/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Xms2048m -Xmx2048m -...
Jan 11 05:43:00 mag-1 systemd[1]: Starting Rhino Element Manager (REM)...
Jan 11 05:43:00 mag-1 systemd_relay.sh[4705]: Tomcat started.
Jan 11 05:43:10 mag-1 systemd[1]: Started Rhino Element Manager (REM).
Alternatively, the Tomcat service will show up as Bootstrap when running jps.
For more information about REM, see the Rhino Element Manager (REM) Guide.
SNMP service monitor
The SNMP service monitor process is responsible for raising SNMP alarms when a disk partition gets too full.
The SNMP service monitor alarms are compatible with Rhino alarms and can be accessed in the same way. Refer to Accessing SNMP Statistics and Notifications for more information about this.
Alarms are sent to SNMP targets as configured through the configuration YAML files.
The following partitions are monitored:
-
the root partition (/)
-
the log partition (/var/log)
There are two thresholds for disk monitoring, expressed as a percentage of the total partition size. When disk usage exceeds:
-
the lower threshold, a warning (MINOR severity) alarm will be raised.
-
the upper threshold, a MAJOR severity alarm will be raised, and (except for the root partition) files will be automatically cleaned up where possible.
Once disk space has returned to a non-alarmable level, the SNMP service monitor will clear the associated alarm on the next check. By default, it checks disk usage once per day. Running the command sudo systemctl reload disk-monitor will force an immediate check of the disk space, for example, if an alarm was raised and you have since cleaned up the appropriate partition and want to clear the alarm.
Configuring the SNMP service monitor
The default monitoring settings should be appropriate for the vast majority of deployments.
Should your Metaswitch Customer Care Representative advise you to reconfigure the disk monitor, you can do so by editing the file /etc/disk_monitor.yaml (you will need to use sudo when editing this file due to its permissions):
global:
check_interval_seconds: 86400
log:
lower_threshold: 80
max_files_to_delete: 10
upper_threshold: 90
root:
lower_threshold: 90
upper_threshold: 95
snmp:
enabled: true
notification_type: trap
targets:
- address: 192.168.50.50
port: 162
version: 2c
The file is in YAML format, and specifies the alarm thresholds for each disk partition (as a percentage), the interval between checks in seconds, and the SNMP targets.
-
Supported SNMP versions are 2c and 3.
-
Supported notification types are trap and notify.
-
Supported values for the upper and lower thresholds are:
Partition | Lower threshold range | Upper threshold range | Minimum difference between thresholds |
---|---|---|---|
log (/var/log) | 50% to 80% | 60% to 90% | 10% |
root (/) | 50% to 90% | 60% to 99% | 5% |
-
check_interval_seconds must be in the range 60 to 86400 seconds inclusive. It is recommended to keep the interval as long as possible to minimise performance impact.
After editing the file, you can apply the configuration by running sudo systemctl reload disk-monitor.
Verify that the service has accepted the configuration by running sudo systemctl status disk-monitor. If it shows an error, run journalctl -u disk-monitor for more detailed information. Correct the errors in the configuration and apply it again.
Partitions
The nodes contain three partitions:
-
/boot, with a size of 100MB. This contains the kernel and bootloader.
-
/var/log, with a size of 7000MB. This is where the OS and Rhino store their logfiles. The Rhino logs are within the tas subdirectory, and within that each cluster has its own directory.
-
/, which uses up the rest of the disk. This is the root filesystem.
rem-vm-pool.yang
module rem-vm-pool {
yang-version 1.1;
namespace "http://metaswitch.com/yang/tas-vm-build/rem-vm-pool";
prefix "rem-vm-pool";
import vm-types {
prefix "vmt";
revision-date 2019-11-29;
}
import extensions {
prefix "yangdoc";
revision-date 2020-12-02;
}
organization "Metaswitch Networks";
contact "rvt-schemas@metaswitch.com";
description "Rhino Element Manager (REM) virtual machine pool configuration schema.";
revision 2020-06-01 {
description
"Initial revision";
reference
"Metaswitch Deployment Definition Guide";
}
grouping rem-virtual-machine-pool {
leaf deployment-id {
type vmt:deployment-id-type;
mandatory true;
description "The deployment identifier. Used to form a unique VM identifier within the
VM host.";
}
leaf site-id {
type vmt:site-id-type;
mandatory true;
description "Site ID for the site that this VM pool is a part of.";
}
leaf node-type-suffix {
type vmt:node-type-suffix-type;
default "";
description "Suffix to add to the node type when deriving the group identifier. Should
normally be left blank.";
}
list rem-auth {
key "username";
min-elements 1;
uses vmt:rem-auth-grouping;
description "List of REM users and their plain text passwords.";
yangdoc:change-impact "converges";
}
list virtual-machines {
key "vm-id";
leaf vm-id {
type string;
mandatory true;
description "The unique virtual machine identifier.";
}
description "Configured virtual machines.";
}
description "Rhino Element Manager (REM) virtual machine pool.";
}
}
snmp-configuration.yang
module snmp-configuration {
yang-version 1.1;
namespace "http://metaswitch.com/yang/tas-vm-build/snmp-configuration";
prefix "snmp";
import ietf-inet-types {
prefix "ietf-inet";
}
organization "Metaswitch Networks";
contact "rvt-schemas@metaswitch.com";
description "SNMP configuration schema.";
revision 2019-11-29 {
description
"Initial revision";
reference
"Metaswitch Deployment Definition Guide";
}
grouping snmp-configuration-grouping {
leaf v1-enabled {
type boolean;
default false;
description "Enables the use of SNMPv1 if set to 'true'. Note that support for SNMPv1
is deprecated and SNMP v2c should be used instead. Use of v1 is limited
to Rhino only and may cause some Rhino statistics to fail to appear
correctly or not at all. Set to 'false' to disable SNMPv1.";
}
leaf v2c-enabled {
type boolean;
default true;
description "Enables the use of SNMPv2c if set to 'true'.
Set to 'false' to disable SNMPv2c.";
}
leaf v3-enabled {
type boolean;
default false;
description "Enables the use of SNMPv3 if set to 'true'.
Set to 'false' to disable SNMPv3.";
}
leaf trap_type {
when "../v2c-enabled = 'true'";
type enumeration {
enum trap {
description "Generate TRAP type notifications.";
}
enum inform {
description "Generate INFORM type notifications.";
}
}
default trap;
description "Configure the notification type to use when SNMPv2c is enabled.";
}
leaf community {
when "../v2c-enabled = 'true'";
type string;
default "clearwater";
description "The SNMPv2c community name.";
}
container v3-authentication {
when "../v3-enabled = 'true'";
leaf username {
type string;
mandatory true;
description "The SNMPv3 user name.";
}
leaf authentication-protocol {
type enumeration {
enum SHA {
description "SHA";
}
enum MD5 {
description "MD5 message digest.";
}
}
default SHA;
description "The authentication mechanism to use.";
}
leaf authentication-key {
type string {
length "8 .. max";
}
mandatory true;
description "The authentication key.";
}
leaf privacy-protocol {
type enumeration {
enum DES {
description "Data Encryption Standard (DES)";
}
enum 3DES {
description "Triple Data Encryption Standard (3DES).";
}
enum AES128 {
description "128 bit Advanced Encryption Standard (AES).";
}
enum AES192 {
description "192 bit Advanced Encryption Standard (AES).";
}
enum AES256 {
description "256 bit Advanced Encryption Standard (AES).";
}
}
default AES128;
description "The privacy mechanism to use.";
}
leaf privacy-key {
type string {
length "8 .. max";
}
mandatory true;
description "The privacy key.";
}
description "SNMPv3 authentication configuration. Only used when 'v3-enabled' is set
to 'true'.";
}
container agent-details {
when "../v2c-enabled = 'true' or ../v3-enabled= 'true'";
// agent name is the VM ID
// description is the human-readable node description from the metadata
leaf location {
type string;
mandatory true;
description "The physical location of the SNMP agent.";
}
leaf contact {
type string;
mandatory true;
description "The contact email address for this SNMP agent.";
}
description "The configurable SNMP agent details. The VM ID is used as the agent's
name, and the human readable node description from the metadata is used
as the description.";
}
container notifications {
leaf rhino-notifications-enabled {
when "../../v2c-enabled = 'true' or ../../v3-enabled = 'true'";
type boolean;
default true;
description "Specifies whether or not Rhino SNMP v2c/3 notifications are enabled.
Applicable only when SNMPv2c and/or SNMPv3 are enabled.";
}
must "rhino-notifications-enabled = 'false'
or (count(targets[send-rhino-notifications = 'true']) > 0)" {
error-message "Since you have enabled Rhino notifications,
you must specify at least one Rhino notification target.";
}
leaf system-notifications-enabled {
when "../../v2c-enabled = 'true' or ../../v3-enabled = 'true'";
type boolean;
default true;
description "Specifies whether or not system SNMP v2c/3 notifications are enabled.
System notifications are: high memory and CPU usage warnings,
and system boot notifications.
If you use MetaView Server to monitor
your platform, then it is recommended to leave this set to 'false'.";
}
must "system-notifications-enabled = 'false'
or (count(targets[send-system-notifications = 'true']) > 0)" {
error-message "Since you have enabled system notifications,
you must specify at least one system notification target.";
}
leaf sgc-notifications-enabled {
when "../../v2c-enabled = 'true' or ../../v3-enabled = 'true'";
type boolean;
default true;
description "Specifies whether or not OCSS7 SGC SNMP v2c/3 notifications are
enabled.
Applicable only when SNMPv2c and/or SNMPv3 are enabled.";
}
must "sgc-notifications-enabled = 'false'
or (count(targets[send-sgc-notifications = 'true']) > 0)" {
error-message "Since you have enabled SGC notifications,
you must specify at least one SGC notification target.";
}
list targets {
key "version host port";
leaf version {
type enumeration {
enum v1 {
description "SNMPv1";
}
enum v2c {
description "SNMPv2c";
}
enum v3 {
description "SNMPv3";
}
}
description "The SNMP notification version to use for this target.";
}
leaf host {
type ietf-inet:host;
description "The target host.";
}
leaf port {
type ietf-inet:port-number;
// 'port' is a key and YANG ignores the default value of any keys, hence we
// cannot set a default '162' here.
description "The target port, normally 162.";
}
leaf send-rhino-notifications {
when "../../rhino-notifications-enabled = 'true'";
type boolean;
default true;
description "Specifies whether or not to send Rhino SNMP v2c/3 notifications
to this target.
Can only be specified if ../rhino-notifications-enabled is true.";
}
leaf send-system-notifications {
when "../../system-notifications-enabled = 'true'";
type boolean;
default true;
description "Specifies whether or not to send system SNMP v2c/3 notifications
to this target.
Can only be specified if ../system-notifications-enabled is true.";
}
leaf send-sgc-notifications {
when "../../sgc-notifications-enabled = 'true'";
type boolean;
default true;
description "Specifies whether or not to send SGC SNMP v2c/3 notifications
to this target.
Can only be specified if ../sgc-notifications-enabled is true.";
}
description "The list of SNMP notification targets.
Note that you can specify targets even if not using Rhino or system
notifications - the targets are also used for the disk and
service monitor alerts.";
}
list categories {
when "../rhino-notifications-enabled = 'true'";
key "category";
leaf category {
type enumeration {
enum alarm-notification {
description "Alarm related notifications.";
}
enum log-notification {
description "Log related notifications.";
}
enum log-rollover-notification {
description "Log rollover notifications.";
}
enum resource-adaptor-entity-state-change-notification {
description "Resource adaptor entity state change notifications.";
}
enum service-state-change-notification {
description "Service state change notifications.";
}
enum slee-state-change-notification {
description "SLEE state change notifications.";
}
enum trace-notification {
description "Trace notifications.";
}
enum usage-notification {
description "Usage notifications.";
}
}
description "Notification category.
If you are using MetaView Server, only the `alarm-notification`
category of Rhino SNMP notifications is supported.
Therefore, all other notification categories should be disabled.";
}
leaf enabled {
type boolean;
mandatory true;
description "Set to 'true' to enable this category. Set to 'false' to disable.";
}
description "Rhino notification categories to enable or disable.";
}
description "Notification configuration.";
}
container sgc {
leaf v2c-port {
when "../../v2c-enabled = 'true'";
type ietf-inet:port-number;
default 11100;
description "The port to bind to for v2c SNMP requests.";
}
leaf v3-port {
when "../../v3-enabled = 'true'";
type ietf-inet:port-number;
default 11101;
description "The port to bind to for v3 SNMP requests.";
}
description "SGC-specific SNMP configuration.";
}
description "SNMP configuration.";
}
}
routing-configuration.yang
module routing-configuration {
yang-version 1.1;
namespace "http://metaswitch.com/yang/tas-vm-build/routing-configuration";
prefix "routing";
import ietf-inet-types {
prefix "ietf-inet";
}
import traffic-type-configuration {
prefix "traffic-type";
revision-date 2022-04-11;
}
organization "Metaswitch Networks";
contact "rvt-schemas@metaswitch.com";
description "Routing configuration schema.";
revision 2019-11-29 {
description
"Initial revision";
reference
"Metaswitch Deployment Definition Guide";
}
grouping routing-configuration-grouping {
list routing-rules {
key "name";
unique "target";
leaf name {
type string;
mandatory true;
description "The name of the routing rule.";
}
leaf target {
type union {
type ietf-inet:ip-address;
type ietf-inet:ip-prefix;
}
mandatory true;
description "The target for the routing rule.
Can be either an IP address or a block of IP addresses.";
}
leaf interface {
type traffic-type:traffic-type;
mandatory true;
description "The interface to use to connect to the specified endpoint.
This must be one of the allowed traffic types,
corresponding to the interface carrying the traffic type.";
}
leaf gateway {
type ietf-inet:ip-address;
mandatory true;
description "The IP address of the gateway to route through.";
}
leaf-list node-types {
type enumeration {
enum shcm {
description "Apply this routing rule to the shcm nodes.";
}
enum mag {
description "Apply this routing rule to the mag nodes.";
}
enum mmt-gsm {
description "Apply this routing rule to the mmt-gsm nodes.";
}
enum mmt-cdma {
description "Apply this routing rule to the mmt-cdma nodes.";
}
enum smo {
description "Apply this routing rule to the smo nodes.";
}
enum tsn {
description "Apply this routing rule to the tsn nodes.";
}
enum max {
description "Apply this routing rule to the max nodes.";
}
enum rem {
description "Apply this routing rule to the rem nodes.";
}
enum sgc {
description "Apply this routing rule to the sgc nodes.";
}
enum custom {
description "Apply this routing rule to the custom nodes.";
}
}
description "The node-types this routing rule applies to.";
}
description "The list of routing rules.";
}
description "Routing configuration";
}
}
system-configuration.yang
module system-configuration {
yang-version 1.1;
namespace "http://metaswitch.com/yang/tas-vm-build/system-configuration";
prefix "system";
organization "Metaswitch Networks";
contact "rvt-schemas@metaswitch.com";
description "OS-level parameters configuration schema.";
revision 2019-11-29 {
description
"Initial revision";
reference
"Metaswitch Deployment Definition Guide";
}
grouping system-configuration-grouping {
container networking {
container core {
leaf receive-buffer-size-default {
type uint32 {
range "65536 .. 16777216";
}
units "bytes";
default 512000;
description "Default socket receive buffer size.";
}
leaf receive-buffer-size-max {
type uint32 {
range "65536 .. 16777216";
}
units "bytes";
default 2048000;
description "Maximum socket receive buffer size.";
}
leaf send-buffer-size-default {
type uint32 {
range "65536 .. 16777216";
}
units "bytes";
default 512000;
description "Default socket send buffer size.";
}
leaf send-buffer-size-max {
type uint32 {
range "65536 .. 16777216";
}
units "bytes";
default 2048000;
description "Maximum socket send buffer size.";
}
description "Core network settings.";
}
container sctp {
leaf rto-min {
type uint32 {
range "10 .. 5000";
}
units "milliseconds";
default 50;
description "Round trip estimate minimum. "
+ "Used in SCTP's exponential backoff algorithm for retransmissions.";
}
leaf rto-initial {
type uint32 {
range "10 .. 5000";
}
units "milliseconds";
default 300;
description "Round trip estimate initial value. "
+ "Used in SCTP's exponential backoff algorithm for retransmissions.";
}
leaf rto-max {
type uint32 {
range "10 .. 5000";
}
units "milliseconds";
default 1000;
description "Round trip estimate maximum. "
+ "Used in SCTP's exponential backoff algorithm for retransmissions.";
}
leaf sack-timeout {
type uint32 {
range "50 .. 5000";
}
units "milliseconds";
default 100;
description "Timeout within which the endpoint expects to receive "
+ "a SACK message.";
}
leaf hb-interval {
type uint32 {
range "50 .. 30000";
}
units "milliseconds";
default 1000;
description "Heartbeat interval. The longer the interval, "
+ "the longer it can take to detect that communication with a peer "
+ "has been lost.";
}
leaf path-max-retransmissions {
type uint32 {
range "1 .. 20";
}
default 5;
description "Maximum number of retransmissions on one path before "
+ "communication via that path is considered to be lost.";
}
leaf association-max-retransmissions {
type uint32 {
range "1 .. 20";
}
default 10;
description "Maximum number of retransmissions to one peer before "
+ "communication with that peer is considered to be lost.";
}
description "SCTP-related settings.";
}
description "Network-related settings.";
}
description "OS-level parameters. It is advised to leave all settings at their defaults.";
}
}
traffic-type-configuration.yang
module traffic-type-configuration {
yang-version 1.1;
namespace "http://metaswitch.com/yang/tas-vm-build/traffic-type-configuration";
prefix "traffic-type";
organization "Metaswitch Networks";
contact "rvt-schemas@metaswitch.com";
description "Traffic type configuration schema.";
revision 2022-04-11 {
description "Initial revision";
reference "Metaswitch Deployment Definition Guide";
}
typedef signaling-traffic-type {
type enumeration {
enum internal {
description "Internal signaling traffic.";
}
enum diameter {
description "Diameter signaling traffic.";
}
enum ss7 {
description "SS7 signaling traffic.";
}
enum sip {
description "SIP signaling traffic.";
}
enum http {
description "HTTP signaling traffic.";
}
enum custom-signaling {
description "Applies to custom VMs only.
Custom signaling traffic.";
}
enum custom-signaling2 {
description "Applies to custom VMs only.
Second custom signaling traffic.";
}
}
description "The name of the signaling traffic type.";
}
typedef multihoming-signaling-traffic-type {
type enumeration {
enum diameter-multihoming {
description "Second Diameter signaling traffic.";
}
enum ss7-multihoming {
description "Second SS7 signaling traffic.";
}
}
description "The name of the multihoming signaling traffic type.";
}
typedef traffic-type {
type union {
type signaling-traffic-type;
type multihoming-signaling-traffic-type;
type enumeration {
enum management {
description "Management traffic.";
}
enum cluster {
description "Cluster traffic.";
}
enum access {
description "Access traffic.";
}
}
}
description "The name of the traffic type.";
}
}
vm-types.yang
module vm-types {
yang-version 1.1;
namespace "http://metaswitch.com/yang/tas-vm-build/vm-types";
prefix "vm-types";
import ietf-inet-types {
prefix "ietf-inet";
}
import extensions {
prefix "yangdoc";
revision-date 2020-12-02;
}
organization "Metaswitch Networks";
contact "rvt-schemas@metaswitch.com";
description "Types used by the various virtual machine schemas.";
revision 2019-11-29 {
description
"Initial revision";
reference
"Metaswitch Deployment Definition Guide";
}
typedef rhino-node-id-type {
type uint16 {
range "1 .. 32767";
}
description "The Rhino node identifier type.";
}
typedef sgc-cluster-name-type {
type string;
description "The SGC cluster name type.";
}
typedef deployment-id-type {
type string {
pattern "[a-zA-Z0-9-]{1,20}";
}
description "Deployment identifier type. May only contain upper and lower case letters 'a'
through 'z', the digits '0' through '9' and hyphens. Must be between 1 and
20 characters in length, inclusive.";
}
typedef site-id-type {
type string {
pattern "DC[0-9]+";
}
description "Site identifier type. Must be the letters DC followed by one or more
digits 0-9.";
}
typedef node-type-suffix-type {
type string {
pattern "[a-zA-Z0-9]*";
}
description "Node type suffix type. May only contain upper and lower case letters 'a'
through 'z' and the digits '0' through '9'. May be empty.";
}
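The three identifier patterns above can be checked outside the VM with a short Python sketch. This is illustrative only (not part of the product tooling); note that YANG patterns are implicitly anchored at both ends, which `re.fullmatch` reproduces:

```python
import re

# Patterns copied from the deployment-id-type, site-id-type and
# node-type-suffix-type typedefs above.
DEPLOYMENT_ID = re.compile(r"[a-zA-Z0-9-]{1,20}")
SITE_ID = re.compile(r"DC[0-9]+")
NODE_TYPE_SUFFIX = re.compile(r"[a-zA-Z0-9]*")

def is_valid_deployment_id(value: str) -> bool:
    """1 to 20 letters, digits or hyphens."""
    return DEPLOYMENT_ID.fullmatch(value) is not None

def is_valid_site_id(value: str) -> bool:
    """The letters 'DC' followed by one or more digits."""
    return SITE_ID.fullmatch(value) is not None
```

For example, `is_valid_deployment_id("example")` and `is_valid_site_id("DC1")` both pass, while `is_valid_site_id("dc1")` fails because the `DC` prefix is case-sensitive.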
typedef trace-level-type {
type enumeration {
enum off {
description "The 'off' trace level.";
}
enum severe {
description "The 'severe' trace level.";
}
enum warning {
description "The 'warning' trace level.";
}
enum info {
description "The 'info' trace level.";
}
enum config {
description "The 'config' trace level.";
}
enum fine {
description "The 'fine' trace level.";
}
enum finer {
description "The 'finer' trace level.";
}
enum finest {
description "The 'finest' trace level.";
}
}
description "The Rhino trace level type";
}
typedef sip-uri-type {
type string {
pattern 'sip:.*';
}
description "The SIP URI type.";
}
typedef tel-uri-type {
type string {
pattern 'tel:\+?[-*#.()A-F0-9]+';
}
description "The Tel URI type.";
}
typedef sip-or-tel-uri-type {
type union {
type sip-uri-type;
type tel-uri-type;
}
description "A type allowing either a SIP URI or a Tel URI.";
}
typedef number-string {
type string {
pattern "[0-9]+";
}
description "A type that permits a non-negative integer value.";
}
typedef phone-number-type {
type string {
pattern '\+?[*0-9]+';
}
description "A type that represents a phone number.";
}
typedef sccp-address-type {
type string {
pattern "(.*,)*type=(A|C)7.*";
pattern "(.*,)*ri=(gt|pcssn).*";
pattern "(.*,)*ssn=[0-2]?[0-9]?[0-9].*";
pattern ".*=.*(,.*=.*)*";
}
description "A type representing an SCCP address in string form.
The basic form of an SCCP address is:
`type=<variant>,ri=<address type>,<parameter>=<value>,...`
where `<variant>` is `A7` for ANSI-variant SCCP or `C7` for ITU-variant SCCP,
and `<address type>` is one of `gt` or `pcssn`
(for an address specified by Global Title (GT),
or Point Code (PC) and Subsystem Number (SSN), respectively).
The `<parameter>` options are:
- Point code: `pc=<point code in network-cluster-member (ANSI)
or integer (ITU) format>`
- Subsystem number: `ssn=<subsystem number 0-255>`
- Global title address digits: `digits=<address digits, one or more 0-9>`
- Nature of address: `nature=<nature>` where `<nature>` is
`unknown`, `international`, `national`, or `subscriber`
- Numbering plan: `numbering=<numbering>` where `<numbering>` is
`unknown`, `isdn`, `generic`, `data`, `telex`, `maritime-mobile`,
`land-mobile`, `isdn-mobile`, or `private`
- Global title translation type: `tt=<integer 0-255>`
- National indicator: `national=<true or false>`.
`parameter` names are separated from their values by an equals sign,
and all `<parameter>=<value>` pairs are separated by commas.
Do not include any whitespace anywhere in the address.
Only the `ssn` and `national` parameters are mandatory; the others are optional,
depending on the details of the address - see below.
Note carefully the following:
- For ANSI addresses, ALWAYS specify `national=true`,
unless using ITU-format addresses in an ANSI-variant network.
- For ITU addresses, ALWAYS specify `national=false`.
- All SCCP addresses across the deployment's configuration
must use the same variant (`A7` or `C7`).
- Be sure to update the SGC's SCCP variant in `sgc-config.yaml`
to match the variant of the addresses.
---
For PC/SSN addresses (with `ri=pcssn`), you need to specify
the point code and SSN.
For GT addresses (with `ri=gt`), you must specify the global title digits
and SSN in addition to the fields listed below (choose one option).
There are two options for ANSI GT addresses:
- translation type only
- numbering plan and translation type.
There are four options for ITU GT addresses:
- nature of address only
- translation type only
- numbering plan and translation type
- nature of address with either or both of numbering plan and translation type.
---
Some valid ANSI address examples are:
- `type=A7,ri=pcssn,pc=0-0-5,ssn=147,national=true`
- `type=A7,ri=gt,ssn=146,tt=8,digits=12012223333,national=true`
Some valid ITU address examples are:
- `type=C7,ri=pcssn,pc=1434,ssn=147,national=false`
- `type=C7,ri=gt,ssn=146,nature=INTERNATIONAL,numbering=ISDN,tt=0,
digits=123456,national=false`
- `type=C7,ri=gt,ssn=148,numbering=ISDN,tt=0,digits=0778899,national=false`";
}
typedef ss7-point-code-type {
type string {
pattern "(([0-2]?[0-9]?[0-9]-){2}[0-2]?[0-9]?[0-9])|"
+ "([0-1]?[0-9]{1,4})";
}
description "A type representing an SS7 point code.
When ANSI variant is in use, specify this in network-cluster-member format,
such as 1-2-3, where each element is between 0 and 255.
When ITU variant is in use, specify this as an integer between 0 and 16383.
Note that for ITU you will need to quote the integer,
as this field takes a string rather than an integer.";
}
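The two typedefs above can be exercised with a small Python sketch (illustrative only; `re.fullmatch` mirrors YANG's implicitly anchored patterns). Note that these pattern facets are deliberately coarse syntactic checks: range limits such as SSN 0-255 or ITU point codes 0-16383 are stated in the descriptions but not fully enforced by the regexes.

```python
import re

# Pattern facets copied from the sccp-address-type typedef above.
# A string is valid only if it matches every facet.
SCCP_PATTERNS = [
    r"(.*,)*type=(A|C)7.*",
    r"(.*,)*ri=(gt|pcssn).*",
    r"(.*,)*ssn=[0-2]?[0-9]?[0-9].*",
    r".*=.*(,.*=.*)*",
]

# Pattern facet copied from the ss7-point-code-type typedef:
# ANSI network-cluster-member form, or a (quoted) ITU integer.
POINT_CODE = re.compile(
    r"(([0-2]?[0-9]?[0-9]-){2}[0-2]?[0-9]?[0-9])|([0-1]?[0-9]{1,4})"
)

def is_plausible_sccp_address(address: str) -> bool:
    """Syntactic check only; does not verify the GT option combinations
    or variant-consistency rules described in the typedef."""
    return all(re.fullmatch(p, address) for p in SCCP_PATTERNS)

def is_plausible_point_code(value: str) -> bool:
    return POINT_CODE.fullmatch(value) is not None
```

The example addresses from the typedef description, such as `type=A7,ri=pcssn,pc=0-0-5,ssn=147,national=true`, pass all four facets.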
typedef ss7-address-string-type {
type string {
pattern "(.*,)*address=.*";
pattern ".*=.*(,.*=.*)*";
}
description "The SS7 address string type.";
}
typedef sip-status-code {
type uint16 {
range "100..699";
}
description "SIP response status code type.";
}
typedef secret {
type string;
description "A secret, which will be automatically encrypted using the secrets-private-key
configured in the Site Definition File (SDF).";
}
grouping cassandra-contact-point-interfaces {
leaf management.ipv4 {
type ietf-inet:ipv4-address-no-zone;
mandatory true;
description "The IPv4 address of the management interface.";
}
leaf signaling.ipv4 {
type ietf-inet:ipv4-address-no-zone;
mandatory true;
description "The IPv4 address of the signaling interface.";
}
description "Base network interfaces: management and signaling";
}
grouping day-of-week-grouping {
leaf day-of-week {
type enumeration {
enum Monday {
description "Every Monday.";
}
enum Tuesday {
description "Every Tuesday.";
}
enum Wednesday {
description "Every Wednesday.";
}
enum Thursday {
description "Every Thursday.";
}
enum Friday {
description "Every Friday.";
}
enum Saturday {
description "Every Saturday.";
}
enum Sunday {
description "Every Sunday.";
}
}
description "The day of the week on which to run the scheduled task.";
}
description "Grouping for the day of the week.";
}
grouping day-of-month-grouping {
leaf day-of-month {
type uint8 {
range "1..28";
}
description "The day of the month (from the 1st to the 28th)
on which to run the scheduled task.";
}
description "Grouping for the day of the month.";
}
grouping frequency-grouping {
choice frequency {
case daily {
// empty
}
case weekly {
uses day-of-week-grouping;
}
case monthly {
uses day-of-month-grouping;
}
description "Frequency options for running a scheduled task.
Note: running a scheduled task in the single-entry
format is deprecated.";
}
uses time-of-day-grouping;
description "Grouping for frequency options for running a scheduled task.
Note: This field is deprecated. Use the options in
frequency-list-grouping instead.";
}
grouping frequency-list-grouping {
choice frequency-list {
case weekly {
list weekly {
key "day-of-week";
uses day-of-week-grouping;
uses time-of-day-grouping;
description "A list of schedules that specifies the days of the week
and times of day to run the scheduled task";
}
}
case monthly {
list monthly {
key "day-of-month";
uses day-of-month-grouping;
uses time-of-day-grouping;
description "A list of schedules that specifies the days of the month
and times of day to run the scheduled task";
}
}
description "Frequency options for running a scheduled task.";
}
description "Grouping for frequency options for a task scheduled multiple times.";
}
grouping time-of-day-grouping {
leaf time-of-day {
type string {
pattern "([0-1][0-9]|2[0-3]):[0-5][0-9]";
}
mandatory true;
description "The time of day (24hr clock in the system's timezone)
at which to run the scheduled task.";
}
description "Grouping for specifying the time of day.";
}
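The time-of-day pattern above accepts exactly the zero-padded 24-hour times 00:00 through 23:59. A quick Python check (illustrative only):

```python
import re

# Pattern copied from the time-of-day leaf above (24hr clock, HH:MM).
TIME_OF_DAY = re.compile(r"([0-1][0-9]|2[0-3]):[0-5][0-9]")

def is_valid_time_of_day(value: str) -> bool:
    # YANG patterns are implicitly anchored, hence fullmatch.
    return TIME_OF_DAY.fullmatch(value) is not None
```

Single-digit hours must be zero-padded: "02:00" is accepted, "2:00" is not.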
grouping scheduled-task {
choice scheduling-rule {
case single-schedule {
uses frequency-grouping;
}
case multiple-schedule {
uses frequency-list-grouping;
}
description "Whether the scheduled task runs once or multiple times per interval.";
}
description "Grouping for determining whether the scheduled task runs once
or multiple times per interval.
Note: Scheduling a task once per interval is deprecated.
Use the options in frequency-list-grouping instead
to schedule a task multiple times per interval.";
}
grouping rvt-vm-grouping {
uses rhino-vm-grouping;
container scheduled-sbb-cleanups {
presence "This container is optional, but has mandatory descendants.";
uses scheduled-task;
description "Cleanup leftover SBBs and activities on specified schedules.
If omitted, SBB cleanups will be scheduled for every day at 02:00.";
}
description "Parameters for a Rhino VoLTE TAS (RVT) VM.";
}
grouping rhino-vm-grouping {
leaf rhino-node-id {
type rhino-node-id-type;
mandatory true;
description "The Rhino node identifier.";
}
container scheduled-rhino-restarts {
presence "This container is optional, but has mandatory descendants.";
uses scheduled-task;
description "Restart Rhino on a specified schedule, for maintenance purposes.
If omitted, no Rhino restarts will be enabled.
Note: Please ensure there are no Rhino restarts within one hour of a
scheduled Cassandra repair.";
}
description "Parameters for a VM that runs Rhino.";
}
grouping rhino-auth-grouping {
leaf username {
type string {
length "3..16";
pattern "[a-zA-Z0-9]+";
}
description "The user's username.
Must consist of between 3 and 16 alphanumeric characters.";
}
leaf password {
type secret {
length "8..max";
pattern "[a-zA-Z0-9_@!$%^/.=-]+";
}
description "The user's password. Will be automatically encrypted at deployment using
the deployment's 'secrets-private-key'.";
}
leaf role {
type enumeration {
enum admin {
description "Administrator role. Can make changes to Rhino configuration.";
}
enum view {
description "Read-only role. Cannot make changes to Rhino configuration.";
}
}
default view;
description "The user's role.";
}
description "Configuration for one Rhino user.";
}
grouping rem-auth-grouping {
leaf username {
type string {
length "3..16";
pattern "[a-zA-Z0-9]+";
}
description "The user's username.
Must consist of between 3 and 16 alphanumeric characters.";
}
leaf real-name {
type string;
description "The user's real name.";
}
leaf password {
type secret {
length "8..max";
pattern "[a-zA-Z0-9_@!$%^/.=-]+";
}
description "The user's password. Will be automatically encrypted at deployment using
the deployment's 'secrets-private-key'.";
}
leaf role {
type enumeration {
enum em-admin {
description "Administrator role. Can make changes to REM configuration.
Also has access to the HSS Subscriber Provisioning REST API.";
}
enum em-user {
description "Read-only role. Cannot make changes to REM configuration.
Note: Rhino write permissions are controlled by the Rhino
credentials used to connect to Rhino, NOT the REM credentials.";
}
}
default em-user;
description "The user's role.";
}
description "Configuration for one REM user.";
}
grouping diameter-configuration-grouping {
leaf origin-realm {
type ietf-inet:domain-name;
mandatory true;
description "The Diameter origin realm.";
yangdoc:change-impact "restart";
}
leaf destination-realm {
type ietf-inet:domain-name;
mandatory true;
description "The Diameter destination realm.";
}
list destination-peers {
key "destination-hostname";
min-elements 1;
leaf protocol-transport {
type enumeration {
enum aaa {
description "The Authentication, Authorization and Accounting (AAA)
protocol over TCP.";
}
enum aaas {
description "The Authentication, Authorization and Accounting with Secure
Transport (AAAS) protocol over TCP.
IMPORTANT: this protocol is currently not supported.";
}
enum sctp {
description "The Authentication, Authorization and Accounting (AAA)
protocol over Stream Control Transmission Protocol
(SCTP) transport. Will automatically be configured
multi-homed if multiple signaling interfaces are
provisioned.";
}
}
default aaa;
description "The combined Diameter protocol and transport.";
}
leaf destination-hostname {
type ietf-inet:domain-name;
mandatory true;
description "The destination hostname.";
}
leaf port {
type ietf-inet:port-number;
default 3868;
description "The destination port number.";
}
leaf metric {
type uint32;
default 1;
description "The metric to use for this peer.
Peers with lower metrics take priority over peers
with higher metrics. If all peers have the same metric,
traffic is round-robin load balanced over all peers.";
}
description "Diameter destination peer(s).";
}
description "Diameter configuration.";
}
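The metric semantics described above (lower metric takes priority; equal-metric peers are round-robin load balanced) can be sketched as follows. This is an illustration of the stated behaviour, not the VM's actual peer-selection code, and the peer hostnames are placeholders:

```python
from itertools import cycle

def active_peers(destination_peers):
    """Return the peers sharing the lowest metric; traffic is balanced
    across these, with higher-metric peers held in reserve
    (illustrative sketch of the documented behaviour)."""
    lowest = min(p["metric"] for p in destination_peers)
    return [p for p in destination_peers if p["metric"] == lowest]

peers = [
    {"destination-hostname": "peer-a.invalid", "metric": 1},
    {"destination-hostname": "peer-b.invalid", "metric": 1},
    {"destination-hostname": "peer-c.invalid", "metric": 5},
]
round_robin = cycle(active_peers(peers))  # alternates peer-a, peer-b
```

Here peer-c only receives traffic if both metric-1 peers are removed from the active set.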
typedef announcement-id-type {
type leafref {
path "/sentinel-volte/mmtel/announcement/announcements/id";
}
description "The announcement-id type, limits use to be one of the configured SIP
announcement IDs from
'/sentinel-volte/mmtel/announcement/announcements/id'.";
}
grouping feature-announcement {
container announcement {
presence "Enables announcements";
leaf announcement-id {
type announcement-id-type;
mandatory true;
description "The announcement to be played.";
}
description "Whether an announcement should be played.";
}
description "Configuration for announcements.";
}
}
Example configuration YAML files
Example for rem-vmpool-config.yaml
# This file describes the pool of Virtual Machines that comprise a REM cluster
deployment-config:rem-virtual-machine-pool:
  # needs to match the deployment_id vapp parameter
  deployment-id: example
  # needs to match the site_id vapp parameter
  site-id: DC1
  # Define one or more REM users and give their passwords in plain-text.
  # Passwords will be encrypted by 'rvtconfig upload-config' before this file is uploaded to CDS.
  rem-auth:
  - username: remreadonly
    real-name: REM read only user
    password: xxxxxxxx
  virtual-machines:
  - vm-id: example-rem-1
Example for snmp-config.yaml
deployment-config:snmp:
  # Enable SNMP v1 (not recommended)
  v1-enabled: false
  # Enable SNMP v2c
  v2c-enabled: true
  # Enable SNMP v3
  v3-enabled: false
  # SNMP Community. Required for SNMP v2c
  community: clearwater
  # SNMP agent details
  agent-details:
    location: Unknown location
    contact: support.contact@invalid.com
  # SNMP Notifications
  notifications:
    # Enable Rhino SNMP Notifications
    rhino-notifications-enabled: true
    # Enable System SNMP Notifications
    system-notifications-enabled: true
    # Enable SGC SNMP Notifications
    sgc-notifications-enabled: true
    # SNMP notification targets. Normally this is the address of your MVS
    targets:
    - version: v2c
      host: 127.0.0.1
      port: 162
    # Enable different SNMP notification categories
    categories:
    - category: alarm-notification
      enabled: true
    - category: log-notification
      enabled: false
    - category: log-rollover-notification
      enabled: false
    - category: resource-adaptor-entity-state-change-notification
      enabled: false
    - category: service-state-change-notification
      enabled: false
    - category: slee-state-change-notification
      enabled: false
    - category: trace-notification
      enabled: false
    - category: usage-notification
      enabled: false
Example for routing-config.yaml
deployment-config:routing:
  routing-rules: []
  # To create routing rules, populate the routing-rules list as shown in the example below.
  # routing-rules:
  # - name: Example
  #
  ## The target for the routing rule.
  ## Can be either an IP address or a block of addresses (e.g. 10.0.0.0/8).
  #   target: 8.8.8.8
  #
  ## The interface to use.
  ## Can be one of 'management', 'diameter', 'ss7', 'sip', 'internal', 'access', 'cluster',
  ## 'diameter-multihoming' or 'ss7-multihoming'.
  #   interface: management
  #
  ## The IP address of the gateway to route through.
  #   gateway: 0.0.0.0
  #
  ## The node types this routing rule applies to.
  ## If omitted, this routing rule will attempt to apply itself to all node types.
  #   node-types:
  #   - tsn
  #   - mag
  #
  # - name: Example2
  ## ...
Example for system-config.yaml
# This file contains OS-level settings.
# It is recommended to leave all these options at their default values,
# unless advised to change them by your Metaswitch Customer Care representative.
deployment-config:system:
  networking: {}
# To populate settings, remove the "{}" and fill in the appropriate keys and values.
# For example:
#
# deployment-config:system:
# networking:
# sctp:
# hb-interval: 1000
Connecting to MetaView Server
If you have deployed MetaView Server, Metaswitch’s network management and monitoring solution, you can use MetaView Explorer to monitor alarms on your VMs.
These instructions have been tested on version 9.5.40 of MetaView Server; for other versions the procedure could differ. In that case, refer to the MetaView Server documentation for more details.
Setting up your VMs to forward alarms to MetaView Server
To set up your VMs to forward alarms to MetaView Server, configure the SNMP notification settings in snmp-config.yaml
. An example can be found in the example snmp-config.yaml page.
MetaView Server only supports the alarm-notification category of Rhino SNMP notifications. Therefore, all other notification categories should be disabled.
Then upload the configuration to CDS for the changes to take effect.
Adding your VMs to MetaView Server
-
Set up a deployment (if one does not already exist). From the
Object tree and Views
, right-click on All managed components
and select Add Rhino deployment
. Give the deployment a name and click apply
. -
Right-click on your deployment and select
add Rhino Cluster
. This needs to be done once per node type. We recommend that you name your cluster after the node type. -
For every node in your deployment, right-click on the Rhino cluster created in the previous step for this node type and select
add Rhino node
. Enter the management IP address for the node, and the SNMP community configured in snmp-config.yaml
. If the node has been set up correctly, it will show a green tick. If it shows a red cross, click on the bell next to Alarm state → Attention Required
to see the problem.
Troubleshooting node installation
REM not running after installation
Check that bootstrap and configuration were successful:
[sentinel@rem1 ~]$ grep 'Bootstrap complete' ~/bootstrap/bootstrap.log
2019-10-28 13:53:54,226 INFO bootstrap.main Bootstrap complete
[sentinel@rem1 ~]$
If the bootstrap.log
does not contain that string, examine the log for any exceptions or errors.
[sentinel@rem1 ~]$ report-initconf status
status=vm_converged
[sentinel@rem1 ~]$
If the status is different, examine the output from report-initconf
for any problems. If that is not sufficient, examine the ~/initconf/initconf.log
file for any exceptions or errors.
Further information can be found from the REM logs in /var/log/tas
. In particular, the REM logs are found in /var/log/tas/apache-tomcat
.
Cannot connect to REM
Connect to REM using a web browser. The connection should be over HTTPS to port 8443 of the management interface, and to the /rem/
page. For example: https://192.168.10.10:8443/rem/
If you connect using a hostname rather than the IP address, be sure that the hostname refers only to a single server in DNS.
If connections to REM fail despite use of the correct hostname/IP and port, try the following:
-
Check the REM service status on the node you are trying to connect to with
sudo systemctl status rhino-element-manager
. It should be listed as active (running)
. -
Check that
jps
lists a Bootstrap
process (this is the Apache Tomcat process). -
Check that
netstat -ant6
shows two listening sockets, one on the loopback address 127.0.0.1
, port 8005, and the other on the management address, port 8443:
tcp6 0 0 127.0.0.1:8005 :::* LISTEN
tcp6 0 0 192.168.10.10:8443 :::* LISTEN
If any of the above checks fail, try restarting REM with sudo systemctl restart rhino-element-manager
. You can also check for errors in the log files in the /var/log/tas/apache-tomcat
directory.
Cannot log in to REM
When connecting to REM, you should use one of the accounts set up in the rem-vmpool-config.yaml file. The default username/password documented in the REM product documentation is not available on the REM node.
When trying to connect to Rhino, REM asks for credentials
When trying to connect to a Rhino instance, you need to enter the credentials REM can use to connect to Rhino. The Rhino username and password are configured in the VM pool YAML file for the Rhino nodes being monitored.
The mapping from REM users to Rhino users is deployment-specific (for example, you may wish to allocate a separate Rhino user to each REM user, so it is clear in Rhino audit logs which user made a certain change to Rhino configuration). As such, the VM software is unable to set up these credentials automatically.
It is recommended to use the "Save credentials" option so that you only need to specify the Rhino credentials once (per user, per instance).
Known REM product issues
For known REM issues, refer to the Known issues in REM section in the REM documentation.
RVT Diagnostics Gatherer
rvt-gather_diags
The rvt-gather_diags
script collects diagnostic information. Run rvt-gather_diags [--force] [--force-confirmed]
on the VM command line.
Option | Description |
---|---|
--force |
Prompts the user to confirm execution under high CPU load. |
--force-confirmed |
Runs without prompting the user, even under high CPU load. |
Diagnostics dumps are written to /var/rvt-diags-monitor/dumps
as a gzipped tarball. The dump name is of the form {timestamp}.{hostname}.tar.gz
. This can be extracted by running the command tar -zxf {tarball-name}
.
The script automatically deletes old dumps so that the total size of all dumps doesn’t exceed 1GB. However, it will not delete the dump just taken, even if that dump exceeds the 1GB threshold.
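The deletion policy just described, dropping the oldest dumps until the total fits under the limit while always keeping the newest dump, can be sketched in Python. This is an illustrative model of the stated policy with assumed names (prune_dumps, the limit_bytes parameter), not the monitor's actual code:

```python
import os

GIB = 1024 ** 3  # the 1GB threshold described above

def prune_dumps(dump_dir, limit_bytes=GIB):
    """Delete the oldest .tar.gz dumps until the directory total fits in
    limit_bytes, always keeping the newest dump even if it alone
    exceeds the threshold."""
    dumps = sorted(
        (os.path.join(dump_dir, name) for name in os.listdir(dump_dir)
         if name.endswith(".tar.gz")),
        key=os.path.getmtime,  # oldest first
    )
    removed = []
    while len(dumps) > 1 and sum(map(os.path.getsize, dumps)) > limit_bytes:
        oldest = dumps.pop(0)
        os.remove(oldest)
        removed.append(oldest)
    return removed
```

With three 100-byte dumps and a 150-byte limit, the two oldest are removed and the newest survives, matching the "never delete the dump just taken" rule.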
Diagnostics collected
A diagnostic dump contains the following information:
General
-
Everything in
/var/log
and/var/run
-
This includes the raw journal files.
-
-
NTP status in
ntpq.txt
-
snmp status from
snmpwalk
in snmpstats.txt
Platform information
-
lshw.txt
- Output of the lshw
command -
cpuinfo.txt
- Processor details -
meminfo.txt
- Memory details -
os.txt
- Operating System information
Networking information
-
ifconfig.txt
- Interface settings -
routes.txt
- IP routing tables -
netstat.txt
- Currently allocated sockets, as reported by netstat
-
/etc/hosts
and/etc/resolv.conf
Resource usage
-
df-kh.txt
- Disk usage as reported by df -kh
-
sar.{datestamp}.txt
- The historical system resource usage as reported by sar
fdisk-l.txt
- Output of fdisk -l
-
ps_axo.txt
- Output of ps axo
TAS-VM-Build information
-
bootstrap.log
-
initconf.log
-
The configured YAML files
-
disk_monitor.log
-
msw-release
- Details of the node type and version -
cds_deployment_data.txt
- Developer-level configuration information from the CDS -
Text files that hold the output of journalctl run for an allowlisted set of both system and TAS-specific services.
Glossary
The following acronyms and abbreviations are used throughout this documentation.
CDS |
Configuration Data Store. Database used to store configuration data for the VMs. |
CSAR |
Cloud Service ARchive. File type used by the SIMPL VM. |
Deployment ID |
Uniquely identifies a deployment, which can consist of many sites, each with many groups of VMs. |
MDM |
Metaswitch Deployment Manager. Virtual appliance compatible with many Metaswitch products that co-ordinates deployment, scaling, and healing of product nodes, and provides DNS and NTP services. |
MOP |
Method Of Procedure. A set of instructions for a specific operation. |
OVA |
Open Virtual Appliance. File type used by VMware vSphere and VMware vCloud. |
OVF |
Open Virtualization Format. File type used by VMware vSphere and VMware vCloud. |
QCOW2 |
QEMU Copy on Write 2. File type used by OpenStack. |
REM |
Rhino Element Manager |
RVT |
Rhino VoLTE TAS |
SAS |
Service Assurance Server |
SDF |
Solution Definition File. Describes the deployment, for consumption by the SIMPL VM. |
SIMPL VM |
ServiceIQ Management Platform VM. This VM has tools for deploying and upgrading a deployment. |
Site ID |
Uniquely identifies one site within the deployment, normally a geographic site (e.g. one data center). |
SLEE |
Service Logic Execution Environment. An environment that is used for developing and deploying network services in telecommunications (JSLEE Guide). For more information on how to manage the SLEE, see SLEE Management. |
TAS |
Telecom Application Server |
VM |
Virtual Machine |
YAML |
YAML Ain't Markup Language. Data serialisation language used in the custom Rhino application solution for writing configuration files. |
YANG |
Yet Another Next Generation. Schemas used for verifying YAML files. |