This document explains how to use the VM Build Container to build virtual machine images for applications developed on top of the Rhino TAS.
Notices
Copyright © 2014-2022 Metaswitch Networks. All rights reserved.
This manual is issued on a controlled basis to a specific person on the understanding that no part of the Metaswitch Networks product code or documentation (including this manual) will be copied or distributed without prior agreement in writing from Metaswitch Networks.
Metaswitch Networks reserves the right to, without notice, modify or revise all or part of this document and/or change product features or specifications and shall not be responsible for any loss, cost, or damage, including consequential damage, caused by reliance on these materials.
Metaswitch and the Metaswitch logo are trademarks of Metaswitch Networks. Other brands and products referenced herein are the trademarks or registered trademarks of their respective holders.
System requirements
This page details the system requirements for the development environment that will be used to run the VM Build Container.
OS Requirements
The VM Build Container is supported on Ubuntu Desktop, version 20.04 LTS or above.
If you want to run the VM Build Container on Windows, install Ubuntu in the Windows Subsystem for Linux.
Resource Requirements
A development environment with at least 200 GB of available disk space is recommended, because the build produces several temporary versions of the VM in different formats before the final output.
We recommend that the development environment have at least 2 CPU cores and 8 GB of RAM.
Resource requirements increase as more complex applications are installed during the build phase.
Docker
Docker must be installed in the development environment. If you don’t already have it, follow the official instructions for installing Docker on Ubuntu.
In general we recommend using the latest version of Docker available. The minimum supported version is 20.10.10.
To see the version of your Docker installation, from inside a Linux terminal, you can run:
docker version
Docker user group
By default, Docker runs as the root user, but the vmbc tool runs as the logged-in user and needs access to Docker. To permit this, create a docker group and add the current user to it using the following sequence of commands:
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
You can find more explanation of what these commands are doing at Manage Docker as a non-root user.
Docker in Docker
Building a VM image is a complex task, and the tools used to do so themselves make use of Docker, as does the vmbc tool which invokes them. The vmbc tool comes pre-packaged with Docker-in-Docker and automatically runs its own internal instance of Docker.
For more information about Docker-in-Docker, refer to https://hub.docker.com/_/docker.
Utilities
To run the VM Build Container, bash version 5.0.3 or later must be installed on the development environment.
The zip and unzip utilities are required to use the VM Build Container to make custom VMs.
The tar utility is required to extract the archive containing the VM Build Container and the vmbc tool.
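If any of these utilities are missing, you can install them through the system package manager. For example, on Ubuntu (assuming the standard Ubuntu package names):
sudo apt-get update
sudo apt-get install zip unzip tar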
Initial setup
This page details the steps that need to be taken to set up the development environment to run the VM Build Container.
Distribution package
The VM Build Container is distributed as a single archive file containing two files:
vmbc | the executable script that you run to invoke the tool |
vmbc-image | the Docker image file that contains the actual VM Build Container |
To prepare to use the tool, you first need to extract the individual files from the archive.
Extracting the tar archive
To set up a development environment to run the VM Build Container, cd into the directory containing the vmbc.tar.gz archive and extract it by running tar -xvzf vmbc.tar.gz.
After extraction, you can safely delete the archive by running rm vmbc.tar.gz.
The directory should now contain the two files mentioned above. You can verify that the environment has been set up correctly by running ls; the two files should be shown as in the example below.
$ ls
vmbc.tar.gz
$ tar -xvzf vmbc.tar.gz
vmbc-image
vmbc
$ rm vmbc.tar.gz
$ ls
vmbc  vmbc-image
Application requirements
This page describes the requirements your application must meet before you can use the VM Build Container to build VMs from it.
Rhino
Your application must run on a production Rhino installation. Version 3.2 of Rhino comes prepackaged with VMBC; you can optionally use a later Rhino version if you have a suitable production installer (rhino-install.tar) for it. See Running the VM Build Container for more information on using a different Rhino version.
License
You must have a current license for Rhino and all Metaswitch components used in your application, and the license must cover the appropriate version(s) of those components. You provide this license when building the VM, and you can choose either to leave the license present on the produced VMs, or to remove it and hence require the VM users to provide one during configuration. No special Rhino license is required to use the VM Build Container itself.
Software dependencies
The VMs come with some preinstalled software by default - see Included software and utilities for more details. If your application requires additional software, use the build hooks and/or initconf hooks to install it.
Preparing your application
Application packaging methods supported by VMBC
The VM Build Container supports three options for packaging your application for virtual appliance creation: a directory of deployable units, a Rhino export, or a Sentinel package.
Deployable units
Deployable units (DUs) are packages of compiled application code, which you can install into the Rhino Service Logic Execution Environment (SLEE). After the installation and appropriate configuration, the application can run in Rhino and handle network traffic.
Rhino export
It is possible to take a snapshot of a Rhino SLEE, most commonly as a method of backing up the SLEE contents so they can be restored later in the event of a problem. This snapshot is called a Rhino export. The page About Rhino Exports in the Rhino documentation explains what exports are and what they contain.
Sentinel package
The standard approach for applications built using the Sentinel platform is to package their components into one or more Sentinel SDK modules. The SDK modules, together with any configuration files and scripts, can then be put into a Sentinel standalone package, which provides an easy method of installing and configuring the application in Rhino.
Choosing between a directory of deployable units, a Rhino export, or a Sentinel package
The VM Build Container allows you to create VMs running your Rhino application from any of these three input methods.
- For the vast majority of Rhino applications, we recommend using a directory of deployable units. This is the simplest method: once the application has been compiled, preparing it for VMBC requires nothing more than copying the relevant du.jar files into a directory.
- If your application is based on the Sentinel platform, we recommend using a Sentinel standalone package.
- In rare cases, notably if your application uses advanced Rhino features such as bindings or linked components, use a Rhino export.
The following sections describe each of these input methods in more detail.
Preparing applications for VMBC
Prepare an application as a directory of deployable units
In the input directory for VMBC, create a directory named du. Place into this directory any deployable units that form part of your application, including but not limited to:
- resource adaptors
- resource adaptor types
- profile specifications
- services and SBBs
- event types
- dependencies of any of the above, for example third-party libraries.
All deployable units must be valid du.jar files. Do not include any built-in Rhino components; include only components and libraries specific to your application. |
VMBC will install your DUs into Rhino, but for your services to function, you will need to create any RA entities, profile tables, and profiles they require, and create links between these components and your SBBs. We recommend that you use build hooks to create profiles, RA entities, and RA link name bindings, and optionally to set default RA entity properties; a sketch follows this note. Then, use initconf hooks to set RA entity properties and profile attributes based on configuration provided at runtime, and to activate services and RA entities. |
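As an illustrative sketch only: an after-rhino-import build hook (see Build hooks) could create and link an RA entity using rhino-console. The RA component ID, entity name, and link name below are hypothetical placeholders; substitute your application's identifiers, and check the rhino-console help for the exact argument order.
#!/bin/bash
# Hypothetical example - every name below is a placeholder.
set -e
# Create an RA entity from an installed resource adaptor component
# (check 'rhino-console help createraentity' for the exact argument order).
"$CLIENT_HOME"/bin/rhino-console createraentity \
    "name=example-ra,vendor=Example,version=1.0" example-ra-entity
# Bind the link name that the application's SBBs use to locate the entity.
"$CLIENT_HOME"/bin/rhino-console bindralinkname example-ra-entity example-ra-link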
Prepare an application as a Rhino export
Install your application into Rhino locally
After writing your application, you first need to deploy it into a local Rhino version 3.2 installation. It does not matter whether you use an SDK or production Rhino deployment for this purpose.
This guide does not describe how to write your application or how to deploy it into Rhino; this is described in the Rhino documentation.
If installing into a production Rhino deployment, ensure no per-node activation state is set before taking an export. |
Create a Rhino export
The VM Build Container takes as input a Rhino export created from a local deployment of your application into Rhino. It then creates a VM image containing your application by importing your export into a production Rhino on the VM.
Perform these steps to create your Rhino export.
- Find your Rhino home directory, and cd into it.
- Run the rhino-export command, making sure to provide the -s option. If exporting from a Rhino SDK installation, run:
./client/bin/rhino-export -J -s my_rhino_export
This will create a my_rhino_export directory in your Rhino home directory. For more details on how to run the rhino-export command, refer to the Rhino administration guide.
It is important to specify the -s flag to rhino-export to ensure configuration will not be included in the export. This avoids compatibility issues (when exporting from SDK Rhino) and ensures no incorrect configuration gets applied to the resulting VM.
- Create a zip file of your Rhino export:
cd my_rhino_export
zip -r ../rhino-export.zip .
Copy the rhino-export.zip file to the VMBC input directory.
- You can now remove the my_rhino_export directory if desired:
cd ..
rm -rf my_rhino_export
Prepare an application as a Sentinel package
To create a Sentinel package, follow the instructions on the Sentinel standalone packages page. Name your output file sentinel-package.zip; make sure you add .zip to the output name so the package gets saved as a zip file.
Copy the sentinel-package.zip file to the VMBC input directory.
Sizing your VM
Sizing your application
The size of your VM fundamentally depends on your application. Thus, before being able to size your VM, you need to size your Rhino application independently. For more details, see the Rhino Troubleshooting Guide.
Remember to size your application and VM for worst-case busy hour traffic rather than the average traffic.
Sizing your VM based on your application sizing
After determining the sizing for your application, you can size your VM. We recommend the following settings; note that these are only guidelines, and the ideal values can only be determined through experimentation. A worked example follows the list.
- RAM: application RAM sizing + 3 GB
- Disk space: (application disk sizing + 5 GB) × 1.33, with a minimum of 20 GB
- vCPUs: application vCPU sizing
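For example, an application sized at 8 GB of RAM, 13 GB of disk, and 4 vCPUs gives a VM sizing of 11 GB of RAM (8 + 3), approximately 24 GB of disk ((13 + 5) × 1.33 ≈ 23.9), and 4 vCPUs.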
Running the VM Build Container
Required files
To run the VMBC, first create a sub-directory containing the following files, named as follows.
Name | Description |
---|---|
| A license to use Rhino. This file will have been provided to you as part of your general contract to use this software. It is required to run VMBC and can also be included in the final VM image. If the file is not included in the image, it needs to be added when configuring the VM(s) through use of the rvtconfig tool. If the file was included, it can be replaced at deployment time through use of the same tool. |
| A Java Development Kit (JDK). For licensing reasons, VMBC does not include a JDK by default; you must provide one. You can download a JDK from OpenJDK. Valid JDKs are determined using criteria that all JDKs from Oracle and OpenJDK should meet. The JDK you provide will be copied onto the produced VM. |
node-parameters.yaml | A file listing certain properties of the VM to be built. This file is described on the Node parameters file page. |
| A Sentinel package (sentinel-package.zip), Rhino export (rhino-export.zip), or directory of deployable units (du) of your application. You must include exactly one of these. If you provide a directory of deployable units, it must be a directory named du containing your application's du.jar files, as described in Preparing your application. |
| An RSA private key used to sign the Cloud Service Archives (CSARs) from which the SIMPL VM will deploy your application. By signing the CSARs, your customers or users who deploy the CSARs can verify that they are the authentic published CSARs for your application, and are of the correct version. You can generate a suitable key using a standard RSA key-generation tool. Providing a signing key is mandatory. The key is only used at build time to sign the CSARs, and is not included in the final VM images or CSARs. If you generate a key pair, make sure to delete or move the public key file, as VMBC requires only the private key. |
Optional files
For VMBC, the sub-directory may also contain the following optional files.
Name | Description |
---|---|
rhino-install.tar | A production Rhino package. The file can have any name, but it must be a valid tar archive with a name ending in .tar or .tar.gz. If this file is provided, VMBC will install the given Rhino into the VMs in place of the Rhino version that comes prepackaged with VMBC. See Providing a Rhino package below for information on using a different Rhino version. |
custom-build.zip | An archive containing custom scripts to run during the build process. This file is described in Build hooks. |
custom-configuration.zip | An archive containing custom configuration scripts to run during configuration. This file is described in initconf hooks. |
| A YAML file containing SAS bundle mappings. Each mapping is from a fully-qualified bundle name to a bundle prefix. Specify all bundle mappings at the top level of the YAML file. You can specify the prefixes in any integer format accepted by YAML; for this purpose, hexadecimal is most common. Here is an example file:
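The bundle names and prefixes below are hypothetical placeholders; substitute the SAS bundles your application actually uses.
"com.example.myapp.sas-bundle": 0x10000
"com.example.myapp.second-sas-bundle": 0x20000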
When provided, the bundle mappings are set in Rhino during the VM build. If required, your build hooks or initconf hooks can always make further changes to SAS bundle mappings. Refer to SAS Bundle Mappings in the Rhino documentation for more information on SAS bundle mappings. |
Providing a Rhino package
As noted in the table above, you can provide a production Rhino package in the input files, and VMBC will then install that version of Rhino into the VMs. A provided Rhino package must:
- be a production Rhino package, not the Rhino SDK
- be a valid tar archive (possibly gzip-compressed), with a name ending in .tar or .tar.gz
- contain a version of Rhino that is at least that of the prepackaged Rhino, namely 3.2.
Updating the Rhino version this way is useful, for instance, when you wish to take a new Rhino version for a security fix or software update, without having to wait for the next release of VMBC.
To avoid unexpected behaviour, we strongly recommend that you use only Rhino versions with the same major version number as the pre-packaged Rhino, i.e. versions where the first two digits of the version number are identical. For security reasons, VMBC rejects attempts to use an earlier version of Rhino than the prepackaged version. Providing a later version of Rhino may not function as expected due to compatibility issues (the code in VMBC and on the VMs may not always be compatible with the given Rhino version). VMBC will emit a warning to this effect. |
Running the vmbc command
The command you then run is ./vmbc, which takes parameters describing the target platforms for your VM, chosen from the following:
vsphere | VMware’s vSphere platform |
vcloud | VMware’s vCloud platform |
openstack | OpenStack platform |
imageless | Platform-independent |
For every platform specified, corresponding CSAR files (the packaging used by Metaswitch’s SIMPL VM) will automatically be built.
The imageless parameter builds CSARs containing the regularly included files (product schema, resources, validation tests), but no VM image. These are mainly used for verifying configuration against the CSAR schema, or for similar testing purposes where VMs are not being deployed. They can also be used where VM images have already been uploaded to the virtualisation platforms. Imageless CSARs are much smaller, making file transfer much quicker.
Run this command from the directory containing the sub-directory of input files described above. The tool automatically detects which sub-directory contains the necessary files, so you do not need to pass that as a parameter. Avoid having multiple sub-directories containing relevant files, as this will confuse the auto-detection.
The maximal invocation of the command is thus:
./vmbc vsphere vcloud openstack imageless
There is a lot of work involved in building a VM - this command will typically take 10-20 minutes or more to run.
You can obtain full help for the ./vmbc script by running ./vmbc -h.
Host networking vs. bridge networking
By default, VMBC runs using Docker’s bridge networking mode. However, on some host operating systems you may find that this mode does not allow the container to get internet access, which may be required by some of your custom build hooks (for example, a hook that installs yum packages). Alternatively, you may find that, although the scripts have internet access, they do not have working DNS resolution.
VMBC does not require internet access by default. |
If this is the case, you can try running the ./vmbc command with the -n option, which changes to host networking mode instead. This should resolve the lack of internet access or DNS, but be aware that, on some host operating systems, this can cause Rhino to fail to start. See the troubleshooting section for more information.
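For example, to build only an OpenStack image using host networking mode:
./vmbc -n openstack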
Output files
Images are written to a directory named target/images located in the directory from which you invoked the ./vmbc command.
The command also writes a build.log file to the target directory, along with a number of other log files (notably Rhino logs) generated by the build process and the .out files from any custom build hooks. These can help to debug issues encountered during the build.
$ ls -1 target
before-rhino-install.out
build.log
rhino.log
console.log
<other files>
images
$ ls -1 target/images
custom-vm-0.1.0rc2-openstack-csar.zip
custom-vm-0.1.0rc2-vmware-csar.zip
custom-vm-0.1.0rc2-vsphere-csar.zip
custom-vm-0.1.0rc2-vcloud.ovf
custom-vm-0.1.0rc2-vsphere.ova
custom-vm-0.1.0rc2-openstack.qcow2
custom-vm-0.1.0rc2-imageless-csar.zip
custom-vm-0.1.0rc2.yaml
Node parameters file
The node-parameters.yaml file describes a wide range of properties of the VM to be built. You can find an example file at the bottom of this page.
Be careful when editing the YAML file: the format is very flexible, but spaces and other small, easily overlooked characters can be significant. You are unlikely to have any problems if you only change the values inside the quotes. |
The example file includes a description of each value in the file. When the vmbc tool runs, it will validate the values, and provide guidance if any values are not acceptable.
The node-parameters.yaml file consists of three distinct sections:
node-metadata
In the first part of the file, you specify high-level properties of your product.
Naming
Specify the name of the product and a version number. The version number is used in filenames and should not include spaces, but beyond that no conventions are assumed. Typically you will change the version, but not the product name, each time you release a new VM.
You need to specify the product name in three forms:
Field | Description | Format |
---|---|---|
product-name | The "official" full name of your product. It appears in the OVF (for VMware builds) and as the Rhino SNMP agent description. | Any string of up to 100 characters |
display-name | The "friendly" name of the product, used for display purposes. This is often the same as, or a shortened form of, the product-name. | Any string of up to 32 characters |
image-name | An identifier to represent this product amongst others your end users may be deploying. This identifier is used in a number of places, notably the filename of the image, the VNFC type field in the SDF, and the CDS group ID. | A string of up to 32 characters, consisting only of letters, digits, and hyphens |
OS user
Set the username of the default Linux user, and a password for that user. The password must be at least 8 characters long, but has no other restrictions.
The password is only used for build and initial deployment of the custom VM. For example, if the VM fails to bootstrap successfully, the VM user will use the username and password you specify here to log in through the console in their VM host’s GUI to diagnose the problem. Once the VM is up and running, the user of the VM will specify a new password in the SDF as part of runtime configuration. |
Signaling traffic types
You can specify the signaling traffic types used by the VM. Any signaling traffic type that you wish to use with the VM needs to be defined here. This allows for use of more than two signaling interfaces, since each interface requires at least one traffic type. For valid signaling traffic types see the YANG schema.
VMs built by VMBC always use the management traffic type. If you configure Rhino to run in pooled mode, or if replication is enabled, the cluster traffic type is also used. If you don’t specify any signaling traffic types, the VM will default to having a mandatory custom-signaling traffic type and an optional custom-signaling2 traffic type. Refer to the Custom VM Install Guide for more information about traffic types and their relationship to the VM’s network interfaces. |
Parallel upgrade support
You can specify settings for parallel upgrade.
flavors
The second part of the file is concerned with specifying the virtual hardware specifications of the VM that is produced, in particular the allocated resources for CPU, memory, and disk sizes. These specifications apply only to VMs produced to run on VMware’s vSphere or vCloud platforms, or the Microsoft Azure cloud platform (in which case you specify an Azure SKU and disk type). For OpenStack deployments, the user configures the flavor on the host before deploying the VM.
Although these parameters do not apply to OpenStack, this file has appropriated the name flavor (as used by OpenStack) to describe a configuration of CPU, memory, and disk size. You can specify multiple flavors; for example, you might create a small configuration for lab deployments and a larger configuration for production use. While there is no fixed limit on the number of flavors, each flavor is built separately into the VM image file, so the size of the VM images grows with the number of flavors you specify.
You will likely want to experiment to come up with appropriate values for these parameters. You can find some guidelines on how to size your VM in the Sizing your VM section.
rhino-parameters
The third part of the file specifies parameters that will affect the installed Rhino:
- whether the installed Rhino should run in cluster mode or pool mode
- whether the installed Rhino should have replication enabled
- whether to build images with a Rhino license pre-installed
- a set of JVM tuning parameters that will affect Rhino’s performance
Running Rhino in unclustered mode is no longer supported; only pooled or clustered modes are supported. Pooled mode is similar to unclustered mode in that each Rhino instance needs configuring separately, but also allows certain "clustered-style" operations such as statistics gathering, viewing the status of all Rhino nodes with a single management command, and session replication. |
The preinstall-license option
If you choose to build images with a license installed (preinstall-license is set to true), VMBC installs onto the VM the same license that you provide in the build. This way, the end user does not have to provide a license of their own when configuring the VM (though they can still replace the license if they wish). Check with your Metaswitch Customer Care Representative as to whether the license you are using to build the VMs is suitable for your end users.
Specifying JVM tuning and garbage collection options
Specify the heap size, new size, and maximum new size options as numbers without any suffix. Values are in megabytes.
You can also specify garbage collection options. These should be in the form of JVM command-line arguments including the "-XX" or similar prefix, one option per line. For example, to enable G1GC (the Garbage First garbage collector) with a 150 ms maximum pause time, specify
gc-options:
- "-XX:+UseG1GC"
- "-XX:MaxGCPauseMillis=150"
All JVM tuning parameters are optional. The default heap size is 3072 MB (3 GB), and the default new size and maximum new size are both 256 MB.
The default GC options are the same as the Rhino production defaults, which are a suitable starting point for most Rhino applications:
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=60
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+CMSClassUnloadingEnabled
-XX:+ParallelRefProcEnabled
If you specify anything in the gc-options field, none of these default options apply. Hence, to use CMS but change the initiating occupancy fraction to 70%, you must specify -XX:+UseConcMarkSweepGC and -XX:CMSInitiatingOccupancyFraction=70, along with any of the other defaults you wish to keep, as in the example below.
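For instance, the complete gc-options list for that case, keeping the remaining production defaults, would be:
gc-options:
  - "-XX:+UseConcMarkSweepGC"
  - "-XX:CMSInitiatingOccupancyFraction=70"
  - "-XX:+UseCMSInitiatingOccupancyOnly"
  - "-XX:+CMSClassUnloadingEnabled"
  - "-XX:+ParallelRefProcEnabled"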
We strongly recommend that you experiment to come up with appropriate JVM tuning parameters that match the performance requirements for your application. |
Sample node-parameters.yaml file
# The node metadata provides the name and version information for the VM that is being produced
node-metadata:
# The Linux username that the product code is installed for.
username: "rhino"
# The default password for the above user. This can be changed at runtime via configuration.
default-password: "specify-a-password-here"
# The name of the product.
product-name: "Custom VM Rhino Product"
# A short form of the product name used in naming the output VM. Use hyphens instead of spaces,
# and prefer lowercase.
# Also used to derive the CSAR VNF type and the CDS group ID.
image-name: "custom-vm"
# A short form of the product name for display purposes, where spaces are acceptable.
display-name: "Custom VM"
# The product version number. No particular format is assumed, but it is used in filenames,
# and should not contain spaces.
version: "0.1.0rc2"
# How many seconds SIMPL should wait for the node to commission.
# Defaults to 1200, allowed range is 120..3600.
healthcheck-timeout: 1200
# How many seconds SIMPL should wait for the node to decommission.
# Defaults to 900, allowed range is 120..3600.
decommission-timeout: 900
# The signaling traffic types used by the VM.
# Format:
# - name: <traffic_type_name> (this must be a valid signaling traffic type)
# optional: <boolean> (defaults to false if not specified)
# This field is optional. If not specified, the default traffic types are configured:
# signaling-traffic-types:
# - name: custom-signaling
# - name: custom-signaling2
# optional: true
signaling-traffic-types:
- name: internal
- name: sip
- name: diameter
optional: true
# Parallel upgrade options.
# To disable parallel upgrade, remove this section.
parallel-upgrade:
# The proportion of the entire deployment that can be upgraded in parallel.
# The number of VMs upgraded at once is calculated as this factor
# multiplied by the total number of VMs, and discarding any fractional remainder.
# For example, if factor is 0.25 and the deployment has 18 VMs,
# the VMs will be upgraded in groups of 4.
# This value must be expressed as a number between 0 and 1 (exclusive)
# and must be formatted as a string.
# It defaults to 0.25.
factor: "0.25"
# The maximum number of VMs to upgrade at once.
# For example, if factor is 0.25 and we have 16 VMs, the group size would be 4;
# but if max is 3, then the VMs will be upgraded in groups of only 3.
# This value must be at least 2, and defaults to 10.
max: 10
# Flavors describe the virtual hardware configuration options for deploying the VM on VMware.
flavors:
# Name for this configuration
- name: "small"
# Optional human-readable description for this configuration
description: "For small-sized deployments"
# How much RAM to use for this configuration
ram-mb: 16384
# How much disk space to use for this configuration
disk-gb: 30
# How many CPUs to allocate to this configuration (v stands for "virtual")
vcpus: 4
# A second named configuration - if you only want one configuration then remove this section.
- name: "medium"
description: "For medium-sized deployments"
ram-mb: 16384
disk-gb: 30
vcpus: 8
# Rhino parameters affect the behaviour and performance of the installed Rhino instances.
rhino-parameters:
# Whether the Rhinos on these VM instances will be clustered (true) or pooled (false).
clustered: false
# Whether the Rhinos on these VM instances should have replication enabled (true) or not (false).
# This may only be specified if 'clustered' is true. If 'clustered' is false, delete this line.
# If not specified this defaults to false.
replication-enabled: false
# JVM tuning parameters. These parameters affect Rhino performance.
# The provided values are just examples. They MUST be changed to match the demands of your application.
tuning-parameters:
heap-size-mb: 3072
new-size-mb: 256
max-new-size-mb: 256
# Garbage collection (GC) options.
# If you want to specify custom GC options, uncomment these lines
# and specify options in full, such as "-XX:+UseConcMarkSweepGC",
# with each option as a separate list item.
# gc-options:
# - "-XX:+UseConcMarkSweepGC"
# Whether to build (true) images with a Rhino license pre-installed or not (false).
# If false, a Rhino license must be provided when configuring the VM(s).
# If not specified this defaults to true.
preinstall-license: true
YANG schema for traffic type configuration
module traffic-type-configuration {
yang-version 1.1;
namespace "http://metaswitch.com/yang/tas-vm-build/traffic-type-configuration";
prefix "traffic-type";
organization "Metaswitch Networks";
contact "rvt-schemas@metaswitch.com";
description "Traffic type configuration schema.";
revision 2022-04-11 {
description "Initial revision";
reference "Metaswitch Deployment Definition Guide";
}
typedef signaling-traffic-type {
type enumeration {
enum internal {
description "Internal signaling traffic.";
}
enum diameter {
description "Diameter signaling traffic.";
}
enum ss7 {
description "SS7 signaling traffic.";
}
enum sip {
description "SIP signaling traffic.";
}
enum http {
description "HTTP signaling traffic.";
}
enum custom-signaling {
description "Applies to custom VMs only.
Custom signaling traffic.";
}
enum custom-signaling2 {
description "Applies to custom VMs only.
Second custom signaling traffic.";
}
}
description "The name of the signaling traffic type.";
}
typedef multihoming-signaling-traffic-type {
type enumeration {
enum diameter-multihoming {
description "Second Diameter signaling traffic.";
}
enum ss7-multihoming {
description "Second SS7 signaling traffic.";
}
}
description "The name of the multihoming signaling traffic type.";
}
typedef traffic-type {
type union {
type signaling-traffic-type;
type multihoming-signaling-traffic-type;
type enumeration {
enum management {
description "Management traffic.";
}
enum cluster {
description "Cluster traffic.";
}
enum access {
description "Access traffic.";
}
}
}
description "The name of the traffic type.";
}
}
Build hooks
This page details how you can add custom scripts that run on specified hooks during the VM build process.
Note that the build hooks run during the VM build process. See initconf hooks for hooks that run once the VM has been deployed and booted.
As examples, you may wish to use build hooks to:
- install a particular RPM package using yum
- create or copy additional files required by your Rhino application into place
- set suitable configuration (profiles, resource adaptor attributes, SAS bundle mappings, etc.) for your Rhino application in preparation for its first-time startup.
If using yum install to download and install a specific package in a build script, make sure you specify the -y flag for automatic confirmation, e.g. sudo yum install -y vim. |
Available build hooks
As an overview of the build process:
- firstly, a Dockerfile containing instructions for all components of the VM is assembled and built
- secondly, the container boots in order to run "live" installation steps such as starting Rhino for the first time
- thirdly, the container is shut down and committed to a disk image
- finally, the disk image is converted to VM image(s) of the requested format(s).
All hooks run during the "live" phase, as described below. As such, the hooks have full access to the VM-to-be’s filesystem.
Build hook name | Point hook is run |
---|---|
before-rhino-install | This is the first build hook that will run, immediately prior to installing Rhino. |
| This hook will run after Rhino is installed, but before it is started. |
after-rhino-import | This hook will run after Rhino is started and your Rhino export, Sentinel package, or deployable units (see Preparing your application) have been loaded into the Rhino SLEE. |
| This hook will run at the very end of the "live" install phase. Rhino will no longer be running. |
Registering build hook scripts
To register scripts to run against build hooks, create executables named after each build hook you want to run against. Create a ZIP archive of these executables named custom-build.zip, then place that archive in the same directory as the Rhino license file when running the VM Build Container.
Each script’s name must match the corresponding hook’s name exactly. In particular, note that the names use hyphens (-), not underscores (_). |
Ensure that the scripts have the user-executable permission bit set. |
Ensure that the scripts' filenames have no extension, e.g. just before-rhino-install, not before-rhino-install.sh. |
The build scripts must be in the root of the custom-build.zip archive, i.e. they must appear in the current working directory when the archive is extracted. The custom-build.zip archive may contain additional supplementary data for your build scripts alongside the executables. This data can take the form of any number of directories and files in any structure; all that matters is that the executable hook scripts are in the root directory. Build scripts can call other programs to achieve their tasks; these must already exist on the VMs or be provided in the custom-build.zip archive.
Example custom-build.zip archive structure:
custom-build.zip
├── before-rhino-install
├── after-rhino-import
├── required-os-package.rpm
└── application-files
└── application-config.properties
How build hook scripts are run
Before any build hook scripts are run, the custom-build.zip file will be extracted to the directory /home/<username>/install/build-hooks/workspace. This is part of the following directory structure:
/home/<username>/install/build-hooks
├── workspace
└── output
All build hook scripts are run as the default user. They are given no command-line parameters, and the current working directory is set to the workspace directory above, meaning each script can find the entire contents of the custom-build.zip file in its current working directory.
When a build hook script is run, any output to the stdout and stderr streams will be captured and stored in a file in the output directory.
- The output file for a build hook script is named after the script with a .out extension added, e.g. the file containing output for the before-rhino-install build hook is named before-rhino-install.out.
- An empty file is created if a build hook script does not write to either stdout or stderr.
- No output files are created for build hooks that do not have scripts registered against them.
If a build hook script returns with any return code other than 0, this will be interpreted as a failure and will cause the VM build process to be immediately aborted.
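Because any non-zero exit code aborts the build, it helps to make hook scripts fail fast rather than run on past errors. A minimal sketch using standard bash options (nothing VMBC-specific):
#!/bin/bash
# Abort on any command failure, on use of unset variables, and on
# failures inside pipelines, so problems stop the build with a clear
# error in the hook's .out file instead of being silently ignored.
set -euo pipefail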
If the build succeeds, the workspace directory is cleaned up, but the original executable scripts will be available on the VM in the scripts directory, and the output directory is left in place. For example, if the before-rhino-install and after-rhino-import hooks were included in the custom-build.zip file, then the VM’s filesystem will include the following:
/home/<username>/install/build-hooks
├── scripts
│   ├── before-rhino-install
│   └── after-rhino-import
└── output
    ├── before-rhino-install.out
    └── after-rhino-import.out
In addition, regardless of whether the build succeeded or failed, the .out files from the build hooks will be made available outside the VMBC in the target directory. See Output files.
Environment variables
When a build hook script runs, the following variables are present in its environment.
- The environment variable JAVA_HOME is set to the location of the Java Runtime Environment (JRE) that comes installed on the VM.
- Additionally, for all hooks except before-rhino-install, the RHINO_HOME and CLIENT_HOME environment variables are set:
  - RHINO_HOME points to the location of the Rhino installation.
  - CLIENT_HOME points to the client directory within the Rhino installation, inside which you can find various Rhino tools, most notably ${CLIENT_HOME}/bin/rhino-console.
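For instance, a hook script for any hook other than before-rhino-install (so that all three variables are set) could record the tool locations in its .out file. A minimal sketch:
#!/bin/bash
# Record where the bundled JRE and the Rhino installation live.
echo "JAVA_HOME=$JAVA_HOME"
"$JAVA_HOME"/bin/java -version
echo "RHINO_HOME=$RHINO_HOME"
ls "$CLIENT_HOME"/bin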
Caveats when using build hooks
There are some actions that might have unexpected effects. These include:
- Editing the network-related files /etc/hosts and /etc/resolv.conf. VMBC uses Docker during the build process, and Docker overrides /etc/hosts and /etc/resolv.conf as part of its internal networking. Therefore, these files cannot be edited during build hooks. Any required changes should be applied at deployment time through the Initconf hooks instead.
- Editing files and configuration stored in the node subdirectory. During the image build process, VMBC temporarily creates a node with node ID 101, which is deleted at the end of the build process. The user specifies node IDs for each VM in the custom-vm-pool.yaml file, and initconf on each VM creates a node with the configured ID when the VM is deployed. This means that any changes to files or configuration stored in the directory for the temporary node with ID 101 will not persist. Such changes should instead be applied to the default configuration in etc/defaults, or be made at deployment time through the Initconf hooks. Examples of configuration stored in the node subdirectory include:
  - logging and tracer configuration (either set through rhino-console or by editing config/logging.xml directly)
  - Rhino configuration in config/rhino-config.xml
  - Savanna cluster configuration in config/savanna
  - MLet configuration in config/permachine-mlet.conf
  - specific files required by resource adaptors, e.g. config/tcapsim-gt-table.txt.
- Updating system packages through yum during a build hook. It is strongly recommended not to update any of the pre-installed system packages using the build hooks, as this can lead to unexpected compatibility issues.
Example build script
A full worked example of using the build hooks follows.
- Create a before-rhino-install build hook script with the contents
#!/bin/bash
pwd
cat hello.txt
and a file called hello.txt with the contents Hello, world!. Ensure the before-rhino-install script has the user-executable permission bit set, for example by using chmod +x before-rhino-install.
- Compress these two files into an archive named custom-build.zip using the command zip custom-build.zip before-rhino-install hello.txt.
- Include this zip file in the same directory as the Rhino license when running the VMBC.
- During the build process, the before-rhino-install script will be executed prior to the installation of Rhino.
- When the VM boots, you will be able to find this script in the /home/<username>/install/build-hooks/scripts directory.
- There will also be a /home/<username>/install/build-hooks/output directory containing one file, before-rhino-install.out, whose contents will be:
/home/<username>/install/build-hooks/workspace
Hello, world!
- You can also find the above before-rhino-install.out file in the target directory on the machine running VMBC.
Initconf hooks
initconf is the name of a daemon running in the background of all VMs produced using the VM Build Container. It handles configuration, decommissioning, and upgrade of the VMs.
This page details how you can add custom configuration scripts that run on specified hooks in initconf, both during configuration of your VMs and when a VM is decommissioned.
initconf hooks
There are two types of initconf hooks:
- configuration hooks, which run after new config has been uploaded to the CDS
- quiesce hooks, which run when a VM is decommissioned, e.g. due to an upgrade triggered by MDM.
These hooks are detailed in the tables below.
Configuration hooks
Configuration hook name | Point hook is run |
---|---|
initial | This is the first configuration hook that will run, immediately after new config has been uploaded to the CDS. |
before-rhino-configuration | This will run before Rhino has been configured. |
before-slee-start | This will run after Rhino has started. |
| This will run after the |
| This will run after the SLEE has started. |
| This will run after all other configuration hooks complete. |
The hook before-slee-start is run before the initconf task that is responsible for starting the SLEE after Rhino has been started. If initconf runs due to a change in CDS config after the SLEE has been started, the SLEE will already be running, but the before-slee-start config hook will still be run. The before-slee-start config script must take into account that the SLEE may already be running by the time this hook is run. |
Registering initconf hook scripts
To register initconf hook scripts to run against configuration or quiesce hooks, create executables named after each hook you want to run against and add these to the optional custom-configuration.zip archive in the same directory as your Rhino license.
The initconf hook scripts must be in the root of the custom-configuration.zip archive, i.e. they must appear in the current working directory when the archive is extracted. The custom-configuration.zip archive may contain additional supplementary data for your initconf hook scripts alongside the executables. Configuration scripts can call other programs to achieve their tasks; these must already exist on the VMs or be provided in the custom-configuration.zip archive.
Each script’s name must match the corresponding hook’s name exactly. In particular, note that the names use hyphens (-), not underscores (_). |
Ensure that the scripts have the user-executable permission bit set. |
Ensure that the scripts' filenames have no extension, e.g. just before-rhino-configuration, not before-rhino-configuration.sh. |
Example custom-configuration.zip archive structure:
custom-configuration.zip
├── initial
├── before-rhino-configuration
├── after-slee-start
├── custom-configurer.jar
└── supplementary-data
└── data.json
The custom-config directory
A directory called custom-config will be created in the home directory of your VMs. This directory will contain the following sub-directories:
Directory Name | Description |
---|---|
custom-config-workspace | Contents of the custom-configuration.zip archive are extracted into this directory. |
custom-config-data | All configuration yaml files will be copied to this directory. |
custom-config-output | This directory contains files holding any output generated by the initconf hook scripts. |
How initconf hook scripts are run
Configuration hook scripts will be wrapped by initconf tasks to be run during the configuration process when new config is uploaded to the CDS. Quiesce scripts will be wrapped by quiesce tasks to be run during the quiesce process when a VM is decommissioned.
The initconf hook scripts will be run as the default user and will be run from the custom-config-workspace directory. The path to the custom-config-data directory will be passed to the initconf hook scripts as an argument so that they can use the data in the configuration yaml files.
When an initconf hook script is run, any output to the stdout and stderr streams will be captured and stored in a file in the custom-config-output directory. The output file for an initconf hook script will be named after the script with a .out extension added, e.g. the file containing output for the initial initconf hook script will be named initial.out.
An empty file will be created if an initconf hook script does not write to either stdout or stderr. No output files will be created for hooks that do not have scripts registered against them.
If an initconf hook script returns with any return code other than 0, this will be interpreted as a failure. On such a failure the configuration process will end, and will start again when new config is uploaded to the CDS.
Environment variables
When an initconf hook script runs, the following variables are present in its environment.
- The environment variable JAVA_HOME is set to the location of the Java Runtime Environment (JRE) that comes installed on the VM.
- The environment variable RHINO_HOME points to the location of the Rhino installation.
- The environment variable CLIENT_HOME points to the client directory within the Rhino installation, inside which you can find various Rhino tools, most notably ${CLIENT_HOME}/bin/rhino-console.
Example initial initconf hook script
Suppose you create an initial initconf hook script with the contents
#!/bin/bash
cat hello.txt
echo Configuration yaml files:
ls "$1"
and a file called hello.txt with the contents Hello, world!, and compress them into an archive called custom-configuration.zip using the command zip custom-configuration.zip initial hello.txt. If this zip file is included in the same directory as the Rhino license before building a VM, then when the VM has been built, these files will be extracted into the custom-config-workspace directory.
During configuration, the initial initconf hook script will be run against the initial configuration hook.
After configuration has been completed, an ls on the custom-config directory will show the following contents:
$ ls custom-config
custom-config-data  custom-config-workspace  custom-config-output
An ls on each sub-directory within the custom-config directory will show:
$ ls custom-config/custom-config-workspace
hello.txt  initial
$ ls custom-config/custom-config-data/
custom-config-data.yaml  routing-config.yaml  sdf-custom.yaml
custom-vmpool-config.yaml  sas-config.yaml  snmp-config.yaml
$ ls custom-config/custom-config-output/
initial.out
The contents of the resulting initial.out file will be:
Hello, world!
Configuration yaml files:
custom-config-data.yaml
custom-vmpool-config.yaml
routing-config.yaml
sas-config.yaml
sdf-custom.yaml
snmp-config.yaml
Example before-slee-start initconf hook script
The before-slee-start initconf hook is useful for setting up configuration within Rhino, based on the configuration in the custom-config-data.yaml configuration file.
For example, suppose a service uses an RA entity called custom-ra-entity with a configuration property config-property that we want to be able to configure on the VM at runtime. We can do this by providing the value in custom-config-data.yaml:
my-configuration:
my-value: test
Note that this file is not present on the VM; it is provided using rvtconfig after the VM has been deployed. This means the value can be reconfigured and is not hard-coded on the VM.
To set this configuration within the VM, we can use the following before-slee-start initconf hook:
#!/opt/tasvmruntime-py/bin/python3
import os
import subprocess
import sys
import yaml
from pathlib import Path
config_dir = Path(sys.argv[1])
with open(config_dir / "custom-config-data.yaml", "r") as input_file:
    custom_config_data = yaml.safe_load(input_file)
config_property_value = custom_config_data["my-configuration"]["my-value"]
client_home = os.environ["CLIENT_HOME"]
process = subprocess.run(
    [
        f"{client_home}/bin/rhino-console",
        "updateraentityconfigproperties",
        "custom-ra-entity",
        "config-property",
        config_property_value,
    ],
    check=True,
    text=True,
)
This will cause the custom-ra-entity configuration property config-property to be set to the value provided in my-configuration/my-value in custom-config-data.yaml.
Included software and utilities
This page details the software included on VMs built by the VM Build Container, both software provided by Metaswitch and software provided by third parties.
Metaswitch
Name | Description | Version |
---|---|---|
Rhino TAS | An application server that supports the development of telecommunications applications. | 3.2 |
Third-Party
Name | Description | Version |
---|---|---|
Apache Ant | A tool for automating software build processes. | |
CentOS | The Linux distribution installed on the VMs. | 7 |
Docker | A set of platform as a service (PaaS) products that uses OS-level virtualization to deliver software in packages called containers. | |
Linkerd | An ultralight service mesh. | |
node_exporter | An exporter for machine metrics. | |
Python | A high-level general-purpose programming language. | 2.7.5 (version bundled with CentOS 7) |
CentOS 7 packages
You can find information about installed CentOS 7 packages by executing this command on a VM:
sudo yum list installed
Upgrades
Upgrades are supported using the SIMPL VM.
More information can be found in the pages below:
Parallel upgrades
By default, SIMPL VM upgrades one node at a time. This has the advantage of maintaining as much availability as possible during an upgrade, but means an upgrade can take a long time.
Depending on the application and the capacity of the end user’s deployment, it may be safe to upgrade multiple nodes at the same time. This is known as a parallel upgrade.
To determine whether your application supports parallel upgrade, you will need to consider all aspects of how it might function when multiple nodes quiesce or boot simultaneously, and possible impacts of multiple nodes being offline. For example:
- What is the minimum deployment size (number of VMs) for your application? If it is common to deploy the VMs in very small numbers, parallel upgrade may not provide much benefit.
- If the application reads from a distributed database such as Cassandra hosted on the VMs, could you fail to get a quorum of nodes if too many are down?
- Are there intensive database or storage operations performed on startup and/or quiesce, which might impact the performance of your database, network and/or storage systems?
- Are there any potential race conditions in your application’s quiesce logic?
node-parameters.yaml settings
Once you have decided that you do want to support parallel upgrade, and determined the number of nodes that can safely be upgraded at once, you can specify parallel upgrade settings in your node-parameters.yaml file. The factor parameter is the proportion of the entire deployment that is upgraded at once, expressed as a number between 0 and 1 (exclusive). The max parameter puts a further limit on the number of nodes for larger deployments.
You can calculate the number of nodes that are upgraded at once by multiplying the deployment size by the factor, rounding down to a whole number, then taking the smaller of the result and the configured maximum number of nodes. For example, with a factor of 0.25 and a max of 10, a 26-node deployment is upgraded six VMs at a time (four groups of six, then the final two nodes).
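To check the group size for a given deployment without doing the arithmetic by hand, the rule above can be reproduced with plain awk (ordinary shell arithmetic, not a VMBC tool); using the values from the example:
awk -v vms=26 -v factor=0.25 -v max=10 \
    'BEGIN { g = int(vms * factor); if (g > max) g = max; print g }'
This prints 6, matching the example above.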
If your product does not support parallel upgrade, omit the parallel-upgrade section in node-parameters.yaml.
As a rough guide, a single VM or group of VMs takes about 2-3 minutes to upgrade, outside of any time required to run initconf hook scripts. When a product supports parallel upgrade, users of a SIMPL VM can always choose to upgrade it sequentially (one node at a time), though this would be unusual. The converse is not true, that is, a product that only supports sequential upgrade cannot be upgraded in parallel. |
Troubleshooting
This page describes some common issues encountered when using the VM Build Container.
Rhino fails to start
If you get the following error during the build:
FAILED - Hit exception: Timed out waiting for Rhino to start
examine target/rhino.log to identify the problem, as described below.
"Endpoint for this node does not match active interface on this host" error
Symptoms
A message like this in target/rhino.log:
Endpoint for this node does not match active interface on this host. Endpoint: 172.17.0.1:12100
Cause
You have encountered a limitation using host networking mode with VMBC, where Rhino does not correctly identify the network interfaces in use. This is an artifact of using Docker-in-Docker and happens only on some operating systems.
Solution
Re-run VMBC without the -n flag.
If you have custom build scripts that require internet access, and removing the -n flag causes them to fail, you will need to find an alternative approach, such as downloading yum packages or other files beforehand and including them in the custom build hooks archive.
Watchdog timeout error
Symptoms
Messages like the following in target/rhino.log:
Watchdog Failure has been notified to all listeners because condition 'GroupHeartbeat for group rhino-cluster (sent=10 received=0)' failed
*** WATCHDOG TIMEOUT ***
Failed watchdog condition: GroupHeartbeat for group rhino-cluster (sent=10 received=0)
Failed watchdog condition: GroupHeartbeat for group rhino-management (sent=10 received=0)
Failed watchdog condition: GroupHeartbeat for group rhino-admin (sent=10 received=0)
Failed watchdog condition: GroupHeartbeat for group domain-0-rhino-ah (sent=10 received=0)
Failed watchdog condition: GroupHeartbeat for group domain-0-rhino-db-stripe-0 (sent=10 received=0)
"Connection to PostgreSQL database failed" error
Symptoms
A message like this in target/rhino.log:
Connection to PostgreSQL database failed: server="localhost:5432", user="rhino", database="rhino_100"
Errors importing your application into Rhino
"One or more components already deployed, or defined multiple times in the deployable unit" error
Cause
You provided a directory of deployable units, and within those deployable units, you included one or more built-in Rhino components, and/or your application’s deployable units define the same component more than once.
Solution
Ensure the directory of deployable units does not contain any built-in Rhino components. It must only contain components and libraries specific to your application.
Ensure none of your application’s deployable units define the same component more than once, including across different jar files. The error message includes details of the (first) duplicate component that Rhino found.