This document explains how to use the VM Build Container to build virtual machine images for applications developed on top of the Rhino TAS.
Notices
Copyright © 2014-2022 Metaswitch Networks. All rights reserved.
This manual is issued on a controlled basis to a specific person on the understanding that no part of the Metaswitch Networks product code or documentation (including this manual) will be copied or distributed without prior agreement in writing from Metaswitch Networks.
Metaswitch Networks reserves the right to, without notice, modify or revise all or part of this document and/or change product features or specifications and shall not be responsible for any loss, cost, or damage, including consequential damage, caused by reliance on these materials.
Metaswitch and the Metaswitch logo are trademarks of Metaswitch Networks. Other brands and products referenced herein are the trademarks or registered trademarks of their respective holders.
System requirements
This page details the system requirements for the development environment that will be used to run the VM Build Container.
OS Requirements
The VM Build Container is supported on Ubuntu Desktop, version 18.04 LTS or above.
If using another platform, Ubuntu can be installed in a virtual machine using Oracle VirtualBox or a similar technology, and the VM Build Container can be run inside the Ubuntu VM.
Resource Requirements
A development environment with at least 200 GB of available disk space is recommended. This is because various temporary versions of the VM are produced in different formats before the final output is produced.
We recommend that the development environment have a minimum of 2 CPU cores and 8 GB of RAM.
Resource requirements will vary upwards as more complex applications are installed during the build phase.
Docker
Docker must be installed in the development environment. If you don’t already have this, follow the official instructions for installing Docker on Ubuntu.
In general we recommend using the latest version of Docker available. The Docker version number is year based, so versions from 2018 or later should be suitable. For example, Ubuntu 18.04 LTS offers Docker 18.06 as its latest version, and that combination has been found to work fine.
To see the version of your Docker installation, from inside a Linux terminal, you can run:
docker version
Docker user group
By default Docker runs as the root user, but the vmbc tool runs as the logged-in user, yet needs to use Docker. To permit this, you need to create a docker group and add the current user to it using the following sequence of commands:
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
You can find more explanation of what these commands are doing at Manage Docker as a non-root user.
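To confirm that the group change has taken effect and the current user can run Docker without sudo, you can try any simple Docker command, for example:
docker ps
If this prints a (possibly empty) list of containers rather than a permission error, Docker is usable by the current user.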
Docker in Docker
Building a VM image is a complex task, and the tools used to do so themselves make use of Docker, as does the vmbc
tool which invokes those tools. The vmbc
tool comes pre-packaged with Docker-in-Docker and runs its own, internal, version of Docker automatically.
For more information about Docker-in-Docker, refer to https://hub.docker.com/_/docker.
Utilities
To run the VM Build Container, a version of bash at or above 5.0.3 must be installed on the development environment.
The zip and unzip utilities are required to use the VM Build Container to make custom VMs.
The tar utility is required to extract the archive containing the VM Build Container and the vmbc tool.
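If any of these utilities are missing from your Ubuntu environment, they can typically be installed with apt, for example:
sudo apt-get update
sudo apt-get install zip unzip tar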
Initial setup
This page details the steps that need to be taken to set up the development environment to run the VM Build Container.
Distribution package
The VM Build Container is distributed as a single archive file containing two files:
vmbc | the executable script that you run to invoke the tool |
vmbc-image | the Docker image file that contains the actual VM Build Container |
To prepare to use the tool, you first need to extract the individual files from the archive.
Extracting the tar archive
To set up a development environment to run the VM Build Container, cd into the directory containing the vmbc.tar.gz tar archive and extract it by running tar -xvzf vmbc.tar.gz.
After extracting the archive, you can safely delete the archive by running rm vmbc.tar.gz.
The directory should now contain the two files mentioned above.
You can verify that the environment has been set up correctly by running ls. The two files should be shown as per the example below.
$ ls
vmbc.tar.gz
$ tar -xvzf vmbc.tar.gz
vmbc-image
vmbc
$ rm vmbc.tar.gz
$ ls
vmbc  vmbc-image
Application requirements
This page describes the requirements for your application before you can use the VM Build Container to build VMs from your app.
-
Your application runs on Rhino 3.0.0. Currently, no other Rhino versions are supported.
-
You have a current license for Rhino 3.0.0 and all Metaswitch components used in your application. You need to provide this license when building the VM. No special Rhino license is required to use the VM Build Container.
-
Your application only uses Rhino, or the other software already installed on the VM. See the page on Included software and utilities for more details on what is included on the VMs.
Preparing your application
Choosing between a Rhino export or Sentinel package
There are two options for preparing your application as input to the VM Build Container. If your application is based on the Sentinel platform, we recommend using a Sentinel standalone package. Otherwise, you should use a Rhino export.
Sentinel package
To create a Sentinel package, follow the instructions in the Sentinel standalone packages documentation. Your output file should be named sentinel-package.zip. Make sure you add .zip to the output name so the package gets saved as a zip file.
Rhino export
Install your application into Rhino locally
After writing your application, you first need to deploy the application into Rhino version 3.0.0 locally. It does not matter if you use an SDK or production Rhino deployment for this purpose.
This guide does not describe how to write your application or how to deploy it into Rhino; this is described in the Rhino documentation.
If installing into a production Rhino deployment, ensure no per-node activation state is set before taking an export. You can achieve this using the |
Create a Rhino export
The VM Build Container takes as input a Rhino export created from a local deployment of your application into Rhino. It then creates a VM image containing your application by importing your export into a production Rhino on the VM.
Perform these steps to create your Rhino export.
-
Find your Rhino home directory, and cd into it.
-
Run the rhino-export command, making sure to provide the -s option. If exporting from a Rhino SDK installation, run:
./client/bin/rhino-export -J -s my_rhino_export
This will create a my_rhino_export directory in your Rhino home directory. For more details on how to run the rhino-export command, refer to the Rhino administration guide.
-
Create a zip file of your Rhino export:
cd my_rhino_export
zip -r ../rhino-export.zip .
This zip file rhino-export.zip will be the input to the VM Build Container.
-
You can now remove the my_rhino_export directory if desired:
cd ..
rm -rf my_rhino_export
It is important to specify the |
Sizing your VM
Sizing your application
The size of your VM fundamentally depends on your application. Thus, before being able to size your VM, you need to size your Rhino application independently. For more details, see the Rhino Troubleshooting Guide.
Remember to size your application and VM for worst-case busy hour traffic rather than the average traffic.
Sizing your VM based on your application sizing
After determining the sizing for your application, you can size your VM. We recommend the following settings; note that these are only guidelines, and the ideal values can only be determined through experimentation. A worked example follows the list.
-
RAM: Application RAM sizing + 3 GB
-
Disk space: (Application disk sizing + 5 GB) * 1.33, with a minimum of 20 GB
-
vCPUs: Application vCPU sizing
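As a worked example (the application figures here are purely illustrative): for an application sized at 8 GB of RAM, 25 GB of disk, and 6 vCPUs, the guidelines above give:
RAM:        8 GB + 3 GB = 11 GB
Disk space: (25 GB + 5 GB) * 1.33 ≈ 40 GB (above the 20 GB minimum)
vCPUs:      6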
Running the VM Build Container
Required files
To run the VMBC, you first need to create a sub-directory containing just the following files, named as follows.
Name | Description |
---|---|
| A license to use Rhino. This file will have been provided to you as part of your general contract to use this software. This file is required to run VMBC and can also be included in the final VM image. If the file is not included in the image, it needs to be added when configuring the VM(s) through the use of the rvtconfig tool. If the file was included, it can be replaced at deployment time through use of the same tool. |
| A Java Development Kit (JDK). For licensing reasons, VMBC does not include a JDK by default; you must provide one. You can download a JDK from OpenJDK. Valid JDKs are determined using a set of criteria which all JDKs from Oracle and OpenJDK should meet. The JDK you provide will be copied onto the produced VM. |
node-parameters.yaml | This file is described below in Contents of the node-parameters.yaml file. |
sentinel-package.zip or rhino-export.zip | A Sentinel package or Rhino export of your application. You must include either one or the other, but not both. |
custom-build.zip | An archive containing custom scripts to run during the build process. This file is described in Build hooks. This file is optional. |
custom-configuration.zip | An archive containing custom configuration scripts to run during configuration. This file is described in initconf hooks. This file is optional. |
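As an illustration only, an input sub-directory for a Rhino export build might look like the listing below. The directory name is arbitrary, and the Rhino license and JDK entries are shown as placeholders for whichever file names apply in your case:
$ ls vm-build-input
custom-build.zip  node-parameters.yaml  rhino-export.zip  <JDK archive>  <Rhino license file>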
Contents of the node-parameters.yaml file
The node-parameters.yaml file is a file you supply yourself, with values based on the example given below.
Be careful when editing the yaml file - the yaml file format is very flexible, but spaces and other small easily overlooked characters can be significant. You will be unlikely to have any problems if your changes are restricted to just changing the values inside the quotes. |
The example file includes a description of each of the values in the file, which should guide you when providing your values. When the vmbc tool runs, it will validate the provided values and provide guidance if any values are not acceptable.
The node-parameters.yaml file consists of three distinct types of information.
-
The first part is the node-metadata, which gives the name of the product in three forms (which you will presumably decide on once, then never need to change), plus a version number. The version number is used in filenames and should not include spaces, but beyond that no conventions are assumed. You will likely change this each time you release a new VM.
Here you can also set the username of the default Linux user, and a password for that user. The password must be at least 8 characters long, but has no other restrictions.
The password is only used for the build and initial deployment of the custom VM. For example, if the VM fails to bootstrap successfully, the VM user will use the username and password you specify here to log in through the console in their VM host’s GUI, in order to diagnose the problem. Once the VM is up and running, as part of runtime configuration the user of the VM will specify a new password in the SDF.
Finally, you can specify the signaling traffic types used by the VM. Any signaling traffic type that you wish to use with the VM needs to be defined here. This allows for use of more than two signaling interfaces, since each interface requires at least one traffic type. For valid signaling traffic types, see the YANG schema. The default signaling traffic types are custom-signaling (mandatory) and custom-signaling2 (optional).
-
The second part of the file is concerned with specifying the configuration parameters of the VM that is produced, in particular the allocated resources for CPU, memory, and disk sizes. These parameters apply only to VMs produced to run on VMware’s vSphere or vCloud platforms - if you are targeting OpenStack, the parameters are set externally to the VM. You will likely want to experiment to come up with appropriate values for these parameters.
Although these parameters apply only to VMware platforms, this file has appropriated the name flavor (as used by OpenStack) to describe a set of these configurations. You can specify multiple sets of configurations if, for example, your product might be usable in a small (possibly trial) configuration and in a larger configuration expected to be used in production. While there is no fixed limit to the number of configurations you can have, each configuration is built separately into the VM image file, so the size of the VM images is proportional to the number of configurations that you specify.
You can find some guidelines on how to size your VM in the Sizing your VM section.
Be particularly careful when editing the file to add or remove configurations - the whitespace, - and : characters in particular are all very significant in this part of the file.
-
The third part of the file specifies parameters that will affect the installed Rhino:
-
whether the installed Rhino should be clustered
-
whether it should have replication enabled
-
whether to build images with a Rhino license pre-installed (this is the license provided during the build)
-
a set of JVM tuning parameters that will affect Rhino’s performance (it is strongly recommended that you experiment to come up with appropriate tuning parameters that match the performance requirements for your application)
Sample node-parameters.yaml file
# The node metadata provides the name and version information for the VM that is being produced
node-metadata:
  # The Linux username that the product code is installed for.
  username: "rhino"
  # The default password for the above user. This can be changed at runtime via configuration.
  default-password: "!rhino123"
  # The name of the product.
  product-name: "Custom VM Rhino Product"
  # A short form of the product name used in naming the output VM. Use hyphens instead of spaces,
  # and prefer lowercase.
  # Also used to derive the CSAR VNF type and the CDS group ID.
  image-name: "custom-vm"
  # A short form of the product name for display purposes, where spaces are acceptable.
  display-name: "Custom VM"
  # The product version number. No particular format is assumed, but it is used in filenames,
  # and should not contain spaces.
  version: "0.1.0rc2"
  # The signaling traffic types used by the VM.
  # Format:
  #   - name: <traffic_type_name> (this must be a valid signaling traffic type)
  #     optional: <boolean> (defaults to false if not specified)
  # This field is optional. If not specified, the default traffic types are configured:
  #   signaling-traffic-types:
  #     - name: custom-signaling
  #     - name: custom-signaling2
  #       optional: true
  signaling-traffic-types:
    - name: internal
    - name: sip
    - name: diameter
      optional: true

# Flavors describe the virtual hardware configuration options for deploying the VM on VMware.
flavors:
  # Name for this configuration
  - name: "small"
    # Optional human-readable description for this configuration
    description: "For small-sized deployments"
    # How much RAM to use for this configuration
    ram-mb: 16384
    # How much disk space to use for this configuration
    disk-gb: 30
    # How many CPUs to allocate to this configuration (v stands for "virtual")
    vcpus: 4
  # A second named configuration - if you only want one configuration then remove this section.
  - name: "medium"
    description: "For medium-sized deployments"
    ram-mb: 16384
    disk-gb: 30
    vcpus: 8

# Rhino parameters affect the behaviour and performance of the installed Rhino instances.
rhino-parameters:
  # Whether the Rhinos on these VM instances will be clustered (true) or standalone (false).
  # If not specified this defaults to true.
  clustered: true
  # Whether the Rhinos on these VM instances should have replication enabled (true) or not (false).
  # This may only be specified if 'clustered' is true. If 'clustered' is false delete this line.
  # If not specified this defaults to false.
  replication-enabled: false
  # JVM tuning parameters. These parameters affect Rhino performance.
  # The provided values are just examples. They MUST be changed to match the demands of your application.
  tuning-parameters:
    heap-size-mb: 3072
    new-size-mb: 256
    max-new-size-mb: 256
    # Garbage collection (GC) options.
    # If you want to specify custom GC options, uncomment these lines
    # and specify options in full, such as "-XX:+UseConcMarkSweepGC",
    # with each option as a separate list item.
    # gc-options:
    #   - "-XX:+UseConcMarkSweepGC"
  # Whether to build (true) images with a Rhino license pre-installed or not (false).
  # If false, a Rhino license must be provided when configuring the VM(s).
  # If not specified this defaults to true.
  preinstall-license: true
Running the vmbc command
The command you then run is ./vmbc, which takes parameters describing the target platforms for your VM, chosen from the following:
vsphere | VMware’s vSphere platform |
vcloud | VMware’s vCloud platform |
openstack | OpenStack platform |
imageless | Platform-independent |
For every platform specified, corresponding CSAR files (the packaging used by Metaswitch’s SIMPL VM) will automatically be built.
The imageless parameter builds CSARs containing the regularly included files (product schema, resources, validation tests), but no VM image. These are mainly used for verifying configuration against the CSAR schema, or other similar testing purposes where VMs are not being deployed. They can also be used where VM images have already been uploaded to virtualisation platforms. These CSARs are much smaller in size, making file transfer much quicker.
Run this command from the directory containing the sub-directory of input files described above. The tool will automatically find which sub-directory contains the necessary files, so you do not need to pass that as a parameter. Avoid having multiple different sub-directories containing relevant files, because this will confuse the auto-detection.
The maximal invocation of the command is thus:
./vmbc vsphere vcloud openstack imageless
There is a lot of work involved in building a VM - this command will typically take 10-20 minutes or more to run.
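You do not need to specify every platform. For example, to build only an OpenStack image and its CSAR, you could run:
./vmbc openstack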
You can obtain full help for the ./vmbc script by running ./vmbc -h.
Host networking vs. bridge networking
By default, VMBC is run using Docker’s bridge networking mode. However, on some host operating systems you may find that this mode does not allow the container to get internet access, which may be required for some of your custom build hooks (for example, a hook that installs yum packages). Alternatively, you may find that, although the scripts have internet access, they do not have working DNS resolution.
VMBC does not require internet access by default. |
If this is the case, you can try running the ./vmbc command with the -n option, which will change to host networking mode instead. This should resolve the lack of internet access or DNS, but be aware that, on some host operating systems, this can cause Rhino to fail to start. See the troubleshooting section for more information.
Output files
Images are written to a directory named target/images located in the directory from which you invoked the ./vmbc command.
The command will also output build.log in the target directory, along with a number of other log files (notably Rhino logs) generated by the build process and the .out files from any custom build hooks. These can help to debug issues encountered during the build.
$ ls -1 target
before-rhino-install.out
build.log
rhino.log
console.log
<other files>
images
$ ls -1 target/images
custom-vm-0.1.0rc2-openstack-csar.zip
custom-vm-0.1.0rc2-vmware-csar.zip
custom-vm-0.1.0rc2-vsphere-csar.zip
custom-vm-0.1.0rc2-vcloud.ovf
custom-vm-0.1.0rc2-vsphere.ova
custom-vm-0.1.0rc2-openstack.qcow2
custom-vm-0.1.0rc2-imageless-csar.zip
custom-vm-0.1.0rc2.yaml
traffic-type-configuration.yang
module traffic-type-configuration {
    yang-version 1.1;
    namespace "http://metaswitch.com/yang/tas-vm-build/traffic-type-configuration";
    prefix "traffic-type";

    organization "Metaswitch Networks";
    contact "rvt-schemas@metaswitch.com";
    description "Traffic type configuration schema.";

    revision 2022-04-11 {
        description "Initial revision";
        reference "Metaswitch Deployment Definition Guide";
    }

    typedef signaling-traffic-type {
        type enumeration {
            enum internal {
                description "Internal signaling traffic.";
            }
            enum diameter {
                description "Diameter signaling traffic.";
            }
            enum ss7 {
                description "SS7 signaling traffic.";
            }
            enum sip {
                description "SIP signaling traffic.";
            }
            enum http {
                description "HTTP signaling traffic.";
            }
            enum custom-signaling {
                description "Applies to custom VMs only.
                    Custom signaling traffic.";
            }
            enum custom-signaling2 {
                description "Applies to custom VMs only.
                    Second custom signaling traffic.";
            }
        }
        description "The name of the signaling traffic type.";
    }

    typedef multihoming-signaling-traffic-type {
        type enumeration {
            enum diameter-multihoming {
                description "Second Diameter signaling traffic.";
            }
            enum ss7-multihoming {
                description "Second SS7 signaling traffic.";
            }
        }
        description "The name of the multihoming signaling traffic type.";
    }

    typedef traffic-type {
        type union {
            type signaling-traffic-type;
            type multihoming-signaling-traffic-type;
            type enumeration {
                enum management {
                    description "Management traffic.";
                }
                enum cluster {
                    description "Cluster traffic.";
                }
                enum access {
                    description "Access traffic.";
                }
            }
        }
        description "The name of the traffic type.";
    }
}
Build hooks
This page details how you can add custom scripts that run on specified hooks during the VM build process.
Note that the build hooks run during the VM build process. See initconf hooks for hooks that run once the VM has been deployed and booted.
As examples, you may wish to use build hooks to:
-
install a particular RPM package using yum
-
create or copy additional files required by your Rhino application into place
-
set suitable configuration (profiles, resource adaptor attributes, SAS bundle mappings, etc) for your Rhino application in preparation for its first-time startup.
Available build hooks
As an overview of the build process:
-
firstly, a Dockerfile containing instructions for all components of the VM is assembled and built
-
secondly, the container boots in order to run "live" installation steps such as starting Rhino for the first time
-
thirdly, the container is shut down and committed to a disk image
-
finally, the disk image is converted to VM image(s) of the requested format(s).
All hooks run during the "live" phase, as described below. As such, the hooks have full access to the VM-to-be’s filesystem.
Build hook name | Point hook is run |
---|---|
|
This is the first build hook that will run. This hook will run immediately prior to installing Rhino. NB. The |
|
This hook will run after Rhino is installed, but before it is started. |
|
This hook will run after Rhino is started and your Rhino export or package has been loaded into the Rhino SLEE. When this hook runs, Rhino is running and the SLEE is in the |
|
This hook will run at the very end of the "live" install phase. Rhino will no longer be running. |
Registering build hook scripts
To register scripts to run against build hooks, create executables named after each build hook you want to run against. Create a ZIP archive of these executables named custom-build.zip, then place that archive in the same directory as the Rhino license file when running the VM Build Container.
Each script’s name must match the corresponding hook’s name exactly. In particular, note that the names use hyphens (- ), not underscores (_ ). |
Ensure that the scripts have the user-executable permission bit set. |
Ensure that the scripts' filenames have no extension, e.g. just before-rhino-install , not before-rhino-install.sh . |
The build scripts must be in the root of the custom-build.zip archive, i.e. they must appear in the current working directory when the archive is extracted. The custom-build.zip archive may contain additional supplementary data for your build scripts alongside the executables. This data can take the form of any number of directories and files in any structure; all that matters is that the executable hook scripts are in the root directory. Build scripts can call other programs to achieve their tasks; these must already exist on the VMs or be provided in the custom-build.zip archive.
Example custom-build.zip archive structure:
custom-build.zip
├── before-rhino-install
├── after-rhino-import
├── required-os-package.rpm
└── application-files
└── application-config.properties
How build hook scripts are run
Before any build hook scripts are run, the custom-build.zip file will be extracted to the directory /home/<username>/install/build-hooks/workspace. This is part of the following directory structure:
/home/<username>/install/build-hooks
├── workspace
└── output
All build hook scripts are run as the default user. They are given no command-line parameters, and the current working directory will be set to the above workspace directory, meaning the script will be able to find the entire contents of the custom-build.zip file in the current working directory.
When a build hook script is run, any output to the stdout and stderr streams will be captured, and this output will be stored in a file in the output directory.
-
The output file for a build hook script will be named the same as the build hook script but with a .out extension added, e.g. the file containing output for the before-rhino-install build hook will be named before-rhino-install.out.
-
An empty file will be created if a build hook script does not write to either stdout or stderr.
-
No output files will be created for build hooks that do not have scripts registered against them.
If a build hook script returns with any return code other than 0, this will be interpreted as a failure and will cause the VM build process to be immediately aborted.
If the build succeeds, the workspace directory is cleaned up, but the original executable scripts will be available on the VM in the scripts directory, and the output directory is left in place. For example, if the before-rhino-install and after-rhino-import hooks were included in the custom-build.zip file, then the VM’s filesystem will include the following:
/home/<username>/install/build-hooks
├── scripts
│   ├── before-rhino-install
│   └── after-rhino-import
└── output
    ├── before-rhino-install.out
    └── after-rhino-import.out
In addition, regardless of whether the build succeeded or failed, the .out files from the build hooks will be made available outside the VMBC in the target directory. See Output files.
Environment variables
When a build hook script runs, the following variables are present in its environment.
-
The environment variable JAVA_HOME is set to the location of the Java Runtime Environment (JRE) that comes installed on the VM.
-
Additionally, for all hooks except before-rhino-install, the RHINO_HOME and CLIENT_HOME environment variables are set.
-
RHINO_HOME points to the location of the Rhino installation.
-
CLIENT_HOME points to the client directory within the Rhino installation, inside which you can find various Rhino tools, most notably ${CLIENT_HOME}/bin/rhino-console.
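As a brief illustration of using these variables, here is a sketch of an after-rhino-import hook that uses CLIENT_HOME to run rhino-console and set a resource adaptor configuration property before first startup. The RA entity name, property name, and value are hypothetical, and set -euo pipefail ensures any failure aborts the build rather than producing a broken image:
#!/bin/bash
# Abort the build on any error, unset variable, or failed pipeline stage.
set -euo pipefail
# CLIENT_HOME is set for all hooks except before-rhino-install.
# The entity, property, and value below are placeholders for this example.
"${CLIENT_HOME}/bin/rhino-console" updateraentityconfigproperties \
    custom-ra-entity config-property example-value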
Caveats when using build hooks
There are some actions that might have some unexpected effects. These include:
-
Editing the network-related files /etc/hosts and /etc/resolv.conf. VMBC uses Docker during the build process, and Docker overrides /etc/hosts and /etc/resolv.conf as part of its internal networking. Therefore, these files cannot be edited during build hooks. Any required changes should be applied at deployment time through the Initconf hooks instead.
-
Editing files and configuration stored in the node subdirectory. During the image build process, VMBC temporarily creates a node with node ID 101. This node gets deleted at the end of the build process, and a node with the configured ID is created when the VM is deployed. This means that any changes to files or configuration stored in the directory for the temporary node with ID 101 will not persist. Such changes should instead be applied to the default configuration in etc/defaults, or be done at deployment time through the Initconf hooks instead. Examples of configuration that is stored in the node subdirectory include:
-
Logging and Tracer configuration (either set through rhino-console or by editing config/logging.xml directly)
-
Rhino configuration in config/rhino-config.xml
-
Savanna cluster configuration in config/savanna
-
MLet configuration in config/permachine-mlet.conf
-
specific files required by resource adaptors, e.g. config/tcapsim-gt-table.txt.
-
Updating system packages through yum during a build hook. It is strongly recommended not to update any of the pre-installed system packages using the build hooks, as this can lead to unexpected compatibility issues. In particular, the systemd package must never be updated, as the build process depends on the exact version that comes pre-installed. Therefore, if running a yum update command, the systemd package should be excluded by specifying --exclude=systemd*.
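For reference, if a build hook does run a package update (again, this is not generally recommended, and assumes the build user is able to run yum), the systemd exclusion described above would be applied like this:
yum update -y --exclude=systemd*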
Example build script
A full worked example of using the build hooks follows.
-
Create a before-rhino-install build hook script with the contents
#!/bin/bash
pwd
cat hello.txt
and a file called hello.txt with the contents Hello, world!. Ensure the before-rhino-install script has the user-executable permission bit set, for example, by using chmod +x before-rhino-install.
-
Compress these two files into an archive named custom-build.zip using the command zip custom-build.zip before-rhino-install hello.txt.
-
Include this zip file in the same directory as the Rhino license when running the VMBC.
-
During the build process, the before-rhino-install script will be executed prior to the installation of Rhino.
-
When the VM boots, you will be able to find this script in the /home/<username>/install/build-hooks/scripts directory.
-
There will also be a /home/<username>/install/build-hooks/output directory containing one file, before-rhino-install.out, whose contents will be:
/home/<username>/install/build-hooks/workspace
Hello, world!
-
You can also find the above before-rhino-install.out file in the target directory on the machine running VMBC.
Initconf hooks
initconf is the name of a daemon running in the background of all VMs produced using the VM Build Container. It handles configuration, decommissioning, and upgrade of the VMs.
This page details how you can add custom configuration scripts that run on specified hooks in initconf - both during configuration of your VMs, and when a VM is decommissioned.
initconf hooks
There are two types of initconf hooks:
-
configuration hooks which run after new config has been uploaded to the CDS
-
quiesce hooks which run when a VM is decommissioned, e.g. due to an upgrade triggered by MDM.
These hooks are detailed in the tables below.
Configuration hooks
Configuration hook name | Point hook is run |
---|---|
|
This is the first configuration hook that will run. This hook will run immediately after new config has been uploaded to the CDS. |
|
This will run before Rhino has been configured. |
|
This will run after Rhino has started. |
|
This will run after the |
|
This will run after the SLEE has started. |
|
This will run after all other configuration hooks complete. |
The hook before-slee-start is run before the initconf task that is responsible for starting the SLEE after Rhino has been started. If initconf runs due to a change in CDS config after the SLEE has been started, the SLEE will already be running, but the before-slee-start config hook will still be run. The before-slee-start config script must take into account that the SLEE may already be running by the time this hook is run. |
Registering initconf hook scripts
To register initconf hook scripts to run against configuration or quiesce hooks, create executables named after each configuration hook you want to run against and add these to the optional custom-configuration.zip archive in the same directory as your Rhino license.
The initconf hook scripts must be in the root of the custom-configuration.zip archive, i.e. they must appear in the current working directory when the archive is extracted. The custom-configuration.zip archive may contain additional supplementary data for your initconf hook scripts alongside the executables. Configuration scripts can call other programs to achieve their tasks. These must already exist on the VMs or be provided in the custom-configuration.zip archive.
Each script’s name must match the corresponding hook’s name exactly. In particular, note that the names use hyphens (- ), not underscores (_ ). |
Ensure that the scripts have the user-executable permission bit set. |
Ensure that the scripts' filenames have no extension, e.g. just before-rhino-configuration , not before-rhino-configuration.sh . |
Example custom-configuration.zip archive structure:
custom-configuration.zip
├── initial
├── before-rhino-configuration
├── after-slee-start
├── custom-configurer.jar
└── supplementary-data
└── data.json
The custom-config directory
A directory called custom-config will be created in the home directory of your VMs. This directory will contain the following sub-directories:
Directory Name | Description |
---|---|
custom-config-workspace | Contents of the custom-configuration.zip archive will be extracted into this directory. |
custom-config-data | All configuration yaml files will be copied to this directory. |
custom-config-output | This directory contains files containing any output generated by the initconf hook scripts. |
How initconf hook scripts are run
Configuration hook scripts will be wrapped by initconf tasks to be run during the configuration process when new config is uploaded to the CDS. Quiesce scripts will be wrapped by quiesce tasks to be run during the quiesce process when a VM is decommissioned.
The initconf hook scripts will be run as the default user and will be run from the custom-config-workspace directory. The path to the custom-config-data directory will be passed to the initconf hook scripts as an argument so that they can use the data in the configuration yaml files.
When an initconf hook script is run, any output to the stdout and stderr streams will be captured, and this output will be stored in a file in the custom-config-output directory. The output file for an initconf hook script will be named after the initconf hook script with a .out extension added, e.g. the file containing output for the initial initconf hook script will be named initial.out.
An empty file will be created if an initconf hook script does not write to either stdout or stderr. No output files will be created for hooks that do not have scripts registered against them.
If an initconf hook script returns with any return code other than 0, this will be interpreted as a failure. On such a failure the configuration process will end, and will start again when new config is uploaded to the CDS.
Environment variables
When an initconf hook script runs, the following variables are present in its environment.
-
The environment variable
JAVA_HOME
is set to the location of the Java Runtime Environment (JRE) that comes installed on the VM. -
The environment variable
RHINO_HOME
points to the location of the Rhino installation. -
The environment variable
CLIENT_HOME
points to theclient
directory within the Rhino installation, inside which you can find various Rhino tools, most notably${CLIENT_HOME}/bin/rhino-console
.
Example initial initconf hook script
If an initial initconf hook script with the contents
#!/bin/bash
cat hello.txt
echo Configuration yaml files:
ls "$1"
and a file called hello.txt with the contents Hello, world! are compressed into an archive called custom-configuration.zip using the command zip custom-configuration.zip initial hello.txt, and this zip file is included in the same directory as the Rhino license before building a VM, then once the VM has been built these files will be extracted into the custom-config-workspace directory.
During configuration, the initial initconf hook script will be run against the initial configuration hook.
After configuration has been completed, an ls on the custom-config directory will show the following contents:
$ ls custom-config
custom-config-data  custom-config-workspace  custom-config-output
An ls on each sub-directory within the custom-config directory will show:
$ ls custom-config/custom-config-workspace
hello.txt  initial
$ ls custom-config/custom-config-data/
custom-config-data.yaml  routing-config.yaml  sdf-custom.yaml
custom-vmpool-config.yaml  sas-config.yaml  snmp-config.yaml
$ ls custom-config/custom-config-output/
initial.out
The contents of the resulting initial.out file will be:
Hello, world!
Configuration yaml files:
custom-config-data.yaml
custom-vmpool-config.yaml
routing-config.yaml
sas-config.yaml
sdf-custom.yaml
snmp-config.yaml
Example before-slee-start initconf hook script
The before-slee-start initconf hook is useful for setting up configuration within Rhino, based on the configuration in the custom-config-data.yaml configuration file.
For example, suppose a service uses an RA entity called custom-ra-entity, with a configuration property config-property, that we want to be able to configure on the VM at runtime. We can do this by providing the value in custom-config-data.yaml:
my-configuration:
  my-value: test
Note that this file is not present on the VM; it is provided using rvtconfig after the VM has been deployed. This means the value can be reconfigured and is not hard-coded on the VM.
To set this configuration within the VM, we can use the following before-slee-start initconf hook:
#!/opt/tasvmruntime-py/bin/python3
import os
import subprocess
import sys

import yaml
from pathlib import Path

config_dir = Path(sys.argv[1])

with open(config_dir / "custom-config-data.yaml", "r") as input_file:
    custom_config_data = yaml.safe_load(input_file)

config_property_value = custom_config_data["my-configuration"]["my-value"]
client_home = os.environ["CLIENT_HOME"]

process = subprocess.run(
    [
        f"{client_home}/bin/rhino-console",
        "updateraentityconfigproperties",
        "custom-ra-entity",
        "config-property",
        config_property_value,
    ],
    check=True,
    text=True,
)
This will cause the custom-ra-entity configuration property config-property to be set to the value provided in my-configuration/my-value in custom-config-data.yaml.
Included software and utilities
This page details the software included on VMs built by the VM Build Container, both provided by Metaswitch and by third-parties.
Metaswitch
Name | Description | Version |
---|---|---|
Rhino TAS | An application server that supports the development of telecommunications applications. | |
Third-Party
Name | Description | Version |
---|---|---|
| A tool for automating software build processes. | |
CentOS | The Linux distribution installed on the VMs. | 7 |
Docker | A set of platform as a service (PaaS) products that uses OS-level virtualization to deliver software in packages called containers. | |
| An ultralight service mesh. | |
| An exporter for machine metrics. | |
Python | A high level general-purpose programming language. | 2.7.5 (version bundled with CentOS7) |
CentOS 7 packages
You can find information about installed CentOS 7 packages by executing this command on a VM:
sudo yum list installed
Limitations
This page describes some known limitations with the VM Build Container, and the VM images it produces.
Custom VM image limitations
The application on the VM cannot be changed
After creating a VM image, the application on the VM image cannot be changed. The only way to change it in a deployment is to build a new VM image, and use the Upgrades process to upgrade to the new image.
Cannot update the OS or kernel
It is not possible to update the OS or kernel on the VM image. If an updated OS or kernel is required, a new version of VMBC based on that updated OS or kernel should be used to build a new VM image, and afterward the Upgrades process should be used to upgrade to the new image.
Upgrades
Upgrades are supported using the SIMPL VM.
More information can be found in the pages below:
Troubleshooting
This page describes some common issues encountered when using the VM Build Container.
Rhino fails to start
If you get the following error during the build:
FAILED - Hit exception: Timed out waiting for Rhino to start
examine target/rhino.log to identify the problem, as below.
"Endpoint for this node does not match active interface on this host" error
Symptoms
A message like this in target/rhino.log:
Endpoint for this node does not match active interface on this host. Endpoint: 172.17.0.1:12100
Cause
You have encountered a limitation using host networking mode with VMBC, where Rhino does not correctly identify the network interfaces in use. This is an artifact of using Docker-in-Docker and happens only on some operating systems.
Solution
Re-run VMBC without the -n flag.
If you have custom build scripts that require internet access and removal of the -n flag causes them to fail, you need to find an alternative approach, such as downloading yum packages or other files beforehand and including them in the custom build hooks archive.
Watchdog timeout error
Symptoms
Messages like the following in target/rhino.log:
Watchdog Failure has been notified to all listeners because condition 'GroupHeartbeat for group rhino-cluster (sent=10 received=0)' failed
*** WATCHDOG TIMEOUT ***
Failed watchdog condition: GroupHeartbeat for group rhino-cluster (sent=10 received=0)
Failed watchdog condition: GroupHeartbeat for group rhino-management (sent=10 received=0)
Failed watchdog condition: GroupHeartbeat for group rhino-admin (sent=10 received=0)
Failed watchdog condition: GroupHeartbeat for group domain-0-rhino-ah (sent=10 received=0)
Failed watchdog condition: GroupHeartbeat for group domain-0-rhino-db-stripe-0 (sent=10 received=0)
"Connection to PostgreSQL database failed" error
Symptoms
A message like this in target/rhino.log:
Connection to PostgreSQL database failed: server="localhost:5432", user="rhino", database="rhino_100"