In this part of the tutorial, you will:

  • Create the solution definition file (SDF), secrets file, and configuration files.

  • Upload the configuration files to CDS.

  • Use SIMPL VM to create your VMs.

  • Verify the created VMs are functional.

Steps

Before starting, open a separate terminal, and in that terminal start an SSH session to the SIMPL VM with the command ssh -o ServerAliveInterval=30 admin@<SIMPL VM IP address>. The ServerAliveInterval option keeps the connection alive; by default the SIMPL VM terminates idle connections after a few minutes.
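If you prefer not to retype the keepalive option each time, you can store it in your SSH client configuration on your development environment. This is a minimal sketch; the simpl-vm host alias is an arbitrary example name:

cat >> ~/.ssh/config <<'EOF'
Host simpl-vm
    HostName <SIMPL VM IP address>
    User admin
    ServerAliveInterval 30
EOF

# then connect with:
ssh simpl-vm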

The sections below describe the detailed steps.

Generate an SSH key and a secrets private key

SSH key

SSH access to the VMs uses public key authentication only; password authentication is not supported. Generate an SSH key named vm-key without a passphrase using the following command:

ssh-keygen -t rsa -b 4096 -N "" -f vm-key

In the following step, you will add the content of the public key file (vm-key.pub) to the SDF.
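To see the exact text you will paste, print the public key; it is a single line beginning with ssh-rsa, followed by the key material and a comment:

cat vm-key.pub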

Caution Always keep the SSH private key secret.

Refer to Logging in through SSH in the Custom VM Install Guide for more information about the configuration of SSH access.

Secrets private key

A secrets private key is a Fernet key, which is used to encrypt sensitive data in the SDF and configuration files before they are uploaded to CDS.

Caution Always keep the secrets private key secret. If it is compromised, then so are various sensitive fields in the configuration files, such as Rhino user passwords.

On the SIMPL VM, run the commands cdcsars and http-example/0.1.0/resources/rvtconfig generate-private-key to generate a secrets private key. The output of the command will be a base64 string of 44 characters that ends with an equal sign (=). Keep a copy of this value; you will use it in the Write the secrets file step below. Be sure to copy the whole key, including the equal sign.

Note If you specified a different image name or version in node-parameters.yaml, replace http-example and 0.1.0 with the appropriate values, here and in all subsequent steps.
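As an optional sanity check on the key generated above, you can confirm it has the expected length. The sketch below assumes you have copied the key into a shell variable named SECRETS_KEY (a name used here for illustration only):

SECRETS_KEY='<paste the generated key here>'
echo -n "$SECRETS_KEY" | wc -c    # a valid Fernet key is 44 characters long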

Extract the example SDF

The CSAR that VMBC builds contains an example SDF. In your development environment, extract this with:

unzip -j target/images/<csar>.zip resources/sdf-rvt.yaml

Write the SDF

Open the sdf-rvt.yaml you extracted in the previous step in a text editor.

Site name and deployment ID
1  msw-deployment:deployment:
2    sites:
3    - name: http-example-site-1
4      site-parameters:
5        deployment-id: http-example

  • The site name (line 3) can be any human-readable name.

  • The deployment ID (line 5) differentiates between multiple deployments using the same CDS. Specify a string of 8-20 characters, using lowercase letters, digits, and hyphens only.
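If you want to check a candidate deployment ID against these rules before editing the SDF, a quick shell test such as the following works (the ID shown is this tutorial's example value; SIMPL VM performs its own validation later):

echo 'http-example' | grep -Ex '[a-z0-9-]{8,20}' && echo 'valid' || echo 'invalid'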

Networking and DNS

In the networking section, specify the subnet information, including DNS addresses:

 1  networking:
 2    subnets:
 3    - cidr: 172.16.0.0/24
 4      default-gateway: 172.16.0.1
 5      dns-servers:
 6      - 8.8.8.8
 7      - 8.8.4.4
 8      identifier: management
 9      vim-network: management-network
10    - cidr: 173.16.0.0/24
11      default-gateway: 173.16.0.1
12      identifier: core-signaling
13      vim-network: core-signaling-network

  • Specify the subnet in CIDR notation (line 3).

  • If you don’t have any suitable internal DNS servers, use public DNS servers such as those shown here (Google DNS). Specify DNS servers for the management subnet only, and specify them as IPv4 addresses.

  • The identifier field (lines 8 and 12) is an arbitrary identifier for the subnet, used in the VNFC section later in the SDF to refer to this subnet.

  • The vim-network field (lines 9 and 13) must match the name of the network (vSphere) or subnet (OpenStack) as configured on your VNFI.
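Before committing DNS servers to the SDF, you may want to confirm they are reachable from your network. A minimal check, assuming dig (or nslookup) is available on a machine attached to the management network:

dig @8.8.8.8 +short example.com
# or, if dig is not installed:
nslookup example.com 8.8.8.8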

NTP

Specify at least one NTP address:

services:
  ntp-servers:
  - 1.2.3.4
  - 1.2.3.5

You can specify NTP servers as IP addresses, FQDNs, or a mixture of the two. If you don’t have any suitable internal NTP servers, use time.windows.com.
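Optionally, you can confirm that an NTP server responds before adding it to the SDF. One way, assuming the ntpdate utility is installed on a machine with network access to the server:

ntpdate -q time.windows.com    # query only; does not change the local clock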

VNFI connection details

Fill in the VNFI connection details.

For vSphere:

 1  vim-configuration:
 2    vsphere:
 3      connection:
 4        allow-insecure: true
 5        server: 172.1.1.1
 6        username: VSPHERE.LOCAL\vsphere
 7        password-id: password-secret-id
 8      datacenter: Automation
 9      folder: ''
10      reserve-resources: true
11      resource-pool-name: Resources

  • In the username field, specify the username that SIMPL VM should use to log in to the VNFI.

  • Specify the corresponding password as a secret ID (line 7). See the step Write the secrets file below.

  • Specify the datacenter, folder and resource pool name under which the VMs should be created. The folder can be an empty string, which means the VMs are created in the root folder.

For OpenStack:

 1  vim-configuration:
 2    openstack:
 3      availability-zone: nonperf
 4      connection:
 5        auth-url: http://my-openstack-server:5000/v3
 6        keystone-v3:
 7          project-id: 0102030405060708090a0b0c0d0e0f10
 8          user-domain-name: Default
 9        username: openstack-user
10        password-id: password-secret-id

  • In the username field, specify the username that SIMPL VM should use to log in to the VNFI.

  • Specify the corresponding password as a secret ID (line 10). See the step Write the secrets file below.

  • The auth-url is the API endpoint of your OpenStack host. Be sure to include the protocol (http or https) and the API version (only v3 is supported).

  • Specify the project ID and availability zone under which the VMs should be created.
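If you are unsure of the project ID or the availability zone names, you can usually look them up with the OpenStack CLI. This assumes python-openstackclient is installed and your OS_* credential environment variables are set:

openstack project show <project name> -f value -c id
openstack availability zone list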

Remove the MDM certificate secret ID

As this tutorial does not use MDM, remove the mdm-certificate-id field near the top of the SDF:

mdm-certificate-id: my-mdm-certificate

Remove the MDM VNFC

Find the vnfcs section, which contains two VNFCs: MDM and http-example. Remove the MDM entry from that list:

 1   vnfcs:
 2   - cluster-configuration:
 3      count: 3
 4      instances:
 5      - name: example-mdm-1
 6
 7    ...
 8
 9    type: mdm
10    version: 2.31.0
11    vim-configuration:
12      vsphere:
13        deployment-size: medium
14  - cluster-configuration:
15      ...

Do not remove the vnfcs container (line 1), or any part of the http-example VNFC, which starts on line 14.

The http-example VNFC

Fill out the http-example VNFC with three instances (VMs). Set the count field to 3, and choose three hostnames, for example http-example-1 through http-example-3.

For each instance, supply an ssh section with an SSH authorized key and a private-key-id:

1  - name: http-example-1
2    ssh:
3      authorized-keys:
4      - ssh-rsa AAAA...
5      private-key-id: simpl-private-key-secret-id

  • For the SSH authorized key (line 4), paste the contents of your public key file (vm-key.pub).

  • The private-key-id (line 5) is a secret ID for an SSH private key that SIMPL VM itself uses to log into the VM to perform validation tests. Specify the value simpl-private-key-secret-id.

In the networks subsection of the VNFC, specify IP addresses for each VM on both the management and signaling subnets:

 1  networks:
 2  - ip-addresses:
 3      ip:
 4      - 172.16.0.10
 5      - 172.16.0.11
 6      - 172.16.0.12
 7    name: Management
 8    subnet: management
 9    traffic-types:
10    - management
11  - ip-addresses:
12      ip:
13      - 173.16.0.10
14      - 173.16.0.11
15      - 173.16.0.12
16    name: Core Signaling
17    subnet: core-signaling
18    traffic-types:
19    - http

  • Within each ip list (lines 4-6 and 13-15), the order of the addresses mirrors the order of the hostnames specified earlier.

  • The network name can be any human-readable name.

  • The subnet field (lines 8 and 17) needs to match the identifier field of the appropriate subnet in the subnets section.

Overall, the VNFC should look like this:

 1  - cluster-configuration:
 2      count: 3
 3      instances:
 4      - name: http-example-1
 5        ssh:
 6          authorized-keys:
 7          - ssh-rsa <public key>
 8          private-key-id: simpl-private-key-secret-id
 9        vnfci-vim-options:
10          datastore: data:storage1
11          host: esxi.hostname
12          resource-pool-name: Resources
13      - name: http-example-2
14        ssh:
15          authorized-keys:
16          - ssh-rsa <public key>
17          private-key-id: simpl-private-key-secret-id
18        vnfci-vim-options:
19          datastore: data:storage1
20          host: esxi.hostname
21          resource-pool-name: Resources
22      - name: http-example-3
23        ssh:
24          authorized-keys:
25          - ssh-rsa <public key>
26          private-key-id: simpl-private-key-secret-id
27        vnfci-vim-options:
28          datastore: data:storage1
29          host: esxi.hostname
30          resource-pool-name: Resources
31    name: custom
32    networks:
33    - ip-addresses:
34        ip:
35        - <management IP of VM 1>
36        - <management IP of VM 2>
37        - <management IP of VM 3>
38      name: Management
39      subnet: management
40      traffic-types:
41      - management
42    - ip-addresses:
43        ip:
44        - <signaling IP of VM 1>
45        - <signaling IP of VM 2>
46        - <signaling IP of VM 3>
47      name: Core Signaling
48      subnet: core-signaling
49      traffic-types:
50      - http
51    product-options:
52      custom:
53        cds-addresses:
54        - <CDS address>
55        primary-user-password-id: my-password-secret-id
56        secrets-private-key-id: my-secrets-private-key-secret-id
57    type: http-example
58    version: '0.1.0'
59    vim-configuration:
60      vsphere:
61        deployment-size: medium

  • The vnfci-vim-options section in each instance (lines 9 to 12 for the first instance) is only required for vSphere. It allows you to spread the VMs across multiple hosts or datastores to improve fault tolerance. For this tutorial, specify the same values that you specified in the vim-configuration section.

  • If your CDS has a separate address on the signaling subnet, specify that address (only) under cds-addresses (line 54). Otherwise use the CDS management address.

For more information about writing an SDF, refer to Writing an SDF in the Custom VM Install Guide and Creating and editing an SDF in the SIMPL VM documentation.

Write the secrets file

As a security measure, secrets (sensitive values such as passwords or private keys) in the SDF are stored in a secure database known as Quicksilver Secrets Gateway (QSG).

Each secret has a "secret ID", which is a string consisting of lowercase letters, digits, and hyphens. In the SDF, instead of a plaintext password, for example, you put the secret ID for that password, and SIMPL VM retrieves the corresponding secret from QSG. By convention, secret IDs end with -id or -secret-id to make it clear that they are secret IDs and not plaintext values.

Create a YAML file named secrets.yaml in your development environment, with the following content:

- secret-id: password-secret-id
  type: freeform
  value: <password for accessing the VNFI>
- secret-id: my-password-secret-id
  type: freeform
  value: <password for accessing the VM>
- secret-id: my-secrets-private-key-secret-id
  type: freeform
  value: <secrets private key>

This specifies the three secrets required, namely:

  • the password for accessing the VNFI (the secret ID in the example SDF is password-secret-id)

  • the primary user password (my-password-secret-id)

  • the secrets private key (my-secrets-private-key-secret-id).

Note The primary user password can be different from the password you specified in node-parameters.yaml.

Do not add an entry for the simpl-private-key-secret-id secret; the instructions below will have SIMPL VM create a key for you.

For more information on the format of the secrets.yaml file, refer to Adding secrets to the secret store in the SIMPL VM documentation.

Write configuration files

Besides the SDF, four configuration files are required, as follows.

The custom-vmpool-config.yaml file

This file provides more information about the deployment. Create this file with the following content, replacing the values in angle brackets with appropriate values that match the SDF:

deployment-config:custom-virtual-machine-pool:
  cassandra-contact-points:
  - management.ipv4: <CDS address>
    signaling.ipv4: <CDS address>
  deployment-id: <deployment ID>
  image-name: http-example
  site-id: DC1
  virtual-machines:
  - vm-id: <hostname-1>
    rhino-node-id: 101
  - vm-id: <hostname-2>
    rhino-node-id: 102
  - vm-id: <hostname-3>
    rhino-node-id: 103
  rhino-auth:
  - username: <username>
    password: <password>

If your CDS has a separate address on the signaling subnet, specify it in the signaling.ipv4 field. Otherwise, re-use the management address.

The sas-config.yaml file

This file controls Metaswitch Service Assurance Server (SAS) tracing. Create this file with the following content, which disables all SAS tracing.

deployment-config:sas:
  enabled: false

The snmp-config.yaml file

This file controls SNMP notifications. Create this file with the following content, which disables all SNMP.

deployment-config:snmp:
  v1-enabled: false
  v2c-enabled: false
  v3-enabled: false

The custom-config-data.yaml file

This file contains data for the custom application. It must contain the following dictionary:

deployment-config:custom-data:
  custom-config:

where the custom-config element can be of any format that the custom application requires.

For this tutorial, the before-slee-start hook script expects custom-config to be a simple dictionary of key-value pairs, with values for the fields listen-port and secure-listen-port, which control the ports on which Rhino will listen for incoming HTTP and HTTPS requests respectively. Use the following content:

deployment-config:custom-data:
  custom-config:
    listen-port: 8000
    secure-listen-port: 8002

For more information about the options available in these configuration files, refer to Configuration YANG schema and Example configuration YAML files in the Custom VM Install Guide.
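Optionally, before transferring the files, you can check that each one is well-formed YAML. This only validates syntax, not the schema; the sketch below assumes python3 with the PyYAML module is available in your development environment:

for f in sdf-rvt.yaml secrets.yaml custom-vmpool-config.yaml sas-config.yaml \
         snmp-config.yaml custom-config-data.yaml; do
  python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print(sys.argv[1], "OK")' "$f"
done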

Transfer files to SIMPL VM

On the SIMPL VM, create the /home/admin/yamls directory.

From your development environment, use scp to upload all the configuration files, the SDF, and the Rhino license to this directory (for example, scp *.yaml rhino.license admin@<SIMPL VM IP address>:/home/admin/yamls).
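For example (run the first command on the SIMPL VM, and the second from the directory on your development environment that contains the files):

# on the SIMPL VM
mkdir -p /home/admin/yamls

# from your development environment
scp *.yaml rhino.license admin@<SIMPL VM IP address>:/home/admin/yamls/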

Ensure you upload all of the following:

  • custom-config-data.yaml

  • custom-vmpool-config.yaml

  • rhino.license

  • sas-config.yaml

  • sdf-rvt.yaml

  • secrets.yaml

  • snmp-config.yaml

Add secrets to QSG

On the SIMPL VM, run csar secrets add secrets.yaml from the ~/yamls directory. If there are any errors, fix them in your development environment copy and then re-upload the file before trying again.

Once the secrets have been added to QSG, SIMPL VM deletes the secrets.yaml file as a security measure. If at any point you want to change the secret values, upload the secrets.yaml file and run the same command again.

Run the command csar secrets auto-create-keys --sdf sdf-rvt.yaml to generate an SSH key for the simpl-private-key-secret-id secret. If there are errors in the SDF, fix them in the copy of sdf-rvt.yaml in your development environment, re-upload the file, and try again.

Upload configuration to CDS

On the SIMPL VM, run the following commands:

  • cdcsars

  • http-example/0.1.0/resources/rvtconfig upload-config -c <CDS address> -i /home/admin/yamls -t custom --vm-version-source this-rvtconfig

This will upload configuration to CDS.

Upon successful completion, rvtconfig will print out a list of configuration present in CDS. Check there is configuration present for http-example version 0.1.0. You can also verify the configuration is present using http-example/0.1.0/resources/rvtconfig list-config -c <CDS address> -d <deployment ID>, where the deployment ID is as you specified in the SDF.

Wait for about one minute for the configuration changes to take effect.

Create VMs

Return to the configuration files directory with cd ~/yamls and instruct SIMPL VM to deploy the VMs using the command csar deploy --sdf sdf-rvt.yaml. If there are errors in the SDF, fix them in the copy of sdf-rvt.yaml in your development environment, re-upload the file, and try again.

Deploying the VMs will take about 5 minutes.

Verify VMs

SIMPL VM validation

SIMPL VM can run some basic validation checks. On the SIMPL VM, run the command csar validate --sdf sdf-rvt.yaml to validate all three VMs.

Showing a summary of VM status

On the SIMPL VM, run the following commands:

  • cdcsars

  • http-example/0.1.0/resources/rvtconfig report-group-status --ssh-key-secret-id simpl-private-key-secret-id -c <CDS address> -d <deployment ID> -g RVT-http-example.DC1

The summary should look like this, with all VMs showing an OK state (the uptimes and which VM is marked as leader may vary):

Gathering status information

*** Instance state for each VM per version
Version 0.1.0:
    [ OK ] Config found
    http-example-1: commissioned, up 15 minutes (leader)
    http-example-2: commissioned, up 15 minutes
    http-example-3: commissioned, up 15 minutes

*** Detailed instance state for each running VM
Version 0.1.0:
    http-example-1:
        [ OK ] Initconf is active (running) and converged
        [ OK ] CDS connection successful
        [ OK ] MDM connection successful
        [ OK ] Docker service is active (running), 14 minutes, 59 seconds
        [ OK ] Rhino
            Output of command '/home/viewer/rhino/client/bin/rhino-console -r 5 state':
                Node 101 is Running

        [ OK ] Scheduled Rhino restart is inactive (dead)
        [ OK ] PostgreSQL service is active (running), active for 14 minutes, 3 seconds
        [ OK ] Linkerd service is active (running), active for 7 minutes, 34 seconds
    (Similar output for VMs 2 and 3 omitted.)

*** Summary
    [ OK ] http-example-1
    [ OK ] http-example-2
    [ OK ] http-example-3

Logging into the VMs

From your development environment, run the command ssh -i vm-key rhino@<VM management IP address> to log into one of the VMs using the SSH key.

Checking the hook script logs

Once logged into the VM, examine the file /var/log/tas/custom-config-output/before-slee-start.log.
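For example, to page through the log and then check specifically for error lines (the grep is purely a convenience; any errors should also be obvious when reading the file):

less /var/log/tas/custom-config-output/before-slee-start.log
grep -i error /var/log/tas/custom-config-output/before-slee-start.log || echo 'no errors found'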

If you see no errors during any of these steps, your VMs have been booted and configured correctly.

Test the Rhino application

With the VMs deployed and configured, you can now run a functional test of the HTTP example application. From your development environment, run the following curl command:

curl --header "test-header: hello-world" http://<VM signaling IP address>:8000/

You should see an HTML response from Rhino, which includes the headers of the original request. Amongst them should be the test-header that you specified in the curl command.
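To exercise all three VMs in one go, you can loop over their signaling addresses; the IPs below are this tutorial's example values, so substitute your own:

for ip in 173.16.0.10 173.16.0.11 173.16.0.12; do
  curl -s -o /dev/null -w "HTTP %{http_code} from ${ip}\n" \
       --header "test-header: hello-world" "http://${ip}:8000/"
done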

Repeat the test for HTTPS, using the -k option because the Rhino application is using a self-signed HTTPS certificate:

curl -k --header "test-header: hello-world" https://<VM signaling IP address>:8002/

Finally, check that Rhino is only listening on the signaling IP address. Using the management address should fail with a Connection refused error:

$ curl http://<VM management IP address>:8000/
curl: (7) Failed to connect to <management IP address> port 8000: Connection refused

Result

You created and configured three VMs, and verified that they work correctly.

This concludes the tutorial.

Next step

As optional extra steps, you can learn about:

If you want to redeploy your VMs

If you want to try the tutorial again, for example with changes to the example application or configuration, be sure to first clean up the old VMs and CDS data by running the following commands:

  • Destroy the VMs with cd ~/yamls; csar delete --sdf sdf-rvt.yaml

  • Delete state from CDS with cdcsars; http-example/0.1.0/resources/rvtconfig delete-deployment -c <CDS address> -d <deployment ID>

  • Delete the CSAR with csar remove http-example/0.1.0 and rm /home/admin/<csar>.zip
