This section describes details of components and services running on the SMO nodes.
Systemd Services
Note: Sentinel IP-SM-GW can be disabled in smo-vmpool-config.yaml. If Sentinel IP-SM-GW has been disabled, Rhino will not be running.
Rhino Process
The Rhino process is managed via the rhino.service systemd service. To start Rhino, run sudo systemctl start rhino.service. To stop it, run sudo systemctl stop rhino.service.
To check the status, run sudo systemctl status rhino.service. This is an example of a healthy status:
[sentinel@vm-1 ~]$ sudo systemctl status rhino.service
● rhino.service - Rhino Telecom Application Server
Loaded: loaded (/etc/systemd/system/rhino.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/rhino.service.d
└─50-ulimit-nofile.conf
Active: active (running) since Mon 2021-02-15 01:20:58 UTC; 9min ago
Docs: https://docs.rhino.metaswitch.com/ocdoc/go/product/rhino-documentation
Main PID: 25802 (bash)
Tasks: 134
Memory: 938.6M
CGroup: /system.slice/rhino.service
├─25802 /usr/bin/bash -c /home/sentinel/rhino/node-101/start-rhino.sh -l 2>&1 | /home/sentinel/rhino/node-101/consolelog.sh
├─25803 /bin/sh /home/sentinel/rhino/node-101/start-rhino.sh -l
├─25804 /home/sentinel/java/current/bin/java -classpath /home/sentinel/rhino/lib/log4j-api.jar:/home/sentinel/rhino/lib/log4j-core.jar:/home/sentinel/rhino/lib/rhino-logging.jar -Xmx64m -Xms64m c...
└─26114 /home/sentinel/java/current/bin/java -server -Xbootclasspath/a:/home/sentinel/rhino/lib/RhinoSecurity.jar -classpath /home/sentinel/rhino/lib/RhinoBoot.jar -Drhino.ah.gclog=True -Drhino.a...
Feb 15 01:20:58 vm-1 systemd[1]: Started Rhino Telecom Application Server.
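Messages logged by the rhino.service unit can also be inspected with journalctl. This is generic systemd usage rather than a Rhino-specific command, and the amount of detail available in the journal may vary:
[sentinel@vm-1 ~]$ sudo journalctl -u rhino.service --since "1 hour ago"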
OCSS7 Process
The OCSS7 process is managed via the ocss7.service systemd service. To start OCSS7, run sudo systemctl start ocss7.service. To stop it, run sudo systemctl stop ocss7.service.
To check the status, run sudo systemctl status ocss7.service. This is an example of a healthy status:
[sentinel@smo-1 ~]$ sudo systemctl status ocss7.service
● ocss7.service - Start the OCSS7 SGC
Loaded: loaded (/etc/systemd/system/ocss7.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2021-01-11 06:29:34 NZDT; 6min ago
CGroup: /system.slice/ocss7.service
├─1215 /bin/bash /home/sentinel/ocss7/DC1/smo1.sentinel-oc.internal/current/bin/sgc daemon --jmxhost 172.31.110.129 --jmxport 55555 --seed /dev/./urandom
├─1216 java com.cts.utils.LogRollover /home/sentinel/ocss7/DC1/smo1.sentinel-oc.internal/current/logs/startup.20210111062915
└─1225 java -DMODULE=SGC -server -ea -XX:MaxNewSize=128m -XX:NewSize=128m -Xms5120m -Xmx5120m -XX:SurvivorRatio=128 -XX:MaxTenuringThreshold=0 -Dsun.rmi.dgc.server.gcInterval=0x7FFFFFFFFFFFFFFE -Dsun.rmi.dgc.client.gcInterv...
Jan 11 06:29:15 smo-1 systemd[1]: Starting Start the OCSS7 SGC...
Jan 11 06:29:15 smo-1 ocss7[1201]: SGC starting - daemonizing ...
Jan 11 06:29:34 smo-1 systemd[1]: Started Start the OCSS7 SGC.
Linkerd
Linkerd is a transparent proxy that is used for outbound communication. The proxy runs inside a Docker container. To check whether the process is running, run docker ps --filter name=linkerd.
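The expected result is a single running container whose name matches linkerd. For example (the container ID, image, command, and timings below are purely illustrative; actual values will differ on your deployment):
[sentinel@smo-1 ~]$ docker ps --filter name=linkerd
CONTAINER ID   IMAGE     COMMAND          CREATED      STATUS      PORTS     NAMES
0123456789ab   linkerd   "entrypoint.sh"  2 days ago   Up 2 days             linkerd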
SNMP service monitor
The SNMP service monitor process is responsible for raising SNMP alarms when a disk partition gets too full.
The SNMP service monitor alarms are compatible with Rhino alarms and can be accessed in the same way. Refer to Accessing SNMP Statistics and Notifications for more information about this.
Alarms are sent to SNMP targets as configured through the configuration YAML files.
The following partitions are monitored:
- the root partition (/)
- the log partition (/var/log)
- the cdr partition (/mnt/cdr), if CDRs are enabled
There are two thresholds for disk monitoring, expressed as a percentage of the total partition size. When disk usage exceeds:
- the lower threshold, a warning (MINOR severity) alarm will be raised.
- the upper threshold, a MAJOR severity alarm will be raised, and (except for the root partition) files will be automatically cleaned up where possible.
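To check the current usage of the monitored partitions by hand, the standard df utility can be used. The device names and figures below are illustrative only; omit /mnt/cdr if CDRs are not enabled:
[sentinel@smo-1 ~]$ df -h / /var/log /mnt/cdr
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda3        20G  5.2G   15G  26% /
/dev/vda2       6.9G  1.1G  5.8G  16% /var/log
/dev/vdb1        50G   12G   38G  24% /mnt/cdr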
Once disk space has returned to a non-alarmable level, the SNMP service monitor will clear the associated alarm on the next check. By default, it checks disk usage once per day. Running the command sudo systemctl reload disk-monitor will force an immediate check of the disk space; this is useful if, for example, an alarm was raised and you have since cleaned up the appropriate partition and want to clear the alarm.
Configuring the SNMP service monitor
The default monitoring settings should be appropriate for the vast majority of deployments.
Should your Metaswitch Customer Care Representative advise you to reconfigure the disk monitor, you can do so by editing the file /etc/disk_monitor.yaml (you will need to use sudo when editing this file due to its permissions):
global:
  check_interval_seconds: 86400
log:
  lower_threshold: 80
  max_files_to_delete: 10
  upper_threshold: 90
root:
  lower_threshold: 90
  upper_threshold: 95
snmp:
  enabled: true
  notification_type: trap
  targets:
  - address: 192.168.50.50
    port: 162
    version: 2c
The file is in YAML format, and specifies the alarm thresholds for each disk partition (as a percentage), the interval between checks in seconds, and the SNMP targets.
- Supported SNMP versions are 2c and 3.
- Supported notification types are trap and notify.
- Supported values for the upper and lower thresholds are:

| Partition | Lower threshold range | Upper threshold range | Minimum difference between thresholds |
| log and cdr | 50% to 80% | 60% to 90% | 10% |
| root | 50% to 90% | 60% to 99% | 5% |

- check_interval_seconds must be in the range 60 to 86400 seconds inclusive. It is recommended to keep the interval as long as possible to minimise performance impact.
After editing the file, you can apply the configuration by running sudo systemctl reload disk-monitor.
Verify that the service has accepted the configuration by running sudo systemctl status disk-monitor. If it shows an error, run journalctl -u disk-monitor for more detailed information. Correct the errors in the configuration and apply it again.
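A typical edit-and-verify sequence therefore looks like the following (output omitted; the final command is only needed if the status shows an error):
[sentinel@smo-1 ~]$ sudo vi /etc/disk_monitor.yaml
[sentinel@smo-1 ~]$ sudo systemctl reload disk-monitor
[sentinel@smo-1 ~]$ sudo systemctl status disk-monitor
[sentinel@smo-1 ~]$ journalctl -u disk-monitor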
Systemd Timers
Cleanup Timer
The node contains a daily timer that cleans up stale Rhino SLEE activities and SBB instances which are created as part of transactions. This timer runs every night at 02:00 (in the system’s timezone), with a random delay of 15 minutes so that not all nodes run the cleanup at the same time; this acts as a safeguard to minimize the chance of service impact.
This timer consists of two systemd units: cleanup-sbbs-activities.timer, which is the actual timer, and cleanup-sbbs-activities.service, which is the service that the timer activates. The service in turn calls the manage-sbbs-activities tool.
This tool can also be run manually to investigate whether there are any stale activities or SBB instances. Run it with the -h option to get help about its command-line options.
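To see when the timer last ran and when it will next fire, standard systemd tooling can be used (the timestamps shown here are illustrative):
[sentinel@smo-1 ~]$ systemctl list-timers cleanup-sbbs-activities.timer
NEXT                         LEFT      LAST                         PASSED   UNIT                           ACTIVATES
Tue 2021-02-16 02:07:13 UTC  10h left  Mon 2021-02-15 02:11:02 UTC  13h ago  cleanup-sbbs-activities.timer  cleanup-sbbs-activities.service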
Partitions
The nodes contain three partitions:
- /boot, with a size of 100MB. This contains the kernel and bootloader.
- /var/log, with a size of 7000MB. This is where the OS and Rhino store their logfiles. The Rhino logs are within the tas subdirectory, and within that each cluster has its own directory.
- /, which uses up the rest of the disk. This is the root filesystem.
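The layout can be confirmed with the standard lsblk utility. The device names and sizes below are illustrative only:
[sentinel@smo-1 ~]$ lsblk -o NAME,SIZE,MOUNTPOINT
NAME     SIZE MOUNTPOINT
vda       30G
├─vda1   100M /boot
├─vda2     7G /var/log
└─vda3  22.9G /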
PostgreSQL Configuration
On the node, there are default restrictions on who may access the PostgreSQL instance. These are defined in the root-restricted file /var/lib/pgsql/9.6/data/pg_hba.conf.
The default trusted authenticators are as follows:
| Type of authenticator | Database | User | Address | Authentication method |
| Local | All | All | | Trust unconditionally |
| Host | All | All | 127.0.0.1/32 | MD5 encrypted password |
| Host | All | All | ::1/128 | MD5 encrypted password |
| Host | All | sentinel | 127.0.0.1/32 | Unencrypted password |
In addition, the instance will listen on the localhost interface only. This is recorded in /var/lib/pgsql/9.6/data/postgresql.conf in the listen_addresses field.
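Because local (Unix-socket) connections are trusted unconditionally, a quick connectivity check can be performed without a password. This is generic PostgreSQL usage, shown only as an illustration and assuming the standard postgres OS user exists on the node:
[sentinel@smo-1 ~]$ sudo -u postgres psql -c 'SELECT version();'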
Monitoring
Each VM contains a Prometheus exporter, which monitors statistics about the VM’s health (such as CPU and RAM usage). These statistics can be retrieved using SIMon by connecting it to port 9100 on the VM’s management interface.
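Prometheus exporters conventionally serve their statistics over HTTP at the /metrics path, so a quick manual check is possible with curl (the path is the usual convention and is assumed here; replace <management IP> with the VM’s management address):
[sentinel@smo-1 ~]$ curl http://<management IP>:9100/metrics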
System health statistics can be retrieved using SNMP walking. They are available via the standard UCD-SNMP-MIB OIDs with the prefix 1.3.6.1.4.1.2021.
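For example, these statistics can be walked with the standard snmpwalk tool (the community string and address are placeholders; use the values configured for your deployment):
[sentinel@smo-1 ~]$ snmpwalk -v2c -c <community> <management IP> 1.3.6.1.4.1.2021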