This document details basic procedures for system administrators managing, maintaining, configuring and deploying Rhino 3.1 using the command-line console. To manage using a web interface, see the Rhino Element Manager documentation.

Topics

SLEE Management

Administrative tasks for day-to-day management of the Rhino SLEE, its components and entities deployed in it, including: operational state, deployable units, services, resource adaptor entities, profile tables and profiles, alarms, usage, user transactions, and component activation priorities.

Rhino Configuration

Procedures for configuring Rhino upon installation, and as needed (for example to tune performance), including: logging, staging, object pools, licenses, rate limiting, security and external databases.

Application-State Maintenance

Finding Housekeeping MBeans, and finding, inspecting and removing one or all activities or SBB entities.

Backup and Restore

Backing up and restoring the database, and exporting and importing SLEE deployment state.

SNMP Monitoring

Managing the SNMP subsystem in Rhino, including: configuring the agent, managing MIB files, viewing OID mappings.

Replication Support Services

Configuring supplementary replication support services such as the session ownership store.

Management Tools

Tools included with Rhino for system administrators to manage Rhino.

Other documentation for the Rhino TAS can be found on the Rhino TAS product page.

Notices

Copyright © 2024 Microsoft. All rights reserved

This manual is issued on a controlled basis to a specific person on the understanding that no part of the Metaswitch Networks product code or documentation (including this manual) will be copied or distributed without prior agreement in writing from Metaswitch Networks.

Metaswitch Networks reserves the right to, without notice, modify or revise all or part of this document and/or change product features or specifications and shall not be responsible for any loss, cost, or damage, including consequential damage, caused by reliance on these materials.

Metaswitch and the Metaswitch logo are trademarks of Metaswitch Networks. Other brands and products referenced herein are the trademarks or registered trademarks of their respective holders.

SLEE Management

This section covers general administrative tasks for day-to-day management of the Rhino SLEE, its components and entities deployed in it.

JMX MBeans

Rhino SLEE uses Java Management Extensions (JMX) MBeans for management functionality. This includes both functions defined in the JAIN SLEE 1.1 specification and Rhino extensions that provide additional functionality beyond what’s in the specification.

Rhino’s command-line console is a front end for these MBeans, providing access to functions for managing operational state, deployable units, services, resource adaptor entities, profile tables and profiles, alarms, usage, user transactions, and component activation priorities.

Management may also be performed via the Rhino Element Manager web interface.
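For programmatic management, any standard JMX client can connect to Rhino’s remote MBean server and invoke these operations directly. The following is a minimal sketch, assuming a hypothetical JMX service URL and MBean object name; consult the Rhino management javadoc for the values your installation actually registers.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RhinoJmxSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details -- substitute the host, port, and
        // credentials configured for your Rhino installation
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1199/rhino");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Hypothetical object name for the Namespace Management MBean
            ObjectName namespaceMgmt =
                    new ObjectName("com.opencloud.rhino:type=NamespaceManagement");
            // Invoke the no-argument getNamespaces operation generically
            String[] namespaces =
                    (String[]) mbs.invoke(namespaceMgmt, "getNamespaces", null, null);
            for (String ns : namespaces) {
                System.out.println(ns);
            }
        }
    }
}

Later examples in this section use the helper methods on the RhinoManagement class from the Rhino client library (as shown in the getState note under Operational State) rather than raw JMX invocations.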

Note State-management commands and JMX methods for setting SLEE, resource adaptor entity, or service state that do not take node arguments now accept a state change if at least one node in the cluster will change state. These commands delete any per-node desired state and set the default desired state to the target state. This behaviour is similar to enabling symmetric activation state mode for the component being updated in versions of Rhino prior to 3.0.0. Cluster nodes that are already in the required desired state are ignored; those that need to change transition to the target state. This behaviour is like the "-ifneeded" flag, except that the operation fails if no nodes are in the prerequisite state to transition to the target state.
The with-node-arg variants create or update (as necessary) per-node state for the requested nodes. All specified nodes must be in the required prerequisite state to transition to the target state.
This behaviour is new in Rhino 3.0.0. It affects the start/stop/activate/deactivate/wait-til rhino-console commands and Ant tasks.

Namespaces

As well as an overview of Rhino namespaces, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples, and links to related javadocs:

Procedure rhino-console command MBean → Operation

Creating a namespace

 createnamespace

Namespace Management → createNamespace

Removing a namespace

 removenamespace

Namespace Management → removeNamespace

Listing namespaces

 listnamespaces

Namespace Management → getNamespaces

Setting the active namespace
for a client connection

 -n <namespace> (command-line option)
setactivenamespace (interactive command)

Namespace Management → setActiveNamespace

Getting the active namespace
for a client connection

 (reported in the interactive command prompt)

Namespace Management → getActiveNamespace

About Namespaces

A namespace is an independent deployment environment that is isolated from other namespaces.

A namespace has:

  • its own SLEE operational state

  • its own set of deployable units

  • its own set of instantiated profile tables, profiles, and resource adaptor entities

  • its own set of component configuration state

  • its own set of desired and actual states for services and resource adaptor entities.

All of these things can be managed within an individual namespace without affecting the state of any other namespace.

Tip A namespace can be likened to a SLEE in itself, where Rhino with multiple namespaces is a container of SLEEs.

A Rhino cluster always has a default namespace that cannot be deleted. Any number of user-defined namespaces may also be created, managed, and subsequently deleted when no longer needed.

Management clients by default interact with the default namespace unless they explicitly request to interact with a different namespace.

Creating a Namespace

To create a new namespace, use the following rhino-console command or related MBean operation.

Console command: createnamespace

Command

createnamespace <name> [-replication-resource <resource-name>]
[-with-session-ownership-facility]
  Description
    Create a new deployment namespace. If the optional replication resource is not
    specified then the resource used for this namespace is the same as that used in
    the default namespace.

Example

$ ./rhino-console createnamespace testnamespace
Namespace testnamespace created

MBean operation: createNamespace

MBean

Rhino extension

public void createNamespace(String name, NamespaceOptions options)
  throws NullPointerException, InvalidArgumentException,
    NamespaceAlreadyExistsException, ManagementException;
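As a sketch of invoking this operation from Java, assuming the management connection described under JMX MBeans and a hypothetical RhinoManagement accessor for the Namespace Management MBean proxy (the real accessor name may differ):

// Hypothetical proxy accessor; check the Rhino client library javadoc
NamespaceManagementMBean nsMgmt = RhinoManagement.getNamespaceManagementMBean(client);
// Passing null options is assumed here to select the same defaults the
// console command uses; construct a NamespaceOptions explicitly if required
nsMgmt.createNamespace("testnamespace", null);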

Removing a Namespace

To remove an existing user-defined namespace, use the following rhino-console command or related MBean operation.

Tip The default namespace cannot be removed.
Note All deployable units (other than the deployable unit containing the standard JAIN SLEE-defined types) must be uninstalled and all profile tables removed from a namespace before that namespace can be removed.

Console command: removenamespace

Command

removenamespace <name>
  Description
    Remove an existing deployment namespace

Example

$ ./rhino-console removenamespace testnamespace
Namespace testnamespace removed

MBean operation: removeNamespace

MBean

Rhino extension

public void removeNamespace(String name)
  throws NullPointerException, UnrecognizedNamespaceException,
    InvalidStateException, ManagementException;

Listing Namespaces

To list all user-defined namespaces in a SLEE, use the following rhino-console command or related MBean operation.

Console command: listnamespaces

Command

listnamespaces [-v]
  Description
    List all deployment namespaces.  If the -v (verbose) option is given then the
    options that each namespace was created with is also given

Example

$ ./rhino-console listnamespaces
testnamespace

MBean operation: getNamespaces

MBean

Rhino extension

public String[] getNamespaces()
  throws ManagementException;

This operation returns the names of the user-defined namespaces that have been created.

Setting the Active Namespace

Each individual authenticated client connection to Rhino is associated with a namespace.

This setting, known as the active namespace, controls which namespace is affected by management commands such as those that install deployable units or change operational states.

To change the active namespace for a client connection, use the following rhino-console command or related MBean operation.

Console command: setactivenamespace

Command and command-line option

Interactive mode

In interactive mode, the setActiveNamespace command can be used to set the active namespace for future management operations.

setactivenamespace <-default|name>
  Description
    Set the active namespace
Non-interactive mode

In non-interactive mode, the -n command-line option can be used to select the namespace that the executed command is processed against.

Example

Interactive mode
$ ./rhino-console
Interactive Rhino Management Shell
Rhino management console, enter 'help' for a list of commands
[Rhino@localhost (#0)] setactivenamespace testnamespace
The active namespace is now testnamespace
[Rhino@localhost [testnamespace] (#1)] setactivenamespace -default
The active namespace is now the default namespace
[Rhino@localhost (#2)]

Non-interactive mode
$ ./rhino-console -n testnamespace start
The active namespace is now testnamespace
Starting SLEE on node(s) [101]
SLEE transitioned to the Starting state on node 101

MBean operation: setActiveNamespace

MBean

Rhino extension

public void setActiveNamespace(String name)
  throws NoAuthenticatedSubjectException, UnrecognizedNamespaceException,
    ManagementException;

This operation sets the active namespace for the client connection. A null parameter value can be used to specify that the default namespace should be made active.

Getting the Active Namespace

To get the active namespace for a client connection, use the following rhino-console information and related MBean operation.

Console:

Command prompt information

The currently active namespace is reported in the command prompt within square brackets.

If no namespace is reported, then the default namespace is active.

Example

$ ./rhino-console
Interactive Rhino Management Shell
Rhino management console, enter 'help' for a list of commands
[Rhino@localhost (#0)] setactivenamespace testnamespace
The active namespace is now testnamespace
[Rhino@localhost [testnamespace] (#1)] setactivenamespace -default
The active namespace is now the default namespace
[Rhino@localhost (#2)]

MBean operation: getActiveNamespace

MBean

Rhino extension

public String getActiveNamespace()
  throws NoAuthenticatedSubjectException, ManagementException;

This operation returns the name of the namespace currently active for the client connection.
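A short sketch combining the two namespace-selection operations, using the same hypothetical proxy as the earlier namespace example:

nsMgmt.setActiveNamespace("testnamespace");
System.out.println(nsMgmt.getActiveNamespace()); // prints "testnamespace"
// A null argument selects the default namespace, per the description above
nsMgmt.setActiveNamespace(null);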

Operational State

As well as an overview of SLEE operational states, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples and links to related javadocs:

Procedure rhino-console command MBean → Operation

Starting and Stopping the SLEE

 setsleedesiredstate

SLEE Management → setPerNodeDesiredState, setDefaultDesiredState

Retrieving the basic operational state of nodes

 getsleeactualstate, getsleedesiredstate

SLEE Management → getActualState, getPerNodeDesiredState, getDefaultDesiredState

Retrieving detailed information for every node in the cluster

 getClusterState

Rhino Housekeeping → getClusterState

Gracefully shutting nodes down and, optionally, rebooting them

 shutdown

SLEE Management → shutdown

Forcefully terminating nodes

 kill

SLEE Management → kill

Listing nodes with per-node desired state

 getnodeswithpernodedesiredstate

Node Housekeeping → getNodesWithPerNodeActivationState

Copying per-node desired state to another node

 copypernodedesiredstate

Node Housekeeping → copyPerNodeActivationState

Removing per-node desired state

 removepernodedesiredstate

Node Housekeeping → removePerNodeActivationState

About SLEE Operational States

The SLEE specification defines the operational lifecycle of a SLEE — illustrated, defined, and summarised below.

SLEE lifecycle
Figure 1. The SLEE lifecycle

SLEE lifecycle states

The SLEE lifecycle states are:

State Definition
 STOPPED

The SLEE has been configured and initialised, and is ready to be started. Active resource adaptor entities have been loaded and initialised, and SBBs corresponding to active services have been loaded and are ready to be instantiated. The entire event-driven subsystem, however, is idle: resource adaptor entities and the SLEE are not actively producing events, the event router is not processing work, and the SLEE is not creating SBB entities.

 STARTING

Resource adaptor entities in the SLEE that have been recorded in the management database as being in the ACTIVE state are started. The SLEE still does not create SBB entities.

The node automatically transitions from this state to the RUNNING state when all startup tasks are complete, or to the STOPPING state if a startup task fails.

 RUNNING

Activated resource adaptor entities in the SLEE can fire events, and the SLEE creates SBB entities and delivers events to them as needed.

 STOPPING

Identical to the RUNNING state, except resource adaptor entities do not create (and the SLEE does not accept) new activity objects. Existing activity objects can end (according to the resource adaptor specification).

The node automatically transitions out of this state, returning to the STOPPED state, when all SLEE activities have ended. The node can transition to this state directly from the STARTING state, effective immediately, if it has no activity objects.

Independent SLEE states

Each namespace in each event-router node in a Rhino cluster maintains its own SLEE-lifecycle state machine, independent from other namespaces on the same or other nodes in the cluster. For example:

  • the default namespace on one node in a cluster might be in the RUNNING state

  • while a user-defined namespace on the same node is in the STOPPED state

  • while the default namespace on another node is in the STOPPING state

  • and the user-defined namespace on that node is in the RUNNING state.

The operational state of each namespace on each cluster node persists to the disk-based database.

Bootup SLEE state

After completing bootup and initialisation, a namespace on a node will enter the STOPPED state if:

  • the database has no persistent operational state information for that namespace on that node;

  • the namespace’s persistent operational state is STOPPED on that node; or

  • the node was started with the -x option (see Start Rhino in the Rhino Getting Started Guide).

Otherwise, the namespace will return to the same operational state that it was last in, as recorded in persistent storage.

Changing a namespace’s operational state

You can change the operational state of any namespace on any node at any time, as long as at least one node in the cluster is available to perform the management operation (regardless of whether or not the node whose operational state is being changed is a current cluster member). For example, you might set the operational state of the default namespace on node 103 to RUNNING before node 103 is started; then, when node 103 starts and completes initialising, the default namespace will enter the RUNNING state.

Warning
Changing a quorum node’s operational state

You can also change the operational state of a node that is a current member of the cluster as a quorum node, but quorum nodes make no use of operational state information stored in the database and will not respond to operational state changes. (A node only uses operational state information if it starts as a regular event-router node.)

Starting the SLEE

To start a SLEE on one or more nodes, use the following rhino-console command or related MBean operations.

Note If executed without a list of nodes, all per-node desired state for the SLEE is removed and the default desired state of the SLEE is set to running (if it is not already).

Console command: start

Command

start [-nodes node1,node2,...] [-ifneeded]
  Description
    Start the SLEE (on the specified nodes)

Example

To start nodes 101 and 102:

$ ./rhino-console start -nodes 101,102
Starting SLEE on node(s) [101,102]
SLEE transitioned to the Starting state on node 101
SLEE transitioned to the Starting state on node 102

MBean operation: setPerNodeDesiredState

MBean

Rhino extension

Activate or deactivate on specific nodes
public void setPerNodeDesiredState(int[] nodeIDs, SleeDesiredState desiredState)
    throws NullPointerException, InvalidArgumentException,
        SLEEManagementException;

Rhino provides an extension to set the desired state for a SLEE on a set of nodes.
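For example, a minimal sketch that requests the SLEE be running on nodes 101 and 102, assuming the SleeDesiredState constant name shown (verify the exact enum constants in the javadoc):

SleeManagementMBean sleeManagement = RhinoManagement.getSleeManagementMBean(client);
// Override the default: desire the SLEE to be running on nodes 101 and 102
sleeManagement.setPerNodeDesiredState(new int[] {101, 102}, SleeDesiredState.RUNNING);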

MBean operation: setDefaultDesiredState

MBean

Rhino extension

Activate or deactivate on nodes that do not have per-node SLEE state configured
public void setDefaultDesiredState(SleeDesiredState desiredState)
    throws NullPointerException, InvalidArgumentException,
        SLEEManagementException;

Rhino provides an extension to set the desired state for a SLEE on nodes that do not have a per-node desired state configured.

MBean operation: removePerNodeDesiredState

MBean

Rhino extension

Activate or deactivate on nodes that have per-node SLEE state configured that is different from the default state
public void removePerNodeDesiredState(int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
        SLEEManagementException;

Rhino provides an extension to clear the desired state for a SLEE on a set of nodes. Nodes that do not have a per-node desired state configured use the default desired state.
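Together, these extensions support the common pattern of setting a cluster-wide default and clearing stale per-node overrides; a sketch under the same assumptions as above:

// Make running the cluster-wide default desired state...
sleeManagement.setDefaultDesiredState(SleeDesiredState.RUNNING);
// ...then remove any per-node override on node 103 so it inherits that default
sleeManagement.removePerNodeDesiredState(new int[] {103});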

MBean operation: start

MBean

SLEE-defined

Start all nodes
public void start()
  throws InvalidStateException, ManagementException;

Rhino’s implementation of the SLEE-defined start operation attempts to transition all event-router nodes in the primary component from the STOPPED to the STARTING state. For this to work, at least one node must be in the STOPPED state.

Rhino extension

Start specific nodes
public void start(int[] nodeIDs)
  throws NullPointerException, InvalidArgumentException,
    InvalidStateException, ManagementException;

Rhino provides an extension that adds an argument which lets you control which nodes to start (by specifying node IDs). For this to work, the specified nodes must be in the STOPPED state.

Stopping the SLEE

To stop SLEE event-routing functions on one or more nodes, use the following rhino-console command or related MBean operations.

Note If executed without a list of nodes, all per-node desired state for the SLEE is removed and the default desired state of the SLEE is set to stopped (if it is not already).

Console command: stop

Command

stop [-nodes node1,node2,...] [-reassignto node3,node4,...] [-ifneeded]
  Description
    Stop the SLEE (on the specified nodes (reassigning replicated activities to the
    specified nodes))

Examples

To stop nodes 101 and 102:

$ ./rhino-console stop -nodes 101,102
Stopping SLEE on node(s) [101,102]
SLEE transitioned to the Stopping state on node 101
SLEE transitioned to the Stopping state on node 102

To stop only node 101 and reassign replicated activities to node 102:

$ ./rhino-console stop -nodes 101 -reassignto 102
Stopping SLEE on node(s) [101]
SLEE transitioned to the Stopping state on node 101
Replicated activities reassigned to node(s) [102]

To stop node 101 and distribute replicated activities of each replicating resource adaptor entity to all other eligible nodes (those on which the resource adaptor entity is in the ACTIVE state and the SLEE is in the RUNNING state), specify an empty (zero-length) argument for the -reassignto option:

$ ./rhino-console stop -nodes 101 -reassignto ""
Stopping SLEE on node(s) [101]
SLEE transitioned to the Stopping state on node 101
Replicated activities reassigned to node(s) [102,103]
Tip See also Reassigning a Resource Adaptor Entity’s Activities to Other Nodes, particularly the Requirements tab.

MBean operation: setPerNodeDesiredState

MBean

Rhino extension

Activate or deactivate on specific nodes
public void setPerNodeDesiredState(int[] nodeIDs, SleeDesiredState desiredState)
    throws NullPointerException, InvalidArgumentException,
        SLEEManagementException;

Rhino provides an extension to set the desired state for a SLEE on a set of nodes.

MBean operation: setDefaultDesiredState

MBean

Rhino extension

Activate or deactivate on nodes that do not have per-node SLEE state configured
public void setDefaultDesiredState(SleeDesiredState desiredState)
    throws NullPointerException, InvalidArgumentException,
        SLEEManagementException;

Rhino provides an extension to set the desired state for a SLEE on nodes that do not have a per-node desired state configured.

MBean operation: removePerNodeDesiredState

MBean

Rhino extension

Activate or deactivate on nodes that have per-node state configured that is different from the default state
public void removePerNodeDesiredState(int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
        SLEEManagementException;

Rhino provides an extension to clear the desired state for a SLEE on a set of nodes. Nodes that do not have a per-node desired state configured use the default desired state.

MBean operation: stop

MBean

SLEE-defined

Stop all nodes
public void stop()
  throws InvalidStateException, ManagementException;

Rhino’s implementation of the SLEE-defined stop operation attempts to transition all event-router nodes in the primary component from the RUNNING to the STOPPING state. For this to work, at least one node must begin in the RUNNING state.

Rhino extensions

Stop specific nodes
public void stop(int[] nodeIDs)
  throws NullPointerException, InvalidArgumentException,
    InvalidStateException, ManagementException;

Rhino provides an extension that adds an argument which lets you control which nodes to stop (by specifying node IDs). For this to work, specified nodes must begin in the RUNNING state.


Reassign activities to other nodes
public void stop(int[] stopNodeIDs, int[] reassignActivitiesToNodeIDs)
  throws NullPointerException, InvalidArgumentException,
    InvalidStateException, ManagementException;

Rhino also provides an extension that adds another argument, which lets you reassign ownership of replicated activities (from replicating resource adaptor entities) from the stopping nodes, distributing the activities of each resource adaptor entity equally among other event-router nodes where the resource adaptor entity is eligible to adopt them. With a smaller set of activities, the resource adaptor entities on the stopping nodes can more quickly return to the INACTIVE state (which is required for the SLEE to transition from the STOPPING to the STOPPED state). This only works for resource adaptor entities that are replicating activity state (see the description of the "Rhino-defined configuration property" on the MBean tab on Creating a Resource Adaptor Entity). See also Reassigning a Resource Adaptor Entity’s Activities to Other Nodes, in particular the Requirements tab.
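A sketch of this variant, mirroring the console examples above (that an empty array triggers automatic distribution, as the empty -reassignto argument does, is an assumption to verify in the javadoc):

// Stop node 101, reassigning its replicated activities to nodes 102 and 103
sleeManagement.stop(new int[] {101}, new int[] {102, 103});
// Assumed equivalent of -reassignto "": distribute across all eligible nodes
sleeManagement.stop(new int[] {101}, new int[0]);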

Retrieving the State of Nodes

Basic Operational State of a Node

Retrieving actual state

To retrieve the actual operational state of a node, use the following rhino-console command or related MBean operation. For an explanation of the terms "actual state" and "desired state", see Declarative Configuration Concepts and Terminology.

Console command: getsleeactualstate

Command

getsleeactualstate <-all|-nodes node1,node2,...>
  Description
    Get the actual SLEE state for the specified nodes. If -all is specified, query
    the state of all current event router cluster members.

Output

The rhino-console client displays the actual operational state of the specified node(s), or every event-router node in the primary component if -all is specified.

Examples

To display the actual state of only node 101:

$ ./rhino-console getsleeactualstate -nodes 101
Node 101: Stopped

To display the actual state of every event-router node:

$ ./rhino-console getsleeactualstate -all
Getting actual SLEE state for node(s) [101,102]
Node 101: Stopped
Node 102: Running

MBean operation: getActualState

MBean

Rhino extension

Return actual state of a set of nodes
public SleeActualState[] getActualState(int[] nodeIDs)
  throws ManagementException;

Retrieving desired state

To retrieve the desired operational state of a node, use the following rhino-console command or related MBean operation.

Console command: getsleedesiredstate

Command

getsleedesiredstate <-default|-all|-nodes node1,node2,...>
  Description
    Get the default or per-node desired SLEE state. If -all is specified, query the
    state of all current event router nodes as well as all nodes with saved per-node
    state.

Output

The rhino-console client displays the desired state of the specified node(s), or every node with configured state and every event-router node in the primary component if -all is specified.

Examples

To display the desired state of only node 101:

$ ./rhino-console getsleedesiredstate -nodes 101
Node 101: Stopped

To display the desired state of every event-router node and configured node:

$ ./rhino-console getsleedesiredstate -all
Node 101: Stopped
Node 102: Running (default)
Node 103: Running

To display the default desired state that unconfigured event router nodes will inherit:

$ ./rhino-console getsleedesiredstate -default
Getting default SLEE state
Default SLEE state is: running

MBean operation: getPerNodeDesiredState

MBean

Rhino extension

Return desired state of a set of nodes
public SleeDesiredState[] getPerNodeDesiredState(int[] nodeIDs)
  throws ManagementException;

MBean operation: getDefaultDesiredState

MBean

Rhino extension

Return the default desired state used by nodes that do not have a configured per-node state
public SleeDesiredState getDefaultDesiredState()
  throws ManagementException;
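
A sketch querying both forms of desired state, under the same assumptions as the earlier examples:

// Per-node desired states for nodes 101 and 102 (one entry per requested node)
SleeDesiredState[] perNode = sleeManagement.getPerNodeDesiredState(new int[] {101, 102});
// The default desired state inherited by nodes with no per-node state
SleeDesiredState defaultState = sleeManagement.getDefaultDesiredState();
System.out.println("Default desired state: " + defaultState);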

Retrieving SLEE-defined state

To retrieve the basic operational state of a node in a form compatible with the JAIN SLEE specification, use the following rhino-console command or related MBean operation.

Console command: state

Command

state [-nodes node1,node2,...]
  Description
    Get the state of the SLEE (on the specified nodes)

Output

The rhino-console client displays the operational state of the specified node(s), or every event-router node in the primary component if none are specified.

Examples

To display the state of only node 101:

$ ./rhino-console state -nodes 101
Node 101 is Stopped

To display the state of every event-router node:

$ ./rhino-console state
Node 101 is Stopped
Node 102 is Running

MBean operation: getState

MBean

SLEE-defined

Return state of current node
public SleeState getState()
  throws ManagementException;

Rhino’s implementation of the SLEE-defined getState operation returns the SLEE-defined state most closely representative of the actual state of the SLEE on the node the invoking client is connected to. When using the Rhino client library with a list of hosts, this will usually be the node on the first host in the list. When multiple nodes are running on the same host, the oldest node on the host will usually expose the management interface and thus be the target of this query.

Note

Since Rhino 3.0.0, the actual state of components on each node can update asynchronously. This differs from symmetric activation state mode in earlier Rhino versions in that the value returned by getState() is not representative of the state on other cluster nodes. Users of this method who previously configured symmetric activation state mode should switch to checking the state of all nodes using the method getState(int[] nodeIDs), or one of the new getPerNodeDesiredState(int[] nodeIDs) or getActualState(int[] nodeIDs) operations, depending on the purpose of the state query. A list of event-router node IDs can be obtained using RhinoHousekeepingMBean.getEventRouterNodes(). For example, to wait until the SLEE is active on all nodes:

RhinoHousekeepingMBean rhinoHousekeeping = RhinoManagement.getRhinoHousekeepingMBean(client);
SleeManagementMBean sleeManagement = RhinoManagement.getSleeManagementMBean(client);
boolean active = false;
while (!active) {
    // Query the actual state of every event-router node in the cluster
    SleeActualState[] nodeStates = sleeManagement.getActualState(rhinoHousekeeping.getEventRouterNodes());
    // The SLEE is active once every node reports the ACTIVE actual state
    active = Arrays.stream(nodeStates).allMatch(s -> s == SleeActualState.ACTIVE);
}

Rhino extension

Return state of specific nodes
public SleeState[] getState(int[] nodeIDs)
  throws NullPointerException, InvalidArgumentException,
    ManagementException;

Rhino provides an extension that adds an argument which lets you control which nodes to examine (by specifying node IDs).

Detailed Information for Every Node in the Cluster

To retrieve detailed information for every node in the cluster (including quorum nodes), use the following rhino-console command or related MBean operation.

Console command: getclusterstate

Command

getclusterstate
  Description
    Display the current state of the Rhino Cluster

Output

For every node in the cluster, retrieves detailed information on the:

  • node ID

  • type (event-router or quorum)

  • host name of the machine the node is running on

  • time the node was started, and how long it has been alive

  • operational state

  • number of alarms currently raised on the node.

Example

$ ./rhino-console getclusterstate
node-id   active-alarms   host               node-type      slee-state   start-time          up-time
--------  --------------  -----------------  -------------  -----------  ------------------  -----------------
     101               0   host1.domain.com   event-router      Stopped   20080327 12:16:26    0days,2h,40m,3s
     102               0   host2.domain.com   event-router      Running   20080327 12:16:30   0days,2h,39m,59s
     103               0   host3.domain.com         quorum          n/a   20080327 14:36:25    0days,0h,20m,4s

MBean operation: getClusterState

MBean

Rhino extension

public TabularData getClusterState()
  throws ManagementException;

(Refer to the javadoc for the structure of the TabularData returned by this operation.)
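A minimal sketch of iterating the returned table; the row item names used here are assumptions based on the console column headers above, so check the javadoc for the actual keys:

import javax.management.openmbean.CompositeData;
import javax.management.openmbean.TabularData;

RhinoHousekeepingMBean housekeeping = RhinoManagement.getRhinoHousekeepingMBean(client);
TabularData cluster = housekeeping.getClusterState();
for (Object row : cluster.values()) {
    CompositeData node = (CompositeData) row;
    // Item names ("node-id", "slee-state") assumed from the console output
    System.out.println(node.get("node-id") + ": " + node.get("slee-state"));
}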

Terminating Nodes

To terminate cluster nodes, you can either shut them down gracefully or kill them forcefully, as described below.

Note
What’s the difference between "stopping", "shutting down" and "killing" a node?

You can stop functions on nodes and nodes themselves, by:

  • Stopping  — stops event-routing functions on the node, but the node remains alive and a member of the primary component.

  • Shutting down — terminates the node (so that it leaves the primary component).

  • Killing  — terminates the node regardless of its operational state. Killing a node is not recommended unless the node cannot be shut down normally (for example, the node becomes stuck in the STOPPING state for some reason).

See also Stop Rhino in the Getting Started Guide, which details using the stop-rhino.sh script (which uses the rhino-console commands described in this section) to shut down or kill nodes or clusters.

Shut Down Gracefully

To gracefully shut down one or more nodes, use the following rhino-console command or related MBean operation.

Console command: shutdown

Command

shutdown [-nodes node1,node2,...] [-timeout timeout] [-restart]
  Description
    Gracefully shutdown and terminate the cluster (or the specified nodes). If the
    SLEE is running in any namespace on any target node, event routing functions are
    allowed to complete before termination without affecting existing desired state.
    The optional timeout is specified in seconds. Optionally restart the node(s)
    after shutdown

Examples

To shut down the entire cluster:

$ ./rhino-console shutdown
Shutting down the SLEE
Shutdown successful

To shut down only node 102:

$ ./rhino-console shutdown -nodes 102
Shutting down node(s) [102]
Shutdown successful
Note Since Rhino 3.0.0 the shutdown console command will shut down the specified nodes regardless of the desired SLEE state. If the SLEE is running in any namespace on any target node, event routing functions are allowed to complete before termination without affecting existing desired state.

MBean operation: shutdownCluster

MBean

Rhino extension

Shut down all nodes
public void shutdownCluster(boolean restart)
  throws InvalidStateException, ManagementException;

The shutdownCluster operation terminates every node in the cluster. If the restart flag is set, the nodes will be restarted to the currently configured desired state.

Rhino extension

Shut down all nodes with a timeout
public void shutdownCluster(boolean restart, long timeout)
  throws NullPointerException, InvalidArgumentException,
    InvalidStateException, ManagementException;

The shutdownCluster operation terminates every node in the cluster. If the restart flag is set, the nodes will be restarted to the currently configured desired state. If the timeout argument is greater than zero, any nodes that still have live activities when the timeout expires will be shut down anyway. This may result in call failures.

MBean operation: shutdownNodes

MBean

Rhino extension

Shut down specific nodes
public void shutdownNodes(int[] nodeIDs, boolean restart)
  throws InvalidStateException, ManagementException;

The shutdownNodes operation terminates the specified set of nodes. If the restart flag is set, the nodes will be restarted to the currently configured desired state.
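For example, a sketch restarting node 102 gracefully, assuming these extensions are exposed on the same SLEE Management MBean used in earlier examples:

// Gracefully shut down node 102, then restart it to its configured desired state
sleeManagement.shutdownNodes(new int[] {102}, true);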

Rhino extension

Shut down specific nodes with a timeout
public void shutdownNodes(int[] nodeIDs, boolean restart, long timeout)
  throws NullPointerException, InvalidArgumentException,
    InvalidStateException, ManagementException;

The shutdownNodes operation terminates the specified set of nodes. If the restart flag is set, the nodes will be restarted to the currently configured desired state. If the timeout argument is greater than zero, any nodes that still have live activities when the timeout expires will be shut down anyway. This may result in call failures.

MBean operation: shutdown

MBean

SLEE-defined

Shut down all nodes
public void shutdown()
  throws InvalidStateException, ManagementException;

Rhino’s implementation of the SLEE-defined shutdown operation terminates every node in the cluster.

Rhino extension

Shut down specific nodes
public void shutdown(int[] nodeIDs)
  throws NullPointerException, InvalidArgumentException,
    InvalidStateException, ManagementException;

Rhino provides an extension that adds an argument which lets you control which nodes to shut down (by specifying node IDs).

MBean operation: reboot

MBean

Rhino extension

Reboot all nodes
public void reboot(SleeState[] states)
  throws InvalidArgumentException, InvalidStateException, ManagementException;

Reboots every node in the cluster to the state specified.

Rhino extension

Reboot specific nodes
public void reboot(int[] nodeIDs, SleeState[] states)
  throws NullPointerException, InvalidArgumentException, InvalidStateException, ManagementException;

Extension to reboot that adds an argument which lets you control which nodes to reboot (by specifying node IDs).

Note Event-router nodes can restart to either the RUNNING state or the STOPPED state. Quorum nodes must have a state provided but do not use this in operation.

Forcefully Terminate

To forcefully terminate a cluster node that is in any state where it can respond to management operations, use the following rhino-console command or related MBean operation.

Console command: kill

Command

kill -nodes node1,node2,...
  Description
    Forcefully terminate the specified nodes (forces them to become non-primary)

Example

To forcefully terminate nodes 102 and 103:

$ ./rhino-console kill -nodes 102,103
Terminating node(s) [102,103]
Termination successful

MBean operation: kill

MBean

Rhino operation

public void kill(int[] nodeIDs)
  throws NullPointerException, InvalidArgumentException,
    ManagementException;

Rhino’s kill operation immediately and forcefully terminates specified nodes. It requires an argument to select which nodes to terminate (by specifying node IDs).

Caution
Application state may be lost

Killing a node is not recommended — forcibly terminated nodes lose all non-replicated application state.

Activation State

This section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples, and links to related javadocs:

Procedure rhino-console command MBean → Operation

Listing all default and per-node desired states

 listdesiredstates

Node Housekeeping → getDesiredStates

Listing nodes with per-node desired state

 getnodeswithpernodedesiredstate

Node Housekeeping → getNodesWithPerNodeActivationState

Copying per-node desired state to another node

 copypernodedesiredstate

Node Housekeeping → copyPerNodeActivationState

Removing per-node desired state

 removepernodedesiredstate

Node Housekeeping → removePerNodeActivationState

It also describes the deprecated activation state modes that have been functionally replaced by default and per-node desired state.

About Activation State Modes

Rhino versions prior to 3.0.0 had two modes of operation for managing the activation state of services and resource adaptor entities: per-node and symmetric. From Rhino 3.0.0 these two modes have been combined and superseded by a default desired state that can be overridden by per-node desired state: per-node desired state takes precedence where present, and the default desired state applies where it does not.

The actual state for all functions is always maintained on a per-node basis.

Per-node activation state

In per-node activation state mode, Rhino maintained activation state for the installed services and created resource adaptor entities in a namespace on a per-node basis. That is, the SLEE recorded separate activation state information for each individual cluster node.

The per-node activation state mode was the default mode in a newly installed Rhino cluster.

Symmetric activation state

In the symmetric activation state mode, Rhino maintained a single cluster-wide activation state view for each installed service and created resource adaptor entity. So, for example, if a service was activated, then it was simultaneously activated on every cluster node. If a new node joined the cluster, then the services and resource adaptor entities on that node each entered the same operational state as for existing cluster nodes.

Default and per-node desired state and actual state

In Rhino 3.0.0, a default activation state for the SLEE, an installed service, or a created resource adaptor entity is configured for all nodes in the cluster, with optional overrides configured on a per-node basis. The effective desired state for a node is the per-node state, or the default state if no per-node state exists for a given function. To manage the state of a cluster in the way previously served by symmetric activation state mode, use the default state and leave per-node state unconfigured. Commands for managing per-node desired state can be found under the topic Per-Node Desired State.

In operation, Rhino nodes have an actual state that is the current operational state. The actual state follows the desired state with a per-node convergence subsystem managing transitions between actual states as the lifecycle rules of system functions allow.

These terms are defined under Declarative Configuration Concepts and Terminology.

Listing All Desired States

To obtain a report detailing all the default and per-node desired states for the SLEE, services, and resource adaptor entities, use the following rhino-console command or related MBean operation.

Console command: listdesiredstates

Command

listdesiredstates [-o filename]
  Description
    List all default and per-node desired states for the SLEE, services, and
    resource adaptor entities.  The -o option will output the raw json-formatted
    report to the specified file instead of a human-readable report being output to
    the console.

Examples

$ ./rhino-console listdesiredstates
SLEE desired state:
  Default desired state: running
  Per-node desired states:
    node 103: stopped

Service desired states:
  Service: ServiceID[name=SIS-IN Test Service Composition Selector Service,vendor=OpenCloud,version=0.3]
    Default desired state: active
    Per-node desired states:
      node 103: inactive

  Service: ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.3]
    Default desired state: active
    Per-node desired states:
      node 103: inactive

  Service: ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.3]
    Default desired state: active
    Per-node desired states:
      node 103: inactive

  Service: ServiceID[name=Call Duration Logging Service,vendor=OpenCloud,version=0.3]
    Default desired state: active
    Per-node desired states:
      node 103: inactive

  Service: ServiceID[name=VPN Service,vendor=OpenCloud,version=0.3]
    Default desired state: active
    Per-node desired states:
      node 103: inactive

Resource adaptor entity desired states:
  Resource adaptor entity: insis-ptc-1a
    Default desired state: active

  Resource adaptor entity: insis-ptc-1b
    Default desired state: active

  Resource adaptor entity: insis-ptc-external
    Default desired state: active

To save the report to a file in JSON format:

$ ./rhino-console listdesiredstates -o desired-states.json
Output written to file: desired-states.json

MBean operation: getDesiredStates

MBean

Rhino operation

public String getDesiredStates()
    throws ManagementException;

This operation returns a JSON-formatted string that reports the default desired state and any per-node desired state, where it exists, for the SLEE and each service and resource adaptor entity.

Per-Node Desired State

This section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples, and links to related javadocs.

Procedure rhino-console command MBean → Operation

Listing nodes with per-node desired state

 getnodeswithpernodedesiredstate

Node Housekeeping → getNodesWithPerNodeActivationState

Copying per-node desired state to another node

 copypernodedesiredstate

Node Housekeeping → copyPerNodeActivationState

Removing per-node desired state

 removepernodedesiredstate

Node Housekeeping → removePerNodeActivationState

Listing Nodes with Per-Node Desired State

To get a list of nodes with per-node desired state, use the following rhino-console command or related MBean operation.

Console command: getnodeswithpernodedesiredstate

Command

getnodeswithpernodedesiredstate
  Description
    Get the set of nodes for which per-node desired state exists

Example

$ ./rhino-console getnodeswithpernodedesiredstate
Nodes with per-node desired state: [101,102,103]

MBean operation: getNodesWithPerNodeActivationState

MBean

Rhino operation

public int[] getNodesWithPerNodeActivationState()
    throws ManagementException;

This operation returns an array listing the cluster node IDs of nodes that have per-node desired state recorded in the database.

Note The term activation state is used in this method name due to functional equivalence to per-node activation state configuration in versions of Rhino prior to 3.0.0.

Copying Per-Node Desired State to Another Node

To copy per-node desired state from one node to another, use the following rhino-console command or related MBean operation. This replaces any configured desired state for the node and triggers state convergence to update the actual state for the SLEE and all Services and Resource Adaptor Entities. Copying the state from a node that does not have per-node desired state configured will remove the state configuration for the target node. When a node has no per-node desired state configured it uses the default desired state.

Console command: copypernodedesiredstate

Command

copypernodedesiredstate <from-node-id> <to-node-id>
  Description
    Copy per-node desired state from one node to another

Example

To copy the per-node desired state from node 101 to node 102:

$ ./rhino-console copypernodedesiredstate 101 102
Per-node desired state copied from 101 to 102

MBean operation: copyPerNodeActivationState

MBean

Rhino operation

public boolean copyPerNodeActivationState(int targetNodeID)
    throws UnsupportedOperationException, InvalidArgumentException,
           InvalidStateException, ManagementException;

This operation:

  • copies the per-node desired state recorded in the database for the node for which the Node Housekeeping MBean was created (see Finding Housekeeping MBeans) to the given target node

  • returns the value true if it found and copied per-node desired state, or false if the two nodes had identical per-node desired state.

Note The term activation state is used in this method name due to functional equivalence to per-node activation state configuration in versions of Rhino prior to 3.0.0.
Note The start-rhino.sh command with the Production version of Rhino also includes an option (-c) to copy per-node desired state from another node to the booting node as it initialises. (See Start Rhino in the Getting Started Guide.)
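A sketch of the copy operation, assuming a hypothetical accessor for the per-node Housekeeping MBean (see Finding Housekeeping MBeans for how these proxies are actually obtained):

// Hypothetical accessor: a Node Housekeeping MBean proxy for source node 101
NodeHousekeepingMBean node101 = RhinoManagement.getNodeHousekeepingMBean(client, 101);
// Copy node 101's per-node desired state onto node 102
boolean copied = node101.copyPerNodeActivationState(102);
System.out.println(copied ? "State copied" : "Nodes already had identical state");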

Removing Per-Node Desired State

To remove per-node desired state, use the following rhino-console command or related MBean operation. This removes any configured desired state for the node and triggers state convergence to update the actual state for the SLEE and all Services and Resource Adaptor Entities. When a node has no per-node desired state configured it uses the default desired state.

Console command: removepernodedesiredstate

Command

removepernodedesiredstate <-all|-nodes node1,node2,...>
  Description
    Removes all per-node desired state from either all nodes (with -all), or
    specific nodes (with -nodes). This can remove per-node desired state from
    offline nodes.

Example

To remove per-node desired state from node 103:

$ ./rhino-console removepernodedesiredstate -nodes 103
Per-node desired state removed from 103

MBean operation: removePerNodeActivationState

MBean

Rhino operation

public boolean removePerNodeActivationState()
    throws UnsupportedOperationException, InvalidStateException,
           ManagementException;

This operation:

  • removes the per-node desired state recorded in the database for node for which this Node Housekeeping MBean was created (see Finding Housekeeping MBeans)

  • returns the value true if it found and removed per-node desired state, or false if it found no per-node desired state to remove.

Note The start-rhino.sh command with the Production version of Rhino also includes an option (-d) to remove per-node desired state from the booting node as it initialises. (See Start Rhino in the Getting Started Guide.)

Startup and Shutdown Priority

Startup and shutdown priorities should be set when resource adaptors and services need to be activated or deactivated in a particular order when the SLEE is started or stopped. For example, the resource adaptors responsible for writing Call Detail Records often need to be deactivated last.

Valid priorities are between -128 and 127. Startup and shutdown occur from highest to lowest priority.
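For example, to have a hypothetical CDR-writing resource adaptor entity named cdr-ra activate first and deactivate last, give it the highest starting priority and the lowest stopping priority. A sketch, assuming an accessor for the resource management MBean that carries these Rhino extensions:

// Hypothetical proxy accessor for the resource management MBean
ResourceManagementMBean resourceManagement = RhinoManagement.getResourceManagementMBean(client);
// Highest starting priority: cdr-ra activates before lower-priority entities
resourceManagement.setStartingPriority("cdr-ra", (byte) 127);
// Lowest stopping priority: cdr-ra deactivates after everything else
resourceManagement.setStoppingPriority("cdr-ra", (byte) -128);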

Console commands

Console command: getraentitystartingpriority

Command

getraentitystartingpriority <entity-name>
  Description
    Get the starting priority for a resource adaptor entity

Examples

./rhino-console getraentitystartingpriority sipra
Resource adaptor entity sipra activation priority is currently 0

Console command: getraentitystoppingpriority

Command

getraentitystoppingpriority <entity-name>
  Description
    Get the stopping priority for a resource adaptor entity

Examples

./rhino-console getraentitystoppingpriority sipra
Resource adaptor entity sipra deactivation priority is currently 0

Console command: getservicestartingpriority

Command

getservicestartingpriority <service-id>
  Description
    Get the starting priority for a service

Examples

./rhino-console getservicestartingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1
Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] activation priority is currently 0

Console command: getservicestoppingpriority

Command

getservicestoppingpriority <service-id>
  Description
    Get the stopping priority for a service

Examples

./rhino-console getservicestoppingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1
Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] deactivation priority is currently 0

Console command: setraentitystartingpriority

Command

setraentitystartingpriority <entity-name> <priority>
  Description
    Set the starting priority for a resource adaptor entity.  The priority must be
    between -128 and 127 and higher priority values have precedence over lower
    priority values

Examples

./rhino-console setraentitystartingpriority sipra 127
Resource adaptor entity sipra activation priority set to 127
./rhino-console setraentitystartingpriority sipra -128
Resource adaptor entity sipra activation priority set to -128

Console command: setraentitystoppingpriority

Command

setraentitystoppingpriority <entity-name> <priority>
  Description
    Set the stopping priority for a resource adaptor entity.  The priority must be
    between -128 and 127 and higher priority values have precedence over lower
    priority values

Examples

./rhino-console setraentitystoppingpriority sipra 127
Resource adaptor entity sipra deactivation priority set to 127
./rhino-console setraentitystoppingpriority sipra -128
Resource adaptor entity sipra deactivation priority set to -128

Console command: setservicestartingpriority

Command

setservicestartingpriority <service-id> <priority>
  Description
    Set the starting priority for a service.  The priority must be between -128 and
    127 and higher priority values have precedence over lower priority values

Examples

./rhino-console setservicestartingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1 127
Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] activation priority set to 127
./rhino-console setservicestartingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1 -128
Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] activation priority set to -128

Console command: setservicestoppingpriority

Command

setservicestoppingpriority <service-id> <priority>
  Description
    Set the stopping priority for a service.  The priority must be between -128 and
    127 and higher priority values have precedence over lower priority values

Examples

./rhino-console setservicestoppingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1 127
Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] deactivation priority set to 127
./rhino-console setservicestoppingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1 -128
Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] deactivation priority set to -128

MBean operations

Services

MBean

Rhino extensions

getStartingPriority
byte getStartingPriority(ServiceID service)
    throws NullPointerException, UnrecognizedServiceException, ManagementException;
getStartingPriorities
Byte[] getStartingPriorities(ServiceID[] services)
    throws NullPointerException, ManagementException;
getStoppingPriority
byte getStoppingPriority(ServiceID service)
   throws NullPointerException, UnrecognizedServiceException, ManagementException;
getStoppingPriorities
Byte[] getStoppingPriorities(ServiceID[] services)
    throws NullPointerException, ManagementException;
setStartingPriority
void setStartingPriority(ServiceID service, byte priority)
    throws NullPointerException, UnrecognizedServiceException, ManagementException;
setStoppingPriority
void setStoppingPriority(ServiceID service, byte priority)
    throws NullPointerException, UnrecognizedServiceException, ManagementException;

Resource Adaptors

MBean

Rhino extensions

getStartingPriority
byte getStartingPriority(String entityName)
    throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException;
getStartingPriorities
Byte[] getStartingPriorities(String[] entityNames)
    throws NullPointerException, ManagementException;
getStoppingPriority
byte getStoppingPriority(String entityName)
   throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException;
getStoppingPriorities
Byte[] getStoppingPriorities(String[] entityNames)
    throws NullPointerException, ManagementException;
setStartingPriority
void setStartingPriority(String entityName, byte priority)
    throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException;
setStoppingPriority
void setStoppingPriority(String entityName, byte priority)
    throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException;

Deployable Units

As well as an overview of deployable units, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:

Procedure rhino-console command(s) MBean → Operation

Installing deployable units

 install
installlocaldu

DeploymentMBean → install

Uninstalling deployable units

 uninstall

DeploymentMBean → uninstall

Listing deployable units

 listdeployableunits

DeploymentMBean → getDeployableUnits

About Deployable Units

Below are a definition, preconditions for installing and uninstalling, and an example of a deployable unit.

What is a deployable unit?

A deployable unit is a jar file that can be installed in the SLEE. It contains:

  • a deployment descriptor

  • constituent jar files, with Java class files and deployment descriptors for components such as:

    • SBBs

    • events

    • profile specifications

    • resource adaptor types

    • resource adaptors

    • libraries

  • XML files for services.

Note The JAIN SLEE 1.1 specification defines the structure of a deployable unit.

Installing and uninstalling deployable units

You must install and uninstall deployable units in a particular order, according to the dependencies of the SLEE components they contain. You cannot install a deployable unit unless either it contains all of its dependencies, or they are already installed. For example, if your deployable unit contains an SBB which depends on a library jar, the library jar must either already be installed in the SLEE, or be included in that same deployable unit jar.

Pre-conditions

A deployable unit cannot be installed if any of the following is true:

  • A deployable unit with the same URL has already been installed in the SLEE.

  • The deployable unit contains a component with the same name, vendor and version as a component of the same type that is already installed in the SLEE.

  • The deployable unit contains a component that references other components that are not yet installed in the SLEE and are not included in the deployable unit jar. (For example, an SBB component may reference event-type components and profile-specification components that are not included or pre-installed.)

A deployable unit cannot be uninstalled if either of the following is true:

  • There are any dependencies on any of its components from components in other installed deployable units. For example, if a deployable unit contains an SBB jar that depends on a profile-specification jar contained in a second deployable unit, the deployable unit containing the profile-specification jar cannot be uninstalled while the deployable unit containing the SBB jar remains installed.

  • There are "instances" of components contained in the deployable unit. For example, a deployable unit containing a resource adaptor cannot be uninstalled if the SLEE includes resource adaptor entities of that resource adaptor.

Deployable unit example

The following example illustrates the deployment descriptor for a deployable unit jar file:

<deployable-unit>
<description> ... </description>
...
<jar> SomeProfileSpec.jar </jar>
<jar> BarAddressProfileSpec.jar </jar>
<jar> SomeCustomEvent.jar </jar>
<jar> FooSBB.jar </jar>
<jar> BarSBB.jar </jar>
...
<service-xml> FooService.xml </service-xml>
...
</deployable-unit>

The content of the deployable unit jar file is as follows:

META-INF/deployable-unit.xml
META-INF/MANIFEST.MF
...
SomeProfileSpec.jar
BarAddressProfileSpec.jar
SomeCustomEvent.jar
FooSBB.jar
BarSBB.jar
FooService.xml
...

Installing Deployable Units

To install a deployable unit, use the following rhino-console command or related MBean operation.

Console commands: install, installlocaldu

Commands

Installing from a URL
install <url> [-type <type>] [-installlevel <level>]
  Description
    Install a deployable unit jar or other artifact.  To install something other
    than a deployable unit, the -type option must be specified.  The -installlevel
    option controls to what degree the deployable artifact is installed
Installing from a local file
installlocaldu <file url> [-type <type>] [-installlevel <level>] [-url url]
  Description
    Install a deployable unit or other artifact. This command will attempt to
    forward the file content (by reading the file) to rhino if the management client
    is on a different host.  To install something other than a deployable unit, the
    -type option must be specified.  The -installlevel option controls to what
    degree the deployable artifact is installed.  The -url option allows the
    deployment unit to be installed with an alternative URL identifier

Examples

To install a deployable unit from a given URL:

$ ./rhino-console install file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar
installed: DeployableUnitID[url=file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar]

To install a deployable unit from the local file system of the management client:

$ ./rhino-console installlocaldu file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar
installed: DeployableUnitID[url=file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar]

MBean operation: install

MBean

SLEE-defined

Install a deployable unit from a given URL
public DeployableUnitID install(String url)
  throws NullPointerException, MalformedURLException,
    AlreadyDeployedException, DeploymentException,
      ManagementException;

Installs the given deployable unit jar file into the SLEE. The given URL must be resolvable from the Rhino node.

Rhino extension

Install a deployable unit from a given byte array
public DeployableUnitID install(String url, byte[] content)
  throws NullPointerException, MalformedURLException,
      AlreadyDeployedException, DeploymentException,
          ManagementException;

Installs the given deployable unit jar file into the SLEE. The caller passes the actual file contents of the deployable unit in a byte array as a parameter to this method. The SLEE then installs the deployable unit as if it were from the URL.
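For example, a management client on a remote host can read a deployable unit file locally and pass its contents to Rhino. The following is a minimal sketch, assuming deploymentMBean is a proxy for Rhino’s extended DeploymentMBean obtained from an established management connection (the file path is illustrative):

// Read the deployable unit from the management client's local file system
byte[] content = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("/home/rhino/du/example-du.jar"));

// Install the deployable unit in the SLEE, identified by the given URL
DeployableUnitID id = deploymentMBean.install("file:/home/rhino/du/example-du.jar", content);
System.out.println("installed: " + id);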

Uninstalling Deployable Units

To uninstall a deployable unit, use the following rhino-console command or related MBean operation.

Warning A deployable unit cannot be uninstalled if it contains any components that any other deployable unit installed in the SLEE depends on.

Console command: uninstall

Command

uninstall <url>
  Description
    Uninstall a deployable unit jar

Examples

To uninstall a deployable unit which was installed with the given URL:

$ ./rhino-console uninstall file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar
uninstalled: DeployableUnitID[url=file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar]

Console command: cascadeuninstall

Command

cascadeuninstall <type> <url|component-id> [-force] [-s]
  Description
    Cascade uninstall a deployable unit or copied component. The optional -force
    argument prevents the command from prompting for confirmation before the
    uninstall occurs. The -s argument removes the shadow from a shadowed component
    and is not valid for deployable units

Examples

To uninstall a deployable unit that was installed with the given URL, along with all deployable units that depend on it:

  $ ./rhino-console cascadeuninstall du file:du/ocsip-ra-2.3.1.17.du.jar
Cascade removal of deployable unit file:du/ocsip-ra-2.3.1.17.du.jar requires the following operations to be performed:
  Deployable unit file:jars/sip-registrar-service.jar will be uninstalled
    SBB with SbbID[name=RegistrarSbb,vendor=OpenCloud,version=1.8] will be uninstalled
    Service with ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8] will be uninstalled
      This service will first be deactivated
  Deployable unit file:jars/sip-presence-service.jar will be uninstalled
    SBB with SbbID[name=EventStateCompositorSbb,vendor=OpenCloud,version=1.0] will be uninstalled
    SBB with SbbID[name=NotifySbb,vendor=OpenCloud,version=1.1] will be uninstalled
    SBB with SbbID[name=PublishSbb,vendor=OpenCloud,version=1.0] will be uninstalled
    Service with ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1] will be uninstalled
      This service will first be deactivated
    Service with ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] will be uninstalled
      This service will first be deactivated
    Service with ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0] will be uninstalled
      This service will first be deactivated
  Deployable unit file:jars/sip-proxy-service.jar will be uninstalled
    SBB with SbbID[name=ProxySbb,vendor=OpenCloud,version=1.8] will be uninstalled
    Service with ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8] will be uninstalled
      This service will first be deactivated
  Deployable unit file:du/ocsip-ra-2.3.1.17.du.jar will be uninstalled
    Resource adaptor with ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1] will be uninstalled
      Resource adaptor entity sipra will be removed
        This resource adaptor entity will first be deactivated
        Link name OCSIP bound to this resource adaptor entity will be removed

Continue? (y/n): y
Deactivating service ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8]
Deactivating service ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1]
Deactivating service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1]
Deactivating service ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0]
Deactivating service ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8]
All necessary services are inactive
Deactivating resource adaptor entity sipra
All necessary resource adaptor entities are inactive
Uninstalling deployable unit file:jars/sip-registrar-service.jar
Uninstalling deployable unit file:jars/sip-presence-service.jar
Uninstalling deployable unit file:jars/sip-proxy-service.jar
Unbinding resource adaptor entity link name OCSIP
Removing resource adaptor entity sipra
Uninstalling deployable unit file:du/ocsip-ra-2.3.1.17.du.jar

Utility: cascade-uninstall

cascade-uninstall

Uninstalls a deployable unit along with everything that depends on it

MBean operation: uninstall

MBean

SLEE-defined

public void uninstall(DeployableUnitID id)
  throws NullPointerException, UnrecognizedDeployableUnitException,
         DependencyException, InvalidStateException, ManagementException;

Uninstalls the given deployable unit jar file (along with all the components it contains) from the SLEE.

Listing Deployable Units

To list the installed deployable units, use the following rhino-console command or related MBean operation.

Console command: listdeployableunits

Command

listdeployableunits
  Description
    List the current installed deployable units

Example

To list the currently installed deployable units:

$ ./rhino-console listdeployableunits
DeployableUnitID[url=file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar]
DeployableUnitID[url=file:/home/rhino/rhino/lib/javax-slee-standard-types.jar]

MBean operation: getDeployableUnits

MBean

SLEE-defined

  public DeployableUnitID[] getDeployableUnits()
    throws ManagementException;

Returns the set of deployable unit identifiers that identify all the deployable units installed in the SLEE.
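For example, a client could enumerate the installed deployable units, producing output similar to the listdeployableunits console command above. A minimal sketch, again assuming deploymentMBean is a DeploymentMBean proxy:

// Print the identifier of every deployable unit installed in the SLEE
for (DeployableUnitID du : deploymentMBean.getDeployableUnits()) {
    System.out.println(du);
}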

Services

As well as an overview of SLEE services, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:

Each entry below lists the procedure, its rhino-console command(s), and the related MBean operation(s).

Listing services
  listservices
  Deployment → getServices

Retrieving a service’s state
  getserviceactualstate, getservicedesiredstate
  Service Management → getServiceState

Listing services by state
  listservicesbystate
  Service Management → getServices

Activating services
  activateservice
  Service Management → activate

Deactivating services
  deactivateservice
  Service Management → deactivate

Upgrading (activating and deactivating) services
  deactivateandactivateservice
  Service Management → deactivateAndActivate

Getting link bindings required by a service
  listserviceralinks
  Deployment → getServices

Listing the SBBs in a service
  listsbbs
  Deployment → getSbbs

Getting component descriptors
  (no console command)
  Deployment → getDescriptors

Checking whether service metrics recording is enabled
  getservicemetricsrecordingenabled
  ServiceManagementMBean → getServiceMetricsRecordingEnabled

Enabling or disabling service metrics recording
  setservicemetricsrecordingenabled
  ServiceManagementMBean → setServiceMetricsRecordingEnabled

Getting the replication selectors for a service
  getservicereplicationselectors
  ServiceManagementMBean → getReplicationSelectors

Setting the replication selectors for a service
  setservicereplicationselectors
  ServiceManagementMBean → setReplicationSelectors

About Services

The SLEE specification presents the operational lifecycle of a SLEE service — illustrated, defined and summarised below.

Note
What are SLEE services?

Services are SLEE components that provide the application logic to act on input from resource adaptors.

Service lifecycle

Service lifecycle states

State Definition
 INACTIVE

The service has been installed successfully and is ready to be activated, but is not yet running. The SLEE will not create SBB entities of the service’s root SBB to process events.

 ACTIVE

The service is running. The SLEE will create SBB entities of the service’s root SBB to process initial events, and will deliver events to SBB entities of the service’s SBBs as appropriate.

 STOPPING

The service is deactivating. Existing SBB entities of the service continue running and may complete their processing, but the SLEE will not create new SBB entities of the service’s root SBB for new activities.

Note When the SLEE has reclaimed all of a service’s SBB entities, the service transitions out of the STOPPING state and returns to the INACTIVE state.

Independent operational states

As explained in About SLEE Operational States, each event-router node in a Rhino cluster maintains its own lifecycle state machine, independent of other nodes in the cluster. This is also true for each service: one service might be INACTIVE on one node in a cluster, ACTIVE on another, and STOPPING on a third. The operational state of a service on each cluster node also persists to the disk-based database.

A service will enter the INACTIVE state, after node bootup and initialisation completes, if the database’s persistent operational state information for that service is missing, or is set to INACTIVE or STOPPING.

And, like node operational states, you can change the operational state of a service at any time, as long as at least one node in the cluster is available to perform the management operation (regardless of whether or not the node whose operational state is being changed is a current cluster member). For example, you might activate a service on node 103 before node 103 is booted — then, when node 103 boots, and after it completes initialisation, that service will transition to the ACTIVE state.

Configuring services

An administrator can configure a service before deployment by modifying its service-jar.xml deployment descriptor (in its deployable unit). This includes specifying:

  • the address profile table to use when a subscribed address selects initial events for the service’s root SBB

  • the default event-router priority for the SLEE to give to root SBB entities of the service when processing initial events.

Individual SBBs used in a service can also have configurable properties or environment entries. Values for these environment entries are defined in the sbb-jar.xml deployment descriptor included in the SBB’s component jar. Administrators can set or adjust the values for each environment entry before the SBB is installed in the SLEE.

The SLEE only reads the configurable properties defined for a service or SBB deployment descriptor at deployment time. If you need to change the value of any of these properties, you’ll need to:

  • uninstall the related component (service or SBB whose properties you want to configure) from the SLEE

  • change the properties

  • reinstall the component

  • uninstall and reinstall other components (as needed) to satisfy dependency requirements enforced by the SLEE.

Retrieving a Service’s State

Retrieving actual state

To retrieve the actual operational state of a Service, use the following rhino-console command or related MBean operation. For an explanation of the terms "actual state" and "desired state", see Concepts and Terminology.

Console command: getserviceactualstate

Command

getserviceactualstate <service-id> <-all|-nodes node1,node2,...>
  Description
    Get the actual service state for the specified nodes. If -all is specified,
    query the state of all current event router cluster members.

Output

The rhino-console client displays the actual operational state of the service on the specified node(s), or on every event-router node in the primary component if -all is specified.

Examples

To display the actual state of the service with the ServiceID name=SimpleService1,vendor=Open Cloud,version=1.0 on node 101 only:

$ ./rhino-console getserviceactualstate name=SimpleService1,vendor=Open Cloud,version=1.0 -nodes 101
Getting actual service state for node(s) [101]
Node 101: Stopped

To display the actual state of the service name=SimpleService1,vendor=Open Cloud,version=1.0 on every event-router node:

$ ./rhino-console getserviceactualstate name=SimpleService1,vendor=Open Cloud,version=1.0 -all
Getting actual service state for node(s) [101,102]
Node 101: Stopped
Node 102: Running

MBean operation: getActualState

MBean

Rhino extension

Return actual state of a set of nodes
public ServiceActualState getActualState(ServiceID serviceID, int[] nodeIDs)
  throws ManagementException;

Retrieving desired state

To retrieve the desired operational state of a Service, use the following rhino-console command or related MBean operation.

Console command: getservicedesiredstate

Command

getservicedesiredstate <service-id> <-default|-all|-nodes node1,node2,...>
  Description
    Get the default or per-node desired service state. If -all is specified, query
    the state of all current event router nodes as well as all nodes with saved
    per-node state.

Output

The rhino-console client displays the desired state of the service on the specified node(s), or on every node with configured state and every event-router node in the primary component if -all is specified.

Examples

To display the desired state of the service on node 101 only:

$ ./rhino-console getservicedesiredstate name=SimpleService1,vendor=Open Cloud,version=1.0 -nodes 101
Node 101: Stopped

To display the desired state of the service name=SimpleService1,vendor=Open Cloud,version=1.0 on every event-router node and every node with configured state:

$ ./rhino-console getservicedesiredstate name=SimpleService1,vendor=Open Cloud,version=1.0 -all
Node 101: Stopped
Node 102: Running (default)
Node 103: Running

To display the default desired state that unconfigured event router nodes will inherit:

$ ./rhino-console getservicedesiredstate name=SimpleService1,vendor=Open Cloud,version=1.0 -default
Getting default service state
Default service state is: running

MBean operation: getPerNodeDesiredState

MBean

Rhino extension

Return desired state of a set of nodes
public ServiceDesiredState getPerNodeDesiredState(ServiceID serviceID, int[] nodeIDs)
  throws ManagementException;

MBean operation: getDefaultDesiredState

MBean

Rhino extension

Return the default desired state used by nodes that do not have a configured per-node state
public ServiceDesiredState getDefaultDesiredState()
  throws ManagementException;

Retrieving SLEE-defined state

To retrieve the operational state of a service in a form compatible with the JAIN SLEE specification, use the following rhino-console command or related MBean operation.

Console command: getservicestate

Command

getservicestate <service-id> [-nodes node1,node2,...]
  Description
    Get the state of a service (on the specified nodes)

Output

The rhino-console client displays the operational state of the service on the specified node(s), or on every event-router node in the primary component if none are specified.

Examples

To display the state of the service with the ServiceID name=SimpleService1,vendor=Open Cloud,version=1.0 on every event-router node:

$ ./rhino-console getservicestate name=SimpleService1,vendor=Open Cloud,version=1.0
Service is Inactive on node 101
Service is Active on node 102

To display the state of the service on only node 101:

$ ./rhino-console getservicestate name=SimpleService1,vendor=Open Cloud,version=1.0 -nodes 101
Service is Inactive on node 101

MBean operation: getState

MBean

SLEE-defined

Return state of service on current node
public ServiceState getState(ServiceID id)
    throws NullPointerException,
    UnrecognizedServiceException,
    ManagementException;

Rhino’s implementation of the SLEE-defined getState operation returns the SLEE-defined state most closely representative of the actual state of a service on the node that the invoking client is connected to. When using the Rhino client library with a list of hosts, this will usually be the node on the first host in the list. When multiple nodes are running on the same host, the oldest node on the host will usually expose the management interface and thus be the target of this query.

Note

Since Rhino 3.0.0 the actual state of components on each node can update asynchronously. This differs from the symmetric activation state mode in earlier Rhino versions in that the value returned by getState() is not representative of the state on other cluster nodes. Users of this method who previously configured symmetric activation state mode should instead check the state of all nodes using getState(int[] nodeIDs), or one of the newer getPerNodeDesiredState(int[] nodeIDs) or getActualState(int[] nodeIDs) operations, depending on the purpose of the state query. A list of event-router node IDs can be obtained using RhinoHousekeepingMBean.getEventRouterNodes(). For example, to verify that a service is configured to be active on all nodes:

RhinoHousekeepingMBean rhinoHousekeeping = RhinoManagement.getRhinoHousekeepingMBean(client);
ServiceManagementMBean serviceManagement = RhinoManagement.getServiceManagementMBean(client);
ServiceDesiredState[] nodeStates = serviceManagement.getPerNodeDesiredState(serviceID, rhinoHousekeeping.getEventRouterNodes());
boolean active = Arrays.stream(nodeStates).allMatch(s -> s == ServiceDesiredState.ACTIVE);

Rhino extension

Return state of service on specified node(s)
public ServiceState[] getState(ServiceID id, int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
    UnrecognizedServiceException, ManagementException;

Rhino provides an extension that adds an argument which lets you control the nodes on which to return the state of the service (by specifying node IDs).

Listing Services

All Available Services

To list all available services installed in the SLEE, use the following rhino-console command or related MBean operation.

Console command: listservices

Command

listservices
  Description
    List the current installed services

Example

$ ./rhino-console listservices
ServiceID[name=SIP AC Location Service,vendor=OpenCloud,version=1.7]
ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8]
ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8]

MBean operation: getServices

MBean

SLEE-defined

public ServiceID[] getServices()
  throws ManagementException;

This operation returns an array of service component identifiers, identifying the services installed in the SLEE.

Tip See also Services by State.

Services by State

To list the services in a particular operational state, use the following rhino-console command or related MBean operation.

Console command: listservicesbystate

Command

listservicesbystate <state> [-node node]
  Description
    List the services that are in the specified state (on the specified node)

Output

The operational state of a service is node-specific. If the -node argument is not provided, this command returns the services in the given operational state on the node that rhino-console is connected to for management. (Otherwise, the command returns the services in the given operational state on the specified node.)

Example

To list the services in the ACTIVE state on node 102:

$ ./rhino-console listservicesbystate Active -node 102
Services in Active state on node 102:
  ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8]
  ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8]

MBean operation: getServices

MBean

SLEE-defined

Get services on all nodes
public ServiceID[] getServices(ServiceState state)
  throws NullPointerException, ManagementException;

Rhino’s implementation of the SLEE-defined getServices operation returns an array identifying all the services in the requested state on the node where you invoke the operation.

Rhino extension

Get services on specific nodes
public ServiceID[] getServices(ServiceState state, int nodeID)
  throws NullPointerException, InvalidArgumentException,
    ManagementException;

Rhino provides an extension that adds an argument that lets you control the node on which to list services in a particular state (by specifying a node ID).

Activating and Deactivating Services

Activating Services

To activate one or more services, use the following rhino-console command or related MBean operations.

Note If executed without a list of nodes, all per-node desired state for the service is removed and the default desired state of the service is set to active (if it is not already).

Console command: activateservice

Command

activateservice <service-id>* [-nodes node1,node2,...] [-ifneeded]
  Description
    Activate a service (on the specified nodes)

Example

To activate the Call Barring and Call Forwarding services on nodes 101 and 102:

$ ./rhino-console activateservice \
  "name=Call Barring Service,vendor=OpenCloud,version=0.2" \
  "name=Call Forwarding Service,vendor=OpenCloud,version=0.2" \
  -nodes 101,102
Activating services [ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2],
  ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.2]] on node(s) [101,102]
Services transitioned to the Active state on node 101
Services transitioned to the Active state on node 102

MBean operation: setPerNodeDesiredState

MBean

Rhino extension

Activate or deactivate on specific nodes
public void setPerNodeDesiredState(ServiceID id, int[] nodeIDs, ServiceDesiredState desiredState)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedServiceException, ManagementException;

Rhino provides an extension to set the desired state for a service on a set of nodes.
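For example, to request that a service become active on nodes 101 and 102 while leaving other nodes unchanged. The following is a minimal sketch, assuming serviceManagement is a proxy for Rhino’s extended ServiceManagementMBean and serviceID identifies an installed service:

// Set the per-node desired state for nodes 101 and 102.
// Each node transitions asynchronously once it observes its new desired state.
serviceManagement.setPerNodeDesiredState(serviceID, new int[] {101, 102}, ServiceDesiredState.ACTIVE);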

MBean operation: setDefaultDesiredState

MBean

Rhino extension

Activate or deactivate on nodes that do not have per-node state configured for the specified service
public void setDefaultDesiredState(ServiceID id, ServiceDesiredState desiredState)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedServiceException, ManagementException;

Rhino provides an extension to set the desired state for a service on nodes that do not have a per-node desired state configured.

MBean operation: removePerNodeDesiredState

MBean

Rhino extension

Activate or deactivate on nodes that have per-node service state configured that is different from the default state
public void removePerNodeDesiredState(ServiceID id, int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedServiceException, ManagementException;

Rhino provides an extension to clear the desired state for a service on a set of nodes. Nodes that do not have a per-node desired state configured use the default desired state.

MBean operation: activate

MBean

SLEE-defined

Activate on all nodes
public void activate(ServiceID id)
    throws NullPointerException, UnrecognizedServiceException,
        InvalidStateException, InvalidLinkNameBindingStateException,
        ManagementException;

public void activate(ServiceID[] ids)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedServiceException, InvalidStateException,
        InvalidLinkNameBindingStateException, ManagementException;

Rhino’s implementation of the SLEE-defined activate operation will attempt to activate particular services on all current event-router nodes in the primary component. For this to work, the specified services must be in the INACTIVE state on at least one node.

Rhino extension

Activate on specific nodes
public void activate(ServiceID id, int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedServiceException, InvalidStateException,
        ManagementException;

public void activate(ServiceID[] ids, int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedServiceException, InvalidStateException,
        ManagementException;

Rhino provides an extension that adds an argument to let you control the nodes on which to activate the specified services (by specifying node IDs). For this to work, the specified services must be in the INACTIVE state on the specified nodes.

Warning A service may require resource adaptor entity link names to be bound to appropriate resource adaptor entities before it can be activated. (See Getting Link Bindings Required by a Service and Managing Resource Adaptor Entity Link Bindings.)

Deactivating Services

To deactivate one or more services on one or more nodes, use the following rhino-console command or related MBean operations.

Note If executed without a list of nodes, all per-node desired state for the service is removed and the default desired state of the service is set to inactive (if it is not already).

Console command: deactivateservice

Command

deactivateservice <service-id>* [-nodes node1,node2,...] [-ifneeded]
  Description
    Deactivate a service (on the specified nodes)

Example

To deactivate the Call Barring and Call Forwarding services on nodes 101 and 102:

$ ./rhino-console deactivateservice \
    "name=Call Barring Service,vendor=OpenCloud,version=0.2" \
    "name=Call Forwarding Service,vendor=OpenCloud,version=0.2" \
    -nodes 101,102
Deactivating services [ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2],
  ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.2]] on node(s) [101,102]
Services transitioned to the Stopping state on node 101
Services transitioned to the Stopping state on node 102

MBean operation: setPerNodeDesiredState

MBean

Rhino extension

Activate or deactivate on specific nodes
public void setPerNodeDesiredState(ServiceID id, int[] nodeIDs, ServiceDesiredState desiredState)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedServiceException, ManagementException;

Rhino provides an extension to set the desired state for a service on a set of nodes.

MBean operation: setDefaultDesiredState

MBean

Rhino extension

Activate or deactivate on nodes that do not have per-node state configured for the specified service
public void setDefaultDesiredState(ServiceID id, ServiceDesiredState desiredState)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedServiceException, ManagementException;

Rhino provides an extension to set the desired state for a service on nodes that do not have a per-node desired state configured.

MBean operation: removePerNodeDesiredState

MBean

Rhino extension

Activate or deactivate on nodes that have per-node state configured that is different from the default state
public void removePerNodeDesiredState(ServiceID id, int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedServiceException, ManagementException;

Rhino provides an extension to clear the desired state for a service on a set of nodes. Nodes that do not have a per-node desired state configured use the default desired state.

MBean operation: deactivate

MBean

SLEE-defined

Deactivate on all nodes
public void deactivate(ServiceID id)
    throws NullPointerException, UnrecognizedServiceException,
          InvalidStateException, ManagementException;

public void deactivate(ServiceID[] ids)
    throws NullPointerException, InvalidArgumentException,
          UnrecognizedServiceException, InvalidStateException,
          ManagementException;

Rhino’s implementation of the SLEE-defined deactivate operation attempts to deactivate particular services on all current event-router nodes in the primary component. For this to work, the specified services must be in the ACTIVE state on at least one node.

Rhino extension

Deactivate on specific nodes
public void deactivate(ServiceID id, int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedServiceException, InvalidStateException,
        ManagementException;

public void deactivate(ServiceID[] ids, int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedServiceException, InvalidStateException,
        ManagementException;

Rhino provides an extension that adds an argument that lets you control the nodes on which to deactivate the specified services (by specifying node IDs). For this to work, the specified services must be in the ACTIVE state on the specified nodes.

Console command: waittilserviceisinactive

Command

waittilserviceisinactive <service-id> [-timeout timeout] [-nodes node1,node2,...]
    Wait for a service to finish deactivating (on the specified nodes) (timing out after N seconds)

Example

To wait for the Call Barring and Call Forwarding services to finish deactivating on nodes 101 and 102:

$ ./rhino-console waittilserviceisinactive \
    "name=Call Barring Service,vendor=OpenCloud,version=0.2" \
    "name=Call Forwarding Service,vendor=OpenCloud,version=0.2" \
    -nodes 101,102
Service ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2] is in the Inactive state on node(s) [101,102]
Service ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.2] is in the Inactive state on node(s) [101,102]

Upgrading (Activating & Deactivating) Services

To activate some services and deactivate others, use the following rhino-console command or related MBean operation.

Tip
Activating and deactivating in one operation

The SLEE specification defines the ability to deactivate some services and activate other services in a single operation. As one set of services deactivates, the existing activities being processed by those services continue to completion, while new activities (started after the operation is invoked) are processed by the activated services. The intended use of this is to upgrade a service or services with new versions (however the operation does not have to be used strictly for this purpose).

Console command: deactivateandactivateservice

Command

deactivateandactivateservice Deactivate <service-id>* Activate <service-id>*
[-nodes node1,node2,...]
  Description
    Deactivate some services and Activate some other services (on the specified
    nodes)

Example

To deactivate version 0.2 of the Call Barring and Call Forwarding services and activate version 0.3 of the same services on nodes 101 and 102:

$ ./rhino-console deactivateandactivateservice \
    Deactivate "name=Call Barring Service,vendor=OpenCloud,version=0.2" \
               "name=Call Forwarding Service,vendor=OpenCloud,version=0.2" \
    Activate   "name=Call Barring Service,vendor=OpenCloud,version=0.3" \
               "name=Call Forwarding Service,vendor=OpenCloud,version=0.3" \
    -nodes 101,102
On node(s) [101,102]:
    Deactivating service(s) [ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2],
      ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.2]]
    Activating service(s) [ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.3],
      ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.3]]
Deactivating service(s) transitioned to the Stopping state on node 101
Activating service(s) transitioned to the Active state on node 101
Deactivating service(s) transitioned to the Stopping state on node 102
Activating service(s) transitioned to the Active state on node 102

MBean operation: deactivateAndActivate

MBean

SLEE-defined

Deactivate and activate on all nodes
public void deactivateAndActivate(ServiceID deactivateID, ServiceID activateID)
    throws NullPointerException, InvalidArgumentException,
           UnrecognizedServiceException, InvalidStateException,
           InvalidLinkNameBindingStateException, ManagementException;

public void deactivateAndActivate(ServiceID[] deactivateIDs, ServiceID[] activateIDs)
    throws NullPointerException, InvalidArgumentException,
           UnrecognizedServiceException, InvalidStateException,
           InvalidLinkNameBindingStateException, ManagementException;

Rhino’s implementation of the SLEE-defined deactivateAndActivate operation attempts to deactivate specified services and activate others on all current event-router nodes in the primary component. For this to work, the services to deactivate must be in the ACTIVE state, and the services to activate must be in the INACTIVE state, on those nodes.

Rhino extension

Deactivate and activate on specific nodes
public void deactivateAndActivate(ServiceID deactivateID, ServiceID activateID, int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
           UnrecognizedServiceException, InvalidStateException,
           ManagementException;

public void deactivateAndActivate(ServiceID[] deactivateIDs, ServiceID[] activateIDs, int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
           UnrecognizedServiceException, InvalidStateException,
           ManagementException;

Rhino provides an extension that adds an argument that lets you control the nodes on which to activate and deactivate services (by specifying node IDs). For this to work, the services to deactivate must be in the ACTIVE state, and the services to activate must be in the INACTIVE state, on the specified nodes.
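For example, the console upgrade shown above could be performed programmatically. The following is a minimal sketch, assuming serviceManagement is a ServiceManagementMBean proxy; the service names mirror the console example:

// Upgrade from version 0.2 to 0.3 on nodes 101 and 102 in a single operation:
// existing activities complete on the old versions while new activities go to the new ones.
ServiceID[] oldServices = {
    new ServiceID("Call Barring Service", "OpenCloud", "0.2"),
    new ServiceID("Call Forwarding Service", "OpenCloud", "0.2")
};
ServiceID[] newServices = {
    new ServiceID("Call Barring Service", "OpenCloud", "0.3"),
    new ServiceID("Call Forwarding Service", "OpenCloud", "0.3")
};
serviceManagement.deactivateAndActivate(oldServices, newServices, new int[] {101, 102});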

Getting Link Bindings Required by a Service

To find the resource adaptor entity link name bindings needed for a service, and to list the service’s SBBs, use the following rhino-console commands or related MBean operations.

Command

listserviceralinks service-id
  Description
    List resource adaptor entity links required by a service

Example

To list the resource adaptor entity links that the JCC VPN service needs:

$ ./rhino-console listserviceralinks "name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0"
In service ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0]:
    SBB SbbID[name=AnytimeInterrogation sbb,vendor=Open Cloud,version=1.0] requires entity link bindings: slee/resources/map
    SBB SbbID[name=JCC 1.1 VPN sbb,vendor=Open Cloud,version=1.0] requires entity link bindings: slee/resources/cdr

Command

listsbbs [service-id]
  Description
    List the current installed SBBs.  If a service identifier is specified only the
    SBBs in the given service are listed

Example

To list the SBBs in the JCC VPN service:

$ ./rhino-console listsbbs "name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0"
SbbID[name=AnytimeInterrogation sbb,vendor=Open Cloud,version=1.0]
SbbID[name=JCC 1.1 VPN sbb,vendor=Open Cloud,version=1.0]
SbbID[name=Proxy route sbb,vendor=Open Cloud,version=1.0]

MBean

SLEE-defined

Get all services in the SLEE
public ServiceID[] getServices()
    throws ManagementException;

getServices returns an array containing the component identifiers of all services installed in the SLEE.


Get all SBBs in a service
public SbbID[] getSbbs(ServiceID service)
    throws NullPointerException, UnrecognizedServiceException,
        ManagementException;

getSbbs returns an array containing the component identifiers of all SBBs included in the given service.


Get the component descriptor for a component
public ComponentDescriptor[] getDescriptors(ComponentID[] ids)
    throws NullPointerException, ManagementException;

getDescriptors returns the component descriptor for each given component.

Tip
Getting entity link information for an SBB

To find the entity link names for an individual SBB, you can:

  • cast a ComponentDescriptor object for the SBB to an SbbDescriptor

  • retrieve an array of entity link names required by the SBB (from the SbbDescriptor), using the getResourceAdaptorEntityLinks operation. The array will be zero-length if the SBB does not require any entity link bindings.
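Putting these two steps together, a minimal sketch, assuming deploymentMBean is a DeploymentMBean proxy and sbbID is a known SbbID:

// Fetch the component descriptor for the SBB and cast to the SBB-specific subtype
ComponentDescriptor[] descriptors = deploymentMBean.getDescriptors(new ComponentID[] {sbbID});
SbbDescriptor sbbDescriptor = (SbbDescriptor) descriptors[0];

// List the resource adaptor entity link names the SBB requires (may be zero-length)
for (String linkName : sbbDescriptor.getResourceAdaptorEntityLinks()) {
    System.out.println(sbbID + " requires entity link binding: " + linkName);
}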

Configuring service metrics recording status

To check and configure the status for recording service metrics, use the following rhino-console commands or related MBean operations.

Details of the recorded metrics are listed in Metrics.Services.cmp and Metrics.Services.lifecycle.

Note Metrics recording is disabled by default for performance reasons.

Console commands

getservicemetricsrecordingenabled

Command

getservicemetricsrecordingenabled <service-id>
  Description
    Determine if metrics recording for a service has been enabled

Example

To check whether metrics recording is enabled for a service:

$ ./rhino-console getservicemetricsrecordingenabled name=service1,vendor=OpenCloud,version=1.0
Metrics recording for ServiceID[name=service1,vendor=OpenCloud,version=1.0] is currently disabled

setservicemetricsrecordingenabled

Command

setservicemetricsrecordingenabled <service-id> <true|false>
  Description
    Enable or disable the recording of metrics for a service

Example

To enable metrics recording:

$ ./rhino-console setservicemetricsrecordingenabled name=service1,vendor=OpenCloud,version=1.0 true
Metrics recording for ServiceID[name=service1,vendor=OpenCloud,version=1.0] has been enabled

MBean operations: getServiceMetricsRecordingEnabled and setServiceMetricsRecordingEnabled

MBean

Rhino extension

Determine if the recording of metrics for a service is currently enabled or disabled.
public boolean getServiceMetricsRecordingEnabled(ServiceID service)
    throws NullPointerException, UnrecognizedServiceException, ManagementException;
Enable or disable the recording of metrics for a service.
public void setServiceMetricsRecordingEnabled(ServiceID service, boolean enabled)
    throws NullPointerException, UnrecognizedServiceException, ManagementException;

Configuring service replication

The default replication behaviour of a service is defined by the service in its deployment descriptor, but may be overridden by an administrator after the service has been installed into the SLEE.

Default replication behaviour

Default replication behaviour is specified by a service in its oc-service.xml extension service deployment descriptor. The service can specify the conditions under which the application state of the service will be replicated by using the following replication selectors:

  • Savanna — Service replication will occur if the namespace the service is installed in replicates application state over the traditional savanna framework.

  • KeyValueStore — Service replication will occur if the namespace the service is installed in utilises a key/value store to persist application state.

  • Always — The service will always be replicated regardless of any underlying replication mechanism.

Zero or more replication selectors can be specified by the service. If any condition for replication is matched at deployment time then the service application state will be replicated. If not, no replication will take place for that service.

Configuring replication behaviour

The default replication selectors specified by a service can be changed by an administrator after the service is installed, but before it is deployed, using the following rhino-console commands or related MBean operations.

Console commands

getservicereplicationselectors

Command

getservicereplicationselectors <service-id>
  Description
    Get the replication selectors for a service

Example

To check the current replication selectors for a service:

$ ./rhino-console getservicereplicationselectors name=service1,vendor=OpenCloud,version=1.0
Service ServiceID[name=service1,vendor=OpenCloud,version=1.0] current replication selectors are: [KEYVALUESTORE]

setservicereplicationselectors

Command

setservicereplicationselectors <service-id> -none|selector*
  Description
    Set the replication selectors for a service, valid selectors are: [ALWAYS,
    SAVANNA, KEYVALUESTORE]

Example

To change the replication selectors for a service:

$ ./rhino-console setservicereplicationselectors name=service1,vendor=OpenCloud,version=1.0 SAVANNA KEYVALUESTORE
Service ServiceID[name=service1,vendor=OpenCloud,version=1.0] replication selectors set to [SAVANNA, KEYVALUESTORE]

MBean operations: getReplicationSelectors and setReplicationSelectors

MBean

Rhino extension

Get the current replication selectors for a service.
public ReplicationSelector[] getReplicationSelectors(ServiceID id)
    throws NullPointerException, UnrecognizedServiceException, ManagementException;
Set the replication selectors for a service.
public void setReplicationSelectors(ServiceID id, ReplicationSelector[] selectors)
    throws NullPointerException, UnrecognizedServiceException, InvalidStateException, ManagementException;
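For example, the following minimal sketch reads and then replaces a service’s replication selectors, assuming serviceManagement is a ServiceManagementMBean proxy and that ReplicationSelector is an enumeration from the Rhino client library with the constants listed above:

// Read the current selectors, then require both SAVANNA and KEYVALUESTORE replication
ReplicationSelector[] current = serviceManagement.getReplicationSelectors(serviceID);
System.out.println("current selectors: " + java.util.Arrays.toString(current));

serviceManagement.setReplicationSelectors(serviceID,
    new ReplicationSelector[] {ReplicationSelector.SAVANNA, ReplicationSelector.KEYVALUESTORE});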

Resource Adaptor Entities

As well as an overview of resource adaptor entities, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:

Each entry below lists the procedure, its rhino-console command, and the related MBean operation.

Finding RA configuration properties
  listraconfigproperties
  Resource Management → getConfigurationProperties

Creating a resource adaptor entity
  createraentity
  Resource Management → createResourceAdaptorEntity

Removing a resource adaptor entity
  removeraentity
  Resource Management → removeResourceAdaptorEntity

Listing configuration properties for a resource adaptor entity
  listraentityconfigproperties
  Resource Management → getConfigurationProperties

Updating configuration properties for a resource adaptor entity
  updateraentityconfigproperties
  Resource Management → updateConfigurationProperties

Activating a resource adaptor entity
  activateraentity
  Resource Management → activateResourceAdaptorEntity

Deactivating a resource adaptor entity
  deactivateraentity
  Resource Management → deactivateResourceAdaptorEntity

Reassigning a resource adaptor entity’s activities
  reassignactivities
  Resource Management → reassignActivities

Retrieving a resource adaptor entity’s state
  getraentityactualstate, getraentitydesiredstate
  Resource Management → getState

Listing resource adaptor entities by state
  listraentitiesbystate
  Resource Management → getResourceAdaptorEntities

Binding a resource adaptor entity link name
  bindralinkname
  Resource Management → bindLinkName

Unbinding a resource adaptor entity link name
  unbindralinkname
  Resource Management → unbindLinkName

Listing resource adaptor entity link names
  listralinknames
  Resource Management → getLinkNames

About Resource Adaptor Entities

Resource adaptors (RAs) are SLEE components which let particular network protocols or APIs be used in the SLEE.

They typically include a set of configurable properties (such as address information of network endpoints, URLs to external systems, or internal timer-timeout values). These properties may include default values. A resource adaptor entity is a particular configured instance of a resource adaptor, with defined values for all of that RA’s configuration properties.

The resource adaptor entity lifecycle

The SLEE specification presents the operational lifecycle of a resource adaptor entity — illustrated, defined, and summarised below.

RA lifecycle

Resource adaptor entity lifecycle states

The SLEE lifecycle states are:

State Definition
 INACTIVE

The resource adaptor entity has been configured and initialised. It is ready to be activated, but may not yet create activities or fire events to the SLEE. Typically, it is not connected to network resources.

 ACTIVE

The resource adaptor entity is connected to the resources it needs to function (assuming they are available), and may create activities and fire events to the SLEE.

 STOPPING

The resource adaptor entity may not create new activities in the SLEE, but may fire events to the SLEE on already existing activities. A resource adaptor entity transitions out of the STOPPING state, returning to the INACTIVE state, when all activities it owns have either ended or been assigned to another node for continued processing.

Note
Creating activities in the STOPPING state

By default, Rhino 3.1 prevents a resource adaptor from creating an activity in the STOPPING state.

This behaviour is controlled by the rhino.skip_lifecycle_checks system property, which defaults to false.

When set to true, Rhino does not enforce this restriction. Resource adaptors should check the state before creating an activity, to avoid a situation where a resource adaptor entity never deactivates because new activities are being created.

The default value in earlier versions of Rhino was true.

Independent lifecycle state machines

As explained in About SLEE Operational States, each event-router node in a Rhino cluster maintains its own lifecycle state machine, independent of other nodes in the cluster. This is also true for each resource adaptor entity: one resource adaptor entity might be INACTIVE on one node in a cluster, ACTIVE on another, and STOPPING on a third. The operational state of a resource adaptor entity on each cluster node also persists to the disk-based database.

A resource adaptor entity will enter the INACTIVE state, after node bootup and initialisation completes, if the database’s persistent operational state information for that resource adaptor entity is missing, or is set to INACTIVE or STOPPING.

And, like node operational states, you can change the operational state of a resource adaptor entity at any time, as long as at least one node in the cluster is available to perform the management operation (regardless of whether or not the node whose operational state is being changed is a current cluster member). For example, you might activate a resource adaptor entity on node 103 before node 103 is booted — then, when node 103 boots, and after it completes initialisation, that resource adaptor entity will transition to the ACTIVE state.

Creating and Removing Resource Adaptor Entities

Finding RA Configuration Properties

To determine resource adaptor configuration properties (which you need to know when Creating a Resource Adaptor Entity) use the following rhino-console command or related MBean operation.

Console command: listraconfigproperties

Command

listraconfigproperties <resource-adaptor-id>
  Description
    List the configuration properties (and any default values) for a resource
    adaptor

Example

To list the configuration properties of the Metaswitch SIP Resource Adaptor:

$ ./rhino-console listraconfigproperties name=OCSIP,vendor=OpenCloud,version=2.1
Configuration properties for resource adaptor name=OCSIP,vendor=OpenCloud,version=2.1:
  Automatic100TryingSupport (java.lang.Boolean): true
  CRLLoadFailureRetryTimeout (java.lang.Integer): 900
  CRLNoCRLLoadFailureRetryTimeout (java.lang.Integer): 60
  CRLRefreshTimeout (java.lang.Integer): 86400
  CRLURL (java.lang.String):
  ...

MBean operation: getConfigurationProperties

MBean

SLEE-defined

public ConfigProperties getConfigurationProperties(ResourceAdaptorID id)
    throws NullPointerException, UnrecognizedResourceAdaptorException,
        ManagementException

Output

This operation returns a ConfigProperties object. A ConfigProperties object contains a set of ConfigProperties.Property objects, each of which identifies one configuration property defined by the RA. If the RA has defined a default value for the configuration property, the ConfigProperties.Property object will include it.
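For example, the console output shown above can be reproduced by iterating over the returned properties. A minimal sketch, assuming resourceManagement is a ResourceManagementMBean proxy and resourceAdaptorID identifies an installed resource adaptor:

// Print each configuration property with its Java type and default value (if any)
ConfigProperties properties = resourceManagement.getConfigurationProperties(resourceAdaptorID);
for (ConfigProperties.Property property : properties.getProperties()) {
    System.out.println("  " + property.getName() + " (" + property.getType() + "): " + property.getValue());
}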

Creating a Resource Adaptor Entity

To create a resource adaptor entity use the following rhino-console command or related MBean operation.

Console command: createraentity

Command

createraentity <resource-adaptor-id> <entity-name>
[<config-params>|(<property-name> <property-value>)*]
  Description
    Create a resource adaptor entity with the given name.  Optionally configuration
    properties can be specified, either as a single comma-separated string of
    name=value pairs, or as a series of separate name and value argument pairs

Example

To create an instance of the Metaswitch SIP resource adaptor called sipra, with the following configuration property values replacing the defaults (if any) for IPAddress (192.168.0.100), Port (5160) and SecurePort (5161):

$ ./rhino-console createraentity name=OCSIP,vendor=OpenCloud,version=2.1 sipra \
    IPAddress=192.168.0.100,Port=5160,SecurePort=5161
Created resource adaptor entity sipra

Notes

Entering configuration properties

When creating a resource adaptor entity, determine its configuration properties and then enter them in rhino-console as a comma-separated list of property-name=value pairs.


Warning
White space, commas, quotes

If a configuration-property value contains white space or a comma, you must quote the value. For example:

$ ./rhino-console createraentity name=MyRA,vendor=Me,version=1.0 myra Value="The quick brown fox",Colour=brown

If the value itself requires quotes, you must escape them with a backslash (\") to avoid them being removed by the parser. For example:

$ ./rhino-console createraentity name=MyRA,vendor=Me,version=1.0 myra Value="\"The quick brown fox\"",Colour=brown

MBean operation: createResourceAdaptorEntity

MBean

SLEE-defined

public void createResourceAdaptorEntity(ResourceAdaptorID id, String entityName, ConfigProperties properties)
    throws NullPointerException, InvalidArgumentException,
          UnrecognizedResourceAdaptorException,
          ResourceAdaptorEntityAlreadyExistsException,
          InvalidConfigurationException, ManagementException;

Arguments

This operation requires that you specify the resource adaptor entity’s:

  • ResourceAdaptorID — identifier of the resource adaptor from which to create the resource adaptor entity

  • entityName — an assigned name

  • ConfigProperties — configuration properties.


Note You only need to specify configuration properties that have no defined default, or have a default other than what the resource adaptor entity requires. (Rhino uses the default value if it is not specified within the properties argument.)

Tip
Rhino-defined configuration property

When creating a resource adaptor entity, you may specify the Rhino-defined configuration property: slee-vendor:com.opencloud.rhino_replicate_activities. This property describes the resource adaptor entity’s activity-replication behaviour (assuming it has been specifically designed to support activity-state replication in Rhino). Possible values are:

  • none — the resource adaptor entity will not generate replicated activities

  • mixed — the resource adaptor entity will generate a mix of replicated and non-replicated activities

  • all — all activities generated by the resource adaptor entity will be replicated.

The default value is none. (You can specify an alternative default by defining a configuration property in the deployment descriptor with this name but with a different default value.)
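For example, the sipra entity from the console example could be created programmatically. A minimal sketch, assuming resourceManagement is a ResourceManagementMBean proxy and resourceAdaptorID identifies the installed SIP resource adaptor:

// Override three property defaults; each value must match the property's declared Java type
ConfigProperties properties = new ConfigProperties();
properties.addProperty(new ConfigProperties.Property("IPAddress", "java.lang.String", "192.168.0.100"));
properties.addProperty(new ConfigProperties.Property("Port", "java.lang.Integer", 5160));
properties.addProperty(new ConfigProperties.Property("SecurePort", "java.lang.Integer", 5161));

resourceManagement.createResourceAdaptorEntity(resourceAdaptorID, "sipra", properties);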

Removing a Resource Adaptor Entity

To remove a resource adaptor entity use the following rhino-console command or related MBean operation.

Note You can only remove a resource adaptor entity from the SLEE when it is in the INACTIVE state on all event-router nodes currently in the primary component.

Console command: removeraentity

Command

removeraentity <entity-name>
  Description
    Remove a resource adaptor entity

Example

To remove the resource adaptor entity named sipra:

$ ./rhino-console removeraentity sipra
Removed resource adaptor entity sipra

MBean operation: removeResourceAdaptorEntity

MBean

SLEE-defined

public void removeResourceAdaptorEntity(String entityName)
    throws NullPointerException,
          UnrecognizedResourceAdaptorEntityException,
          InvalidStateException, DependencyException, ManagementException;

Configuring Resource Adaptor Entities

Listing configuration properties for a Resource Adaptor Entity

To list the configuration properties for a resource adaptor entity use the following rhino-console command or related MBean operation.

Console command: listraentityconfigproperties

Command

listraentityconfigproperties <entity-name>
  Description
    List the configuration property values for a resource adaptor entity

Example

To list the configuration properties of the resource adaptor entity called sipra:

$ ./rhino-console listraentityconfigproperties sipra
Configuration properties for resource adaptor entity sipra:
 Automatic100TryingSupport (java.lang.Boolean): true
 AutomaticOptionsResponses (java.lang.Boolean): true
 CRLLoadFailureRetryTimeout (java.lang.Integer): 900
 CRLNoCRLLoadFailureRetryTimeout (java.lang.Integer): 60
 CRLRefreshTimeout (java.lang.Integer): 86400
 CRLURL (java.lang.String):
 ClientAuthentication (java.lang.String): NEED
 EnableDialogActivityTests (java.lang.Boolean): false
 EnabledCipherSuites (java.lang.String):
 ExtensionMethods (java.lang.String):
 IPAddress (java.lang.String): AUTO
 Keystore (java.lang.String): sip-ra-ssl.keystore
 KeystorePassword (java.lang.String):
 KeystoreType (java.lang.String): jks
 MaxContentLength (java.lang.Integer): 131072
 OffsetPorts (java.lang.Boolean): false
 Port (java.lang.Integer): 5060
 PortOffset (java.lang.Integer): 101
 ReplicatedDialogSupport (java.lang.Boolean): false
 RetryAfterInterval (java.lang.Integer): 5
 SecurePort (java.lang.Integer): 5061
 TCPIOThreads (java.lang.Integer): 1
 Transports (java.lang.String): udp,tcp
 Truststore (java.lang.String): sip-ra-ssl.truststore
 TruststorePassword (java.lang.String):
 TruststoreType (java.lang.String): jks
 UseVirtualAddressInURIs (java.lang.Boolean): true
 ViaSentByAddress (java.lang.String):
 VirtualAddresses (java.lang.String):
 WorkerPoolSize (java.lang.Integer): 4
 WorkerQueueSize (java.lang.Integer): 50
 slee-vendor:com.opencloud.rhino_max_activities (java.lang.Integer): 0
 slee-vendor:com.opencloud.rhino_replicate_activities (java.lang.String): mixed

MBean operation: getConfigurationProperties

MBean

SLEE-defined

public ConfigProperties getConfigurationProperties(String entityName)
    throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException;

Output

This operation returns a ConfigProperties object. A ConfigProperties object contains a set of ConfigProperties.Property objects, each of which identifies one configuration property defined by the RA. If the RA has defined a default value for the configuration property, the ConfigProperties.Property object will include it.

Updating configuration properties for a Resource Adaptor Entity

To update configuration properties for a resource adaptor entity use the following rhino-console command or related MBean operation.

Note
When is it appropriate to update configuration properties?

A resource adaptor declares whether it supports reconfiguration while its resource adaptor entities are active using the supports-active-reconfiguration attribute of the <resource-adaptor-class> deployment descriptor element.

If the value of the supports-active-reconfiguration attribute is False, the updateraentityconfigproperties command and related MBean operation may only be invoked to reconfigure a resource adaptor entity when it is in the Inactive state, or when the SLEE is in the Stopped state.

If the value of the supports-active-reconfiguration attribute is True, then a resource adaptor entity may be reconfigured when it, and the SLEE, are in any state, i.e. reconfiguration is possible while the resource adaptor entity is creating activities and firing events in the SLEE.

Console command: updateraentityconfigproperties

Command

updateraentityconfigproperties <entity-name> [<config-params>|(<property-name>
<property-value>)*]
  Description
    Update configuration properties for a resource adaptor entity. Properties can be
    specified either as a single comma-separated string of name=value pairs or as a
    series of separate name and value argument pairs

Example

To update the Port and SecurePort properties of the resource adaptor entity called sipra:

$ ./rhino-console updateraentityconfigproperties sipra Port 5061 SecurePort 5062
Updated configuration parameters for resource adaptor entity sipra

MBean operation: updateConfigurationProperties

MBean

SLEE-defined

public void updateConfigurationProperties(String entityName, ConfigProperties properties)
    throws NullPointerException, UnrecognizedResourceAdaptorEntityException,
           InvalidStateException, InvalidConfigurationException,
           ManagementException;

Input

This operation requires a ConfigProperties object. A ConfigProperties object contains a set of ConfigProperties.Property objects, each of which identifies one configuration property defined by the RA.
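For example, the Port and SecurePort update from the console example could be performed as follows. A minimal sketch, assuming resourceManagement is a ResourceManagementMBean proxy:

// New values for the Port and SecurePort properties of the sipra entity
ConfigProperties changes = new ConfigProperties();
changes.addProperty(new ConfigProperties.Property("Port", "java.lang.Integer", 5061));
changes.addProperty(new ConfigProperties.Property("SecurePort", "java.lang.Integer", 5062));

resourceManagement.updateConfigurationProperties("sipra", changes);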

Activating and Deactivating Resource Adaptor Entities

Activating a Resource Adaptor Entity

To activate a resource adaptor entity on one or more nodes use the following rhino-console command or related MBean operations.

Note If executed without a list of nodes, all per-node desired state for the resource adaptor entity is removed and the default desired state of the resource adaptor entity is set to active (if it is not already).

Console command: activateraentity

Command

activateraentity <entity-name> [-nodes node1,node2,...] [-ifneeded]
  Description
    Activate a resource adaptor entity (on the specified nodes)

Example

To activate the resource adaptor entity called sipra on nodes 101 and 102:

$ ./rhino-console activateraentity sipra -nodes 101,102
Activating resource adaptor entity sipra on node(s) [101,102]
Resource adaptor entity transitioned to the Active state on node 101
Resource adaptor entity transitioned to the Active state on node 102

MBean operation: setPerNodeDesiredState

MBean

Rhino extension

Activate or deactivate on specific nodes
public void setPerNodeDesiredState(String entityName, int[] nodeIDs, ResourceAdaptorEntityDesiredState desiredState)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedResourceAdaptorEntityException, ManagementException;

Rhino provides an extension to set the desired state for a resource adaptor entity on a set of nodes.

MBean operation: setDefaultDesiredState

MBean

Rhino extension

Activate or deactivate on nodes that do not have per-node state configured for the specified resource adaptor entity
public void setDefaultDesiredState(String entityName, ResourceAdaptorEntityDesiredState desiredState)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedResourceAdaptorEntityException, ManagementException;

Rhino provides an extension to set the desired state for a resource adaptor entity on nodes that do not have a per-node desired state configured.

MBean operation: removePerNodeDesiredState

MBean

Rhino extension

Activate or deactivate on nodes that have per-node state configured that is different from the default state
public void removePerNodeDesiredState(String entityName, int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedResourceAdaptorEntityException, ManagementException;

Rhino provides an extension to clear the desired state for a resource adaptor entity on a set of nodes. Nodes that do not have a per-node desired state configured use the default desired state.
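Taken together, a sketch of the three desired-state operations. It assumes the usual ResourceManagementMBean proxy named resourceManagement, and that ResourceAdaptorEntityDesiredState defines an INACTIVE constant alongside the ACTIVE constant used elsewhere in this document:

// Pin sipra active on nodes 101 and 102, independent of the default
resourceManagement.setPerNodeDesiredState("sipra", new int[] { 101, 102 },
    ResourceAdaptorEntityDesiredState.ACTIVE);

// Make inactive the default for all nodes without per-node state
// (INACTIVE is assumed here)
resourceManagement.setDefaultDesiredState("sipra",
    ResourceAdaptorEntityDesiredState.INACTIVE);

// Remove the per-node state again, so nodes 101 and 102 revert to the default
resourceManagement.removePerNodeDesiredState("sipra", new int[] { 101, 102 });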

MBean operation: activateResourceAdaptorEntity

MBean

SLEE-defined

Activate on all nodes
public void activateResourceAdaptorEntity(String entityName)
    throws NullPointerException, UnrecognizedResourceAdaptorEntityException,
          InvalidStateException, ManagementException;

Rhino’s implementation of the SLEE-defined activateResourceAdaptorEntity operation attempts to activate a resource adaptor entity on all current event-router nodes in the primary component. For this to work, the resource adaptor entity must be in the INACTIVE state on at least one node.

Rhino extension

Activate on specific nodes
public void activateResourceAdaptorEntity(String entityName, int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
          UnrecognizedResourceAdaptorEntityException,
          InvalidStateException, ManagementException;

Rhino provides an extension that adds an argument that lets you control the nodes on which to activate the resource adaptor entity (by specifying node IDs). For this to work, the resource adaptor entity must be in the INACTIVE state on the specified nodes.

Deactivating a Resource Adaptor Entity

To deactivate a resource adaptor entity on one or more nodes use the following rhino-console command or related MBean operation.

Note If executed without a list of nodes, all per-node desired state for the resource adaptor entity is removed and the default desired state of the resource adaptor entity is set to inactive (if it is not already).
Tip See also Reassigning a Resource Adaptor Entity’s Activities to Other Nodes, particularly the requirements for reassigning activities.

Console command: deactivateraentity

Command

deactivateraentity <entity-name> [-nodes node1,node2,... [-reassignto
node3,node4,...]] [-ifneeded]
  Description
    Deactivate a resource adaptor entity (on the specified nodes (reassigning
    replicated activities to the specified nodes))

Examples

To deactivate the resource adaptor entity named sipra on nodes 101 and 102:

$ ./rhino-console deactivateraentity sipra -nodes 101,102
Deactivating resource adaptor entity sipra on node(s) [101,102]
Resource adaptor entity transitioned to the Stopping state on node 101
Resource adaptor entity transitioned to the Stopping state on node 102

To deactivate the resource adaptor entity named sipra on node 101, and reassign replicated activities to node 102:

$ ./rhino-console deactivateraentity sipra -nodes 101 -reassignto 102
Deactivating resource adaptor entity sipra on node(s) [101]
Resource adaptor entity transitioned to the Stopping state on node 101
Replicated activities reassigned to node(s) [102]

To deactivate the resource adaptor entity named sipra on node 101, and distribute replicated activities equally among all other eligible nodes (those on which the resource adaptor entity is in the ACTIVE state and the SLEE is in the RUNNING state), specify an empty (zero-length) argument for the -reassignto option:

$ ./rhino-console deactivateraentity sipra -nodes 101 -reassignto ""
Deactivating resource adaptor entity sipra on node(s) [101]
Resource adaptor entity transitioned to the Stopping state on node 101
Replicated activities reassigned to node(s) [102,103]

MBean operation: setPerNodeDesiredState

MBean

Rhino extension

Activate or deactivate on specific nodes
public void setPerNodeDesiredState(String entityName, int[] nodeIDs, ResourceAdaptorEntityDesiredState desiredState)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedResourceAdaptorEntityException, ManagementException;

Rhino provides an extension to set the desired state for a resource adaptor entity on a set of nodes.

MBean operation: setDefaultDesiredState

MBean

Rhino extension

Activate or deactivate on nodes that do not have per-node state configured for the specified resource adaptor entity
public void setDefaultDesiredState(String entityName, ResourceAdaptorEntityDesiredState desiredState)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedResourceAdaptorEntityException, ManagementException;

Rhino provides an extension to set the desired state for a resource adaptor entity on nodes that do not have a per-node desired state configured.

MBean operation: removePerNodeDesiredState

MBean

Rhino extension

Activate or deactivate on nodes that have per-node state configured that is different from the default state
public void removePerNodeDesiredState(String entityName, int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedResourceAdaptorEntityException, ManagementException;

Rhino provides an extension to clear the desired state for a resource adaptor entity on a set of nodes. Nodes that do not have a per-node desired state configured use the default desired state.

MBean operation: deactivateResourceAdaptorEntity

MBean

SLEE-defined

Deactivate on all nodes
public void deactivateResourceAdaptorEntity(String entityName)
    throws NullPointerException, UnrecognizedResourceAdaptorEntityException,
          InvalidStateException, ManagementException;

Rhino’s implementation of the SLEE-defined deactivateResourceAdaptorEntity operation attempts to deactivate a resource adaptor entity on all current event-router nodes in the primary component. For this to work, the resource adaptor entity must be in the ACTIVE state on at least one node.

Rhino extensions

Deactivate on specific nodes
public void deactivateResourceAdaptorEntity(String entityName, int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedResourceAdaptorEntityException, InvalidStateException,
        ManagementException;

Rhino provides an extension that adds an argument that lets you control the nodes on which to deactivate the resource adaptor entity (by specifying node IDs). For this to work, the resource adaptor entity must be in the ACTIVE state on the specified nodes.


Reassign deactivating activities to other nodes
public void deactivateResourceAdaptorEntity(String entityName, int[] nodeIDs, int[] reassignActivitiesToNodeIDs)
    throws NullPointerException, InvalidArgumentException,
        UnrecognizedResourceAdaptorEntityException, InvalidStateException,
        ManagementException;

Rhino also provides an extension that adds a further argument, letting you reassign ownership of replicated activities (from a replicating resource adaptor entity), distributing them equally among other available event-router nodes. This reduces the set of activities on the nodes with the deactivating resource adaptor entity, so the resource adaptor entity can return to the INACTIVE state on those nodes more quickly. This only works for resource adaptor entities that replicate activity state (see the description of the "Rhino-defined configuration property" for the MBean on Creating a Resource Adaptor Entity).
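As a sketch (with the same assumed resourceManagement proxy as above), the two deactivation variants look like this; a zero-length reassignment array distributes the activities among all remaining eligible nodes, mirroring -reassignto "":

// Deactivate sipra on node 101, handing its replicated activities to node 102
resourceManagement.deactivateResourceAdaptorEntity("sipra",
    new int[] { 101 }, new int[] { 102 });

// Deactivate on node 101 and spread activities over all other eligible nodes
resourceManagement.deactivateResourceAdaptorEntity("sipra",
    new int[] { 101 }, new int[0]);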

Reassigning a Resource Adaptor Entity’s Activities to Other Nodes

To reassign activities from a resource adaptor entity to a different node, use the following rhino-console command or related MBean operation, noting the requirements.

Note
Why reassign replicating activities?

A resource adaptor entity in the STOPPING state cannot return to the INACTIVE state until all the activities that it owns have ended. You can let a deactivating resource adaptor entity return to the INACTIVE state more quickly by reassigning its replicated activities to other eligible nodes.

Console command: reassignactivities

Command

reassignactivities <entity-name> -from node1,node2,... -to node3,node4,...
  Description
    Reassign replicated activities of a resource adaptor entity from the specified
    nodes to other nodes

Examples

To reassign activities owned by the resource adaptor entity named sipra from node 101 to nodes 102 and 103:

$ ./rhino-console reassignactivities sipra -from 101 -to 102,103
Replicated activities for sipra reassigned to node(s) [102,103]
Tip
Reassigning to all available nodes

You can also specify an empty (zero-length) argument for the -to option. This reassigns replicated activities, distributing them equally among all other nodes that can adopt them (nodes on which the resource adaptor entity is in the ACTIVE state and the SLEE is in the RUNNING state).


To reassign activities owned by the resource adaptor entity named sipra from node 101 to all other eligible nodes:

$ ./rhino-console reassignactivities sipra -from 101 -to ""
Replicated activities for sipra reassigned to node(s) [102,103]

MBean operation: reassignActivities

MBean

Rhino extension

public void reassignActivities(String entityName, int[] fromNodeIDs, int[] toNodeIDs)
      throws NullPointerException, InvalidArgumentException,
            UnrecognizedResourceAdaptorEntityException,
            InvalidStateException, ManagementException;

This operation reassigns replicated activities owned by the named resource adaptor entity on the nodes specified by the fromNodeIDs argument to the nodes specified by the toNodeIDs argument, using Rhino’s standard failover algorithm. (If toNodeIDs is a zero-length array, the operation reassigns activities to any remaining eligible nodes.)
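For example (assuming the usual resourceManagement proxy), the two console examples above translate to:

// Move sipra's replicated activities from node 101 to nodes 102 and 103
resourceManagement.reassignActivities("sipra", new int[] { 101 }, new int[] { 102, 103 });

// Move them to whichever eligible nodes remain (equivalent to -to "")
resourceManagement.reassignActivities("sipra", new int[] { 101 }, new int[0]);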

Requirements for reassigning activities

You can only reassign replicated activities from a resource adaptor entity to other nodes if all of the following conditions are satisfied:

  • The node is a current member of the primary component.

  • The node is an event-router node (not a quorum node).

  • The operational state of the SLEE on the node is RUNNING or STOPPING.

  • The operational state of the resource adaptor entity on the node is ACTIVE or STOPPING.

Further, a node can only take ownership of replicated activities if it satisfies all the following conditions:

  • The node is a current member of the primary component.

  • The node is an event-router node (not a quorum node).

  • The operational state of the SLEE on the node is RUNNING.

  • The operational state of the resource adaptor entity on the node is ACTIVE.

Also, non-replicated activities cannot be reassigned to other nodes; a resource adaptor entity must itself end any non-replicated activities it created.

Tip You can choose to forcefully remove activities if a resource adaptor entity fails to end them in a timely manner.

Retrieving the State of Resource Adaptor Entities

Retrieving a Resource Adaptor Entity’s State

Retrieving actual state

To retrieve the actual operational state of a Resource Adaptor Entity, use the following rhino-console command or related MBean operation. For an explanation of the terms "actual state" and "desired state", see Concepts and Terminology.

Console command: getraentityactualstate

Command

getraentityactualstate <entity-name> <-all|-nodes node1,node2,...>
  Description
    Get the actual resource adaptor entity state for the specified nodes. If -all is
    specified, query the state of all current event router cluster members.

Output

The rhino-console client displays the actual operational state of the specified node(s), or every event-router node in the primary component if -all is specified.

Examples

To display the actual state of the Resource Adaptor Entity sipra on node 101 only:

$ ./rhino-console getraentityactualstate sipra -nodes 101
Getting actual Resource Adaptor Entity state for node(s) [101]
Node 101: Stopped

To display the actual state of the Resource Adaptor Entity sipra on every event-router node:

$ ./rhino-console getraentityactualstate sipra -all
Getting actual Resource Adaptor Entity state for node(s) [101,102]
Node 101: Stopped
Node 102: Running

MBean operation: getActualState

MBean

Rhino extension

Return actual state of a set of nodes
public ResourceAdaptorEntityActualState getActualState(String entityName, int[] nodeIDs)
  throws ManagementException;

Retrieving desired state

To retrieve the desired operational state of a Resource Adaptor Entity, use the following rhino-console command or related MBean operation.

Console command: getraentitydesiredstate

Command

getraentitydesiredstate <entity-name> <-default|-all|-nodes node1,node2,...>
  Description
    Get the default or per-node desired resource adaptor entity state. If -all is
    specified, query the state of all current event router nodes as well as all
    nodes with saved per-node state.

Output

The rhino-console client displays the desired state of the specified node(s), or every node with configured state and every event-router node in the primary component if -all is specified.

Examples

To display the desired state of the Resource Adaptor Entity sipra on node 101 only:

$ ./rhino-console getraentitydesiredstate sipra -nodes 101
Node 101: Stopped

To display the desired state of the Resource Adaptor Entity sipra on every event-router node and configured node:

$ ./rhino-console getraentitydesiredstate sipra -all
Node 101: Stopped
Node 102: Running (default)
Node 103: Running

To display the default desired state that unconfigured event-router nodes will inherit:

$ ./rhino-console getraentitydesiredstate sipra -default
Getting default Resource state
Default Resource state is: running

MBean operation: getPerNodeDesiredState

MBean

Rhino extension

Return the desired state of a set of nodes
public ResourceAdaptorEntityDesiredState getPerNodeDesiredState(String entityName, int[] nodeIDs)
  throws ManagementException;

MBean operation: getDefaultDesiredState

MBean

Rhino extension

Return the default desired state used by nodes that do not have a configured per-node state
public ResourceAdaptorEntityDesiredState getDefaultDesiredState(String entityName)
  throws ManagementException;

Retrieving SLEE-defined state

To retrieve the operational state of a Resource Adaptor Entity in a form compatible with the JAIN SLEE specification, use the following rhino-console command or related MBean operation.

Console command: getraentitystate

Command

getraentitystate <entity-name> [-nodes node1,node2,...]
  Description
    Get the state of a resource adaptor entity (on the specified nodes)

Output

The rhino-console client displays the operational state of the specified node(s), or every event-router node in the primary component if none are specified.

Examples

To display the state of the resource adaptor entity with the name sipra on every event-router node:

$ ./rhino-console getraentitystate sipra
Resource Adaptor Entity is Inactive on node 101
Resource Adaptor Entity is Active on node 102

To display the state of the Resource Adaptor Entity on only node 101:

$ ./rhino-console getraentitystate sipra -nodes 101
Resource Adaptor Entity is Inactive on node 101

MBean operation: getState

MBean

SLEE-defined

Return state of Resource Adaptor Entity on current node
public ResourceAdaptorEntityState getState(String entityName)
    throws NullPointerException,
    UnrecognizedResourceAdaptorEntityException,
    ManagementException;

Rhino’s implementation of the SLEE-defined getState operation returns the SLEE-defined state most closely representative of the actual state of a Resource Adaptor Entity on the node the invoking client is connected to. When using the Rhino client library with a list of hosts this will usually be the node on the first host in the list. When multiple nodes are running on the same host, the oldest node on the host will usually expose the management interface and thus be the target of this query.

Note

Since Rhino 3.0.0 the actual state of components on each node can update asynchronously. This differs from symmetric activation state mode in earlier Rhino versions in that the value returned by getState() is not representative of the state on other cluster nodes. Users of this method who previously configured symmetric activation state mode should switch to checking the state of all nodes using the method getState(int[] nodeIDs), or one of the newer getDesiredState(int[] nodeIDs) or getActualState(int[] nodeIDs) operations, depending on the purpose of the state query. A list of event router node IDs can be obtained using RhinoHousekeepingMBean.getEventRouterNodes(). For example, to verify that a Resource Adaptor Entity is configured to be active on all nodes:

RhinoHousekeepingMBean rhinoHousekeeping = RhinoManagement.getRhinoHousekeepingMBean(client);
ResourceManagementMBean resourceManagement = RhinoManagement.getResourceManagementMBean(client);
// Query the desired state on every current event-router node
ResourceAdaptorEntityDesiredState[] nodeStates =
    resourceManagement.getDesiredState(entityName, rhinoHousekeeping.getEventRouterNodes());
// Configured active only if every node reports the ACTIVE desired state
boolean active = Arrays.stream(nodeStates).allMatch(s -> s == ResourceAdaptorEntityDesiredState.ACTIVE);

Rhino extension

Return state of Resource Adaptor Entity on specified node(s)
public ResourceAdaptorEntityState[] getState(String entityName, int[] nodeIDs)
    throws NullPointerException, InvalidArgumentException,
    UnrecognizedResourceAdaptorEntityException, ManagementException;

Rhino provides an extension that adds an argument which lets you control the nodes on which to return the state of the Resource Adaptor Entity (by specifying node IDs).

Listing Resource Adaptor Entities by State

To list resource adaptor entities in a particular operational state, use the following rhino-console command or related MBean operation.

Console command: listraentitiesbystate

Command

listraentitiesbystate <state> [-node node]
  Description
    List the resource adaptor entities that are in the specified state (on the
    specified node)

Examples

To list the resource adaptor entities in the Active state on the node where rhino-console is connected:

$ ./rhino-console listraentitiesbystate Active
No resource adaptor entities in Active state on node 101

To list the resource adaptor entities that are active on node 102:

$ ./rhino-console listraentitiesbystate Active -node 102
Resource adaptor entities in Active state on node 102:
sipra

MBean operation: getResourceAdaptorEntities

MBean

SLEE-defined

Return names of resource adaptor entities in specified state on current node
public String[] getResourceAdaptorEntities(ResourceAdaptorEntityState state)
    throws NullPointerException, ManagementException;

Rhino’s implementation of the SLEE-defined getResourceAdaptorEntities operation returns the names of resource adaptor entities in a specified state on the node where you invoke the operation.

Rhino extension

Return names of resource adaptor entities in specified state on specified node
public String[] getResourceAdaptorEntities(ResourceAdaptorEntityState state, int nodeID)
    throws NullPointerException, InvalidArgumentException,
    ManagementException;

Rhino provides an extension that lets you specify the node (by node ID) for which to return the names of resource adaptor entities in the specified state.

Resource Adaptor Entity Link Name Bindings

Note
What are resource adaptor entity link name bindings?

When an SBB needs access to a resource adaptor entity, it uses JNDI to get references to Java objects that implement the resource adaptor interface (provided by the resource adaptor entity). The SBB declares (in its deployment descriptor) the resource adaptor type it expects, and an arbitrary link name. Before activating a service using the SBB, an administrator must bind a resource adaptor entity (of the type expected) to the specified link name.

Rhino includes procedures for binding, unbinding, and listing resource adaptor entity link names.

To bind a resource adaptor entity to a link name, use the following rhino-console command or related MBean operation.

Warning Only one resource adaptor entity can be bound to a link name at any time.

Console command: bindralinkname

Command

bindralinkname <entity-name> <link-name>
  Description
    Bind a resource adaptor entity to a link name

Example

To bind the resource adaptor entity with the name sipra to the link name sip:

$ ./rhino-console bindralinkname sipra sip
Bound sipra to link name sip

MBean operation: bindLinkName

MBean

SLEE-defined

public void bindLinkName(String entityName, String linkName)
    throws NullPointerException, InvalidArgumentException,
            UnrecognizedResourceAdaptorEntityException,
            LinkNameAlreadyBoundException, ManagementException;

To unbind a resource adaptor entity from a link name, use the following rhino-console command or related MBean operation.

Console command: unbindralinkname

Command

unbindralinkname <link-name>
  Description
    Unbind a resource adaptor entity from a link name

Example

To unbind the link name sip:

$ ./rhino-console unbindralinkname sip
Unbound link name sip

MBean operation: unbindLinkName

MBean

SLEE-defined

public void unbindLinkName(String linkName)
    throws NullPointerException, UnrecognizedLinkNameException,
            DependencyException, ManagementException;

To list resource adaptor entity link names that have been bound in the SLEE, use the following rhino-console command or related MBean operation.

Console command: listralinknames

Command

listralinknames [entity name]
  Description
    List the bound link names (for the specified resource adaptor entity)

Examples

To list all resource adaptor entity link name bindings:

$ ./rhino-console listralinknames
slee/resources/cdr -> cdrra
slee/resources/map -> mapra

To list all link name bindings for the resource adaptor entity named mapra:

$ ./rhino-console listralinknames mapra
slee/resources/map

MBean operation: getLinkNames

MBean

SLEE-defined

List all bound link names
public String[] getLinkNames()
    throws ManagementException;

Rhino’s implementation of the SLEE-defined getLinkNames operation returns an array of all link names that have been bound in the SLEE.


List link names to which a specific resource adaptor entity has been bound
public String[] getLinkNames(String entityName)
    throws NullPointerException,
            UnrecognizedResourceAdaptorEntityException,
            ManagementException;

The SLEE-defined operation also includes an argument for returning just link names to which a specified resource adaptor entity has been bound. If the resource adaptor entity has not been bound to any link names, the returned array is zero-length.

Profile Tables and Profiles

As well as an overview of SLEE profiles, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:

Procedure rhino-console command(s) MBean(s) → Operation

Creating profile tables
 createprofiletable

Profile Provisioning → createProfileTable

Creating profiles
 createprofile

Profile Provisioning → createProfile

Listing profile tables
 listprofiletables

Profile Provisioning → getProfileTables

Listing profiles
 listprofiles

Profile Provisioning → getProfiles

Listing profile attributes
 listprofileattributes

Profile Provisioning, Profile → getProfile

Setting profile attributes
 setprofileattributes

Profile Provisioning, Profile → getProfile

Finding profiles
 listprofilesbyattribute +
listprofilesbyindexedattribute

Profile Provisioning → getProfilesByAttribute
Profile Provisioning → getProfilesByIndexedAttribute
Profile Provisioning → getProfilesByStaticQuery
Profile Provisioning → getProfilesByDynamicQuery

Exporting and importing profiles
 exportall +
importprofiles

Profile Provisioning → exportProfiles
Profile Provisioning → importProfiles

About Profiles

Note
What are profiles? profile tables? profile specifications?

A profile is an entry in a profile table. It has a name, may have values (called "attributes") and may have indexed fields. It’s like a row in SQL, but may also include business and management logic.

A profile table is a "container" for profiles. Its schema, defined by the profile specification deployment descriptor, may include queries for the profile table. The SLEE specification defines the format and structure of profile specification schemas.

A profile table’s default profile is the initial set of profile attribute values for newly created profiles within that table (if not specified explicitly with the profile-creation command).

Before deploying a profile specification into the SLEE, an administrator can configure it by modifying values in its profile-spec-jar.xml deployment descriptor (in its deployable unit). For example, you can specify:

  • static queries available to SLEE components, and administrators using the management interface

  • profile specification environment entries

  • indexing hints for profile attributes.

Tip For more on profile static queries, environment entries and indexing, see the SLEE 1.1 specification.

Creating Profile Tables

To create a new profile table based on an already-deployed profile specification, use the following rhino-console command or related MBean operation.

Warning
Name character restriction

The profile table name cannot include the / character.

Console command: createprofiletable

Command

createprofiletable <profile-spec-id> <table-name>
  Description
    Create a profile table

Example

$ ./rhino-console createprofiletable name=AddressProfileSpec,vendor=javax.slee,version=1.1 testtable
Created profile table testtable

MBean operation: createProfileTable

MBean

SLEE-defined

public void createProfileTable(javax.slee.profile.ProfileSpecificationID id, String newProfileTableName)
    throws NullPointerException, UnrecognizedProfileSpecificationException,
          InvalidArgumentException, ProfileTableAlreadyExistsException,
          ManagementException;

Arguments

This operation requires that you specify the profile table’s:

  • id — component identifier of the profile specification from which to create the profile table

  • newProfileTableName —  name of the profile table to create.

Creating Profiles

To create a profile in an existing profile table, use the following rhino-console command or related MBean operation.

Console command: createprofile

Command

createprofile <table-name> <profile-name> (<attr-name> <attr-value>)*
  Description
    Add a profile to a table, optionally setting attributes (see
    -setProfileAttributes option)


Example

$ ./rhino-console createprofile testtable testprofile
Profile testtable/testprofile created

Notes

Setting profile attributes

When creating a profile, specify its attributes in rhino-console as a space-separated list of attribute-name and attribute-value pairs.


White space, commas, quotes

If a profile or profile table name or an attribute name or value contains white space or a comma, you must quote the string. For example:

$ ./rhino-console createprofile "testtable 2" "testprofile 2" SubscriberAddress "my address" forwarding true

If the value requires quotes, you must escape them using a backslash "\" (to avoid them being removed by the parser). For example:

$ ./rhino-console createprofile testtable testprofile attrib "\"The quick brown fox\""

Name uniqueness

The profile name must be unique within the scope of the profile table.

MBean operation: createProfile

MBean

SLEE-defined

public javax.management.ObjectName createProfile(String profileTableName, String newProfileName)
    throws NullPointerException, UnrecognizedProfileTableNameException,
          InvalidArgumentException, ProfileAlreadyExistsException,
          ManagementException;

Arguments

This operation requires that you specify the profile’s:

  • profileTableName — name of the profile table to create the profile in

  • newProfileName — name of the new profile.

Notes

Profile MBean commit state

This operation returns an ObjectName, which the management client can use to access a Profile MBean for the new profile. This MBean will be in the read-write state, so the management client can configure initial values for profile attributes before the SLEE adds the profile to the profile table. You cannot see the new profile in the profile table until you commit the Profile MBean’s state, using the ProfileMBean.commitProfile() operation.


Name uniqueness

The profile name must be unique within the scope of the profile table.
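Putting these notes together, a minimal sketch of the create-set-commit cycle. It assumes a ProfileProvisioningMBean proxy named profileProvisioning, the underlying javax.management.MBeanServerConnection as connection, and a hypothetical attribute value addressValue of the attribute's declared type:

ObjectName profile = profileProvisioning.createProfile("testtable", "testprofile");

// The new Profile MBean starts in the read-write state, so initial attribute
// values can be set before the profile becomes visible in the table
// (addressValue is a hypothetical value for the Address attribute)
connection.setAttribute(profile, new Attribute("Address", addressValue));

// The profile is only added to the table once the MBean state is committed
connection.invoke(profile, "commitProfile", null, null);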

Listing Profile Tables

To list all profile tables in a SLEE, use the following rhino-console command or related MBean operation.

Console command: listprofiletables

Command

listprofiletables
  Description
    List the current created profile tables

Example

$ ./rhino-console listprofiletables
callbarring
callforwarding

MBean operation: getProfileTables

MBean

SLEE-defined

public Collection getProfileTables()
    throws ManagementException;

Listing Profiles

To list all profiles of a specific profile table, use the following rhino-console command or related MBean operation.

Console command: listprofiles

Command

listprofiles <table-name>
  Description
    List the profiles in a table

Example

$ ./rhino-console listprofiles testtable
testprofile

MBean operation: getProfiles

MBean

SLEE-defined

public Collection getProfiles(String profileTableName)
    throws NullPointerException, UnrecognizedProfileTableNameException,
          ManagementException;

Arguments

This operation requires that you specify the profile table’s:

  • profileTableName — name of the profile table.

Listing Profile Attributes

To list a profile’s attributes (names and current values), use the following rhino-console command or related MBean operation.

Console command: listprofileattributes

Command

listprofileattributes <table-name> [profile-name]
  Description
    List the current values of a profile, or if no profile is specified the current
    values of the default profile are listed

Example

$ ./rhino-console listprofileattributes testtable testprofile
Address={null}

MBean operation: getProfile

MBean

SLEE-defined

public javax.management.ObjectName getProfile(String profileTableName,String profileName)
    throws NullPointerException, UnrecognizedProfileTableNameException,
          UnrecognizedProfileNameException, ManagementException;

Arguments

This operation requires that you specify the profile table’s:

  • profileTableName — name of the profile table to get the profile from.

  • profileName — name of the profile.

Notes

Profile MBean state

This operation returns an ObjectName, which the management client can use to access a Profile MBean for this specific profile. This MBean will be in the read-only state, so the management client can only read the profile attributes. (To change profile attributes, see Setting Profile Attributes.)

Note For more about Profile MBeans, their requirements and restrictions, please see chapter 10.26 "Profile MBean" in the SLEE 1.1 Specification.

Setting Profile Attributes

To set a profile’s attribute values, use the following rhino-console command or related MBean operation.

Console command: setprofileattributes

Command

setprofileattributes <table-name> <profile-name> (<attr-name> <attr-value>)*
  Description
    Set the current values of a profile (use "" for default profile). The
    implementation supports only a limited set of attribute types that it can
    convert from strings to objects

Example

$ ./rhino-console setprofileattributes testtable testprofile Address IP:192.168.0.1
Set attributes in profile testtable/testprofile

Notes

White space, commas, quotes

If a profile or profile table name or an attribute name or value contains white space or a comma, you must quote the string. For example:

$ ./rhino-console setprofileattributes "testtable 2" "testprofile 2" SubscriberAddress "my address" forwarding true

If the value requires quotes, you must escape them using a backslash "\" (to avoid them being removed by the parser). For example:

$ ./rhino-console setprofileattributes testtable testprofile attrib "\"The quick brown fox\""

MBean operation: getProfile

MBean

SLEE-defined

public javax.management.ObjectName getProfile(String profileTableName,String profileName)
    throws NullPointerException, UnrecognizedProfileTableNameException,
          UnrecognizedProfileNameException, ManagementException;

Arguments

This operation requires that you specify the profile table’s:

  • profileTableName — name of the profile table to get the profile from.

  • profileName — name of the profile.

Notes

Profile MBean state

This operation returns an ObjectName, which the management client can use to access a Profile MBean for this specific profile. This MBean will be in the read-only state, so the management client can only read the profile attributes.

To put the MBean into the read-write state, invoke ProfileMBean.editProfile(). This will give you access to the profile’s attributes using the MBean’s getter and setter methods. You cannot see the profile’s new values until you commit the Profile MBean’s state, using the ProfileMBean.commitProfile() operation.

Note For more about Profile MBeans, their requirements and restrictions, please see chapter 10.26 "Profile MBean" in the SLEE 1.1 Specification.
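The edit-commit cycle as a short sketch, reusing the assumed connection and the profile ObjectName returned by getProfile above, with a hypothetical replacement value newAddressValue:

connection.invoke(profile, "editProfile", null, null);    // enter the read-write state
connection.setAttribute(profile, new Attribute("Address", newAddressValue));
connection.invoke(profile, "commitProfile", null, null);  // make the change visible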

Finding Profiles

Finding Profiles by Attribute Value

To retrieve all profiles with a specific attribute value, use the following rhino-console commands or related MBean operations:

Console command: listprofilesbyattribute

Command

listprofilesbyattribute <table-name> <attr-name> <attr-value>
[display-attributes (true/false)]
  Description
    List the profiles which have an attribute <attr-name> equal to <attr-value>. The
    implementation supports only a limited set of attribute types that it can
    convert from strings to objects

Example

$ ./rhino-console listprofilesbyattribute testtable Address IP:192.168.0.1
1 profiles returned
ProfileID[table=testtable,profile=testprofile]

Notes

SLEE 1.1- & SLEE 1.0-specific commands

Between SLEE 1.0 and SLEE 1.1, the underlying profile specification schema changed significantly. According to the SLEE 1.1 Specification, profile attributes no longer have to be indexed to be legally used by a find-by-attribute-value query. Therefore, the listprofilesbyattribute command can only be used for profiles and profile tables that are based on a SLEE 1.1-compliant profile specification. For running a find-by-attribute-value search on a SLEE 1.0-compliant profile table, use the listprofilesbyindexedattribute command.


Backwards compatibility

SLEE 1.1 demands backwards compatibility for SLEE 1.0-compliant profiles, which means a SLEE 1.0-compliant profile specification can be deployed into the SLEE, and profile tables and profiles can be successfully created and managed.

Console command: listprofilesbyindexedattribute

Command

listprofilesbyindexedattribute <table-name> <attr-name> <attr-value>
[display-attributes (true/false)]
  Description
    List the profiles which have an indexed attribute <attr-name> equal to
    <attr-value>. The implementation supports only a limited set of attribute types
    that it can convert from strings to objects

Example

$ ./rhino-console listprofilesbyindexedattribute testtable indexedAttrib someValue
1 profiles returned
ProfileID[table=testtable,profile=testprofile]

MBean operation: getProfilesByAttribute

MBean

SLEE-defined

public Collection getProfilesByAttribute(String profileTableName, String attributeName, Object attributeValue)
    throws NullPointerException, UnrecognizedProfileTableNameException,
          UnrecognizedAttributeException, InvalidArgumentException,
          AttributeTypeMismatchException, ManagementException;

Arguments

This operation requires that you specify the:

  • profileTableName — name of the profile table

  • attributeName — name of the profile’s attribute to check

  • attributeValue — value to compare the attribute with.

Notes

SLEE 1.1- & SLEE 1.0-specific commands

Between SLEE 1.0 and SLEE 1.1, the underlying profile specification schema changed significantly. According to the SLEE 1.1 Specification, profile attributes no longer have to be indexed to be legally used by a find-by-attribute-value query. Therefore, the getProfilesByAttribute operation can only be used for profiles and profile tables that are based on a SLEE 1.1-compliant profile specification. For running a find-by-attribute-value search on a SLEE 1.0-compliant profile table, use the getProfilesByIndexedAttribute operation.


Backwards compatibility

SLEE 1.1 demands backwards compatibility for SLEE 1.0-compliant profiles, which means a SLEE 1.0-compliant profile specification can be deployed into the SLEE, and profile tables and profiles can be successfully created and managed.

MBean operation: getProfilesByIndexedAttribute

MBean

SLEE-defined

public Collection getProfilesByIndexedAttribute(String profileTableName, String attributeName, Object attributeValue)
    throws NullPointerException, UnrecognizedProfileTableNameException,
          UnrecognizedAttributeException, AttributeNotIndexedException,
          AttributeTypeMismatchException, ManagementException;

Arguments

This operation requires that you specify the:

  • profileTableName — name of the profile table

  • attributeName — name of the profile’s attribute to check

  • attributeValue — value to compare the attribute with.

Finding Profiles Using Static Queries

To retrieve all profiles that match a static query (pre-defined in a profile table’s profile specification schema), use the following MBean operation.

Note The Rhino SLEE does not provide a rhino-console command for this function.

MBean operation: getProfilesByStaticQuery

MBean

SLEE-defined

public Collection getProfilesByStaticQuery(String profileTableName, String queryName, Object[] parameters)
    throws NullPointerException, UnrecognizedProfileTableNameException,
          UnrecognizedQueryNameException, InvalidArgumentException,
          AttributeTypeMismatchException, ManagementException;

Arguments

This operation requires that you specify the:

  • profileTableName — name of the profile table

  • queryName — name of a static query defined in the profile table’s profile specification deployment descriptor

  • parameters — an array of parameter values, to apply to parameters in the query (may only be null if the static query takes no arguments).

Tip For more about static query methods, please see chapter 10.8.2 "Static query methods" in the SLEE 1.1 specification.
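As a sketch, assuming a ProfileProvisioningMBean proxy named profileProvisioning and a static query named byAddress taking one parameter (both the query name and its argument are hypothetical):

// Runs the pre-defined query "byAddress" with a single argument;
// returns a collection of ProfileID objects
Collection profileIDs = profileProvisioning.getProfilesByStaticQuery(
    "testtable", "byAddress", new Object[] { "IP:192.168.0.1" });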

Finding Profiles Using Dynamic Queries

To retrieve all profiles that match a dynamic query (an expression the administrator constructs at runtime), use the following MBean operation.

Note The Rhino SLEE does not provide a rhino-console command for this function.

MBean operation: getProfilesByDynamicQuery

MBean

SLEE-defined

public Collection getProfilesByDynamicQuery(String profileTableName, QueryExpression expr)
    throws NullPointerException, UnrecognizedProfileTableNameException,
        UnrecognizedAttributeException, AttributeTypeMismatchException,
        ManagementException;

Arguments

This operation requires that you specify the:

  • profileTableName — name of the profile table

  • expr — query expression to apply to profiles in the profile table.

Tip For more about dynamic query methods, please see chapter 10.20.3 "Dynamic Profile queries" in the SLEE 1.1 specification.
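A sketch of the dynamic case, assuming the same profileProvisioning proxy; the expression classes come from the javax.slee.profile.query package, and the attribute name is hypothetical:

// Find all profiles whose Address attribute equals the given value
QueryExpression query = new Equals("Address", "IP:192.168.0.1");
Collection profileIDs = profileProvisioning.getProfilesByDynamicQuery("testtable", query);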

Export and Import

Rhino includes procedures for exporting and importing profiles.

Exporting Profiles

To export SLEE profiles, use the following rhino-console command or related MBean operation.

Console command: exportall

Note The Rhino command console currently does not have a command specific to profile exports. Instead you use a more general export function, which (apart from SLEE profiles) also exports deployable units for services and RAs currently installed in the SLEE.

Command

exportall <zip|directory>
  Description
    Export the internal state of the SLEE including deployable units, profile
    tables, and other component state as an imperative-style configuration export.
    Uses JMX to export profiles. Use of the standalone rhino-export utility is
    encouraged for deployments involving large profile sets.

Example

$ ./rhino-console exportall /home/userXY/myexport
Exporting file:jars/incc-callbarring-service.jar...
Exporting file:jars/incc-callforwarding-service.jar...
Taking snapshot for callforwarding
Saving callforwarding.jar (183kb)
Streaming profile table 'callforwarding' snapshot to callforwarding.data (2 entries)
[################################################################################] 2/2 entries

Taking snapshot for callbarring
Saving callbarring.jar (177kb)
Streaming profile table 'callbarring' snapshot to callbarring.data (2 entries)
[################################################################################] 2/2 entries

Extracted 4 of 4 entries (157 bytes)
Snapshot timestamp 2008-05-07 15:17:42.325 (1210130262325)
   Critical region time     : 0.002 s
   Request preparation time : 0.053 s
   Data extraction time     : 0.302 s
   Total time               : 0.355 s

Converting 2 profile table snapshots...
Converting callforwarding...
bean class=class com.opencloud.deployed.Profile_Table_2.ProfileOCBB_Bean
[###########################################################################] converted 2 of 2
[###########################################################################] converted 2 of 2
Converted 2 records

Converting callbarring...
bean class=class com.opencloud.deployed.Profile_Table_1.ProfileOCBB_Bean
[###########################################################################] converted 2 of 2
[###########################################################################] converted 2 of 2
Converted 2 records
Export complete
Tip
Exported profile files

After the export, you will find the exported profiles as .xml files in the profiles subfolder of the chosen export directory (in the above example, /home/userXY/myexport/profiles).

Exporting "snapshots"

See also Profile Snapshots, to export profile snapshots in binary format and convert them into xml format for later imports.

Exporting a SLEE

See also Exporting a SLEE, to export all deployed components and configuration of a Rhino SLEE.

MBean operation: exportProfiles

MBean

Rhino extension

public com.opencloud.rhino.management.profile.ProfileDataCollection exportProfiles(String profileTableName, String[] profileNames)
    throws NullPointerException, UnrecognizedProfileTableNameException,
    ManagementException;

Arguments

This operation requires that you specify the profile table’s:

  • profileTableName — name of the profile table to export from

  • profileNames — an array listing the profile names to export (elements corresponding to unknown profile names are ignored).

Tip
Exporting the default profile

To export the default profile, enter a null element in the profileNames array.

Importing Profiles

To import SLEE profiles, use the following rhino-console command or related MBean operation.

Console command: importprofiles

Tip Use the importprofiles command to import profile data from an xml file that has previously been created (for example, using the exportall command).

Command

importprofiles <filename.xml> [-table table-name] [-replace] [-max
profiles-per-transaction] [-noverify]
  Description
    Import profiles from xml data

Example

$ ./rhino-console exportall /home/userXY/myexport
...
$ ./rhino-console importprofiles /home/userXY/myexport/profiles/testtable.xml
Importing profiles into profile table: testtable
2 profile(s) processed: 1 created, 0 replaced, 0 removed, 1 skipped

Notes

Referenced profile table must exist

For the profile import to run successfully, the profile table the xml data refers to must exist before invoking the importprofiles command. (The importprofiles command will not create the profile table if it does not exist. Instead it will complete successfully — but without importing anything.)

MBean operation: importProfiles

MBean

Rhino extension

public com.opencloud.rhino.management.profile.ProfileImportResult importProfiles(com.opencloud.rhino.management.profile.ProfileDataCollection profileData)
    throws NullPointerException, UnrecognizedProfileTableNameException,
          InvalidArgumentException, ProfileAlreadyExistsException,
          UnrecognizedProfileNameException, ManagementException;

Arguments

This operation requires that you specify:

  • profileData — the profile data collection containing the exported profiles.

Tip
Importing the default profile

To import the default profile, include a profile with a null name in the profile data collection.
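A round-trip sketch using the two operations above; profileProvisioning here is assumed to be a proxy for Rhino's extended profile provisioning MBean, which declares them:

// Export the default profile (null name) plus one named profile
ProfileDataCollection data = profileProvisioning.exportProfiles(
    "testtable", new String[] { null, "testprofile" });

// Re-import the collection; the target profile table must already exist
ProfileImportResult result = profileProvisioning.importProfiles(data);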

Alarms

As well as an overview and list of alarms, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples and links to related javadocs.

Procedure rhino-console command MBean(s) → Operations

Viewing active alarms
 listactivealarms

Alarm → getAlarms
Alarm → getDescriptors

Clearing an individual alarm
 clearalarm

Alarm → clearAlarm

Clearing alarms raised by a notification source
 clearalarms

Alarm → clearAlarms

Configuring the alarm log period
 setalarmlogperiod
getalarmlogperiod

Logging Management → setAlarmLogPeriod
Logging Management → getAlarmLogPeriod

Creating and removing threshold rules
 createthresholdrule
removethresholdrule

Threshold Rule Management → createRule
Threshold Rule Management → removeRule

Listing threshold rules
 listthresholdrules

Threshold Rule Management → getRules

Configuring threshold rules:

  • Viewing a current threshold-alarm rule

  • Saving a threshold-alarm rule configuration to a file for editing

  • Importing a modified threshold-alarm rule

  • Configuring trigger conditions for a threshold-alarm rule (adding, getting and removing conditions; getting and setting operators and periods)

  • Configuring reset conditions for a threshold-alarm rule (adding, getting and removing conditions; getting and setting operators and periods)

  • Setting a threshold rule alarm

  • Getting a threshold rule alarm’s level, type and message

 getconfig
exportconfig
importconfig

Threshold Rule → addTriggerCondition
Threshold Rule → getTriggerConditions
Threshold Rule → removeTriggerCondition
Threshold Rule → getTriggerConditionsOperator
Threshold Rule → setTriggerConditionsOperator
Threshold Rule → getTriggerPeriod
Threshold Rule → setTriggerPeriod
Threshold Rule → addResetCondition
Threshold Rule → getResetConditions
Threshold Rule → removeResetCondition
Threshold Rule → getResetConditionsOperator
Threshold Rule → setResetConditionsOperator
Threshold Rule → getResetPeriod
Threshold Rule → setResetPeriod
Threshold Rule → setAlarm
Threshold Rule → getAlarmLevel
Threshold Rule → getAlarmType
Threshold Rule → getAlarmMessage

Activating and deactivating threshold rules
 activatethresholdrule

Threshold Rule → activateRule
Threshold Rule → deactivateRule

Configuring the threshold rule scan period
 getthresholdrulescanperiod
setthresholdrulescanperiod

Threshold Rule Management → getScanPeriod
Threshold Rule Management → setScanPeriod

About Alarms

Alarms in Rhino alert the SLEE administrator to exceptional conditions.

Application components in the SLEE raise them, as does Rhino itself (upon detecting an error condition). Rhino clears some alarms automatically when the error conditions are resolved. The SLEE administrator must clear others manually.

When an alarm is raised or cleared, Rhino generates a JMX notification from the Alarm MBean. Management clients may attach a notification listener to the Alarm MBean, to receive alarm notifications. Rhino logs all alarm notifications.

What’s new in SLEE 1.1?

While only SBBs could generate alarms in SLEE 1.0, other types of application components can also generate alarms in SLEE 1.1.

In SLEE 1.1, alarms are stateful — between being raised and cleared, an alarm persists in the SLEE, where an administrator may examine it. (In SLEE 1.0, alarms could be generated with a severity level that indicated a cleared alarm, but the fact that an error condition had occurred did not persist in the SLEE beyond the initial alarm generation.)

Note For both SLEE 1.1 and SLEE 1.0 alarms, if the cause of an alarm is a Java exception, the log includes the exception and its stack trace (following the alarm description message).

Configuring alarm log period

To set and get the interval between periodic active alarm logs, use the following rhino-console commands or related MBean operations.

Rhino logs active alarms periodically; the default interval is 60 seconds.

Console command: setalarmlogperiod

Command

setalarmlogperiod <seconds>
  Description
    Sets the interval between periodic active alarm logs.
  Required Arguments
    seconds  The interval between periodic alarm logs. Setting to 0 will disable
    logging of periodic alarms.

Example

To set log period to 30 seconds:

$ ./rhino-console setalarmlogperiod 30
  Active alarm logging period set to 30 seconds.

Console command: getalarmlogperiod

Command

getalarmlogperiod
  Description
    Returns the interval between periodic active alarm logs.

Example

To get alarm log period:

$ ./rhino-console getalarmlogperiod
  Active alarm logging period is currently 30 seconds.

MBean operations: setAlarmLogPeriod and getAlarmLogPeriod

MBean

Rhino extension

Set the interval between periodic active alarm logs
public void setAlarmLogPeriod(int period) throws IllegalArgumentException, ConfigurationException;

Sets the interval between periodic active alarm logs. Setting the period to 0 will disable periodic alarm logging.

Get the interval between periodic active alarm logs
public int getAlarmLogPeriod() throws ConfigurationException;

Returns the interval between periodic active alarm logs.

Viewing Active Alarms

To view active alarms, use the following rhino-console command or related MBean operation.

Console command: listactivealarms

Command

listactivealarms [<type> <notif-source>] [-stack]
  Description
    List the alarms currently active in the SLEE (for a specific notification if
    provided).  Use -stack to display stack traces for alarm cause exceptions.

Example

To list all active alarms in the SLEE:

$ ./rhino-console listactivealarms
1 alarm:

Alarm 101:193215480667648 [diameter.peer.connectiondown]
  Level      : Warning
  InstanceID : diameter.peer.hss-instance
  Source     : (RA Entity) sh-cache-ra
  Timestamp  : 20161019 14:02:58 (active 15m 30s)
  Message    : Connection to hss-instance:3868 is down

The numeric value on the first line, 101:193215480667648, is the alarm identifier (alarmid).

The value in the square brackets, diameter.peer.connectiondown, is the alarm type.

MBean operations: getAlarms and getDescriptors

MBean

SLEE-defined

Get identifiers of all active alarms in the SLEE
public String[] getAlarms()
    throws ManagementException;

Rhino’s implementation of the SLEE-defined getAlarms operation returns an array containing the identifiers of all alarms currently raised in the SLEE, regardless of which cluster node the alarm was raised on.


Get identifiers of active alarms raised by a specific notification source
public String[] getAlarms(NotificationSource notificationSource)
    throws NullPointerException, UnrecognizedNotificationSourceException,
           ManagementException;

This variant of getAlarms returns an array containing the identifiers of the current alarms that were raised by the specified notification source, on any node in the cluster. If there are currently no active alarms raised by this notification source, the operation returns a zero-length array.


Get alarm descriptor for an alarm identifier
public Alarm[] getDescriptors(String[] alarmIDs)
    throws NullPointerException, ManagementException;

This operation returns the alarm descriptor for each given alarm identifier.
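For example, a management client can print a summary of every active alarm, assuming an AlarmMBean proxy named alarmMBean:

String[] alarmIDs = alarmMBean.getAlarms();
for (Alarm alarm : alarmMBean.getDescriptors(alarmIDs)) {
    // Alarm is javax.slee.management.Alarm
    System.out.println(alarm.getAlarmID() + " [" + alarm.getAlarmType() + "] "
        + alarm.getLevel() + ": " + alarm.getMessage());
}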

Clearing Alarms

Clear Individual Alarms

To clear an alarm using its alarm identifier, use the following rhino-console command or related MBean operation.

Console command: clearalarm

Command

clearalarm <alarmid>
  Description
    Clear the specified alarm.

Example

To clear the alarm with the identifier 101:102916243593:1:

$ ./rhino-console clearalarm 101:102916243593:1
Alarm 101:102916243593:1 cleared

MBean operation: clearAlarm

MBean

SLEE-defined

public boolean clearAlarm(String alarmID)
  throws NullPointerException, ManagementException;

Rhino’s implementation of the SLEE-defined clearAlarm operation clears the alarm with the given identifier, regardless of the cluster node the alarm was raised on. It returns a value of either true or false, depending on whether or not the SLEE found and cleared the alarm.

Clear Alarms Raised by a Particular Notification Source

To clear alarms raised by a particular notification source, use the following rhino-console command or related MBean operation.

Console command: clearalarms

Command

clearalarms <type> <notification-source> [<alarm type>]
  Description
    Clear all alarms raised by the notification source (of the specified alarm type)

This command clears all alarms of the specified alarm type (or all alarms if no alarm-type is specified), that have been raised by the specified notification source.

Example

To clear all alarms raised by a resource adaptor entity named insis-cap:

$ ./rhino-console clearalarms resourceadaptorentity insis-cap
2 alarms cleared

To clear only "noconnection" alarms raised by the resource adaptor entity named insis-cap:

$ ./rhino-console clearalarms resourceadaptorentity insis-cap noconnection
1 alarm cleared

MBean operation: clearAlarms

MBean

SLEE-defined

Clear all active alarms raised by a notification source
public int clearAlarms(NotificationSource notificationSource)
    throws NullPointerException, UnrecognizedNotificationSourceException,
        ManagementException

Rhino’s implementation of the SLEE-defined clearAlarms operation clears all active alarms that were raised by the given notification source on any cluster node. It returns the number of alarms that were cleared.


Clear active alarms of a specified type raised by a notification source
public int clearAlarms(NotificationSource notificationSource, String alarmType)
    throws NullPointerException, UnrecognizedNotificationSourceException,
        ManagementException;

This variant of clearAlarms clears only active alarms of the given alarm type raised by the given notification source on any cluster node. It also returns the number of alarms that were cleared.
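The console examples above map onto these operations roughly as follows; alarmMBean is an assumed AlarmMBean proxy, and ResourceAdaptorEntityNotification (javax.slee.management) is the SLEE-defined notification source for a resource adaptor entity:

NotificationSource source = new ResourceAdaptorEntityNotification("insis-cap");

// Clear every alarm this RA entity has raised; returns the number cleared
int cleared = alarmMBean.clearAlarms(source);

// Clear only its "noconnection" alarms
int clearedByType = alarmMBean.clearAlarms(source, "noconnection");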

Threshold Alarms

To supplement standard alarms (which Rhino and installed components raise), an administrator may configure custom alarms, which Rhino raises or clears automatically based on SLEE statistics.

These are known as threshold alarms, and you manage them using the Threshold Rule Management MBean.

Threshold rules

You configure the threshold rules governing each threshold alarm using a Threshold Rule MBean.

Each threshold rule consists of:

  • a unique name identifying the rule

  • one or more trigger conditions

  • an alarm level, type and message text

  • and optionally:

    • one or more reset conditions

    • how long (in milliseconds) the trigger conditions must remain satisfied before Rhino raises the alarm

    • how long (in milliseconds) the reset conditions must remain satisfied before Rhino clears the alarm.

Tip You can combine condition sets using either an AND or an OR operator. (AND means all conditions must be satisfied, whereas OR means any one of the conditions may be satisfied — to raise or clear the alarm.)

Parameter sets

Threshold rules use the same parameter sets as the statistics client. You can discover them either by using the statistics client graphically or by using its command-line mode from a command shell as shown below.

To list all available parameter sets:
$ ./rhino-stats -l
The following root parameter sets are available for monitoring:
Activities, ActivityHandler, ByteArrayBuffers, CGIN, DatabaseQuery, Diameter,
EndpointLimiting, EventRouter, Events, HTTP, JDBCDatasource, JVM, LicenseAccounting,
Limiters, LockManagers, MemDB-Local, MemDB-Replicated, MemDB-Timestamp, Metrics,
ObjectPools, SIP, SIS-SIP, SLEE-Usage, Services, StagingThreads, StagingThreads-Misc,
TimerFacility, TransactionManager

For parameter set type descriptions and a list of available parameter sets use the
-l <root parameter set name> option
To list the statistics collected by the JVM parameter set:
$ ./rhino-stats -l JVM
Connecting to localhost:1199
Parameter Set: JVM
Parameter Set Type: JVM
Description: JVM Statistics

Counter type statistics:
  Id: Name:                 Label:      Description:
  0   heapUsed              husd        Used heap memory
  1   heapCommitted         hcomm       Committed heap memory
  2   heapInitial           hinit       Initial heap memory
  3   heapMaximum           hmax        Maximum heap memory
  4   nonHeapUsed           nhusd       Used non-heap memory
  5   nonHeapCommitted      nhcomm      Committed non-heap memory
  6   nonHeapInitial        nhinit      Initial non-heap memory
  7   nonHeapMaximum        nhmax       Maximum non-heap memory
  8   classesCurrentLoaded  cLoad       Number of classes currently loaded
  9   classesTotalLoaded    cTotLoad    Total number of classes loaded since JVM start
  10  classesTotalUnloaded  cTotUnload  Total number of classes unloaded since JVM start

Sample type statistics: (none defined)

Found 1 parameter sets under 'JVM':
    ->  "JVM"

How Rhino evaluates threshold rules

Rhino periodically evaluates the trigger conditions of each configured rule. When a trigger condition is satisfied and its trigger period has been met or exceeded, Rhino raises the corresponding alarm. If the rule has reset conditions, Rhino evaluates those too; when the reset conditions are satisfied and the reset period has been met or exceeded, Rhino clears the alarm. If the rule has no reset conditions, an administrator must clear the alarm manually.

You can configure the frequency of threshold-rule evaluation using the Threshold Rule Management MBean. An administrator can specify a polling frequency in milliseconds, or 0 to disable rule evaluation. The Rhino default is 0, which must be changed to enable threshold-rule evaluation. The ideal polling frequency depends heavily on the nature of the configured alarms.

Simple and relative rule conditions

There are two types of threshold rule conditions, explained in the tables below.

Simple rule conditions

What it compares
    The value of a counter-type Rhino statistic against a constant value.

Operators for comparison
    >, >=, <, <=, ==, !=

Conditions
    The constant value to compare against may be any floating-point number. The condition can either compare against the absolute value of the statistic (suitable for gauge-type statistics), or against the observed difference between successive samples (suitable for pure counter-type statistics).

Example
    A condition that selects the statistic rolledBack from the Transactions parameter set, and evaluates to true when the number of transactions rolled back is > 100.

Relative rule conditions

What it compares
    The ratio between two monitored statistics against a constant value.

Operators for comparison
    >, >=, <, <=, ==, !=

Conditions
    The constant value to compare against may be any floating-point number.

Example
    A condition that selects the statistics freeMemory and totalMemory from the SystemInfo parameter set, and evaluates to true when free heap space is less than 20% of total heap space. (Using the < operator and a constant value of 0.2, the condition would evaluate to true when the value of freeMemory / totalMemory is less than 0.2.)

Note For definitions of counter, gauge and sample type statistics, see About Rhino Statistics.

Creating and Removing Rules

To create or remove a threshold-alarm rule, use the following rhino-console commands or related MBean operations.

Console command: createthresholdrule

Command

createthresholdrule <name>
  Description
    Create a threshold alarm rule

Example

To create a rule named "low memory":

$ ./rhino-console createthresholdrule "low memory"
Threshold rule low memory created

MBean operation: createRule

MBean

Rhino operation

public ObjectName createRule(String ruleName)
    throws ConfigurationException, ValidationException;

This operation creates a rule with the name given, and returns the JMX object name of a Threshold Rule MBean (which you can use to configure that rule).
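
As a brief sketch of using this operation over a raw JMX connection: the example below assumes connection is an established MBeanServerConnection and ruleManagementName holds the Threshold Rule Management MBean's object name (the lookup of which is omitted here).

// Create the rule and keep the returned object name for later configuration
ObjectName ruleName = (ObjectName) connection.invoke(
        ruleManagementName, "createRule",
        new Object[] { "low memory" },
        new String[] { String.class.getName() });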

Console command: removethresholdrule

Command

removethresholdrule <name>
  Description
    Remove a threshold alarm rule

Example

To remove a rule named "low memory":

$ ./rhino-console removethresholdrule "low memory"
Threshold rule low memory removed

MBean operation: removeRule

MBean

Rhino operation

public void removeRule(String ruleName)
    throws ConfigurationException, ValidationException;

This operation removes the rule with the name given.

Listing Rules

To list all threshold alarm rules, use the following rhino-console command or related MBean operation.

Console command: listthresholdrules

Command

listthresholdrules
  Description
    List threshold alarm rules

Example

To list all threshold alarm rules, with their activation states:

$ ./rhino-console listthresholdrules
Current threshold rules:
    low memory (active)
    low disk (inactive)
    testrule (inactive)

MBean operation: getRules

MBean

Rhino operation

public String[] getRules()
    throws ConfigurationException;

Configuring Rules

To configure a threshold alarm rule:

View rules

To view a current threshold alarm rule, use the getconfig console command:

Command

getconfig [-namespace] <configuration type> [configuration key]
  Description
    Extract and display content of a container configuration key.  The optional
    -namespace argument must be used to get the config of a namespace-specific key.
    If no key is specified the configs of all keys of the given type are shown

Example

To display the threshold alarm rule named "low_memory":

$ ./rhino-console getconfig threshold-rules "rule/low_memory"
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE rhino-threshold-rules-config PUBLIC "-//Open Cloud Ltd.//DTD Rhino Threshold Rules Config 2.3//EN" "rhino-threshold-rules-config-2.3.dtd">
<rhino-threshold-rules-config config-version="2.3" rhino-version="Rhino (version='2.5', release='1.0', build=xxx, revision=xxx)" timestamp="xxx">
    <!-- Generated Rhino configuration file: xxxx-xx-xx xx:xx:xx.xxx -->
    <threshold-rules active="false" name="low_memory">
        <trigger-conditions name="Trigger conditions" operator="OR" period="0">
            <relative-threshold operator="&lt;=" value="0.2">
                <first-statistic calculate-delta="false" parameter-set="JVM.OperatingSystem" statistic="freePhysicalMemorySize"/>
                <second-statistic calculate-delta="false" parameter-set="JVM.OperatingSystem" statistic="totalPhysicalMemorySize"/>
            </relative-threshold>
        </trigger-conditions>
        <reset-conditions name="Reset conditions" operator="OR" period="0"/>
        <trigger-actions>
            <raise-alarm-action level="Major" message="Low on memory" type="MEMORY"/>
        </trigger-actions>
        <reset-actions>
            <clear-raised-alarm-action/>
        </reset-actions>
    </threshold-rules>
</rhino-threshold-rules-config>

Export rules

To save a threshold rule configuration to a file for editing, use the exportconfig console command:

Command

exportconfig [-namespace] <configuration type> [configuration key] <filename>
  Description
    Extract content of a container configuration key and save it to a file.  The
    optional -namespace argument must be used to export the config of a
    namespace-specific key

Example

To export the threshold alarm rule named "low memory" to the file rule_low_memory.xml:

$ ./rhino-console exportconfig threshold-rules "rule/low memory" rule_low_memory.xml
Export threshold-rules: (rule/low memory) to rule_low_memory.xml
Wrote rule_low_memory.xml
Note The structure of the exported data in the XML file is identical to that displayed by the getconfig command.

Edit rules

You can modify a rule using a text editor. In the following example, a reset condition has been added to a rule previously exported, so that the alarm raised will automatically clear when free memory becomes greater than 30% of total memory. (Previously the reset-conditions element in this rule had no conditions.)

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE rhino-threshold-rules-config PUBLIC "-//Open Cloud Ltd.//DTD Rhino Threshold Rules Config 2.3//EN" "rhino-threshold-rules-config-2.3.dtd">
<rhino-threshold-rules-config config-version="2.3" rhino-version="Rhino (version='2.5', release='1.0', build=xxx, revision=xxx)" timestamp="xxx">
    <!-- Generated Rhino configuration file: xxxx-xx-xx xx:xx:xx.xxx -->
    <threshold-rules active="false" name="low memory">
        <trigger-conditions name="Trigger conditions" operator="OR" period="0">
            <relative-threshold operator="&lt;=" value="0.2">
                <first-statistic calculate-delta="false" parameter-set="JVM.OperatingSystem" statistic="freePhysicalMemorySize"/>
                <second-statistic calculate-delta="false" parameter-set="JVM.OperatingSystem" statistic="totalPhysicalMemorySize"/>
            </relative-threshold>
        </trigger-conditions>
        <reset-conditions name="Reset conditions" operator="OR" period="0">
            <relative-threshold operator="&gt;" value="0.3">
                <first-statistic calculate-delta="false" parameter-set="JVM.OperatingSystem" statistic="freePhysicalMemorySize"/>
                <second-statistic calculate-delta="false" parameter-set="JVM.OperatingSystem" statistic="totalPhysicalMemorySize"/>
            </relative-threshold>
        </reset-conditions>
        <trigger-actions>
            <raise-alarm-action level="Major" message="Low on memory" type="MEMORY"/>
        </trigger-actions>
        <reset-actions>
            <clear-raised-alarm-action/>
        </reset-actions>
    </threshold-rules>
</rhino-threshold-rules-config>

Import rules

To import the modified threshold alarm rule file, use the importconfig console command:

Command

importconfig [-namespace] <configuration type> <filename> [-replace]
  Description
    Import a container configuration key.  The optional -namespace argument must be
    used to import a config for a namespace-specific key

Example

To import the threshold alarm rule from the file rule_low_memory.xml:

$ ./rhino-console importconfig threshold-rules rule_low_memory.xml -replace
Configuration successfully imported.
Warning The -replace option is required when importing a rule with the same name as an existing rule, as there can be only one rule configuration with a given name present at any one time.

Threshold Rule MBean Operations

To configure a threshold alarm rule, use the following MBean operations (defined on the Threshold Rule MBean interface), for:

  • adding, removing and getting trigger conditions, and getting and setting their operators and periods

  • adding, removing and getting reset conditions, and getting and setting their operators and periods

  • setting the alarm

  • getting an alarm’s level, type, and message.

Tip See also Configuring Rules.

Trigger conditions

To add, remove and get threshold alarm trigger conditions, and get and set their operators and periods, use the following MBean operations:

Operations Usage

addTriggerCondition
getTriggerConditions
removeTriggerCondition
getTriggerConditionsOperator
setTriggerConditionsOperator
getTriggerPeriod
setTriggerPeriod

To add a trigger condition to the rule:
public void addTriggerCondition(String parameterSetName, String statistic, String operator, double value)
    throws ConfigurationException, UnknownStatsParameterSetException,
          UnrecognizedStatisticException, ValidationException;
public void addTriggerCondition(String parameterSetName1, String statistic1, String parameterSetName2,
                                String statistic2, String operator, double value)
    throws ConfigurationException, UnknownStatsParameterSetException,
           UnrecognizedStatisticException, ValidationException;

The first operation adds a simple trigger condition to the rule. The second operation adds a relative condition between two parameter set statistics (see Simple and relative rule conditions).


To get the current trigger conditions:
public String[] getTriggerConditions()
    throws ConfigurationException;

To remove a trigger condition:
public void removeTriggerCondition(String key)
    throws ConfigurationException, ValidationException;

To get or set the trigger condition operator:
public String getTriggerConditionsOperator()
    throws ConfigurationException;
public void setTriggerConditionsOperator(String operator)
    throws ConfigurationException, ValidationException;

The operator must be one of the logical operators AND or OR (the operator is ignored if the rule has only one trigger condition).


To get or set the trigger condition period:
public long getTriggerPeriod()
    throws ConfigurationException;
public void setTriggerPeriod(long period)
    throws ConfigurationException, ValidationException;

The trigger period is measured in milliseconds. If it is 0, the SLEE raises the alarm whenever the trigger conditions are true (and the alarm is not already raised). Otherwise, the SLEE raises the alarm once the trigger conditions have held true for at least the amount of time specified.
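
As an illustrative sketch, the relative low-memory condition shown under Configuring Rules could be added programmatically. This continues the earlier assumption that connection is an established MBeanServerConnection and ruleName is the object name returned by createRule.

// Add a relative trigger condition:
// freePhysicalMemorySize / totalPhysicalMemorySize <= 0.2
connection.invoke(ruleName, "addTriggerCondition",
        new Object[] { "JVM.OperatingSystem", "freePhysicalMemorySize",
                       "JVM.OperatingSystem", "totalPhysicalMemorySize",
                       "<=", 0.2 },
        new String[] { String.class.getName(), String.class.getName(),
                       String.class.getName(), String.class.getName(),
                       String.class.getName(), double.class.getName() });
// Require the conditions to hold for at least 60 seconds before raising the alarm
connection.invoke(ruleName, "setTriggerPeriod",
        new Object[] { 60000L },
        new String[] { long.class.getName() });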

Reset conditions

To add, remove and get threshold alarm reset conditions, and get and set their operators and periods, use the following MBean operations:

Operations Usage

addResetCondition
getResetConditions
removeResetCondition
getResetConditionsOperator
setResetConditionsOperator
getResetPeriod
setResetPeriod

To add a reset condition to the rule:
public void addResetCondition(String parameterSetName, String statistic, String operator, double value)
    throws ConfigurationException, UnknownStatsParameterSetException,
           UnrecognizedStatisticException, ValidationException;
public void addResetCondition(String parameterSetName1, String statistic1, String parameterSetName2,
                              String statistic2, String operator, double value)
    throws ConfigurationException, UnknownStatsParameterSetException,
           UnrecognizedStatisticException, ValidationException;

The first operation adds a simple reset condition to the rule. The second operation adds a relative condition between two parameter set statistics (see Simple and relative rule conditions).


To get the current reset conditions:
public String[] getResetConditions()
    throws ConfigurationException;

To remove a reset condition:
public void removeResetCondition(String key)
    throws ConfigurationException, ValidationException;

To get or set the reset condition operator:
public String getResetConditionsOperator()
    throws ConfigurationException;
public void setResetConditionsOperator(String operator)
    throws ConfigurationException, ValidationException;

The operator must be one of the logical operators AND or OR (the operator is ignored if the rule has only one reset condition).


To get or set the reset condition period:
public long getResetPeriod()
    throws ConfigurationException;
public void setResetPeriod(long period)
    throws ConfigurationException, ValidationException;

The reset period is measured in milliseconds. If it is 0, the SLEE clears the alarm whenever the reset conditions are true (and the alarm is raised). Otherwise, the SLEE clears the alarm once the reset conditions have held true for at least the amount of time specified.

Setting alarms

To set the alarm to be raised by a threshold rule, use the following MBean operation:

Operations Usage

setAlarm

public void setAlarm(AlarmLevel level, String type, String message)
    throws ConfigurationException, ValidationException;

The alarm level may be any level other than AlarmLevel.CLEAR.
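
Continuing the illustrative sketch from the previous sections, the alarm could be set as follows. AlarmLevel here is the SLEE-defined javax.slee.management.AlarmLevel class.

// Configure the alarm raised when the rule triggers: level, type and message
connection.invoke(ruleName, "setAlarm",
        new Object[] { javax.slee.management.AlarmLevel.MAJOR, "MEMORY", "Low on memory" },
        new String[] { "javax.slee.management.AlarmLevel",
                       String.class.getName(), String.class.getName() });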

Getting alarm information

To get a threshold alarm’s level, type, and message, use the following MBean operations:

Operations Usage

getAlarmLevel
getAlarmType
getAlarmMessage

public AlarmLevel getAlarmLevel()
    throws ConfigurationException;
public String getAlarmType()
    throws ConfigurationException;
public String getAlarmMessage()
    throws ConfigurationException;

Activating and Deactivating Rules

To activate or deactivate a threshold-alarm rule, use the following rhino-console commands or related MBean operations.

Activate Rules

Console command: activatethresholdrule

Command

activatethresholdrule <name>
  Description
    Activate a threshold alarm rule

Example

To activate the rule with the name "low memory":

$ ./rhino-console activatethresholdrule "low memory"
Threshold rule low memory activated
Tip You can also activate a rule by exporting it, modifying the XML, and then reimporting it (assuming the active parameter in the rule is set to true — see Configuring Rules).

MBean operation: activateRule

MBean

Rhino operation

public void activateRule()
    throws ConfigurationException;

This operation activates the threshold-alarm rule represented by the ThresholdRuleMBean.

Warning The threshold rule scan period must be configured to a non-zero value before Rhino will evaluate active threshold-alarm rules.
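
Continuing the same illustrative sketch (with the connection and object names assumed earlier), a rule could be activated and rule scanning enabled as follows; setScanPeriod is an operation on the Threshold Rule Management MBean, described under Setting and Getting Rule-Scan Periods.

// Activate the configured rule
connection.invoke(ruleName, "activateRule", null, null);
// Rules are only evaluated while the scan period is non-zero (in milliseconds)
connection.invoke(ruleManagementName, "setScanPeriod",
        new Object[] { 30000 },
        new String[] { int.class.getName() });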

Deactivate rules

Console command: deactivatethresholdrule

Command

deactivatethresholdrule <name>
  Description
    Deactivate a threshold alarm rule

Example

To deactivate the rule with the name "low memory":

$ ./rhino-console deactivatethresholdrule "low memory"
Threshold rule low memory deactivated

MBean operation: deactivateRule

MBean

Rhino operation

public void deactivateRule()
    throws ConfigurationException;

This operation deactivates the threshold-alarm rule represented by the ThresholdRuleMBean.

Setting and Getting Rule-Scan Periods

To set or get the threshold rule scan period, use the following rhino-console commands or MBean operations.

Note
What is a rule-scan period?

A threshold-alarm rule-scan period determines how often Rhino’s threshold-rule scanner evaluates active threshold-alarm rules.

The scan period must be set to a valid non-zero value for Rhino to evaluate the rules. At the beginning of each scan period, Rhino evaluates each active threshold-alarm rule as follows:

  • If the rule’s trigger condition is true and the trigger period is 0, the rule triggers and raises its alarm.

  • If a rule’s trigger condition is true and its trigger period is greater than 0, the scanner records the time it first found the condition true. On each later evaluation where the condition is still true, once the accumulated time exceeds the rule’s trigger period, the rule triggers and raises its alarm. (If the condition evaluates to false at any time, the scanner discards the time accumulated while it was true.)

(The same process applies to the reset conditions once a rule has been triggered.)
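
The following simplified Java sketch models the accumulated-time behaviour described above. It is illustrative only (the class and method names are invented for the example), not Rhino’s actual implementation.

// Illustrative model of the rule scanner's trigger-period logic.
// 'conditionsSatisfied' stands in for evaluating the rule's trigger
// conditions against current statistics at the start of a scan period.
class TriggerEvaluator {
    private final long triggerPeriodMillis;
    private long trueSince = -1; // -1 => conditions not currently satisfied

    TriggerEvaluator(long triggerPeriodMillis) {
        this.triggerPeriodMillis = triggerPeriodMillis;
    }

    /** Called once per scan period; returns true when the alarm should be raised. */
    boolean evaluate(boolean conditionsSatisfied, long now) {
        if (!conditionsSatisfied) {
            trueSince = -1;          // discard any accumulated time
            return false;
        }
        if (triggerPeriodMillis == 0) {
            return true;             // raise immediately when conditions are true
        }
        if (trueSince == -1) {
            trueSince = now;         // record when the conditions first became true
        }
        // Raise once the conditions have held for at least the trigger period
        return now - trueSince >= triggerPeriodMillis;
    }
}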

Console command: setthresholdrulescanperiod

Command

setthresholdrulescanperiod <period>
  Description
    Set the threshold alarm rule scan period, measured in ms.  Must be > 500 or 0 to
    disable rule checking

Example

To set the threshold rule scan period to 30000ms (30s):

$ ./rhino-console setthresholdrulescanperiod 30000
Threshold rule scan period set to 30000ms

To disable threshold rule scanning:

$ ./rhino-console setthresholdrulescanperiod 0
Threshold rule scanning disabled

MBean operation: setScanPeriod

MBean

Rhino operation

public void setScanPeriod(int scanPeriod)
    throws ConfigurationException, ValidationException;

The scan period is measured in milliseconds.

Console command: getthresholdrulescanperiod

Command

getthresholdrulescanperiod
  Description
    Get the threshold alarm rule scan period

Example

$ ./rhino-console getthresholdrulescanperiod
Threshold rule scan period set to 30000ms

MBean operation: getScanPeriod

MBean

Rhino operation

public int getScanPeriod()
  throws ConfigurationException;

The scan period is measured in milliseconds.

Runtime Alarm List

To list all alarms that may be raised by Rhino and installed components (including their messages, and when raised and cleared), use the following rhino-console command.

Console command: alarmcatalog

Command

alarmcatalog [-v]
  Description
    List the alarms that may be raised by Rhino and installed components. Using the
    -v flag will display more detail.

Example

$ ./rhino-console alarmcatalog

  Rhino Alarms
  ============

    Source                 Category                  Level     Alarm Type and Message
    ------                 --------                  -----     ----------------------
    Abnormal Execution     AbnormalExecution         WARNING   rhino.uncaught-exception  "Uncaught exception thrown by thread %s: %s"
    Activity Handler       Activity Handler          WARNING   rhino.ah.snapshot-age  "Oldest activity handler snapshot is older than %s, snapshot is %s (from %d), creating thread: %s"
    Cluster State          Clustering                MAJOR     rhino.node-failure  "Node %d has left the cluster"

... edited for brevity ...

And this displays more detail:

$ ./rhino-console alarmcatalog -v
Rhino Alarms
============

Source: Abnormal Execution

  Category: AbnormalExecution (Alarms raised as a result of an abnormal execution condition being detected)

    Alarm Type:  rhino.uncaught-exception
    Level:       WARNING
    Message:     "Uncaught exception thrown by thread %s: %s"
    Description: An uncaught exception has been detected.
    Raised:      When an uncaught exception has been thrown.
    Cleared:     Never, must be cleared manually or Rhino restarted with the source of the uncaught exception corrected.


Source: Activity Handler

  Category: Activity Handler (Alarms raised by Rhino activity handler)

    Alarm Type:  rhino.ah.snapshot-age
    Level:       WARNING
    Message:     "Oldest activity handler snapshot is older than %s, snapshot is %s (from %d), creating thread: %s"
    Description: The oldest activity handler snapshot is too old.
    Raised:      When the age of the oldest activity handler snapshot is greater than the threshold set by the rhino.ah.snapshot_age_warn system property (30s default).
    Cleared:     When the age of the oldest snapshot is less than or equal to the threshold.


Source: Cluster State

  Category: Clustering (Alarms raised by Rhino cluster state changes)

    Alarm Type:  rhino.node-failure
    Level:       MAJOR
    Message:     "Node %d has left the cluster"
    Description: A node left the cluster for some reason other than a management-initiated shutdown.
    Raised:      When the cluster state listener detects a node has left the cluster unexpectedly.
    Cleared:     When the failed node returns to the cluster.

... edited for brevity ...

Rhino Alarm List

This is a list of all alarms raised by this version of Rhino. For the management command that lists all alarms that may be raised by Rhino and installed components, see Runtime Alarm List.

Alarm Type Description

Category: AbnormalExecution (Alarms raised as a result of an abnormal execution condition being detected)

rhino.uncaught-exception

An uncaught exception has been detected.

Category: Activity Handler (Alarms raised by Rhino activity handler)

rhino.ah.snapshot-age

The oldest activity handler snapshot is too old.

Category: Cassandra Key/Value Store (Alarms raised by the Cassandra key/value store)

rhino.cassandra-kv-store.no-nodes-available

All database nodes for all persistence instances have failed or are otherwise unreachable.

rhino.cassandra-kv-store.connection-error

The local database driver cannot connect to the configured persistence instance.

rhino.cassandra-kv-store.db-node-failure

The local database driver cannot connect to a database node.

rhino.cassandra-kv-store.pending-size-limit-reached

The volume of committed but not yet persisted application state has exceeded the configured pending size limit threshold. State generated for new transactions will be ignored by the key/value store and not buffered for persisting until sufficient state has been persisted to reduce the pending size volume below the limit again.

rhino.cassandra-kv-store.scan-persist-time-threshold-reached

The allowed pending transaction scan or persist time has exceeded the configured thresholds due to overload. State generated for new transactions will be ignored by the key/value store and not buffered for persisting until sufficient state has been persisted to reduce the load on the pending transaction scanner.

Category: Cassandra Session Ownership Store (Alarms raised by the Cassandra session ownership store)

rhino.cassandra-session-ownership-store.no-nodes-available

All database nodes for all persistence instances have failed or are otherwise unreachable.

rhino.cassandra-session-ownership-store.connection-error

The local database driver cannot connect to the configured persistence instance.

rhino.cassandra-session-ownership-store.db-node-failure

The local database driver cannot connect to a database node.

Category: Cluster Clock Synchronisation (Alarms raised by the cluster clock synchronisation monitor)

rhino.monitoring.clocksync

Another cluster node is reporting a system clock deviation relative to the local node beyond the maximum permitted threshold. The status of external processes maintaining the system clock on that node (e.g. NTP) should be checked.

Category: Clustering (Alarms raised by Rhino cluster state changes)

rhino.node-failure

A node left the cluster for some reason other than a management-initiated shutdown.

Category: Configuration Management (Alarms raised by the Rhino configuration manager)

rhino.config.save-error

An error occurred while trying to write the file-based configuration for the configuration type specified in the alarm instance.

rhino.config.read-error

An error occurred while trying to read the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file.

rhino.config.activation-failure

An error occurred while trying to activate the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file.

Category: Database (Alarms raised during database communications)

rhino.database.no-persistence-config

A persistence resource configuration referenced in rhino-config.xml has been removed at runtime.

rhino.database.no-persistence-instances

A persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated.

rhino.database.persistence-instance-instantiation-failure

Rhino requires a backing database for persistence of state for failure recovery purposes. A persistent instance defines a connection to a database backend. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance.

rhino.database.connection-lost

Rhino requires a backing database for persistence of state for failure recovery purposes. If no connection to the database backend is available, state cannot be persisted.

rhino.jdbc.persistence-instance-instantiation-failure

A persistent instance defines the connection to the database backend. If the persistent instance cannot be instantiated then JDBC connections cannot be made.

Category: Event Router State (Alarms raised by event router state management)

rhino.state.unlicensed-slee

A licensing problem was detected during SLEE start.

rhino.state.unlicensed-service

A licensing problem was detected during service activation.

rhino.state.unlicensed-raentity

A licensing problem was detected during resource adaptor entity activation.

rhino.state.convergence-failure

A component reported an unexpected error during convergence.

rhino.state.convergence-timeout

A component has not transitioned to the effective desired state after the timeout period.

rhino.state.raentity.active-reconfiguration

A resource adaptor entity is of a type that does not support active reconfiguration but has a desired state that contains configuration properties different from those in the actual state.

Category: GroupRMI (Alarms raised by the GroupRMI server)

rhino.group-rmi.dangling-transaction

A group RMI invocation completed without committing or rolling back a transaction that it started. The dangling transaction will be automatically rolled back by the group RMI server to prevent future issues but these occurrences are software bugs that should be reported.

Category: Key/Value Store (Alarms raised by key/value store persistence resource managers)

rhino.kv-store.no-persistence-config

A persistence resource configuration referenced in rhino-config.xml has been removed at runtime.

rhino.kv-store.no-persistence-instances

A persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated.

rhino.kv-store.persistence-instance-instantiation-failure

A persistence instance used by a key/value store cannot be instantiated. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance.

Category: Licensing (Alarms raised by Rhino licensing)

rhino.license.over-limit

Rate limiter throttling is active. This throttling, and hence this alarm, occurs only in SDK versions of Rhino, not production versions.

rhino.license.expired

A license installed in Rhino has passed its expiry time.

rhino.license.pending-expiry

A license installed in Rhino is within seven days of its expiry time.

rhino.license.partially-licensed-host

The hardware addresses listed in a host-based license only partially match those on the host.

rhino.license.unlicensed-host

The hardware addresses listed in a host-based license do not match those on the host.

rhino.license.unlicensed-rhino

Rhino does not have a valid license installed.

rhino.license.over-licensed-capacity

The work done by a function exceeds licensed capacity.

rhino.license.unlicensed-function

A particular function is not licensed.

Category: Limiting (Alarms raised by Rhino limiting)

rhino.limiting.below-negative-capacity

A rate limiter is below negative capacity.

rhino.limiting.stat-limiter-misconfigured

A stat limiter is misconfigured.

Category: Logging (Alarms raised by Rhino logging)

rhino.logging.appender-error

An appender has thrown an exception when attempting to pass log messages from a logger to it.

Category: M-lets Startup (Alarms raised by the M-let starter)

rhino.mlet.loader-failure

The M-Let starter component could not register itself with the platform MBean server. This normally indicates a serious JVM misconfiguration.

rhino.mlet.registration-failure

The M-Let starter component could not register an MBean for a configured m-let. This normally indicates an error in the m-let configuration file.

Category: REM Startup (Alarms raised by embedded REM starter)

rhino.rem.missing

This version of Rhino is supposed to contain an embedded instance of REM but it was not found, most likely due to a packaging error.

rhino.rem.startup

There was an unexpected problem while starting the embedded REM. This could be because of a port conflict or packaging problem.

Category: Remote Timer Server (Alarms raised by the remote timer server)

rhino.remote-timer-server.bind-error

The remote timer server is unable to bind a socket listener within the configured range of ports. The Rhino configuration may need to be checked and an alternate or wider range of ports allowed.

Category: Runtime Environment (Alarms related to the runtime environment)

rhino.runtime.unsupported.jvm

This JVM is not a supported JVM.

rhino.runtime.slee

SLEE event-routing functions failed to start after node restart.

rhino.runtime.long-filenames-unsupported

Filenames with the maximum length expected by Rhino are unsupported on this filesystem. Unexpected deployment errors may occur as a result.

Category: SAS facility (Alarms raised by Rhino SAS Facility)

rhino.sas.connection.lost

Attempting to reconnect to the SAS server.

rhino.sas.queue.full

The SAS message queue is full. Some events have not been reported to SAS.

Category: SLEE State (Alarms raised by SLEE state management)

rhino.state.slee-start

An unexpected exception was caught during SLEE start.

Category: SNMP (Alarms raised by Rhino SNMP)

rhino.snmp.no-bind-addresses

The SNMP agent listens for requests received on all network interfaces that match the requested SNMP configuration. If no suitable interfaces can be found that match the requested configuration, then the SNMP agent cannot process any SNMP requests.

rhino.snmp.bind-failure

The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If no ports could be bound, the SNMP agent cannot process any SNMP requests.

rhino.snmp.partial-failure

The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If this succeeds on some (but not all) interfaces, the SNMP agent can only process requests received via the interfaces that succeeded.

rhino.snmp.general-failure

This is a catchall alarm for unexpected failures during agent startup. If an unexpected failure occurs, the state of the SNMP agent is unpredictable and requests may not be successfully processed.

rhino.snmp.notification-address-failure

This alarm represents a failure to determine an address from the notification target configuration. This can occur if the notification hostname is not resolvable, or if the specified hostname is not parseable.

rhino.snmp.duplicate-oid-mapping

Multiple parameter set type configurations for in-use parameter set types map to the same OID. All parameter set type mappings will remain inactive until the conflict is resolved.

rhino.snmp.duplicate-counter-mapping

Multiple counters in the parameter set type configuration map to the same index. The parameter set type mappings will remain inactive until the conflict is resolved.

Category: Scattercast Management (Alarms raised by Rhino scattercast management operations)

rhino.scattercast.update-reboot-required

Reboot needed to make scattercast update active.

Category: Service State (Alarms raised by service state management)

rhino.state.service-activation

The service threw an exception during service activation, or an unexpected exception occurred while attempting to activate the service.

Category: Session Ownership Store (Alarms raised by session ownership store persistence resource managers)

rhino.session-ownership-store.no-persistence-config

The persistence resource configuration referenced in rhino-config.xml has been removed at runtime.

rhino.session-ownership-store.no-persistence-instances

The persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated.

rhino.session-ownership-store.persistence-instance-instantiation-failure

A persistence instance used by the session ownership store cannot be instantiated. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance.

Category: Threshold Rules (Alarms raised by the threshold alarm rule processor)

rhino.threshold-rules.rule-failure

A threshold rule trigger or reset rule failed.

rhino.threshold-rules.unknown-parameter-set

A threshold rule trigger or reset rule refers to an unknown statistics parameter set.

Category: Watchdog (Alarms raised by the watchdog)

rhino.watchdog.no-exit

The system property watchdog.no_exit is set, enabling override of default node termination behaviour on failed watchdog conditions. This can cause catastrophic results and should never be used.

rhino.watchdog.forward-timewarp

A forward timewarp was detected.

rhino.watchdog.reverse-timewarp

A reverse timewarp was detected.


Category: AbnormalExecution

Alarms raised as a result of an abnormal execution condition being detected

rhino.uncaught-exception

Alarm Type

rhino.uncaught-exception

Level

WARNING

Message

Uncaught exception thrown by thread %s: %s

Description

An uncaught exception has been detected.

Raised

When an uncaught exception has been thrown.

Cleared

Never, must be cleared manually or Rhino restarted with the source of the uncaught exception corrected.


Category: Activity Handler

Alarms raised by Rhino activity handler

rhino.ah.snapshot-age

Alarm Type

rhino.ah.snapshot-age

Level

WARNING

Message

Oldest activity handler snapshot is older than %s, snapshot is %s (from %d), creating thread: %s

Description

The oldest activity handler snapshot is too old.

Raised

When the age of the oldest activity handler snapshot is greater than the threshold set by the rhino.ah.snapshot_age_warn system property (30s default).

Cleared

When the age of the oldest snapshot is less than or equal to the threshold.


Category: Cassandra Key/Value Store

Alarms raised by the Cassandra key/value store

rhino.cassandra-kv-store.connection-error

Alarm Type

rhino.cassandra-kv-store.connection-error

Level

CRITICAL

Message

Connection error for persistence instance %s

Description

The local database driver cannot connect to the configured persistence instance.

Raised

When communication with the database fails, for example because no host is available to execute a query.

Cleared

When the connection error is resolved.

rhino.cassandra-kv-store.db-node-failure

Alarm Type

rhino.cassandra-kv-store.db-node-failure

Level

MAJOR

Message

Connection lost to database node %s in persistence instance %s

Description

The local database driver cannot connect to a database node.

Raised

When communication with the database node fails.

Cleared

When the connection error is resolved or the node is removed from the cluster.

rhino.cassandra-kv-store.no-nodes-available

Alarm Type

rhino.cassandra-kv-store.no-nodes-available

Level

CRITICAL

Message

No database node in any persistence instance is available to execute queries

Description

All database nodes for all persistence instances have failed or are otherwise unreachable.

Raised

When an attempted database query execution fails because no node is available to accept it in any persistence instance.

Cleared

When one or more nodes become available to accept queries.

rhino.cassandra-kv-store.pending-size-limit-reached

Alarm Type

rhino.cassandra-kv-store.pending-size-limit-reached

Level

WARNING

Message

Not-yet-persisted application state has exceeded the configured pending size limit, newly committed state is being discarded

Description

The volume of committed but not yet persisted application state has exceeded the configured pending size limit threshold. State generated for new transactions will be ignored by the key/value store and not buffered for persisting until sufficient state has been persisted to reduce the pending size volume below the limit again.

Raised

When the pending size volume exceeds the pending size limit.

Cleared

When the pending size volume falls below the pending size limit again.

rhino.cassandra-kv-store.scan-persist-time-threshold-reached

Alarm Type

rhino.cassandra-kv-store.scan-persist-time-threshold-reached

Level

WARNING

Message

Pending transaction scan or persist time has exceeded the configured maximum thresholds, newly committed state is being discarded

Description

The allowed pending transaction scan or persist time has exceeded the configured thresholds due to overload. State generated for new transactions will be ignored by the key/value store and not buffered for persisting until sufficient state has been persisted to reduce the load on the pending transaction scanner.

Raised

When the pending transaction scan or persist times exceed the configured maximum thresholds.

Cleared

When the pending transaction scan and persist times fall below the configured maximum thresholds again.


Category: Cassandra Session Ownership Store

Alarms raised by the Cassandra session ownership store

rhino.cassandra-session-ownership-store.connection-error

Alarm Type

rhino.cassandra-session-ownership-store.connection-error

Level

CRITICAL

Message

Connection error for persistence instance %s

Description

The local database driver cannot connect to the configured persistence instance.

Raised

When communication with the database fails, for example because no host is available to execute a query.

Cleared

When the connection error is resolved.

rhino.cassandra-session-ownership-store.db-node-failure

Alarm Type

rhino.cassandra-session-ownership-store.db-node-failure

Level

MAJOR

Message

Connection lost to database node %s in persistence instance %s

Description

The local database driver cannot connect to a database node.

Raised

When communication with the database node fails.

Cleared

When the connection error is resolved or the node is removed from the cluster.

rhino.cassandra-session-ownership-store.no-nodes-available

Alarm Type

rhino.cassandra-session-ownership-store.no-nodes-available

Level

CRITICAL

Message

No database node in any persistence instance is available to execute queries

Description

All database nodes for all persistence instances have failed or are otherwise unreachable.

Raised

When an attempted database query execution fails because no host is available to accept it in any persistence instance.

Cleared

When one or more nodes become available to accept queries.


Category: Cluster Clock Synchronisation

Alarms raised by the cluster clock synchronisation monitor

rhino.monitoring.clocksync

Alarm Type

rhino.monitoring.clocksync

Level

WARNING

Message

Node %d is reporting a local clock time deviation beyond the maximum expected threshold of %dms

Description

Another cluster node is reporting a system clock deviation relative to the local node beyond the maximum permitted threshold. The status of external processes maintaining the system clock on that node (e.g. NTP) should be checked.

Raised

When a cluster node reports a local clock time deviation relative to the local node beyond the maximum permitted threshold.

Cleared

When the clock deviation returns to a value at or below the threshold.


Category: Clustering

Alarms raised by Rhino cluster state changes

rhino.node-failure

Alarm Type

rhino.node-failure

Level

MAJOR

Message

Node %d has left the cluster

Description

A node left the cluster for some reason other than a management-initiated shutdown.

Raised

When the cluster state listener detects a node has left the cluster unexpectedly.

Cleared

When the failed node returns to the cluster.


Category: Configuration Management

Alarms raised by the Rhino configuration manager

rhino.config.activation-failure

Alarm Type

rhino.config.activation-failure

Level

MAJOR

Message

Error activating configuration from file %s. Configuration was replaced with defaults and old configuration file was moved to %s.

Description

An error occurred while trying to activate the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file.

Raised

When an exception occurs while activating a file-based configuration.

Cleared

Never, must be cleared manually.

rhino.config.read-error

Alarm Type

rhino.config.read-error

Level

MAJOR

Message

Error reading configuration from file %s. Configuration was replaced with defaults and old configuration file was moved to %s.

Description

An error occurred while trying to read the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file.

Raised

When an exception occurs while reading a configuration file.

Cleared

Never, must be cleared manually.

rhino.config.save-error

Alarm Type

rhino.config.save-error

Level

MAJOR

Message

Error saving file based configuration: %s

Description

An error occurred while trying to write the file-based configuration for the configuration type specified in the alarm instance.

Raised

When an exception occurs while writing to a configuration file.

Cleared

Never, must be cleared manually.


Category: Database

Alarms raised during database communications

rhino.database.connection-lost

Alarm Type

rhino.database.connection-lost

Level

MAJOR

Message

Connection to %s database failed: %s

Description

Rhino requires a backing database for persistence of state for failure recovery purposes. If no connection to the database backend is available, state cannot be persisted.

Raised

When the connection to a database backend is lost.

Cleared

When the connection is restored.

rhino.database.no-persistence-config

Alarm Type

rhino.database.no-persistence-config

Level

CRITICAL

Message

Persistence resource config for %s has been removed

Description

A persistence resource configuration referenced in rhino-config.xml has been removed at runtime.

Raised

When an in-use persistence resource configuration is removed by a configuration update.

Cleared

When the persistence resource configuration is restored.

rhino.database.no-persistence-instances

Alarm Type

rhino.database.no-persistence-instances

Level

CRITICAL

Message

Persistence resource config for %s has no active persistence instances

Description

A persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated.

Raised

When an in-use persistence resource configuration has no active persistence instances.

Cleared

When at least one active persistence instance exists for the persistence resource configuration.

rhino.database.persistence-instance-instantiation-failure

Alarm Type

rhino.database.persistence-instance-instantiation-failure

Level

MAJOR

Message

Unable to instantiate persistence instance %s for database %s

Description

Rhino requires a backing database for persistence of state for failure recovery purposes. A persistent instance defines a connection to a database backend. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance.

Raised

When a persistent instance configuration change occurs but instantiation of that persistent instance fails.

Cleared

When a correct configuration is installed.

rhino.jdbc.persistence-instance-instantiation-failure

Alarm Type

rhino.jdbc.persistence-instance-instantiation-failure

Level

MAJOR

Message

Unable to instantiate persistence instance %s for JDBC configuration with JNDI name %s

Description

A persistent instance defines the connection to the database backend. If the persistent instance cannot be instantiated then JDBC connections cannot be made.

Raised

When a persistent instance configuration change occurs but instantiation of that persistent instance fails.

Cleared

When a correct configuration is installed.


Category: Event Router State

Alarms raised by event router state management

rhino.state.convergence-failure

Alarm Type

rhino.state.convergence-failure

Level

MAJOR

Message

State convergence failed for "%s". The component remains in the "%s" state.

Description

A component reported an unexpected error during convergence.

Raised

When a configuration change requiring a component to change state does not complete convergence due to an error.

Cleared

When the component transitions to the configured desired state.

rhino.state.convergence-timeout

Alarm Type

rhino.state.convergence-timeout

Level

MINOR

Message

State convergence timed out for "%s". The component remains in the "%s" state. Convergence will be retried periodically until it reaches the desired state.

Description

A component has not transitioned to the effective desired state after the timeout period.

Raised

When a configuration change requiring a component to change state does not complete convergence in the expected time.

Cleared

When the component transitions to the configured desired state.

rhino.state.raentity.active-reconfiguration

Alarm Type

rhino.state.raentity.active-reconfiguration

Level

MINOR

Message

Resource adaptor entity "%s" does not support active reconfiguration. Configuration changes will not take effect until the resource adaptor entity is deactivated and reactivated

Description

A resource adaptor entity is of a type that does not support active reconfiguration but has a desired state that contains configuration properties different from those in the actual state.

Raised

When a configuration change requires a resource adaptor entity to be reconfigured but the resource adaptor does not support active reconfiguration.

Cleared

When the resource adaptor entity is deactivated and convergence has updated the configuration properties.

rhino.state.unlicensed-raentity

Alarm Type

rhino.state.unlicensed-raentity

Level

MAJOR

Message

No valid license for resource adaptor entity "%s" found. The resource adaptor entity has not been activated.

Description

A licensing problem was detected during resource adaptor entity activation.

Raised

When a node attempts to transition a resource adaptor entity from an actual state of INACTIVE to an actual state of ACTIVE but absence of a valid license prevents that transition.

Cleared

When a valid license is installed.

rhino.state.unlicensed-service

Alarm Type

rhino.state.unlicensed-service

Level

MAJOR

Message

No valid license for service "%s" found. The service has not been activated.

Description

A licensing problem was detected during service activation.

Raised

When a node attempts to transition a service from an actual state of INACTIVE to an actual state of ACTIVATING but absence of a valid license prevents that transition.

Cleared

When a valid license is installed.

rhino.state.unlicensed-slee

Alarm Type

rhino.state.unlicensed-slee

Level

CRITICAL

Message

No valid license for the SLEE found. The SLEE has not been started.

Description

A licensing problem was detected during SLEE start.

Raised

When a node attempts to transition its SLEE from an actual state of STOPPED to an actual state of STARTING but absence of a valid license prevents that transition.

Cleared

When a valid license is installed.


Category: GroupRMI

Alarms raised by the GroupRMI server

rhino.group-rmi.dangling-transaction

Alarm Type

rhino.group-rmi.dangling-transaction

Level

WARNING

Message

Group RMI invocation %s completed leaving an active transaction dangling: %s. Please report this bug to Metaswitch support.

Description

A group RMI invocation completed without committing or rolling back a transaction that it started. The dangling transaction will be automatically rolled back by the group RMI server to prevent future issues but these occurrences are software bugs that should be reported.

Raised

When a group RMI invocation completes leaving an active transaction dangling.

Cleared

Never, must be cleared manually.


Category: Key/Value Store

Alarms raised by key/value store persistence resource managers

rhino.kv-store.no-persistence-config

Alarm Type

rhino.kv-store.no-persistence-config

Level

CRITICAL

Message

Persistence resource config for %s has been removed

Description

A persistence resource configuration referenced in rhino-config.xml has been removed at runtime.

Raised

When an in-use persistence resource configuration is removed by a configuration update.

Cleared

When the persistence resource configuration is restored.

rhino.kv-store.no-persistence-instances

Alarm Type

rhino.kv-store.no-persistence-instances

Level

CRITICAL

Message

Persistence resource config for %s has no active persistence instances

Description

A persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated.

Raised

When an in-use persistence resource configuration has no active persistence instances.

Cleared

When at least one active persistence instance exists for the persistence resource configuration.

rhino.kv-store.persistence-instance-instantiation-failure

Alarm Type

rhino.kv-store.persistence-instance-instantiation-failure

Level

MAJOR

Message

Unable to instantiate persistence instance %s for key/value store %s

Description

A persistence instance used by a key/value store cannot be instantiated. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance.

Raised

When a persistent instance configuration change occurs but instantiation of that persistent instance fails.

Cleared

When a correct configuration is installed.


Category: Licensing

Alarms raised by Rhino licensing

rhino.license.expired

Alarm Type

rhino.license.expired

Level

MAJOR

Message

License with serial "%s" has expired

Description

A license installed in Rhino has passed its expiry time.

Raised

When a license expires and there is no superseding license installed.

Cleared

When the license is removed or a superseding license is installed.

rhino.license.over-licensed-capacity

Alarm Type

rhino.license.over-licensed-capacity

Level

MAJOR

Message

Over licensed capacity for function "%s".

Description

The work done by a function exceeds licensed capacity.

Raised

When the amount of work processed by the named function exceeds the licensed capacity.

Cleared

When the amount of work processed by the function becomes less than or equal to the licensed capacity.

rhino.license.over-limit

Alarm Type

rhino.license.over-limit

Level

MAJOR

Message

Rate limiter throttling active, throttled to %d events/second

Description

Rate limiter throttling is active. This throttling, and hence this alarm, occurs only in SDK versions of Rhino, not production versions.

Raised

When there is more incoming work than the licensed limit allows and Rhino starts rejecting some of it.

Cleared

When the total input rate (both accepted and rejected work) drops below the licensed limit.

rhino.license.partially-licensed-host

Alarm Type

rhino.license.partially-licensed-host

Level

MINOR

Message

Host "%s" is not fully licensed. Not all hardware addresses on this host match those licensed. Please request a new license for host "%s".

Description

The hardware addresses listed in a host-based license only partially match those on the host.

Raised

When a host-based license with invalid host addresses is installed.

Cleared

When the license is removed, or a superseding license is installed.

rhino.license.pending-expiry

Alarm Type

rhino.license.pending-expiry

Level

MAJOR

Message

License with serial "%s" is due to expire on %s

Description

A license installed in Rhino is within seven days of its expiry time.

Raised

Seven days before a license expires, when there is no superseding license installed.

Cleared

When the license expires, the license is removed, or a superseding license is installed.

rhino.license.unlicensed-function

Alarm Type

rhino.license.unlicensed-function

Level

MAJOR

Message

There are no valid licenses installed for function "%s" and version "%s".

Description

A particular function is not licensed.

Raised

When a unit of an unlicensed function is requested.

Cleared

When a license covering the function is installed and another unit is requested.

rhino.license.unlicensed-host

Alarm Type

rhino.license.unlicensed-host

Level

MINOR

Message

"%s" is not licensed. Hardware addresses on this host did not match those licensed, or hostname has changed. Please request a new license for host "%s".

Description

The hardware addresses listed in a host-based license do not match those on the host.

Raised

When a host-based license with invalid host addresses is installed.

Cleared

When the license is removed, or a superseding license is installed.

rhino.license.unlicensed-rhino

Alarm Type

rhino.license.unlicensed-rhino

Level

MAJOR

Message

Rhino platform is no longer licensed

Description

Rhino does not have a valid license installed.

Raised

When a license expires or is removed leaving Rhino in an unlicensed state.

Cleared

When an appropriate license is installed.


Category: Limiting

Alarms raised by Rhino limiting

rhino.limiting.below-negative-capacity

Alarm Type

rhino.limiting.below-negative-capacity

Level

WARNING

Message

Token count in rate limiter "%s" capped at negative saturation point on node %d. Too much work has been forced. Alarm will clear once token count >= 0.

Description

A rate limiter is below negative capacity.

Raised

By a rate limiter when a very large number of units have been forcibly used and the internal token counter has saturated at the most negative 32-bit integer value (-2,147,483,648).

Cleared

When the token count becomes greater than or equal to zero.

rhino.limiting.stat-limiter-misconfigured

Alarm Type

rhino.limiting.stat-limiter-misconfigured

Level

MAJOR

Message

Stat limiter "%s" is misconfigured: %s. All unit requests will be allowed by this limiter until the error is resolved.

Description

A stat limiter is misconfigured.

Raised

By a stat limiter that has been asked for one or more units and has been unable to find the configured parameter set or statistic name.

Cleared

When the stat limiter is reconfigured or the configured parameter set that was missing is deployed.


Category: Logging

Alarms raised by Rhino logging

rhino.logging.appender-error

Alarm Type

rhino.logging.appender-error

Level

MAJOR

Message

An error occurred logging to an appender: %s

Description

An appender threw an exception when a logger attempted to pass log messages to it.

Raised

When an appender throws an AppenderLoggingException when a logger tries to log to it.

Cleared

When the problem with the given appender has been resolved and the logging configuration is updated.


Category: M-lets Startup

Alarms raised by the M-let starter

rhino.mlet.loader-failure

Alarm Type

rhino.mlet.loader-failure

Level

MAJOR

Message

Error registering MLetLoader MBean

Description

The M-Let starter component could not register itself with the platform MBean server. This normally indicates a serious JVM misconfiguration.

Raised

During Rhino startup if an error occurred registering the m-let loader component with the MBean server.

Cleared

Never, must be cleared manually or Rhino restarted.

rhino.mlet.registration-failure

Alarm Type

rhino.mlet.registration-failure

Level

MINOR

Message

Could not create or register MLet: %s

Description

The M-Let starter component could not register an MBean for a configured m-let. This normally indicates an error in the m-let configuration file.

Raised

During Rhino startup if an error occurred starting a configured m-let.

Cleared

Never, must be cleared manually or Rhino restarted with updated configuration.


Category: REM Startup

Alarms raised by embedded REM starter

rhino.rem.missing

Alarm Type

rhino.rem.missing

Level

MINOR

Message

Rhino Element Manager classes not found, embedded REM is disabled.

Description

This version of Rhino is supposed to contain an embedded instance of REM but it was not found, most likely due to a packaging error.

Raised

During Rhino startup if the classes could not be found to start the embedded REM.

Cleared

Never, must be cleared manually.

rhino.rem.startup

Alarm Type

rhino.rem.startup

Level

MINOR

Message

Could not start embedded Rhino Element Manager

Description

There was an unexpected problem while starting the embedded REM. This could be because of a port conflict or packaging problem.

Raised

During Rhino startup if an error occurred starting the embedded REM.

Cleared

Never, must be cleared manually or Rhino restarted with updated configuration.


Category: Remote Timer Server

Alarms raised by the remote timer server

rhino.remote-timer-server.bind-error

Alarm Type

rhino.remote-timer-server.bind-error

Level

CRITICAL

Message

Unable to bind remote timer server to %s %s

Description

The remote timer server is unable to bind a socket listener within the configured range of ports. The Rhino configuration may need to be checked and an alternate or wider range of ports allowed.

Raised

If the remote timer server cannot bind a socket listener within the configured range of ports.

Cleared

Never, must be cleared manually.


Category: Runtime Environment

Alarms related to the runtime environment

rhino.runtime.long-filenames-unsupported

Alarm Type

rhino.runtime.long-filenames-unsupported

Level

WARNING

Message

Filenames with a length of %s characters are unsupported on this filesystem. Unexpected deployment errors may occur as a result

Description

Filenames with the maximum length expected by Rhino are unsupported on this filesystem. Unexpected deployment errors may occur as a result.

Raised

During Rhino startup if the long filename check fails.

Cleared

Never, must be cleared manually or Rhino restarted after being installed on a filesystem supporting long filenames.

rhino.runtime.slee

Alarm Type

rhino.runtime.slee

Level

CRITICAL

Message

SLEE event-routing functions failed to start after node restart

Description

SLEE event-routing functions failed to start after node restart

Raised

During Rhino startup if SLEE event-routing functions fail to restart.

Cleared

Never, must be cleared manually or the node restarted.

rhino.runtime.unsupported.jvm

Alarm Type

rhino.runtime.unsupported.jvm

Level

WARNING

Message

This JVM (%s) is not supported. Supported JVMs are: %s

Description

This JVM is not a supported JVM.

Raised

During Rhino startup if an unsupported JVM was detected.

Cleared

Never, must be cleared manually or Rhino restarted with a supported JVM.


Category: SAS facility

Alarms raised by Rhino SAS Facility

rhino.sas.connection.lost

Alarm Type

rhino.sas.connection.lost

Level

MAJOR

Message

Connection to SAS server at %s:%d is down

Description

Attempting to reconnect to the SAS server.

Raised

When the SAS client loses its connection to the server.

Cleared

When the connection to the server is re-established.

rhino.sas.queue.full

Alarm Type

rhino.sas.queue.full

Level

WARNING

Message

SAS message queue is full

Description

The SAS message queue is full. Some events have not been reported to SAS.

Raised

When the SAS facility's outgoing message queue is full.

Cleared

When the queue has not been full for at least sas.queue_full_interval.


Category: SLEE State

Alarms raised by SLEE state management

rhino.state.slee-start

Alarm Type

rhino.state.slee-start

Level

CRITICAL

Message

The SLEE failed to start successfully.

Description

An unexpected exception was caught during SLEE start.

Raised

When a node attempts to transition its SLEE from an actual state of STOPPED to an actual state of STARTING but an unexpected exception occurred while fulfilling that request.

Cleared

After the desired state of the SLEE is reset to STOPPED.


Category: SNMP

Alarms raised by Rhino SNMP

rhino.snmp.bind-failure

Alarm Type

rhino.snmp.bind-failure

Level

MAJOR

Message

The SNMP agent could not be started on node %d: no addresses were successfully bound.

Description

The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If no ports could be bound, the SNMP agent cannot process any SNMP requests.

Raised

When the SNMP Agent attempts to start listening for requests, but no port in the configured range on any configured interface could be used.

Cleared

When the SNMP Agent is stopped.

rhino.snmp.duplicate-counter-mapping

Alarm Type

rhino.snmp.duplicate-counter-mapping

Level

WARNING

Message

Duplicate counter mappings in parameter set type %s

Description

Multiple counters in the parameter set type configuration map to the same index. The parameter set type mappings will remain inactive until the conflict is resolved.

Raised

When an in-use parameter set type has a configuration with duplicate counter mappings.

Cleared

When the conflict is resolved, either by changing the relevant counter mappings, or if the parameter set type is removed from use.

rhino.snmp.duplicate-oid-mapping

Alarm Type

rhino.snmp.duplicate-oid-mapping

Level

WARNING

Message

Duplicate parameter set type mapping configurations for OID %s

Description

Multiple parameter set type configurations for in-use parameter set types map to the same OID. All parameter set type mappings will remain inactive until the conflict is resolved.

Raised

When multiple in-use parameter set types have configurations that map to the same OID.

Cleared

When the conflict is resolved, either by changing the OID mappings in the relevant parameter set type configurations, or if a parameter set type in conflict is removed from use.

rhino.snmp.general-failure

Alarm Type

rhino.snmp.general-failure

Level

MINOR

Message

The SNMP agent encountered an error during startup: %s

Description

This is a catchall alarm for unexpected failures during agent startup. If an unexpected failure occurs, the state of the SNMP agent is unpredictable and requests may not be successfully processed.

Raised

When the SNMP Agent attempts to start listening for requests, but there is an unexpected failure not covered by other alarms.

Cleared

When the SNMP Agent is stopped.

rhino.snmp.no-bind-addresses

Alarm Type

rhino.snmp.no-bind-addresses

Level

MAJOR

Message

The SNMP agent could not be started on node %d: no suitable bind addresses available.

Description

The SNMP agent listens for requests received on all network interfaces that match the requested SNMP configuration. If no suitable interfaces can be found that match the requested configuration, then the SNMP agent cannot process any SNMP requests.

Raised

When the SNMP Agent attempts to start listening for requests, but no suitable network interface addresses can be found to bind to.

Cleared

When the SNMP Agent is stopped.

rhino.snmp.notification-address-failure

Alarm Type

rhino.snmp.notification-address-failure

Level

MAJOR

Message

Failed to create notification target for address "%s".

Description

This alarm represents a failure to determine an address from the notification target configuration. This can occur if the notification hostname is not resolvable, or if the specified hostname is not parseable.

Raised

During SNMP agent start if a notification target address cannot be determined (e.g. due to a hostname resolution failing).

Cleared

When the SNMP Agent is stopped.

rhino.snmp.partial-failure

Alarm Type

rhino.snmp.partial-failure

Level

MINOR

Message

The SNMP agent failed to bind to the following addresses: %s

Description

The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If this succeeds on some (but not all) interfaces, the SNMP agent can only process requests received via the interfaces that succeeded.

Raised

When the SNMP Agent attempts to start listening for requests, and only some of the configured interfaces successfully bound a UDP port.

Cleared

When the SNMP Agent is stopped.


Category: Scattercast Management

Alarms raised by Rhino scattercast management operations

rhino.scattercast.update-reboot-required

Alarm Type

rhino.scattercast.update-reboot-required

Level

CRITICAL

Message

Scattercast endpoints have been updated. A cluster reboot is required to apply the update. An automatic reboot has been triggered, Manual intervention required if the reboot fails.

Description

Reboot needed to make scattercast update active.

Raised

When scattercast endpoints are updated.

Cleared

On node reboot.


Category: Service State

Alarms raised by service state management

rhino.state.service-activation

Alarm Type

rhino.state.service-activation

Level

MAJOR

Message

Service "%s" failed to activate successfully.

Description

The service threw an exception during service activation, or an unexpected exception occurred while attempting to activate the service.

Raised

When a node attempts to transition a service from an actual state of INACTIVE to an actual state of ACTIVATING but the service rejected the activation request or an unexpected exception occurred while fulfilling that request.

Cleared

After the desired state of the service is reset to INACTIVE.


Category: Session Ownership Store

Alarms raised by session ownership store persistence resource managers

rhino.session-ownership-store.no-persistence-config

Alarm Type

rhino.session-ownership-store.no-persistence-config

Level

CRITICAL

Message

Persistence resource config has been removed

Description

The persistence resource configuration referenced in rhino-config.xml has been removed at runtime.

Raised

When an in-use persistence resource configuration is removed by a configuration update.

Cleared

When the persistence resource configuration is restored.

rhino.session-ownership-store.no-persistence-instances

Alarm Type

rhino.session-ownership-store.no-persistence-instances

Level

CRITICAL

Message

Persistence resource config has no active persistence instances

Description

The persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated.

Raised

When an in-use persistence resource configuration has no active persistence instances.

Cleared

When at least one active persistence instance exists for the persistence resource configuration.

rhino.session-ownership-store.persistence-instance-instantiation-failure

Alarm Type

rhino.session-ownership-store.persistence-instance-instantiation-failure

Level

MAJOR

Message

Unable to instantiate persistence instance %s

Description

A persistence instance used by the session ownership store cannot be instantiated. If the persistence instance cannot be instantiated, then the connection cannot be made and state cannot be persisted to that instance.

Raised

When a persistence instance configuration change occurs but instantiation of that persistence instance fails.

Cleared

When a correct configuration is installed.


Category: Threshold Rules

Alarms raised by the threshold alarm rule processor

rhino.threshold-rules.rule-failure

Alarm Type

rhino.threshold-rules.rule-failure

Level

WARNING

Message

Threshold rule %s trigger or reset condition failed to run

Description

A threshold rule’s trigger or reset condition failed to run.

Raised

When a threshold rule condition cannot be evaluated, for example it refers to a statistic that does not exist.

Cleared

When the threshold rule condition is corrected.

rhino.threshold-rules.unknown-parameter-set

Alarm Type

rhino.threshold-rules.unknown-parameter-set

Level

WARNING

Message

Threshold rule %s refers to unknown statistics parameter set '%s'

Description

A threshold rule trigger or reset rule refers to an unknown statistics parameter set.

Raised

When a threshold rule condition cannot be evaluated because it refers to a statistics parameter set that does not exist.

Cleared

When the threshold rule condition is corrected.


Category: Watchdog

Alarms raised by the watchdog

rhino.watchdog.forward-timewarp

Alarm Type

rhino.watchdog.forward-timewarp

Level

WARNING

Message

Forward timewarp of %sms detected at %s

Description

A forward timewarp was detected.

Raised

When the system clock is detected to have progressed by an amount exceeding the sum of the watchdog check interval and the maximum pause margin.

Cleared

Never, must be cleared manually.

rhino.watchdog.no-exit

Alarm Type

rhino.watchdog.no-exit

Level

CRITICAL

Message

System property watchdog.no_exit is set, watchdog will be terminated rather than killing the node if a failed watchdog condition occurs

Description

The system property watchdog.no_exit is set, enabling override of default node termination behaviour on failed watchdog conditions. This can cause catastrophic results and should never be used.

Raised

When the watchdog.no_exit system property is set.

Cleared

Never, must be cleared manually.

rhino.watchdog.reverse-timewarp

Alarm Type

rhino.watchdog.reverse-timewarp

Level

WARNING

Message

Reverse timewarp of %sms detected at %s

Description

A reverse timewarp was detected.

Raised

When the system clock is detected to have progressed by an amount less than the difference between the watchdog check interval and the reverse timewarp margin.

Cleared

Never, must be cleared manually.

Usage

As well as an overview of usage, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples and links to related javadocs:

Procedure                                    rhino-console command          MBean → Operations

Viewing Usage Parameters                     dumpusagestats                 Usage → get<usage-parameter-name>

Enabling and Disabling Usage Notifications   setusagenotificationsenabled   UsageNotificationManager → set<usage-parameter-name>NotificationsEnabled

Viewing Usage Notification Status            listusagenotificationsenabled  UsageNotificationManager → get<usage-parameter-name>NotificationsEnabled

Creating Usage Parameter Sets                createusageparameterset        ServiceUsage → createUsageParameterSet
                                                                            ProfileTableUsage → createUsageParameterSet
                                                                            ResourceUsage → createUsageParameterSet

Listing Usage Parameter Sets                 listusageparametersets         ServiceUsage → getUsageParameterSets
                                                                            ProfileTableUsage → getUsageParameterSets
                                                                            ResourceUsage → getUsageParameterSets

Removing Usage Parameter Sets                removeusageparameterset        ServiceUsage → removeUsageParameterSet
                                                                            ProfileTableUsage → removeUsageParameterSet
                                                                            ResourceUsage → removeUsageParameterSet

About Usage

A usage parameter is a parameter that an object in the SLEE can update, to provide usage information.

There are two types:

  • Counter-type usage parameters have values that can be incremented or decremented.

  • Sample-type usage parameters accumulate sample data.

Accessing usage parameters

Administrators can access usage parameters through the SLEE’s management interface.

Management clients can access usage parameters through the usage parameters interface declared in an SBB, resource adaptor or profile specification. Usage parameters cannot be created through the management interface. Instead, a usage parameters interface must be declared in the SLEE component. For example, an SBB declares an sbb-usage-parameters-interface element in the SBB deployment descriptor (similar procedures apply for resource adaptors and profile specifications).
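For illustration, a usage parameters interface declared by an SBB might look like the following minimal sketch. The interface and parameter names are hypothetical; the method naming follows the SLEE conventions of increment<Name> for counter-type parameters and sample<Name> for sample-type parameters.

package com.example.vpn;

// Hypothetical usage parameters interface for an SBB.
public interface VpnUsage {
    // Counter-type usage parameter "callAttempts":
    // values can be incremented or decremented.
    void incrementCallAttempts(long value);

    // Sample-type usage parameter "setupTime":
    // accumulates sample data.
    void sampleSetupTime(long value);
}

The SLEE provides the implementation of this interface; the SBB simply obtains an instance and calls these methods to update usage information.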

You can also use notifications to output usage parameters to management clients.

Creating named usage parameter sets

By default, the SLEE creates unnamed usage parameter sets for a notification source. You can also create named usage parameter sets, for example to hold multiple values of usage parameters for the same notification source.

Rhino usage extensions

To alleviate the limitations of the SLEE-defined usage mechanism, Rhino provides a usage extension mechanism that allows an SBB or resource adaptor to declare multiple usage parameters interfaces, and defines a Usage facility with which SBBs and resource adaptors can manage and access their own usage parameter sets.

Viewing Usage Parameters

To view the current value of a usage parameter, use the following rhino-console command or related MBean operation.

Note Whereas the MBean operation below can only get individual usage parameter values, the console command outputs current values of all usage parameters for a specified notification source.

Console command: dumpusagestats

Command

dumpusagestats <type> <notif-source> [param-set-name] [reset]
  Description
    Dump the current values of the usage parameters for the specified notification
    source.  The usage parameter set name is optional and if not specified the
    values for the unnamed (or root) parameter set are returned.  If [reset] is
    specified, the values of the usage parameters are reset after being obtained

Example

$ ./rhino-console  dumpusagestats sbb \
  "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]"
parameter-name       counter-value   sample-stats   type
-------------------  --------------  -------------  --------
       callAttempts               0                  counter
  missingParameters               0                  counter
        offNetCalls               0                  counter
         onNetCalls               0                  counter
   unknownShortCode               0                  counter
 unknownSubscribers               0                  counter
6 rows

MBean operation: get<usage-parameter-name>

MBean

SLEE-defined

Counter-type usage parameters
public long get<usage-parameter-name>(boolean reset)
        throws ManagementException;

Sample-type usage parameters
public SampleStatistics get<usage-parameter-name>(boolean reset)
        throws ManagementException;

Arguments

This operation requires that you specify whether the values are to be reset after being read:

  • reset — boolean value to reset the usage parameter’s value after being read

Return value

Operations for counter-type usage parameters return the current value of the counter. Operations for sample-type usage parameters return a SampleStatistics object.
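As a sketch of how a management client might read such a value over JMX, the following assumes a counter-type parameter named callAttempts; the Usage MBean ObjectName must be obtained from the appropriate usage management MBean for the notification source, and the names here are illustrative only.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ReadUsageParameter {
    // Reads the "callAttempts" counter without resetting it.
    public static long readCallAttempts(MBeanServerConnection conn,
                                        ObjectName usageMBean) throws Exception {
        // get<usage-parameter-name>(boolean reset): pass false so the
        // counter value is left unchanged after being read.
        return (Long) conn.invoke(usageMBean,
                                  "getCallAttempts",
                                  new Object[] { Boolean.FALSE },
                                  new String[] { boolean.class.getName() });
    }
}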

Usage Notifications

You can enable or disable usage notifications, and list which usage notifications are enabled:

Enabling and Disabling Usage Notifications

To enable or disable usage notifications, use the following rhino-console command or related MBean operation.

Note
The notifications-enabled flag

To enable notifications to output usage parameters to management clients, set the usage notifications-enabled flag, and an appropriate debug level for the SLEE component’s relevant tracer. To disable notifications, unset the notifications-enabled flag.

Console command: setusagenotificationsenabled

Command

setusagenotificationsenabled <type> <notif-source> [upi-type] <param-name>
<flag>
  Description
    Set the usage notifications-enabled flag for specified usage notification
    source's usage parameter.  The usage parameters interface type is optional and
    if not specified the root usage parameters interface type is used

Example

$ ./rhino-console setusagenotificationsenabled sbb \
    "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]" \
    callAttempts true
Usage notifications for usage parameter callAttempts for
SbbNotification[service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]]
have been enabled

MBean operation: set<usage-parameter-name>NotificationsEnabled

MBean

SLEE-defined

public void set<usage-parameter-name>NotificationsEnabled(boolean enabled)
    throws ManagementException;

Arguments

  • enabled — a flag to enable or disable notifications for this usage parameter.

Notes

Enabling usage notification

Usage notifications are enabled or disabled on a per-usage-parameter basis for each notification source. This means that when usage notifications are enabled for a particular usage parameter, the SLEE generates a usage notification whenever that usage parameter is updated in any usage parameter set belonging to the notification source.
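As a sketch, a management client might enable notifications for a hypothetical callAttempts parameter and then listen for the resulting notifications like this. The MBean ObjectNames and the parameter name are illustrative, and the notifications are assumed to be emitted by the usage MBean for the notification source.

import javax.management.MBeanServerConnection;
import javax.management.Notification;
import javax.management.NotificationListener;
import javax.management.ObjectName;

public class WatchUsage {
    public static void watch(MBeanServerConnection conn,
                             ObjectName notificationManagerMBean,
                             ObjectName usageMBean) throws Exception {
        // set<usage-parameter-name>NotificationsEnabled(true) on the
        // usage notification manager MBean.
        conn.invoke(notificationManagerMBean,
                    "setCallAttemptsNotificationsEnabled",
                    new Object[] { Boolean.TRUE },
                    new String[] { boolean.class.getName() });

        // Print each usage notification as it arrives.
        NotificationListener listener = (Notification n, Object handback) ->
                System.out.println(n.getType() + ": " + n.getMessage());
        conn.addNotificationListener(usageMBean, listener, null, null);
    }
}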

Viewing Usage Notification Status

To list usage parameter status, use the following rhino-console command or related MBean operation.

Note To see which usage parameters management clients are receiving through notifications, you can list usage parameter status.

Console command: listusagenotificationsenabled

Command

listusagenotificationsenabled <type> <notif-source> [upi-type]
  Description
    List the usage notification manager flags for the specified notification source.
     The usage parameters interface type is optional and if not specified the flags
    for the root usage parameters interface type are returned

Example

$ ./rhino-console listusagenotificationsenabled sbb \
    "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]"
parameter-name       notifications-enabled
-------------------  ----------------------
       callAttempts                    true
  missingParameters                   false
        offNetCalls                   false
         onNetCalls                   false
   unknownShortCode                   false
 unknownSubscribers                   false
6 rows

MBean operation: get<usage-parameter-name>NotificationsEnabled

MBean

SLEE-defined

public boolean get<usage-parameter-name>NotificationsEnabled()
        throws ManagementException;

Return value

A flag indicating whether notifications are enabled or disabled for this usage parameter.

Named Usage Parameter Sets

By default, the SLEE creates unnamed usage parameter sets for a notification source. You can also create named usage parameter sets, for example to hold multiple values of usage parameters for the same notification source.

Rhino includes facilities for creating, listing and removing named usage parameter sets for services, resource adaptor entities and profile tables.

This section includes the following procedures:

Warning
Usage parameter sets for internal subsystems (not listed using console command)

The SLEE specification also includes usage parameter sets for "internal subsystems". You can list these, but not create or remove them, since they are part of the SLEE implementation. However, Rhino uses its own statistics API to collect statistics from internal subsystems, so if you try to list usage parameter set names for an internal subsystem using rhino-console, it will always return an empty list.

Creating Usage Parameter Sets

To create a named usage parameter set for services, resource adaptor entities or profile tables, use the following rhino-console or related MBean operations.

Services

Console command: createusageparameterset

Command

createusageparameterset <type> <notif-source> <param-set-name>
  Description
    Create a new usage parameter set with the specified name for the specified
    notification source

Example

$ ./rhino-console createusageparameterset sbb \
    "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]" \
    firstLook
created usage parameter set firstLook for
SbbNotification[service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]]

MBean operation: createUsageParameterSet

MBean

SLEE-defined

public void createUsageParameterSet(SbbID id, String paramSetName)
        throws NullPointerException, UnrecognizedSbbException,
               InvalidArgumentException, UsageParameterSetNameAlreadyExistsException,
               ManagementException;

Arguments

  • id — the component identifier of an SBB, which must be used in a service whose usage information this MBean manages.

  • paramSetName — the usage parameter set name.
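A minimal JMX sketch of the same operation follows. The ServiceUsageMBean ObjectName is illustrative and must be looked up via the service's usage management MBeans; the SbbID values match the console example above.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.SbbID;

public class CreateUsageParameterSet {
    public static void create(MBeanServerConnection conn,
                              ObjectName serviceUsageMBean) throws Exception {
        SbbID sbb = new SbbID("VPN SBB", "OpenCloud", "0.2");
        // createUsageParameterSet(SbbID, String)
        conn.invoke(serviceUsageMBean,
                    "createUsageParameterSet",
                    new Object[] { sbb, "firstLook" },
                    new String[] { SbbID.class.getName(), String.class.getName() });
    }
}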

Resource adaptor entities

Console command: createusageparameterset

Command

createusageparameterset <type> <notif-source> <param-set-name>
  Description
    Create a new usage parameter set with the specified name for the specified
    notification source

Example

$ ./rhino-console createusageparameterset resourceadaptorentity \
    "entity=cdr" \
    cdr-usage
created usage parameter set cdr-usage for RAEntityNotification[entity=cdr]

MBean operation: createUsageParameterSet

MBean

SLEE-defined

public void createUsageParameterSet(String paramSetName)
        throws NullPointerException, InvalidArgumentException,
               UsageParameterSetNameAlreadyExistsException,
               ManagementException;

Arguments

  • paramSetName — the usage parameter set name.

Profile tables

Console command: createusageparameterset

Command

createusageparameterset <type> <notif-source> <param-set-name>
  Description
    Create a new usage parameter set with the specified name for the specified
    notification source

Example

$ ./rhino-console createusageparameterset profiletable \
    "table=PostpaidChargingPrefixTable" \
    ppprefix-usage
created usage parameter set ppprefix-usage for ProfileTableNotification[table=PostpaidChargingPrefixTable]

MBean operation: createUsageParameterSet

MBean

SLEE-defined

public void createUsageParameterSet(String paramSetName)
        throws NullPointerException, InvalidArgumentException,
               UsageParameterSetNameAlreadyExistsException,
               ManagementException;

Arguments

  • paramSetName — the usage parameter set name.

Listing Usage Parameter Sets

To list named usage parameter sets for services, resource adaptor entities or profile tables, use the following rhino-console or related MBean operations.

Services

Console command: listusageparametersets

Command

listusageparametersets <type> <notif-source>
  Description
    List the usage parameter sets for the specified notification source.  The
    unnamed (or root) parameter set is not included in this list

Example

$ ./rhino-console listusageparametersets sbb \
      "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]"
firstLook
secondLook

MBean operation: getUsageParameterSets

MBean

SLEE-defined

public String[] getUsageParameterSets(SbbID id)
        throws NullPointerException, UnrecognizedSbbException,
              InvalidArgumentException, ManagementException

Arguments

  • id — the component identifier of an SBB, which must be used in a service whose usage information this MBean manages.

Resource adaptor entities

Console command: listusageparametersets

Command

listusageparametersets <type> <notif-source>
  Description
    List the usage parameter sets for the specified notification source.  The
    unnamed (or root) parameter set is not included in this list

Example

$ ./rhino-console listusageparametersets resourceadaptorentity \
      "entity=cdr"
cdr-usage

MBean operation: getUsageParameterSets

MBean

SLEE-defined

public String[] getUsageParameterSets()
        throws ManagementException

Profile tables

Console command: listusageparametersets

Command

listusageparametersets <type> <notif-source>
  Description
    List the usage parameter sets for the specified notification source.  The
    unnamed (or root) parameter set is not included in this list

Example

$ ./rhino-console listusageparametersets profiletable \
      "table=PostpaidChargingPrefixTable"
ppprefix-usage

MBean operation: getUsageParameterSets

MBean

SLEE-defined

public String[] getUsageParameterSets()
        throws ManagementException

Removing Usage Parameter Sets

To remove a named usage parameter set for services, resource adaptor entities or profile tables, use the following rhino-console or related MBean operations.

Services

Console command: removeusageparameterset

Command

removeusageparameterset <type> <notif-source> <param-set-name>
  Description
    Remove the existing usage parameter set with the specified name from the
    specified notification source

Example

$ ./rhino-console  removeusageparameterset sbb \
      "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]" \
      secondLook
removed usage parameter set secondLook for
SbbNotification[service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]]

MBean operation: removeUsageParameterSet

MBean

SLEE-defined

public void removeUsageParameterSet(SbbID id, String paramSetName)
        throws NullPointerException, UnrecognizedSbbException,
              InvalidArgumentException, UnrecognizedUsageParameterSetNameException,
              ManagementException;

Arguments

  • id — the component identifier of an SBB, which must be used in a service whose usage information this MBean manages.

  • paramSetName — the usage parameter set name.

Resource adaptor entities

Console command: removeusageparameterset

Command

removeusageparameterset <type> <notif-source> <param-set-name>
  Description
    Remove the existing usage parameter set with the specified name from the
    specified notification source

Example

$ ./rhino-console  removeusageparameterset resourceadaptorentity \
      "entity=cdr" \
      cdr-usage
removed usage parameter set cdr-usage for RAEntityNotification[entity=cdr]

MBean operation: removeUsageParameterSet

MBean

SLEE-defined

public void removeUsageParameterSet(String paramSetName)
        throws NullPointerException,
              InvalidArgumentException, UnrecognizedUsageParameterSetNameException,
              ManagementException;

Argument

  • paramSetName — the usage parameter set name.

Profile tables

Console command: removeusageparameterset

Command

removeusageparameterset <type> <notif-source> <param-set-name>
  Description
    Remove the existing usage parameter set with the specified name from the
    specified notification source

Example

$ ./rhino-console removeusageparameterset profiletable \
      "table=PostpaidChargingPrefixTable" \
      ppprefix-usage
removed usage parameter set ppprefix-usage for ProfileTableNotification[table=PostpaidChargingPrefixTable]

MBean operation: removeUsageParameterSet

MBean

SLEE-defined

public void removeUsageParameterSet(String paramSetName)
        throws NullPointerException,
                InvalidArgumentException, UnrecognizedUsageParameterSetNameException,
                ManagementException;

Argument

  • paramSetName — the usage parameter set name.

User Transactions

As well as an overview of user transactions, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:

Procedure                        rhino-console command(s)   MBean(s) → Operation

Starting User Transactions       startusertransaction       User Transaction Management → startUserTransaction

Committing User Transactions     commitusertransaction      User Transaction Management → commitUserTransaction

Rolling Back User Transactions   rollbackusertransaction    User Transaction Management → rollbackUserTransaction

About User Transactions

Using the User Transaction Management MBean, a client can demarcate transaction boundaries for a subset of profile-management operations, by:

  • starting a user transaction

  • performing some profile-management operations, across a number of different profiles (in the context of that transaction)

  • then committing the transaction — resulting in an atomic update of profile state.

Binding user transactions with authenticated subjects

The SLEE binds user transactions to the javax.security.auth.Subject associated with the invoking thread. For all user-transaction management, the thread invoking the management operation must therefore be associated with an authenticated subject. The command console interface handles this task as part of the client-login process. (Other user-provided m-lets installed in the Rhino SLEE will need to take care of this requirement in their own way.)

Executing Profile Provisioning operations in a user transaction

Furthermore, accessing a Profile MBean while a user transaction is active:

  • enlists that MBean into that user transaction

  • changes that MBean to the read/write state

  • puts any changes to the profile in context of the user transaction.

Note
Committing or rolling back profiles enlisted in user transactions

You cannot invoke the ProfileMBean.commitProfile() or ProfileMBean.restoreProfile() operations on a Profile MBean enlisted in a user transaction. Any changes made to such a profile will be committed or rolled back when the user transaction is committed or rolled back (respectively).
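Putting these pieces together, a management client might wrap a set of profile updates in a user transaction as in the following sketch. The User Transaction Management MBean ObjectName is illustrative; the operation names are those documented below. Note that if the commit itself fails with a rollback, the explicit rollback call may be redundant.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class AtomicProfileUpdate {
    public static void update(MBeanServerConnection conn,
                              ObjectName userTxMBean) throws Exception {
        conn.invoke(userTxMBean, "startUserTransaction", null, null);
        boolean committed = false;
        try {
            // ... invoke profile-management operations on Profile MBeans
            // here; accessing them now enlists them in the transaction ...
            conn.invoke(userTxMBean, "commitUserTransaction", null, null);
            committed = true;
        } finally {
            if (!committed) {
                // Undo all enlisted profile changes.
                conn.invoke(userTxMBean, "rollbackUserTransaction", null, null);
            }
        }
    }
}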

Starting User Transactions

To start a user transaction, use the following rhino-console command or related MBean operation.

Console command: startusertransaction

Command

startusertransaction
  Description
    Start a client-demarcated transaction.  Note that only a limited set of Rhino
    management operations support user transactions

Example

$ ./rhino-console startusertransaction

MBean operation: startUserTransaction

MBean

Rhino extension

void startUserTransaction()
    throws com.opencloud.rhino.management.usertx.NoAuthenticatedSubjectException,
          NotSupportedException, ManagementException;

Committing User Transactions

To commit a user transaction, use the following rhino-console command or related MBean operation.

Console command: commitusertransaction

Command

commitusertransaction
  Description
    Commit a client-demarcated transaction

Example

$ ./rhino-console commitusertransaction

MBean operation: commitUserTransaction

MBean

Rhino extension

void commitUserTransaction()
    throws com.opencloud.rhino.management.usertx.NoAuthenticatedSubjectException,
           InvalidStateException, ProfileVerificationException,
           HeuristicMixedException, HeuristicRollbackException,
           RollbackException, ManagementException;

Rolling Back User Transactions

To roll back a user transaction, use the following rhino-console command or related MBean operation.

Console command: rollbackusertransaction

Command

rollbackusertransaction
  Description
    Roll back a client-demarcated transaction

Example

$ ./rhino-console rollbackusertransaction

MBean operation: rollbackUserTransaction

MBean

Rhino extension

void rollbackUserTransaction()
    throws com.opencloud.rhino.management.usertx.NoAuthenticatedSubjectException,
           InvalidStateException, ManagementException;

Auditing Management Operations

Rhino logs all management operations to two log files in the working directory of each Rhino node (work/logs by default):

  • management.csv, a plain text CSV file

  • encrypted.management.csv, a second, encrypted copy of the logs.

Tip You can use the encrypted version to detect tampering with the plain text copy — decode with the decrypt-management-log script, in the client/bin directory.
Note The management audit logs roll over once they reach 100MB, with an unlimited number of backup files. This logging configuration is currently hard-coded.
Note The format of the management audit log can be chosen via the rhino.audit.log_format system property. See system properties for more detail.

Below are descriptions of what’s in the log file, the operation types, and how to manage the audit level, followed by two examples.

What’s in the log file?

Rhino management operations logs include the following fields:

Field Description
 date

A timestamp in the form 2010-05-11 14:55:33.692.

 uniqueID

An identifier used to correlate a set of log lines for a single management operation. All of the log lines from the same operation will have the same uniqueID.

 txID

The transaction ID associated with the operation, used to correlate a set of log lines scoped to a single transactional update. This field only has a value:

  • for operations invoked while a user (externally demarcated) transaction is active; or

  • when logging internal state changes that occur as a result of a declarative configuration import operation.

 opcode

Uniquely identifies the type of operation.

 user

The name of the user invoking the management operation, or unknown 1 if there is no authenticated user.

 roles

Any roles associated with the user.

 access

Identifies whether the operation results in a state change of some sort. May be read or write. 2

 client address

The IP address of the client invoking the management operation.

 namespace

The namespace where the management operation was invoked. Empty if it is the default namespace.

 MBean name

ObjectName of the MBean invoked.

 operation type

The general type of operation.

 operation name

The name of the invoked method or get/set attribute.

 arguments

The contents of all arguments passed to the management operation. Byte array arguments are displayed as a length and a hash.

 duration

How long (in milliseconds) the operation took.

 result

Either ok or failed. 3 4

 failure reason

A text string indicating why an operation failed. (Only present for failed results.) 2

Note All management operations except for AUTHENTICATION type operations come in pairs with the first entry indicating the start of an operation, and the second entry indicating success or failure, as well as how long the operation took. Only the result lines make use of the duration, result, and failure reason fields.
Tip For a list of all operations currently recognised by the auditing subsystem, run the getopcodexml command from the command-line console. It will return the complete XML representation of all known management operations.

1 This usually only happens if unauthenticated access has been enabled in Rhino.
2 By default, users with the view permission may only perform read operations.
3 This field is only set for operation results.
4 A failed management operation is one which did not return successfully.

Operation types

The operation type field may contain one of the following values:

Type             Result type               Description

AUTHENTICATION   n/a                       A successful or failed authentication attempt.

INVOKE           INVOKE (RESULT)           An MBean invoke operation.

GET              GET (RESULT)              An MBean attribute get operation.

SET              SET (RESULT)              An MBean attribute set operation.

GET-ATTRIBUTES   GET-ATTRIBUTES (RESULT)   An MBean bulk-attributes GET operation. Log lines with these markers denote a series of related GET requests.

SET-ATTRIBUTES   SET-ATTRIBUTES (RESULT)   An MBean bulk-attributes SET operation. Log lines with these markers denote a series of related SET requests.

Managing the audit level

The auditing subsystem provides two console commands to manage what gets logged to the management audit log:

getmanagementauditlevel
  Description
    Returns the current level of management operation auditing.
setmanagementauditlevel <none | writes | all>
  Description
    Sets the current level of management operation auditing.
Tip The writes level is useful, for example, to avoid filling the audit log with entries from an automated management client that continually polls Rhino state using JMX.
Note Rhino always logs changes to the audit level (irrespective of the current level).
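For example, to restrict the audit log to state-changing operations (any confirmation output is omitted here):

$ ./rhino-console setmanagementauditlevel writes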

Example 1: Resource adaptor deployment and activation

The following example shows management logs from deploying a resource adaptor, creating a resource adaptor entity for it, and activating that resource adaptor entity.

Note The log shows the resource adaptor activated twice in a row, the second operation failing (because the RA was already activated) — see the result and failure fields.
Each log entry is shown below as a set of field: value pairs; fields that are empty in the log are omitted.

date:            2010-06-08 14:22:06.850
uniqueID:        101:176452077447:22
user:            admin
client address:  192.168.0.7
operation type:  AUTHENTICATION
result:          ok

date:            2010-06-08 14:22:35.622
uniqueID:        101:176452077447:29
opcode:          19000
user:            admin
roles:           admin
access:          write
client address:  192.168.0.7
MBean name:      javax.slee.management:name=Deployment
operation type:  INVOKE
operation name:  install
arguments:       [file:/home/alex/simple/simple-ra-ha.jar,
                 [byte array, length=65164, md5sum=96322071e6128333bdee3364a224b48c]]

date:            2010-06-08 14:22:38.961
uniqueID:        101:176452077447:29
opcode:          19000
user:            admin
roles:           admin
access:          write
client address:  192.168.0.7
MBean name:      javax.slee.management:name=Deployment
operation type:  INVOKE (RESULT)
operation name:  install
arguments:       [file:/home/alex/simple/simple-ra-ha.jar,
                 [byte array, length=65164, md5sum=96322071e6128333bdee3364a224b48c]]
duration:        3339ms
result:          ok

date:            2010-06-08 14:22:53.356
uniqueID:        101:176452077447:36
opcode:          22014
user:            admin
roles:           admin
access:          write
client address:  192.168.0.7
MBean name:      javax.slee.management:name=ResourceManagement
operation type:  INVOKE
operation name:  getConfigurationProperties
arguments:       [ResourceAdaptorID[name=Simple,vendor=OpenCloud,version=1.0]]

date:            2010-06-08 14:22:53.359
uniqueID:        101:176452077447:36
opcode:          22014
user:            admin
roles:           admin
access:          write
client address:  192.168.0.7
MBean name:      javax.slee.management:name=ResourceManagement
operation type:  INVOKE (RESULT)
operation name:  getConfigurationProperties
arguments:       [ResourceAdaptorID[name=Simple,vendor=OpenCloud,version=1.0]]
duration:        3ms
result:          ok

date:            2010-06-08 14:22:53.369
uniqueID:        101:176452077447:39
opcode:          22016
user:            admin
roles:           admin
access:          write
client address:  192.168.0.7
MBean name:      javax.slee.management:name=ResourceManagement
operation type:  INVOKE
operation name:  createResourceAdaptorEntity
arguments:       [ResourceAdaptorID[name=Simple,vendor=OpenCloud,version=1.0],
                 simplera,
                 [(Host:java.lang.String=localhost),
                  (Port:java.lang.Integer=14477),
                  (slee-vendor:com.opencloud.rhino_replicate_activities:java.lang.String=none)]]

date:            2010-06-08 14:22:53.536
uniqueID:        101:176452077447:39
opcode:          22016
user:            admin
roles:           admin
access:          write
client address:  192.168.0.7
MBean name:      javax.slee.management:name=ResourceManagement
operation type:  INVOKE (RESULT)
operation name:  createResourceAdaptorEntity
arguments:       [ResourceAdaptorID[name=Simple,vendor=OpenCloud,version=1.0],
                 simplera,
                 [(Host:java.lang.String=localhost),
                  (Port:java.lang.Integer=14477),
                  (slee-vendor:com.opencloud.rhino_replicate_activities:java.lang.String=none)]]
duration:        167ms
result:          ok

date:            2010-06-08 14:23:11.987
uniqueID:        101:176452077447:47
opcode:          22004
user:            admin
roles:           admin
access:          write
client address:  192.168.0.7
MBean name:      javax.slee.management:name=ResourceManagement
operation type:  INVOKE
operation name:  activateResourceAdaptorEntity
arguments:       [simplera,[101]]

date:            2010-06-08 14:23:12.029
uniqueID:        101:176452077447:47
opcode:          22004
user:            admin
roles:           admin
access:          write
client address:  192.168.0.7
MBean name:      javax.slee.management:name=ResourceManagement
operation type:  INVOKE (RESULT)
operation name:  activateResourceAdaptorEntity
arguments:       [simplera,[101]]
duration:        42ms
result:          ok

date:            2010-06-08 14:23:30.802
uniqueID:        101:176452077447:52
opcode:          22004
user:            admin
roles:           admin
access:          write
client address:  192.168.0.7
MBean name:      javax.slee.management:name=ResourceManagement
operation type:  INVOKE
operation name:  activateResourceAdaptorEntity
arguments:       [simplera,[101]]

date:            2010-06-08 14:23:30.820
uniqueID:        101:176452077447:52
opcode:          22004
user:            admin
roles:           admin
access:          write
client address:  192.168.0.7
MBean name:      javax.slee.management:name=ResourceManagement
operation type:  INVOKE (RESULT)
operation name:  activateResourceAdaptorEntity
arguments:       [simplera,[101]]
duration:        18ms
result:          failed
failure reason:  simplera not in INACTIVE state on node(s)[101]

Example 2: Bulk GET operation on Licensing MBean

The example below shows a GET-ATTRIBUTES operation called on the Licensing MBean. It includes queries on four separate attributes: LicenseSummary, LicensedFunctions, LicensedVersions, and Licenses. The result of the bulk-attribute query operation is in the last line.

Note The uniqueID field is the same for all lines representing the GET-ATTRIBUTES operation.
Each log entry is shown below as a set of field: value pairs; fields that are empty in the log are omitted.

date:            2010-05-28 14:07:11.223
uniqueID:        101:175500674962:292
user:            admin
roles:           admin
client address:  192.168.0.7
MBean name:      com.opencloud.rhino:type=Licensing
operation type:  GET-ATTRIBUTES

date:            2010-05-28 14:07:11.223
uniqueID:        101:175500674962:292
opcode:          2008
user:            admin
roles:           admin
access:          read
client address:  192.168.0.7
MBean name:      com.opencloud.rhino:type=Licensing
operation type:  GET
operation name:  LicenseSummary

date:            2010-05-28 14:07:11.223
uniqueID:        101:175500674962:292
opcode:          2005
user:            admin
roles:           admin
access:          read
client address:  192.168.0.7
MBean name:      com.opencloud.rhino:type=Licensing
operation type:  GET
operation name:  LicensedFunctions

date:            2010-05-28 14:07:11.223
uniqueID:        101:175500674962:292
opcode:          2006
user:            admin
roles:           admin
access:          read
client address:  192.168.0.7
MBean name:      com.opencloud.rhino:type=Licensing
operation type:  GET
operation name:  LicensedVersions

date:            2010-05-28 14:07:11.223
uniqueID:        101:175500674962:292
opcode:          2004
user:            admin
roles:           admin
access:          read
client address:  192.168.0.7
MBean name:      com.opencloud.rhino:type=Licensing
operation type:  GET
operation name:  Licenses

date:            2010-05-28 14:07:11.226
uniqueID:        101:175500674962:292
opcode:          2008
user:            admin
roles:           admin
access:          read
client address:  192.168.0.7
MBean name:      com.opencloud.rhino:type=Licensing
operation type:  GET (RESULT)
operation name:  LicenseSummary
duration:        3ms
result:          ok

date:            2010-05-28 14:07:11.226
uniqueID:        101:175500674962:292
opcode:          2005
user:            admin
roles:           admin
access:          read
client address:  192.168.0.7
MBean name:      com.opencloud.rhino:type=Licensing
operation type:  GET (RESULT)
operation name:  LicensedFunctions
duration:        3ms
result:          ok

date:            2010-05-28 14:07:11.226
uniqueID:        101:175500674962:292
opcode:          2006
user:            admin
roles:           admin
access:          read
client address:  192.168.0.7
MBean name:      com.opencloud.rhino:type=Licensing
operation type:  GET (RESULT)
operation name:  LicensedVersions
duration:        3ms
result:          ok

date:            2010-05-28 14:07:11.226
uniqueID:        101:175500674962:292
opcode:          2004
user:            admin
roles:           admin
access:          read
client address:  192.168.0.7
MBean name:      com.opencloud.rhino:type=Licensing
operation type:  GET (RESULT)
operation name:  Licenses
duration:        3ms
result:          ok

date:            2010-05-28 14:07:11.226
uniqueID:        101:175500674962:292
user:            admin
roles:           admin
client address:  192.168.0.7
MBean name:      com.opencloud.rhino:type=Licensing
operation type:  GET-ATTRIBUTES (RESULT)
duration:        3ms
result:          ok

Note The durations listed for the individual GET (RESULT) lines correspond to the duration of the entire GET-ATTRIBUTES operation and not the individual GET components. In the example above, the entire GET-ATTRIBUTES operation took 3ms.

Linked and Shadowed Components

When creating component dependencies in a deployable unit, the specific dependency target may not always be known; for example, the exact version of a dependent library may vary. At other times, an already installed component may need to be replaced with another, perhaps a new version containing a bug fix, and reinstalling all dependent components with updated deployment descriptors is undesirable.

Bindings can help with this problem to some degree; however, they can introduce other issues. Bindings always operate on virtual copies of the original components, and keeping track of copied components can be difficult if many binding operations are made.

Rhino provides a solution to these problems with support for linked and shadowed components.

Linked components

A linked component is a virtual component that provides an alias for some other component. Incoming references to the linked component are redirected to the link target. A linked component’s component type (for example SBB, profile specification, or library) is the same as that of the component it is linked to; and, like all other components, a linked component has a unique identity represented by the (name, vendor, version) tuple.

A linked component identifier can be used anywhere where a regular component identifier is required.

Shadowed components

A shadowed component is an existing component that has been "shadowed" or replaced by a link to another component of the same type. Incoming references to the shadowed component are redirected to the link target rather than using the original component.

Conceptually, linked and shadowed components perform the same function: to redirect an incoming reference to another component. The difference is that a linked component is a new virtual component with a unique identity, whereas a shadow replaces a component that is already installed in the SLEE.

Note
Components supporting links and shadows

The following types of components currently support links and shadows:

  • services

  • SBBs

  • SBB parts

  • profile specifications

  • resource adaptor types

  • resource adaptors

  • event types

  • libraries.

Managing linked components

Below are overviews of the procedures to create, remove, and view the metadata for linked components.

Creating a linked component

You create linked components using the createLinkedComponent management operation. For example, using rhino-console:

[Rhino@localhost:2199 (#0)] createlinkedcomponent sbb name=MySBB,vendor=OpenCloud,version=1.0 MySBBLink OpenCloud 1.0
Component SbbID[name=MySBBLink,vendor=OpenCloud,version=1.0] linked to SbbID[name=MySBB,vendor=OpenCloud,version=1.0]

The first two arguments identify the component type and identifier of the link target. The target component must already exist in the SLEE. The last three arguments define the name, vendor, and version strings for the new linked component identifier.

Removing a linked component

You remove a linked component using the removeLinkedComponent management operation. For example, using rhino-console:

[Rhino@localhost:2199 (#0)] removelinkedcomponent sbb name=MySBBLink,vendor=OpenCloud,version=1.0
Linked component SbbID[name=MySBBLink,vendor=OpenCloud,version=1.0] removed

A linked component cannot be removed if:

  • another component with an install level of VERIFIED or DEPLOYED references it;

  • another linked component specifies this linked component as its target; or

  • another component is shadowed by this linked component.

Viewing linked component metadata

The getDescriptor management operation returns a SLEE ComponentDescriptor object for any component that exists in the SLEE. A ComponentDescriptor object for a linked component has the following properties:

  • its deployable unit is the same as the deployable unit of the link target

  • its source component jar is the same as the source component jar of the link target

  • it contains a vendor-specific data object of type LinkedComponentDescriptorExtensions.

Linked component descriptor extensions

The LinkedComponentDescriptorExtensions class defines Rhino-specific component metadata extensions for linked components. Here’s what it looks like:

package com.opencloud.rhino.management.deployment;

import java.io.Serializable;
import java.util.Date;
import javax.slee.ComponentID;

public class LinkedComponentDescriptorExtensions implements Serializable {
    public LinkedComponentDescriptorExtensions(...) { ... }

    public ComponentID getLinkTarget() { ... }

    public Date getLinkDate() { ... }

    public InstallLevel getInstallLevel() { ... }

    public ComponentID[] getIncomingLinks() { ... }

    public ComponentID[] getShadowing() { ... }

    ...
}
  • The getLinkTarget method returns the component identifier of the link target.

  • The getLinkDate method returns a Date object that specifies the date and time the linked component was created.

  • The getInstallLevel method returns the current install level of the linked component. The install level of a linked component is immaterial, and changing it has no effect on the linked component itself; however, since an install level is a property of all components installed in Rhino, a linked component must have one by definition.

  • The getIncomingLinks method returns the component identifiers of any other linked components that have this linked component as a target.

  • The getShadowing method returns the component identifiers of any other components that have been shadowed by this linked component.
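A short sketch of reading this metadata over JMX follows, continuing the MySBBLink example above. The Deployment MBean ObjectName is illustrative, and the getVendorData() accessor on ComponentDescriptor is an assumption; consult the Rhino javadocs for the exact way the vendor-specific data object is exposed.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.SbbID;
import javax.slee.management.ComponentDescriptor;
import com.opencloud.rhino.management.deployment.LinkedComponentDescriptorExtensions;

public class InspectLink {
    public static void inspect(MBeanServerConnection conn,
                               ObjectName deploymentMBean) throws Exception {
        SbbID link = new SbbID("MySBBLink", "OpenCloud", "1.0");
        ComponentDescriptor desc = (ComponentDescriptor) conn.invoke(
                deploymentMBean, "getDescriptor",
                new Object[] { link },
                new String[] { "javax.slee.ComponentID" });
        // Assumed accessor for the vendor-specific data object.
        LinkedComponentDescriptorExtensions ext =
                (LinkedComponentDescriptorExtensions) desc.getVendorData();
        System.out.println(link + " is linked to " + ext.getLinkTarget()
                + " (link created " + ext.getLinkDate() + ")");
    }
}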

Managing component shadows

Shadowing or unshadowing a component effectively changes the definition of the component; therefore a component can only undergo these transitions if it has an install level of INSTALLED. This ensures that any components that depend on the affected component also have an install level of INSTALLED, and thus will need (re)verifying against the updated component before further use. Rhino will allow a component with an install level of VERIFIED to be shadowed or unshadowed, but will automatically transition the component (and any upstream dependencies) to the INSTALLED install level first. A component with an install level of DEPLOYED must be manually undeployed before a shadow can be created or removed.

Below are overviews of the procedures to shadow, unshadow, and view the shadow metadata for a component.

Shadowing a component

You shadow one component with another using the shadowComponent management operation. For example, using rhino-console:

[Rhino@localhost:2199 (#0)] shadowcomponent sbb name=MySBB,vendor=OpenCloud,version=1.0 name=MySBB,vendor=OpenCloud,version=1.0.2
Component SbbID[name=MySBB,vendor=OpenCloud,version=1.0] shadowed by SbbID[name=MySBB,vendor=OpenCloud,version=1.0.2]

The first two arguments identify the component type and identifier of the component to be shadowed. The last argument identifies the component that this component will be shadowed by. Both the shadowed and shadowing components must already exist in the SLEE.

Warning
Link cycles won’t work

Using shadows, you might inadvertently create a link cycle. For example, if component A is a link to component B, component B is a link to component C, and component C is shadowed by component A, then a link cycle would exist that cannot be resolved to any concrete component. Rhino forbids the creation of such cycles. An attempt to shadow a component with another component that would result in a link cycle will fail with an appropriate error.

Unshadowing a component

You unshadow a component using the unshadowComponent management operation. For example, using rhino-console:

[Rhino@localhost:2199 (#0)] unshadow sbb name=MySBB,vendor=OpenCloud,version=1.0
Shadow removed from component SbbID[name=MySBB,vendor=OpenCloud,version=1.0]

Viewing shadowed component metadata

The getDescriptor management operation returns a SLEE ComponentDescriptor object for any component that exists in the SLEE. The component descriptor for a shadowed component continues to describe the original unshadowed component, but contains a vendor-specific data object of type com.opencloud.rhino.management.deployment.ComponentDescriptorExtensions that includes the following information relevant to shadowing:

  • The getShadowedBy method returns the component identifier of the component that shadows this component. This target component will be used in place of the described component.

  • The getShadowDate method returns a Date object that specifies the date and time the shadow was established.

  • The getShadowing method returns the component identifiers of any other components that have in turn been shadowed by this shadowed component.

Linked and shadowed component resolution

In most cases where a component identifier is specified, Rhino will follow a chain of links and shadows to resolve the component identifier to a concrete target component. Typical cases where this occurs are as follows:

  • wherever a component references another component in its deployment descriptor or in a binding descriptor

  • if a service component is activated or deactivated

  • when a profile table is created from a profile specification
    (though Rhino will report that the profile table was created from the specified component rather than the resolved target)

  • when a resource adaptor entity is created from a resource adaptor
    (though again Rhino will report that the resource adaptor entity was created from the specified component rather than the resolved target)

  • when interrogating or updating a component’s security policy.

Specific cases where a management operation applies directly to a linked or shadowed component rather than its resolved target are as follows:

  • when requesting a component’s metadata descriptor

  • when copying a shadowed component
    (The shadowed component itself is copied, rather than the shadowing component. Links are still resolved when determining the actual component to copy, so an attempt to copy a linked component results in a copy of the resolved target component.)

Additional notes

  • Creating a link to a service component automatically adds a clone of the resolved target service’s statistics with the linked component identifier to the stats manager. For example, if service component A is linked to service component B, then the stats for B can be accessed from the stats manager using either component identifier A or B. The same result will be obtained in each case. Listing the available stats parameter sets will include both A and B.

  • The actual and desired states reported for a linked or shadowed service component are the states of the service component that the link or shadow resolves to. Activating or deactivating the linked or shadowed component has the same effect as activating or deactivating the resolved component.

  • If a resource adaptor entity generates events that may be consumed by a given service component, and a link to that service component is created, then the resource adaptor entity will also be notified about a change to the lifecycle state for the linked component when the state of the target service component changes.

  • A resource adaptor entity may fire an event targeted at a linked service component, and Rhino will deliver the event to the resolved target service component. If an SBB in the service invokes a resource adaptor interface API method while handling that event, then the value returned by the ResourceAdaptorContext.getInvokingService() method will equal the target service component identifier specified by the resource adaptor entity when the event was fired; that is, it will be the linked component identifier. However if an SBB in the service invokes a resource adaptor interface API method while handling an event that had no specific service target, then the value returned by the same getInvokingService() method will be the service component identifier of the resolved service that is actually processing the event.

Component Activation Priorities

Rhino versions 2.4 and above allow configuration of the activation order of SLEE components. This ordering is controlled separately for activating and deactivating components.

Introduction to priorities

In Rhino 2.3.1 and older, RAs and services started in an effectively random order, determined by the indexing hash order within the system.

The priority system added in Rhino 2.4 allows operator control of this order.

Priorities are values between -128 and 127. If a component (service or resource adaptor entity), X, has a numerically higher priority value than another component, Y, then X will be started before Y. Components with the same priority may be started in an arbitrary order, or may be started concurrently. The same rule applies for component stopping priorities; i.e., highest priority stops first.

If you have assigned starting priorities of 100, 20, and 10 to RA entities A, B, and C respectively, and 15 to service S, they will start up in the following order:

  1. activate RA entity A

  2. activate RA entity B

  3. activate service S

  4. activate RA entity C

Note that a service can still potentially receive an event from an RA before it receives a ServiceStartedEvent on the ServiceActivity. This is a separate problem from activation ordering and, given the asynchronous nature of event delivery, not something Rhino can readily prevent. A service that depends on the ServiceStartedEvent may be able to use the service activation callbacks introduced in Rhino 2.4 instead. You may notice that services at the same priority level as RA entities start before the RA entities and stop after them. This ordering is not part of the priority system definition; such components may be started concurrently in a future release, so always use different priorities if you need a specific order.

Managing priorities

Below are overviews of the procedures to manage service priorities, manage RA entity priorities, and list priorities.

Managing service priorities

You get priorities for services using the getStartingPriority and getStoppingPriority management operations.

You set priorities for services using the setStartingPriority and setStoppingPriority management operations.

For example, using rhino-console:

[Rhino@localhost:2199 (#0)] setservicestartingpriority name=MyService,vendor=OpenCloud,version=1.0 10
Service ServiceID[name=MyService,vendor=OpenCloud,version=1.0] activation priority set to 10
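
The same operation can be invoked from a JMX client. The sketch below is illustrative only: it assumes a connected MBeanServerConnection, an ObjectName for the Rhino management MBean that exposes setStartingPriority, and an operation signature of (ServiceID, int); check the Rhino management javadoc for the authoritative MBean name and signature.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.ServiceID;

public class ServicePriorityExample {

    // Hypothetical sketch: set a service's starting priority via JMX.
    // The ObjectName and exact operation signature are assumptions.
    static void setStartingPriority(MBeanServerConnection conn,
                                    ObjectName serviceManagement) throws Exception {
        ServiceID service = new ServiceID("MyService", "OpenCloud", "1.0");
        conn.invoke(serviceManagement,
                    "setStartingPriority",
                    new Object[] { service, 10 },   // priorities range from -128 to 127
                    new String[] { ServiceID.class.getName(), int.class.getName() });
    }
}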

Managing RA entity priorities

You get priorities for RA entities using the getStartingPriority and getStoppingPriority management operations.

You set priorities for RA entities using the setStartingPriority and setStoppingPriority management operations.

For example, using rhino-console:

[Rhino@localhost:2199 (#0)] setraentitystartingpriority sipra 80
Resource adaptor entity sipra activation priority set to 80

Listing priorities

You list priorities for services using the getStartingPriorities and getStoppingPriorities management operations.

You list priorities for RA entities using the getStartingPriorities and getStoppingPriorities management operations.

You get a full combined listing using the liststartingpriorities and liststoppingpriorities commands in rhino-console.

For example:

[Rhino@localhost:2199 (#0)] liststartingpriorities
Starting priorities of installed services and resource adaptor entities:
  80 : resource adaptor entity sipra
  20 : ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1]
  10 : ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8]
   0 : ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1]
       ServiceID[name=SIP Profile Location Service,vendor=OpenCloud,version=1.0]
       ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0]
  -5 : ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8]
Note: Components with the same priority may be started in any order

Declarative Configuration

Declarative configuration decouples the configuration of components and their actual activation state from their declared desired state. This allows the platform to be configured with an intended configuration, with Rhino asynchronously making the required component state transitions to achieve the new configuration. This applies to services, resource adaptor entities, and the activation state of the SLEE.

The expected users of Rhino declarative configuration are tools that manage a cluster (or clusters) of Rhino nodes providing a service. For this use case, declarative configuration bundles replace the role previously held by SLEE management Ant tasks or the use of console commands in scripts. It is possible to use the import of partial declarative configuration bundles for manual SLEE management, but this does not offer any significant benefits over other methods. Notably, the Rhino Element Manager continues to function as an easy to use administrative interface for common maintenance tasks.

In this section...

Concepts and Terminology

Concept Meaning

Declarative configuration

Configuration that describes the intended state of the system, rather than the changes needed to achieve that state. A traditional example is a configuration file. In the case of Rhino, this is represented by configuration fragments and configuration bundles.

Configuration fragment

A YAML document file that contains intended configuration information. This could be about the whole system or only part of the system.

Configuration bundle

A .zip file containing one or more configuration fragments plus metadata that describes the type and structure of the configuration bundle.

Desired state

The intended target state of the system or a part of the system. This includes things such as desired configuration properties (environment entries) and activation state of services, which resource adaptor entities should be present and their configuration and activation state, the activation state of the SLEE, and so on.

Default desired state

The intended target state of the system or a part of the system. This applies to any nodes that do not have a desired state configured and provides similar functionality to the symmetric activation state mode in Rhino releases prior to 3.0.0.

Per-node desired state

The intended target state of the system or a part of the system on a node.

Effective desired state

The computed target state of a part of the system on a node. This is the per-node desired state if configured or the default desired state if no per-node desired state exists for the part of the system.

Actual state

The current operational state of the system or a part of the system.

Convergence

The internally driven process of transitioning the actual state of the system towards the desired state until they are equal.

Complete configuration or complete configuration bundle

A configuration bundle that contains configuration fragments that represent the entire state of the system. A complete configuration bundle is denoted by the configuration bundle containing a format value of complete. Complete configuration bundles replace current system configuration when applied.

Partial configuration or partial configuration bundle

A configuration bundle that contains configuration fragments that represent changes to the state of the system. A partial configuration bundle is denoted by the configuration bundle containing a format value of partial. Partial configuration bundles supplement and alter current system configuration when applied.

Configuration Bundles

A configuration bundle is a zip archive containing one or more configuration fragments as YAML files, each containing a single YAML document. Individual configuration fragments are combined and normalized by Rhino during a declarative import. The content of an individual configuration fragment does not need to be structurally complete; however, the combination of all configuration fragments in a configuration bundle must be both structurally valid and free from contradictory configuration settings.

Configuration fragment YAML files within a configuration bundle must have a .yaml filename extension, but other than this the names of these YAML files and any directory structure imposed on them within a configuration bundle are immaterial. Rhino will scan a configuration bundle and consider only *.yaml files as part of the desired configuration. This means it’s possible to include other content, such as documentation, in a configuration bundle using files with other filename extensions without interfering with the validity of the configuration bundle. Collectively, the configuration YAML files in a configuration bundle must adhere to the configuration bundle schema.

A configuration bundle must include a YAML document which contains a top-level object with the following structure:

Note The examples given here assume the content is given in a single configuration fragment, however as previously stated, it is possible for this content to be split across configuration fragments within a configuration bundle.
rhino-config:
  config-bundle:
    format: complete
    schema-version: '1.0'

The schema version defines the structure of the configuration fragment documents. Currently only one schema version is supported, and this field must be set to '1.0'.

Rhino supports two types of configuration bundles: complete and partial.

  • A complete configuration bundle includes everything that needs to be configured for the deployed application to function, including profiles, resource adaptor entities, tracer configuration, usage parameter sets and desired activation states for the SLEE, services and resource adaptor entities.

  • A partial configuration bundle only includes configuration for some aspects of some of these. For example, it might create a resource adaptor entity link name and set the level for a tracer for a service to FINEST.

Complete Configuration Bundles

A complete configuration bundle includes all intended configuration states for the deployed application. Anything already present in a Rhino instance that is not specifically included in a complete configuration bundle when it’s imported will either be removed from Rhino if it’s removable, or be reverted to a default state otherwise.

The table below illustrates the effect a complete configuration import would have on state not defined in the configuration bundle:

State type                                       Effect of non-declaration in complete configuration bundle

Profile table                                    Removed
Profile                                          Removed
Usage parameter set                              Removed
Tracer level                                     Removed
SBB environment entry                            Reverts to the default value defined in the SBB’s deployment descriptor
Per-node desired state                           Removed
Default desired state                            Reverts to the unconfigured default state: STOPPED for SLEE state and INACTIVE for services and resource adaptor entities
Resource adaptor entity                          Removed
Resource adaptor entity configuration property   Reverts to the default value defined in the resource adaptor’s deployment descriptor
Security permission specification                Reverts to the default permissions defined in the component’s component jar

Partial Configuration Bundles

A partial configuration bundle includes a subset of the configuration state for the deployed application. Configuration not described in the partial configuration bundle will not be modified. In a partial configuration bundle, configuration that is to be removed is marked with the attribute present: false. It is not possible to remove the default desired state for a component.
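
As an illustrative sketch only, the following Java snippet builds a minimal partial configuration bundle in memory: a zip archive containing a single YAML fragment with the partial header described above. The fragment body shown is just the bundle header; real fragments would add the configuration being changed, with entries to remove marked present: false, as defined by the configuration bundle schema.

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class PartialBundleBuilder {

    // Build an in-memory configuration bundle: a zip archive containing one
    // YAML configuration fragment. Fragment file names are immaterial but
    // must use the .yaml extension.
    static byte[] buildPartialBundle() throws Exception {
        String fragment =
                  "rhino-config:\n"
                + "  config-bundle:\n"
                + "    format: partial\n"
                + "    schema-version: '1.0'\n";
                // Configuration changes would follow here; entries to be
                // removed are marked with "present: false".
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(out)) {
            zip.putNextEntry(new ZipEntry("fragment.yaml"));
            zip.write(fragment.getBytes(StandardCharsets.UTF_8));
            zip.closeEntry();
        }
        return out.toByteArray();
    }
}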

Importing a Configuration Bundle

Two user interfaces are provided for importing configuration bundles, both using the same underlying mechanism: the Java Management Bean method ConfigManagementMBean.importConfiguration() and the rhino-console command importdeclarativeconfig.

When a configuration is imported, Rhino returns a JSON document describing either the changes that will be made to the state of the system or a list of validation errors that prevented the import from succeeding. If using rhino-console, the document is parsed and a human-readable status output is printed. You can also save the returned document to a file using the -o <output.json> command parameter.

Note

Version control of configuration bundles imported into Rhino simplifies change management and rollback.

Declarative Configuration Commands

Managing Rhino declarative configuration typically consists of tasks such as importing and exporting configuration bundles, checking and setting the desired state of components and querying whether the actual state of the system has converged to the currently configured desired state.

This section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples and links to related javadocs:

Exporting Configuration

To export configuration, use the following rhino-console command or associated MBean operation.

Console command: exportdeclarativeconfig

Command

exportdeclarativeconfig <zip|directory>
  Description
    Export a declarative configuration bundle which represents the current desired
    component configuration. Outputs the configuration bundle to a directory by
    default, but will instead output a zip file if a .zip suffix is used for the
    destination argument.

Example

To export the configuration bundle site-1-config-backup.zip:

$ ./rhino-console exportdeclarativeconfig site-1-config-backup.zip
Export written to: site-1-config-backup.zip

MBean operation: exportConfiguration

MBean

Rhino operation

public byte[] exportConfiguration()
    throws ManagementException;

This operation:

  • returns a Zip archive containing a single complete config bundle describing the configuration for all namespaces in the cluster.
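
A minimal JMX client sketch for this operation. It assumes a connected MBeanServerConnection and that the operation lives on the same ConfigManagementMBean described below for import; look up the ObjectName via your JMX client or the Rhino management javadoc.

import java.nio.file.Files;
import java.nio.file.Paths;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ConfigExportExample {

    // Fetch the complete configuration bundle and write it to disk.
    static void exportBundle(MBeanServerConnection conn,
                             ObjectName configManagement) throws Exception {
        byte[] bundle = (byte[]) conn.invoke(configManagement,
                "exportConfiguration", new Object[0], new String[0]);
        Files.write(Paths.get("site-1-config-backup.zip"), bundle);
    }
}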

Importing Configuration

To import configuration, use the following rhino-console command or associated MBean operation, providing a configuration bundle. After importing a configuration bundle, it is useful to wait for convergence before directing traffic to the system.

The status document returned by the MBean operation can be saved to disk by providing the -o <output file> option to the console command. The format of this document is described at Declarative Config Import Result Schema.

Console command: importdeclarativeconfig

Command

importdeclarativeconfig <zip|directory> [-dryrun] [-o resultfile.json] [-v]
[-reconfigurationaction
<validateEntityState|deactivateAndReactivate|doNotDeactivate>]
  Description
    Import a declarative configuration bundle into the SLEE.  Source may be either a
    config bundle zip file or a path to a directory containing an unzipped config
    bundle. Specifying -o will output the resulting import status json to a file
    instead of the console. Specifying -v will include verbose elements such as
    stacktraces in the console output. Specifying a reconfiguration action other
    than the default of "doNotDeactivate" will change how resource adaptor entities
    that do not support active reconfiguration are managed during this import.

Example

To import the configuration bundle site-1.zip:

$ ./rhino-console importdeclarativeconfig site-1.zip
Importing configuration...

Result
=======
 Status: Success


Import result reported success; all changes have been applied.

MBean operation: importConfiguration

MBean

Rhino operation

public String importConfiguration(byte[] configBundle, ImportOptions options)
    throws NullPointerException, MalformedConfigBundleException,
           ManagementException;

This operation:

  • validates and imports the configuration provided in the configBundle argument

  • initiates a convergence check in all namespaces affected by the imported configuration

  • returns a status document containing the overall status of the import operation (Success or Failure) and detailed results describing the validation errors found or state changes that resulted from the import.
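
A minimal JMX client sketch for this operation. It assumes a connected MBeanServerConnection and the ConfigManagementMBean ObjectName; how an ImportOptions instance is constructed, and its fully qualified class name (needed for the invoke signature), are not shown here and should be taken from the Rhino management javadoc.

import java.nio.file.Files;
import java.nio.file.Paths;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ConfigImportExample {

    // Import a configuration bundle zip and return the JSON status document.
    // importOptions and importOptionsClassName are assumptions: construct the
    // options object as described in the Rhino management javadoc.
    static String importBundle(MBeanServerConnection conn,
                               ObjectName configManagement,
                               Object importOptions,
                               String importOptionsClassName) throws Exception {
        byte[] bundle = Files.readAllBytes(Paths.get("site-1.zip"));
        return (String) conn.invoke(configManagement,
                "importConfiguration",
                new Object[] { bundle, importOptions },
                new String[] { byte[].class.getName(), importOptionsClassName });
    }
}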

Waiting For Convergence

To check the current convergence status of a Rhino cluster use the following rhino-console commands or associated MBean operation:

Console command: isconverged

Command

isconverged [-nodes node1,node2,...]
  Description
    Check if Rhino's actual state is currently equal to desired state.  If a node
    list is provided, only the specified nodes are checked.

Examples

To check if the actual states of all components match the desired states:

$ ./rhino-console isconverged
Rhino is currently converged to desired state

To check if the actual states of all components match the desired states on node 101:

$ ./rhino-console isconverged -nodes 101
Node [101] is currently converged to desired state

Console command: waittilconverged

Command

waittilconverged [-timeout timeout]
  Description
    Wait for actual state to converge to desired state. The optional timeout is
    specified in seconds

Example

To wait for up to one minute for the actual states of all components to match the desired states:

$ ./rhino-console waittilconverged -timeout 60
Convergence reached. Actual state has converged to desired state.

MBean operation: isConvergedToDesiredState

MBean

Rhino operations

public boolean isConvergedToDesiredState()
    throws ManagementException;

public boolean isConvergedToDesiredState(int[] nodes)
    throws ManagementException;

These operations return true if the actual state of all components in all namespaces matches the desired state. The first method checks all event router nodes in the cluster. The second method only checks the specified cluster nodes.

Retrieving a Convergence Status Report

To retrieve a detailed report on the current convergence status of a Rhino cluster use the following rhino-console command or associated MBean operation:

Console command: reportconvergencestatus

Command

reportconvergencestatus [-nodes node1,node2] [-diff] [-o filename]
  Description
    Report on the current convergence status.  If a node list is provided, only the
    specified nodes are included in the report.  The -diff option will limit the
    report to include only entities where the actual state differs from the desired
    state.  The -o option will output the raw json-formatted report to the specified
    file instead of a human-readable report being output to the console.

Examples

To retrieve a full convergence report for the cluster:

$ ./rhino-console reportconvergencestatus
In the following report, any desired state that is followed by an asterisk (*)
indicates that that desired state is a per-node override from the default.

In the default namespace:
SLEE status:
  Node   Desired state   Actual state
  101    running         running
  102    running         running
  103    stopped *       stopped

Service status:
  service: ServiceID[name=SIS-IN Test Service Composition Selector Service,vendor=OpenCloud,version=0.3]
    Node   Desired state   Actual state
    101    active          active
    102    active          active
    103    inactive *      inactive

  service: ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.3]
    Node   Desired state   Actual state
    101    active          active
    102    active          active
    103    inactive *      inactive

  service: ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.3]
    Node   Desired state   Actual state
    101    active          active
    102    active          active
    103    inactive *      inactive

  service: ServiceID[name=Call Duration Logging Service,vendor=OpenCloud,version=0.3]
    Node   Desired state   Actual state
    101    active          active
    102    active          active
    103    inactive *      inactive

  service: ServiceID[name=VPN Service,vendor=OpenCloud,version=0.3]
    Node   Desired state   Actual state
    101    active          active
    102    active          active
    103    inactive *      inactive

Resource adaptor entity status:
  entity name: insis-ptc-1a
    Node   Desired state   Actual state
    101    active          active
    102    active          active
    103    active          active

  entity name: insis-ptc-1b
    Node   Desired state   Actual state
    101    active          active
    102    active          active
    103    active          active

  entity name: insis-ptc-external
    Node   Desired state   Actual state
    101    active          active
    102    active          active
    103    active          active

To report only on where convergence has not been met, you can use the -diff option:

$ ./rhino-console reportconvergencestatus -diff
Rhino is currently converged to desired state, no differences to report

To save the report to a file in JSON format:

$ ./rhino-console reportconvergencestatus -o convergence-report.json
Output written to file: convergence-report.json

MBean operation: getConvergenceStatus

MBean

Rhino operations

public String getConvergenceStatus(boolean differencesOnly)
    throws ManagementException;

public String getConvergenceStatus(int[] nodeIDs, boolean differencesOnly)
    throws InvalidArgumentException, ManagementException;

These operations return a JSON-formatted string that reports the desired state and actual state for each service and resource adaptor entity in the SLEE, along with the desired state and actual state of the SLEE itself, across all namespaces.

Convergence

When a change is made to the desired state of components, or another configuration change allows a component that previously failed to activate to be retried, Rhino initiates a series of changes to the actual state to converge it with the desired state. First, Rhino calculates the changes required to make the actual state match the desired state. Once the required actions have been calculated, a series of tasks runs to update the state of individual components towards the desired state.

Convergence of components proceeds stepwise with actions ordered roughly by the desired activation state. Typically, deactivation of components will occur first, followed by reconfiguration, then activation. Sometimes components will be unable to transition to the desired state immediately. For example, a service may remain in the Stopping state for some time while SBBs drain. Tasks that depend on the state transition completing, such as removing a resource adaptor entity link name binding, will wait for a short period, then retry. Other tasks that do not depend on a state transition will still execute.

Sometimes a change to the configuration of a component requires additional state transitions to occur first. The two main cases where this is true are reconfiguration of a resource adaptor entity that does not support active reconfiguration, and removal of a resource adaptor entity link name binding. In both cases, any components that must be deactivated for the configuration change to complete will first be deactivated, then the configuration change will be made. Finally, Rhino will attempt to reactivate the components. If the new configuration is missing data required for reactivation to succeed, the components may fail to activate.

Rhino periodically checks that there are no outstanding differences between the actual state and the desired state. If any are found, it will start converging the actual state towards the desired state.

The management interface for importing configuration provides a single method that can be polled until the state has converged: isConvergedToDesiredState(). In rhino-console this polling is contained in the command waittilconverged.
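
A polling sketch in Java, mirroring what waittilconverged does. It assumes a connected MBeanServerConnection and the ConfigManagementMBean ObjectName, and uses the isConvergedToDesiredState() and getConvergenceStatus(boolean) operations documented in this section.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ConvergenceWaitExample {

    // Poll until converged or until the timeout expires. On timeout, print a
    // differences-only convergence report to aid diagnosis.
    static boolean waitUntilConverged(MBeanServerConnection conn,
                                      ObjectName configManagement,
                                      long timeoutMillis) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            boolean converged = (Boolean) conn.invoke(configManagement,
                    "isConvergedToDesiredState", new Object[0], new String[0]);
            if (converged) {
                return true;
            }
            Thread.sleep(1000); // poll interval: an arbitrary choice for this sketch
        }
        String report = (String) conn.invoke(configManagement,
                "getConvergenceStatus",
                new Object[] { Boolean.TRUE },              // differencesOnly
                new String[] { boolean.class.getName() });
        System.err.println(report);
        return false;
    }
}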

Monitoring Convergence

Most users of the Rhino declarative configuration interfaces will want to monitor the progress of the system towards convergence. Rhino provides some basic interfaces to support this.

Checking convergence status

Through rhino-console

The following console commands can be used to monitor convergence status:

 isconverged

Reports whether convergence has been achieved.

 reportconvergencestatus

Provides a detailed report on the convergence status of every stateful entity in the SLEE.

 waittilconverged

Waits until the state of all components has converged.

Via JMX

Poll the system using the ConfigManagementMBean method isConvergedToDesiredState().

Obtain a convergence report using the ConfigManagementMBean method getConvergenceStatus(boolean).

Checking the status of individual components

Through rhino-console

The Rhino console provides commands for querying the desired and actual state of components. There are commands for checking the desired and actual state of the SLEE, resource adaptor entities, and services. You can list these in the console by typing help declarative.

Via JMX

The ResourceManagementMBean, ServiceManagementMBean and SLEEManagementMBean contain methods to get the desired and actual state of components. Tools that create and import configuration bundles can use these methods to retrieve the states and present them to the system administrator.

Developing With Configuration Bundles

There are two major development use cases for configuration bundles: development of applications on the Rhino platform, such as TAS services and Sentinel services, and development of management tools.

When developing a new service, it is useful to write or export a configuration bundle that can be used for testing. For automated testing this may be templated with the test setup tool substituting values as required for the test environment. Performing test setup in this manner is particularly useful for container-based deployment.

Management tools can loosely be divided into two categories: ones that manipulate configuration bundles to provide a service-specific configuration interface, and ones that operate on the state of the Rhino cluster using the declarative configuration management operations. Some examples of tools are:

  • Sentinel VoLTE config-converter: a tool that generates configuration bundles for Rhino from simplified high-level configuration documents describing the operator and site-specific attributes of a Sentinel VoLTE deployment.

  • Initconf: a daemon for managing cloud-based Rhino clusters. Initconf is responsible for ensuring that newly started Rhino VMs are correctly configured, and for performing controlled shutdown of nodes.

Configuration bundle manipulation tools are often task specific and frequently work with configuration fragments, creating, modifying and combining them into a configuration bundle for import into Rhino. They use the configuration-bundle YANG schema to structure and validate the generated fragments.

Rhino management tools can be task specific, such as a simple importer that only uses the ConfigManagementMBean methods to import a configuration bundle and wait until the system state has converged, or general, such as rhino-console. Management tools should use the configuration operations that operate on desired and actual state in preference to the imperative operations, such as ServiceManagementMBean.activate().

Import tools should parse the JSON document returned by ConfigManagementMBean.importConfiguration() to provide useful feedback to the user. Validation errors, in particular, need to be printed with the ID of the associated component. Feedback on the progress of convergence can be provided by using the component management MBeans to get the status of components having an activation state delta in the returned success document.
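
A sketch of minimal result handling using the javax.json (JSON-P) API. The top-level "status" field name used here is an assumption for illustration; the authoritative structure is given by the Declarative Config Import Result Schema.

import java.io.StringReader;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;

public class ImportResultExample {

    // Return true if the import result document reports overall success.
    // The "status" field name is an assumption, not a documented key.
    static boolean importSucceeded(String resultJson) {
        try (JsonReader reader = Json.createReader(new StringReader(resultJson))) {
            JsonObject result = reader.readObject();
            return "Success".equals(result.getString("status", ""));
        }
    }
}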

Version Control

It is advisable to store the complete configuration for operational Rhino clusters in a version control system (VCS). There are several ways to organize a version control repository; a general guideline is to use branches to separate variants and avoid storing multiple copies of files. A simple method is described below.

Store common configuration on a master branch, and the complete configuration for each site on a separate branch derived from this master branch. Store partial configurations alongside the complete configuration they modify in a branch derived from the site that is affected. By doing this, it is easy to follow the history of changes to the configuration as the branch records the history of reconfigurations and makes it easy to identify the prior version if a change needs to be rolled back. Name branches for change rollout according to the nature of the changes that they contain, for example, "ocs-server-migration".

When making a permanent change using partial configuration, update the site master with the changes so future maintenance work has a clear starting reference. Update the site master and, if appropriate, the master branch once the system configuration has been tested as functioning correctly. After applying partial configuration and verifying, export the configuration, unzip it, and add it to the site master branch. This avoids the need to run multiple changes in sequence to restore the configuration of a cluster after data loss.

[Figure: version control branching]

Common Use Cases

The use cases below illustrate the tasks that declarative configuration supports. Many operators and developers will have different approaches to these tasks. Use of declarative configuration simplifies management of variants and migration between versions of the system configuration.

Deploying a new application

A system integrator (SI) creates an application on Rhino to deploy to multiple sites in a customer’s network.

The SI creates a single VM image with the components needed for the application deployed into Rhino and a configuration fragment describing the configuration common to all sites. For each site, the SI creates a configuration fragment describing the configuration of the components specific to that site. They combine this configuration fragment with the common fragment to create a site-specific configuration bundle.

Using the VM image, the SI starts a fleet of virtual machines on each site. They import the site-specific configuration bundle using the rhino-console command importdeclarativeconfig <config-bundle.zip> -o output.json and save the resulting output file. Rhino applies the configuration and converges the actual state of the updated components to match the newly imported desired state.

Upgrade

A developer creates a new version of the application on Rhino to deploy in a customer’s network. The developer has created a VM image with the components needed for the application deployed into Rhino.

The developer uses the rhino-console command exportdeclarativeconfig <old-config-bundle.zip> to write the state of the operational cluster to a file. The developer unzips this configuration bundle and saves it in a version control repository for the customer. The developer uses a transformation script to update component versions and configuration properties from the pre-upgrade values to the post-upgrade ones.

Using the VM image, the developer starts an initial subset of the new cluster. They import the post-upgrade configuration bundle using the rhino-console command importdeclarativeconfig <config-bundle.zip> -o output.json and save the resulting output file. Rhino applies the configuration and converges the actual state of the updated components to match the newly imported desired state.

The developer runs the rhino-console command waittilconverged and watches for alarms. Once the actual state has converged, the system administrator redirects traffic to the new cluster and shuts down VMs hosting the old cluster one by one while booting VMs for the new cluster to replace them.

Maintenance

Unlike versions of Rhino prior to 3.0.0, the declarative state management commands do not require disabling and re-enabling symmetric activation state mode when performing operations on a single Rhino node. The default behaviour is to have a default desired state for the cluster and use temporary per-node state configuration to override this as needed.

Temporary stop

A system administrator using rhino-console can use the desired state management commands to temporarily deactivate nodes for maintenance.

For example, to deactivate a node while running diagnostics that have the potential to interrupt call flow, the administrator runs the console command setsleedesiredstate <node-ID> stopped. After the task is complete, they then run removepernodedesiredstate <node-ID> to return the node to the same state as the rest of the cluster.

Reboot

A system administrator needs to reboot the host VM for OS updates. They run the OS reboot command. The init script for the Rhino service runs the console command shutdown -nodes <node-IDs> -timeout <drain timeout> to shut down the Rhino nodes running on the host. No change is made to the desired state, so when the host restarts, the nodes return to the same state as before the reboot.

Backup

A system administrator makes configuration backups before performing configuration changes. Backups are made before and after importing a partial configuration or after running a series of console commands that change the configuration of components.

The administrator uses the rhino-console command exportdeclarativeconfig <pre-change-bundle.zip> to write the state of the operational cluster to a file. The administrator unzips this configuration bundle and compares it with the latest version in the version control repository. If it is different, the administrator adds it to the repository with a commit message describing the difference.

The administrator makes the planned configuration changes to the system using a management interface. They use the rhino-console command exportdeclarativeconfig <post-change-bundle.zip> to write the state of the operational cluster to a file. They unzip this configuration bundle and save it in the version control repository with a commit message describing the purpose of the change.

Common Problems

A component fails to activate because of a configuration fault

Problem

If a component has a valid but incorrect configuration it may fail to activate. In the case of a service, Rhino will raise an alarm. Resource adaptors are responsible for raising their own alarms.

Solution

Fix the configuration problem and import the new configuration bundle. You can do this by editing the complete configuration previously imported or creating a new partial configuration bundle.

A component fails to start because it is not licensed

Problem

Some components, such as Sentinel VoLTE and the Service Interaction SLEE, require additional license functions to operate. If unlicensed they will fail to activate.

Solution

Install a license that allows the required functions. Rhino will try to activate any components that have a desired state of active and require the newly licensed functions.

Convergence times out

Problem

The state of a component does not converge due to factors inside the SLEE. Typically, this will occur when there are still active calls and SBBs or Activities remain live longer than the convergence timeout period. Rhino raises a convergence-timeout alarm and continues waiting for the component to transition to the next desired state.

2020-08-20 14:37:46.865+1200 Minor [rhino.facility.alarm.manager] <ConvergenceExecutor-0> {ns=test} Alarm 101:255170210055681 [SubsystemNotification[subsystem=Convergence],rhino.state.convergence-timeout,ConvergenceTask[, namespace='test', component=ResourceAdaptorEntityPKey[test:activity-test-ra], desiredState=non_existent]] was raised at 2020-08-20 14:37:46.865 to level Minor

        State convergence timed out for "ResourceAdaptorEntityPKey[test:activity-test-ra]".
The component remains in the "stopping" state.
Convergence will be retried periodically until it reaches the desired state.
Solution

Wait for calls to drain or identify the entity that is preventing the component state transition and remove it. The Rhino console commands findactivities, findsbbs and findtimers will list the entities that are preventing a resource adaptor entity or service from stopping. The commands removeactivity, removeallactivities, removesbb, removeallsbbs and canceltimer will remove these entities. You can also use the Rhino Element Manager to manage the live entities in the system.

Rhino Configuration

This section covers procedures for configuring Rhino upon installation, and as needed (for example to tune performance).

This includes configuring:

Tip See also Management Tools.

Logging

Rhino supports logging to multiple locations and offers granular configuration of loggers and output. The most commonly configured items are log levels and output appenders. Log appenders direct logging output for display and storage, typically to files or the console terminal. Rhino provides management interfaces and commands for configuring the logging framework, and also provides deployed components with access to the logging framework as an extension to the SLEE specification.

This section includes the following topics:

Note JMX clients can access logging management operations via the Logging Management MBean.

About Logging

The Rhino SLEE uses the Apache Log4j 2 logging framework to provide logging facilities for SLEE components and deployed services.

Tip
The Logging Management MBean

Rhino SLEE allows changes to logging configuration at runtime. This is useful for capturing log information to diagnose a problem, without having to restart the SLEE. You configure logging using the Logging Management MBean, through the command console. This MBean lets you query the log configuration, and for most subsystems effect immediate changes (some require a node restart for performance reasons).

Rhino’s logging system includes logger names, log levels, log appenders, filters, and tracers.

Asynchronous Logging

The Log4j 2 logging architecture provides a new approach to asynchronous logging. It uses asynchronous loggers, which submit log events to a work queue for later handling by the appropriate appenders.

More details can be found in the Log4j 2 async loggers documentation.

Rhino offers support for mixed synchronous and asynchronous logging through logger configuration commands. Correctly configuring asynchronous logging involves some complexity, discussed in Configure a Logger.

Mapped Diagnostic Context

Rhino 2.6 introduced access to the Mapped Diagnostic Context (MDC) as a tool to tag and correlate log messages throughout an activity’s life cycle. This tagging can be combined with filters to allow very fine-grained control of logging and tracing.

A simple SIP example of useful context is the P-Charging-Vector header. As this uniquely identifies a single call, it becomes trivial to identify all log messages related to the handling of an individual call. Identification (or filtering) remains simple even under load, with multiple calls handled in parallel.

The Logging Context Facility discusses MDC in greater detail.

Logger names

Subsystems within the Rhino SLEE send log messages to specific loggers. For example, the rhino.facility.alarm logger periodically receives messages about which alarms are currently active within the Rhino SLEE.

Examples of logger names include:

  • root — the root logger, from which all loggers are derived (can be used to change the log level for all loggers at once)

  • rhino — main Rhino logger

  • rhino.management — for log messages related to Rhino management systems

  • trace.<namespace>.<deployable_type>.<notification_source>.<tracer name> — loggers used by deployed SLEE components that use tracers. By default these keys appear abbreviated in console and file logs. Details of tracer abbreviation can be found at Tracer pattern converter.

Log levels

Log levels can be assigned to individual loggers to filter how much information the SLEE produces:

Log level   Information sent

OFF         No messages sent to logs (not recommended).
FATAL       Error messages for unrecoverable errors only (not recommended).
ERROR       Error messages (not recommended).
WARN        Warning messages.
INFO        Informational messages (especially during node startup or deployment of new resource adaptors or services). The default.
DEBUG       Detailed log messages. Used for debugging by Metaswitch Rhino SLEE developers.
TRACE       Finest level. Not currently used.
ALL         All of the above.

Each log level will log all messages for that log level and above. For example, if a logger is set to the INFO level (the default), all of the log messages logged at the INFO, WARN, ERROR, and FATAL levels will be logged as well.

If a logger is not assigned a log level, it inherits its parent’s. For example, if the rhino.management logger has not been assigned a log level, it will have the same effective log level as the rhino logger.

The root logger is a special logger which is considered the parent of all other loggers. By default, the root logger is configured with the INFO log level. In this way, all other loggers output log messages at the INFO log level or above unless explicitly configured otherwise.
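
For example, an administrator could raise a single subsystem to DEBUG while leaving the root logger at INFO, using the setloglevel console command described later in this section (console output omitted):

[Rhino@localhost:2199 (#0)] setloglevel rhino.management DEBUG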

Warning
Use INFO

A lot of useful or crucial information is output at the INFO log level. Because of this, setting log levels to WARN, ERROR or FATAL is not recommended.

Log appenders

System administrators can use the console to create some simple log appenders; full appender creation is available through the Rhino Element Manager (REM). Appenders send log messages to destinations such as the console, a log file, a socket, or a Unix syslog daemon. At runtime, when Rhino logs a message (as permitted by the log level of the associated logger), Rhino sends the message to the log appender for writing. Types of log appenders include:

  • file appenders — which append messages to files (and may be rolling file appenders)

  • console appenders — which send messages to the Rhino console

  • socket appenders — which send messages to a network socket, either raw or syslog-formatted

  • custom appenders — which can do a wide variety of things; common "custom" appenders append to various kinds of database.

Rolling file appenders

Typically, to manage disk usage, administrators are interested in sending log messages to a set of rolling files. They do this by setting up rolling file appenders which:

  • create new log files if the current one gets too large

  • rename old log files as numbered backups

  • delete old logs when a certain number of them have been archived.

Log files roll over when they exceed the configured size; that is, the size is checked after each message is logged, and if the log file is larger than the maximum, the next message will be written to a new file. Since Rhino 2.6.0, only the SDK rolls over log files on node restart; in the default configuration, production nodes use a size-based policy only.

You can configure the size and number of rolled-over log files and the rollover behaviour. Options include size-based, time-based and on node-restart. All configurations described for Log4j 2 are valid: https://logging.apache.org/log4j/2.x/manual/appenders.html#RollingFileAppender

An example logging configuration containing a complex rollover strategy that increments file numbers, retaining up to four historical files, each no older than 30 days:

        <appender name="RhinoLog" plugin-name="RollingFile">
            <layout name="RhinoLogLayout" plugin-name="PatternLayout">
                <property name="pattern" value="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%logger] <%threadName> %mdc : %msg%n%throwable"/>
                <property name="header" value="${rhinoVersionHeader}"/>
            </layout>
            <property name="filePattern" value="${logDir}/rhino.log.%i"/>
            <property name="fileName" value="${logDir}/rhino.log"/>
            <component name="RhinoLogPolicy" plugin-name="SizeBasedTriggeringPolicy">
                <property name="size" value="100MB"/>
            </component>
            <component name="RhinoLogStrategy" plugin-name="NotifyingRolloverStrategy">
                <property name="min" value="1"/>
                <property name="max" value="2147483647"/>
                <component name="deleteAction" plugin-name="Delete">
                    <property name="basePath" value="${logDir}"/>
                    <property name="maxDepth" value="1"/>
                    <component name="fileName" plugin-name="IfFileName">
                        <property name="glob" value="rhino.log.*"/>
                    </component>
                    <component name="any" plugin-name="IfAny">
                        <component name="lastmodified" plugin-name="IfLastModified">
                            <property name="age" value="30d"/>
                        </component>
                        <component name="fileCount" plugin-name="IfAccumulatedFileCount">
                            <property name="exceeds" value="4"/>
                        </component>
                    </component>
                </component>
            </component>
        </appender>

Default appenders

By default, the Rhino SLEE comes configured with the following appenders active:

Appender    Where it sends messages                                                  Logger name    Type of appender

RhinoLog    the Rhino logs directory (work/log/rhino.log)                            root           rolling file appender
STDERR      the Rhino console where a node is running (in a standard error stream)   root           console appender
ConfigLog   work/log/config.log                                                      rhino.config   rolling file appender

Warning
New appenders won’t receive messages until associated with at least one logger

By default, a newly created log appender is not associated with any loggers, so will not receive any log messages.

Appender additivity

If a logger has its additivity flag set to true, the output of its log statements goes to its own appenders and to those of its ancestors. If a specific ancestor has its additivity flag set to false, then the output goes to the appenders of all ancestors up to and including that ancestor, but not to the appenders of that ancestor’s ancestors. (By default, logger additivity flags are set to true.)

Note See Apache’s Log4j 2 Architecture page for details on additivity.

Filters

Filters can be applied to both loggers and appenders to restrict the set of log messages that are reported by a logger or through an appender. They provide a more flexible limiting approach than log level alone. Filters can be configured using the Rhino Element Manager or by modifying an export of the logging configuration. A list of the filters available by default and their configuration properties can be found in the Log4j 2 filter documentation.

An example filter configuration that limits log levels to Warning in namespace volte and Finer in all other namespaces is shown below:

                <component plugin-name="DynamicThresholdFilter">
                    <property name="defaultThreshold" value="Finer"/>
                    <property name="key" value="namespace"/>
                    <component plugin-name="KeyValuePair">
                        <property name="key" value="volte"/>
                        <property name="value" value="Warning"/>
                    </component>
                </component>

If three trace messages are emitted by the service:

tracer.warning("TransparentDataCache(MMTEL-Services) (RepositoryDataAccessKey{REPOSITORY_DATA, userId=tel:+34600000002, userIdType=IMPU, serviceIndication=MMTEL-Services}): [DoUDR] failed to send request")
tracer.finer("Cache gave immediate response. Latency: 1 ms")
tracer.finest("Removing service indication: MMTEL-Services from the session state list.Initial items: [MMTEL-Services]")

With the service deployed in namespace volte, only the Warning message will be logged:

2017-11-14 13:35:38.123+1300 Warning [trace.sh_cache_ra.sh-cache-ra] <jr-4> {ns=volte, txID=101:210487189646097} TransparentDataCache(MMTEL-Services) (RepositoryDataAccessKey{REPOSITORY_DATA, userId=tel:+34600000002, userIdType=IMPU, serviceIndication=MMTEL-Services}): [DoUDR] failed to send request

otherwise both the Finer and Warning messages will be logged:

2017-11-14 13:35:38.123+1300 Warning [trace.sh_cache_ra.sh-cache-ra] <jr-4> {ns=mmtel, txID=101:210487189646097} TransparentDataCache(MMTEL-Services) (RepositoryDataAccessKey{REPOSITORY_DATA, userId=tel:+34600000002, userIdType=IMPU, serviceIndication=MMTEL-Services}): [DoUDR] failed to send request
2017-11-14 13:35:38.137+1300 Finer   [trace.volte_sentinel_sip.2_7_0_copy_1.volte_sentinel_sip.sentinel.sbb] <jr-4> {ns=mmtel, txID=101:210487189646097} Cache gave immediate response. Latency: 1 ms

The default threshold of Finer will cause the Finest message to never be logged.

Logging plugins

Rhino contains several logging plugins to extend the functionality of Log4j 2 to aid SLEE management and provide additional context to logs.

  • NotifyingRolloverStrategy

  • NotifyingDirectWriteRolloverStrategy

  • LogNotificationAppender

  • PolledMemoryAppender

NotifyingRolloverStrategy

An extended variant of the DefaultRolloverStrategy providing an API for components to receive notification of log file rollover. The RolloverNotificationListener can be registered to receive a callback whenever a log file is rolled over. This strategy should be used instead of the Log4j 2 DefaultRolloverStrategy so Rhino can send notifications to monitoring systems.

NotifyingDirectWriteRolloverStrategy

An extended variant of the DirectWriteRolloverStrategy providing an API for components to receive notification of log file rollover. The RolloverNotificationListener can be registered to receive a callback whenever a log file is rolled over. This strategy should be used instead of the Log4j 2 DirectWriteRolloverStrategy so Rhino can send notifications to monitoring systems.

LogNotificationAppender

A log appender for delivering log messages to a listener inside the application. This is used to send log messages to JMX monitoring clients and as SNMP notifications. It is only necessary to use the LogNotificationAppender if using SNMP to receive log messages.

TraceNotificationAppender

A log appender for delivering log messages to a listener inside the application that extracts tracer messages and sends them as TraceNotifications. This is used to send tracer messages to JMX monitoring clients such as REM. It is necessary to use the TraceNotificationAppender if using JMX to receive tracer messages. Without an instance of this appender in the log configuration, REM instances connecting to this Rhino instance will not be able to receive or display tracer messages.

PolledMemoryAppender

A log appender that stores messages in an internal buffer that the REM can poll for live log watching. This implementation is only of use when log output is infrequent enough for human monitoring and has a minor performance cost. It will be removed in a future release of Rhino. We recommend that log files or an external log server be used as the primary log output.

Note See Logging plugins for instructions on enabling additional appender types.

Other plugins

The Log4j 2 project (https://logging.apache.org/log4j/2.x) provides a number of plugins for extending the functionality of Log4j 2. These plugins provide appenders for sending logs to a number of log servers, files and databases, layouts for configuring the format of log messages, and filters to restrict the logging of messages. System integrators or operators can create plugins to add further functionality or support for other log handling systems.

Rhino log configuration properties

Rhino log configuration variables include a rhino namespace containing options useful for providing additional context in log files. These are:

  • ${rhino:node-id}: The node ID of the Rhino node that wrote the log message parameterised with this variable

  • ${rhino:version}: The version of Rhino running at the time the log message parameterised with this variable was written

Tracer objects

SLEE 1.1 provides tracer objects for logging messages from deployed components.

Rhino logs all messages sent to a Tracer under the trace.<notification source>.<tracer name> logger.

In an extension of the SLEE specification, Rhino allows configuration of tracer levels at a coarser grain than the component tracer. This extended functionality is accessed through the Rhino logging configuration. For example, setloglevel trace Finest will set the default tracer level to Finest; all tracers not explicitly set will log at levels from Finest up. To support this SLEE extension, root tracers for individual notification sources inherit their levels from the trace logger. It is also permitted to unset the root tracer level for a given notification source using setTracerLevel; unsetting the root tracer level reverts to using the inherited level.

A further extension of the SLEE specification allows for full use of logging management commands against Tracers. A SLEE 1.1 Tracer may have appenders and filters added to further customise tracing output, both to JMX notifications, and logging destinations. Any supported appender may be used, so logging destinations are not restricted to file only.

Tracer log levels

Trace messages are logged at the level at which they are emitted.

Warning SLEE 1.0-based application components can still use the trace facility (defined in the JAIN SLEE 1.0 specification) for logging, however the trace facility has been deprecated for JAIN SLEE 1.1.

About SLEE 1.1 Tracers

Tracer Interface

In SLEE 1.1, there are more components that may need tracing support. In addition to SBBs, trace messages may also be generated by profile abstract classes and resource adaptors, and potentially any other SLEE subsystem.

All of these components may use the SLEE 1.1 javax.slee.facilities.Tracer interface. The Tracer interface will be familiar to users of other logging APIs. It provides methods for generating traces at different trace levels. Details of the tracing methods available are in the javax.slee.facilities.Tracer javadoc.

Obtaining a Tracer

Components obtain Tracers by calling the getTracer() method on the particular component’s context object. Rhino 2.6 provides com.opencloud.rhino.facilities.ExtendedTracer instances when acquiring a Tracer. If only Rhino 2.6 support is required, the Tracer acquired from a context may be safely cast to ExtendedTracer.

Older Rhino versions provide a com.opencloud.rhino.facilities.Tracer. The older Rhino implementation does not offer the extended logging API that the ExtendedTracer does.

For backwards compatibility, the Rhino 2.6 API library contains a com.opencloud.rhino.facilities.trace.TracerAccessor, which handles safely acquiring a Rhino 2.6 ExtendedTracer.

Example Tracer acquisition

Component          Tracer access method
-----------------  --------------------------------------------------------------------
SBB                ExtendedTracer trace = (ExtendedTracer)SbbContext.getTracer(String)
Profiles           ProfileContext.getTracer(String)
Resource Adaptors  ResourceAdaptorContext.getTracer(String) or
                   TracerAccessor.getExtendedTracer(ResourceAdaptorContext, String)

The string parameter in the above methods is the tracer name. This is a hierarchical name, following Java naming conventions, where the different levels in the hierarchy are delimited by a dot. For example, a tracer named "com.foo" is the parent of "com.foo.bar". Components may create any number of tracers, with different names, for different purposes. Tracers inherit the trace level of their parent in the hierarchy. The tracer named "" (empty string) is the top-level or root tracer. The hierarchical naming is a convention used in most logging APIs, and allows an administrator to easily enable or disable tracing for an entire hierarchy of tracers.

Example
import javax.slee.Sbb;
import javax.slee.SbbContext;
import javax.slee.facilities.Tracer;

import com.opencloud.rhino.facilities.ExtendedTracer;

public abstract class MySbb implements Sbb {

    private Tracer rootTracer;
    private ExtendedTracer fooTracer;
    private SbbContext context;

    public void setSbbContext(SbbContext context) {
        this.context = context;
        this.rootTracer = context.getTracer("");
        this.fooTracer = (ExtendedTracer) context.getTracer("foo");
    }

    ...

    // Generate an INFO trace on the root tracer
    rootTracer.info("An event has occurred");
    ...

    // Generate a WARNING trace on the fooTracer using a parameterised message
    fooTracer.warning("Could not combobulate {}", "discombobulator");
}

Notification Sources

SLEE 1.1 introduces the javax.slee.management.NotificationSource interface, which the SLEE automatically adds to notifications generated by SLEE tracers. As this is automatically associated with the Tracer object, there is no need to manually specify the source as in SLEE 1.0. This solves the problem of identifying which SBB in which service generated a trace message. The NotificationSource explicitly identifies the component that generated the trace, so a management client can easily see which service and SBB the trace came from, allowing filtering by service or SBB.

Tracer Extensions

To alleviate some limitations of the SLEE 1.1 Tracer system, Rhino offers an extended Tracer API. This extended API offers a larger set of tracing methods, to support tracing without string concatenation to build trace messages. Tracer extensions contains details of the Tracer API extensions, and com.opencloud.rhino.facilities.ExtendedTracer javadoc is available.
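As a brief sketch of the difference (the class and method names here are illustrative; the parameterised form follows the MySbb example above):

import javax.slee.facilities.Tracer;

import com.opencloud.rhino.facilities.ExtendedTracer;

public class TracingStyles {

    // Standard SLEE 1.1 Tracer: the message string is built by concatenation,
    // even if the trace is ultimately filtered out by the tracer level.
    static void concatenated(Tracer tracer, String reason) {
        tracer.warning("Could not combobulate " + reason);
    }

    // Rhino ExtendedTracer: a parameterised message defers string construction
    // until the trace is known to be emitted.
    static void parameterised(ExtendedTracer tracer, String reason) {
        tracer.warning("Could not combobulate {}", reason);
    }
}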

Rhino 2.6 Tracer Extensions

In Rhino 2.6, the Tracer subsystem has been substantially reworked. As a result, Tracers are now first class loggers. This means that a Tracer may be manipulated by logging management commands as if it were a logger, with the exception that it will only accept Tracer levels.

Tracers now have very long logger names, as they must be unique to support making Tracers first class loggers. In log files these very long names are inconvenient, as they will frequently cause log entries to run over multiple lines on screen. In order to alleviate this issue, we have included a default tracer name abbreviation system.

Tracer pattern converter

The Tracer abbreviator used by default is based heavily on the logger pattern converter supplied with Log4j 2. See Log4j 2 Pattern Layout for documentation.

The tracer pattern converter shipped with Rhino optionally allows a logger/tracer name component to be removed completely. In contrast, the logger pattern converter always leaves a . literal to show where elements have been abbreviated. The tracer pattern converter also does not implement Log4j 2 integer precision abbreviation, only pattern abbreviation.

Tracer name                                           Pattern           Output
----------------------------------------------------  ----------------  ------------------------
trace.default.resourceadaptorentity.simplera.example  %logger{*.0.0.*}  trace...simplera.example
trace.default.resourceadaptorentity.simplera.example  %tracer{*.0.0.*}  trace.simplera.example

Tracer abbreviation behaviour can be managed through REM or by editing an exported logging.xml.

The default tracer pattern converter configuration shipped with Rhino is shown below.

Default tracer pattern converters
<component plugin-name="MarkerPatternSelector" >
  <property name="defaultPattern" value="%d{yyyy-MM-dd HH:mm:ss.SSSZ} %-7level [%logger] &lt;%threadName&gt; %mdc %msg{nolookups}%n%throwable"/>
  <component plugin-name="PatternMatch">
    <property name="key" value="Trace"/>
    <property name="pattern" value="%d{yyyy-MM-dd HH:mm:ss.SSSZ} ${plainLevel} [%tracer{*.0.0.*}] &lt;%threadName&gt; %mdc %msg{nolookups}%n%throwable"/>
  </component>
  <component plugin-name="PatternMatch">
    <property name="key" value="SbbTrace"/>
    <property name="pattern" value="%d{yyyy-MM-dd HH:mm:ss.SSSZ} ${plainLevel} [%tracer{*.0.0.*.0.*.*.0.0.*}] &lt;%threadName&gt; %mdc %msg{nolookups}%n%throwable"/>
  </component>
</component>

Note that there are three patterns in use here.

Marker                 Pattern                       Use case
---------------------  ----------------------------  ------------------------------------------------------
None (defaultPattern)  %logger                       Non-tracer log messages
SbbTrace               %tracer{*.0.0.*.0.*.*.0.0.*}  Tracer messages logged from an SBB
Trace                  %tracer{*.0.0.*}              Tracer messages logged from anything other than an SBB

Different patterns are required for SBB and non-SBB Tracers, due to the more complex identity of SBB notification sources. An SBB notification source includes both an SBB ID and a service ID; all other notification sources have no equivalent of the service ID.

Creating a File Appender

To create a file appender, use the following rhino-console commands or related MBean operations. Since Rhino 2.6, several varieties of file appender are supported.

There are two major classes of file appender, discussed below. Non-rolling file appenders never roll over log files. Rolling file appenders can roll over log files, and must be configured with automatic rollover rules.

Warning FileName arguments are paths to files, not just file names. To create a log file in the configured logging directory (by default ${NODE_HOME}/work/log), use the property ${logDir} as the leading element of the file name.

Non rolling file appenders

These appenders cannot be rolled over with the rolloverlogfiles console command.

Console command: createfileappender

Command

createfileappender <appenderName> <fileName> [-append <true|false>] [-bufferedIO
<true|false>] [-bufferSize size] [-createOnDemand <true|false>] [-immediateFlush
<true|false>] [-locking <true|false>] [-ignoreExceptions <true|false>] [-pattern
<pattern>]
  Description
    The FileAppender is an appender that writes to the File named in the <fileName>
    parameter.
  Required Arguments
    appenderName  The name of the Appender.
    fileName  The name of the file to write to. If the file, or any of its parent
    directories, do not exist, they will be created.
  Options
    -append  When true, records will be appended to the end of the file. When set to
    false, the file will be cleared before new records are written. The default is
    true.
    -bufferedIO  When true, records will be written to a buffer and the data will be
    written to disk when the buffer is full or, if immediateFlush is set, when the
    record is written. File locking cannot be used with bufferedIO. The default is
    true.
    -bufferSize  The buffer size. The default is 8192 bytes.
    -createOnDemand  When true, the appender creates the file on-demand. The default
    is false.
    -immediateFlush  When true, each write will be followed by a flush. This will
    guarantee the data is written to disk but could impact performance. The default
    is true.
    -locking  When true, I/O operations will occur only while the file lock is held.
    The default is false.
    -ignoreExceptions  When true, exceptions encountered while appending events will
    be internally logged and then ignored. The default is true.
    -pattern  The pattern to use for logging output.

Example

To create a log file in the configured logging directory:
$ ./rhino-console createfileappender myappender "${logDir}/myappender.log"
Done.

To create a log file in an absolute path:
$ ./rhino-console createfileappender myappender /var/logs/rhino/myappender.log
Done.

Console command: createrandomaccessfileappender

Command

createrandomaccessfileappender <appenderName> <fileName> [-append <true|false>]
[-immediateFlush <true|false>] [-bufferSize size] [-ignoreExceptions
<true|false>] [-pattern <pattern>]
  Description
    The RandomAccessFileAppender is an appender that writes to the File named in the
    <fileName> parameter. It is similar to the standard FileAppender except it is
    always buffered.
  Required Arguments
    appenderName  The name of the Appender.
    fileName  The name of the file to write to. If the file, or any of its parent
    directories, do not exist, they will be created.
  Options
    -append  When true, records will be appended to the end of the file. When set to
    false, the file will be cleared before new records are written. The default is
    true.
    -immediateFlush  When true, each write will be followed by a flush. This will
    guarantee the data is written to disk but could impact performance. The default
    is true.
    -bufferSize  The buffer size. The default is 8192 bytes.
    -ignoreExceptions  When true, exceptions encountered while appending events will
    be internally logged and then ignored. The default is true.
    -pattern  The pattern to use for logging output.

Example

$ ./rhino-console createrandomaccessfileappender myappender "${logDir}/myappender.log"
Done.

Console command: creatememorymappedfileappender

Command

creatememorymappedfileappender <appenderName> <fileName> [-append <true|false>]
[-immediateFlush <true|false>] [-regionLength length] [-ignoreExceptions
<true|false>] [-pattern <pattern>]
  Description
    The MemoryMappedFileAppender maps a part of the specified file into memory and
    writes log events to this memory, relying on the operating system's virtual
    memory manager to synchronize the changes to the storage device
  Required Arguments
    appenderName  The name of the Appender.
    fileName  The name of the file to write to. If the file, or any of its parent
    directories, do not exist, they will be created.
  Options
    -append  When true, records will be appended to the end of the file. When set to
    false, the file will be cleared before new records are written. The default is
    true.
    -immediateFlush  When true, each write will be followed by a flush. This will
    guarantee the data is written to disk but could impact performance. The default
    is true.
    -regionLength  The length of the mapped region, defaults to 32 MB.
    -ignoreExceptions  When true, exceptions encountered while appending events will
    be internally logged and then ignored. The default is true.
    -pattern  The pattern to use for logging output.

Example

$ ./rhino-console creatememorymappedfileappender myappender "${logDir}/myappender.log"
Done.

Rolling file appenders

Console command: createrollingfileappender

Command

createrollingfileappender <appenderName> <fileName> <filePattern> <size>
[-append <true|false>] [-bufferedIO <true|false>] [-bufferSize size]
[-createOnDemand <true|false>] [-immediateFlush <true|false>] [-min <min>] [-max
<max>] [-ignoreExceptions <true|false>] [-pattern <pattern>]
  Description
    The RollingFileAppender is an appender that writes to the File named in the
    <fileName> parameter and rolls the file over according the values set by the
    <size> [-min][-max] options.
  Required Arguments
    appenderName  The name of the Appender.
    fileName  The name of the file to write to. If the file, or any of its parent
    directories, do not exist, they will be created.
    filePattern  The pattern of the file name of the archived log file. Both a
    date/time pattern compatible with SimpleDateFormat and/or a %i which represents
    an integer counter can be used.
    size  The file size required before a roll over will occur.
  Options
    -append  When true, records will be appended to the end of the file. When set to
    false, the file will be cleared before new records are written. The default is
    true.
    -bufferedIO  When true, records will be written to a buffer and the data will be
    written to disk when the buffer is full or, if immediateFlush is set, when the
    record is written. File locking cannot be used with bufferedIO. The default is
    true.
    -bufferSize  The buffer size. The default is 8192 bytes.
    -createOnDemand  When true, the appender creates the file on-demand. The default
    is false.
    -immediateFlush  When true, each write will be followed by a flush. This will
    guarantee the data is written to disk but could impact performance. The default
    is true.
    -min  The minimum value of the roll over counter. The default value is 1.
    -max  The maximum value of the roll over counter. Once this values is reached
    older archives will be deleted on subsequent rollovers.
    -ignoreExceptions  When true, exceptions encountered while appending events will
    be internally logged and then ignored. The default is true.
    -pattern  The pattern to use for logging output.

Example

$ ./rhino-console createrollingfileappender myappender "${logDir}/myappender.log" "${logDir}/myappender.log.%i" 100MB
Done.

Console command: createrollingrandomaccessfileappender

Command

createrollingrandomaccessfileappender <appenderName> <fileName> <filePattern>
<size> [-append <true|false>] [-bufferSize size] [-immediateFlush <true|false>]
[-min <min>] [-max <max>] [-ignoreExceptions <true|false>] [-pattern <pattern>]
  Description
    The RollingRandomAccessFileAppender is an appender that writes to the File named
    in the <fileName> parameter and rolls the file over according the values set by
    the <size>[-min][-max] options. It is similar to the standard
    RollingFileAppender except it is always buffered.
  Required Arguments
    appenderName  The name of the Appender.
    fileName  The name of the file to write to. If the file, or any of its parent
    directories, do not exist, they will be created.
    filePattern  The pattern of the file name of the archived log file. Both a
    date/time pattern compatible with SimpleDateFormat and/or a %i which represents
    an integer counter can be used.
    size  The file size required before a roll over will occur.
  Options
    -append  When true, records will be appended to the end of the file. When set to
    false, the file will be cleared before new records are written. The default is
    true.
    -bufferSize  The buffer size. The default is 8192 bytes.
    -immediateFlush  When true, each write will be followed by a flush. This will
    guarantee the data is written to disk but could impact performance. The default
    is true.
    -min  The minimum value of the roll over counter. The default value is 1.
    -max  The maximum value of the roll over counter. Once this values is reached
    older archives will be deleted on subsequent rollovers.
    -ignoreExceptions  When true, exceptions encountered while appending events will
    be internally logged and then ignored. The default is true.
    -pattern  The pattern to use for logging output.

Example

$ ./rhino-console createrollingrandomaccessfileappender myappender "${logDir}/myappender.log" "${logDir}/myappender.log.%i" 100MB
Done.

Create a Socket Appender

Rhino 2.6 supports two varieties of socket appender: configurable-format socket appenders and syslog-format appenders. To create either, use the following rhino-console commands or related MBean operations.

Console command: createsocketappender

Command

createsocketappender <appenderName> <host> <port> [-bufferedIO <true|false>]
[-bufferSize size] [-connectTimeoutMillis <timeout(ms)>] [-immediateFail
<true|false>] [-immediateFlush <true|false>] [-protocol <protocol>]
[-reconnectionDelayMillis <delay(ms)>] [-keyStoreLocation <location>]
[-keyStorePassword <password>] [-trustStoreLocation <location>]
[-trustStorePassword <password>] [-ignoreExceptions <true|false>]
  Description
    The SocketAppender is an appender that writes its output to a remote destination
    specified by a host and port. The data can be sent over either TCP or UDP and
    the default format of the data is to send a Serialized LogEvent.
  Required Arguments
    appenderName  The name of the Appender.
    host  The name or address of the system that is listening for log events.
    port  The port on the host that is listening for log events.
  Options
    -bufferedIO  When true, records will be written to a buffer and the data will be
    written to disk when the buffer is full or, if immediateFlush is set, when the
    record is written. File locking cannot be used with bufferedIO. The default is
    true.
    -bufferSize  The buffer size. The default is 8192 bytes.
    -connectTimeoutMillis  The connect timeout in milliseconds. The default is 0
    (infinite timeout).
    -immediateFail  When set to true, log events will not wait to try to reconnect
    and will fail immediately if the socket is not available.
    -immediateFlush  When true, each write will be followed by a flush. This will
    guarantee the data is written to disk but could impact performance. The default
    is true.
    -protocol  'TCP' (default), 'SSL' or 'UDP'.
    -reconnectionDelayMillis  If set to a value greater than 0, after an error there
    will be an attempt to reconnect to the server after waiting the specified number
    of milliseconds.
    -keyStoreLocation  The location of the keystore for SSL connections.
    -keyStorePassword  The password of the keystore for SSL connections.
    -trustStoreLocation  The location of the truststore for SSL connections.
    -trustStorePassword  The password of the truststore for SSL connections.
    -ignoreExceptions  When true, exceptions encountered while appending events will
    be internally logged and then ignored. The default is true.

Example

$ ./rhino-console createsocketappender myappender localhost 12000
Done.

Console command: createsyslogappender

Command

createsyslogappender <appenderName> <host> <port> <facility> [-advertise
<true|false>] [-appName <name>] [-charset <name>] [-connectTimeoutMillis
<timeout(ms)>] [-enterpriseNumber <number>] [-format <name>] [-id <id>]
[-immediateFail <true|false>] [-immediateFlush <true|false>] [-includeMDC
<true|false>] [-mdcExcludes <key1,key2...>] [-mdcId <id>] [-mdcIncludes
<key1,key2...>] [-mdcRequired <key1,key2...>] [-mdcPrefix <prefix>] [-messageId
<msgid>] [-newLine <true|false>] [-reconnectionDelayMillis <delay(ms)>]
[-keyStoreLocation <location>] [-keyStorePassword <password>]
[-trustStoreLocation <location>] [-trustStorePassword <password>]
[-ignoreExceptions <true|false>] [-protocol <protocol>]
  Description
    The SyslogAppender is a SocketAppender that writes its output to a remote
    destination specified by a host and port in a format that conforms with either
    the BSD Syslog format or the RFC 5424 format.
  Required Arguments
    appenderName  The name of the Appender.
    host  The name or address of the system that is listening for log events.
    port  The port on the host that is listening for log events.
    facility  The facility is used to try to classify the message. The facility
    option must be set to one of 'KERN', 'USER', 'MAIL', 'DAEMON', 'AUTH', 'SYSLOG',
    'LPR', 'NEWS', 'UUCP', 'CRON', 'AUTHPRIV', 'FTP', 'NTP', 'AUDIT', 'ALERT',
    'CLOCK', 'LOCAL0', 'LOCAL1', 'LOCAL2', 'LOCAL3', 'LOCAL4', 'LOCAL5', 'LOCAL6',
    or 'LOCAL7'.
  Options
    -advertise  Indicates whether the appender should be advertised.
    -appName  The value to use as the APP-NAME in the RFC 5424 syslog record.
    -charset  The character set to use when converting the syslog String to a byte
    array. The String must be a valid Charset. If not specified, the default system
    Charset will be used.
    -connectTimeoutMillis  The connect timeout in milliseconds. The default is 0
    (infinite timeout).
    -enterpriseNumber  The IANA enterprise number as described in RFC 5424
    -format  If set to 'RFC5424' the data will be formatted in accordance with RFC
    5424. Otherwise, it will be formatted as a BSD Syslog record.
    -id  The default structured data id to use when formatting according to RFC
    5424. If the LogEvent contains a StructuredDataMessage the id from the Message
    will be used instead of this value.
    -immediateFail  When set to true, log events will not wait to try to reconnect
    and will fail immediately if the socket is not available.
    -immediateFlush  When true, each write will be followed by a flush. This will
    guarantee the data is written to disk but could impact performance. The default
    is true.
    -includeMDC  Indicates whether data from the ThreadContextMap will be included
    in the RFC 5424 Syslog record. Defaults to true.
    -mdcExcludes  A comma separated list of mdc keys that should be excluded from
    the LogEvent.
    -mdcId  The id to use for the MDC Structured Data Element.
    -mdcIncludes  A comma separated list of mdc keys that should be included in the
    FlumeEvent.
    -mdcRequired  A comma separated list of mdc keys that must be present in the
    MDC.
    -mdcPrefix  A string that should be prepended to each MDC key in order to
    distinguish it from event attributes
    -messageId  The default value to be used in the MSGID field of RFC 5424 syslog
    records.
    -newLine  If true, a newline will be appended to the end of the syslog record.
    The default is false.
    -reconnectionDelayMillis  If set to a value greater than 0, after an error there
    will be an attempt to reconnect to the server after waiting the specified number
    of milliseconds.
    -keyStoreLocation  The location of the keystore for SSL connections.
    -keyStorePassword  The password of the keystore for SSL connections.
    -trustStoreLocation  The location of the truststore for SSL connections.
    -trustStorePassword  The password of the truststore for SSL connections.
    -ignoreExceptions  When true, exceptions encountered while appending events will
    be internally logged and then ignored. The default is true.
    -protocol  'TCP' (default), 'SSL' or 'UDP'.

Example

$ ./rhino-console createsyslogappender myappender localhost 12000 USER
Done.

Creating a Console Appender

To create a new Console appender, use the following rhino-console command or related MBean operation.

Console command: createconsoleappender

Command

createconsoleappender <appenderName> [-follow <true|false>] [-direct
<true|false>] [-target <SYSTEM_OUT|SYSTEM_ERR>] [-ignoreExceptions <true|false>]
[-pattern <pattern>]
  Description
    Appends log events to System.out or System.err using a layout specified by the
    user.
  Required Arguments
    appenderName  The name of the Appender.
  Options
    -follow  Identifies whether the appender honors reassignments of System.out or
    System.err
    -direct  Write directly to java.io.FileDescriptor and bypass
    java.lang.System.out/.err. Can give up to 10x performance boost when the output
    is redirected to file or other process.
    -target  Either 'SYSTEM_OUT' or 'SYSTEM_ERR'. The default is 'SYSTEM_OUT'.
    -ignoreExceptions  When true, exceptions encountered while appending events will
    be internally logged and then ignored. The default is true.
    -pattern  The pattern to use for logging output.

Example

$ ./rhino-console  createconsoleappender myappender -target SYSTEM_OUT
Done.

Remove an Appender

To remove a no-longer-required appender, use the following rhino-console command or related MBean method.

Console command: removeappender

Command

removeappender <appenderName>
  Description
    Remove all references to an appender and remove the appender.
  Required Arguments
    appenderName  The name of the Appender.

Example

$ ./rhino-console removeappender TraceNotification
Removed appender: TraceNotification

Attaching an Appender to a Logger

To attach an appender to a logger, use the following rhino-console command or related MBean operation.

Console command: addappenderref

Command

addappenderref <logKey> <appenderName>
  Description
    Adds an appender for a log key.
  Required Arguments
    logKey  The log key of the logger.
    appenderName  The name of the Appender.

Example

To configure log keys to output their logger’s messages to a specific file appender:

$ ./rhino-console addappenderref root myappender
Added appender reference of myappender to root.

Console command: removeappenderref

Command

removeappenderref <logKey> <appenderName>
  Description
    Removes an appender for a log key.
  Required Arguments
    logKey  The log key of the logger.
    appenderName  The name of the Appender.

Example

$ ./rhino-console removeappenderref rhino.main AlarmsLog
Removed appender reference of AlarmsLog from rhino.main.

Configure a Logger

To configure/reconfigure a Logger, use the following console commands and related MBean methods. Since 2.6, Rhino has offered fully asynchronous logging through asynchronous loggers. Asynchronous logging is based on the idea of returning control to the processing thread as early as possible, for maximum throughput.

Rhino allows any individual logger to be asynchronous. This requires careful setup, as the way that log messages are handled is not entirely straightforward.

To get the expected behaviour, that messages to logger foo are logged asynchronously and only once, logger foo must be configured as follows:

  • asynchronous set to true. This makes the logger asynchronous.

  • additivity set to false. This prevents double logging of messages if any parent logger also has a reference to the same appenders.

  • At least one appender reference added. A non-additive logger must have at least one appender reference to log anything.

  • A level set. Asynchronous loggers do not inherit levels from synchronous parents.

As a result of this complexity, there is no rhino-console command to set or get asynchronous alone. Configuring an Asynchronous Logger shows an example.

Possible behaviours with Asynchronous Loggers

An asynchronous logger may not necessarily behave as expected, with all messages always logged asynchronously. Determining the actual behaviour of an asynchronous logger requires examining the whole path back to the first non-additive parent (or the root logger).

Configuration

Behaviour

Logger
name        : rhino.main
level       : INFO
additivity  : false
asynchronous: true
appenders   : [STDERR, RhinoLog, LogNotification, PolledMemoryAppender]
Parent
name        : root
level       : INFO
additivity  : true
asynchronous: <not configured - default is false>
appenders   : [STDERR, RhinoLog, LogNotification, PolledMemoryAppender]

rhino.main logs asynchronously to STDERR, RhinoLog, LogNotification, and PolledMemoryAppender.

Logger
name        : rhino.main
level       : INFO
additivity  : false
asynchronous: true
appenders   : []
Parent
name        : root
level       : INFO
additivity  : true
asynchronous: <not configured - default is false>
appenders   : [STDERR, RhinoLog, LogNotification, PolledMemoryAppender]

rhino.main does not log to parent loggers, as it is not additive. rhino.main logs to nowhere as it has no appenders attached to log to.

Logger
name        : rhino.main
level       : INFO
additivity  : true
asynchronous: true
appenders   : []
Parent
name        : root
level       : INFO
additivity  : true
asynchronous: <not configured - default is false>
appenders   : [STDERR, RhinoLog, LogNotification, PolledMemoryAppender]

rhino.main logs nothing directly, and logs synchronously to its parent logger. As above, rhino.main has no attached appenders to log to. Calls to the parent logger are always synchronous, regardless of logger asynchrony.

Logger
name        : rhino.main
level       : INFO
additivity  : true
asynchronous: true
appenders   : [STDERR, RhinoLog, LogNotification, PolledMemoryAppender]
Parent
name        : root
level       : INFO
additivity  : true
asynchronous: <not configured - default is false>
appenders   : [STDERR, RhinoLog, LogNotification, PolledMemoryAppender]

rhino.main logs asynchronously to all attached appenders, and synchronously to its parent logger. This results in every log message doubling up due to shared appenders.

Logger
name        : rhino.main
level       : INFO
additivity  : true
asynchronous: true
appenders   : [mainAppender]
Parent
name        : root
level       : INFO
additivity  : true
asynchronous: <not configured - default is false>
appenders   : [STDERR, RhinoLog, LogNotification, PolledMemoryAppender]

rhino.main logs asynchronously to mainAppender, and synchronously to parent logger. As no appenders are shared by the loggers, this does not result in messages doubling up. However, calls to rhino.main will synchronously log through appenders attached to root.

Console command: configurelogger

Command

configurelogger <logKey> [-level <level>] [-additivity <additivity>]
[-asynchronous <asynchronosity>] [-appender <appender-ref>]* [-plugin
<plugin-name>]*
  Description
    Set the configuration for a logger.  At least one option must be specified.
    Plugins can be defined using the defineplugincomponent command.

Example

$ ./rhino-console configurelogger root -level info -additivity true -appender STDERR -appender RhinoLog -appender LogNotification
Created/updated logger configuration for root
Make rhino.management log asynchronously to rhino.log only:
$ ./rhino-console configurelogger rhino.management -level info -additivity false -asynchronous true -appender RhinoLog
Created/updated logger configuration for rhino.management

Console command: getloggerconfig

Command

getloggerconfig <logKey>
  Description
    Get the configuration for a logger.
  Required Arguments
    logKey  The log key of the logger.

Example

$ ./rhino-console getloggerconfig rhino
Logger rhino is not configured

$ ./rhino-console getloggerconfig rhino.main
name        : rhino.main
level       : INFO
additivity  : <not configured - default is true>
asynchronous: <not configured - default is false>
appenders   : []

Console command: removeloggerconfig

Command

removeloggerconfig <logKey>
  Description
    Remove the configuration for a logger.
  Required Arguments
    logKey  The log key of the logger.

Example

$ ./rhino-console removeloggerconfig rhino.main
Configuration for logger rhino.main removed

Managing a Logger’s Additivity

To specify whether or not a logger is additive, use the following rhino-console command or related MBean operation.

The meaning of "additivity" is explained in the Appender additivity section of the About Logging page.

Tip Loggers are additive by default.

Console command: setadditivity

Command

setadditivity <logKey> <additivity>
  Description
    Sets whether the log key inherits the log filter level of its parent logger.
  Required Arguments
    logKey  The log key of the logger.
    additivity  Set to true for enabled, false for disabled, or - to use the
    platform default

Example

To make a logger additive:

$ ./rhino-console setadditivity rhino.foo true
Done.

Console command: getadditivity

Command

getadditivity <logKey>
  Description
    Get the configured additivity for a logger.
  Required Arguments
    logKey  The log key of the logger.

Example

To get the configured additivity of a logger:

$ ./rhino-console getadditivity rhino
Logger rhino is not configured - the default additivity (true) would apply to this log key

$ ./rhino-console getadditivity root
Additivity for root is true

Managing a Logger’s Log Level

To manage the log level for a log, use the following rhino-console command or related MBean operation.

Console command: setloglevel

Command

setloglevel <logKey> <logLevel>
  Description
    Set the log level for a logger.
  Required Arguments
    logKey  The log key of the logger.
    logLevel  The log level.

Example

$ ./rhino-console  setloglevel rhino.main info
Log level for rhino.main set to: INFO

Console command: getloglevel

Command

getloglevel <logKey>
  Description
    Get the configured log level for a logger. Displays the effective log level if
    no explicit level is set.
  Required Arguments
    logKey  The log key of the logger.

Examples

$ ./rhino-console getloglevel rhino
Logger rhino does not exist but it has sub-loggers.
Log level for rhino is not set.
Effective (inherited) log level is: INFO

$ ./rhino-console getloglevel rhino.main
Log level for rhino.main is: INFO

Listing Log Appenders

To list available log appenders, use the following rhino-console command or related MBean operation.

Console command: listappenders

Command

listappenders
  Description
    List all currently configured appenders.

Example

[Rhino@localhost (#14)] listAppenders
ConfigLog
STDERR
RhinoLog

Listing Log Keys

To list log keys, use the following rhino-console command or related MBean operation.

Console command: listlogkeys

Command

listlogkeys [-configured <true|false>] [-prefix <prefix>] [-contains <string>]
  Description
    Returns an array of known log keys. If configured is true, return only
    explicitly configured logkeys, otherwise return all known keys.
  Options
    -configured  If true, list only keys with explicit configuration, otherwise list
    all known keys
    -prefix  Limit results to log keys matching prefix
    -contains  Limit results to log keys containing the specified string

Example

[Rhino@localhost (#3)] listlogkeys
fastserialize
framework
framework.bulk.manager
framework.bulk.ratelimiter
framework.csi
framework.dlv
framework.groupheartbeat
framework.mcp
framework.mcp.preop
framework.mcpclient-mplexer
framework.rmi.network
framework.rmi.result
framework.rmi.server
framework.rmi.skeleton.com.opencloud.rhino.configmanager.runtime.ConfigurationStateImpl
...

Managing Logging Properties

Rhino 2.6 allows property substitutions in almost all logging configuration.

To manage properties available for substitution use the following rhino-console commands and related MBean methods.

Console command: getloggingproperties

Command

getloggingproperties [-property <property>]
  Description
    Get the values of Logging properties.
  Options
    -property  The name of the Property

Example

$ ./rhino-console getloggingproperties
name                 value
-------------------  -------------------------------------------------------------------------------------------------------------------------------------------------------------
        colourLevel                                                                                                                        %highlight{%-7level}{${consoleColours}}
     consoleColours  SEVERE=RED BRIGHT, WARNING=YELLOW, INFO=GREEN, CONFIG=CYAN, FINE=BRIGHT_BLACK, FINER=BRIGHT_BLACK, FINEST=BRIGHT_BLACK, CLEAR=GREEN, CRITICAL=RED, MAJOR=RED,
             logDir                                                                                                                                      ${sys:rhino.dir.work}/log
   maxArchivedFiles                                                                                                                                                              4
         plainLevel                                                                                                                                                       %-7level
 rhinoVersionHeader                                                                                               %d{yyyy-MM-dd HH:mm:ss.SSSZ} ${rhino:version} log file started%n
6 rows

Console command: setloggingproperty

Command

setloggingproperty <property> <value>
  Description
    Set a Logging property. Overwrites if it already exists
  Required Arguments
    property  The name of the Property
    value  The value of the Property

Example

$ ./rhino-console setloggingproperty maxArchivedFiles 5
Set property maxArchivedFiles to 5

Console command: removeloggingproperty

Command

removeloggingproperty <property>
  Description
    Remove a logging property if not in use.
  Required Arguments
    property  The name of the Property

Example

$ ./rhino-console removeloggingproperty consoleColours
An error occurred executing command 'removeloggingproperty':

com.opencloud.rhino.configmanager.exceptions.ConfigurationException: Property consoleColours in use by property colourLevel

$ ./rhino-console removeloggingproperty colourLevel
Removed logging property colourLevel

Define a Plugin Component

Console command: defineplugincomponent

Command

defineplugincomponent <alias-name> <plugin-name> [(<property-name>
<property-value>)]* [(-plugin <name>)]*
  Description
    Define a plugin component that can be used with the configurelogger command or
    other plugin definitions.  Plugin definitions exist only in the client, and will
    be lost when the client terminates

Example

[SLEE Stopped] [admin@localhost (#11)] defineplugincomponent fooPattern PatternLayout pattern "%d{yyyy-MM-dd HH:mm:ss.SSSZ} %-7level [%logger] <%threadName> %mdc %msg{nolookups}%n%throwable"
Defined plugin component with name PatternLayout
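A plugin component defined this way can then be referenced when configuring a logger. As a sketch (rhino.example is a hypothetical logger, and this assumes the alias name is what the configurelogger -plugin option expects):

$ ./rhino-console configurelogger rhino.example -level info -plugin fooPattern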

Annotating Log files

To append a message to a given logger, use the following console command and related MBean methods.

Console command: annotatelog

Command

annotatelog <logKey> <logLevel> <message>
  Description
    Logs a message to all nodes in the cluster using Rhino's logging subsystem.
  Required Arguments
    logKey  The log key of the logger.
    logLevel  The log level.
    message  The message to log.

Example

To annotate the log on all nodes with a message at the INFO level on the root logger:

$ ./rhino-console annotatelog root info "a log annotation"
Annotating log.
Done.
rhino.log
...
2017-12-04 11:53:23.010+1300 INFO    [root] <GroupRMI-thread-1> {} a log annotation
...

Rolling-Over All Rolling File Appenders

To backup and truncate all existing rolling file appenders, use the following rhino-console command or related MBean operation.

Note
Overriding default rollover behaviour

The default behaviour for log files is to automatically roll over when they reach 100MB in size. You can instead request rollover at any time, using the rolloverlogfiles command. (You can also override the default maximum file size before a log file rolls over, and the maximum number of backup files to keep, when creating a file appender.)

Console command: rolloverlogfiles

Command

rolloverlogfiles
  Description
    Triggers a rollover of all existing rolling appenders.

Example

$ ./rhino-console rolloverLogFiles
Done.

Logging Plugins

Rhino uses the Log4j 2 plugin architecture to support any Log4j 2 appender and allow addition of custom appenders.

Tip See the Apache Log4j 2 Plugins documentation for more information about plugins.

Many of the appenders provided by Log4j 2 have additional dependencies. These are not packaged with Rhino. Some examples include:

  • Cassandra Appender

  • JDBC Appender

  • Kafka Appender

Installing appender dependencies and custom plugins

If you want to use a custom plugin or a Log4j 2 appender that requires additional dependencies, you must put the required JARs into ${RHINO_HOME}/lib/logging-plugins/. Any JARs found in this directory are added to the core logging classloader.
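For example, to make a hypothetical Kafka client library available to the Kafka appender (the JAR name and version are illustrative):

$ cp kafka-clients-2.0.0.jar ${RHINO_HOME}/lib/logging-plugins/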

Note Files in ${RHINO_HOME}/lib/logging-plugins are only scanned at node boot time.

This classloader is not visible to the main application classloaders. Because it is isolated, it can contain versions of libraries that would otherwise conflict with versions of libraries deployed in the SLEE. (However, it cannot contain multiple versions of the same library, so all appenders must agree on the versions of shared dependencies.)

Custom plugins may affect the stability of Rhino nodes.

Custom plugins

Log4j 2 provides multiple mechanisms for plugin discovery. The only mechanism supported by Rhino is use of the Log4j 2 annotation processor during the plugin build phase.

The Log4j 2 annotation processor works by scanning for Log4j 2 plugins and generating a metadata file in your processed classes.

The Java compiler will automatically pick up the annotation processor if it is in the classpath. If the annotation processor is disabled during the compilation phase, you must add another compiler pass to your build process that does annotation processing for org.apache.logging.log4j.core.config.plugins.processor.PluginProcessor.
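For example, an explicit annotation-processing pass over a hypothetical custom appender source file might look like this (the Log4j JAR version and source path are illustrative):

$ javac -cp log4j-core-2.17.1.jar \
    -processor org.apache.logging.log4j.core.config.plugins.processor.PluginProcessor \
    com/example/MyCustomAppender.java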

Importing a Rhino export

Dependencies for logging plugins are not included in a Rhino export, even if there is configuration that requires those dependencies. So when using rhino-export and importing into a new Rhino instance, the logging plugins must be copied manually. Copy ${RHINO_HOME}/lib/logging-plugins to the new Rhino location.
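For example (${NEW_RHINO_HOME} is a placeholder for the location of the new installation):

$ cp -r ${RHINO_HOME}/lib/logging-plugins ${NEW_RHINO_HOME}/lib/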

Rhino upgrades

The Log4j 2 version may be updated in new Rhino releases, if bug fixes in the core implementation are needed. When this happens, it is possible that plugins may need to be updated for the new Log4j version. Changes to the Log4j version will be documented in the Rhino changelog.

Known Issues

  • Highlighting of alarm and tracer levels does not work in colour console (LOG4J2-1999, LOG4J2-2000, LOG4J2-2005)

  • Rolling file appenders do not support live reconfiguration between time-based and sequential numbering (LOG4J2-2009)

  • The createOnDemand configuration option for file appenders, including rolling file appenders does not work if the appender layout has a header (LOG4J2-2027)

Staging

Staging refers to the micro-management of work within the Rhino SLEE.

This work is divided into items, executed by workers; each worker is represented by a system-level thread. You can configure the number of threads available to process staged items, to minimise latency and thus increase the performance capacity of the SLEE.

The staging-thread system

Rhino performs event delivery on a pool of threads, called staging threads. The staging-thread system operates a queue of units of work for Rhino to perform, called stage items. Typically, these units of work involve the delivery of SLEE events to SBBs. A stage item enters the staging system in a processing queue; the first available staging thread then removes it and performs its associated work. The time an item spends in the staging queue, before a stage worker processes it, contributes to the overall latency of handling the event. Thus, it is important to make sure that the SLEE is using staging threads optimally.
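For intuition, the staging pattern can be sketched in a few lines of Java. This is an illustrative model only, not Rhino’s implementation: the class and method names are invented, and real staging also enforces maximumAge and proper failure handling.

import java.util.concurrent.BlockingDeque;
import java.util.concurrent.LinkedBlockingDeque;

public class StagingSketch {

    // maximumSize: bounded queue of stage items awaiting processing
    private final BlockingDeque<Runnable> queue = new LinkedBlockingDeque<>(3000);

    public StagingSketch(int threadCount) {
        // threadCount: pool of staging threads competing for queued items
        for (int i = 0; i < threadCount; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        queue.takeLast().run(); // LIFO: process the newest item first
                    }
                } catch (InterruptedException e) {
                    // worker shut down
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    public void submit(Runnable stageItem) {
        // When the queue is full, fail and remove the oldest item to make room
        while (!queue.offerLast(stageItem)) {
            Runnable oldest = queue.pollFirst();
            if (oldest != null) {
                System.err.println("stage item failed: queue full");
            }
        }
    }
}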

Tunable parameters

To improve performance, you can tune the following staging parameters: maximumSize, threadCount, maximumAge, queueType.

Warning The node must be restarted for any change in maximumSize, maximumAge, or queueType to take effect.
Tip For instructions on tuning staging parameters, see Configuring Staging Parameters. You can observe the effects of configuration changes in the statistics client by simulating heavy concurrency using a load simulator.

maximumSize

Description

Maximum size of the staging queue. Determines how many stage items may be queued awaiting processing. When the queue reaches maximum size, the SLEE automatically fails and removes the oldest item, to accommodate new items.

Default

 3000

Recommendation

The default works well for most scenarios. The value should be high enough that the SLEE can ride out short bursts of peak traffic, but not so large that, under extreme overload, stage items wait in the queue for too long to be useful to the protocol generating the event before being properly failed.

threadCount

Description

Number of staging threads in the thread pool.

Tip Of all staging parameters, this has the greatest impact on overall event-processing latency. To achieve optimal performance, give careful attention to tuning the thread count.

Default

 30

Recommendation

The default works well for many applications on a wide range of hardware. However for some applications, or with hardware using four or more CPUs, more staging threads may be useful. In particular, when the SLEE is running services that perform high-latency blocking requests to an external system, more staging threads may often be necessary.

For example, for a credit-check application that only allows a call setup to continue after performing a synchronous call to an external system:

  • If a credit check takes on average 150ms, the staging thread that processes the call-setup event will be blocked and unable to process other events for 150ms.

  • With the default configuration of 30 staging threads, such a system would be able to handle an input rate of approximately 200 events/second (see the rough capacity estimate below this list). Above this rate, the stage worker threads will not be able to service event-processing stage items fast enough, and stage items will begin to back up in staging queues, eventually causing some calls to be dropped.

  • The problem is easily solved by configuring a higher number of staging threads.
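As a rough capacity estimate: each staging thread in this scenario can process about 1 / 0.15 s ≈ 6.7 blocking requests per second, so 30 threads saturate at roughly 30 × 6.7 ≈ 200 events/second. To sustain, say, 400 events/second of such traffic, at least 60 staging threads would be needed (figures illustrative; real dimensioning should follow the monitoring-based approach described in the warning below).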

Warning In real-world applications, it is seldom a matter of applying a simple formula to work out the optimal number of staging threads. Instead, performance-monitoring tools would be used to examine the behaviour of staging, alongside such metrics as event-processing time and system-CPU usage, to find a suitable value for this parameter.

maximumAge

Description

Maximum possible age of a staging item, in milliseconds. Determines how long an item of work can remain in the staging queue and still be considered valid for processing. Staging threads automatically fail and remove stage items that stay in the staging queue for longer than this maximum age. Tuning this (along with maximumSize), helps determine your application’s behaviour under overload conditions.

Default

 10000

queueType

Description

Determines ordering of the staging queue. These options are available:

  • LIFO ("Last In First Out") — the newest item in the queue is processed first

  • FIFO ("First In First Out") — the oldest item in the queue is processed first

  • transfer — acts as many FIFO queues, and may perform better under high load on systems with many processors (introduced in Rhino 2.3.1.7).

Default

 LIFO

Recommendation

The default LIFO behaviour works well for most scenarios. In situations where short bursts of work exceed capacity, newer work items see prompt handling at the expense of lengthened delays for items already waiting. In contrast, FIFO behaviour sees delays hit all items in the queue until the queue is cleared.

Configuring Staging Parameters

To configure staging parameters, use the following rhino-console commands or related MBean operations.

configurestagingqueues command

Command

configurestagingqueues [maximumAge <age>] [maximumSize <size>] [threadCount
<count>]
  Description
    set some or all of the staging-queues configuration properties

Example

$ ./rhino-console configurestagingqueues maximumAge 11000 maximumSize 4000 threadCount 40
Updated staging-queue config properties:
maximumSize=4000
threadCount=40
maximumAge=11000

getstagingqueuesconfig command

Command

getstagingqueuesconfig
  Description
    get the staging-queues configuration properties

Example

$ ./rhino-console getstagingqueuesconfig
Configuration properties for staging-queues:
maximumAge=11000
maximumSize=4000
threadCount=40

MBean operations

Use the following MBean operations to configure staging queue parameters, defined on the Staging Queue Management MBean interface.

Operations

Usage

getMaximumSize
setMaximumSize

To get and set the maximum number of items permitted in the staging queue:

public int getMaximumSize()
    throws ConfigurationException;
public void setMaximumSize(int size)
    throws ConfigurationException, ValidationException;

getMaximumAge
setMaximumAge

To get and set the maximum age of items permitted in the staging queue:

public long getMaximumAge()
    throws ConfigurationException;
public void setMaximumAge(long ms)
    throws ConfigurationException, ValidationException;

Queued work items do not immediately expire if their age (measured in milliseconds) exceeds the maximum allowed. Instead, the SLEE discards them when they leave the staging queue (when it’s their turn for processing).

Tip To skip checking the age of queued items (so they can’t get "too old" for processing), you can set this parameter to -1.

getThreadCount
setThreadCount

To get and set the number of threads available for processing items on the staging queue:

public int getThreadCount()
    throws ConfigurationException;
public void setThreadCount(int threads)
    throws ConfigurationException, ValidationException;
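As an illustrative sketch of invoking these operations from a standalone JMX client (the service URL and MBean object name below are placeholders, not documented values; consult your Rhino installation for the actual endpoint and name):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TuneStagingThreads {
    public static void main(String[] args) throws Exception {
        // Placeholder JMX endpoint -- adjust for your installation
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1199/rhino");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Placeholder object name for the Staging Queue Management MBean
            ObjectName staging = new ObjectName(
                    "com.opencloud.rhino:type=StagingQueueManagement");
            conn.invoke(staging, "setThreadCount",
                    new Object[]{40}, new String[]{"int"});
            Object count = conn.invoke(staging, "getThreadCount", null, null);
            System.out.println("threadCount is now " + count);
        }
    }
}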

Object Pools

The Rhino SLEE uses groups of object pools to manage the Java objects representing SBBs and profile tables.

Throughout the lifecycle of an object, it may move from one pool to another. Although the defaults are generally suitable, each object pool’s maximum size can be configured if needed.

Pools

There are several types of object pool; however, Metaswitch recommends that system administrators change only the initial pooled pool sizes. The other pool sizes are best set during performance testing, and only after the maximum workload without tuning has been determined. When tuning pool sizes, consideration should be given to the maximum load nodes are expected to process and the memory consumed by the pools.

The JAIN SLEE specification describes the purpose of the object pools with respect to SBBs:

The SLEE creates and manages a pool of SBB objects. At runtime, the SLEE may assign zero or more SBB objects to represent an SBB entity. When an SBB object is assigned to represent an SBB entity, the SBB object is in the Ready state (see Section 6.3). It can receive and fire events, receive synchronous method invocations, and access and update the persistent state of the SBB entity. Another viewpoint is that the SBB object caches a copy of the persistent data of the SBB entity to provide transactional semantics.

Rhino has five types of object pool. Each is managed per service or per profile table; if a service or profile table does not have a pool configuration, it inherits the default configuration for its type.

SBB object and profile object pools
 Pooled pool

Contains SBB objects and profile objects in the Pooled state. This means that the object context has been initialised but no SBB entity or profile is associated with the object.

If the pool is empty, the SLEE must create and initialise a new object the next time it needs one. This may take time, particularly if the setSbbContext() or setProfileContext() method on the object performs a lengthy initialisation. To reduce the impact of object initialisation, the pool may be pre-populated with initialised objects, either at service activation/profile table creation or at SLEE startup time. By default, the Rhino SLEE pre-populates this pool with 50 initialised objects for services and zero for profile tables. This initial pool size can be configured with the initialPooledPoolSize configuration attribute.

 Ready pool

Contains SBB objects and profile objects in the Ready state. Ready means that the object is associated with the most recent version of an SBB entity or profile. For SBB objects this means that the object is ready to receive and process events.

On startup this pool is always empty. It is populated only with objects from the stale pool or pooled pool, or objects created on demand if the pooled pool was empty.

 Stale pool

Contains SBB objects and profile objects that are associated with an SBB entity or profile that has been modified in another transaction. This pool exists as a partner for the ready pool, to avoid unnecessary calls to sbbActivate() and sbbPassivate(), or profileActivate() and profilePassivate(). Objects in the stale pool are associated with out-of-date state and must be resynchronised with their persistent state before they can be used.

On startup this pool is always empty. It is populated with objects from the ready pool if and when they become stale.

CMP field object pools
Persistent state pool

A persistent state object holds the MemDB representation of CMP and CMR field data for an SBB entity or profile.

A new persistent state object is required for every transaction in which CMP or CMR field data is updated. The purpose of the persistent state pool is to reduce the GC impact caused by the cycling of these objects as SBB entities and profiles are created, updated, and removed.

State pool

State objects provide the interface between SBB entities and profiles and the persistent state objects holding their CMP and CMR field data. State objects are associated with an SBB object or profile object when the object is associated with an SBB entity or profile.

The state pool should be configured to be at least the size of the ready pool. The maximum number of state objects in use at any one time, and thus the maximum recommended state pool size, is limited to the sum of the following (a worked example follows the list):

  • the size of the ready pool;

  • the size of the stale pool; and

  • the number of event processing threads.
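For example (figures illustrative): with a ready pool of 5000 objects, a stale pool of 1000 objects, and 100 event-processing threads, the maximum recommended state pool size would be 5000 + 1000 + 100 = 6100.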

Tip Object pool statistics are available in the ObjectPools parameter set.

Configuring Object Pools

The configuration of object pools is structured as follows:

  • A global defaults object pool configuration contains the set of base defaults.

  • A service defaults object pool configuration contains the default configuration for services. When a service is deployed, its initial object pool configuration is copied from the service defaults configuration.

    If, for some reason, the service defaults configuration is missing when it is required, it will be recreated based on the global defaults configuration.

  • A profile table defaults object pool configuration contains the default configuration for profile tables. When a profile table is created, its initial object pool configuration is copied from the profile table defaults configuration.

    If, for some reason, the profile table defaults configuration is missing when it is required, it will be recreated based on the global defaults configuration.

  • When a new namespace is created, each of the default object pool configurations for that namespace are initialised as a copy of the corresponding configurations from the default namespace.

Object pools can be configured, for example, with rhino-console using the following commands:

  • getobjectpoolconfig can be used to view an object pool configuration; and

  • configureobjectpools can be used to change the sizes of the object pools in a configuration.

Please see the online help in rhino-console for more information on using these commands.

Alternatively, MBean operations can be used to configure object pools.

Note The useDefaults flag of an object pool configuration is deprecated and no longer has any function.

Licenses

As well as an overview of licenses, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:

Procedure                rhino-console command(s)   MBean(s) → Operation
-----------------------  -------------------------  -----------------------------------------
List installed licenses  `listlicenses`             License Management → getLicenses
Get licensed capacity    `getlicensedcapacity`      License Management → getLicensedCapacity
Install a license        `installlicense`           License Management → install
Uninstall a license      `uninstalllicense`         License Management → uninstall

About Licenses

To be activated, services and resource adaptors need a valid license loaded into Rhino (at least for core or "default" functions). See the following details on license properties, validity, alarms, statistics and an example.

License properties

Each license has the following properties:

  • a unique serial identifier

  • a start date (before which the license is not valid)

  • an end date (after which the license is not valid)

  • a set of licenses that are superseded by this license

  • licensed-product functions — for the Rhino family of products, these are "Rhino" (used by the production Rhino build for its core functions) and "Rhino-SDK" (used by the SDK Rhino build for its core functions)

  • licensed-product versions

  • licensed-product capacities

  • one or more descriptive fields (optional, not actually used for licensing calculations).

Each license can contain one or more sets of (function, version, capacity). For example, a license could be for "Rhino-SDK, version 2.1, 1000" as well as "Rhino, version 2.1, 500".

Warning
Highest-capacity licenses display on startup

Licenses display when Rhino starts up — not the complete list, but only those with the highest licensed capacity for each licensed function/version. (If you have a big license and a small license for the same function/version installed, only the largest will display on startup.)

License validity

A license is considered valid if:

  • The current date is after the license start date, but before the license end date.

  • The list of license functions in that license contains the required function.

  • The list of product versions contains the required version.

  • The license is not superseded by another.

If Rhino finds multiple valid licenses for the same function, it uses the one with the largest licensed capacity.
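
As a rough illustration, the validity rules above amount to the following check (a minimal sketch; the class and method names are illustrative, not part of the Rhino API):

import java.time.Instant;
import java.util.Set;

public class LicenseValidity {
    // Mirrors the four validity rules listed above.
    static boolean isValid(Instant now, Instant validFrom, Instant validUntil,
                           Set<String> functions, Set<String> versions,
                           boolean superseded,
                           String requiredFunction, String requiredVersion) {
        return now.isAfter(validFrom)
                && now.isBefore(validUntil)
                && functions.contains(requiredFunction)
                && versions.contains(requiredVersion)
                && !superseded;
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        System.out.println(isValid(now, now.minusSeconds(3600), now.plusSeconds(3600),
                Set.of("Rhino"), Set.of("2.1"), false, "Rhino", "2.1")); // true
    }
}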

Upon activating a service or resource adaptor, Rhino checks the list of functions that it requires against the list of installed valid licenses. If all required functions are licensed, the service or resource adaptor will activate. (If any required function is unlicensed, it will not activate.)

Licensing applies to explicit activation (by way of a management client) and implicit activation (on Rhino restart). There is one exception: if a node joins an existing cluster that has an active service for which there is no valid license, the service will still become active on that node.

In the production Rhino, services and resource adaptors that are already active will continue to successfully process events for functions that are no longer licensed, such as when a license has expired. For the SDK Rhino, services and resource adaptors that are already active will stop processing events for the core "Rhino-SDK" function if it becomes unlicensed, typically after a license has expired.

License alarms

Typically, Rhino raises an alarm when:

  • A license has expired.

  • A license is due to expire in the next 7 days.

  • License units are being processed for a currently unlicensed function.

A license function is processing more accounted units than it is licensed for. The audit log shows how long it has been over the limit.

System administrators are responsible for verifying and cancelling alarms through the management console (command console or Rhino Element Manager).

Warning Cancelled capacity alarms are re-generated for licensed functions that continue to run over capacity.

Enforcing license limits

Production Rhino never enforces the "hard limit" on a license. The SDK version of Rhino will enforce the "hard limit" on the core "Rhino-SDK" function, by rejecting incoming work.

Tip Contact your Metaswitch sales representative or Metaswitch support if you require a greater capacity Rhino SDK license during development.

Audit logs

Rhino SLEE generates a license audit log in the same directory where other logs are stored. The Rhino SLEE system administrator can use the log for a self-audit as required. Metaswitch support may also request the audit log to perform a license audit. Audit logs are protected from external tampering using an HMAC (Hash-based Message Authentication Code) checksum on each line output to the log.

Audit logs are subject to "rollover", just like any other rolling log appender log; to obtain a full audit log for a particular period, several logs may need to be concatenated. (Older logs are named audit.log.0, audit.log.1, and so on.)

License statistics

The standard Rhino SLEE statistics interfaces include:

  • the root License Accounting statistic

  • statistics for each function, with both accountedUnits and unaccountedUnits values.

Only accountedUnits count towards licensed limits. Rhino records unaccountedUnits for services and resource adaptors with licensed functions configured as accounted="false".

Sample Rhino license

License:       1194a455e0b
  Issued to:   Umbrella Corporation
  Created at:  Mon Apr 14 12:11:10 NZST 2008
  Valid after: Mon Apr 14 12:11:10 NZST 2008
  Valid until: Fri Jun 13 12:11:10 NZST 2008
  Allows functions:
    Rhino version 2.0, 500 capacity
    Rhino-SDK version 2.0, 500 capacity

License Audit Log Format

License audit logs track information over time about cluster membership, installed licenses, and license function usage.

Each line in the audit logs describes one of these items at a particular time, as detailed below. All lines start with an HMAC (Hash-based Message Authentication Code) checksum for the line, followed by a full timestamp that includes a GMT offset.

Note Every Rhino node writes an audit log, but all audit logs detail cluster-wide information and usage statistics (not per-node information).

Cluster membership

These lines in the audit logs show the current set of node IDs following a cluster membership change.

When logged

Whenever the active node set in the cluster changes.

Format

<checksum>, <timestamp>, CLUSTER_MEMBERS_CHANGED, [<comma>,<separated>,<node>,<list>]

Example

8c7297fec286a6209307920ce2ed6fb7c562099fce760cd8b23721bdb934e81d, 2022-03-21 15:45:57 +0000, CLUSTER_MEMBERS_CHANGED, [101,102]

Installed licenses

These lines in the audit logs list and describe changes to installed licenses.

When logged

Whenever the set of valid licenses changes. For example, when:

  • a license is installed or removed

  • an installed license becomes valid

  • an installed license expires.

Format

<checksum>, <timestamp>,LICENSE,"<license description>"

Example

1a757486f0280bc05f5aa11b61de92a6480401fc06c7c85b757d892204d6720a, 2022-03-21 15:45:46 +0000,LICENSE,"[LicenseInfo serial=116eaaffde9,validFrom=Mon May 10 14:53:49 NZST 2021,...]"

License function usage

These lines in the audit logs show the following information about license function usage:

  • the start and end timestamps of the accounting period;

  • the number of nodes in the cluster; and

  • the total and per-second average accounted and unaccounted units for each license function, along with its current licensed capacity.

When logged

Every ten minutes.

Format

Each line represents a single license function, in the following format:

<checksum>, <timestamp>, <startTimeMillis>, <endTimeMillis>, <intervalMillis>, <nodeCount>, <function>, <totalAccounted>, <avgAccounted>, <totalUnaccounted>, <avgUnaccounted>, <capacity>

where the fields are as follows:

 <startTimeMillis>

the milliseconds timestamp of the start of this accounting period

 <endTimeMillis>

the milliseconds timestamp of the end of this accounting period

 <intervalMillis>

the length in milliseconds of this accounting period (<endTimeMillis> - <startTimeMillis>)

 <nodeCount>

the number of nodes in the cluster at the time

 <function>

the Rhino license function whose usage is described by this record

 <totalAccounted>

total units accounted against <function> for this accounting period

 <avgAccounted>

the average number of accounted units per second against <function> for this accounting period

 <totalUnaccounted>

the total number of unaccounted units against <function> for this period (not counting towards licensed capacity, but presented for informational purposes)

 <avgUnaccounted>

the average number of unaccounted units per second against <function> for this period (not counting towards licensed capacity, but presented for informational purposes)

 <capacity>

current licensed capacity for <function>

Example

a1a9ad29af063e3447db10c8f4432b1843b151d2c5418917f99a4e0458af6887, 2022-03-21 16:05:48 +0000, 1647878148591, 1647878748590, 599999, 2, Rhino, 17690, 29.48, 0, 0.00, 10000
dafdc99203cbb0ca5a51c16b286450efc0aea73b26b203e3c998032fc92cd352, 2022-03-21 16:05:48 +0000, 1647878148591, 1647878748590, 599999, 2, Rhino-SIS, 6454, 10.76, 0, 0.00, 10000
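
A minimal sketch of parsing one of these usage lines into its documented fields (class and variable names are illustrative; only the field order is taken from the format above):

public class UsageLineParser {
    public static void main(String[] args) {
        String line = "a1a9ad29af063e3447db10c8f4432b1843b151d2c5418917f99a4e0458af6887, "
                + "2022-03-21 16:05:48 +0000, 1647878148591, 1647878748590, 599999, 2, "
                + "Rhino, 17690, 29.48, 0, 0.00, 10000";
        String[] f = line.split(",\\s*");
        String checksum = f[0];
        String timestamp = f[1];
        long startTimeMillis = Long.parseLong(f[2]);
        long endTimeMillis = Long.parseLong(f[3]);
        long intervalMillis = Long.parseLong(f[4]);   // endTimeMillis - startTimeMillis
        int nodeCount = Integer.parseInt(f[5]);
        String function = f[6];
        long totalAccounted = Long.parseLong(f[7]);
        double avgAccounted = Double.parseDouble(f[8]);
        long totalUnaccounted = Long.parseLong(f[9]);
        double avgUnaccounted = Double.parseDouble(f[10]);
        long capacity = Long.parseLong(f[11]);
        System.out.printf("%s: %d accounted units (%.2f/s) against capacity %d over %d ms on %d nodes%n",
                function, totalAccounted, avgAccounted, capacity, intervalMillis, nodeCount);
    }
}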

Sample log file

1a757486f0280bc05f5aa11b61de92a6480401fc06c7c85b757d892204d6720a, 2022-03-21 15:45:46 +0000,LICENSE,"[LicenseInfo serial=179568e2ee7,licensee=Open Cloud Limited (Test license),..."
5d4d60cc73eab2d0b4444473225d8ab25b40583682f17835d9a196bbf08fffce, 2022-03-21 15:55:48 +0000, 1647877548591, 1647878148591, 600000, 2, Rhino-SIS-IN-TPS, 0, 0.00, 0, 0.00, -1
9fd1d0d8cd6c253ce95c31b1e559cdcc2df48fb05f52e69697f11374013bfdd7, 2022-03-21 15:55:48 +0000, 1647877548591, 1647878148591, 600000, 2, Rhino-SIS-IN-External-Service, 0, 0.00, 0, 0.00, 0
ad454bb83c2e107dc7beb9429d0b10294c9f25b77b5cd553e927e4dbe4ef9633, 2022-03-21 15:55:48 +0000, 1647877548591, 1647878148591, 600000, 2, Rhino-SIS-IN-SMS, 0, 0.00, 0, 0.00, 10000
b3e72244b0c7d3f3087ec4de9ddb6e4fd9b11f53220287acdf885426f62a5088, 2022-03-21 15:55:48 +0000, 1647877548591, 1647878148591, 600000, 2, Rhino-SIS, 0, 0.00, 0, 0.00, 10000
72472ba40f76adec16e79dadea8de4bfe9bd3f21aec19a5b413a7b0c8b33299e, 2022-03-21 15:55:48 +0000, 1647877548591, 1647878148591, 600000, 2, Rhino-SIS-IN, 0, 0.00, 0, 0.00, 10000
9c1c7a287b83621345b8713fc8cbdc5e0d7cf00e878f5d1084e7b56d536ebe82, 2022-03-21 15:55:48 +0000, 1647877548591, 1647878148591, 600000, 2, Rhino-SIS-IN-Protocol-CAP, 0, 0.00, 0, 0.00, 0
e34dbbf6e2be929dcd378692bbd4b1806df5385684ff438fca34276fe9971ccd, 2022-03-21 15:55:48 +0000, 1647877548591, 1647878148591, 600000, 2, Rhino, 0, 0.00, 0, 0.00, 10000
704925656fca4281721624c3d8a44932c45a2b93322176f50545e3b5473b2c31, 2022-03-21 15:55:48 +0000, 1647877548591, 1647878148591, 600000, 2, Rhino-SIS-IN-Local-Service, 0, 0.00, 0, 0.00, 0
f740e80afea471aa710d65cda4cd9aa9f74a274d562e606a2916f7fd334cde1b, 2022-03-21 15:55:48 +0000, 1647877548591, 1647878148591, 600000, 2, Rhino-SIS-IN-Voice, 0, 0.00, 0, 0.00, 10000
105b4c6c8434ec2b2c76806e925084e3a1a68912e0f6ac600092a9fce234a539, 2022-03-21 15:55:48 +0000, 1647877548591, 1647878148591, 600000, 2, Rhino-SIS-IN-Protocol-ETSI-INAP-CS1, 0, 0.00, 0, 0.00, 0
bc6c2b572649312f1c70006ee0651606c05d851689c975d25695264ba3556eb5, 2022-03-21 15:55:48 +0000, 1647877548591, 1647878148591, 600000, 2, Rhino-SIS-IN-SPS, 0, 0.00, 0, 0.00, 0
8136300188cb27e3f485795eab499a68d941823bd9a12b7dfb648c4445968dc3, 2022-03-21 16:05:48 +0000, 1647878148591, 1647878748590, 599999, 2, Rhino-SIS-IN-TPS, 0, 0.00, 0, 0.00, -1
20f9ca5e038e6a78e28805d3a483411ca0d15a75af415564a989ea46ab1f1a00, 2022-03-21 16:05:48 +0000, 1647878148591, 1647878748590, 599999, 2, Rhino-SIS-IN-External-Service, 0, 0.00, 0, 0.00, 0
cacdf27553e8d52d09b10d4825f75e07752dad18b4850451bd407dc384285647, 2022-03-21 16:05:48 +0000, 1647878148591, 1647878748590, 599999, 2, Rhino-SIS-IN-SMS, 0, 0.00, 0, 0.00, 10000
dafdc99203cbb0ca5a51c16b286450efc0aea73b26b203e3c998032fc92cd352, 2022-03-21 16:05:48 +0000, 1647878148591, 1647878748590, 599999, 2, Rhino-SIS, 6454, 10.76, 0, 0.00, 10000
5adaa740343637b3284cff421824149a22b6ed2853df6ff14df33e506b9678f5, 2022-03-21 16:05:48 +0000, 1647878148591, 1647878748590, 599999, 2, Rhino-SIS-IN, 6453, 10.76, 0, 0.00, 10000
d35da58ee2b81cf309123a3db203e036dbb3dc23e34477eefd7dc113d9d76024, 2022-03-21 16:05:48 +0000, 1647878148591, 1647878748590, 599999, 2, Rhino-SIS-IN-Protocol-CAP, 0, 0.00, 0, 0.00, 0
a1a9ad29af063e3447db10c8f4432b1843b151d2c5418917f99a4e0458af6887, 2022-03-21 16:05:48 +0000, 1647878148591, 1647878748590, 599999, 2, Rhino, 17690, 29.48, 0, 0.00, 10000
41a2657135c3d2c90ffe8a432c78141e8a6f3b87d72633fbd03b7d6b8d79c0ca, 2022-03-21 16:05:48 +0000, 1647878148591, 1647878748590, 599999, 2, Rhino-SIS-IN-Local-Service, 0, 0.00, 0, 0.00, 0
e786da7cac6035af3ebc8113c66ee65818791187c4743f27debeb8d716b1f9ea, 2022-03-21 16:05:48 +0000, 1647878148591, 1647878748590, 599999, 2, Rhino-SIS-IN-Voice, 6454, 10.76, 0, 0.00, 10000
c0cf5a8ae5abaaa8c44ba9dcc83b54f6dda60b265cc6c455c2af701c91955a32, 2022-03-21 16:05:48 +0000, 1647878148591, 1647878748590, 599999, 2, Rhino-SIS-IN-Protocol-ETSI-INAP-CS1, 0, 0.00, 0, 0.00, 0
555c83c9d1fae7bd57c041cd7f296e6ce382816b28b31cf2a592526b0d26f6f1, 2022-03-21 16:05:48 +0000, 1647878148591, 1647878748590, 599999, 2, Rhino-SIS-IN-SPS, 0, 0.00, 0, 0.00, 0
13ad53e15c6de948fa7c77c7b4318606dd600ddef7ebef1bbe1cbe8ee5cb8795, 2022-03-21 16:15:48 +0000, 1647878748590, 1647879348591, 600001, 2, Rhino-SIS-IN-TPS, 0, 0.00, 0, 0.00, -1
055413c02406b7eb4cb9b3303fa3eef50940929a6b1173fc1082bcb0d276fa02, 2022-03-21 16:15:48 +0000, 1647878748590, 1647879348591, 600001, 2, Rhino-SIS-IN-External-Service, 0, 0.00, 0, 0.00, 0
8c10fcea238143abdc4cf411b0ab67b5fc89ceb55c9f3d3550dd42ae0e7bf961, 2022-03-21 16:15:48 +0000, 1647878748590, 1647879348591, 600001, 2, Rhino-SIS-IN-SMS, 0, 0.00, 0, 0.00, 10000
04bb9793abd5e1e8102a95cb77b146b8a875840eb2e366076e0d44526de993f6, 2022-03-21 16:15:48 +0000, 1647878748590, 1647879348591, 600001, 2, Rhino-SIS, 7474, 12.46, 0, 0.00, 10000
959b430a644d8bac8ce0c70e7b6d5b5accf4d053d19fa7aeabc32f396d54302b, 2022-03-21 16:15:48 +0000, 1647878748590, 1647879348591, 600001, 2, Rhino-SIS-IN, 7475, 12.46, 0, 0.00, 10000
4ece93de1af46e3ceb541ed9e968fb388df8981b26184e3cb2ef92b260abbc48, 2022-03-21 16:15:48 +0000, 1647878748590, 1647879348591, 600001, 2, Rhino-SIS-IN-Protocol-CAP, 0, 0.00, 0, 0.00, 0
ecde669ac6a14dedd4b6cd8c73170be5bccc011324663706e1a8afc30914818e, 2022-03-21 16:15:48 +0000, 1647878748590, 1647879348591, 600001, 2, Rhino, 20456, 34.09, 0, 0.00, 10000
6e29cb20d3b8c28d269e3adeb3d6b224cf9716e0d0b0ead4667adfabca7d24dc, 2022-03-21 16:15:48 +0000, 1647878748590, 1647879348591, 600001, 2, Rhino-SIS-IN-Local-Service, 0, 0.00, 0, 0.00, 0
813dbe19c08bddc3042d901626360b5f5856d06165228ec353076c33b7b5d309, 2022-03-21 16:15:48 +0000, 1647878748590, 1647879348591, 600001, 2, Rhino-SIS-IN-Voice, 7474, 12.46, 0, 0.00, 10000
095caf0c508ba3804eaa761118c80f2b62121d58404a81100855516696da1915, 2022-03-21 16:15:48 +0000, 1647878748590, 1647879348591, 600001, 2, Rhino-SIS-IN-Protocol-ETSI-INAP-CS1, 0, 0.00, 0, 0.00, 0
a23ead3862769c27b6b0d4360a04c5a72eae2852cc06107767d6825d31d9c62c, 2022-03-21 16:15:48 +0000, 1647878748590, 1647879348591, 600001, 2, Rhino-SIS-IN-SPS, 0, 0.00, 0, 0.00, 0

Listing Current Licenses

To list current licenses, use the following rhino-console command or related MBean operation.

Console command: listlicenses

Command

listlicenses <brief|verbose>
  Description
    Displays a summary of the currently installed licenses

Example

$ ./rhino-console listLicenses
Installed licenses:
[LicenseInfo serial=107baa31c0e,validFrom=Wed Nov 23 14:00:50 NZDT 2008,
validUntil=Fri Dec 02 14:00:50 NZDT 2008,capacity=400,hardLimited=false,
valid=false,functions=[Rhino],versions=[2.1],supersedes=[]]
[LicenseInfo serial=10749de74b0,validFrom=Tue Nov 01 16:28:34 NZDT 2008,
validUntil=Mon Jan 30 16:28:34 NZDT 2009,capacity=450,hardLimited=false,
valid=true,functions=[Rhino,Rhino-IN-SIS],versions=[2.1,2.1],
supersedes=[]]
Total: 2

In this example, two licenses are installed:

  • 107baa31c0e, enabling the Rhino function

  • 10749de74b0, enabling the Rhino and Rhino-IN-SIS functions.

Both are for Rhino version 2.1.

MBean operation: getLicenses

MBean

Rhino operation

public String[] getLicenses()

Getting Licensed Capacity

To get the licensed capacity for a specified product and function (to determine how much throughput the Rhino cluster has), use the following rhino-console command or related MBean operation.

Console command: getlicensedcapacity

Command

getlicensedcapacity <function> <version>
  Description
    Gets the currently licensed capacity for the specified function and version

Example

$ ./rhino-console getlicensedcapacity Rhino 2.1
Licensed capacity for function 'Rhino' and version '2.1': 450

MBean operation: getLicensedCapacity

MBean

Rhino operation

public long getLicensedCapacity(String function, String version)
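
For illustration, this operation can also be invoked from a standalone JMX client, as in the following sketch. The JMX service URL and MBean ObjectName are supplied as arguments because they are installation-specific; they are placeholders here, not documented Rhino names:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class LicensedCapacityClient {
    public static void main(String[] args) throws Exception {
        // args[0]: JMX service URL of a Rhino node
        // args[1]: ObjectName of the License Management MBean
        JMXServiceURL url = new JMXServiceURL(args[0]);
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName licenseManagement = new ObjectName(args[1]);
            Object capacity = connection.invoke(licenseManagement, "getLicensedCapacity",
                    new Object[] { "Rhino", "2.1" },
                    new String[] { String.class.getName(), String.class.getName() });
            System.out.println("Licensed capacity for Rhino 2.1: " + capacity);
        }
    }
}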

Installing a License

To install a license, use the following rhino-console command or related MBean operation.

Console command: installlicense

Command

installlicense <license file>
  Description
    Install the license at the specified SLEE-accessible filename or file: URL

Example

$ ./rhino-console installLicense file:/home/user/rhino/rhino.license
Installing license from file:/home/user/rhino/rhino.license
Warning License files must be on the local filesystem of the host where the node is running.

MBean operation: install

MBean

Rhino operation

Install a license from a raw byte array
public void install(byte[] filedata)
        throws LicenseException, LicenseAlreadyInstalledException, ConfigurationException

Uninstalling a License

To uninstall a license, use the following rhino-console command or related MBean operation.

Console command: uninstalllicense

Command

uninstalllicense <serial id>
  Description
    Uninstalls the specified license

Example

To uninstall the license with serial ID "105563b8895":

$ ./rhino-console uninstalllicense 105563b8895
Uninstalling license with serial ID: 105563b8895

MBean operation: uninstall

MBean

Rhino operation

public void uninstall(String serial)
        throws UnrecognizedLicenseException, ConfigurationException

Rate Limiting (Overload Control)

Rhino’s carrier-grade overload control manages the rate of work Rhino accepts, processes, and generates for other systems. A Rhino administrator can:

  • plan query, transaction, and session/message rates across the entire end-to-end system

  • configure fine-grained processing rates, so Rhino can participate in the entire end-to-end system without overloading other network equipment.

Tip Rate limiting can be useful in high-load situations, to ensure that once an activity starts it can run to completion without overloading the SLEE or related systems.

Granular control using "rate limiters"

Rhino administrators use rate limiters to set specific "flow rates" for different types of messages. These limit the number of messages per second that the SLEE will process or generate, and throttle back the flow of certain messages in favor of others. Rhino also lets administrators monitor and refine rate-limiter parameters to achieve the desired throughput.

This section includes the following topics:

About Rate Limiting

For rate limiting with the Rhino SLEE:

  • An administrator creates limiters and assembles them into hierarchies.

  • The administrator connects those limiters to limiter endpoints.

  • RAs and SBBs determine the number of units needed for a particular piece of work.

  • RAs, SBBs, and Rhino code use limiter endpoints to determine if a piece of work can be done (for example, if a message can be processed or sent).

Note
Per-node configuration

Some limiter properties can be overridden on a per-node basis (a value set this way is called a per-node value). For example, a rate limiter’s maximum allowed rate could be set differently for differently sized machines.

Each node always independently maintains the working state of each limiter (counts of units used and so on).

What are limiters?

A limiter is an object that decides if a piece of work can be done or not. How the decision is made depends on the type of limiter. Limiters are always created and removed "globally". That is, they always exist on all nodes in the cluster.

Limiter names

Each limiter has a name. A limiter’s name must be globally unique within the scope of the Rhino instance.

Warning
Name character restriction

The limiter name cannot include the "/" character.

Tip See also Limiter Types for details on limiter properties, and Managing Limiters for procedures to create, remove, set properties, inspect, and list limiters.

Limiter hierarchies

Limiters can optionally be linked to a single parent limiter and/or multiple child limiters. A limiter only allows a piece of work if all of its ancestors (its parent, its parent’s parent, and so on) also allow the work. You configure a hierarchy by setting the parent property on each limiter.

Warning The limiter hierarchy is the same on all nodes — per-node hierarchies are not possible. (Nor is it possible to create a cycle among parent/child limiters.)

Bypassing a limiter

All limiters have a bypassed property. If the flag is true, then the limiter itself takes no part in the decision about allowing work. If it has a parent, it delegates the question to the parent. If it doesn’t have a parent, it always allows all work.

Rhino has no concept of enabling or disabling a limiter. Instead, you use the bypassed property.
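
The parent and bypass semantics described above can be summarised in a few lines of Java (a sketch only; names and structure are illustrative, not Rhino’s implementation):

public abstract class LimiterSketch {
    private final LimiterSketch parent;   // null when the limiter has no parent
    private boolean bypassed;

    protected LimiterSketch(LimiterSketch parent) {
        this.parent = parent;
    }

    // A piece of work is allowed only if this limiter and all of its ancestors allow it.
    public final boolean allow(int units) {
        if (bypassed) {
            // A bypassed limiter takes no part in the decision itself.
            return parent == null || parent.allow(units);
        }
        // (A real implementation would avoid consuming local capacity when an ancestor rejects.)
        return localDecision(units) && (parent == null || parent.allow(units));
    }

    public void setBypassed(boolean bypassed) { this.bypassed = bypassed; }

    // Type-specific logic (rate, queue saturation, stat tiers, ...) goes here.
    protected abstract boolean localDecision(int units);
}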

Default limiter hierarchy

By default Rhino has two limiters, with the following configuration:

Name              Type               Parent            Bypassed   Configuration
QueueSaturation   QUEUE_SATURATION   <none>            false      maxSaturation=85%
SystemInput       RATE               QueueSaturation   true       maxRate=0, timeUnit=seconds, depth=1
So by default, limiting only happens when the event staging queue is 85% or more full. Both limiters can be reconfigured as necessary. QueueSaturation can be removed, but SystemInput cannot (although it doesn’t have to be used for anything).

Limiter endpoints

A limiter endpoint is the interface between code that uses rate limiting, and the rate-limiting system itself. Administrators cannot create limiter endpoints — they are created as part of RA entities and SBBs. The only configuration available for a limiter endpoint is whether or not it is connected to a limiter. Limiter endpoints are not the same as SLEE endpoints — they are different concepts.

Endpoints in RAs and SBBs

RAs and SBBs may have any number of limiter endpoints, and there is no restriction on what they can be used for. Documentation with each RA or SBB should list and explain the purpose of its limiter endpoints. Typical uses include throttling output of messages to external resources and throttling input messages before passing them to the SLEE.

RA "Input" endpoints

The SLEE automatically creates a limiter endpoint named RAEntity/<entityname>/Input for every RA entity. These endpoints let the SLEE throttle incoming messages from RA entities. By default each "Input" endpoint is connected to the built-in "SystemInput" limiter, but the administrator can disconnect it or connect it to a different limiter.

The SLEE will try to use one unit on the "Input" endpoint every time a new activity is started. If the endpoint denies the unit then the SLEE rejects the activity. The SLEE will forcibly use one unit every time the RA passes in an event or ends an activity. This functionality is built into Rhino, and automatically happens for all RA entities, regardless of whether or not they use other limiter endpoints.

Tip See also Managing Limiter Endpoints, for procedures to list limiter endpoints, connect them to and disconnect them from a limiter, and find which limiter is connected to them.

What are units?

Units are an abstract concept representing the cost of doing a piece of work. For example, one unit might represent a normal piece of work, so three units indicate a piece of work that needs three times as much processing.

The RA or SBB determines the number of units used for a particular piece of work. Some unit costs may be configurable through configuration properties or deployment descriptors. This information should be documented for each individual RA or SBB.

Using units

Code can ask an endpoint "Can I do x units of work?". If the endpoint is connected to a limiter, the limiter will answer yes or no. If the endpoint is not connected to a limiter, the answer is always yes. If the answer is yes, the units are said to have been used. If the answer is no, the units are said to have been rejected.

Code can also tell the endpoint "I am doing x units of work that cannot be throttled". The endpoint passes this message to a limiter if connected (otherwise, it ignores the message). The units in this case are said to have been forcibly used.

Future limiter decisions do not differentiate between units used and those forcibly used. Rhino just counts both as having been "used".
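
A minimal sketch of the endpoint behaviour just described (illustrative names; this is not the Rhino limiter endpoint API):

public class LimiterEndpointSketch {
    interface Limiter {
        boolean tryUse(int units);   // may be rejected
        void forceUse(int units);    // always counted as used
    }

    private Limiter limiter;         // null when the endpoint is disconnected

    // "Can I do x units of work?" The answer is always yes when disconnected.
    public boolean use(int units) {
        return limiter == null || limiter.tryUse(units);
    }

    // "I am doing x units of work that cannot be throttled."
    public void forceUse(int units) {
        if (limiter != null) {
            limiter.forceUse(units);
        }
    }

    public void connect(Limiter limiter) { this.limiter = limiter; }
    public void disconnect() { this.limiter = null; }
}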

Example

The following diagram illustrates an example rate limiting configuration, with two limiter hierarchies. Incoming traffic from the MSC is limited by the "FromSwitch" limiter and the limiters further up the chain. The Location and SRI services have defined limiter endpoints, which the administrator has connected (directly or indirectly) to the "To HLR" limiter to limit the total rate of requests to the HLR.

(Diagram: example rate limiting configuration with two limiter hierarchies.)

Limiter Types

Rhino comes with the following types of limiters:

Rate limiter

A rate limiter limits the rate of work. It is typically used to limit the rate of incoming events or outgoing requests.

Type Name

RATE
(the type argument specified when creating a limiter)

Rejects work when…​

the number of units used (or forcibly used) during a given timeUnit exceeds maxRate. The timeUnit can be one second, one minute, one hour, or one day (a 24-hour period, not a calendar day).

Rhino implements rate limiters with a token bucket algorithm, where the depth property determines the bucket size. The actual bucket size is maxRate * depth.

The default setting for depth is 1.0, so "50/sec" means "allow 50 per second". When depth is 2, "50/sec" means "allow an initial burst of 100, then 50 per second". The recommended maxRate is a value at which CPU usage sits at around 85%.

Example

Configured as "maxRate=100.0 timeUnit=seconds depth=2.0", the limiter has a bucket size of 100.0*2.0=200.0. If the bucket is empty when the limiter becomes overloaded, it allows 200+100=300 units in the first second and then 100 units per second after that.

Example

Configured as "maxRate=0.1 timeUnit=seconds depth=10.0", the limiter has a bucket size of 0.1*10.0=1.0. If the bucket is empty when the limiter becomes overloaded, it allows 1.0+0.1=1 units in the first second and then 1 units per 1/maxRate seconds after that.

Properties

Property   Legal values                                   Default   Settable per node
bypassed   true, false                                    false     yes
parent     name of another limiter, or "" for no parent   ""        no
maxrate    1.0 ⋯ 10000000.0                               0.0       yes
depth      0.01 ⋯ 100.0                                   1.0       yes
timeunit   seconds, minutes, hours, days ('days' means    seconds   no
           24-hour periods, not calendar days)

Queue-saturation limiter

The queue-saturation limiter rejects work when the event-staging queue (explained in the Staging section) passes a given saturation. It provides some overload protection, by limiting incoming activities in cases where too much work is backlogged, while allowing enough headroom to process existing activities.

For example, the default configuration has the QueueSaturation limiter configured with an allowed saturation of 85%. With the default maximum queue size of 3000, this limiter starts rejecting new activities when 2550 or more items are in the queue (leaving 15% headroom for processing existing activities).

Type Name

QUEUE_SATURATION
(the type argument specified when creating a limiter)

Rejects work when…​

the number of items in the staging queue reaches maxSaturation, expressed as a percentage of the queue’s capacity.

Example

Configured as maxSaturation=80.0 and a queue capacity of 200, the limiter will reject work when the queue contains 160 or more items.
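
The threshold arithmetic is simple enough to show directly (a sketch; not Rhino’s implementation):

public class SaturationCheckSketch {
    // Returns true when new work should be rejected.
    static boolean saturated(int queuedItems, int queueCapacity, double maxSaturation) {
        return queuedItems >= queueCapacity * (maxSaturation / 100.0);
    }

    public static void main(String[] args) {
        // The example above: maxSaturation=80.0 with a queue capacity of 200.
        System.out.println(saturated(159, 200, 80.0)); // false
        System.out.println(saturated(160, 200, 80.0)); // true
    }
}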

Properties

Property        Legal values                                   Default   Settable per node
bypassed        true, false                                    false     yes
parent          name of another limiter, or "" for no parent   ""        no
maxSaturation   0.0 ⋯ 100.0                                    0         yes

Absolute stat limiter

The absolute stat limiter rejects work based on the value of a single gauge-type statistic and supports progressive levels of limiting tiers.

This limiter type should be used in cases where a single parameter-set gauge can be compared against one or more fixed values to determine whether the workload should be limited. On each request for one or more units, the current value of the specified statistic is compared against one or more absolute values. Each absolute value can be configured to limit a different percentage of the total unit requests. The absolute value tiers can be structured so as to progressively limit an increasing percentage of all unit requests as the value of the statistic approaches some known threshold.

Tip Only gauge-type statistics are currently supported. Configuring a limiter with a counter or sample-type statistic will cause the limiter to raise a misconfiguration alarm. Any unit requests made to this limiter while it is misconfigured will be accepted (unless rejected by a parent limiter).

Type Name

ABSOLUTE_STAT
(the type argument specified when creating a limiter)

Rejects work when…​

the value of the monitored statistic is equal to or greater than the value of one of the configured limiter tiers that has a non-zero limit percentage.

Example

Configured as

parameterSet=Activities
statistic=active
tiers=[ {10000: 50%} {12000: 75%} {13000: 100%} ]

when the current value of the active gauge in the Activities parameter set is equal to or greater than:

  • 10000 — 50% of all unit requests will be refused;

  • 12000 — 75% of all unit requests will be refused; or

  • 13000 — 100% of all unit requests will be refused.

Properties

Property       Legal values                                        Default         Settable per node
bypassed       true, false                                         false           yes
parent         name of another limiter, or "" for no parent        ""              no
parameterSet   a registered parameter set                          "unspecified"   no
statistic      a statistic in the configured parameter set         "unspecified"   no
tiers          one or more mappings of value to limit percentage   []              yes

Relative stat limiter

The relative stat limiter rejects work based on the values of two related gauge-type statistics and supports progressive levels of limiting tiers.

This limiter type should be used in cases where two related parameter-set gauges can be compared against each other to determine whether the workload should be limited. On each request for one or more units, the current value of the specified statistic is compared against one or more percentages of the current value of the relative statistic. Each percentage value can be configured to limit a different percentage of the total unit requests. The relative percentage tiers can be structured so as to progressively limit an increasing percentage of all unit requests as the value of the statistic approaches a known threshold with respect to the value of the related statistic.

Tip Only gauge-type statistics are currently supported. Configuring a limiter with a counter or sample-type statistic will cause the limiter to raise a misconfiguration alarm. Any unit requests made to this limiter while it is misconfigured will be accepted (unless rejected by a parent limiter).

Type Name

RELATIVE_STAT
(the type argument specified when creating a limiter)

Rejects work when…​

the value of the monitored statistic is equal to or greater than the configured percentage of the relative statistic in one of the tiers that has a non-zero limit percentage.

Example

Configured as

parameterSet=MemDB-Local
statistic=committedSize
relativeParameterSet=MemDB-Local
relativeStatistic=maxCommittedSize
tiers=[ {75%: 50%} {80%: 75%} {90%: 100%} ]

when the current value of the committedSize gauge in the MemDB-Local parameter set is equal to or greater than:

  • 75% of maxCommittedSize — 50% of all unit requests will be refused;

  • 80% of maxCommittedSize — 75% of all unit requests will be refused; or

  • 90% of maxCommittedSize — 100% of all unit requests will be refused.

Properties

Property               Legal values                                                      Default         Settable per node
bypassed               true, false                                                       false           yes
parent                 name of another limiter, or "" for no parent                      ""              no
parameterSet           a registered parameter set                                        "unspecified"   no
statistic              a statistic in the configured parameter set                       "unspecified"   no
relativeParameterSet   a registered parameter set                                        "unspecified"   no
relativeStatistic      a statistic in the configured relative parameter set              "unspecified"   no
tiers                  one or more mappings of relative percentage to limit percentage   []              yes

Note For the stat limiters, when limiting work in a percentage limit tier, the limiter always attempts to keep the percentage of units rejected as close to the specified limit percentage as possible. For example, upon entering a tier configured to limit 50%, if one unit is requested and accepted, then the next unit will be rejected. If a single request for five units is rejected, then around five of the next requests for a single unit will be accepted. When the limiter drops below a certain tier, or the total units requested while limiting surpasses Integer.MAX_VALUE, that tier’s internal running count of total units requested to units rejected is reset.
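
The bookkeeping described in this note can be sketched as follows (illustrative only; Rhino’s internal implementation may differ):

public class TierLimitSketch {
    private long unitsRequested;
    private long unitsRejected;

    // limitPercentage is the active tier's limit percentage (0..100).
    public synchronized boolean tryUse(int units, double limitPercentage) {
        unitsRequested += units;
        // Reject whenever the running rejected fraction is below the tier's target,
        // keeping the percentage of rejected units as close to limitPercentage as possible.
        if ((double) unitsRejected / unitsRequested < limitPercentage / 100.0) {
            unitsRejected += units;
            return false;
        }
        return true;
    }

    public synchronized void reset() {   // e.g. when dropping below the tier, or when
        unitsRequested = 0;              // the running totals approach Integer.MAX_VALUE
        unitsRejected = 0;
    }
}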

Managing Rate Limiting

Below is a summary of the MBeans and a list of the procedures available for managing rate limiting with the Rhino SLEE.

Rate-limiting MBeans

Rate limiting exposes several MBean classes:

MBean                    What it does
LimiterManagementMBean   The main limiting MBean; creates and removes limiters, and connects and disconnects limiter endpoints.
LimiterMBean             Provides management of a particular limiter, specific to the type of that limiter. All limiter MBeans extend LimiterMBean.
LimiterRampUpMBean       Controls ramp-up of a rate limiter.

Tip For convenience, you can get the MBean for the SystemInput limiter through LimiterManagementMBean.getSystemInputLimiterMBean().

Managing Limiters

This section includes instructions for performing the following Rhino SLEE procedures with explanations, examples, and links to related javadocs:

Procedure rhino-console command(s) MBean(s) → Operation

Creating a limiter
 createlimiter
LimiterManagementMBean → createLimiter

Removing a limiter
 removelimiter
LimiterManagementMBean → removeLimiter

Setting limiter properties
 configureratelimiter
 configuresaturationlimiter
 configureabsolutestatlimiter
 configurerelativestatlimiter
LimiterMBean
RateLimiterMBean
QueueSaturationLimiterMBean
AbsoluteStatLimiterMBean
RelativeStatLimiterMBean

Inspecting a limiter
 getlimiterinfo
LimiterMBean
RateLimiterMBean
QueueSaturationLimiterMBean
AbsoluteStatLimiterMBean
RelativeStatLimiterMBean

Listing limiters and limiter hierarchies
 listlimiters
LimiterManagementMBean → getLimiters
LimiterManagementMBean → getHierarchySummary

Creating a Limiter

To create a limiter, use the following rhino-console command or related MBean operation.

Warning
Name character restriction

The limiter name cannot include the "/" character.

Console command: createlimiter

Command

createlimiter <limitername> [-type <limitertype>]
  Description
    Creates a new limiter with the given name, and of the given type if specified.
    If no type is specified, then a RATE limiter is created by default.

Example

To create a queue-saturation type limiter named saturation1:

$ ./rhino-console createlimiter saturation1 -type QUEUE_SATURATION
Successfully created queue_saturation limiter 'saturation1'.

MBean operation: createLimiter

MBean

Rhino operation

void createLimiter(String type, String name)
        throws NullPointerException, InvalidArgumentException, ConfigurationException, ManagementException, LimitingManagementException;

Removing a Limiter

To remove a limiter, use the following rhino-console command or related MBean operation.

Warning

A limiter cannot be removed if it is the parent of any limiters or if any limiter endpoints are connected to it.

Also the SystemInput limiter cannot be removed.

Console command: removelimiter

Command

removelimiter <limitername>
  Description
    Remove the specified limiter

Example

To remove limiter saturation1:

$ ./rhino-console removelimiter saturation1
The Limiter saturation1 has been successfully removed.

MBean operation: removeLimiter

MBean

Rhino operation

void removeLimiter(String name)
        throws NullPointerException, InvalidArgumentException, ConfigurationException, LimitingManagementException;

Setting Limiter Properties

To set limiter properties, use the following rhino-console commands or related MBean operations. For more information on the configuration properties for individual limiter types, see Limiter Types and About Rate Limiting.

Warning Limiters can only be configured administratively — RAs or services cannot configure limiters.

Console commands

Tip For details of available properties for each limiter type, see Limiter Types.

configureratelimiter

Command

configureratelimiter <limitername> [-nodes node1,node2...] <[-property value]
[-property value] ... >
  Description
    Sets the values of the specified configuration properties of the limiter on the
    given node(s). Use a value of '-' to clear existing per-node settings

Example

To set rate limiter properties:

$ ./rhino-console configureratelimiter SystemInput -nodes 101 -bypassed false -maxrate 100
Updated config properties for limiter 'SystemInput':
maxrate=100
bypassed=false

configuresaturationlimiter

Command

configuresaturationlimiter <limitername> [-nodes node1,node2...] <[-property
value] [-property value] ... >
  Description
    Sets the values of the specified configuration properties of the limiter on the
    given node(s). Use a value of '-' to clear existing per-node settings

Example

To set saturation limiter properties:

$ ./rhino-console configuresaturationlimiter QueueSaturation -maxsaturation 75
Updated config properties for limiter 'QueueSaturation':
maxsaturation=75

configureabsolutestatlimiter

Command

configureabsolutestatlimiter <limitername> [-nodes node1,node2...] <[-property
value] [-property value] ... >
  Description
    Sets the values of the specified configuration properties of the limiter on the
    given node(s). Use a value of '-' to clear existing per-node settings

Example

To configure an absolute stat limiter called ActivitiesActive, based on the active statistic in the Activities parameter set, with three progressive limiting tiers such that when the active gauge reports a value at or above:

  • 10000 — 50% of all unit requests will be refused;

  • 12000 — 75% of all unit requests will be refused; or

  • 13000 — 100% of all unit requests will be refused.

$ ./rhino-console configureabsolutestatlimiter ActivitiesActive \
  -parameterset Activities \
  -statistic active \
  -values 10000,12000,13000 \
  -limitpercentages 50,75,100
Updated config properties for limiter 'ActivitiesActive':
parameterset=Activities
statistic=active
values=10000,12000,13000
limitpercentages=50,75,100

configurerelativestatlimiter

Command

configurerelativestatlimiter <limitername> [-nodes node1,node2...] <[-property
value] [-property value] ... >
  Description
    Sets the values of the specified configuration properties of the limiter on the
    given node(s). Use a value of '-' to clear existing per-node settings

Example

To configure a relative stat limiter called MemDBLocalCommitted, based on the committedSize and maxCommittedSize statistics in the MemDB-Local parameter set, with three progressive limiting tiers such that when the committedSize gauge reports a value at or above:

  • 75% of maxCommittedSize — 50% of all unit requests will be refused;

  • 80% of maxCommittedSize — 75% of all unit requests will be refused; or

  • 90% of maxCommittedSize — 100% of all unit requests will be refused.

$ ./rhino-console configurerelativestatlimiter MemDBLocalCommitted \
  -parameterset MemDB-Local \
  -statistic committedSize \
  -relativeparameterset MemDB-Local \
  -relativestatistic maxCommittedSize \
  -relativepercentages 75,80,90 \
  -limitpercentages 50,75,100
Updated config properties for limiter 'MemDBLocalCommitted':
parameterset=MemDB-Local
statistic=committedSize
relativeparameterset=MemDB-Local
relativestatistic=maxCommittedSize
relativepercentages=75,80,90
limitpercentages=50,75,100
Note

If -nodes are specified, these commands set properties for the listed nodes only (the "per-node" properties). Otherwise, these commands update the default properties for the limiter (the properties that apply whenever no per-node property is set).

You cannot change the name or type of a limiter — these are set when a limiter is created.

MBean operations

Limiter MBean operations

Operation Usage

setBypassedDefault

void setBypassedDefault(boolean bypassed)
        throws ConfigurationException;

setBypassedForNode

void setBypassedForNode(boolean[] bypassed, int[] nodeIDs)
        throws NullPointerException, ConfigurationException, InvalidArgumentException;

setParent

void setParent(String parentName)
        throws ConfigurationException, ValidationException, NullPointerException, InvalidArgumentException;

RateLimiter MBean operations

Operation Usage

setDepthDefault

void setDepthDefault(double depth)
        throws ConfigurationException, ValidationException;

setDepthForNode

void setDepthForNode(double[] depth, int[] nodeIDs)
        throws ConfigurationException, ValidationException, NullPointerException, InvalidArgumentException;

setMaxRateDefault

void setMaxRateDefault(double maxRate)
        throws ConfigurationException, ValidationException;

setMaxRateForNode

void setMaxRateForNode(double[] maxRate, int[] nodeIDs)
        throws ConfigurationException, ValidationException, NullPointerException, InvalidArgumentException;

setTimeUnit

void setTimeUnit(String timeUnit)
        throws ConfigurationException, ValidationException;

QueueSaturationLimiter MBean operations

Operation Usage

setMaxSaturationDefault

void setMaxSaturationDefault(double maxSaturation)
        throws ConfigurationException, ValidationException;

setMaxSaturationForNode

void setMaxSaturationForNode(double[] maxSaturation, int[] nodeIDs)
        throws ConfigurationException, ValidationException, NullPointerException, InvalidArgumentException;

StatLimiter MBean operations

Operation Usage

setParameterSetStatistic

void setParameterSetStatistic(String parameterSet, String statistic)
        throws ConfigurationException, ValidationException;

AbsoluteStatLimiter MBean operations

Operation Usage

setTier

void setTier(long value, double limitPercentage)
        throws ConfigurationException, ValidationException;

setTierForNode

void setTierForNode(long[] values, double[] limitPercentages, int[] nodeIDs)
        throws ConfigurationException, InvalidArgumentException, ValidationException;

clearTier

void clearTier(long value)
        throws ConfigurationException, ValidationException;

clearTierForNode

void clearTierForNode(long[] values, int[] nodeIDs)
        throws ConfigurationException, InvalidArgumentException, ValidationException;

RelativeStatLimiter MBean operations

Operation Usage

setRelativeParameterSetStatistic

void setRelativeParameterSetStatistic(String parameterSet, String statistic)
        throws ConfigurationException, ValidationException;

setTier

void setTier(double relativePercentage, double limitPercentage)
        throws ConfigurationException, ValidationException;

setTierForNode

void setTierForNode(double[] relativePercentages, double[] limitPercentages, int[] nodeIDs)
        throws ConfigurationException, InvalidArgumentException, ValidationException;

clearTier

void clearTier(double relativePercentage)
        throws ConfigurationException, ValidationException;

clearTierForNode

void clearTierForNode(double[] relativePercentages, int[] nodeIDs)
        throws ConfigurationException, InvalidArgumentException, ValidationException;

Inspecting a Limiter

To inspect a limiter, use the following rhino-console command or related MBean operations.

Console command

getlimiterinfo

Command

getlimiterinfo <limitername> [-c]
  Description
    Displays the current configuration settings of the specified limiter.  If the -c
    flag is provided, all stored default and per node settings for the limiter are
    listed.  Otherwise the current configuration of all event routing nodes (as
    derived from the stored settings) is listed.
Note

The first variant of this command, using the -c flag (or the LimiterMBean.getConfigSummary method) displays all configuration properties stored for the limiter.

The second variant, without the -c flag (or the LimiterMBean.getInfoSummary method), displays the effective configuration for the limiter as derived from the stored configuration properties. This variant of the command lists information for all event-routing nodes, whereas the MBean method lets you specify which nodes you’re interested in. (You can get a list of all event-routing nodes from RhinoHousekeepingMBean.getEventRouterNodes().)

Examples

To view all configuration properties stored for a limiter SystemInput:

$ ./rhino-console getlimiterinfo SystemInput -c
limiter-name   node-id    bypassed   depth   maxrate   parent            time-unit   type
-------------  ---------  ---------  ------  --------  ----------------  ----------  -----
  SystemInput   defaults       true     1.0       0.0   QueueSaturation     SECONDS   RATE
          n/a        101      false       *     100.0               n/a         n/a    n/a
2 rows

'*' means no value set
'n/a' means setting not configurable per node

NOTE: Ramp up of SystemInput limiter is currently disabled

To view the effective configuration for limiter SystemInput, as derived from the stored configuration properties:

$ ./rhino-console getlimiterinfo SystemInput
limiter-name   node-id   bypassed   depth   maxrate   parent            time-unit   type
-------------  --------  ---------  ------  --------  ----------------  ----------  -----
  SystemInput       101      false     1.0     100.0   QueueSaturation     SECONDS   RATE
1 rows

'*' means no value set


NOTE: Ramp up of SystemInput limiter is currently disabled

MBean operations

Limiter MBean operations

Operation Usage

getConfigSummary

TabularData getConfigSummary();

getInfoSummary

TabularData getInfoSummary(int[] nodeIDs)
      throws NullPointerException, ConfigurationException, InvalidArgumentException;

getChildLimiters

String[] getChildLimiters()
      throws ConfigurationException;

getConnectedEndPoints

String[] getConnectedEndPoints()
      throws ConfigurationException;

getName

String getName()
      throws ConfigurationException;

getParent

String getParent()
      throws ConfigurationException;

getType

String getType()
      throws ConfigurationException;

isBypassedDefault

boolean isBypassedDefault()
      throws ConfigurationException;

isBypassedForNode

boolean[] isBypassedForNode(int[] nodeIDs)
      throws NullPointerException, ConfigurationException, InvalidArgumentException;

RateLimiter MBean operations

Operation Usage

getDepthDefault

double getDepthDefault()
      throws ConfigurationException;

getDepthForNode

double[] getDepthForNode(int[] nodeIDs)
      throws ConfigurationException, NullPointerException, InvalidArgumentException;

getMaxRateDefault

double getMaxRateDefault()
      throws ConfigurationException;

getMaxRateForNode

double[] getMaxRateForNode(int[] nodeIDs)
      throws ConfigurationException, NullPointerException, InvalidArgumentException;

getTimeUnit

String getTimeUnit()
      throws ConfigurationException;

QueueSaturationLimiter MBean operations

Operation Usage

getMaxSaturationDefault

double getMaxSaturationDefault()
      throws ConfigurationException;

getMaxSaturationForNode

double[] getMaxSaturationForNode(int[] nodeIDs)
      throws ConfigurationException, NullPointerException, InvalidArgumentException;

StatLimiter MBean operations

Operation Usage

getParameterSet

String getParameterSet()
      throws ConfigurationException;

getStatistic

String getStatistic()
      throws ConfigurationException;

getLimitPercentages

double[] getLimitPercentages()
      throws ConfigurationException;

getLimitPercentagesForNodes

double[][] getLimitPercentagesForNodes(int[] nodeIDs)
      throws ConfigurationException, InvalidArgumentException;

AbsoluteStatLimiter MBean operations

Operation Usage

getTierValues

long[] getTierValues()
        throws ConfigurationException;

getTierValuesForNodes

long[][] getTierValuesForNodes(int[] nodeIDs)
      throws ConfigurationException, InvalidArgumentException;

RelativeStatLimiter MBean operations

Operation Usage

getRelativeParameterSet

String getRelativeParameterSet()
      throws ConfigurationException;

getRelativeStatistic

String getRelativeStatistic()
      throws ConfigurationException;

getTierPercentages

double[] getTierPercentages()
        throws ConfigurationException;

getTierPercentagesForNodes

double[][] getTierPercentagesForNodes(int[] nodeIDs)
      throws ConfigurationException, InvalidArgumentException;

Listing Limiters and Limiter Hierarchies

To list all limiters and limiter hierarchies, use the following rhino-console command or related MBean operations.

Console command: listlimiters

Command

listlimiters [-v]
  Description
    Lists all limiters. If the '-v' flag is provided, all limiter hierarchies and
    connected endpoints are displayed.

Example

To list all limiters:

$ ./rhino-console listlimiters
QueueSaturation
rate1
SystemInput

To display all limiter hierarchies and connected endpoints:

$ ./rhino-console listlimiters -v
QueueSaturation
+- SystemInput
   +- Endpoint:RAEntity/entity1/Input
   +- Endpoint:RAEntity/entity2/Input
rate1
   (Has no children or endpoints)

MBean operation: getLimiters

MBean

Rhino operation

String[] getLimiters() throws ConfigurationException;

MBean operation: getHierarchySummary

MBean

Rhino operation

String getHierarchySummary()
          throws ConfigurationException, ManagementException;

Managing Limiter Endpoints

This section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:

Procedure rhino-console command(s) MBean(s) → Operation

Connecting a limiter endpoint to a limiter
 connectlimiterendpoint
LimiterManagementMBean → connectLimiterEndpoint

Disconnecting a limiter endpoint from a limiter
 disconnectlimiterendpoint
LimiterManagementMBean → disconnectLimiterEndpoint

Listing limiter endpoints
 listlimiterendpoints
LimiterManagementMBean → getLimiterEndpoints

Finding which limiter is connected to a limiter endpoint
 getlimiterforlimiterendpoint
LimiterManagementMBean → getLimiterForEndpoint

Connecting a Limiter Endpoint to a Limiter

To connect a limiter endpoint to a limiter, use the following rhino-console command or related MBean operation.

Console command: connectlimiterendpoint

Command

connectlimiterendpoint <limiterendpoint> <limiter>
  Description
    Sets the limiter endpoint to use the specified limiter

Example

To connect limiter endpoint RAEntity/entity1/Input to limiter rate1:

$ ./rhino-console connectlimiterendpoint RAEntity/entity1/Input rate1
Connected limiter endpoint 'RAEntity/entity1/Input' to limiter 'rate1'

MBean operation: connectLimiterEndpoint

MBean

Rhino operation

void connectLimiterEndpoint(String limiterEndpointID, String limiterName)
        throws NullPointerException, InvalidArgumentException, ConfigurationException, ManagementException, LimitingManagementException;

Disconnecting a Limiter Endpoint from a Limiter

To disconnect a limiter endpoint from a limiter, use the following rhino-console command or related MBean operation.

Console command: disconnectlimiterendpoint

Command

disconnectlimiterendpoint <limiterendpoint>
  Description
    Removes the limiter for a limiter endpoint

Example

To disconnect limiter endpoint RAEntity/entity1/Input:

$ ./rhino-console disconnectlimiterendpoint RAEntity/entity1/Input
Disconnected limiter endpoint 'RAEntity/entity1/Input'

MBean operation: disconnectLimiterEndpoint

MBean

Rhino operation

void disconnectLimiterEndpoint(String limiterEndpointID)
        throws NullPointerException, InvalidArgumentException, ConfigurationException,  LimitingManagementException, ManagementException;

Listing Limiter Endpoints

To list all limiter endpoints, use the following rhino-console command or related MBean operation.

Console command: listlimiterendpoints

Command

listlimiterendpoints [-v]
  Description
    Lists all available limiter endpoints. If the '-v' flag is provided, the limiter
    endpoint's current used limiter is also provided.

Example

$ ./rhino-console listlimiterendpoints
RAEntity/entity1/Input
RAEntity/entity1/inbound

MBean operation: getLimiterEndpoints

MBean

Rhino operation

String[] getLimiterEndpoints()
        throws ConfigurationException, ManagementException;

Finding which Limiter is Connected to a Limiter Endpoint

To find which limiter is connected to a limiter endpoint, use the following rhino-console command or related MBean operation.

Console command: getlimiterforlimiterendpoint

Command

getlimiterforlimiterendpoint <limiterendpoint>
  Description
    Returns the name of the limiter that the limiter endpoint is using

Example

To find which limiter is connected to limiter endpoint RAEntity/entity1/Input:

$ ./rhino-console getlimiterforlimiterendpoint RAEntity/entity1/Input
LimiterEndpoint 'RAEntity/entity1/Input' is using the limiter 'rate1'

MBean operation: getLimiterForEndpoint

MBean

Rhino operation

String getLimiterForEndpoint(String limiterEndpointID)
        throws NullPointerException, InvalidArgumentException, ConfigurationException, ManagementException;

Managing Rate Limiter Ramp-up

As well as an overview of ramp-up of the rate limiters, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples, and links to related javadocs:

Procedure rhino-console command(s) MBean(s) → Operation

Enabling rate limiter ramp-up
 enablerampup
LimiterRampUpMBean → enableRampUp

Disabling rate limiter ramp-up
 disablerampup
LimiterRampUpMBean → disableRampUp

Inspecting ramp-up configuration
 getrampupconfiguration
LimiterRampUpMBean → isEnabled
LimiterRampUpMBean → getStartRate
LimiterRampUpMBean → getEventsPerIncrement
LimiterRampUpMBean → getRateIncrement

About Rate Limiter Ramp-up

Ramp-up is an optional procedure that gradually increases the rate that a rate limiter allows — from a small value at the beginning of the ramp, typically when a node starts, up to the configured maximum.

This allows time for events such as Just-In-Time compilation and cache-loading, before the maximum work rate applies to the node.

Ramp-up configuration units

SystemInput rate limiter

The ramp-up configuration of the SystemInput rate limiter is expressed in terms of a raw number of SLEE events. That is, the startRate and rateIncrement specify an exact number of events.

For example, if the startRate is 50 then ramp-up begins with an allowance of 50 events per time unit.

The rateIncrement is added to the allowed rate every time Rhino processes eventsPerIncrement events with no rejected events. Rhino counts all events processed, regardless of whether or not they go through the SystemInput limiter.

Other rate limiters

The ramp-up configuration of all other rate limiters is expressed in terms of a percentage of the maximum rate of the limiter.

For example, if maxRate = 500 and startRate = 25.0 then ramp-up begins with an allowance of 500 x 25.0% = 125 units of work per time unit.

Following on with this example, if rateIncrement = 10.0 then the allowed rate increases by 500 x 10.0% = 50 units of work per time unit every time that eventsPerIncrement units of work are used in the limiter.
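
Continuing the example, the progression of the allowed rate can be sketched as follows (illustrative arithmetic only):

public class RampUpSketch {
    public static void main(String[] args) {
        double maxRate = 500.0;
        double startRate = 25.0;      // percent of maxRate: 125 units initially
        double rateIncrement = 10.0;  // percent of maxRate: +50 units per increment

        double allowedRate = maxRate * startRate / 100.0;
        int increment = 0;
        while (allowedRate < maxRate) {
            System.out.printf("after %d increments: %.0f units per time unit%n",
                    increment, allowedRate);
            allowedRate = Math.min(maxRate, allowedRate + maxRate * rateIncrement / 100.0);
            increment++;
        }
        System.out.printf("ramp-up finished at %.0f units per time unit%n", allowedRate);
    }
}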

Enabling or disabling ramp-up

Below is a summary of what happens depending on whether ramp-up is enabled or disabled for the SystemInput rate limiter.

Enabled

  • First, Rhino sets the SystemInput limiter’s maximum allowed rate to startRate.

  • Then, every time Rhino processes eventsPerIncrement events, with no rejected events, it adds rateIncrement to the rate.

  • Ramp-up finishes when the current rate reaches the maxRate value set for the SystemInput limiter.

Disabled

Nothing special happens when the node starts — the maximum rate the SystemInput limiter allows is simply maxRate.

Below is a summary of what happens when ramp-up is enabled or disabled for any other rate limiter.

Enabled

  • First, Rhino sets the rate limiter’s maximum allowed rate to maxRate x startRate%.

  • Then, every time eventsPerIncrement units of work are used in the limiter, it adds maxRate x rateIncrement% to the rate.

  • Ramp-up finishes when the current rate reaches the maxRate value set for the rate limiter.

Disabled

Nothing special happens when the node starts — the maximum rate the rate limiter allows is simply maxRate.

Warning Ramp-up has no effect if the rate limiter’s bypassed flag is true.
Warning Ramp-up restarts from startRate if any property of the ramp-up’s configuration is modified, if the limiter’s maxRate is changed, or if the limiter changes from being bypassed to not being bypassed.
Tip You configure ramp-up globally, but each node ramps up independently. So if a node restarts, it ramps up again — without affecting other already running nodes.

Enabling Rate Limiter Ramp-up

To enable rate limiter ramp-up, use the following rhino-console command or related MBean operation.

Console command: enablerampup

Command

enablerampup [-limiter <limitername>] <startrate> <rateincrement>
<eventsperincrement>
  Description
    Enables rampup of a limiter's rate with the provided startrate, rateincrement,
    and eventsperincrement.  If no limiter name is given then the SystemInput
    limiter is updated.

Example

To enable SystemInput limiter ramp-up:

$ ./rhino-console enablerampup 10 10 1000
Enabled rampup of the SystemInput limiter rate with config properties:
startrate=10.0 events per time unit
rateincrement=10.0 events per time unit
eventsperincrement=1000
Tip: the ramp up will only be effective when the SystemInput limiter is not bypassed.

To enable ramp-up on a rate limiter named "From MSC":

$ ./rhino-console enablerampup -limiter "From MSC" 5 5 800
Enabled rampup of the From MSC limiter rate with config properties:
startrate=5.0% of maximum rate
rateincrement=5.0% of maximum rate
eventsperincrement=800
Tip: the ramp up will only be effective when the From MSC limiter is not bypassed.

MBean operation: enableRampUp

MBean

Rhino operation

void enableRampUp(double startRate, double rateIncrement, int eventsPerIncrement)
        throws ConfigurationException;
Warning The rate limiter’s bypassed flag must be false for ramp-up to have any effect.
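
If you are scripting management over JMX directly, the enableRampUp operation can be invoked through a standard javax.management.MBeanServerConnection. The following is a minimal sketch rather than Rhino-specific client code: it assumes you already hold an open MBeanServerConnection to Rhino, and the LimiterRampUpMBean ObjectName is a placeholder you would query from your installation.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class RampUpHelper {
    // Invokes enableRampUp(double, double, int) as documented above.
    // The ObjectName must be the LimiterRampUpMBean registered by your
    // Rhino installation; obtaining the connection is not shown here.
    public static void enableRampUp(MBeanServerConnection conn,
                                    ObjectName rampUpMBean,
                                    double startRate,
                                    double rateIncrement,
                                    int eventsPerIncrement) throws Exception {
        conn.invoke(rampUpMBean,
                    "enableRampUp",
                    new Object[] { startRate, rateIncrement, eventsPerIncrement },
                    new String[] { "double", "double", "int" });
    }
}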

Disabling Rate Limiter Ramp-up

To disable rate limiter ramp-up, use the following rhino-console command or related MBean operation.

Console command: disablerampup

Command

disablerampup [limitername]
  Description
    Disables rampup of the specified limiter's rate, or the rate of the SystemInput
    limiter if no limiter name is given.

Example

To disable SystemInput limiter ramp-up:

$ ./rhino-console disablerampup
Disabled rampup of the SystemInput limiter rate

To disable ramp-up on a rate limiter named "From MSC":

$ ./rhino-console disablerampup "From MSC"
Disabled rampup of the From MSC limiter rate

MBean operation: disableRampUp

MBean

Rhino operation

void disableRampUp()
        throws ConfigurationException;

Inspecting Rate Limiter Ramp-up Configuration

To inspect a rate limiter’s ramp-up configuration, use the following rhino-console command or related MBean operation.

Console command: getrampupconfiguration

Command

getrampupconfiguration [limitername]
  Description
    Retrieves the limiter rampup configuration settings, if it is enabled. If no
    limiter is specified the settings of the SystemInput limiter are shown.

Example

To inspect the SystemInput limiter’s ramp-up configuration:

$ ./rhino-console getrampupconfiguration
Rampup of the SystemInput limiter is active with the following config properties:
startrate=10.0 events per time unit
rateincrement=10.0 events per time unit
eventsperincrement=100

To inspect the ramp-up configuration of the "From MSC" rate limiter:

$ ./rhino-console getrampupconfiguration "From MSC"
Rampup of the From MSC limiter is active with the following config properties:
startrate=5.0% of maximum rate
rateincrement=5.0% of maximum rate
eventsperincrement=800

LimiterRampUp MBean operations

Operation

Usage

isEnabled

boolean isEnabled()
        throws ConfigurationException;

getStartRate

double getStartRate()
        throws ConfigurationException;

getRateIncrement

double getRateIncrement()
        throws ConfigurationException;

getEventsPerIncrement

int getEventsPerIncrement()
        throws ConfigurationException;
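
Because these operations follow JavaBean naming conventions, a standard MBean implementation typically also exposes them as the read-only attributes Enabled, StartRate, RateIncrement and EventsPerIncrement. Below is a minimal sketch that reads them that way; it assumes an existing MBeanServerConnection and a placeholder ObjectName. If your installation exposes them only as operations, use invoke instead of getAttribute.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class RampUpConfigReader {
    // Prints the ramp-up configuration; attribute names assume the
    // standard getter-to-attribute mapping (isEnabled() -> "Enabled").
    public static void dump(MBeanServerConnection conn,
                            ObjectName rampUpMBean) throws Exception {
        System.out.println("enabled=" + conn.getAttribute(rampUpMBean, "Enabled"));
        System.out.println("startRate=" + conn.getAttribute(rampUpMBean, "StartRate"));
        System.out.println("rateIncrement=" + conn.getAttribute(rampUpMBean, "RateIncrement"));
        System.out.println("eventsPerIncrement=" + conn.getAttribute(rampUpMBean, "EventsPerIncrement"));
    }
}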

Monitoring Limiter Statistics

You can monitor limiter statistics using the rhino-stats tool or the StatsManagement MBean.

The root parameter set is called Limiters, and it has one child parameter set for each limiter; for example, Limiters.SystemInput or Limiters.QueueSaturation.

Limiter parameters recorded

Rhino records the following limiter parameters:

unitsUsed — total units successfully used or forced

Records the total number of units allowed to be used and units forced to be used. Increments whenever units are allowed to or forced to be used; never decrements.

unitsRejectedByParent — total units denied by a limiter’s parent

Records the total number of units not allowed to be used because the parent of the limiter denied their use. This includes units denied because of any ancestor (such as the parent of the parent). The unitsRejected statistic also includes units rejected by a parent. Increments whenever the parent of a limiter denies unit use; never decrements.

unitsRejected — total units denied by a limiter

Records the total number of units not allowed to be used because a limiter or its parent denied their use. This includes units denied because of any ancestor (such as the parent of the parent). Increments whenever a limiter denies unit use; never decrements.

Example

The following excerpt shows the number of units a limiter allows and rejects, second by second.

$ ./rhino-stats -m Limiters.SystemInput
2009-03-11 06:57:43.903  INFO    [rhinostat]  Connecting to localhost:1199
2009-03-11 06:57:44.539  INFO    [dispatcher]  Establish direct session DirectSession[host=server1 port=17400 id=56928181032135173]
2009-03-11 06:57:44.542  INFO    [dispatcher]  Connecting to localhost/127.0.0.1:17400

                                 Limiters.SystemInput
time                       rejected  rejectedByParent  used
-----------------------   ---------------------------------
2009-03-11 06:57:46.604           -                 -     -
2009-03-11 06:57:47.604          14                 0   103
2009-03-11 06:57:48.604          14                 0   102
2009-03-11 06:57:49.604          11                 0   101
2009-03-11 06:57:50.604          12                 0    99
2009-03-11 06:57:51.604          13                 0   102
2009-03-11 06:57:52.604          14                 0   101
2009-03-11 06:57:53.604           8                 0    96

(In this example, rejectedByParent is 0, as SystemInput has no parent.)

Using Alarms with Limiters

Threshold Alarms can be configured for a limiter based on any limiter statistics.

Note See the Configuring Rules section for general instructions on installing threshold alarm rules, and the configuration example on this page.

Pre-existing alarms

By default, Rhino has two threshold alarms pre-configured to indicate when one of the two pre-configured limiters rejects work: the SystemInput Rejecting Work alarm for the SystemInput limiter, and the QueueSaturation Rejecting Work alarm for the QueueSaturation limiter. Each rate limiter may also generate a Negative capacity alarm if the amount of forced work it must keep track of reaches its internal limit.

SystemInput rejecting work

Alarm Message

SystemInput rate limiter is rejecting work

Type

 LIMITING

Instance ID

system-input-limiter-rejecting-work

Level

 MAJOR

Raised if…​

…​the SystemInput limiter is rejecting work for more than one second.

Cleared if…​

…​the SystemInput limiter has not rejected any work for five seconds.

Example output

2009-03-02 17:13:43.893  Major   [rhino.facility.alarm.manager]   <Timer-2> Alarm 101:136455512705:8
    [SubsystemNotification[subsystem=ThresholdAlarms],LIMITING,system-input-limiter-rejecting-work]
    was raised at 2009-03-02 17:13:43.893 to level Major
        SystemInput rate limiter is rejecting work

QueueSaturation Rejecting Work

Alarm Message

QueueSaturation limiter is rejecting work

Type

 LIMITING

Instance ID

queue-saturation-limiter-rejecting-work

Level

 MAJOR

Raised if…​

…​the QueueSaturation limiter is rejecting work for more than one second.

Cleared if…​

…​the QueueSaturation limiter has not rejected any work for five seconds.

Example output

2009-03-02 17:16:37.697  Major   [rhino.facility.alarm.manager]   <Timer-1> Alarm 101:136455512705:10
    [SubsystemNotification[subsystem=ThresholdAlarms],LIMITING,queue-saturation-limiter-rejecting-work]
    was raised at 2009-03-02 17:16:34.592 to level Major
          QueueSaturation limiter is rejecting work

Negative capacity alarm

Alarm Message

Token count in rate limiter "<LIMITER_NAME>" capped at negative saturation point on node <NODE_ID>. Too much work has been forced. Alarm will clear once token count >= 0.

Type

 ratelimiter.below_negative_capacity

Instance ID

 nodeID=<NODE_ID>,limiter=<LIMITER_NAME>

Level

 WARNING

Raised if…​

…​a very large number of units have been forcibly used and the internal token counter has reached the biggest possible negative number (-2,147,483,648).

Cleared if…​

…​token counter >= 0

Example output

2009-03-05 01:14:59.597  Warning [rhino.facility.alarm.manager]   <Receiver for switchID 1236168893> Alarm 101:136654511648:16

[SubsystemNotification[subsystem=LimiterManager],limiting.ratelimiter.below_negative_capacity,nodeID=101,limiter=SystemInput]
  was raised at 2009-03-05 01:14:59.596 to level Warning
        Token count in rate limiter "SystemInput" capped at negative saturation point on node 101.
        Too much work has been forced. Alarm will clear once token count >= 0.

Threshold alarm example

The following configuration example defines the pre-existing system-input-limiter-rejecting-work alarm.

<threshold-rules active="true" name="system-input-limiter-rejecting-work">
    <trigger-conditions name="Trigger conditions" operator="OR" period="1000">
        <simple-threshold operator="&gt;" value="0.0">
            <select-statistic calculate-delta="true" parameter-set="Limiters.SystemInput" statistic="unitsRejected"/>
        </simple-threshold>
    </trigger-conditions>
    <reset-conditions name="Reset conditions" operator="OR" period="5000">
        <simple-threshold operator="==" value="0.0">
            <select-statistic calculate-delta="true" parameter-set="Limiters.SystemInput" statistic="unitsRejected"/>
        </simple-threshold>
    </reset-conditions>
    <trigger-actions>
        <raise-alarm-action level="Major" message="SystemInput rate limiter is rejecting work" type="LIMITING"/>
    </trigger-actions>
    <reset-actions>
        <clear-raised-alarm-action/>
    </reset-actions>
</threshold-rules>
Tip The default threshold alarms can be modified or removed as needed.

Security

Security is an essential feature of the JAIN SLEE standard and Rhino.

It provides access control for: m-lets (management applets), JAIN SLEE components (including resource adaptors, services and libraries) and Rhino node and cluster administration. Rhino’s security subsystem implements a pessimistic security model — to prevent untrusted resource adaptors, m-lets, services or human users from performing restricted functions.

Warning

Transport-layer security and the general security of the remote host and server are important considerations when interconnecting with third-party servers.

Even careful security planning can be defeated by an attacker who holds a trusted key!

Key features of Rhino security include:

Configuring Java Security of Rhino

Note

As Rhino starts, it:

  1. pre-processes configuration files (including rhino.policy)

  2. substitutes configuration variables (such as @RHINO_HOME@)

  3. creates working configuration files (in the node-XXX/work/config subdirectory).

Disabling or debugging security

There may be times when you want to disable security (for example, during development), or enable fine-grained security tracing in Rhino (for example, to track down security-related issues in Rhino).

Disabling security completely

You can disable security two ways:

  1. Insert a rule into the policy file that grants AllPermission to all code:

    grant {
    permission java.security.AllPermission;
    };
  2. Disable the use of a security manager — edit $RHINO_HOME/node-XXX/read-config-variables, commenting out the following line:

    #OPTIONS="$OPTIONS -Djava.security.manager"
Warning
Enable security when running Rhino

Metaswitch recommends you always run Rhino with security enabled.

Debugging security

You can debug Rhino’s security configuration by enabling security tracing (so that the security manager produces trace logs) — edit $RHINO_NODE_HOME/read-config-variables, adding the following line:

OPTIONS="$OPTIONS -Djava.security.debug=access,failure"
Warning
Capture console output

This option will produce a lot of console output. To capture it, redirect the standard out and standard error streams from Rhino to a file. For example:

$ start-rhino.sh > out 2>&1

Excerpt of rhino.policy

Below is an excerpt of $RHINO_HOME/node-XXX/config/rhino.policy:

grant {
  permission java.io.FilePermission "${java.home}${/}lib${/}jaxp.properties","read";

  // Needed by default logging configuration.
  permission java.io.FilePermission "$${rhino.dir.work}$${/}log", "read";
  permission java.io.FilePermission "$${rhino.dir.work}$${/}log$${/}-","read,write,delete";

  // Needed by netty specifically, but it's a sensible top-level permission to grant
  permission java.io.FilePermission "/etc/os-release", "read";
  permission java.io.FilePermission "/usr/lib/os-release", "read";

  // Java "standard" properties that can be read by anyone
  permission java.util.PropertyPermission "java.version", "read";
  permission java.util.PropertyPermission "java.vendor", "read";
  permission java.util.PropertyPermission "java.vendor.url", "read";
  permission java.util.PropertyPermission "java.class.version", "read";
  permission java.util.PropertyPermission "os.name", "read";
  permission java.util.PropertyPermission "os.version", "read";
  permission java.util.PropertyPermission "os.arch", "read";
  permission java.util.PropertyPermission "file.encoding", "read";
  permission java.util.PropertyPermission "file.separator", "read";
  permission java.util.PropertyPermission "path.separator", "read";
  permission java.util.PropertyPermission "line.separator", "read";

  permission java.util.PropertyPermission "java.specification.version", "read";
  permission java.util.PropertyPermission "java.specification.vendor", "read";
  permission java.util.PropertyPermission "java.specification.name", "read";

  permission java.util.PropertyPermission "java.vm.specification.version", "read";
  permission java.util.PropertyPermission "java.vm.specification.vendor", "read";
  permission java.util.PropertyPermission "java.vm.specification.name", "read";
  permission java.util.PropertyPermission "java.vm.version", "read";
  permission java.util.PropertyPermission "java.vm.vendor", "read";
  permission java.util.PropertyPermission "java.vm.name", "read";
};

// Standard java and jdk modules we use get all permissions by default.
// Actual access will be limited by the caller's security context.
grant codeBase "jrt:/java.security.jgss" {
  permission java.security.AllPermission;
};

// ...

Java Security Properties

A per-node configuration file, $RHINO_NODE_HOME/config/rhino.java.security, allows overriding of JVM security settings. This file includes default values for the following networking security properties:

networkaddress.cache.ttl=30
networkaddress.negative.cache.ttl=10

The values of these properties control how long resource adaptors and Rhino-based applications cache network addresses after successful and unsuccessful DNS queries. These values override the ones specified in the JVM’s java.security file. See Oracle’s Networking Properties documentation for more details. The JVM default for networkaddress.cache.ttl is to cache forever (-1), so the introduction of this file to Rhino’s per-node configuration will alter an application’s caching behaviour on upgrade to a newer Rhino version.

To use a different java.security configuration file, modify the following line in $RHINO_NODE_HOME/read-config-variables:

OPTIONS="$OPTIONS -Djava.security.properties=${SCRIPT_WORK_DIR}/config/rhino.java.security"

Secure Access for OA&M Staff

Rhino provides a set of management tools for OA&M staff, including the Rhino Element Manager and various command-line tools.

The following topics explain how you can:

  • manage user authentication

  • encrypt the interconnections between Rhino and all command-line tools

  • configure Rhino to allow management access from a remote trusted host.

Authentication

The Java Authentication and Authorization Service (JAAS) allows integration with enterprise systems such as identity servers, databases, and password files.

JAAS configuration

The file rhino.jaas defines the JAAS modules Rhino uses for authentication:

/** Login Configuration for OpenCloud Rhino **/

jmxr-adaptor {
    com.opencloud.rhino.security.auth.FileLoginModule REQUIRED
    file="$${rhino.dir.base}/rhino.passwd"
    hash="md5";
};
Note See the Javadoc for the JAAS Configuration class for details about flags such as REQUIRED.

The system property java.security.auth.login.config defines the location of rhino.jaas (in read-config-variables for a production Rhino instance, and in jvm_args for the Rhino SDK).

Rhino contains JAAS login modules based on files, LDAP, and JAIN SLEE profiles.

File login module

The FileLoginModule reads login credentials and roles from a file. It is the default login module for a new Rhino installation.

The parameters to the FileLoginModule are:

  • file - specifies the location of the password file.

  • hash - the password hashing algorithm. Use none for clear-text passwords, or a valid java.security.MessageDigest algorithm name (e.g. md5 or sha-1). If not specified, clear-text passwords are used.

Password File Format

<username>:<password>:<role,role...>
  • username - user’s name

  • password - user’s password (or hashed password). May be prefixed by the hash method in {}.

  • roles - comma-separated list of role names that the user belongs to, e.g. rhino,view.

Tip
Using flags and hashed passwords

By default, Rhino stores passwords in cleartext, in the password file. For increased security, store a secure one-way hash of the password instead:

  1. Configure the file login module, changing the hash="none" option to hash="md5".

  2. Use the client/bin/rhino-passwd utility to generate hashed passwords.

  3. Copy those hashed passwords into the password file.
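
For illustration only, a rhino.passwd configured for md5 hashing might contain entries like the following. The usernames and roles here are hypothetical, and the hash shown is the hex-encoded MD5 of the word "password"; check the output of rhino-passwd for the exact encoding your installation expects.

admin:{md5}5f4dcc3b5aa765d61d8327deb882cf99:rhino
operator:{md5}5f4dcc3b5aa765d61d8327deb882cf99:view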

LDAP login module

The LdapLoginModule reads login credentials and roles from an LDAP directory server.

To use this module, edit the JAAS configuration file ${RHINO_HOME}/config/rhino.jaas, and add an entry to the jmxr-adaptor declaration:

jmxr-adaptor {
    com.opencloud.rhino.security.auth.LdapLoginModule SUFFICIENT
        properties="config/ldapauth.properties";

    /* a "backup" login module would typically go here */
};

Configuration Properties

The properties file contains LDAP connection parameters. The properties that this module uses are documented in the example ldapauth.properties file, along with default values and examples.

The file config/ldapauth.properties defines the LDAP-connection configuration:

### Properties for JAAS LDAP login module (LdapLoginModule)
#
# The commented values are the default values that will be used if the given property is not specified.
# The ldap.url property has no default and must be specified.
#
# This properties file should be supplied to the LdapLoginModule using the "properties" property, e.g.
#
# jmxr-adaptor {
#     com.opencloud.rhino.security.auth.LdapLoginModule SUFFICIENT
#     properties="config/ldapauth.properties";
#  };
#

### Connection properties

# An LDAP URL of the form ldap://[host[:port]]/basedn or ldaps://host[:port]/basedn
# Some examples:
# Connect to local directory server
#ldap.url=ldap:///dc=example,dc=com
# Connect to remote directory server
#ldap.url=ldap://remoteserver/dc=example,dc=com
# Connect to remote directory server using SSL
#ldap.url=ldaps://remoteserver/dc=example,dc=com
ldap.url=

# Use TLS.  When set to true, the LdapLoginModule attempts a "Start TLS" request when it connects to the
# directory server.  This should NOT be set to true when using an ldaps:// (SSL) URL.
#ldap.usetls=true

# To use TLS or SSL, you must have your directory server's X509 certificate installed in Rhino's trust
# store, located at $RHINO_BASE/rhino-server.keystore.

### Authentication properties

## Direct mode
# In "direct mode", the login module attempts to bind using a DN calculated from the pattern property.
# Direct mode is used if the ldap.userdnpattern property is specified.

# A DN pattern that can be used to directly login users to LDAP.  This pattern is used for creating a DN string for
# 'direct' user authentication, where the pattern is relative to the base DN in the LDAP URL.
# {0} will be replaced with the submitted username.
# A typical value for this property might be "uid={0},ou=People"

#ldap.userdnpattern=

## Search mode
# In "search mode", the login module binds using the given manager credentials and searches for the user.
# Authentication to LDAP will be done from the DN found if successful.
# Search mode is used if the ldap.userdnpattern property is not specified.

# Bind credentials to search for the user.  May be blank if the directory server allows anonymous connections, or if
# using direct mode.
#ldap.managerdn=
#ldap.managerpw=

# A filter expression used to search for the user DN that will be used in LDAP authentication.
# {0} will be replaced by the submitted username.
#ldap.searchfilter=(uid={0})

# Context name to search in, relative to the base DN in the LDAP URL.
#ldap.searchbase=

### Role resolution properties

# A search is performed using the search base (ldap.role.searchbase), and filter (ldap.role.filter).  The results of
# the search define the Rhino roles.  The role name is in the specified attribute (ldap.roles.nameattr) and must match
# role definitions in Rhino configuration.  The members of each role are determined by examining the values of the
# member attribute (ldap.role.memberattr) and must contain the DN of the authenticated user.

# Attribute on the group entry which denotes the group name.
#ldap.rolenameattr=cn

# A multi-value attribute on the group entry which contains user DNs or ids of the group members (e.g. uniqueMember,member)
#ldap.rolememberattr=uniqueMember

# The LDAP filter used to search for group entries.
#ldap.rolefilter=(objectclass=groupOfUniqueNames)

# A search base for group entry DNs, relative to the DN that already exists on the LDAP server's URL.
#ldap.rolesearchbase=ou=Groups

# Do case-sensitive search by default. Allowed values are true and false.
#ldap.casesensitive=true

TLS setup for ldaps:// or starttls

For security reasons, always use TLS for LDAP authentication, either via an ldaps:// URL or via ldap.usetls=true. Rhino does not use the JDK’s default CA certificates list, so you must add a TLS certificate that Rhino should trust to Rhino’s rhino-server.keystore. This must be done whether you are using a TLS certificate from a public or private CA, or a self-signed certificate, for your LDAP server, otherwise Rhino will refuse to trust the LDAP server. You may use any of:

  • The CA’s root certificate, which will be the most durable choice as it should continue to work across LDAP server certificate rotations for a number of years.

  • One of the CA’s intermediate certificates, which will be less durable than using the root CA certificate.

  • The LDAP server’s certificate, which is the least durable choice because you will need to replace it in each Rhino keystore file whenever the LDAP server gets a new certificate.

To add an LDAP TLS certificate to rhino-server.keystore, run the following, with $PATH_TO_YOUR_CA_CERT replaced with the correct path to your certificate:

keytool -importcert -noprompt -alias ldap-server-ca-cert -file $PATH_TO_YOUR_CA_CERT -keystore rhino-server.keystore
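
After importing, you can confirm the certificate is present in the keystore (keytool prompts for the keystore passphrase):

keytool -list -keystore rhino-server.keystore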

SLEE profile login module

The ProfileLoginModule reads login credentials and roles from a SLEE profile table.

To use this module, edit the JAAS configuration file ${RHINO_HOME}/config/rhino.jaas, and add an entry to the jmxr-adaptor declaration:

jmxr-adaptor {
    com.opencloud.rhino.security.auth.ProfileLoginModule SUFFICIENT
        profiletable="UserLoginProfileTable"
        passwordattribute="HashedPassword"
        rolesattribute="Roles"
        hash="md5";

    /* a "backup" login module would typically go here */
};

ProfileLoginModule supports the following options:

profiletable
  Description: name of the profile table to use
  Default: UserLoginProfileTable

passwordattribute
  Description: profile attribute to compare the password against; the profile attribute type must be java.lang.String
  Default: HashedPassword

rolesattribute
  Description: profile attribute to load the roles from; the profile attribute type must be an array of java.lang.String
  Default: Roles

hash
  Description: hashing algorithm to use for the password; use none for clear text passwords, or a valid java.security.MessageDigest algorithm name (e.g. md5 or sha-1)
  Default: md5

Note If anything other than md5 is specified, then the environment variable on the UserLoginProfile profile spec must be set to match the algorithm used.

The profile login module:

  • finds the profile in a specified table with a name matching the supplied username

  • compares the supplied password with the password stored in the profile; if authentication succeeds, retrieves the roles for that user from the profile.

Rhino comes with a deployable unit you can use to create a profile table for the profile login module (in $RHINO_HOME/lib/user-login-profile-du.jar). It contains a profile specification called UserLoginProfileSpec. You can install it using the rhino-console:

[Rhino@localhost (#3)] installlocaldu ../../lib/user-login-profile-du.jar
installed: DeployableUnitID[url=file:/tmp/rhino/lib/user-login-profile-du.jar]
[Rhino@localhost (#4)] listprofilespecs
ProfileSpecificationID[name=AddressProfileSpec,vendor=javax.slee,version=1.0]
ProfileSpecificationID[name=AddressProfileSpec,vendor=javax.slee,version=1.1]
ProfileSpecificationID[name=ResourceInfoProfileSpec,vendor=javax.slee,version=1.0]
ProfileSpecificationID[name=UserLoginProfileSpec,vendor=Open Cloud,version=1.0]
Tip A profile table named UserLoginProfileTable created using this specification will work with the default configuration values listed above.
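
Once the deployable unit is installed, a profile table can be created from the specification. A hedged example using the createprofiletable console command (check the command’s help for the exact identifier syntax expected by your Rhino version):

[Rhino@localhost (#5)] createprofiletable "name=UserLoginProfileSpec,vendor=Open Cloud,version=1.0" UserLoginProfileTable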

Creating a profile table fallback

Metaswitch recommends configuring a file login module as a fallback mechanism, in case the profile table is accidentally deleted or renamed, or the admin user profile is deleted or changed.

Warning Without a fallback you would not be able to fix the profile table problem, since no user would be able to login using a management client!

To create a profile table fallback, give ProfileLoginModule a SUFFICIENT flag and the FileLoginModule a REQUIRED flag:

jmxr-adaptor {
    com.opencloud.rhino.security.auth.ProfileLoginModule SUFFICIENT
        profiletable="UserLoginProfileTable"
        passwordattribute="HashedPassword"
        rolesattribute="Roles"
        hash="md5";

    com.opencloud.rhino.security.auth.FileLoginModule REQUIRED
        file="$${rhino.dir.base}/rhino.passwd"
        hash="md5";
};

Encrypted Communication with SSL

By default, the interconnection between Rhino and a management client uses the Secure Sockets Layer (SSL) protocol.

(You can disable SSL by editing the JMX Remote Adaptor m-let configuration.)

Note
How does SSL work?

An SSL connection protects data in transit by encrypting it, which prevents eavesdropping and tampering. SSL uses public-key cryptography: data encrypted with a public key, which may be known to everyone, can only be decrypted with the corresponding private (or "secret") key, known only to the recipient of the message.

For more about SSL, please see SSL Certificates HOWTO from the Linux Documentation Project, and Java SE Security Documentation from Oracle.

Below are descriptions of Rhino SSL keystores and using the keytool utility to manage them.

SSL in Rhino

Several keystores store the keys Rhino uses during user authentication. For example, a Rhino SDK installation includes:

Keystore Used by…​ To…​
 $RHINO_HOME/rhino-public.keystore

clients

identify themselves, and confirm the server’s identity

 $RHINO_HOME/rhino-private.keystore

Rhino

identify itself, confirm a client’s identity

 $RHINO_HOME/client/rhino-public.keystore

Rhino OA&M clients (like command line console)

provide a copy of $RHINO_HOME/rhino-public.keystore that travels with the client directory when it is copied to another location

Tip The installation process generates keystores, keys, and certificates for Rhino.

Using keytool to manage keystores

You can use keytool to manage keystores. For example:

$ keytool -list -keystore rhino-public.keystore
Enter keystore password:  <password>

Keystore type: PKCS12
Keystore provider: SUN

Your keystore contains 2 entries

jmxr-ssl-client, Apr 25, 2020, PrivateKeyEntry,
Certificate fingerprint (SHA-256): B4:5A:4E:E3:B8:73:22:C4:94:1C:C7:B7:B5:B0:BF:7E:06:B2:68:D3:D3:21:A4:98:63:2A:12:9B:53:FB:9F:C3
jmxr-ssl-server, Apr 29, 2020, trustedCertEntry,
Certificate fingerprint (SHA-256): BE:B8:00:AD:8B:5E:B3:0D:D5:5A:4B:61:AE:7B:36:F9:CD:DE:8D:8F:98:5A:13:3E:F7:27:C4:D9:D9:89:BA:F7
Note
Change the default passphrase

Rhino keystores and keys have a default passphrase of changeit. As the name suggests, Metaswitch recommends changing it, for example with keytool:

keytool -storepasswd -keystore rhino-public.keystore

Enabling Remote Access

By default, Rhino’s management tools (such as the command-line console or stats console) can only be used from the same host that Rhino runs on. You can, however, securely manage Rhino from a remote host.

As discussed in the preceding topic, Rhino uses SSL to secure its interconnect with management clients. To configure Rhino to support remote management clients:

  1. Copy the client directory to the remote machine.

  2. Allow the remote host to connect to the JMX remote adaptor.

Set up the client directory on the remote machine

The client directory (and its subdirectories) contains all the scripts, configuration files and other dependencies needed by Rhino management clients. To set up the client directory on the remote machine:

  1. Copy the entire directory structure to the remote host: $ scp -r client <user>@<host>:<destination>/

  2. Edit client/etc/client.properties and change rhino.remote.host:

    # RMI properties, file names are relative to client home directory
    rhino.remote.host=<rhino host>
    rhino.remote.port=1199
    # ...

Allow the remote host to connect to the JMX remote adaptor

All management tools connect to Rhino using the JMX Remote Adaptor m-let. By default this component only permits access from the same host that Rhino is running on.

The security-permission-spec section of the node-XXX/config/permachine-mlet.conf and node-XXX/config/pernode-mlet.conf files defines the security environment of an m-let. To allow a remote host to connect to the JMX remote adaptor, edit the security-permission-spec section of the node-XXX/config/permachine-mlet.conf file to enable remote access with an appropriate java.net.SocketPermission:

<mlet enabled="true">
<classpath>
<jar-url>@FILE_URL@@RHINO_BASE@/lib/jmxr-adaptor.jar</jar-url>
<jar-url>@FILE_URL@@RHINO_BASE@/lib/jmxr-adaptor-gpl2.jar</jar-url>
<security-permission-spec>
grant {
...
permission java.net.SocketPermission "<REMOTE_HOST>","accept";
...
};
...
</mlet>
Warning If you are connecting to the Rhino SDK, the file that defines the m-let configuration is $RHINO_SDK/config/mlet.conf.

Configuring the SLEE Component Java Sandbox

All JAIN SLEE components run within a "sandbox" defined by a set of Java security permissions.

Note This section draws heavily from material in the JAIN SLEE 1.1 specification.

Default Security Permissions for SLEE Components

The following table defines the Java platform security permissions that Rhino grants to the instances of SLEE component classes at runtime.

The term "grant" means that Rhino grants the permission, the term "deny" means that Rhino denies the permission.

Permission name SLEE policy
 java.security.AllPermission

deny

 java.awt.AWTPermission

deny

 java.io.FilePermission

deny

 java.net.NetPermission

deny

 java.util.PropertyPermission

grant read, *
deny all other

 java.lang.reflect.ReflectPermission

deny

 java.lang.RuntimePermission

deny

 java.lang.SecurityPermission

deny

 java.io.SerializablePermission

deny

 java.net.SocketPermission

deny

Note This permission set is defined by section 12.1.1.1 (SLEE Component Security Permissions) of the JAIN SLEE 1.1 specification, which also explains how SBB, profile specification, resource adaptor and library components can be granted additional security permissions over and above the default set.

Adding Security Permissions to SLEE Components

SBB, profile specification, resource adaptor and library components can be granted additional security permissions over and above the default set of security permissions granted by the SLEE — by using the security-permissions element in their respective deployment descriptor.

Each security-permissions element contains the following sub-elements:

  • description — an optional informational element

  • security-permission-spec — an element that identifies the security permission policies used by component jar file classes. (For the security-permission-spec element syntax definition, please see the J2SE security documentation).

Note

If the codeBase argument:

  • is not specified for a grant entry — the SLEE assumes the codebase to be the component jar file, and grants security permissions to all classes loaded from the component jar file (that is, to all SLEE components defined in the component jar file). The SLEE does not, however, grant the security permissions to classes loaded from any other dependent component jar required by the components defined in the deployment descriptor.

  • is specified for a grant entry — the SLEE assumes the specified path is relative to the root directory of the component jar within the deployable unit jar (but its use is otherwise undefined by the SLEE specification).

Below are a sample component jar deployment descriptor with added security permissions, and a table of security requirements that apply to methods invoked on classes loaded from different types of component jars with added permissions.

Sample component jar deployment descriptor with added security permissions

Below is an example of a resource adaptor component jar with added security permissions:

<resource-adaptor-jar>
<resource-adaptor>
<description> ... </description>
<resource-adaptor-name> Foo JCC </resource-adaptor-name>
<resource-adaptor-vendor> com.foo </resource-adaptor-vendor>
<resource-adaptor-version> 10/10/20 </resource-adaptor-version>
...
</resource-adaptor>

<security-permissions>
<description>
Allow the resource adaptor to modify thread groups and connect to remotehost on port 1234
</description>
<security-permission-spec>
grant {
permission java.lang.RuntimePermission "modifyThreadGroup";
permission java.net.SocketPermission "remotehost:1234", "connect";
};
</security-permission-spec>
</security-permissions>
</resource-adaptor-jar>

Security requirements for methods invoked on classes loaded from component jars

The following table describes the security requirements that apply to methods invoked on classes loaded from different types of component jars:

Component jar type Security requirements

SBB

  • Event-handler and initial event-selector methods run with the default set of security permissions granted by the SLEE, plus any additional security permissions specified in the SBB’s component jar deployment descriptor.

  • The isolate-security-permissions attribute of the sbb-local-interface element in the SBB’s deployment descriptor controls whether or not security permissions of other protection domains in the call stack are propagated to the SBB when a business method on the SBB Local interface is invoked:

    • If False — then the method in the SBB abstract class invoked as a result of a business method invoked on the SBB Local interface runs with an access control context that includes the protection domain(s) of the SBB as well as the protection domains of any other classes in the call stack as prescribed by the Java security model, such as the SBB that invoked the SBB Local interface method.

    • If True — the SLEE automatically wraps the method invoked on the SBB abstract class in response to the SBB Local interface method invocation in an AccessController.doPrivileged block in order to isolate the security permissions of the invoked SBB. That is, the security permissions of other protection domains in the call stack do not affect the invoked SBB.

  • All methods defined in the javax.slee.Sbb interface can be invoked on an SBB object from an unpredictable call path. If any of these methods need to execute privileged code requiring security permissions over and above the standard set of permissions granted by the SLEE, the additional security permissions must be declared in the SBB component jar’s deployment descriptor and the relevant methods must wrap the privileged code in an AccessController.doPrivileged block to ensure that potentially more restrictive security permissions of other protection domains in the call stack do not prohibit the privileged code from being executed.

Profile spec

  • All management methods invoked on the Profile Management interface run with the default set of security permissions granted by the SLEE, plus any additional security permissions specified in the Profile Specification’s component jar deployment descriptor.

  • The isolate-security-permissions attribute of the profile-local-interface element in the Profile Specification’s deployment descriptor controls whether or not security permissions of other protection domains in the call stack are propagated to the Profile when a business method on the Profile Local interface is invoked.

    • If False — the method in the Profile abstract class invoked as a result of a business method invoked on the Profile Local interface runs with an access control context that includes the protection domain(s) of the Profile Specification as well as the protection domains of any other classes in the call stack as prescribed by the Java security model, such as the SLEE Component that invoked the Profile Local interface method.

    • If True — the SLEE automatically wraps the method invoked on the Profile abstract class in response to the Profile Local interface method invocation in an AccessController.doPrivileged block in order to isolate the security permissions of the invoked Profile, i.e. the security permissions of other protection domains in the call stack do not affect the invoked Profile.

  • The setProfileContext, unsetProfileContext, profilePostCreate, profileActivate, profilePassivate, profileLoad, profileStore, and profileRemove methods defined in the javax.slee.profile.Profile interface can be invoked on a Profile object from an unpredictable call path. If any of these methods need to execute privileged code requiring security permissions over and above the standard set of permissions granted by the SLEE, the additional security permissions must be declared in the Profile Specification component jar’s deployment descriptor and the relevant methods must wrap the privileged code in an AccessController.doPrivileged block to ensure that potentially more restrictive security permissions of other protection domains in the call stack do not prohibit the privileged code from being executed.

  • The profileInitialize and profileVerify methods defined in the javax.slee.profile.Profile interface are invoked as a result of management operations and therefore run with the default set of security permissions granted by the SLEE, plus any additional security permissions specified in the Profile Specification’s component jar deployment descriptor.

Resource adaptor

  • All methods invoked by the SLEE on the javax.slee.resource.ResourceAdaptor and javax.slee.resource.Marshaler interfaces run with the default set of security permissions granted by the SLEE, plus any additional security permissions specified in the Resource Adaptor’s component jar deployment descriptor.

  • All methods that may be invoked by other SLEE Components such as SBBs run with the set of security permissions that is the intersection of the permissions of all protection domains traversed by the current execution thread (up until any AccessController.doPrivileged invocation in the call stack). If any of these methods need to execute privileged code requiring security permissions over and above the standard set of permissions granted by the SLEE, the additional security permissions must be declared in the Resource Adaptor component jar’s deployment descriptor and the relevant methods must wrap the privileged code in an AccessController.doPrivileged block to ensure that potentially more restrictive security permissions of other protection domains in the call stack do not prohibit the privileged code from being executed.

Library

  • All methods run with the set of permissions that is the intersection of the permissions of all protection domains traversed by the current execution thread (up until any AccessController.doPrivileged invocation in the call stack). If a library component method needs to execute privileged code requiring security permissions over and above the standard set of permissions granted by the SLEE, the additional security permissions must be declared in the library component jar’s deployment descriptor and the method must wrap the privileged code in an AccessController.doPrivileged block to ensure that potentially more restrictive security permissions of other protection domains in the call stack do not prohibit the privileged code from being executed.
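
The AccessController.doPrivileged pattern referred to throughout the table above looks like the following. This is a generic Java sketch rather than Rhino-specific code: the file path is hypothetical, and the component would still need the corresponding permission declared in its deployment descriptor.

import java.io.FileInputStream;
import java.io.IOException;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

public class PrivilegedRead {
    // Reads a file inside a doPrivileged block so that more restrictive
    // protection domains earlier in the call stack cannot veto the access.
    // The component's own protection domain must still hold the relevant
    // java.io.FilePermission, declared via security-permission-spec.
    byte[] readConfig(final String path) throws IOException {
        try {
            return AccessController.doPrivileged(
                    (PrivilegedExceptionAction<byte[]>) () -> {
                        try (FileInputStream in = new FileInputStream(path)) {
                            return in.readAllBytes();
                        }
                    });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getException();
        }
    }
}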

External Databases

The Rhino SLEE requires the use of an external database for persistence of management and profile data. Rhino can also provide SLEE applications with access to an external database for persistence of their own data.

Rhino can connect to any external database which has support for JDBC 2.0 and JDBC 2.0’s standard extensions. The JDBC API is the industry standard for database-independent connectivity between the Java programming language and a wide range of databases. The JDBC API provides a call-level API for SQL-based database access. JDBC technology lets you use the Java programming language to exploit "Write Once, Run Anywhere" capabilities for applications that require access to enterprise data. For more information, please see https://docs.oracle.com/javase/tutorial/jdbc.
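
To make the connection model concrete, the sketch below shows generic JDBC usage against any javax.sql.DataSource. It is not Rhino-specific: the subscriber table is hypothetical, and how an application obtains the DataSource (for example, through a Rhino JDBC resource) depends on configuration described later in this section.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class JdbcSketch {
    // Looks up a single value using standard, vendor-independent JDBC.
    static String lookupName(DataSource ds, int id) throws SQLException {
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "SELECT name FROM subscriber WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}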

External database integration is managed in Rhino using the following configurable entities:

Configurable entity What it does

Persistence instance

Defines the parameters Rhino needs to be able to connect to an external database using the database vendor’s database driver code.

Persistence resource

Links a Rhino in-memory database with one or more persistence instances.

JDBC resource

Provides a SLEE application with access to a persistence instance.

This section includes instructions and details on:

Adding the JDBC Driver

The JDBC driver for an external database needs to be added to Rhino’s runtime environment before Rhino can connect to it. You’ll need the JDBC 2.0 driver from the database vendor. (You’ll only need to do this once per Rhino installation and database vendor.)

To install the driver, you need to add it to Rhino’s runtime environment and grant permissions to the classes in the JDBC driver. Rhino needs to be restarted after making these changes for them to take effect.

Add the library

To add the library to Rhino’s runtime environment, copy the JDBC driver jar file to $RHINO_BASE/lib, then add the jar to Rhino’s classpath. The method for adding classpath entries differs between the Rhino SDK and Rhino production versions.

Rhino SDK
For the Rhino SDK, add an entry for the JDBC driver jar file into the rhino.runtime.classpath system property in $RHINO_HOME/config/jvm_args.

Below is an example that includes the PostgreSQL and Oracle JDBC drivers.

# Required classpath
-Drhino.runtime.classpath=${RHINO_BASE}/lib/postgresql.jar;${RHINO_BASE}/lib/derby.jar;${RHINO_BASE}/lib/ojdbc6.jar

Rhino Production
For the production version of Rhino, add an entry for the JDBC driver jar file into the RUNTIME_CLASSPATH environment variable in $RHINO_BASE/defaults/read-config-variables, and the $RHINO_HOME/read-config-variables file in any node directory that has already been created.

Below is an example for adding the Oracle JDBC driver:

# Set classpath
LIB=$RHINO_BASE/lib
CLASSPATH="${CLASSPATH:+${CLASSPATH}:}$LIB/RhinoBoot.jar"
RUNTIME_CLASSPATH="$LIB/postgresql.jar"
# Add Oracle JDBC driver to classpath
RUNTIME_CLASSPATH="$RUNTIME_CLASSPATH:$LIB/ojdbc6.jar"

Grant permissions to the JDBC driver

To grant permissions to the classes in the JDBC driver, edit the Rhino security policy file, adding an entry for the JDBC driver jar file.

In the Rhino SDK, the policy file is $RHINO_HOME/config/rhino.policy. In the production version, the policy files are $RHINO_BASE/defaults/config/rhino.policy, and $RHINO_HOME/config/rhino.policy in any node directory that has already been created.

Below is an example for the Oracle JDBC driver:

// Add permissions to Oracle JDBC driver
grant codeBase "file:$${rhino.dir.base}/lib/ojdbc6.jar" {
  permission java.net.SocketPermission "*", "connect,resolve";
  permission java.lang.RuntimePermission "getClassLoader";
  permission java.util.PropertyPermission "oracle.*", "read";
  permission java.util.PropertyPermission "javax.net.ssl.*", "read";
  permission java.util.PropertyPermission "user.name", "read";
  permission javax.management.MBeanPermission "oracle.jdbc.driver.OracleDiagnosabilityMBean", "registerMBean";
};

Persistence Instances

As well as an overview of persistence instances, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:

Procedure rhino-console command(s) MBean(s) → Operation
 createpersistenceinstance

Persistence Management → createPersistenceInstance

 listpersistenceinstances
 dumppersistenceinstance

Persistence Management → getPersistenceInstances Persistence Management → getPersistenceInstance

 updatepersistenceinstance

Persistence Management → updatePersistenceInstance

 removepersistenceinstance

Persistence Management → removePersistenceInstance

About Persistence Instances

A persistence instance defines how Rhino connects to an external database endpoint.

A persistence instance requires the following configuration properties:

  • A unique name that identifies the persistence instance in the SLEE.

  • The fully-qualified name of the Java class from the database driver that implements the javax.sql.DataSource interface or the javax.sql.ConnectionPoolDataSource interface. For more information on the distinction between these interfaces and their implications for application-level JDBC connection pooling in Rhino, please see Managing database connections.

  • Configuration properties for the datasource. Each datasource has a number of JavaBean properties (as defined by the JDBC specification). For each configured property, its name, expected Java type, and value must be specified.
    Variables may be used when constructing JavaBean property values. Variables are indicated using the ${...} syntax, where the value between the braces is the variable name. Rhino attempts to resolve the variable name by looking in the following places, in this order:

    • The content of the $RHINO_HOME/config/config_variables file

    • Java system properties

    • User environment variables

Note At a minimum, configuration properties that inform the JDBC driver where to connect to the database server must be specified.
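
For example, the ${MANAGEMENT_DATABASE_USER} and ${MANAGEMENT_DATABASE_PASSWORD} variables used in the examples that follow could be resolved from $RHINO_HOME/config/config_variables entries such as these (the values shown are purely illustrative):

MANAGEMENT_DATABASE_USER=rhino
MANAGEMENT_DATABASE_PASSWORD=changeit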

Creating Persistence Instances

To create a persistence instance, use the following rhino-console command or related MBean operation.

Console command: createpersistenceinstance

Command

createpersistenceinstance <name> <type> [-ds <datasource-class-name>]
[-set-property (<property-name> <property-type> <property-value>)*]
  Description
    Create a persistence instance configuration.  The type may be 'jdbc' or
    'cassandra'.  A datasource class name must be specified for 'jdbc'
    configurations.

Example

This example creates a new persistence instance with the following configuration properties:

  • Name: oracle

  • Type: jdbc

  • Datasource class name: oracle.jdbc.pool.OracleDataSource

  • JavaBean properties:

    • name: URL
      type: java.lang.String
      value: jdbc:oracle:thin:@oracle_host:1521:db

    • name: user
      type: java.lang.String
      value: ${MANAGEMENT_DATABASE_USER}

    • name: password
      type: java.lang.String
      value: ${MANAGEMENT_DATABASE_PASSWORD}

    • name: loginTimeout
      type: java.lang.Integer
      value: 30

$ ./rhino-console createpersistenceinstance oracle jdbc \
    -ds oracle.jdbc.pool.OracleDataSource \
    -set-property URL java.lang.String jdbc:oracle:thin:@oracle_host:1521:db \
    -set-property user java.lang.String '${MANAGEMENT_DATABASE_USER}' \
    -set-property password java.lang.String '${MANAGEMENT_DATABASE_PASSWORD}' \
    -set-property loginTimeout java.lang.Integer 30
Created persistence instance oracle

Configuration properties

JDBC persistence instances

A JDBC persistence instance has configuration properties defined by the JavaBean properties of the target datasource class. Refer to the datasource documentation for the available properties.

Cassandra persistence instances

A Cassandra persistence instance can be configured using any configuration property names recognised by the DataStax Java Driver, for example basic.contact-points, advanced.reconnection-policy.class, and so on. The reference configuration (used in Rhino as the base configuration) provides a comprehensive list of the configuration properties recognised by the driver.

Driver execution profiles are supported for application use via appropriately flattened configuration property names such as profiles.myprofile.basic.request.timeout.

Driver metrics can be enabled by configuring the advanced.metrics.session.enabled or advanced.metrics.node.enabled configuration properties as described in the reference configuration. Metrics are exposed over JMX from the Rhino MBean server with the object name domain com.datastax.oss.driver.

Note

If the basic.contact-points configuration property is specified in a persistence instance configuration, the driver requires that the basic.load-balancing-policy.local-datacenter configuration property is also specified to provide the name of the datacentre. If the datacentre name is not specified, the driver may still successfully connect to the contact points but will refuse to utilise any of the database servers due to datacentre name mismatch.

The Java type of all Cassandra persistence instance configuration properties for the DataStax Java Driver must be java.lang.String. This is to accommodate things like duration properties being specified with values containing unit qualifiers such as 100ms.

A Cassandra persistence instance that is used in a persistence resource referenced by a memdb configuration in rhino-config.xml, such as the ManagementDatabase or ProfileDatabase, must specify the basic.session-keyspace configuration property with the name of the Cassandra keyspace where persistent state will be stored. In any other case, specifying the keyspace in configuration is optional. An application may, for example, specify the desired keyspace at runtime using CQL queries instead.

Finally, all Cassandra persistence instances must define the Rhino-specific configuration property rhino.ddl-statement-timeout of type java.lang.String. This property defines the timeout duration that Rhino will use, if configured to use the persistence instance for internal functions such as the key/value store or session ownership store, when executing schema-altering statements such as CREATE TABLE and DROP TABLE.

An example Cassandra persistence instance configuration (as it would appear in the $RHINO_HOME/config/persistence.xml file) is illustrated below:

<persistence-instance name="cassandra" type="cassandra">
  <parameter name="rhino.ddl-statement-timeout" type="java.lang.String" value="10s"/>
  <parameter name="basic.contact-points" type="java.lang.String" value="${CASSANDRA_CONTACT_POINTS}"/>
  <parameter name="basic.load-balancing-policy.local-datacenter" type="java.lang.String" value="${CASSANDRA_DATACENTRE}"/>
  <parameter name="advanced.reconnection-policy.class" type="java.lang.String" value="ConstantReconnectionPolicy"/>
  <parameter name="advanced.reconnection-policy.base-delay" type="java.lang.String" value="5000ms"/>
  <parameter name="basic.request.consistency" type="java.lang.String" value="LOCAL_QUORUM"/>
</persistence-instance>

MBean operation: createPersistenceInstance

MBean

Rhino operation

public void createPersistenceInstance(String name, PersistenceInstanceType type, String dsClassName, ConfigProperty[] configProperties)
    throws NullPointerException, InvalidArgumentException,
      DuplicateNameException, ConfigurationException;

Displaying Persistence Instances

To list current persistence instances or display the configuration parameters of a persistence instance, use the following rhino-console commands or related MBean operations.

listpersistenceinstances

Command

listpersistenceinstances
  Description
    List all currently configured persistence instances.

Example

$ ./rhino-console listpersistenceinstances
oracle
postgres
postgres-jdbc

dumppersistenceinstance

Command

dumppersistenceinstance <name> [-expand]
  Description
    Dump the current configuration for the named persistence instance. The -expand
    option will cause any property values containing variables to be expanded with
    their resolved value (if resolvable)

Example

$ ./rhino-console dumppersistenceinstance postgres
datasource-class-name : org.postgresql.ds.PGSimpleDataSource
name                  : postgres
type                  : jdbc
config-properties     :
       > name               type                value
       > -----------------  ------------------  --------------------------------
       >        serverName    java.lang.String       ${MANAGEMENT_DATABASE_HOST}
       >        portNumber   java.lang.Integer       ${MANAGEMENT_DATABASE_PORT}
       >      databaseName    java.lang.String       ${MANAGEMENT_DATABASE_NAME}
       >              user    java.lang.String       ${MANAGEMENT_DATABASE_USER}
       >          password    java.lang.String   ${MANAGEMENT_DATABASE_PASSWORD}
       >      loginTimeout   java.lang.Integer                                30
       >     socketTimeout   java.lang.Integer                                15
       >  prepareThreshold   java.lang.Integer                                 1
       > 8 rows

MBean operations

getPersistenceInstances

MBean

Rhino operation

public String[] getPersistenceInstances()
    throws ConfigurationException;

This operation returns an array containing the names of the persistence instances.

getPersistenceInstance

MBean

Rhino operation

public CompositeData getPersistenceInstance(String name)
    throws NullPointerException, NameNotFoundException,
      ConfigurationException;

This operation returns a JMX CompositeData object that contains the current configuration parameters for the specified persistence instance. The javadoc for this operation describes the format of this data.
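
A minimal sketch of consuming this operation from a JMX client is shown below. It prints each item in the returned CompositeData without assuming its exact schema; the Persistence Management MBean ObjectName is a placeholder, and an existing MBeanServerConnection is assumed.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

public class PersistenceInstanceDumper {
    // Invokes getPersistenceInstance(String) and prints every key/value
    // pair; see the javadoc for the authoritative schema of the data.
    public static void dump(MBeanServerConnection conn,
                            ObjectName persistenceManagement,
                            String instanceName) throws Exception {
        CompositeData data = (CompositeData) conn.invoke(
                persistenceManagement,
                "getPersistenceInstance",
                new Object[] { instanceName },
                new String[] { String.class.getName() });
        for (String key : data.getCompositeType().keySet()) {
            System.out.println(key + " = " + data.get(key));
        }
    }
}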

Updating Persistence Instances

The configuration properties of an existing persistence instance can be updated at runtime. If the persistence instance is in use at the time of a reconfiguration, then new connections will be established with the new configuration properties, and any existing connections to the database will be closed when they become idle.

To update an existing persistence instance, use the following rhino-console command or related MBean operation.

Console command: updatepersistenceinstance

Command

updatepersistenceinstance <name> [-type type] [-ds <datasource-class-name>]
[-set-property (<property-name> <property-type> <property-value>)]*
[-remove-property <property-name>]*
  Description
    Update a persistence instance configuration.

Example

$ ./rhino-console updatepersistenceinstance oracle \
    -set-property URL java.lang.String jdbc:oracle:thin:@oracle_backup:1521:db \
    -set-property user java.lang.String '${MANAGEMENT_DATABASE_USER}' \
    -set-property password java.lang.String '${MANAGEMENT_DATABASE_PASSWORD}'
Updated persistence instance oracle

MBean operation: updatePersistenceInstance

MBean

Rhino operation

public void updatePersistenceInstance(String name, PersistenceInstanceType type, String dsClassName, ConfigProperty[] configProperties)
    throws NullPointerException, InvalidArgumentException,
      NameNotFoundException, ConfigurationException;

Removing Persistence Instances

To remove an existing persistence instance, use the following rhino-console command or related MBean operation.

Note A persistence instance cannot be removed while it is referenced by a persistence resource or JDBC resource.

Console command: removepersistenceinstance

Command

removepersistenceinstance <name>
  Description
    Remove an existing persistence instance configuration.

Example

$ ./rhino-console removepersistenceinstance oracle
Removed persistence instance oracle

MBean operation: removePersistenceInstance

MBean

Rhino operation

public void removePersistenceInstance(String name)
    throws NullPointerException, NameNotFoundException,
      InvalidStateException, ConfigurationException;

Persistence Resources

As well as an overview of persistence resources, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:

Procedure                                 rhino-console command(s)      MBean → Operation

Managing persistence resources            createdatabaseresource        Persistence Management → createPersistenceResource
                                          listdatabaseresources         Persistence Management → getPersistenceResources
                                          removedatabaseresource        Persistence Management → removePersistenceResource

Managing persistence instance references  addpersistenceinstanceref     Persistence Management → addPersistenceResourcePersistenceInstanceRef
                                          listpersistenceinstancerefs   Persistence Management → getPersistenceResourcePersistenceInstanceRefs
                                          removepersistenceinstanceref  Persistence Management → removePersistenceResourcePersistenceInstanceRef

About Persistence Resources

A persistence resource links a Rhino in-memory database with one or more persistence instances. State stored in the in-memory database is replicated to the external databases for persistence, so that if the Rhino SLEE cluster is shut down then management and provisioned data can be restored.

The persistence resources that Rhino requires are defined in the config/rhino-config.xml file. An in-memory database, identified by a <memdb> element in this file, that persists its state externally contains a reference to a persistence resource using a <persistence-resource-ref> element. In the default configuration, Rhino requires the persistence resources named below:

Persistence Resource  What it’s used for

management            Persistence of installed deployable units, component desired states,
                      configuration information, and so on.

profiles              Persistence of all provisioned data in profile tables.

While it is possible to add and remove persistence resources from Rhino, there is typically never a need to do so. Rhino only utilises the persistence resources named in config/rhino-config.xml, and all must exist for Rhino to function correctly.

Note Active session state is stored in an in-memory database that is not backed by a persistence resource.

Persistence resources and persistence instances

A persistence resource can be associated with zero or more persistence instances. By associating a persistence resource with a persistence instance, in-memory database state corresponding with that resource will be persisted to the external database endpoint identified by that persistence instance. Any given persistence instance may be used concurrently by multiple persistence resources. Each persistence resource uses a unique set of tables such that overlap in a single database will not occur.

Upon a successful connection, Rhino will keep each persistence instance synchronised with the state of the persistence resource. Naturally, at least one persistence instance reference must be configured for persistence to occur.

When the first node of a cluster boots, Rhino will attempt to connect to all persistence instances used by a persistence resource, and will initialise corresponding in-memory database state from a connected persistence instance that contains the most recent data. The node will fail to boot if it cannot successfully connect to at least one persistence instance for each required persistence resource.

If Rhino connects to a persistence instance that contains out-of-date data, that instance will be resynchronised with the latest data. Native database replication should not be used between the persistence instances that Rhino connects to; Rhino handles the synchronisation itself.

Warning A persistence resource should never be associated with two persistence instances that connect to the same physical database. Due to table locking this causes a deadlock when the first Rhino cluster node boots, and it can also cause corruption to database state.

Using multiple persistence instances for a persistence resource

While only a single PostgreSQL or Oracle database is required for the entire Rhino SLEE cluster, the Rhino SLEE supports communications with multiple database servers.

Multiple servers add an extra level of fault tolerance for the runtime configuration and the working state of the Rhino SLEE. Rhino’s in-memory databases are continuously synchronised to each persistence instance, so if the cluster is restarted it can restore state from whichever of the databases remains operational. If a persistence instance database fails or is no longer network-reachable, Rhino will continue to persist updates to the other instances associated with the persistence resource. Updates will be queued for unreachable instances and stored when the instances come back online.

Configuring multiple instances
Prepare the database servers

Before adding a database to a persistence resource you must prepare the database by executing $RHINO_NODE_HOME/init-management-db.sh against each server.

$ init-management-db.sh -h dbhost-1 -p dbport -u dbuser -d database postgres
$ init-management-db.sh -h dbhost-2 -p dbport -u dbuser -d database postgres

You will be prompted for a password on the command line.

Create persistence instances for the databases

Once the databases are initialised on each database server, configure new persistence instances in Rhino and attach them to the persistence resources. To create persistence instances, follow the instructions at Creating persistence instances.

Add the new persistence instances to the configured persistence resources

When persistence instances have been created for each database, add them to the persistence resources. Instructions to do so are at Adding persistence instances to a persistence resource. An example of the procedure is shown below:

$ ./rhino-console createpersistenceinstance oracle \
    oracle.jdbc.pool.OracleDataSource \
    URL java.lang.String jdbc:oracle:thin:@oracle_host:1521:db \
    user java.lang.String '${MANAGEMENT_DATABASE_USER}' \
    password java.lang.String '${MANAGEMENT_DATABASE_PASSWORD}' \
    loginTimeout java.lang.Integer 30
Created persistence instance oracle
$ ./rhino-console addpersistenceinstanceref persistence management oracle
Added persistence instance reference 'oracle' to persistence resource management
$ ./rhino-console addpersistenceinstanceref persistence profiles oracle
Added persistence instance reference 'oracle' to persistence resource profiles

It is also possible to configure the persistence instances before starting Rhino by editing the persistence.xml configuration file. This is useful for initial setup of the cluster but should not be used to change a running configuration as changes to the file cannot be reloaded without restarting. An example persistence.xml is shown below:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE rhino-persistence-config PUBLIC "-//Open Cloud Ltd.//DTD Rhino Persistence Config 2.3//EN" "rhino-persistence-config-2.3.dtd">
<rhino-persistence-config config-version="2.3" rhino-version="Rhino (version='2.5', release='0-TRUNK.0-SNAPSHOT.1-DEV13-pburrowes', build='201610251631', revision='6c862fc (dirty)')" timestamp="1477629656508">
    <!--Generated Rhino configuration file: 2016-10-28 17:40:56.507-->
    <persistence>
        <jdbc-resource jndi-name="jdbc">
            <persistence-instance-ref name="postgres-jdbc"/>
            <connection-pool connection-pool-timeout="5000" idle-check-interval="30" max-connections="15" max-idle-connections="15" max-idle-time="600" min-connections="0"/>
        </jdbc-resource>
        <persistence-instances>
            <persistence-instance datasource-class-name="org.postgresql.ds.PGSimpleDataSource" name="postgres-1">
                <parameter name="serverName" type="java.lang.String" value="${MANAGEMENT_DATABASE_HOST}"/>
                <parameter name="portNumber" type="java.lang.Integer" value="${MANAGEMENT_DATABASE_PORT}"/>
                <parameter name="databaseName" type="java.lang.String" value="${MANAGEMENT_DATABASE_NAME}"/>
                <parameter name="user" type="java.lang.String" value="${MANAGEMENT_DATABASE_USER}"/>
                <parameter name="password" type="java.lang.String" value="${MANAGEMENT_DATABASE_PASSWORD}"/>
                <parameter name="loginTimeout" type="java.lang.Integer" value="30"/>
                <parameter name="socketTimeout" type="java.lang.Integer" value="15"/>
                <parameter name="prepareThreshold" type="java.lang.Integer" value="1"/>
            </persistence-instance>
            <persistence-instance datasource-class-name="org.postgresql.ds.PGSimpleDataSource" name="postgres-2">
                <parameter name="serverName" type="java.lang.String" value="${MANAGEMENT_DATABASE_HOST2}"/>
                <parameter name="portNumber" type="java.lang.Integer" value="${MANAGEMENT_DATABASE_PORT}"/>
                <parameter name="databaseName" type="java.lang.String" value="${MANAGEMENT_DATABASE_NAME}"/>
                <parameter name="user" type="java.lang.String" value="${MANAGEMENT_DATABASE_USER}"/>
                <parameter name="password" type="java.lang.String" value="${MANAGEMENT_DATABASE_PASSWORD}"/>
                <parameter name="loginTimeout" type="java.lang.Integer" value="30"/>
                <parameter name="socketTimeout" type="java.lang.Integer" value="15"/>
                <parameter name="prepareThreshold" type="java.lang.Integer" value="1"/>
            </persistence-instance>
        </persistence-instances>
        <persistence-resource name="management">
            <persistence-instance-ref name="postgres-1"/>
            <persistence-instance-ref name="postgres-2"/>
        </persistence-resource>
        <persistence-resource name="profiles">
            <persistence-instance-ref name="postgres-1"/>
            <persistence-instance-ref name="postgres-2"/>
        </persistence-resource>
    </persistence>
</rhino-persistence-config>

Creating Persistence Resources

To create a persistence resource, use the following rhino-console command or related MBean operation.

Console command: createdatabaseresource

Command

createdatabaseresource <resource-type> <name>
  Description
    Create a database resource. The resource-type parameter must be either
    'persistence' or 'jdbc'. Note that when creating JDBC resources the supplied
    name will automatically be prefixed with 'jdbc/' when determining the internal
    JNDI name for the resource, so this prefix should not normally be included when
    specifying the resource name.

Example

$ ./rhino-console createdatabaseresource persistence myresource
Created persistence resource myresource

MBean operation: createPersistenceResource

MBean

Rhino operation

public void createPersistenceResource(String name)
    throws NullPointerException, InvalidArgumentException,
      DuplicateNameException, ConfigurationException;

Displaying Persistence Resources

To list the current persistence resources use the following rhino-console command or related MBean operation.

Console command: listdatabaseresources

Command

listdatabaseresources <resource-type>
  Description
    List all currently configured database resources. The resource-type parameter
    must be either 'persistence' or 'jdbc'.

Example

$ ./rhino-console listdatabaseresources persistence
management
profiles

MBean operation: getPersistenceResources

MBean

Rhino operation

public String[] getPersistenceResources()
    throws ConfigurationException;

This operation returns an array containing the names of the persistence resources that have been created.

Removing Persistence Resources

To remove an existing persistence resource, use the following rhino-console command or related MBean operation.

Console command: removedatabaseresource

Command

removedatabaseresource <resource-type> <name>
  Description
    Remove an existing database resource. The resource-type parameter must be either
    'persistence' or 'jdbc'.

Example

$ ./rhino-console removedatabaseresource persistence myresource
Removed persistence resource myresource

MBean operation: removePersistenceResource

MBean

Rhino operation

public void removePersistenceResource(String name)
    throws NullPointerException, NameNotFoundException,
      ConfigurationException;

Adding Persistence Instances to a Persistence Resource

To add a persistence instance to a persistence resource, use the following rhino-console command or related MBean operation.

Console command: addpersistenceinstanceref

Command

addpersistenceinstanceref <resource-type> <resource-name>
<persistence-instance-name>
  Description
    Add a persistence instance reference to a database resource. The resource-type
    parameter must be either 'persistence' or 'jdbc'.

Example

$ ./rhino-console addpersistenceinstanceref persistence management oracle
Added persistence instance reference 'oracle' to persistence resource management

MBean operation: addPersistenceResourcePersistenceInstanceRef

MBean

Rhino operation

public void addPersistenceResourcePersistenceInstanceRef(String persistenceResourceName, String persistenceInstanceName)
    throws NullPointerException, NameNotFoundException,
      DuplicateNameException, ConfigurationException;

Displaying a Persistence Resource’s Persistence Instances

To display the persistence instances that have been added to a persistence resource, use the following rhino-console command or related MBean operation.

Console command: listpersistenceinstancerefs

Command

listpersistenceinstancerefs <resource-type> <resource-name>
  Description
    List the persistence instance references for a database resource. The
    resource-type parameter must be either 'persistence' or 'jdbc'.

Example

$ ./rhino-console listpersistenceinstancerefs persistence management
postgres

MBean operation: getPersistenceResourcePersistenceInstanceRefs

MBean

Rhino operation

public String[] getPersistenceResourcePersistenceInstanceRefs(String persistenceResourceName)
    throws NullPointerException, NameNotFoundException,
      ConfigurationException;

This operation returns an array containing the names of the persistence instances used by the persistence resource.

Removing Persistence Instances from a Persistence Resource

To remove a persistence instance from a persistence resource, use the following rhino-console command or related MBean operation.

Console command: removepersistenceinstanceref

Command

removepersistenceinstanceref <resource-type> <resource-name>
<persistence-instance-name>
  Description
    Remove a persistence instance reference from a database resource. The
    resource-type parameter must be either 'persistence' or 'jdbc'.

Example

$ ./rhino-console removepersistenceinstanceref persistence management oracle
Removed persistence instance reference 'oracle' from persistence resource management

MBean operation: removePersistenceResourcePersistenceInstanceRef

MBean

Rhino operation

public void removePersistenceResourcePersistenceInstanceRef(String persistenceResourceName, String persistenceInstanceName)
    throws NullPointerException, NameNotFoundException,
        ConfigurationException;

JDBC Resources

JDBC resources are used by application components such as service building blocks (SBBs) to execute SQL statements against an external database. A system administrator can configure new external database resources for applications to use.

As well as an overview on how SBBs can use JDBC to execute SQL and an overview on managing physical database connections, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:

Procedure                                 rhino-console command(s)                 MBean → Operation

Managing JDBC resources                   createdatabaseresource                   Persistence Management → createJdbcResource
                                          listdatabaseresources                    Persistence Management → getJdbcResources
                                          removedatabaseresource                   Persistence Management → removeJdbcResource

Managing persistence instance references  addpersistenceinstanceref                Persistence Management → setJdbcResourcePersistenceInstanceRef
                                          listpersistenceinstancerefs              Persistence Management → getJdbcResourcePersistenceInstanceRef
                                          removepersistenceinstanceref             Persistence Management → setJdbcResourcePersistenceInstanceRef
                                                                                   (passing null)

Managing database connections             createjdbcresourceconnectionpoolconfig   Persistence Management → createJdbcResourceConnectionPoolConfig
                                          dumpjdbcresourceconnectionpoolconfig     Persistence Management → getJdbcResourceConnectionPoolConfigMBean,
                                                                                   then JDBC Resource Connection Pool Management → get…
                                          setjdbcresourceconnectionpoolconfig      JDBC Resource Connection Pool Management → set…
                                          removejdbcresourceconnectionpoolconfig   Persistence Management → removeJdbcResourceConnectionPoolConfig

How SBBs use JDBC to execute SQL

An SBB can use JDBC to execute SQL statements. It must declare this intent in an extension deployment descriptor: the oc-sbb-jar.xml file (contained in the SBB jar file in the META-INF directory). The <resource-ref> element (which must be inside the <sbb> element of oc-sbb-jar.xml) defines the JDBC datasource it will use.

Sample <resource-ref>

Below is a sample <resource-ref> element defining a JDBC datasource:

<resource-ref>
    <!-- Name under the SBB's java:comp/env tree where this datasource will be bound -->
    <res-ref-name>foo/datasource</res-ref-name>
    <!-- Resource type - must be javax.sql.DataSource -->
    <res-type>javax.sql.DataSource</res-type>
    <!-- Only Container auth supported -->
    <res-auth>Container</res-auth>
    <!-- Only Shareable scope supported -->
    <res-sharing-scope>Shareable</res-sharing-scope>
    <!-- JNDI name of target JDBC resource, relative to Rhino's java:resource tree. -->
    <res-jndi-name>jdbc/myresource</res-jndi-name>
</resource-ref>
Note In the above example, the <res-jndi-name> element has the value jdbc/myresource, which maps to the myresource JDBC resource created in the example under Creating JDBC Resources.

How an SBB obtains a JDBC connection

An SBB can get a reference to an object that implements the datasource interface using a JNDI lookup. Using that object, the SBB can then obtain a connection to the database. The SBB uses that connection to execute SQL queries and updates.

For example:

import javax.naming.*;
import javax.slee.*;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;
...

public abstract class SimpleSbb implements Sbb {
    public void setSbbContext(SbbContext context) {
        try {
            Context myEnv = (Context)new InitialContext().lookup("java:comp/env");
            ds = (DataSource)myEnv.lookup("foo/datasource");
        }
        catch (NamingException e) {
            // JNDI lookup failed
        }
    }

    public void onSimpleEvent(SimpleEvent event, ActivityContextInterface context) {
        Connection conn;
        try {
            conn = ds.getConnection();
        }
        catch (SQLException e) {
            // could not get database connection
        }
        ...
    }
    ...

    private DataSource ds;
    ...
}

SQL programming

When an SBB executes in a transaction and invokes SQL statements, the SLEE controls transaction management of the JDBC connection. This lets the SLEE perform last-resource-commit optimisation.

JDBC methods that affect transaction management have no effect, or have undefined semantics, when invoked from an application component running under a SLEE transaction. The methods (including any overloaded forms) on the java.sql.Connection interface that affect transaction management are listed below. These methods should not be invoked by SLEE components; a sketch of transaction-safe usage follows the list:

  • close

  • commit

  • rollback

  • setAutoCommit

  • setTransactionIsolation

  • setSavepoint

  • releaseSavepoint
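
As an illustration, the sketch below extends the earlier SimpleSbb example with an event handler that executes an update while leaving transaction management to the SLEE. The event class, accessor methods, and table and column names are hypothetical; the point is that the handler never calls commit, rollback, or close on the connection.

public void onAccountEvent(AccountEvent event, ActivityContextInterface aci) {
    try {
        // Obtain a connection from the datasource looked up in setSbbContext
        java.sql.Connection conn = ds.getConnection();
        try (java.sql.PreparedStatement ps = conn.prepareStatement(
                "UPDATE accounts SET balance = balance - ? WHERE id = ?")) {
            ps.setLong(1, event.getAmount());
            ps.setString(2, event.getAccountId());
            ps.executeUpdate();
        }
        // No commit(), rollback(), or close() on the connection: the SLEE
        // commits or rolls back this work with the enclosing transaction.
    }
    catch (java.sql.SQLException e) {
        // handle or log the failure
    }
}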

Creating JDBC Resources

JDBC resources are identified by a unique name that identifies where in the JNDI tree the JDBC resource will be bound. This name is relative to the java:resource/jdbc namespace, for example the JNDI name oracle/db1 will result in the JDBC resource being bound to the name java:resource/jdbc/oracle/db1.

Note The JNDI location is not accessible to SBBs directly. Each SBB links to the JNDI name in the SBB deployment descriptor. (For more on SBB deployment descriptor entries please see how SBBs use JDBC to execute SQL.)
Note All JDBC resources required by the SBBs in a service must exist before that service can be activated. A JDBC resource must also have a persistence instance associated with it in order for it to be able to provide database connections to SBBs that request them.

To create a JDBC resource, use the following rhino-console command or related MBean operation.

Console command: createdatabaseresource

Command

createdatabaseresource <resource-type> <name>
  Description
    Create a database resource. The resource-type parameter must be either
    'persistence' or 'jdbc'. Note that when creating JDBC resources the supplied
    name will automatically be prefixed with 'jdbc/' when determining the internal
    JNDI name for the resource, so this prefix should not normally be included when
    specifying the resource name.

Example

$ ./rhino-console createdatabaseresource jdbc myresource
Created JDBC resource myresource

MBean operation: createJdbcResource

MBean

Rhino operation

public void createJdbcResource(String jndiName)
    throws NullPointerException, InvalidArgumentException,
      DuplicateNameException, ConfigurationException;

Displaying JDBC Resources

To list current JDBC resources, use the following rhino-console command or related MBean operation.

Console command: listdatabaseresources

Command

listdatabaseresources <resource-type>
  Description
    List all currently configured database resources. The resource-type parameter
    must be either 'persistence' or 'jdbc'.

Example

$ ./rhino-console listdatabaseresources jdbc
jdbc
myresource

MBean operation: getJdbcResources

MBean

Rhino operation

public String[] getJdbcResources()
    throws ConfigurationException;

This operation returns an array containing the names of the JDBC resources that have been created.

Removing JDBC Resources

To remove an existing JDBC resource, use the following rhino-console command or related MBean operation.

Note A JDBC resource cannot be removed while it is referenced by an SBB in an activated service.

Console command: removedatabaseresource

Command

removedatabaseresource <resource-type> <name>
  Description
    Remove an existing database resource. The resource-type parameter must be either
    'persistence' or 'jdbc'.

Example

$ ./rhino-console removedatabaseresource jdbc myresource
Removed JDBC resource myresource

MBean operation: removeJdbcResource

MBean

Rhino operation

public void removeJdbcResource(String jndiName)
    throws NullPointerException, NameNotFoundException,
      InvalidStateException, ConfigurationException;

Adding A Persistence Instance to a JDBC Resource

A JDBC resource can be associated with at most one persistence instance.

Warning Rhino SLEE treats different JDBC resources as different resource managers, even if they use the same persistence instance. Therefore, if a single transaction uses two JDBC resources that share the same persistence instance, Rhino still treats them as multiple resource managers.

To add a persistence instance to a JDBC resource, use the following rhino-console command or related MBean operation.

Console command: addpersistenceinstanceref

Command

addpersistenceinstanceref <resource-type> <resource-name>
<persistence-instance-name>
  Description
    Add a persistence instance reference to a database resource. The resource-type
    parameter must be either 'persistence' or 'jdbc'.

Example

$ ./rhino-console addpersistenceinstanceref jdbc myresource oracle
Added persistence instance reference 'oracle' to JDBC resource myresource

MBean operation: setJdbcResourcePersistenceInstanceRef

MBean

Rhino operation

public void setJdbcResourcePersistenceInstanceRef(String jdbcResourceJndiName, String persistenceInstanceName)
    throws NullPointerException, NameNotFoundException,
      ConfigurationException;

Displaying a JDBC Resource’s Persistence Instance

To display the persistence instance that has been added to a JDBC resource, use the following rhino-console command or related MBean operation.

Console command: listpersistenceinstancerefs

Command

listpersistenceinstancerefs <resource-type> <resource-name>
  Description
    List the persistence instance references for a database resource. The
    resource-type parameter must be either 'persistence' or 'jdbc'.

Example

$ ./rhino-console listpersistenceinstancerefs jdbc myresource
oracle

MBean operation: getJdbcResourcePersistenceInstanceRef

MBean

Rhino operation

public String getJdbcResourcePersistenceInstanceRef(String jndiName)
    throws NullPointerException, NameNotFoundException,
      ConfigurationException;

This operation returns the name of any persistence instance that has been associated with the JDBC resource.

Removing the Persistence Instance from a JDBC Resource

To remove the persistence instance from a JDBC resource, use the following rhino-console command or related MBean operation.

Console command: removepersistenceinstanceref

Command

removepersistenceinstanceref <resource-type> <resource-name>
<persistence-instance-name>
  Description
    Remove a persistence instance reference from a database resource. The
    resource-type parameter must be either 'persistence' or 'jdbc'.

Example

$ ./rhino-console removepersistenceinstanceref jdbc myresource oracle
Removed persistence instance reference 'oracle' from JDBC resource myresource

MBean operation: setJdbcResourcePersistenceInstanceRef

MBean

Rhino operation

public void setJdbcResourcePersistenceInstanceRef(String jdbcResourceJndiName, String persistenceInstanceName)
    throws NullPointerException, NameNotFoundException,
      ConfigurationException;

To remove an existing persistence instance reference, pass in null for the persistenceInstanceName parameter.

Managing Database Connections

JDBC 2.0 with standard extensions provides two mechanisms for connecting to the database:

  • The javax.sql.DataSource interface provides unmanaged physical connections.

  • The javax.sql.ConnectionPoolDataSource interface provides managed physical connections, suitable for connection pooling.

Using a connection pool with a JDBC resource

By default, a JDBC resource does not use connection pooling. A connection pool may, however, be attached to a JDBC resource to improve efficiency. When a JDBC resource uses connection pooling, the way Rhino manages connections depends on what interface the datasource class of the persistence instance used by the JDBC resource is an implementation of, as follows:

Interface                           How Rhino manages connections

javax.sql.DataSource                Uses an internal implementation of ConnectionPoolDataSource
                                    to create managed connections.

javax.sql.ConnectionPoolDataSource  Uses managed connections from the ConnectionPoolDataSource
                                    provided by the persistence instance’s datasource class.
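
For example, to have Rhino use the driver’s own managed connections, the persistence instance can be created with a datasource class that implements ConnectionPoolDataSource. The command below is illustrative only: it uses the PostgreSQL driver’s org.postgresql.ds.PGConnectionPoolDataSource and assumes the same connection properties as the default postgres persistence instance.

$ ./rhino-console createpersistenceinstance postgres-pooled \
    org.postgresql.ds.PGConnectionPoolDataSource \
    serverName java.lang.String '${MANAGEMENT_DATABASE_HOST}' \
    portNumber java.lang.Integer '${MANAGEMENT_DATABASE_PORT}' \
    databaseName java.lang.String '${MANAGEMENT_DATABASE_NAME}' \
    user java.lang.String '${MANAGEMENT_DATABASE_USER}' \
    password java.lang.String '${MANAGEMENT_DATABASE_PASSWORD}'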

Connection pool configurable parameters

A connection pool has the following configurable parameters:

Parameter                What it specifies

max-connections          Maximum number of active connections a Rhino process can use at
                         any one time.

max-idle-connections     Maximum number of inactive connections that should be maintained
                         in the connection pool. This value must be less than or equal to
                         max-connections.

min-connections          Minimum number of connections that should be maintained in the
                         connection pool.

max-idle-time            Time in seconds after which an inactive connection may become
                         eligible for discard. An idle connection will not be discarded if
                         doing so would reduce the number of idle connections below the
                         min-connections setting. If this parameter has the value 0, idle
                         connections will never be discarded.

idle-check-interval      Time in seconds between idle connection discard checks.

connection-pool-timeout  Maximum time in milliseconds an SBB will wait for a free
                         connection before a timeout error occurs.

Adding a Connection Pool Configuration to a JDBC Resource

To add a connection pool configuration to a JDBC resource, use the following rhino-console command or related MBean operation.

Console command: createjdbcresourceconnectionpoolconfig

Command

createjdbcresourceconnectionpoolconfig <name>
  Description
    Create a connection pool configuration for a JDBC resource.

Example

$ ./rhino-console createjdbcresourceconnectionpoolconfig myresource
Connection pool configuration created

MBean operation: createJdbcResourceConnectionPoolConfig

MBean

Rhino operation

public ObjectName createJdbcResourceConnectionPoolConfig(String jndiName)
    throws NullPointerException, NameNotFoundException,
      InvalidStateException, ConfigurationException;

This method returns the JMX ObjectName of a JDBC Resource Connection Pool Management MBean, which can be used to manage the connection pool configuration parameters.

Displaying a JDBC Resource’s Connection Pool Configuration

To display the connection pool configuration for a JDBC resource, use the following rhino-console command or related MBean operation.

Console command: dumpjdbcresourceconnectionpoolconfig

Command

dumpjdbcresourceconnectionpoolconfig <name>
  Description
    Dump the connection pool configuration of a JDBC resource.

Example

$ ./rhino-console dumpjdbcresourceconnectionpoolconfig myresource
connection-pool-timeout : 5000
idle-check-interval     : 30
max-connections         : 2147483647
max-idle-connections    : 2147483647
max-idle-time           : 0
min-connections         : 0

MBean operations:

getJdbcResourceConnectionPoolConfigMBean

MBean

Rhino operation

public ObjectName getJdbcResourceConnectionPoolConfigMBean(String jndiName)
    throws NullPointerException, NameNotFoundException,
      InvalidStateException, ConfigurationException;

This method returns the JMX ObjectName of a JDBC Resource Connection Pool Management MBean, which can be used to manage the connection pool configuration parameters.

JDBC Resource Connection Pool Management

MBean

Rhino operations

public int getMaxConnections()
    throws ConfigurationException;

public int getMinConnections()
    throws ConfigurationException;

public int getMaxIdleConnections()
    throws ConfigurationException;

public int getMaxIdleTime()
    throws ConfigurationException;

public int getIdleCheckInterval()
    throws ConfigurationException;

public long getConnectionPoolTimeout()
    throws ConfigurationException;

These methods return the current value of the corresponding connection pool configuration parameter.

public CompositeData getConfiguration()
    throws ConfigurationException;

This operation returns a JMX CompositeData object that contains the current configuration parameters for the connection pool configuration. The javadoc for this operation describes the format of this data.

Updating a JDBC Resource’s Connection Pool Configuration

To update the connection pool configuration for a JDBC resource, use the following rhino-console command or related MBean operation.

Console command: setjdbcresourceconnectionpoolconfig

Command

setjdbcresourceconnectionpoolconfig <name> [-max-connections max-size]
[-min-connections size] [-max-idle-connections size] [-max-idle-time time]
[-idle-check-interval time] [-connection-pool-timeout time]
  Description
    Update the connection pool configuration of a JDBC resource. Size parameters
    must be integer values. The max-idle-time and idle-check-interval parameters are
    measured in seconds and must be integer values. The connection-pool-timeout
    parameter is measured in milliseconds and must be a long value.

Example

In the example below, the maximum idle connections is set to 20, the maximum number of connections is set to 30, and the maximum time an idle connection remains in the connection pool is set to 60s:

$ ./rhino-console setjdbcresourceconnectionpoolconfig myresource \
        -max-idle-connections 20 -max-connections 30 -max-idle-time 60
Connection pool configuration updated for JDBC resource myresource
$ ./rhino-console dumpjdbcresourceconnectionpoolconfig myresource
connection-pool-timeout : 5000
idle-check-interval     : 30
max-connections         : 30
max-idle-connections    : 20
max-idle-time           : 60
min-connections         : 0

MBean operations:

getJdbcResourceConnectionPoolConfigMBean

MBean

Rhino operation

public ObjectName getJdbcResourceConnectionPoolConfigMBean(String jndiName)
    throws NullPointerException, NameNotFoundException,
      InvalidStateException, ConfigurationException;

This method returns the JMX ObjectName of a JDBC Resource Connection Pool Management MBean, which can be used to manage the connection pool configuration parameters.

JDBC Resource Connection Pool Management

MBean

Rhino operations

public void setMaxConnections(int maxConnections)
    throws ConfigurationException;

public void setMinConnections(int minConnections)
    throws ConfigurationException;

public void setMaxIdleConnections(int maxIdleConnections)
    throws ConfigurationException;

public void setMaxIdleTime(int maxIdleTime)
    throws ConfigurationException;

public void setIdleCheckInterval(int idleCheckInterval)
    throws ConfigurationException;

public void setConnectionPoolTimeout(long timeout)
    throws ConfigurationException;

These methods set a new value for the corresponding connection pool configuration parameter.
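
A minimal JMX sketch of this update sequence is shown below, assuming an established MBeanServerConnection (mbsc) and the ObjectName of the Persistence Management MBean (persistence), as in the earlier getPersistenceInstance example; the resource name myresource follows the console examples above, and exception handling is omitted.

// Obtain the connection pool management MBean for the JDBC resource
ObjectName poolConfig = (ObjectName) mbsc.invoke(
    persistence, "getJdbcResourceConnectionPoolConfigMBean",
    new Object[] { "myresource" },
    new String[] { String.class.getName() });

// Equivalent to: setjdbcresourceconnectionpoolconfig myresource \
//                    -max-connections 30 -max-idle-time 60
mbsc.invoke(poolConfig, "setMaxConnections",
    new Object[] { 30 }, new String[] { int.class.getName() });
mbsc.invoke(poolConfig, "setMaxIdleTime",
    new Object[] { 60 }, new String[] { int.class.getName() });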

Removing the Connection Pool Configuration from a JDBC Resource

To remove the connection pool configuration from a JDBC resource, use the following rhino-console command or related MBean operation.

Console command: removejdbcresourceconnectionpoolconfig

Command

removejdbcresourceconnectionpoolconfig <name>
  Description
    Remove the connection pool configuration from a JDBC resource.

Example

$ ./rhino-console removejdbcresourceconnectionpoolconfig myresource
Connection pool configuration removed

MBean operation: removeJdbcResourceConnectionPoolConfig

MBean

Rhino operation

public void removeJdbcResourceConnectionPoolConfig(String jndiName)
    throws NullPointerException, NameNotFoundException,
      InvalidStateException, ConfigurationException;

Persistence Configuration File Format

In most circumstances it is never necessary to manually edit the external persistence configuration file. The most likely reason for doing so is to use another vendor’s database, for example Oracle: the default Rhino install is configured to connect to a PostgreSQL database, and Rhino will not start without an external database that it can connect to.

This section describes the format of the persistence configuration file.

Persistence configuration file location

The persistence configuration file can be found in ${RHINO_HOME}/config/persistence.xml. However, this file only exists if the Rhino node has been started at least once. If the node has yet to be started, or if the persistence.xml file is deleted, then the persistence configuration is obtained from ${RHINO_HOME}/config/defaults.xml.

Warning Every node in a Rhino cluster has the same persistence configuration. A Rhino node that boots and joins an existing cluster will obtain its persistence configuration from the other nodes in the cluster. The cluster configuration will be saved into the node’s ${RHINO_HOME}/config/persistence.xml file, potentially overwriting any local changes that may have been made to it.

XML Format of a Persistence Configuration

The persistence configuration is contained within the <persistence> element in the configuration file. The <persistence> element may contain any number of the following elements:

Element                 Description

<persistence-instance>  Contains the configuration information for a single persistence
                        instance.

<persistence-resource>  Contains the configuration information for a single persistence
                        resource.

<jdbc-resource>         Contains the configuration information for a single JDBC resource.

Persistence instance configuration

A persistence instance configuration is contained in a <persistence-instance> element. This element must have the following attributes:

Attribute              Description

name                   The name of the persistence instance. This name must be unique
                       among all persistence instance configurations.

datasource-class-name  The fully-qualified name of the Java class from the database driver
                       that implements the javax.sql.DataSource interface or the
                       javax.sql.ConnectionPoolDataSource interface.

A <persistence-instance> element may also include zero or more <parameter> elements. Each <parameter> element identifies the name, Java type, and value of a configuration property of the datasource class using the following element attributes:

Attribute  Description

name       The name of a JavaBean property defined by the datasource class.

type       The fully-qualified Java class name of the JavaBean property’s type.

value      The value that should be assigned to the configuration property.

Variables may be used within JavaBean property values. Variables are indicated using the ${...} syntax, where the value between the braces is the variable name. Rhino attempts to resolve the variable name by looking in the following places, in this order (a resolution example follows the list):

  • The content of the $RHINO_HOME/config/config_variables file

  • Java system properties

  • User environment variables
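
For example, given a hypothetical $RHINO_HOME/config/config_variables fragment (values are illustrative only, shown in simple name=value form):

MANAGEMENT_DATABASE_HOST=db.example.com
MANAGEMENT_DATABASE_PORT=5432
MANAGEMENT_DATABASE_NAME=rhino
MANAGEMENT_DATABASE_USER=rhino_user
MANAGEMENT_DATABASE_PASSWORD=secret

a parameter value of ${MANAGEMENT_DATABASE_HOST} resolves to db.example.com. If a variable is not defined in config_variables, Java system properties and then user environment variables are consulted, following the order above.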

Example

Below is an example of the default configuration that connects to a PostgreSQL database:

<persistence-instance datasource-class-name="org.postgresql.ds.PGSimpleDataSource" name="postgres">
    <parameter name="serverName" type="java.lang.String" value="${MANAGEMENT_DATABASE_HOST}"/>
    <parameter name="portNumber" type="java.lang.Integer" value="${MANAGEMENT_DATABASE_PORT}"/>
    <parameter name="databaseName" type="java.lang.String" value="${MANAGEMENT_DATABASE_NAME}"/>
    <parameter name="user" type="java.lang.String" value="${MANAGEMENT_DATABASE_USER}"/>
    <parameter name="password" type="java.lang.String" value="${MANAGEMENT_DATABASE_PASSWORD}"/>
    <parameter name="loginTimeout" type="java.lang.Integer" value="30"/>
    <parameter name="socketTimeout" type="java.lang.Integer" value="15"/>
    <parameter name="prepareThreshold" type="java.lang.Integer" value="1"/>
</persistence-instance>

Persistence resource configuration

A persistence resource configuration is contained in a <persistence-resource> element. This element must have a name attribute, which specifies the name of the persistence resource. The name must be unique between all persistence resource configurations.

A <persistence-resource> element may also include zero or more <persistence-instance-ref> elements. Each <persistence-instance-ref> element must have a name attribute, which must be the name of a persistence instance defined elsewhere in the configuration file. The persistence resource will store relevant in-memory database state into each referenced persistence instance.

Example

Below is an example of a persistence resource that stores state into two PostgreSQL databases:

<persistence-resource name="management">
    <persistence-instance-ref name="postgres-db1"/>
    <persistence-instance-ref name="postgres-db2"/>
</persistence-resource>

JDBC resource configuration

A JDBC resource configuration is contained in a <jdbc-resource> element. This element must have a jndi-name attribute, which specifies the JNDI name relative to the java:resource/jdbc namespace where the resource will be bound in the JNDI tree. The JNDI name must be unique between all JDBC resource configurations.

A <jdbc-resource> element may also optionally include a <persistence-instance-ref> element and a <connection-pool> element.

The <persistence-instance-ref> element must have a name attribute, which must be the name of a persistence instance defined elsewhere in the configuration file. The JDBC resource will use the database identified by the referenced persistence instance to execute SQL queries.

The presence of a <connection-pool> element indicates to Rhino that a connection pool should be used to manage the physical connections used by the JDBC resource. The element may define attributes with the names of the connection pool configurable parameters. If a given parameter is absent in the element’s attribute list then the default value for that parameter is assumed.

Example

Below is an example of a JDBC resource:

<jdbc-resource jndi-name="jdbc">
    <persistence-instance-ref name="postgres-jdbc"/>
    <connection-pool
        connection-pool-timeout="5000"
        idle-check-interval="30"
        max-connections="15"
        max-idle-connections="15"
        max-idle-time="600"
        min-connections="0"/>
</jdbc-resource>

Cluster Membership

Rhino maintains a single system image by preventing inconsistent nodes from forming a cluster. It determines cluster membership based on the set of cluster nodes reachable within a time-out period.

This page explains the strategies available for managing and configuring cluster membership.

Tip Cluster membership is not a concern when using the Rhino SDK (see the Getting Started Guide), where the cluster is always just the single SDK node.

The sections below describe how nodes "go primary", the available component selectors, and the supported cluster communication modes.

How nodes "go primary"

Note
What is primary component selection?

A cluster node runs a primary component selection algorithm to determine whether the component it belongs to is primary or non-primary — without a priori global knowledge of the cluster.

The primary component is the authoritative set of nodes in the cluster. A node can only perform work when in the primary component. When a node enters the primary component, we say it "goes primary". Likewise, when a node leaves the primary component, we say it "goes non-primary".

The component selector manages which nodes are in the primary component. Rhino provides a choice of two component selectors: DLV or 2-node. The component selector needs to maintain a consistent view of the primary component in several scenarios, to maintain the single system image provided by Rhino.

Segmentation and split-brain

Nodes can become isolated from each other if some networking failure causes a network segmentation. This carries the risk of a "split brain" scenario, where nodes on both sides of the segment consider themselves primary. Rhino, which is managed as a single system image, does not allow split brain scenarios. The DLV and 2-node selectors use different strategies for avoiding split-brain scenarios.

Starting and stopping nodes

Nodes may stop and start in the following ways:

  • node failure — Individual cluster nodes may fail, for example due to a hardware failure. From the point of view of the remaining nodes, node failures are indistinguishable from network segmentation. Behaviour of the surviving members is determined by the component selector.

  • automatic shutdown with restart — There are cases described in this guide where the component selector "shuts down" a node, for example to prevent split-brain scenarios. It does this by shifting the node from primary to non-primary. Whenever a node goes from primary to non-primary, it self-terminates. The node will still restart if the “-k” flag was passed to the start-rhino.sh script. The node will become primary again as soon as the component selector determines it’s safe to do so.

  • node start or restart — When a booting node enters a cluster which is primary, the node will also go primary, and will receive state from existing nodes.

  • remerge — A remerge happens after a network segmentation, when connectivity between network segments is restored. When a network segment of non-primary nodes merges with a segment of primary nodes, the non-primary nodes will also go primary, and receive state from the other nodes. In the unlikely case that two primary segments try to merge, Rhino will shut down the nodes in one of the segments, to maintain the single system image. This should only happen if two sides of a network segment are manually activated using the -p flag when using DLV (an administrative error), or after a network failure when using the 2-node selector.

Specifying the component selector

The main configuration choice related to cluster membership is the choice of component selector. If no component selector is specified, Rhino uses DLV as the default.

To specify the component selector, set the system property com.opencloud.rhino.component_selection_strategy to 2node or dlv on each node. For example, add this line near the end of the read-config-variables file under the node directory to use the 2-node strategy:

OPTIONS="$OPTIONS -Dcom.opencloud.rhino.component_selection_strategy=2node"
Warning This property must be set consistently on every node in the cluster. Rhino will shut down a node trying to enter a cluster using a different component selector.

The DLV component selector

Note
What is DLV?

The DLV component selector is inspired by the dynamic-linear voting (DLV) algorithm described by Jajodia and Mutchler in their research paper Dynamic voting algorithms for maintaining the consistency of a replicated database.

DLV is the default primary component strategy. It is suitable for most deployments, and recommended when using three or more Rhino nodes, or two nodes plus a quorum node.

The DLV component selector uses a voting algorithm where the membership of previous primary components plays a role in the selection of the next primary component. Each node persists its knowledge of the last known primary component. When the cluster membership changes, each node exchanges a voting message that contains its own knowledge of previous primary components. Once voting completes, each node, independently, uses these votes to make the same decision on whether to be primary or non-primary. A component can be primary if there are enough members present from the last known configuration to form a quorum.

The DLV component selector guarantees that in the case of a network segmentation (where sets of nodes are isolated from each other), at most one of the segments will remain primary, avoiding a 'split-brain' scenario where two segments consider themselves primary. This is achieved by considering any component smaller than cluster_size/2 to be non-primary. In the case of an exactly even split (for example, a 4-node cluster separating into two 2-node segments), the component containing the node with the lowest node ID survives.
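
The quorum rule can be sketched as follows. This is a conceptual illustration of the rule described above, not Rhino's implementation: lastPrimary is a node's persisted record of the last known primary component, and reachable is the set of currently visible nodes, both given as node IDs.

import java.util.Set;

class PrimaryComponentSketch {
    static boolean canBePrimary(Set<Integer> lastPrimary, Set<Integer> reachable) {
        long present = lastPrimary.stream().filter(reachable::contains).count();
        if (present * 2 > lastPrimary.size()) {
            return true;   // a strict majority of the last primary component is present
        }
        if (present * 2 == lastPrimary.size()) {
            // Exactly even split: the component containing the lowest node ID
            // (the distinguished node) remains primary.
            int lowest = lastPrimary.stream()
                .min(Integer::compare)
                .orElseThrow(IllegalStateException::new);
            return reachable.contains(lowest);
        }
        return false;      // fewer than half present: go non-primary
    }
}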

Manually activating DLV

Upon first starting a cluster using DLV, the primary component must be activated. You do this by passing the -p flag to start-rhino.sh when booting the first node. DLV persists the primary/non-primary state to disk, so specifying the -p flag is not required after the first time.

Using quorum nodes to distinguish node failure from network segmentation

Note
What is a quorum node?

A quorum node is a lightweight node added to distinguish between network segmentation and node failure (as described above). It does not process events or run SLEE services (nodes that are not quorum nodes are sometimes called "event-router nodes"). Quorum nodes have much lighter hardware requirements than event-router nodes.

To start a quorum node, you pass the -q flag to the start-rhino.sh script. You should always set a quorum node’s node ID to be higher than the other nodes, so that the loss of a quorum node never causes an event-router node to shut down. (The node with the lowest ID is the distinguished node.)

A quorum node is useful to help distinguish between node failure and network segmentation when using just two event-router nodes. Given a cluster of nodes {1,2}, there are two node-failure cases:

  • If node {2} fails, the remaining node {1} will stay primary because it is the distinguished node (having the lowest node ID).

  • If node {1} fails, the remaining node {2} will go non-primary and shut down. DLV can’t distinguish this from network segmentation, so it shuts down node {2} to prevent the possibility of a split-brain scenario. This usually isn’t desirable, and there are two approaches for solving this case: use a 2-node component selector or add a single quorum node to the cluster.

The 2-node component selector

The 2-node selector is designed exclusively for configurations with exactly two Rhino nodes, with a redundant network connection between them. It differs from DLV in how it handles node failures. When one node fails, the other node stays primary, regardless of which of the nodes failed. Conceptually, the responsibility of avoiding a split-brain scenario shifts to the redundant network connection. For this reason, this strategy should only be used when a redundant connection is available. If network segmentation happens, and two primary components remerge, one side of the segment will be shut down.

Warning Quorum nodes cannot be used with a 2-node selector. If you choose a 2-node selector, Rhino will prevent quorum nodes from booting.

Activating 2-node selectors automatically

2-node selectors automatically go primary when booting. (The -p flag is not necessary when using the 2-node selector, and is ignored.) When both nodes are visible, they become primary without delay. When a single node boots, it waits for a short time (defaulting to five seconds) before going primary. This prevents a different split-brain case when introducing new nodes.

Communications mode

Cluster membership runs on exactly one of two communication modes. The communication mode must be chosen at cluster creation time and cannot be reconfigured while the cluster is running.

Multicast

This communication mode uses UDP multicast for communication between nodes. This requires that UDP multicast be available and correctly working on all hosts in the cluster.

Scattercast

This communication mode uses UDP unicast in a mesh topology for communication between nodes. This mode is intended for use where multicast support is not available, such as in the cloud. Scattercast requires significantly more complex configuration, and incurs some network overhead. Thus we do not recommend scattercast where multicast is available.

About Multicast

Multicast is the default communications mode used in Rhino.

This mode allows for automated cluster membership discovery by using the properties of multicast behaviour. When the network supports multicast this is the preferred communication mode, as it is much easier to configure in Rhino.

Nodes communicate by sending messages to well-known multicast groups. These are received by all nodes within the same network.

Configuring multicast

Configuration of multicast is very simple. A multicast address range must be specified; addresses from this range are used for different internal groups. Configuring this is handled by the rhino-install.sh script.

Troubleshooting multicast

A troubleshooting guide for multicast clusters can be found in Clustering.

About Scattercast

Scattercast is implemented as a replacement for UDP multicast clustering in environments that do not support multicast.

Warning
Cluster-wide communication mode

Choosing a cluster communication mode is a cluster-wide decision. It should be made before installation begins. The cluster cannot correctly form when the cluster communication mode is inconsistent; two independent, primary clusters will form.

How does it work?

Normally Savanna will send UDP datagrams to a well-known multicast group address / port combination to maintain cluster membership. Message transfer happens on separate multicast group address / port combinations that are allocated at runtime from a pool.

Scattercast replaces each multicast UDP datagram with multiple unicast UDP datagrams, one to each involved node. Each node has a unique unicast address / port combination ("scattercast endpoint") used for cluster membership. A separate unicast address is used for each message group. Another separate unicast address is used for co-ordinating state transfer between members; it is derived from the membership IP address and the membership port plus state_distribution_port_offset (default 100, configured in {$NODE_HOME}/config/savanna/cluster.properties). For example, a membership endpoint of 192.168.1.1:19000 with the default offset yields a state-transfer endpoint of 192.168.1.1:19100.

All nodes must a priori know the endpoint addresses of all other nodes in the same cluster. To achieve this, a configuration file scattercast.endpoints is stored on each node. This file is created during install and is subsequently managed using the Scattercast Management commands.

Separate endpoints for message transfer are allocated at runtime based on the membership address and a port chosen from the range first_port to last_port (defaults 46700 and 46800, configured in {$NODE_HOME}/config/savanna/cluster.properties).

Warning

Scattercast uses separate groups for message transfer and membership. Ports used for the membership group in scattercast endpoints must not overlap with the port range used for message groups.

UDP broadcast addresses are not supported in scattercast endpoints. These will not be rejected by the installer, scattercast commands, or the recovery tool, but must be avoided by users.

Scattercast endpoint configuration is versioned, and hashed to ensure consistency. The cluster will prefer the newest version if multiple versions are detected when a node tries to join the cluster. Nodes that detect an out-of-date local version will shut down immediately. Nodes that detect a hash mismatch will also shut down immediately, as this indicates corrupt or manually modified contents.

All clustering configuration is stored per node, and must be updated on all nodes to remain in sync. It is expected that this should not be changed often, if at all.

What’s the downside?

Scattercast requires sending additional traffic. For an N-node cluster, scattercast will generate about (N-1) times as many datagrams as the equivalent multicast cluster. That is, there is no penalty for a 2-node cluster; a 3-node cluster will generate about 2x traffic; a 4-node cluster will generate 3x traffic; and so on. At high loads you may run out of network bandwidth sooner; also, there is some CPU overhead involved in sending the extra datagrams.

Scattercast cannot automatically discover nodes; you must explicitly provide endpoint information for all nodes in the cluster. To add nodes, remove nodes, or update nodes at runtime, online management commands should be used.

Warning Manual editing of the configuration file scattercast.endpoints is not supported. Manual editing will cause edited nodes to fail to boot.

Initial setup

A cluster must be seeded with an initial scattercast endpoints file containing valid mappings for all initial nodes. Without a valid scattercast endpoints file a node is unable to boot in scattercast comms mode. This initial endpoints set may be generated by the Rhino installer. When choosing to install in scattercast mode, the installer script must be provided with an initial endpoints set. Details can be found in Unpack and Gather Information.

If the initial cluster size is known at installation time, providing the full endpoint set here is recommended, as there is no manual step required when this is done.

Troubleshooting guide

A troubleshooting guide for scattercast can be found in Scattercast Clustering.

Scattercast Management

Below is an overview of procedures for managing and repairing scattercast endpoints.

Online management

Once a cluster has been established following the procedures in initial setup, online management of the scattercast endpoints becomes possible. There are four basic management commands, to get, add, delete, or update scattercast endpoints.

Each command applies the result to all currently executing nodes. If a node that requires the new endpoints set is not currently executing, the file must be copied to it manually: copy {$NODE_HOME}/config/savanna/scattercast.endpoints from any up-to-date node to the matching path on the offline node. All currently running nodes should have an up-to-date copy of this file, which can be verified using the getscattercastendpoints command.
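
For example, assuming hypothetical host names and node paths, the file could be copied with scp from an up-to-date node's $RHINO_HOME before booting the offline node:

$ scp node-101/config/savanna/scattercast.endpoints \
      rhino-host2:/opt/rhino/node-102/config/savanna/scattercast.endpoints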

Warning scattercast.endpoints cannot be manually edited. Nodes will not boot with a manually edited scattercast.endpoints.

Multicast, localhost, and wildcard addresses are not permitted in scattercast endpoints. An endpoint address is used both to send and to receive: a localhost address would confine the cluster to a single host, and a wildcard address can only be used to listen on all interfaces, not to send from. These addresses are therefore invalid in scattercast endpoints.

Inconsistent states

After manually copying scattercast endpoint sets to all cluster members, the cluster will reject all write-management commands until it is rebooted. This occurs because the persistent and in-memory states are no longer identical across all nodes.

An inconsistent state may also arise in other ways. It can be resolved with the following steps:

  • If the disk state is correct on all nodes, reboot the cluster.

  • If the disk state is not correct or not the same on all nodes:

    • If the disk state is incorrect on all nodes, use the recover-scattercast-endpoints.sh tool (see Repair below) to create a new, correct file, and copy it to all nodes before rebooting.

    • If the disk state is correct on some but not all nodes, copy the file from a correct node to all other nodes.

Repair

In most cases where the scattercast configuration is inconsistent, the faulty nodes can be restored by copying the scattercast.endpoints file from an operational node. If the node has been deleted from the current configuration, it should first be re-added using the addscattercastendpoints rhino-console command. The configuration file can be found at $RHINO_HOME/$NODE/config/savanna/scattercast.endpoints.

If no nodes are operational, such as after a major change to network addressing, the tool recover-scattercast-endpoints.sh can be used to rebuild the configuration from scratch.

After running recover-scattercast-endpoints.sh, you must copy the generated file to $RHINO_HOME/$NODE/config/savanna/scattercast.endpoints for each node.

Recovering scattercast endpoints

The recover-scattercast-endpoints.sh script is used to rebuild the scattercast config file after a major network change or configuration data loss. It can be run interactively, prompting for node,IP,port tuples, or non-interactively by supplying the new configuration on the command line. Options must be provided before the list of endpoints. If you want to use automatic port assignment, you must provide the baseport and offset options to allow calculation of a valid port set.

Usage:

$ cd rhino
$ ./recover-scattercast-endpoints.sh -?

 Usage: ./recover-scattercast-endpoints.sh [options] [node,ip-address[,port]]*
       Creates a seed scattercast endpoints file. The generated file needs to be copied to {$NODE_HOME}/config/savanna/scattercast.endpoints in all cluster nodes.
       If no endpoints are provided, enters interactive mode.
       arguments:
        -f, --file           Relative path to output file.
        -b, --baseport       The scattercast base port, used to derive a port when no port is specified in endpoints.
        -o, --offset         The scattercast port offset, used to derive a port when no port is specified in endpoints.
        -?, --help           Displays this message.

Example:

$RHINO_HOME/recover-scattercast-endpoints.sh -b 19000 -o 100 101,192.168.1.1,19000 102,192.168.1.2 103,192.168.1.2,19003
Warning If baseport and offset are provided, they are used only for the recovery tool. Nodes added or updated with management commands will continue to use values in cluster.properties.

Add Scattercast Endpoint(s)

addscattercastendpoints adds one or more new endpoints to the scattercast endpoints set.

This must be done before the new node is booted because a node cannot boot if it is not in the scattercast endpoints set. After running the add command successfully, the scattercast endpoints file must be copied from an existing node to the new node; this cannot be done with Rhino management commands.

Note If an endpoint is added with the wrong IP address or port, this can be resolved by deleting and re-adding the endpoint.

Command

addscattercastendpoints <node,ip-address[,port]>*
  Description
    Add scattercast endpoints for new cluster members. If port is omitted, one will
    be assigned automatically.

Examples

Add endpoints for nodes 102, 103:

[Rhino@localhost (#1)] addscattercastendpoints 102,192.168.0.127 103,192.168.0.127
Endpoints added successfully. Displaying new scattercast endpoints mappings:
NodeID   Address
-------  --------------------
    101   192.168.0.127:12000
    102   192.168.0.127:12001
    103   192.168.0.127:12002
3 rows

Attempt to add an invalid address:

[Rhino@localhost (#4)] addscattercastendpoints 104,224.0.101.1
Multicast addresses are not permitted in scattercast endpoints: 224.0.101.1
Invalid usage for command 'addscattercastendpoints'.  Usage:
  addscattercastendpoints <node,ip-address[,port]>*
     Add scattercast endpoints for new cluster members. If port is omitted, one will
     be assigned automatically.

Add a node while node 102 has changed disk state:

[Rhino@localhost (#7)] addscattercastendpoints 104,192.168.0.127
Failed to add endpoints:
Node 102 reports: Disk state does not match memory state. No write commands available.

Delete Scattercast Endpoint(s)

deletescattercastendpoints removes endpoints for shut-down nodes.

A node’s endpoint cannot be deleted while in use. This means that the node must be shut down and have left the cluster before a delete can be issued.

A node that has been deleted cannot rejoin the cluster unless it is re-added, and the new scattercast endpoints file copied over. Copying an older scattercast endpoints file will not work, as the cluster uses versioning to protect against out-of-sync endpoints files.

Command

deletescattercastendpoints <-nodes node1,node2,...>
  Description
    Delete scattercast endpoints for cluster members being removed.

Examples

Delete a shut-down node:

[Rhino@localhost (#3)] deletescattercastendpoints -nodes 104
Endpoints deleted successfully, removed nodes shut down.
New scattercast endpoints mappings:
NodeID   Address
-------  --------------------
    101   192.168.0.127:12000
    102   192.168.0.127:12001
    103   192.168.0.127:12002
3 rows

Delete a running node:

[Rhino@localhost (#9)] deletescattercastendpoints -nodes 101
Failed to delete scattercast endpoints due to: Node: 101 currently running. Please shutdown nodes before deleting.

Get Scattercast Endpoints

getscattercastendpoints reports the set of scattercast endpoints known to all currently running cluster members.

This command may be issued at any time. If cluster membership changes while the read executes, the command fails immediately, reporting that the cluster membership changed; this matches the behaviour of the write commands.

Command

getscattercastendpoints
  Description
    Get the scattercast endpoints for the cluster.

Examples

Single node read with consistent endpoints:

[Rhino@localhost (#1)] getscattercastendpoints
[Consensus] Disk Mapping   : Coherent
[Consensus] Memory Mapping :
      [101] Address  : 192.168.0.127:12000
      [102] Address  : 192.168.0.127:12001

Two nodes read, where node 102 has different in-memory and disk mappings:

[Rhino@localhost (#2)] getscattercastendpoints
[101] Disk Mapping   : Coherent
[101] Memory Mapping :
      [101] Address  : 192.168.0.127:12000
      [102] Address  : 192.168.0.127:12001

[102] Disk Mapping   :
      [101] Address  : 192.168.0.127:12000
      [102] Address  : 192.168.0.127:12001
      [103] Address  : 192.168.0.127:18000

[102] Memory Mapping :
      [101] Address  : 192.168.0.127:12000
      [102] Address  : 192.168.0.127:12001

Read failed due to a cluster-membership change:

[Rhino@localhost (#3)] getscattercastendpoints
[Consensus] Disk Mapping   : Cluster membership change detected, command aborting
[Consensus] Memory Mapping : Cluster membership change detected, command aborting

Update Scattercast Endpoint(s)

updatescattercastendpoints updates the endpoints for currently running nodes.

To update scattercast endpoints, the SLEE must first be stopped cluster-wide. A successful update triggers an immediate cluster restart to reload scattercast state.

Update commands make a best-effort attempt to validate that the updated value will be usable after the cluster reboot. This is done by attempting to bind the new address.

Updates cannot be applied to non-running cluster nodes. To update a node that is out of the cluster, delete the node and re-add it with the new address.
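As a sketch, replacing the address of an out-of-cluster node 103 might look like this (the addresses are hypothetical):

[Rhino@localhost (#1)] deletescattercastendpoints -nodes 103
[Rhino@localhost (#2)] addscattercastendpoints 103,192.168.0.200,12002

Remember to copy the resulting scattercast.endpoints file from a running node to node 103 before booting it, as described in Add Scattercast Endpoint(s) above.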

Command

updatescattercastendpoints <node,ip-address[,port]>*
  Description
    Update scattercast endpoints for existing cluster members. If port is omitted,
    one will be assigned automatically. WARNING: This command will cause a cluster
    restart.

Examples

Update with the whole cluster in a stopped state:

[Rhino@localhost (#0)] updatescattercastendpoints 101,192.168.0.127,18000
Update executed successfully, cluster shutting down now.

Update while the SLEE is in a running state:

[Rhino@localhost (#3)] updatescattercastendpoints 101,192.168.0.127,12000
Failed to update scattercast endpoints due to: Cannot update scattercast endpoints while SLEE is running.

Update a non-running node:

[Rhino@localhost (#5)] updatescattercastendpoints 102,192.168.0.127,12000
Failed to update scattercast endpoints due to: 102 is not currently alive. Updates can only be done against currently live nodes

Errors

If the update command fails part way through execution, the update has most likely not been applied. In some rare circumstances, such as multiple disk errors or filesystem-access problems, the update may have been applied to only some nodes. Run the getscattercastendpoints command to verify that the on-disk configuration is either coherent with the in-memory configuration (the command rolled back cleanly) or consistent across all nodes (the command failed after writing all changes). Identify and fix the fault that caused the failure, then reboot the cluster. If the update failed before writing the new configuration, rerun it after fixing the fault.

Alarms

If the updatescattercastendpoints command is unable to reboot the cluster automatically, for example due to a timeout writing state to the persistent database, it raises a CRITICAL alarm of type rhino.scattercast.update-reboot-required with the message:

Scattercast endpoints have been updated. A cluster reboot is required
to apply the update as soon as possible otherwise a later partial reboot
e.g. due to network segmentation could result in a split-brain cluster.

Static Replication Domaining

This section covers what resources are domainable in Rhino 2.3 and later, instructions for configuring basic and advanced features of static replication domaining, and how to display the current domaining configuration.

Note
What is static replication domaining?

Static replication domaining means partitioning Rhino’s replication mechanisms to perform replication only between selected subsets of nodes.

A subset of nodes is called a "domain". This provides better scaling for larger clusters, while still providing a level of replication to ensure fault tolerance. Prior to Rhino 2.3.0, a cluster could be considered as having one and only one domain, and every node could be considered a member of that domain.

Domain configuration consists of a set of domain definitions, each associated with one or more domainable resources and one or more cluster nodes.

Domainable resources

Rhino includes two types of domainable resources:

  • persistence resources — instances of MemDB (Rhino’s in-memory database) that act as storage for SBB, RA, or profile replication

  • activity handler resources — the existence and the state of Activity Context Interfaces, Activity Contexts, and associated attributes.

Tip Activity Handlers, SBB Persistence, and RA Persistence replicated resources are domainable in Rhino 2.3.0 and later.
Warning The Null Activity Factory and Activity Context Naming are not domainable; these resources are always replicated cluster-wide.

Configuring Static Replication Domaining

Below are instructions for configuring static replication domaining.

Configuring basic domaining settings

To configure domaining, you edit the config/rhino-config.xml in each Rhino node directory. The domain definitions in those files look like this:

<domain name="domain-name" nodes="101,102,...,n">
  ... resources associated with the domain ...
</domain>

Domainable resources

Inside each domain configuration block, each resource is defined using the following format and resource names:

Persistence resources

Format: inside a memdb-resource section:

<memdb-resource>
  ...memory database name...
</memdb-resource>

Name: same as the jndi-name used in its declaration in rhino-config.xml:

<memdb>
  <jndi-name>DomainedMemoryDatabase</jndi-name>
  <message-id>10005</message-id>
  <group-name>rhino-db</group-name>
  <committed-size>100M</committed-size>
  <resync-rate>100000</resync-rate>
</memdb>

Activity handler resources

Format: inside an ah-resource section:

<ah-resource>
  ...activity handler name...
</ah-resource>

Name: same as its group-name in rhino-config.xml:

<activity-handler>
  <group-name>rhino-ah</group-name>
  <message-id>10000</message-id>
  <resync-rate>100000</resync-rate>
</activity-handler>
Warning

It is extremely important that the domaining configuration section of rhino-config.xml is identical between all cluster nodes and does not change for the lifetime of the cluster. Any changes to the domaining configuration must be made while the cluster is offline.

Some persistence resources are not domainable as they contain data which either makes no sense to domain, or which must be global to the entire cluster. The current undomainable persistence resources are ReplicatedMemoryDatabase, LocalMemoryDatabase, and ManagementDatabase.
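Given the requirement above that the domaining configuration be identical across nodes, one quick check (a sketch assuming all node directories are visible from a single host; the layout is hypothetical) is to compare the files directly:

$ md5sum node-*/config/rhino-config.xml

If other per-node settings legitimately differ, diff the domaining sections of the files instead.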

Example configuration

rhino-config.xml includes the following sample domaining configuration, commented out by default. It configures an 8-node cluster into 4 domains, with each domain containing 2 nodes — specifying that replication of SBB and RA shared state only happens between each pair of nodes.

<!--
  Example replication domain configuration.

  This example splits the cluster into several 2-node domain pairs for the purposes of
  service state replication. This example does not cover replication domaining for writeable
  profiles.
-->
<domain name="domain-1" nodes="101,102">
  <memdb-resource>DomainedMemoryDatabase</memdb-resource>
  <ah-resource>rhino-ah</ah-resource>
</domain>
<domain name="domain-2" nodes="201,202">
  <memdb-resource>DomainedMemoryDatabase</memdb-resource>
  <ah-resource>rhino-ah</ah-resource>
</domain>
<domain name="domain-3" nodes="301,302">
  <memdb-resource>DomainedMemoryDatabase</memdb-resource>
  <ah-resource>rhino-ah</ah-resource>
</domain>
<domain name="domain-4" nodes="401,402">
  <memdb-resource>DomainedMemoryDatabase</memdb-resource>
  <ah-resource>rhino-ah</ah-resource>
</domain>
Warning This example contains node IDs which start with the same number as their corresponding domain. While it’s not required, Metaswitch recommends this naming scheme as it clarifies which nodes are associated with a particular domain.

Default domain

The default domain (named domain-0) is not configurable and contains all replicated resources which are not explicitly domained as part of the configuration in rhino-config.xml. If a node is booted into the cluster and does not have a domain configuration associated with it, it will use the default domain for all persistence resources. If no domains are configured at all, all resources belong to the default domain.

Advanced configuration

It is possible, though less usual, to configure overlapping domains with different resources. The only constraint on the domaining configuration is that, for any given node, each domainable resource may occur in only one domain. For example, the following configuration is valid, despite multiple domains containing the same node IDs.

Note This example builds on the basic example, adding two more domains (domain-profiles-1 and domain-profiles-2). These additional domains allow replication of writeable profiles (backed by MyWriteableProfileDatabase) across a larger set of nodes than the domains used for service replication.
<domain name="domain-profiles-1" nodes="101,102,201,202">
  <memdb-resource>MyWriteableProfileDatabase</memdb-resource>
</domain>
<domain name="domain-profiles-2" nodes="301,302,401,402">
  <memdb-resource>MyWriteableProfileDatabase</memdb-resource>
</domain>

<domain name="domain-services-1" nodes="101,102">
  <memdb-resource>DomainedMemoryDatabase</memdb-resource>
  <ah-resource>rhino-ah</ah-resource>
</domain>
<domain name="domain-services-2" nodes="201,202">
  <memdb-resource>DomainedMemoryDatabase</memdb-resource>
  <ah-resource>rhino-ah</ah-resource>
</domain>
<domain name="domain-services-3" nodes="301,302">
  <memdb-resource>DomainedMemoryDatabase</memdb-resource>
  <ah-resource>rhino-ah</ah-resource>
</domain>
<domain name="domain-services-4" nodes="401,402">
  <memdb-resource>DomainedMemoryDatabase</memdb-resource>
  <ah-resource>rhino-ah</ah-resource>
</domain>
Warning The configuration and setup of the memory database for use with writeable profiles is beyond the scope of this documentation.

Displaying the Current Domaining Configuration

To display the current domaining configuration, use the following rhino-console command or MBean operation.

Console command: getdomainstate

Command

getdomainstate
  Description
    Display the current state of all configured domains

Output

Display the current state of all configured domains.

Example

$ ./rhino-console getDomainState
domain-1:
DomainedMemoryDatabase, rhino-ah
  101                              Running
  102                              Running

domain-2:
DomainedMemoryDatabase, rhino-ah
  201                              Running
  202                              Running

domain-3:
DomainedMemoryDatabase, rhino-ah
  301                              Stopped
  302                                    -

domain-4:
DomainedMemoryDatabase, rhino-ah
  401                                    -
  402                                    -
Warning Nodes which are configured with domain information but are not currently part of the cluster are represented by a -.

MBean operation: getDomainConfig

MBean

Rhino extension

public TabularData getDomainConfig()
    throws ManagementException;

(See the javadoc for the structure of the TabularData returned by this operation.)

Data Striping

This section covers which MemDB instances support data striping, instructions for configuring basic and advanced features of MemDB data striping, how to display the current striping configuration, and striping-related statistics.

Note
What is MemDB data striping?

MemDB data striping means dividing a MemDB instance into partially independent "stripes". This can remove bottlenecks in MemDB, letting Rhino make better use of the available cores on machines with many CPUs. In other words, the primary purpose of data striping is to increase vertical scalability. MemDB data striping was introduced in Rhino 2.3.1.

Warning Data striping should not be used for replicated MemDB instances. Under some conditions it can corrupt the management database.

MemDB instances

Rhino includes two types of MemDB (Rhino’s in-memory database):

  • local MemDB — contains state local to the Rhino node, used by non-replicated applications running in "high-availability mode".

  • replicated MemDB — contains state replicated across the cluster, domain, or sub-cluster, used by replicated applications running in "fault-tolerant mode".

Warning MemDB instances backed by disk storage — including the profile database and management database — do not support striping.

Configuring Data Striping

Below are instructions for configuring data striping.

Configuring basic striping settings

The number of stripes can be configured for each instance of MemDB.

Note
How does the stripe count work?

To scale well on increasingly multi-core systems, it’s important to understand how the stripe count works:

  • Each MemDB stripe has a single commit order. While concurrent transactions (different events processed in parallel) execute concurrently against a single MemDB stripe, the commit protocol enforces an ordering, which means that no more than one commit at a time can occur against a single stripe.

  • While a single CPU can process thousands of commits per second, eventually a single commit order becomes a latency and scalability bottleneck.

  • Therefore, configuring a stripe count greater than 1 means that there is now more than 1 commit order — which means there is more scalability. So a stripe count of 8 means that up to 8 transactions can commit concurrently.

In summary, stripe count is the measure of commit concurrency.

Below are details on the default settings for stripe counts, and how to choose and set the stripe count for your MemDB instances.

Default settings

By default, Rhino disables data striping for all MemDB instances.

Choosing a stripe count

The stripe count must be 1 or greater, and must be a power of two (1, 2, 4, 8, 16, …​). The stripe count should be proportional to the number of CPU cores in a server. A good rule of thumb is that the stripe count should be about 1/2 the number of CPU cores.

To disable striping, use a stripe count of 1.
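For example, applying the rule of thumb on a 16-core server (a sketch using standard Linux tooling):

$ nproc
16

Half of 16 is 8, which is already a power of two, so 8 would be a reasonable starting stripe count.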

In some cases where nodes regularly leave and join a cluster, enabling striping can result in all cluster nodes being restarted.

Warning We recommend that you consult with Metaswitch before enabling striping to ensure it is configured correctly in a stable and consistent network.

Setting the stripe count

Each MemDB instance has its own stripe count. To configure the stripe count for a particular MemDB instance, you edit the MemDB configuration for that instance, in the config/rhino-config.xml file in each Rhino node directory.

Warning The stripe count for a MemDB instance must be the same on all nodes in the cluster. A new node will not start if it contains a stripe count which is inconsistent with other nodes in the cluster. Therefore, the stripe count cannot be changed while a cluster is running.
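Before starting a node, you could verify that the values match by comparing them across node directories (a sketch assuming a hypothetical layout where all node directories are visible from one host):

$ grep stripe-count node-*/config/rhino-config.xml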

The striping configuration for a local MemDB instance looks like this:

<memdb-local>
  ...
  <stripe-count>8</stripe-count>
</memdb-local>
Warning Data striping is not a supported configuration for replicated MemDB instances.

Displaying the Current Striping Configuration

To display the current striping configuration, use the following rhino-console command or MBean operations.

Console command: getstripingstate

Command

getstripingstate
  Description
    Display the striping configuration of all MemDB instances

Output

Display the striping configuration of all MemDB instances.

Example

$ ./rhino-console getstripingstate
Striping configuration for replicated MemDB instances:

memdb-resource             stripe-count   stripe-offset
-------------------------  -------------  --------------
       ManagementDatabase              1               0
          ProfileDatabase              1               0
 ReplicatedMemoryDatabase              1               0
3 rows

Striping configuration for local MemDB instances on node 101:

memdb-resource        stripe-count
--------------------  -------------
 LocalMemoryDatabase              8
1 rows

MBean operation: getReplicatedMemDBStripingConfig

MBean

Rhino extension

public TabularData getReplicatedMemDBStripingConfig()
  throws ManagementException;

(See the javadoc for the structure of the TabularData returned by this operation.)

MBean operation: getLocalMemDBStripingNodeConfig

MBean

Rhino extension

public TabularData getLocalMemDBStripingNodeConfig()
  throws ManagementException;

(See the javadoc for the structure of the TabularData returned by this operation.)

MemDB and striping statistics

There are two sets of statistics related to MemDB data striping: MemDB statistics and striping statistics.

MemDB statistics and striping

MemDB collects statistics under the MemDB-Replicated and MemDB-Local parameter sets, within each data stripe. They can be monitored on a per-stripe basis, or viewed as an aggregate across all stripes.

The parameter set names of the per-stripe statistics end with a suffix of the form .stripe-N. For example, the stats for the first stripe have the suffix .stripe-0.

Striping statistics

MemDB maintains atomicity, consistency and isolation of data across stripes. This involves managing the versions of data exposed to various client transactions. The MemDB-Timestamp parameter set contains the relevant statistics.

This is a listing of the statistics available for a particular MemDB instance, within the MemDB-Timestamp parameter set:

Counter type statistics:
Id: Name:             Label:     Description:
0   waitingThreads    waiting    The number of threads waiting for a timestamp to become safe
1   unexposedCommits  unexposed  The number of commits which are not yet safe to expose

A database transaction containing at least one write is considered "safe to expose to client transactions" when (as shown by these statistics) all its changes — as well as all the write transactions that precede them — are available across all stripes.

Note These statistics are expected to have low values even under load (often with value zero), and should stay at zero when Rhino is not under load.
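To watch these counters live, one option is the rhino-stats monitoring tool shipped with Rhino, pointed at the MemDB-Timestamp parameter set (a sketch; see the Management Tools documentation for the tool's exact options):

$ ./rhino-stats -m MemDB-Timestamp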

Service Assurance Server (SAS) Tracing

Metaswitch Service Assurance Server (SAS) is a platform that records traces of network flows and service logic.

Rhino TAS provides a SAS facility for components to send events to SAS.

This section describes the commands for managing the SAS facility and resource bundles.

For information about developing Rhino applications with the SAS tracing functionality, refer to the Rhino SAS API Development Guide.

SAS Configuration

This page describes the commands used to configure and enable the SAS facility. SAS configuration is namespace-aware: all these commands apply to the current namespace for the client (selected with setactivenamespace).

Resource Identifiers

Events sent to SAS are associated with a resource identifier. All components within a Rhino namespace use the same resource identifier. The resource identifier can be set with the setsasresourceid command.

The resource identifier is included in the generated resource bundle that is imported into SAS.

SAS Servers

Rhino supports connecting to some or all SAS server instances in a federation. This is maintained as an internal list of servers and ports. Servers may be added with addsasserver and removed with removesasserver. By default SAS listens on port 6761. If the port is omitted from the add command, then the default port will be used.

Changing SAS configuration

Note that all commands on this page that change SAS configuration require either that the SLEE be in the Stopped state on all cluster nodes or that SAS tracing be disabled.

If you want to disable SAS tracing without stopping the SLEE, you can do so using setsasenabled false -force true. Then make changes and re-enable SAS using setsasenabled true.

The getsasconfiguration command can be run at any time.
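Putting these together, a typical reconfiguration sequence might look like the following sketch (the server names are hypothetical):

$ ./rhino-console setsasenabled false -force true
$ ./rhino-console removesasserver old-sas.example.com:6761
$ ./rhino-console addsasserver new-sas.example.com
$ ./rhino-console setsasenabled true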

Console command: addsasserver

Command

addsasserver <servers>
  Description
    Add one or more servers to the set of configured SAS servers
  Required Arguments
    servers  Comma delimited list of host:port pairs for SAS servers

Example

[Rhino@localhost (#1)] addsasserver localhost:12000
Added server(s) to SAS configuration properties:
servers=localhost:12000
[Rhino@localhost (#2)] addsasserver 127.0.0.1:12001,127.0.0.2
Added server(s) to SAS configuration properties:
servers=127.0.0.1:12001,127.0.0.2

Console command: removesasserver

Command

removesasserver <servers>
  Description
    Remove one or more servers from the set of configured SAS servers
  Required Arguments
    servers  Comma delimited list of host:port pairs for SAS servers

Example

[Rhino@localhost (#1)] removesasserver localhost:12000
Removed server(s) from SAS configuration properties:
servers=localhost:12000
[Rhino@localhost (#2)] removesasserver 127.0.0.1:12001,127.0.0.2
Removed server(s) from SAS configuration properties:
servers=127.0.0.1:12001,127.0.0.2

Console command: setsassystemname

Command

setsassystemname <systemName> [-appendID <appendID>]
  Description
    Configure the SAS system name.
  Required Arguments
    systemName  The unique system name to use. Cluster wide
  Options
    -appendID  If true, append node ID to system name

Example

$ ./rhino-console setsassystemname mmtel
Set SAS system name:
systemName=mmtel
$ ./rhino-console setsassystemname mmtel -appendID true
Set SAS system name:
systemName=mmtel
appendID=true

Console command: setsassystemtype

Command

setsassystemtype <systemType>
  Description
    Configure the SAS system type.
  Required Arguments
    systemType  The system type to use. Cluster wide

For systems running Sentinel products on the Rhino platform, Metaswitch recommends the following system type strings:

  • VoLTE TAS for Sentinel VoLTE

  • IPSMGW for Sentinel IPSMGW

  • BSF for Sentinel GAA

  • ShCM for Sh Cache Microservice

Example

$ ./rhino-console setsassystemtype BSF
Set SAS system type:
systemType=BSF

Console command: setsassystemversion

Command

setsassystemversion <systemVersion>
  Description
    Configure the SAS system version.
  Required Arguments
    systemVersion  The system version to use. Cluster wide

For systems running Sentinel products on the Rhino platform, Metaswitch recommends using the three-digit version number of the installed product as the system version string, for example 3.0.0.

Example

$ ./rhino-console setsassystemversion 3.0.0
Set SAS system version:
systemVersion=3.0.0

Console command: setsasresourceid

Command

setsasresourceid <resourceIdentifier>
  Description
    Configure the SAS resource identifier.
  Required Arguments
    resourceIdentifier  The resource identifier to use.

Example

$ ./rhino-console setsasresourceid com.metaswitch.rhino
Set SAS resource identifier:
resourceIdentifier=com.metaswitch.rhino

Console command: setsasqueuesize

Command

setsasqueuesize <queueSize>
  Description
    Configure the per server SAS message queue limit.
  Required Arguments
    queueSize  The maximum number of messages to queue for sending to the SAS
    server.

Example

$ ./rhino-console setsasqueuesize 100000
Set SAS queue size:
queueSize=100000

Console command: getsasconfiguration

Command

getsasconfiguration
  Description
    Display SAS tracing configuration

Example

$ ./rhino-console getsasconfiguration
SAS tracing is currently disabled.

Configuration properties for SAS:
servers=[sas-server]
systemName=mmtel
appendNodeIdToSystemName=true
resourceIdentifier=com.metaswitch.rhino
queueSize=10000 per server

Enabling and Disabling SAS Tracing

SAS tracing can be enabled and disabled using the setsasenabled command. The Rhino SAS facility must be configured with both a resource identifier and server list before being enabled. SAS tracing state is namespace-aware: this command applies to the current namespace for the client (selected with setactivenamespace).

Disabling SAS tracing on a running SLEE requires the -force option. When the SLEE is running, there may be activities actively tracing to SAS, and live reconfiguration of the SAS facility will break all trails started before the reconfiguration. If this is acceptable, the -force option allows a clean shutdown of SAS tracing for reconfiguration.

Console command: setsasenabled

Command

setsasenabled <enable> [-force <force>]
  Description
    Enable or disable SAS tracing. Configure SAS before enabling.
  Required Arguments
    enable  True to enable SAS tracing, false to disable.
  Options
    -force  True to override the SLEE state check when disabling SAS tracing state.
    SAS tracing state cannot normally be disabled when the SLEE is not in the
    Stopped state, because this may cause incomplete trails to be created in SAS for
    sessions that are in progress.

Example

To enable SAS tracing:

$ ./rhino-console setsasenabled true
SAS tracing enabled

SAS Bundle Mappings

Rhino TAS uses a prefix per mini-bundle to generate full event IDs included in the exported SAS resource bundle.

In general, you need to manually define the prefixes and map them to the mini-bundles. This section describes the Rhino management console commands that you can use to manage the mappings.

For more information about defining bundle mappings, see Define bundle mappings in the Rhino SAS API Development Guide.

Console command: listsasbundlemappings

Command

listsasbundlemappings [-sortBy <sortBy>]
  Description
    Lists all the SAS bundle mappings.
  Options
    -sortBy  The column to sort the bundle mappings by for display. Either 'name' or
    'prefix'

Example

[Rhino@localhost (#1)] listsasbundlemappings
name                                      prefix
----------------------------------------  -------
 com.opencloud.slee.services.example.sas   0x0001
1 rows

Console command: setsasbundlemapping

Command

setsasbundlemapping <name> <prefix>
  Description
    Sets a SAS bundle mapping.
  Required Arguments
    name  The fully qualified name of the bundle.
    prefix  The prefix for the bundle mapping, as a decimal, hex, or octal string.

Example

[Rhino@localhost (#1)] setsasbundlemapping com.opencloud.slee.services.example.sas 0x0001
Added a SAS bundle mapping from com.opencloud.slee.services.example.sas to 0x0001.

Console command: removesasbundlemapping

Command

removesasbundlemapping <name>
  Description
    Removes a SAS bundle mapping.
  Required Arguments
    name  The fully qualified name of the bundle.

Example

[Rhino@localhost (#1)] removesasbundlemapping com.opencloud.slee.services.example.sas
Prefix for com.opencloud.slee.services.example.sas removed.

Console command: listunmappedsasbundles

Command

listunmappedsasbundles
  Description
    Display unmapped SAS bundles

Example

[Rhino@localhost (#1)] listunmappedsasbundles
Unmapped SAS bundles found:
com.opencloud.slee.resources.http

SAS Bundle Generation

SAS requires at least one resource bundle file containing definitions of all events that will be sent to the server. These definitions show SAS how to display and interpret data sent to the server.

Rhino verifies that a SAS-enabled deployable unit includes a resource bundle containing definitions for all events it uses. These per-DU resource bundles are called mini-bundles.

Rhino provides console commands to export a resource bundle suitable for use by SAS, containing mini-bundles from all installed deployable units.

Console command: exportsasbundle

Command

exportsasbundle <bundleFileName>
  Description
    Export SAS bundle.
  Required Arguments
    bundleFileName  The bundle file name.

Example

[Rhino@localhost (#1)] exportsasbundle my-bundle.yaml
Wrote combined bundle to: ~/my-bundle.yaml
Exported bundle
info:
  identifier: my-rhino
  minimum_sas_version: '9.1'
  version: '1522714397579'
events:
  0x000100:
    summary: Test Event
    level: 100
enums: {
  }

MBean methods

System Properties

Below is a list of system properties that can be used to modify Rhino behaviour.
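These are standard Java system properties, ultimately passed to the JVM as -D arguments; where to add them depends on your installation's start scripts, so the following is only an illustrative argument (the value shown is an arbitrary example, not a recommendation):

-Dtransaction.default_timeout=240000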

Name Description

com.opencloud.savanna2.framework.GroupHeartbeat.interval

Interval between sending per-group heartbeat messages

com.opencloud.savanna2.framework.GroupHeartbeat.loss_threshold

Number of unreceived pending heartbeats before the group heartbeat watchdog condition triggers

eventrouter.transaction_timeout

Transaction timeout for eventrouter transactions

logging.status.level

Log level to set on the Log4j status logger

notifications.max_pending_notifications

Maximum number of pending JMX notifications before notifications are dropped

notifications.notification_threads

Number of JMX notification threads to run

rhino.ah.gcthreads

Set the max number of AH GC threads

rhino.ah.pessimistic.maxlocktimeout

Set the maximum lock timeout for local activity handler

rhino.ah.replicating.maxlocktimeout

Set the maximum lock timeout for replicated activities

rhino.ah.replicating.migrationtimeout

Default ah migration timeout

rhino.audit.log_format

Format of CSV management audit log

rhino.bootup.locktimeout

Distributed lock timeout used during node bootup

rhino.config.yaml_code_point_limit

Set the maximum number of code points supported in a declarative YAML bundle for importing

rhino.er.ra_notifier_threads

Number of resource adaptor entity notifier (callback) threads

rhino.group_rmi.max_threads

Maximum number of threads available to handle group RMI operations.

rhino.halttimeout

The timeout in milliseconds before a JVM shutdown is forcibly terminated with a Runtime.halt().

rhino.lenient_config_property_validation

Relax resource adaptor entity config property validation.

rhino.license.default

Location of default Rhino license file

rhino.lock.lockmanager.defaultlocktimeout

Set the default timeout for lock acquisitions

rhino.mbean.boot.timeout

Timeout used for write operations during exceptionally long SLEE starts (during Rhino boot)

rhino.misc.queue_size

Queue size for Distributed Resource Manager’s misc. runnable stage

rhino.misc.thread_count

Number of threads used in Distributed Resource Manager’s misc. runnable stage

rhino.monitoring.clocksync.check_interval

Interval in milliseconds between cluster node clock sync checks

rhino.monitoring.clocksync.max_fail_count

Number of times a clock sync check must fail for a node before an alarm is raised

rhino.monitoring.clocksync.threshold

Threshold in milliseconds over which a cluster node will be reported as being out of clock synchronisation

rhino.rem.enabled

Enable the embedded Rhino Element Manager (SDK only).

rhino.skip_lifecycle_checks

Whether to skip check that prevents a resource adaptor from creating an activity in the STOPPING state.

rhino.slee_restricted_lifecycle

Restrict SLEE lifecycle changes depending on current node actual states.

rhino.state.convergence.interval_seconds

Interval between state convergence checks

rhino.state.convergence.retry_interval_millis

Interval between state convergence task retries

rhino.state.convergence.timeout_seconds

Time before a convergence task is considered to have timed out

rhino.tm.resources.queue_size

Queue size for Transaction Manager’s executor for blocking resource callbacks

rhino.tm.resources.thread_count

Number of threads used in Transaction Manager’s executor for blocking resource callbacks

rhino.tm.synchronization.queue_size

Queue size for Transaction Manager’s executor for synchronization callbacks

rhino.tm.synchronization.thread_count

Number of threads used in Transaction Manager’s executor for synchronization callbacks

rhino.verification.traceexceptions

Print exception stack traces for DU verification errors

sas.queue_full_interval

Interval in milliseconds to wait before clearing the queue-full alarm

savanna.receive_watchdog_interval

Maximum time a Savanna receive-thread may remain busy before it is considered stuck

snmp.filename.max-length

The maximum allowed filename length, used when generating MIBs during a MIB export

snmp.locktimeout

Default lock acquisition timeout for SNMP config update thread

snmp.max_config_update_retries

Maximum number of times the SNMP config update thread will attempt to obtain an exclusive lock in order to apply a given configuration update work item

snmp.name.max-length

The maximum number of characters allowed in object identifiers included in generated MIBs

snmp.name.uppercase

Determines if MIB identifiers and filenames will be forced to uppercase, used when generating MIBs during a MIB export

snmp.suppress_service_oid_conflict_alarms

Suppress the raising of alarms for duplicate parameter set type OID mappings when multiple services use the same base OID

staging.lifo_scan_interval

Interval between scans of the LIFO queue’s tail to check for expired items

staging.live_threads_fraction

Minimum percentage of staging threads that must remain alive to prevent a watchdog restart

timer.remote.http_timeout

HTTP response timeout in milliseconds for remote timer server requests

transaction.default_timeout

Default transaction age in milliseconds before a long-running transaction is aborted

transaction.timeout_check_interval

Interval in milliseconds between checks for transactions that need timing out

transaction.timeout_warn_percent

Transaction age (as a percentage of transaction timeout) to warn about long-running transactions at

watchdog.check_interval

Interval in milliseconds between watchdog checks

watchdog.max_pause_margin

Maximum delay in watchdog scheduling before a warning is displayed

watchdog.no_exit

Override the default behaviour of the watchdog to disable terminating the JVM. Do not use in a production deployment. An alarm will be raised when this mode is active.

watchdog.reverse_timewarp_margin

Maximum watchdog 'early wakeup' in milliseconds before a reverse-timewarp warning is displayed

watchdog.warn_interval

Minimum interval in milliseconds between displaying timewarp warnings

com.opencloud.savanna2.framework.GroupHeartbeat.interval

Description

Interval between sending per-group heartbeat messages

Valid values

time in milliseconds

Default value

5000

com.opencloud.savanna2.framework.GroupHeartbeat.loss_threshold

Description

Number of unreceived pending heartbeats before the group heartbeat watchdog condition triggers

Valid values

positive integer

Default value

10

eventrouter.transaction_timeout

Description

Transaction timeout for eventrouter transactions

Valid values

milliseconds

Default value

30000

logging.status.level

Description

Log level to set on the Log4j status logger

Valid values

ERROR,WARN,INFO,DEBUG,TRACE

Default value

ERROR

notifications.max_pending_notifications

Description

Maximum number of pending JMX notifications before notifications are dropped

Valid values

number of pending notifications, >= 0

Default value

500

notifications.notification_threads

Description

Number of JMX notification threads to run

Valid values

number of threads; <= 0 implies same-thread delivery

Default value

1

rhino.ah.gcthreads

Description

Set the max number of AH GC threads

Valid values

>2

Default value

2

rhino.ah.pessimistic.maxlocktimeout

Description

Set the maximum lock timeout for local activity handler

Valid values

time in milliseconds

Default value

15000

rhino.ah.replicating.maxlocktimeout

Description

Set the maximum lock timeout for replicated activities

Valid values

time in milliseconds

Default value

15000

rhino.ah.replicating.migrationtimeout

Description

Default ah migration timeout

Valid values

time in milliseconds

Default value

60000

rhino.audit.log_format

Description

Format of CSV management audit log

Valid values

2.4 (old format) or 2.5 (includes an extra namespace field)

Default value

2.5

rhino.bootup.locktimeout

Description

Distributed lock timeout used during node bootup

Valid values

Positive integer (seconds)

Default value

120

rhino.config.yaml_code_point_limit

Description

Set the maximum number of code points supported in a declarative YAML bundle for importing

Valid values

positive integers

Default value

10485760

rhino.er.ra_notifier_threads

Description

Number of resource adaptor entity notifier (callback) threads

Valid values

Positive integer

Default value

1

rhino.group_rmi.max_threads

Description

Maximum number of threads available to handle group RMI operations.

Valid values

Positive integer

Default value

10

rhino.halttimeout

Description

The timeout in milliseconds before a JVM shutdown is forcibly terminated with a Runtime.halt().

Valid values

time in milliseconds

Default value

60000

rhino.lenient_config_property_validation

Description

Relax resource adaptor entity config property validation.

Extended Description

When set to false (the default), Rhino will reject an attempt to create a resource adaptor entity or update the config properties of an existing resource adaptor entity if the config properties provided by the client contain unrecognised config property names, i.e. names not defined by the resource adaptor or any known vendor-specific properties.

If this system property is set to true, this validation check is skipped and any config property name will be accepted.

Valid values

true,false

Default value

false

rhino.license.default

Description

Location of default Rhino license file

Valid values

absolute or relative file path

Default value

../rhino.license (rhino-sdk.license for Rhino SDK)

rhino.lock.lockmanager.defaultlocktimeout

Description

Set the default timeout for lock acquisitions

Valid values

time in milliseconds

Default value

60000

rhino.mbean.boot.timeout

Description

Timeout used for write operations during exceptionally long SLEE starts (during Rhino boot)

Valid values

Positive integer (seconds)

Default value

120

rhino.misc.queue_size

Description

Queue size for Distributed Resource Manager’s misc. runnable stage

Valid values

Positive integer

Default value

100

rhino.misc.thread_count

Description

Number of threads used in Distributed Resource Manager’s misc. runnable stage

Valid values

Positive integer

Default value

3

rhino.monitoring.clocksync.check_interval

Description

Interval in milliseconds between cluster node clock sync checks

Valid values

time in milliseconds

Default value

10000

rhino.monitoring.clocksync.max_fail_count

Description

Number of times a clock sync check must fail for a node before an alarm is raised

Valid values

Positive integer

Default value

5

rhino.monitoring.clocksync.threshold

Description

Threshold in milliseconds over which a cluster node will be reported as being out of clock synchronisation

Valid values

time in milliseconds

Default value

2000

rhino.rem.enabled

Description

Enable the embedded Rhino Element Manager (SDK only).

Valid values

true,false

Default value

true

rhino.skip_lifecycle_checks

Description

Whether to skip check that prevents a resource adaptor from creating an activity in the STOPPING state.

Extended Description

This property should be set to false (the default) to enforce the restriction on creating activities in the STOPPING state.

When set to true, resource adaptors should check the state before creating an activity, to avoid a situation where a resource adaptor entity never deactivates because new activities are being created.

See the documentation reference for more details.

Valid values

true,false

Default value

false

Reference

rhino.slee_restricted_lifecycle

Description

Restrict SLEE lifecycle changes depending on current node actual states.

Extended Description

Normally the start and stop SLEE lifecycle operations only consider the current node desired state(s) when determining if the operation is valid. When this property is set to true, Rhino will revert to legacy behaviour and also consider current node actual state(s) as well. The legacy behaviour, for example, doesn’t allow a node to be started if it is currently in the STOPPING state.

Valid values

true,false

Default value

false

rhino.state.convergence.interval_seconds

Description

Interval between state convergence checks

Valid values

an integer specifying the delay from the end of one scheduled convergence check to the start of the next

Default value

30

rhino.state.convergence.retry_interval_millis

Description

Interval between state convergence task retries

Valid values

an integer specifying the delay from the completion of a batch of state convergence operations after which to retry ones that did not meet the required preconditions

Default value

1000

rhino.state.convergence.timeout_seconds

Description

Time before a convergence task is considered to have timed out

Valid values

An integer specifying the interval in seconds from the creation of a convergence task to when it can be considered to have timed out and an alarm raised

Default value

300

rhino.tm.resources.queue_size

Description

Queue size for Transaction Manager’s executor for blocking resource callbacks

Valid values

Positive integer

Default value

100

rhino.tm.resources.thread_count

Description

Number of threads used in Transaction Manager’s executor for blocking resource callbacks

Valid values

Positive integer

Default value

2

rhino.tm.synchronization.queue_size

Description

Queue size for Transaction Manager’s executor for synchronization callbacks

Valid values

Positive integer

Default value

500

rhino.tm.synchronization.thread_count

Description

Number of threads used in Transaction Manager’s executor for synchronization callbacks

Valid values

Positive integer

Default value

2

rhino.verification.traceexceptions

Description

Print exception stack traces for DU verification errors

Valid values

true, false

Default value

false

sas.queue_full_interval

Description

Interval in milliseconds to wait before clearing the queue-full alarm

Valid values

positive integer

Default value

5000

savanna.receive_watchdog_interval

Description

Maximum time a Savanna receive-thread may remain busy before it is considered stuck

Valid values

time in milliseconds

Default value

5000

snmp.filename.max-length

Description

The maximum allowed filename length, used when generating MIBs during a MIB export

Valid values

Any positive integer. Any value less than or equal to zero will disable the length limit.

Default value

255

snmp.locktimeout

Description

Default lock acquisition timeout for SNMP config update thread

Valid values

time in milliseconds

Default value

30000

snmp.max_config_update_retries

Description

Maximum number of times the SNMP config update thread will attempt to obtain an exclusive lock in order to apply a given configuration update work item

Valid values

Positive integer

Default value

10

snmp.name.max-length

Description

The maximum number of characters allowed in object identifiers included in generated MIBs

Valid values

Any positive integer. Any value less than or equal to zero will disable the length limit.

Default value

127

snmp.name.uppercase

Description

Determines if MIB identifiers and filenames will be forced to uppercase, used when generating MIBs during a MIB export

Valid values

true, false

Default value

false

snmp.suppress_service_oid_conflict_alarms

Description

Suppress the raising of alarms for duplicate parameter set type OID mappings when multiple services use the same base OID

Valid values

true,false

Default value

false

staging.lifo_scan_interval

Description

Interval between scans of the LIFO queue’s tail to check for expired items

Valid values

time in milliseconds

Default value

1000

staging.live_threads_fraction

Description

Minimum percentage of staging threads that must remain alive to prevent a watchdog restart

Valid values

0 - 100

Default value

25

timer.remote.http_timeout

Description

HTTP response timeout in milliseconds for remote timer server requests

Valid values

Default value

2000

transaction.default_timeout

Description

Default transaction age in milliseconds before a long-running transaction is aborted

Valid values

Default value

180000

transaction.timeout_check_interval

Description

Interval in milliseconds between checks for transactions that need timing out

Valid values

time in milliseconds

Default value

10000

transaction.timeout_warn_percent

Description

Transaction age (as a percentage of transaction timeout) to warn about long-running transactions at

Valid values

0 - 100

Default value

75

watchdog.check_interval

Description

Interval in milliseconds between watchdog checks

Valid values

positive integer

Default value

1000

watchdog.max_pause_margin

Description

Maximum delay in watchdog scheduling before a warning is displayed

Valid values

Default value

1000

watchdog.no_exit

Description

Override the default behaviour of the watchdog to disable terminating the JVM. Do not use in a production deployment. An alarm will be raised when this mode is active.

Valid values

true,false

Default value

false

watchdog.reverse_timewarp_margin

Description

Maximum watchdog 'early wakeup' in milliseconds before a reverse-timewarp warning is displayed

Valid values

Default value

500

watchdog.warn_interval

Description

Minimum interval in milliseconds between displaying timewarp warnings

Valid values

Default value

15000

Changes in Rhino 2.6

Rhino 2.6 has removed some system properties previously available for extended configuration. These include:

Property                    Replaced by
--------------------------  -------------------------------
rhino.tracer.defaultlevel   setloglevel trace <trace_level>

Application-State Maintenance

As well as an overview of application-state maintenance, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:

rhino-console command(s)     MBean → Operation(s)
---------------------------  ------------------------------------------------
(find Housekeeping MBeans)   Rhino Housekeeping → getClusterHousekeeping,
                             Rhino Housekeeping → getNodeHousekeeping
findactivities               Housekeeping → getActivities
getactivityinfo              Housekeeping → getActivityInfo
removeactivity               Housekeeping → removeActivity
removeallactivities          Rhino Housekeeping → markAllActivitiesForRemoval
findsbbs                     Housekeeping → getSbbs
getsbbinfo                   Housekeeping → getSbbInfo
removesbb                    Housekeeping → removeSbb
removeallsbbs                Rhino Housekeeping → removeAllSbbs
findtimers                   Housekeeping → getTimers
findremotetimers             Housekeeping → getRemoteTimers
gettimerinfo                 Housekeeping → getTimerInfo
getremotetimerinfo           Housekeeping → getRemoteTimerInfo
canceltimer                  Housekeeping → cancelTimer
cancelremotetimer            Housekeeping → cancelRemoteTimer
findactivitybindings         Housekeeping → getBoundActivities
removeactivitybinding        Housekeeping → removeActivityBinding
getenventries                Deployment → getEnvEntries
setenventries                Deployment → setEnvEntries
getsecuritypolicy            Deployment → getSecurityPolicy
setsecuritypolicy            Deployment → setSecurityPolicy
initiateactivitycleanup      Housekeeping → initiatecleanup
initiateglobalcleanup        Housekeeping → initiateglobalcleanup

About Application-State Maintenance

During normal operation, Rhino removes SBB entities when they are no longer needed to process events on the activities they are attached to — usually when all those activities have ended.

Sometimes, however, the normal SBB lifecycle is interrupted and obsolete entities remain. For example:

  • An SBB might be attached to an activity that didn’t end correctly, due to a problem in the resource adaptor entity that created it.

  • The sbbRemove method might throw an exception.

Unexpected problems such as these, with deployed resource adaptors or services, may cause resource leaks. Rhino provides an administration interface, the Node Housekeeping MBean, which lets you find and remove stale or problematic:

  • activities

  • SBB entities

  • activity context name bindings

  • timers.

The following topics include procedures for:

Finding Housekeeping MBeans

To find Node or Cluster Housekeeping MBeans when using MBean operations directly, use the Rhino Housekeeping MBean, as follows.

Note
Cluster vs Node Housekeeping

Rhino includes two types of Housekeeping MBean, which provide the same set of functions for either an entire cluster or a single node:

  • Cluster Housekeeping MBeans operate on all event-router nodes in the primary component.

  • Node Housekeeping MBeans operate on a single cluster node only.

Many of the housekeeping commands available in rhino-console accept a -node parameter, which lets you execute the housekeeping command for a specified node, using the relevant Node Housekeeping MBean. Without this parameter, the command executes for the cluster, using the Cluster Housekeeping MBean.
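For example, the same housekeeping command can be issued cluster-wide or against a single node:

[Rhino@localhost (#1)] findactivities
[Rhino@localhost (#2)] findactivities -node 101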

MBean operation: getClusterHousekeeping

MBean

Rhino operation

public ObjectName getClusterHousekeeping()
  throws ManagementException;

This operation returns the JMX Object Name of a Cluster Housekeeping MBean.

MBean operation: getNodeHousekeeping

MBean

Rhino operation

public ObjectName getNodeHousekeeping(int)
  throws InvalidArgumentException, ManagementException;

This operation returns the JMX Object Name of a Node Housekeeping MBean for the given node.

Note Both the Cluster Housekeeping MBean and Node Housekeeping MBean expose the NodeHousekeepingMBean interface.

Activities

Finding Activities

To find activities in the SLEE, use the following rhino-console command or related MBean operations.

Console command: findactivities

Command

findactivities [-maxpernode maxrows] [-node nodeid] [-removed|-all] [-ra
<ra-entity>] [-created-after date|time|offset] [-created-before
date|time|offset] [-updated-after date|time|offset] [-updated-before
date|time|offset]
  Description
    Find activities.  Use -removed to list only activities removed but not garbage
    collected.  Use -all to list all active and removed activities combined.

Options

Option Abbreviation Description
 -maxpernode <maxrows>

               

Retrieve at most this many activities from each event-router node (default is 100). Can be used to limit Rhino’s load when processing the request.

 -node <node-id>

               

Only display activities owned by the given node.

 -removed

               

Display removed but not yet garbage collected activities instead of active activities.

 -all

               

Display both active and removed but not yet garbage collected activities.

 -ra <ra-entity>

               

Only display activities created by the named resource adaptor entity.

 -created-after <time>
 -ca

Only display activities created after the given time.

 -created-before <time>
 -cb

Only display activities created before the given time.

 -updated-after <time>
 -ua

Only display activities updated after the given time.

 -updated-before <time>
 -ub

Only display activities updated before the given time.

Times for the above options may be entered in either absolute or relative format:

Type Format Description Examples

Absolute

 [[yyyy/]MM/dd] [HH:mm[:ss]]

yyyy = the year
MM = the month (1-12)
dd = the date of the month (1-31)
HH = the hour (0-23)
mm = the minute (0-59)
ss = the second (0-59)

 2008/04/15
04/15
10:57
10:57:35
2008/04/15 10:57:35

Relative

 (<nn><d|h|m|s>)*

nn = a number
d = days
h = hours
m = minutes
s = seconds

 1d
1h
1m
30s
1h30m
7m30s
1d12h5m30s
Note Rhino assumes relative time format is in the past. For example, 1h30m means 1 hour and 30 minutes ago.

Examples

To display all activities in the SLEE:
$ ./rhino-console findactivities
pkey                      attach-count   handle                                 namespace   node   ra-entity    ref-count   replication-mode   submission-time     update-time
------------------------  -------------  -------------------------------------  ----------  -----  -----------  ----------  -----------------  ------------------  ------------------
1.101:219852641476607.1               0  ServiceActivity[ServiceID[name=Simple                101   Rhino Inte           0            SAVANNA   20180613 18:35:32   20180613 18:35:32
C.101:219852641476608.0               1    SAH[switchID=1528911304,connectionI                101       simple           3    KEY_VALUE_STORE   20180613 18:35:47   20180613 18:35:47
C.101:219852641476609.0               1    SAH[switchID=1528911304,connectionI                101       simple           3    KEY_VALUE_STORE   20180613 18:35:48   20180613 18:35:48
C.102:219852644015615.0               1    SAH[switchID=1528911304,connectionI                102       simple           3    KEY_VALUE_STORE   20180613 18:35:48   20180613 18:35:48
C.102:219852644015616.0               1    SAH[switchID=1528911304,connectionI                102       simple           3    KEY_VALUE_STORE   20180613 18:35:49   20180613 18:35:49
C.103:219852646067199.0               1    SAH[switchID=1528911304,connectionI                103       simple           3    KEY_VALUE_STORE   20180613 18:35:48   20180613 18:35:48
C.103:219852646067200.0               1    SAH[switchID=1528911304,connectionI                103       simple           3    KEY_VALUE_STORE   20180613 18:35:49   20180613 18:35:49
7 rows
Finding stale activities

A common search is for stale activities. Rhino performs a periodic activity-liveness scan, checking all active activities and ending those detected as stale. Sometimes, however, a failure in the network or inside a resource adaptor might prevent the liveness scan from detecting and ending some activities. In this case, the administrator has to locate and end the stale activities manually.

To narrow the search:

To search for activities belonging to node 101 (replicated or non-replicated activities owned by 101) that are more than one hour old, you would use the arguments -node 101 and -cb 1h:

$ ./rhino-console findactivities -node 101 -cb 1h
pkey                      attach-count   handle                                 namespace   node   ra-entity    ref-count   replication-mode   submission-time     update-time
------------------------  -------------  -------------------------------------  ----------  -----  -----------  ----------  -----------------  ------------------  ------------------
C.101:219852641476608.0               1    SAH[switchID=1528911304,connectionI                101       simple           3    KEY_VALUE_STORE   20180613 18:35:47   20180613 18:35:47
C.101:219852641476609.0               1    SAH[switchID=1528911304,connectionI                101       simple           3    KEY_VALUE_STORE   20180613 18:35:48   20180613 18:35:48
2 rows

(This example returned two activities.)

MBean operation: getActivities

MBean

Rhino operations

Get summary information for all activities
public TabularData getActivities(int maxPerNode, boolean includeRemoved)
    throws ManagementException;

This operation returns tabular data summarising all activities.


Get summary information for activities belonging to a resource adaptor entity
public TabularData getActivities(int maxPerNode, String entityName, boolean includeRemoved)
    throws UnrecognizedResourceAdaptorEntityException, ManagementException;

This operation returns tabular data summarising the activities owned by the given resource adaptor entity.


Get summary information for activities using time-based criteria
public TabularData getActivities(int maxPerNode, String entityName, long createdAfter, long createdBefore, long updatedAfter, long updatedBefore, boolean includeRemoved)
    throws UnrecognizedResourceAdaptorEntityException, ManagementException;

This operation returns tabular data summarising the activities owned by the given resource adaptor entity using the time-based criteria specified (in milliseconds, as used by java.util.Date, or the value 0 to ignore a particular parameter).


Get summary information only for removed but not yet garbage collected activities using time-based criteria
public TabularData getRemovedActivities(int maxPerNode, String entityName, long createdAfter, long createdBefore, long updatedAfter, long updatedBefore)
    throws ManagementException, UnrecognizedResourceAdaptorEntityException;

This operation returns tabular data summarising the removed activities owned by the given resource adaptor entity using the time-based criteria specified (in milliseconds, as used by java.util.Date, or the value 0 to ignore a particular parameter).
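
As a sketch of consuming these results from a JMX client, assuming conn is an MBeanServerConnection and housekeeping is the ObjectName of a Cluster or Node Housekeeping MBean (obtained as described under Finding Housekeeping MBeans), the two-argument overload can be invoked and its rows iterated as follows. The pkey item name is assumed from the console output above; the javadoc is authoritative.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.TabularData;

public class ListActivities {

    public static void printActivityKeys(MBeanServerConnection conn,
                                         ObjectName housekeeping)
            throws Exception {
        // getActivities(int maxPerNode, boolean includeRemoved):
        // at most 100 rows per event-router node, active activities only
        TabularData activities = (TabularData) conn.invoke(housekeeping,
                "getActivities", new Object[]{100, false},
                new String[]{"int", "boolean"});
        for (Object row : activities.values()) {
            CompositeData activity = (CompositeData) row;
            // "pkey" is an assumed item name taken from the console output
            System.out.println(activity.get("pkey"));
        }
    }
}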


Tip

Results depend on the Housekeeping MBean that invokes the operation:

  • Cluster Housekeeping MBean — returns results from all event-router nodes in the primary component

  • Node Housekeeping MBean — returns results from that node only.

Note For a description of the format of the tabular data that these operations return, see the javadoc.

Inspecting Activities

To get detailed information about an activity, use the following rhino-console command or related MBean operation.

Console command: getactivityinfo

Command

getactivityinfo [-non-resident] [-v] <activity pkey>*
  Description
    Get activity information [-v = verbose].  Use -non-resident to get activity
    information on activities not currently owned by any cluster node.

Example

To display activity information for activity C.101:219852641476611.0:

$ ./rhino-console getactivityinfo C.101:219852641476611.0
pkey             : C.101:219852641476611.0
activity         : SAH[switchID=1528911304,connectionID=7,address=1]
creating-gen     : 25
ending           : false
events-submitted : 2
flags            : 0x0
handle           : SAH[switchID=1528911304,connectionID=7,address=1]
head-event       :
last-event-time  : 20180614 14:40:55
namespace        :
node             : 101
ra-entity        : simple
replication-mode : KEY_VALUE_STORE
submission-time  : 20180614 14:40:25
submitting-node  : 101
update-time      : 20180614 14:40:55
event-queue      : no rows
generations      :
      [27] refcount      : 0
      [27] removed       : false
      [27] attached-sbbs : no rows
      [27] timers        : no rows

This command returns a snapshot of the activity’s state at the time you execute it. Some values (such as fields pkey, activity, creating-gen, flags, handle, namespace, ra-entity, replication-mode, submission-time and submitting-node) are fixed for the lifetime of the activity. Others change as events on the activity are processed.

Tip See Activity Information Fields for a description of the fields getactivityinfo returns.

MBean operation: getActivityInfo

MBean

Rhino operation

public CompositeData getActivityInfo(String activityPKey, boolean showAllGenerations, boolean nonResident)
    throws InvalidPKeyException, UnknownActivityException, ManagementException;

This operation returns composite data with detailed information on the given activity.

Note For a description of the format of the composite data that this operation returns, see the javadoc.
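
A minimal JMX sketch of the same call, under the same assumptions as the earlier examples (conn, housekeeping); the activity primary key is the one from the console example above, and the replication-mode item name is assumed from the console output:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

public class InspectActivity {

    public static void printReplicationMode(MBeanServerConnection conn,
                                            ObjectName housekeeping)
            throws Exception {
        // getActivityInfo(String activityPKey, boolean showAllGenerations,
        //                 boolean nonResident)
        CompositeData info = (CompositeData) conn.invoke(housekeeping,
                "getActivityInfo",
                new Object[]{"C.101:219852641476611.0", true, false},
                new String[]{"java.lang.String", "boolean", "boolean"});
        // "replication-mode" is an assumed item name from the console output
        System.out.println(info.get("replication-mode"));
    }
}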

Activity Information Fields

The getactivityinfo console command displays information about the activity itself, its event queue, its stored generations, attached SBB entities, and timers set on the activity, as described below.

Activity information

The getactivityinfo console command displays the following values about an activity:

Field Description
 pkey

The activity’s primary key. Uniquely identifies this activity within Rhino.

 activity

The activity object, in string form. Its exact content is resource adaptor dependent (and may or may not contain useful human-readable information).

 creating-gen

The database generation in which the activity was created.

 ending

Boolean flag indicating if the activity is ending.

 events-submitted

The number of events that have been submitted for processing on the activity.

 flags

Hexadecimal value of the flags the activity was created with (if any).

 handle

The activity handle assigned by the activity’s resource adaptor entity, in string form. The exact content is resource adaptor dependent (and may or may not contain useful human-readable information).

 head-event

The event at the head of the activity’s event queue (the next event to be processed on the activity).

 last-event-time

When the most recent event was submitted on the activity.

 namespace

The namespace that the activity resides in.

 node

The Rhino cluster node that currently owns the activity. If this value differs from the submitting-node field, the activity must be a replicated activity that was reassigned to this node.

If this value is negative, then the activity is currently non-resident. This means that the state for the activity was replicated to an external key/value store, the node that the activity was previously assigned to has failed, and the activity has not yet been adopted by any remaining cluster node. The absolute value of the node ID represents the node that the activity was last assigned to.

 ra-entity

The resource adaptor entity that created this activity.

 replication-mode

The method of activity replication. This field will have one of the following values:

  • NONE — the activity is not replicated

  • SAVANNA — the activity is replicated using the traditional savanna framework

  • KEY_VALUE_STORE — the activity is replicated using an external key/value store.

 submission-time

When the activity was created.

 submitting-node

The Rhino cluster node that created the activity.

 update-time

When the activity was last updated (when the most recent database generation record was created). Useful in some situations for evaluating whether an activity is still live.

A list of events queued for processing on the activity.

A list of generational information stored in the database for the activity. If getactivityinfo includes the -v option, all generations display (otherwise just the most recent displays).

Event-queue information

The getactivityinfo console command displays the following values for each event in an activity’s event queue:

Field Description
 position

The position of the event in the queue.

 event-type

The event-type component identifier of the event.

 event

The event object, in string form. Its exact content is resource adaptor dependent (and may or may not contain useful human-readable information).

 flags

Hexadecimal value of the flags the event was fired with (if any).

Generational information

The getactivityinfo console command displays values for the following fields, in an activity’s generational information:

Field Description
 generation

Not displayed as a field but included in square brackets before the rest of the generational information, for example: [76343].

 refcount

The number of references made to the activity by the Timer Facility and the Activity Context Naming Facility.

 removed

Boolean flag indicating if the activity no longer exists in the SLEE. Only true if the activity has ended but has not yet been garbage collected.

A list of SBBs attached to the activity.

A list of Timer Facility timers set on the activity.

Attached-SBB information

The getactivityinfo console command displays values for the following fields, for each SBB entity attached to an activity:

Field Description
 pkey

The primary key of the SBB entity.

 namespace

The namespace the attached SBB entity resides in. This will always be equal to the namespace in which the activity resides.

 sbb-component-id

The component identifier of the SBB for the SBB entity.

 service-component-id

The component identifier of the service the SBB belongs to.

Activity-timer information

The getactivityinfo console command displays values for the following fields, for each timer active on an activity:

Field Description
 pkey

The primary key of the timer.

 namespace

The namespace that the timer exists in. This will always be equal to the namespace in which the activity resides.

 activity-pkey

The primary key of the activity the timer is set on.

 submission-time

The time the timer was initially set.

 period

The timer period (for periodic timers).

 repetitions

The number of repetitions the timer will fire before it expires.

 preserve-missed

Boolean flag indicating if missed timers should still fire an event into the SLEE.

 replicated

Boolean flag indicating whether or not the timer is replicated. This flag will only be set to true if the activity the timer was set on has a replication mode of SAVANNA.

Removing Activities

To forcefully remove an activity, use the following rhino-console command or related MBean operation.

Warning
Consult the spec before ending an activity

The JAIN SLEE 1.1 specification provides detailed rules for ending no-longer-required activities.

Console command: removeactivity

Command

removeactivity [-non-resident] <activity pkey>*
  Description
    Remove activities.  Use -non-resident to remove activities not currently owned
    by any cluster node.

Example

To remove the activities with the primary keys C.101:219852641476611.0 and C.101:219852641476612.0

$ ./rhino-console removeactivity C.101:219852641476611.0 C.101:219852641476612.0
2 activities removed

MBean operation: removeActivity

MBean

Rhino operation

public void removeActivity(String activityPKey, boolean nonResident)
    throws InvalidPKeyException, UnknownActivityException,
            ManagementException;

This operation removes the activity with the given primary key. The nonResident argument must be true in order to remove a non-resident activity.
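
A minimal JMX sketch, assuming conn and housekeeping as in the earlier examples; the nonResident argument is false here, and must be true only when removing a non-resident activity:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class RemoveActivity {

    public static void remove(MBeanServerConnection conn,
                              ObjectName housekeeping,
                              String activityPKey)
            throws Exception {
        // removeActivity(String activityPKey, boolean nonResident);
        // pass true for nonResident only when removing a non-resident activity
        conn.invoke(housekeeping, "removeActivity",
                new Object[]{activityPKey, false},
                new String[]{"java.lang.String", "boolean"});
    }
}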

Removing All Activities

To mark all activities of a resource adaptor entity for removal, use the following rhino-console command or related MBean operation.

Warning
Use extreme care when removing forcibly

Occasionally an administrator will want to remove all activities belonging to a resource adaptor entity. Typically, this would be to deactivate a resource adaptor when upgrading or reconfiguring. Under normal conditions, these actions would be performed automatically, by allowing existing activities to drain over time. Rhino provides the following housekeeping commands to forcibly speed up the draining process, although these should be used with extreme care on production systems — they will interrupt service for any existing network activities belonging to the resource adaptor entity.

Console command: removeallactivities

Command

removeallactivities <ra-entity> [-nodes node1,node2,...]
  Description
    Remove all activities belonging to a resource adaptor entity in the Stopping
    state (on the specified nodes)

Example

To remove all activities owned by the resource adaptor entity called sipra on nodes 101 and 102:

$ ./rhino-console removeallactivities sipra -nodes 101,102
Activities marked for removal on node(s) [101,102]

MBean operation: markAllActivitiesForRemoval

MBean

Rhino operation

public void markAllActivitiesForRemoval(String entityName, int[] nodeIDs)
    throws NullPointerException, UnrecognizedResourceAdaptorEntityException,
          InvalidStateException, ManagementException;

This operation marks all the activities owned by the given resource adaptor entity on the given nodes for removal.

Warning
Resource adaptor entity (or SLEE) must be STOPPING

As a safeguard, this command (or MBean operation) cannot be run unless the specified resource adaptor entity, or the SLEE, is in the STOPPING state on the specified nodes. For convenience in asymmetric cluster configurations, it may also be run against nodes where the resource adaptor entity is INACTIVE (or the SLEE is STOPPED), but it has no effect on such nodes, since no activities exist for the resource adaptor entity there.
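
Given that prerequisite, the operation can be invoked over JMX as sketched below, assuming conn and the Rhino Housekeeping MBean’s ObjectName as in the earlier examples. Note that the JMX signature string for an int array is "[I".

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class MarkActivitiesForRemoval {

    public static void markAll(MBeanServerConnection conn,
                               ObjectName rhinoHousekeeping)
            throws Exception {
        // markAllActivitiesForRemoval(String entityName, int[] nodeIDs);
        // the RA entity (or the SLEE) must be STOPPING on the target nodes
        conn.invoke(rhinoHousekeeping, "markAllActivitiesForRemoval",
                new Object[]{"sipra", new int[]{101, 102}},
                new String[]{"java.lang.String", "[I"});
    }
}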

Note
Why "mark" (instead of just ending)?

This command does not remove all activities immediately, because doing so might overload the system with too many activity-end events at once. Instead, removeallactivities marks targeted activities as ENDING. Marked activities end over the course of the next pass of the activity liveness scan (typically, with the default settings, each scan takes three to five minutes).

Timers

Finding Timers

To find timers in the SLEE, use the following rhino-console command or related MBean operations.

Console command: findtimers

Command

findtimers [-maxpernode maxrows] [-node nodeid] [-created-after
date|time|offset] [-created-before date|time|offset]
  Description
    Find timers

Options

Option Abbreviation Description
 -maxpernode <maxrows>

               

Retrieve at most this many timers from each event-router node (default is 100). Can be used to limit Rhino’s load when processing the request.

 -node <node-id>

               

Only display timers owned by the given node.

 -ra <ra-entity>

               

Only display timers set on activities belonging to the named resource adaptor entity.

 -created-after <time>
 -ca

Only display timers created after the given time.

 -created-before <time>
 -cb

Only display timers created before the given time.

Times for the above options may be entered in either absolute or relative format:

Type Format Description Examples

Absolute

 [[yyyy/]MM/dd] [HH:mm[:ss]]

yyyy = the year
MM = the month (1-12)
dd = the date of the month (1-31)
HH = the hour (0-23)
mm = the minute (0-59)
ss = the second (0-59)

 2008/04/15
04/15
10:57
10:57:35
2008/04/15 10:57:35

Relative

 (<nn><d|h|m|s>)*

nn = a number
d = days
h = hours
m = minutes
s = seconds

 1d
1h
1m
30s
1h30m
7m30s
1d12h5m30s
Note Rhino assumes relative time format is in the past. For example, 1h30m means 1 hour and 30 minutes ago.

Examples

To display all timers in the SLEE:
$ ./rhino-console findtimers
pkey                          activity-pkey             namespace   period                preserve-missed   remote-timer-pkey         repetitions   replicated   submission-time
----------------------------  ------------------------  ----------  --------------------  ----------------  ------------------------  ------------  -----------  ------------------
3.103:244326037547631.0.420    3.103:244326037547631.0               9223372036854775807              Last   102/102:244326037604633             1        false   20191218 23:29:40
3.103:244326037547664.0.43c    3.103:244326037547664.0                            290000              Last   102/102:244326037604669           307        false   20191218 23:29:40
3.103:244326037547664.0.43d    3.103:244326037547664.0               9223372036854775807              Last   102/102:244326037604668             1        false   20191218 23:29:40
3.103:244326037547664.0.43e    3.103:244326037547664.0               9223372036854775807              Last   102/102:244326037604667             1        false   20191218 23:29:40
3.103:244326037547668.0.443    3.103:244326037547668.0                            290000              Last   101/101:244326026950453           307        false   20191218 23:29:40
3.103:244326037547668.0.444    3.103:244326037547668.0               9223372036854775807              Last   101/101:244326026950454             1        false   20191218 23:29:40
3.103:244326037547668.0.445    3.103:244326037547668.0               9223372036854775807              Last   102/102:244326037604673             1        false   20191218 23:29:40
7 rows

MBean operation: getTimers

MBean

Rhino operations

Get summary information for all timers
public TabularData getTimers(int maxPerNode)
    throws ManagementException;

This operation returns tabular data summarising all timers.


Get summary information for timers set on activities belonging to a resource adaptor entity using time-based criteria
public TabularData getTimers(int maxPerNode, String raEntity, long createdAfter, long createdBefore)
    throws ManagementException;

This operation returns tabular data summarising the timers set on activities belonging to the given resource adaptor entity using the time-based criteria specified (in milliseconds, as used by java.util.Date, or the value 0 to ignore a particular time-based parameter).
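
Because the time-based parameters are epoch milliseconds, a filter equivalent to the console’s -cb 1h is simple arithmetic on System.currentTimeMillis(). A sketch, assuming conn and housekeeping as in the earlier examples and a resource adaptor entity named sipra:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.TabularData;

public class ListOldTimers {

    public static TabularData timersOlderThanOneHour(MBeanServerConnection conn,
                                                     ObjectName housekeeping)
            throws Exception {
        // Equivalent to: findtimers -ra sipra -cb 1h
        long oneHourAgo = System.currentTimeMillis() - 60L * 60L * 1000L;
        // getTimers(int maxPerNode, String raEntity,
        //           long createdAfter, long createdBefore);
        // 0 disables the createdAfter criterion
        return (TabularData) conn.invoke(housekeeping, "getTimers",
                new Object[]{100, "sipra", 0L, oneHourAgo},
                new String[]{"int", "java.lang.String", "long", "long"});
    }
}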


Tip

Results depend on the Housekeeping MBean that invokes the operation:

  • Cluster Housekeeping MBean — returns results from all event-router nodes in the primary component

  • Node Housekeeping MBean — returns results from that node only.

Note For a description of the format of the tabular data that these operations return, see the javadoc.

Finding Remote Timers

To find remotely-armed timers in the SLEE, use the following rhino-console command or related MBean operations.

Console command: findremotetimers

Command

findremotetimers [-maxpernode maxrows] [-node nodeid] [-created-after
date|time|offset] [-created-before date|time|offset]
  Description
    Find remote timers

Options

Option Abbreviation Description
 -maxpernode <maxrows>

               

Retrieve at most this many remote timers from each event-router node (default is 100). Can be used to limit Rhino’s load when processing the request.

 -node <node-id>

               

Only display remote timers armed on the given node.

 -ra <ra-entity>

               

Only display remote timers set on activities belonging to the named resource adaptor entity.

 -created-after <time>
 -ca

Only display remote timers created after the given time.

 -created-before <time>
 -cb

Only display remote timers created before the given time.

Times for the above options may be entered in either absolute or relative format:

Type Format Description Examples

Absolute

 [[yyyy/]MM/dd] [HH:mm[:ss]]

yyyy = the year
MM = the month (1-12)
dd = the date of the month (1-31)
HH = the hour (0-23)
mm = the minute (0-59)
ss = the second (0-59)

 2008/04/15
04/15
10:57
10:57:35
2008/04/15 10:57:35

Relative

 (<nn><d|h|m|s>)*

nn = a number
d = days
h = hours
m = minutes
s = seconds

 1d
1h
1m
30s
1h30m
7m30s
1d12h5m30s
Note Rhino assumes relative time format is in the past. For example, 1h30m means 1 hour and 30 minutes ago.

Examples

To display all remote timers in the SLEE:
$ ./rhino-console findremotetimers
pkey                      activity-pkey             callback-node   expiry-time         interval   namespace   period   submission-time
------------------------  ------------------------  --------------  ------------------  ---------  ----------  -------  ------------------
101/101:244326026949631    F.103:244326037546522.0             103   20191218 23:30:23      65000                    0   20191218 23:29:18
101/101:244326026949632    F.103:244326037546502.0             103   20191218 23:39:23     605000                    0   20191218 23:29:18
101/101:244326026949633    F.103:244326037546538.0             103   20191218 23:39:23     605000                    0   20191218 23:29:18
101/101:244326026949634    F.103:244326037546538.0             103   20191218 23:30:23      65000                    0   20191218 23:29:18
101/101:244326026949635    F.103:244326037546517.0             103   20191218 23:30:23      65000                    0   20191218 23:29:18
101/101:244326026949636    F.103:244326037546503.0             103   20191218 23:30:23      65000                    0   20191218 23:29:18
101/101:244326026949637    F.103:244326037546506.0             103   20191218 23:30:23      65000                    0   20191218 23:29:18
7 rows

MBean operation: getRemoteTimers

MBean

Rhino operations

Get summary information for all remote timers
public TabularData getRemoteTimers(int maxPerNode)
    throws ManagementException;

This operation returns tabular data summarising all remote timers.


Get summary information for remote timers set on activities belonging to a resource adaptor entity using time-based criteria
public TabularData getRemoteTimers(int maxPerNode, String raEntity, long createdAfter, long createdBefore)
    throws ManagementException;

This operation returns tabular data summarising the remote timers set on activities belonging to the given resource adaptor entity using the time-based criteria specified (in milliseconds, as used by java.util.Date, or the value 0 to ignore a particular time-based parameter).


Tip

Results depend on the Housekeeping MBean that invokes the operation:

  • Cluster Housekeeping MBean — returns results from all event-router nodes in the primary component

  • Node Housekeeping MBean — returns results from that node only.

Note For a description of the format of the tabular data that these operations return, see the javadoc.

Inspecting Timers

To get detailed information about a timer, use the following rhino-console command or related MBean operation.

Console command: gettimerinfo

Command

gettimerinfo <timer id>*
  Description
    Get timer info

Example

To display information for timer 10.103:244325031551168.0.227d7:

$ ./rhino-console gettimerinfo 10.103:244325031551168.0.227d7
activity-pkey     : 10.103:244325031551168.0
pkey              : 10.103:244325031551168.0.227d7
remote-timer-pkey :
namespace         :
next-firing-time  : 20191218 23:01:32
node              : 103
period            : 9223372036854775807
preserve-missed   : Last
ra-entity         : sip-sis-ra
remaining         : 1
repetitions       : 1
replicated        : false
submission-time   : 20191218 23:00:27

This command returns a snapshot of the timer’s state at the time you execute it. Some values (such as fields pkey, activity-pkey, remote-timer-pkey, period, namespace, ra-entity, replicated, submission-time, and preserve-missed) are fixed for the lifetime of the timer. Other values change as the timer fires.

Tip See Timer Information Fields for a description of the fields gettimerinfo returns.

MBean operation: getTimerInfo

MBean

Rhino operation

public CompositeData getTimerInfo(String timerID)
    throws InvalidPKeyException, UnknownTimerException, ManagementException;

This operation returns composite data with detailed information on the given timer.

Note For a description of the format of the composite data that this operation returns, see the javadoc.

Timer Information Fields

The gettimerinfo console command displays the following values about a timer:

Field Description
 pkey

The primary key of the timer. This key uniquely identifies the timer within the SLEE.

 namespace

The namespace that the timer exists in.

 activity-pkey

The primary key of the SLEE activity this timer is attached to.

 ra-entity

The name of the resource adaptor entity which created the activity on which this timer is set.

 submission-time

The date the timer was created.

 next-firing-time

The date the timer will next fire.

 period

The timer’s period.

 repetitions

Number of repetitions of the timer.

 remaining

Number of remaining repetitions of the timer.

 preserve-missed

Behaviour when a timer firing is missed. One of:

  • NONE — do not preserve missed timer events

  • ALL — preserve all missed timer events

  • LAST — preserve only the last missed timer event.

 node

The node currently responsible for scheduling the timer.

 replicated

Replicated flag: true for replicated timers, false for non-replicated timers. A timer is replicated if the activity it is attached to is replicated.

 remote-timer-pkey

The primary key of the remote timer created for this timer, if any.

Inspecting Remote Timers

To get detailed information about a remote timer, use the following rhino-console command or related MBean operation.

Console command: getremotetimerinfo

Command

getremotetimerinfo <remote timer id>*
  Description
    Get remote timer info

Example

To display remote timer information for 103/103:244326037626200:

$ ./rhino-console getremotetimerinfo 103/103:244326037626200
activity-pkey    : 3.101:244326026885993.0
pkey             : 103/103:244326037626200
callback-node    : 101
expiry-time      : 20191218 23:35:12
interval         : 300000
namespace        :
next-firing-time : 20191218 23:35:12
period           : 0
pops             : 0
ra-entity        : Rhino Null Activities
submission-time  : 20191218 23:30:12

This command returns a snapshot of the remote timer’s state at the time you execute it. Some values (such as fields pkey, namespace, activity-pkey, ra-entity, submission-time, interval, period, and expiry-time) are fixed for the lifetime of the remote timer. Other values change as the timer fires or cluster membership changes.

Tip See Remote Timer Information Fields for a description of the fields getremotetimerinfo returns.

MBean operation: getRemoteTimerInfo

MBean

Rhino operation

public CompositeData getRemoteTimerInfo(String remoteTimerID)
    throws InvalidPKeyException, UnknownTimerException, ManagementException;

This operation returns composite data with detailed information on the given remote timer.

Note For a description of the format of the composite data that this operation returns, see the javadoc.

Remote Timer Information Fields

The getremotetimerinfo console command displays the following values about a remote timer:

Field Description
 pkey

The primary key of the remote timer. This key uniquely identifies the remote timer within the SLEE.

 namespace

The namespace that the timer exists in.

 activity-pkey

The primary key of the SLEE activity this timer is attached to.

 ra-entity

The name of the resource adaptor entity which created the activity on which this timer is set.

 submission-time

The date the timer was created.

 next-firing-time

The date the timer will next fire.

 interval

The initial time period between the timer’s creation and its first firing.

 period

The timer’s period, if it is a repeating timer.

 expiry-time

The date that the timer will expire.

 callback-node

The node that will receive a callback with the timer event when the timer next fires. May be null if the assigned callback address for the timer is not currently reachable.

 pops

The number of times that the timer has already fired.

Cancelling Timers

To administratively cancel a timer, use the following rhino-console command or related MBean operation.

Console command: canceltimer

Command

canceltimer <timer id>
  Description
    Cancel timer

Example

To cancel the timer with the primary key 10.101:244325023613578.0.7661

$ ./rhino-console canceltimer 10.101:244325023613578.0.7661
Timer removed

MBean operation: cancelTimer

MBean

Rhino operation

public void cancelTimer(String timerID)
    throws InvalidPKeyException, UnknownTimerException,
           ManagementException;

This operation cancels the timer with the given primary key.

Cancelling Remote Timers

To administratively cancel a remotely-armed timer, use the following rhino-console command or related MBean operation.

Console command: cancelremotetimer

Command

cancelremotetimer <remote timer id>
  Description
    Cancel remote timer

Example

To cancel the remote timer with the primary key 101/101:244326026949631

$ ./rhino-console cancelremotetimer 101/101:244326026949631
Remote timer removed

MBean operation: cancelRemoteTimer

MBean

Rhino operation

public void cancelRemoteTimer(String remoteTimerID)
    throws InvalidPKeyException, UnknownTimerException,
           ManagementException;

This operation cancels the remote timer with the given primary key.

SBB Entities

Finding SBB Entities

To find SBB entities in the SLEE, use the following rhino-console command or related MBean operation.

Console command: findsbbs

Command

findsbbs [-maxpernode maxrows] [-node nodeid] <-service service> [-sbb sbb]
[-created-after date|time|offset] [-created-before date|time|offset]
  Description
    Find SBBs.

Options

Option Abbreviation Description
 -maxpernode <maxrows>

               

Retrieve at most this many SBB entities from each event-router node (default is 100). Can be used to limit Rhino’s load when processing the request.

 -node <node-id>

               

Only display SBB entities owned by the given node.

 -service <service>

               

Look for SBB entities owned by the named service (must be specified).

 -sbb <sbb>

               

Only display SBB entities of the named SBB.

 -created-after <time>
 -ca

Only display SBB entities created after the given time.

 -created-before <time>
 -cb

Only display SBB entities created before the given time.

Times for the above options may be entered in either absolute or relative format:

Type Format Description Examples

Absolute

 [[yyyy/]MM/dd] [HH:mm[:ss]]

yyyy = the year
MM = the month (1-12)
dd = the date of the month (1-31)
HH = the hour (0-23)
mm = the minute (0-59)
ss = the second (0-59)

 2008/04/15
04/15
10:57
10:57:35
2008/04/15 10:57:35

Relative

 (<nn><d|h|m|s>)*

nn = a number
d = days
h = hours
m = minutes
s = seconds

 1d
1h
1m
30s
1h30m
7m30s
1d12h5m30s

Note Rhino assumes relative time format is in the past. For example, 1h30m means 1 hour and 30 minutes ago.

Examples

To display all SBB entities owned by the SimpleService service in the SLEE:
$ ./rhino-console findsbbs -service name=SimpleService,vendor=OpenCloud,version=1.1
pkey                             creation-time       namespace   node  parent-pkey   replicated   sbb-component-id                                   service-component-id
-------------------------------  ------------------  ----------  ----- ------------  -----------  -------------------------------------------------  -------------------------------------------------
101:219902028358655/965121714     20180614 21:23:16                101                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
101:219902028358656/996141521     20180614 21:23:18                101                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
101:219902028358657/1027161328    20180614 21:23:19                101                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
101:219902028358658/1058181135    20180614 21:23:21                101                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
102:219902028482559/499417899     20180614 21:23:16                102                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
102:219902028482560/561457513     20180614 21:23:17                102                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
102:219902028482561/623497127     20180614 21:23:18                102                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
102:219902028482562/685536741     20180614 21:23:21                102                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
103:219902030044671/-392412623    20180614 21:23:17                103                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
103:219902030044672/-361392816    20180614 21:23:18                103                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
103:219902030044673/-330373009    20180614 21:23:19                103                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
11 rows
To narrow the search:

To search for SBB entities belonging to node 102 (replicated or non-replicated SBB entities owned by 102) that are more than one hour old, you would use the parameters -node 102 and -cb 1h:

$ ./rhino-console findsbbs -service name=SimpleService,vendor=OpenCloud,version=1.1 -node 102 -cb 1h
pkey                             creation-time       namespace   node  parent-pkey   replicated   sbb-component-id                                   service-component-id
-------------------------------  ------------------  ----------  ----- ------------  -----------  -------------------------------------------------  -------------------------------------------------
102:219902028482559/499417899     20180614 21:23:16                102                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
102:219902028482560/561457513     20180614 21:23:17                102                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
102:219902028482561/623497127     20180614 21:23:18                102                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
102:219902028482562/685536741     20180614 21:23:21                102                     false  SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1  ServiceID[name=SimpleService,vendor=OpenCloud,ver
4 rows

This example returned four SBB entities.

MBean operations: getSbbs

MBean

Rhino operations

Get summary information for all SBB entities owned by a service
public TabularData getSbbs(int maxPerNode, ServiceID serviceID)
    throws UnrecognizedServiceException, ManagementException;

This operation returns tabular data summarising all SBB entities in the given service.


Get summary information for all SBB entities owned by a service using time-based criteria
public TabularData getSbbs(int maxPerNode, ServiceID serviceID, long createdAfter, long createdBefore)
    throws UnrecognizedServiceException, ManagementException;

This operation returns tabular data summarising the SBB entities owned by the given service using the time-based criteria specified (in milliseconds, as used by java.util.Date, or the value 0 to ignore a particular parameter).


Get summary SBB entity information for a particular SBB in a service using time-based criteria
public TabularData getSbbs(int maxPerNode, ServiceID serviceID, SbbID sbbType, long createdAfter, long createdBefore)
    throws UnrecognizedServiceException, UnrecognizedSbbException, ManagementException

This operation returns tabular data summarising only SBB entities of the given SBB in the given service using the time-based criteria specified (in milliseconds, as used by java.util.Date, or the value 0 to ignore a particular parameter).
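
Unlike the activity operations, these operations take typed component identifiers. The sketch below assumes conn and housekeeping as in the earlier examples and that the javax.slee classes are on the client’s classpath; the identifiers match the SimpleService example above:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.TabularData;
import javax.slee.SbbID;
import javax.slee.ServiceID;

public class ListSbbs {

    public static TabularData simpleSbbs(MBeanServerConnection conn,
                                         ObjectName housekeeping)
            throws Exception {
        ServiceID service = new ServiceID("SimpleService", "OpenCloud", "1.1");
        SbbID sbb = new SbbID("SimpleSbb", "OpenCloud", "1.1");
        // getSbbs(int maxPerNode, ServiceID serviceID, SbbID sbbType,
        //         long createdAfter, long createdBefore); 0 = no time bound
        return (TabularData) conn.invoke(housekeeping, "getSbbs",
                new Object[]{100, service, sbb, 0L, 0L},
                new String[]{"int", "javax.slee.ServiceID", "javax.slee.SbbID",
                             "long", "long"});
    }
}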


Tip

Results depend on the Housekeeping MBean that invokes the operation:

  • Cluster Housekeeping MBean — returns results from all event-router nodes in the primary component

  • Node Housekeeping MBean — returns results from that node only.

Note For a description of the format of the tabular data that these operations return, see the javadoc.

Inspecting SBB Entities

To get detailed information about an SBB entity, use the following rhino-console command or related MBean operation.

Console command: getsbbinfo

Command

getsbbinfo <serviceid> <sbbid> [-non-resident] <sbb pkeys>*
  Description
    Get SBB information.  Use -non-resident to get information on SBB entities not
    currently owned by any cluster node.

Example

To display information for SBB entity 102:219902028482559/499417899 of the SimpleSbb SBB in the SimpleService service:

$ ./rhino-console getsbbinfo name=SimpleService,vendor=OpenCloud,version=1.1 name=SimpleSbb,vendor=OpenCloud,version=1.1 102:219902028482559/499417899
parent-pkey          :
pkey                 : 102:219902028482559/499417899
convergence-name     : APK[ns=,ah=10,id=102:219902022645761,replication=NONE]:::::-1
creating-node-id     : 102
creation-time        : 20180614 21:23:16
namespace            :
owning-node-id       : 102
priority             : 10
replicated           : false
sbb-component-id     : SbbID[name=SimpleSbb,vendor=OpenCloud,version=1.1]
service-component-id : ServiceID[name=SimpleService,vendor=OpenCloud,version=1.1]
attached-activities  :
       > pkey                      handle                                              namespace   ra-entity   replicated
       > ------------------------  --------------------------------------------------  ----------  ----------  -----------
       > A.102:219902022645761.0    SAH[switchID=1528911304,connectionID=10,address=]                  simple        false
       > 1 rows

This command returns a snapshot of the SBB entity’s state at the time you execute it. Some values (such as pkey, parent-pkey, convergence-name, creating-node-id, creation-time, namespace, replicated, sbb-component-id and service-component-id) are fixed for the lifetime of the SBB entity. Others change as the SBB entity processes events.

The default mode of this command only retrieves information for SBB entities currently owned by at least one active cluster node. If an SBB entity is non-resident, then the -non-resident option must be used to obtain information for that SBB entity’s state.

Tip See SBB Entity Information Fields for a description of the fields getsbbinfo returns.

MBean operation: getSbbInfo

MBean

Rhino operation

public CompositeData getSbbInfo(ServiceID serviceID, SbbID sbbID, String sbbPKey, boolean nonResident)
    throws UnrecognizedServiceException, UnrecognizedSbbException, InvalidPKeyException,
           UnknownSbbEntityException, ManagementException;

This operation returns composite data with detailed information on the given SBB entity.

Note For a description of the format of the composite data that this operation returns, see the javadoc.

SBB Entity Information Fields

The getsbbinfo console command displays information about an SBB entity and each activity it is attached to.

SBB entity information

The getsbbinfo console command displays the following values about an SBB entity:

Field Description
 pkey

The primary key of the SBB entity. Unique to the SBB within the service (SBB entities of other SBBs in the same or another service may have the same primary key).

 parent-pkey

The primary key of the SBB entity’s parent SBB entity. (For a root SBB entity, with no parent, this field is empty.)

 convergence-name

The convergence name, for a root SBB entity. (If not a root SBB entity, this field is empty.)

 creating-node-id

The Rhino node that created the SBB entity.

 creation-time

The date and time the SBB entity was created.

 namespace

The namespace that the SBB entity resides in.

 owning-node-id

The Rhino node that is currently responsible for the SBB entity.

If this value is negative, then the SBB entity is currently non-resident. This means that the state for the SBB entity tree was replicated to an external key/value store, the node that the SBB entity tree was previously assigned to has failed, and the SBB entity tree has not yet been adopted by any remaining cluster node. The absolute value of the node ID represents the node that the SBB entity was last assigned to.

 priority

The SBB entity’s current event-delivery priority.

 replicated

Boolean flag indicating if the SBB entity’s state is replicated.

 sbb-component-id

The SBB-component identifier, in string form. Identifies the SBB component that the SBB entity was created from.

 service-component-id

The service-component identifier, in string form. Identifies the service that the SBB entity is providing a function for.

A list of the activities the SBB entity is attached to.

Attached-activity information

The getsbbinfo console command displays the following values about each activity the SBB entity is attached to:

 Field

Description

 pkey

The primary key of the activity. Uniquely identifies this activity within Rhino.

 handle

The activity handle assigned by the activity’s resource adaptor entity, in string form. Its exact content is resource adaptor dependent (and may or may not contain useful human-readable information).

 namespace

The namespace the activity resides in. This will always be equal to the namespace in which the SBB entity resides.

 ra-entity

The resource adaptor entity that created this activity.

 replicated

Boolean flag indicating if the activity is a replicated activity. This flag will only be set to true if the activity has a replication mode of SAVANNA.

Diagnosing SBB Entities

SBB diagnostics lets you pull service-defined diagnostic information from SBB objects at runtime.

You can use the getsbbdiagnostics console command (or associated MBean operation) to access diagnostic information (in string form) from particular methods implemented in the SBB abstract class.

For an SBB to provide this diagnostics information, it must implement one of these methods:

  • public String ocGetSbbDiagnostics() — When queried with the getsbbdiagnostics console command, the return result from this method will be included in the SBB diagnostics output.

  • public void ocGetSbbDiagnostics(StringBuilder sb) — When queried with the getsbbdiagnostics console command, this method will be invoked on the SBB object with a SLEE-provided StringBuilder. The toString() output of the StringBuilder will be included in the SBB diagnostics output. This method is intended for chaining diagnostics output between extending classes and child SBBs.

Both of these methods are invoked with a valid transaction context on an SBB entity in the ready state. The methods may read any SBB CMP fields as necessary to produce diagnostic information.

See the example below.

Warning
  • A deployment error will occur if more than one of these methods is implemented.

  • The implementation of an ocGetSbbDiagnostics method should not modify SBB entity state.

To get detailed diagnostics information about an SBB entity, use the following rhino-console command or related MBean operation.

Console command: getsbbdiagnostics

Command

getsbbdiagnostics <serviceid> <sbbid> [-non-resident] <sbb pkeys>*
  Description
    Get SBB info and diagnostics (if supported by sbb implementation).  Use
    -non-resident to get information on SBB entities not currently owned by any
    cluster node.

Example

To display information for SBB entity 103:219902030054321/213428623 of the Sentinel SIP service:

$ ./rhino-console getsbbdiagnostics name=sentinel.sip,vendor=OpenCloud,version=1.0 name=sentinel.sip,vendor=OpenCloud,version=1.0 103:219902030054321/213428623
parent-pkey          :
pkey                 : 103:219902030054321/213428623
convergence-name     : APK[ns=,ah=10,id=103:219902023292928,replication=NONE]:::::-1
creating-node-id     : 103
creation-time        : 20180614 21:10:11
namespace            :
owning-node-id       : 103
priority             : 10
replicated           : false
sbb-component-id     : SbbID[name=sentinel.sip,vendor=OpenCloud,version=1.0]
service-component-id : ServiceID[name=sentinel.sip,vendor=OpenCloud,version=1.0]
attached-activities  :
    > pkey                       handle         ra-entity    replicated
    > -------------------------  -------------  -----------  -----------
    >  A.101:219902021728256.0    SessionAH[3]   sip-sis-ra        false
    > 1 rows


SBB diagnostics:
SentinelSipFeatureAndOcsSbbSupportImpl Child SBBs
=================================================

SubscriberDataLookup SBB:
No diagnostics available for this feature sbb.

SipAdhocConference SBB:
No child SBB currently exists for SipAdhocConference.

DiameterRoOcs SBB:
DiameterRoOcsMultiFsmSbb Service FSM States
===========================================

DiameterIECFSM [current state = NotCharging, InputRegister[scheduled=[], execution=[]], Endpoints[Endpoint[local],Endpoint[DiameterMediation],Endpoint[DiameterToOcs]]]

DiameterSCURFSM [previous state = WaitForInitialCreditCheckAnswer, current state = WaitForNextCreditControlRequest, InputRegister[scheduled=[local_errorsEndSession], execution=[]], Endpoints[Endpoint[local],Endpoint[DiameterMediation],Endpoint[DiameterToOcs,aci=[set,sbb-attached]]]]

SentinelSipSessionStateAggregate Session State
==============================================

Account: 6325
ActivityTestHasFailed: false
AllowPresentationOfDivertedToUriToOriginatingUser: false
AllowPresentationOfServedUserUriToDivertedToUser: false
AllowPresentationOfServedUserUriToOriginatingUser: false
AnnouncementCallIds: null
AnnouncementID: 0
AnytimeFreeDataPromotionActive: false
CFNRTimerDuration: 0
CallHasBeenDiverted: false
CallType: MobileOriginating
CalledPartyAddress: tel:34600000001
CalledPartyCallId: null
CallingPartyAddress: tel:34600000002
CallingPartyCallId: null
ChargingResult: 2001
ClientChargingType: sessionCharging
ClientEventChargingMethod: null
ClosedUserGroupCall: null
ClosedUserGroupDropIfNoGroupMatch: null
ClosedUserGroupEnabled: true
ClosedUserGroupIncomingAccessAllowed: null
ClosedUserGroupList: [CUG1Profile]
ClosedUserGroupOutgoingAccessAllowed: null

...
etc.

This command returns a snapshot of the SBB entity’s state and SBB-defined diagnostics information at the time you execute it. Some values (such as pkey, parent-pkey, convergence-name, creating-node-id, creation-time, namespace, replicated, sbb-component-id, and service-component-id) are fixed for the lifetime of the SBB entity. Others change as the SBB entity processes events.

The diagnostics output (from the "SBB diagnostics:" line onwards) is free-form and SBB defined. The above output is only representative of a single service-defined diagnostics method.

The default mode of this command only retrieves information for SBB entities currently owned by at least one active cluster node. If an SBB entity is non-resident, then the -non-resident option must be used to obtain information for that SBB entity’s state.

Tip See SBB Entity Information Fields for a description of the fields getsbbdiagnostics returns.

MBean operation: getSbbDiagnostics

MBean

Rhino operation

public CompositeData getSbbDiagnostics(ServiceID serviceID, SbbID sbbID, String sbbPKey, boolean nonResident)
    throws UnrecognizedServiceException, UnrecognizedSbbException, InvalidPKeyException,
           UnknownSbbEntityException, ManagementException;

This operation returns composite data with detailed information on the given SBB entity, including SBB-defined diagnostics information.

Note For a description of the format of the composite data that this operation returns, see the javadoc.

Example usage

The following is a basic example showing an auto-generated ocGetSbbDiagnostics(StringBuilder sb) method. In this case, the method was generated based on CMP fields declared by the SBB, and demonstrates diagnostics information being obtained from both the current class and the super class:

public abstract class ExampleSessionStateImpl extends com.opencloud.sentinel.ant.BaseSbb implements ExampleSessionState {
    public void ocGetSbbDiagnostics(StringBuilder sb) {
        String header = "ExampleSessionState Session State";
        sb.append(header).append("\n");
        for (int i=0; i<header.length(); i++) sb.append("=");
        sb.append("\n\n");
        // diagnostics: ClashingType (from com.opencloud.sentinel.ant.ExampleSessionStateInterface)
        if (getClashingType() == null) {
            sb.append("ClashingType: null\n");
        }
        else {
          sb.append("ClashingType: ").append(getClashingType()).append("\n");
        }
        // diagnostics: ExampleField (from com.opencloud.sentinel.ant.ExampleSessionStateInterface)
        if (getExampleField() == null) {
            sb.append("ExampleField: null\n");
        }
        else {
            sb.append("ExampleField: ").append(getExampleField()).append("\n");
        }
        sb.append("\n");
        super.ocGetSbbDiagnostics(sb);
    }
    ...

Removing SBB Entities

To forcibly remove an SBB entity, use the following rhino-console command or related MBean operation.

Warning
Only forcibly remove SBB entities if necessary

SBB entities should only be forcibly removed if they do not remove themselves due to some unforeseen error during event processing.

Console command: removesbb

Command

removesbb <serviceid> <sbbid> [-non-resident] <sbb pkey>*
  Description
    Remove SBBs.  Use -non-resident to remove SBB entities not currently owned by
    any cluster node.

Example

To remove the SBB entity of the SimpleSbb SBB in the SimpleService service with the primary key 101:219902028358655/965121714:

$ ./rhino-console removesbb name=SimpleService,vendor=OpenCloud,version=1.1 \
    name=SimpleSbb,vendor=OpenCloud,version=1.1 101:219902028358655/965121714
SBB entity 101:219902028358655/965121714 removed

The default mode of this command only removes SBB entities currently owned by at least one active cluster node. If an SBB entity is non-resident, then the -non-resident option must be used to remove that SBB entity.

MBean operation: removeSbb

MBean

Rhino operation

public void removeSbb(ServiceID serviceID, SbbID sbbID, String sbbPKey, boolean nonResident)
    throws UnrecognizedServiceException, UnrecognizedSbbException, InvalidPKeyException,
          UnknownSbbEntityException, ManagementException;

This operation removes the SBB entity with the given primary key from the given service.

Removing All SBB Entities

To remove all SBB entities of a service, use the following rhino-console command or related MBean operation.

Warning
Use extreme care when removing forcibly

Occasionally an administrator will want to remove all SBB entities in a particular service. Typically, this would be to deactivate the service when upgrading or reconfiguring. Under normal conditions, these actions would be performed automatically, by allowing existing SBB entities to drain over time. Rhino provides the following housekeeping commands to forcibly speed up the draining process, although these should be used with extreme care on production systems — they will interrupt service for any existing network activities belonging to the service.

Warning
Service (or SLEE) must be STOPPING

As a safeguard, this command (or MBean operation) cannot be run unless the specified service, or the SLEE, is in the STOPPING state on the specified nodes. For convenience in asymmetric cluster configurations, it may also be run against nodes where the service is INACTIVE (or the SLEE is STOPPED), but it has no effect on such nodes, since no SBB entities exist for the service there.

Console command: removeallsbbs

Command

removeallsbbs <serviceid> [-nodes node1,node2,...]
  Description
    Remove all SBBs from a service in the Stopping state (on the specified nodes)

Example

To remove all SBB entities for the SimpleService service on nodes 101 and 102:

$ ./rhino-console removeallsbbs name=SimpleService,vendor=OpenCloud,version=1.1 \
    -nodes 101,102
SBB entities removed from node(s) [101,102]

MBean operation: removeAllSbbs

MBean

Rhino operation

public void removeAllSbbs(ServiceID serviceID, int[] nodeIDs)
    throws NullPointerException, UnrecognizedServiceException, InvalidStateException, ManagementException;

This operation removes all SBB entities for the given service on the given nodes.

Activity Context Name Bindings

Finding Activity Context Name Bindings

To find activity context name bindings in the SLEE, use the following rhino-console command or related MBean operations.

Console command: findactivitybindings

Command

findactivitybindings [-maxpernode maxrows] [-node nodeid] [-created-after
date|time|offset] [-created-before date|time|offset]
  Description
    Find activity context naming facility bindings.

Options

Option Abbreviation Description

 -maxpernode <maxrows>

Retrieve at most this many activity context name bindings from each event-router node (default is 100). Can be used to limit Rhino’s load when processing the request.

 -node <node-id>

Only display activity context name bindings owned by the given node.

 -created-after <time>
 -ca

Only display activity context name bindings created after the given time.

 -created-before <time>
 -cb

Only display activity context name bindings created before the given time.

Times for the above options may be entered in either absolute or relative format:

Type Format Description Examples

Absolute

 [[yyyy/]MM/dd] [HH:mm[:ss]]

yyyy = the year
MM = the month (1-12)
dd = the date of the month (1-31)
HH = the hour (0-23)
mm = the minute (0-59)
ss = the second (0-59)

 2008/04/15
04/15
10:57
10:57:35
2008/04/15 10:57:35

Relative

 (<nn><d|h|m|s>)*

nn = a number
d = days
h = hours
m = minutes
s = seconds

 1d
1h
1m
30s
7m30s
1d12h5m30s
Note Rhino assumes relative times are in the past. For example, 1h30m means 1 hour and 30 minutes ago.
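
The relative format is a simple concatenation of number/unit pairs. As an illustration of the grammar only (this is not Rhino's own parser), a small Java sketch that converts such a string into a millisecond duration:

import java.util.concurrent.TimeUnit;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RelativeTimeParser {
    private static final Pattern PAIR = Pattern.compile("(\\d+)([dhms])");

    // Parses e.g. "1d12h5m30s" into a millisecond duration. Callers would
    // subtract this from the current time, since Rhino treats relative times
    // as being in the past.
    static long toMillis(String relative) {
        Matcher m = PAIR.matcher(relative);
        long millis = 0;
        int consumed = 0;
        while (m.find()) {
            long n = Long.parseLong(m.group(1));
            switch (m.group(2).charAt(0)) {
                case 'd': millis += TimeUnit.DAYS.toMillis(n); break;
                case 'h': millis += TimeUnit.HOURS.toMillis(n); break;
                case 'm': millis += TimeUnit.MINUTES.toMillis(n); break;
                case 's': millis += TimeUnit.SECONDS.toMillis(n); break;
            }
            consumed += m.group().length();
        }
        if (consumed != relative.length())
            throw new IllegalArgumentException("bad relative time: " + relative);
        return millis;
    }
}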

Examples

To display all activity context name bindings in the SLEE:

$ ./rhino-console findactivitybindings
name                                pkey                      namespace   replicated   submission-time
----------------------------------  ------------------------  ----------  -----------  ------------------
connection[1542896514/0]/non-repl    2.101:227012595280421.0                    false   20181122 14:22:05
connection[1542896514/0]/repl        3.101:227012595280422.0                     true   20181122 14:22:05
connection[1542896514/1]/non-repl    2.102:227012872179724.0                    false   20181122 14:22:05
connection[1542896514/1]/repl        3.102:227012872179725.0                     true   20181122 14:22:05
connection[1542896514/2]/non-repl    2.103:227012872965639.0                    false   20181122 14:22:05
connection[1542896514/2]/repl        3.103:227012872965640.0                     true   20181122 14:22:05
connection[1542896514/3]/non-repl    2.101:227012595280424.0                    false   20181122 14:23:15
connection[1542896514/3]/repl        3.101:227012595280425.0                     true   20181122 14:23:15
connection[1542896514/4]/non-repl    2.102:227012872179727.0                    false   20181122 14:23:16
connection[1542896514/4]/repl        3.102:227012872179728.0                     true   20181122 14:23:16
10 rows

Narrowing a name bindings search

To search for activity context name bindings owned by node 101 (whether replicated or non-replicated) that were created more than one hour ago, use the arguments -node 101 and -cb 1h:

$ ./rhino-console findactivitybindings -node 101 -cb 1h
name                                pkey                      namespace   replicated   submission-time
----------------------------------  ------------------------  ----------  -----------  ------------------
connection[1542896514/0]/non-repl    2.101:227012595280421.0                    false   20181122 14:22:05
connection[1542896514/0]/repl        3.101:227012595280422.0                     true   20181122 14:22:05
2 rows

(This example returned two name bindings.)

MBean operation: getBoundActivities

MBean

Rhino operations

Get summary information for all activity context name bindings
public TabularData getBoundActivities(int maxPerNode)
    throws ManagementException;

This operation returns tabular data summarising all activity context name bindings.


Get summary information for activity context name bindings using time-based criteria
public TabularData getBoundActivities(int maxPerNode, long createdAfter, long createdBefore)
    throws ManagementException;

This operation returns tabular data summarising the activity context name bindings using the time-based criteria specified (in milliseconds, as used by java.util.Date, or the value 0 to ignore a particular parameter).


Tip

Results depend on the Housekeeping MBean that invokes the operation:

  • Cluster Housekeeping MBean — returns results from all event-router nodes in the primary component

  • Node Housekeeping MBean — returns results from that node only.


Note For a description of the format of the tabular data that these operations return, see the javadoc.
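
As an illustrative sketch, the time-based variant can be driven from Java by computing millisecond timestamps; this mirrors the console query above (bindings created more than one hour ago). It assumes a connection obtained as in the earlier removeSbb sketch, and the Housekeeping ObjectName is again a hypothetical placeholder.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.TabularData;
import java.util.concurrent.TimeUnit;

public class BoundActivitiesQuery {
    // Returns up to 100 bindings per node created more than one hour ago,
    // mirroring "findactivitybindings -cb 1h". A value of 0 ignores a criterion.
    static TabularData bindingsOlderThanOneHour(MBeanServerConnection mbs) throws Exception {
        ObjectName housekeeping = new ObjectName("com.opencloud.rhino:type=Housekeeping");
        long createdAfter = 0L;  // 0 = no lower bound
        long createdBefore = System.currentTimeMillis() - TimeUnit.HOURS.toMillis(1);
        return (TabularData) mbs.invoke(housekeeping, "getBoundActivities",
                new Object[] { 100, createdAfter, createdBefore },
                new String[] { "int", "long", "long" });
    }
}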

Removing Activity Context Name Bindings

To forcibly remove an activity context name binding, use the following rhino-console command or related MBean operation.

Console command: removeactivitybinding

Command

removeactivitybinding <activity pkey> <name> [-non-resident]
  Description
    Remove an activity context naming facility binding. Use -non-resident to remove
    bindings not currently owned by any cluster node.

Example

To remove the activity context name binding with the name connection[1542896514/0]/repl from the activity with the primary key 3.101:227012595280422.0:

$ ./rhino-console removeactivitybinding 3.101:227012595280422.0 connection[1542896514/0]/repl
Activity binding removed

MBean operation: removeActivityBinding

MBean

Rhino operation

public void removeActivityBinding(String activityPKey, String name, boolean nonResident)
    throws ManagementException, InvalidPKeyException, UnknownActivityException,
           NameNotBoundException;

This operation removes the activity context name binding with the given name from the activity with the given primary key. The nonResident argument must be true in order to remove a name binding from a non-resident activity.

Runtime Component Configuration

Tip As of Rhino 2.3, Rhino supports modifying environment entry configuration and security permissions for deployed components.

To configure runtime components, see the following procedures.

Inspecting Environment Entries

To inspect a component’s environment entries, use the following rhino-console command or related MBean operation.

Console command: getenventries

Command

getenventries <ComponentID> [<true|false>]
  Description
    Returns the env entries for the specified SbbID or ProfileSpecificationID. The
    original env entries will be returned if the final argument is 'true'.

Example

To list all environment entries for the SIP Registrar SBB:

./rhino-console getenventries SbbID[name=RegistrarSbb,vendor=OpenCloud,version=1.8]
Getting env entries for component: SbbID[name=RegistrarSbb,vendor=OpenCloud,version=1.8]
  sipACIFactoryName:slee/resources/ocsip/acifactory
  sipProviderName:slee/resources/ocsip/provider

MBean operation: getEnvEntries

MBean

Rhino operation

public Map<String, String> getEnvEntries(ComponentID id, boolean original)
    throws NullPointerException, UnrecognizedComponentException,
    ManagementException, IllegalArgumentException;

This operation returns the environment entries associated with a component as a map of strings.

Setting Environment Entries

To modify a component’s environment entries, use the following rhino-console command or related MBean operation.

Console command: setenventries

Command

setenventries <ComponentID> <name1:value1> <name2:value2> ...
  Description
    Sets the env entries associated with the specified SbbID or
    ProfileSpecificationID.

Example

To modify the environment entries for the SIP Registrar SBB:

./rhino-console setenventries SbbID[name=RegistrarSbb,vendor=OpenCloud,version=1.8] sipACIFactoryName:slee/resources/ocsip/mycustomacifactory,sipProviderName:slee/resources/ocsip/mycustomprovider
Setting env entries for component: SbbID[name=RegistrarSbb,vendor=OpenCloud,version=1.8]

MBean operation: setEnvEntries

MBean

Rhino operation

public void setEnvEntries(ComponentID id, Map<String, String> entries)
    throws NullPointerException, UnrecognizedComponentException,
           ManagementException, IllegalArgumentException;

This operation sets the environment entries associated with a component.
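
A minimal sketch of the equivalent JMX call, assuming a connection as in the removeSbb sketch earlier and a hypothetical deployment MBean ObjectName (the real name should be discovered from your Rhino installation):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.SbbID;
import java.util.HashMap;
import java.util.Map;

public class EnvEntriesUpdate {
    // Mirrors the console example above: points the Registrar SBB at custom
    // ACI factory and provider JNDI names. The ObjectName is a placeholder.
    static void setRegistrarEnvEntries(MBeanServerConnection mbs) throws Exception {
        ObjectName deployment = new ObjectName("com.opencloud.rhino:type=Deployment");
        Map<String, String> entries = new HashMap<>();
        entries.put("sipACIFactoryName", "slee/resources/ocsip/mycustomacifactory");
        entries.put("sipProviderName", "slee/resources/ocsip/mycustomprovider");
        mbs.invoke(deployment, "setEnvEntries",
                new Object[] { new SbbID("RegistrarSbb", "OpenCloud", "1.8"), entries },
                new String[] { "javax.slee.ComponentID", "java.util.Map" });
    }
}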

Inspecting Security Permissions

To inspect a component’s security permissions, use the following rhino-console command or related MBean operation.

Note The security permissions for a component may be shared with multiple other components. For example, SBBs in the same jar may share their permissions.

Console command: getsecuritypolicy

Command

getsecuritypolicy (<ComponentID> | <LibraryID> [jarname]) [true|false]
  Description
    Returns the security policy associated with the specified ComponentID. The
    optional 'jarname' argument can be used to specify a nested library jar for
    LibraryIDs. The original policy will be returned if the final argument is
    'true'.

Example

To list the security permissions for the SIP resource adaptor:

./rhino-console getsecuritypolicy ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1]
grant {
  permission java.util.PropertyPermission "opencloud.sip.*", "read";
  permission java.io.FilePermission "/etc/resolv.conf", "read";
  permission java.net.SocketPermission "*", "resolve";
  permission java.net.SocketPermission "*:1024-", "listen,resolve";
  permission java.net.SocketPermission "*:1024-", "accept,connect";
  permission java.lang.RuntimePermission "modifyThread";
  permission java.io.FilePermission "sip-ra-ssl.truststore", "read";
  permission java.util.PropertyPermission "javax.sip.*", "read";
  permission java.io.FilePermission "sip-ra-ssl.keystore", "read";
  permission java.net.SocketPermission "*:53", "connect,accept";
};

MBean operation: getSecurityPolicy

MBean

Rhino operation

public String getSecurityPolicy(ComponentID id, String subId, boolean original)
    throws NullPointerException, UnrecognizedComponentException,
           IllegalArgumentException, ManagementException;

This operation returns the security permissions associated with a component.

Modifying Security Permissions

To modify a component’s security permissions, use the following rhino-console command or related MBean operation.

Note The security permissions for a component may be shared with multiple other components. For example, SBBs in the same jar may share their permissions.

Console command: setsecuritypolicy

Command

setsecuritypolicy (<ComponentID> | <LibraryID> [jarname]) <SecurityPolicy>
  Description
    Sets the current security policy associated with the specified ComponentID.

Example

To set the security permissions for the SIP resource adaptor:

./rhino-console setsecuritypolicy 'ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1]' 'grant { permission java.net.SocketPermission "*:53", "connect,accept"; };'
Setting security policy for component: ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1]
Warning The command-line console only supports a single line as an argument. To easily modify multi-line security policies, use the Rhino Element Manager instead.

MBean operation: setSecurityPolicy

MBean

Rhino operation

public void setSecurityPolicy(ComponentID id, String subId, String policyText)
    throws NullPointerException, UnrecognizedComponentException,
           IllegalArgumentException, ManagementException;

This operation sets the security permissions associated with a component.
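
Because the MBean operation accepts an arbitrary policy string, it is a convenient way to apply the multi-line policies that the command-line console cannot express. A minimal sketch, assuming a connection as in the earlier removeSbb sketch, a hypothetical MBean ObjectName, and a null subId (no nested library jar):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.resource.ResourceAdaptorID;

public class SecurityPolicyUpdate {
    // Applies a multi-line policy to the SIP RA from the earlier examples.
    // The ObjectName is a placeholder; adapt it to your installation.
    static void setSipRaPolicy(MBeanServerConnection mbs) throws Exception {
        ObjectName deployment = new ObjectName("com.opencloud.rhino:type=Deployment");
        String policy =
                "grant {\n"
              + "  permission java.net.SocketPermission \"*:53\", \"connect,accept\";\n"
              + "  permission java.util.PropertyPermission \"javax.sip.*\", \"read\";\n"
              + "};\n";
        mbs.invoke(deployment, "setSecurityPolicy",
                new Object[] { new ResourceAdaptorID("OCSIP", "OpenCloud", "2.3.1"), null, policy },
                new String[] { "javax.slee.ComponentID", "java.lang.String", "java.lang.String" });
    }
}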

Garbage Collection

During normal operation, Rhino periodically cleans up removed application state by running GC (garbage collection) algorithms over state that has been updated.

Application state GC is, however, triggered after a certain amount of churn has occurred within a given in-memory database. Sometimes, for example when looking at Rhino heap dumps, it’s useful to disregard state that’s eligible for GC and not have it pollute the heap dump unnecessarily. However, in a cluster that has returned to an idle state, GC won’t normally run to perform these cleanups as the trigger conditions only occur with load.

To help with these situations, Rhino offers two housekeeping functions that force GC to occur. These operations can be used to remove unnecessary state from the heap before taking a heap dump, etc. These procedures are:

Activity State Cleanup

To clean up state for activities that have been removed but not yet garbage collected, use the following rhino-console command or related MBean operation.

Console command: initiateactivitycleanup

Command

initiateactivitycleanup [ra entity name]
  Description
    Initiate activity GC

Example

To initiate garbage collection for the sipra resource adaptor entity:

$ ./rhino-console initiateactivitycleanup sipra
Initiated activity GC for sipra

MBean operations: initiateCleanup

MBean

Rhino operations

Initiate activity cleanup
public void initiateCleanup(String raEntityName)
    throws UnrecognizedResourceAdaptorEntityException, ManagementException;

This operation initiates the garbage collection process on the activity state of the specified resource adaptor entity in the current namespace. If no resource adaptor entity name is specified (i.e. the raEntityName argument is null), garbage collection is initiated on the activity state for all resource adaptor entities in the current namespace.

Activity state GC runs asynchronously after the operation has been initiated, but typically completes within a few seconds.
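
A minimal sketch of the two invocation modes, assuming a connection and the same hypothetical Housekeeping ObjectName as in the earlier removeSbb sketch:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ActivityCleanup {
    // Initiates activity-state GC for one RA entity, or for all RA entities in
    // the current namespace when null is passed, per the contract described above.
    static void cleanup(MBeanServerConnection mbs, String raEntityName) throws Exception {
        ObjectName housekeeping = new ObjectName("com.opencloud.rhino:type=Housekeeping");
        mbs.invoke(housekeeping, "initiateCleanup",
                new Object[] { raEntityName },          // null => all RA entities
                new String[] { "java.lang.String" });
    }
}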

Global State Cleanup

Rhino offers functionality to initiate garbage collection of all internal in-memory databases in a single operation.

This affects all removed but not yet garbage collected application and activity state in all namespaces.

To clean up all application and activity state for SBBs and activities that have been removed but not yet garbage collected, use the following rhino-console command or related MBean operation.

Console command: initiateglobalcleanup

Command

initiateglobalcleanup
  Description
    Initiate global GC

Example

To initiate garbage collection for all in-memory databases:

$ ./rhino-console initiateglobalcleanup
Initiated global GC

MBean operations: initiateGlobalCleanup

MBean

Rhino operations

Initiate global cleanup
public void initiateGlobalCleanup()
    throws ManagementException;

This operation initiates the garbage collection process for all application and activity in-memory databases on all Rhino cluster nodes.

The GC process runs asynchronously after the operation has been initiated, but typically completes within a few seconds.

State Flushing

During normal operation, Rhino’s in-memory state can sometimes be slightly ahead of its persisted management and key/value store state. This is caused by asynchronous writes, query batching, and similar implementation details designed to minimise unnecessary or blocking database interactions. Any unpersisted state is usually only seconds ahead of the persisted state, except in exceptional error cases, such as database connection failures, where unpersisted state may accumulate until the connection is restored.

A graceful termination of a Rhino node (e.g. via the shutdown command) will implicitly perform appropriate persistence operations for any in-memory state which has yet to be persisted, but it can be useful in some cases to explicitly trigger persistence:

  • where a Rhino node needs to be forcefully terminated and guarantees regarding persisted state are required; or

  • where external automated tooling needs certainty regarding configuration changes which have been applied via management APIs.

Rhino offers two housekeeping functions that assist with these cases by manually initiating persistence and blocking until completion.

These procedures are:

Key/Value Store Persistence

To force persistence of in-memory key/value store state, use the following rhino-console command or related MBean operation.

Console command: flushkeyvaluestores

Command

flushkeyvaluestores [-nodes node1,node2,...]
  Description
    Flush any state currently stored locally in key/value stores (on the specified
    nodes) to the backing database.  This operation blocks until the flush has
    completed.

Example

To force key/value stores to be flushed for node 101:

$ ./rhino-console flushkeyvaluestores -nodes 101
Flushing key/value stores on nodes [101]...
Flush complete

MBean operations: flushKeyValueStores

MBean

Rhino operations

Flush KeyValueStores
public void flushKeyValueStores()
    throws ManagementException;
public void flushKeyValueStores(int[] nodeIDs)
    throws ManagementException;

Request that all application state waiting for persistence in all key/value stores in all event router nodes in the cluster be persisted to the backing database as soon as possible.

Any unwritten application state buffered in the key/value stores at the time this operation is invoked will be flushed. Any further application state committed after the flush operation is initiated will be persisted at some future time in accordance with normal key/value store behaviour.

This method blocks until the flush operation has completed.

The variant that takes a nodeIDs argument restricts the flush to the key/value stores on the specified nodes.

Configuration Persistence

To force persistence of in-memory configuration state to the management database, use the following rhino-console command or related MBean operation.

Console command: flushconfiguration

Command

flushconfiguration
  Description
    Flush any outstanding configuration changes which have yet to be persisted to
    the management database. This operation blocks until the flush has completed.

Example

To force persistence of configuration to the management database:

$ ./rhino-console flushconfiguration
Flushing configuration changes to database...
Flush complete.

MBean operations: flushConfiguration

MBean

Rhino operation

Flush configuration
public void flushConfiguration()
    throws ManagementException;

Request that all configuration state waiting for persistence be persisted to the management database as soon as possible. This method is unnecessary during normal operation as configuration changes are usually persisted to the database within seconds (assuming availability). This method is primarily intended for automated systems which require certainty that configuration changes have been persisted.

This method blocks until the flush operation has completed.
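
The following minimal sketch flushes both the key/value stores on node 101 and any outstanding configuration changes, for example before forcefully terminating that node. It assumes a connection and hypothetical Housekeeping ObjectName as in the removeSbb sketch earlier; note the JMX type signature "[I" for the int[] nodeIDs parameter.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class PersistenceFlush {
    // Flushes key/value store state for node 101, then configuration state,
    // blocking until each flush completes. The ObjectName is a placeholder.
    static void flushBeforeTermination(MBeanServerConnection mbs) throws Exception {
        ObjectName housekeeping = new ObjectName("com.opencloud.rhino:type=Housekeeping");
        mbs.invoke(housekeeping, "flushKeyValueStores",
                new Object[] { new int[] { 101 } },
                new String[] { "[I" });                 // JMX signature for int[]
        mbs.invoke(housekeeping, "flushConfiguration",
                new Object[0], new String[0]);
    }
}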

Backup and Restore

This section includes an overview of backup and restore procedures, and instructions for:

About Backup and Restore

Backup strategies for your configuration

To back up a Rhino SLEE, an administrator will typically use a combination of utilities, depending on backup requirements and the layout of Rhino nodes and the external persistence database. Common approaches include:

Backing up and restoring…​ using…​ to support…​

individual nodes

OS-level facilities such as LVM

…​recovery after a node failure, by creating a snapshot of the volumes containing the Rhino installation and persistence database data files.

Note This approach can be done on a per-node, per-database basis. (A database may need to perform recovery when restoring from a disk snapshot.)

cluster-wide SLEE state

the rhino-export utility (to save the management state of the SLEE), and the rhino-import utility (to restore the state to a new cluster)

…​recovery from a cluster failure, recovery from data corruption, rolling back state to an earlier time, and migrating a cluster to a new version of Rhino.

Warning You should also back up any modified or created files which are not stored in Rhino’s management database (for example, using cron and tar). This applies to modified files under the Rhino installation (such as changes to security permissions or JVM options) and to any important generated output (such as CDR files).

subset of the SLEE state

the rhino-import utility with a partial import target

…restoring a subset of SLEE state, for example during development, after updating a set of SLEE components.

SLEE profile state

the rhino-snapshot utility (with snapshot-to-export and the importprofiles console command to restore)

…backing up only the state of SLEE profiles (a subset of the management state of the SLEE).

external persistence database

the backup tools provided by your database vendor

…backing up the contents of the external persistence database.

Tip For many deployments, the combination of disk-volume backups and rhino-export backups is sufficient.

Rhino SLEE State

As well as an overview of Rhino exports, this section includes instructions, explanations and examples of performing the following Rhino SLEE procedures:

Procedure            Utility
 Exporting a SLEE    rhino-export
 Importing a SLEE    rhino-import
 Partial Imports     rhino-import

About Rhino Exports

Note
Why export Rhino?

Administrators and programmers can export a Rhino SLEE’s deployment and configuration state to a set of human-readable text files, which they can then import into another (or the same) Rhino SLEE instance. This is useful for backing up the state of the SLEE, migrating the state of one Rhino SLEE to another Rhino SLEE instance, or migrating the SLEE state between different versions of the Rhino SLEE.

Using rhino-export to back up SLEE state

The rhino-export tool exports the entire current state of the cluster, preserving all management state in a human-readable form, which:

  • is fairly easy to modify or examine

  • consists of an Ant build script and supporting files

  • includes management commands to restore the entire cluster state.

Tip To restore exported SLEE state, reinstall Rhino and use the rhino-import tool with the saved export.

What’s exported?

The exported image records the following state from the SLEE:

  • for each namespace:

    • all deployable units

    • all profile tables

    • all profiles

    • all resource adaptor entities

    • configured trace levels for all components

    • current desired state of all services and resource adaptor entities

  • runtime configuration:

    • external database resources

    • access control roles and permissions

    • logging

    • rate limiter

    • staging queue dimensions

    • object pool dimensions

    • threshold alarms

    • SNMP legacy OID mappings.

Note The exporter uses profile specification references to determine relationships between profiles. It does not recognise undeclared relationships expressed in the profile validation methods. If profile spec X contains a dependency reference to profile spec Y, then any create-profile-table targets for profile tables of X will include a dependency on any create-profile-table targets for any profile tables of Y. If there is no profile specification reference between two profile specifications that have a functional dependency, the exporter will not handle the dependency between the profile tables.

Exported profile data DTD

The format of the exported profile data is defined by the DTD file exported_profile_data_1_1.dtd, which can be found in the doc/dtd/ directory of your Rhino installation.

An administrator can write custom XML scripts, or modify exported profile data, according to the structure defined by this DTD.

The included profile element can be used to create, replace or remove a profile.

Refer to the documentation in exported_profile_data_1_1.dtd for more details.

Warning You should also back up any modified or created files which are not stored in Rhino’s management database (for example, using cron and tar). This applies to modified files under the Rhino installation (such as changes to security permissions or JVM options) and to any important generated output (such as CDR files).

Exporting a SLEE

To export a Rhino SLEE using rhino-export, see the following instructions, example, and list of files exported.

Invoke rhino-export

To export a Rhino SLEE, use the $RHINO_HOME/client/bin/rhino-export shell script.

Warning You cannot run rhino-export unless the Rhino SLEE is available and ready to accept management commands, and you include at least one argument — the name of the directory to write the export image to.

Command-line arguments

You can include the following command-line arguments with rhino-export:

$ client/bin/rhino-export
Valid command line options are:
-h <host>            - The hostname to connect to.
-p <port>            - The port to connect to.
-u <username>        - The user to authenticate as.
-w <password>        - The password used for authentication.
-D                   - Display connection debugging messages.
-J                   - Export profile tables using JMX and write as XML. (default)
-R                   - Take a snapshot of profile tables and write as raw data.
                       The raw data files can be decoded later using snapshot-to-export.
-s                   - Only export DUs and component states. Does not export
                       configuration (object pools, logging, licenses). This
                       is useful for creating exports to migrate data
                       between Rhino versions.
-q                   - Quiet mode.  Disables profile export progress bar.
                       This is useful in cases where the terminal is too small
                       for the progress bar to display properly, or when
                       console output is being redirected to a pipe or file.
<output-directory>   - The destination directory for the export.

Usually, only the <output-directory> argument must be specified.
All other arguments will be read from 'client.properties'.

Sample export

For example, an export might run as follows:

user@host:~/rhino/client/bin$ ./rhino-export ../../rhino_export
Connecting to localhost:1199
Exporting state from the default namespace...
9 deployable units found to export
Establishing dependencies between deployable units...
Exporting file:jars/sip-profile-location-service.jar...
Exporting file:jars/sip-presence-event.jar...
Exporting file:du/jsip-library-1.2.du.jar...
Exporting file:du/jsip-ratype-1.2.du.jar...
Exporting file:du/ocsip-ratype-2.2.du.jar...
Exporting file:du/ocsip-ra-2.3.1.19.du.jar...
Exporting file:jars/sip-registrar-service.jar...
Exporting file:jars/sip-presence-service.jar...
Exporting file:jars/sip-proxy-service.jar...
Generating import build file...
Exporting 1 profile(s) from profile table sip-registrations
Export complete

Exported files

The exporter creates a new sub-directory (rhino_export in the example above, as specified by the command argument) that contains all the deployable units installed in the SLEE, and an Ant script called build.xml which can be used later to initiate the import process.

If there are any user-defined namespaces, each is exported into a separate Ant script named namespace-<namespace-name>.xml; these scripts can be run individually to restore only the corresponding namespace.

Here is an example export subdirectory:

user@host:~/rhino$ cd rhino_export/
user@host:~/rhino/rhino_export$ ls
build.xml
configuration
import.properties
profiles
rhino-ant-management_2_3.dtd
units

Exported files and directories include:

File or directory Description
 build.xml

The main Ant build file, which gives Ant the information it needs to import all the components of this export directory into the SLEE.

 namespace-<namespace-name>.xml

An Ant build file, as above, but specific to a user-defined namespace.

 import.properties

Contains configuration information, specifically the location of the Rhino "client" directory where required Java libraries are found.

 configuration

A directory containing the licenses and configured state that the SLEE should have.

 units

A directory containing deployable units.

 profiles

A directory containing XML files with the contents of profile tables.

 (snapshots)

Directories containing "snapshots" of profile tables. These are binary versions of the XML files in the profiles directory, created only by the export process (and not used for importing). See Profile Snapshots.

Importing a SLEE

To import an exported Rhino SLEE, using rhino-import, see the following instructions and example.

Run rhino-import

To import the state of an export directory into a Rhino SLEE:

  • change the current working directory to the export directory (cd into it)

  • run $RHINO_HOME/client/bin/rhino-import.

(You can also manually run Ant directly from the export directory, provided that the import.properties file has been correctly configured to point to the location of your $RHINO_HOME/client directory.)

Warning Generally, Metaswitch recommends importing into a SLEE with no existing deployed components — except when you need to merge exported state with existing state in the SLEE. (Some import operations may fail if the SLEE already includes components or objects with the same identity. These failures will not halt the build process, however, if the failonerror property of the management subtasks is set to false in the build file.)

Sample Rhino import

$ cd /path/to/export
$ rhino-import
Buildfile: ./build.xml

management-init:
     [echo] Open Cloud Rhino SLEE Management tasks defined

login:
[slee-management] Establishing new connection to localhost:1199
[slee-management] Connected to localhost:1199 (101) [Rhino-SDK (version='2.5', release='0.0', build='201609201052', revision='eb71e6f')]

import-persistence-cfg:

import-snmp-cfg:

import-snmp-node-cfg:

import-snmp-parameter-set-type-cfg:

import-snmp-configuration:

pre-deploy-config:

init:
[slee-management] Setting the active namespace to the default namespace

install-du-javax-slee-standard-types.jar:

install-du-jsip-library-1.2.du.jar:
[slee-management] Install deployable unit file:du/jsip-library-1.2.du.jar

install-du-jsip-ratype-1.2.du.jar:
[slee-management] Install deployable unit file:du/jsip-ratype-1.2.du.jar

install-du-ocsip-ra-2.3.1.19.du.jar:
[slee-management] Install deployable unit file:du/ocsip-ra-2.3.1.19.du.jar

install-du-ocsip-ratype-2.2.du.jar:
[slee-management] Install deployable unit file:du/ocsip-ratype-2.2.du.jar

install-du-sip-presence-event.jar:
[slee-management] Install deployable unit file:jars/sip-presence-event.jar

install-du-sip-presence-service.jar:
[slee-management] Install deployable unit file:jars/sip-presence-service.jar

install-du-sip-profile-location-service.jar:
[slee-management] Install deployable unit file:jars/sip-profile-location-service.jar

install-du-sip-proxy-service.jar:
[slee-management] Install deployable unit file:jars/sip-proxy-service.jar

install-du-sip-registrar-service.jar:
[slee-management] Install deployable unit file:jars/sip-registrar-service.jar

install-all-dus:

install-deps-of-du-javax-slee-standard-types.jar:

verify-du-javax-slee-standard-types.jar:

install-deps-of-du-jsip-library-1.2.du.jar:

verify-du-jsip-library-1.2.du.jar:
[slee-management] Verifying deployable unit file:du/jsip-library-1.2.du.jar

install-deps-of-du-jsip-ratype-1.2.du.jar:

verify-du-jsip-ratype-1.2.du.jar:
[slee-management] Verifying deployable unit file:du/jsip-ratype-1.2.du.jar

install-deps-of-du-ocsip-ratype-2.2.du.jar:

install-deps-of-du-ocsip-ra-2.3.1.19.du.jar:

verify-du-ocsip-ra-2.3.1.19.du.jar:
[slee-management] Verifying deployable unit file:du/ocsip-ra-2.3.1.19.du.jar

verify-du-ocsip-ratype-2.2.du.jar:
[slee-management] Verifying deployable unit file:du/ocsip-ratype-2.2.du.jar

install-deps-of-du-sip-presence-event.jar:

verify-du-sip-presence-event.jar:
[slee-management] Verifying deployable unit file:jars/sip-presence-event.jar

install-deps-of-du-sip-profile-location-service.jar:

install-deps-of-du-sip-presence-service.jar:

verify-du-sip-presence-service.jar:
[slee-management] Verifying deployable unit file:jars/sip-presence-service.jar

verify-du-sip-profile-location-service.jar:
[slee-management] Verifying deployable unit file:jars/sip-profile-location-service.jar

install-deps-of-du-sip-proxy-service.jar:

verify-du-sip-proxy-service.jar:
[slee-management] Verifying deployable unit file:jars/sip-proxy-service.jar

install-deps-of-du-sip-registrar-service.jar:

verify-du-sip-registrar-service.jar:
[slee-management] Verifying deployable unit file:jars/sip-registrar-service.jar

verify-all-dus:

deploy-du-javax-slee-standard-types.jar:

deploy-du-jsip-library-1.2.du.jar:
[slee-management] Deploying deployable unit file:du/jsip-library-1.2.du.jar

deploy-du-jsip-ratype-1.2.du.jar:
[slee-management] Deploying deployable unit file:du/jsip-ratype-1.2.du.jar

deploy-du-ocsip-ra-2.3.1.19.du.jar:
[slee-management] Deploying deployable unit file:du/ocsip-ra-2.3.1.19.du.jar

deploy-du-ocsip-ratype-2.2.du.jar:
[slee-management] Deploying deployable unit file:du/ocsip-ratype-2.2.du.jar

deploy-du-sip-presence-event.jar:
[slee-management] Deploying deployable unit file:jars/sip-presence-event.jar

deploy-du-sip-presence-service.jar:
[slee-management] Deploying deployable unit file:jars/sip-presence-service.jar
[slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1],sbb=SbbID[name=EventStateCompositorSbb,vendor=OpenCloud,version=1.0]] root tracer to Info
[slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1],sbb=SbbID[name=NotifySbb,vendor=OpenCloud,version=1.1]] root tracer to Info
[slee-management] Setting service ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1] starting priority to 0
[slee-management] Setting service ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1] stopping priority to 0
[slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1],sbb=SbbID[name=EventStateCompositorSbb,vendor=OpenCloud,version=1.0]] root tracer to Info
[slee-management] Setting service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] starting priority to 0
[slee-management] Setting service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] stopping priority to 0
[slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0],sbb=SbbID[name=ProfileLocationSbb,vendor=OpenCloud,version=1.0]] root tracer to Info
[slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0],sbb=SbbID[name=PublishSbb,vendor=OpenCloud,version=1.0]] root tracer to Info
[slee-management] Setting service ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0] starting priority to 0
[slee-management] Setting service ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0] stopping priority to 0

deploy-du-sip-profile-location-service.jar:
[slee-management] Deploying deployable unit file:jars/sip-profile-location-service.jar
[slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Profile Location Service,vendor=OpenCloud,version=1.0],sbb=SbbID[name=ProfileLocationSbb,vendor=OpenCloud,version=1.0]] root tracer to Info
[slee-management] Setting service ServiceID[name=SIP Profile Location Service,vendor=OpenCloud,version=1.0] starting priority to 0
[slee-management] Setting service ServiceID[name=SIP Profile Location Service,vendor=OpenCloud,version=1.0] stopping priority to 0

deploy-du-sip-proxy-service.jar:
[slee-management] Deploying deployable unit file:jars/sip-proxy-service.jar
[slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8],sbb=SbbID[name=ProfileLocationSbb,vendor=OpenCloud,version=1.0]] root tracer to Info
[slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8],sbb=SbbID[name=ProxySbb,vendor=OpenCloud,version=1.8]] root tracer to Info
[slee-management] Setting service ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8] starting priority to 0
[slee-management] Setting service ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8] stopping priority to 0

deploy-du-sip-registrar-service.jar:
[slee-management] Deploying deployable unit file:jars/sip-registrar-service.jar
[slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8],sbb=SbbID[name=ProfileLocationSbb,vendor=OpenCloud,version=1.0]] root tracer to Info
[slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8],sbb=SbbID[name=RegistrarSbb,vendor=OpenCloud,version=1.8]] root tracer to Info
[slee-management] Setting service ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8] starting priority to 0
[slee-management] Setting service ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8] stopping priority to 0

deploy-all-dus:

create-sip-registrations-profile-table:
[slee-management] Create profile table sip-registrations from specification ComponentID[name=SipRegistrationProfile,vendor=OpenCloud,version=1.0]
[slee-management] Importing profiles into profile table: sip-registrations
[slee-management] 1 profile(s) processed: 0 created, 0 replaced, 0 removed, 1 skipped
[slee-management] Set trace level of ProfileTableNotification[table=sip-registrations] root tracer to Info

create-all-profile-tables:

create-ra-entity-sipra:
[slee-management] Deploying ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1]
[slee-management] [Failed] Component ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1] already deployed
[slee-management] Create resource adaptor entity sipra from ComponentID[name=OCSIP,vendor=OpenCloud,version=2.3.1]
[slee-management] Bind link name OCSIP to sipra
[slee-management] Set trace level of RAEntityNotification[entity=sipra] root tracer to Info
[slee-management] Setting resource adaptor entity sipra starting priority to 0
[slee-management] Setting resource adaptor entity sipra stopping priority to 0

create-all-ra-entities:

deploy:

deploy-all:

set-subsystem-tracer-levels:
[slee-management] Set trace level of SubsystemNotification[subsystem=AbnormalExecutionAlarms] root tracer to Info
[slee-management] Set trace level of SubsystemNotification[subsystem=ActivityHandler] root tracer to Info
[slee-management] Set trace level of SubsystemNotification[subsystem=ClusterStateListener] root tracer to Info
[slee-management] Set trace level of SubsystemNotification[subsystem=ConfigManager] root tracer to Info
[slee-management] Set trace level of SubsystemNotification[subsystem=DatabaseStateListener] root tracer to Info
[slee-management] Set trace level of SubsystemNotification[subsystem=ElementManager] root tracer to Info
[slee-management] Set trace level of SubsystemNotification[subsystem=LicenseManager] root tracer to Info
[slee-management] Set trace level of SubsystemNotification[subsystem=LimiterManager] root tracer to Info
[slee-management] Set trace level of SubsystemNotification[subsystem=MLetStarter] root tracer to Info
[slee-management] Set trace level of SubsystemNotification[subsystem=RhinoStarter] root tracer to Info
[slee-management] Set trace level of SubsystemNotification[subsystem=ServiceStateManager] root tracer to Info
[slee-management] Set trace level of SubsystemNotification[subsystem=SNMP] root tracer to Info
[slee-management] Set trace level of SubsystemNotification[subsystem=ThresholdAlarms] root tracer to Info

import-runtime-cfg:

import-logging-cfg:

import-auditing-cfg:

import-license-cfg:

import-object-pool-cfg:

import-staging-queues-cfg:

import-limiting-cfg:

import-threshold-cfg:

import-threshold-rules-cfg:

import-access-control-cfg:

import-container-configuration:

post-deploy-config:

activate-service-SIP Notification Service-OpenCloud-1.1:
[slee-management] Activate service ComponentID[name=SIP Notification Service,vendor=OpenCloud,version=1.1] on nodes [101]

activate-service-SIP Presence Service-OpenCloud-1.1:
[slee-management] Activate service ComponentID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] on nodes [101]

activate-service-SIP Profile Location Service-OpenCloud-1.0:
[slee-management] Activate service ComponentID[name=SIP Profile Location Service,vendor=OpenCloud,version=1.0] on nodes [101]

activate-service-SIP Proxy Service-OpenCloud-1.8:
[slee-management] Activate service ComponentID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8] on nodes [101]

activate-service-SIP Publish Service-OpenCloud-1.0:
[slee-management] Activate service ComponentID[name=SIP Publish Service,vendor=OpenCloud,version=1.0] on nodes [101]

activate-service-SIP Registrar Service-OpenCloud-1.8:
[slee-management] Activate service ComponentID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8] on nodes [101]

activate-all-services:

activate-ra-entity-sipra:
[slee-management] Activate RA entity sipra on node(s) [101]

activate-all-ra-entities:

activate:

activate-all:

all:

BUILD SUCCESSFUL
Total time: 27 seconds

Partial Imports

To perform a partial import, first view the available targets in the export, then select the target to import.

Note
Why import only part of an export?

A partial import is useful when you only need to import selected items from an export, such as:

  • specific namespaces

  • specific deployable units

  • specific profile tables

  • individual services

  • individual resource adaptor entities

  • container configuration settings.

For example, you can use a partial import to recreate resource adaptor entities without also activating them.

It is also possible to exclude certain configuration steps by setting an exclusion property. For example, to exclude the persistence configuration you would use -Dexclude-persistence-cfg=true as part of the command. These exclusion properties exist for all configuration targets.

View targets

To list the available targets in the export, use rhino-import -l. For example:

$ cd /path/to/export
$ rhino-import -l
Buildfile: ./build.xml

Main targets:

Other targets:

 activate
 activate-all
 activate-all-ra-entities
 activate-all-services
 activate-ra-entity-sipra
 activate-service-SIP Notification Service-OpenCloud-1.1
 activate-service-SIP Presence Service-OpenCloud-1.1
 activate-service-SIP Profile Location Service-OpenCloud-1.0
 activate-service-SIP Proxy Service-OpenCloud-1.8
 activate-service-SIP Publish Service-OpenCloud-1.0
 activate-service-SIP Registrar Service-OpenCloud-1.8
 all
 create-all-profile-tables
 create-all-ra-entities
 create-ra-entity-sipra
 create-sip-registrations-profile-table
 deploy
 deploy-all
 deploy-all-dus
 deploy-du-javax-slee-standard-types.jar
 deploy-du-jsip-library-1.2.du.jar
 deploy-du-jsip-ratype-1.2.du.jar
 deploy-du-ocsip-ra-2.3.1.19.du.jar
 deploy-du-ocsip-ratype-2.2.du.jar
 deploy-du-sip-presence-event.jar
 deploy-du-sip-presence-service.jar
 deploy-du-sip-profile-location-service.jar
 deploy-du-sip-proxy-service.jar
 deploy-du-sip-registrar-service.jar
 import-access-control-cfg
 import-auditing-cfg
 import-container-configuration
 import-license-cfg
 import-limiting-cfg
 import-logging-cfg
 import-object-pool-cfg
 import-persistence-cfg
 import-runtime-cfg
 import-snmp-cfg
 import-snmp-configuration
 import-snmp-node-cfg
 import-snmp-parameter-set-type-cfg
 import-staging-queues-cfg
 import-threshold-cfg
 import-threshold-rules-cfg
 init
 install-all-dus
 install-deps-of-du-javax-slee-standard-types.jar
 install-deps-of-du-jsip-library-1.2.du.jar
 install-deps-of-du-jsip-ratype-1.2.du.jar
 install-deps-of-du-ocsip-ra-2.3.1.19.du.jar
 install-deps-of-du-ocsip-ratype-2.2.du.jar
 install-deps-of-du-sip-presence-event.jar
 install-deps-of-du-sip-presence-service.jar
 install-deps-of-du-sip-profile-location-service.jar
 install-deps-of-du-sip-proxy-service.jar
 install-deps-of-du-sip-registrar-service.jar
 install-du-javax-slee-standard-types.jar
 install-du-jsip-library-1.2.du.jar
 install-du-jsip-ratype-1.2.du.jar
 install-du-ocsip-ra-2.3.1.19.du.jar
 install-du-ocsip-ratype-2.2.du.jar
 install-du-sip-presence-event.jar
 install-du-sip-presence-service.jar
 install-du-sip-profile-location-service.jar
 install-du-sip-proxy-service.jar
 install-du-sip-registrar-service.jar
 login
 management-init
 post-deploy-config
 pre-deploy-config
 set-subsystem-tracer-levels
 verify-all-dus
 verify-du-javax-slee-standard-types.jar
 verify-du-jsip-library-1.2.du.jar
 verify-du-jsip-ratype-1.2.du.jar
 verify-du-ocsip-ra-2.3.1.19.du.jar
 verify-du-ocsip-ratype-2.2.du.jar
 verify-du-sip-presence-event.jar
 verify-du-sip-presence-service.jar
 verify-du-sip-profile-location-service.jar
 verify-du-sip-proxy-service.jar
 verify-du-sip-registrar-service.jar
Default target: all

Select target

To import a selected target, run rhino-import with the target specified.

Warning If you don’t specify a target, this operation will import the default (all).

For example, to recreate all resource adaptor entities included in the export:

$ cd /path/to/export
$ rhino-import . create-all-ra-entities
Buildfile: ./build.xml

management-init:
     [echo] Open Cloud Rhino SLEE Management tasks defined

login:
[slee-management] Establishing new connection to localhost:1199
[slee-management] Connected to localhost:1199 (101) [Rhino-SDK (version='2.5', release='0.0', build='201609201052', revision='eb71e6f')]

init:
[slee-management] Setting the active namespace to the default namespace

install-du-jsip-library-1.2.du.jar:
[slee-management] Install deployable unit file:du/jsip-library-1.2.du.jar

install-deps-of-du-jsip-library-1.2.du.jar:

install-du-jsip-ratype-1.2.du.jar:
[slee-management] Install deployable unit file:du/jsip-ratype-1.2.du.jar

install-deps-of-du-jsip-ratype-1.2.du.jar:

install-du-ocsip-ratype-2.2.du.jar:
[slee-management] Install deployable unit file:du/ocsip-ratype-2.2.du.jar

install-deps-of-du-ocsip-ratype-2.2.du.jar:

install-du-ocsip-ra-2.3.1.19.du.jar:
[slee-management] Install deployable unit file:du/ocsip-ra-2.3.1.19.du.jar

install-deps-of-du-ocsip-ra-2.3.1.19.du.jar:

verify-du-ocsip-ra-2.3.1.19.du.jar:
[slee-management] Verifying deployable unit file:du/ocsip-ra-2.3.1.19.du.jar

deploy-du-ocsip-ra-2.3.1.19.du.jar:
[slee-management] Deploying deployable unit file:du/ocsip-ra-2.3.1.19.du.jar

create-ra-entity-sipra:
[slee-management] Deploying ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1]
[slee-management] [Failed] Component ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1] already deployed
[slee-management] Create resource adaptor entity sipra from ComponentID[name=OCSIP,vendor=OpenCloud,version=2.3.1]
[slee-management] Bind link name OCSIP to sipra
[slee-management] Set trace level of RAEntityNotification[entity=sipra] root tracer to Info
[slee-management] Setting resource adaptor entity sipra starting priority to 0
[slee-management] Setting resource adaptor entity sipra stopping priority to 0

create-all-ra-entities:

BUILD SUCCESSFUL
Total time: 5 seconds

Profile Snapshots

As well as an overview of profile snapshots, this section includes instructions, explanations and examples of performing the following Rhino SLEE procedures:

Procedure                            Script              Console command
 Saving a Profile Snapshot           rhino-snapshot
 Inspecting a Profile Snapshot       snapshot-decode
 Preparing a Snapshot for Importing  snapshot-to-export
 Importing a Profile Snapshot                            importprofiles

About SLEE Profile Snapshots

The rhino-snapshot script exports the state of some or all profile tables out of the SLEE in binary format. These binary snapshots can then be inspected, converted into a form suitable for re-importing, and re-imported later.

You extract and convert binary images of profile tables using the following commands, available in client/bin/:

  • rhino-snapshot — extracts the state of a profile table in an active SLEE, and outputs the binary image of that table to a snapshot directory or ZIP file

  • snapshot-decode — prints the contents of a snapshot directory or ZIP file

  • snapshot-to-export — converts a snapshot directory or ZIP file into an XML file, which can be re-imported into Rhino

Tip For usage information on any of these scripts, run them without arguments.

Saving a Profile Snapshot

To create a snapshot (for example, to extract the state of an individual profile table to a snapshot directory), run the rhino-snapshot script.

Options

$ ~/rhino/client/bin/rhino-snapshot
Rhino Snapshot Client

    This tool creates snapshots of currently installed profile tables.

Usage:
    rhino-snapshot [options] <action> [options] [<profile table name>*|--all]

Valid options:
    -? or --help             - Display this message

    -h <host>                - Rhino host
    -p <port>                - Rhino port
    -u <username>            - Rhino username
    -w <password>            - Rhino password

Valid actions:
    --list                   - list all profile tables
    --info                   - get profile table info only (do not save any data)
    --snapshot               - snapshot and save profile table data

Action modifier options:
    --namespace <name>       - restrict action to the given namespace
                               use an empty string to denote the default namespace
    --outputdir <directory>  - sets the directory where snapshot files are created
                               defaults to the current working directory
    --zip                    - save to .zip archives instead of directories

    --all                    - snapshot all profile tables

Example

ubuntu@ip-172-31-25-31:~/RhinoSDK/client/bin$ ./rhino-snapshot -h localhost --snapshot --outputdir snapshot-backup --all
Rhino Snapshot Client

Connecting to node: localhost:1199
Connected to node: 101
Snapshot server port is: 22000
Taking snapshot for OpenCloud_ShortCode_AddressListEntryTable
Saving OpenCloud_ShortCode_AddressListEntryTable.jar (1,335kb)
Streaming profile table 'OpenCloud_ShortCode_AddressListEntryTable' snapshot to OpenCloud_ShortCode_AddressListEntryTable.data (3 entries)
[###############################################################################################################################################################################################################################] 3/3 entries

Taking snapshot for SCCCamelToIMSReOriginationConfigProfileTable
Saving SCCCamelToIMSReOriginationConfigProfileTable.jar (4,937kb)
Streaming profile table 'SCCCamelToIMSReOriginationConfigProfileTable' snapshot to SCCCamelToIMSReOriginationConfigProfileTable.data (2 entries)
[###############################################################################################################################################################################################################################] 2/2 entries

Taking snapshot for OpenCloud_RegistrarPublicIdToPrivateIdTable
Saving OpenCloud_RegistrarPublicIdToPrivateIdTable.jar (1,398kb)
Streaming profile table 'OpenCloud_RegistrarPublicIdToPrivateIdTable' snapshot to OpenCloud_RegistrarPublicIdToPrivateIdTable.data (1 entries)
[###############################################################################################################################################################################################################################] 1/1 entries

...

Extracted 1,626 of 1,626 entries (251kb)
Snapshot timestamp 2016-10-31 15:08:46.917 (1477926526917)
   Critical region time     : 0.003 s
   Request preparation time : 0.090 s
   Data extraction time     : 113.656 s
   Total time               : 113.746 s

Inspecting a Profile Snapshot

To print the contents of a snapshot directory or zip file, run the snapshot-decode script.

Options

$ ~/rhino/client/bin/snapshot-decode
Syntax: snapshot-decode <snapshot .zip | snapshot directory> [max# of records to print, default=all]

Example

$ ~/rhino/client/bin/snapshot-decode snapshots-backup/ActivityTestConfigurationTable
OpenCloud::::,2000,1.5,300,10000,chargingPeriodMultiple
OpenCloud:OpenCloud:sipcall::,2000,1.5,1800,10000,chargingPeriodMultiple

Notes

The snapshot-decode command outputs the contents of a single profile table saved in a single profile snapshot, as rows of comma-separated lists.

If you exported all profile tables (by passing the --all flag to rhino-snapshot), to view the complete set of snapshots, run snapshot-decode against each subdirectory of the output. For example, if you ran this command to create the set of profile snapshots:

~/rhino/client/bin/rhino-snapshot -h localhost --outputdir snapshots-backup --all

…​then you would run commands such as the following to inspect all the tables:

~/rhino/client/bin/snapshot-decode snapshots-backup/ActivityTestConfigurationTable

~/rhino/client/bin/snapshot-decode snapshots-backup/AnnouncementProfileTable
...

Preparing a Snapshot for Importing

To convert a snapshot to XML (so that it can be re-imported into another SLEE), run the snapshot-to-export script. To convert the raw data snapshots contained in a rhino-export result (created with the -R option), run the convert-export-snapshots-to-xml script.

snapshot-to-export

Options

$ ~/rhino/client/bin/snapshot-to-export
Snapshot .zip file or directory required
Syntax: snapshot-to-export <snapshot .zip | snapshot directory>
  <output .xml file> [--max max records, default=all]

Example

$ ~/rhino/client/bin/snapshot-to-export snapshots-backup/snapshots/ActivityTestConfigurationTable ActivityTestConfigurationTable.xml
Creating profile export file ActivityTestConfigurationTable.xml
[###########################################################################################################] converted 3 of 3
[###########################################################################################################] converted 3 of 3

Created export for 3 profiles in 0.1 seconds

Notes

The snapshot-to-export script converts a single profile table snapshot.

If you exported all profile tables (by passing the --all flag to rhino-snapshot), run snapshot-to-export against each subdirectory of the output. For example, if you ran this command to create the set of profile snapshots:

~/rhino/client/bin/rhino-snapshot -h localhost --outputdir snapshots-backup --all

…​then you would run commands such as the following to convert all the tables:

~/rhino/client/bin/snapshot-to-export snapshots-backup/ActivityTestConfigurationTable
Creating profile export file snapshots-backup/ActivityTestConfigurationTable.xml
[############################################################################################################] converted 3 of 3
[############################################################################################################] converted 3 of 3

Created export for 3 profiles in 0.1 seconds

~/rhino/client/bin/snapshot-to-export snapshots-backup/sis_configs_sip
Creating profile export file snapshots-backup/sis_configs_sip.xml
[############################################################################################################] converted 2 of 2
[############################################################################################################] converted 2 of 2

Created export for 2 profiles in 0.9 seconds
...

convert-export-snapshots-to-xml

Options

$ ~/rhino/client/bin/convert-export-snapshots-to-xml
Export directory name must be specified
Syntax: convert-export-snapshots-to-xml <export directory>

Example

$ ~/rhino/client/bin/convert-export-snapshots-to-xml exports/
Converting table test-profile-table from snapshot exports/snapshots/test_profile_table to XML test_profile_table.xml

Importing a Profile Snapshot

To import a converted snapshot XML file into a Rhino SLEE, run the importprofiles command in rhino-console.

Options

importprofiles <filename.xml> [-table table-name] [-replace] [-max
profiles-per-transaction] [-noverify]
  Description
    Import profiles from xml data

Example

$ ~/rhino/client/bin/rhino-console importprofiles snapshots-backup/ActivityTestConfigurationTable.xml
Interactive Rhino Management Shell
Connecting as user admin
Importing profiles into profile table: ActivityTestConfigurationTable
3 profile(s) processed: 3 created, 0 replaced, 0 removed, 0 skipped

External Persistence Database Backups

During normal operation, all SLEE management and profile data resides in Rhino’s own in-memory distributed database. The memory database is fault tolerant and can survive the failure of a node. However, for management and profile data to survive a total restart of the cluster, it must be persisted to a permanent, disk-based data store. Metaswitch Rhino SLEE uses an external database for this purpose; both PostgreSQL and Oracle databases are supported.

Note
When to export instead of backing up

Database backups can only be restored using the same Rhino SLEE version that created them. For backups that can reliably be restored into different versions of the Rhino SLEE, create an export image of the SLEE.

The procedures for backing up and restoring the external database in which Rhino stores persistent state differ depending on the database vendor. Consult the documentation provided by your database vendor.

PostgreSQL documentation for PostgreSQL 9.6 can be found at https://www.postgresql.org/docs/9.6/static/backup.html

Oracle documentation for Oracle Database 12C R2 can be found at https://docs.oracle.com/database/122/BRADV/toc.htm

When installing Rhino, a database schema is initialised. To back up the Rhino database, you must dump a copy of all the tables in this schema. The schema to be backed up is the database name chosen during the Rhino installation. This value is stored as the MANAGEMENT_DATABASE_NAME variable in the file $RHINO_HOME/config/config_variables, where $RHINO_HOME is the path of a Rhino node directory.
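
For example, with PostgreSQL the whole schema can be dumped with the standard pg_dump tool. A minimal sketch, assuming the database is named rhino (the value of MANAGEMENT_DATABASE_NAME) and is owned by a database user also named rhino:

# dump all tables in the rhino database to a plain-SQL backup file
$ pg_dump -h localhost -U rhino -f rhino-backup.sql rhino

The resulting file can later be restored into an empty database of the same name with psql (for example, psql -h localhost -U rhino -f rhino-backup.sql rhino).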

Database schema

The tables below are typical of a Rhino installation. Depending on your deployed services and configuration the schema may contain more tables than these. Backups should always include all the tables to allow restoration to a usable state without additional manual operations.

Table Contents
 keyspaces

Names of tables holding data for MemDB keyspaces and config options for these keyspaces. Each MemDB instance contains a number of keyspaces for different types of stored data. These are mapped to tables in the persistent DB.

 timestamps

Snapshot timestamps and MemDB generation IDs for persistent MemDB databases. This table records the current timestamp for each persistent MemDB so nodes can determine which backing database holds the most recent version when starting up. See About Persistence Resources for an explanation of how to use multiple redundant backing databases for persistent storage.

 domain-0-rhino_management_RHINO internal metadata suppocfa90e0c
  domain-0-rhino_management_RHINO internal metadata suppo302ee56d
  ...

Rhino configuration data. Each table contains one keyspace for a different type of configuration data, e.g. SLEE state.

 domain-0-rhino_management_rhino:deployment

Deployed service classes. Entries correspond to jars, metadata files and checksums for deployed components.

 domain-0-rhino_profiles_Profile Table 1:ProfileOCBB
...

Profile table data. Each table corresponds to a profile table in the SLEE. Each record corresponds to a profile in the profile table.

 domain-0-rhino_profiles_Profile Table 1:ProfileIndexAddressOCBB
...

Index data for the profile tables. Each indexed field in a profile has a matching table *ProfileIndex<Indexed Field>OCBB.

Monitoring Rhino Using SNMP

Rhino includes an SNMP agent, for interoperability with external SNMP-aware management clients. The Rhino SNMP agent provides a read-only view of Rhino’s statistics (through SNMP polling) and supports sending SNMP notifications for platform events to an external monitoring system.

In a clustered environment, individual Rhino nodes will run their own instances of the SNMP agent, so that statistics and notifications can still be accessed in the event of node failure. When multiple Rhino nodes exist on the same host, Rhino assigns each SNMP agent a unique port, so that the SNMP agents for each node can be distinguished. The port assignments are also persisted by default, so that in the event of node failure or cluster restart the Rhino SNMP agent for a given node will resume operation on the same port it was using previously.

By default, Rhino enables the SNMP v1 and v2c protocols. You can query statistics by configuring a monitoring system to connect to port 16100 (or other port as noted above). To use SNMP v3, configure authentication and enable SNMP v3 first.

In this section...

Accessing SNMP Statistics and Notifications

Below is an overview of the statistics and notifications available from the Rhino SNMP agent, and the OIDs they use.

SNMP statistics

The Rhino SNMP agent provides access to all non-sample-based Rhino statistics (all gauges and counters). Each parameter set type is represented either as a collection of SNMP scalar values or as an SNMP table. The OIDs for each parameter set type are by default registered according to the static OID model introduced in Rhino 3.1.0. If a parameter set type is not statically defined, or the defined OIDs clash, then no OID is registered under the static OID model. Parameter set types that don’t use the static OID model instead register OIDs according to the legacy OID model, if it is enabled. Under the legacy model, all parameter set types have OIDs registered except where parameter set type OIDs conflict.
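
For example, the Rhino statistics subtree can be polled with the Net-SNMP command-line tools. An illustrative query, assuming the default community public and agent port 16100:

$ snmpwalk -v 2c -c public localhost:16100 1.3.6.1.4.1.19808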

To get the OID mapping of the parameter set types and statistic names currently in the system, use the Rhino management console to export the MIB files.

SNMP collection of scalar values format

Where a parameter set type is statically defined and is associated with only a single parameter set, its statistics are represented with a collection of SNMP scalar values. Each singular value in this collection represents a counter statistic for this parameter set type.

Where a parameter set type is statically defined and is associated with multiple parameter sets, the parameter sets with statically mapped names are each represented with a collection of SNMP scalar values.

For details, see the SNMP Static OID Development Guide.

SNMP table format

Where a parameter set type is statically defined and is associated with multiple individual parameter sets, a non-statically mapped parameter set’s statistics are represented by an SNMP table. The individual table rows represent parameter sets, while the table columns represent statistics from the parameter set type. A parameter set with a non-statically mapped parameter set name will have the name of the parameter set stored as a table index in the OID.

For details, see the SNMP Static OID Development Guide.

Legacy model SNMP tables

Parameter set types that don’t use the static OID model instead have their SNMP OIDs registered according to the legacy OID model. This model supports explicitly defining static OIDs for each parameter set type with the rest dynamically generated. The legacy OID model section describes the SNMP OID allocation in further detail.

Warning

For static model non-statically mapped parameter sets and legacy model parameter sets, exceptionally long parameter set names may be truncated if their OID representation is longer than 255 characters. This is to prevent pathname length problems with management clients that store SNMP statistics in files named after the index OID.

Note

The statistics provided by this interface can be considered the "raw" statistics values for individual Rhino nodes. The SNMP agent doesn’t collate statistics from other nodes, so the retrieved statistics will not, in general, reflect activity occurring on Rhino nodes other than the one the agent is providing statistics for.

SNMP notifications

The Rhino SNMP agent supports sending SNMP notifications to a designated host/port. It forwards the notification types listed below, which include all standard JAIN SLEE 1.1 notifications plus several Rhino-specific notifications. Rhino sends SNMP notifications if a destination host has been configured and the SNMP notification function is enabled. See Configuring SNMP Notifications for instructions on how to configure Rhino to send SNMP notifications.

Notifications Sent when…​ Trap type OID (1.3.6.1.4.1.19808.2.101.x)

Alarms

…​alarms are raised or cleared (only once per alarm)

1.3.6.1.4.1.19808.2.101.1

Resource Adaptor

…​an RA entity changes state

1.3.6.1.4.1.19808.2.101.2

Service

…​a service changes state

1.3.6.1.4.1.19808.2.101.3

SLEE

…​the SLEE changes state

1.3.6.1.4.1.19808.2.101.4

Trace

…​a trace message is generated by a component

1.3.6.1.4.1.19808.2.101.5

Usage

…​a usage notification is generated

1.3.6.1.4.1.19808.2.101.6

Log

…​log messages exceed a specified threshold

1.3.6.1.4.1.19808.2.101.7

Logfile Rollover

…​a log file rolls over

1.3.6.1.4.1.19808.2.101.8

The notification MIB structure is a set of SNMP variable bindings containing the time ticks since Rhino started, the notification message, type, timestamp, node IDs and additional data. These are also documented in the RHINO-NOTIFICATIONS.MIB file generated by rhino-console exportmibs.

Common Notification VarBinds (Notification argument data)
Name OID Description

message

1.3.6.1.4.1.19808.2.102.1

The notification message. For alarms, this is the alarm message text.

type

1.3.6.1.4.1.19808.2.102.2

The notification type, in dotted hierarchical notation. For example, "javax.slee.management.alarm.raentity".

sequence

1.3.6.1.4.1.19808.2.102.3

An incrementing sequence number, indexed for each notification type.

timestamp

1.3.6.1.4.1.19808.2.102.4

A timestamp in ms since 1-Jan-1970.

nodeIDs

1.3.6.1.4.1.19808.2.102.5

The node IDs reporting this notification. An array of Rhino node IDs represented as a string [101,102,…​].

source

1.3.6.1.4.1.19808.2.102.9

The source of the notification. This can be an SBB, RA entity, or subsystem.

namespace

1.3.6.1.4.1.19808.2.102.13

The namespace of the notification. This field is an empty string for the default namespace.

Component state change
Name OID Description

oldState

1.3.6.1.4.1.19808.2.102.6

Old state of the component.

newState

1.3.6.1.4.1.19808.2.102.7

New state of the component.

component

1.3.6.1.4.1.19808.2.102.8

ID of the component. This can be a service or RA entity name.

Alarm
Name OID Description

alarmID

1.3.6.1.4.1.19808.2.102.10

Alarm ID.

alarmLevel

1.3.6.1.4.1.19808.2.102.11

Alarm level (Critical, Major, Minor, etc).

alarmInstance

1.3.6.1.4.1.19808.2.102.12

Alarm instance ID.

alarmType

1.3.6.1.4.1.19808.2.102.14

Alarm type ID. The value of this depends on the source of the alarm. For example, a failed connection alarm from the DB Query RA would have a value like dbquery.ds.failure (as in the sample trap below).

Tracer
Name OID Description

traceLevel

1.3.6.1.4.1.19808.2.102.50

Tracer level (Error, Warning, Info, …​).

Usage parameter notification (Rhino Statistics)
Name OID Description

usageName

1.3.6.1.4.1.19808.2.102.60

Usage parameter name, one per parameter in the parameter set.

usageSetName

1.3.6.1.4.1.19808.2.102.61

Parameter set name.

usageValue

1.3.6.1.4.1.19808.2.102.62

Value of the usage parameter at the moment the notification was generated.

Logging
Name OID Description

logName

1.3.6.1.4.1.19808.2.102.70

Log key.

logLevel

1.3.6.1.4.1.19808.2.102.71

Log level (ERROR, WARN, INFO, …​).

logThread

1.3.6.1.4.1.19808.2.102.72

Thread the message was logged from.

Log file rollover
Name OID Description

oldFile

1.3.6.1.4.1.19808.2.102.80

The log file that was rolled over.

newFile

1.3.6.1.4.1.19808.2.102.81

The new name of the log file.

A sample SNMP trap for an alarm follows. The OctetString values are text strings containing the alarm notification data.

Simple Network Management Protocol
    version: v2c (1)
    community: public
    data: snmpV2-trap (7)
        snmpV2-trap
            request-id: 1760530310
            error-status: noError (0)
            error-index: 0
            variable-bindings: 11 items
                1.3.6.1.2.1.1.3.0: 559356
                    Object Name: 1.3.6.1.2.1.1.3.0 (iso.3.6.1.2.1.1.3.0)
                    Value (Timeticks): 559356
                1.3.6.1.6.3.1.1.4.1.0: 1.3.6.1.4.1.19808.2.101.1 (iso.3.6.1.4.1.19808.2.101.1) 1
                    Object Name: 1.3.6.1.6.3.1.1.4.1.0 (iso.3.6.1.6.3.1.1.4.1.0)
                    Value (OID): 1.3.6.1.4.1.19808.2.101.1 (iso.3.6.1.4.1.19808.2.101.1)
                1.3.6.1.4.1.19808.2.102.1: 44617461536f7572636520686173206661696c6564  2
                1.3.6.1.4.1.19808.2.102.2: 6a617661782e736c65652e6d616e6167656d656e742e616c...
                    Object Name: 1.3.6.1.4.1.19808.2.102.2 (iso.3.6.1.4.1.19808.2.102.2)3
                    Value (OctetString): 6a617661782e736c65652e6d616e6167656d656e742e616c...
                1.3.6.1.4.1.19808.2.102.3: 34 4
                1.3.6.1.4.1.19808.2.102.4: 31343634313535323331333630 5
                1.3.6.1.4.1.19808.2.102.5: 5b3130315d 6
                1.3.6.1.4.1.19808.2.102.9: 5241456e746974794e6f74696669636174696f6e5b656e74... 7
                1.3.6.1.4.1.19808.2.102.10: 3130313a313836363934363631333134353632 8
                1.3.6.1.4.1.19808.2.102.11: 4d616a6f72 9
                1.3.6.1.4.1.19808.2.102.12: 6c6f63616c686f737420284f7261636c6529 10
                1.3.6.1.4.1.19808.2.102.14: 646271756572792e64732e6661696c757265 11
1 Notification trap type OID: 1.3.6.1.4.1.19808.2.101.1 (Alarm)
2 Message: "DataSource has failed"
3 Notification type: javax.slee.management.alarm.raentity
4 Sequence number: 34
5 Timestamp: 1464155231360
6 Node IDs: [101]
7 Source: RAEntityNotification[entity=dbquery-0]
8 Alarm ID: 101:186694661314562
9 Alarm level: Major
10 Alarm instance: localhost (Oracle)
11 Alarm type: dbquery.ds.failure
Note
Log notification appender

Rhino 2.2 introduced a log notification appender for use with the SNMP agent. This appender creates notifications for all log messages at or above its configured threshold; these notifications are in turn forwarded by the SNMP agent to the designated host/port. It is intended as a catch-all for errors and warnings that don’t have specific alarms associated with them.

OID hierarchy

The Rhino SNMP agent uses the following OIDs. (All statistics and notifications that it provides use these OIDs as a base.)

 .1.3.6.1.4.1.19808

OpenCloud Enterprise OID

 .1.3.6.1.4.1.19808.2

Rhino

 .1.3.6.1.4.1.19808.2.1

Rhino Statistics

 .1.3.6.1.4.1.19808.2.101

Rhino Notifications

 .1.3.6.1.4.1.19808.2.102

Rhino Notification VarBinds

Exporting MIB Files

To export MIBs, use the exportmibs Rhino management console command.

Note

Management Information Base (MIB) files contain descriptions of the OID hierarchy that SNMP uses to interact with statistics and notifications.

exportmibs

Command

exportmibs <dir> [-maxpathlen <len>]
  Description
    Exports current SNMP statistics configuration as MIB files to the specified
    directory. The -maxpathlen option can be used to limit the maximum path name
    lengths of MIB files generated as some SNMP tools like snmpwalk can fail to read
    files with very long paths. The default maximum path length is 300.
Note The exported MIB files represent the current configuration of Rhino’s statistic parameter sets. They may change when components (such as services) are deployed or undeployed.

SNMP management clients usually provide a tool for using or importing the information from MIBs.

Example

[Rhino@localhost (#1)] exportmibs mibs
Writing MIB exports to: /home/user/rhino/client/bin/mibs
40 MIBs exported.
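
Net-SNMP-based clients can then be pointed at the exported directory. An illustrative query, where -M + adds the directory to the MIB search path and -m ALL loads every MIB found there:

$ snmpwalk -v 2c -c public -M +/home/user/rhino/client/bin/mibs -m ALL localhost:16100 1.3.6.1.4.1.19808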

Configuring and Managing the Rhino SNMP Agent

This section and its subsections list the Rhino management console commands for managing the Rhino SNMP agent, with command explanations and examples.

Command Description

Managing SNMP status

snmpstatus

Shows the status of the SNMP agent.

activatesnmp

Activates the SNMP agent.

deactivatesnmp

Deactivates the SNMP agent.

restartsnmp

Restarts the SNMP agent.

Configuring authentication details

setsnmpcommunity

Sets the SNMP community.

setsnmpuserdetails

Sets the SNMP v3 user authentication details.

setsnmpuserengineid

Sets the engine ID user portion.

getsnmpuserengineid

Gets the engine ID user portion.

listsnmpengineids

Lists the engine IDs for all nodes.

enablesnmpversion

Enables the SNMP version.

disablesnmpversion

Disables the SNMP version.

Configuring port and interface bindings

setsnmpsubnet

Sets the default SNMP subnet.

setsnmpportrange

Sets the default SNMP port range.

setloopbackaddressesallowed

Allows loopback addresses.

setaddressbindingssaved

Saves the address bindings.

setportbindingssaved

Saves the port bindings.

Setting SNMP system information

setsnmpdetails

Sets all SNMP text strings, such as the name and description strings.

Configuring SNMP notifications

enablesnmpnotifications

Enables SNMP notifications.

disablesnmpnotifications

Disables SNMP notifications.

addsnmptarget

Adds a notification destination.

removesnmptarget

Removes a notification destination.

listsnmpnotificationtypes

Lists notification types.

setsnmpnotificationenabled

Filters notification delivery.

Managing per-node state

clearsnmpsavedconfig

Clears the saved per-node SNMP configuration for the specified nodes.

setsnmpsavedconfig

Sets the saved address and port configuration used by a node.

Configuring OID Registration Model Support

enableoidregistrationbehaviour

Enables the specified OID registration model.

disableoidregistrationbehaviour

Disables the specified OID registration model.

Viewing SNMP Static OIDs

listsnmpstaticoids

Lists the current statically declared OIDs.

listsnmpstaticoidcountersuffixes

Lists the OID suffix for each counter statistic, with relevant parameter set type information.

listsnmpstaticoidsuffixmappings

Lists the statically declared OID suffix for each profile table and resource adaptor entity.

auditsnmpstaticoids

Audits all verified components to check for missing SNMP static OID suffixes and aliases used for generating SNMP MIB definitions.

Configuring SNMP Legacy OID mappings

listsnmpoidmappings

Lists the current configurable parameter set types and their OID mappings.

setsnmpoidmapping

Sets or clears the OID used for the specified parameter set type.

createsnmpmappingconfig

Creates a new SNMP mapping configuration for the specified parameter set type.

removesnmpmappingconfig

Removes the SNMP mapping configuration for the specified parameter set type.

removeinactivesnmpmappingconfigs

Removes all SNMP mapping configurations that are currently inactive.

Configuring SNMP Legacy Counter Mappings

listsnmpcountermappings

Lists the index mapping of each counter of the specified parameter set type.

setsnmpcountermapping

Sets or clears the SNMP counter mappings.

Removing the Log Notification Appender

removeappenderref

Removes an appender for a log key.

Managing the Status of the SNMP Agent

To activate, deactivate, or restart the SNMP agent, or to check its current status, use the following Rhino management console commands.

snmpstatus

Command

snmpstatus
  Description
    Provides an overview of all current SNMP agent state including the current SNMP
    defaults, per-node SNMP agent state, and saved per-node SNMP configuration.

Example

The initial status for a freshly installed single-node cluster looks like this:

Output What it means
SNMP Status
============

Agent status: enabled

The current state of the Rhino SNMP Agent.

This is the "desired" state of the agent, not the actual state of the SNMP agents for each cluster node. Individual SNMP agents on different nodes may be in a different state; for example, an agent may fail to reach the desired state if it encounters a failure during startup (such as a port being unavailable).

Agent details:
  Name: Node ${node-id}
  Description: OpenCloud Rhino
  Location: Unknown location
  Contact: OpenCloud Support (support@opencloud.com)

Descriptions for the SNMP agent when queried, corresponding to the sysName, sysDescr, sysLocation, and sysContact variables from the SNMPv2 MIB.

Any system property reference (in the form ${property.name}) will be replaced, on a per-node basis, so node-specific information can be included (if required).

The ${node-id} property is special and will be substituted with the node ID of the node the agent is running on.

Enabled OID registration behaviours: legacy-oids, static-oids

The OID registration models currently enabled.

Enabled versions: v1, v2c

The versions of SNMP currently enabled.

SNMP v1/v2c details:
  Community: public

The current configuration for SNMP v1/v2c.

SNMP v3 details:
  Username:         rhino
  Auth Protocol:    SHA
  Auth Key:         password
  Privacy Protocol: AES128
  Privacy Key:      password
  EngineBoots:      3
  EngineTime:       814

The current configuration for SNMP v3. This configuration is used for both statistics collection and notification delivery.

Network:
  Default subnet: 0.0.0.0/0
  Default port range: 16100-16200
  Loopback addresses: ignored
  Address bindings: not saved
  Port bindings: saved

Indicates how the SNMP agent responds to automatic port and interface discovery for newly booted Rhino nodes.

 Notifications: disabled

Indicates whether the SNMP agent is to generate notifications.

Notification configuration
==========================
  Notification targets:
    v2c:udp:127.0.0.1/162

Indicates the targets to which the SNMP agent sends notifications (if enabled and not filtered).

AlarmNotification
 forwarded
LogNotification
 forwarded
LogRolloverNotification
 forwarded
ResourceAdaptorEntityStateChangeNotification
 forwarded
ServiceStateChangeNotification
 forwarded
SleeStateChangeNotification
 forwarded
TraceNotification
 forwarded
UsageNotification
 forwarded

Indicates the types of notifications the SNMP agent is to send.

Saved per-node configuration
=============================
  101 <default>:16100

The persisted per-node state of the SNMP system; used to determine what ports/interfaces the SNMP agent for each node should bind to on startup.

Per-node agent status
======================
  101 Running 192.168.0.7:16100

The state of the SNMP agent on each node; includes the actual state of the agent, which may differ from the persisted or desired state.

activatesnmp

Command

activatesnmp
  Description
    Activates the Rhino SNMP agent.

Example

[Rhino@localhost (#1)] activatesnmp
Rhino SNMP agent enabled.

deactivatesnmp

Command

deactivatesnmp
  Description
    Deactivates the Rhino SNMP agent.

Example

[Rhino@localhost (#1)] deactivatesnmp
Rhino SNMP agent disabled.

restartsnmp

Command

restartsnmp
  Description
    Deactivates (if required) and then reactivates the Rhino SNMP agent.

Example

[Rhino@localhost (#1)] restartsnmp
Stopped SNMP agent.
Starting SNMP agent.
Rhino SNMP agent restarted.
Note If Rhino cannot start the SNMP agent successfully, it raises alarms. For details, use the listactivealarms management console command to check active alarms.

Configuring Authentication Details

To configure authentication details for accessing Rhino’s SNMP subsystem, use the following rhino-console commands.

Unlike SNMP v1 and v2c, which use only a named "community" for identification, SNMP v3 requires password authentication to connect to the Rhino SNMP agent.

The SNMP engine ID is used to distinguish SNMP engine instances within an administrative domain. If you manage multiple Rhino clusters, the engine ID will need to be configured. Use of engine IDs is defined in the SNMP v3 specification; v1 and v2c clients are not able to process this information.

setsnmpcommunity

Command

setsnmpcommunity <community>
  Description
    Sets the SNMP community.

setsnmpuserdetails

Command

setsnmpuserdetails <username> <authenticationProtocol> <authenticationKey>
<privacyProtocol> <privacyKey>
  Description
    Sets the SNMP v3 user and authentication details.

setsnmpuserengineid

Command

setsnmpuserengineid <hex string>
  Description
    Sets the user configurable portion of the SNMP LocalEngineID.
Note This sets only a user-configurable portion of the engine ID, not the entire engine ID. The SNMP agent calculates the full engine ID from the OpenCloud enterprise OID, the node ID, and the user-configurable string.
Warning You need to change the user-configurable string only if the SNMP agent needs to distinguish between different clusters on the same network.

getsnmpuserengineid

Command

getsnmpuserengineid
  Description
    Returns the user configurable portion of the SNMP LocalEngineID.

listsnmpengineids

Command

listsnmpengineids
  Description
    Lists the SNMP EngineIDs for each node.
Note Each node returns a different but similar engine ID.

enablesnmpversion

Command

enablesnmpversion <v1|v2c|v3>
  Description
    Enables support for the specified SNMP protocol version.

disablesnmpversion

Command

disablesnmpversion <v1|v2c|v3>
  Description
    Disables support for the specified SNMP protocol version.
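
For example, a hypothetical sequence that sets v3 user credentials (matching the defaults shown by snmpstatus), enables the v3 protocol, and restarts the agent to apply the change:

[Rhino@localhost (#1)] setsnmpuserdetails rhino SHA password AES128 password
[Rhino@localhost (#2)] enablesnmpversion v3
[Rhino@localhost (#3)] restartsnmp

A Net-SNMP client could then verify v3 access with a query such as:

$ snmpget -v 3 -u rhino -l authPriv -a SHA -A password -x AES -X password localhost:16100 1.3.6.1.2.1.1.5.0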

Configuring Port and Interface Bindings

To manage port and interface bindings when a new Rhino node joins the cluster, use the following rhino-console commands to configure the Rhino SNMP system defaults for the subnet and port range, whether loopback addresses are allowed, and whether address and port bindings are saved.

Warning These settings only affect nodes which don’t have previously saved interface/port configuration settings. If a node has previously saved settings, it will attempt to use those values. If it cannot (for example, if the port is in use by another application), then the SNMP agent will not start on that node.

Any changes to these settings will require a restart of the SNMP agent to take effect.

setsnmpsubnet

Command

setsnmpsubnet <x.x.x.x/y>
  Description
    Sets the default subnet used by the Rhino SNMP agent when initially determining
    addresses to bind to.
Note Leaving this as 0.0.0.0/0 will bind to all interfaces present on a host, except those excluded by other options (see below). This option is primarily intended for use in situations where a host has multiple interfaces of which only one is intended for SNMP use.

setsnmpportrange

Command

setsnmpportrange <low port> <high port>
  Description
    Sets the default port range used by the Rhino SNMP agent.
Note This setting is intended for hosts which run multiple Rhino nodes.

setloopbackaddressesallowed

Command

setloopbackaddressesallowed <true|false>
  Description
    Specifies whether loopback interfaces will be considered (true) or not (false) when
    binding the SNMP agent to addresses. This setting will be ignored if only
    loopback addresses are available.

setaddressbindingssaved

Command

setaddressbindingssaved <true|false>
  Description
    Specifies whether the address bindings used by the SNMP agent for individual
    Rhino nodes are persisted in the Rhino configuration.
Note

This is set to false by default to allow seamless operation on machines which may have changing interface settings.

If set to true, the first interface(s) that the SNMP agent for a node binds to will be saved as per-node configuration, and will be reused on subsequent restarts.

setportbindingssaved

Command

setportbindingssaved <true|false>
  Description
    Specifies whether the port bindings used by the SNMP agent for individual Rhino
    nodes are persisted in the Rhino configuration.
Note This is set to true by default, to ensure that the SNMP agent for individual nodes retains the same port across restarts.
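
For example, a hypothetical sequence that restricts the agent to one subnet, narrows the port range, and restarts the agent so the new defaults take effect:

[Rhino@localhost (#1)] setsnmpsubnet 192.168.0.0/24
[Rhino@localhost (#2)] setsnmpportrange 16100 16150
[Rhino@localhost (#3)] restartsnmp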

Setting SNMP System Information

To set SNMP system information, use the following rhino-console command.

Note Each Rhino SNMP agent exposes the standard SNMPv2-MIB system variables (sysName, sysDescr, sysLocation, and sysContact).

setsnmpdetails

Command

setsnmpdetails <name> <description> <location> <contact>
  Description
    Sets all SNMP text strings (name, description, location, contact).
Note The values provided will be used for all nodes in the cluster; no per-node configuration exists for these settings.
Tip If you need different settings for individual agents, use system property references, in the form: ${property.name}. These substitute for their associated value, on a per-node basis.
Warning The ${node-id} property is synthetic and not a real system property — it will be replaced by the node ID of the Rhino node each agent is running in.
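
For example, a hypothetical invocation that embeds the node ID in each agent’s sysName (all argument values here are illustrative):

[Rhino@localhost (#1)] setsnmpdetails "Node ${node-id}" "Rhino TAS" "Data centre A" "ops@example.com"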

Configuring SNMP Notifications

To enable, disable, and specify where to send SNMP notifications, use the following rhino-console commands.

enablesnmpnotifications

Command

enablesnmpnotifications
  Description
    Enables SNMP notification sending (while the SNMP agent is active).

disablesnmpnotifications

Command

disablesnmpnotifications
  Description
    Disables SNMP notification sending.

addsnmptarget

Command

addsnmptarget <v2c|v3> <address>
  Description
    Adds the target address for SNMP notifications.

Example

To send version v2c notifications to 127.0.0.1 port 162:

[Rhino@localhost (#1)] addsnmptarget v2c udp:127.0.0.1/162
Added SNMP notifications target: v2c:udp:127.0.0.1/162
Note Specify addresses as udp:address/port. Currently only the udp transport is supported.
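
When testing notification delivery, a local trap receiver such as Net-SNMP’s snmptrapd can be run on the target address. A sketch (the --disableAuthorization=yes option accepts traps from any community, so use it only for testing):

$ snmptrapd -f -Lo --disableAuthorization=yes udp:127.0.0.1:162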

removesnmptarget

Command

removesnmptarget <target>
  Description
    Removes the specified SNMP notification target.

Example

To remove a target:

[Rhino@localhost (#1)] removesnmptarget v2c:udp:127.0.0.1/162
Removed SNMP notifications target: v2c:udp:127.0.0.1/162

listsnmpnotificationtypes

Command

listsnmpnotificationtypes
  Description
    Lists the notification types supported for SNMP notification type filtering.

Example

To list notification types:

[Rhino@localhost (#1)] listsnmpnotificationtypes
Supported SNMP Notification types:
  AlarmNotification
  LogNotification
  LogRolloverNotification
  ResourceAdaptorEntityStateChangeNotification
  ServiceStateChangeNotification
  SleeStateChangeNotification
  TraceNotification
  UsageNotification

setsnmpnotificationenabled

Command

setsnmpnotificationenabled <type> <true|false>
  Description
    Specifies whether the notification type should be forwarded by the SNMP
    subsystem.

Example

To disable SleeStateChangeNotification delivery:

[Rhino@localhost (#1)] setsnmpnotificationenabled SleeStateChangeNotification false
SNMP notifications for type 'SleeStateChangeNotification' are now disabled.
Warning These settings have no effect if SNMP notification delivery is disabled globally.

The notification types that can be configured to generate SNMP traps are:

  • AlarmNotification

  • LogNotification

  • LogRolloverNotification

  • ResourceAdaptorEntityStateChangeNotification

  • ServiceStateChangeNotification

  • SleeStateChangeNotification

  • TraceNotification

  • UsageNotification

Frequently, only AlarmNotification and UsageNotification traps are wanted. In other deployments, ResourceAdaptorEntityStateChangeNotification, ServiceStateChangeNotification, and SleeStateChangeNotification are also useful.

Warning LogNotification and, to a lesser degree, TraceNotification can cause performance degradation due to the additional platform load they generate.
Note
Notification configuration and snmpstatus

Below is an example of output from the snmpstatus command showing notification configuration.

Notification configuration
==========================
Notification targets:
v2c:udp:127.0.0.1/162

AlarmNotification                                  forwarded
LogNotification                                    forwarded
LogRolloverNotification                            forwarded
ResourceAdaptorEntityStateChangeNotification       forwarded
ServiceStateChangeNotification                     forwarded
SleeStateChangeNotification                        forwarded
TraceNotification                                  forwarded
UsageNotification                                  forwarded

Thread pooling

By default, Rhino uses a thread pool for SNMP notifications. The pool can be configured by setting the notifications.notification_threads system property.

Value Effect

0

Disables notification thread pooling; notifications are delivered on the caller’s thread.

1

Default: a single dedicated thread for notification delivery.

>1

A thread pool of N threads for notification delivery.
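
System properties such as this are passed to the Rhino JVM as -D options; for example, a hypothetical setting that uses a pool of four delivery threads (where to place the option depends on your installation’s start scripts):

-Dnotifications.notification_threads=4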

Managing Per-Node State

To clear or modify the saved per-node SNMP configuration, use the following rhino-console commands.

Note
Viewing the saved per-node state

Saved per-node state displays in the output of the snmpstatus command. For example:

Saved per-node configuration
=============================
101 <default>:16100

Here node 101 has a saved address/port binding of <default>:16100. The special value <default> means the defaults from the options described earlier are used, rather than a fixed address. The port, however, remains fixed at 16100 for node 101.

clearsnmpsavedconfig

Command

clearsnmpsavedconfig <node1,node2,...|all>
  Description
    Clears saved per-node SNMP configuration for the specified nodes (or all nodes).

Example

To clear the saved configuration for node 101:

[Rhino@localhost (#1)] clearsnmpsavedconfig 101
Per-node SNMP configurations cleared for nodes: 101

setsnmpsavedconfig

Command

setsnmpsavedconfig <node-id> <addresses|default> <port|default>
  Description
    Sets the saved address and port configuration used by a node.
Warning This will overwrite any existing configuration for the specified node.
Tip This command can also be used to create saved configuration for nodes that are not currently cluster members. The special value <default> may be specified for the address or port, meaning the default is determined on restart.

Example

To set the SNMP agent’s address and port for node 101:

[Rhino@localhost (#1)] setsnmpsavedconfig 101 localhost 16100
SNMP configuration for node 101 updated.

Configuring OID Registration Model Support

To activate or deactivate the supported SNMP OID registration models, use the following Rhino management console commands.

Note

The output of the snmpstatus command shows the currently enabled OID registration models. For example:

SNMP Status
============

...

Agent details:
 ...
 Enabled OID registration behaviours: legacy-oids, static-oids

Here both the legacy OID model and the static OID model are active.

enableoidregistrationbehaviour

Command

enableoidregistrationbehaviour <legacy-oids|static-oids>
  Description
    Enable support for the specified OID registration behaviour

Example

To enable the static OID registration model, use the following command:

[Rhino@localhost (#1)] enableoidregistrationbehaviour static-oids

disableoidregistrationbehaviour

Command

disableoidregistrationbehaviour <legacy-oids|static-oids>
  Description
    Disable support for the specified OID registration behaviour

Example

To disable the legacy OID registration model, use the following command:

[Rhino@localhost (#1)] disableoidregistrationbehaviour legacy-oids

Viewing SNMP Static OIDs

To list statically declared OIDs, counter OID suffixes, or OID suffix mappings, or to audit static OIDs, use the following Rhino management console commands.

listsnmpstaticoids

Command

listsnmpstaticoids [parameter set type name|-inactive]
  Description
    Lists the current statically-declared OIDs registered for parameter set types.
    If a parameter set type name is specified, then only the OID registered for that
    parameter set type is listed (if any), otherwise all registered OIDs are listed.
     The -inactive option limits the listing to only inactive registered OIDs.

Example

[Rhino@localhost (#1)] listsnmpstaticoids
Parameter Set Type OIDs
=======================
 OID                                                Status       Namespace            Parameter Set Type
-----------------------------------------------------------------------------------------------------------------------------
 1.3.6.1.4.1.19808.20.2.1.1.1.1.1.1                 (active)     <default>            SIP.ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=3.1.0].sipra.ExecutorStats
 1.3.6.1.4.1.19808.20.2.1.1.1.2.1                   (active)     <default>            SIP.ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=3.1.0].sipra.TransportStats
 1.3.6.1.4.1.19808.20.2.1.1.1.2.2                   (active)     <default>            SIP.ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=3.1.0].sipra.TransactionStats
 1.3.6.1.4.1.19808.20.2.1.1.1.2.3                   (active)     <default>            SIP.ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=3.1.0].sipra.WorkerPoolStats
 1.3.6.1.4.1.19808.20.2.1.1.1.2.4                   (active)     <default>            SIP.ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=3.1.0].sipra.BigGroupStats
 1.3.6.1.4.1.19808.20.2.3.1.1.1.1.3                 (active)     <default>            CGIN.ResourceAdaptorID[name=CGIN Unified RA,vendor=OpenCloud,version=3.1.0].cginra.TimingWheelExecutorStats
 1.3.6.1.4.1.19808.20.2.3.1.1.1.2.1.1               (active)     <default>            CGIN.ResourceAdaptorID[name=CGIN Unified RA,vendor=OpenCloud,version=3.1.0].cginra.FiberPoolStats
 1.3.6.1.4.1.19808.20.2.3.1.1.1.3.1                 (active)     <default>            CGIN.ResourceAdaptorID[name=CGIN Unified RA,vendor=OpenCloud,version=3.1.0].cginra.ConnectionStats
 1.3.6.1.4.1.19808.20.2.3.1.1.2.1                   (active)     <default>            CGIN.ResourceAdaptorID[name=CGIN Unified RA,vendor=OpenCloud,version=3.1.0].cginra.InboundStats
 1.3.6.1.4.1.19808.20.2.3.1.1.2.2                   (active)     <default>            CGIN.ResourceAdaptorID[name=CGIN Unified RA,vendor=OpenCloud,version=3.1.0].cginra.TCAPStats
 1.3.6.1.4.1.19808.20.2.3.1.1.2.3                   (active)     <default>            CGIN.ResourceAdaptorID[name=CGIN Unified RA,vendor=OpenCloud,version=3.1.0].cginra.SCCPStats
 1.3.6.1.4.1.19808.20.2.3.1.1.2.4                   (active)     <default>            CGIN.ResourceAdaptorID[name=CGIN Unified RA,vendor=OpenCloud,version=3.1.0].cginra.SUAStats
 1.3.6.1.4.1.19808.20.2.3.1.1.2.5                   (active)     <default>            CGIN.ResourceAdaptorID[name=CGIN Unified RA,vendor=OpenCloud,version=3.1.0].cginra.SCTPStats

listsnmpstaticoidcountersuffixes

Command

listsnmpstaticoidcountersuffixes [parameter set type name]
  Description
    Lists the current Parameter Set Type + Counter Name -> OID suffix mappings.  If
    a parameter set type name is specified, then only the counters associated with
    that parameter set type are listed, otherwise all parameter set types are
    listed.

Example

[Rhino@localhost (#1)] listsnmpstaticoidcountersuffixes SIP.ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=3.1.0].sipra.ExecutorStats
Counter OID Suffixes
====================
 Parameter Set Type                                                                       Namespace OID Suffix Counter Name
-------------------------------------------------------------------------------------------------------------------------------------
 SIP.ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=3.1.0].sipra.ExecutorStats     <default>        102 executorTasksSubmitted
 SIP.ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=3.1.0].sipra.ExecutorStats     <default>        103 executorTasksExecuted
 SIP.ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=3.1.0].sipra.ExecutorStats     <default>        104 executorTasksRejected
 SIP.ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=3.1.0].sipra.ExecutorStats     <default>        105 executorTasksWaiting
 SIP.ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=3.1.0].sipra.ExecutorStats     <default>        106 executorTasksExecuting
 SIP.ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=3.1.0].sipra.ExecutorStats     <default>        107 executorThreadsTotal
 SIP.ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=3.1.0].sipra.ExecutorStats     <default>        108 executorThreadsIdle

listsnmpstaticoidsuffixmappings

Command

listsnmpstaticoidsuffixmappings [oid-suffix-mapping-id]
  Description
    Lists the current statically-declared OID suffix mappings for profile tables and
    resource adaptor entities that have been installed into the SLEE

Example

[Rhino@localhost (#1)] listsnmpstaticoidsuffixmappings
Profile Table Name Mappings
===========================
 Table Name                             OID Suffix Specified by
-----------------------------------------------------------------------------------------------------------------------------------------------
 PromotionsTable                        1          OIDSuffixMappingDescriptorID[name=sentinel-core-oid-mappings,vendor=OpenCloud,version=4.1.0]
 Rocket_FeatureExecutionScriptTable     1          OIDSuffixMappingDescriptorID[name=sentinel-core-oid-mappings,vendor=OpenCloud,version=4.1.0]

Resource Adaptor Entity Name Mappings
=====================================
 Entity Name                      OID Suffix Specified by
------------------------------------------------------------------------------------------------------------------------------------------
 cassandra-general                2          OIDSuffixMappingDescriptorID[name=sentinel-volte-oid-mappings,vendor=OpenCloud,version=4.1.0]
 cassandra-third-party-reg        1          OIDSuffixMappingDescriptorID[name=sentinel-sip-oid-mappings,vendor=OpenCloud,version=4.1.0]
 cdr                              1          OIDSuffixMappingDescriptorID[name=sentinel-core-oid-mappings,vendor=OpenCloud,version=4.1.0]
 cginmapra                        2          OIDSuffixMappingDescriptorID[name=sentinel-ss7-oid-mappings,vendor=OpenCloud,version=4.1.0]
 cginra                           1          OIDSuffixMappingDescriptorID[name=sentinel-ss7-oid-mappings,vendor=OpenCloud,version=4.1.0]
 correlation-ra                   1          OIDSuffixMappingDescriptorID[name=sentinel-volte-oid-mappings,vendor=OpenCloud,version=4.1.0]
 dbquery-0                        1          OIDSuffixMappingDescriptorID[name=sentinel-core-oid-mappings,vendor=OpenCloud,version=4.1.0]
 diameter-sentinel-internal       2          OIDSuffixMappingDescriptorID[name=sentinel-core-oid-mappings,vendor=OpenCloud,version=4.1.0]
 diameterro-0                     1          OIDSuffixMappingDescriptorID[name=sentinel-core-oid-mappings,vendor=OpenCloud,version=4.1.0]
 etcari-correlation-ra            2          OIDSuffixMappingDescriptorID[name=sentinel-core-oid-mappings,vendor=OpenCloud,version=4.1.0]
 http                             1          OIDSuffixMappingDescriptorID[name=sentinel-core-oid-mappings,vendor=OpenCloud,version=4.1.0]
 imssf-cdr                        2.13       OIDSuffixMappingDescriptorID[name=sentinel-volte-oid-mappings,vendor=OpenCloud,version=4.1.0]
 imssf-diameterro                 2.14       OIDSuffixMappingDescriptorID[name=sentinel-volte-oid-mappings,vendor=OpenCloud,version=4.1.0]
 imssf_management                 2.15       OIDSuffixMappingDescriptorID[name=sentinel-volte-oid-mappings,vendor=OpenCloud,version=4.1.0]
 reorigination-correlation-ra     3          OIDSuffixMappingDescriptorID[name=sentinel-core-oid-mappings,vendor=OpenCloud,version=4.1.0]
 rf-control-ra                    1          OIDSuffixMappingDescriptorID[name=sentinel-core-oid-mappings,vendor=OpenCloud,version=4.1.0]
 sentinel-management              1          OIDSuffixMappingDescriptorID[name=sentinel-core-oid-mappings,vendor=OpenCloud,version=4.1.0]
 sh-cache-microservice-ra         1          OIDSuffixMappingDescriptorID[name=sentinel-core-oid-mappings,vendor=OpenCloud,version=4.1.0]
 sip-sis-ra                       1          OIDSuffixMappingDescriptorID[name=sentinel-sip-oid-mappings,vendor=OpenCloud,version=4.1.0]
 sipra                            1          OIDSuffixMappingDescriptorID[name=sentinel-core-oid-mappings,vendor=OpenCloud,version=4.1.0]
 sis-in                           1          OIDSuffixMappingDescriptorID[name=sentinel-ss7-oid-mappings,vendor=OpenCloud,version=4.1.0]
 sis-map                          2          OIDSuffixMappingDescriptorID[name=sentinel-ss7-oid-mappings,vendor=OpenCloud,version=4.1.0]
 uid                              1          OIDSuffixMappingDescriptorID[name=sentinel-core-oid-mappings,vendor=OpenCloud,version=4.1.0]

auditsnmpstaticoids

Command

auditsnmpstaticoids [-o resultfile.json] [-includemissingbaseoid]
  Description
    Audits components for missing static OID suffixes and SNMP MIB definition
    aliases. The -includemissingbaseoid option permits auditing components that have
    no base OID set.

What it does

Audits all verified components to check for missing SNMP static OID suffixes and aliases used for generating SNMP MIB definitions. All components that have a parent with a statically assigned base OID are checked for the presence of OID suffixes for the component and all referenced parameter set types and their counters.

Example

[Rhino@localhost (#1)] auditsnmpstaticoids
Result:
Resource Adaptors
=================
No audit failures present for this component type

Profiles
========
No audit failures present for this component type

Services
========
No audit failures present for this component type


Result
======
 Status: Success


Audit successful. There are no missing static OID suffixes or stats-presentation aliases.

listsnmpstaticmappingsbycomponent

Command

listsnmpstaticmappingsbycomponent  [component-id|-static]
  Description
    Lists the statically-declared OIDs and stats aliases registered for each
    component. Components that do not support a statically declared OID are omitted.
    The component-id option limits the displayed SNMP static mappings to just that
    component. The -static option limits the listing to only components that have a
    set static OID.

What it does

Lists all verified components that may have an OID part and MIB alias set for them. Each component’s assigned OID part and MIB alias is displayed in the output table. The OID part displayed may be an OID suffix or a base OID.

Example

[Rhino@localhost (#1)] listsnmpstaticmappingsbycomponent
 Component                                                                               Namespace  Static OID                         Alias
---------------------------------------------------------------------------------------- -----------------------------------------------------------------------
 ResourceAdaptorID[name=SIS-SIP/EasySIP RA,vendor=OpenCloud,version=3.2]                 <default>  1.3.6.1.4.1.19808.20.2.4           SIPSIS
 ResourceAdaptorID[name=UniqueID RA,vendor=OpenCloud,version=4.1.0-TRUNK]                <default>  -- unset --                        -- unset --
 ResourceAdaptorID[name=cassandra-cql-ra,vendor=OpenCloud,version=2.1.0-TRUNK]           <default>  1.3.6.1.4.1.19808.20.2.7           CQLRA
 ResourceAdaptorID[name=rf-control-ra,vendor=OpenCloud,version=2.1.0-TRUNK]              <default>  1.3.6.1.4.1.19808.20.5.1.1.3.3     RFControl
 ResourceAdaptorID[name=sentinel.management.ra,vendor=OpenCloud,version=4.1.0-TRUNK]     <default>  -- unset --                        -- unset --
 ResourceAdaptorID[name=sh-cache-microservice-ra,vendor=OpenCloud,version=4.1.0-TRUNK]   <default>  1.3.6.1.4.1.19808.20.5.10.10.3.1   SHCMRA
 SbbID[name=CallLeg,vendor=OpenCloud,version=trunk.0]                                    <default>  1                                  IMSSFCallLeg
 SbbID[name=Charging,vendor=OpenCloud,version=trunk.0]                                   <default>  2                                  IMSSFCharging
 SbbID[name=ExternalSessionTracking,vendor=OpenCloud,version=trunk.0]                    <default>  3                                  ExternalSessionTracking
 SbbID[name=IDP_Builder,vendor=OpenCloud,version=trunk.0]                                <default>  9                                  InitialDPBuilder

Configuring SNMP Legacy OID Mappings

Note

The commands described here relate only to the legacy OID model and not the static OID model introduced in Rhino 3.1.

To list, set, create, remove, or remove inactive OID mappings for parameter set types, use the following Rhino management console commands.

listsnmpoidmappings

Command

listsnmpoidmappings [parameter set type name|-inactive]
  Description
    Lists the current configurable Parameter Set Type -> OID mappings.  If a
    parameter set type name is specified, then only the OID mapping associated with
    that parameter set type is listed, otherwise all mappings are listed.  The
    -inactive option limits the listing to only inactive mappings.

Example

[Rhino@localhost (#1)] listsnmpoidmappings
Parameter Set Type OID Mappings
===============================
 OID                        Status     Namespace   Parameter Set Type
------------------------------------------------------------------------------------------
 1.3.6.1.4.1.19808.2.1.1    (active)               Events
 1.3.6.1.4.1.19808.2.1.2    (active)               Activities
 1.3.6.1.4.1.19808.2.1.3    (active)               StagingThreads
 1.3.6.1.4.1.19808.2.1.4    (active)               LockManagers
 1.3.6.1.4.1.19808.2.1.5    (active)               Services
 1.3.6.1.4.1.19808.2.1.6    (active)               Transactions
 1.3.6.1.4.1.19808.2.1.7    (active)               ObjectPools
 1.3.6.1.4.1.19808.2.1.9    (active)               MemDB-Local
 1.3.6.1.4.1.19808.2.1.10   (active)               MemDB-Replicated
 1.3.6.1.4.1.19808.2.1.12   (active)               LicenseAccounting
 1.3.6.1.4.1.19808.2.1.13   (active)               ActivityHandler
 1.3.6.1.4.1.19808.2.1.14   (active)               JVM
 ...

setsnmpoidmapping

Command

setsnmpoidmapping [-namespace] <parameter set type name> <-oid
<oid>|-auto|-none>
  Description
    Sets or clears the OID used for the specified parameter set type.  The -oid
    option sets the mapping to a specific OID, while the -auto option auto-assigns
    an available OID.  The -none option clears any existing mapping.
Note If the parameter set type has a default pre-defined OID mapping, -auto will set the OID mapping to the pre-defined value. Otherwise, it will be set to an auto-generated value.

Example

[Rhino@localhost (#1)] setsnmpoidmapping JVM -oid 1.3.6.1.4.1.19808.2.1.14
SNMP OID mapping set to 1.3.6.1.4.1.19808.2.1.14

createsnmpmappingconfig

Command

createsnmpmappingconfig [-namespace] <parameter set type name>
  Description
    Create a new SNMP mapping configuration for the specified parameter set type.
    The mapping is created in the global environment unless the optional -namespace
    argument is used, in which case the mapping is created in the current namespace
    instead.

Example

[Rhino@localhost (#1)] createsnmpmappingconfig -namespace Usage.Services.SbbID[name=UsageTestSbb,vendor=OpenCloud,version=1.0]
SNMP mapping config for parameter set type Usage.Services.SbbID[name=UsageTestSbb,vendor=OpenCloud,version=1.0] created in the current namespace
Note

After creation, the OID mapping for the specified parameter set type is in the cleared state. Use the setsnmpoidmapping command to set the mapping afterwards.

removesnmpmappingconfig

Command

removesnmpmappingconfig <parameter set type name>
  Description
    Remove the SNMP mapping configuration for the specified parameter set type

Example

[Rhino@localhost (#1)] removesnmpmappingconfig Usage.Services.SbbID[name=UsageTestSbb,vendor=OpenCloud,version=1.0]
SNMP mapping config for parameter set type Usage.Services.SbbID[name=UsageTestSbb,vendor=OpenCloud,version=1.0] removed

removeinactivesnmpmappingconfigs

Command

removeinactivesnmpmappingconfigs
  Description
    Removes all SNMP mapping configurations that are currently inactive

Example

[Rhino@localhost (#1)] removeinactivesnmpmappingconfigs
Removing mappings from the default namespace
 removing mapping configuration for parameter set type Usage.Services.SbbID[name=UsageTestSbb,vendor=OpenCloud,version=1.0]

Removed 1 mapping configuration

Configuring SNMP Legacy Counter Mappings

Note

The commands described here relate only to the legacy OID model and not the static OID model introduced in Rhino 3.1.

To list or set the OID mappings for counter statistics, use the following Rhino management console commands.

listsnmpcountermappings

Command

listsnmpcountermappings [parameter set type name]
  Description
    Lists the current Parameter Set Type + Counter Name -> Index mappings.  If a
    parameter set type name is specified, then only the mappings associated with
    that parameter set type are listed, otherwise all mappings are listed.

What it does

Lists the current index mappings of the counter names for parameter set types. The mappings represent SNMP statistics from each parameter set type. Includes three columns of information:

  • The parameter set type name

  • The index assigned to the counter

  • The counter name within the parameter set type

Example

[Rhino@localhost (#1)] listsnmpcountermappings Transactions
Counter Mappings
================
Transactions                              2 active
Transactions                              3 started
Transactions                              4 committed
Transactions                              5 rolledBack

setsnmpcountermapping

Command

setsnmpcountermapping [-namespace] <parameter set type name> <counter name>
<-index <index>|-auto|-none>
  Description
    Sets or clears the index used for the specified parameter set type and counter.
    The -index option sets the mapping to a specific index, while the -auto option
    auto-assigns an available index.  The -none option clears any existing mapping.
Note If the parameter set type has a default pre-defined counter mapping, -auto will set the counter mapping to the pre-defined value. Otherwise, it will be set to an auto-generated value.

What it does

Sets or clears SNMP counter mappings.

Example

[Rhino@localhost (#1)] setsnmpcountermapping Services activeRootSbbs -index 4
SNMP counter mapping for activeRootSbbs set to 4

Removing the Log Notification Appender

To safely remove the Log Notification Appender, use the following rhino-console commands.

Note
What is the Log Notification Appender?

Introduced in Rhino 2.2, the Log Notification Appender is a log appender that generates SNMP notifications from log messages at or above a specified threshold (by default: WARN).

removeappenderref

Command

removeappenderref <logKey> <appenderName>
  Description
    Removes an appender for a log key.
  Required Arguments
    logKey  The log key of the logger.
    appenderName  The name of the Appender.

Example

[Rhino@localhost (#1)] removeappenderref root LogNotification
Done.
Tip
Specific log keys and levels

If you only want notifications for specific log keys, remove the root appender reference as shown above, then add an appender reference for each log key you want.

For a different notification-generation log level (such as ERROR or FATAL), use the setAppenderThreshold console command on the LogNotification log appender.

However, we strongly recommend keeping the Log Notification Appender configured at WARN level or above.

Metaswitch does not support Log Notification Appender configurations that cause excessive notification generation (such as INFO or DEBUG level).

Static OID Model

Rhino 3.1 introduces a new model for defining static OIDs for SLEE components and their statistics parameter set types. The primary goal of the model is to ensure stability of OIDs and MIBs between releases of Rhino-based products and eliminate the need for remapping OIDs during product upgrades. This alleviates operational concerns regarding solution monitoring stability after version upgrades and simplifies the overall upgrade process. The new model also aims to simplify exposed SNMP statistics where possible.

This document refers to the new model as "static OIDs" or the "static OID model", and the pre-existing model as "legacy OIDs" or the "legacy OID model". The static OID model is the preferred model for all new product development. The legacy OID model remains present and supported in Rhino for backward compatibility.

Note

You can enable both legacy and static OID models at the same time. The statistics parameter set type SNMP registration identifiers generated for deployed components differ between the two models, so no conflict occurs between the models.

For more information about assigning static OIDs with the static OID model, see the SNMP Static OID Development Guide.

Static OID Conflicts

Each unique OID registered with the SNMP subsystem can only be mapped to one statistics parameter set type at any one time. An OID conflict occurs when two statistics parameter set types use the same static OID. This may happen simply by chance (for example, two separate products inadvertently defined with the same base OID) or as a consequence of a product upgrade (for example, two versions of the same service using the same OID). In general, when Rhino detects an OID conflict, it deregisters from SNMP all parameter set types mapped to that OID. However, in Rhino 3.1+, an exception is made to this general rule to facilitate online service upgrades.

Conflict Resolution During Service Online Upgrade

The online upgrade of a service typically involves installing a new version of the service, then activating it while simultaneously deactivating the old version. Once the old version has finished draining its active sessions, it can be uninstalled, leaving only the new version in its place. A problem arises, though, if both the old and new service versions use the same OIDs for their parameter set types, which, understandably, is desirable when service statistics have not materially changed between versions.

With Rhino’s default behaviour, the OID conflict would cause the parameter set types for all conflicting OIDs to be deregistered while the conflict exists. However, to ensure that statistics availability via SNMP can be maintained during the period of a service upgrade, two services with the same base OID and the same statistics presentation alias (a service deployment descriptor attribute used for MIB generation) exhibit the following SNMP registration behaviour for any conflicting parameter set type OIDs:

  • On initial service deployment, the first service to be deployed will have its parameter set types registered. Services deployed later will have their conflicting parameter set types remain unregistered.

  • On node restart, the first service to be redeployed during boot will initially be chosen to have its parameter set type registered. (Due to the multi-threaded nature of component redeployment, this choice of service may be arbitrary, and could be different on different cluster nodes.)

  • The first service to be activated and transition to an actual state of active will have its parameter set types registered if they are not already. Conflicting parameter set types from another service will be deregistered as necessary to accommodate.

  • If a service whose parameter set types are registered is deactivated and there is another service with an actual state of active, then when the deactivating service’s actual state transitions from stopping to inactive its conflicting parameter set types will be deregistered, and the parameter set types of the active service will be registered in their place.

For clarification, the current registration for parameter set types with conflicting OIDs may only change:

  • when a service transitions to the active actual state where no other service is active;

  • when a service transitions to the inactive actual state where another service is active or stopping; and

  • when a service component is undeployed, which may remove a conflict completely.

When any of these events occur, conflicting parameter set type registrations are reevaluated and may or may not change as appropriate. At any other time, registrations stay as they are and do not change.

Note

This special behaviour applies only when an OID conflict occurs between service components with the same statistics presentation alias. If there is an OID conflict between services with different aliases, or between a service and any other type of component, this behaviour does not apply.

Warning

Conflict resolution is intended only for the limited use case of an online service upgrade. Where no conflicting services are active, or more than one conflicting service is active at the same time, it is unpredictable which service will have its parameter set types registered after a node reboot. Consequently, this behaviour should not be relied on for general conflict resolution at any other time than a service upgrade.

OID Conflict Alarms

When an OID conflict occurs between two parameter set types, Rhino will raise a rhino.snmp.duplicate-oid-mapping alarm. For the special service conflict resolution behaviour, this alarm may be suppressed using the system property snmp.suppress_service_oid_conflict_alarms. If this system property is set to true, an alarm is not raised when a service OID conflict can be temporarily resolved as described above; however, a warning log message noting the conflict still appears in the Rhino console log.
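
For example, the property can be supplied as a JVM option when starting each Rhino node (a sketch only; where JVM options are configured depends on your installation's startup scripts):

-Dsnmp.suppress_service_oid_conflict_alarms=true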

Legacy OID Model

Non-sample based Rhino statistics (all gauges and counters) that are registered according to the legacy OID model are accessible via SNMP tables.

SNMP Table Structure

Each SNMP table represents a single parameter set type. The values in each SNMP table represent statistics from the individual parameter sets associated with the table’s parameter set type. Each table uses the name of a parameter set, converted to an OID, as a table index. Individual table rows represent parameter sets, while the table columns represent statistics from the parameter set type.

The first column is special, as it contains the parameter set index value as a string. For the purposes of SNMP, the name of the root parameter set for each parameter set type can be considered "(root)".

All other parameter sets use their normal parameter set names, converted to OIDs, as index keys.

For example, the following output (generated using snmpwalk) shows how the example Usage.Services.SbbID[name=UsageTestSbb,vendor=OpenCloud,version=1.0] parameter set type, registered under the legacy OID model, is represented in the SNMP agent as OIDs:

1.3.6.1.4.1.19808.2.1.300.1.0 = INTEGER: 2
1.3.6.1.4.1.19808.2.1.300.2.1.1.TestSBBTableIndex.(default) = STRING: "Services.S-7cb11624-,vendor=OpenCloud,version=1.0].(default)"
1.3.6.1.4.1.19808.2.1.300.2.1.1.TestSBBTableIndex = STRING: "[Services.S-f36d2ae2-ageTestSbb,vendor=OpenCloud,version=1.0]"
1.3.6.1.4.1.19808.2.1.300.2.1.2.TestSBBTableIndex.(default) = Counter64: 0
1.3.6.1.4.1.19808.2.1.300.2.1.2.TestSBBTableIndex = Counter64: 0
1.3.6.1.4.1.19808.2.1.300.2.1.3.TestSBBTableIndex.(default) = Counter64: 0
1.3.6.1.4.1.19808.2.1.300.2.1.3.TestSBBTableIndex = Counter64: 0

Note
How the example OIDs break down
  • Base OID (1.3.6.1.4.1.19808.2.1.300): The base OID for the Usage.Services.SbbID[name=UsageTestSbb,vendor=OpenCloud,version=1.0] parameter set type. This can be either statically or dynamically set.

  • Subindex (.1 or .2): A subindex for data associated with this parameter set type:
    .1 represents the parameter set count for this parameter set type.
    .2 represents the parameter set data table.

  • Table entry (.1): A fixed number representing the table entry OID. This will always be .1.

  • Statistic columns: The first column (.1) represents the parameter set name for a given table row. The rest of the columns correspond to individual statistics.

  • Table index: This example has substituted the OID representation of the table index with the TestSBBTableIndex and TestSBBTableIndex.(default) variables.

  • Values (after the =): The raw statistic values.

The OID representation of the table indices in this example is as follows. Each number in the index is the ASCII code of one character of the parameter set name (83 = 'S', 101 = 'e', 114 = 'r', and so on).

  • TestSBBTableIndex is the string Services.S-f36d2ae2-ageTestSbb,vendor=OpenCloud,version=1.0] stored in an OID representation as:

83.101.114.118.105.99.101.115.46.83.45.102.51.54.100.50.97.101.50.45.97.103.101.84.101.115.116.83.98.98.44.118.101.110.100.111.114.61.79.112.101.110.67.108.111.117.100.44.118.101.114.115.105.111.110.61.49.46.48.93
  • TestSBBTableIndex.(default) is the string Services.S-7cb11624-,vendor=OpenCloud,version=1.0].(default) stored in an OID representation as:

83.101.114.118.105.99.101.115.46.83.45.55.99.98.49.49.54.50.52.45.44.118.101.110.100.111.114.61.79.112.101.110.67.108.111.117.100.44.118.101.114.115.105.111.110.61.49.46.48.93.46.40.100.101.102.97.117.108.116.41

These table indices have parameter set names that are longer than 255 characters. Therefore, the stored representation is truncated, leading to the -f36d2ae2- and -7cb11624- gaps in the decoded strings.

Warning

For parameter sets in the static model without a static mapping, and for all parameter sets in the legacy model, exceptionally long parameter set names may be truncated if their OID representation is longer than 255 characters. This prevents pathname length problems with management clients that store SNMP statistics in files named after the index OID.

Tip

For usage parameter set types, the base OID and individual statistics can be specified using the annotations described in Annotations. For statistics parameter set types in a resource adaptor, the base OID can be specified in the stats-parameter-set-type element in oc-resource-adaptor-jar.xml. Otherwise, OIDs are dynamically allocated according to the ranges specified in Dynamic Rhino monitoring parameter sets.

Differences between the static and legacy OID models

The comparison below describes the key differences between the legacy OID model and the static OID model, pairing each aspect of the legacy model with its static model counterpart.

  • Legacy OID model: A statistic parameter set type can optionally declare a static OID, but it must be a complete OID.
    Static OID model: OIDs are split into a number of parts declared separately by SLEE components, statistic parameter set types, and OID suffix mapping descriptor documents. OID parts are combined at deployment time to produce complete static OIDs depending on the structure of the components installed and the resource adaptor entities and profile tables created.

  • Legacy OID model: Statistic parameter set types that don’t declare a static OID are dynamically allocated an OID at deployment time.
    Static OID model: If not all the required OID parts are available at deployment time to generate a complete static OID for a given statistic parameter set type, no OID is assigned to it. Dynamic OID allocation doesn’t occur.

  • Legacy OID model: The OID assigned to a statistic parameter set type forms part of Rhino configuration state and may be arbitrarily changed at runtime using MBean operations.
    Static OID model: Static OIDs are set at deployment time and cannot subsequently be changed at runtime. They don’t form part of Rhino configuration state.

  • Legacy OID model: Statistic parameter set counter values are exposed as columns in SNMP tables.
    Static OID model: SNMP tables are only used for statistic parameter sets where static mapping of the parameter set name hasn’t been specified. Otherwise, SNMP scalar managed objects are used.

Rhino SNMP OID Mappings

Rhino monitoring parameter sets

All of the predefined parameter set OID mappings available in a clean installation of Rhino are listed below. These are the base OIDs used to represent statistics from each parameter set type.

Parameter Set Type OID Mappings
===============================
 OID                        Status     Namespace   Parameter Set Type
------------------------------------------------------------------------------------------
 1.3.6.1.4.1.19808.2.1.1    (active)               Events
 1.3.6.1.4.1.19808.2.1.2    (active)               Activities
 1.3.6.1.4.1.19808.2.1.3    (active)               StagingThreads
 1.3.6.1.4.1.19808.2.1.4    (active)               LockManagers
 1.3.6.1.4.1.19808.2.1.5    (active)               Services
 1.3.6.1.4.1.19808.2.1.6    (active)               Transactions
 1.3.6.1.4.1.19808.2.1.7    (active)               ObjectPools
 1.3.6.1.4.1.19808.2.1.9    (active)               MemDB-Local
 1.3.6.1.4.1.19808.2.1.10   (active)               MemDB-Replicated
 1.3.6.1.4.1.19808.2.1.12   (active)               LicenseAccounting
 1.3.6.1.4.1.19808.2.1.13   (active)               ActivityHandler
 1.3.6.1.4.1.19808.2.1.14   (active)               JVM
 1.3.6.1.4.1.19808.2.1.15   (active)               EventRouter
 1.3.6.1.4.1.19808.2.1.16   (active)               JDBCDatasource
 1.3.6.1.4.1.19808.2.1.17   (active)               Limiters
 1.3.6.1.4.1.19808.2.1.18   (active)               Savanna-Membership
 1.3.6.1.4.1.19808.2.1.19   (active)               Savanna-Group
 1.3.6.1.4.1.19808.2.1.21   (active)               StagingThreads-Misc
 1.3.6.1.4.1.19808.2.1.22   (active)               EndpointLimiting
 1.3.6.1.4.1.19808.2.1.23   (active)               ExecutorStats
 1.3.6.1.4.1.19808.2.1.24   (active)               TimerFacility
 1.3.6.1.4.1.19808.2.1.25   (active)               MemDB-Timestamp
 1.3.6.1.4.1.19808.2.1.26   (active)               PooledByteArrayBuffer
 1.3.6.1.4.1.19808.2.1.27   (active)               UnpooledByteArrayBuffer
 1.3.6.1.4.1.19808.2.1.28   (active)               ClassLoading
 1.3.6.1.4.1.19808.2.1.29   (active)               Compilation
 1.3.6.1.4.1.19808.2.1.30   (active)               GarbageCollector
 1.3.6.1.4.1.19808.2.1.31   (active)               Memory
 1.3.6.1.4.1.19808.2.1.32   (active)               MemoryPool
 1.3.6.1.4.1.19808.2.1.33   (active)               OperatingSystem
 1.3.6.1.4.1.19808.2.1.34   (active)               Runtime
 1.3.6.1.4.1.19808.2.1.35   (active)               Thread
 1.3.6.1.4.1.19808.2.1.36   (active)               RemoteTimerTimingWheel
 1.3.6.1.4.1.19808.2.1.37   (active)               RemoteTimerClientStats
 1.3.6.1.4.1.19808.2.1.38   (active)               RemoteTimerServerStats
 1.3.6.1.4.1.19808.2.1.39   (active)               Convergence
 1.3.6.1.4.1.19808.2.1.40   (active)               ClusterTopology
 1.3.6.1.4.1.19808.2.1.41   (active)               SLEEState
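
For example, the statistics for the Transactions parameter set type can be read from a node's SNMP agent using snmpwalk and the base OID listed above (a sketch only; the agent address, port, and community string are placeholders for your own SNMP agent configuration):

$ snmpwalk -v2c -c public 192.0.2.10:16100 1.3.6.1.4.1.19808.2.1.6

This walks the Transactions subtree, returning the parameter set count (.1) and the parameter set data table (.2) described in the SNMP Table Structure section above.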

Rhino monitoring parameter set counter mappings

The predefined counter mappings are listed below.

Counter Mappings
================
 Parameter Set Type     Namespace   Mapping Counter Name
-------------------------------------------------------------------------------
 Activities                               2 created
 Activities                               3 ended
 Activities                               4 rejected
 Activities                               5 active
 Activities                               6 startSuspended
 Activities                               7 suspendActivity
 ActivityHandler                          2 txCreate
 ActivityHandler                          3 txFire
 ActivityHandler                          4 txEnd
 ActivityHandler                          5 nonTxCreate
 ActivityHandler                          6 nonTxFire
 ActivityHandler                          7 nonTxEnd
 ActivityHandler                          8 nonTxLookup
 ActivityHandler                          9 txLookup
 ActivityHandler                         10 nonTxLookupMiss
 ActivityHandler                         11 txLookupMiss
 ActivityHandler                         12 ancestorCount
 ActivityHandler                         13 gcCount
 ActivityHandler                         14 generationsCollected
 ActivityHandler                         15 activitiesCollected
 ActivityHandler                         16 activitiesUnclean
 ActivityHandler                         17 activitiesScanned
 ActivityHandler                         18 administrativeRemove
 ActivityHandler                         19 livenessQueries
 ActivityHandler                         20 timersSet
 ActivityHandler                         21 timersCancelled
 ActivityHandler                         22 localLockRequests
 ActivityHandler                         23 foreignLockRequests
 ActivityHandler                         24 create
 ActivityHandler                         25 end
 ActivityHandler                         26 fire
 ActivityHandler                         27 lookup
 ActivityHandler                         28 lookupMiss
 ActivityHandler                         29 churn
 ActivityHandler                         30 liveCount
 ActivityHandler                         31 tableSize
 ActivityHandler                         32 timerCount
 ActivityHandler                         33 lockRequests
 ClassLoading                             2 loadedClassCount
 ClassLoading                             3 totalLoadedClassCount
 ClassLoading                             4 unloadedClassCount
 ClusterTopology                          2 bootingNodes
 ClusterTopology                          3 eventRouterNodes
 ClusterTopology                          4 quorumNodes
 Compilation                              2 totalCompilationTime
 Convergence                              2 convergenceScans
 Convergence                              3 tasksAdded
 Convergence                              4 tasksExecuted
 Convergence                              5 tasksCompleted
 Convergence                              6 tasksFailed
 Convergence                              7 tasksRetried
 Convergence                              8 queueSize
 Convergence                              9 tasksRunning
 Convergence                             10 maxAge
 EndpointLimiting                         2 submitted
 EndpointLimiting                         3 accepted
 EndpointLimiting                         4 userAccepted
 EndpointLimiting                         5 userRejected
 EndpointLimiting                         6 licenseRejected
 EventRouter                              2 eventHandlerStages
 EventRouter                              3 rollbackHandlerStages
 EventRouter                              4 cleanupStages
 EventRouter                              5 badGuyHandlerStages
 EventRouter                              6 vEPs
 EventRouter                              7 rootSbbFinds
 EventRouter                              8 sbbsResolved
 EventRouter                              9 sbbCreates
 EventRouter                             10 sbbExceptions
 EventRouter                             11 processingRetrys
 Events                                   2 accepted
 Events                                   3 rejected
 Events                                   4 failed
 Events                                   5 successful
 Events                                   6 rejectedQueueFull
 Events                                   7 rejectedQueueTimeout
 Events                                   8 rejectedOverload
 ExecutorStats                            2 executorTasksExecuted
 ExecutorStats                            3 executorTasksExecuting
 ExecutorStats                            4 executorTasksRejected
 ExecutorStats                            5 executorTasksSubmitted
 ExecutorStats                            6 executorTasksWaiting
 ExecutorStats                            7 executorThreadsIdle
 ExecutorStats                            8 executorThreadsTotal
 GarbageCollector                         2 collectionCount
 GarbageCollector                         3 collectionTime
 GarbageCollector                         4 lastCollectionDuration
 GarbageCollector                         5 lastCollectionInterval
 GarbageCollector                         6 lastCollectionPeriod
 JDBCDatasource                           2 create
 JDBCDatasource                           3 removeIdle
 JDBCDatasource                           4 removeOverflow
 JDBCDatasource                           5 removeError
 JDBCDatasource                           6 getRequest
 JDBCDatasource                           7 getSuccess
 JDBCDatasource                           8 getTimeout
 JDBCDatasource                           9 getError
 JDBCDatasource                          10 putOk
 JDBCDatasource                          11 putOverflow
 JDBCDatasource                          12 putError
 JDBCDatasource                          13 inUseConnections
 JDBCDatasource                          14 idleConnections
 JDBCDatasource                          15 pendingConnections
 JDBCDatasource                          16 totalConnections
 JDBCDatasource                          17 maxConnections
 JVM                                      2 heapUsed
 JVM                                      3 heapCommitted
 JVM                                      4 heapInitial
 JVM                                      5 heapMaximum
 JVM                                      6 nonHeapUsed
 JVM                                      7 nonHeapCommitted
 JVM                                      8 nonHeapInitial
 JVM                                      9 nonHeapMaximum
 JVM                                     10 classesCurrentLoaded
 JVM                                     11 classesTotalLoaded
 JVM                                     12 classesTotalUnloaded
 LicenseAccounting                        2 accountedUnits
 LicenseAccounting                        3 unaccountedUnits
 Limiters                                 2 unitsUsed
 Limiters                                 3 unitsRejected
 Limiters                                 4 unitsRejectedByParent
 LockManagers                             2 locksAcquired
 LockManagers                             3 locksReleased
 LockManagers                             4 lockWaits
 LockManagers                             5 lockTimeouts
 LockManagers                             6 knownLocks
 LockManagers                             7 acquireMessages
 LockManagers                             8 abortMessages
 LockManagers                             9 releaseMessages
 LockManagers                            10 migrationRequestMessages
 LockManagers                            11 migrationReleaseMessages
 MemDB-Local                              2 committedSize
 MemDB-Local                              3 maxCommittedSize
 MemDB-Local                              4 churnSize
 MemDB-Local                              5 cleanupCount
 MemDB-Local                              6 retainedSize
 MemDB-Replicated                         2 committedSize
 MemDB-Replicated                         3 maxCommittedSize
 MemDB-Replicated                         4 churnSize
 MemDB-Replicated                         5 cleanupCount
 MemDB-Replicated                         6 retainedSize
 MemDB-Timestamp                          2 waitingThreads
 MemDB-Timestamp                          3 unexposedCommits
 Memory                                   2 heapInitial
 Memory                                   3 heapUsed
 Memory                                   4 heapMax
 Memory                                   5 heapCommitted
 Memory                                   6 nonHeapInitial
 Memory                                   7 nonHeapUsed
 Memory                                   8 nonHeapMax
 Memory                                   9 nonHeapCommitted
 MemoryPool                               2 collectionUsageInitial
 MemoryPool                               3 collectionUsageUsed
 MemoryPool                               4 collectionUsageMax
 MemoryPool                               5 collectionUsageCommitted
 MemoryPool                               6 collectionUsageThreshold
 MemoryPool                               7 collectionUsageThresholdCount
 MemoryPool                               8 peakUsageInitial
 MemoryPool                               9 peakUsageUsed
 MemoryPool                              10 peakUsageMax
 MemoryPool                              11 peakUsageCommitted
 MemoryPool                              12 usageThreshold
 MemoryPool                              13 usageThresholdCount
 MemoryPool                              14 usageInitial
 MemoryPool                              15 usageUsed
 MemoryPool                              16 usageMax
 MemoryPool                              17 usageCommitted
 MemoryPool                              18 lastCollected
 MemoryPool                              19 collected
 MemoryPool                              21 collectionCount
 ObjectPools                              2 added
 ObjectPools                              3 removed
 ObjectPools                              4 overflow
 ObjectPools                              5 miss
 ObjectPools                              6 size
 ObjectPools                              7 capacity
 ObjectPools                              8 pruned
 OperatingSystem                          2 availableProcessors
 OperatingSystem                          3 committedVirtualMemorySize
 OperatingSystem                          4 freePhysicalMemorySize
 OperatingSystem                          5 freeSwapSpaceSize
 OperatingSystem                          6 processCpuTime
 OperatingSystem                          7 totalPhysicalMemorySize
 OperatingSystem                          8 totalSwapSpaceSize
 OperatingSystem                          9 maxFileDescriptorCount
 OperatingSystem                         10 openFileDescriptorCount
 PooledByteArrayBuffer                    2 out
 PooledByteArrayBuffer                    3 in
 PooledByteArrayBuffer                    4 added
 PooledByteArrayBuffer                    5 removed
 PooledByteArrayBuffer                    6 overflow
 PooledByteArrayBuffer                    7 miss
 PooledByteArrayBuffer                    8 poolSize
 PooledByteArrayBuffer                    9 bufferSize
 PooledByteArrayBuffer                   10 poolCapacity
 RemoteTimerClientStats                   2 timersCreated
 RemoteTimerClientStats                   3 timersCancelled
 RemoteTimerClientStats                   4 timerEventsProcessed
 RemoteTimerClientStats                   5 timerEventsInProgress
 RemoteTimerClientStats                   6 timerEventsProcessedSuccessfully
 RemoteTimerClientStats                   7 timerEventProcessingFailures
 RemoteTimerClientStats                   8 timerEventsForResidentActivities
 RemoteTimerClientStats                   9 timerEventsForNonResidentActivities
 RemoteTimerClientStats                  10 timerEventsForwarded
 RemoteTimerClientStats                  11 timerEventForwardingFailures
 RemoteTimerClientStats                  12 timersRestored
 RemoteTimerServerStats                   2 timersCreated
 RemoteTimerServerStats                   3 timersArmed
 RemoteTimerServerStats                   4 timersCancelled
 RemoteTimerServerStats                   5 timerEventsGenerated
 RemoteTimerServerStats                   6 timerEventDeliveryFailures
 RemoteTimerServerStats                   7 threadsTotal
 RemoteTimerServerStats                   8 threadsBusy
 RemoteTimerServerStats                   9 threadsIdle
 RemoteTimerTimingWheel                   2 cascadeOverflow
 RemoteTimerTimingWheel                   3 cascadeWheel1
 RemoteTimerTimingWheel                   4 cascadeWheel2
 RemoteTimerTimingWheel                   5 cascadeWheel3
 RemoteTimerTimingWheel                   6 jobsExecuted
 RemoteTimerTimingWheel                   7 jobsRejected
 RemoteTimerTimingWheel                   8 jobsScheduled
 RemoteTimerTimingWheel                   9 jobsToOverflow
 RemoteTimerTimingWheel                  10 jobsToWheel0
 RemoteTimerTimingWheel                  11 jobsToWheel1
 RemoteTimerTimingWheel                  12 jobsToWheel2
 RemoteTimerTimingWheel                  13 jobsToWheel3
 RemoteTimerTimingWheel                  14 jobsWaiting
 RemoteTimerTimingWheel                  15 tasksCancelled
 RemoteTimerTimingWheel                  16 tasksFixedDelay
 RemoteTimerTimingWheel                  17 tasksFixedRate
 RemoteTimerTimingWheel                  18 tasksImmediate
 RemoteTimerTimingWheel                  19 tasksOneShot
 RemoteTimerTimingWheel                  20 tasksRepeated
 RemoteTimerTimingWheel                  21 ticks
 Runtime                                  2 uptime
 Runtime                                  3 startTime
 SLEEState                                2 startingNodes
 SLEEState                                3 runningNodes
 SLEEState                                4 stoppingNodes
 SLEEState                                5 stoppedNodes
 SLEEState                                6 unlicensedNodes
 SLEEState                                7 failedNodes
 Savanna-Group                            2 udpBytesSent
 Savanna-Group                            3 udpBytesReceived
 Savanna-Group                            4 udpDatagramsSent
 Savanna-Group                            5 udpDatagramsReceived
 Savanna-Group                            6 udpInvalidDatagramsReceived
 Savanna-Group                            7 udpDatagramSendErrors
 Savanna-Group                            8 tokenRetransmits
 Savanna-Group                            9 activityEstimate
 Savanna-Group                           10 regularMessagesSent
 Savanna-Group                           11 regularMessagesReceived
 Savanna-Group                           12 recoveryMessagesSent
 Savanna-Group                           13 recoveryMessagesReceived
 Savanna-Group                           14 restartGroupMessagesSent
 Savanna-Group                           15 restartGroupMessagesReceived
 Savanna-Group                           16 restartGroupMessageRetransmits
 Savanna-Group                           17 regularTokensSent
 Savanna-Group                           18 regularTokensReceived
 Savanna-Group                           19 installTokensSent
 Savanna-Group                           20 installTokensReceived
 Savanna-Group                           21 groupIdles
 Savanna-Group                           22 messagesLessThanARU
 Savanna-Group                           23 shiftToInstall
 Savanna-Group                           24 shiftToRecovery
 Savanna-Group                           25 shiftToOperational
 Savanna-Group                           26 messageRetransmits
 Savanna-Group                           27 fcReceiveBufferSize
 Savanna-Group                           28 fcSendWindowSize
 Savanna-Group                           29 fcCongestionWindowSize
 Savanna-Group                           30 fcTokenRotationEstimate
 Savanna-Group                           31 fcRetransmissionRequests
 Savanna-Group                           32 fcLimitedSends
 Savanna-Group                           33 deliveryQueueSize
 Savanna-Group                           34 deliveryQueueBytes
 Savanna-Group                           35 transmitQueueSize
 Savanna-Group                           36 transmitQueueBytes
 Savanna-Group                           37 appBytesSent
 Savanna-Group                           38 appBytesReceived
 Savanna-Group                           39 appMessagesSent
 Savanna-Group                           40 appMessagesReceived
 Savanna-Group                           41 appPartialMessagesReceived
 Savanna-Group                           42 appSendErrors
 Savanna-Group                           43 fragStartSent
 Savanna-Group                           44 fragMidSent
 Savanna-Group                           45 fragEndSent
 Savanna-Group                           46 fragNonSent
 Savanna-Membership                       2 udpBytesSent
 Savanna-Membership                       3 udpBytesReceived
 Savanna-Membership                       4 udpDatagramsSent
 Savanna-Membership                       5 udpDatagramsReceived
 Savanna-Membership                       6 udpInvalidDatagramsReceived
 Savanna-Membership                       7 udpDatagramSendErrors
 Savanna-Membership                       8 tokenRetransmits
 Savanna-Membership                       9 activityEstimate
 Savanna-Membership                      10 joinMessagesSent
 Savanna-Membership                      11 joinMessagesReceived
 Savanna-Membership                      12 membershipTokensSent
 Savanna-Membership                      13 membershipTokensReceived
 Savanna-Membership                      14 commitTokensSent
 Savanna-Membership                      15 commitTokensReceived
 Savanna-Membership                      16 shiftToGather
 Savanna-Membership                      17 shiftToInstall
 Savanna-Membership                      18 shiftToCommit
 Savanna-Membership                      19 shiftToOperational
 Savanna-Membership                      20 tokenRetransmitTimeouts
 Services                                 2 rootSbbsCreated
 Services                                 3 rootSbbsRemoved
 Services                                 4 activeRootSbbs
 StagingThreads                           2 itemsAdded
 StagingThreads                           3 itemsCompleted
 StagingThreads                           4 queueSize
 StagingThreads                           5 numThreads
 StagingThreads                           6 availableThreads
 StagingThreads                           7 minThreads
 StagingThreads                           8 maxThreads
 StagingThreads                           9 activeThreads
 StagingThreads                          10 peakThreads
 StagingThreads                          11 dropped
 StagingThreads-Misc                      2 itemsAdded
 StagingThreads-Misc                      3 itemsCompleted
 StagingThreads-Misc                      4 queueSize
 StagingThreads-Misc                      5 numThreads
 StagingThreads-Misc                      6 availableThreads
 StagingThreads-Misc                      7 minThreads
 StagingThreads-Misc                      8 maxThreads
 StagingThreads-Misc                      9 activeThreads
 StagingThreads-Misc                     10 peakThreads
 StagingThreads-Misc                     11 dropped
 Thread                                   2 currentThreadCpuTime
 Thread                                   3 currentThreadUserTime
 Thread                                   4 daemonThreadCount
 Thread                                   5 peakThreadCount
 Thread                                   6 threadCount
 Thread                                   7 totalStartedThreadCount
 TimerFacility                            2 cascadeOverflow
 TimerFacility                            3 cascadeWheel1
 TimerFacility                            4 cascadeWheel2
 TimerFacility                            5 cascadeWheel3
 TimerFacility                            6 jobsExecuted
 TimerFacility                            7 jobsRejected
 TimerFacility                            8 jobsScheduled
 TimerFacility                            9 jobsToOverflow
 TimerFacility                           10 jobsToWheel0
 TimerFacility                           11 jobsToWheel1
 TimerFacility                           12 jobsToWheel2
 TimerFacility                           13 jobsToWheel3
 TimerFacility                           14 jobsWaiting
 TimerFacility                           15 tasksCancelled
 TimerFacility                           16 tasksFixedDelay
 TimerFacility                           17 tasksFixedRate
 TimerFacility                           18 tasksImmediate
 TimerFacility                           19 tasksOneShot
 TimerFacility                           20 tasksRepeated
 TimerFacility                           21 ticks
 Transactions                             2 active
 Transactions                             3 started
 Transactions                             4 committed
 Transactions                             5 rolledBack
 UnpooledByteArrayBuffer                  2 out
 UnpooledByteArrayBuffer                  3 in
 UnpooledByteArrayBuffer                  4 bytesAllocated
 UnpooledByteArrayBuffer                  5 bytesDiscarded
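
Combining these mappings with the SNMP table structure described earlier gives the complete OID for any counter. For example, the committed counter of the Transactions parameter set type (base OID 1.3.6.1.4.1.19808.2.1.6, counter mapping 4) appears at:

1.3.6.1.4.1.19808.2.1.6.2.1.4.<table index>

where .2 selects the parameter set data table, .1 is the fixed table entry OID, .4 is the column number from the counter mapping above, and <table index> is the OID-encoded name of the individual parameter set.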

Dynamic Rhino monitoring parameter sets

If the legacy OID model is enabled and a component is installed with a usage parameter set that doesn’t have an OID statically defined as per that model, an OID is dynamically generated instead. Ranges for dynamic OIDs are listed below.

 Parameter Set Type     OID Range
 ---------------------------------
 Usage.RAEntities       100 - 199
 Usage.ProfileTables    200 - 299
 Usage.Service          300 - 499
 Metrics.Service        500 - 699
 Others                 700+

For example, the Usage.Services.SbbID parameter set type shown earlier uses base OID 1.3.6.1.4.1.19808.2.1.300, which falls within the Usage.Service range.

Replication Support Services

This section describes the supplementary replication support services available in Rhino and steps for configuring them.

Topics

Key/Value Stores

A description of how Rhino can use a key/value store as a replication mechanism, and how to configure it.

Session Ownership

A description of the session ownership subsystem, how it can be used, and how to configure it.

Key/Value Stores

This section includes:

About Key/Value Stores

A key/value store is an adjunct to a Rhino in-memory database that serves to persist local database state to an external database such that another node in the Rhino cluster can recover that state in the event of a node failure.

It provides a form of state replication without state being replicated directly between Rhino cluster nodes, as it is with the traditional savanna replication method.

High-level usage

In normal operation in a stable Rhino cluster, a key/value store has a data flow in only one direction — from the local in-memory database to the external database.

In the event of a Rhino node failure, the activities and SBB entities that were persisted to the external database by a key/value store on that node remain in the database but become "non-resident", which simply means that the application state still exists but is not currently owned by or resident in any Rhino node.

Subsequently, another node in the cluster may receive a redirected network event for a call or session that was previously owned by the failed node. As the node will not immediately recognise the call/session, it queries the key/value store to determine if there is any state for the call/session in the external database. If state is found, the node can adopt ownership of the call/session, restore the call/session state, and continue processing the event as if the call/session had always been locally known.

As the event is processed, further state related to the call/session may need to be retrieved from the external database, for example attached SBB entity state and the state of other activities that these SBB entities may be attached to.

A key/value store implementation typically operates with a delay between when transactional data is committed to the in-memory database and when it is written to the external database. This allows techniques such as write buffering and combining to be used to reduce bandwidth needs by discarding all but the latest update to a database key observed over a short period of time. However, it also means that transactions that occurred within this delay period immediately before a node failure will not have been persisted to the external database, and will be permanently lost as a result.

Configuring a Key/Value Store

A key/value store is configured in a node’s rhino-config.xml configuration file using the optional key-value-store element.

The element content is defined as follows:

<!-- Key/Value store definition -->
<!ELEMENT key-value-store (parameter*, persistence-resource-ref?)>
<!-- the name of this key/value store -->
<!ATTLIST key-value-store name CDATA #REQUIRED>
<!-- fully-qualified class name of the key/value store service provider class -->
<!ATTLIST key-value-store service-provider-class-name CDATA #REQUIRED>

<!-- a configuration parameter -->
<!ELEMENT parameter EMPTY>
<!ATTLIST parameter name CDATA #REQUIRED>
<!ATTLIST parameter type CDATA #REQUIRED>
<!ATTLIST parameter value CDATA #REQUIRED>

<!-- reference to a persistence resource defined in persistence.xml -->
<!ELEMENT persistence-resource-ref EMPTY>
<!ATTLIST persistence-resource-ref name CDATA #REQUIRED>

At minimum, a name for the key/value store and a service provider class name need to be specified. Multiple key/value stores may be specified in the configuration file and each must have a unique name. The service provider class name is expected to be documented by the key/value store, as are any parameters that may be configured by the user.

A key/value store that can utilise Rhino’s persistence framework may also require a <persistence-resource-ref>. This element refers to a persistence resource defined in the separate persistence.xml configuration file.
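
The minimal sketch below illustrates the structure defined above; the store name, service provider class name, parameter, and persistence resource name are all hypothetical placeholders (see the Cassandra Key/Value Store section below for a real configuration):

<key-value-store name="example-store"
                 service-provider-class-name="com.example.kvstore.ExampleKeyValueStoreServiceProvider">
  <parameter name="exampleParameter" type="java.lang.String" value="example-value"/>
  <persistence-resource-ref name="example-resource"/>
</key-value-store>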

Warning As with all other persistence-related configuration, key/value stores should be configured the same way on all nodes in a Rhino cluster for them to function correctly.

Configuring an in-memory database to use a key/value store

A local (non-replicated) in-memory database can be configured to use a key/value store by simply adding a <key-value-store-ref> element to its configuration in rhino-config.xml. The element contains a name attribute specifying the name of a key/value store defined and configured as specified above.

The example rhino-config.xml fragment below shows a configuration for a local in-memory database called KeyValueDatabase that contains a reference to a key/value store named cassandra.

<memdb-local>
  <jndi-name>KeyValueDatabase</jndi-name>
  <committed-size>300M</committed-size>
  <stripe-count>1</stripe-count>
  <uses-state-pool>true</uses-state-pool>
  <key-value-store-ref name="cassandra"/>
</memdb-local>
Tip Replicated in-memory databases configured using the <memdb> element use the savanna framework to replicate database state and cannot be configured with a key/value store.

Configuring the replicated storage resource for the default namespace

The default namespace always exists; consequently, the replicated storage resource that the default namespace will use must be statically declared in rhino-config.xml.

This is done using the <replicated-storage> sub-element of the <default-namespace> element within <persistence-resources>. This element specifies the JNDI name of a configured in-memory database that will be used for replicated state storage for the namespace. The specified name must be one of:

  • a configured <memdb> instance, which replicates using the traditional savanna framework

  • a configured <memdb-local> instance that contains a <key-value-store-ref>, which replicates using the key/value store.

The example rhino-config.xml fragment below shows the default namespace configured to use the KeyValueDatabase as its replicated storage resource.

<persistence-resources>
  <default-namespace>
    <replicated-storage>KeyValueDatabase</replicated-storage>
  </default-namespace>
  ...
</persistence-resources>
Note A Rhino node will fail to start if the specified database resource doesn’t exist or is not a resource that replicates.

Configuring the replicated storage resource for user-defined namespaces

The replicated storage resource for a user-defined namespace can be declared when that namespace is created.

The namespace creation MBean operation accepts a set of options that allow the resources used by the namespace to be configured. One of these options specifies the replicated storage resource to be used for the namespace. The options parameter should be set according to the required configuration. If a replicated storage resource is not specified in the options, then the namespace uses the same replicated storage resource as the default namespace.

The rhino-console example below illustrates the creation of a namespace called my-namespace that uses the KeyValueDatabase as the replicated storage resource:

$ ./rhino-console createnamespace my-namespace -replication-resource KeyValueDatabase
Namespace my-namespace created

Typical configuration process

The end-to-end process to configure and use a key/value store typically follows these steps:

  1. If necessary, a persistence resource and associated persistence instances are created using, for example, rhino-console commands while the Rhino cluster is up and running.

  2. The cluster nodes are shut down and rhino-config.xml is modified on each node to add the key/value store configuration and create the necessary in-memory database configuration.

  3. The replicated storage resource for the default namespace is updated if necessary.

    • Whenever the replicated storage resource for the default namespace is changed, Rhino’s management database must be reinitialised. If there are SLEE components or other deployment state that should be retained across the change, an export of the current SLEE state can be made before the change and re-imported after the change and management database reinitialisation, as sketched after this list.

  4. Finally, the cluster nodes are restarted.
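
A sketch of the export/import step referenced above, using the rhino-export and rhino-import tools shipped in the Rhino client directory (the export directory name is illustrative; see the Backup and Restore documentation for full details of these tools):

$ ./client/bin/rhino-export pre-change-export
  ... change the replicated storage resource and reinitialise the management database ...
$ ./client/bin/rhino-import pre-change-export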

Note During the installation of production Rhino, the installer will ask for the name of the desired replicated storage resource for the default namespace. If the key/value store replicated storage resource is selected, the installer will automatically configure the persistence resources and rhino-config.xml appropriately with default settings for the Cassandra key/value store included with Rhino.

Cassandra Key/Value Store

A key/value store implementation that interacts with a Cassandra persistence resource is included within a Rhino installation.

An example configuration fragment for this key/value store, with the default values for each configurable parameter, is shown below:

  <key-value-store name="cassandra" service-provider-class-name="com.opencloud.resource.kvstore.cassandra.CassandraKeyValueStoreServiceProvider">
    <parameter name="keyspaceReplication" type="java.lang.String" value="{'class': 'SimpleStrategy', 'replication_factor': 3}"/>
    <parameter name="durableWrites" type="boolean" value="true"/>
    <parameter name="sessionReconnectionPeriod" type="long" value="5000"/>
    <parameter name="dataVersion" type="java.lang.String" value="1"/>
    <parameter name="ttl" type="int" value="86400"/>
    <parameter name="tombstoneGCGracePeriod" type="int" value="900"/>
    <parameter name="writeThreadCount" type="int" value="10"/>
    <parameter name="writeDelay" type="long" value="900"/>
    <parameter name="minTransactionAge" type="long" value="250"/>
    <parameter name="maxTransactionAge" type="long" value="1500"/>
    <parameter name="maxPersistDeferredTransactionAge" type="long" value="10000"/>
    <parameter name="scanReschedulingDelay" type="long" value="100"/>
    <parameter name="insertDebugLoggingTruncationLength" type="int" value="100"/>
    <parameter name="maxBatchStatementSize" type="int" value="80000"/>
    <parameter name="readTimeout" type="long" value="2000"/>
    <parameter name="writeTimeout" type="long" value="2000"/>
    <parameter name="pendingSizeLimit" type="java.lang.String" value="-1"/>
    <parameter name="pendingSizeAlarmBounceSuppressionFilterPeriod" type="long" value="1000"/>
    <parameter name="maxScanTimeThreshold" type="long" value="500"/>
    <parameter name="maxPersistTimeThreshold" type="long" value="5000"/>
    <parameter name="overThresholdTriggerCount" type="int" value="5"/>
    <parameter name="gcScanPeriod" type="int" value="300"/>
    <parameter name="gcGracePeriod" type="int" value="7200"/>
    <persistence-resource-ref name="cassandra"/>
  </key-value-store>

This configuration is provided in the Rhino installation’s default rhino-config.xml file but is disabled by default. It can be automatically enabled during Rhino installation by requesting the KeyValueDatabase be used as the replicated storage resource.

Documentation for the key/value store’s service provider class, which includes a full description of all the configurable parameters, is included in the Rhino installation within the doc/api/key-value-store subdirectory.

Cassandra Key/Value Store Statistics

The following statistics are gathered by the Cassandra key/value store implementation included with Rhino:

Global

Cassandra key/value store transaction and batching statistics

OID: not specified

 Name                          Short Name  Mapping  Type     Unit Label  Default View  Source Units       Default Display Units  Description
 --------------------------------------------------------------------------------------------------------------------------------------------
 batchesCreated                batches           2  counter  #           counter                                                 Cassandra BATCH statements created
 batchedStatements             bStmts            3  counter  #           counter                                                 UPDATE/DELETE statements included in a BATCH
 batchedStatementSizeTotal     bStmtSzT          4  counter  bytes       counter                                                 Total size of all UPDATE/DELETE statements that were batched
 notBatchedStatements          XbStmts           5  counter  #           counter                                                 UPDATE/DELETE statements not included in a BATCH
 notBatchedStatementSizeTotal  XbStmtSzT         6  counter  bytes       counter                                                 Total size of all UPDATE/DELETE statements that were not batched
 querySuccesses                succ              7  counter  #           counter                                                 Database queries that executed successfully
 queryFailures                 fail              8  counter  #           counter                                                 Database queries that failed during execution
 transactionsCommitted         txComm           10  counter  #           counter                                                 Transactions committed to the key/value store
 transactionsPersisted         txPers           11  counter  #           counter                                                 Committed transactions persisted to the backend database(s)
 transactionsDiscarded         txDisc           12  counter  #           counter                                                 Committed transactions discarded due to pending state scan failure
 pendingSize                   size             13  counter  bytes       gauge                                                   Volume of state maintained by the key/value store awaiting persistence
 transactionsRejected          txRej            17  counter  #           counter                                                 Transactions whose state could not be stored by the key/value store due to overload or the pending size limit being reached
 batchedStatementSize          bStmtSz              sample   bytes                     count              count                  Size of UPDATE/DELETE statements that were batched
 notBatchedStatementSize       XbStmtSz             sample   bytes                     count              count                  Size of UPDATE/DELETE statements that were not batched
 persistTime                   txPersT              sample   ms                        time/nanoseconds   time/milliseconds      Time taken to persist each set of transactions selected for persistence
 persistedBatchSize            persSz               sample   bytes                     count              count                  Total size of all UPDATE/DELETE statements batched in each persistence cycle
 readTime                      readT                sample   ms                        time/milliseconds  time/milliseconds      Time taken to execute each SELECT statement
 scanTime                      txScanT              sample   ms                        time/nanoseconds   time/milliseconds      Time taken to scan and collate a set of transactions that are eligible for persisting
 threadWaitTime                tWait                sample   ms                        time/nanoseconds   time/milliseconds      Time spent idle by a thread before it received scan/write work to perform
 writeTime                     writeT               sample   ms                        time/milliseconds  time/milliseconds      Time taken to execute (potentially batched) UPDATE/DELETE statements

Table

Cassandra key/value store statistics for a single Cassandra table

OID: not specified

 Name                    Short Name  Mapping  Type     Unit Label  Default View  Source Units       Default Display Units  Description
 ---------------------------------------------------------------------------------------------------------------------------------------
 commits                 comm              2  counter  #           counter                                                 Commits made against the table
 keysUpdated             kUp               3  counter  #           counter                                                 Individual key updates committed to the table
 keyUpdatesPending       kPend             4  counter  #           gauge                                                   Individual key updates that have not yet been persisted or discarded
 keyUpdatesPersisted     kPers             5  counter  #           counter                                                 Individual key updates that have been persisted to the backend database(s)
 keyUpdatesDiscarded     kDisc             6  counter  #           counter                                                 Individual key updates that have been discarded, e.g. a later update was persisted instead, persistence of the key has been disabled, or overload occurred
 retrievesAttempted      rAtt              7  counter  #           counter                                                 Backend database retrieves attempted
 retrievesSucceeded      rSucc             8  counter  #           counter                                                 Backend database retrieves that returned useful data
 retrievesFailed         rFail             9  counter  #           counter                                                 Backend database retrieves that returned no data
 updates                 updt             10  counter  #           counter                                                 UPDATE statements executed against the backend database(s)
 updateSizeTotal         updtSzT          11  counter  bytes       counter                                                 Total size of UPDATE statements executed against the backend database(s)
 removes                 remv             12  counter  #           counter                                                 REMOVE statements executed against the backend database(s)
 pendingSize             size             13  counter  bytes       gauge                                                   Volume of state maintained by the key/value store for this table awaiting persistence
 gcScans                 gcS              15  counter  #           counter                                                 Number of garbage collection scans this table has had
 gcReaps                 gcR              16  counter  #           counter                                                 Number of keys that have been removed from this table during garbage collection scans
 failedRetrieveTime      rFailT               sample   ms                        time/milliseconds  time/milliseconds      Elapsed time for failed database retrieves
 successfulRetrieveTime  rSuccT               sample   ms                        time/milliseconds  time/milliseconds      Elapsed time for successful database retrieves
 updateSize              updtSz               sample   bytes                     count              count                  Size of UPDATE statements executed

Session Ownership

This section includes:

About Session Ownership

The session ownership subsystem is an optional Rhino platform extension with the primary purpose of allowing SLEE services and resource adaptors on a given node to claim ownership of a particular resource, such as an individual SLEE activity or a complete identifiable session.

It’s intended for use in a multi-node Rhino cluster where events related to a given session could be delivered to a node that does not have state for that session. In order to keep the state for the session coherent, the session ownership subsystem can be used to determine the current session owner, allowing components to redirect events to the owning node.

The session ownership subsystem supports both blind-write and compare-and-set (CAS) operations for the creation, update, and removal of session ownership records.

Session ownership records are addressed by a primary key as well as any number of additional secondary keys. In addition, records contain a sequence number, a set of URIs identifying the owner(s) of the record, and a set of user-defined attributes.
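
For illustration, a record for a call session might conceptually hold the following (an illustrative sketch only; the key values, owner URI, and attribute names are invented, and this is not a literal storage format):

primary key:     session-abc123
secondary keys:  call-id-5551212, dialog-id-77
sequence number: 4
owners:          rhino-node-101
attributes:      state=ACTIVE, legs=2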

High-level usage

The general approach in using the session ownership store is as follows:

  • When a SLEE component detects that a new session (whatever "session" means to that component) is about to begin, the component attempts a CAS create operation in the session ownership subsystem to claim ownership of the session.

  • If the operation completes successfully, the component continues on normally and processes the session.

  • The component can update the session ownership record at any time as needed. It may do so either with blind writes or with CAS updates (recommended) based on the existing record’s sequence number.

  • When a session ends, the component removes the session ownership record.

  • If a CAS operation fails because some other node has claimed ownership of the session, the component could:

    • proxy the request sideways to the current session owner, if the owner is still deemed to be alive; or

    • take over ownership of the session and handle the request if the current owner is deemed to no longer exist.

Components

The session ownership subsystem consists of three main components:

  • Session ownership store — This is responsible for processing session ownership operation requests from the higher application-level APIs and (typically) interacting with an external storage medium for the persistence of session ownership records.

  • Session ownership facility — This is the application-level API available to resource adaptors for interaction with the session ownership subsystem.

  • Session ownership resource adaptor type — This is the application-level API available to SBBs for interaction with the session ownership subsystem.

Configuring a Session Ownership Store

A session ownership store is configured in a node’s rhino-config.xml configuration file using the optional session-ownership-store element.

The element content is defined as follows:

<!-- Session ownership store definition -->
<!ELEMENT session-ownership-store (parameter*, persistence-resource-ref?)>
<!-- fully-qualified class name of the session ownership store service provider class -->
<!ATTLIST session-ownership-store service-provider-class-name CDATA #REQUIRED>
<!-- default TTL of session ownership records, measured in seconds -->
<!ATTLIST session-ownership-store default-ttl CDATA #REQUIRED>

<!-- a configuration parameter -->
<!ELEMENT parameter EMPTY>
<!ATTLIST parameter name CDATA #REQUIRED>
<!ATTLIST parameter type CDATA #REQUIRED>
<!ATTLIST parameter value CDATA #REQUIRED>

<!-- reference to a persistence resource defined in persistence.xml -->
<!ELEMENT persistence-resource-ref EMPTY>
<!ATTLIST persistence-resource-ref name CDATA #REQUIRED>

At minimum, only the session ownership store service provider class name needs to be specified. This class name is expected to be documented by the session ownership store, as are any parameters that may be configured by the user.

A session ownership store that can utilise Rhino’s persistence framework may also require a <persistence-resource-ref>. This element refers to a persistence resource defined in the separate persistence.xml configuration file.
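
The minimal sketch below illustrates the structure defined above; the service provider class name, parameter, and persistence resource name are hypothetical placeholders (the Cassandra session ownership store included with Rhino documents its real values), and default-ttl gives the record TTL in seconds:

<session-ownership-store service-provider-class-name="com.example.sos.ExampleSessionOwnershipStoreServiceProvider"
                         default-ttl="600">
  <parameter name="exampleParameter" type="java.lang.String" value="example-value"/>
  <persistence-resource-ref name="example-resource"/>
</session-ownership-store>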

Note At most one session ownership store can be configured in rhino-config.xml at any one time.
Warning As with all other persistence-related configuration, the session ownership store should be configured the same way on all nodes in a Rhino cluster for it to function correctly.

Configuring the default namespace to use the session ownership store

The default namespace always exists; consequently, whether or not the default namespace makes the session ownership subsystem available to SLEE components installed within it must be statically declared in rhino-config.xml.

This is done using the with-session-ownership-facility attribute of the <default-namespace> element within <persistence-resources>. If this attribute is not present, it defaults to False. A session ownership store must be configured, as described above, before this attribute may be set to True.

The example rhino-config.xml fragment below illustrates how the session ownership subsystem can be enabled for the default namespace.

<persistence-resources>
  <default-namespace with-session-ownership-facility="True">
    ...
  </default-namespace>
  ...
</persistence-resources>

Configuring user-defined namespaces to use the session ownership store

The decision about whether or not a user-defined namespace will make the session ownership subsystem available to SLEE components installed within it is made when the namespace is created.

The namespace creation MBean operation accepts a set of options that allow the resources used by the namespace to be configured. One of these options indicates whether or not the session ownership subsystem will be available in the namespace. The options parameter should be set according to the required behaviour.

Just like for the default namespace, a session ownership store must be configured in rhino-config.xml before a user-defined namespace can be created with session ownership functionality enabled.

The rhino-console example below illustrates the creation of a namespace called my-namespace with session ownership functionality enabled:

$ ./rhino-console createnamespace my-namespace -with-session-ownership-facility
Namespace my-namespace created

Typical configuration process

The end-to-end process for configuring the session ownership subsystem typically follows these steps:

  1. If necessary, a persistence resource and associated persistence instances are created using, for example, rhino-console commands while the Rhino cluster is up and running.

  2. The cluster nodes are shut down and rhino-config.xml is modified on each node to add the session ownership store configuration and declare whether or not this subsystem should be enabled for the default namespace.

  3. Finally, the cluster nodes are restarted.

Note During the installation of production Rhino, the installer asks whether the session ownership facility should be enabled for the default namespace. If this question is answered in the affirmative, the installer automatically configures the persistence resources and rhino-config.xml appropriately, with default settings for the Cassandra session ownership store included with Rhino.

Application-Level APIs

This section provides a description of the application-level APIs available to SLEE components that wish to interact with the session ownership subsystem.

Topics

Session Ownership Facility

The API for resource adaptors to interact with the session ownership subsystem.

Session Ownership Resource Adaptor Type

The API for SBBs and SBB parts to interact with the session ownership subsystem.

Convergence Name Session Ownership Record

A session ownership record related to an SBB entity tree, managed by Rhino.

Session Ownership Facility

The session ownership facility allows SLEE resource adaptors to interact with the session ownership subsystem.

API documentation for the facility is available here.

The primary interface of interest is SessionOwnershipFacility. A resource adaptor uses this facility to store, retrieve, or delete session ownership records.

Each of these operations is executed asynchronously. If the resource adaptor is interested in the result of the operation (and for retrieves this is likely always the case), the resource adaptor can provide a listener object when invoking the operation. The listener object will receive a callback, typically from another thread, when the operation result becomes available.

Session Ownership Resource Adaptor Type

The session ownership resource adaptor type allows SLEE SBBs and SBB parts to interact with the session ownership subsystem.

API documentation for the resource adaptor type is available here. The SLEE component identifiers of the resource adaptor type and the event types generated by it are given in the class-level documentation of the SessionOwnershipProvider interface.

An SBB or SBB part typically uses the resource adaptor type in the following sequence:

Convergence Name Session Ownership Record

A convergence name session ownership record is a session ownership record related to an SBB entity tree. These records are managed by Rhino using the session ownership store on behalf of each SBB entity tree. This means that applications do not need to create, update, or store the record directly themselves; they simply ask for the record and modify attributes on it in a CMP-style fashion as desired. Modifications to the underlying record are stored automatically by Rhino during transaction commit, and the record is automatically deleted when the SBB entity tree is removed.

Convergence name session ownership records and their API are described in more detail in the Rhino Extended APIs book.

Cassandra Session Ownership Store

A session ownership store implementation that interacts with a Cassandra persistence resource is included within a Rhino installation.

An example configuration for this session ownership store, with the default values for each configurable parameter, is shown below:

  <session-ownership-store service-provider-class-name="com.opencloud.resource.sessionownership.cassandra.CassandraSessionOwnershipStoreServiceProvider"
                           default-ttl="3600" thread-count="30" max-queue-size="500">
    <parameter name="keyspaceReplication" type="java.lang.String" value="{'class': 'SimpleStrategy', 'replication_factor': 3}"/>
    <parameter name="durableWrites" type="boolean" value="true"/>
    <parameter name="sessionReconnectionPeriod" type="long" value="5000"/>
    <parameter name="schemaVersion" type="java.lang.String" value="1.0"/>
    <parameter name="tombstoneGCGracePeriod" type="int" value="900"/>
    <parameter name="maxBatchStatementSize" type="int" value="4096"/>
    <parameter name="readTimeout" type="long" value="2000"/>
    <parameter name="writeTimeout" type="long" value="2000"/>
    <persistence-resource-ref name="cassandra"/>
  </session-ownership-store>

This configuration is provided in the Rhino installation’s default rhino-config.xml file but is disabled by default. It can be automatically enabled during Rhino installation by requesting that the session ownership facility be enabled.

Documentation for the session ownership store’s service provider class, which includes a full description of all the configurable parameters, is included in the Rhino installation within the doc/api/session-ownership-store subdirectory.

Cassandra Session Ownership Store Statistics

The following statistics are gathered by the Cassandra session ownership store implementation included with Rhino:

CassandraSessionOwnershipStats

Cassandra session ownership store statistics

OID: not specified

Name                             Short name  Mapping  Type     Units  Description
batchesCreated                   batches     2        counter  #      Cassandra BATCH statements created
batchedStatements                bStmts      3        counter  #      UPDATE/DELETE statements included in a BATCH
batchedStatementSizeTotal        bStmtSzT    4        counter  bytes  Total size of all UPDATE/DELETE statements that were batched
notBatchedStatements             XbStmts     5        counter  #      UPDATE/DELETE statements not included in a BATCH
notBatchedStatementSizeTotal     XbStmtSzT   6        counter  bytes  Total size of all UPDATE/DELETE statements that were not batched
querySuccesses                   succ        7        counter  #      Database queries that executed successfully
queryFailures                    fail        8        counter  #      Database queries that failed during execution
recordsStored                    str         10       counter  #      Count of blind-write records stored successfully
recordStoreFailures              strF        11       counter  #      Count of blind-write record store attempts that failed
casCreateRecordsStored           Cstr        12       counter  #      Count of CAS-create records stored successfully
casCreateRecordStoreFailures     CstrF       13       counter  #      Count of CAS-create record store attempts that failed
casSequenceRecordsStored         Sstr        14       counter  #      Count of CAS-sequence records stored successfully
casSequenceRecordStoreFailures   SstrF       15       counter  #      Count of CAS-sequence record store attempts that failed
recordsDeleted                   del         16       counter  #      Count of blind-write records deleted successfully
recordDeleteFailures             delF        17       counter  #      Count of blind-write record delete attempts that failed
casSequenceRecordsDeleted        Sdel        18       counter  #      Count of CAS-sequence records deleted successfully
casSequenceRecordDeleteFailures  SdelF       19       counter  #      Count of CAS-sequence record delete attempts that failed
retrievesSuccessful              retrS       20       counter  #      Count of successful retrieve attempts that returned useful data
retrievesNoData                  retrN       21       counter  #      Count of successful retrieve attempts that returned no data
retrieveFailures                 retrF       22       counter  #      Count of retrieve attempts that failed
requestQueueOverflows            qOvf        27       counter  #      Count of session ownership requests that could not be fulfilled due to lack of space in the thread pool request queue
batchedStatementSize             bStmtSz     -        sample   bytes  Size of UPDATE/DELETE statements that were batched
notBatchedStatementSize          XbStmtSz    -        sample   bytes  Size of UPDATE/DELETE statements that were not batched
persistedBatchSize               persSz      -        sample   bytes  Total size of all UPDATE/DELETE statements batched in each persistence cycle
readTime                         readT       -        sample   ms     Time taken to execute each SELECT statement
writeTime                        writeT      -        sample   ms     Time taken to execute (potentially batched) UPDATE/DELETE statements

Counter statistics use the counter default view; sample statistics record source and display units of count (size samples) or time/milliseconds (time samples).

Management Tools

This section provides an overview of tools included with Rhino for system administrators to manage the Rhino SLEE.

Topics

Tools for general operation, administration and maintenance

Using the command-line console, Rhino Element Manager, Apache Ant scripting and the Rhino Remote API.

Java Management Extension plugins

JMX M-lets including the JMX remote adaptor.

Tools for monitoring and system reporting

rhino-stats, generate-system-report and dumpthreads.

Utilities

init-management-db, generate-client-configuration, rhino-passwd and cascade-uninstall.

Export-related tools

rhino-export, rhino-import, rhino-snapshot, snapshot-decode and snapshot-to-export.

Tip Also review the memory considerations when using the management tools, especially when running the Rhino cluster and management tools on the same host.

Tools for General Operations, Administration, and Maintenance

Command-Line Console (rhino-console)

The Rhino SLEE command console (rhino-console) is a command-line shell which supports both interactive and batch-file commands to manage and configure the Rhino SLEE.

Tip See also the instructions to configure, log into, select a management command from, and configure failover for the command console.

Below are details on the usage of the command-line console, the available commands, the Java archives required to run the command line console, and the security configuration.

rhino-console usage

The command console takes the following arguments:

Usage:

   rhino-console <options> <command> <parameters>

Valid options:

   -? or --help   - Display this message
   -h <host>      - Rhino host
   -p <port>      - Rhino port
   -u <username>  - Username
   -w <password>  - Password, or "-" to prompt

   -D             - Display connection debugging messages
   -r <timeout>   - Initial reconnection retry period (in seconds). May be 0
                    to indicate that the client should reconnect forever.
   -n <namespace> - Set the initial active namespace
If no command is specified, client will start in interactive mode.
The help command can be run without connecting to Rhino.

If you don’t specify a command argument, the client starts in interactive mode. If you do give rhino-console a command argument, it runs in non-interactive mode. (./rhino-console install is the equivalent of entering install in the interactive command shell.)

In interactive mode, the client reports alarms when they occur and includes the SLEE state and alarm count in the prompt. It only reports the SLEE state if the SLEE state of any event router node is not RUNNING. This behaviour can be disabled by setting the system property "rhino.console.disable-listeners" to true in $CLIENT_HOME/etc/rhino-client-common.

Tip
  • The command console features command-line completion: if you press the tab key, the console guesses the rest of a partially input command or command argument. It also records the history of the commands entered in interactive mode. The up and down arrow keys cycle through the history, and the history command prints a list of recent commands.

  • You can specify the help command argument without a connection to a Rhino node.

Command categories

Below are the available categories of rhino-console commands. Enter help <category name | command name substring> for a list of available commands in each category.

Category         Description
auditing         Manage Rhino’s auditing subsystem
bindings         Manage component bindings
config           Import, export, and manage Rhino configuration
deployment       Deploy, undeploy, and view SLEE deployable units
general          Rhino console help and features
housekeeping     Housekeeping and maintenance functions
license          Install, remove, and view Rhino licenses
limiting         Manage Rhino’s limiting subsystem
logging          Configure Rhino’s internal logging subsystem
persistence      Configure Rhino’s external database persistence subsystem
profile          Manage profiles and profile tables
resources        Manage resource adaptors
security         Manage Rhino security
services         Manage services running in the SLEE
sleestate        Query and manage Rhino SLEE state
snmp             Manage Rhino SNMP agent configuration
thresholdrules   Manage threshold alarm rules
trace            View and set SLEE trace levels using the trace MBean
usage            View SLEE service usage statistics
usertransaction  Manage client-demarcated transaction boundaries

Java archives

The classes required to run the command console are packaged as a set of Java libraries. They include:

File/directory          Description
rhino-console.jar       Command-line client implementation
rhino-remote.jar        Rhino remote management API
rhino-logging.jar       Rhino logging system
slee.jar                JAIN SLEE 1.1 API
$RHINO_HOME/client/lib  Third-party libraries such as jline, log4j and other dependencies

Security

The command-line console relies on the JMX Remote Adaptor for security.

Note

For a detailed description of JMX security and MBean permission format, see Chapter 12 of the JMX 1.2 specification.

See also Security.

Configuring the Command Console

Tip Generally, you will not need to configure the command console for Rhino (the instructions below are for custom use).

Below are instructions on configuring ports and usernames and passwords for rhino-console.

Configure rhino-console ports

If another application is occupying the default command-console port (1199), you can change the configuration to use a different port instead. For example, to use port 1299:

  1. Go to the $RHINO_BASE/client directory.
    (This directory will hereafter be referred to as $CLIENT_HOME.)

  2. Edit the $CLIENT_HOME/etc/client.properties file, to configure the RMI properties as follows:

    rhino.remote.port=1299
  3. Edit the $RHINO_BASE/etc/defaults/config/config_variables file (and $RHINO_BASE/node-XXX/config/config_variables for any node directory that has already been created) to specify port numbers as follows:

    RMI_MBEAN_REGISTRY_PORT=1299
Warning You need to restart each Rhino node for these changes to take effect.

Configure rhino-console usernames and passwords

To edit or add usernames and passwords for accessing Rhino with the command console, edit the $RHINO_BASE/rhino.passwd file. Its format is:

username:password:rolelist

The role names must match roles defined in the $RHINO_BASE/etc/defaults/config/defaults.xml file or those otherwise configured at runtime (see Security).
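For example, a line defining a user admin with password password in the admin role would be:

admin:password:admin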

Warning You need to restart the Rhino node for these changes to take effect.

Logging into the Command Console

Below are instructions for logging into rhino-console on a local or remote machine.

Local

To log into the command console on the same host:

  1. Go to the $CLIENT_HOME directory.

  2. Run $CLIENT_HOME/bin/rhino-console:

Interactive Rhino Management Shell
Rhino management console, enter 'help' for a list of commands
[Rhino@localhost (#0)]

Remote

To log into the command console on a remote machine:

  1. On the remote Rhino machine, edit the security policy to allow connections from the machine where you want to build and deploy (by default only local connections are allowed).

To do this, edit the node’s $RHINO_HOME/config/config_variables file (which contains a list of IP addresses from which Rhino will accept management connections, including the current Rhino machine), adding the address of your build machine to the LOCALIPS variable. For example:

LOCALIPS="192.168.0.1 127.0.0.1 <other-IP-address>"
Note You need to restart Rhino for this to take effect.

  2. Copy $RHINO_BASE/client from the Rhino machine to your build machine.

Note The client directory contains SSL keys for communicating securely with Rhino (using RMI over SSL), so it must be copied from the Rhino instance you want to manage.

  3. On the build machine, edit the client/etc/client.properties file, changing the rhino.remote.host property to the address of the Rhino machine. For example:

rhino.remote.host=<Address-of-the-Rhino-Machine>

Now the client/bin/rhino-console script should work on the build machine.

Tip Alternatively, you can run client/bin/rhino-console -h <Address-of-the-Rhino-Machine> -p 1199 on the build machine.

Selecting a Management Command from the Command Console

As summarised on the Command-Line Console (rhino-console) page, you can view:

  • rhino-console command usage and a list of command categories, by entering the help command with the rhino-console script (./rhino-console --help).

  • help on a particular command, by entering help, specifying the command, within the console:

    help [command | command-type]
    get help on available commands
  • a list of rhino-console commands in a particular category, by entering help <category name | command name substring>. For example:

    [Rhino@localhost (#1)] help getclusterstate
      getclusterstate
        Display the current state of the Rhino Cluster

Version-specific commands

Warning
Console commands may depend on the Rhino version

Some rhino-console commands work with SLEE 1.1 or 1.0 only.

As an example of version-specific rhino-console commands: between SLEE 1.0 and SLEE 1.1, underlying tracing has changed significantly. As per the SLEE 1.1 specification, the settracerlevel command can only be used for SBBs, profile abstract classes and resource adaptors (and potentially other SLEE subsystems based on SLEE 1.1-compliant specifications).

As detailed below, the settracelevel command has been deprecated in SLEE 1.1 and replaced by settracerlevel. However, you can still use settracelevel to set the trace level of a SLEE 1.0-compliant component.

settracerlevel (SLEE 1.1)

Console command: settracerlevel

Command

settracerlevel <type> <notif-source> <tracer> <level>
    Set the trace level for a notification source's tracer

Example

$ ./rhino-console settracerlevel sbb
    "service=ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8],
     sbb=SbbID[name=ProxySbb,vendor=OpenCloud,version=1.8]" "" Finest
Set trace level of
    SbbNotification[service=ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8],
          sbb=SbbID[name=ProxySbb,vendor=OpenCloud,version=1.8]] root tracer to Finest
MBean operation: TraceMBean

The TraceMBean management interface has been extended in SLEE 1.1 so that management clients can easily enable tracing for a particular NotificationSource and tracer name:

MBean

SLEE-defined

public void setTraceLevel(NotificationSource notificationSource, String tracerName, TraceLevel level)
    throws NullPointerException, UnrecognizedNotificationSourceException,
          InvalidArgumentException, ManagementException;

Arguments

For this operation, you need to specify the:

  • notificationSource — notification source

  • tracerName — name of the tracer to assign the trace level to (an empty string denotes the root tracer)

  • level — trace-filter level to assign to the tracer.

settracelevel (SLEE 1.0)

Console command: settracelevel

Command

 settracelevel <type> <id> <level>
Set the trace level for a component

Example

$ ./rhino-console settracelevel sbb "name=FT PingSbb,vendor=OpenCloud,version=1.0" Finest
    set trace level of SbbID[name=FT PingSbb,vendor=OpenCloud,version=1.0] to Finest
MBean operation: TraceMBean
Warning This method has been deprecated, since it uses a ComponentID to identify a notification source (which is not compatible with the changes made to the tracing subsystem in SLEE 1.1). It has been replaced with setTracerLevel(NotificationSource, String, TraceLevel).

MBean

SLEE-defined

public void setTraceLevel(ComponentID id, Level traceLevel)
    throws NullPointerException, UnrecognizedComponentException, ManagementException

Arguments

For this operation, you need to specify the:

  • id — identifier of the component

  • traceLevel — new trace-filter level for the component.

Configuring Failover for the Command Console

To configure the rhino-console to connect automatically to another node in the cluster if the current node fails, edit $CLIENT_HOME/etc/client.properties as follows (replacing hostN with the host names of your cluster and 1199 with the respective port numbers):

rhino.remote.serverlist=host1:1199,host2:1199

Now, if a node in the cluster fails, the command console will automatically connect to the next node in the list. The following example shows failover from node-101 to node-102:

Cluster state before failover

[Rhino@host1 (#2)] getclusterstate
node-id  active-alarms  host   node-type     slee-state  start-time          up-time
-------  -------------  -----  ------------  ----------  ------------------  ------------------
101      0              host1  event-router  Running     20080430 18:02:35   0days,23h,16m,34s
102      0              host2  event-router  Running     20080430 18:02:17   0days,23h,16m,16s
2 rows

Cluster state after failover

[Rhino@host2 (#2)] getclusterstate
node-id  active-alarms  host   node-type     slee-state  start-time          up-time
-------  -------------  -----  ------------  ----------  ------------------  ------------------
102      0              host2  event-router  Running     20080430 18:02:17   0days,23h,16m,26s
1 rows
Warning Command-console failover is only available for the Production version of Rhino.

Rhino Element Manager (REM)

The Rhino Element Manager (REM) is a web-based console for monitoring, configuring, and managing a Rhino SLEE. REM provides a graphical user interface (GUI) for many of the management features documented in the Rhino Administration and Deployment Guide.

You can use REM to:

  • monitor a Rhino element (cluster nodes, activities, events, SBBs, alarms, resource adaptor entities, services, trace notifications, statistics, logs)

  • manage a Rhino element (SLEE state, alarms, deployment, profiles, resources), instances available in REM, and REM users

  • configure threshold rules, rate limiters, licenses, logging, and object pools

  • inspect activities, SBBs, timers, transactions, and threads

  • scan key information about multiple Rhino elements on a single screen.

Tip For details, please see the Rhino Element Manager documentation.

Scripting with Apache Ant

Apache Ant is a Java-based build tool (similar to Make). Ant projects are contained in XML files which specify a number of Ant targets and their dependencies. The body of each Ant target consists of a collection of Ant tasks. Each Ant task is a small Java program for performing common build operations (such as Java compilation and packaging).

Ant features

Ant includes the following features:

  • The configuration files are XML-based.

  • At runtime, a user can specify which Ant target(s) they want to run, and Ant will generate and execute tasks from a dependency tree built from the target(s).

  • Instead of a model extended with shell-based commands, Ant is extended using Java classes. Each task is run by an object that implements a particular task interface.

  • Ant build files are written in XML (and have the default name build.xml).

  • Each build file contains one project and at least one (default) target.

  • Targets contain task elements.

  • Each task element of the build file can have an id attribute and can later be referred to by that value, which must be unique.

  • A project can have a set of properties. These might be set in the build file by the property task, or might be set outside Ant.

  • Dynamic or configurable build properties (such as path names or version numbers) are often handled through the use of a properties file associated with the Ant build file (often named build.properties), as in the sketch below.
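A minimal sketch of that convention (the project, target, and property names here are illustrative, not taken from any Rhino script): the property task loads build.properties, and its values are then available as ${...} expansions.

<project name="example" default="show-config">
  <!-- load externally configurable values, e.g. rhino.home=/opt/rhino, from build.properties -->
  <property file="build.properties"/>

  <target name="show-config">
    <echo message="Using Rhino installation at ${rhino.home}"/>
  </target>
</project>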

Tip For more about Ant, see the Ant project page.

Writing an Ant build file by example

It is generally easier to write Ant build files by starting from a working example. The sample applications bundled with the Rhino SDK use Ant management scripts. They are good examples of how to automate the compilation and deployment steps. See rhino-connectivity/sip-*/build.xml in your Rhino SDK installation folder. Two Rhino tools can be used to create the build.xml file:

  • The Eclipse plugin creates a build.xml file that helps in building and creating components and deployable unit jar files.

  • The rhino-export tool creates a build.xml file that can redeploy deployed Rhino components to another Rhino instance. This feature is very useful during development — a typical approach is to manually install and configure a number of SLEE components, then use rhino-export to create a build.xml file, to automate provisioning steps.

The full set of Rhino Ant management tasks is available in the Rhino Management API.

Sample Rhino Ant Tasks

Metaswitch has developed custom Ant tasks for Rhino which can be used in Ant build scripts to deploy and configure SLEE components, including: packaging, deployment, services, resource adaptors, and profiles.

Below are examples of specific Ant tasks: install, createraentity, and activateservice.

Tip The full set of Rhino Ant management tasks is available in the Rhino Management API.

install

install is a Rhino management sub-task for installing deployable units.

Parameters

install takes the following Ant parameters (attributes for Ant tasks).

Parameter Description Required Default
 failonerror

Flag to control failure behaviour.

  • If true, the sub-task will throw a BuildException when an error is encountered.

  • If false, the sub-task will throw a NonFatalBuildException instead of a BuildException under specific circumstances.

No.

Taken from the Rhino management parent task.

 url

URL deployable unit to install.

Not required if srcfile is specified.

 srcfile

Path to deployable unit to install.

Not required if url is specified.

Example

For example, to install a deployable unit with a SIP resource adaptor, the build.xml file would contain:

<target name="install-ocjainsip-1.2-ra-du" depends="login">
  <slee-management>
    <install srcfile="units/ocjainsip-1.2-ra.jar"
             url="file:lib/ocjainsip-1.2-ra.jar"/>
  </slee-management>
</target>

createraentity

createraentity is a Rhino management sub-task for creating resource adaptor entities.

Parameters

createraentity takes the following Ant parameters:

Parameter Description Required Default
 failonerror

Flag to control failure behaviour.

  • If true, the sub-task will throw a BuildException when an error is encountered.

  • If false, the sub-task will throw a NonFatalBuildException instead of a BuildException under specific circumstances.

No.

Taken from the Rhino management parent task.

 entityname

Name of the resource adaptor entity to create — must be unique within the SLEE.

Yes.

 resourceadaptorid

Canonical name of the resource adaptor component from which the entity should be created.

Only required (or allowed) if the component nested element is not present.

 properties

Properties to be set on the resource adaptor.

No.

 component

Element that identifies the resource adaptor component from which the resource adaptor entity should be created. Available as a nested element. (See sleecomponentelement.)

Only required (or allowed) if the resourceadaptorid is not present.

Example

For example, to create a SIP resource adaptor entity, the build.xml file would contain:

<target name="create-ra-entity-sipra" depends="install-ocjainsip-1.2-ra-du">
  <slee-management>
    <createraentity entityname="sipra"
                    properties="ListeningPoints=0.0.0.0:5060/udp;0.0.0.0:5060/tcp,ExtensionMethods=,OutboundProxy=,
                                UDPThreads=1,TCPThreads=1,OffsetPorts=False,PortOffset=101,RetransmissionFilter=False,
                                AutomaticDialogSupport=False,Keystore=sip-ra-ssl.keystore,KeystoreType=jks,KeystorePassword=,
                                Truststore=sip-ra-ssl.truststore,TruststoreType=jks,TruststorePassword=,CRLURL=,CRLRefreshTimeout=86400,
                                CRLLoadFailureRetryTimeout=900,CRLNoCRLLoadFailureRetryTimeout=60,ClientAuthentication=NEED,
                                MaxContentLength=131072">
      <component name="OCSIP" vendor="Open Cloud" version="1.2"/>
    </createraentity>
    <bindralinkname entityname="sipra" linkname="OCSIP"/>
  </slee-management>
</target>

sleecomponentelement

A sleecomponentelement is an XML element that can be nested as a child of certain other Ant tasks to give them a SLEE component reference. It takes the following form:

<component name="name" vendor="vendor" version="version"/>

Below is the DTD definition:

<!ELEMENT component EMPTY>
<!ATTLIST component
          id ID #IMPLIED
          version CDATA #IMPLIED
          name CDATA #IMPLIED
          type CDATA #IMPLIED
          vendor CDATA #IMPLIED>

activateservice

activateservice is a Rhino management sub-task for activating services.

Parameters

activateservice takes the following Ant parameters.

Parameter Description Required Default
 failonerror

Flag to control failure behaviour.

  • If true, the sub-task will throw a BuildException when an error is encountered.

  • If false, the sub-task will throw a NonFatalBuildException instead of a BuildException under specific circumstances.

No.

Taken from the Rhino management parent task.

 serviceid

Canonical name of the service to activate.

Only required (or allowed) if the component nested element is not present.

 component

Element that identifies the service to activate. Available as a nested element. (See sleecomponentelement.)

Only required (or allowed) if the serviceid attribute is not present.

 nodes

Comma-separated list of node IDs on which the service should be activated.

No.

If not specified, the service is activated on all currently live Rhino event router nodes.

Example

For example, to activate three services based on the SIP protocol, the build.xml file might contain:

<target name="activate-services" depends="install-sip-ac-location-service-du,install-sip-registrar-service-du,install-sip-proxy-service-du">
  <slee-management>
    <activateservice>
      <component name="SIP AC Location Service" vendor="Open Cloud" version="1.5"/>
    </activateservice>
    <activateservice>
      <component name="SIP Registrar Service" vendor="Open Cloud" version="1.5"/>
    </activateservice>
    <activateservice>
      <component name="SIP Proxy Service" vendor="Open Cloud" version="1.5"/>
    </activateservice>
  </slee-management>
</target>

abstractbase

Abstract base class extended by other sub tasks.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances.

No. Default value is taken from the Rhino management parent task.

activateraentity

A Rhino management sub task for activating Resource Adaptor Entities.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

entityname

Name of the resource adaptor entity to activate.

Yes.

nodes

Comma-separated list of node IDs on which the resource adaptor entity should be activated.

No. If omitted an attempt is made to activate the resource adaptor entity on all current cluster members.

NonFatalBuildException throw conditions
  • The task is run targeting an already active resource adaptor.

activateservice

A Rhino management sub task for activating Services.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

serviceid

Canonical name of the service to activate.

Only required/allowed if the component nested element is not present.

nodes

Comma-separated list of node IDs on which the service should be activated.

No. If omitted an attempt is made to activate the service on all current cluster members.

Parameters available as nested elements
Element Description Required

component

Identifies the service to activate. See com.opencloud.slee.mlet.ant.SleeComponentElement

Only required/allowed if the serviceid attribute is not present.

NonFatalBuildException throw conditions
  • The task is run targeting an already active service.

addappenderref

A Rhino management sub task for adding an appender to a log key.

Ant Parameters
Attribute Description Required

logkey

Name of the log key to add the appender to.

Yes.

appendername

Name of the appender to add.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the appender cannot be added to the log key, e.g. if the appender has already been added.
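As an illustrative sketch (the appender name is a placeholder for one already defined in your logging configuration; rhino.snmp is the log key used in the addloggercomponent example below):

<slee-management>
  <addappenderref logkey="rhino.snmp" appendername="ExampleAppender"/>
</slee-management>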

addloggercomponent

A Rhino management sub task for adding a component to a Logger.

Ant Parameters
Attribute Type Description Required

logkey

String

Name of the log key to add the component to.

Yes.

pluginname

String

The Log4J plugin for this component.

Yes.

properties

String

A comma-separated list of configuration properties for the component. Each property is a key=value pair. Use this or a nested propertyset.

No.

failonerror

boolean

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances.

No. Default value is taken from the Rhino management parent task.

Nested elements

Element

Description

Required

component

The components to add to this logger.

No.

propertyset

Configuration properties for this component. Alternative to the properties attribute.

No.

To use this task, provide the configuration of the component as attributes and sub-elements:

<slee-management>
   <addloggercomponent logkey="rhino.snmp" pluginname="DynamicThresholdFilter" properties="key=rhinoKey, onMatch=ACCEPT, onMismatch=NEUTRAL">
       <component pluginname="KeyValuePair" properties="key=rhino, value=INFO"/>
   </addloggercomponent>
</slee-management>

addpermissionmapping

A Rhino management sub task for adding a permission mapping.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

objectnamepattern

MBean object name pattern as specified in javax.management.ObjectName

Yes.

member

A MBean member (attribute or operation)

Only if rhinopermissionsubcategory is specified.

rhinopermissioncategory

Primary part of the Rhino permission name

Yes.

rhinopermissionsubcategory

Secondary (optional) part of the Rhino permission name

Only if member is specified.

NonFatalBuildException throw conditions
  • Permission mapping already exists
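A hedged sketch follows; the object name pattern and permission category are placeholders, since valid values depend on your MBean and permission naming scheme:

<slee-management>
  <addpermissionmapping objectnamepattern="javax.slee.management:*"
                        rhinopermissioncategory="example-category"/>
</slee-management>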

addpermissiontorole

A Rhino management sub task for adding a permission to a role.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

role

Role name

Yes.

permissionName

Permission name (taken from a permission mapping target as either PermissionCategory or PermissionCategory#PermissionSubcategory)

Yes.

permissionActions

Permission actions to add, either "read" or "read,write"

Yes.

NonFatalBuildException throw conditions
  • None
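A sketch with placeholder role and permission names (the category would come from a permission mapping added as above):

<slee-management>
  <addpermissiontorole role="example-role" permissionName="example-category" permissionActions="read,write"/>
</slee-management>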

addpersistenceinstanceref

A Rhino management sub task for adding a persistence instance reference to a database resource.

Ant Parameters
Attribute Description Required

resourcetype

Type of resource to add the reference to. Must be one of "persistence" or "jdbc".

Yes.

resourcename

Name of the resource to add the reference to.

Yes.

persistenceinstancename

Name of the persistence instance to reference.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the persistence instance is already referenced by the resource.
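For example, to reference a persistence instance named example-postgres from a JDBC resource named example-resource (both names are placeholders for ones defined in your persistence configuration):

<slee-management>
  <addpersistenceinstanceref resourcetype="jdbc" resourcename="example-resource"
                             persistenceinstancename="example-postgres"/>
</slee-management>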

addservicebindings

A Rhino management sub task for adding bindings to a service.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

service

Identifies the service component. See com.opencloud.slee.mlet.ant.SleeComponentElement

Yes.

binding

Identifies a binding descriptor component. See com.opencloud.slee.mlet.ant.SleeComponentElement

Yes. May be repeated as many times as needed to add multiple bindings.

mapping

Specifies a mapping for a copied component. If the source component identifier equals a component that will be copied as a result of the binding, then the copied component will have the identity given by the target identifier, rather than a default value generated by the SLEE. See ComponentMappingElement

Yes. May be repeated as many times as needed to add multiple mappings.

NonFatalBuildException throw conditions
  • The task is run targeting a binding descriptor that has already been added to the service.
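The sketch below shows the general shape, using placeholder component names:

<slee-management>
  <addservicebindings>
    <service name="Example Service" vendor="Example Vendor" version="1.0"/>
    <binding name="example-binding-descriptor" vendor="Example Vendor" version="1.0"/>
  </addservicebindings>
</slee-management>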

bindralinkname

A Rhino management sub task for binding Resource Adaptor Entity Link Names.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

entityname

Canonical name of the resource adaptor entity to bind to the link name. This attribute can reference the Ant property saved by a previous createraentity sub task.

Yes.

linkname

The link name to bind to the resource adaptor entity.

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting an already bound linkname.
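For example, reusing the sipra entity and OCSIP link name from the createraentity example earlier:

<slee-management>
  <bindralinkname entityname="sipra" linkname="OCSIP"/>
</slee-management>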

cascadeuninstall

A Rhino management sub task for uninstalling a deployable unit and all dependencies recursively.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

url

URL of deployable unit to uninstall.

Either a url or component element must be declared.

Parameters available as nested elements
Element Description Required

component

Identifies a component to be removed. The component must be a copied component. See com.opencloud.slee.mlet.ant.SleeComponentElement. (Note that for the cascadeuninstall sub task the optional type attribute of component is required.)

Either a url or component element must be declared.

NonFatalBuildException throw conditions
  • The task is run targeting a non-existent deployable unit or component.
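For example, to cascade-uninstall the SIP resource adaptor deployable unit from the earlier install example:

<slee-management>
  <cascadeuninstall url="file:lib/ocjainsip-1.2-ra.jar"/>
</slee-management>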

checkalarms

A Rhino management sub task for checking active alarms in the SLEE.
Lists any active alarms, and fails the build only if the failonerror attribute is set to true.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException if any alarms are active. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException if any alarms are active.

No. Default value is taken from the Rhino management parent task.
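For example, to list active alarms without failing the build:

<slee-management>
  <checkalarms failonerror="false"/>
</slee-management>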

commandline

A Rhino management sub task for interacting directly with the command line client.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

argument

Used to specify individual command line arguments. See com.opencloud.slee.mlet.ant.ArgumentElement

Yes.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
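A sketch of running a console command from Ant; the value attribute on argument is an assumption about ArgumentElement, so check the Rhino Management API documentation for the exact form:

<slee-management>
  <commandline>
    <argument value="getclusterstate"/>
  </commandline>
</slee-management>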

componentbased

Abstract base class for all sub tasks that accept a component element.

configureabsolutestatlimiter

A Rhino management sub task for configuring an absolute stat limiter.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

name

The name of the rate limiter to be configured.

Yes.

parameterset

The parameter set for the statistic to monitor. If setting this, statistic must also be specified.

No.

statistic

The statistic to monitor. If setting this, parameterset must also be specified.

No.

bypassed

Whether this rate limiter will be used or bypassed for limiting rate. If false, the limiter will be used (not bypassed); if true, the limiter will be bypassed and hence not used. A value of '-' may be used to clear existing per-node settings when a list of nodes is specified.

No.

parent

Sets the parent of the limiter, adding the limiter to its parent’s limiter hierarchy.

No.

values

Comma-delimited list of rate limit tiers as values of the monitored statistic at which each tier should begin to take effect. Must specify the same number of values in limitpercentages corresponding to the percentage of calls to limit in each value tier.

No.

limitpercentages

Comma-delimited list of rate limit percentages per value. Values must be between 0 and 100 inclusive. A value of '-' may be used to clear (remove) the corresponding value tier. Must contain the same number of values as specified in values.

No.

nodes

Comma-delimited list of nodes to apply this configuration to. Only the values, limitpercentages, and bypassed configuration properties may be set on a per-node basis; all other properties are set uniformly across all nodes.

No.
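A hedged sketch; the limiter name, parameter set, and statistic are placeholders for ones that exist in your deployment:

<slee-management>
  <configureabsolutestatlimiter name="ExampleStatLimiter"
                                parameterset="example-parameter-set" statistic="exampleStatistic"
                                values="1000,2000" limitpercentages="50,100"/>
</slee-management>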

configurelogger

A Rhino management sub task for configuring a logger. This task is suitable for more complex configuration than the addappenderref and setloglevel tasks.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

logKey

The name of the logger to configure.

Yes.

level

Logger level.

No.

additivity

Boolean value indicating logger additivity.

No. If not specified, the default value for loggers is used.

asynchronous

Boolean value indicating if the logger should be asynchronous.

No. If not specified, the default value for loggers is used.

Nested elements

Element

Description

Required

appenderref

The name of an appender to attach to the logger. Multiple appender references may be specified. See AppenderRef.

No.

component

A plugin component for this logger. Multiple components may be specified. See CreateGenericComponentTask.

No.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.

For example, to add a burst filter to an individual log key, set it to log events asynchronously, and attach it to the Cassandra appender:

<slee-management>
    <configurelogger logKey="rhino.er" level="DEBUG" asynchronous="true">
        <appenderref name="Cassandra"/>
        <component pluginname="BurstFilter" properties="level=WARN,rate=50"/>
    </configurelogger>
</slee-management>

configureobjectpools

A Rhino management sub task for configuring object pools.

Ant Parameters
Attribute Description Required

initialPooledPoolSize

The initial size of the object pool for objects in the pooled pool.

No.

pooledPoolSize

The current size of the object pool for objects in the pooled pool.

No.

statePoolSize

The current size of the object pool for objects in the state pool.

No.

persistentStatePoolSize

The current size of the object pool for objects in the persistent state pool.

No.

readyPoolSize

The current size of the object pool for objects in the ready pool.

No.

stalePoolSize

The current size of the object pool for objects in the stale pool.

No.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.
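A minimal sketch using illustrative sizes only; consult the object pools documentation for values appropriate to your deployment:

<slee-management>
  <configureobjectpools pooledPoolSize="6000" readyPoolSize="2500"/>
</slee-management>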

configureratelimiter

A Rhino management sub task for configuring a rate limiter.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

name

The name of the rate limiter to be configured.

Yes.

maxrate

The maximum rate of tokens per second the rate limiter will allow. A value of '-' may be used to clear existing per-node settings when a list of nodes is specified.

No.

bypassed

Whether this rate limiter will be used or bypassed for limiting rate. If false, the limiter will be used (not bypassed); if true, the limiter will be bypassed and hence not used. A value of '-' may be used to clear existing per-node settings when a list of nodes is specified.

No.

timeunit

The rate limiter will allow maxrate tokens per timeunit. Allowed values are SECONDS, MINUTES, HOURS, DAYS.

No.

depth

Controls the amount of "burstiness" allowed by the rate limiter. A value of '-' may be used to clear existing per-node settings when a list of nodes is specified.

No.

parent

Sets the parent of the limiter, adding the limiter to its parent’s limiter hierarchy.

No.

nodes

Comma-delimited list of nodes to apply this configuration to. Only the maxrate, bypassed, and depth configuration properties may be set on a per-node basis; all other properties are set uniformly across all nodes.

No.
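For example, a sketch that configures a hypothetical limiter named ExampleRateLimiter to allow 100 tokens per second with a small burst allowance:

<slee-management>
  <configureratelimiter name="ExampleRateLimiter" maxrate="100" timeunit="SECONDS" depth="2"/>
</slee-management>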

configurerelativestatlimiter

A Rhino management sub task for configuring a relative stat limiter.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

name

The name of the rate limiter to be configured.

Yes.

parameterset

The parameter set for the statistic to monitor. If setting this, statistic must also be specified.

No.

statistic

The statistic to monitor. If setting this, parameterset must also be specified.

No.

relativeparameterset

The parameter set for the statistic to compare against. If setting this, relativestatistic must also be specified.

No.

relativestatistic

The statistic to compare against. If setting this, relativeparameterset must also be specified.

No.

bypassed

Whether this rate limiter will be used or bypassed for limiting rate. If false, the limiter will be used (not bypassed); if true, the limiter will be bypassed and hence not used. A value of '-' may be used to clear existing per-node settings when a list of nodes is specified.

No.

parent

Sets the parent of the limiter, adding the limiter to its parent’s limiter hierarchy.

No.

relativepercentages

Comma-delimited list of rate limit tiers as relative percentages of relative stat value. Values must be between 0 and 100 inclusive. Must specify the same number of values in limitpercentages corresponding to the percentage of calls to limit in each relative percentage tier.

No.

limitpercentages

Comma-delimited list of rate limit percentages per relative percentage. Values must be between 0 and 100 inclusive. A value of '-' may be used to clear (remove) the corresponding relative percentage tier. Must contain the same number of values as specified in relativepercentages.

No.

nodes

Comma-delimited list of nodes to apply this configuration to. Only the relativepercentages, limitpercentages, and bypassed configuration properties may be set on a per-node basis; all other properties are set uniformly across all nodes.

No.
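A hedged sketch with placeholder names throughout; the monitored and reference statistics must exist in your deployment:

<slee-management>
  <configurerelativestatlimiter name="ExampleRelativeLimiter"
                                parameterset="example-parameter-set" statistic="exampleStatistic"
                                relativeparameterset="example-reference-set" relativestatistic="referenceStatistic"
                                relativepercentages="80,90" limitpercentages="50,100"/>
</slee-management>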

configuresas

A Rhino management sub task for configuring SAS.

Ant Parameters
Attribute Description Required

server

The hostname/address of the SAS server. If set, will override values in nested server elements.

No.

resourceIdentifier

The resource-identifier of the SAS resource bundle to associate with events sent to the SAS server.

No.

systemName

The system name to use when connecting to SAS.

No.

appendNodeID

Determines whether the cluster node ID should be appended to the system name.

No.

systemType

The system type to use when connecting to SAS.

No.

systemVersion

The system version to use when connecting to SAS.

No.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Nested elements

Element

Description

Required

server

A SAS server host and optional port specification. See com.opencloud.slee.mlet.ant.tasks.ConfigureSasTask.SasServerElement

No.
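For example, a sketch pointing Rhino at a hypothetical SAS server (hostname and system name are placeholders):

<slee-management>
  <configuresas server="sas.example.com" systemName="rhino-cluster" appendNodeID="true"/>
</slee-management>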

configuresaturationlimiter

A Rhino management sub task for configuring a queue saturation limiter.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

name

The name of the queue saturation limiter to be configured.

Yes.

maxsaturation

The maximum amount of saturation allowed in the staging queue before rejecting work, expressed as a percentage. A value of '-' may be used to clear existing per-node settings when a list of nodes is specified.

No.

bypassed

Whether this limiter will be used or bypassed for limiting. If false, the limiter will be used (not bypassed); if true, the limiter will be bypassed and hence not used. A value of '-' may be used to clear existing per-node settings when a list of nodes is specified.

No.

parent

Sets the parent of the limiter, adding the limiter to its parent’s limiter hierarchy.

No.

nodes

Comma-delimited list of nodes to apply this configuration to. Only the maxsaturation and bypassed configuration properties may be set on a per-node basis; all other properties are set uniformly across all nodes.

No.
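A sketch with a placeholder limiter name, rejecting work once the staging queue is 80% full:

<slee-management>
  <configuresaturationlimiter name="ExampleSaturationLimiter" maxsaturation="80"/>
</slee-management>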

configurestagingqueues

A Rhino management sub task for configuring the staging queue.

Ant Parameters
Attribute Description Required

maximumSize

Maximum size of the staging queue.

No. If not specified, the last specified value is used; if there is no previously specified value, the default size is used (3000).

maximumAge

Maximum possible age of staging items, in milliseconds. Specify an age of -1 to ignore the age of staging items (i.e. staging items will never be discarded due to their age).

No. If not specified, the last specified value is used; if there is no previously specified value, the default age is used (10000).

threadCount

Number of staging threads in the thread pool.

No. If not specified, the last specified value is used; if there is no previously specified value, the default size is used (30).

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.
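For example, a sketch that enlarges the staging queue and thread pool beyond the defaults described above:

<slee-management>
  <configurestagingqueues maximumSize="5000" maximumAge="10000" threadCount="50"/>
</slee-management>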

connectraentitylimiterendpoint

A Rhino management sub task for connecting an RA Entity limiter endpoint to a limiter.

Ant Parameters
Attribute Description Required

entityname

Name of the resource adaptor entity.

Yes.

endpointname

Name of the endpoint.

Yes.

limitername

Name of the limiter to connect to.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.
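
For example, a minimal sketch connecting an endpoint of a resource adaptor entity to a limiter (the entity, endpoint and limiter names are illustrative):

<slee-management>
    <connectraentitylimiterendpoint entityname="sipra" endpointname="incoming" limitername="MyRateLimiter"/>
</slee-management>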

connectservicelimiterendpoint

A Rhino management sub task for connecting a Service limiter endpoint to a limiter.

Ant Parameters
Attribute Description Required

servicename

Name of the service.

Yes.

servicevendor

Vendor of the service.

Yes.

serviceversion

Version of the service.

Yes.

sbbname

Name of the sbb.

Yes.

sbbvendor

Vendor of the sbb.

Yes.

sbbversion

Version of the sbb.

Yes.

endpointname

Name of the endpoint.

Yes.

limitername

Name of the limiter to connect to.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.
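
For example, a minimal sketch connecting a limiter endpoint declared by an SBB within a service (all names and versions are illustrative):

<slee-management>
    <connectservicelimiterendpoint servicename="MyService" servicevendor="Example" serviceversion="1.0"
                                   sbbname="MySbb" sbbvendor="Example" sbbversion="1.0"
                                   endpointname="outgoing" limitername="MyRateLimiter"/>
</slee-management>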

copycomponent

A Rhino management sub task for copying a component to a new target identity.

Ant Parameters
Attribute Description Required

type

The component type. See com.opencloud.slee.mlet.ant.SleeComponentElement for valid component type strings.

Yes.

installLevel

The target install level for the copied component. Allowed values are: INSTALLED, VERIFIED, DEPLOYED.

No. If not specified, defaults to DEPLOYED.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

source

Identifies the source component. See com.opencloud.slee.mlet.ant.SleeComponentElement

Yes.

target

Identifies the component to create as a copy of the source component. See com.opencloud.slee.mlet.ant.SleeComponentElement

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting a component that has already been copied from the given source.

createappender

Abstract base class for all sub tasks that create logging appenders.

Ant Parameters
Attribute Description Required

appendername

Name of the appender to create. This name must be unique.

Yes.

ignoreexceptions

The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false, exceptions will be propagated to the caller instead.

No.

failonerror

Flag to control failure behaviour. If true, the sub task will throw a BuildException when an error is encountered. If false, the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • Implementation dependent.

createconsoleappender

A Rhino management sub task for creating a log appender with output directed to the console.

Ant Parameters
Attribute Type Description Required

appendername

String

Name of the appender to create. This name must be unique.

Yes.

direct

boolean

Log directly to the output stream instead of via the System.out/err PrintWriter

No.

follow

boolean

Follow changes to the destination stream of System.out/err. Incompatible with direct.

No.

target

String

Either "SYSTEM_OUT" or "SYSTEM_ERR". The default is "SYSTEM_OUT".

No.

ignoreexceptions

boolean

Log exceptions thrown by this appender, then ignore them. If set to false, propagate them to the caller (used to support selective appenders, e.g. FailoverAppender).

No.

failonerror

boolean

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Nested elements

Element

Description

Required

filter

A filter to select events that will be reported by this appender.

No.

layout

The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used.

No.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the appender cannot be created, e.g. an appender with the same name already exists.
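
For example, a minimal sketch creating a console appender directed at standard error (the appender name is illustrative):

<slee-management>
    <createconsoleappender appendername="console" target="SYSTEM_ERR"/>
</slee-management>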

createdatabaseresource

A Rhino management sub task for creating a database resource.

Ant Parameters
Attribute Description Required

resourcetype

Type of resource to create. Must be one of "persistence" or "jdbc".

Yes.

resourcename

Name of the resource to create. This name must be unique.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if a resource with the same type and name already exists.
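
For example, a minimal sketch creating a JDBC database resource (the resource name is illustrative):

<slee-management>
    <createdatabaseresource resourcetype="jdbc" resourcename="profiles-db"/>
</slee-management>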

createfileappender

A Rhino management sub task for creating a log appender writing to a file opened in write-only mode.

Ant Parameters
Attribute Type Description Required

appendername

String

Name of the appender to create. This name must be unique.

Yes.

filename

String

Name of log file to write to.

Yes.

append

boolean

When true (the default), records will be appended to the end of the file. When set to false, the file will be cleared before new records are written.

No. If not specified, defaults to true.

bufferedio

boolean

When true (the default), records will be written to a buffer, and the data will be written to disk when the buffer is full or, if immediateFlush is set, when the record is written. File locking cannot be used with bufferedIO. Performance tests have shown that using buffered I/O significantly improves performance, even if immediateFlush is enabled.

No. If not specified, defaults to true.

buffersize

int

When bufferedIO is true, this is the buffer size; the default is 8192 bytes.

No.

createondemand

boolean

When true, the appender creates the file on demand, i.e. only when a log event passes all filters and is routed to this appender.

No. If not specified, defaults to false.

immediateflush

boolean

When set to true (the default), each write will be followed by a flush. This will guarantee the data is written to disk but could impact performance.

No. If not specified, defaults to true.

locking

boolean

When set to true, I/O operations will occur only while the file lock is held allowing FileAppenders in multiple JVMs and potentially multiple hosts to write to the same file simultaneously. This will significantly impact performance so should be used carefully. Furthermore, on many systems the file lock is "advisory" meaning that other applications can perform operations on the file without acquiring a lock.

No. If not specified, defaults to false.

ignoreexceptions

boolean

When set to true (the default), exceptions encountered while appending events are internally logged and then ignored. When set to false, exceptions will be propagated to the caller instead.

No. If not specified, defaults to true.

pattern

String

The pattern to use for logging output.

No. If not specified, the default is %m%n.

failonerror

boolean

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Nested elements

Element

Description

Required

filter

A filter to select events that will be reported by this appender.

No.

layout

The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used.

No.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the appender cannot be created, e.g. an appender with the same name already exists.

The snippet below creates two appenders: a simple file appender using the default pattern of %m%n (the log message followed by a newline character); and a more complex configuration where the filename is taken from a property defined earlier in the Ant script, with filters that accept all messages from TRACE level up but limit messages of WARN and lower severity to a maximum of 3 per second. The second appender uses a more complex pattern containing a date stamp, the log level, the tracer name, the thread name, the diagnostic context map, the log message and the exception stack trace. The substitution property logDir is defined in the Rhino logging configuration file. See https://logging.apache.org/log4j/2.x/manual/layouts.html#PatternLayout for the full pattern layout syntax.

<property name="fileName" value="${logDir}/baz"/>
<slee-management>
    <createfileappender appendername="foo" fileName="${logDir}/buz"/>
    <createfileappender appendername="bar">
        <propertyset>
            <propertyref name="fileName"/>
        </propertyset>
        <component pluginname="filters">
            <component pluginname="BurstFilter" properties="level=WARN,rate=3"/>
            <component pluginname="ThresholdFilter" properties="level=trace"/>
        </component>
        <component pluginname="PatternLayout" properties="pattern=%d{yyyy-MM-dd HH:mm:ss.SSSZ} ${plainLevel} [%tracer{*.0.0.*}] &lt;%threadName&gt; %mdc %msg{nolookups}%n%throwable"/>
    </createfileappender>
</slee-management>

creategenericappender

A Rhino management sub task for creating a log appender.

Ant Parameters
Attribute Type Description Required

appendername

String

Name of the appender to create. This name must be unique.

Yes.

pluginname

String

The Log4J plugin for this appender

Yes.

properties

String

A comma separated list of configuration properties for the appender. Each property is a key=value pair. Use this or a nested propertyset.

No.

failonerror

boolean

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Nested elements

Element

Description

Required

propertyset

Configuration properties for this appender. Alternative to the properties attribute.

No.

filter

A filter to select events that will be reported by this appender.

No.

layout

The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used.

No.

component

Additional components such as loggerFields for SyslogAppender, KeyValuePairs, etc. Multiple components may be specified. See CreateGenericComponentTask.

No

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the appender cannot be created, e.g. an appender with the same name already exists.

To use this task, provide the configuration of the appender as attributes and subelements. For example, to replicate the Log4J Cassandra appender example, first configure Cassandra with a keyspace test containing a table logs, add the Cassandra client jar and log4j-nosql-2.8.2.jar to rhino/lib/logging-plugins before starting Rhino, then create an Ant target containing the slee-management block below. The original Log4J configuration being replicated is:

  <Appenders>
    <Cassandra name="Cassandra" clusterName="Test Cluster" keyspace="test" table="logs" bufferSize="10" batched="true">
      <SocketAddress host="localhost" port="9042"/>
      <ColumnMapping name="id" pattern="%uuid{TIME}" type="java.util.UUID"/>
      <ColumnMapping name="timeid" literal="now()"/>
      <ColumnMapping name="message" pattern="%message"/>
      <ColumnMapping name="level" pattern="%level"/>
      <ColumnMapping name="marker" pattern="%marker"/>
      <ColumnMapping name="logger" pattern="%logger"/>
      <ColumnMapping name="timestamp" type="java.util.Date"/>
      <ColumnMapping name="mdc" type="org.apache.logging.log4j.spi.ThreadContextMap"/>
      <ColumnMapping name="ndc" type="org.apache.logging.log4j.spi.ThreadContextStack"/>
    </Cassandra>
  </Appenders>
  <Loggers>
    <Logger name="org.apache.logging.log4j.cassandra" level="DEBUG">
      <AppenderRef ref="Cassandra"/>
    </Logger>
    <Root level="ERROR"/>
  </Loggers>
The schema of the logs table:

CREATE TABLE logs (
    id timeuuid PRIMARY KEY,
    timeid timeuuid,
    message text,
    level text,
    marker text,
    logger text,
    timestamp timestamp,
    mdc map<text,text>,
    ndc list<text>
);
The equivalent Ant configuration:

<slee-management>
    <creategenericappender appendername="Cassandra" pluginname="Cassandra" properties="clusterName=Test Cluster,keyspace=test,table=logs,bufferSize=10,batched=true">
        <component pluginname="SocketAddress" properties="host=localhost,port=9042"/>
        <component pluginname="ColumnMapping" properties="name=id,pattern=%uuid{TIME},type=java.util.UUID"/>
        <component pluginname="ColumnMapping" properties="name=timeid,literal=now()"/>
        <component pluginname="ColumnMapping" properties="name=message,pattern=%message"/>
        <component pluginname="ColumnMapping" properties="name=level,pattern=%level"/>
        <component pluginname="ColumnMapping" properties="name=marker,pattern=%marker"/>
        <component pluginname="ColumnMapping" properties="name=logger,pattern=%logger"/>
        <component pluginname="ColumnMapping" properties="name=timestamp,type=java.util.Date"/>
        <component pluginname="ColumnMapping" properties="name=mdc,type=org.apache.logging.log4j.spi.ThreadContextMap"/>
        <component pluginname="ColumnMapping" properties="name=ndc,type=org.apache.logging.log4j.spi.ThreadContextStack"/>
    </creategenericappender>
    <addappenderref appendername="Cassandra" logKey="org.apache.logging.log4j.cassandra"/>
    <setloglevel logKey="org.apache.logging.log4j.cassandra" logLevel="DEBUG"/>
</slee-management>

creategenericcomponent

Defines a logging component.

Ant Parameters
Attribute Type Description Required

pluginname

String

The Log4J plugin name for this component.

Yes.

properties

String

A comma separated list of configuration properties for the appender. Each property is a key=value pair. Use this or a nested propertyset.

No.

Nested elements

Element

Description

Required

propertyset

Configuration properties for this appender. Alternative to the properties attribute.

No.

component

Additional components such as KeyValuePairs, etc. Multiple components may be specified.

No

createjdbcresourceconnectionpool

A Rhino management sub task for adding a connection pool configuration to a JDBC resource.

Ant Parameters
Attribute Description Required

resourcename

Name of the JDBC resource.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the JDBC resource already has a connection pool configuration.
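
For example, a minimal sketch adding a connection pool configuration to an existing JDBC resource (the resource name is illustrative):

<slee-management>
    <createjdbcresourceconnectionpool resourcename="profiles-db"/>
</slee-management>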

createlimiter

A Rhino management sub task for creating a limiter.

Ant Parameters
Attribute Description Required

name

Name of the limiter to create.

Yes.

limiterType

The type of limiter to create.

No. If not specified, defaults to RATE.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if a limiter already exists with the same name.
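
For example, a minimal sketch creating a rate limiter (the name is illustrative; RATE is also the default limiter type):

<slee-management>
    <createlimiter name="MyRateLimiter" limiterType="RATE"/>
</slee-management>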

createlinkedcomponent

A Rhino management sub task for creating a virtual component that is a link to another component.

Ant Parameters
Attribute Description Required

type

The component type. See com.opencloud.slee.mlet.ant.SleeComponentElement for valid component type strings.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

source

Identifies the component to create as a link to the target component. See com.opencloud.slee.mlet.ant.SleeComponentElement

Yes.

target

Identifies the target component of the link. See com.opencloud.slee.mlet.ant.SleeComponentElement

Yes.

NonFatalBuildException throw conditions
  • The task is run with a source component that has already been linked to the given target.

creatememorymappedfileappender

A Rhino management sub task for creating a log appender writing to a memory-mapped file.

Ant Parameters
Attribute Type Description Required

appendername

String

Name of the appender to create. This name must be unique.

Yes.

filename

String

The file to write to

Yes.

append

boolean

Append to the file if true, otherwise clear the file on open.

No.

immediateflush

boolean

Flush to disk after every message. Reduces the risk of data loss on system crash at the cost of performance.

No.

regionlength

Integer

The length of the mapped region. 256B-1GB. Default 32MB.

No.

ignoreexceptions

boolean

Log exceptions thrown by this appender, then ignore them. If set to false, propagate them to the caller (used to support selective appenders, e.g. FailoverAppender).

No.

failonerror

boolean

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Nested elements

Element

Description

Required

filter

A filter to select events that will be reported by this appender.

No.

layout

The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used.

No.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the appender cannot be created, e.g. an appender with the same name already exists.
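
For example, a minimal sketch creating a memory-mapped file appender with a 1MB mapped region (the name, file and region length are illustrative):

<slee-management>
    <creatememorymappedfileappender appendername="mmap-log" filename="${logDir}/mmap.log" regionlength="1048576"/>
</slee-management>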

createnamespace

A Rhino management sub task for creating a deployment namespace.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

namespace

The name of the namespace to create.

Yes.

Parameters available as nested elements
Element Description Required

options

Describes the options that the namespace should be created with. See com.opencloud.slee.mlet.ant.tasks.NamespaceOptionsElement.

No.

NonFatalBuildException throw conditions
  • The task is run with the name of a namespace that already exists.
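
For example, a minimal sketch creating a namespace (the namespace name is illustrative):

<slee-management>
    <createnamespace namespace="testing"/>
</slee-management>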

createoutputstreamappender

Abstract base class for all sub tasks that create logging appenders that write to an output stream.

Ant Parameters
Attribute Type Description Required

appendername

String

Name of the appender to create. This name must be unique.

Yes.

ignoreexceptions

boolean

The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false, exceptions will be propagated to the caller instead.

No.

immediateflush

boolean

When set to true, each write will be followed by a flush. This will guarantee the data is written to the underlying output stream but could impact performance.

No.

failonerror

boolean

Flag to control failure behaviour. If true, the sub task will throw a BuildException when an error is encountered. If false, the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • Implementation dependent.

createpersistenceinstance

A Rhino management sub task for creating a persistence instance that can be used by a database resource.

Ant Parameters
Attribute Description Required

name

Name of the persistence instance to create. This name must be unique.

Yes.

type

Type of the persistence instance to create, e.g. 'jdbc' or 'cassandra'.

No. Defaults to 'jdbc'.

datasourceclass

Fully-qualified class name of the datasource class to be used by the persistence instance.

Only if 'type' is 'jdbc'.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

configproperty

Identifies a configuration property of the datasource class. See com.opencloud.slee.mlet.ant.ConfigPropertyElement. Note that the type property of ConfigPropertyElement is mandatory for this task.

One ConfigPropertyElement must be specified per config property.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if a persistence instance with the same name already exists.
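
For example, a minimal sketch creating a JDBC persistence instance. The instance name, datasource class and config property are illustrative, and the configproperty attribute names shown (name, type, value) are an assumption; check com.opencloud.slee.mlet.ant.ConfigPropertyElement for the exact attributes:

<slee-management>
    <createpersistenceinstance name="postgres" type="jdbc" datasourceclass="org.postgresql.ds.PGSimpleDataSource">
        <!-- illustrative config property; the attribute names are an assumption -->
        <configproperty name="serverName" type="java.lang.String" value="localhost"/>
    </createpersistenceinstance>
</slee-management>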

createprofile

A Rhino management sub task for creating Profiles inside tables.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

profilename

Name of the profile to create.

Yes.

tablename

Name of the profile table in which the profile will be created.

Yes.

Parameters available as nested elements
Element Description Required

profilevalue

Assigns a value to a profile attribute once the profile has been created. See com.opencloud.slee.mlet.ant.ProfileValueElement

No.

NonFatalBuildException throw conditions
  • The task is run targeting an already existing profile.
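
For example, a minimal sketch creating a profile in an existing profile table (the table and profile names are illustrative):

<slee-management>
    <createprofile tablename="subscribers" profilename="alice"/>
</slee-management>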

createprofiletable

A Rhino management sub task for creating Profile Tables.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

profilespec

Canonical name of the profile specification from which the profile table should be created.

Only required/allowed if the component nested element is not present.

tablename

Name of the profile table to create, this name must be unique.

Yes.

Parameters available as nested elements
Element Description Required

component

Identifies the profile specification component from which the profile table should be created. See com.opencloud.slee.mlet.ant.SleeComponentElement

Only required/allowed if the profilespec attribute is not present.

NonFatalBuildException throw conditions
  • The task is run targeting an already existing table.
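
For example, a minimal sketch creating a profile table from a profile specification (the names are illustrative, and the canonical name format is indicative only; use the name Rhino reports for your component):

<slee-management>
    <createprofiletable profilespec="name=SubscriberProfile,vendor=Example,version=1.0" tablename="subscribers"/>
</slee-management>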

createraentity

A Rhino management sub task for creating Resource Adaptor Entities.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

entityname

Name of the resource adaptor entity to create, this name must be unique within the SLEE.

Yes.

resourceadaptorid

Canonical name of the resource adaptor component from which the entity should be created.

Only required/allowed if the component nested element is not present.

properties

Properties to be set on the resource adaptor.

No.

Parameters available as nested elements
Element Description Required

component

Identifies the resource adaptor component from which the resource adaptor entity should be created. See com.opencloud.slee.mlet.ant.SleeComponentElement

Only required/allowed if the resourceadaptorid attribute is not present.

NonFatalBuildException throw conditions
  • The task is run targeting an already existing resource adaptor entity.
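
For example, a minimal sketch creating a resource adaptor entity with a configuration property (all names, versions and properties are illustrative, and the canonical name format is indicative only):

<slee-management>
    <createraentity entityname="sipra" resourceadaptorid="name=SIP RA,vendor=Example,version=1.0" properties="Port=5060"/>
</slee-management>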

createrandomaccessfileappender

A Rhino management sub task for creating a log appender writing to a file opened in RW mode.

Ant Parameters
Attribute Type Description Required

appendername

String

Name of the appender to create. This name must be unique.

Yes.

filename

String

The file to write to

Yes.

append

boolean

Append to the file if true, otherwise clear the file on open.

No.

buffersize

Integer

The size of the write buffer. Defaults to 256kB.

No.

immediateflush

boolean

Flush to disk after every message. Reduces the risk of data loss on system crash at the cost of performance.

No.

ignoreexceptions

boolean

Log exceptions thrown by this appender, then ignore them. If set to false, propagate them to the caller (used to support selective appenders, e.g. FailoverAppender).

No.

failonerror

boolean

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Nested elements

Element

Description

Required

filter

A filter to select events that will be reported by this appender.

No.

layout

The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used.

No.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the appender cannot be created, e.g. an appender with the same name already exists.
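
For example, a minimal sketch creating a random access file appender with a 64kB write buffer (the name, file and buffer size are illustrative):

<slee-management>
    <createrandomaccessfileappender appendername="raf-log" filename="${logDir}/raf.log" buffersize="65536"/>
</slee-management>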

createrole

A Rhino management sub task for creating a role.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

role

Role name

Yes.

baseRole

Role name to copy permissions from

No.

NonFatalBuildException throw conditions
  • Role already exists
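
For example, a minimal sketch creating a role that copies its initial permissions from an existing role (both role names are illustrative):

<slee-management>
    <createrole role="operators" baseRole="view"/>
</slee-management>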

createrollingfileappender

A Rhino management sub task for creating a log appender writing to a series of files opened in write-only mode.

Ant Parameters
Attribute Type Description Required

appendername

String

Name of the appender to create. This name must be unique.

Yes.

filename

String

The file to write to

Yes.

filepattern

String

The pattern of file names for archived log files. Depending on the rollover policy used, this typically contains a date pattern or %i for an integer counter.

Yes.

append

boolean

Append to the file if true, otherwise clear the file on open.

No.

bufferedio

boolean

Write to an intermediate buffer to reduce the number of write() syscalls.

No.

buffersize

Integer

The size of the write buffer. Defaults to 256kB.

No.

createondemand

boolean

Only create the file when data is written

No.

immediateflush

boolean

Flush to disk after every message. Reduces the risk of data loss on system crash at the cost of performance.

No.

ignoreexceptions

boolean

Log exceptions thrown by this appender, then ignore them. If set to false, propagate them to the caller (used to support selective appenders, e.g. FailoverAppender).

No.

failonerror

boolean

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Nested elements

Element

Description

Required

filter

A filter to select events that will be reported by this appender.

No.

layout

The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used.

No.

policy

The rollover policy to determine when rollover should occur

Yes.

strategy

The strategy for archiving log files. Strategies determine the name, location, number and compression of the archived logs.

No.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the appender cannot be created, e.g. an appender with the same name already exists.

Sample Usage

To create a rolling file appender called MyConferenceAppender with a 2KB write buffer and a file pattern of my-conference-<date>-<number>.log.gz, where log messages are filtered to only include calls connecting to the MyConference system:

<createrollingfileappender append="false" appendername="MyConferenceAppender" bufferedio="true" buffersize="2048" createondemand="true" filename="${logDir}/my-conference.log" filepattern="${logDir}/my-conference-%d{MM-dd-yyyy}-%i.log.gz" ignoreexceptions="false" immediateflush="false">
    <component pluginname="SizeBasedTriggeringPolicy" properties="size=1024mb"/>
    <component pluginname="PatternLayout" properties="pattern=%m"/>
    <component pluginname="DynamicThresholdFilter" properties="key=cdPty,defaultThreshold=OFF">
        <component pluginname="KeyValuePair" properties="key=MyConference,value=DEBUG"/>
    </component>
</createrollingfileappender>

createrollingrandomaccessfileappender

A Rhino management sub task for creating a log appender writing to a series of files opened in RW mode.

Ant Parameters
Attribute Type Description Required

appendername

String

Name of the appender to create. This name must be unique.

Yes.

filename

String

The file to write to

Yes.

filepattern

String

The pattern of file names for archived log files. Depending on the rollover policy used, this typically contains a date pattern or %i for an integer counter.

Yes.

append

boolean

Append to the file if true, otherwise clear the file on open.

No.

buffersize

Integer

The size of the write buffer. Defaults to 256kB.

No.

createondemand

boolean

Only create the file when data is written

No.

immediateflush

boolean

Flush to disk after every message. Reduces the risk of data loss on system crash at the cost of performance.

No.

ignoreexceptions

boolean

Log exceptions thrown by this appender, then ignore them. If set to false, propagate them to the caller (used to support selective appenders, e.g. FailoverAppender).

No.

failonerror

boolean

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Nested elements

Element

Description

Required

filter

A filter to select events that will be reported by this appender.

No.

layout

The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used.

No.

policy

The rollover policy to determine when rollover should occur

Yes.

strategy

The strategy for archiving log files. Strategies determine the name, location, number and compression of the archived logs.

No.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the appender cannot be created, e.g. an appender with the same name already exists.
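
For example, a minimal sketch that mirrors the rolling file appender sample above, using a size-based triggering policy (the names, sizes and file pattern are illustrative):

<slee-management>
    <createrollingrandomaccessfileappender appendername="rolling-raf" filename="${logDir}/rolling.log" filepattern="${logDir}/rolling-%d{MM-dd-yyyy}-%i.log.gz">
        <component pluginname="SizeBasedTriggeringPolicy" properties="size=100mb"/>
    </createrollingrandomaccessfileappender>
</slee-management>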

createsocketappender

A Rhino management sub task for creating a log socket appender.

Ant Parameters
Attribute Type Description Required

appendername

String

Name of the appender to create. This name must be unique.

Yes.

remotehost

String

Name or IP address of the remote host to connect to.

Yes.

port

int

Port on the remote host to connect to.

Yes.

protocol

String

"TCP" (default), "SSL" or "UDP".

No. If not specified, the default is TCP

immediatefail

boolean

When set to true, log events will not wait to try to reconnect and will fail immediately if the socket is not available.

No.

immediateflush

boolean

When set to true, each write will be followed by a flush. This will guarantee the data is written to the socket but could impact performance.

No. If not specified, defaults to true.

bufferedio

boolean

When true, events are written to a buffer and the data will be written to the socket when the buffer is full or, if immediateFlush is set, when the record is written.

No. If not specified, defaults to true.

buffersize

int

When bufferedIO is true, this is the buffer size; the default is 8192 bytes.

No.

reconnectiondelaymillis

int

If set to a value greater than 0, after an error the SocketManager will attempt to reconnect to the server after waiting the specified number of milliseconds. If the reconnect fails then an exception will be thrown (which can be caught by the application if ignoreExceptions is set to false).

No. If not specified, the default is 0

connecttimeoutmillis

int

The connect timeout in milliseconds. The default is 0 (infinite timeout, like Socket.connect() methods).

No.

keystorelocation

String

The location of the KeyStore which is used to create an SslConfiguration

No.

keystorepassword

String

The password to access the KeyStore.

No.

truststorelocation

String

The location of the TrustStore which is used to create an SslConfiguration

No.

truststorepassword

String

The password of the TrustStore

No.

ignoreexceptions

boolean

The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false, exceptions will be propagated to the caller instead.

No.

failonerror

boolean

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Nested elements

Element

Description

Required

filter

A filter to select events that will be reported by this appender.

No.

layout

The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used.

No.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the appender cannot be created, e.g. an appender with the same name already exists.

This example creates a socket appender that sends JSON formatted log events to the host and port specified in the host and port properties. It limits outgoing messages to a maximum of 3 INFO-level messages and 50 WARN-level messages per second; ERROR and more severe messages are always transmitted.

<property name="host" value="localhost"/>
<property name="port" value="5000"/>
<slee-management>
    <createsocketappender appendername="bar" properties="protocol=UDP,bufferedIO=true,immediateFlush=false,immediateFail=false,bufferSize=65536">
        <propertyset>
            <propertyref name="host"/>
            <propertyref name="port"/>
        </propertyset>
        <component pluginname="filters">
            <component pluginname="BurstFilter" properties="level=WARN,rate=50"/>
            <component pluginname="BurstFilter" properties="level=INFO,rate=3"/>
            <component pluginname="ThresholdFilter" properties="level=INFO"/>
        </component>
        <component pluginname="JsonLayout" properties="compact=false,complete=false,includeStacktrace=true,properties=true"/>
    </createsocketappender>
</slee-management>

This example requires the three Jackson library jars jackson-annotations-2.5.0.jar, jackson-core-2.5.0.jar and jackson-databind-2.5.0.jar to be present in rhino/lib/logging-plugins/. NOTE: the complete configuration property of JsonLayout should always be left at its default value of false; setting it to true will produce malformed JSON on unclean shutdown and node restart, requiring manual cleanup. Do not use this layout with file appenders, as the written JSON will be invalid.

To search the output of the appender "bar", use a tool such as jq:

# Select all log entries for key "rhino"
jq '.[] | select(.loggerName=="rhino")' < node-101/work/log/baz
# Select all log entries for transaction ID 101:235885817027593
jq '.[] | select(.contextMap.txID=="101:235885817027593")' < node-101/work/log/baz

createsyslogappender

A Rhino management sub task for creating a log socket appender with output formatted for consumption by a syslog daemon.

Ant Parameters
Attribute Type Description Required

appendername

Name of the appender to create. This name must be unique.

Yes.

remotehost

Name or IP address of the remote host to connect to.

Yes.

port

Port on the remote host to connect to.

Yes.

advertise

boolean

Should the appender be advertised

No

appname

String

RFC-5424 APP-NAME to use if using the RFC-5424 record layout.

No

enterprisenumber

String

The IANA enterprise number

No

facility

String

The facility to classify messages as. One of "KERN", "USER", "MAIL", "DAEMON", "AUTH", "SYSLOG", "LPR", "NEWS", "UUCP", "CRON", "AUTHPRIV", "FTP", "NTP", "AUDIT", "ALERT", "CLOCK", "LOCAL0", "LOCAL1", "LOCAL2", "LOCAL3", "LOCAL4", "LOCAL5", "LOCAL6", or "LOCAL7".

No

format

String

RFC-5424 or BSD

No

structureddataid

String

The RFC-5424 structured data ID to use if not present in the log message

No

includemdc

boolean

If true, include MDC fields in the RFC-5424 syslog record. Defaults to true.

No

mdcexcludes

String

A comma separated list of MDC fields to exclude. Mutually exclusive with mdcincludes.

No

mdcincludes

String

A comma separated list of MDC fields to include. Mutually exclusive with mdcexcludes.

No

mdcrequired

String

A comma separated list of MDC fields that must be present in the log event for it to be logged. If any of these are not present the event will be rejected with a LoggingException.

No

mdcprefix

String

A string that will be prepended to each MDC key.

No

messageid

String

The default value to be used in the MSGID field of RFC-5424 records.

No

newline

boolean

Write a newline on the end of each syslog record. Defaults to false.

No

protocol

String

TCP, UDP or SSL. Defaults to TCP.

No.

buffersize

Integer

The size of the write buffer. Defaults to 256kB.

No.

connecttimeoutmillis

Integer

Maximum connection wait time in milliseconds if greater than 0.

No.

reconnectiondelaymillis

Integer

Maximum time to attempt reconnection for before throwing an exception. The default, 0, is to try forever.

No.

immediatefail

boolean

When set to true, log events will be rejected immediately if the socket is unavailable, instead of queuing.

No.

immediateflush

boolean

Flush to disk after every message. Reduces the risk of data loss on system crash at the cost of performance.

No.

ignoreexceptions

boolean

Log exceptions thrown by this appender, then ignore them. If set to false, propagate them to the caller (used to support selective appenders, e.g. FailoverAppender).

No.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Nested elements

Element

Description

Required

filter

A filter to select events that will be reported by this appender.

No.

layout

The layout to use to format log events. Overrides the format attribute if set. Defaults to SyslogLayout.

No.

component

Additional components such as loggerFields

No

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the appender cannot be created, e.g. an appender with the same name already exists.

Sample Usage

To create a syslog appender that sends all log messages not filtered by the loggers to Syslog on localhost, and all WARN or greater to Syslog on logserver using TLS encryption:

<property name="fileName" value="baz"/>
<slee-management>
    <createsyslogappender appendername="local-syslog" mdcid="mdc" host="localhost" port="9601" facility="SYSLOG" protocol="UDP"/>
    <createsyslogappender appendername="remote-syslog" name="RFC5424" format="RFC5424" host="logserver" port="8514" protocol="TCP" appName="MyApp" includeMDC="true" facility="LOCAL0" enterpriseNumber="18060" newLine="true" messageId="Audit" id="App">
        <component pluginname="SslConfig" properties="protocol=TLS">
            <component pluginname="KeyStore" properties="location=log4j2-keystore.jks, password=KEYSTORE_PASSWORD"/>
            <component pluginname="TrustStore" properties="location=log4j2-truststore.p12, password=TRUSTSTORE_PASSWORD, type=PKCS12"/>
        </component>
        <component pluginname="filters">
            <component pluginname="ThresholdFilter">
                <component pluginname="KeyValuePair" properties="key=rhino, value=WARN"/>
            </component>
        </component>
    </createsyslogappender>
</slee-management>

createusageparameterset

A Rhino management sub task for creating usage parameter sets.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

name

Name of the usage parameter set to create.

Yes.

Parameters available as nested elements
Element Description Required

sbbNotificationSource

Identifies an SBB notification source. See com.opencloud.slee.mlet.ant.SbbNotificationSourceElement.

One and only one of sbbNotificationSource, raEntityNotificationSource, or profileTableNotificationSource must be specified.

raEntityNotificationSource

Identifies a resource adaptor entity notification source. See com.opencloud.slee.mlet.ant.RAENotificationSourceElement.

One and only one of sbbNotificationSource, raEntityNotificationSource, or profileTableNotificationSource must be specified.

profileTableNotificationSource

Identifies a profile table notification source. See com.opencloud.slee.mlet.ant.PTNotificationSourceElement.

One and only one of sbbNotificationSource, raEntityNotificationSource, or profileTableNotificationSource must be specified.

NonFatalBuildException throw conditions
  • The usage parameter set to be created already exists.

databaseresource

Abstract base class for sub tasks that manage database resources.

Ant Parameters
Attribute Description Required

resourcetype

Type of resource. Must be one of persistence or jdbc.

Yes.

resourcename

Name of the resource.

Yes.

failonerror

Flag to control failure behaviour. If true, the sub task will throw a BuildException when an error is encountered. If false, the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • Implementation dependent.

deactivateraentity

A Rhino management sub task for deactivating Resource Adaptor Entities.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

entityname

Name of the resource adaptor entity to deactivate.

Yes.

nodes

Comma-separated list of node IDs on which the resource adaptor entity should be deactivated.

No. If omitted an attempt is made to deactivate the resource adaptor entity on all current cluster members.

NonFatalBuildException throw conditions
  • The task is run targeting an already deactivated entity.

  • The task is run targeting a non-existent entity.
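
For example, a minimal sketch deactivating a resource adaptor entity on two specific nodes (the entity name and node IDs are illustrative):

<slee-management>
    <deactivateraentity entityname="sipra" nodes="101,102"/>
</slee-management>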

deactivateservice

A Rhino management sub task for deactivating Services.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

serviceid

Canonical name of the service to deactivate.

Only required/allowed if the component nested element is not present.

nodes

Comma-separated list of node IDs on which the service should be deactivated.

No. If omitted an attempt is made to deactivate the service on all current cluster members.

Parameters available as nested elements
Element Description Required

component

Identifies the service to deactivate. See com.opencloud.slee.mlet.ant.SleeComponentElement

Only required/allowed if the serviceid attribute is not present.

NonFatalBuildException throw conditions
  • The task is run targeting a service which is not active.

  • The task is run targeting a non-existent service.
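
For example, a minimal sketch deactivating a service on all current cluster members (the service name is illustrative and the canonical name format indicative only):

<slee-management>
    <deactivateservice serviceid="name=MyService,vendor=Example,version=1.0"/>
</slee-management>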

deploycomponent

A Rhino management sub task for deploying an installed component across the cluster.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

component

Identifies the component to deploy. See com.opencloud.slee.mlet.ant.SleeComponentElement

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting an already deployed component.

deploydeployableunit

A Rhino management sub task for deploying components in an installed deployable unit across the cluster.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

url

URL of deployable unit to deploy.

Yes.
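
For example, a minimal sketch deploying the components of an installed deployable unit (the URL is illustrative):

<slee-management>
    <deploydeployableunit url="file:units/my-service.jar"/>
</slee-management>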

disablerampup

A Rhino management sub task that disables ramp up of the limiter rate for an input limiter.

Ant Parameters
Attribute Description Required

name

The name of the limiter to disable ramp up for.

No. If not specified then ramping of the system input rate limiter is disabled.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.
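
For example, a minimal sketch; with no name attribute, this disables ramp up for the system input rate limiter:

<slee-management>
    <disablerampup/>
</slee-management>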

disablesymmetricactivationstatemode

Rhino versions prior to 3.0.0 had two modes of operation for managing the activation state of services and resource adaptor entities: per-node and symmetric. From Rhino 3.0.0 these two modes were combined and have been superseded by default desired state which can be overridden by per-node desired state. Per-node desired state overrides default desired state if present. Default desired state is effective if no per-node desired state exists. The commands and Ant tasks to enable and disable symmetric activation state mode remain but do not prevent later modification of per-node desired state.

A Rhino management sub task for disabling symmetric activation state mode.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • The task is run when the symmetric activation state mode is already disabled.

Deprecated since Rhino 3.0.0.

disconnectraentitylimiterendpoint

A Rhino management sub task for disconnecting an RA Entity limiter endpoint from a limiter.

Ant Parameters
Attribute Description Required

entityname

Name of the resource adaptor entity.

Yes.

endpointname

Name of the endpoint.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

disconnectservicelimiterendpoint

A Rhino management sub task for disconnecting a Service limiter endpoint.

Ant Parameters
Attribute Description Required

servicename

Name of the service.

Yes.

servicevendor

Vendor of the service.

Yes.

serviceversion

Version of the service.

Yes.

sbbname

Name of the sbb.

Yes.

sbbvendor

Vendor of the sbb.

Yes.

sbbversion

Version of the sbb.

Yes.

endpointname

Name of the endpoint.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

enablerampup

A Rhino management sub task that configures and enables ramp up of limiter rate for an input limiter.

For the in-built system input rate limiter:

  • The start rate and rate increment are expressed in total events processed by the node.

  • The events per increment specifies how many events must be successfully processed before the allowed rate will ramp up by the rate increment.

For user-defined rate limiters:

  • The start rate and rate increment are expressed as a percentage of the maximum configured units of work available to the limiter. For example, if the maximum rate for the limiter is 400 work units per time unit and the start rate is set to 20.0 then the ramp will begin with an allowed consumption rate of 20% of 400 or 80 work units per time unit.

  • The events per increment specifies how many work units must be accepted by the limiter, i.e. units not rate limited, before the allowed rate will ramp up by the rate increment.

Ant Parameters
Attribute Description Required

name

The name of the limiter to configure and enable ramp up for.

No. If not specified then the ramping of the system input rate limiter is enabled.

startrate

The initial number of events per second for the system input limiter (a double).

Yes.

rateincrement

The incremental number of events per second added to the allowed rate if Rhino is successfully processing work (a double).

Yes.

eventsperincrement

The number of events processed before Rhino will add rateincrement events to the allowed rate (an integer).

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.
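
For example, a minimal sketch that starts the system input limiter at 100 events per second and adds 50 events per second after every 1000 successfully processed events (the values are illustrative):

<slee-management>
    <enablerampup startrate="100" rateincrement="50" eventsperincrement="1000"/>
</slee-management>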

enablesymmetricactivationstatemode

Rhino versions prior to 3.0.0 had two modes of operation for managing the activation state of services and resource adaptor entities: per-node and symmetric. From Rhino 3.0.0 these two modes were combined and have been superseded by default desired state which can be overridden by per-node desired state. Per-node desired state overrides default desired state if present. Default desired state is effective if no per-node desired state exists. The commands and Ant tasks to enable and disable symmetric activation state mode remain but do not prevent later modification of per-node desired state.

A Rhino management sub task for enabling symmetric activation state mode.

Ant Parameters
Attribute Description Required

templatenode

The ID of the node to base symmetric state on. May be the string value 'any' to allow any node to be selected.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • The task is run when the symmetric activation state mode is already enabled.

Note This task is deprecated since Rhino 3.0.0.

importconfigurationkey

A Rhino management sub task for importing configuration items.

Ant Parameters
Attribute Description Required

filename

Source file containing configuration to be imported.

Yes.

type

The configuration type to import.

Yes.

innamespace

Flag indicating if the configuration should be imported into the global environment (false) or the current namespace (true).

No. If not specified then the task will attempt to determine the appropriate target for import based on the configuration type and the content being imported.

replace

Flag indicating if the update of existing configuration keys is allowed during the import. Must be true if the configuration being imported is expected to replace any existing configuration. If this flag is false and the configuration being imported contains configuration keys that are already present in Rhino, the import will fail with an exception.

No. Defaults to true.

failonerror

Flag to control failure behaviour. If true, the sub task will throw a BuildException when an error is encountered. If false, the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
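
Sample usage

A minimal sketch; the file name and the configuration type value are illustrative only:

    <importconfigurationkey filename="logging-config.xml" type="logging" replace="true"/>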

importprofiles

A Rhino management sub task for importing previously exported profiles.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

filename

Source file containing profiles to be imported.

Yes.

profile-table

Name of the profile table to import into. If not specified, profiles are imported into the profile table specified in the profile XML data.

No.

maxprofiles

Maximum number of profiles to handle in one transaction.

No.

replace

Flag indicating whether any existing profiles should be replaced with the new profile data.

No.

verify

Flag indicating whether the profileVerify() method will be invoked on each of the imported profiles.

No. Default value is true.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
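
Sample usage

A minimal sketch; the file and table names are illustrative only:

    <importprofiles filename="profiles.xml" profile-table="Subscribers" replace="true"/>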

install

A Rhino management sub task for installing Deployable Units.

Ant Parameters
Attribute Description Required

type

Type of deployable unit. Default supported types: "du", "bindings"

No. Defaults to "deployable unit".

url

URL of deployable unit to install.

Not required if srcfile is specified.

installlevel

The install level to which the deployable unit should be installed. Must be one of "INSTALLED", "VERIFIED", or "DEPLOYED".

No. Defaults to "DEPLOYED".

assignbundlemappings

If true, assign bundle prefixes to any SAS mini-bundles in the DU.

No. Defaults to 'false'.

srcfile

Path to deployable unit to install.

Not required if url is specified.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

The location of the deployable unit JAR file to read, and the URL to associate with it when passing it to Rhino, are determined as follows:

  • If both the srcfile and url parameters are specified, the JAR file is read from the file indicated by srcfile and the URL used is that specified by url.

  • If only srcfile is specified then the JAR file is read from this file and the filename is also used to construct a URL.

  • If only url is specified then the JAR file is read from this location and deployed using the specified URL.

NonFatalBuildException throw conditions
  • The task is run targeting an already deployed deployable unit.
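
Sample usage

A minimal sketch, assuming a deployable unit JAR at an illustrative path:

    <install srcfile="${basedir}/units/example-service.jar" installlevel="DEPLOYED"/>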

notificationsourcebased

Abstract base class for all sub tasks that accept a notification source element.

profilebased

Abstract base class extended by other sub tasks which deal with profiles.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

profilename

Name of the profile to create.

Yes.

tablename

Name of the profile table in which the profile will be created.

Yes.

Parameters available as nested elements
Element Description Required

profilevalue

Assigns a value to a profile attribute. See com.opencloud.slee.mlet.ant.ProfileValueElement

Implementation dependent.

NonFatalBuildException throw conditions
  • Implementation dependent.

profilevaluebased

Abstract base class for all sub tasks that accept attribute elements.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

profilevalue

Assigns a value to a profile attribute. See com.opencloud.slee.mlet.ant.ProfileValueElement

Implementation dependent.

NonFatalBuildException throw conditions
  • Implementation dependent.

raentitybased

Abstract base class for all sub tasks that take a resource adaptor entity.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

entityname

Name of the resource adaptor entity targeted by this task.

Yes.

NonFatalBuildException throw conditions
  • Implementation dependent.

removeappenderref

A Rhino management sub task for removing an appender from a log key.

Ant Parameters
Attribute Description Required

logkey

Name of the log key to remove the appender from.

Yes.

appendername

Name of the appender to remove.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the appender cannot be removed from the log key, e.g. the appender is not attached to the log key.

removecopiedcomponents

A Rhino management sub task for removing copied components.

Components can be removed either by specifying the URL of a deployable unit, in which case all copied components in the deployable unit will be removed, or by specifying one or more nested <component> elements.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

url

URL of deployable unit to remove copied components from.

Only required/allowed if no nested component elements are present.

Parameters available as nested elements
Element Description Required

component

Identifies a component to be removed. See com.opencloud.slee.mlet.ant.SleeComponentElement. (Note that for the removecopiedcomponents sub task the optional type attribute of component is required.)

Only required/allowed if the url attribute is not present. Multiple component elements are allowed.

NonFatalBuildException throw conditions
  • The task is run targeting a non-existent deployable unit or component.
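
Sample usage

A minimal sketch using the url variant, which removes all copied components in the deployable unit; the URL is illustrative only:

    <removecopiedcomponents url="file:units/example-service.jar"/>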

removedatabaseresource

A Rhino management sub task for removing a database resource.

Ant Parameters
Attribute Description Required

resourcetype

Type of resource to remove. Must be one of "persistence" or "jdbc".

Yes.

resourcename

Name of the resource to remove.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if a resource with the given type and name doesn’t exist.

removejdbcresourceconnectionpool

A Rhino management sub task for removing a connection pool configuration from a JDBC resource.

Ant Parameters
Attribute Description Required

resourcename

Name of the JDBC resource.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the JDBC resource doesn’t have a connection pool configuration.

removelimiter

A Rhino management sub task for removing a limiter.

Ant Parameters
Attribute Description Required

name

The name of the limiter to be removed.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the named limiter does not exist.

removelinkedcomponent

A Rhino management sub task for removing a virtual component that is a link to another component.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

component

Identifies the linked component to be removed. See com.opencloud.slee.mlet.ant.SleeComponentElement. (Note that for the removelinkedcomponent sub task the optional type attribute of component is required.)

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting a non-existent component.

removeloggerconfig

A Rhino management sub task for removing a configured logger.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

logKey

Name of the logger config to remove.

Yes.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.

removeloggingproperty

A Rhino management sub task for removing logging properties. This task will always fail when attempting to remove in-use properties.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

propertyName

Name of the logging property to remove. This property will not be removed if it is in use.

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting an unknown logging property.

removenamespace

A Rhino management sub task for removing a deployment namespace.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

namespace

The name of the namespace to remove.

Yes.

NonFatalBuildException throw conditions
  • The task is run with the name of a namespace that doesn’t exist.

removepermissionfromrole

A Rhino management sub task for removing a permission from a role.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

role

Role name

Yes.

permissionName

Permission name (taken from a permission mapping target as either PermissionCategory or PermissionCategory#PermissionSubcategory)

Yes.

permissionActions

Permission actions to remove, either "write" or "read,write"

Yes.

NonFatalBuildException throw conditions
  • Role does not exist
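
Sample usage

A minimal sketch; the role name, permission name and actions are illustrative only:

    <removepermissionfromrole role="operators" permissionName="ExampleCategory" permissionActions="write"/>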

removepermissionmapping

A Rhino management sub task for removing a permission mapping.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

objectnamepattern

MBean object name pattern as specified in javax.management.ObjectName

Yes.

member

A MBean member (attribute or operation)

Only if rhinopermissionsubcategory is specified.

rhinopermissioncategory

Primary part of the Rhino permission name

Yes.

rhinopermissionsubcategory

Secondary (optional) part of the Rhino permission name

Only if member is specified.

NonFatalBuildException throw conditions
  • Permission mapping does not exist

removepersistenceinstance

A Rhino management sub task for removing a persistence instance.

Ant Parameters
Attribute Description Required

name

Name of the persistence instance to remove.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if a persistence instance with the given name doesn’t exist.

removepersistenceinstanceref

A Rhino management sub task for removing a persistence instance reference from a database resource.

Ant Parameters
Attribute Description Required

resourcetype

Type of resource to remove the reference from. Must be one of "persistence" or "jdbc".

Yes.

resourcename

Name of the resource to remove the reference from.

Yes.

persistenceinstancename

Name of the persistence instance reference to remove.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the persistence instance is not referenced by the resource.

removeprofile

A Rhino management sub task for removing a Profile from a table.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

profilename

Name of the profile to remove.

Yes.

tablename

Name of the profile table containing the profile.

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting a non-existent profile.

  • The task is run targeting a non-existent profile table.

removeprofiletable

A Rhino management sub task for removing Profile Tables.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

tablename

Name of the profile table to remove.

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting a non-existent profile table.

removeraentity

A Rhino management sub task for removing Resource Adaptor Entities.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

entityname

Name of the resource adaptor entity to remove.

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting a non-existent resource adaptor entity.

removerole

A Rhino management sub task for removing a role.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

role

Role name

Yes.

NonFatalBuildException throw conditions
  • Role does not exist

removesasbundlemapping

A Rhino management sub task for removing a SAS bundle mapping.

Ant Parameters
Attribute Description Required

name

The name of the SAS bundle.

Yes.

removeservicebindings

A Rhino management sub task for removing bindings from a service.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

service

Identifies the service component. See SleeComponentElement

Yes.

binding

Identifies a binding descriptor component. See SleeComponentElement

Yes. May be repeated as many times as needed to remove multiple bindings.

NonFatalBuildException throw conditions
  • The task is run targeting a binding descriptor that is not currently added to the service.
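
Sample usage

A minimal sketch; the component names, vendors and versions are illustrative, and the attribute names on the nested elements assume the conventions of SleeComponentElement:

    <removeservicebindings>
        <service name="ExampleService" vendor="Example" version="1.0"/>
        <binding name="example-bindings" vendor="Example" version="1.0"/>
    </removeservicebindings>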

removeusageparameterset

A Rhino management sub task for removing usage parameter sets.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

name

Name of the usage parameter set to remove.

Yes.

Parameters available as nested elements
Element Description Required

sbbNotificationSource

Identifies an SBB notification source. See com.opencloud.slee.mlet.ant.SbbNotificationSourceElement.

One and only one of sbbNotifSource, raeNotifSource, or ptNotifSource must be specified.

raEntityNotificationSource

Identifies a resource adaptor entity notification source. See com.opencloud.slee.mlet.ant.RAENotificationSourceElement.

One and only one of sbbNotifSource, raeNotifSource, or ptNotifSource must be specified.

profileTableNotificationSource

Identifies a profile table notification source. See com.opencloud.slee.mlet.ant.PTNotificationSourceElement.

One and only one of sbbNotifSource, raeNotifSource, or ptNotifSource must be specified.

NonFatalBuildException throw conditions
  • The usage parameter set to be removed doesn’t exist.

setactivenamespace

A Rhino management sub task for setting the active deployment namespace.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

namespace

The namespace to make active. Use an empty string to denote the default namespace.

Yes.
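
Sample usage

A minimal sketch that switches to an illustrative namespace and then back to the default namespace:

    <setactivenamespace namespace="staging"/>
    <setactivenamespace namespace=""/>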

setadditivity

A Rhino management sub task for setting the additivity of a log key.

Ant Parameters
Attribute Description Required

logkey

Name of the log key to set the additivity of.

Yes.

additivity

Appender additivity of the log.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException.

setjdbcresourceconnectionpoolconfig

A Rhino management sub task for updating the connection pool configuration of a JDBC resource.

Ant Parameters
Attribute Description Required

resourcename

Name of the JDBC resource.

Yes.

maxconnections

The maximum total number of connections that may exist.

No.

minconnections

The minimum total number of connections that should exist.

No.

maxidleconnections

The maximum number of idle connections that may exist at any one time.

No.

maxidletime

The time period (in seconds) after which an idle connection may become eligible for discard.

No.

idlecheckinterval

The time period (in seconds) between idle connection discard checks.

No.

connectionpooltimeout

The maximum time (in milliseconds) that a SLEE application will block for a free connection before a timeout error occurs.

No.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

NonFatalBuildException throw conditions
  • This task will throw a NonFatalBuildException if the JDBC resource already has a connection pool configuration.
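
Sample usage

A minimal sketch; the resource name and pool settings are illustrative only:

    <setjdbcresourceconnectionpoolconfig resourcename="example-db"
        maxconnections="15" minconnections="5" maxidleconnections="10"
        connectionpooltimeout="5000"/>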

setloggingproperties

A Rhino management sub task for setting logging properties.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

property

A logging property to set.

Yes.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.

setloglevel

A Rhino management sub task for setting the log level of a particular log key.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

logkey

Name of the log key to change log level of.

Yes.

loglevel

Log level to set the key to.

Yes.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.

Sample usage

    <setloglevel logKey="rhino.ah.ah" logLevel="DEBUG"/>
    <setloglevel logKey="rhino" logLevel="WARN"/>
    <setloglevel logKey="" logLevel="WARN"/>

setprofileattributes

A Rhino management sub task for modifying profile attributes.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

profilename

Name of the profile to modify.

Yes.

tablename

Name of the profile table containing the profile.

Yes.

Parameters available as nested elements
Element Description Required

profilevalue

Assigns a value to a profile attribute. See com.opencloud.slee.mlet.ant.ProfileValueElement

Yes.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.

setraentitystartingpriority

A Rhino management sub task for setting the starting priority of a Resource Adaptor Entity.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

entityname

Name of the resource adaptor entity to update.

Yes.

priority

The new starting priority for the resource adaptor entity. Must be between -128 and 127. Higher priority values have precedence over lower priority values.

Yes.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
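
Sample usage

A minimal sketch; the entity name and priority are illustrative only:

    <setraentitystartingpriority entityname="example-ra" priority="10"/>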

setraentitystoppingpriority

A Rhino management sub task for setting the stopping priority of a Resource Adaptor Entity.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

entityname

Name of the resource adaptor entity to update.

Yes.

priority

The new stopping priority for the resource adaptor entity. Must be between -128 and 127. Higher priority values have precedence over lower priority values.

Yes.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.

setsasbundlemapping

A Rhino management sub task for setting a SAS bundle prefix mapping.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

bundlemappingfile

A file of SAS bundle mappings to be installed. Formatted as one <key>: <value> pair per line.

Only if there are no bundlemapping elements.

Parameters available as nested elements
Element Description Required

bundlemapping

Maps a fully qualified class name to a prefix

Only if there is no bundlemappingfile attribute.

NonFatalBuildException throw conditions
  • The task sets mappings that already map to the requested prefixes
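
Sample usage

A minimal sketch using the bundlemappingfile variant; the file name is illustrative only, and the file itself contains one <key>: <value> mapping per line:

    <setsasbundlemapping bundlemappingfile="sas-bundle-mappings.txt"/>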

setsastracingenabled

A Rhino management task for setting the state of SAS tracing.

Ant Parameters
Attribute Description Required

tracingEnabled

Boolean flag indicating if SAS tracing should be enabled (true) or disabled (false).

Yes.

force

If true, override the SLEE state check when disabling SAS tracing. SAS tracing cannot normally be disabled when the SLEE is not in the Stopped state, because doing so may cause incomplete trails to be created in SAS for sessions that are in progress.

No. Defaults to false.

setservicemetricsrecordingenabled

A Rhino management sub task for updating the SBB metrics recording status of a service.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

enabled

Boolean flag indicating if SBB metrics recording should be enabled (true) or disabled (false).

Yes.

Parameters available as nested elements
Element Description Required

component

Identifies the service to update. See com.opencloud.slee.mlet.ant.SleeComponentElement

Yes.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.

setservicestartingpriority

A Rhino management sub task for setting the starting priority of a Service.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

priority

The new starting priority for the service. Must be between -128 and 127. Higher priority values have precedence over lower priority values.

Yes.

Parameters available as nested elements
Element Description Required

component

Identifies the service to update. See SleeComponentElement

Yes.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.

setservicestoppingpriority

A Rhino management sub task for setting the stopping priority of a Service.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

priority

The new stopping priority for the service. Must be between -128 and 127. Higher priority values have precedence over lower priority values.

Yes.

Parameters available as nested elements
Element Description Required

component

Identifies the service to update. See SleeComponentElement

Yes.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.

settracelevel

A Rhino management sub task for setting the trace level of components.

Warning This ant task has been deprecated, since it uses a ComponentID to identify a notification source (which is not compatible with the changes made to the tracing subsystem in SLEE 1.1). It has been replaced with com.opencloud.slee.mlet.ant.tasks.SetTracerLevelTask.
Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

componentid

Canonical name of the component.

Only required/allowed if the component nested element is not present.

type

Indicates the type of component referenced by componentid. See below for allowable component types.

Only required/allowed if componentid parameter is present.

level

Requested trace level, can be one of 'finest', 'finer', 'fine', 'config', 'info', 'warning', 'severe', 'off'.

Yes.

Parameters available as nested elements
Element Description Required

component

Identifies the component. See com.opencloud.slee.mlet.ant.SleeComponentElement (Note that for the settracelevel sub task the optional type attribute of component is required.)

Only required/allowed if the componentid attribute is not present.

Component Types

The following names are valid identifiers for a component type in the type attribute of the settracelevel task or a component nested element.

  • pspec - profile specification

  • ra - resource adaptor

  • service - service

  • sbb - sbb

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.

settracerlevel

A Rhino management sub task for setting the trace level of notification sources.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

tracerName

Name of the tracer whose level is to be set.

Yes.

level

Requested trace level, can be one of 'finest', 'finer', 'fine', 'config', 'info', 'warning', 'severe', 'off'.

Yes.

Parameters available as nested elements
Element Description Required

sbbNotificationSource

Identifies an SBB notification source. See com.opencloud.slee.mlet.ant.SbbNotificationSourceElement.

One and only one of sbbNotifSource, raeNotifSource, ptNotifSource or subsystemNotifSource must be specified.

raEntityNotificationSource

Identifies a resource adaptor entity notification source. See com.opencloud.slee.mlet.ant.RAENotificationSourceElement.

One and only one of sbbNotifSource, raeNotifSource, ptNotifSource or subsystemNotifSource must be specified.

profileTableNotificationSource

Identifies a profile table notification source. See com.opencloud.slee.mlet.ant.PTNotificationSourceElement.

One and only one of sbbNotifSource, raeNotifSource, ptNotifSource or subsystemNotifSource must be specified.

subsystemNotificationSource

Identifies a subsystem notification source. See com.opencloud.slee.mlet.ant.SubsystemNotificationSourceElement.

One and only one of sbbNotifSource, raeNotifSource, ptNotifSource or subsystemNotifSource must be specified.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
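
Sample usage

A minimal sketch targeting a resource adaptor entity notification source. The entity name and the entityname attribute on the nested element are illustrative assumptions, and the empty tracer name assumes the SLEE 1.1 convention that the root tracer is named by the empty string:

    <settracerlevel tracerName="" level="info">
        <raEntityNotificationSource entityname="example-ra"/>
    </settracerlevel>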

shadowcomponent

A Rhino management sub task for shadowing an existing component with another component.

Ant Parameters
Attribute Description Required

type

The component type. See com.opencloud.slee.mlet.ant.SleeComponentElement for valid component type strings.

Yes.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

source

Identifies the component to be shadowed. See com.opencloud.slee.mlet.ant.SleeComponentElement

Yes.

target

Identifies the component that will shadow the source component. See com.opencloud.slee.mlet.ant.SleeComponentElement

Yes.

NonFatalBuildException throw conditions
  • The task is run with a source component that is already shadowed by the given target.

unbindralinkname

A Rhino management sub task for unbinding Resource Adaptor Entity Link Names.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

linkname

The link name to unbind.

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting an unbound linkname.

undeploycomponent

A Rhino management sub task for undeploying a component across the cluster.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

component

Identifies the component to undeploy. See com.opencloud.slee.mlet.ant.SleeComponentElement

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting a component that is not currently deployed.

uninstall

A Rhino management sub task for uninstalling Deployable Units.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

url

URL of deployable unit to uninstall.

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting a non-existent deployable unit.

unsetalltracers

A Rhino management sub task for unsetting the trace level assigned to all tracers of notification sources.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

sbbNotificationSource

Identifies an SBB notification source. See com.opencloud.slee.mlet.ant.SbbNotificationSourceElement.

One and only one of sbbNotifSource, raeNotifSource, ptNotifSource or subsystemNotifSource must be specified.

raEntityNotificationSource

Identifies a resource adaptor entity notification source. See com.opencloud.slee.mlet.ant.RAENotificationSourceElement.

One and only one of sbbNotifSource, raeNotifSource, ptNotifSource or subsystemNotifSource must be specified.

profileTableNotificationSource

Identifies a profile table notification source. See com.opencloud.slee.mlet.ant.PTNotificationSourceElement.

One and only one of sbbNotifSource, raeNotifSource, ptNotifSource or subsystemNotifSource must be specified.

subsystemNotificationSource

Identifies a subsystem notification source. See com.opencloud.slee.mlet.ant.SubsystemNotificationSourceElement.

One and only one of sbbNotifSource, raeNotifSource, ptNotifSource or subsystemNotifSource must be specified.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.

unshadowcomponent

A Rhino management sub task for removing the shadow from a shadowed component.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

component

Identifies the component whose shadow is to be removed. See com.opencloud.slee.mlet.ant.SleeComponentElement. (Note that for the unshadowcomponent sub task the optional type attribute of component is required.)

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting a component that is not shadowed.

unverifycomponent

A Rhino management sub task for unverifying a verified component.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

component

Identifies the component to unverify. See com.opencloud.slee.mlet.ant.SleeComponentElement

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting a component that is not currently verified.

updatepersistenceinstance

A Rhino management sub task for updating the settings of a persistence instance.

Ant Parameters
Attribute Description Required

name

Name of the persistence instance to update.

Yes.

type

Type of the persistence instance, e.g. 'jdbc' or 'cassandra'.

No. Only needs to be specified if changing the persistence instance type.

datasourceclass

Fully-qualified class name of the datasource class to be used by the persistence instance.

No. This parameter is only meaningful if 'type' is 'jdbc'. Only needs to be specified if a different datasource class should be used by the persistence instance than previously specified.

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

configproperty

Identifies a configuration property of the datasource class that should be updated. See ConfigPropertyElement. Note that the type property of ConfigPropertyElement is mandatory for this task.

One ConfigPropertyElement must be specified per config property.

removeconfigproperty

Identifies an existing configuration property of the datasource class that should be removed. See RemoveConfigPropertyElement.

One RemoveConfigPropertyElement must be specified per config property to be removed.

NonFatalBuildException throw conditions
  • This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.

usertransaction

A Rhino management sub task that allows its own subtasks to be executed in the context of a single client-demarcated transaction.

This task starts a user transaction then executes the nested subtasks. If all the subtasks complete successfully, the user transaction is committed. If any task fails with a fatal BuildException, or fails with a NonFatalBuildException while its failonerror flag is set to true, the user transaction is rolled back.

The following sub task elements can be provided in any number and in any order. The User Transaction task will execute these sub tasks in the specified order until a sub task fails by throwing an org.apache.tools.ant.BuildException, which will be re-thrown to Ant with some contextual information regarding the sub task that caused it.

Sub-tasks specifiable as nested elements

com.opencloud.slee.mlet.ant.tasks.CreateProfileTask

Create Profiles inside tables.

com.opencloud.slee.mlet.ant.tasks.RemoveProfileTask

Remove a Profile from a table.

com.opencloud.slee.mlet.ant.tasks.SetProfileAttributesTask

Modify profile attributes.

com.opencloud.slee.mlet.ant.tasks.ImportProfilesTask

Import profile XML data.
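
Sample usage

A minimal sketch in which a profile removal and a profile import either commit together or roll back together; the file, profile and table names are illustrative only:

    <usertransaction>
        <removeprofile profilename="old-subscriber" tablename="Subscribers"/>
        <importprofiles filename="new-profiles.xml" profile-table="Subscribers"/>
    </usertransaction>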

verifycomponent

A Rhino management sub task for verifying an installed component across the cluster.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

Parameters available as nested elements
Element Description Required

component

Identifies the component to verify. See com.opencloud.slee.mlet.ant.SleeComponentElement

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting an already verified component.

verifydeployableunit

A Rhino management sub task for verifying components in an installed deployable unit across the cluster.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

url

URL of deployable unit to verify.

Yes.

waittilraentityisinactive

A Rhino management sub task for waiting on Resource Adaptor Entity deactivation. This task will block while waiting for the specified resource adaptor entity to deactivate.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

entityname

Name of the resource adaptor entity targeted by this task.

Yes.

NonFatalBuildException throw conditions
  • The task is run targeting a non-existent resource adaptor entity.

waittilserviceisinactive

A Rhino management sub task for waiting on Service deactivation. This task will block while waiting for the specified Service to deactivate.

Ant Parameters
Attribute Description Required

failonerror

Flag to control failure behaviour. If 'true', the sub task will throw a BuildException when an error is encountered. If 'false', the sub task will throw a NonFatalBuildException instead of a BuildException under specific circumstances. See below for conditions (if any) which will cause a NonFatalBuildException.

No. Default value is taken from the Rhino management parent task.

serviceid

Canonical name of the service to wait on.

Only required/allowed if the component nested element is not present.

Parameters available as nested elements
Element Description Required

component

Identifies the service to wait on. See com.opencloud.slee.mlet.ant.SleeComponentElement

Only required/allowed if the serviceid attribute is not present.

NonFatalBuildException throw conditions
  • The task is run targeting a non-existent service.

Building Custom OA&M Tools with Rhino Remote API

The Rhino remote library is a collection of utility classes to help write remote management clients for Rhino. This library does not implement any proprietary interface to Rhino; it simply makes using the standard JMX interfaces easier. Rhino implements the JMX Remote API (as per JSR 160) using the Rhino JMX Remote Adaptor. A JMX client can also be written to connect to Rhino using the JMX Remote API directly, without using the Rhino remote library.

When would I use the Rhino remote library?

The most common reason to use the Rhino remote library is to implement a service-specific operations-administration-and-maintenance (OA&M) interface. For example:

  • a web interface to provision profiles interactively

  • a graphical application to monitor statistics in real time

  • a command-line tool to refresh a profile data cache from a master database

  • a daemon process to listen for alarm notifications and forward them to a third-party O&M platform.

How would I use the Rhino remote library?

The basic steps to using the Rhino remote library are:

  1. Create a connection using RhinoConnectionFactory.

  2. Create a proxy object for the MBean you want to invoke operations on.

  3. Invoke the MBean operation by invoking the method on the proxy object.

For example, to stop the SLEE:

// Connect to Rhino using connection properties loaded earlier
MBeanServerConnection connection = RhinoConnectionFactory.connect(properties);
// Obtain a typed proxy for the SLEE management MBean
SleeManagementMBean sleeMBean = SleeManagement.getSleeManagementMBean(connection);
// Invoke the stop operation on the SLEE
sleeMBean.stop();
Tip See also the Rhino Remote API Javadoc.

Connecting to Rhino

The Rhino remote library contains a class that represents a connection to Rhino (RhinoConnection). Below are descriptions of various ways of getting and configuring an instance of this class, including:

RhinoConnectionFactory

Use the RhinoConnectionFactory class to create connections to Rhino. RhinoConnectionFactory has connect methods that let you specify connection parameters in many ways. The RhinoConnectionFactory connect methods return objects that implement RhinoConnection. See the Rhino Remote API for details.

/**
 * Factory class to create connections to Rhino.
 * @author Open Cloud
 */
public class RhinoConnectionFactory {
  public static RhinoConnection connect(File propertiesFile)
         throws ConnectionException { /* ... */ }

  public static RhinoConnection connect(Properties properties)
         throws ConnectionException { /* ... */ }

  public static RhinoConnection connect(String host, int port, String username, String password)
         throws ConnectionException { /* ... */ }

  public static RhinoConnection connect(String serverList, String username, String password)
         throws ConnectionException { /* ... */ }

  public static RhinoConnection connect(String[] servers, String username, String password)
         throws ConnectionException { /* ... */ }

  public static RhinoConnection connect(String host, int port, String username, String password, Properties sslProps)
         throws ConnectionException { /* ... */ }

  public static RhinoConnection connect(String[] servers, String username, String password, Properties sslProps)
         throws ConnectionException { /* ... */ }
}

RhinoConnection

The RhinoConnection interface represents a connection from an external OA&M tool to Rhino. RhinoConnection extends MBeanServerConnection so you can use it anywhere in the JMX Remote API that uses MBeanServerConnection. Other Rhino remote library classes can use MBeanServerConnection objects from other sources (such as a local MBeanServer); however, such objects do not have the additional RhinoConnection features.

Tip You create a RhinoConnection with a list of servers. The OA&M client tries each server in the list — in order — as it connects to Rhino. A connection-failure exception is thrown if no connection can be made with any server in the list. If a connection fails whilst executing a command, the OA&M client tries the next server on the list and executes the pending command.
/**
 * Rhino-specific connection interface.
 * @author Metaswitch
 */
public interface RhinoConnection extends MBeanServerConnection {
  /**
   * Try to connect this connection.
   * @throws ConnectionException if the connection attempt failed
   */
  void connect() throws ConnectionException;

  /**
   * Disconnect this connection.
   */
  void disconnect();

  /**
   * Disconnect then connect this connection.
   * @throws ConnectionException if the connection attempt failed
   */
  void reconnect() throws ConnectionException;

  /**
   * Test this connection by invoking a basic MBean operation.  If the test fails, a reconnect
   * attempt will be triggered automatically.
   * @throws ConnectionException if the test and the attempt to automatically reconnect failed
   */
  void testConnection() throws ConnectionException;

  /**
   * Get the JMXConnector associated with this MBeanServer connection.
   * @return a JMXConnector instance
   */
  JMXConnector getJmxConnector();

  /**
   * Whether or not this connection is connected.
   * @return state of the connection
   */
  boolean getConnectedState();

  /**
   * The host name of the current connection.  Useful if this connection is configured to try
   * multiple hosts.
   * @return host where this connection object is currently connected to Rhino
   */
  String getServerHost();

  /**
   * The port of the current connection.  Useful if this connection is configured to try
   * multiple hosts.
   * @return port where this connection object is currently connected to Rhino
   */
  int getServerPort();

  /**
   * The node ID of the server the current connection is to.  Useful if this connection is
   * configured to try multiple hosts.
   * @return node id of Rhino where this connection object is currently connected to
   */
  int getServerNodeId();

  /**
   * Tell this connection to only allow particular connection contexts.
   * See {@link com.opencloud.slee.remote.ExecutionContext} for more details.
   * @param ctx the context to allow
   */
  void setAllowedExecutionContext(ExecutionContext ctx);
}

ExecutionContext

Occasionally you may want control over whether or not a particular operation or set of operations can failover to another node. You can do this using the ExecutionContext mechanism. (See RhinoConnection.setAllowedExecutionContext(ExecutionContext) and ExecutionContext.)

/**
 * Defines execution contexts for MBean operations.  An execution context can be used to control the
 * behaviour during connection failover.
 * <p/>
 * The execution context can be set at any time on a {@link com.opencloud.slee.remote.RhinoConnection}
 * using {@link com.opencloud.slee.remote.RhinoConnection#setAllowedExecutionContext(ExecutionContext)}
 * and will remain in force until it is reset.
 * <p/>
 * The default value is CLUSTER.
 * @author Metaswitch
 */
public enum ExecutionContext {
  /**
   * Allow commands to execute only on the current connection.  If the connection fails, an exception
   * will be thrown.
   */
  CONNECTION,

  /**
   * Allow commands to be executed on any connection, as long as a new connection is made to the same
   * node as the previous command was executed on.  <br/>
   * If a new connection is made to a different node, then an exception will be thrown.
   */
  NODE,

  /**
   * Allow commands to be executed on any connection and on any node.  An exception will only be thrown
   * if the client cannot connect to any node.
   */
  CLUSTER
}

Configuring an SSL connection

If Rhino is configured to require SSL connections to the JMX Remote interface (the default setting), then the SSL connection factory that the JMX client uses will need a keystore, a trust store, and a password for each. You can provide these in two ways:

  1. Setting system properties when starting the JVM that runs the JMX client:

  -Djavax.net.ssl.keyStore=$RHINO_CLIENT_HOME/rhino-public.keystore -Djavax.net.ssl.keyStorePassword=xxxxxxxx \
  -Djavax.net.ssl.trustStore=$RHINO_CLIENT_HOME/rhino-public.keystore -Djavax.net.ssl.trustStorePassword=xxxxxxxx
  2. Putting these settings into a properties file or properties object, and using one of the connection factory methods that takes such a parameter. For example, you could have a properties file containing the following lines:

javax.net.ssl.trustStore=rhino-public.keystore
javax.net.ssl.trustStorePassword=xxxxxxxx
javax.net.ssl.keyStore=rhino-public.keystore
javax.net.ssl.keyStorePassword=xxxxxxxx

and then create a connection as follows:

File propertiesFile = new File("remote.properties");
MBeanServerConnection connection = RhinoConnectionFactory.connect(propertiesFile);
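
Alternatively, a minimal sketch that builds the same settings in a Properties object and passes them to one of the connect methods that accepts SSL properties (the host, port, and credentials are illustrative):

Properties sslProps = new Properties();
sslProps.setProperty("javax.net.ssl.keyStore", "rhino-public.keystore");
sslProps.setProperty("javax.net.ssl.keyStorePassword", "xxxxxxxx");
sslProps.setProperty("javax.net.ssl.trustStore", "rhino-public.keystore");
sslProps.setProperty("javax.net.ssl.trustStorePassword", "xxxxxxxx");

MBeanServerConnection connection =
        RhinoConnectionFactory.connect("rhino-host", 1199, "admin", "password", sslProps);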

Connection logging

Exceptions that the connection factory methods throw generally contain sufficient detail to determine the cause of any connection problems.

If you need more fine-grained tracing, you can provide a PrintWriter to the connection factory, so that all connection objects write trace messages to it while trying to connect.

Tip For examples of how to write connection trace messages to stdout or a Log4J logger, see RhinoConnectionFactory.setLogWriter(PrintWriter).
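
For instance, a minimal sketch that writes connection trace messages to stdout:

// Install the writer before creating connections; connection objects created
// afterwards write their trace messages to it.
RhinoConnectionFactory.setLogWriter(new PrintWriter(System.out, true));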

Platform MBean Server

Occasionally it may be necessary for a deployed application to access MBeans to obtain information usually used by management clients. For example, a rate limiting plugin may need to combine statistics from a resource adaptor and the JVM to determine if an incoming message should be rejected. Such applications can use the platform MBean server to access MBeans in Rhino. A component that accesses MBeans will typically be a library or resource adaptor. SBBs should rarely need to access the MBeans and profiles should never do so.

AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
    StatsManagementMBean statsMBean;
    try {
        // Look up the Rhino statistics MBean in the platform MBean server.
        statsMBean = JMX.newMBeanProxy(ManagementFactory.getPlatformMBeanServer(),
                                       new ObjectName(StatsManagementMBean.OBJECT_NAME),
                                       StatsManagementMBean.class, true);
    }
    catch (MalformedObjectNameException e) {
        ...
    }
    try {
        long sessionId = statsMBean.createSession();
        statsMBean.subscribeCounter(sessionId, 101, "JVM.Thread", "threadCount",
                                    SubscriptionMode.SIMPLE_GAUGE);
        statsMBean.startCollecting(sessionId, 1000);
    }
    catch (ManagementException | ... e) {
        ...
    }
    return null;
});

The library or resource adaptor will also need to be granted security permissions for the MBean operations that it will invoke.

grant {
    permission javax.management.MBeanPermission "com.opencloud.rhino.monitoring.stats.ext.StatsManagementMBean#*", "invoke";
};
Note Unless an operation is only possible via MBeans, the SLEE Facility APIs should be used instead. MBean access is platform-specific, more complex, and does not provide the normal transaction semantics that the Facility APIs offer.

MBean Proxies

The API uses MBean proxies extensively. This allows MBean operations to be invoked on the remote MBean server by a direct method call on the proxy object.

Retrieving JSLEE MBean proxies

The SleeManagement class is used to create proxy instances for SLEE-standard MBeans with well-known Object Names. An MBeanServerConnection must be obtained first, then one of the methods on the SleeManagement class can be called.

MBeanServerConnection connection = RhinoConnectionFactory.connect( ... );
SleeManagementMBean sleeManagement = SleeManagement.getSleeManagementMBean(connection);

Retrieving Rhino MBean proxies

The RhinoManagement class is used to create proxy instances for Rhino-specific MBeans. An MBeanServerConnection must be obtained first, then one of the methods on the RhinoManagement class can be called.

MBeanServerConnection connection = RhinoConnectionFactory.connect( ... );
RhinoHousekeepingMBean housekeeping = RhinoManagement.getRhinoHousekeepingMBean(connection);

Working with Profiles

The RemoteProfiles class contains a number of utility methods to greatly ease working with SLEE profile management operations.

There are methods to:

  • get proxies to ProfileMBeans

  • create and commit a new profile

  • create an uncommitted profile that can have its attributes set before it is committed

  • get attribute names, values and types.

These methods are in addition to the standard management operations available on ProfileProvisioningMBean.

Creating a profile table

This can be done using the ProfileProvisioningMBean, but RemoteProfiles has a utility method to check if a profile table exists:

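// Assumes 'connection' is an MBeanServerConnection to Rhino and 'profileProvisioning'
// is a proxy for the SLEE ProfileProvisioningMBean, both obtained beforehand.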
ProfileSpecificationID profileSpecID =
        new ProfileSpecificationID("AddressProfileSpec", "javax.slee", "1.0");

if(RemoteProfiles.profileTableExists(connection, "TstProfTbl")) {
    profileProvisioning.removeProfileTable("TstProfTbl");
}
profileProvisioning.createProfileTable(profileSpecID, "TstProfTbl");

Creating a profile

Option 1: Supply the attributes when creating the profile and have it committed

AttributeList list = new AttributeList();
list.add(new Attribute("Addresses",
                       new Address[] { new Address(AddressPlan.IP, "127.0.0.1") }));
RemoteProfiles.createCommittedProfile(connection, "TstProfTbl", "TstProfile1", list);

Option 2: Create the profile in the uncommitted state, and use a proxy to the Profile Management interface to set the attributes, then call commitProfile

RemoteProfiles.UncommittedProfile<AddressProfileManagement> uncommittedA;
uncommittedA = RemoteProfiles.createUncommittedProfile(connection, "TstProfTbl", "TstProfile2",
                      AddressProfileManagement.class);

AddressProfileManagement addressProfile = uncommittedA.getProfileProxy();
addressProfile.setAddresses(new Address[] { new Address(AddressPlan.IP, "127.0.0.2") });
uncommittedA.getProfileMBean().commitProfile();

Option 3: Create the profile in the uncommitted state, and use the setAttribute method on the connection, then call commitProfile

RemoteProfiles.UncommittedProfile uncommittedB;
uncommittedB = RemoteProfiles.createUncommittedProfile(connection, "TstProfTbl", "TstProfile3");
connection.setAttribute(uncommittedB.getObjectName(),
                        new Attribute("Addresses",
                                      new Address[] { new Address(AddressPlan.IP, "127.0.0.3") }));

uncommittedB.getProfileMBean().commitProfile();

Editing a profile

Using the profile management interface as a proxy to the profile object allows set methods to be invoked directly:

ProfileMBean profileMBean
        = RemoteProfiles.getProfileMBean(connection, "TstProfTbl", profileName);
profileMBean.editProfile();

AddressProfileManagement addrProfMng
        = RemoteProfiles.getProfile(connection, "TstProfTbl", profileName,
                                    AddressProfileManagement.class);

addrProfMng.setAddresses(new Address[] { new Address(AddressPlan.IP, "127.0.1.1") });
profileMBean.commitProfile();

Inspecting profile tables

Using RemoteProfiles methods to get the attribute names and types for a given profile table:

String[] names = RemoteProfiles.getAttributeNames(connection, "TstProfTbl");
System.out.println("Profile attributes:");
for (String name : names) {
    String type = RemoteProfiles.getAttributeType(connection, "TstProfTbl", name);
    System.out.println("    " + name + " (" + type + ")");
}

SLEE Management Script

The slee.sh script provides functionality to operate on the SLEE state either for nodes on this host, or on the cluster as a whole.

It provides a global control point for all nodes in the cluster.

For convenience of administration the script can discover the running set of nodes; however for more control, or if managing multiple clusters, the set of nodes can be configured in the environment. The environment variables used are:

RHINO_SLEE_NODES   - List of Rhino SLEE event router node IDs on this host. If
                     not specified, will discover nodes automatically.
RHINO_QUORUM_NODES - List of Rhino quorum nodes IDs on this host.

The values of these variables can be specified, if necessary, in the file rhino.env.

Commands

The commands below control the state of the SLEE on nodes of a Rhino cluster.
They are run by executing slee.sh <command> <arguments>, for example:

slee.sh start -nodes 101,102
Command What it does
 start

Starts the SLEE on the cluster or on a set of nodes.

Use with no arguments to start the cluster, or with the argument -nodes and a comma-separated list of nodes.

 stop

Stops the SLEE on the cluster or a set of nodes.

Use with no arguments to stop the cluster, or with the argument -nodes and a comma-separated list of nodes.

 reboot

Reboots the cluster or a set of nodes.

Use the argument -cluster to restart all nodes, or -nodes and a comma-separated list of nodes to restart. When rebooting, you also need to provide a list of states, one for each node being restarted. For example, for a four-node cluster:

slee.sh reboot -nodes 102,103 -states running,running

would reboot nodes 102 and 103, and set the state of each to running.

slee.sh reboot -cluster -states stopped,stopped,running,running

would reboot all four nodes, and set the states to stopped for 101 and 102 and to running for 103 and 104.

 shutdown

Shuts down the cluster or a set of nodes, stopping them if required.

  • The argument -cluster shuts down all nodes in the cluster.

  • The argument -local shuts down all nodes running on this host machine.

  • The argument -nodes with a comma-separated list of nodes shuts down the set of nodes specified.

 state

Prints the state of all the nodes in the cluster.

Also available as the alias st.

 console

Runs a management command using the Rhino console.

Also available as the aliases cmd and c.
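
For example (the console subcommand shown here is illustrative; any rhino-console command can be given):

slee.sh state                # print the state of every node in the cluster
slee.sh shutdown -local      # stop and shut down all nodes on this host
slee.sh console state        # run a management command via the Rhino console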

Rhino Management Script

The script rhino.sh provides functionality to control and monitor the processes for the Rhino nodes on this host.

It does not connect to a Rhino management node to operate on the SLEE state (except for the stop command), nor does it affect nodes on other hosts.

For convenience of administration the script can discover the running set of nodes; however for more control, or if managing multiple clusters, the set of nodes can be configured in the environment. The environment variables used are:

RHINO_START_INTERVAL - How long to delay between starting each Rhino node.
                       It is helpful to stagger node startup because a
                       distributed lock is required during boot, and
                       acquisition of that lock may timeout if a very large
                       number of components are deployed.
RHINO_SLEE_NODES     - List of Rhino SLEE event router node IDs on this host.
                       If not specified, will discover nodes automatically.
RHINO_QUORUM_NODES   - List of Rhino quorum nodes IDs on this host.

The values of these variables can be specified, if necessary, in the file rhino.env.

Commands

The commands below control the state of the Rhino nodes.
They are run by executing rhino.sh <command> <arguments>, for example:

rhino.sh start -nodes 101,102

Command What it does
 start

Starts a set of nodes that are not operational.

Use with no arguments to start all local nodes, or with the argument -nodes and a comma-separated list of nodes to start.

 stop

Stops a set of nodes that are operational.

Use with no arguments to stop all local nodes or with the argument -nodes and a comma-separated list of nodes to stop.

This command connects to a management node in order to stop the SLEE on the affected nodes and send them the shutdown command.

 kill

Kills a set of operational nodes using SIGTERM.

Use with no arguments to stop all local nodes or with the argument -nodes and a comma-separated list of nodes to kill.

 restart

Kills a set of operational nodes using SIGTERM, and then starts the same set of nodes.

Use with no arguments to stop all local nodes or with the argument -nodes and a comma-separated list of nodes to restart.
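
For example:

rhino.sh kill -nodes 102     # send SIGTERM to node 102 on this host
rhino.sh restart             # kill and restart all local nodes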

Java Management Extension Plugins (JMX M-lets)

You can extend Rhino’s OA&M features in many ways, including by deploying a management component called a management applet (m-let) in the JMX MBean server running in each Rhino node. Below is an introduction to the JMX model, how JMX m-lets work, and how Rhino uses them.

What is JMX?

The Java Management eXtensions (JMX) specification defines a standard way of instrumenting and managing Java applications. The JMX Instrumentation and Agent Specification, v1.2 (October 2002) summarises JMX like this:

excerpt from JMX specification 1.2

The Java Management extensions (also called the JMX specification) define an architecture, the design patterns, the APIs, and the services for application and network management and monitoring in the Java programming language. This chapter introduces all these elements, presenting the broad scope of these extensions. The JMX specification provides Java developers across all industries with the means to instrument Java code, create smart Java agents, implement distributed management middleware and managers, and smoothly integrate these solutions into existing management and monitoring systems. In addition, the JMX specification is referenced by a number of Java APIs for existing standard management and monitoring technologies.

How does JAIN SLEE use JMX?

The JAIN SLEE 1.1 specification mandates using the JMX 1.2.1 MLet (management applet) specification for management clients to gain access to the SLEE MBean server and SLEE MBean objects.

excerpt from the JAIN SLEE 1.1 specification

14.4 Accessing the MBean Server and SLEE MBean objects

The SLEE specification mandates the use of the JMX 1.2.1 MLet (management applet) specification in order for management clients to gain access to the SLEE MBean Server and SLEE MBean objects.

14.4.1 Requirements of the SLEE Vendor

Changed in 1.1: A SLEE vendor may allow the use of JMX Remote API 1.0 (JSR 160)

The SLEE specification requires that a SLEE vendor implement necessary functionality to load and instantiate management clients implemented as MLet MBeans. For example, the javax.management.loading.MLet MBean can be used by a SLEE implementation to perform parsing of a MLet text file and instantiation of the MBeans defined in it, however this method of loading MLet MBeans is not prescriptive.

The SLEE must ensure that the MBean Server that the MLets are registered with is the same MBean Server that the SLEE MBean objects are registered with (or a remote image of it) in order for the MLet to invoke the SLEE’s management operations.

The SLEE specification does not strictly define when during a SLEE’s lifetime management client MLets should be instantiated. However, the earliest that a SLEE implementation should instantiate MLets is after the initialization phase is complete and the SLEE is ready to accept management operations. MLets may be instantiated any time after this point at the discretion of the SLEE implementation.

A SLEE vendor may also optionally allow management clients to connect to the SLEE’s MBean Server via a remote connection established in accordance with the JMX Remote API 1.0 (JSR 160).

14.4.2 Requirements of the Management Client

Changed in 1.1: The use of JMX 1.2.1 is mandated.

The general requirements of any management client that accesses the SLEE MBean objects are as follows:

  • The management client may not hold any direct references to a SLEE MBean object. It may only reference a SLEE MBean object by its Object Name. In other words, it can only invoke a SLEE MBean object through a local or remote MBean Server, and it identifies the SLEE MBean object to be invoked by the Object Name of the SLEE MBean object. Interaction with a SLEE MBean via a proxy object such as that created by the javax.management.MBeanServerInvocationHandler class is acceptable also.

  • Since the management client cannot hold a direct reference to a SLEE MBean object, it cannot directly add or remove itself as a notification listener to a SLEE MBean object that implements the NotificationBroadcaster interface. Rather, the management client registers and removes a listener by invoking the MBean Server’s addNotificationListener and removeNotificationListener methods, passing to these methods the Object Name of the SLEE MBean object that the management client wants to begin or stop receiving notifications from.

There are two typical approaches to implementing a custom SLEE management client. The first approach is to implement the client as a MLet MBean. The second approach is to implement a SLEE management client using the JMX Remote API. A SLEE management client implemented as an MLet MBean behaves as a JMX connector or protocol adaptor. This MLet is registered with the SLEE MBean Server and provides an adaptation layer between the SLEE management operations and the management client.

The requirements of a management client MLet MBean are:

  • The MLet can be implemented as any type of MBean supported by the JMX 1.2.1 specification.

  • The MLet should implement the javax.management.MBeanRegistration interface. The preRegister method defined in this interface provides the MLet with the MBean Server instance that the SLEE MBean objects are registered with.

  • A suitable MLet text file or other documentation that provides the necessary codebase and instantiation class information should be provided with the MLet distribution.

A SLEE management client implemented using the JMX Remote API interacts with a remote image of the SLEE’s MBean server. The remote MBean server forwards management client requests to the real MBean server running in the SLEE, thereby bypassing the need for the management client vendor to provide an MLet that provides protocol adaptation layer functionality. This implementation approach assumes that the SLEE implementation the management client is connecting to allows clients to connect via a JMX Remote API connection.

What are m-lets?

An m-let is a management applet service that lets you instantiate and register one or more Java Management Beans (MBeans), from a remote URL, in the MBean server. The server loads a text-based m-let configuration file that specifies the MBeans to be loaded.

Tip Metaswitch typically uses m-lets to implement JMX protocol adaptors.

How does Rhino use m-lets?

Each node in a Rhino cluster runs an MBean server (Rhino uses the Java VM MBean server). When Rhino starts, it dynamically loads m-lets into those MBean servers, based on m-let text files stored in the following places:

  • Rhino SDK — $RHINO_HOME/config/mlet.conf

  • Production Rhino — $RHINO_HOME/node-XXX/config/permachine-mlet.conf, $RHINO_HOME/node-XXX/config/pernode-mlet.conf.

These configuration files conform to the OpenCloud M-Let Config 1.1 DTD.

See the OpenCloud M-Let Config 1.1 DTD:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!--
Use:
    <!DOCTYPE mlets PUBLIC
        "-//Open Cloud Ltd.//DTD JMX MLet Config 1.1//EN"
        "http://www.opencloud.com/dtd/mlet_1_1.dtd">
-->

<!ELEMENT mlets (mlet*)>

<!--
The mlet element describes the configuration of an MLet.  It contains an
optional description, an optional object name, an optional classpath, mandatory
class information, and optional class constructor arguments.  Constructor
arguments must be specified in the order they are defined in the class
constructor.
-->
<!ELEMENT mlet (description?, object-name?, classpath?, class, arg*)>

<!--
The description element may contain any descriptive text about the parent
element.

Used in: mlet
-->
<!ELEMENT description (#PCDATA)>

<!--
The object-name element contains the JMX object name of the MLet.  If the name
starts with a colon (:), the domain part of the object name is set to the
domain of the agent registering the MLet.

Used in: mlet

Example:
    <object-name>Adaptors:name=MyMLet</object-name>
-->
<!ELEMENT object-name (#PCDATA)>

<!--
The classpath element contains zero or more jar-url elements specifying jars
to be included in the classpath of the MLet and an optional specification
identifying security permissions that should be granted to classes loaded
from the specified jars.

Used in: mlet
-->
<!ELEMENT classpath (jar-url*, security-permission-spec?)>

<!--
The jar-url element contains a URL of a jar file to be included in the
classpath of the MLet.

Used in: classpath

Example:
    <jar-url>file:/path/to/location/of/file.jar</jar-url>
-->
<!ELEMENT jar-url (#PCDATA)>

<!--
The security-permission-spec element specifies security permissions based on
the security policy file syntax. Refer to the following URL for definition of
Sun's security policy file syntax:

http://java.sun.com/j2se/1.3/docs/guide/security/PolicyFiles.html#FileSyntax

The security permissions specified here are granted to classes loaded from the
jar files identified in the jar-url elements in the classpath of the MLet.

Used in: jar

Example:
    <security-permission-spec>
        grant {
            permission java.lang.RuntimePermission "modifyThreadGroup";
        };
    </security-permission-spec>
-->
<!ELEMENT security-permission-spec (#PCDATA)>

<!--
The class element contains the fully-qualified name of the MLet's MBean class.

Used in: mlet

Example:
    <class>com.opencloud.slee.mlet.mymlet.MyMlet</class>
-->
<!ELEMENT class (#PCDATA)>

<!--
The arg element contains the type and value of a parameter of the MLet class'
constructor.

Used in: mlet
-->
<!ELEMENT arg (type, value)>

<!--
The type element contains the fully-qualified name of the parameter type.  The
currently supported types for MLets are: Java primitive types, object wrappers
for Java primitive types, and java.lang.String.

Used in: arg

Example:
    <type>int</type>
-->
<!ELEMENT type (#PCDATA)>

<!--
The value element contains the value for a parameter.  The value must be
appropriate for the corresponding parameter type.

Used in: arg

Example:
   <value>8055</value>
-->
<!ELEMENT value (#PCDATA)>
<!ATTLIST mlet  enabled CDATA #IMPLIED >

Structure of the m-let text file

<mlets>
    <mlet enabled="true">
        <classpath>
            <jar-url> </jar-url>
            <security-permission-spec>
            </security-permission-spec>
        </classpath>
        <class> </class>
        <arg>
            <type></type>
            <value></value>
        </arg>
    </mlet>

    <mlet enabled="true">

    </mlet>
</mlets>

The m-let text file can contain any number of mlet elements, each instantiating a different MBean.

  • classpath — The classpath defines the code source of the MBean to be loaded.

    • jar-url — The URL to be used for loading the MBean classes.

    • security-permission-spec — Defines the security environment of the MBean.

  • class — The main class of the MBean to be instantiated.

  • arg — There may be zero or more arguments to the MBean. Each argument is defined by an arg element. The set of arguments must correspond to a constructor defined by the MBean main class.

    • type — The Java type of the argument.

    • value — The value of the argument.

Note For details on m-lets included in Metaswitch Rhino, see JMX Remote Adaptor M-let.

JMX Remote Adaptor M-let

The JMX Remote Adaptor m-let is a fundamental component of Rhino.

All Metaswitch management tools use the JMX Remote Adaptor to connect to Rhino. This component must be present and active, or Rhino cannot be managed! The JMX Remote Adaptor uses the JMX Remote API, defined by the Java Management Extensions (JMX) Remote API specification (JSR 160), to expose a management port at the following URL:

service:jmx:rmi:///jndi/rmi://<rhino host>:<rhino jmx-r port>/opencloud/rhino
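
As an illustration, a minimal sketch that connects to this URL using the standard JMX Remote API rather than the Rhino remote library (the host, port, and credential format shown are assumptions; the SSL requirements described in Configuring an SSL connection still apply):

JMXServiceURL url = new JMXServiceURL(
        "service:jmx:rmi:///jndi/rmi://rhino-host:1199/opencloud/rhino");
Map<String, Object> env = new HashMap<>();
env.put(JMXConnector.CREDENTIALS, new String[] { "admin", "password" }); // assumed credential format
JMXConnector connector = JMXConnectorFactory.connect(url, env);
MBeanServerConnection connection = connector.getMBeanServerConnection();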

JMX Remote and Metaswitch tools

All Metaswitch management tools (the command-line console, the Rhino Element Manager, the stats client, the import-export tool) use the JMX Remote API to connect to Rhino via the JMX Remote Adaptor. (See also Building Custom OA&M Tools with Rhino Remote API.)

JMX Remote Adaptor configuration options

Warning In normal conditions you should not need to change the configuration of this component!
<mlet enabled="true">
  <classpath>
    <jar-url>
      @FILE_URL@@RHINO_BASE@/lib/jmxr-adaptor.jar
    </jar-url>
    <jar-url>
      @FILE_URL@@RHINO_BASE@/lib/jmxr-adaptor-gpl2.jar
    </jar-url>
    <security-permission-spec>
    ...
    </security-permission-spec>
  </classpath>
  <class>
    com.opencloud.slee.mlet.jmxr.JMXRAdaptor
  </class>
  <!-- the local rmi registry port -->
  <arg>
    <type>int</type>
    <value>@RMI_MBEAN_REGISTRY_PORT@</value>
  </arg>
  <!-- the local jmx connector port -->
  <arg>
    <type>int</type>
    <value>@JMX_SERVICE_PORT@</value>
  </arg>
  <!-- enable ssl -->
  <arg>
    <type>boolean</type>
    <value>true</value>
  </arg>
</mlet>

As Rhino starts, it pre-processes m-let configuration files, substitutes configuration variables and creates a working m-let configuration file in the node-XXX/work/config subdirectory. Note the following arguments:

Argument | Description | Default

1 (rmi registry port) | The port of the RMI registry that the JMX Adaptor registers itself with. | 1199
2 (local JMX connector port) | The JRMP port the JMX Remote Adaptor listens on. | 1202
3 (enable SSL) | Whether SSL is enabled. | true

Monitoring and System-Reporting Tools

This section includes details and sample output for the following monitoring and system-reporting tools.

Script What it does

captures statistical-performance data about the cluster and displays it in tabular-text form on the console, or graphed on a GUI

generates a report of useful system information

sends a signal to the JVM to produce a thread dump

shows dependencies between SLEE components

rhino-stats

Rhino provides monitoring facilities for capturing statistical-performance data about the cluster, using the client-side application rhino-stats. This application connects to Rhino using JMX, and samples requested statistics, in real time. You can display extracted statistics in tabular-text form on the console, or graphed on a GUI using various graphing modes.


When correctly configured, monitored, and tuned using Rhino’s performance-monitoring tools, a Rhino SLEE can deliver performance and stability that surpass industry standards.

For service developers and administrators

Much of the statistical information that rhino-stats gathers is useful to both service developers and administrators:

  • Service developers can use performance data, such as event-processing time statistics, to evaluate the impact of SBB-code changes on overall performance.

  • Administrators can use statistics to evaluate settings for tunable performance parameters. For example, the following can help determine appropriate configuration parameters:

Parameter set type | Tunable parameters

Object pools | Object pool sizing
Staging threads | Staging configuration
Memory database sizing | Memory database size limits
System memory usage | JVM heap size


About Rhino Statistics

Rhino’s statistics subsystem collects three types of statistic:

  • counters  — the number of occurrences of a particular event (unbounded and can only increase); for example, lock waits or rejected events

  • gauges — the amount of a particular object or item (can increase and decrease within some arbitrary bound, typically between 0 and some positive number); for example, free memory or active activities

  • samples — values every time a particular event or action occurs; for example, event-processing time or lock-manager wait time.

Rhino records and reports counter- and gauge-type statistics as absolute values. It tallies sample-type statistics, however, in a frequency distribution (which it reports to statistics clients such as rhino-stats).

Parameter sets

Rhino defines a set of related statistics as a parameter set. Many of the available parameter sets are organised hierarchically. Child parameter sets that represent related statistics from a particular source may contribute to parent parameter sets that summarise statistics from a group of sources.

For example, the Events parameter set summarises event statistics from all resource adaptor entities. Below this is a parameter set for each resource adaptor entity which summarises statistics for all the event types produced by that resource adaptor entity. Further below each of these is another parameter set for each event type fired by the resource adaptor entity. These last parameter sets record the raw statistics for each fired event type as summarised by the parent parameter sets. This means, when examining the performance of an application, you can drill down to analyse statistics on a per-event-type basis.

parameter sets

Running rhino-stats

Below are the requirements and options for running rhino-stats.

Requirements for running rhino-stats

The requirements and recommendations for running the Rhino statistics-gathering tool (rhino-stats) are as follows.

Run on a workstation, not cluster node

Rhino’s statistics-gathering tool (rhino-stats) should be run only on a workstation (not a cluster node).

Warning
Impact on CPU usage

Executing the statistics client on a production cluster node is not recommended. The statistics client’s GUI can impact CPU usage to the point that the cluster may drop calls. (The exact impact depends on the number of distinct parameter sets monitored, the number of simultaneous users, and the sample frequency.)

Direct connections

The host running the rhino-stats client requires a direct TCP connection to each of the Rhino cluster nodes it gets statistics from. Moreover, the client asks each node to create a TCP connection back to it, for the express purpose of sending it statistics data. Therefore, any intermediary firewalls between the client host and the Rhino cluster must be configured to allow these connections.

Single outgoing JMX connection to a cluster node (deprecated)

Versions of the statistics client before Rhino 1.4.4 retrieved statistics by creating a single outgoing JMX connection to one of the cluster nodes. This retrieval method is deprecated in favour of the newer direct-connection method, as it has a greater performance impact on the Rhino cluster. The single-connection method is still available, however, through the -j option.

Tip
Performance implications (minimal impact)

Rhino’s statistics subsystem is designed to have minimal impact on performance. Gathering counter- or gauge-type statistics should not have any noticeable impact on CPU usage or latency. Gathering sample-type statistics is more costly, and will usually result in a 1-2% impact on CPU usage when several parameter sets are monitored.

rhino-stats options

rhino-stats includes the following options:

$ ./rhino-stats
One (and only one) of -g (Start GUI), -m (Monitor Parameter Set), -l (List Available Parameter Sets) required.

Available command line format:
-h <argument> : hostname
-p <argument> : port
-u <argument> : username
-w <argument> : password
-D            : display connection debugging messages
-g            : gui mode
-l <argument> : query available statistics parameter sets
-m <argument> : monitor a statistics parameter set on the console
-s <argument> : sample period in milliseconds
-i <argument> : internal polling period in milliseconds
-d            : display actual value in addition to deltas for counter stats
-n <argument> : name a tab for display of subsequent graph configuration files
-f <argument> : full path name of a saved graph configuration .xml file to redisplay
-j            : use JMX remote option for statistics download in place of direct statistics download
-t <argument> : runtime in seconds (console mode only)
-q            : quiet mode - suppresses informational messages
-T            : disable display of timestamps (console mode only)
-R            : display raw timestamps (console mode only)
-C            : use comma separated output format (console mode only)
-S            : no per second conversion of counter deltas
-k <argument> : number of hours samples to keep in gui mode (default=6)
-r            : output one row per node (console mode only)
-o            : write output to rolling CSV files (console mode only)

Gathering Statistics in Console Mode

In console mode, you can run the rhino-stats client with options to:

List root parameter sets

To list the different types of statistics that can be monitored in Rhino, run rhino-stats with the -l parameter. For example:

$ ./rhino-stats -l
The following root parameter sets are available for monitoring:
Activities, ActivityHandler, EventRouter, Events, JVM, LicenseAccounting, LockManagers, MemDB-Local, MemDB-Replicated,
ObjectPools, SLEE-Usage, Savanna-Protocol, Services, StagingThreads, SystemInfo, Transactions

For parameter set type descriptions and a list of available parameter sets use the -l <root parameter set name> option

Display parameter set descriptions

Tip The output below illustrates the root parameter set (Events) with many different child parameter sets. You can use this information to select the level of granularity at which you want statistics reported. (See the instructions for monitoring parameters in real time.)

To query the available child parameter sets within a particular root parameter set, use -l <root parameter set name>. For example, for the root parameter set Events:

$ ./rhino-stats -l Events
Parameter Set: Events
Parameter Set Type: Events
Description: Event Statistics

Counter type statistics:
  Id: Name:         Label:  Description:
  0   accepted      acc     Accepted events
  1   rejected      rej     Events rejected due to overload
  2   failed        fail    Events that failed in event processing
  3   successful    succ    Event processed successfully

Sample type statistics:
  Id: Name:                 Label:  Description:
  4   eventRouterSetupTime  ERT     Event router setup time
  5   sbbProcessingTime     SBBT    SBB processing time
  6   eventProcessingTime   EPT     Total event processing time

Found 164 parameter sets under 'Events':
    ->  "Events"
    ->  "Events.Rhino Internal"
    ->  "Events.Rhino Internal.[javax.slee.ActivityEndEvent javax.slee, 1.0]"
    ->  "Events.Rhino Internal.[javax.slee.facilities.TimerEvent javax.slee, 1.0]"
    ->  "Events.Rhino Internal.[javax.slee.profile.ProfileAddedEvent javax.slee, 1.0]"
    ->  "Events.Rhino Internal.[javax.slee.profile.ProfileRemovedEvent javax.slee, 1.0]"
    ->  "Events.Rhino Internal.[javax.slee.profile.ProfileUpdatedEvent javax.slee, 1.0]"
    ->  "Events.Rhino Internal.[javax.slee.serviceactivity.ServiceStartedEvent javax.slee, 1.0]"
    ->  "Events.Rhino Internal.[javax.slee.serviceactivity.ServiceStartedEvent javax.slee, 1.1]"
    ->  "Events.insis-cap1a"
    ->  "Events.insis-cap1a.[com.opencloud.slee.resources.in.dialog.CloseInd OpenCloud, 2.0]"
    ->  "Events.insis-cap1a.[com.opencloud.slee.resources.in.dialog.DelimiterInd OpenCloud, 2.0]"
    ->  "Events.insis-cap1a.[com.opencloud.slee.resources.in.dialog.NoticeInd OpenCloud, 2.0]"
    ->  "Events.insis-cap1a.[com.opencloud.slee.resources.in.dialog.OpenConf OpenCloud, 2.0]"
    ->  "Events.insis-cap1a.[com.opencloud.slee.resources.in.dialog.OpenInd OpenCloud, 2.0]"
    ->  "Events.insis-cap1a.[com.opencloud.slee.resources.in.dialog.ProviderAbortInd OpenCloud, 2.0]"
    ->  "Events.insis-cap1a.[com.opencloud.slee.resources.in.dialog.UserAbortInd OpenCloud, 2.0]"
...

Parameter set types — required for monitoring

Note A parameter set can only be monitored by a statistics client such as rhino-stats if it has a parameter set type.

A parameter set’s type is listed in its description. Most parameter sets have a type, such as the Events parameter set, which has the type "Events". However, the SLEE-Usage root parameter set, for example, does not have a type, as shown below:

$ ./rhino-stats -l SLEE-Usage
Parameter Set: SLEE-Usage
 (no parameter set type defined for this parameter set)

Found 16 parameter sets under 'SLEE-Usage':
    ->  "SLEE-Usage"
    ->  "SLEE-Usage.ProfileTables"
    ->  "SLEE-Usage.RAEntities"
    ->  "SLEE-Usage.Services"
    ->  "SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2]"
    ->  "SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2]"
    ->  "SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2].(default)"
    ->  "SLEE-Usage.Services.ServiceID[name=Call Duration Logging Service,vendor=OpenCloud,version=0.2]"
    ->  "SLEE-Usage.Services.ServiceID[name=Call Duration Logging Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Duration Logging SBB,vendor=OpenCloud,version=0.2]"
    ->  "SLEE-Usage.Services.ServiceID[name=Call Duration Logging Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Duration Logging SBB,vendor=OpenCloud,version=0.2].(default)"

Neither the SLEE-Usage parameter set nor its immediate child parameter sets (SLEE-Usage.ProfileTables, SLEE-Usage.RAEntities, and SLEE-Usage.Services) has a parameter set type, as usage parameters are defined by SLEE components. The parameter set representing usage for an SBB within a particular service does, however, have a parameter set type and can be monitored:

$ ./rhino-stats -l "SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2]"
Parameter Set: SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2]
Parameter Set Type: Usage.Services.SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2]
Description: Usage stats for SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2]

Counter type statistics:
  Id: Name:               Label:  Description:
  0   missingParameters   n/a     missingParameters
  1   tCallAttempts       n/a     tCallAttempts
  2   unknownSubscribers  n/a     unknownSubscribers
  3   oCallAttempts       n/a     oCallAttempts
  4   callsBarred         n/a     callsBarred
  5   callsAllowed        n/a     callsAllowed

Sample type statistics: (none defined)

Found 2 parameter sets under 'SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2]':
    ->  "SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2]"
    ->  "SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2].(default)"
Tip Rhino guarantees that if a particular parameter set has a non-null parameter set type, then all its child parameter sets will also have a non-null parameter set type and can therefore also be individually monitored.

Monitor parameters in real time

Tip Once started, rhino-stats will continue to extract and print the latest statistics once per second. This period can be changed using the -s option.

To monitor a parameter set in real time using the console interface, run rhino-stats with the -m command-line argument followed by the parameter set name. For example:

$ ./rhino-stats -m "Events.insis-cap1a.[com.opencloud.slee.resources.incc.operation.InitialDPInd OpenCloud, 3.0]"
2008-05-01 17:37:20.687  INFO   [rhinostat]  Connecting to localhost:1199
2008-05-01 17:37:21.755  INFO   [dispatcher]  Establish direct session DirectSession[host=x.x.x.x port=17400 id=56914320286194693]
2008-05-01 17:37:21.759  INFO   [dispatcher]  Connecting to localhost/127.0.0.1:17400

                          Events.insis-cap1a.[com.opencloud.slee.resources.incc.operation.InitialDPI
time                       acc   fail  rej   succ        EPT              ERT             SBBT
                                                       50% 90% 95%      50% 90% 95%      50% 90% 95%
-----------------------   --------------------------------------------------------------------------
2008-05-01 17:37:25.987      69     0     0    69      0.7 1.2 1.4      0.2 0.4 0.4      0.5 0.8 1.0
2008-05-01 17:37:26.989      69     0     0    69      0.9 1.2 1.4      0.2 0.4 0.4      0.5 0.8 1.0
2008-05-01 17:37:27.991      61     0     0    61      0.9 1.3 1.6      0.3 0.4 0.4      0.6 0.9 1.0
2008-05-01 17:37:28.993      67     0     0    67      0.9 1.3 1.4      0.3 0.4 0.4      0.6 0.9 1.0
2008-05-01 17:37:29.996      69     0     0    69      0.9 1.3 1.4      0.3 0.4 0.4      0.6 0.9 1.0
2008-05-01 17:37:30.996      63     0     0    63      0.9 1.3 1.4      0.3 0.4 0.4      0.6 0.9 1.0
2008-05-01 17:37:31.999      71     0     0    71      0.9 1.3 1.4      0.3 0.4 0.4      0.6 0.9 1.0
2008-05-01 17:37:33.001      64     0     0    64      0.9 1.3 1.4      0.3 0.4 0.4      0.6 0.9 1.0
2008-05-01 17:37:34.002      68     0     0    68      0.9 1.3 1.4      0.3 0.4 0.4      0.6 0.9 1.0
2008-05-01 17:37:35.004      60     0     0    60      0.9 1.3 1.4      0.3 0.4 0.4      0.6 0.9 1.0
2008-05-01 17:37:36.006      64     0     0    64      1.0 1.3 1.4      0.3 0.4 0.4      0.6 0.9 1.0
2008-05-01 17:37:37.008      67     0     0    66      1.0 1.3 1.5      0.3 0.4 0.4      0.6 0.9 1.0
2008-05-01 17:37:38.010      61     0     0    62      1.0 1.4 1.5      0.3 0.4 0.4      0.6 0.9 1.0
2008-05-01 17:37:39.012      61     0     0    61      1.0 1.4 1.5      0.3 0.4 0.4      0.6 0.9 1.0
Tip The "50% 90% 95%" headers indicate percentile buckets for sample type statistics.

Configure console output

The default console output is not particularly useful when you want to do automated processing of the logged statistics. To make post-processing of the statistics easier, rhino-stats supports a number of command-line arguments to modify the format of statistics output:

  • -R — outputs raw (single number) timestamps

  • -C — outputs comma-separated statistics

  • -d — display actual value in addition to deltas for counter stats

  • -S — no per second conversion of counter deltas

  • -r — output one row per node

  • -q — suppresses printing non-statistics information

For example, to output a comma-separated log of event statistics:

$ ./rhino-stats -m "Events.insis-cap1a.[com.opencloud.slee.resources.incc.operation.InitialDPInd OpenCloud, 3.0]" -R -C -q
time,acc,fail,rej,succ,EPT,ERT,SBBT
1209620311166,64,0,0,64,0.9 1.2 1.3,0.3 0.4 0.4,0.6 0.8 0.9
1209620312168,63,0,0,63,0.9 1.3 1.3,0.3 0.4 0.4,0.6 0.9 0.9
1209620313169,67,0,0,67,0.9 1.3 1.3,0.3 0.4 0.4,0.6 0.9 0.9
1209620314171,66,0,0,66,0.9 1.3 1.3,0.3 0.4 0.4,0.6 0.9 0.9
1209620315174,65,0,0,65,0.9 1.3 1.3,0.3 0.4 0.4,0.6 0.9 0.9
1209620316176,65,0,0,65,0.9 1.3 1.5,0.3 0.4 0.4,0.6 0.9 1.0
1209620317177,62,0,0,62,0.9 1.3 1.4,0.3 0.4 0.4,0.6 0.9 0.9
1209620318179,66,0,0,66,1.0 1.3 1.5,0.3 0.4 0.4,0.6 0.9 1.0
1209620319181,58,0,0,58,1.0 1.3 1.6,0.3 0.4 0.4,0.6 0.9 1.1
1209620320181,69,0,0,69,1.0 1.3 1.6,0.3 0.4 0.4,0.6 0.9 1.0
1209620321182,68,0,0,68,1.0 1.3 1.6,0.3 0.4 0.4,0.6 0.9 1.0
1209620322183,65,0,0,65,1.0 1.3 1.5,0.3 0.4 0.4,0.6 0.9 1.0
1209620323184,67,0,0,67,1.0 1.3 1.5,0.3 0.4 0.4,0.6 0.9 1.0
...

Write output to file

To write statistics output to rolling CSV files, use the -o command-line argument. The files are written to a subdirectory named output. At most ten 10MB files are kept, and each is compressed once it reaches its maximum size; these values are configurable through client/etc/rhino-console-log4j2.xml. Output rows are comma-separated by default (the same effect as the -C command-line argument). All other console output modifiers still work:

  • -R — outputs raw (single number) timestamps

  • -d — display actual value in addition to deltas for counter stats

  • -S — no per second conversion of counter deltas

  • -r — output one row per node

  • -q — suppresses printing non-statistics information
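
For example, to log event statistics to rolling CSV files with raw timestamps, suppressing informational messages:

$ ./rhino-stats -m "Events" -o -R -q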

Gathering Statistics in Graphical Mode

To create a graph, start the rhino-stats client in graphical mode using the -g option:

$ ./rhino-stats -g

After the client has downloaded parameter set information from Rhino, the main application window displays. Below are some of the options available for a graph, and instructions for creating a quick or complex graph.

Graphing options

When run in graphical mode, rhino-stats offers a range of options for interactively extracting and graphically displaying statistics gathered from Rhino, including:

  • counter/gauge plot — displays the values of gauges, or the change in values of counters, over time; displays multiple counters or gauges using different colours; stores an hour’s worth of statistics history

  • sample distribution plot — displays the 5th, 25th, 50th, 75th and 95th percentiles of a sample distribution as it changes over time, either as a bar-and-whisker type graph or as a series of line plots

  • sample distribution histogram — displays a constantly updating histogram of a sample distribution in both logarithmic and linear scales.

Quick graph

The client includes a browser panel at left, with the available parameter sets displayed in a hierarchy tree. To quickly create a simple graph, right-click a parameter set, and select a parameter and type of graph. For example, the following illustration shows selecting a quick plot of lockTimeouts:

quickgraph

Complex graph

To create more complex graphs, comprising multiple statistics, use the "graph creation wizard". The following steps are illustrated with an example that creates a plot of event-processing counter statistics from the IN-SIS.

1

Start the wizard
  • Select New Graph from the Monitoring menu.

The wizard presents a selection of graph types.

SelectGraphType

The following types are available:

Graph type selection Description

Line graph for a counter or gauge

Select and combine multiple statistics in a single line-plot graph.

Line graph for a sample distribution

Display percentile values, for a single sample-type statistic, on a line plot.

Histogram for a sample frequency distribution

Display the frequency distribution for a single sample-type statistic, in a histogram. Options include:

  • rolling distribution — a frequency distribution influenced by the last X generations of samples

  • resetting distribution — a frequency distribution influenced by all samples since the client last sampled statistics

  • permanent distribution — a frequency distribution influenced by all samples since monitoring started.

Load an existing graph configuration from a file

Create a new graph using a previously saved graph configuration file.

  • Select a graph type (in the example, the first option Line graph for a counter or a gauge), and click Next.

2

Select statistics

The wizard presents a selection of graph components.

SelectGraphComponents

This screen displays a table of statistics selected for the line plot. Initially, this is empty. To add statistics, click Add.

3

Select parameter sets

The Select Parameter Set dialog displays.

SelectParameterSet
  • Click the panel at left to navigate to and select a parameter set (in the example, Events.insis-cap1a). Available statistics for that parameter set display in the panel at right.

  • Select the statistics you want to include. In the example, the counter type statistics "accepted", "failed", "rejected" and "successful" have been selected (using shift-click to select a range).

  • Select where to extract statistics from. In the case of a multi-node Rhino cluster, you can select cluster (as in the example) to combine statistics for the whole cluster, or select an individual node to extract them from.

  • Click OK.

4

Select colours

The Select Graph Components screen redisplays with the components added.

SelectGraphComponents2
  • To change the colour assigned to a statistic, select from the Colour drop-down menu.

  • Click Next.

5

Name the graph

The wizard prompts you to name the graph.

NameGraph
  • Type a name for the graph.

  • Select an existing tab, or type a name to create a new tab in which to display the graph. In this example, the graph is named "IN-SIS CAP1a Events (cluster)" and displayed in a new tab named "Events". If the new tab name is left blank, the graph is created in a new tab with the same name as the graph title. You might, however, want to add several related graphs to the same tab for easy visual comparison, and can name the tab accordingly.

  • Click Finish.

6

View the graph

The client creates the graph and begins populating it with statistics extracted from Rhino.

SampleGraph

The client will continue collecting statistics periodically from Rhino and adding them to the graph. By default the graph displays only the last minute of information. This can be changed using the timescale drop-down list on the toolbar, or by clicking the magnify and demagnify buttons on either side of it; the x-axis scales from 30 seconds up to 10 minutes.

Each line graph stores approximately one hour of data (using the default sample frequency of 1 second). To view stored data that is not currently visible in the graph window, click and drag the scrollbar underneath the graph, or click directly on a position within the scrollbar.

Details of Available Statistics

Note

If Rhino is configured with a key/value store for replication, then the Cassandra Key/Value Store will also provide statistics as detailed here.

If Rhino is configured with session ownership enabled, then the Cassandra Session Ownership Store will also provide statistics as detailed here.

Tip Further details about the Rhino SNMP OID mappings are available here.

Activities

Activity Statistics

OID: 1.3.6.1.4.1.19808.2.1.2

Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units

lifeTime | lifeT | | Activity life time (create to end) | sample | sec | | time/milliseconds | time/seconds
created | | 2 | Activities created | counter
ended | | 3 | Activities ended | counter
rejected | | 4 | Activities rejected | counter
active | | 5 | Activity count | counter | | gauge
startSuspended | stsusp | 6 | Number of activities started suspended | counter | activities
suspendActivity | susp | 7 | Number of activities suspended | counter | activities

ActivityHandler

Rhino Activity Handler statistics

OID: 1.3.6.1.4.1.19808.2.1.13

Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units

gCTime | gct | | GC time | sample | µsec | | time/nanoseconds | time/milliseconds
txCreate | txCre | 2 | Transacted Activity Creation | counter | activities
txFire | txFire | 3 | Events Fired Transacted | counter | events
txEnd | txEnd | 4 | Transacted Activity Ends | counter | activities
nonTxCreate | NtxCre | 5 | Non-transacted Activity Creation | counter | activities
nonTxFire | NtxFire | 6 | Events Fired Non-transacted | counter | events
nonTxEnd | NtxEnd | 7 | Non-transacted Activity Ends | counter | activities
nonTxLookup | NtxLook | 8 | Non-transacted lookups | counter | lookups
txLookup | txLook | 9 | Transacted lookups | counter | lookups
nonTxLookupMiss | NtxLkMiss | 10 | Misses in non-transacted lookups | counter | lookups
txLookupMiss | txLkMiss | 11 | Misses in transacted lookups | counter | lookups
ancestorCount | ances | 12 | Ancestor activities created | counter | activities
gcCount | agcs | 13 | Number of activities queried by GC | counter | sweeps
generationsCollected | gensC | 14 | Activity MVCC generations collected | counter | generations
activitiesCollected | actsC | 15 | Number of activities collected | counter | activities
activitiesUnclean | uncln | 16 | Number of activities not cleaned by GC and retained for next GC | counter | activities
activitiesScanned | scan | 17 | Number of activities scanned by GC | counter | activities
administrativeRemove | admRem | 18 | Number of activities administratively removed | counter | activities
livenessQueries | qlive | 19 | Number of activity liveness queries performed | counter | queries
timersSet | tmset | 20 | Number of SLEE timers created | counter | timers
timersCancelled | tmcanc | 21 | Number of SLEE timers cancelled | counter | timers
localLockRequests | llock | 22 | Number of activity state locks requested for activities owned by the same node | counter | locks
foreignLockRequests | flock | 23 | Number of activity state locks requested for activities owned by another node | counter | locks
create | | 24 | Activities created | counter | activities
end | | 25 | Activities ended | counter | activities
fire | fire | 26 | Events fired | counter | events
lookup | look | 27 | Activity lookups attempted | counter | lookups
lookupMiss | lkMiss | 28 | Activity lookups failed | counter | lookups
churn | churn | 29 | Activity state churn | counter | units | gauge
liveCount | live | 30 | Activity handler live activities count | counter | activities | gauge
tableSize | tblsz | 31 | Activity lookup table size | counter | activities | gauge
timerCount | timers | 32 | Number of SLEE timers | counter | timers | gauge
lockRequests | locks | 33 | Number of activity state locks requested | counter | locks

ClassLoading

JVM Class Loading Statistics

OID: 1.3.6.1.4.1.19808.2.1.28

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

loadedClassCount

loadClasses

2

Number of classes currently loaded

counter

gauge

totalLoadedClassCount

totLoadClasses

3

Total number of classes loaded since JVM start

counter

gauge

unloadedClassCount

unloadClasses

4

Total number of classes unloaded since JVM start

counter

gauge

ClusterTopology

Cluster topology stats

OID: 1.3.6.1.4.1.19808.2.1.40

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

bootingNodes

booting

2

Number of cluster nodes currently starting up

counter

gauge

eventRouterNodes

er

3

Number of event router nodes in the cluster

counter

gauge

quorumNodes

quorum

4

Number of quorum nodes in the cluster

counter

gauge

Compilation

JVM Compilation Statistics

OID: 1.3.6.1.4.1.19808.2.1.29

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

totalCompilationTime

totCompTime

2

The total compilation time

counter

gauge

Convergence

Configuration convergence statistics

OID: 1.3.6.1.4.1.19808.2.1.39

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

convergenceScans

cScans

2

Number of times the configuration convergence scanner has checked for work in a namespace

counter

delta

tasksAdded

tAdded

3

Number of convergence tasks added to the execution queue

counter

delta

tasksExecuted

tExecuted

4

Number of convergence tasks executed

counter

delta

tasksCompleted

tCompleted

5

Number of convergence tasks completed

counter

delta

tasksFailed

tFailed

6

Number of convergence tasks failed

counter

delta

tasksRetried

tRetried

7

Number of convergence tasks retried

counter

delta

queueSize

qSize

8

Size of the convergence task queue

counter

gauge

tasksRunning

tRunning

9

Number of convergence tasks currently running

counter

gauge

maxAge

maxAge

10

Age of the oldest convergence task in the queue

counter

gauge

EndpointLimiting

SLEE Endpoint Limiting Statistics

OID: 1.3.6.1.4.1.19808.2.1.22

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

submitted

sub

2

Activities and events submitted to a SLEE endpoint

counter

accepted

acc

3

Activities and events accepted by a SLEE endpoint

counter

userAccepted

usrAcc

4

Activities and events accepted by the user rate limiter (SystemInput)

counter

userRejected

usrRej

5

Activities and events rejected by the user rate limiter (SystemInput)

counter

licenseRejected

licRej

6

Activities and events rejected due to the SDK license limit

counter

EventRouter

EventRouter Statistics

OID: 1.3.6.1.4.1.19808.2.1.15

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

eventHandlerStages

evh

2

Event handler stages executed

counter

rollbackHandlerStages

rbh

3

Rollback handler stages executed

counter

cleanupStages

clh

4

Cleanup stages executed

counter

badGuyHandlerStages

bgh

5

Badguy handler stages executed

counter

vEPs

vep

6

Event router setup (virgin events)

counter

rootSbbFinds

rootf

7

Number of root SBBs resolved

counter

sbbsResolved

res

8

Number of SBBs resolved

counter

sbbCreates

cre

9

Number of SBBs created

counter

sbbExceptions

exc

10

Number of SBB thrown exceptions caught

counter

processingRetrys

retr

11

Number of event processing retries due to concurrent activity updates

counter

Events

Event Statistics

OID: 1.3.6.1.4.1.19808.2.1.1

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

eventRouterSetupTime

ERT

Event router setup time

sample

µsec

time/nanoseconds

time/milliseconds

sbbProcessingTime

SBBT

SBB processing time

sample

µsec

time/nanoseconds

time/milliseconds

eventProcessingTime

EPT

Total event processing time

sample

µsec

time/nanoseconds

time/milliseconds

accepted

acc

2

Accepted events

counter

rejected

rej

3

Events rejected due to overload (total)

counter

failed

fail

4

Events that failed in event processing

counter

successful

succ

5

Events processed successfully

counter

rejectedQueueFull

rejqf

6

Events rejected due to overload (queue full)

counter

rejectedQueueTimeout

rejqt

7

Events rejected due to overload (queue timeout)

counter

rejectedOverload

rejso

8

Events rejected due to overload (system overload)

counter
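
To watch these event statistics live, the rhino-stats console client in rhino/client/bin can monitor a parameter set by name. A typical invocation, assuming the default connection settings, is sketched below; exact options may vary between Rhino versions:

$ cd rhino/client/bin
bin$ ./rhino-stats -m Events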

ExecutorStats

Executor pool statistics

OID: 1.3.6.1.4.1.19808.2.1.23

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

executorTaskExecutionTime

etExecTime

Time a task spends executing

sample

time/nanoseconds

time/milliseconds

executorTaskWaitingTime

etWaitTime

Time a task spends waiting for execution

sample

time/nanoseconds

time/milliseconds

executorTasksExecuted

etExecuted

2

Number of executor tasks executed

counter

delta

executorTasksExecuting

etExecuting

3

Number of executor tasks currently executing

counter

gauge

executorTasksRejected

etRejected

4

Number of executor tasks rejected

counter

delta

executorTasksSubmitted

etSubmitted

5

Number of executor tasks submitted

counter

delta

executorTasksWaiting

etWaiting

6

Number of executor tasks waiting to execute

counter

gauge

executorThreadsIdle

thrIdle

7

Number of idle executor threads

counter

gauge

executorThreadsTotal

thrTotal

8

Total number of executor threads

counter

gauge

GarbageCollector

JVM Garbage Collector Statistics

OID: 1.3.6.1.4.1.19808.2.1.30

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

collectionDuration

duration

Garbage Collector Collection Duration

sample

msec

time/milliseconds

time/milliseconds

collectionInterval

interval

Garbage Collector Collection Interval

sample

msec

time/milliseconds

time/milliseconds

collectionPeriod

period

Garbage Collector Collection Period

sample

msec

time/milliseconds

time/milliseconds

collectionCount

collCount

2

Garbage Collector Collection Count

counter

gauge

collectionTime

collTime

3

Cumulative Garbage Collector Collection Time

counter

gauge

lastCollectionDuration

collD

4

Last Collection Duration

counter

gauge

lastCollectionInterval

collI

5

Last Collection Interval (application runtime between GC events, end of GC to next GC start)

counter

gauge

lastCollectionPeriod

collP

6

Last Collector Collection Period (period of GC starts)

counter

gauge

JDBCDatasource

JDBC Datasource Statistics

OID: 1.3.6.1.4.1.19808.2.1.16

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

getWait

getWait

Time spent by threads waiting for a connection (that eventually succeeded)

sample

msec

time/milliseconds

time/milliseconds

duration

duration

Time that connections spent in use (allocated to a client)

sample

msec

time/milliseconds

time/milliseconds

create

create

2

Number of new connections created

counter

removeIdle

remIdle

3

Number of connections removed from the pool due to being idle

counter

removeOverflow

remOver

4

Number of connections removed from the pool due to the idle pool being full

counter

removeError

remErr

5

Number of connections removed from the pool due to a connection error

counter

getRequest

getReq

6

Number of getConnection() requests made

counter

getSuccess

getOk

7

Number of getConnection() requests that succeeded

counter

getTimeout

getTO

8

Number of getConnection() requests that failed due to a timeout

counter

getError

getErr

9

Number of getConnection() requests that failed due to a connection error

counter

putOk

putOk

10

Number of connections returned to the pool that were retained

counter

putOverflow

putOver

11

Number of connections returned to the pool that were closed because the idle pool was at maximum size

counter

putError

putErr

12

Number of connections returned to the pool that were closed due to a connection error

counter

inUseConnections

cInUse

13

Number of in use connections

counter

gauge

idleConnections

cIdle

14

Number of idle pooled connections

counter

gauge

pendingConnections

cPend

15

Number of connections in the process of being created

counter

gauge

totalConnections

cTotal

16

Total number of open connections

counter

gauge

maxConnections

cMax

17

Maximum number of open connections

counter

gauge

JVM

JVM Statistics

OID: 1.3.6.1.4.1.19808.2.1.14

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

heapUsed

husd

2

Used heap memory

counter

gauge

heapCommitted

hcomm

3

Committed heap memory

counter

gauge

heapInitial

hinit

4

Initial heap memory

counter

gauge

heapMaximum

hmax

5

Maximum heap memory

counter

gauge

nonHeapUsed

nhusd

6

Used non-heap memory

counter

gauge

nonHeapCommitted

nhcomm

7

Committed non-heap memory

counter

gauge

nonHeapInitial

nhinit

8

Initial non-heap memory

counter

gauge

nonHeapMaximum

nhmax

9

Maximum non-heap memory

counter

gauge

classesCurrentLoaded

cLoad

10

Number of classes currently loaded

counter

gauge

classesTotalLoaded

cTotLoad

11

Total number of classes loaded since JVM start

counter

gauge

classesTotalUnloaded

cTotUnload

12

Total number of classes unloaded since JVM start

counter

gauge

LicenseAccounting

License Accounting Information

OID: 1.3.6.1.4.1.19808.2.1.12

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

accountedUnits

acc

2

Accounted License Units Consumed

counter

units

unaccountedUnits

unacc

3

Unaccounted License Units Consumed

counter

units

Limiters

Limiter Statistics

OID: 1.3.6.1.4.1.19808.2.1.17

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

unitsUsed

used

2

Number of units used

counter

unitsRejected

rejected

3

Number of units rejected (both here and by parent)

counter

unitsRejectedByParent

rejectedByParent

4

Number of units rejected by parent limiter

counter

LockManagers

Lock Manager Statistics

OID: 1.3.6.1.4.1.19808.2.1.4

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

lockAcquisitionTime

LAT

Lock acquisition time

sample

µsec

time/nanoseconds

time/milliseconds

lockWaitTime

LWT

Time waited for contested locks

sample

µsec

time/nanoseconds

time/milliseconds

locksAcquired

acq

2

Locks acquired

counter

locksReleased

rel

3

Locks released

counter

lockWaits

wait

4

Lock waits occurred

counter

lockTimeouts

timeout

5

Lock timeouts occurred

counter

knownLocks

locks

6

Total number of locks with state

counter

gauge

acquireMessages

msgAcquire

7

LOCK_ACQUIRE messages sent

counter

abortMessages

msgAbort

8

LOCK_ABORT_ACQUIRE messages sent

counter

releaseMessages

msgRelease

9

LOCK_RELEASE_TRANSACTION messages sent

counter

migrationRequestMessages

msgMigReq

10

LOCK_MIGRATION_REQUEST messages sent

counter

migrationReleaseMessages

msgMigRel

11

LOCK_MIGRATION_RELEASE messages sent

counter

MemDB-Local

Local Memory Database Statistics

OID: 1.3.6.1.4.1.19808.2.1.9

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

committedSize

csize

2

Current committed size in kilobytes

counter

kb

gauge

maxCommittedSize

max

3

Maximum allowed committed size in kilobytes

counter

kb

gauge

churnSize

churn

4

Churn space used by the database, in bytes

counter

b

cleanupCount

cleanups

5

Number of state cleanups performed by the database

counter

#

retainedSize

rsize

6

Current total retained state size in kilobytes

counter

kb

gauge

MemDB-Replicated

Replicated Memory Database Statistics

OID: 1.3.6.1.4.1.19808.2.1.10

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

committedSize

csize

2

Current committed size in kilobytes

counter

kb

gauge

maxCommittedSize

max

3

Maximum allowed committed size in kilobytes

counter

kb

gauge

churnSize

churn

4

Churn space used by the database, in bytes

counter

b

cleanupCount

cleanups

5

Number of state cleanups performed by the database

counter

#

retainedSize

rsize

6

Current total retained state size in kilobytes

counter

kb

gauge

MemDB-Timestamp

Memory Database Timestamp Statistics

OID: 1.3.6.1.4.1.19808.2.1.25

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

waitingThreads

waiting

2

The number of threads waiting for a timestamp to become safe

counter

gauge

unexposedCommits

unexposed

3

The number of commits which are not yet safe to expose

counter

gauge

Memory

JVM Memory Statistics

OID: 1.3.6.1.4.1.19808.2.1.31

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

heapInitial

heapInit

2

Memory Heap Usage Initial

counter

gauge

heapUsed

heapUsed

3

Memory Heap Usage Used

counter

gauge

heapMax

heapMax

4

Memory Heap Usage Max

counter

gauge

heapCommitted

heapComm

5

Memory Heap Usage Committed

counter

gauge

nonHeapInitial

nonheapInit

6

Memory Non Heap Usage Initial

counter

gauge

nonHeapUsed

nonheapUsed

7

Memory Non Heap Usage Used

counter

gauge

nonHeapMax

nonheapMax

8

Memory Non Heap Usage Max

counter

gauge

nonHeapCommitted

nonheapComm

9

Memory Non Heap Usage Committed

counter

gauge

MemoryPool

JVM Memory Pool Statistics

OID: 1.3.6.1.4.1.19808.2.1.32

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

freed

freed

Bytes collected from this memory pool in recent garbage collections

sample

count/bytes

count/bytes

lowWaterMark

lwm

Memory Pool usage after recent garbage collections

sample

count/bytes

count/bytes

collectionUsageInitial

collUsageInit

2

Memory Pool Usage Initial, as sampled at end of last GC

counter

gauge

collectionUsageUsed

collUsageUsed

3

Memory Pool Usage Used, as sampled at end of last GC

counter

gauge

collectionUsageMax

collUsageMax

4

Memory Pool Usage Max, as sampled at end of last GC

counter

gauge

collectionUsageCommitted

collUsageComm

5

Memory Pool Usage Committed, as sampled at end of last GC

counter

gauge

collectionUsageThreshold

collUsageThresh

6

Memory Pool Usage Threshold, as sampled at end of last GC

counter

gauge

collectionUsageThresholdCount

collUseThrCount

7

Memory Pool Usage Threshold Count, as sampled at end of last GC

counter

gauge

peakUsageInitial

peakUsageInit

8

Memory Pool Peak Usage Initial

counter

gauge

peakUsageUsed

peakUsageUsed

9

Memory Pool Peak Usage Used

counter

gauge

peakUsageMax

peakUsageMax

10

Memory Pool Peak Usage Max

counter

gauge

peakUsageCommitted

peakUsageComm

11

Memory Pool Peak Usage Committed

counter

gauge

usageThreshold

usageThresh

12

Memory Pool Usage Threshold

counter

gauge

usageThresholdCount

usageThreshCount

13

Memory Pool Usage Threshold Count

counter

gauge

usageInitial

usageInit

14

Memory Pool Usage Initial

counter

gauge

usageUsed

usageUsed

15

Memory Pool Usage Used

counter

gauge

usageMax

usageMax

16

Memory Pool Usage Max

counter

gauge

usageCommitted

usageComm

17

Memory Pool Usage Committed

counter

gauge

lastCollected

lastColl

18

Memory Pool usage collected in last garbage collection

counter

gauge

collected

collected

19

Memory Pool usage collected in garbage collections

counter

delta

collectionCount

collCount

21

Number of garbage collections that collected objects in this pool

counter

gauge

ObjectPools

Object Pool Statistics

OID: 1.3.6.1.4.1.19808.2.1.7

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

added

2

Freed objects that were accepted to the pool

counter

removed

3

Allocation requests that were satisfied from the pool

counter

overflow

4

Freed objects that were discarded because the pool was full

counter

miss

5

Allocation requests not satisfied by the pool because it was empty

counter

size

6

Current number of objects in the pool

counter

gauge

capacity

7

Maximum object pool capacity

counter

gauge

pruned

8

Objects in the pool that were discarded because they fell off the end of the LRU queue

counter

OperatingSystem

JVM Operating System Statistics

OID: 1.3.6.1.4.1.19808.2.1.33

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

availableProcessors

availProc

2

Operating System Available Processors

counter

gauge

committedVirtualMemorySize

commVirtMem

3

Operating System Committed Virtual Memory

counter

gauge

freePhysicalMemorySize

freePhysMem

4

Operating System Free Physical Memory Size

counter

gauge

freeSwapSpaceSize

freeSwapSpc

5

Operating System Free Swap Space Size

counter

gauge

processCpuTime

procCpuTime

6

Operating System Process Cpu Time

counter

gauge

totalPhysicalMemorySize

totPhysMem

7

Operating System Total Physical Memory Size

counter

gauge

totalSwapSpaceSize

totSwapSpc

8

Operating System Total Swap Space Size

counter

gauge

maxFileDescriptorCount

maxFileDesc

9

Operating System Max File Descriptor Count

counter

gauge

openFileDescriptorCount

openFileDesc

10

Operating System Open File Descriptor Count

counter

gauge

PooledByteArrayBuffer

Byte Array Buffer Pool Statistics

OID: 1.3.6.1.4.1.19808.2.1.26

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

out

out

2

Total buffer allocation requests

counter

in

in

3

Total freed buffers

counter

added

added

4

Freed buffers that were accepted to the pool

counter

removed

removed

5

Buffer allocation requests that were satisfied from the pool

counter

overflow

overflow

6

Freed buffers that were discarded because the pool was full

counter

miss

miss

7

Buffer allocation requests not satisfied by the pool because it was empty

counter

poolSize

psize

8

Current number of buffers in the pool

counter

gauge

bufferSize

bsize

9

Buffer size

counter

gauge

poolCapacity

capacity

10

Maximum pool capacity

counter

gauge

RemoteTimerClientStats

__

OID: 1.3.6.1.4.1.19808.2.1.37

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

RemoteTimerServerStats

__

OID: 1.3.6.1.4.1.19808.2.1.38

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

RemoteTimerTimingWheel

Remote timer server’s timing-wheel execution statistics

OID: 1.3.6.1.4.1.19808.2.1.36

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

cascadeOverflow

cOverflow

2

Number of scheduled jobs that were cascaded from the overflow list to wheel 3

counter

delta

cascadeWheel1

cWheel1

3

Number of scheduled jobs that were cascaded from wheel 1 to wheel 0

counter

delta

cascadeWheel2

cWheel2

4

Number of scheduled jobs that were cascaded from wheel 2 to wheel 1

counter

delta

cascadeWheel3

cWheel3

5

Number of scheduled jobs that were cascaded from wheel 3 to wheel 2

counter

delta

jobsExecuted

jExecuted

6

Number of jobs that reached their expiry time and were submitted for execution

counter

delta

jobsRejected

jRejected

7

Number of submitted jobs that were rejected by the executor

counter

delta

jobsScheduled

jScheduled

8

Number of jobs scheduled onto a timer queue for later execution

counter

delta

jobsToOverflow

jToOverflow

9

Number of scheduled jobs that were initially placed on the overflow list

counter

delta

jobsToWheel0

jToWheel0

10

Number of scheduled jobs that were initially placed on wheel 0

counter

delta

jobsToWheel1

jToWheel1

11

Number of scheduled jobs that were initially placed on wheel 1

counter

delta

jobsToWheel2

jToWheel2

12

Number of scheduled jobs that were initially placed on wheel 2

counter

delta

jobsToWheel3

jToWheel3

13

Number of scheduled jobs that were initially placed on wheel 3

counter

delta

jobsWaiting

jWaiting

14

Number of submitted jobs that are currently waiting to reach their expiry time

counter

delta

tasksCancelled

tCancelled

15

Number of tasks successfully cancelled

counter

delta

tasksFixedDelay

tRepDelay

16

Number of fixed-delay repeated execution tasks submitted

counter

delta

tasksFixedRate

tRepRate

17

Number of fixed-rate repeated execution tasks submitted

counter

delta

tasksImmediate

tNow

18

Number of immediate-execution tasks submitted

counter

delta

tasksOneShot

tOnce

19

Number of one-shot delayed execution tasks submitted

counter

delta

tasksRepeated

tRepeated

20

Number of times a repeated-execution task was rescheduled

counter

delta

ticks

ticks

21

Number of timer ticks elapsed

counter

delta

Runtime

JVM Runtime Statistics

OID: 1.3.6.1.4.1.19808.2.1.34

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

uptime

uptime

2

Runtime Uptime

counter

gauge

startTime

startTime

3

Runtime Start Time

counter

gauge

Services

Service Statistics

OID: 1.3.6.1.4.1.19808.2.1.5

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

sbbLifeTime

lifeT

Root SBB lifetime

sample

msec

time/milliseconds

time/seconds

rootSbbsCreated

created

2

Root sbbs created

counter

rootSbbsRemoved

removed

3

Root sbbs removed

counter

activeRootSbbs

active

4

# of active root sbbs

counter

gauge

SLEEState

SLEE state stats

OID: 1.3.6.1.4.1.19808.2.1.41

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

startingNodes

starting

2

Number of event router nodes where the current SLEE actual state is 'starting'

counter

gauge

runningNodes

running

3

Number of event router nodes where the current SLEE actual state is 'running'

counter

gauge

stoppingNodes

stopping

4

Number of event router nodes where the current SLEE actual state is 'stopping'

counter

gauge

stoppedNodes

stopped

5

Number of event router nodes where the current SLEE actual state is 'stopped'

counter

gauge

unlicensedNodes

unlic

6

Number of event router nodes where the current SLEE actual state is 'unlicensed' (if using host-based licensing)

counter

gauge

failedNodes

failed

7

Number of event router nodes where the current SLEE actual state is 'failed'

counter

gauge

StagingThreads

Staging Thread Statistics

OID: 1.3.6.1.4.1.19808.2.1.3

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

waitTime

waitT

Time spent waiting on stage queue

sample

msec

time/nanoseconds

time/milliseconds

itemsAdded

2

Items of work submitted to the thread pool

counter

itemsCompleted

3

Items of work completed by the thread pool

counter

queueSize

qSize

4

Size of the work item queue

counter

gauge

numThreads

numthrd

5

Current size of the thread pool

counter

gauge

availableThreads

avail

6

Number of idle worker threads

counter

gauge

minThreads

min

7

Configured minimum size of the thread pool

counter

gauge

maxThreads

max

8

Configured maximum size of the thread pool

counter

gauge

activeThreads

activ

9

Worker threads currently active processing work

counter

gauge

peakThreads

peak

10

The most threads that were ever active in the thread pool in the current configuration

counter

gauge

dropped

drop

11

Number of dropped stage items

counter

StagingThreads-Misc

Distributed Resource Manager Runnable Stage Statistics

OID: 1.3.6.1.4.1.19808.2.1.21

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

waitTime

waitT

Time spent waiting on stage queue

sample

msec

time/nanoseconds

time/milliseconds

itemsAdded

2

Items of work submitted to the thread pool

counter

itemsCompleted

3

Items of work completed by the thread pool

counter

queueSize

qSize

4

Size of the work item queue

counter

gauge

numThreads

numthrd

5

Current size of the thread pool

counter

gauge

availableThreads

avail

6

Number of idle worker threads

counter

gauge

minThreads

min

7

Configured minimum size of the thread pool

counter

gauge

maxThreads

max

8

Configured maximum size of the thread pool

counter

gauge

activeThreads

activ

9

Worker threads currently active processing work

counter

gauge

peakThreads

peak

10

The most threads that were ever active in the thread pool in the current configuration

counter

gauge

dropped

drop

11

Number of dropped stage items

counter

Thread

JVM Thread Statistics

OID: 1.3.6.1.4.1.19808.2.1.35

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

currentThreadCpuTime

currThrdCpuTm

2

Thread Current Thread Cpu Time

counter

gauge

currentThreadUserTime

currThrdUsrTm

3

Thread Current Thread User Time

counter

gauge

daemonThreadCount

daeThrds

4

Thread Daemon Thread Count

counter

gauge

peakThreadCount

peakThrds

5

Thread Peak Thread Count

counter

gauge

threadCount

threads

6

Thread Thread Count

counter

gauge

totalStartedThreadCount

totStartThrds

7

Thread Total Started Thread Count

counter

gauge

TimerFacility

Timer Facility’s timing-wheel execution statistics

OID: 1.3.6.1.4.1.19808.2.1.24

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

cascadeOverflow

cOverflow

2

Number of scheduled jobs that were cascaded from the overflow list to wheel 3

counter

delta

cascadeWheel1

cWheel1

3

Number of scheduled jobs that were cascaded from wheel 1 to wheel 0

counter

delta

cascadeWheel2

cWheel2

4

Number of scheduled jobs that were cascaded from wheel 2 to wheel 1

counter

delta

cascadeWheel3

cWheel3

5

Number of scheduled jobs that were cascaded from wheel 3 to wheel 2

counter

delta

jobsExecuted

jExecuted

6

Number of jobs that reached their expiry time and were submitted for execution

counter

delta

jobsRejected

jRejected

7

Number of submitted jobs that were rejected by the executor

counter

delta

jobsScheduled

jScheduled

8

Number of jobs scheduled onto a timer queue for later execution

counter

delta

jobsToOverflow

jToOverflow

9

Number of scheduled jobs that were initially placed on the overflow list

counter

delta

jobsToWheel0

jToWheel0

10

Number of scheduled jobs that were initially placed on wheel 0

counter

delta

jobsToWheel1

jToWheel1

11

Number of scheduled jobs that were initially placed on wheel 1

counter

delta

jobsToWheel2

jToWheel2

12

Number of scheduled jobs that were initially placed on wheel 2

counter

delta

jobsToWheel3

jToWheel3

13

Number of scheduled jobs that were initially placed on wheel 3

counter

delta

jobsWaiting

jWaiting

14

Number of submitted jobs that are currently waiting to reach their expiry time

counter

delta

tasksCancelled

tCancelled

15

Number of tasks successfully cancelled

counter

delta

tasksFixedDelay

tRepDelay

16

Number of fixed-delay repeated execution tasks submitted

counter

delta

tasksFixedRate

tRepRate

17

Number of fixed-rate repeated execution tasks submitted

counter

delta

tasksImmediate

tNow

18

Number of immediate-execution tasks submitted

counter

delta

tasksOneShot

tOnce

19

Number of one-shot delayed execution tasks submitted

counter

delta

tasksRepeated

tRepeated

20

Number of times a repeated-execution task was rescheduled

counter

delta

ticks

ticks

21

Number of timer ticks elapsed

counter

delta

Transactions

Transaction Manager Statistics

OID: 1.3.6.1.4.1.19808.2.1.6

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

active

2

Number of active transactions

counter

gauge

started

started

3

Transactions started

counter

committed

commit

4

Transactions committed

counter

rolledBack

rollback

5

Transactions rolled back

counter

UnpooledByteArrayBuffer

Unpooled Byte Array Buffer Statistics

OID: 1.3.6.1.4.1.19808.2.1.27

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

out

out

2

Total buffer allocation requests

counter

in

in

3

Total freed buffers

counter

bytesAllocated

allocated

4

Total number of bytes allocated to buffers

counter

bytesDiscarded

discarded

5

Total number of bytes discarded by freed buffers

counter

Savanna-Group

Per-group Savanna statistics

OID: 1.3.6.1.4.1.19808.2.1.19

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

tokenRotationTime

tokRotation

Token rotation time samples

sample

time/milliseconds

time/milliseconds

regularMessageSize

rgmSize

Sent regular message size samples

sample

count/bytes

count/bytes

messageDeliveryTime

dyMsgTime

Time taken to locally deliver a single message

sample

time/nanoseconds

time/milliseconds

transmitBatchBytes

txBatchBytes

Size of messages sent per token rotation

sample

count/bytes

count/bytes

appMessageSize

appMsgSize

Sent application message size samples

sample

count/bytes

count/bytes

appRoundTripTime

appRTT

Time from sending a message to the message being delivered to application code on the same node

sample

time/nanoseconds

time/milliseconds

appTransmitDelay

appXmitDelay

Time from sending an application message to Savanna, to it being sent at the network level

sample

time/nanoseconds

time/milliseconds

appDeliveryDelay

appDelivDelay

Time from a received message being eligible for delivery, to it being delivered to the application

sample

time/nanoseconds

time/milliseconds

fragmentsPerAppMessage

appMsgFrags

Number of fragments making up a single application message

sample

count/fragments

count/fragments

fragmentSize

appFragSize

Size of sent message fragments

sample

count/bytes

count/bytes

fragmentsPerRegularMessage

rgmFrags

Number of application message fragments in a single regular message

sample

count/fragments

count/fragments

udpBytesSent

udpBytesTx

2

Total UDP bytes sent

counter

bytes

udpBytesReceived

udpBytesRx

3

Total UDP bytes received

counter

bytes

udpDatagramsSent

udpTx

4

Number of UDP datagrams successfully sent

counter

count

udpDatagramsReceived

udpRx

5

Number of valid UDP datagrams received

counter

count

udpInvalidDatagramsReceived

udpErrorRx

6

Number of invalid UDP datagrams received

counter

count

udpDatagramSendErrors

udpErrorTx

7

Number of UDP datagrams that failed to be sent

counter

count

tokenRetransmits

tokRetrans

8

Number of token retransmits

counter

count

activityEstimate

activityEst

9

Cluster activity time estimate

counter

msec

gauge

regularMessagesSent

rgmTx

10

Number of regular messages sent

counter

count

regularMessagesReceived

rgmRx

11

Number of regular messages received

counter

count

recoveryMessagesSent

rcmTx

12

Number of recovery messages sent

counter

count

recoveryMessagesReceived

rcmRx

13

Number of recovery messages received

counter

count

restartGroupMessagesSent

rsmTx

14

Number of restart group messages sent

counter

count

restartGroupMessagesReceived

rsmRx

15

Number of restart group messages received

counter

count

restartGroupMessageRetransmits

rsmRetrans

16

Number of restart group messages retransmitted

counter

count

regularTokensSent

rtokTx

17

Number of regular tokens sent

counter

count

regularTokensReceived

rtokRx

18

Number of regular tokens received

counter

count

installTokensSent

itokTx

19

Number of install tokens sent

counter

count

installTokensReceived

itokRx

20

Number of install tokens received

counter

count

groupIdles

idles

21

Number of times group has gone idle

counter

count

messagesLessThanARU

belowARU

22

Number of messages seen less than ARU

counter

count

shiftToInstall

toInstall

23

Number of times group has shifted to install

counter

count

shiftToRecovery

toRecovery

24

Number of times group has shifted to recovery

counter

count

shiftToOperational

toOper

25

Number of times group has shifted to operational

counter

count

messageRetransmits

msgRetrans

26

Number of message retransmits

counter

count

fcReceiveBufferSize

fcRcvBuf

27

Flowcontrol receive buffer size

counter

bytes

gauge

fcSendWindowSize

fcSendWin

28

Flowcontrol send window size

counter

bytes

gauge

fcCongestionWindowSize

fcConWin

29

Flowcontrol congestion window size

counter

bytes

gauge

fcTokenRotationEstimate

fcTokEst

30

Flow control token rotation time estimate

counter

msec

gauge

fcRetransmissionRequests

fcRetrans

31

Number of current retransmission requests

counter

count

gauge

fcLimitedSends

fcLimited

32

Number of token rotations where we wanted to send more data than flowcontrol allowed

counter

count

deliveryQueueSize

dyQSize

33

Number of messages waiting to be delivered locally

counter

bytes

gauge

deliveryQueueBytes

dyQBytes

34

Size of messages waiting to be delivered locally

counter

bytes

gauge

transmitQueueSize

txQSize

35

Number of messages waiting to send

counter

bytes

gauge

transmitQueueBytes

txQBytes

36

Size of messages waiting to send

counter

bytes

gauge

appBytesSent

appBytesTx

37

Number of application message bytes sent

counter

bytes

appBytesReceived

appBytesRx

38

Number of application message bytes received

counter

bytes

appMessagesSent

appTx

39

Number of application messages sent

counter

count

appMessagesReceived

appRx

40

Number of application messages received and fully reassembled

counter

count

appPartialMessagesReceived

appPartialRx

41

Number of application messages received and partially reassembled

counter

count

appSendErrors

appErrorTx

42

Number of application messages dropped due to full buffers

counter

count

fragStartSent

fragStartTx

43

Number of start message fragments sent

counter

count

fragMidSent

fragMidTx

44

Number of middle message fragments sent

counter

count

fragEndSent

fragEndTx

45

Number of final message fragments sent

counter

count

fragNonSent

fragNonTx

46

Number of messages sent unfragmented

counter

count

Savanna-Membership

Membership ring Savanna statistics

OID: 1.3.6.1.4.1.19808.2.1.18

Name Short Name Mapping Description Type Unit Label Default View Source Units Default Display Units

tokenRotationTime

tokRotation

Token rotation time samples

sample

time/milliseconds

time/milliseconds

udpBytesSent

udpBytesTx

2

Total UDP bytes sent

counter

bytes

udpBytesReceived

udpBytesRx

3

Total UDP bytes received

counter

bytes

udpDatagramsSent

udpTx

4

Number of UDP datagrams successfully sent

counter

count

udpDatagramsReceived

udpRx

5

Number of valid UDP datagrams received

counter

count

udpInvalidDatagramsReceived

udpErrorRx

6

Number of invalid UDP datagrams received

counter

count

udpDatagramSendErrors

udpErrorTx

7

Number of UDP datagrams that failed to be sent

counter

count

tokenRetransmits

tokRetrans

8

Number of token retransmits

counter

count

activityEstimate

activityEst

9

Cluster activity time estimate

counter

msec

gauge

joinMessagesSent

joinTx

10

Number of join messages sent

counter

count

joinMessagesReceived

joinRx

11

Number of join messages received

counter

count

membershipTokensSent

mtokTx

12

Number of membership tokens sent

counter

count

membershipTokensReceived

mtokRx

13

Number of membership tokens received

counter

count

commitTokensSent

ctokTx

14

Number of commit tokens sent

counter

count

commitTokensReceived

ctokRx

15

Number of commit tokens received

counter

count

shiftToGather

toGather

16

Number of times group has shifted to gather

counter

count

shiftToInstall

toInstall

17

Number of times group has shifted to install

counter

count

shiftToCommit

toCommit

18

Number of times group has shifted to commit

counter

count

shiftToOperational

toOper

19

Number of times group has shifted to operational

counter

count

tokenRetransmitTimeouts

tokTimeout

20

Number of token retransmission timeouts

counter

count

Metrics.Services.cmp

SBB CMP field metrics. These metrics are generated for every CMP field in each SBB.

Note Recording of these metrics can be turned on and off with rhino-console commands.
Name Description Type Unit Label View Source Units Default Display Units

<cmpfield>Reads

CMP field <cmpfield> reads

Counter

<cmpfield>Writes

CMP field <cmpfield> writes

Counter

<cmpfield>ReferenceCacheHits

CMP field <cmpfield> reference cache hits during field reads

Counter

<cmpfield>ReferenceCacheMisses

CMP field <cmpfield> reference cache misses during field reads

Counter

<cmpfield>WriteTime

CMP field <cmpfield> serialisation time

Sample

<cmpfield>ReadTime

CMP field <cmpfield> deserialisation time

Sample

<cmpfield>Size

CMP field <cmpfield> serialised size

Sample

Metrics.Services.lifecycle

SBB lifecycle method metrics. These metrics record invocations of SBB lifecycle methods.

Note Recording of these metrics can be turned on and off with rhino-console commands.
Name Description Type Unit Label View Source Units Default Display Units

sbbSetContexts

Total number of setSbbContext invocations

Counter

sbbUnsetContexts

Total number of unsetSbbContext invocations

Counter

sbbCreates

Total number of sbbCreate invocations

Counter

sbbRemoves

Total number of sbbRemove invocations

Counter

sbbLoads

Total number of sbbLoad invocations

Counter

sbbStores

Total number of sbbStore invocations

Counter

sbbActivates

Total number of sbbActivate invocations

Counter

sbbPassivates

Total number of sbbPassivate invocations

Counter

sbbRollbacks

Total number of sbbRolledBack invocations

Counter

sbbExceptionsThrown

Total number of sbbExceptionThrown invocations

Counter

generate-system-report

The generate-system-report script generates a tarball of useful system information for the Metaswitch support team. Below is an example of its output:

$ ./generate-system-report.sh

This script generates a report tarball which can be useful when remotely
diagnosing problems with this installation.

The created tarball contains information on the current Rhino installation
(config files, license details), as well as various system settings (operating
system, program versions, and network settings).

It is recommended that you include the generated 'report.tar' file when
contacting Open Cloud for support.

Generating report using /home/user/rhino/node-101/work/report for temporary files.

IMPORTANT: It is a good idea to run the start-rhino.sh script before this
script. Otherwise, important run-time configuration information will be missing
from the generated report.

Done. 'report.tar' generated.
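
The resulting archive is an ordinary tar file, so its contents can be reviewed with standard tools before it is sent to support:

$ tar -tf report.tar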

dumpthreads

The dumpthreads script sends a QUIT signal to the JVM process that Rhino is running in, causing the JVM to produce a thread dump.

The script itself has no output. It is used internally by Rhino (via the Watchdog) to produce a Java thread dump from the Rhino JVM in certain error scenarios (such as stuck event processing threads). Below is a partial example of thread-dump output:

"StageWorker/TM/1" prio=1 tid=0x082bb5c0 nid=0x192 in Object.wait() [0x9aae9000..0x9aaea060]
        at java.lang.Object.wait(Native Method)
        - waiting on <0x9f4154d8> (a [Lcom.opencloud.ob.RhinoSDK.mM;)
        at java.lang.Object.wait(Object.java:474)
        at com.opencloud.ob.RhinoSDK.oS$a.run(13520:68)
        - locked <0x9f4154d8> (a [Lcom.opencloud.ob.RhinoSDK.mM;)
        at java.lang.Thread.run(Thread.java:595)

"Timer-2" prio=1 tid=0x9ac06988 nid=0x18e in Object.wait() [0x9ab6a000..0x9ab6afe0]
        at java.lang.Object.wait(Native Method)
        - waiting on <0x9f4bff28> (a java.util.TaskQueue)
        at java.util.TimerThread.mainLoop(Timer.java:509)
        - locked <0x9f4bff28> (a java.util.TaskQueue)
        at java.util.TimerThread.run(Timer.java:462)
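
The same dump can be triggered by hand by sending the QUIT signal to the Rhino JVM process; the dump is written to the Rhino console log. For example, assuming a single Rhino JVM is running under the current user (the pgrep pattern here is illustrative):

$ kill -QUIT $(pgrep -f rhino)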

dependency-graph

dependency-graph is a command-line utility to show the dependencies between SLEE components in a running SLEE.

It can either list them to the console (useful with grep), or write a DOT file for use with Graphviz. To use it, call the dependency-graph script in the rhino/client/bin directory.

Prerequisites

  • Rhino 2.1 or higher

  • Graphviz software (to render the generated DOT files)

  • Some SLEE services and RAs deployed in Rhino

Options

Invoke with no arguments to view command-line options:

bin$ ./dependency-graph
Valid command line options are:
  -h <host>       - The hostname to connect to.
  -p <port>       - The port to connect to.
  -u <username>   - The user to authenticate as.
  -w <password>   - The password used for authentication.
  -D              - Display connection debugging messages.
Exactly one of the following two options:
  -c              - Write the dependencies to the console.
  -o <output>     - Draws the graph to the given output file in "DOT" format
(see: http://en.wikipedia.org/wiki/DOT_language).
Graph options only (when using -o option):
  -e              - Exclude events.
  -f              - Write full component IDs (including vendor and version).
  -g              - Group by deployable unit.
  -m              - Monochrome (turn colors off).

Exactly one of -c or -o should be specified.

Examples

Below are some example sessions against a Rhino SDK installation with the SIP examples installed. They illustrate how the level of detail can be controlled using the command-line flags.

With -e, -f, -g flags
$ cd rhino/client/bin
bin$ ./dependency-graph -o sip-dependencies.dot -e -f -g
Connecting to localhost:1199
Fetching dependency info from SLEE...
Processing dependencies...
Writing dependency graph in DOT format to sip-dependencies.dot...
Finished generating file.

If you have graphviz installed, this command should generate a PNG image file:
    dot -Tpng sip-dependencies.dot -o sip-dependencies.dot.png
bin$ dot -Tpng sip-dependencies.dot -o SipExamples-EFG.png

This excludes events (-e), draws full component IDs (-f), and groups components by deployable unit (-g). It produces the image below:

[Image: SipExamples-EFG.png, dependency graph grouped by deployable unit]

Just -f and -g

This is the equivalent graph after dropping the (-e) flag so that events are included:

$ ./dependency-graph -o sip-dependencies.dot -f -g
...
$ dot -Tpng sip-dependencies.dot -o SipExamples-most-detail.png
[Image: SipExamples-most-detail.png, dependency graph including events]
Note Events in the same event jar are drawn as a single unit.

Just -e

This is the equivalent graph with the least detail possible, using just the (-e) flag to exclude events:

$ ./dependency-graph -o sip-dependencies.dot -e
...
$ dot -Tpng sip-dependencies.dot -o SipExamples-least-detail.png
[Image: SipExamples-least-detail.png, dependency graph with least detail]
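
With -c only

When only a quick textual listing is needed, the -c option writes each dependency to the console instead of a DOT file, which combines naturally with grep. For example (the search term is illustrative):

$ ./dependency-graph -c | grep -i sip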

Utilities

This section includes details and sample output for the following Rhino utilities:

Script                         What it does

init-management-db             cleans out the Rhino disk-based database

generate-client-configuration  generates configuration files for Rhino’s management clients

rhino-passwd                   outputs a hash for password authentication

cascade-uninstall              uninstalls a deployable unit along with everything that depends on it

init-management-db

The init-management-db script cleans out the Rhino disk-based database.

The primary effect of this is the deletion of all SLEE state (including deployed components and desired states). For the SDK, this means deleting and regenerating the embedded database.

Below are examples of init-management-db output for the production and SDK versions of Rhino:

Production

$ ./init-management-db.sh
Initializing database..
Connected to jdbc:postgresql://localhost:5432/template1 (PostgreSQL 8.4.9)
Connected to jdbc:postgresql://localhost:5432/rhino (PostgreSQL 8.4.9)
Database initialized.

SDK

Initializing database..
Connected to jdbc:derby:rhino_sdk;create=true (Apache Derby 10.6.1.0 - (938214))
Database initialized.

generate-client-configuration

The generate-client-configuration script generates configuration files for Rhino’s management clients based on the Rhino configuration specified as a command-line argument.

The purpose of this script is to allow regeneration of the client configuration if the Rhino configuration is ever updated.

Below are examples of generate-client-configuration output for the production and SDK versions of Rhino:

Production

$ ./generate-client-configuration ../../node-101/config/config_variables
Using configuration in ../../node-101/config/config_variables

SDK

$ ./generate-client-configuration ../../config/config_variables
Using configuration in ../../config/config_variables

The list of files regenerated by this script can be found in the etc/templates directory:

$ ls etc/templates/
client.properties
jetty-file-auth.xml
jetty-jmx-auth.xml
web-console.passwd
web-console.properties

rhino-passwd

The rhino-passwd script outputs a password hash for the given password, for use with management-authentication methods such as the file login module.

Below is an example of rhino-passwd output:

This utility reads passwords from the console and displays the hashed password
that must be put in the rhino.passwd file.  Enter a blank line to exit.

Password:

acbd18db4cc2f85cedef654fccc4a4d8
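
For reference, the example hash above is the unsalted MD5 digest of the password foo, so, assuming the file login module stores plain MD5 hashes as this example suggests, it can be reproduced with standard tools:

$ echo -n foo | md5sum
acbd18db4cc2f85cedef654fccc4a4d8  -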

cascade-uninstall

The cascade-uninstall script uninstalls a deployable unit from Rhino, along with everything that depends on it, including:

  • other deployable units that directly or indirectly depend on components contained in the deployable unit being uninstalled

  • services defined in deployable units that are being uninstalled

  • profile tables created from profile specifications defined in deployable units that are being uninstalled

  • resource adaptor entities created from resource adaptors defined in deployable units that are being uninstalled.

The script deactivates all services and resource adaptor entities that are to be removed and are in the ACTIVE state, and waits for them to reach the INACTIVE state before proceeding to uninstall the deployable unit.

Below are command-line options and sample uses of cascade-uninstall:

Options

$ ./cascade-uninstall
Valid command line options are:
-h <host>       - The hostname to connect to.
-p <port>       - The port to connect to.
-u <username>   - The user to authenticate as.
-w <password>   - The password used for authentication.
-D              - Display connection debugging messages.
-n <namespace>  - Select namespace to perform the actions on.
-l              - List installed deployable units.
-d <url>        - Uninstall specified deployable unit.
-c <id>         - Uninstall specified copied or linked component.
-s <id>         - Remove the shadow from the specified shadowed component.
-y              - Assume a yes response to the uninstall confirmation.
                  Information about what will be removed from the SLEE prior
                  to removal is not displayed and components are removed
                  without user confirmation.

If any of the -h, -p, -u, or -w options are not specified, the defaults configured in the client directory from which the script is run are used.

Typically you would only need to use the -l option to list the installed deployable units, then the -d option to uninstall the required deployable unit.

Examples

To list all deployable units installed in Rhino:

$ ./cascade-uninstall -l
Connecting to localhost:1199
The following deployable units are installed:
file:/home/rhino/rhino/lib/javax-slee-standard-types.jar
file:rhino/units/in-common-du_2.0.jar
file:rhino/units/incc-callbarring-service-du.jar
file:rhino/units/incc-callduration-service-du.jar
file:rhino/units/incc-callforwarding-service-du.jar
file:rhino/units/incc-ratype-du_3.0.jar
file:rhino/units/incc-vpn-service-du.jar
file:rhino/units/insis-base-du_2.0.jar
file:rhino/units/insis-caprovider-ratype-du_1.0.jar
file:rhino/units/insis-scs-ratype-du_2.0.jar
file:rhino/units/insis-scs-test-service-du.jar
file:rhino/units/insis-swcap-du_2.0.jar

To uninstall the deployable unit with the URL file:rhino/units/insis-base-du_2.0.jar, along with all its dependents:

$ ./cascade-uninstall -d file:rhino/units/insis-base-du_2.0.jar
Connecting to localhost:1199
Cascade removal of deployable unit file:rhino/units/insis-base-du_2.0.jar requires the following operations to be performed:
Deployable unit file:rhino/units/insis-scs-test-service-du.jar will be uninstalled
SBB with SbbID[name=IN-SIS Test Service Composition Selector SBB,vendor=OpenCloud,version=0.2] will be uninstalled
Service with ServiceID[name=IN-SIS Test Service Composition Selector Service,vendor=OpenCloud,version=0.2] will be uninstalled
This service will first be deactivated
Deployable unit file:rhino/units/insis-swcap-du_2.0.jar will be uninstalled
Resource adaptor with ResourceAdaptorID[name=IN-SIS Signalware CAP,vendor=OpenCloud,version=2.0] will be uninstalled
Resource adaptor entity insis-cap1b will be removed
This resource adaptor entity will first be deactivated
Resource adaptor entity insis-cap1a will be removed
This resource adaptor entity will first be deactivated
Deployable unit file:rhino/units/insis-scs-ratype-du_2.0.jar will be uninstalled
Resource adaptor type with ResourceAdaptorTypeID[name=IN-SIS Service Composition Selection,vendor=OpenCloud,version=2.0] will be uninstalled
Event type with EventTypeID[name=com.opencloud.slee.resources.sis.script.in.scs.INSCSEvent,vendor=OpenCloud,version=2.0] will be uninstalled
Deployable unit file:rhino/units/insis-base-du_2.0.jar will be uninstalled
Profile specification with ProfileSpecificationID[name=IN-SIS Initial Trigger Rule Profile,vendor=OpenCloud,version=1.0] will be uninstalled
Profile table initial-trigger-selection-rules will be removed
Profile specification with ProfileSpecificationID[name=IN-SIS Service Composition Profile,vendor=OpenCloud,version=1.0] will be uninstalled
Profile table service-compositions will be removed
Profile specification with ProfileSpecificationID[name=IN-SIS Macro Profile,vendor=OpenCloud,version=1.0] will be uninstalled
Profile table initial-trigger-selection-macros will be removed
Profile specification with ProfileSpecificationID[name=IN-SIS Configuration,vendor=OpenCloud,version=2.0] will be uninstalled
Profile table insis-configs will be removed
Profile specification with ProfileSpecificationID[name=IN-SIS Service Configuration,vendor=OpenCloud,version=2.0] will be uninstalled
Profile table service-configs will be removed
Profile specification with ProfileSpecificationID[name=IN-SIS Address Subscription,vendor=OpenCloud,version=2.0] will be uninstalled
Profile table address-subscriptions will be removed
Profile specification with ProfileSpecificationID[name=IN-SIS Trigger Address Tracing,vendor=OpenCloud,version=2.0] will be uninstalled
Profile table trigger-address-tracing will be removed
Profile specification with ProfileSpecificationID[name=IN-SIS Service Key Subscription,vendor=OpenCloud,version=2.0] will be uninstalled
Profile table service-key-subscriptions will be removed
Library with LibraryID[name=IN-SIS Scripting Provider,vendor=OpenCloud,version=1.0] will be uninstalled

Continue? (y/n): y
Deactivating service ServiceID[name=IN-SIS Test Service Composition Selector Service,vendor=OpenCloud,version=0.2]
All necessary services are inactive
Deactivating resource adaptor entitiy insis-cap1b
Deactivating resource adaptor entitiy insis-cap1a
All necessary resource adaptor entities are inactive
Uninstalling deployable unit file:rhino/units/insis-scs-test-service-du.jar
Removing resource adaptor entity insis-cap1b
Removing resource adaptor entity insis-cap1a
Uninstalling deployable unit file:rhino/units/insis-swcap-du_2.0.jar
Uninstalling deployable unit file:rhino/units/insis-scs-ratype-du_2.0.jar
Removing profile table initial-trigger-selection-rules
Removing profile table service-compositions
Removing profile table initial-trigger-selection-macros
Removing profile table insis-configs
Removing profile table service-configs
Removing profile table address-subscriptions
Removing profile table trigger-address-tracing
Removing profile table service-key-subscriptions
Uninstalling deployable unit file:rhino/units/insis-base-du_2.0.jar

Rhino includes the following export and import related scripts:

Script              What it does

rhino-export        export the state of the SLEE

rhino-import        import SLEE state saved using rhino-export

rhino-snapshot      save a profile snapshot

snapshot-decode     inspect a profile snapshot

snapshot-to-export  prepare a snapshot for importing

Tip For details on using these scripts, see the Backup and Restore section.

Memory Considerations

The Rhino Management and Monitoring Tools default to memory settings that will allow operation on most systems without error.

It may occasionally be necessary to tune the memory requirements for each tool. In particular, exporting and importing very large profile tables or deployable units may require increasing the heap limit above the default values for the rhino-console, rhino-export or rhino-import tools.

Memory settings can be configured for each tool separately by editing the tool script in $RHINO_HOME/client/bin and adding a GC_OPTIONS= line. For example:

GC_OPTIONS="-Xmx256m"

The memory settings can be set globally for all tools by editing the existing GC_OPTIONS line in $RHINO_HOME/client/etc/rhino-client-common.

Warning Metaswitch recommends not decreasing the default values unless you need to run the tools in a memory-constrained environment.