This document details basic procedures for system administrators deploying, configuring, managing, and maintaining Rhino 2.6 using the command-line console. To manage Rhino using a web interface, see the Rhino Element Manager documentation.
Topics

This guide covers:

- Administrative tasks for day-to-day management of the Rhino SLEE, its components and entities deployed in it, including: operational state, deployable units, services, resource adaptor entities, per-node activation state, profile tables and profiles, alarms, usage, user transactions, and component activation priorities.
- Procedures for configuring Rhino upon installation, and as needed (for example, to tune performance), including: logging, staging, object pools, licenses, rate limiting, security, and external databases.
- Finding Housekeeping MBeans, and finding, inspecting, and removing one or all activities or SBB entities.
- Backing up and restoring the database, and exporting and importing SLEE deployment state.
- Managing the SNMP subsystem in Rhino, including: configuring the agent, managing MIB files, and assigning OID mappings.
- Tools included with Rhino for system administration.
Other documentation for the Rhino TAS can be found on the Rhino TAS product page.
SLEE Management
This section covers general administrative tasks for day-to-day management of the Rhino SLEE, its components and entities deployed in it.
JMX MBeans
Rhino SLEE uses Java Management Extensions (JMX) MBeans for its management functionality. This includes both the functions defined in the JAIN SLEE 1.1 specification and Rhino extensions that provide functionality beyond the specification.
Rhino's command-line console is a front end for these MBeans, providing access to the management functions described throughout this section.
Management may also be performed via the Rhino Element Manager web interface.
See also Management Tools and the Rhino Management Extension APIs.
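For programmatic management, a client connects to Rhino's JMX server and invokes operations on these MBeans. The following is a minimal sketch using the standard JMX remote API; the service URL, port, and credentials are placeholders, not the values Rhino actually exposes, so substitute the connection details for your own installation (see the Rhino Management Extension APIs).

    import java.util.HashMap;
    import java.util.Map;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class RhinoJmxConnect {
        public static void main(String[] args) throws Exception {
            // Placeholder service URL and credentials -- substitute the host,
            // port, and login details for your own Rhino installation.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:1199/rhino");
            Map<String, Object> env = new HashMap<>();
            env.put(JMXConnector.CREDENTIALS, new String[] {"admin", "password"});

            try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
                MBeanServerConnection connection = connector.getMBeanServerConnection();
                // A trivial sanity check: count the registered MBeans.
                System.out.println("Registered MBeans: " + connection.getMBeanCount());
            }
        }
    }

The later sketches in this section assume an MBeanServerConnection obtained this way.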
Namespaces
As well as an overview of Rhino namespaces, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples, and links to related javadocs:
Procedure | rhino-console command | MBean → Operation
---|---|---
Creating a namespace | createnamespace | Namespace Management → createNamespace
Removing a namespace | removenamespace | Namespace Management → removeNamespace
Listing namespaces | listnamespaces | Namespace Management → getNamespaces
Setting the active namespace | -n <namespace> (command-line option), setactivenamespace (interactive command) | Namespace Management → setActiveNamespace
Getting the active namespace | (reported in the command prompt) | Namespace Management → getActiveNamespace
About Namespaces
A namespace is an independent deployment environment that is isolated from other namespaces.
A namespace has:
- its own SLEE operational state
- its own set of deployable units
- its own set of instantiated profile tables, profiles, and resource adaptor entities
- its own component configuration state
- its own set of activation states for services and resource adaptor entities.
All of these things can be managed within an individual namespace without affecting the state of any other namespace.
A namespace can be likened to a SLEE in itself, where Rhino with multiple namespaces is a container of SLEEs.
A Rhino cluster always has a default namespace that cannot be deleted. Any number of user-defined namespaces may also be created, managed, and subsequently deleted when no longer needed.
Management clients interact with the default namespace unless they explicitly request a different one.
Creating a Namespace
To create a new namespace, use the following rhino-console command or related MBean operation.
Console command: createnamespace
Command:

    createnamespace <name>

Description: Create a new deployment namespace

Example:

    $ ./rhino-console createnamespace testnamespace
    Namespace testnamespace created
MBean operation: createNamespace
Rhino extension:

    public void createNamespace(String name)
        throws NullPointerException, InvalidArgumentException,
               NamespaceAlreadyExistsException, ManagementException;
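As a sketch of how this operation might be invoked over JMX, using a connection obtained as shown in JMX MBeans above. The ObjectName below is hypothetical; look up the actual registered name of the namespace management MBean in the Rhino Management Extension APIs.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;

    public class CreateNamespaceSketch {
        static void createNamespace(MBeanServerConnection connection, String name)
                throws Exception {
            // Hypothetical ObjectName -- check the Rhino Management Extension
            // APIs for the real registered name.
            ObjectName namespaceMgmt =
                    new ObjectName("com.opencloud.rhino:type=NamespaceManagement");
            connection.invoke(namespaceMgmt, "createNamespace",
                    new Object[] {name},
                    new String[] {String.class.getName()});
        }
    }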
Removing a Namespace
To remove an existing user-defined namespace, use the following rhino-console command or related MBean operation.
The default namespace cannot be removed. |
All deployable units (other than the deployable unit containing the standard JAIN SLEE-defined types) must be uninstalled and all profile tables removed from a namespace before that namespace can be removed. |
Console command: removenamespace
Command:

    removenamespace <name>

Description: Remove an existing deployment namespace

Example:

    $ ./rhino-console removenamespace testnamespace
    Namespace testnamespace removed
MBean operation: removeNamespace
Rhino extension:

    public void removeNamespace(String name)
        throws NullPointerException, UnrecognizedNamespaceException,
               InvalidStateException, ManagementException;
Listing Namespaces
To list all user-defined namespaces in a SLEE, use the following rhino-console command or related MBean operation.
Console command: listnamespaces
Command:

    listnamespaces

Description: List all deployment namespaces

Example:

    $ ./rhino-console listnamespaces
    testnamespace
MBean operation: getNamespaces
Rhino extension:

    public String[] getNamespaces()
        throws ManagementException;

This operation returns the names of the user-defined namespaces that have been created.
Setting the Active Namespace
Each individual authenticated client connection to Rhino is associated with a namespace.
This setting, known as the active namespace, controls which namespace is affected by management commands such as those that install deployable units or change operational states.
To change the active namespace for a client connection, use the following rhino-console command or related MBean operation.
Console command: setactivenamespace
Command and command-line option:

In interactive mode, the setactivenamespace command sets the active namespace for the console session:

    setactivenamespace <-default|name>

Description: Set the active namespace

In non-interactive mode, the -n <namespace> command-line option sets the active namespace for the command being run.

Examples:

Interactive mode:

    $ ./rhino-console
    Interactive Rhino Management Shell
    Rhino management console, enter 'help' for a list of commands
    [Rhino@localhost (#0)] setactivenamespace testnamespace
    The active namespace is now testnamespace
    [Rhino@localhost [testnamespace] (#1)] setactivenamespace -default
    The active namespace is now the default namespace
    [Rhino@localhost (#2)]

Non-interactive mode:

    $ ./rhino-console -n testnamespace start
    The active namespace is now testnamespace
    Starting SLEE on node(s) [101]
    SLEE transitioned to the Starting state on node 101
MBean operation: setActiveNamespace
Rhino extension:

    public void setActiveNamespace(String name)
        throws NoAuthenticatedSubjectException, UnrecognizedNamespaceException,
               ManagementException;

This operation sets the active namespace for the client connection.
Getting the Active Namespace
To get the active namespace for a client connection, use the following rhino-console information and related MBean operation.
Console:
Command prompt information: the currently active namespace is reported in the command prompt within square brackets. If no namespace is reported, the default namespace is active.

Example:

    $ ./rhino-console
    Interactive Rhino Management Shell
    Rhino management console, enter 'help' for a list of commands
    [Rhino@localhost (#0)] setactivenamespace testnamespace
    The active namespace is now testnamespace
    [Rhino@localhost [testnamespace] (#1)] setactivenamespace -default
    The active namespace is now the default namespace
    [Rhino@localhost (#2)]
MBean operation: getActiveNamespace
Rhino extension:

    public String getActiveNamespace()
        throws NoAuthenticatedSubjectException, ManagementException;

This operation returns the name of the namespace currently active for the client connection.
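A sketch pairing the set and get operations over JMX, under the same assumption as the earlier namespace sketch (the ObjectName is hypothetical; the real name is in the Rhino Management Extension APIs):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;

    public class ActiveNamespaceSketch {
        static String switchNamespace(MBeanServerConnection connection, String name)
                throws Exception {
            // Hypothetical ObjectName -- verify against the Rhino javadocs.
            ObjectName namespaceMgmt =
                    new ObjectName("com.opencloud.rhino:type=NamespaceManagement");
            connection.invoke(namespaceMgmt, "setActiveNamespace",
                    new Object[] {name},
                    new String[] {String.class.getName()});
            // Read it back; the operation takes no arguments.
            return (String) connection.invoke(namespaceMgmt, "getActiveNamespace",
                    new Object[0], new String[0]);
        }
    }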
Operational State
As well as an overview of SLEE operational states, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples and links to related javadocs:
Procedure | rhino-console command | MBean → Operation
---|---|---
Starting the SLEE | start | SLEE Management → start
Stopping the SLEE | stop | SLEE Management → stop
Retrieving the basic operational state of nodes | state | SLEE Management → getState
Retrieving detailed information for every node in the cluster | getclusterstate | Rhino Housekeeping → getClusterState
Gracefully shutting nodes down | shutdown | SLEE Management → shutdown
Forcefully terminating nodes | kill | SLEE Management → kill
Gracefully rebooting nodes | reboot | SLEE Management → reboot
Enabling the symmetric activation state mode | enablesymmetricactivationstatemode | Runtime Config Management → enableSymmetricActivationStateMode
Disabling the symmetric activation state mode | disablesymmetricactivationstatemode | Runtime Config Management → disableSymmetricActivationStateMode
Getting the current activation state mode | getsymmetricactivationstatemode | Runtime Config Management → isSymmetricActivationStateModeEnabled
Listing nodes with per-node activation state | getnodeswithpernodeactivationstate | Node Housekeeping → getNodesWithPerNodeActivationState
Copying per-node activation state to another node | copypernodeactivationstate | Node Housekeeping → copyPerNodeActivationState
Removing per-node activation state | removepernodeactivationstate | Node Housekeeping → removePerNodeActivationState
About SLEE Operational States
The SLEE specification defines the operational lifecycle of a SLEE, summarised below.
SLEE lifecycle states
The SLEE lifecycle states are:
State | Definition
---|---
STOPPED | The SLEE has been configured and initialised, and is ready to be started. Active resource adaptor entities have been loaded and initialised, and SBBs corresponding to active services have been loaded and are ready to be instantiated. The entire event-driven subsystem, however, is idle: resource adaptor entities and the SLEE are not actively producing events, the event router is not processing work, and the SLEE is not creating SBB entities.
STARTING | Resource adaptor entities in the SLEE that have been recorded in the management database as being in the ACTIVE state are started. The SLEE still does not create SBB entities. The node automatically transitions from this state to the RUNNING state when all startup tasks are complete, or to the STOPPING state if a startup task fails.
RUNNING | Activated resource adaptor entities in the SLEE can fire events, and the SLEE creates SBB entities and delivers events to them as needed.
STOPPING | Identical to the RUNNING state, except that resource adaptor entities do not create (and the SLEE does not accept) new activity objects. Existing activity objects can end (according to the resource adaptor specification). The node automatically transitions out of this state, returning to the STOPPED state, when all SLEE activities have ended. The node can transition to this state directly from the STARTING state, effectively immediately, if it has no activity objects.
Independent SLEE states
Each namespace on each event-router node in a Rhino cluster maintains its own SLEE-lifecycle state machine, independent of other namespaces on the same or other nodes in the cluster. For example:

- the default namespace on one node in a cluster might be in the RUNNING state
- while a user-defined namespace on the same node is in the STOPPED state
- while the default namespace on another node is in the STOPPING state
- and the user-defined namespace on that node is in the RUNNING state.
The operational state of each namespace on each cluster node persists to the disk-based database.
Bootup SLEE state
After completing bootup and initialisation, a namespace on a node will enter the STOPPED state if:

- the database has no persistent operational state information for that namespace on that node;
- the namespace's persistent operational state is STOPPED on that node; or
- the node was started with the -x option (see Start Rhino in the Rhino Getting Started Guide).
Otherwise, the namespace will return to the same operational state that it was last in, as recorded in persistent storage.
Changing a namespace’s operational state
You can change the operational state of any namespace on any node at any time, as long as at least one node in the cluster is available to perform the management operation (regardless of whether or not the node whose operational state is being changed is a current cluster member). For example, you might set the operational state of the default namespace on node 103 to RUNNING before node 103 is started; then, when node 103 is started, after it completes initialising, the default namespace will enter the RUNNING state.
Changing a quorum node’s operational state
You can also change the operational state of a node that is a current member of the cluster as a quorum node, but quorum nodes make no use of operational state information stored in the database and will not respond to operational state changes. (A node only uses operational state information if it starts as a regular event-router node.)
Starting the SLEE
To start a SLEE on one or more nodes, use the following rhino-console command or related MBean operation.
Console command: start
Command:

    start [-nodes node1,node2,...] [-ifneeded]

Description: Start the SLEE (on the specified nodes)

Example:

To start nodes 101 and 102:

    $ ./rhino-console start -nodes 101,102
    Starting SLEE on node(s) [101,102]
    SLEE transitioned to the Starting state on node 101
    SLEE transitioned to the Starting state on node 102
MBean operation: start
SLEE-defined: start all nodes

    public void start()
        throws InvalidStateException, ManagementException;

Rhino's implementation of the SLEE-defined operation starts the SLEE on all event-router nodes in the cluster.

Rhino extension: start specific nodes

    public void start(int[] nodeIDs)
        throws NullPointerException, InvalidArgumentException,
               InvalidStateException, ManagementException;

Rhino provides an extension that adds an argument letting you control which nodes to start (by specifying node IDs). For this to work, the specified nodes must be in the STOPPED state.
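A sketch of invoking the node-specific extension over JMX. The object name javax.slee.management:name=SleeManagement is the standard JAIN SLEE name for this MBean, but verify it (and the extension's exact interface) against the Rhino Management Extension APIs.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;

    public class StartNodesSketch {
        static void startNodes(MBeanServerConnection connection, int... nodeIDs)
                throws Exception {
            // Standard SLEE-defined object name; confirm against the Rhino javadocs.
            ObjectName slee = new ObjectName("javax.slee.management:name=SleeManagement");
            // "[I" (int[].class.getName()) selects the Rhino start(int[]) extension.
            connection.invoke(slee, "start",
                    new Object[] {nodeIDs},
                    new String[] {int[].class.getName()});
        }
    }

For example, startNodes(connection, 101, 102) corresponds to ./rhino-console start -nodes 101,102.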
Stopping the SLEE
To stop SLEE event-routing functions on one or more nodes, use the following rhino-console command or related MBean operation.
Console command: stop
Command:

    stop [-nodes node1,node2,...] [-reassignto node3,node4,...] [-ifneeded]

Description: Stop the SLEE on the specified nodes, optionally reassigning replicated activities to the specified nodes

Examples:

To stop nodes 101 and 102:

    $ ./rhino-console stop -nodes 101,102
    Stopping SLEE on node(s) [101,102]
    SLEE transitioned to the Stopping state on node 101
    SLEE transitioned to the Stopping state on node 102

To stop only node 101 and reassign replicated activities to node 102:

    $ ./rhino-console stop -nodes 101 -reassignto 102
    Stopping SLEE on node(s) [101]
    SLEE transitioned to the Stopping state on node 101
    Replicated activities reassigned to node(s) [102]

To stop node 101 and distribute the replicated activities of each replicating resource adaptor entity to all other eligible nodes (those on which the resource adaptor entity is in the ACTIVE state and the SLEE is in the RUNNING state), specify an empty (zero-length) argument for the -reassignto option:

    $ ./rhino-console stop -nodes 101 -reassignto ""
    Stopping SLEE on node(s) [101]
    SLEE transitioned to the Stopping state on node 101
    Replicated activities reassigned to node(s) [102,103]
MBean operation: stop
SLEE-defined: stop all nodes

    public void stop()
        throws InvalidStateException, ManagementException;

Rhino's implementation of the SLEE-defined operation stops the SLEE on all event-router nodes in the cluster.

Rhino extensions: stop specific nodes

    public void stop(int[] nodeIDs)
        throws NullPointerException, InvalidArgumentException,
               InvalidStateException, ManagementException;

Rhino provides an extension that adds an argument letting you control which nodes to stop (by specifying node IDs). For this to work, the specified nodes must begin in the RUNNING state.

Reassign activities to other nodes:

    public void stop(int[] stopNodeIDs, int[] reassignActivitiesToNodeIDs)
        throws NullPointerException, InvalidArgumentException,
               InvalidStateException, ManagementException;

Rhino also provides an extension with a further argument that lets you reassign ownership of replicated activities (from replicating resource adaptor entities) away from the stopping nodes, distributing the activities of each resource adaptor entity equally among the other event-router nodes eligible to adopt them. With a smaller set of activities, the resource adaptor entities on the stopping nodes can more quickly return to the INACTIVE state (which is required for the SLEE to transition from the STOPPING to the STOPPED state). This only works for resource adaptor entities that replicate activity state (see the description of the Rhino-defined configuration property on the MBean tab of Creating a Resource Adaptor Entity). See also Reassigning a Resource Adaptor Entity's Activities to Other Nodes, in particular the Requirements tab.
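A sketch of the two-argument extension, under the same assumptions as the start sketch above (standard SLEE object name, signature strings selecting the Rhino overload):

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;

    public class StopNodesSketch {
        static void stopAndReassign(MBeanServerConnection connection,
                                    int[] stopNodes, int[] reassignTo)
                throws Exception {
            ObjectName slee = new ObjectName("javax.slee.management:name=SleeManagement");
            // Two int[] parameters select the stop(int[], int[]) Rhino extension.
            // Passing an empty reassignTo array is assumed here to correspond to
            // -reassignto "" on the console (distribute across all eligible
            // nodes); verify this against the Rhino javadocs.
            connection.invoke(slee, "stop",
                    new Object[] {stopNodes, reassignTo},
                    new String[] {int[].class.getName(), int[].class.getName()});
        }
    }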
Basic Operational State of a Node
To retrieve the basic operational state of a node, use the following rhino-console command or related MBean operation.
Console command: state
Command:

    state [-nodes node1,node2,...]

Description: Get the state of the SLEE (on the specified nodes)

Output: The state command reports the operational state of the SLEE on each specified node. If no nodes are specified, it reports the state of every event-router node in the cluster.

Examples:

To display the state of only node 101:

    $ ./rhino-console state -nodes 101
    Node 101 is Stopped

To display the state of every event-router node:

    $ ./rhino-console state
    Node 101 is Stopped
    Node 102 is Running
MBean operation: getState
SLEE-defined: return state of the current node

    public SleeState getState()
        throws ManagementException;

Rhino's implementation of the SLEE-defined operation returns the operational state of the node the management client is connected to.

Rhino extension: return state of specific nodes

    public SleeState[] getState(int[] nodeIDs)
        throws NullPointerException, InvalidArgumentException, ManagementException;

Rhino provides an extension that adds an argument letting you control which nodes to examine (by specifying node IDs).
Detailed Information for Every Node in the Cluster
To retrieve detailed information for every node in the cluster (including quorum nodes), use the following rhino-console command or related MBean operation.
Console command: getclusterstate
Command:

    getclusterstate

Description: Display the current state of the Rhino cluster

Output: For every node in the cluster, retrieves detailed information on the node ID, number of active alarms, host, node type, SLEE state, start time, and up time.

Example:

    $ ./rhino-console getclusterstate
    node-id active-alarms host             node-type    slee-state start-time        up-time
    ------- ------------- ---------------- ------------ ---------- ----------------- -----------------
    101     0             host1.domain.com event-router Stopped    20080327 12:16:26 0days,2h,40m,3s
    102     0             host2.domain.com event-router Running    20080327 12:16:30 0days,2h,39m,59s
    103     0             host3.domain.com quorum       n/a        20080327 14:36:25 0days,0h,20m,4s
MBean operation: getClusterState
Rhino extension:

    public TabularData getClusterState()
        throws ManagementException;
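Because this operation returns JMX open data, a client can walk the rows generically. A sketch follows; the Housekeeping MBean's ObjectName is hypothetical, and the column names are taken from the console output above, so verify both against the javadoc (see also Finding Housekeeping MBeans).

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.openmbean.CompositeData;
    import javax.management.openmbean.TabularData;

    public class ClusterStateSketch {
        static void printClusterState(MBeanServerConnection connection)
                throws Exception {
            // Hypothetical ObjectName -- see Finding Housekeeping MBeans.
            ObjectName housekeeping =
                    new ObjectName("com.opencloud.rhino:type=Housekeeping");
            TabularData state = (TabularData) connection.invoke(
                    housekeeping, "getClusterState", new Object[0], new String[0]);
            for (Object row : state.values()) {
                CompositeData node = (CompositeData) row;
                // Column names assumed from the console output; check the open type.
                System.out.println(node.get("node-id") + " (" + node.get("node-type")
                        + ") is " + node.get("slee-state"));
            }
        }
    }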
See also Basic Operational State of a Node.
Terminating Nodes
To terminate cluster nodes, you can either shut them down gracefully or forcefully terminate them, using the procedures described below.
See also Stop Rhino in the Getting Started Guide.
Shut Down Gracefully
To gracefully shut down one or more nodes, use the following rhino-console command or related MBean operation.
Console command: shutdown
Command:

    shutdown [-nodes node1,node2,...] [-timeout timeout] [-restart]

Description: Gracefully shut down the SLEE (on the specified nodes). The optional timeout is specified in seconds. Optionally restart the node(s) after shutdown.

Examples:

To shut down the entire cluster:

    $ ./rhino-console shutdown
    Shutting down the SLEE
    Shutdown successful

To shut down only node 102:

    $ ./rhino-console shutdown -nodes 102
    Shutting down node(s) [102]
    Shutdown successful
MBean operation: shutdown
SLEE-defined: shut down all nodes

    public void shutdown()
        throws InvalidStateException, ManagementException;

Rhino's implementation of the SLEE-defined operation shuts down every node in the cluster.

Rhino extension: shut down specific nodes

    public void shutdown(int[] nodeIDs)
        throws NullPointerException, InvalidArgumentException,
               InvalidStateException, ManagementException;

Rhino provides an extension that adds an argument letting you control which nodes to shut down (by specifying node IDs).

Event-router nodes can only be shut down when STOPPED; if any node in the set that a shutdown operation targets is not in the STOPPED state, the operation fails (and no nodes shut down). Quorum nodes can be shut down at any time.
Forcefully Terminate
To forcefully terminate a cluster node that is in any state where it can respond to management operations, use the following rhino-console command or related MBean operation.
Console command: kill
Command:

    kill -nodes node1,node2,...

Description: Forcefully terminate the specified nodes (forces them to become non-primary)

Example:

To forcefully terminate nodes 102 and 103:

    $ ./rhino-console kill -nodes 102,103
    Terminating node(s) [102,103]
    Termination successful
MBean operation: kill
Rhino operation:

    public void kill(int[] nodeIDs)
        throws NullPointerException, InvalidArgumentException, ManagementException;

Rhino's kill operation forcefully terminates the specified nodes, forcing them to become non-primary.

Application state may be lost

Killing a node is not recommended; forcibly terminated nodes lose all non-replicated application state.
Rebooting Nodes
To gracefully reboot one or more nodes, use the following rhino-console command or related MBean operation.
Console command: reboot
Command:

    reboot [-nodes node1,node2,...] {-state state | -states state1,state2,...}

Description: Gracefully shut down the SLEE (on the specified nodes), restarting into either the running or stopped state. Use either the -state argument to set the state for all nodes, or the -states argument to set it separately for each node rebooted. If a list of nodes is not provided, then -state must be used to set the state of all nodes. Valid states are (r)unning and (s)topped.

Examples:

To reboot the entire cluster:

    $ ./rhino-console reboot -states running,running,stopped
    Restarting node(s) [101,102,103] (using Rhino's default shutdown timeout)
    Restarting

To reboot only node 102:

    $ ./rhino-console reboot -nodes 102 -states stopped
    Restarting node(s) [102] (using Rhino's default shutdown timeout)
    Restarting
MBean operation: reboot
Rhino extension: reboot all nodes

    public void reboot(SleeState[] states)
        throws InvalidArgumentException, InvalidStateException, ManagementException;

Reboots every node in the cluster to the states specified.

Rhino extension: reboot specific nodes

    public void reboot(int[] nodeIDs, SleeState[] states)
        throws NullPointerException, InvalidArgumentException,
               InvalidStateException, ManagementException;

An extension to reboot that adds an argument letting you control which nodes to reboot (by specifying node IDs).

Event-router nodes can restart into either the RUNNING or the STOPPED state. Quorum nodes must have a state provided but make no use of it in operation.
Activation State
As well as an overview of activation state modes, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples, and links to related javadocs:
Procedure | rhino-console command | MBean → Operation
---|---|---
Enabling the symmetric activation state mode | enablesymmetricactivationstatemode | Runtime Config Management → enableSymmetricActivationStateMode
Disabling the symmetric activation state mode | disablesymmetricactivationstatemode | Runtime Config Management → disableSymmetricActivationStateMode
Getting the current activation state mode | getsymmetricactivationstatemode | Runtime Config Management → isSymmetricActivationStateModeEnabled
Listing nodes with per-node activation state | getnodeswithpernodeactivationstate | Node Housekeeping → getNodesWithPerNodeActivationState
Copying per-node activation state to another node | copypernodeactivationstate | Node Housekeeping → copyPerNodeActivationState
Removing per-node activation state | removepernodeactivationstate | Node Housekeeping → removePerNodeActivationState
About Activation State Modes
Rhino has two modes of operation for managing the activation state of services and resource adaptor entities: per-node and symmetric.
The activation state for the SLEE event-routing functions is always maintained on a per-node basis.
Per-node activation state
In the per-node activation state mode, Rhino maintains activation state for the installed services and created resource adaptor entities in a namespace on a per-node basis. That is, the SLEE records separate activation state information for each individual cluster node.
The per-node activation state mode is the default mode in a newly installed Rhino cluster.
Symmetric activation state
In the symmetric activation state mode, Rhino maintains a single cluster-wide activation state view for each installed service and created resource adaptor entity. So, for example, if a service is activated, then it is simultaneously activated on every cluster node. If a new node joins the cluster, then the services and resource adaptor entities on that node each enter the same activation state as for existing cluster nodes.
MBean operations that affect the activation state of services or resource adaptor entities and that accept a list of node IDs to apply the operation to (such as the Rhino-specific activate and deactivate extensions) can only be used in the per-node activation state mode. Similarly, MBean operations that inspect or manage per-node activation state (such as those described in Per-Node Activation State) apply only in the per-node activation state mode.
Symmetric Activation State
This section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples and links to related javadocs:
Procedure | rhino-console command | MBean → Operation
---|---|---
Enabling the symmetric activation state mode | enablesymmetricactivationstatemode | Runtime Config Management → enableSymmetricActivationStateMode
Disabling the symmetric activation state mode | disablesymmetricactivationstatemode | Runtime Config Management → disableSymmetricActivationStateMode
Getting the current activation state mode | getsymmetricactivationstatemode | Runtime Config Management → isSymmetricActivationStateModeEnabled
Enable Symmetric Activation State Mode
To switch from the per-node activation state mode to the symmetric activation state mode, use the following rhino-console command or related MBean operation.
Console command: enablesymmetricactivationstatemode
Command:

    enablesymmetricactivationstatemode <template-node-id>

Description: Enable symmetric activation state mode for services and resource adaptor entities across the cluster. Components across the cluster will assume the state of the specified template node.

Example:

    $ ./rhino-console enablesymmetricactivationstatemode 101
    Symmetric activation state mode enabled using node 101 as a template
MBean operation: enableSymmetricActivationStateMode
Rhino operation:

    public void enableSymmetricActivationStateMode(int templateNodeID)
        throws InvalidStateException, InvalidArgumentException, ConfigurationException;

This operation activates and/or deactivates services and resource adaptor entities on other cluster nodes such that their activation state matches that of the template node. If a service or resource adaptor entity is deactivated on a node where the SLEE operational state is RUNNING or STOPPING, that service or resource adaptor entity will enter the STOPPING state and be allowed to drain before it transitions to the INACTIVE state.
Disable Symmetric Activation State Mode
To switch from the symmetric activation state mode back to the per-node activation state mode, use the following rhino-console command or related MBean operation.
Console command: disablesymmetricactivationstatemode
Command:

    disablesymmetricactivationstatemode

Description: Disable symmetric activation state mode for services and resource adaptor entities across the cluster, such that components may have different activation states on different nodes.

Example:

    $ ./rhino-console disablesymmetricactivationstatemode
    Symmetric activation state mode disabled
MBean operation: disableSymmetricActivationStateMode
Rhino operation:

    public void disableSymmetricActivationStateMode()
        throws InvalidStateException, ConfigurationException;

This operation disables the symmetric activation state mode, thus restoring the per-node activation state mode. The existing activation states of services and resource adaptor entities are not affected by this operation; the per-node activation state of each service and resource adaptor entity is simply set to the current corresponding symmetric state.
Get the Current Activation State Mode
To determine if the symmetric activation state mode is enabled or disabled, use the following rhino-console command or related MBean operation.
If the symmetric activation state mode is disabled, then the per-node activation state mode consequently must be in force.
Console command: getsymmetricactivationstatemode
Command:

    getsymmetricactivationstatemode

Description: Display the current status of the symmetric activation state mode

Example:

    $ ./rhino-console getsymmetricactivationstatemode
    Symmetric activation state mode is currently enabled
MBean operation: isSymmetricActivationStateModeEnabled
Rhino operation:

    public boolean isSymmetricActivationStateModeEnabled()
        throws ConfigurationException;
Per-Node Activation State
This section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples, and links to related javadocs.
Procedure | rhino-console command | MBean → Operation
---|---|---
Listing nodes with per-node activation state | getnodeswithpernodeactivationstate | Node Housekeeping → getNodesWithPerNodeActivationState
Copying per-node activation state to another node | copypernodeactivationstate | Node Housekeeping → copyPerNodeActivationState
Removing per-node activation state | removepernodeactivationstate | Node Housekeeping → removePerNodeActivationState

See also Finding Housekeeping MBeans.
Listing Nodes with Per-Node Activation State
To get a list of nodes with per-node activation state, use the following rhino-console command or related MBean operation.
Console command: getnodeswithpernodeactivationstate
Command:

    getnodeswithpernodeactivationstate

Description: Get the set of nodes for which per-node activation state exists

Example:

    $ ./rhino-console getnodeswithpernodeactivationstate
    Nodes with per-node activation state: [101,102,103]
MBean operation: getNodesWithPerNodeActivationState
Rhino operation:

    public int[] getNodesWithPerNodeActivationState()
        throws ManagementException;

This operation returns an array listing the cluster node IDs of the nodes that have per-node activation state recorded in the database.
Copying Per-Node Activation State to Another Node
To copy per-node activation state from one node to another, use the following rhino-console command or related MBean operation.
Console command: copypernodeactivationstate
Command:

    copypernodeactivationstate <from-node-id> <to-node-id>

Description: Copy per-node activation state from one node to another

Example:

To copy the per-node activation state from node 101 to node 102:

    $ ./rhino-console copypernodeactivationstate 101 102
    Per-node activation state copied from 101 to 102
MBean operation: copyPerNodeActivationState
Rhino operation:

    public boolean copyPerNodeActivationState(int targetNodeID)
        throws UnsupportedOperationException, InvalidArgumentException,
               InvalidStateException, ManagementException;

This operation copies the per-node activation state of the node that this Node Housekeeping MBean is associated with to the specified target node.

The start-rhino.sh command in the Production version of Rhino also includes an option (-c) to copy per-node activation state from another node to the booting node as it initialises. (See Start Rhino in the Getting Started Guide.)
Removing Per-Node Activation State
To remove per-node activation state, use the following rhino-console command or related MBean operation.
Console command: removepernodeactivationstate
Command:

    removepernodeactivationstate <from-node-id>

Description: Remove per-node activation state from a node

Example:

To remove per-node activation state from node 103:

    $ ./rhino-console removepernodeactivationstate 103
    Per-node activation state removed from 103
MBean operation: removePerNodeActivationState
Rhino operation:

    public boolean removePerNodeActivationState()
        throws UnsupportedOperationException, InvalidStateException,
               ManagementException;

This operation removes the per-node activation state of the node that this Node Housekeeping MBean is associated with.

The start-rhino.sh command in the Production version of Rhino also includes an option (-d) to remove per-node activation state from the booting node as it initialises. (See Start Rhino in the Getting Started Guide.)
Startup and Shutdown Priority
Startup and shutdown priorities should be set when resource adaptors and services need to be activated or deactivated in a particular order as the SLEE starts or stops. For example, the resource adaptors responsible for writing Call Detail Records often need to be deactivated last.
Valid priorities are between -128 and 127. Startup and shutdown occur from highest to lowest priority.
Console commands
Console command: getraentitystartingpriority
Command:

    getraentitystartingpriority <entity-name>

Description: Get the starting priority for a resource adaptor entity

Example:

    ./rhino-console getraentitystartingpriority sipra
    Resource adaptor entity sipra activation priority is currently 0
Console command: getraentitystoppingpriority
Command:

    getraentitystoppingpriority <entity-name>

Description: Get the stopping priority for a resource adaptor entity

Example:

    ./rhino-console getraentitystoppingpriority sipra
    Resource adaptor entity sipra deactivation priority is currently 0
Console command: getservicestartingpriority
Command:

    getservicestartingpriority <service-id>

Description: Get the starting priority for a service

Example:

    ./rhino-console getservicestartingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1
    Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] activation priority is currently 0
Console command: getservicestoppingpriority
Command:

    getservicestoppingpriority <service-id>

Description: Get the stopping priority for a service

Example:

    ./rhino-console getservicestoppingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1
    Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] deactivation priority is currently 0
Console command: setraentitystartingpriority
Command:

    setraentitystartingpriority <entity-name> <priority>

Description: Set the starting priority for a resource adaptor entity. The priority must be between -128 and 127, and higher priority values have precedence over lower priority values.

Examples:

    ./rhino-console setraentitystartingpriority sipra 127
    Resource adaptor entity sipra activation priority set to 127
    ./rhino-console setraentitystartingpriority sipra -128
    Resource adaptor entity sipra activation priority set to -128
Console command: setraentitystoppingpriority
Command:

    setraentitystoppingpriority <entity-name> <priority>

Description: Set the stopping priority for a resource adaptor entity. The priority must be between -128 and 127, and higher priority values have precedence over lower priority values.

Examples:

    ./rhino-console setraentitystoppingpriority sipra 127
    Resource adaptor entity sipra deactivation priority set to 127
    ./rhino-console setraentitystoppingpriority sipra -128
    Resource adaptor entity sipra deactivation priority set to -128
Console command: setservicestartingpriority
Command:

    setservicestartingpriority <service-id> <priority>

Description: Set the starting priority for a service. The priority must be between -128 and 127, and higher priority values have precedence over lower priority values.

Examples:

    ./rhino-console setservicestartingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1 127
    Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] activation priority set to 127
    ./rhino-console setservicestartingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1 -128
    Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] activation priority set to -128
Console command: setservicestoppingpriority
Command:

    setservicestoppingpriority <service-id> <priority>

Description: Set the stopping priority for a service. The priority must be between -128 and 127, and higher priority values have precedence over lower priority values.

Examples:

    ./rhino-console setservicestoppingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1 127
    Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] deactivation priority set to 127
    ./rhino-console setservicestoppingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1 -128
    Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] deactivation priority set to -128
MBean operations
Services
Rhino extensions:

getStartingPriority

    byte getStartingPriority(ServiceID service)
        throws NullPointerException, UnrecognizedServiceException, ManagementException;

getStartingPriorities

    Byte[] getStartingPriorities(ServiceID[] services)
        throws NullPointerException, ManagementException;

getStoppingPriority

    byte getStoppingPriority(ServiceID service)
        throws NullPointerException, UnrecognizedServiceException, ManagementException;

getStoppingPriorities

    Byte[] getStoppingPriorities(ServiceID[] services)
        throws NullPointerException, ManagementException;

setStartingPriority

    void setStartingPriority(ServiceID service, byte priority)
        throws NullPointerException, UnrecognizedServiceException, ManagementException;

setStoppingPriority

    void setStoppingPriority(ServiceID service, byte priority)
        throws NullPointerException, UnrecognizedServiceException, ManagementException;
Resource Adaptors
Rhino extensions:

getStartingPriority

    byte getStartingPriority(String entityName)
        throws NullPointerException, UnrecognizedResourceAdaptorEntityException,
               ManagementException;

getStartingPriorities

    Byte[] getStartingPriorities(String[] entityNames)
        throws NullPointerException, ManagementException;

getStoppingPriority

    byte getStoppingPriority(String entityName)
        throws NullPointerException, UnrecognizedResourceAdaptorEntityException,
               ManagementException;

getStoppingPriorities

    Byte[] getStoppingPriorities(String[] entityNames)
        throws NullPointerException, ManagementException;

setStartingPriority

    void setStartingPriority(String entityName, byte priority)
        throws NullPointerException, UnrecognizedResourceAdaptorEntityException,
               ManagementException;

setStoppingPriority

    void setStoppingPriority(String entityName, byte priority)
        throws NullPointerException, UnrecognizedResourceAdaptorEntityException,
               ManagementException;
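As a sketch of driving the resource adaptor variants over JMX: the ObjectName below is hypothetical (the resource management MBean's real registered name is in the Rhino javadocs), and the signature string "byte" selects the primitive-byte parameter.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;

    public class PrioritySketch {
        static void setRaEntityPriorities(MBeanServerConnection connection,
                                          String entityName,
                                          byte starting, byte stopping)
                throws Exception {
            // Hypothetical ObjectName -- verify against the Rhino javadocs.
            ObjectName resourceMgmt =
                    new ObjectName("com.opencloud.rhino:type=ResourceManagement");
            connection.invoke(resourceMgmt, "setStartingPriority",
                    new Object[] {entityName, starting},
                    new String[] {String.class.getName(), byte.class.getName()});
            connection.invoke(resourceMgmt, "setStoppingPriority",
                    new Object[] {entityName, stopping},
                    new String[] {String.class.getName(), byte.class.getName()});
        }
    }

For example, setRaEntityPriorities(connection, "sipra", (byte) 127, (byte) -128) mirrors the console examples above; since startup and shutdown run from highest to lowest priority, a very low stopping priority makes an entity deactivate last.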
Deployable Units
As well as an overview of deployable units, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:
Procedure | rhino-console command(s) | MBean → Operation
---|---|---
Installing deployable units | install, installlocaldu | DeploymentMBean → install
Uninstalling deployable units | uninstall | DeploymentMBean → uninstall
Listing deployable units | listdeployableunits | DeploymentMBean → getDeployableUnits
About Deployable Units
Below are a definition, preconditions for installing and uninstalling, and an example of a deployable unit.
What is a deployable unit?
A deployable unit is a jar file that can be installed in the SLEE. It contains:

- a deployment descriptor
- constituent jar files, with Java class files and deployment descriptors for components such as:
  - SBBs
  - events
  - profile specifications
  - resource adaptor types
  - resource adaptors
  - libraries
- XML files for services.

The JAIN SLEE 1.1 specification defines the structure of a deployable unit.
Installing and uninstalling deployable units
You must install and uninstall deployable units in a particular order, according to the dependencies of the SLEE components they contain. You cannot install a deployable unit unless either it contains all of its dependencies, or they are already installed. For example, if your deployable unit contains an SBB which depends on a library jar, the library jar must either already be installed in the SLEE, or be included in that same deployable unit jar.
Pre-conditions
A deployable unit cannot be installed if any of the following is true:

- A deployable unit with the same URL has already been installed in the SLEE.
- The deployable unit contains a component with the same name, vendor, and version as a component of the same type that is already installed in the SLEE.
- The deployable unit contains a component that references other components that are not yet installed in the SLEE and are not included in the deployable unit jar. (For example, an SBB component may reference event-type components and profile-specification components that are not included or pre-installed.)

A deployable unit cannot be uninstalled if either of the following is true:

- There are dependencies on any of its components from components in other installed deployable units. For example, if a deployable unit contains an SBB jar that depends on a profile-specification jar contained in a second deployable unit, the deployable unit containing the profile-specification jar cannot be uninstalled while the deployable unit containing the SBB jar remains installed.
- There are "instances" of components contained in the deployable unit. For example, a deployable unit containing a resource adaptor cannot be uninstalled if the SLEE includes resource adaptor entities of that resource adaptor.
Deployable unit example
The following example illustrates the deployment descriptor for a deployable unit jar file:
    <deployable-unit>
        <description> ... </description>
        ...
        <jar>SomeProfileSpec.jar</jar>
        <jar>BarAddressProfileSpec.jar</jar>
        <jar>SomeCustomEvent.jar</jar>
        <jar>FooSBB.jar</jar>
        <jar>BarSBB.jar</jar>
        ...
        <service-xml>FooService.xml</service-xml>
        ...
    </deployable-unit>
The content of the deployable unit jar file is as follows:
    META-INF/deployable-unit.xml
    META-INF/MANIFEST.MF
    ...
    SomeProfileSpec.jar
    BarAddressProfileSpec.jar
    SomeCustomEvent.jar
    FooSBB.jar
    BarSBB.jar
    FooService.xml
    ...
Installing Deployable Units
To install a deployable unit, use the following rhino-console command or related MBean operation.
Console commands: install, installlocaldu

Installing from a URL:

    install <url> [-type <type>] [-installlevel <level>]

Description: Install a deployable unit jar or other artifact. To install something other than a deployable unit, the -type option must be specified. The -installlevel option controls to what degree the deployable artifact is installed.

Installing from a local file:

    installlocaldu <file url> [-type <type>] [-installlevel <level>] [-url url]

Description: Install a deployable unit or other artifact. If the management client is on a different host, this command reads the file and forwards its content to Rhino. To install something other than a deployable unit, the -type option must be specified. The -installlevel option controls to what degree the deployable artifact is installed. The -url option allows the deployable unit to be installed with an alternative URL identifier.

Examples:

To install a deployable unit from a given URL:

    $ ./rhino-console install file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar
    installed: DeployableUnitID[url=file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar]

To install a deployable unit from the local file system of the management client:

    $ ./rhino-console installlocaldu file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar
    installed: DeployableUnitID[url=file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar]
MBean operation: install
SLEE-defined: install a deployable unit from a given URL

    public DeployableUnitID install(String url)
        throws NullPointerException, MalformedURLException,
               AlreadyDeployedException, DeploymentException, ManagementException;

Installs the given deployable unit jar file into the SLEE. The given URL must be resolvable from the Rhino node.

Rhino extension: install a deployable unit from a given byte array

    public DeployableUnitID install(String url, byte[] content)
        throws NullPointerException, MalformedURLException,
               AlreadyDeployedException, DeploymentException, ManagementException;

Installs the given deployable unit jar file into the SLEE. The caller passes the actual file contents of the deployable unit in a byte array as a parameter to this method. The SLEE then installs the deployable unit as if it had been read from the URL.
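A sketch of the byte-array variant, useful when the DU jar lives on the management client's file system. The object name javax.slee.management:name=Deployment is the standard JAIN SLEE name for the DeploymentMBean (verify against the Rhino javadocs); the file path and URL identifier here are placeholders.

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;

    public class InstallDuSketch {
        static Object installLocal(MBeanServerConnection connection,
                                   String localPath, String urlIdentifier)
                throws Exception {
            byte[] content = Files.readAllBytes(Paths.get(localPath));
            // Standard SLEE-defined object name; confirm against the Rhino javadocs.
            ObjectName deployment = new ObjectName("javax.slee.management:name=Deployment");
            // "[B" (byte[].class.getName()) selects the Rhino install(String, byte[])
            // extension. Returns the DeployableUnitID (typed as Object here so the
            // sketch does not depend on the SLEE API jar).
            return connection.invoke(deployment, "install",
                    new Object[] {urlIdentifier, content},
                    new String[] {String.class.getName(), byte[].class.getName()});
        }
    }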
Uninstalling Deployable Units
To uninstall a deployable unit, use the following rhino-console command or related MBean operation.
A deployable unit cannot be uninstalled if it contains any components that any other deployable unit installed in the SLEE depends on.
Console command: uninstall
Command:

    uninstall <url>

Description: Uninstall a deployable unit jar

Example:

To uninstall a deployable unit that was installed with the given URL:

    $ ./rhino-console uninstall file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar
    uninstalled: DeployableUnitID[url=file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar]
Console command: cascadeuninstall
Command:

    cascadeuninstall <type> <url|component-id> [-force] [-s]

Description: Cascade uninstall a deployable unit or copied component. The optional -force argument prevents the command from prompting for confirmation before the uninstall occurs. The -s argument removes the shadow from a shadowed component and is not valid for deployable units.

Example:

To uninstall a deployable unit that was installed with the given URL, along with all deployable units that depend on it:

    $ ./rhino-console cascadeuninstall du file:du/ocsip-ra-2.3.1.17.du.jar
    Cascade removal of deployable unit file:du/ocsip-ra-2.3.1.17.du.jar requires the following operations to be performed:
      Deployable unit file:jars/sip-registrar-service.jar will be uninstalled
        SBB with SbbID[name=RegistrarSbb,vendor=OpenCloud,version=1.8] will be uninstalled
        Service with ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8] will be uninstalled
          This service will first be deactivated
      Deployable unit file:jars/sip-presence-service.jar will be uninstalled
        SBB with SbbID[name=EventStateCompositorSbb,vendor=OpenCloud,version=1.0] will be uninstalled
        SBB with SbbID[name=NotifySbb,vendor=OpenCloud,version=1.1] will be uninstalled
        SBB with SbbID[name=PublishSbb,vendor=OpenCloud,version=1.0] will be uninstalled
        Service with ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1] will be uninstalled
          This service will first be deactivated
        Service with ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] will be uninstalled
          This service will first be deactivated
        Service with ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0] will be uninstalled
          This service will first be deactivated
      Deployable unit file:jars/sip-proxy-service.jar will be uninstalled
        SBB with SbbID[name=ProxySbb,vendor=OpenCloud,version=1.8] will be uninstalled
        Service with ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8] will be uninstalled
          This service will first be deactivated
      Deployable unit file:du/ocsip-ra-2.3.1.17.du.jar will be uninstalled
        Resource adaptor with ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1] will be uninstalled
        Resource adaptor entity sipra will be removed
          This resource adaptor entity will first be deactivated
          Link name OCSIP bound to this resource adaptor entity will be removed
    Continue? (y/n): y
    Deactivating service ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8]
    Deactivating service ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1]
    Deactivating service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1]
    Deactivating service ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0]
    Deactivating service ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8]
    All necessary services are inactive
    Deactivating resource adaptor entity sipra
    All necessary resource adaptor entities are inactive
    Uninstalling deployable unit file:jars/sip-registrar-service.jar
    Uninstalling deployable unit file:jars/sip-presence-service.jar
    Uninstalling deployable unit file:jars/sip-proxy-service.jar
    Unbinding resource adaptor entity link name OCSIP
    Removing resource adaptor entity sipra
    Uninstalling deployable unit file:du/ocsip-ra-2.3.1.17.du.jar
MBean operation: uninstall
SLEE-defined:

    public void uninstall(DeployableUnitID id)
        throws NullPointerException, UnrecognizedDeployableUnitException,
               DependencyException, InvalidStateException, ManagementException;

Uninstalls the given deployable unit jar file (along with all the components it contains) from the SLEE.
Listing Deployable Units
To list the installed deployable units, use the following rhino-console command or related MBean operation.
Console command: listdeployableunits
Command:

    listdeployableunits

Description: List the currently installed deployable units

Example:

    $ ./rhino-console listdeployableunits
    DeployableUnitID[url=file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar]
    DeployableUnitID[url=file:/home/rhino/rhino/lib/javax-slee-standard-types.jar]
MBean operation: getDeployableUnits
SLEE-defined:

    public DeployableUnitID[] getDeployableUnits()
        throws ManagementException;

Returns the set of deployable unit identifiers that identify all the deployable units installed in the SLEE.
Services
As well as an overview of SLEE services, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:
Procedure | rhino-console command(s) | MBean → Operation(s)
---|---|---
Listing all available services | listservices | Deployment → getServices
Listing services by state | listservicesbystate | Service Management → getServices
Activating services | activateservice | Service Management → activate
Deactivating services | deactivateservice | Service Management → deactivate
Deactivating and reactivating services | deactivateandactivateservice | Service Management → deactivateAndActivate
Listing service resource adaptor entity links | listserviceralinks | Deployment → getServices
Listing SBBs | listsbbs | Deployment → getSbbs
Retrieving component descriptors | | Deployment → getDescriptors
Getting the service metrics recording setting | getservicemetricsrecordingenabled | ServiceManagementMBean → getServiceMetricsRecordingEnabled
Setting the service metrics recording setting | setservicemetricsrecordingenabled | ServiceManagementMBean → setServiceMetricsRecordingEnabled
About Services
The SLEE specification defines the operational lifecycle of a SLEE service, summarised below.
What are SLEE services?
Services are SLEE components that provide the application logic to act on input from resource adaptors.
Service lifecycle states
State | Definition
---|---
INACTIVE | The service has been installed successfully and is ready to be activated, but is not yet running. The SLEE will not create SBB entities of the service's root SBB to process events.
ACTIVE | The service is running. The SLEE will create SBB entities of the service's root SBB to process initial events, and will deliver events to SBB entities of the service's SBBs as appropriate.
STOPPING | The service is deactivating. Existing SBB entities of the service continue running and may complete their processing, but the SLEE will not create new SBB entities of the service's root SBB for new activities. Once the SLEE has reclaimed all of the service's SBB entities, the service transitions out of the STOPPING state and returns to the INACTIVE state.
Independent operational states
As explained in About SLEE Operational States, each event-router node in a Rhino cluster maintains its own lifecycle state machine, independent of other nodes in the cluster. This is also true for each service: one service might be INACTIVE on one node in a cluster, ACTIVE on another, and STOPPING on a third. The operational state of a service on each cluster node also persists to the disk-based database.
A service will enter the INACTIVE state, after node bootup and initialisation completes, if the database's persistent operational state information for that service is missing, or is set to INACTIVE or STOPPING.
And, like node operational states, you can change the operational state of a service at any time, as long as at least one node in the cluster is available to perform the management operation (regardless of whether or not the node whose operational state is being changed is a current cluster member). For example, you might activate a service on node 103 before node 103 is booted; then, when node 103 boots and completes initialisation, that service will transition to the ACTIVE state.
Configuring services
An administrator can configure a service before deployment by modifying its service-jar.xml deployment descriptor (in its deployable unit). This includes specifying:

- the address profile table to use when a subscribed address selects initial events for the service's root SBB
- the default event-router priority for the SLEE to give to root SBB entities of the service when processing initial events.

Individual SBBs used in a service can also have configurable properties, or environment entries. Values for these environment entries are defined in the sbb-jar.xml deployment descriptor included in the SBB's component jar. Administrators can set or adjust the values for each environment entry before the SBB is installed in the SLEE.

The SLEE only reads the configurable properties defined in a service or SBB deployment descriptor at deployment time. If you need to change the value of any of these properties, you'll need to:

- uninstall the related component (the service or SBB whose properties you want to configure) from the SLEE
- change the properties
- reinstall the component
- uninstall and reinstall other components (as needed) to satisfy dependency requirements enforced by the SLEE.
All Available Services
To list all available services installed in the SLEE, use the following rhino-console command or related MBean operation.
Console command: listservices
Command:

    listservices

Description: List the currently installed services

Example:

    $ ./rhino-console listservices
    ServiceID[name=SIP AC Location Service,vendor=OpenCloud,version=1.7]
    ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8]
    ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8]
MBean operation: getServices
SLEE-defined:

    public ServiceID[] getServices()
        throws ManagementException;

This operation returns an array of service component identifiers, identifying the services installed in the SLEE.

See also Services by State.
Services by State
To list the services in a specified activation state, use the following rhino-console command or related MBean operation.
Console command: listservicesbystate
Command:

    listservicesbystate <state> [-node node]

Description: List the services that are in the specified state (on the specified node)

Output: The activation state of a service is node-specific. If the -node argument is not specified, the command reports on every event-router node in the cluster.

Example:

To list the services in the ACTIVE state on node 102:

    $ ./rhino-console listservicesbystate Active -node 102
    Services in Active state on node 102:
    ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8]
    ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8]
MBean operation: getServices
SLEE-defined: get services on all nodes

    public ServiceID[] getServices(ServiceState state)
        throws NullPointerException, ManagementException;

Rhino's implementation of the SLEE-defined operation considers the state of services across all event-router nodes.

Rhino extension: get services on specific nodes

    public ServiceID[] getServices(ServiceState state, int nodeID)
        throws NullPointerException, InvalidArgumentException, ManagementException;

Rhino provides an extension that adds an argument letting you control the node on which to list services in a particular state (by specifying a node ID).

See also All Available Services.
Activating Services
To activate one or more services, use the following rhino-console command or related MBean operation.
Console command: activateservice
Command:

    activateservice <service-id>* [-nodes node1,node2,...] [-ifneeded]

Description: Activate a service (on the specified nodes)

Example:

To activate the Call Barring and Call Forwarding services on nodes 101 and 102:

    $ ./rhino-console activateservice \
        "name=Call Barring Service,vendor=OpenCloud,version=0.2" \
        "name=Call Forwarding Service,vendor=OpenCloud,version=0.2" \
        -nodes 101,102
    Activating services [ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2],
    ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.2]] on node(s) [101,102]
    Services transitioned to the Active state on node 101
    Services transitioned to the Active state on node 102
MBean operation: activate
MBean |
|
---|---|
SLEE-defined |
Activate on all nodes
public void activate(ServiceID id) throws NullPointerException, UnrecognizedServiceException, InvalidStateException, InvalidLinkNameBindingStateException, ManagementException; public void activate(ServiceID[] ids) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, InvalidLinkNameBindingStateException, ManagementException; Rhino’s implementation of the SLEE-defined activate operations activates the specified service(s) on all event-router nodes in the cluster. |
Rhino extension |
Activate on specific nodes
public void activate(ServiceID id, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, ManagementException; public void activate(ServiceID[] ids, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, ManagementException; Rhino provides an extension that adds an argument to let you control the nodes on which to activate the specified services (by specifying node IDs). For this to work, the specified services must be in the INACTIVE state on the specified nodes. |
A service may require resource adaptor entity link names to be bound to appropriate resource adaptor entities before it can be activated. (See Getting Link Bindings Required by a Service and Managing Resource Adaptor Entity Link Bindings.) |
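For Java management scripts, the SLEE-defined activate operation can be invoked through an MBeanServerConnection obtained as in the earlier sketch. A minimal sketch, assuming that connection; the ObjectName is again a placeholder:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.ServiceID;

public class ActivateService {
    static void activate(MBeanServerConnection conn) throws Exception {
        ObjectName serviceManagement = new ObjectName("javax.slee.management:name=ServiceManagement"); // placeholder
        ServiceID id = new ServiceID("Call Barring Service", "OpenCloud", "0.2");
        // SLEE-defined variant: activates the service on all nodes.
        conn.invoke(serviceManagement, "activate",
                new Object[] { id },
                new String[] { ServiceID.class.getName() });
    }
}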
Deactivating Services
To deactivate one or more services on one or more nodes, use the following rhino-console command or related MBean operation.
Console command: deactivateservice
Command |
deactivateservice <service-id>* [-nodes node1,node2,...] [-ifneeded] Description Deactivate a service (on the specified nodes) |
---|---|
Example |
To deactivate the Call Barring and Call Forwarding services on nodes 101 and 102: $ ./rhino-console deactivateservice \ "name=Call Barring Service,vendor=OpenCloud,version=0.2" \ "name=Call Forwarding Service,vendor=OpenCloud,version=0.2" \ -nodes 101,102 Deactivating services [ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2], ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.2]] on node(s) [101,102] Services transitioned to the Stopping state on node 101 Services transitioned to the Stopping state on node 102 |
MBean operation: deactivate
MBean |
|
---|---|
SLEE-defined |
Deactivate on all nodes
public void deactivate(ServiceID id) throws NullPointerException, UnrecognizedServiceException, InvalidStateException, ManagementException; public void deactivate(ServiceID[] ids) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, ManagementException; Rhino’s implementation of the SLEE-defined deactivate operations deactivates the specified service(s) on all event-router nodes in the cluster. |
Rhino extension |
Deactivate on specific nodes
public void deactivate(ServiceID id, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, ManagementException; public void deactivate(ServiceID[] ids, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, ManagementException; Rhino provides an extension that adds an argument that lets you control the nodes on which to deactivate the specified services (by specifying node IDs). For this to work, the specified services must be in the ACTIVE state on the specified nodes. |
Console command: waittilserviceisinactive
Command |
waittilserviceisinactive <service-id> [-timeout timeout] [-nodes node1,node2,...] Description Wait for a service to finish deactivating (on the specified nodes) (timing out after N seconds) |
---|---|
Example |
To wait for the Call Barring and Call Forwarding services to finish deactivating on nodes 101 and 102: $ ./rhino-console waittilserviceisinactive \ "name=Call Barring Service,vendor=OpenCloud,version=0.2" \ "name=Call Forwarding Service,vendor=OpenCloud,version=0.2" \ -nodes 101,102 Service ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2] is in the Inactive state on node(s) [101,102] Service ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.2] is in the Inactive state on node(s) [101,102] |
Upgrading (Activating & Deactivating) Services
To activate some services and deactivate others, use the following rhino-console command or related MBean operation.
Activating and deactivating in one operation
The SLEE specification defines the ability to deactivate some services and activate other services in a single operation. As one set of services deactivates, the existing activities being processed by those services continue to completion, while new activities (started after the operation is invoked) are processed by the activated services. The intended use of this is to upgrade a service or services with new versions (however the operation does not have to be used strictly for this purpose). |
Console command: deactivateandactivateservice
Command |
deactivateandactivateservice Deactivate <service-id>* Activate <service-id>* [-nodes node1,node2,...] Description Deactivate some services and Activate some other services (on the specified nodes) |
---|---|
Example |
To deactivate version 0.2 of the Call Barring and Call Forwarding services and activate version 0.3 of the same services on nodes 101 and 102: $ ./rhino-console deactivateandactivateservice \ Deactivate "name=Call Barring Service,vendor=OpenCloud,version=0.2" \ "name=Call Forwarding Service,vendor=OpenCloud,version=0.2" \ Activate "name=Call Barring Service,vendor=OpenCloud,version=0.3" \ "name=Call Forwarding Service,vendor=OpenCloud,version=0.3" \ -nodes 101,102 On node(s) [101,102]: Deactivating service(s) [ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2], ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.2]] Activating service(s) [ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.3], ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.3]] Deactivating service(s) transitioned to the Stopping state on node 101 Activating service(s) transitioned to the Active state on node 101 Deactivating service(s) transitioned to the Stopping state on node 102 Activating service(s) transitioned to the Active state on node 102 |
MBean operation: deactivateAndActivate
MBean |
|
---|---|
SLEE-defined |
Deactivate and activate on all nodes
public void deactivateAndActivate(ServiceID deactivateID, ServiceID activateID) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, InvalidLinkNameBindingStateException, ManagementException; public void deactivateAndActivate(ServiceID[] deactivateIDs, ServiceID[] activateIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, InvalidLinkNameBindingStateException, ManagementException; Rhino’s implementation of the SLEE-defined deactivateAndActivate operations deactivates and activates the specified services on all event-router nodes in the cluster. |
Rhino extension |
Deactivate and activate on specific nodes
public void deactivateAndActivate(ServiceID deactivateID, ServiceID activateID, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, ManagementException; public void deactivateAndActivate(ServiceID[] deactivateIDs, ServiceID[] activateIDs, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, ManagementException; Rhino provides an extension that adds an argument that lets you control the nodes on which to activate and deactivate services (by specifying node IDs). For this to work, the services to deactivate must be in the ACTIVE state, and the services to activate must be in the INACTIVE state, on the specified nodes. |
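A minimal Java sketch of the SLEE-defined single-service upgrade variant, assuming an MBeanServerConnection and a placeholder ServiceManagement ObjectName as in the earlier sketches:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.ServiceID;

public class UpgradeService {
    static void upgrade(MBeanServerConnection conn, ObjectName serviceManagement) throws Exception {
        ServiceID oldVersion = new ServiceID("Call Barring Service", "OpenCloud", "0.2");
        ServiceID newVersion = new ServiceID("Call Barring Service", "OpenCloud", "0.3");
        // Existing activities complete on the old version; new activities go to the new one.
        conn.invoke(serviceManagement, "deactivateAndActivate",
                new Object[] { oldVersion, newVersion },
                new String[] { ServiceID.class.getName(), ServiceID.class.getName() });
    }
}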
Getting Link Bindings Required by a Service
To find the resource adaptor entity link name bindings needed for a service, and list the service’s SBBs, use the following rhino-console commands or related MBean operations.
Console commands
listserviceralinks
Command |
listserviceralinks service-id Description List resource adaptor entity links required by a service |
---|---|
Example |
To list the resource adaptor entity links that the JCC VPN service needs: $ ./rhino-console listserviceralinks "name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0" In service ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0]: SBB SbbID[name=AnytimeInterrogation sbb,vendor=Open Cloud,version=1.0] requires entity link bindings: slee/resources/map SBB SbbID[name=JCC 1.1 VPN sbb,vendor=Open Cloud,version=1.0] requires entity link bindings: slee/resources/cdr |
listsbbs
Command |
listsbbs [service-id] Description List the current installed SBBs. If a service identifier is specified only the SBBs in the given service are listed |
---|---|
Example |
To list the SBBs in the JCC VPN service: $ ./rhino-console listsbbs "name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0" SbbID[name=AnytimeInterrogation sbb,vendor=Open Cloud,version=1.0] SbbID[name=JCC 1.1 VPN sbb,vendor=Open Cloud,version=1.0] SbbID[name=Proxy route sbb,vendor=Open Cloud,version=1.0] |
MBean operations: getServices
, getSbbs
, and getDescriptors
MBean |
|||
---|---|---|---|
SLEE-defined |
Get all services in the SLEE
public ServiceID[] getServices() throws ManagementException;
Get all SBBs in a service
public SbbID[] getSbbs(ServiceID service) throws NullPointerException, UnrecognizedServiceException, ManagementException;
Get the component descriptor for a component
public ComponentDescriptor[] getDescriptors(ComponentID[] ids) throws NullPointerException, ManagementException;
|
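The three operations above can be chained to walk from services to their SBBs and component descriptors. A sketch, assuming an existing MBeanServerConnection and passing in the (placeholder) ObjectName of the MBean that exposes these operations:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.ComponentID;
import javax.slee.SbbID;
import javax.slee.ServiceID;
import javax.slee.management.ComponentDescriptor;

public class ListServiceSbbs {
    static void listSbbs(MBeanServerConnection conn, ObjectName mbean) throws Exception {
        ServiceID[] services = (ServiceID[]) conn.invoke(mbean, "getServices", null, null);
        for (ServiceID service : services) {
            SbbID[] sbbs = (SbbID[]) conn.invoke(mbean, "getSbbs",
                    new Object[] { service }, new String[] { ServiceID.class.getName() });
            // An SbbID[] is assignable to the declared ComponentID[] parameter.
            ComponentDescriptor[] descriptors = (ComponentDescriptor[]) conn.invoke(mbean, "getDescriptors",
                    new Object[] { sbbs }, new String[] { ComponentID[].class.getName() });
            for (ComponentDescriptor descriptor : descriptors) {
                System.out.println(service + " -> " + descriptor.getID());
            }
        }
    }
}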
Configuring service metrics recording status
To check and configure the status for recording service metrics, use the following rhino-console commands or related MBean operations.
Details of the recorded statistics are listed in Metrics.Services.cmp and Metrics.Services.lifecycle.
Metrics recording is disabled by default for performance reasons. |
Console commands
getservicemetricsrecordingenabled
Command |
getservicemetricsrecordingenabled <service-id> Description Determine if metrics recording for a service has been enabled |
---|---|
Example |
To check the status for recording metrics: $ ./rhino-console getservicemetricsrecordingenabled name=service1,vendor=OpenCloud,version=1.0 Metrics recording for ServiceID[name=service1,vendor=OpenCloud,version=1.0] is currently disabled |
setservicemetricsrecordingenabled
Command |
setservicemetricsrecordingenabled <service-id> <true|false> Description Enable or disable the recording of metrics for a service |
---|---|
Example |
To enable the recording of metrics: $ ./rhino-console setservicemetricsrecordingenabled name=service1,vendor=OpenCloud,version=1.0 true Metrics recording for ServiceID[name=service1,vendor=OpenCloud,version=1.0] has been enabled |
MBean operations: getServiceMetricsRecordingEnabled
and setServiceMetricsRecordingEnabled
MBean |
|
---|---|
Rhino extension |
Determine if the recording of metrics for a service is currently enabled or disabled.
public boolean getServiceMetricsRecordingEnabled(ServiceID service) throws NullPointerException, UnrecognizedServiceException, ManagementException;
Enable or disable the recording of metrics for a service.
public void setServiceMetricsRecordingEnabled(ServiceID service, boolean enabled) throws NullPointerException, UnrecognizedServiceException, ManagementException; |
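Because these are Rhino extension operations, the MBean on which they live is Rhino-specific; the sketch below assumes an existing MBeanServerConnection and takes the ObjectName as a parameter rather than guessing it:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.ServiceID;

public class EnableServiceMetrics {
    static void enable(MBeanServerConnection conn, ObjectName serviceManagement) throws Exception {
        ServiceID id = new ServiceID("service1", "OpenCloud", "1.0");
        Boolean enabled = (Boolean) conn.invoke(serviceManagement, "getServiceMetricsRecordingEnabled",
                new Object[] { id }, new String[] { ServiceID.class.getName() });
        if (!enabled) {
            // Second parameter is the primitive boolean 'enabled' flag.
            conn.invoke(serviceManagement, "setServiceMetricsRecordingEnabled",
                    new Object[] { id, Boolean.TRUE },
                    new String[] { ServiceID.class.getName(), boolean.class.getName() });
        }
    }
}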
Resource Adaptor Entities
As well as an overview of resource adaptor entities, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:
Procedure | rhino-console command | MBean → Operation |
---|---|---|
listraconfigproperties |
Resource Management → getConfigurationProperties |
|
createraentity |
Resource Management → createResourceAdaptorEntity |
|
removeraentity |
Resource Management → removeResourceAdaptorEntity |
|
listraentityconfigproperties |
Resource Management → getConfigurationProperties |
|
updateraentityconfigproperties |
Resource Management → updateConfigurationProperties |
|
activateraentity |
Resource Management → activateResourceAdaptorEntity |
|
deactivateraentity |
Resource Management → deactivateResourceAdaptorEntity |
|
reassignactivities |
Resource Management → reassignActivities |
|
getraentitystate |
Resource Management → getState |
|
listraentitiesbystate |
Resource Management → getResourceAdaptorEntities |
|
bindralinkname |
Resource Management → bindLinkName |
|
unbindralinkname |
Resource Management → unbindLinkName |
|
listralinknames |
Resource Management → getLinkNames |
About Resource Adaptor Entities
Resource adaptors (RAs) are SLEE components which let particular network protocols or APIs be used in the SLEE.
They typically include a set of configurable properties (such as address information of network endpoints, URLs to external systems, or internal timer-timeout values). These properties may include default values. A resource adaptor entity is a particular configured instance of a resource adaptor, with defined values for all of that RA’s configuration properties.
The resource adaptor entity lifecycle
The SLEE specification presents the operational lifecycle of a resource adaptor entity — illustrated, defined, and summarised below.
Resource adaptor entity lifecycle states
The SLEE lifecycle states are:
State | Definition |
---|---|
INACTIVE |
The resource adaptor entity has been configured and initialised. It is ready to be activated, but may not yet create activities or fire events to the SLEE. Typically, it is not connected to network resources. |
ACTIVE |
The resource adaptor entity is connected to the resources it needs to function (assuming they are available), and may create activities and fire events to the SLEE. |
STOPPING |
The resource adaptor entity may not create new activities in the SLEE, but may fire events to the SLEE on already existing activities. A resource adaptor entity transitions out of the STOPPING state, returning to the INACTIVE state, when all activities it owns have either ended or been assigned to another node for continued processing. |
Creating activities in the STOPPING state
By default, Rhino 2.6 prevents a resource adaptor from creating an activity while the resource adaptor entity is in the STOPPING state. This behaviour is controlled by a Rhino configuration property: when set to its permissive value, a resource adaptor entity may continue to create new activities while STOPPING. The default in earlier versions of Rhino permitted this behaviour. |
Independent lifecycle state machines
As explained in About SLEE Operational States, each event-router node in a Rhino cluster maintains its own lifecycle state machine, independent of other nodes in the cluster. This is also true for each resource adaptor entity: one resource adaptor entity might be INACTIVE on one node in a cluster, ACTIVE on another, and STOPPING on a third. The operational state of a resource adaptor entity on each cluster node also persists to the disk-based database.
A resource adaptor entity will enter the INACTIVE state, after node bootup and initialisation completes, if the database’s persistent operational state information for that resource adaptor entity is missing, or is set to INACTIVE or STOPPING.
And, like node operational states, you can change the operational state of a resource adaptor entity at any time, as long as at least one node in the cluster is available to perform the management operation (regardless of whether or not the node whose operational state is being changed is a current cluster member). For example, you might activate a resource adaptor entity on node 103 before node 103 is booted — then, when node 103 boots, and after it completes initialisation, that resource adaptor entity will transition to the ACTIVE state.
Finding RA Configuration Properties
To determine resource adaptor configuration properties (which you need to know when Creating a Resource Adaptor Entity) use the following rhino-console command or related MBean operation.
Console command: listraconfigproperties
Command |
listraconfigproperties <resource-adaptor-id> Description List the configuration properties (and any default values) for a resource adaptor |
---|---|
Example |
To list the configuration properties of the OpenCloud SIP Resource Adaptor: $ ./rhino-console listraconfigproperties name=OCSIP,vendor=OpenCloud,version=2.1 Configuration properties for resource adaptor name=OCSIP,vendor=OpenCloud,version=2.1: Automatic100TryingSupport (java.lang.Boolean): true CRLLoadFailureRetryTimeout (java.lang.Integer): 900 CRLNoCRLLoadFailureRetryTimeout (java.lang.Integer): 60 CRLRefreshTimeout (java.lang.Integer): 86400 CRLURL (java.lang.String): ... |
MBean operation: getConfigurationProperties
MBean |
|
---|---|
SLEE-defined |
public ConfigProperties getConfigurationProperties(ResourceAdaptorID id) throws NullPointerException, UnrecognizedResourceAdaptorException, ManagementException |
Output |
This operation returns a ConfigProperties object containing the configuration properties defined by the resource adaptor, including any default values. |
Creating a Resource Adaptor Entity
To create a resource adaptor entity use the following rhino-console command or related MBean operation.
Console command: createraentity
Command |
createraentity <resource-adaptor-id> <entity-name> [<config-params>|(<property-name> <property-value>)*] Description Create a resource adaptor entity with the given name. Optionally configuration properties can be specified, either as a single comma-separated string of name=value pairs, or as a series of separate name and value argument pairs |
||
---|---|---|---|
Example |
To create an instance of the OpenCloud SIP resource adaptor, called sipra: $ ./rhino-console createraentity name=OCSIP,vendor=OpenCloud,version=2.1 sipra \ IPAddress=192.168.0.100,Port=5160,SecurePort=5161 Created resource adaptor entity sipra |
||
Notes |
Entering configuration properties
When creating a resource adaptor entity, determine its configuration properties and then enter them in the command, either as a single comma-separated string of name=value pairs or as a series of separate name and value argument pairs.
|
MBean operation: createResourceAdaptorEntity
MBean |
|||||
---|---|---|---|---|---|
SLEE-defined |
public void createResourceAdaptorEntity(ResourceAdaptorID id, String entityName, ConfigProperties properties) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorException, ResourceAdaptorEntityAlreadyExistsException, InvalidConfigurationException, ManagementException; |
||||
Arguments |
This operation requires that you specify: the component identifier of the resource adaptor, the name to give the new resource adaptor entity, and its configuration properties.
|
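A minimal Java sketch of this operation, assuming an existing MBeanServerConnection and a placeholder ResourceManagement ObjectName (substitute the real one for your installation); the property names and values mirror the console example above:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.resource.ConfigProperties;
import javax.slee.resource.ResourceAdaptorID;

public class CreateSipRaEntity {
    static void create(MBeanServerConnection conn, ObjectName resourceManagement) throws Exception {
        ResourceAdaptorID raId = new ResourceAdaptorID("OCSIP", "OpenCloud", "2.1");
        // Property values are typed objects matching each property's declared Java type.
        ConfigProperties properties = new ConfigProperties();
        properties.addProperty(new ConfigProperties.Property("IPAddress", "java.lang.String", "192.168.0.100"));
        properties.addProperty(new ConfigProperties.Property("Port", "java.lang.Integer", Integer.valueOf(5160)));
        conn.invoke(resourceManagement, "createResourceAdaptorEntity",
                new Object[] { raId, "sipra", properties },
                new String[] { ResourceAdaptorID.class.getName(), String.class.getName(),
                               ConfigProperties.class.getName() });
    }
}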
Removing a Resource Adaptor Entity
To remove a resource adaptor entity use the following rhino-console command or related MBean operation.
You can only remove a resource adaptor entity from the SLEE when it is in the INACTIVE state on all event-router nodes currently in the primary component. |
Console command: removeraentity
Command |
removeraentity <entity-name> Description Remove a resource adaptor entity |
---|---|
Example |
To remove the resource adaptor entity named sipra: $ ./rhino-console removeraentity sipra Removed resource adaptor entity sipra |
MBean operation: removeResourceAdaptorEntity
MBean |
|
---|---|
SLEE-defined |
public void removeResourceAdaptorEntity(String entityName) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, DependencyException, ManagementException; |
Listing configuration properties for a Resource Adaptor Entity
To list the configuration properties for a resource adaptor entity use the following rhino-console command or related MBean operation.
Console command: listraentityconfigproperties
Command |
listraentityconfigproperties <entity-name> Description List the configuration property values for a resource adaptor entity |
---|---|
Example |
To list the configuration properties of the resource adaptor entity called sipra: $ ./rhino-console listraentityconfigproperties sipra Configuration properties for resource adaptor entity sipra: Automatic100TryingSupport (java.lang.Boolean): true AutomaticOptionsResponses (java.lang.Boolean): true CRLLoadFailureRetryTimeout (java.lang.Integer): 900 CRLNoCRLLoadFailureRetryTimeout (java.lang.Integer): 60 CRLRefreshTimeout (java.lang.Integer): 86400 CRLURL (java.lang.String): ClientAuthentication (java.lang.String): NEED EnableDialogActivityTests (java.lang.Boolean): false EnabledCipherSuites (java.lang.String): ExtensionMethods (java.lang.String): IPAddress (java.lang.String): AUTO Keystore (java.lang.String): sip-ra-ssl.keystore KeystorePassword (java.lang.String): KeystoreType (java.lang.String): jks MaxContentLength (java.lang.Integer): 131072 OffsetPorts (java.lang.Boolean): false Port (java.lang.Integer): 5060 PortOffset (java.lang.Integer): 101 ReplicatedDialogSupport (java.lang.Boolean): false RetryAfterInterval (java.lang.Integer): 5 SecurePort (java.lang.Integer): 5061 TCPIOThreads (java.lang.Integer): 1 Transports (java.lang.String): udp,tcp Truststore (java.lang.String): sip-ra-ssl.truststore TruststorePassword (java.lang.String): TruststoreType (java.lang.String): jks UseVirtualAddressInURIs (java.lang.Boolean): true ViaSentByAddress (java.lang.String): VirtualAddresses (java.lang.String): WorkerPoolSize (java.lang.Integer): 4 WorkerQueueSize (java.lang.Integer): 50 slee-vendor:com.opencloud.rhino_max_activities (java.lang.Integer): 0 slee-vendor:com.opencloud.rhino_replicate_activities (java.lang.String): mixed |
MBean operation: getConfigurationProperties
MBean |
|
---|---|
SLEE-defined |
public ConfigProperties getConfigurationProperties(String entityName) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException; |
Output |
This operation returns a ConfigProperties object containing the current configuration property values of the resource adaptor entity. |
Updating configuration properties for a Resource Adaptor Entity
To update configuration properties for a resource adaptor entity use the following rhino-console command or related MBean operation.
When is it appropriate to update configuration properties?
A resource adaptor may elect to support reconfiguration while its resource adaptor entities are active, using the supports-active-reconfiguration attribute in its deployment descriptor. If the value of this attribute is True, the configuration properties of its resource adaptor entities may be updated at any time. If the value is False, configuration properties may only be updated while a resource adaptor entity is in the INACTIVE state. |
Console command: updateraentityconfigproperties
Command |
updateraentityconfigproperties <entity-name> [<config-params>|(<property-name> <property-value>)*] Description Update configuration properties for a resource adaptor entity. Properties can be specified either as a single comma-separated string of name=value pairs or as a series of separate name and value argument pairs |
---|---|
Example |
To update the Port and SecurePort configuration properties of the resource adaptor entity sipra: $ ./rhino-console updateraentityconfigproperties sipra Port 5061 SecurePort 5062 Updated configuration parameters for resource adaptor entity sipra |
MBean operation: updateConfigurationProperties
MBean |
|
---|---|
SLEE-defined |
public void updateConfigurationProperties(String entityName, ConfigProperties properties) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, InvalidConfigurationException, ManagementException; |
Input |
This operation requires a ConfigProperties object containing the configuration properties to update. |
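A short Java sketch of this operation, under the same assumptions as the creation sketch above (existing MBeanServerConnection, placeholder ResourceManagement ObjectName), supplying the properties to change as in the console example:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.resource.ConfigProperties;

public class UpdateSipRaEntity {
    static void update(MBeanServerConnection conn, ObjectName resourceManagement) throws Exception {
        ConfigProperties changes = new ConfigProperties();
        changes.addProperty(new ConfigProperties.Property("Port", "java.lang.Integer", Integer.valueOf(5061)));
        changes.addProperty(new ConfigProperties.Property("SecurePort", "java.lang.Integer", Integer.valueOf(5062)));
        conn.invoke(resourceManagement, "updateConfigurationProperties",
                new Object[] { "sipra", changes },
                new String[] { String.class.getName(), ConfigProperties.class.getName() });
    }
}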
Activating a Resource Adaptor Entity
To activate a resource adaptor entity on one or more nodes use the following rhino-console command or related MBean operation.
Console command: activateraentity
Command |
activateraentity <entity-name> [-nodes node1,node2,...] [-ifneeded] Description Activate a resource adaptor entity (on the specified nodes) |
---|---|
Example |
To activate the resource adaptor entity called sipra on nodes 101 and 102: $ ./rhino-console activateraentity sipra -nodes 101,102 Activating resource adaptor entity sipra on node(s) [101,102] Resource adaptor entity transitioned to the Active state on node 101 Resource adaptor entity transitioned to the Active state on node 102 |
MBean operation: activateResourceAdaptorEntity
MBean |
|
---|---|
SLEE-defined |
Activate on all nodes
public void activateResourceAdaptorEntity(String entityName) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, ManagementException; Rhino’s implementation of the SLEE-defined activateResourceAdaptorEntity operation activates the resource adaptor entity on all event-router nodes in the cluster. |
Rhino extension |
Activate on specific nodes
public void activateResourceAdaptorEntity(String entityName, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, ManagementException; Rhino provides an extension that adds an argument that lets you control the nodes on which to activate the resource adaptor entity (by specifying node IDs). For this to work, the resource adaptor entity must be in the INACTIVE state on the specified nodes. |
Deactivating a Resource Adaptor Entity
To deactivate a resource adaptor entity on one or more nodes use the following rhino-console command or related MBean operation.
See also Reassigning a Resource Adaptor Entity’s Activities to Other Nodes, particularly the Requirements tab. |
Console command: deactivateraentity
Command |
deactivateraentity <entity-name> [-nodes node1,node2,... [-reassignto node3,node4,...]] [-ifneeded] Description Deactivate a resource adaptor entity (on the specified nodes (reassigning replicated activities to the specified nodes)) |
---|---|
Examples |
To deactivate the resource adaptor entity named sipra on nodes 101 and 102: $ ./rhino-console deactivateraentity sipra -nodes 101,102 Deactivating resource adaptor entity sipra on node(s) [101,102] Resource adaptor entity transitioned to the Stopping state on node 101 Resource adaptor entity transitioned to the Stopping state on node 102 To deactivate the resource adaptor entity named sipra on node 101, reassigning its replicated activities to node 102: $ ./rhino-console deactivateraentity sipra -nodes 101 -reassignto 102 Deactivating resource adaptor entity sipra on node(s) [101] Resource adaptor entity transitioned to the Stopping state on node 101 Replicated activities reassigned to node(s) [102] To deactivate the resource adaptor entity named sipra on node 101, reassigning its replicated activities to all other eligible nodes: $ ./rhino-console deactivateraentity sipra -nodes 101 -reassignto "" Deactivating resource adaptor entity sipra on node(s) [101] Resource adaptor entity transitioned to the Stopping state on node 101 Replicated activities reassigned to node(s) [102,103] |
MBean operation: deactivateResourceAdaptorEntity
MBean |
|
---|---|
SLEE-defined |
Deactivate on all nodes
public void deactivateResourceAdaptorEntity(String entityName) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, ManagementException; Rhino’s implementation of the SLEE-defined deactivateResourceAdaptorEntity operation deactivates the resource adaptor entity on all event-router nodes in the cluster. |
Rhino extensions |
Deactivate on specific nodes
public void deactivateResourceAdaptorEntity(String entityName, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, ManagementException; Rhino provides an extension that adds an argument that lets you control the nodes on which to deactivate the resource adaptor entity (by specifying node IDs). For this to work, the resource adaptor entity must be in the ACTIVE state on the specified nodes.
Reassign deactivating activities to other nodes
public void deactivateResourceAdaptorEntity(String entityName, int[] nodeIDs, int[] reassignActivitiesToNodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, ManagementException; Rhino also provides an extension that adds another argument that lets you reassign ownership of replicated activities (from a replicating resource adaptor entity), distributing them equally among other available event-router nodes. This reduces the set of activities on the nodes with the deactivating resource adaptor entity, so the resource adaptor entity can return to the INACTIVE state on those nodes more quickly. This only works for resource adaptor entities that replicate activity state (see the Rhino-defined configuration property described under Creating a Resource Adaptor Entity). |
Reassigning a Resource Adaptor Entity’s Activities to Other Nodes
To reassign activities from a resource adaptor entity to a different node, use the following rhino-console command or related MBean operation, noting the requirements.
Why reassign replicating activities?
A resource adaptor entity in the STOPPING state cannot return to the INACTIVE state until all the activities that it owns have ended. You can let a deactivating resource adaptor entity return to the INACTIVE state more quickly by reassigning its replicated activities to other eligible nodes. |
Console command: reassignactivities
Command |
reassignactivities <entity-name> -from node1,node2,... -to node3,node4,... Description Reassign replicated activities of a resource adaptor entity from the specified nodes to other nodes |
||
---|---|---|---|
Examples |
To reassign activities owned by the resource adaptor entity named sipra from node 101 to nodes 102 and 103: $ ./rhino-console reassignactivities sipra -from 101 -to 102,103 Replicated activities for sipra reassigned to node(s) [102,103]
To reassign activities owned by the resource adaptor entity named sipra from node 101 to all other eligible nodes: $ ./rhino-console reassignactivities sipra -from 101 -to "" Replicated activities for sipra reassigned to node(s) [102,103] |
MBean operation: reassignActivities
MBean |
|
---|---|
Rhino extension |
public void reassignActivities(String entityName, int[] fromNodeIDs, int[] toNodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, ManagementException; This operation reassigns replicated activities owned by the named resource adaptor entity, on the nodes specified, using Rhino’s standard failover algorithm, to the nodes specified by the toNodeIDs argument. Passing a zero-length array for toNodeIDs distributes the activities among all other eligible nodes. |
Requirements for reassigning activities
You can only reassign replicated activities from a resource adaptor entity to other nodes if all the following conditions are satisfied:
-
The node is a current member of the primary component.
-
The node is an event-router node (not a quorum node).
-
The operational state of the SLEE on the node is RUNNING or STOPPING.
-
The operational state of the resource adaptor entity on the node is ACTIVE or STOPPING.
Further, a node can only take ownership of replicated activities if it satisfies all the following conditions:
-
The node is a current member of the primary component.
-
The node is an event-router node (not a quorum node).
-
The operational state of the SLEE on the node is RUNNING.
-
The operational state of the resource adaptor entity on the node is ACTIVE.
Also, non-replicated activities cannot be reassigned to other nodes, and a resource adaptor entity must end any non-replicated activities it created itself.
You can choose to forcefully remove activities if a resource adaptor entity fails to end them in a timely manner. |
Retrieving a Resource Adaptor Entity’s State
To retrieve the operational state of a resource adaptor entity, use the following rhino-console command or related MBean operation.
Console command: getraentitystate
Command |
getraentitystate <entity-name> [-nodes node1,node2,...] Description Get the state of a resource adaptor entity (on the specified nodes) |
---|---|
Output |
The state of the resource adaptor entity is reported for every event-router node in the cluster, unless the -nodes argument restricts the output to specific nodes. |
Examples |
To display the state of the resource adaptor entity with the name sipra on all nodes: $ ./rhino-console getraentitystate sipra Resource adaptor entity is Inactive on node 101 Resource adaptor entity is Active on node 102 To display the state of the resource adaptor entity on only node 101: $ ./rhino-console getraentitystate sipra -nodes 101 Resource adaptor entity is Inactive on node 101 |
MBean operation: getState
MBean |
|
---|---|
SLEE-defined |
Return state of resource adaptor entity on current node
public ResourceAdaptorEntityState getState(String entityName) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException; Rhino’s implementation of the SLEE-defined getState operation returns the state of the resource adaptor entity on the node the management client is connected to. |
Rhino extension |
Return state of resource adaptor entity on specified node(s)
public ResourceAdaptorEntityState[] getState(String entityName, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, ManagementException; Rhino provides an extension that adds an argument which lets you control the nodes on which to return the state of the resource adaptor entity (by specifying node IDs). |
Listing Resource Adaptor Entities by State
To list resource adaptor entities in a particular operational state, use the following rhino-console command or related MBean operation.
Console command: listraentitiesbystate
Command |
listraentitiesbystate <state> [-node node] Description List the resource adaptor entities that are in the specified state (on the specified node) |
---|---|
Examples |
To list the resource adaptor entities that are active on the node where the management client is connected (here, node 101): $ ./rhino-console listraentitiesbystate Active No resource adaptor entities in Active state on node 101 To list the resource adaptor entities that are active on node 102: $ ./rhino-console listraentitiesbystate Active -node 102 Resource adaptor entities in Active state on node 102: sipra |
MBean operation: getResourceAdaptorEntities
MBean |
|
---|---|
SLEE-defined |
Return names of resource adaptor entities in specified state on current node
public String[] getResourceAdaptorEntities(ResourceAdaptorEntityState state) throws NullPointerException, ManagementException; Rhino’s implementation of the SLEE-defined getResourceAdaptorEntities operation returns the names of the resource adaptor entities in the specified state on the node the management client is connected to. |
Rhino extension |
Return names of resource adaptor entities in specified state on specified node
public String[] getResourceAdaptorEntities(ResourceAdaptorEntityState state, int nodeID) throws NullPointerException, InvalidArgumentException, ManagementException; Rhino provides an extension that lets you specify the nodes (by specifying node IDs) on which to return the names of resource adaptor entities in the specified state. |
Managing Resource Adaptor Entity Link Bindings
What are resource adaptor entity link name bindings?
When an SBB needs access to a resource adaptor entity, it uses JNDI to get references to Java objects that implement the resource adaptor interface (provided by the resource adaptor entity). The SBB declares (in its deployment descriptor) the resource adaptor type it expects, and an arbitrary link name. Before activating a service using the SBB, an administrator must bind a resource adaptor entity (of the type expected) to the specified link name. |
Rhino includes procedures for:
Binding a Resource Adaptor Entity to a Link Name
To bind a resource adaptor entity to a link name, use the following rhino-console command or related MBean operation.
Only one resource adaptor entity can be bound to a link name at any time. |
Console command: bindralinkname
Command |
bindralinkname <entity-name> <link-name> Description Bind a resource adaptor entity to a link name |
---|---|
Example |
To bind the resource adaptor entity with the name sipra to the link name sip: $ ./rhino-console bindralinkname sipra sip Bound sipra to link name sip |
MBean operation: bindLinkName
MBean |
|
---|---|
SLEE-defined |
public void bindLinkName(String entityName, String linkName) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, LinkNameAlreadyBoundException, ManagementException; |
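A minimal Java sketch of this operation, again assuming an existing MBeanServerConnection and a placeholder ResourceManagement ObjectName, mirroring the console example:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class BindSipLink {
    static void bind(MBeanServerConnection conn, ObjectName resourceManagement) throws Exception {
        // Bind entity "sipra" to the link name "sip".
        conn.invoke(resourceManagement, "bindLinkName",
                new Object[] { "sipra", "sip" },
                new String[] { String.class.getName(), String.class.getName() });
    }
}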
Unbinding Link Names
To unbind a resource adaptor entity from a link name, use the following rhino-console command or related MBean operation.
Console command: unbindralinkname
Command |
unbindralinkname <link-name> Description Unbind a resource adaptor entity from a link name |
---|---|
Example |
To unbind the link name sip: $ ./rhino-console unbindralinkname sip Unbound link name sip |
MBean operation: unbindLinkName
MBean |
|
---|---|
SLEE-defined |
public void unbindLinkName(String linkName) throws NullPointerException, UnrecognizedLinkNameException, DependencyException, ManagementException; |
Listing Link Name Bindings
To list resource adaptor entity link names that have been bound in the SLEE, use the following rhino-console command or related MBean operation.
Console command: listralinknames
Command |
listralinknames [entity name] Description List the bound link names (for the specified resource adaptor entity) |
---|---|
Examples |
To list all resource adaptor entity link name bindings: $ ./rhino-console listralinknames slee/resources/cdr -> cdrra slee/resources/map -> mapra To list all link name bindings for the resource adaptor entity named mapra: $ ./rhino-console listralinknames mapra slee/resources/map |
MBean operation: getLinkNames
MBean |
|
---|---|
SLEE-defined |
List all bound link names
public String[] getLinkNames() throws ManagementException; Rhino’s implementation of the SLEE-defined getLinkNames operation returns an array of all link names currently bound in the SLEE.
List link names to which a specific resource adaptor entity has been bound
public String[] getLinkNames(String entityName) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException; The SLEE-defined operation also includes an argument for returning just link names to which a specified resource adaptor entity has been bound. If the resource adaptor entity has not been bound to any link names, the returned array is zero-length. |
Profile Tables and Profiles
As well as an overview of SLEE profiles, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:
Procedure | rhino-console command(s) | MBean(s) → Operation |
---|---|---|
createprofiletable |
Profile Provisioning → createProfileTable |
|
createprofile |
Profile Provisioning → createProfile |
|
listprofiletables |
Profile Provisioning → getProfileTables |
|
listprofiles |
Profile Provisioning → getProfiles |
|
listprofileattributes |
Profile Provisioning, Profile → getProfile |
|
setprofileattributes |
Profile Provisioning, Profile → getProfile |
|
listprofilesbyattribute |
Profile Provisioning → getProfilesByAttribute |
|
exportall |
Profile Provisioning → exportProfiles |
See also the New SLEE 1.1 Profile Features How-to Guide on the OpenCloud Developer Portal. |
About Profiles
What are profiles? profile tables? profile specifications?
A profile is an entry in a profile table. It has a name, may have values (called "attributes") and may have indexed fields. It’s like a row in SQL, but may also include business and management logic. A profile table is a "container" for profiles. Its specification schema, the profile specification deployment descriptor, may define queries for the profile table. The SLEE specification defines the format and structure of profile specification schemas. A profile table’s default profile is the initial set of profile attribute values for newly created profiles within that table (if not specified explicitly with the profile-creation command). |
Before deploying a profile specification into the SLEE, an administrator can configure it by modifying values in its profile-spec-jar.xml
deployment descriptor (in its deployable unit). For example, you can specify:
-
static queries available to SLEE components, and administrators using the management interface
-
profile specification environment entries
-
indexing hints for profile attributes.
For more on profile static queries, environment entries and indexing, see the SLEE 1.1 specification. |
Creating Profile Tables
To create a new profile table based on an already-deployed profile specification, use the following rhino-console command or related MBean operation.
Name character restriction
The profile table name cannot include the / character. |
Console command: createprofiletable
Command |
createprofiletable <profile-spec-id> <table-name> Description Create a profile table |
---|---|
Example |
$ ./rhino-console createprofiletable name=AddressProfileSpec,vendor=javax.slee,version=1.1 testtable Created profile table testtable |
MBean operation: createProfileTable
MBean |
|
---|---|
SLEE-defined |
public void createProfileTable(javax.slee.profile.ProfileSpecificationID id, String newProfileTableName) throws NullPointerException, UnrecognizedProfileSpecificationException, InvalidArgumentException, ProfileTableAlreadyExistsException, ManagementException; |
Arguments |
This operation requires that you specify: the component identifier of the profile specification, and the name of the new profile table.
|
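A minimal Java sketch of this operation, assuming an existing MBeanServerConnection and a placeholder ProfileProvisioning ObjectName, matching the console example above:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.profile.ProfileSpecificationID;

public class CreateProfileTable {
    static void create(MBeanServerConnection conn, ObjectName profileProvisioning) throws Exception {
        ProfileSpecificationID specId = new ProfileSpecificationID("AddressProfileSpec", "javax.slee", "1.1");
        conn.invoke(profileProvisioning, "createProfileTable",
                new Object[] { specId, "testtable" },
                new String[] { ProfileSpecificationID.class.getName(), String.class.getName() });
    }
}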
Creating Profiles
To create a profile in an existing profile table, use the following rhino-console command or related MBean operation.
Console command createprofile
Command |
createprofile <table-name> <profile-name> (<attr-name> <attr-value>)* Description Add a profile to a table, optionally setting attributes (see Setting Profile Attributes) |
---|---|
Example |
$ ./rhino-console createprofile testtable testprofile Profile testtable/testprofile created |
Notes |
Setting profile attributes
When creating a profile, decide the profile’s attribute names and values, then enter them in the command as name and value argument pairs.
White space, commas, quotes
If a profile or profile table name or an attribute name or value contains white space or a comma, you must quote the string. For example: $ ./rhino-console createprofile "testtable 2" "testprofile 2" SubscriberAddress "my address" forwarding true If the value requires quotes, you must escape them using a backslash "\" (to avoid them being removed by the parser). For example: $ ./rhino-console createprofile testtable testprofile attrib "\"The quick brown fox\""
Name uniqueness
The profile name must be unique within the scope of the profile table. |
MBean operation: createProfile
MBean |
|
---|---|
SLEE-defined |
public javax.management.ObjectName createProfile(String profileTableName, String newProfileName) throws NullPointerException, UnrecognizedProfileTableNameException, InvalidArgumentException, ProfileAlreadyExistsException, ManagementException; |
Arguments |
This operation requires that you specify: the name of the profile table, and the name of the new profile.
|
Notes |
Profile MBean commit state
This operation returns an ObjectName for the new profile’s Profile MBean. The MBean is initially in the read-write state; attribute values set through it do not take effect until commitProfile() is invoked.
Name uniqueness
The profile name must be unique within the scope of the profile table. |
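A Java sketch of creating and committing a profile, under the same assumptions as the earlier profile sketches (existing MBeanServerConnection, placeholder ProfileProvisioning ObjectName):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class CreateProfile {
    static void create(MBeanServerConnection conn, ObjectName profileProvisioning) throws Exception {
        ObjectName profile = (ObjectName) conn.invoke(profileProvisioning, "createProfile",
                new Object[] { "testtable", "testprofile" },
                new String[] { String.class.getName(), String.class.getName() });
        // The returned Profile MBean holds uncommitted state: set attributes here
        // if required, then commit and close it.
        conn.invoke(profile, "commitProfile", null, null);
        conn.invoke(profile, "closeProfile", null, null);
    }
}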
Listing Profile Tables
To list all profile tables in a SLEE, use the following rhino-console command or related MBean operation.
Console command: listprofiletables
Command |
listprofiletables Description List the current created profile tables |
---|---|
Example |
$ ./rhino-console listprofiletables callbarring callforwarding |
MBean operation: getProfileTables
MBean |
|
---|---|
SLEE-defined |
public Collection getProfileTables() throws ManagementException; |
Listing Profiles
To list all profiles of a specific profile table, use the following rhino-console command or related MBean operation.
Console command: listprofiles
Command |
listprofiles <table-name> Description List the profiles in a table |
---|---|
Example |
$ ./rhino-console listprofiles testtable testprofile |
MBean operation: getProfiles
MBean |
|
---|---|
SLEE-defined |
public Collection getProfiles(String profileTableName) throws NullPointerException, UnrecognizedProfileTableNameException, ManagementException; |
Arguments |
This operation requires that you specify the name of the profile table.
|
Listing Profile Attributes
To list a profile’s attributes (names and current values), use the following rhino-console command or related MBean operation.
Console command: listprofileattributes
Command |
listprofileattributes <table-name> [profile-name] Description List the current values of a profile, or if no profile is specified the current values of the default profile are listed |
---|---|
Example |
$ ./rhino-console listprofileattributes testtable testprofile Address={null} |
MBean operation: getProfile
MBean |
|||
---|---|---|---|
SLEE-defined |
public javax.management.ObjectName getProfile(String profileTableName,String profileName) throws NullPointerException, UnrecognizedProfileTableNameException, UnrecognizedProfileNameException, ManagementException; |
||
Arguments |
This operation requires that you specify: the name of the profile table, and the name of the profile.
|
||
Notes |
Profile MBean state
This operation returns an ObjectName for the profile’s Profile MBean. The MBean is initially in the read-only state; attribute values can be read but not modified until editProfile() is invoked.
|
Setting Profile Attributes
To set a profile’s attribute values, use the following rhino-console command or related MBean operation.
Console command: setprofileattributes
Command |
setprofileattributes <table-name> <profile-name> (<attr-name> <attr-value>)* Description Set the current values of a profile (use "" for default profile). The implementation supports only a limited set of attribute types that it can convert from strings to objects |
---|---|
Example |
$ ./rhino-console setprofileattributes testtable testprofile Address IP:192.168.0.1 Set attributes in profile testtable/testprofile |
Notes |
White space, commas, quotes
If a profile or profile table name or an attribute name or value contains white space or a comma, you must quote the string. For example: $ ./rhino-console setprofileattributes "testtable 2" "testprofile 2" SubscriberAddress "my address" forwarding true If the value requires quotes, you must escape them using a backslash "\" (to avoid them being removed by the parser). For example: $ ./rhino-console setprofileattributes testtable testprofile attrib "\"The quick brown fox\"" |
MBean operation: getProfile
MBean |
|||
---|---|---|---|
SLEE-defined |
public javax.management.ObjectName getProfile(String profileTableName,String profileName) throws NullPointerException, UnrecognizedProfileTableNameException, UnrecognizedProfileNameException, ManagementException; |
||
Arguments |
This operation requires that you specify: the name of the profile table, and the name of the profile.
|
||
Notes |
Profile MBean state
This operation returns an ObjectName for the profile’s Profile MBean, initially in the read-only state. To put the MBean into the read-write state, invoke editProfile() on it; then set attribute values and invoke commitProfile() to save the changes.
|
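A Java sketch of the edit-set-commit sequence, assuming an existing MBeanServerConnection and a placeholder ProfileProvisioning ObjectName; the attribute name and value mirror the console example above:

import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.Address;
import javax.slee.AddressPlan;

public class SetProfileAddress {
    static void set(MBeanServerConnection conn, ObjectName profileProvisioning) throws Exception {
        ObjectName profile = (ObjectName) conn.invoke(profileProvisioning, "getProfile",
                new Object[] { "testtable", "testprofile" },
                new String[] { String.class.getName(), String.class.getName() });
        conn.invoke(profile, "editProfile", null, null); // switch to the read-write state
        // Profile CMP fields are exposed as JMX attributes on the Profile MBean.
        conn.setAttribute(profile, new Attribute("Address", new Address(AddressPlan.IP, "192.168.0.1")));
        conn.invoke(profile, "commitProfile", null, null);
        conn.invoke(profile, "closeProfile", null, null);
    }
}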
Finding Profiles by Attribute Value
To retrieve all profiles with a specific attribute value, use the following rhino-console commands or related MBean operations:
Console command: listprofilesbyattribute
Command |
listprofilesbyattribute <table-name> <attr-name> <attr-value> [display-attributes (true/false)] Description List the profiles which have an attribute <attr-name> equal to <attr-value>. The implementation supports only a limited set of attribute types that it can convert from strings to objects |
---|---|
Example |
$ ./rhino-console listprofilesbyattribute testtable Address IP:192.168.0.1 1 profiles returned ProfileID[table=testtable,profile=testprofile] |
Notes |
SLEE 1.1- & SLEE 1.0-specific commands
Between SLEE 1.0 and SLEE 1.1, the underlying profile specification schema changed significantly. According to the SLEE 1.1 specification, profile attributes no longer have to be indexed to be legally used by a find-by-attribute-value query. Therefore, the listprofilesbyattribute command applies to SLEE 1.1 profile specifications, while the listprofilesbyindexedattribute command (below) applies to SLEE 1.0 profile specifications.
Backwards compatibility
SLEE 1.1 demands backwards compatibility for SLEE 1.0-compliant profiles, which means a SLEE 1.0-compliant profile specification can be deployed into the SLEE; and profile tables and profiles can be successfully created and managed. |
Console command: listprofilesbyindexedattribute
Command |
listprofilesbyindexedattribute <table-name> <attr-name> <attr-value> [display-attributes (true/false)] Description List the profiles which have an indexed attribute <attr-name> equal to <attr-value>. The implementation supports only a limited set of attribute types that it can convert from strings to objects |
---|---|
Example |
$ ./rhino-console listprofilesbyindexedattribute testtable indexedAttrib someValue 1 profiles returned ProfileID[table=testtable,profile=testprofile] |
MBean operation: getProfilesByAttribute
MBean |
|
---|---|
SLEE-defined |
public Collection getProfilesByAttribute(String profileTableName, String attributeName, Object attributeValue) throws NullPointerException, UnrecognizedProfileTableNameException, UnrecognizedAttributeException, InvalidArgumentException, AttributeTypeMismatchException, ManagementException; |
Arguments |
This operation requires that you specify the: profile table name, attribute name, and attribute value to match.
|
Notes |
SLEE 1.1- & SLEE 1.0-specific commands
Between SLEE 1.0 and SLEE 1.1, the underlying profile specification schema changed significantly. According to the SLEE 1.1 specification, profile attributes no longer have to be indexed to be legally used by a find-by-attribute-value query. Therefore, the getProfilesByAttribute operation applies to SLEE 1.1 profile specifications, while the getProfilesByIndexedAttribute operation (below) applies to SLEE 1.0 profile specifications.
Backwards compatibility
SLEE 1.1 demands backwards compatibility for SLEE 1.0-compliant profiles, which means a SLEE 1.0 compliant profile specification can be deployed into the SLEE; and profile tables and profiles can be successfully created and managed. |
MBean operation: getProfilesByIndexedAttribute
MBean |
|
---|---|
SLEE-defined |
public Collection getProfilesByIndexedAttribute(String profileTableName, String attributeName, Object attributeValue) throws NullPointerException, UnrecognizedProfileTableNameException, UnrecognizedAttributeException, AttributeNotIndexedException, AttributeTypeMismatchException, ManagementException; |
Arguments |
This operation requires that you specify the: profile table name, indexed attribute name, and attribute value to match.
|
Finding Profiles Using Static Queries
To retrieve all profiles matching a static query (pre-defined in a profile table’s profile specification schema), use the following MBean operation.
The Rhino SLEE does not provide a rhino-console command for this function. |
MBean operation: getProfilesByStaticQuery
MBean |
|
---|---|
SLEE-defined |
public Collection getProfilesByStaticQuery(String profileTableName, String queryName, Object[] parameters) throws NullPointerException, UnrecognizedProfileTableNameException, UnrecognizedQueryNameException, InvalidArgumentException, AttributeTypeMismatchException, ManagementException; |
Arguments |
This operation requires that you specify the: profile table name, query name, and an array of parameter values for the query.
|
For more about static query methods, please see chapter 10.8.2 "Static query methods" in the SLEE 1.1 specification. |
Finding Profiles Using Dynamic Queries
To retrieve all profiles matching a dynamic query (an expression the administrator constructs at runtime), use the following MBean operation.
The Rhino SLEE does not provide a rhino-console command for this function. |
MBean operation: getProfilesByDynamicQuery
MBean |
|
---|---|
SLEE-defined |
public Collection getProfilesByDynamicQuery(String profileTableName, QueryExpression expr) throws NullPointerException, UnrecognizedProfileTableNameException, UnrecognizedAttributeException, AttributeTypeMismatchException, ManagementException; |
Arguments |
This operation requires that you specify the: profile table name, and the query expression to evaluate.
|
For more about dynamic query methods, please see chapter 10.20.3 "Dynamic Profile queries" in the SLEE 1.1 specification. |
Exporting Profiles
To export SLEE profiles, use the following rhino-console command or related MBean operation.
Console command: exportall
The Rhino command console currently does not have a command specific to profile exports. Instead you use a more general export function, which (apart from SLEE profiles) also exports deployable units for services and RAs currently installed in the SLEE. |
Command |
exportall <target directory> Description Export the internal state of the SLEE including deployable units, profile tables, and other component state. Uses JMX to export profiles and writes XML. Use the standalone rhino-export utility to export profile snapshots |
---|---|
Example |
$ ./rhino-console exportall /home/userXY/myexport Exporting file:jars/incc-callbarring-service.jar... Exporting file:jars/incc-callforwarding-service.jar... Taking snapshot for callforwarding Saving callforwarding.jar (183kb) Streaming profile table 'callforwarding' snapshot to callforwarding.data (2 entries) [################################################################################] 2/2 entries Taking snapshot for callbarring Saving callbarring.jar (177kb) Streaming profile table 'callbarring' snapshot to callbarring.data (2 entries) [################################################################################] 2/2 entries Extracted 4 of 4 entries (157 bytes) Snapshot timestamp 2008-05-07 15:17:42.325 (1210130262325) Critical region time : 0.002 s Request preparation time : 0.053 s Data extraction time : 0.302 s Total time : 0.355 s Converting 2 profile table snapshots... Converting callforwarding... bean class=class com.opencloud.deployed.Profile_Table_2.ProfileOCBB_Bean [###########################################################################] converted 2 of 2 [###########################################################################] converted 2 of 2 Converted 2 records Converting callbarring... bean class=class com.opencloud.deployed.Profile_Table_1.ProfileOCBB_Bean [###########################################################################] converted 2 of 2 [###########################################################################] converted 2 of 2 Converted 2 records Export complete |
Exported profile files
After the export, you will find the exported profiles as .xml files in the profiles subdirectory of the target directory.
Exporting "snapshots"
See also Profile Snapshots, to export profile snapshots in binary format and convert them into xml files.
Exporting a SLEE
See also Exporting a SLEE, to export all deployed components and configuration of a Rhino SLEE. |
MBean operation: exportProfiles
MBean |
|||
---|---|---|---|
Rhino extension |
com.opencloud.rhino.management.profile.ProfileDataCollection exportProfiles(String profileTableName, String[] profileNames) throws NullPointerException, UnrecognizedProfileTableNameException, ManagementException; |
||
Arguments |
This operation requires that you specify: the name of the profile table, and the names of the profiles to export.
|
Importing Profiles
To import SLEE profiles, use the following rhino-console command or related MBean operation.
Console command: importprofiles
Use the importprofiles command to import profile data from an xml file that has previously been created (for example, using the exportall command). |
Command |
importprofiles <filename.xml> [-table table-name] [-replace] [-max profiles-per-transaction] [-noverify] Description Import profiles from xml data |
---|---|
Example |
$ ./rhino-console exportall /home/userXY/myexport ... ./rhino-console importprofiles /home/userXY/myexport/profiles/testtable.xml Importing profiles into profile table: testtable 2 profile(s) processed: 1 created, 0 replaced, 0 removed, 1 skipped |
Notes |
Referenced profile table must exist
For the profile import to run successfully, the profile table the xml data refers to must exist before invoking the importprofiles command. |
MBean operation: importProfiles
MBean |
|||
---|---|---|---|
Rhino extension |
com.opencloud.rhino.management.profile.ProfileImportResult importProfiles(com.opencloud.rhino.management.profile.ProfileDataCollection profileData) throws NullPointerException, UnrecognizedProfileTableNameException, InvalidArgumentException, ProfileAlreadyExistsException, UnrecognizedProfileNameException, ManagementException; |
||
Arguments |
This operation requires a ProfileDataCollection containing the profile data to import.
|
Alarms
As well as an overview and list of alarms, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples and links to related javadocs.
Procedure | rhino-console command | MBean → Operation |
---|---|---|
listactivealarms |
Alarm → getAlarms Alarm → getDescriptors |
|
clearalarm |
Alarm → clearAlarm |
|
clearalarms |
Alarm → clearAlarms |
|
setalarmlogperiod getalarmlogperiod |
Logging Management → setAlarmLogPeriod Logging Management → getAlarmLogPeriod |
|
createthresholdrule removethresholdrule |
Threshold Rule Management → createRule Threshold Rule Management → removeRule |
|
listthresholdrules |
Threshold Rule Management → getRules |
|
|
getconfig exportconfig importconfig |
Threshold Rule → addTriggerCondition Threshold Rule → getTriggerConditions Threshold Rule → removeTriggerCondition Threshold Rule → getTriggerConditionsOperator Threshold Rule → setTriggerConditionsOperator Threshold Rule → getTriggerPeriod Threshold Rule → setTriggerPeriod Threshold Rule → addResetCondition Threshold Rule → getResetConditions Threshold Rule → removeResetCondition Threshold Rule → getResetConditionsOperator Threshold Rule → setResetConditionsOperator Threshold Rule → getResetPeriod Threshold Rule → setResetPeriod Threshold Rule → setAlarm Threshold Rule → getAlarmLevel Threshold Rule → getAlarmType Threshold Rule → getAlarmMessage |
activatethresholdrule |
Threshold Rule → activateRule Threshold Rule → deactivateRule |
|
getthresholdrulescanperiod setthresholdrulescanperiod |
Threshold Rule Management → getScanPeriod Threshold Rule Management → setScanPeriod |
About Alarms
Alarms in Rhino alert the SLEE administrator to exceptional conditions.
Application components in the SLEE raise them, as does Rhino itself (upon detecting an error condition). Rhino clears some alarms automatically when the error conditions are resolved. The SLEE administrator must clear others manually.
When an alarm is raised or cleared, Rhino generates a JMX notification from the Alarm MBean
. Management clients may attach a notification listener to the Alarm MBean, to receive alarm notifications. Rhino logs all alarm notifications.
What’s new in SLEE 1.1?
While only SBBs could generate alarms in SLEE 1.0, other types of application components can also generate alarms in SLEE 1.1.
In SLEE 1.1, alarms are stateful — between being raised and cleared, an alarm persists in the SLEE, where an administrator may examine it. (In SLEE 1.0, alarms could be generated with a severity level that indicated a cleared alarm, but the fact that an error condition had occurred did not persist in the SLEE beyond the initial alarm generation.)
Sample log file messages
SLEE 1.1 |
|
---|---|
SLEE 1.0 |
|
For both SLEE 1.1 and 1.0, if the cause of an alarm is a Java exception, the log includes the exception and its stack trace (following the alarm description message). |
See also the guide How Do I Use Stateful Alarms in SLEE 1.1 on the OpenCloud Developer Portal. |
Configuring alarm log period
To set and get the interval between periodic active alarm logs, use the following rhino-console commands or related MBean operations.
Rhino periodically logs active alarms and the default interval is 60 seconds.
setalarmlogperiod
Command |
setalarmlogperiod <seconds> Description Sets the interval between periodic active alarm logs. Required Arguments seconds The interval between periodic alarm logs. Setting to 0 will disable logging of periodic alarms. |
---|---|
Example |
To set the log period to 30 seconds:
$ ./rhino-console setalarmlogperiod 30
Active alarm logging period set to 30 seconds. |
getalarmlogperiod
Command |
getalarmlogperiod Description Returns the interval between periodic active alarm logs. |
---|---|
Example |
To get the alarm log period:
$ ./rhino-console getalarmlogperiod
Active alarm logging period is currently 30 seconds. |
MBean operations: setAlarmLogPeriod and getAlarmLogPeriod
MBean |
|
---|---|
SLEE-defined |
Set the interval between periodic active alarm logs
public void setAlarmLogPeriod(int period) throws IllegalArgumentException, ConfigurationException; Sets the interval between periodic active alarm logs. Setting the period to 0 will disable periodic alarm logging.
Get the interval between periodic active alarm logs
public int getAlarmLogPeriod() throws ConfigurationException; Returns the interval between periodic active alarm logs. |
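As a minimal sketch, these operations can be invoked generically over a JMX connection. The object name below is a placeholder assumption; look up the actual Logging Management MBean name registered in your Rhino installation.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class AlarmLogPeriod {
    static void setAndCheck(MBeanServerConnection connection) throws Exception {
        // Placeholder object name for the Logging Management MBean (assumption).
        ObjectName logging = new ObjectName("com.opencloud.rhino:type=LoggingManagement");
        // Log active alarms every 30 seconds.
        connection.invoke(logging, "setAlarmLogPeriod", new Object[] { 30 }, new String[] { "int" });
        int period = (Integer) connection.invoke(logging, "getAlarmLogPeriod", null, null);
        System.out.println("Active alarm logging period is " + period + " seconds");
    }
}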
Viewing Active Alarms
To view active alarms, use the following rhino-console command or related MBean operation.
Console command: listactivealarms
Command |
listactivealarms [<type> <notif-source>] [-stack] Description List the alarms currently active in the SLEE (for a specific notification if provided). Use -stack to display stack traces for alarm cause exceptions. |
---|---|
Example |
To list all active alarms in the SLEE:
$ ./rhino-console listactivealarms
1 alarm:
Alarm 101:193215480667648 [diameter.peer.connectiondown]
  Level      : Warning
  InstanceID : diameter.peer.hss-instance
  Source     : (RA Entity) sh-cache-ra
  Timestamp  : 20161019 14:02:58 (active 15m 30s)
  Message    : Connection to hss-instance:3868 is down
The number value on the first line, "101:193215480667648", is the alarm identifier. The value in the square brackets, "diameter.peer.connectiondown", is the alarm type. |
MBean operations: getAlarms and getDescriptors
MBean |
|
---|---|
SLEE-defined |
Get identifiers of all active alarms in the SLEE
public String[] getAlarms() throws ManagementException; Rhino’s implementation of the SLEE-defined getAlarms operation returns the identifiers of all alarms currently active in the SLEE.
Get identifiers of active alarms raised by a specific notification source
public String[] getAlarms(NotificationSource notificationSource) throws NullPointerException, UnrecognizedNotificationSourceException, ManagementException; This variant of getAlarms returns the identifiers of only those active alarms raised by the specified notification source.
Get alarm descriptor for an alarm identifier
public Alarm[] getDescriptors(String[] alarmIDs) throws NullPointerException, ManagementException; This operation returns the Alarm descriptor objects corresponding to the given alarm identifiers. |
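A sketch combining the two operations over a JMX MBean proxy is shown below. The object name is an assumption; the AlarmMBean interface and Alarm accessor methods are those defined by JAIN SLEE 1.1.

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.management.Alarm;
import javax.slee.management.AlarmMBean;

public class ListAlarms {
    // Print every active alarm: getAlarms() for the identifiers, getDescriptors() for the detail.
    static void printActiveAlarms(MBeanServerConnection connection) throws Exception {
        // Assumed object name for the SLEE-defined Alarm MBean.
        ObjectName name = new ObjectName("javax.slee.management:name=Alarm");
        AlarmMBean alarmMBean = JMX.newMBeanProxy(connection, name, AlarmMBean.class);
        String[] ids = alarmMBean.getAlarms();
        for (Alarm alarm : alarmMBean.getDescriptors(ids)) {
            System.out.println(alarm.getAlarmID() + " [" + alarm.getAlarmType() + "] "
                    + alarm.getAlarmLevel() + ": " + alarm.getMessage());
        }
    }
}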
Clear Individual Alarms
To clear an alarm using its alarm identifier, use the following rhino-console command or related MBean operation.
Console command: clearalarm
Command |
clearalarm <alarmid> Description Clear the specified alarm. |
---|---|
Example |
To clear the alarm with the identifier 101:102916243593:1:
$ ./rhino-console clearalarm 101:102916243593:1
Alarm 101:102916243593:1 cleared |
MBean operation: clearAlarm
MBean |
|
---|---|
SLEE-defined |
public boolean clearAlarm(String alarmID) throws NullPointerException, ManagementException; Rhino’s implementation of the SLEE-defined clearAlarm operation clears the alarm with the specified alarm identifier, returning true if an active alarm was cleared. |
Clear Alarms Raised by a Particular Notification Source
To clear alarms raised by a particular notification source, use the following rhino-console command or related MBean operation.
Console command: clearalarms
Command |
clearalarms <type> <notification-source> [<alarm type>] Description Clear alarms raised by the specified notification source. If an alarm type is given, only alarms of that type are cleared; otherwise all alarms raised by the notification source are cleared. |
---|---|
Example |
To clear all alarms raised by the resource adaptor entity named insis-cap:
$ ./rhino-console clearalarms resourceadaptorentity insis-cap
2 alarms cleared
To clear only "noconnection" alarms raised by the resource adaptor entity named insis-cap:
$ ./rhino-console clearalarms resourceadaptorentity insis-cap noconnection
1 alarm cleared |
MBean operation: clearAlarms
MBean |
|
---|---|
SLEE-defined |
Clear all active alarms raised by a notification source
public int clearAlarms(NotificationSource notificationSource) throws NullPointerException, UnrecognizedNotificationSourceException, ManagementException; Rhino’s implementation of the SLEE-defined clearAlarms operation clears all active alarms raised by the specified notification source, returning the number of alarms cleared.
Clear active alarms of a specified type raised by a notification source
public int clearAlarms(NotificationSource notificationSource, String alarmType) throws NullPointerException, UnrecognizedNotificationSourceException, ManagementException; This variant of clearAlarms clears only active alarms of the specified alarm type raised by the notification source, returning the number of alarms cleared. |
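A brief sketch mirroring the console example above, again assuming the Alarm MBean object name; ResourceAdaptorEntityNotification is the SLEE 1.1 notification-source class for resource adaptor entities.

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.management.AlarmMBean;
import javax.slee.management.ResourceAdaptorEntityNotification;

public class ClearRaEntityAlarms {
    static void clear(MBeanServerConnection connection) throws Exception {
        ObjectName name = new ObjectName("javax.slee.management:name=Alarm"); // assumed object name
        AlarmMBean alarmMBean = JMX.newMBeanProxy(connection, name, AlarmMBean.class);
        // Identifies the resource adaptor entity "insis-cap" as the notification source.
        ResourceAdaptorEntityNotification source = new ResourceAdaptorEntityNotification("insis-cap");
        int cleared = alarmMBean.clearAlarms(source); // clears alarms of all types
        // To clear only alarms of one type instead:
        // int cleared = alarmMBean.clearAlarms(source, "noconnection");
        System.out.println(cleared + " alarms cleared");
    }
}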
Threshold Alarms
To supplement standard alarms (which Rhino and installed components raise), an administrator may configure custom alarms, which Rhino raises and clears automatically based on SLEE statistics.
These are known as threshold alarms, and you manage them using the Threshold Rule Management MBean.
Threshold rules
You configure the threshold rules governing each threshold alarm using a Threshold Rule MBean.
Each threshold rule consists of:

- a unique name identifying the rule
- one or more trigger conditions
- an alarm level, type and message text

and optionally:

- one or more reset conditions
- how long (in milliseconds) the trigger conditions must remain satisfied before Rhino raises the alarm
- how long (in milliseconds) the reset conditions must remain satisfied before Rhino clears the alarm.
You can combine condition sets using either an AND or an OR operator. (AND means all conditions must be satisfied, whereas OR means any one of the conditions may be satisfied — to raise or clear the alarm.) |
Parameter sets
Threshold rules use the same parameter sets as the statistics client. You can discover them either by using the statistics client graphically or by using its command-line mode from a command shell as shown below.
$ ./rhino-stats -l
The following root parameter sets are available for monitoring:
    Activities, ActivityHandler, ByteArrayBuffers, CGIN, DatabaseQuery, Diameter, EndpointLimiting,
    EventRouter, Events, HTTP, JDBCDatasource, JVM, LicenseAccounting, Limiters, LockManagers,
    MemDB-Local, MemDB-Replicated, MemDB-Timestamp, Metrics, ObjectPools, SIP, SIS-SIP, SLEE-Usage,
    Services, StagingThreads, StagingThreads-Misc, TimerFacility, TransactionManager
For parameter set type descriptions and a list of available parameter sets use the -l <root parameter set name> option
$ ./rhino-stats -l JVM
Connecting to localhost:1199
Parameter Set: JVM
Parameter Set Type: JVM
Description: JVM Statistics
Counter type statistics:
  Id:  Name:                  Label:      Description:
  0    heapUsed               husd        Used heap memory
  1    heapCommitted          hcomm       Committed heap memory
  2    heapInitial            hinit       Initial heap memory
  3    heapMaximum            hmax        Maximum heap memory
  4    nonHeapUsed            nhusd       Used non-heap memory
  5    nonHeapCommitted       nhcomm      Committed non-heap memory
  6    nonHeapInitial         nhinit      Initial non-heap memory
  7    nonHeapMaximum         nhmax       Maximum non-heap memory
  8    classesCurrentLoaded   cLoad       Number of classes currently loaded
  9    classesTotalLoaded     cTotLoad    Total number of classes loaded since JVM start
  10   classesTotalUnloaded   cTotUnload  Total number of classes unloaded since JVM start
Sample type statistics: (none defined)
Found 1 parameter sets under 'JVM':
  -> "JVM"
How Rhino evaluates threshold rules
Rhino periodically evaluates the trigger conditions of each configured rule. When a trigger condition is satisfied and its trigger period has been met or exceeded, Rhino raises the corresponding alarm. If the rule has reset conditions, Rhino evaluates those too, and when the reset condition is satisfied and the reset trigger period has been met or exceeded, clears the alarm. If the rule does not have reset conditions, an administrator must clear the alarm manually.
You can configure the frequency of threshold alarm rule evaluation using the Threshold Rule Management MBean. An administrator can specify a polling frequency in milliseconds, or enter 0 to disable rule evaluation. The Rhino default is 0, which must be changed to enable threshold-rule evaluation. The ideal polling frequency depends heavily on the nature of the configured alarms.
Simple and relative rule conditions
There are two types of threshold rule conditions, explained in the tables below.
Simple rule conditions
What it compares | Operators for comparison | Conditions | Example |
---|---|---|---|
The value of a counter-type Rhino statistic against a constant value. |
>, >=, <, <=, ==, != |
The constant value to compare against may be any floating-point number. The condition can either compare against the absolute value of the statistic (suitable for gauge-type statistics), or against the observed difference between successive samples (suitable for pure counter-type statistics). |
A condition that selects the statistic |
Relative rule conditions
What it compares | Operators for comparison | Conditions | Example |
---|---|---|---|
The ratio between two monitored statistics against a constant value. |
>, >=, <, <=, ==, != |
The constant value to compare against may be any floating-point number. |
A condition that selects the statistics |
For definitions of counter, gauge and sample type statistics, see About Rhino Statistics. |
See also:
|
Creating and Removing Rules
To create or remove a threshold-alarm rule, use the following rhino-console commands or related MBean operations.
Console command: createthresholdrule
Command |
createthresholdrule <name> Description Create a threshold alarm rule |
---|---|
Example |
To create a rule named "low memory":
$ ./rhino-console createthresholdrule "low memory"
Threshold rule low memory created |
MBean operation: createRule
MBean |
|
---|---|
Rhino operation |
public ObjectName createRule(String ruleName) throws ConfigurationException, ValidationException; This operation creates a rule with the name given, and returns the JMX object name of a Threshold Rule MBean that can be used to configure the new rule. |
Console command: removethresholdrule
Command |
removethresholdrule <name> Description Remove a threshold alarm rule |
---|---|
Example |
To remove a rule named "low memory":
$ ./rhino-console removethresholdrule "low memory"
Threshold rule low memory removed |
MBean operation: removeRule
MBean |
|
---|---|
Rhino operation |
public void removeRule(String ruleName) throws ConfigurationException, ValidationException; This operation removes the rule with the name given. |
Listing Rules
To list all threshold alarm rules, use the following rhino-console command or related MBean operation.
Console command: listthresholdrules
Command |
listthresholdrules Description List threshold alarm rules |
---|---|
Example |
To list all threshold alarm rules, with their activation states:
$ ./rhino-console listthresholdrules
Current threshold rules:
  low memory (active)
  low disk (inactive)
  testrule (inactive) |
MBean operation: getRules
MBean |
|
---|---|
Rhino operation |
public String[] getRules() throws ConfigurationException; |
Configuring Rules
To configure a threshold alarm rule, either:

- use the following rhino-console commands to view available rules, export a rule to an XML file, edit the rule file, and then re-import the edited file into the SLEE
- or use Threshold Rule MBean operations.
View rules
To view a current threshold alarm rule, use the getconfig console command:
Command |
getconfig [-namespace] <configuration type> [configuration key] Description Extract and display content of a container configuration key. The optional -namespace argument must be used to get the config of a namespace-specific key. If no key is specified the configs of all keys of the given type are shown |
---|---|
Example |
To display the threshold alarm rule named "low_memory":
$ ./rhino-console getconfig threshold-rules "rule/low_memory"
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE rhino-threshold-rules-config PUBLIC "-//Open Cloud Ltd.//DTD Rhino Threshold Rules Config 2.3//EN" "rhino-threshold-rules-config-2.3.dtd">
<rhino-threshold-rules-config config-version="2.3" rhino-version="Rhino (version='2.5', release='1.0', build=xxx, revision=xxx)" timestamp="xxx">
  <!-- Generated Rhino configuration file: xxxx-xx-xx xx:xx:xx.xxx -->
  <threshold-rules active="false" name="low_memory">
    <trigger-conditions name="Trigger conditions" operator="OR" period="0">
      <relative-threshold operator="<=" value="0.2">
        <first-statistic calculate-delta="false" parameter-set="JVM.OperatingSystem" statistic="freePhysicalMemorySize"/>
        <second-statistic calculate-delta="false" parameter-set="JVM.OperatingSystem" statistic="totalPhysicalMemorySize"/>
      </relative-threshold>
    </trigger-conditions>
    <reset-conditions name="Reset conditions" operator="OR" period="0"/>
    <trigger-actions>
      <raise-alarm-action level="Major" message="Low on memory" type="MEMORY"/>
    </trigger-actions>
    <reset-actions>
      <clear-raised-alarm-action/>
    </reset-actions>
  </threshold-rules>
</rhino-threshold-rules-config> |
Export rules
To save a threshold rule configuration to a file for editing, use the exportconfig console command:
Command |
exportconfig [-namespace] <configuration type> [configuration key] <filename> Description Extract content of a container configuration key and save it to a file. The optional -namespace argument must be used to export the config of a namespace-specific key |
---|---|
Example |
To export the threshold alarm rule named "low memory" to the file rule_low_memory.xml:
$ ./rhino-console exportconfig threshold-rules "rule/low memory" rule_low_memory.xml
Export threshold-rules: (rule/low memory) to rule_low_memory.xml
Wrote rule_low_memory.xml |
The structure of the exported data in the XML file is identical to that displayed by the getconfig command. |
Edit rules
You can modify a rule using a text editor. In the following example, a reset condition has been added to a previously exported rule, so that the alarm raised will clear automatically when free memory rises above 30%. (Previously the reset-conditions element in this rule had no conditions.)
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE rhino-threshold-rule PUBLIC "-//Open Cloud Ltd.//DTD Rhino Threshold Rule 2.0//EN" "http://www.opencloud.com/dtd/rhino-config-threshold-rules_2_0.dtd">
<rhino-threshold-rule config-version="2.3" rhino-version="Rhino (version='2.5', release='00', build=xxx, revision=xxx)" timestamp="xxx">
  <!-- Generated Rhino configuration file: xxxx-xx-xx xx:xx:xx.xxx -->
  <threshold-rules active="false" name="low memory">
    <trigger-conditions name="Trigger conditions" operator="OR" period="0">
      <relative-threshold operator="<=" value="0.2">
        <first-statistic calculate-delta="false" parameter-set="JVM.OperatingSystem" statistic="freePhysicalMemorySize"/>
        <second-statistic calculate-delta="false" parameter-set="JVM.OperatingSystem" statistic="totalPhysicalMemorySize"/>
      </relative-threshold>
    </trigger-conditions>
    <reset-conditions name="Reset conditions" operator="OR" period="0">
      <relative-threshold operator=">" value="0.3">
        <first-statistic calculate-delta="false" parameter-set="JVM.OperatingSystem" statistic="freePhysicalMemorySize"/>
        <second-statistic calculate-delta="false" parameter-set="JVM.OperatingSystem" statistic="totalPhysicalMemorySize"/>
      </relative-threshold>
    </reset-conditions>
    <trigger-actions>
      <raise-alarm-action level="Major" message="Low on memory" type="MEMORY"/>
    </trigger-actions>
    <reset-actions>
      <clear-raised-alarm-action/>
    </reset-actions>
  </threshold-rules>
</rhino-threshold-rule>
Import rules
To import the modified threshold alarm rule file, use the importconfig console command:
Command |
importconfig [-namespace] <configuration type> <filename> [-replace] Description Import a container configuration key. The optional -namespace argument must be used to import a config for a namespace-specific key |
---|---|
Example |
To import the threshold alarm rule from the file rule_low_memory.xml:
$ ./rhino-console importconfig threshold-rules rule_low_memory.xml -replace
Configuration successfully imported. |
The -replace option is required when importing a rule with the same name as an existing rule, as there can be only one rule configuration with a given name present at any one time. |
Threshold Rule MBean Operations
To configure a threshold alarm rule, use the following MBean operations (defined on the Threshold Rule MBean interface), for:

- adding, removing and getting trigger conditions, and getting and setting their operators and periods
- adding, removing and getting reset conditions, and getting and setting their operators and periods
- setting the alarm
- getting an alarm’s level, type, and message.
See also Configuring Rules. |
Trigger conditions
To add, remove and get threshold alarm trigger conditions, and get and set their operators and periods, use the following MBean operations:
Operations | Usage |
---|---|
|
To add a trigger condition to the rule:
public void addTriggerCondition(String parameterSetName, String statistic, String operator, double value) throws ConfigurationException, UnknownStatsParameterSetException, UnrecognizedStatisticException, ValidationException; public void addTriggerCondition(String parameterSetName1, String statistic1, String parameterSetName2, String statistic2, String operator, double value) throws ConfigurationException, UnknownStatsParameterSetException, UnrecognizedStatisticException, ValidationException; The first operation adds a simple trigger condition to the rule. The second operation adds a relative condition between two parameter set statistics (see Simple and relative rule conditions).
To get the current trigger conditions:
public String[] getTriggerConditions() throws ConfigurationException;
To remove a trigger condition:
public void removeTriggerCondition(String key) throws ConfigurationException, ValidationException;
To get or set the trigger condition operator:
public String getTriggerConditionsOperator() throws ConfigurationException; public void setTriggerConditionsOperator(String operator) throws ConfigurationException, ValidationException; The operator must be one of the logical operators AND or OR.
To get or set the trigger condition period:
public long getTriggerPeriod() throws ConfigurationException; public void setTriggerPeriod(long period) throws ConfigurationException, ValidationException; The trigger period is measured in milliseconds. If it is 0, Rhino raises the alarm as soon as it evaluates the rule's trigger conditions as satisfied. |
Reset conditions
To add, remove and get threshold alarm reset conditions, and get and set their operators and periods, use the following MBean operations:
Operations | Usage |
---|---|
|
To add a reset condition to the rule:
public void addResetCondition(String parameterSetName, String statistic, String operator, double value) throws ConfigurationException, UnknownStatsParameterSetException, UnrecognizedStatisticException, ValidationException; public void addResetCondition(String parameterSetName1, String statistic1, String parameterSetName2, String statistic2, String operator, double value) throws ConfigurationException, UnknownStatsParameterSetException, UnrecognizedStatisticException, ValidationException; The first operation adds a simple reset condition to the rule. The second operation adds a relative condition between two parameter set statistics (see Simple and relative rule conditions).
To get the current reset conditions:
public String[] getResetConditions() throws ConfigurationException;
To remove a reset condition:
public void removeResetCondition(String key) throws ConfigurationException, ValidationException;
To get or set the reset condition operator:
public String getResetConditionsOperator() throws ConfigurationException; public void setResetConditionsOperator(String operator) throws ConfigurationException, ValidationException; The operator must be one of the logical operators AND or OR.
To get or set the reset condition period:
public long getResetPeriod() throws ConfigurationException; public void setResetPeriod(long period) throws ConfigurationException, ValidationException; The reset period is measured in milliseconds. If it is 0, Rhino clears the alarm as soon as it evaluates the rule's reset conditions as satisfied. |
Setting alarms
To set the alarm to be raised by a threshold rule, use the following MBean operation:
Operations | Usage |
---|---|
public void setAlarm(AlarmLevel level, String type, String message) throws ConfigurationException, ValidationException; The alarm level may be any level other than AlarmLevel.CLEAR. |
Getting alarm information
To get a threshold alarm’s level, type, and message, use the following MBean operations:
Operations | Usage |
---|---|
public AlarmLevel getAlarmLevel() throws ConfigurationException; public String getAlarmType() throws ConfigurationException; public String getAlarmMessage() throws ConfigurationException; |
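Putting these operations together, the sketch below creates and configures a rule equivalent to the "low memory" XML example in Configuring Rules. Because only the operation signatures are documented here, it uses generic MBean invocations; the Threshold Rule Management MBean object name is a placeholder assumption that you should replace with the name registered in your installation.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.facilities.AlarmLevel;

public class ConfigureLowMemoryRule {
    static void configure(MBeanServerConnection connection) throws Exception {
        // Placeholder object name for the Threshold Rule Management MBean (assumption).
        ObjectName ruleManagement = new ObjectName("com.opencloud.rhino:type=ThresholdRuleManagement");

        // createRule returns the JMX object name of the Threshold Rule MBean for the new rule.
        ObjectName rule = (ObjectName) connection.invoke(ruleManagement, "createRule",
                new Object[] { "low memory" }, new String[] { "java.lang.String" });

        // Relative trigger condition: free/total physical memory <= 0.2.
        connection.invoke(rule, "addTriggerCondition",
                new Object[] { "JVM.OperatingSystem", "freePhysicalMemorySize",
                               "JVM.OperatingSystem", "totalPhysicalMemorySize", "<=", 0.2 },
                new String[] { "java.lang.String", "java.lang.String", "java.lang.String",
                               "java.lang.String", "java.lang.String", "double" });

        // Alarm raised when the rule triggers.
        connection.invoke(rule, "setAlarm",
                new Object[] { AlarmLevel.MAJOR, "MEMORY", "Low on memory" },
                new String[] { "javax.slee.facilities.AlarmLevel", "java.lang.String", "java.lang.String" });
    }
}

After configuring the rule it must still be activated, and the rule-scan period set to a non-zero value, before Rhino will evaluate it (see the following sections).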
Activating and Deactivating Rules
To activate or deactivate a threshold-alarm rule, use the following rhino-console commands or related MBean operations.
Activate Rules
Console command: activatethresholdrule
Command |
activatethresholdrule <name> Description Activate a threshold alarm rule |
---|---|
Example |
To activate the rule with the name "low memory":
$ ./rhino-console activatethresholdrule "low memory"
Threshold rule low memory activated |
You can also activate a rule by exporting it, modifying the XML so that the rule's active attribute is set to true, and then re-importing it (see Configuring Rules). |
MBean operation: activateRule
MBean |
|
---|---|
Rhino operation |
public void activateRule() throws ConfigurationException; This operation activates the threshold-alarm rule represented by the Threshold Rule MBean. |
The threshold rule scan period must be configured to a non-zero value before Rhino will evaluate active threshold-alarm rules. |
Deactivate rules
Console command: deactivatethresholdrule
Command |
deactivatethresholdrule <name> Description Deactivate a threshold alarm rule |
---|---|
Example |
To deactivate the rule with the name "low memory":
$ ./rhino-console deactivatethresholdrule "low memory"
Threshold rule low memory deactivated |
MBean operation: deactivateRule
MBean |
|
---|---|
Rhino operation |
public void deactivateRule() throws ConfigurationException; This operation deactivates the threshold-alarm rule represented by the Threshold Rule MBean. |
Setting and Getting Rule-Scan Periods
To set or get the threshold rule scan period, use the following rhino-console commands or MBean operations.
What is a rule-scan period?
A threshold-alarm rule-scan period determines how often Rhino’s threshold-rule scanner evaluates active threshold-alarm rules. The scan period must be set to a valid non-zero value for Rhino to evaluate the rules. At the beginning of each scan period, Rhino evaluates the trigger conditions of each active threshold-alarm rule, raising the rule's alarm when the conditions are satisfied.
(The same process applies to the reset conditions once a rule has been triggered.) |
Console command: setthresholdrulescanperiod
Command |
setthresholdrulescanperiod <period> Description Set the threshold alarm rule scan period, measured in ms. Must be > 500 or 0 to disable rule checking |
---|---|
Example |
To set the threshold rule scan period to 30000ms (30s):
$ ./rhino-console setthresholdrulescanperiod 30000
Threshold rule scan period set to 30000ms
To disable threshold rule scanning:
$ ./rhino-console setthresholdrulescanperiod 0
Threshold rule scanning disabled |
MBean operation: setScanPeriod
MBean |
|
---|---|
Rhino operation |
public void setScanPeriod(int scanPeriod) throws ConfigurationException, ValidationException; The scan period is measured in milliseconds. |
Console command: getthresholdrulescanperiod
Command |
getthresholdrulescanperiod Description Get the threshold alarm rule scan period |
---|---|
Example |
$ ./rhino-console getthresholdrulescanperiod
Threshold rule scan period set to 30000ms |
MBean operation: getScanPeriod
MBean |
|
---|---|
Rhino operation |
public int getScanPeriod() throws ConfigurationException; The scan period is measured in milliseconds. |
Runtime Alarm List
To list all alarms that may be raised by Rhino and installed components (including their messages, and when raised and cleared), use the following rhino-console command.
Console command: alarmcatalog
Command |
alarmcatalog [-v] Description List the alarms that may be raised by Rhino and installed components. Using the -v flag will display more detail. |
---|---|
Example |
To list the alarm catalog:
$ ./rhino-console alarmcatalog
...
And this displays more detail:
$ ./rhino-console alarmcatalog -v
... |
Rhino Alarm List
This is a list of all alarms raised by this version of Rhino. For the management command that lists all alarms that may be raised by Rhino and installed components see Runtime Alarm List.
Alarm Type | Description |
---|---|
Category: AbnormalExecution (Alarms raised as a result of an abnormal execution condition being detected) |
rhino.uncaught-exception |
An uncaught exception has been detected. |
|
Category: Activity Handler (Alarms raised by Rhino activity handler) |
rhino.ah.snapshot-age |
The oldest activity handler snapshot is too old. |
|
Category: Clustering (Alarms raised by Rhino cluster state changes) |
rhino.node-failure |
A node left the cluster for some reason other than a management-initiated shutdown. |
|
Category: Configuration Management (Alarms raised by the Rhino configuration manager) |
rhino.config.save-error |
An error occurred while trying to write the file-based configuration for the configuration type specified in the alarm instance. |
rhino.config.read-error |
An error occurred while trying to read the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file. |
rhino.config.activation-failure |
An error occurred while trying to activate the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file. |
|
Category: Database (Alarms raised during database communications) |
rhino.database.no-persistence-config |
A persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
rhino.database.no-persistence-instances |
A persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
rhino.database.persistence-instance-instantiation-failure |
Rhino requires a backing database for persistence of state for failure recovery purposes. A persistent instance defines a connection to a database backend. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |
rhino.database.connection-lost |
Rhino requires a backing database for persistence of state for failure recovery purposes. If no connection to the database backend is available, state cannot be persisted. |
rhino.jdbc.persistence-instance-instantiation-failure |
A persistent instance defines the connection to the database backend. If the persistent instance cannot be instantiated then JDBC connections cannot be made. |
|
Category: Event Router State (Alarms raised by event router state management) |
rhino.state.unlicensed-service |
A licensing problem was detected during service activation at startup. |
rhino.state.unlicensed-raentity |
A licensing problem was detected during resource adaptor entity activation. |
|
Category: Licensing (Alarms raised by Rhino licensing) |
rhino.license.over-limit |
Rate limiter throttling is active. This throttling and hence this alarm only happens in SDK versions of Rhino, not production versions. |
rhino.license.expired |
A license installed in Rhino has passed its expiry time. |
rhino.license.pending-expiry |
A license installed in Rhino is within seven days of its expiry time. |
rhino.license.partially-licensed-host |
The hardware addresses listed in a host-based license only partially match those on the host. |
rhino.license.unlicensed-host |
The hardware addresses listed in a host-based license do not match those on the host. |
rhino.license.unlicensed-rhino |
Rhino does not have a valid license installed. |
rhino.license.over-licensed-capacity |
The work done by a function exceeds licensed capacity. |
rhino.license.unlicensed-function |
A particular function is not licensed. |
|
Category: Limiting (Alarms raised by Rhino limiting) |
rhino.limiting.below-negative-capacity |
A rate limiter is below negative capacity. |
|
Category: Logging (Alarms raised by Rhino logging) |
rhino.logging.appender-error |
An appender has thrown an exception when attempting to pass log messages from a logger to it. |
|
Category: M-lets Startup (Alarms raised by the M-let starter) |
rhino.mlet.loader-failure |
The M-Let starter component could not register itself with the platform MBean server. This normally indicates a serious JVM misconfiguration. |
rhino.mlet.registration-failure |
The M-Let starter component could not register an MBean for a configured m-let. This normally indicates an error in the m-let configuration file. |
|
Category: REM Startup (Alarms raised by embedded REM starter) |
rhino.rem.missing |
This version of Rhino is supposed to contain an embedded instance of REM but it was not found, most likely due to a packaging error. |
rhino.rem.startup |
There was an unexpected problem while starting the embedded REM. This could be because of a port conflict or packaging problem. |
rhino.rem.insufficient-java-version |
Rhino Element Manager requires Java 1.8, but Rhino was started using Java 1.7 |
|
Category: Runtime Environment (Alarms related to the runtime environment) |
rhino.runtime.unsupported.jvm |
This JVM is not a Sun/Oracle JVM. The Rhino SDK will raise this alarm but a Rhino Production instance will refuse to start. |
rhino.runtime.unsupported.javamajorversion |
This Java version is not a supported major version. The Rhino SDK will raise this alarm but a Rhino Production instance will refuse to start. |
rhino.runtime.unsupported.javaupdateversion |
This Java version is older than the minimum recommended update. A Rhino instance will start, but will always raise this alarm. |
rhino.runtime.slee |
SLEE event-routing functions failed to start after node restart |
rhino.runtime.long-filenames-unsupported |
Filenames with the maximum length expected by Rhino are unsupported on this filesystem. Unexpected deployment errors may occur as a result |
|
Category: SAS facility (Alarms raised by Rhino SAS Facility) |
rhino.sas.connection.lost |
Attempting to reconnect to SAS server |
rhino.sas.queue.full |
SAS message queue is full. Some events have not been reported to SAS. |
|
Category: SNMP (Alarms raised by Rhino SNMP) |
rhino.snmp.no-bind-addresses |
The SNMP agent listens for requests received on all network interfaces that match the requested SNMP configuration. If no suitable interfaces can be found that match the requested configuration, then the SNMP agent cannot process any SNMP requests. |
rhino.snmp.bind-failure |
The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If no ports could be bound, the SNMP agent cannot process any SNMP requests. |
rhino.snmp.partial-failure |
The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If this succeeds on some (but not all) interfaces, the SNMP agent can only process requests received via the interfaces that succeeded. |
rhino.snmp.general-failure |
This is a catchall alarm for unexpected failures during agent startup. If an unexpected failure occurs, the state of the SNMP agent is unpredictable and requests may not be successfully processed. |
rhino.snmp.notification-address-failure |
This alarm represents a failure to determine an address from the notification target configuration. This can occur if the notification hostname is not resolvable, or if the specified hostname is not parseable. |
rhino.snmp.duplicate-oid-mapping |
Multiple parameter set type configurations for in-use parameter set types map to the same OID. All parameter set type mappings will remain inactive until the conflict is resolved. |
rhino.snmp.duplicate-counter-mapping |
Multiple counters in the parameter set type configuration map to the same index. The parameter set type mappings will remain inactive until the conflict is resolved. |
|
Category: Scattercast Management (Alarms raised by Rhino scattercast management operations) |
rhino.scattercast.update-reboot-required |
Reboot needed to make scattercast update active. |
|
Category: Service State (Alarms raised by service state management) |
rhino.state.service-activation |
The service threw an exception during service activation at startup. |
|
Category: Threshold Rules (Alarms raised by the threshold alarm rule processor) |
rhino.threshold-rules.rule-failure |
A threshold rule trigger or reset rule failed. |
rhino.threshold-rules.unknown-parameter-set |
A threshold rule trigger or reset rule refers to an unknown statistics parameter set. |
|
Category: Watchdog (Alarms raised by the watchdog) |
rhino.watchdog.no-exit |
The system property watchdog.no_exit is set, enabling override of default node termination behaviour on failed watchdog conditions. This can cause catastrophic results and should never be used. |
rhino.watchdog.forward-timewarp |
A forward timewarp was detected. |
rhino.watchdog.reverse-timewarp |
A reverse timewarp was detected. |
Category: AbnormalExecution
Alarms raised as a result of an abnormal execution condition being detected
Alarm Type |
rhino.uncaught-exception |
---|---|
Level |
WARNING |
Message |
Uncaught exception thrown by thread %s: %s |
Description |
An uncaught exception has been detected. |
Raised |
When an uncaught exception has been thrown. |
Cleared |
Never, must be cleared manually or Rhino restarted with the source of the uncaught exception corrected. |
Category: Activity Handler
Alarms raised by Rhino activity handler
Alarm Type |
rhino.ah.snapshot-age |
---|---|
Level |
WARNING |
Message |
Oldest activity handler snapshot is older than %s, snapshot is %s (from %d), creating thread: %s |
Description |
The oldest activity handler snapshot is too old. |
Raised |
When the age of the oldest activity handler snapshot is greater than the threshold set by the rhino.ah.snapshot_age_warn system property (30s default). |
Cleared |
When the age of the oldest snapshot is less than or equal to the threshold. |
Category: Clustering
Alarms raised by Rhino cluster state changes
Alarm Type |
rhino.node-failure |
---|---|
Level |
MAJOR |
Message |
Node %d has left the cluster |
Description |
A node left the cluster for some reason other than a management-initiated shutdown. |
Raised |
When the cluster state listener detects a node has left the cluster unexpectedly. |
Cleared |
When the failed node returns to the cluster. |
Category: Configuration Management
Alarms raised by the Rhino configuration manager
Alarm Type |
rhino.config.activation-failure |
---|---|
Level |
MAJOR |
Message |
Error activating configuration from file %s. Configuration was replaced with defaults and old configuration file was moved to %s. |
Description |
An error occurred while trying to activate the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file. |
Raised |
When an exception occurs while activating a file-based configuration. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.config.read-error |
---|---|
Level |
MAJOR |
Message |
Error reading configuration from file %s. Configuration was replaced with defaults and old configuration file was moved to %s. |
Description |
An error occurred while trying to read the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file. |
Raised |
When an exception occurs while reading a configuration file. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.config.save-error |
---|---|
Level |
MAJOR |
Message |
Error saving file based configuration: %s |
Description |
An error occurred while trying to write the file-based configuration for the configuration type specified in the alarm instance. |
Raised |
When an exception occurs while writing to a configuration file. |
Cleared |
Never, must be cleared manually. |
Category: Database
Alarms raised during database communications
Alarm Type |
rhino.database.connection-lost |
---|---|
Level |
MAJOR |
Message |
Connection to %s database failed: %s |
Description |
Rhino requires a backing database for persistence of state for failure recovery purposes. If no connection to the database backend is available, state cannot be persisted. |
Raised |
When the connection to a database backend is lost. |
Cleared |
When the connection is restored. |
Alarm Type |
rhino.database.no-persistence-config |
---|---|
Level |
CRITICAL |
Message |
Persistence resource config for %s has been removed |
Description |
A persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
Raised |
When an in-use persistence resource configuration is removed by a configuration update. |
Cleared |
When the persistence resource configuration is restored. |
Alarm Type |
rhino.database.no-persistence-instances |
---|---|
Level |
CRITICAL |
Message |
Persistence resource config for %s has no active persistence instances |
Description |
A persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
Raised |
When an in-use persistence resource configuration has no active persistence instances. |
Cleared |
When at least one active persistence instance exists for the persistence resource configuration. |
Alarm Type |
rhino.database.persistence-instance-instantiation-failure |
---|---|
Level |
MAJOR |
Message |
Unable to instantiate persistence instance %s for database %s |
Description |
Rhino requires a backing database for persistence of state for failure recovery purposes. A persistent instance defines a connection to a database backend. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |
Raised |
When a persistent instance configuration change occurs but instantiation of that persistent instance fails. |
Cleared |
When a correct configuration is installed. |
Alarm Type |
rhino.jdbc.persistence-instance-instantiation-failure |
---|---|
Level |
MAJOR |
Message |
Unable to instantiate persistence instance %s for JDBC configuration with JNDI name %s |
Description |
A persistent instance defines the connection to the database backend. If the persistent instance cannot be instantiated then JDBC connections cannot be made. |
Raised |
When a persistent instance configuration change occurs but instantiation of that persistent instance fails. |
Cleared |
When a correct configuration is installed. |
Category: Event Router State
Alarms raised by event router state management
Alarm Type |
rhino.state.unlicensed-raentity |
---|---|
Level |
MAJOR |
Message |
No valid license for resource adaptor entity "%s" found. The resource adaptor entity has not been activated. |
Description |
A licensing problem was detected during resource adaptor entity activation. |
Raised |
During Rhino startup if a resource adaptor entity is configured to be in the active state but there is no valid license for it. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.state.unlicensed-service |
---|---|
Level |
MAJOR |
Message |
No valid license for service "%s" found. The service has not been activated. |
Description |
A licensing problem was detected during service activation at startup. |
Raised |
During Rhino startup if a service is configured to be in the active state but there is no valid license for it. |
Cleared |
Never, must be cleared manually. |
Category: Licensing
Alarms raised by Rhino licensing
Alarm Type |
rhino.license.expired |
---|---|
Level |
MAJOR |
Message |
License with serial "%s" has expired |
Description |
A license installed in Rhino has passed its expiry time. |
Raised |
When a license expires and there is no superseding license installed. |
Cleared |
When the license is removed or a superseding license is installed. |
Alarm Type |
rhino.license.over-licensed-capacity |
---|---|
Level |
MAJOR |
Message |
Over licensed capacity for function "%s". |
Description |
The work done by a function exceeds licensed capacity. |
Raised |
When the amount of work processed by the named function exceeds the licensed capacity. |
Cleared |
When the amount of work processed by the function becomes less than or equal to the licensed capacity. |
Alarm Type |
rhino.license.over-limit |
---|---|
Level |
MAJOR |
Message |
Rate limiter throttling active, throttled to %d events/second |
Description |
Rate limiter throttling is active. This throttling and hence this alarm only happens in SDK versions of Rhino, not production versions. |
Raised |
When there is more incoming work than allowed by the licensed limit so Rhino starts rejecting some. |
Cleared |
When the total input rate (both accepted and rejected work) drops below the licensed limit. |
Alarm Type |
rhino.license.partially-licensed-host |
---|---|
Level |
MINOR |
Message |
Host "%s" is not fully licensed. Not all hardware addresses on this host match those licensed. Please request a new license for host "%s". |
Description |
The hardware addresses listed in a host-based license only partially match those on the host. |
Raised |
When a host-based license with invalid host addresses is installed. |
Cleared |
When the license is removed, or a superseding license is installed. |
Alarm Type |
rhino.license.pending-expiry |
---|---|
Level |
MAJOR |
Message |
License with serial "%s" is due to expire on %s |
Description |
A license installed in Rhino is within seven days of its expiry time. |
Raised |
Seven days before a license will expire and there is no superseding license installed. |
Cleared |
When the license expires, the license is removed, or a superseding license is installed. |
Alarm Type |
rhino.license.unlicensed-function |
---|---|
Level |
MAJOR |
Message |
There are no valid licenses installed for function "%s" and version "%s". |
Description |
A particular function is not licensed. |
Raised |
When a unit of an unlicensed function is requested. |
Cleared |
When a license is installed that licenses a particular function, and another unit is requested. |
Alarm Type |
rhino.license.unlicensed-host |
---|---|
Level |
MINOR |
Message |
"%s" is not licensed. Hardware addresses on this host did not match those licensed, or hostname has changed. Please request a new license for host "%s". |
Description |
The hardware addresses listed in a host-based license do not match those on the host. |
Raised |
When a host-based license with invalid host addresses is installed. |
Cleared |
When the license is removed, or a superseding license is installed. |
Alarm Type |
rhino.license.unlicensed-rhino |
---|---|
Level |
MAJOR |
Message |
Rhino platform is no longer licensed |
Description |
Rhino does not have a valid license installed. |
Raised |
When a license expires or is removed leaving Rhino in an unlicensed state. |
Cleared |
When an appropriate license is installed. |
Category: Limiting
Alarms raised by Rhino limiting
Alarm Type |
rhino.limiting.below-negative-capacity |
---|---|
Level |
WARNING |
Message |
Token count in rate limiter "%s" capped at negative saturation point on node %d. Too much work has been forced. Alarm will clear once token count >= 0. |
Description |
A rate limiter is below negative capacity. |
Raised |
By a rate limiter when a very large number of units have been forcibly used and the internal token counter has reached the biggest possible negative number (-2,147,483,648). |
Cleared |
When the token count becomes greater than or equal to zero. |
Category: Logging
Alarms raised by Rhino logging
Alarm Type |
rhino.logging.appender-error |
---|---|
Level |
MAJOR |
Message |
An error occurred logging to an appender: %s |
Description |
An appender has thrown an exception when attempting to pass log messages from a logger to it. |
Raised |
When an appender throws an AppenderLoggingException when a logger tries to log to it. |
Cleared |
When the problem with the given appender has been resolved and the logging configuration is updated. |
Category: M-lets Startup
Alarms raised by the M-let starter
Alarm Type |
rhino.mlet.loader-failure |
---|---|
Level |
MAJOR |
Message |
Error registering MLetLoader MBean |
Description |
The M-Let starter component could not register itself with the platform MBean server. This normally indicates a serious JVM misconfiguration. |
Raised |
During Rhino startup if an error occurred registering the m-let loader component with the MBean server. |
Cleared |
Never, must be cleared manually or Rhino restarted. |
Alarm Type |
rhino.mlet.registration-failure |
---|---|
Level |
MINOR |
Message |
Could not create or register MLet: %s |
Description |
The M-Let starter component could not register an MBean for a configured m-let. This normally indicates an error in the m-let configuration file. |
Raised |
During Rhino startup if an error occurred starting a configured m-let. |
Cleared |
Never, must be cleared manually or Rhino restarted with updated configuration. |
Category: REM Startup
Alarms raised by embedded REM starter
Alarm Type |
rhino.rem.insufficient-java-version |
---|---|
Level |
MINOR |
Message |
This Java version (%s) is insufficient for starting the embedded Rhino Element Manager. REM requires Java 1.8. |
Description |
Rhino Element Manager requires Java 1.8, but Rhino was started using Java 1.7 |
Raised |
During Rhino startup if running on Java 1.7 and embedded REM is enabled. |
Cleared |
Never, must be cleared manually, Rhino restarted using Java 1.8, or embedded REM disabled. |
Alarm Type |
rhino.rem.missing |
---|---|
Level |
MINOR |
Message |
Rhino Element Manager classes not found, embedded REM is disabled. |
Description |
This version of Rhino is supposed to contain an embedded instance of REM but it was not found, most likely due to a packaging error. |
Raised |
During Rhino startup if the classes could not be found to start the embedded REM. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.rem.startup |
---|---|
Level |
MINOR |
Message |
Could not start embedded Rhino Element Manager |
Description |
There was an unexpected problem while starting the embedded REM. This could be because of a port conflict or packaging problem. |
Raised |
During Rhino startup if an error occurred starting the embedded REM. |
Cleared |
Never, must be cleared manually or Rhino restarted with updated configuration. |
Category: Runtime Environment
Alarms related to the runtime environment
Alarm Type |
rhino.runtime.long-filenames-unsupported |
---|---|
Level |
WARNING |
Message |
Filenames with a length of %s characters are unsupported on this filesystem. Unexpected deployment errors may occur as a result |
Description |
Filenames with the maximum length expected by Rhino are unsupported on this filesystem. Unexpected deployment errors may occur as a result |
Raised |
During Rhino startup if the long filename check fails. |
Cleared |
Never, must be cleared manually or Rhino restarted after being installed on a filesystem supporting long filenames. |
Alarm Type |
rhino.runtime.slee |
---|---|
Level |
CRITICAL |
Message |
SLEE event-routing functions failed to start after node restart |
Description |
SLEE event-routing functions failed to start after node restart |
Raised |
During Rhino startup if SLEE event-routing functions fail to restart. |
Cleared |
Never, must be cleared manually or the node restarted. |
Alarm Type |
rhino.runtime.unsupported.javamajorversion |
---|---|
Level |
WARNING |
Message |
This Java version (%s) is not supported for Rhino production instances. Supported versions are: %s |
Description |
This Java version is not a supported major version. The Rhino SDK will raise this alarm but a Rhino Production instance will refuse to start. |
Raised |
During Rhino SDK startup if an unsupported JVM was detected. |
Cleared |
Never, must be cleared manually or Rhino restarted with a supported Java version. |
Alarm Type |
rhino.runtime.unsupported.javaupdateversion |
---|---|
Level |
MINOR |
Message |
This Java version (%s) is older than the minimum recommended update (%s) |
Description |
This Java version is older than the minimum recommended update. A Rhino instance will start, but will always raise this alarm. |
Raised |
During Rhino startup if an old Java version was detected. |
Cleared |
Never, must be cleared manually or Rhino restarted with a supported Java version. |
Alarm Type |
rhino.runtime.unsupported.jvm |
---|---|
Level |
WARNING |
Message |
This JVM (%s) is not supported for Rhino production instances. Supported JVMs are: %s |
Description |
This JVM is not a Sun/Oracle JVM. The Rhino SDK will raise this alarm but a Rhino Production instance will refuse to start. |
Raised |
During Rhino SDK startup if an unsupported JVM was detected. |
Cleared |
Never, must be cleared manually or Rhino restarted with a supported JVM. |
Category: SAS facility
Alarms raised by Rhino SAS Facility
Alarm Type |
rhino.sas.connection.lost |
---|---|
Level |
MAJOR |
Message |
Connection to SAS server at %s:%d is down |
Description |
Attempting to reconnect to SAS server |
Raised |
When SAS client loses connection to server |
Cleared |
On reconnect |
Alarm Type |
rhino.sas.queue.full |
---|---|
Level |
WARNING |
Message |
SAS message queue is full |
Description |
SAS message queue is full. Some events have not been reported to SAS. |
Raised |
When SAS facility outgoing message queue is full |
Cleared |
When the queue is not full for at least sas.queue_full_interval |
Category: SNMP
Alarms raised by Rhino SNMP
Alarm Type |
rhino.snmp.bind-failure |
---|---|
Level |
MAJOR |
Message |
The SNMP agent could not be started on node %d: no addresses were successfully bound. |
Description |
The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If no ports could be bound, the SNMP agent cannot process any SNMP requests. |
Raised |
When the SNMP Agent attempts to start listening for requests, but no port in the configured range on any configured interface could be used. |
Cleared |
When the SNMP Agent is stopped. |
Alarm Type |
rhino.snmp.duplicate-counter-mapping |
---|---|
Level |
WARNING |
Message |
Duplicate counter mappings in parameter set type %s |
Description |
Multiple counters in the parameter set type configuration map to the same index. The parameter set type mappings will remain inactive until the conflict is resolved. |
Raised |
When an in-use parameter set type has a configuration with duplicate counter mappings. |
Cleared |
When the conflict is resolved, either by changing the relevant counter mappings, or if the parameter set type is removed from use. |
Alarm Type |
rhino.snmp.duplicate-oid-mapping |
---|---|
Level |
WARNING |
Message |
Duplicate parameter set type mapping configurations for OID %s |
Description |
Multiple parameter set type configurations for in-use parameter set types map to the same OID. All parameter set type mappings will remain inactive until the conflict is resolved. |
Raised |
When multiple in-use parameter set types have configurations that map to the same OID. |
Cleared |
When the conflict is resolved, either by changing the OID mappings in the relevant parameter set type configurations, or if a parameter set type in conflict is removed from use. |
Alarm Type |
rhino.snmp.general-failure |
---|---|
Level |
MINOR |
Message |
The SNMP agent encountered an error during startup: %s |
Description |
This is a catchall alarm for unexpected failures during agent startup. If an unexpected failure occurs, the state of the SNMP agent is unpredictable and requests may not be successfully processed. |
Raised |
When the SNMP Agent attempts to start listening for requests, but there is an unexpected failure not covered by other alarms. |
Cleared |
When the SNMP Agent is stopped. |
Alarm Type |
rhino.snmp.no-bind-addresses |
---|---|
Level |
MAJOR |
Message |
The SNMP agent could not be started on node %d: no suitable bind addresses available. |
Description |
The SNMP agent listens for requests received on all network interfaces that match the requested SNMP configuration. If no suitable interfaces can be found that match the requested configuration, then the SNMP agent cannot process any SNMP requests. |
Raised |
When the SNMP Agent attempts to start listening for requests, but no suitable network interface addresses can be found to bind to. |
Cleared |
When the SNMP Agent is stopped. |
Alarm Type |
rhino.snmp.notification-address-failure |
---|---|
Level |
MAJOR |
Message |
Failed to create notification target for address "%s". |
Description |
This alarm represents a failure to determine an address from the notification target configuration. This can occur if the notification hostname is not resolvable, or if the specified hostname is not parseable. |
Raised |
During SNMP agent start if a notification target address cannot be determined (e.g. due to a hostname resolution failing). |
Cleared |
When the SNMP Agent is stopped. |
Alarm Type |
rhino.snmp.partial-failure |
---|---|
Level |
MINOR |
Message |
The SNMP agent failed to bind to the following addresses: %s |
Description |
The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If this succeeds on some (but not all) interfaces, the SNMP agent can only process requests received via the interfaces that succeeded. |
Raised |
When the SNMP Agent attempts to start listening for requests, and only some of the configured interfaces successfully bound a UDP port. |
Cleared |
When the SNMP Agent is stopped. |
Category: Scattercast Management
Alarms raised by Rhino scattercast management operations
Alarm Type |
rhino.scattercast.update-reboot-required |
---|---|
Level |
CRITICAL |
Message |
Scattercast endpoints have been updated. A cluster reboot is required to apply the update. An automatic reboot has been triggered, Manual intervention required if the reboot fails. |
Description |
Reboot needed to make scattercast update active. |
Raised |
When scattercast endpoints are updated. |
Cleared |
On node reboot. |
Category: Service State
Alarms raised by service state management
Alarm Type |
rhino.state.service-activation |
---|---|
Level |
MAJOR |
Message |
Service "%s" failed to reactivate successfully. |
Description |
The service threw an exception during service activation at startup. |
Raised |
During Rhino startup if a service is configured to be in the active state but throws an exception while handling the activation request. |
Cleared |
If the service is manually reactivated successfully, or uninstalled. |
Category: Threshold Rules
Alarms raised by the threshold alarm rule processor
Alarm Type |
rhino.threshold-rules.rule-failure |
---|---|
Level |
WARNING |
Message |
Threshold rule %s trigger or reset condition failed to run |
Description |
A threshold rule trigger or reset rule failed. |
Raised |
When a threshold rule condition cannot be evaluated, for example it refers to a statistic that does not exist. |
Cleared |
When the threshold rule condition is corrected. |
Alarm Type |
rhino.threshold-rules.unknown-parameter-set |
---|---|
Level |
WARNING |
Message |
Threshold rule %s refers to unknown statistics parameter set '%s' |
Description |
A threshold rule trigger or reset rule refers to an unknown statistics parameter set. |
Raised |
When a threshold rule condition cannot be evaluated because it refers to a statistics parameter set that does not exist. |
Cleared |
When the threshold rule condition is corrected. |
Category: Watchdog
Alarms raised by the watchdog
Alarm Type |
rhino.watchdog.forward-timewarp |
---|---|
Level |
WARNING |
Message |
Forward timewarp of %sms detected at %s |
Description |
A forward timewarp was detected. |
Raised |
When the system clock is detected to have progressed by an amount exceeding the sum of the watchdog check interval and the maximum pause margin. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.watchdog.no-exit |
---|---|
Level |
CRITICAL |
Message |
System property watchdog.no_exit is set, watchdog will be terminated rather than killing the node if a failed watchdog condition occurs |
Description |
The system property watchdog.no_exit is set, enabling override of default node termination behaviour on failed watchdog conditions. This can cause catastrophic results and should never be used. |
Raised |
When the watchdog.no_exit system property is set. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.watchdog.reverse-timewarp |
---|---|
Level |
WARNING |
Message |
Reverse timewarp of %sms detected at %s |
Description |
A reverse timewarp was detected. |
Raised |
When the system clock is detected to have progressed by an amount less than the difference between the watchdog check interval and the reverse timewarp margin. |
Cleared |
Never, must be cleared manually. |
Usage
As well as an overview of usage, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples and links to related javadocs:
Procedure | rhino-console command | MBean → Operations |
---|---|---|
dumpusagestats |
Usage → get<usage-parameter-name> |
|
setusagenotificationsenabled |
UsageNotificationManager → set<usage-parameter-name> NotificationsEnabled |
|
listusagenotificationsenabled |
UsageNotificationManager → get<usage-parameter-name> NotificationsEnabled |
|
createusageparameterset |
ServiceUsage → createUsageParameterSet ProfileTableUsage → createUsageParameterSet ResourceUsage → createUsageParameterSet |
|
listusageparametersets |
ServiceUsage → getUsageParameterSets ProfileTableUsage → getUsageParameterSets ResourceUsage → getUsageParameterSets |
|
removeusageparameterset |
ServiceUsage → removeUsageParameterSet ProfileTableUsage → removeUsageParameterSet ResourceUsage → removeUsageParameterSet |
About Usage
A usage parameter is a parameter that an object in the SLEE can update to provide usage information.
There are two types:
-
Counter-type usage parameters have values that can be incremented or decremented.
-
Sample-type usage parameters accumulate sample data.
Accessing usage parameters
Administrators can access usage parameters through the SLEE’s management interface.
Management clients access usage parameters through MBeans generated from the usage parameters interface declared in an SBB, resource adaptor, or profile specification. Usage parameters cannot be created through the management interface; instead, a usage parameters interface must be declared in the SLEE component. For example, an SBB declares an sbb-usage-parameters-interface
element in its deployment descriptor (similar procedures apply for resource adaptors and profile specifications).
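As a hedged illustration (the interface and parameter names here are hypothetical), a declared usage parameters interface follows the SLEE naming convention: increment<name> methods for counter-type parameters and sample<name> methods for sample-type parameters:

package com.example.vpn;

// Hypothetical SBB usage parameters interface. The SBB's deployment
// descriptor would reference this type via its
// sbb-usage-parameters-interface element.
public interface VpnUsage {
    // counter-type usage parameter named "callAttempts"
    void incrementCallAttempts(long value);

    // sample-type usage parameter named "callSetupTime"
    void sampleCallSetupTime(long value);
}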
You can also use notifications to output usage parameters to management clients.
Creating named usage parameter sets
By default, the SLEE creates unnamed usage parameter sets for a notification source. You can also create named usage parameter sets, for example to hold multiple values of usage parameters for the same notification source.
Rhino usage extensions
To alleviate the limitations of the SLEE-defined usage mechanism, Rhino provides a usage extension mechanism that allows an SBB or resource adaptor to declare multiple usage parameters interfaces, and defines a Usage facility with which SBBs and resource adaptors can manage and access their own usage parameter sets.
Viewing Usage Parameters
To view the current value of a usage parameter, use the following rhino-console command or related MBean operation.
Whereas the MBean operation below can only get individual usage parameter values, the console command outputs current values of all usage parameters for a specified notification source. |
Console command: dumpusagestats
Command |
dumpusagestats <type> <notif-source> [param-set-name] [reset] Description Dump the current values of the usage parameters for the specified notification source. The usage parameter set name is optional and if not specified the values for the unnamed (or root) parameter set are returned. If [reset] is specified, the values of the usage parameters are reset after being obtained |
---|---|
Example |
$ ./rhino-console dumpusagestats sbb \ "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]" parameter-name counter-value sample-stats type ------------------- -------------- ------------- -------- callAttempts 0 counter missingParameters 0 counter offNetCalls 0 counter onNetCalls 0 counter unknownShortCode 0 counter unknownSubscribers 0 counter 6 rows |
MBean operation: get<usage-parameter-name>
MBean |
|
---|---|
SLEE-defined |
Counter-type usage parameters
public long get<usage-parameter-name>(boolean reset) throws ManagementException;
Sample-type usage parameters
public SampleStatistics get<usage-parameter-name>(boolean reset) throws ManagementException; |
Arguments |
This operation requires that you specify whether the values are to be reset after being read:
|
Return value |
Operations for counter-type usage parameters return the current value of the counter. Operations for sample-type usage parameters return a SampleStatistics object. |
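As a hedged sketch, a generic JMX client could read a counter-type parameter by invoking the typed getter on the Usage MBean. Here mbs (an MBeanServerConnection) and usageMBean (the ObjectName of the Usage MBean for the notification source) are assumed to have been obtained already, and getCallAttempts corresponds to a hypothetical parameter named callAttempts:

// Read the current counter value without resetting it.
Object value = mbs.invoke(usageMBean, "getCallAttempts",
        new Object[] { Boolean.FALSE },               // false = do not reset
        new String[] { boolean.class.getName() });
System.out.println("callAttempts = " + value);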
Usage Notifications
You can enable or disable usage notifications, and list which usage notifications are enabled:
Enabling and Disabling Usage Notifications
To enable or disable usage notifications, use the following rhino-console command or related MBean operation.
The notifications-enabled flag
To enable notifications to output usage parameters to management clients, set the usage |
Console command: setusagenotificationsenabled
Command |
setusagenotificationsenabled <type> <notif-source> [upi-type] <param-name> <flag> Description Set the usage notifications-enabled flag for specified usage notification source's usage parameter. The usage parameters interface type is optional and if not specified the root usage parameters interface type is used |
---|---|
Example |
$ ./rhino-console setusagenotificationsenabled sbb \ "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]" \ callAttempts true Usage notifications for usage parameter callAttempts for SbbNotification[service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]] have been enabled |
MBean operation: set<usage-parameter-name>NotificationsEnabled
MBean |
|
---|---|
SLEE-defined |
public void set<usage-parameter-name>NotificationsEnabled(boolean enabled) throws ManagementException; |
Arguments |
|
Notes |
Enabling usage notification
Usage notifications are enabled or disabled on a per-usage-parameter basis for each notification source. That means that if usage notifications are enabled for a particular usage parameter, then whenever that usage parameter is updated in any usage parameter set belonging to the notification source, a usage notification will be generated by the SLEE. |
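A hedged sketch of receiving these notifications in a JMX client follows; mbs and usageMBean are assumed to have been obtained as in the earlier example:

import javax.management.NotificationListener;
import javax.slee.management.UsageNotification;

// Print each usage notification emitted by the Usage MBean.
NotificationListener listener = (notification, handback) -> {
    if (notification instanceof UsageNotification) {
        UsageNotification usage = (UsageNotification) notification;
        System.out.println(usage.getUsageParameterName() + " = " + usage.getValue());
    }
};
mbs.addNotificationListener(usageMBean, listener, null, null);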
Viewing Usage Notification Status
To list usage parameter status, use the following rhino-console command or related MBean operation.
To see which usage parameters management clients are receiving through notifications, you can list usage parameter status. |
Console command: listusagenotificationsenabled
Command |
listusagenotificationsenabled <type> <notif-source> [upi-type] Description List the usage notification manager flags for the specified notification source. The usage parameters interface type is optional and if not specified the flags for the root usage parameters interface type are returned |
---|---|
Example |
$ ./rhino-console listusagenotificationsenabled sbb \ "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]" parameter-name notifications-enabled ------------------- ---------------------- callAttempts true missingParameters false offNetCalls false onNetCalls false unknownShortCode false unknownSubscribers false 6 rows |
MBean operation: get<usage-parameter-name>NotificationsEnabled
MBean |
|
---|---|
SLEE-defined |
public boolean get<usage-parameter-name>NotificationsEnabled() throws ManagementException; |
Arguments |
|
Named Usage Parameter Sets
By default, the SLEE creates unnamed usage parameter sets for a notification source. You can also create named usage parameter sets, for example to hold multiple values of usage parameters for the same notification source.
Rhino includes facilities for creating, listing and removing named usage parameter sets for services, resource adaptor entities and profile tables.
This section includes the following procedures:
Usage parameter sets for internal subsystems (not listed using console command)
The SLEE specification also includes usage parameter sets for "internal subsystems". You can list these, but not create or remove them, since they are part of the SLEE implementation. However, Rhino uses its own statistics API to collect statistics from internal subsystems — so if you try to list usage parameter set names for an internal subsystem using |
Creating Usage Parameter Sets
To create a named usage parameter set for services, resource adaptor entities or profile tables, use the following rhino-console or related MBean operations.
Services
Console command: createusageparameterset
Command |
createusageparameterset <type> <notif-source> <param-set-name> Description Create a new usage parameter set with the specified name for the specified notification source |
---|---|
Example |
$ ./rhino-console createusageparameterset sbb \ "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]" \ firstLook created usage parameter set firstLook for SbbNotification[service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]] |
MBean operation: createUsageParameterSet
MBean |
|
---|---|
SLEE-defined |
public void createUsageParameterSet(SbbID id, String paramSetName) throws NullPointerException, UnrecognizedSbbException, InvalidArgumentException, UsageParameterSetNameAlreadyExistsException, ManagementException; |
Arguments |
|
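Equivalently, a generic JMX client could invoke this operation as sketched below; mbs and serviceUsageMBean (the ObjectName of the service's ServiceUsage MBean) are assumed to have been obtained already, and the identifiers match the console example above:

import javax.slee.SbbID;

// Create a named usage parameter set for the VPN SBB within the service.
SbbID sbbId = new SbbID("VPN SBB", "OpenCloud", "0.2");
mbs.invoke(serviceUsageMBean, "createUsageParameterSet",
        new Object[] { sbbId, "firstLook" },
        new String[] { SbbID.class.getName(), String.class.getName() });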
Resource adaptor entities
Console command: createusageparameterset
Command |
createusageparameterset <type> <notif-source> <param-set-name> Description Create a new usage parameter set with the specified name for the specified notification source |
---|---|
Example |
$ ./rhino-console createusageparameterset resourceadaptorentity \ "entity=cdr" \ cdr-usage created usage parameter set cdr-usage for RAEntityNotification[entity=cdr] |
MBean operation: createUsageParameterSet
MBean |
|
---|---|
SLEE-defined |
public void createUsageParameterSet(String paramSetName) throws NullPointerException, InvalidArgumentException, UsageParameterSetNameAlreadyExistsException, ManagementException; |
Arguments |
|
Profile tables
Console command: createusageparameterset
Command |
createusageparameterset <type> <notif-source> <param-set-name> Description Create a new usage parameter set with the specified name for the specified notification source |
---|---|
Example |
$ ./rhino-console createusageparameterset profiletable \ "table=PostpaidChargingPrefixTable" \ ppprefix-usage created usage parameter set ppprefix-usage for ProfileTableNotification[table=PostpaidChargingPrefixTable] |
MBean operation: createUsageParameterSet
MBean |
|
---|---|
SLEE-defined |
public void createUsageParameterSet(String paramSetName) throws NullPointerException, InvalidArgumentException, UsageParameterSetNameAlreadyExistsException, ManagementException; |
Arguments |
|
Listing Usage Parameter Sets
To list named usage parameter sets for services, resource adaptor entities or profile tables, use the following rhino-console or related MBean operations.
Services
Console command: listusageparametersets
Command |
listusageparametersets <type> <notif-source> Description List the usage parameter sets for the specified notification source. The unnamed (or root) parameter set is not included in this list |
---|---|
Example |
$ ./rhino-console listusageparametersets sbb \ "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]" firstLook secondLook |
MBean operation: getUsageParameterSets
MBean |
|
---|---|
SLEE-defined |
public String[] getUsageParameterSets(SbbID id) throws NullPointerException, UnrecognizedSbbException, InvalidArgumentException, ManagementException |
Arguments |
|
Resource adaptor entities
Console command: listusageparametersets
Command |
listusageparametersets <type> <notif-source> Description List the usage parameter sets for the specified notification source. The unnamed (or root) parameter set is not included in this list |
---|---|
Example |
$ ./rhino-console listusageparametersets resourceadaptorentity \ "entity=cdr" cdr-usage |
MBean operation: getUsageParameterSets
MBean |
|
---|---|
SLEE-defined |
public String[] getUsageParameterSets() throws ManagementException |
Profile tables
Console command: listusageparametersets
Command |
listusageparametersets <type> <notif-source> Description List the usage parameter sets for the specified notification source. The unnamed (or root) parameter set is not included in this list |
---|---|
Example |
$ ./rhino-console listusageparametersets profiletable \ "table=PostpaidChargingPrefixTable" ppprefix-usage |
MBean operation: getUsageParameterSets
MBean |
|
---|---|
SLEE-defined |
public String[] getUsageParameterSets() throws ManagementException |
Removing Usage Parameter Sets
To remove a named usage parameter set for services, resource adaptor entities or profile tables, use the following rhino-console or related MBean operations.
Services
Console command: removeusageparameterset
Command |
removeusageparameterset <type> <notif-source> <param-set-name> Description Remove the existing usage parameter set with the specified name from the specified notification source |
---|---|
Example |
$ ./rhino-console removeusageparameterset sbb \ "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]" \ secondLook removed usage parameter set secondLook for SbbNotification[service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]] |
MBean operation: removeUsageParameterSet
MBean |
|
---|---|
SLEE-defined |
public void removeUsageParameterSet(SbbID id, String paramSetName) throws NullPointerException, UnrecognizedSbbException, InvalidArgumentException, UnrecognizedUsageParameterSetNameException, ManagementException; |
Arguments |
|
Resource adaptor entities
Console command: removeusageparameterset
Command |
removeusageparameterset <type> <notif-source> <param-set-name> Description Remove the existing usage parameter set with the specified name from the specified notification source |
---|---|
Example |
$ ./rhino-console removeusageparameterset resourceadaptorentity \ "entity=cdr" \ cdr-usage removed usage parameter set cdr-usage for RAEntityNotification[entity=cdr] |
MBean operation: removeUsageParameterSet
MBean |
|
---|---|
SLEE-defined |
public void removeUsageParameterSet(String paramSetName) throws NullPointerException, InvalidArgumentException, UnrecognizedUsageParameterSetNameException, ManagementException; |
Argument |
|
Profile tables
Console command: removeusageparameterset
Command |
removeusageparameterset <type> <notif-source> <param-set-name> Description Remove the existing usage parameter set with the specified name from the specified notification source |
---|---|
Example |
$ ./rhino-console removeusageparameterset profiletable \ "table=PostpaidChargingPrefixTable" \ ppprefix-usage removed usage parameter set ppprefix-usage for ProfileTableNotification[table=PostpaidChargingPrefixTable] |
MBean operation: removeUsageParameterSet
MBean |
|
---|---|
SLEE-defined |
public void removeUsageParameterSet(String paramSetName) throws NullPointerException, InvalidArgumentException, UnrecognizedUsageParameterSetNameException, ManagementException; |
Argument |
|
User Transactions
As well as an overview of user transactions, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:
Procedure | rhino-console command(s) | MBean(s) → Operation |
---|---|---|
Starting a user transaction |
startusertransaction |
User Transaction Management → startUserTransaction |
Committing a user transaction |
commitusertransaction |
User Transaction Management → commitUserTransaction |
Rolling back a user transaction |
rollbackusertransaction |
User Transaction Management → rollbackUserTransaction |
About User Transactions
Using the User Transaction Management
MBean, a client can demarcate transaction boundaries for a subset of profile-management operations by:
-
starting a user transaction
-
performing some profile-management operations, across a number of different profiles (in the context of that transaction)
-
then committing the transaction — resulting in an atomic update of profile state.
Binding user transactions with authenticated subjects
The SLEE binds user transactions to the javax.security.auth.Subject
associated with the invoking thread. For all user-transaction management, the thread invoking the management operation must therefore be associated with an authenticated subject. The command console interface handles this task as part of the client-login process. (Other user-provided m-lets installed in the Rhino SLEE will need to satisfy this requirement in their own way.)
Executing Profile Provisioning operations in a user transaction
The following operations on the Profile Provisioning
MBean support execution in a user transaction: createProfile
, createProfiles
, removeProfile
, getDefaultProfile
, getProfile
, and importProfiles
Furthermore, accessing a Profile MBean while a user transaction is active:
-
enlists that MBean into that user transaction
-
changes that MBean to the read/write state
-
puts any changes to the profile in context of the user transaction.
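The overall pattern, expressed as a hedged JMX sketch (the ObjectName below is illustrative, and mbs is an MBeanServerConnection whose invoking thread is associated with an authenticated subject, as discussed above):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

ObjectName userTx = new ObjectName("com.opencloud.rhino:name=UserTransactionManagement");
mbs.invoke(userTx, "startUserTransaction", null, null);
try {
    // ... invoke supported Profile Provisioning operations and Profile MBean
    // updates here; they are enlisted in the active transaction ...
    mbs.invoke(userTx, "commitUserTransaction", null, null);    // atomic update
} catch (Exception e) {
    mbs.invoke(userTx, "rollbackUserTransaction", null, null);  // discard changes
    throw e;
}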
Committing or rolling back profiles enlisted in user transactions
You cannot invoke the |
Starting User Transactions
To start a user transaction, use the following rhino-console command or related MBean operation.
Console command: startusertransaction
Command |
startusertransaction Description Start a client-demarcated transaction. Note that only a limited set of Rhino management operations support user transactions |
---|---|
Example |
$ ./rhino-console startusertransaction |
MBean operation: startUserTransaction
MBean |
|
---|---|
Rhino extension |
void startUserTransaction() throws com.opencloud.rhino.management.usertx.NoAuthenticatedSubjectException, NotSupportedException, ManagementException; |
Committing User Transactions
To commit a user transaction, use the following rhino-console command or related MBean operation.
Console command: commitusertransaction
Command |
commitusertransaction Description Commit a client-demarcated transaction |
---|---|
Example |
$ ./rhino-console commitusertransaction |
MBean operation: commitUserTransaction
MBean |
|
---|---|
Rhino extension |
void commitUserTransaction() throws com.opencloud.rhino.management.usertx.NoAuthenticatedSubjectException, InvalidStateException, ProfileVerificationException, HeuristicMixedException, HeuristicRollbackException, RollbackException, ManagementException; |
Rolling Back User Transactions
To rollback a user transaction, use the following rhino-console command or related MBean operation.
Console command: rollbackusertransaction
Command |
rollbackusertransaction Description Roll back a client-demarcated transaction |
---|---|
Example |
$ ./rhino-console rollbackusertransaction |
MBean operation: rollbackUserTransaction
MBean |
|
---|---|
Rhino extension |
void rollbackUserTransaction() throws com.opencloud.rhino.management.usertx.NoAuthenticatedSubjectException, InvalidStateException, ManagementException; |
Auditing Management Operations
Rhino logs all management operations to two log files in the working directory of each Rhino node (work/logs
by default):
-
management.csv
, a plain text CSV file -
encrypted.management.csv
, a second, encrypted copy of the logs.
You can use the encrypted version to detect tampering with the plain text copy — decode with the decrypt-management-log script, in the client/bin directory. |
The management audit logs roll over once they reach 100MB, with an unlimited number of backup files. This logging configuration is currently hard-coded. |
The format of the management audit log can be chosen via the rhino.audit.log_format system property; see the system property documentation for more detail. |
What’s in the log file?
Rhino management operation logs include the following fields:
Field | Description |
---|---|
date |
A timestamp in the form yyyy-MM-dd HH:mm:ss.SSS. |
uniqueID |
An identifier used to correlate a set of log lines for a single management operation. All of the log lines from the same operation will have the same uniqueID. |
opcode |
Uniquely identifies the type of operation. |
user |
The name of the user invoking the management operation, or |
roles |
Any roles associated with the user. |
access |
Identifies whether the operation results in a state change of some sort. May be read or write. |
client address |
The IP address of the client invoking the management operation. |
namespace |
The namespace in which the management operation was invoked. Empty if it is the default namespace. |
MBean name |
The JMX object name of the MBean on which the operation was invoked. |
operation type |
The general type of operation. |
operation name |
The name of the invoked method or attribute. |
arguments |
The contents of all arguments passed to the management operation. Byte array arguments are displayed as a length and a hash. |
duration |
How long (in milliseconds) the operation took. |
result |
Either ok or failed, indicating the outcome of the operation. |
failure reason |
A text string indicating why an operation failed. (Only present for failed results.) |
All management operations except for AUTHENTICATION type operations come in pairs with the first entry indicating the start of an operation, and the second entry indicating success or failure, as well as how long the operation took. Only the result lines make use of the duration , result , and failure reason fields. |
For a list of all operations currently recognised by the auditing subsystem, run the getopcodexml command from the command-line console. It will return the complete XML representation of all known management operations. |
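As a hedged illustration of consuming the audit log, the sketch below groups entries by their uniqueID field so that each operation's start and result lines appear together. It naively splits on commas (real entries may contain quoted commas) and assumes the default work/logs location:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AuditCorrelator {
    public static void main(String[] args) throws IOException {
        Map<String, List<String>> byId = new LinkedHashMap<>();
        for (String line : Files.readAllLines(Paths.get("work/logs/management.csv"))) {
            String[] fields = line.split(",", 3);   // date, uniqueID, rest
            if (fields.length < 2) continue;
            byId.computeIfAbsent(fields[1].trim(), k -> new ArrayList<>()).add(line);
        }
        byId.forEach((id, lines) -> {
            System.out.println("== operation " + id + " ==");
            lines.forEach(System.out::println);
        });
    }
}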
Operation types
The operation type
field may contain one of the following values:
Type | Result type | Description |
---|---|---|
AUTHENTICATION |
n/a |
A successful or failed authentication attempt. |
INVOKE |
INVOKE (RESULT) |
An MBean invoke operation. |
GET |
GET (RESULT) |
An MBean attribute get operation. |
SET |
SET (RESULT) |
An MBean attribute set operation. |
GET-ATTRIBUTES |
GET-ATTRIBUTES (RESULT) |
An MBean bulk-attributes getAttributes operation. |
SET-ATTRIBUTES |
SET-ATTRIBUTES (RESULT) |
An MBean bulk-attributes setAttributes operation. |
Managing the audit level
The auditing subsystem provides two console commands to manage what gets logged to the management audit log:
getmanagementauditlevel Description Returns the current level of management operation auditing.
setmanagementauditlevel <none \| writes \| all> Description Sets the current level of management operation auditing.
The writes level is useful, for example, to avoid excessive log entries from an automated management client that continually polls Rhino state using JMX. |
Rhino always logs changes to the audit level (irrespective of the current level). |
Example 1: Resource adaptor deployment and activation
The following example shows management logs from deploying a resource adaptor, creating a resource adaptor entity for it, and activating that resource adaptor entity.
The log shows the resource adaptor activated twice in a row, the second operation failing (because the RA was already activated) — see the result and failure fields. |
date | uniqueID | opcode | user | roles | access | client address | MBean name | operation type | operation name | arguments | duration | result | failure reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2010-06-08 14:22:06.850 |
101:176452077447:22 |
|
admin |
|
|
192.168.0.7 |
|
AUTHENTICATION |
|
|
|
ok |
|
2010-06-08 14:22:35.622 |
101:176452077447:29 |
19000 |
admin |
admin |
write |
192.168.0.7 |
javax.slee.management: name=Deployment |
INVOKE |
install |
[file:/home/alex/simple/simple-ra-ha.jar, [byte array, length=65164, md5sum=96322071e6128333bdee3364a224b48c] |
|
|
|
2010-06-08 14:22:38.961 |
101:176452077447:29 |
19000 |
admin |
admin |
write |
192.168.0.7 |
javax.slee.management: name=Deployment |
INVOKE (RESULT) |
install |
[file:/home/alex/simple/simple-ra-ha.jar, [byte array, length=65164, md5sum=96322071e6128333bdee3364a224b48c] ] |
3339ms |
ok |
|
2010-06-08 14:22:53.356 |
101:176452077447:36 |
22014 |
admin |
admin |
write |
192.168.0.7 |
javax.slee.management: name=ResourceManagement |
INVOKE |
getConfigurationProperties |
[ResourceAdaptorID [name=Simple,vendor=OpenCloud,version=1.0] ] |
|
|
|
2010-06-08 14:22:53.359 |
101:176452077447:36 |
22014 |
admin |
admin |
write |
192.168.0.7 |
javax.slee.management: name=ResourceManagement |
INVOKE (RESULT) |
getConfigurationProperties |
[ResourceAdaptorID [name=Simple,vendor=OpenCloud,version=1.0] ] |
3ms |
ok |
|
2010-06-08 14:22:53.369 |
101:176452077447:39 |
22016 |
admin |
admin |
write |
192.168.0.7 |
javax.slee.management: name=ResourceManagement |
INVOKE |
createResourceAdaptorEntity |
[ResourceAdaptorID [name=Simple,vendor=OpenCloud,version=1.0], simplera, [(Host:java.lang.String=localhost), (Port:java.lang.Integer=14477), (slee-vendor: com.opencloud.rhino_replicate_activities: java.lang.String=none) ] ] |
|
|
|
2010-06-08 14:22:53.536 |
101:176452077447:39 |
22016 |
admin |
admin |
write |
192.168.0.7 |
javax.slee.management: name=ResourceManagement |
INVOKE (RESULT) |
createResourceAdaptorEntity |
[ResourceAdaptorID [name=Simple,vendor=OpenCloud,version=1.0], simplera, [(Host:java.lang.String=localhost), (Port:java.lang.Integer=14477), (slee-vendor: com.opencloud.rhino_replicate_activities: java.lang.String=none) ] ] |
167ms |
ok |
|
2010-06-08 14:23:11.987 |
101:176452077447:47 |
22004 |
admin |
admin |
write |
192.168.0.7 |
javax.slee.management: name=ResourceManagement |
INVOKE |
activateResourceAdaptorEntity |
[simplera,[101]] |
|
|
|
2010-06-08 14:23:12.029 |
101:176452077447:47 |
22004 |
admin |
admin |
write |
192.168.0.7 |
javax.slee.management: name=ResourceManagement |
INVOKE (RESULT) |
activateResourceAdaptorEntity |
[simplera,[101]] |
42ms |
ok |
|
2010-06-08 14:23:30.802 |
101:176452077447:52 |
22004 |
admin |
admin |
write |
192.168.0.7 |
javax.slee.management: name=ResourceManagement |
INVOKE |
activateResourceAdaptorEntity |
[simplera,[101]] |
|
|
|
2010-06-08 14:23:30.820 |
101:176452077447:52 |
22004 |
admin |
admin |
write |
192.168.0.7 |
javax.slee.management: name=ResourceManagement |
INVOKE (RESULT) |
activateResourceAdaptorEntity |
[simplera,[101]] |
18ms |
failed |
simplera not in INACTIVE state on node(s)[101] |
Example 2: Bulk GET
operation on Licensing
MBean
The example below shows a GET-ATTRIBUTES
operation called on the Licensing
MBean. It includes queries on four separate attributes: LicenseSummary
, LicensedFunctions
, LicensedVersions
, and Licenses
. The results of the bulk-attribute query operation are in the last line.
Note that the uniqueID field is the same for all lines representing the GET-ATTRIBUTES operation. |
date | uniqueID | opcode | user | roles | access | client address | MBean name | operation type | operation name | arguments | duration | result | failure reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2010-05-28 14:07:11.223 |
101:175500674962:292 |
|
admin |
admin |
|
192.168.0.7 |
com.opencloud.rhino: type=Licensing |
GET-ATTRIBUTES |
|
|
|
|
|
2010-05-28 14:07:11.223 |
101:175500674962:292 |
2008 |
admin |
admin |
read |
192.168.0.7 |
com.opencloud.rhino: type=Licensing |
GET |
LicenseSummary |
|
|
|
|
2010-05-28 14:07:11.223 |
101:175500674962:292 |
2005 |
admin |
admin |
read |
192.168.0.7 |
com.opencloud.rhino: type=Licensing |
GET |
LicensedFunctions |
|
|
|
|
2010-05-28 14:07:11.223 |
101:175500674962:292 |
2006 |
admin |
admin |
read |
192.168.0.7 |
com.opencloud.rhino: type=Licensing |
GET |
LicensedVersions |
|
|
|
|
2010-05-28 14:07:11.223 |
101:175500674962:292 |
2004 |
admin |
admin |
read |
192.168.0.7 |
com.opencloud.rhino: type=Licensing |
GET |
Licenses |
|
|
|
|
2010-05-28 14:07:11.226 |
101:175500674962:292 |
2008 |
admin |
admin |
read |
192.168.0.7 |
com.opencloud.rhino: type=Licensing |
GET (RESULT) |
LicenseSummary |
|
3ms |
ok |
|
2010-05-28 14:07:11.226 |
101:175500674962:292 |
2005 |
admin |
admin |
read |
192.168.0.7 |
com.opencloud.rhino: type=Licensing |
GET (RESULT) |
LicensedFunctions |
|
3ms |
ok |
|
2010-05-28 14:07:11.226 |
101:175500674962:292 |
2006 |
admin |
admin |
read |
192.168.0.7 |
com.opencloud.rhino: type=Licensing |
GET (RESULT) |
LicensedVersions |
|
3ms |
ok |
|
2010-05-28 14:07:11.226 |
101:175500674962:292 |
2004 |
admin |
admin |
read |
192.168.0.7 |
com.opencloud.rhino: type=Licensing |
GET (RESULT) |
Licenses |
|
3ms |
ok |
|
2010-05-28 14:07:11.226 |
101:175500674962:292 |
|
admin |
admin |
|
192.168.0.7 |
com.opencloud.rhino: type=Licensing |
|
|
3ms |
ok |
|
The durations listed for the individual GET (RESULT) lines correspond to the duration of the entire GET-ATTRIBUTES operation and not the individual GET components. In the example above, the entire GET-ATTRIBUTES operation took 3ms. |
Linked and Shadowed Components
When declaring component dependencies in a deployable unit, the specific dependency target may not always be known in advance. For example, the particular version of a dependent library may vary. At other times, an already installed component may need to be replaced with another, perhaps a new version with a bug fix, and reinstalling all dependent components with updated deployment descriptors is undesirable.
Bindings can help with this problem to some degree; however, they can introduce other issues. Bindings always operate on virtual copies of the original components, and keeping track of copied components can be difficult if many binding operations are made.
Rhino provides a solution to these problems with support for linked and shadowed components.
Linked components
A linked component is a virtual component that provides an alias for some other component. Incoming references to the linked component are redirected to the link target. A linked component's type (for example SBB, profile specification, or library) is the same as that of the component it is linked to; and, like all other components, a linked component has a unique identity represented by the (name, vendor, version) tuple.
A linked component identifier can be used anywhere where a regular component identifier is required.
Shadowed components
A shadowed component is an existing component that has been "shadowed" or replaced by a link to another component of the same type. Incoming references to the shadowed component are redirected to the link target rather than using the original component.
Conceptually, linked and shadowed components perform the same function: redirecting an incoming reference to another component. The difference is that a linked component is a new virtual component with a unique identity, whereas a shadow replaces a component that is already installed in the SLEE.
Components supporting links and shadows
The following types of components currently support links and shadows:
|
Managing linked components
Below are overviews of the procedures to create, remove, and view the metadata for linked components.
Creating a linked component
You create linked components using the createLinkedComponent
management operation. For example, using rhino-console
:
[Rhino@localhost:2199 (#0)] createlinkedcomponent sbb name=MySBB,vendor=OpenCloud,version=1.0 MySBBLink OpenCloud 1.0
Component SbbID[name=MySBBLink,vendor=OpenCloud,version=1.0] linked to SbbID[name=MySBB,vendor=OpenCloud,version=1.0]
The first two arguments identify the component type and identifier of the link target. The target component must already exist in the SLEE. The last three arguments define the name, vendor, and version strings for the new linked component identifier.
Removing a linked component
You remove a linked component using the removeLinkedComponent
management operation. For example, using rhino-console
:
[Rhino@localhost:2199 (#0)] removelinkedcomponent sbb name=MySBBLink,vendor=OpenCloud,version=1.0
Linked component SbbID[name=MySBBLink,vendor=OpenCloud,version=1.0] removed
A linked component cannot be removed if:
-
another component with an install level of
VERIFIED
orDEPLOYED
references it; -
another linked component specifies this linked component as its target; or
-
another component is shadowed by this linked component.
Viewing linked component metadata
The getDescriptor
management operation returns a SLEE ComponentDescriptor
object for any component that exists in the SLEE. A ComponentDescriptor
object for a linked component has the following properties:
-
its deployable unit is the same as the deployable unit of the link target
-
its source component jar is the same as the source component jar of the link target
-
it contains a vendor-specific data object of type
LinkedComponentDescriptorExtensions
.
Linked component descriptor extensions
The LinkedComponentDescriptorExtensions
class defines Rhino-specific component metadata extensions for linked components. Here’s what it looks like:
package com.opencloud.rhino.management.deployment;
import java.io.Serializable;
import java.util.Date;
import javax.slee.ComponentID;
public class LinkedComponentDescriptorExtensions implements Serializable {
public LinkedComponentDescriptorExtensions(...) { ... }
public ComponentID getLinkTarget() { ... }
public Date getLinkDate() { ... }
public InstallLevel getInstallLevel() { ... }
public ComponentID[] getIncomingLinks() { ... }
public ComponentID[] getShadowing() { ... }
...
}
-
The
getLinkTarget
method returns the component identifier of the link target. -
The
getLinkDate
method returns aDate
object that specifies the date and time the linked component was created. -
The
getInstallLevel
method returns the current install level of the linked component. The install level of a linked component is immaterial, and changing it has no effect on the linked component itself; however, since an install level is a property of all components installed in Rhino, a linked component must have one by definition. -
The
getIncomingLinks
method returns the component identifiers of any other linked components that have this linked component as a target. -
The
getShadowing
method returns the component identifiers of any other component that has been shadowed by this linked component.
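A hedged sketch of inspecting this metadata follows. It assumes a ComponentDescriptor has already been retrieved via the getDescriptor management operation and that its vendor-specific data object has been extracted into vendorData (the accessor used to do so is not shown here):

import javax.slee.ComponentID;
import com.opencloud.rhino.management.deployment.LinkedComponentDescriptorExtensions;

// vendorData: the vendor-specific data object from the component descriptor
if (vendorData instanceof LinkedComponentDescriptorExtensions) {
    LinkedComponentDescriptorExtensions ext =
            (LinkedComponentDescriptorExtensions) vendorData;
    System.out.println("link target:  " + ext.getLinkTarget());
    System.out.println("link created: " + ext.getLinkDate());
    for (ComponentID incoming : ext.getIncomingLinks()) {
        System.out.println("incoming link: " + incoming);
    }
}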
Managing component shadows
Shadowing or unshadowing a component effectively changes the definition of the component; therefore a component can only undergo these transitions if it has an install level of INSTALLED
. This ensures that any components that depend on the affected component also have an install level of INSTALLED
, and thus will need (re)verifying against the updated component before further use. Rhino will allow a component with an install level of VERIFIED
to be shadowed or unshadowed, but will automatically transition the component (and any upstream dependencies) to the INSTALLED
install level first. A component with an install level of DEPLOYED
must be manually undeployed before a shadow can be created or removed.
Below are overviews of the procedures to shadow, unshadow, and view the shadow metadata for a component.
Shadowing a component
You shadow one component with another using the shadowComponent
management operation. For example, using rhino-console
:
[Rhino@localhost:2199 (#0)] shadowcomponent sbb name=MySBB,vendor=OpenCloud,version=1.0 name=MySBB,vendor=OpenCloud,version=1.0.2
Component SbbID[name=MySBB,vendor=OpenCloud,version=1.0] shadowed by SbbID[name=MySBB,vendor=OpenCloud,version=1.0.2]
The first two arguments identify the component type and identifier of the component to be shadowed. The last argument identifies the component that this component will be shadowed by. Both the shadowed and shadowing components must already exist in the SLEE.
Link cycles won’t work
Using shadows, you might try to create a link cycle. For example, if component |
Unshadowing a component
You unshadow a component using the unshadowComponent
management operation. For example, using rhino-console
:
[Rhino@localhost:2199 (#0)] unshadow sbb name=MySBB,vendor=OpenCloud,version=1.0
Shadow removed from component SbbID[name=MySBB,vendor=OpenCloud,version=1.0]
Viewing shadowed component metadata
The getDescriptor
management operation returns a SLEE ComponentDescriptor
object for any component that exists in the SLEE. The component descriptor for a shadowed component continues to describe the original unshadowed component, but contains a vendor-specific data object of type com.opencloud.rhino.management.deployment.ComponentDescriptorExtensions
that includes the following information relevant to shadowing:
-
The
getShadowedBy
method returns the component identifier of the component that shadows this component. This target component will be used in place of the described component. -
The
getShadowDate
method returns aDate
object that specifies the date and time the shadow was established. -
The
getShadowing
method returns the component identifiers of any other component that has in turn been shadowed by this shadowed component.
Linked and shadowed component resolution
In most cases where a component identifier is specified, Rhino will follow a chain of links and shadows to resolve the component identifier to a concrete target component. Typical cases where this occurs are as follows:
-
wherever a component references another component in its deployment descriptor or in a binding descriptor
-
if a service component is activated or deactivated
-
when a profile table is created from a profile specification
(though Rhino will report that the profile table was created from the specified component rather than the resolved target) -
when a resource adaptor entity is created from a resource adaptor
(though again Rhino will report that the resource adaptor entity was created from the specified component rather than the resolved target) -
when interrogating or updating a component’s security policy.
Specific cases where a management operation applies directly to a linked or shadowed component rather than its resolved target are as follows:
-
when requesting a component’s metadata descriptor
-
when copying a shadowed component
(The shadowed component itself is copied, rather than the shadowing component. Linked components are still resolved though when determining the actual component to copy; so an attempt to copy a linked component will result in a copy of the resolved target component being copied.)
Additional notes
-
Creating a link to a service component automatically adds a clone of the resolved target service’s statistics with the linked component identifier to the stats manager. For example, if service component
A
is linked to service componentB
, then the stats forB
can be accessed from the stats manager using either component identifierA
orB
. The same result will be obtained in each case. Listing the available stats parameter sets will include bothA
andB
. -
The activation state reported for a linked or shadowed service component is the state of the service component that the link or shadow resolves to. Activating or deactivating the linked or shadowed component has the same effect as activating or deactivating the resolved component.
-
If a resource adaptor entity generates events that may be consumed by a given service component, and a link to that service component is created, then the resource adaptor entity will also be notified about a change to the lifecycle state for the linked component when the state of the target service component changes.
-
A resource adaptor entity may fire an event targeted at a linked service component, and Rhino will deliver the event to the resolved target service component. If an SBB in the service invokes a resource adaptor interface API method while handling that event, then the value returned by the
ResourceAdaptorContext.getInvokingService()
method will equal the target service component identifier specified by the resource adaptor entity when the event was fired; that is, it will be the linked component identifier. However if an SBB in the service invokes a resource adaptor interface API method while handling an event that had no specific service target, then the value returned by the samegetInvokingService()
method will be the service component identifier of the resolved service that is actually processing the event.
Component Activation Priorities
Rhino versions 2.4 and above allow configuration of the activation order of SLEE components. This ordering is controlled separately for activating and deactivating components.
Introduction to priorities
In Rhino 2.3.1 and older, RAs and services started in effectively random order. The startup order was based on the indexing hash order in the system.
The priority system added in Rhino 2.4 allows operator control of this order.
Priorities are values between -128 and 127. If a component (service or resource adaptor entity), X, has a numerically higher priority value than another component, Y, then X will be started before Y. Components with the same priority may be started in an arbitrary order, or may be started concurrently. The same rule applies for component stopping priorities; i.e., highest priority stops first.
If you have assigned startup priorities to RA entities A=100, B=20, and C=10, and to service S=15, they will start up in the following order:
-
activate RA entity A
-
activate RA entity B
-
activate service S
-
activate RA entity C
Note that a service can still potentially receive an event from an RA before it receives a ServiceStartedEvent on the ServiceActivity. This is a separate problem from activation ordering and, given the asynchronous nature of event delivery, is not something the priority system can readily address. A service that depends on the ServiceStartedEvent may be able to use the service activation callbacks introduced in Rhino 2.4 instead. You may also notice that services start before RA entities of the same priority level and stop after them; this ordering is not part of the priority system definition, and such components may be started concurrently in the future, so always use different priorities if you need a specific order.
Managing priorities
Below are overviews of the procedures to manage service priorities, manage RA entity priorities, and list priorities.
Managing service priorities
You get priorities for services using the getStartingPriority
and getStoppingPriority
management operations.
You set priorities for services using the setStartingPriority
and setStoppingPriority
management operations.
For example, using rhino-console
:
[Rhino@localhost:2199 (#0)] setservicestartingpriority name=MyService,vendor=OpenCloud,version=1.0 10
Service ServiceID[name=MyService,vendor=OpenCloud,version=1.0] activation priority set to 10
Managing RA entity priorities
You get priorities for RA entities using the getStartingPriority
and getStoppingPriority
management operations.
You set priorities for RA entities using the setStartingPriority
and setStoppingPriority
management operations.
For example, using rhino-console
:
[Rhino@localhost:2199 (#0)] setraentitystartingpriority sipra 80
Resource adaptor entity sipra activation priority set to 80
Listing priorities
You list priorities for services using the getStartingPriorities
and getStoppingPriorities
management operations.
You list priorities for RA entities using the getStartingPriorities
and getStoppingPriorities
management operations.
You get a full combined listing using the liststartingpriorities
and liststoppingpriorities
commands in rhino-console
.
For example:
[Rhino@localhost:2199 (#0)] liststartingpriorities
Starting priorities of installed services and resource adaptor entities:
  80 : resource adaptor entity sipra
  20 : ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1]
  10 : ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8]
   0 : ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1]
       ServiceID[name=SIP Profile Location Service,vendor=OpenCloud,version=1.0]
       ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0]
  -5 : ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8]
Note: Components with the same priority may be started in any order
Rhino Configuration
This section covers procedures for configuring Rhino upon installation, and as needed (for example to tune performance).
This includes configuring:
See also Management Tools. |
Logging
Rhino supports logging to multiple locations and offers granular configuration of loggers and output. The most commonly configured items are log levels and output appenders. Log appenders direct logging output for display and storage, typically to files or the console terminal. Rhino provides management interfaces and commands for configuring the logging framework, and gives deployed components access to the logging framework as an extension to the SLEE specification.
This section includes the following topics:
JMX clients can access logging management operations via the Logging Management MBean . |
About Logging
The Rhino SLEE uses the Apache Log4j 2 logging framework to provide logging facilities for SLEE components and deployed services.
The Logging Management MBean
Rhino SLEE allows changes to logging configuration at runtime. This is useful for capturing log information to diagnose a problem, without having to restart the SLEE. You configure logging using the |
Asynchronous Logging
The Log4j 2 logging architecture provides a new approach to asynchronous logging. It uses asynchronous loggers, which submit log events to a work queue for later handling by the appropriate appenders.
More details can be found in the Log4j 2 async loggers documentation.
Rhino offers support for mixed synchronous and asynchronous logging through logger configuration commands. Correctly configuring asynchronous logging has some complexity, discussed in how to Configure a Logger.
Mapped Diagnostic Context
Rhino 2.6 introduces access to the Mapped Diagnostic Context (MDC) as a tool to tag and correlate log messages throughout an activity’s life cycle. This tagging can be combined with the new filters to allow very fine-grained control of logging and tracing.
A simple SIP example of useful context would be including the P-charging-vector
header. As this uniquely identifies a single call, it becomes trivial to identify all log messages related to handling an individual call. Identification (or filtering) remains simple even under load, with multiple calls handled in parallel.
The Logging Context Facility discusses MDC in greater detail.
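As an illustrative sketch only, the same tagging idea can be expressed with Log4j 2's ThreadContext API; inside Rhino components the equivalent is performed through the Logging Context Facility, and the request object and its getHeader method below are hypothetical:

import org.apache.logging.log4j.ThreadContext;

// Tag all subsequent log output on this thread with the call's
// P-Charging-Vector so related messages can be correlated later.
String chargingVector = request.getHeader("P-Charging-Vector");  // hypothetical accessor
ThreadContext.put("pcv", chargingVector);
try {
    // ... handle the message; log output now carries the pcv tag ...
} finally {
    ThreadContext.remove("pcv");
}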
Logger names
Subsystems within the Rhino SLEE send log messages to specific loggers. For example, the rhino.facility.alarm
logger periodically receives messages about which alarms are currently active within the Rhino SLEE.
Examples of logger names include:
-
root
— the root logger, from which all loggers are derived (can be used to change the log level for all loggers at once) -
rhino
— main Rhino logger -
rhino.management
— for log messages related to Rhino management systems -
trace.<namespace>.<deployable_type>.<notification_source>.<tracer name>
— loggers used by deployed SLEE components that use tracers. By default these keys appear abbreviated in console and file logs. Details of tracer abbreviation can be found at Tracer pattern converter.
Log levels
Log levels can be assigned to individual loggers to filter how much information the SLEE produces:
Log level | Information sent |
---|---|
OFF |
No messages sent to logs (not recommended). |
FATAL |
Error messages for unrecoverable errors only (not recommended). |
ERROR |
Error messages (not recommended). |
WARN |
Warning messages. |
INFO |
Informational messages (especially during node startup or deployment of new resource adaptors or services). The default. |
DEBUG |
Detailed log messages. Used for debugging by OpenCloud Rhino SLEE developers. |
TRACE |
Finest level. Not currently used. |
ALL |
All of the above. |
Each log level will log all messages for that log level and above. For example, if a logger is set to the INFO level (the default), all of the log messages logged at the INFO, WARN, ERROR, and FATAL levels will be logged as well.
If a logger is not assigned a log level, it inherits its parent’s. For example, if the rhino.management logger has not been assigned a log level, it will have the same effective log level as the rhino logger.
The root logger is a special logger that is considered the parent of all other loggers. By default, the root logger is configured with the INFO log level, so all other loggers output log messages at the INFO level or above unless explicitly configured otherwise.
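For example, an illustrative rhino-console session that enables more detailed logging for the management subsystem only, leaving everything else at the inherited INFO level (command output omitted):

[Rhino@localhost:2199 (#0)] setloglevel rhino.management debug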
Use INFO
A lot of useful or crucial information is output at the INFO log level. Because of this, setting log levels to WARN, ERROR or FATAL is not recommended. |
Log appenders
System administrators can use the console to create some simple log appenders. Full appender creation is available through the Rhino Element Manager (REM). These append log messages to destinations such as the console, a log file, socket, or Unix syslog daemon. At runtime, when Rhino logs a message (as permitted by the log level of the associated logger), Rhino sends the message to the log appender for writing. Types of log appenders include:
-
file appenders — which append messages to files (and may be rolling file appenders)
-
console appenders — which send messages to the Rhino console
-
socket appenders — which append messages to a network socket, either raw or syslog-formatted
-
custom appenders — which can do a wide variety of things. Common "custom" appenders are used to append to various kinds of databases.
Rolling file appenders
Typically, to manage disk usage, administrators are interested in sending log messages to a set of rolling files. They do this by setting up rolling file appenders which:
-
create new log files if the current one gets too large
-
rename old log files as numbered backups
-
delete old logs when a certain number of them have been archived.
Log files roll over when they exceed the configured size; that is, the size is checked after each message is logged, and if the log file is larger than the maximum, the next message is written to a new file. Since Rhino 2.6.0, only the SDK rolls over log files on node restart; production nodes use a size-based policy only.
You can configure the size and number of rolled-over log files and the rollover behaviour. Options include size-based, time-based and on node-restart. All configurations described for Log4j 2 are valid: https://logging.apache.org/log4j/2.x/manual/appenders.html#RollingFileAppender
An example logging config containing a complex rollover strategy that increments file numbers, retaining up to 4 historical files younger than 30 days:
<appender name="RhinoLog" plugin-name="RollingFile"> <layout name="RhinoLogLayout" plugin-name="PatternLayout"> <property name="pattern" value="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%logger] <%threadName> %mdc : %msg%n%throwable"/> <property name="header" value="${rhinoVersionHeader}"/> </layout> <property name="filePattern" value="${logDir}/rhino.log.%i"/> <property name="fileName" value="${logDir}/rhino.log"/> <component name="RhinoLogPolicy" plugin-name="SizeBasedTriggeringPolicy"> <property name="size" value="100MB"/> </component> <component name="RhinoLogStrategy" plugin-name="NotifyingRolloverStrategy"> <property name="min" value="1"/> <property name="max" value="2147483647"/> <component name="deleteAction" plugin-name="Delete"> <property name="basePath" value="${logDir}"/> <property name="maxDepth" value="1"/> <component name="fileName" plugin-name="IfFileName"> <property name="glob" value="rhino.log.*"/> </component> <component name="any" plugin-name="IfAny"> <component name="lastmodified" plugin-name="IfLastModified"> <property name="age" value="30d"/> </component> <component name="fileCount" plugin-name="IfAccumulatedFileCount"> <property name="exceeds" value="4"/> </component> </component> </component> </component> </appender>
Default appenders
By default, the Rhino SLEE comes configured with the following appenders active:
Appender | Where it sends messages | Logger name | Type of appender |
---|---|---|---|
RhinoLog |
the Rhino logs directory (work/log/rhino.log) |
root |
a rolling file appender |
STDERR |
the Rhino console where a node is running (in a standard error stream) |
root |
a console appender |
ConfigLog |
work/log/config.log |
rhino.config |
a rolling file appender |
New appenders won’t receive messages until associated with at least one logger
By default, a newly created log appender is not associated with any loggers, so will not receive any log messages. |
Appender additivity
If a logger has its additivity flag set to true, all of the output of its log statements goes to all of its appenders and ancestors. If a specific ancestor has its additivity flag set to false, then the output goes to all appenders and ancestors up to and including that specific ancestor — but not to appenders in any of that ancestor’s ancestors. (By default, logger additivity flags are set to true.)
See Apache’s Log4j 2 Architecture page for details on additivity. |
Filters
Filters can be applied to both loggers and appenders to restrict the set of log messages that are reported by a logger or through an appender. They provide a more flexible limiting approach than log level alone. Configuring filters can be performed using the Rhino Element Manager or by modifying an export of the logging configuration. A list of the filters available by default and their configuration properties can be found in the Log4j 2 filter documentation
An example filter configuration setting for limiting log levels to Warning
in namespace volte
and Info
in all other namespaces is shown below:
<component plugin-name="DynamicThresholdFilter"> <property name="defaultThreshold" value="Finer"/> <property name="key" value="namespace"/> <component plugin-name="KeyValuePair"> <property name="key" value="volte"/> <property name="value" value="Warning"/> </component> </component>
If three trace messages are emitted by the service:
tracer.warning("TransparentDataCache(MMTEL-Services) (RepositoryDataAccessKey{REPOSITORY_DATA, userId=tel:+34600000002, userIdType=IMPU, serviceIndication=MMTEL-Services}): [DoUDR] failed to send request") tracer.finer("Cache gave immediate response. Latency: 1 ms") tracer.finest("Removing service indication: MMTEL-Services from the session state list.Initial items: [MMTEL-Services]")
With the service deployed in namespace volte
only the Warning
will be logged:
2017-11-14 13:35:38.123+1300 Warning [trace.sh_cache_ra.sh-cache-ra] <jr-4> {ns=volte, txID=101:210487189646097} TransparentDataCache(MMTEL-Services) (RepositoryDataAccessKey{REPOSITORY_DATA, userId=tel:+34600000002, userIdType=IMPU, serviceIndication=MMTEL-Services}): [DoUDR] failed to send request
otherwise both the Finer
and Warning
messages will be logged:
2017-11-14 13:35:38.123+1300 Warning [trace.sh_cache_ra.sh-cache-ra] <jr-4> {ns=mmtel, txID=101:210487189646097} TransparentDataCache(MMTEL-Services) (RepositoryDataAccessKey{REPOSITORY_DATA, userId=tel:+34600000002, userIdType=IMPU, serviceIndication=MMTEL-Services}): [DoUDR] failed to send request
2017-11-14 13:35:38.137+1300 Finer [trace.volte_sentinel_sip.2_7_0_copy_1.volte_sentinel_sip.sentinel.sbb] <jr-4> {ns=mmtel, txID=101:210487189646097} Cache gave immediate response. Latency: 1 ms
The default threshold of Finer
will cause the Finest
message to never be logged.
Logging plugins
Rhino contains several logging plugins to extend the functionality of Log4j 2 to aid SLEE management and provide additional context to logs.
-
NotifyingRolloverStrategy
-
NotifyingDirectWriteRolloverStrategy
-
LogNotificationAppender
-
PolledMemoryAppender
NotifyingRolloverStrategy
An extended variant of the DefaultRolloverStrategy
providing an API for components to receive notification of log file rollover. RolloverNotificationListeners can be registered to receive a callback whenever a log file is rolled over. This strategy should be used instead of the Log4j 2 DefaultRolloverStrategy
so Rhino can send notifications to monitoring systems.
NotifyingDirectWriteRolloverStrategy
An extended variant of the DirectWriteRolloverStrategy
providing an API for components to receive notification of log file rollover. RolloverNotificationListeners can be registered to receive a callback whenever a log file is rolled over. This strategy should be used instead of the Log4j 2 DirectWriteRolloverStrategy
so Rhino can send notifications to monitoring systems.
LogNotificationAppender
A log appender for delivering log messages to a listener inside the application. This is used to send log messages to JMX monitoring clients and as SNMP notifications. It is only necessary to use the LogNotificationAppender if using SNMP to receive log messages.
TraceNotificationAppender
A log appender for delivering log messages to a listener inside the application that extracts tracer messages to send as TraceNotifications. This is used to send tracer messages to JMX monitoring clients such as REM. It is necessary to use the TraceNotificationAppender if using JMX to receive tracer messages. Without an instance of this appender in the log configuration, REM instances connecting to this Rhino instance will not be able to receive or display tracer messages.
PolledMemoryAppender
A log appender that stores messages in an internal buffer that REM can poll for live log watching. This appender is only useful when log output is infrequent enough for human monitoring, and it carries a minor performance cost. It will be removed in a future release of Rhino. We recommend that log files or an external log server be used as the primary log output.
See Logging plugins for instructions on enabling additional appender types. |
Other plugins
The Log4j 2 project (https://logging.apache.org/log4j/2.x) provides a number of plugins for extending the functionality of Log4j 2. These plugins provide appenders for sending logs to various log servers, files and databases; layouts for configuring the format of log messages; and filters to restrict which messages are logged. System integrators or operators can create plugins to add further functionality or support for other log-handling systems.
Rhino log configuration properties
Rhino log configuration variables include a rhino namespace containing options useful for providing additional context in log files. These are:

- ${rhino:node-id}: the node ID of the Rhino node that wrote the log message parameterised with this variable
- ${rhino:version}: the version of Rhino running at the time the log message parameterised with this variable was written
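For example, a pattern layout can include the node ID so that every log line identifies the node that wrote it (this particular pattern is illustrative, not a Rhino default):

<property name="pattern" value="%d{yyyy-MM-dd HH:mm:ss.SSSZ} %-7level [${rhino:node-id}] [%logger] %msg{nolookups}%n"/>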
Tracer objects
SLEE 1.1 provides tracer objects for logging messages from deployed components.
Rhino logs all messages sent to a Tracer under the trace.<notification source>.<tracer name> logger.
In an extension of the SLEE specification, Rhino allows configuration of tracer levels at a coarser grain than the component tracer. This extended functionality is accessed through the Rhino logging configuration. For example, setloglevel trace Finest will set the default tracer level to Finest; all tracers not explicitly set will then log at levels from Finest up. To support this SLEE extension, root tracers for individual notification sources inherit their levels from the trace logger. It is also permitted to unset the root tracer level for a given notification source using setTracerLevel; unsetting the root tracer level reverts to using the inherited level.
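For example, the following commands set the default tracer level and then override the level for a single notification source (the tracer log key shown matches the example log output earlier in this section):

$ ./rhino-console setloglevel trace Finest
$ ./rhino-console setloglevel trace.sh_cache_ra.sh-cache-ra Warning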
A further extension of the SLEE specification allows full use of logging management commands against Tracers. A SLEE 1.1 Tracer may have appenders and filters added to further customise tracing output, both to JMX notifications and to other logging destinations. Any supported appender may be used, so logging destinations are not restricted to files only.
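Because a Tracer is managed like a logger, an appender can be attached to a tracer log key with the same commands used for ordinary loggers. A minimal sketch (the appender name is hypothetical; the tracer key is from the example output above):

$ ./rhino-console addappenderref trace.sh_cache_ra.sh-cache-ra myappender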
Tracer log levels
Trace messages are logged at the level at which they are emitted.
See also About SLEE 1.1 Tracers. |
SLEE 1.0-based application components can still use the trace facility (defined in the JAIN SLEE 1.0 specification) for logging, however the trace facility has been deprecated for JAIN SLEE 1.1. |
About SLEE 1.1 Tracers
Tracer Interface
In SLEE 1.1, there are more components that may need tracing support. In addition to SBBs, trace messages may also be generated by profile abstract classes and resource adaptors, and potentially any other SLEE subsystem.
All of these components may use the SLEE 1.1 javax.slee.facilities.Tracer interface. The Tracer interface will be familiar to users of other logging APIs. It provides methods for generating traces at different trace levels. Details of the tracing methods available are in the javax.slee.facilities.Tracer javadoc.
Obtaining a Tracer
Components obtain Tracers by calling the getTracer() method on the particular component’s context object. Rhino 2.6 provides com.opencloud.rhino.facilities.ExtendedTracer instances when acquiring a Tracer. If only Rhino 2.6 support is required, the Tracer acquired from a context may be safely cast to ExtendedTracer. Older Rhino versions provide a com.opencloud.rhino.facilities.Tracer, which does not offer the extended logging API that the ExtendedTracer does.
For backwards compatibility, the Rhino 2.6 API library contains a com.opencloud.rhino.facilities.trace.TracerAccessor which handles safely acquiring a Rhino 2.6 ExtendedTracer.
Component | Tracer access method |
---|---|
SBB | ExtendedTracer trace = (ExtendedTracer)SbbContext.getTracer(String) |
Profiles | ProfileContext.getTracer(String) |
Resource Adaptors | ResourceAdaptorContext.getTracer(String) or TracerAccessor.getExtendedTracer(ResourceAdaptorContext, String) |
The string parameter in the above methods is the tracer name. This is a hierarchical name, following Java naming conventions, where the different levels in the hierarchy are delimited by a dot. For example, a tracer named "com.foo" is the parent of "com.foo.bar". Components may create any number of tracers, with different names, for different purposes. Tracers inherit the trace level of their parent in the hierarchy. The tracer named "" (empty string) is the top-level or root tracer. The hierarchical naming is a convention used in most logging APIs, and allows an administrator to easily enable or disable tracing for an entire hierarchy of tracers.
import javax.slee.Sbb;
import javax.slee.SbbContext;
import javax.slee.facilities.Tracer;

import com.opencloud.rhino.facilities.ExtendedTracer;

public abstract class MySbb implements Sbb {
    private Tracer rootTracer;
    private ExtendedTracer fooTracer;
    private SbbContext context;

    public void setSbbContext(SbbContext context) {
        this.context = context;
        // The tracer named "" (empty string) is the root tracer
        this.rootTracer = context.getTracer("");
        // On Rhino 2.6 the returned Tracer may be cast to ExtendedTracer
        this.fooTracer = (ExtendedTracer) context.getTracer("foo");
    }

    ...

    // Generate an INFO trace on the root tracer
    rootTracer.info("An event has occurred");

    ...

    // Generate a WARNING trace on the fooTracer, using the ExtendedTracer
    // parameterised message form to avoid string concatenation
    fooTracer.warning("Could not combobulate {}", "discombobulator");
}
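A resource adaptor can obtain a tracer in much the same way. A minimal sketch using the TracerAccessor helper described above (the class shown is hypothetical, and the remaining ResourceAdaptor lifecycle methods are elided):

import javax.slee.resource.ResourceAdaptorContext;

import com.opencloud.rhino.facilities.ExtendedTracer;
import com.opencloud.rhino.facilities.trace.TracerAccessor;

public abstract class MyResourceAdaptor /* implements javax.slee.resource.ResourceAdaptor */ {
    private ExtendedTracer tracer;

    public void setResourceAdaptorContext(ResourceAdaptorContext context) {
        // TracerAccessor handles safely acquiring a Rhino 2.6 ExtendedTracer
        this.tracer = TracerAccessor.getExtendedTracer(context, "ra");
    }
}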
Notification Sources
SLEE 1.1 introduces the javax.slee.management.NotificationSource
interface, which the SLEE automatically adds to notifications generated by SLEE tracers. As this is automatically associated with the Tracer object, there is no need to manually specify the source as in SLEE 1.0. This solves the problem of identifying which SBB in which service generated a trace message. The NotificationSource explicitly identifies the component that generated the trace, so a management client can easily see which service and SBB the trace came from, allowing filtering by service or SBB.
Tracer Extensions
To alleviate some limitations of the SLEE 1.1 Tracer system, Rhino offers an extended Tracer API. This extended API offers a larger set of tracing methods, supporting tracing without string concatenation to build trace messages. Tracer extensions contains details of the Tracer API extensions, and javadoc is available for com.opencloud.rhino.facilities.ExtendedTracer.
Rhino 2.6 Tracer Extensions
In Rhino 2.6, the Tracer subsystem has been substantially reworked. As a result, Tracers are now first-class loggers. This means that a Tracer may be manipulated by logging management commands as if it were a logger, with the exception that it will only accept Tracer levels.
Tracers now have very long logger names, as the names must be unique to support making Tracers first-class loggers. In log files these very long names are inconvenient, as they frequently cause log entries to run over multiple lines on screen. To alleviate this issue, Rhino includes a default tracer name abbreviation system.
Tracer pattern converter
The Tracer abbreviator used by default is based heavily on the logger pattern converter supplied with Log4j 2. See Log4j 2 Pattern Layout for documentation.
The tracer pattern converter shipped with Rhino can optionally remove a logger/tracer name component entirely. In contrast, the logger pattern converter always leaves a '.' literal to show where elements have been abbreviated. The tracer pattern converter also does not implement Log4j 2 integer precision abbreviation, only pattern abbreviation.
Tracer name | Pattern | Output |
---|---|---|
trace.default.resourceadaptorentity.simplera.example | %logger{*.0.0.*} | trace...simplera.example |
trace.default.resourceadaptorentity.simplera.example | %tracer{*.0.0.*} | trace.simplera.example |
Tracer abbreviation behaviour can be managed through REM or by editing an exported logging.xml.
The default pattern configuration shipped with Rhino, using the tracer pattern converter, is shown below:
<component plugin-name="MarkerPatternSelector" >
<property name="defaultPattern" value="%d{yyyy-MM-dd HH:mm:ss.SSSZ} %-7level [%logger] <%threadName> %mdc %msg{nolookups}%n%throwable"/>
<component plugin-name="PatternMatch">
<property name="key" value="Trace"/>
<property name="pattern" value="%d{yyyy-MM-dd HH:mm:ss.SSSZ} ${plainLevel} [%tracer{*.0.0.*}] <%threadName> %mdc %msg{nolookups}%n%throwable"/>
</component>
<component plugin-name="PatternMatch">
<property name="key" value="SbbTrace"/>
<property name="pattern" value="%d{yyyy-MM-dd HH:mm:ss.SSSZ} ${plainLevel} [%tracer{*.0.0.*.0.*.*.0.0.*}] <%threadName> %mdc %msg{nolookups}%n%throwable"/>
</component>
</component>
Note that there are three patterns in use here.
Marker | Pattern | Use case |
---|---|---|
None (defaultPattern) | %logger | Used for non-tracer log messages |
SbbTrace | %tracer{*.0.0.*.0.*.*.0.0.*} | Used for Tracer messages logged from an SBB |
Trace | %tracer{*.0.0.*} | Used for Tracer messages logged from anything other than an SBB |
Different patterns are required for SBB and non-SBB Tracers because SBB notification sources have a more complex identity: an SBB notification source includes both an SBB ID and a service ID, whereas all other notification sources have no equivalent of the service ID.
Creating a File Appender
To create a file appender, use the following rhino-console command or related MBean operation. Since Rhino 2.6, many varieties of file appender are supported.
Two major classes of file appender are discussed below. Non-rolling file appenders never roll over log files. Rolling file appenders roll over log files, and must be configured with automatic rollover rules.
FileName arguments are paths to files, not just filenames. To create a log file in the configured logging directory (the default is ${NODE_HOME}/work/log), use the property ${logDir} as the leading element of the filename. |
Non-rolling file appenders
These appenders cannot be rolled over with the rolloverlogfiles console command.
Console command: createfileappender
Command |
createfileappender <appenderName> <fileName> [-append <true|false>] [-bufferedIO <true|false>] [-bufferSize size] [-createOnDemand <true|false>] [-immediateFlush <true|false>] [-locking <true|false>] [-ignoreExceptions <true|false>] [-pattern <pattern>] Description The FileAppender is an appender that writes to the File named in the <fileName> parameter. Required Arguments appenderName The name of the Appender. fileName The name of the file to write to. If the file, or any of its parent directories, do not exist, they will be created. Options -append When true, records will be appended to the end of the file. When set to false, the file will be cleared before new records are written. The default is true. -bufferedIO When true, records will be written to a buffer and the data will be written to disk when the buffer is full or, if immediateFlush is set, when the record is written. File locking cannot be used with bufferedIO. The default is true. -bufferSize The buffer size. The default is 8192 bytes. -createOnDemand When true, the appender creates the file on-demand. The default is false. -immediateFlush When true, each write will be followed by a flush. This will guarantee the data is written to disk but could impact performance. The default is true. -locking When true, I/O operations will occur only while the file lock is held. The default is false. -ignoreExceptions When true, exceptions encountered while appending events will be internally logged and then ignored. The default is true. -pattern The pattern to use for logging output. |
---|---|
Example |
To create a logfile in the configured logging directory
$ ./rhino-console createfileappender myappender "${logDir}/myappender.log" Done.
To create a logfile in an absolute path
$ ./rhino-console createfileappender myappender /var/logs/rhino/myappender.log Done. |
Console command: createrandomaccessfileappender
Command |
createrandomaccessfileappender <appenderName> <fileName> [-append <true|false>] [-immediateFlush <true|false>] [-bufferSize size] [-ignoreExceptions <true|false>] [-pattern <pattern>] Description The RandomAccessFileAppender is an appender that writes to the File named in the <fileName> parameter. It is similar to the standard FileAppender except it is always buffered. Required Arguments appenderName The name of the Appender. fileName The name of the file to write to. If the file, or any of its parent directories, do not exist, they will be created. Options -append When true, records will be appended to the end of the file. When set to false, the file will be cleared before new records are written. The default is true. -immediateFlush When true, each write will be followed by a flush. This will guarantee the data is written to disk but could impact performance. The default is true. -bufferSize The buffer size. The default is 8192 bytes. -ignoreExceptions When true, exceptions encountered while appending events will be internally logged and then ignored. The default is true. -pattern The pattern to use for logging output. |
---|---|
Example |
$ ./rhino-console createrandomaccessfileappender myappender "${logDir}/myappender.log" Done. |
Console command: creatememorymappedfileappender
Command |
creatememorymappedfileappender <appenderName> <fileName> [-append <true|false>] [-immediateFlush <true|false>] [-regionLength length] [-ignoreExceptions <true|false>] [-pattern <pattern>] Description The MemoryMappedFileAppender maps a part of the specified file into memory and writes log events to this memory, relying on the operating system's virtual memory manager to synchronize the changes to the storage device. Required Arguments appenderName The name of the Appender. fileName The name of the file to write to. If the file, or any of its parent directories, do not exist, they will be created. Options -append When true, records will be appended to the end of the file. When set to false, the file will be cleared before new records are written. The default is true. -immediateFlush When true, each write will be followed by a flush. This will guarantee the data is written to disk but could impact performance. The default is true. -regionLength The length of the mapped region, defaults to 32 MB. -ignoreExceptions When true, exceptions encountered while appending events will be internally logged and then ignored. The default is true. -pattern The pattern to use for logging output. |
---|---|
Example |
$ ./rhino-console creatememorymappedfileappender myappender "${logDir}/myappender.log" Done. |
Rolling file appenders
Console command: createrollingfileappender
Command |
createrollingfileappender <appenderName> <fileName> <filePattern> <size> [-append <true|false>] [-bufferedIO <true|false>] [-bufferSize size] [-createOnDemand <true|false>] [-immediateFlush <true|false>] [-min <min>] [-max <max>] [-ignoreExceptions <true|false>] [-pattern <pattern>] Description The RollingFileAppender is an appender that writes to the File named in the <fileName> parameter and rolls the file over according to the values set by the <size> [-min][-max] options. Required Arguments appenderName The name of the Appender. fileName The name of the file to write to. If the file, or any of its parent directories, do not exist, they will be created. filePattern The pattern of the file name of the archived log file. Both a date/time pattern compatible with SimpleDateFormat and/or a %i which represents an integer counter can be used. size The file size required before a roll over will occur. Options -append When true, records will be appended to the end of the file. When set to false, the file will be cleared before new records are written. The default is true. -bufferedIO When true, records will be written to a buffer and the data will be written to disk when the buffer is full or, if immediateFlush is set, when the record is written. File locking cannot be used with bufferedIO. The default is true. -bufferSize The buffer size. The default is 8192 bytes. -createOnDemand When true, the appender creates the file on-demand. The default is false. -immediateFlush When true, each write will be followed by a flush. This will guarantee the data is written to disk but could impact performance. The default is true. -min The minimum value of the roll over counter. The default value is 1. -max The maximum value of the roll over counter. Once this value is reached, older archives will be deleted on subsequent rollovers. -ignoreExceptions When true, exceptions encountered while appending events will be internally logged and then ignored. The default is true. -pattern The pattern to use for logging output. |
---|---|
Example |
$ ./rhino-console createrollingfileappender myappender "${logDir}/myappender.log" "${logDir}/myappender.log.%i" 100MB Done. |
Console command: createrollingrandomaccessfileappender
Command |
createrollingrandomaccessfileappender <appenderName> <fileName> <filePattern> <size> [-append <true|false>] [-bufferSize size] [-immediateFlush <true|false>] [-min <min>] [-max <max>] [-ignoreExceptions <true|false>] [-pattern <pattern>] Description The RollingRandomAccessFileAppender is an appender that writes to the File named in the <fileName> parameter and rolls the file over according to the values set by the <size>[-min][-max] options. It is similar to the standard RollingFileAppender except it is always buffered. Required Arguments appenderName The name of the Appender. fileName The name of the file to write to. If the file, or any of its parent directories, do not exist, they will be created. filePattern The pattern of the file name of the archived log file. Both a date/time pattern compatible with SimpleDateFormat and/or a %i which represents an integer counter can be used. size The file size required before a roll over will occur. Options -append When true, records will be appended to the end of the file. When set to false, the file will be cleared before new records are written. The default is true. -bufferSize The buffer size. The default is 8192 bytes. -immediateFlush When true, each write will be followed by a flush. This will guarantee the data is written to disk but could impact performance. The default is true. -min The minimum value of the roll over counter. The default value is 1. -max The maximum value of the roll over counter. Once this value is reached, older archives will be deleted on subsequent rollovers. -ignoreExceptions When true, exceptions encountered while appending events will be internally logged and then ignored. The default is true. -pattern The pattern to use for logging output. |
---|---|
Example |
$ ./rhino-console createrollingrandomaccessfileappender myappender "${logDir}/myappender.log" "${logDir}/myappender.log.%i" 100MB Done. |
Create a Socket Appender
Rhino 2.6 supports two varieties of socket appender: configurable-format socket appenders and syslog-format socket appenders. To create either, use the following rhino-console commands or related MBean operations.
Console command: createsocketappender
Command |
createsocketappender <appenderName> <host> <port> [-bufferedIO <true|false>] [-bufferSize size] [-connectTimeoutMillis <timeout(ms)>] [-immediateFail <true|false>] [-immediateFlush <true|false>] [-protocol <protocol>] [-reconnectionDelayMillis <delay(ms)>] [-keyStoreLocation <location>] [-keyStorePassword <password>] [-trustStoreLocation <location>] [-trustStorePassword <password>] [-ignoreExceptions <true|false>] Description The SocketAppender is an appender that writes its output to a remote destination specified by a host and port. The data can be sent over either TCP or UDP and the default format of the data is to send a Serialized LogEvent. Required Arguments appenderName The name of the Appender. host The name or address of the system that is listening for log events. port The port on the host that is listening for log events. Options -bufferedIO When true, records will be written to a buffer and the data will be written to disk when the buffer is full or, if immediateFlush is set, when the record is written. File locking cannot be used with bufferedIO. The default is true. -bufferSize The buffer size. The default is 8192 bytes. -connectTimeoutMillis The connect timeout in milliseconds. The default is 0 (infinite timeout). -immediateFail When set to true, log events will not wait to try to reconnect and will fail immediately if the socket is not available. -immediateFlush When true, each write will be followed by a flush. This will guarantee the data is written to disk but could impact performance. The default is true. -protocol 'TCP' (default), 'SSL' or 'UDP'. -reconnectionDelayMillis If set to a value greater than 0, after an error there will be an attempt to reconnect to the server after waiting the specified number of milliseconds. -keyStoreLocation The location of the keystore for SSL connections. -keyStorePassword The password of the keystore for SSL connections. -trustStoreLocation The location of the truststore for SSL connections. -trustStorePassword The password of the truststore for SSL connections. -ignoreExceptions When true, exceptions encountered while appending events will be internally logged and then ignored. The default is true. |
---|---|
Example |
$ ./rhino-console createsocketappender myappender localhost 12000 Done. |
Console command: createsyslogappender
Command |
createsyslogappender <appenderName> <host> <port> <facility> [-advertise <true|false>] [-appName <name>] [-charset <name>] [-connectTimeoutMillis <timeout(ms)>] [-enterpriseNumber <number>] [-format <name>] [-id <id>] [-immediateFail <true|false>] [-immediateFlush <true|false>] [-includeMDC <true|false>] [-mdcExcludes <key1,key2...>] [-mdcId <id>] [-mdcIncludes <key1,key2...>] [-mdcRequired <key1,key2...>] [-mdcPrefix <prefix>] [-messageId <msgid>] [-newLine <true|false>] [-reconnectionDelayMillis <delay(ms)>] [-keyStoreLocation <location>] [-keyStorePassword <password>] [-trustStoreLocation <location>] [-trustStorePassword <password>] [-ignoreExceptions <true|false>] [-protocol <protocol>] Description The SyslogAppender is a SocketAppender that writes its output to a remote destination specified by a host and port in a format that conforms with either the BSD Syslog format or the RFC 5424 format. Required Arguments appenderName The name of the Appender. host The name or address of the system that is listening for log events. port The port on the host that is listening for log events. facility The facility is used to try to classify the message. The facility option must be set to one of 'KERN', 'USER', 'MAIL', 'DAEMON', 'AUTH', 'SYSLOG', 'LPR', 'NEWS', 'UUCP', 'CRON', 'AUTHPRIV', 'FTP', 'NTP', 'AUDIT', 'ALERT', 'CLOCK', 'LOCAL0', 'LOCAL1', 'LOCAL2', 'LOCAL3', 'LOCAL4', 'LOCAL5', 'LOCAL6', or 'LOCAL7'. Options -advertise Indicates whether the appender should be advertised. -appName The value to use as the APP-NAME in the RFC 5424 syslog record. -charset The character set to use when converting the syslog String to a byte array. The String must be a valid Charset. If not specified, the default system Charset will be used. -connectTimeoutMillis The connect timeout in milliseconds. The default is 0 (infinite timeout). -enterpriseNumber The IANA enterprise number as described in RFC 5424. -format If set to 'RFC5424' the data will be formatted in accordance with RFC 5424. Otherwise, it will be formatted as a BSD Syslog record. -id The default structured data id to use when formatting according to RFC 5424. If the LogEvent contains a StructuredDataMessage the id from the Message will be used instead of this value. -immediateFail When set to true, log events will not wait to try to reconnect and will fail immediately if the socket is not available. -immediateFlush When true, each write will be followed by a flush. This will guarantee the data is written to disk but could impact performance. The default is true. -includeMDC Indicates whether data from the ThreadContextMap will be included in the RFC 5424 Syslog record. Defaults to true. -mdcExcludes A comma separated list of mdc keys that should be excluded from the LogEvent. -mdcId The id to use for the MDC Structured Data Element. -mdcIncludes A comma separated list of mdc keys that should be included in the FlumeEvent. -mdcRequired A comma separated list of mdc keys that must be present in the MDC. -mdcPrefix A string that should be prepended to each MDC key in order to distinguish it from event attributes. -messageId The default value to be used in the MSGID field of RFC 5424 syslog records. -newLine If true, a newline will be appended to the end of the syslog record. The default is false. -reconnectionDelayMillis If set to a value greater than 0, after an error there will be an attempt to reconnect to the server after waiting the specified number of milliseconds. -keyStoreLocation The location of the keystore for SSL connections. -keyStorePassword The password of the keystore for SSL connections. -trustStoreLocation The location of the truststore for SSL connections. -trustStorePassword The password of the truststore for SSL connections. -ignoreExceptions When true, exceptions encountered while appending events will be internally logged and then ignored. The default is true. -protocol 'TCP' (default), 'SSL' or 'UDP'. |
---|---|
Example |
$ ./rhino-console createsyslogappender myappender localhost 12000 USER Done. |
Creating a Console Appender
To create a new Console appender, use the following rhino-console command or related MBean operation.
Console command: createconsoleappender
Command |
createconsoleappender <appenderName> [-follow <true|false>] [-direct <true|false>] [-target <SYSTEM_OUT|SYSTEM_ERR>] [-ignoreExceptions <true|false>] [-pattern <pattern>] Description Appends log events to System.out or System.err using a layout specified by the user. Required Arguments appenderName The name of the Appender. Options -follow Identifies whether the appender honors reassignments of System.out or System.err -direct Write directly to java.io.FileDescriptor and bypass java.lang.System.out/.err. Can give up to 10x performance boost when the output is redirected to file or other process. -target Either 'SYSTEM_OUT' or 'SYSTEM_ERR'. The default is 'SYSTEM_OUT'. -ignoreExceptions When true, exceptions encountered while appending events will be internally logged and then ignored. The default is true. -pattern The pattern to use for logging output. |
---|---|
Example |
$ ./rhino-console createconsoleappender myappender -target SYSTEM_OUT Done. |
Remove an Appender
To remove an appender that is no longer required, use the following rhino-console command and related MBean methods.
Console command: removeappender
Command |
removeappender <appenderName> Description Remove all references to an appender and remove the appender. Required Arguments appenderName The name of the Appender. |
---|---|
Example |
$ ./rhino-console removeappender TraceNotification Removed appender: TraceNotification |
Attaching an Appender to a Logger
To attach an appender to a logger, use the following rhino-console command or related MBean operation.
Console command: addappenderref
Command |
addappenderref <logKey> <appenderName> Description Adds an appender for a log key. Required Arguments logKey The log key of the logger. appenderName The name of the Appender. |
---|---|
Example |
To configure log keys to output their logger’s messages to a specific file appender: $ ./rhino-console addappenderref root myappender Added appender reference of myappender to root. |
Console command: removeappenderref
Command |
removeappenderref <logKey> <appenderName> Description Removes an appender for a log key. Required Arguments logKey The log key of the logger. appenderName The name of the Appender. |
---|---|
Example |
$ ./rhino-console removeappenderref rhino.main AlarmsLog Removed appender reference of AlarmsLog from rhino.main. |
Configure a Logger
To configure/reconfigure a Logger, use the following console commands and related MBean methods. Since 2.6, Rhino has offered fully asynchronous logging through asynchronous loggers. Asynchronous logging is based on the idea of returning control to the processing thread as early as possible, for maximum throughput.
Rhino allows any individual logger to be asynchronous. This requires careful setup, as the way that log messages are logged is not entirely straightforward.
In order to get the expected behaviour, that messages to logger foo are logged asynchronously, and only once, logger foo must be configured as follows:

- asynchronous set to true. This makes the logger asynchronous.
- additivity set to false. This prevents double logging of messages if any parent logger also has a reference to the same appenders.
- add relevant appender refs. A non-additive logger must have at least one appender ref to log anything.
- set level. Asynchronous loggers do not inherit levels from synchronous parents.
As a result of this complexity, there is no rhino-console command to set or get asynchronous alone. Configuring an Asynchronous Logger shows an example.
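A minimal sketch of such a configuration, using the configurelogger command described below (the logger and appender names are hypothetical):

$ ./rhino-console configurelogger foo -level info -additivity false -asynchronous true -appender myappender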
Possible behaviours with Asynchronous Loggers
An asynchronous logger may not necessarily behave as expected, with all messages always logged asynchronously. Determining the actual behaviour of an asynchronous logger requires examining the whole path back to the first non-additive parent (or the root logger).
Configuration |
Behaviour |
Logger
name : rhino.main level : INFO additivity : false asynchronous: true appenders : [STDERR, RhinoLog, LogNotification, PolledMemoryAppender]
Parent
name : root level : INFO additivity : true asynchronous: <not configured - default is false> appenders : [STDERR, RhinoLog, LogNotification, PolledMemoryAppender] |
Messages to rhino.main are logged asynchronously, exactly once. |
Logger
name : rhino.main level : INFO additivity : false asynchronous: true appenders : []
Parent
name : root level : INFO additivity : true asynchronous: <not configured - default is false> appenders : [STDERR, RhinoLog, LogNotification, PolledMemoryAppender] |
Messages to rhino.main are never logged: the logger is non-additive but has no appenders. |
Logger
name : rhino.main level : INFO additivity : true asynchronous: true appenders : []
Parent
name : root level : INFO additivity : true asynchronous: <not configured - default is false> appenders : [STDERR, RhinoLog, LogNotification, PolledMemoryAppender] |
Messages to rhino.main are logged once, but synchronously: the logger has no appenders of its own, so messages pass via additivity to the synchronous root logger. |
Logger
name : rhino.main level : INFO additivity : true asynchronous: true appenders : [STDERR, RhinoLog, LogNotification, PolledMemoryAppender]
Parent
name : root level : INFO additivity : true asynchronous: <not configured - default is false> appenders : [STDERR, RhinoLog, LogNotification, PolledMemoryAppender] |
Messages to rhino.main are logged twice: once asynchronously via the logger’s own appenders, and once synchronously via the root logger, because additivity is true and both loggers reference the same appenders. |
Logger
name : rhino.main level : INFO additivity : true asynchronous: true appenders : [mainAppender]
Parent
name : root level : INFO additivity : true asynchronous: <not configured - default is false> appenders : [STDERR, RhinoLog, LogNotification, PolledMemoryAppender] |
Messages to rhino.main are logged asynchronously to mainAppender, and also synchronously via the root logger’s appenders, because additivity is true. |
Console command: configurelogger
Command |
configurelogger <logKey> [-level <level>] [-additivity <additivity>] [-asynchronous <asynchronosity>] [-appender <appender-ref>]* [-plugin <plugin-name>]* Description Set the configuration for a logger. At least one option must be specified. Plugins can be defined using the defineplugincomponent command. |
---|---|
Example |
$ ./rhino-console configurelogger root -level info -additivity true -appender STDERR -appender RhinoLog -appender LogNotification Created/updated logger configuration for root |
Console command: getloggerconfig
Command |
getloggerconfig <logKey> Description Get the configuration for a logger. Required Arguments logKey The log key of the logger. |
---|---|
Example |
$ ./rhino-console getloggerconfig rhino Logger rhino is not configured $ ./rhino-console getloggerconfig rhino.main name : rhino.main level : INFO additivity : <not configured - default is true> asynchronous: <not configured - default is false> appenders : [] |
Managing a Logger’s Additivity
To specify whether or not a logger is additive, use the following rhino-console command or related MBean operation.
The meaning of "additivity" is explained in the Appender additivity section of the About Logging page.
Loggers are additive by default. |
Console command: setadditivity
Command |
setadditivity <logKey> <additivity> Description Sets whether the log key inherits the log filter level of its parent logger. Required Arguments logKey The log key of the logger. additivity Set to true for enabled, false for disabled, or - to use the platform default |
---|---|
Example |
To make a logger additive: $ ./rhino-console setadditivity rhino.foo true Done. |
Console command: getadditivity
Command |
getadditivity <logKey> Description Get the configured additivity for a logger. Required Arguments logKey The log key of the logger. |
---|---|
Example |
To get a logger’s additivity: $ ./rhino-console getadditivity rhino Logger rhino is not configured - the default additivity (true) would apply to this log key $ ./rhino-console getadditivity root Additivity for root is true |
Managing a Logger’s Log Level
To manage the log level for a log, use the following rhino-console command or related MBean operation.
Console command: setloglevel
Command |
setloglevel <logKey> <logLevel> Description Set the log level for a logger. Required Arguments logKey The log key of the logger. logLevel The log level. |
---|---|
Example |
$ ./rhino-console setloglevel rhino.main info Log level for rhino.main set to: INFO |
Console command: getloglevel
Command |
getloglevel <logKey> Description Get the configured log level for a logger. Displays the effective log level if no explicit level is set. Required Arguments logKey The log key of the logger. |
---|---|
Examples |
$ ./rhino-console getloglevel rhino Logger rhino does not exist but it has sub-loggers. Log level for rhino is not set. Effective (inherited) log level is: INFO $ ./rhino-console getloglevel rhino.main Log level for rhino.main is: INFO |
Listing Log Appenders
Listing Log Keys
To list log keys, use the following rhino-console command or related MBean operation.
Console command: listlogkeys
Command |
listlogkeys [-configured <true|false>] [-prefix <prefix>] Description Returns an array of known log keys. If configured is true, return only explicitly configured logkeys, otherwise return all known keys. Options -configured If true, list only keys with explicit configuration, otherwise list all known keys -prefix Limit results to log keys matching prefix |
---|---|
Example |
[Rhino@localhost (#3)] listlogkeys fastserialize framework framework.bulk.manager framework.bulk.ratelimiter framework.csi framework.dlv framework.groupheartbeat framework.mcp framework.mcp.preop framework.mcpclient-mplexer framework.rmi.network framework.rmi.result framework.rmi.server framework.rmi.skeleton.com.opencloud.rhino.configmanager.runtime.ConfigurationStateImpl ... |
Managing Logging Properties
Rhino 2.6 allows property substitutions in almost all logging configuration.
To manage properties available for substitution use the following rhino-console commands and related MBean methods.
Console command: getloggingproperties
Command |
getloggingproperties [-property <property>] Description Get the values of Logging properties. Options -property The name of the Property |
---|---|
Example |
$ ./rhino-console getloggingproperties name value ------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------- colourLevel %highlight{%-7level}{${consoleColours}} consoleColours SEVERE=RED BRIGHT, WARNING=YELLOW, INFO=GREEN, CONFIG=CYAN, FINE=BRIGHT_BLACK, FINER=BRIGHT_BLACK, FINEST=BRIGHT_BLACK, CLEAR=GREEN, CRITICAL=RED, MAJOR=RED, logDir ${sys:rhino.dir.work}/log maxArchivedFiles 4 plainLevel %-7level rhinoVersionHeader %d{yyyy-MM-dd HH:mm:ss.SSSZ} ${rhino:version} log file started%n 6 rows |
Console command: setloggingproperty
Command |
setloggingproperty <property> <value> Description Set a Logging property. Overwrites if it already exists Required Arguments property The name of the Property value The value of the Property |
---|---|
Example |
$ ./rhino-console setloggingproperty maxArchivedFiles 5 Set property maxArchivedFiles to 5 |
Console command: removeloggingproperty
Command |
removeloggingproperty <property> Description Remove a logging property if not in use. Required Arguments property The name of the Property |
---|---|
Example |
$ ./rhino-console removeloggingproperty consoleColours An error occurred executing command 'removeloggingproperty': com.opencloud.rhino.configmanager.exceptions.ConfigurationException: Property consoleColours in use by property colourLevel $ ./rhino-console removeloggingproperty colourLevel Removed logging property colourLevel |
Define a Plugin Component
Console command: defineplugincomponent
Command |
defineplugincomponent <alias-name> <plugin-name> [(<property-name> <property-value>)]* [(-plugin <name>)]* Description Define a plugin component that can be used with the configurelogger command or other plugin definitions. Plugin definitions exist only in the client, and will be lost when the client terminates. |
---|---|
Example |
[SLEE Stopped] [admin@localhost (#11)] defineplugincomponent fooPattern PatternLayout pattern "%d{yyyy-MM-dd HH:mm:ss.SSSZ} %-7level [%logger] <%threadName> %mdc %msg{nolookups}%n%throwable" Defined plugin component with name PatternLayout |
Annotating Log Files
To append a message to a given logger, use the following console commands and related MBean methods.
Console command: annotatelog
Command |
annotatelog <logKey> <logLevel> <message> Description Logs a message to all nodes in the cluster using Rhino's logging subsystem. Required Arguments logKey The log key of the logger. logLevel The log level. message The message to log. |
---|---|
Example |
To annotate the log: $ ./rhino-console annotatelog root info "a log annotation" Annotating log. Done.
rhino.log
... 2017-12-04 11:53:23.010+1300 INFO [root] <GroupRMI-thread-1> {} a log annotation ... |
Rolling-Over All Rolling File Appenders
To back up and truncate all existing rolling file appenders, use the following rhino-console command or related MBean operation.
Overriding default rollover behaviour
The default behaviour for log files is to roll over automatically when they reach 100MB in size. You can instead request a rollover at any time, using the rolloverlogfiles console command or related MBean operation described below. |
Console command: rolloverlogfiles
Command |
rolloverlogfiles Description Triggers a rollover of all existing rolling appenders. |
---|---|
Example |
$ ./rhino-console rolloverLogFiles Done. |
Logging Plugins
Rhino uses the Log4j 2 plugin architecture to support any Log4j 2 appender and allow addition of custom appenders.
See the Apache Log4j 2 Plugins documentation for more information about plugins. |
Many of the appenders provided by Log4j 2 have additional dependencies. These are not packaged with Rhino. Some examples include:
- Cassandra Appender
- JDBC Appender
- Kafka Appender
Installing appender dependencies and custom plugins
If you want to use a custom plugin or a Log4j 2 appender that requires additional dependencies, you must put the required JARs into ${RHINO_HOME}/lib/logging-plugins/. Any jars found in this directory are added to the core logging classloader.
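For example, to make a custom appender available to the logging subsystem (the jar name is hypothetical):

$ cp custom-appender.jar $RHINO_HOME/lib/logging-plugins/

The node must then be restarted, as noted below.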
Files in ${RHINO_HOME}/lib/logging-plugins are only scanned at node boot time. |
This classloader is not visible to the main application classloaders. Because it is isolated, it can contain versions of libraries that would otherwise conflict with versions of libraries deployed in the SLEE. It cannot, however, contain multiple versions of the same library, so all installed appenders must agree on a single version of each shared dependency.
Custom plugins may affect the stability of Rhino nodes.
Custom plugins
Log4j 2 provides multiple mechanisms for plugin discovery. The only mechanism supported by Rhino is use of the Log4j 2 annotation processor during the plugin build phase.
The Log4j 2 annotation processor works by scanning for Log4j 2 plugins and generating a metadata file in your processed classes.
The Java compiler will automatically pick up the annotation processor if it is in the classpath. If the annotation processor is disabled during the compilation phase, you must add another compiler pass to your build process that does annotation processing for org.apache.logging.log4j.core.config.plugins.processor.PluginProcessor.
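As a sketch, such an extra pass can invoke the processor explicitly (the source file and classpath entries here are hypothetical):

$ javac -proc:only -processor org.apache.logging.log4j.core.config.plugins.processor.PluginProcessor \
    -cp log4j-core.jar -d build/classes src/com/example/MyAppender.java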
Importing a Rhino export
Dependencies for logging plugins are not included in a Rhino export, even if there is configuration that requires those dependencies. So when using rhino-export and importing into a new Rhino instance, the logging plugins must be copied manually. Copy ${RHINO_HOME}/lib/logging-plugins to the new Rhino location.
Known Issues
- Highlighting of alarm and tracer levels does not work in colour console (LOG4J2-1999, LOG4J2-2000, LOG4J2-2005)
- Rolling file appenders do not support live reconfiguration between time-based and sequential numbering (LOG4J2-2009)
- The createOnDemand configuration option for file appenders, including rolling file appenders, does not work if the appender layout has a header (LOG4J2-2027)
Staging
Staging refers to the micro-management of work within the Rhino SLEE.
This work is divided up into items, executed by workers; each worker is represented by a system-level thread. You can configure the number of threads available to process items on the stage, to minimise latency and thus increase the performance capacity of the SLEE.
The staging-thread system
Rhino performs event delivery on a pool of threads, called staging threads. The staging-thread system operates a queue of units of work for Rhino to perform, called stage items. Typically, these units of work involve the delivery of SLEE events to SBBs. A stage item enters the staging queue; the first available staging thread then removes it and performs its associated work. How long an item spends in the staging queue, before a staging thread processes it, contributes to the overall latency in handling the event. Thus, it is important to make sure that the SLEE is using staging threads optimally.
Tunable parameters
To improve performance, you can tune the following staging parameters: maximumSize, threadCount, maximumAge, and queueType.
The node must be restarted for any change in maximumSize , maximumAge , or queueType to take effect. |
For instructions on tuning staging parameters, see Configuring Staging Parameters. You can observe the effects of configuration changes in the statistics client by simulating heavy concurrency using a load simulator. |
maximumSize
Description |
Maximum size of the staging queue. Determines how many stage items may be queued awaiting processing. When the queue reaches maximum size, the SLEE automatically fails and removes the oldest item, to accommodate new items. |
---|---|
Default |
3000 |
Recommendation |
The default works well for most scenarios. It should be high enough that the SLEE can ride out short bursts of peak traffic, but not so large that, under extreme overload, stage items wait in the queue too long to still be useful to the protocol generating the event before being properly failed. |
threadCount
Description |
Number of staging threads in the thread pool. |
---|---|
Default |
30 |
Recommendation |
The default works well for many applications on a wide range of hardware. However, for some applications, or on hardware with four or more CPUs, more staging threads may be useful. In particular, when the SLEE is running services that perform high-latency blocking requests to an external system, more staging threads are often necessary. For example, consider a credit-check application that only allows a call setup to continue after performing a synchronous call to an external system: if each such call blocks a staging thread for 50 ms, then sustaining 1000 call setups per second occupies, on average, 1000 × 0.05 = 50 staging threads, which already exceeds the default of 30 (these figures are illustrative only). |
In real-world applications, it is seldom a matter of applying a simple formula to work out the optimal number of staging threads. Instead, performance-monitoring tools would be used to examine the behaviour of staging, alongside such metrics as event-processing time and system-CPU usage, to find a suitable value for this parameter. |
maximumAge
Description |
Maximum possible age of a staging item, in milliseconds. Determines how long an item of work can remain in the staging queue and still be considered valid for processing. Staging threads automatically fail and remove stage items that stay in the staging queue for longer than this maximum age. Tuning this (along with maximumSize) determines how quickly stale work is shed under overload. |
---|---|
Default |
10000 |
queueType
Description |
Determines ordering of the staging queue. The available options are FIFO (items are processed in arrival order) and LIFO (the most recently queued item is processed first). |
---|---|
Default |
LIFO |
Recommendation |
The default LIFO ordering works well for most scenarios: under overload, the newest stage items, which are the most likely to still be useful to the protocol that generated them, are processed first. |
Configuring Staging Parameters
To configure staging parameters, use the following rhino-console commands or related MBean operations.
configurestagingqueues command
Command |
configurestagingqueues [maximumAge <age>] [maximumSize <size>] [threadCount <count>] Description set some or all of the staging-queues configuration properties |
---|---|
Example |
$ ./rhino-console configurestagingqueues maximumAge 11000 maximumSize 4000 threadCount 40 Updated staging-queue config properties: maximumSize=4000 threadCount=40 maximumAge=11000 |
getstagingqueuesconfig command
Command |
getstagingqueuesconfig Description get the staging-queues configuration properties |
---|---|
Example |
$ ./rhino-console getstagingqueuesconfig Configuration properties for staging-queues: maximumAge=11000 maximumSize=4000 threadCount=40 |
MBean operations
Use the following MBean operations to configure staging queue parameters, defined on the Staging Queue Management MBean interface.
Operations | Usage |
---|---|
To get and set the maximum number of items permitted in the staging queue: public int getMaximumSize() throws ConfigurationException; public void setMaximumSize(int size) throws ConfigurationException, ValidationException; |
To get and set the maximum age of items permitted in the staging queue: public long getMaximumAge() throws ConfigurationException; public void setMaximumAge(long ms) throws ConfigurationException, ValidationException; Queued work items do not immediately expire if their age (measured in milliseconds) exceeds the maximum allowed. Instead, the SLEE discards them when they leave the staging queue (when it’s their turn for processing). |
To get and set the number of threads available for processing items on the staging queue: public int getThreadCount() throws ConfigurationException; public void setThreadCount(int threads) throws ConfigurationException, ValidationException; |
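These operations can also be invoked programmatically over JMX. The sketch below is illustrative only: the operation names match those documented above, but the JMX service URL and the MBean ObjectName are deployment-specific inputs, not values defined by this document:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class StagingTuner {
    // args[0]: JMX service URL for the Rhino node (deployment-specific)
    // args[1]: ObjectName of the Staging Queue Management MBean
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(args[0]);
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName stagingQueues = new ObjectName(args[1]);

            // Invoke setThreadCount(int) as documented above
            connection.invoke(stagingQueues, "setThreadCount",
                    new Object[] { 40 }, new String[] { "int" });

            // Read the new value back with getThreadCount()
            Object threads = connection.invoke(stagingQueues, "getThreadCount",
                    new Object[0], new String[0]);
            System.out.println("threadCount is now " + threads);
        }
    }
}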
Object Pools
The Rhino SLEE uses groups of object pools to manage the Java objects representing SBBs and profile tables.
Throughout the lifecycle of an object, it may move from one pool to another. Although the defaults are generally suitable, each object pool’s maximum size can be configured if needed.
Pools
There are several types of object pool; however, OpenCloud recommends that system administrators change only the initial pooled pool sizes. The other pool sizes are best set during performance testing, and only after the maximum workload without tuning has been determined. When tuning pool sizes, consideration should be given to the maximum load nodes are expected to process and the memory consumed by the pools.
The JAIN SLEE specification describes the purpose of the object pools with respect to SBBs:
The SLEE creates and manages a pool of SBB objects. At runtime, the SLEE may assign zero or more SBB objects to represent an SBB entity. When an SBB object is assigned to represent an SBB entity, the SBB object is in the Ready state (see Section 6.3). It can receive and fire events, receive synchronous method invocations, and access and update the persistent state of the SBB entity. Another viewpoint is that the SBB object caches a copy of the persistent data of the SBB entity to provide transactional semantics.
Rhino has five types of object pool. Each is managed per-service or per-profile-table; if a service or profile table does not have a pool configuration, it inherits the default configuration for its type.
Pooled pool |
Contains SBB objects and profile objects in the Pooled state. If the pool is empty, the SLEE must create and initialise a new object the next time it needs one. This may take time, particularly if the object is expensive to create and initialise. |
Ready pool |
Contains SBB objects and profile objects in the Ready state. On startup this pool is always empty. It is populated only with objects from the stale pool or pooled pool, or objects created on demand if the pooled pool was empty. |
Stale pool |
Contains SBB objects and profile objects that are associated with an SBB entity or profile that has been modified in another transaction. This pool exists as a partner for the ready pool to avoid unnecessary calls to On startup this pool is always empty. It is populated with objects from the ready pool if and when they become stale. |
Persistent state pool |
A persistent state object holds the MemDB representation of CMP and CMR field data for an SBB entity or profile. A new persistent state object is required for every transaction in which CMP or CMR field data is updated. The purpose of the persistent state pool is to reduce the GC impact caused by the cycling of these objects as SBB entities and profiles are created, updated, and removed. |
State pool |
State objects provide the interface between SBB entities and profiles and the persistent state objects holding their CMP and CMR field data. State objects are associated with an SBB object or profile object when the object is associated with an SBB entity or profile. The state pool should be configured to be at least the size of the ready pool. The maximum number of state objects in use at any one time, and thus the maximum recommended state pool size, is limited to the sum of:
|
Object pool statistics are available in the ObjectPools parameter set. |
Configuring Object Pools
The configuration of object pools is structured as follows:
- A global defaults object pool configuration contains the set of base defaults.
- A service defaults object pool configuration contains the default configuration for services. When a service is deployed, its initial object pool configuration is copied from the service defaults configuration. If, for some reason, the service defaults configuration is missing when it is required, it will be recreated based on the global defaults configuration.
- A profile table defaults object pool configuration contains the default configuration for profile tables. When a profile table is created, its initial object pool configuration is copied from the profile table defaults configuration. If, for some reason, the profile table defaults configuration is missing when it is required, it will be recreated based on the global defaults configuration.
- When a new namespace is created, each of the default object pool configurations for that namespace is initialised as a copy of the corresponding configuration from the default namespace.
Object pools can be configured, for example, with rhino-console using the following commands:

- getobjectpoolconfig can be used to view an object pool configuration; and
- configureobjectpools can be used to change the sizes of the object pools in a configuration.

Please see the online help in rhino-console for more information on using these commands.
Alternatively, MBean operations can be used to configure object pools. Please see:

- ObjectPoolManagementMBean for obtaining the ObjectName of Object Pool MBeans; and
- ObjectPoolMBean for managing an individual object pool configuration.
The useDefaults flag of an object pool configuration is deprecated and no longer has any function. |
Licenses
As well as an overview of licenses, this section includes instructions for performing Rhino SLEE license-management procedures, with explanations, examples and links to related javadocs. Each procedure has a corresponding rhino-console command and License Management MBean operation.
About Licenses
To be activated, services and resource adaptors need a valid license loaded into Rhino (at least for core or "default" functions). See the following details on license properties, validity, alarms, statistics and an example.
License properties
Each license has the following properties:
- a unique serial identifier
- a start date (before which the license is not valid)
- an end date (after which the license is not valid)
- a set of licenses that are superseded by this license
- licensed-product functions — for the Rhino family of products, these are "Rhino" (used by the production Rhino build for its core functions) and "Rhino-SDK" (used by the SDK Rhino build for its core functions)
- licensed-product versions
- licensed-product capacities
- one or more descriptive fields (optional, not actually used for licensing calculations).
Each license can contain one or more sets of (function, version, capacity). For example, a license could be for "Rhino-SDK, version 2.1, 1000" as well as "Rhino, version 2.1, 500".
Highest-capacity licenses display on startup
Licenses display when Rhino starts up — not the complete list, but only those with the highest licensed capacity for each licensed function/version. (If you have a big license and a small license for the same function/version installed, only the largest will display on startup.) |
License validity
A license is considered valid if:
- The current date is after the license start date, but before the license end date.
- The list of license functions in that license contains the required function.
- The list of product versions contains the required version.
- The license is not superseded by another.
If Rhino finds multiple valid licenses for the same function, it uses the one with the largest licensed capacity.
Upon activating a service or resource adaptor, Rhino checks the list of functions that it requires against the list of installed valid licenses. If all required functions are licensed, the service or resource adaptor will activate; if one or more functions are unlicensed, it will not.
Licensing applies to explicit activation (by way of a management client) and implicit activation (on Rhino restart). There is one exception: if a node joins an existing cluster that has an active service for which there is no valid license, the service does become active on that node.
In the production Rhino, services and resource adaptors that are already active will continue to successfully process events for functions that are no longer licensed, such as when a license has expired. For the SDK Rhino, services and resource adaptors that are already active will stop processing events for the core "Rhino-SDK" function if it becomes unlicensed, typically after a license has expired.
License alarms
Typically, Rhino raises an alarm when:
- A license has expired.
- A license is due to expire in the next 7 days.
- License units are being processed for a currently unlicensed function.
- A license function is processing more accounted units than it is licensed for. The audit log shows how long it’s been over limit.
System administrators are responsible for verifying and cancelling alarms, through a management client (the command console or Rhino Element Manager).
Cancelled capacity alarms are re-generated for licensed functions that continue to run over capacity. |
Enforcing license limits
Production Rhino never enforces the "hard limit" on a license. The SDK version of Rhino will enforce the "hard limit" on the core "Rhino-SDK" function, by rejecting incoming work.
Contact your OpenCloud sales representative or OpenCloud support if you require a greater capacity Rhino SDK license during development. |
Audit logs
Rhino SLEE generates two copies of the same audit log: unencrypted and encrypted. The Rhino SLEE system administrator can use the unencrypted log for a self-audit. OpenCloud support may request the encrypted log (which contains an exact duplicate of the information in the unencrypted log) to perform a license audit. Audit logs are subject to "rollover", just like any other rolling log appender log — for a full audit log for a particular period, several logs may need to be concatenated. (Older logs are named audit.log.0, audit.log.1, and so on.)
See also License Audit Log Format. |
License statistics
The standard Rhino SLEE statistics interfaces include:
- the root License Accounting statistic
- statistics for each function, with both accountedUnits and unaccountedUnits values.

Only accountedUnits count towards licensed limits. Rhino records unaccountedUnits for services and resource adaptors with licensed functions configured as accounted="false".
License Audit Log Format
License audit logs track information over time, about cluster membership, installed licenses, and license function usage.
Each line in the audit logs describes one of these items at a particular time, as detailed below. All lines start with a full time stamp which includes a GMT offset.
Every Rhino node writes an audit log, but all audit logs detail cluster-wide information and usage statistics (not per-node information). |
Cluster membership
These lines in the audit logs show the current set of node IDs following a cluster membership change.
When logged |
Whenever the active node set in the cluster changes. |
---|---|
Format |
<timestamp>, CLUSTER_MEMBERS_CHANGED, [<comma>,<separated>,<node>,<list>] |
Example |
2014-03-01 01:02:03 +0000, CLUSTER_MEMBERS_CHANGED, [101,102] |
Installed licenses
These lines in the audit logs list and describe changes to installed licenses.
When logged |
Whenever the set of valid licenses changes. For example, when:
|
---|---|
Format |
<timestamp>,LICENSE,"<license description>" |
Example |
2013-09-27 01:34:20 +0100,LICENSE,"[LicenseInfo serial=116eaaffde9,validFrom=Tue May 20 12:53:35 NZST 2008,...]" |
License function usage
These lines in the audit logs show the following information about license function usage:
-
the start and end timestamps
-
number of records
-
minimum, maximum, and average times (each logged period is made up of several smaller periods).
When logged |
Every ten minutes. |
---|---|
Format |
Each line represents a single license function, in the following format: <timestamp> <startTimeMillis> <endTimeMillis> <intervalMillis> <nodeCount> <function> <totalAccounted> <avgAccounted> <totalUnaccounted> <avgUnaccounted> <capacity> where the fields are as follows:
|
Example |
2013-05-31 02:17:40 +1200, 1369922860253, 1369923460253, 600000, 2, Rhino, 5999195, 9998.66, 0, 0.00, 10000
2013-05-31 02:27:40 +1200, 1369923460253, 1369924060253, 600000, 2, Rhino-CGIN-Base, 0, 0.00, 0, 0.00, 10000 |
Sample log file
2013-05-30 13:27:32 +1200, CLUSTER_MEMBERS_CHANGED, [101,102,105]
2013-05-30 13:27:40 +1200,LICENSE,"[LicenseInfo serial=13ee33e9f4d,licensee=Open Cloud Limited (Test license),validFrom=Mon May 27 11:48:52 NZST 2013,validUntil=Fri Jul 26 11:48:52 NZST 2013,valid=true,supercedes=[],components=[[LicenseComponent function=Rhino,version=2.*,capacity=100000],[LicenseComponent function=Rhino-CGIN,version=2.*,capacity=100000],[LicenseComponent function=Rhino-CGIN-Base,version=2.*,capacity=100000],[LicenseComponent function=Rhino-Resources,version=2.*,capacity=0],metadata=[Purpose=Testing] ]"
2013-05-30 13:37:40 +1200, 1369877260251, 1369877860253, 600002, 2, Rhino-CGIN-Base, 0, 0.00, 4, 0.01, 100000
2013-05-30 13:37:40 +1200, 1369877260251, 1369877860253, 600002, 2, Rhino, 1769753, 2949.58, 0, 0.00, 100000
2013-05-30 13:47:40 +1200, 1369877860253, 1369878460253, 600000, 2, Rhino-CGIN-Base, 0, 0.00, 0, 0.00, 100000
2013-05-30 13:47:40 +1200, 1369877860253, 1369878460253, 600000, 2, Rhino, 5830285, 9717.14, 0, 0.00, 100000
2013-05-30 13:57:40 +1200, 1369878460253, 1369879060253, 600000, 2, Rhino-CGIN-Base, 0, 0.00, 0, 0.00, 100000
2013-05-30 13:57:40 +1200, 1369878460253, 1369879060253, 600000, 2, Rhino, 5999897, 9999.83, 0, 0.00, 100000
2013-05-30 14:07:40 +1200, 1369879060253, 1369879660253, 600000, 2, Rhino-CGIN-Base, 0, 0.00, 0, 0.00, 100000
2013-05-30 14:07:40 +1200, 1369879060253, 1369879660253, 600000, 2, Rhino, 6000031, 10000.05, 0, 0.00, 100000
2013-05-30 14:17:40 +1200, 1369879660253, 1369880260253, 600000, 2, Rhino-CGIN-Base, 0, 0.00, 0, 0.00, 100000
2013-05-30 14:17:40 +1200, 1369879660253, 1369880260253, 600000, 2, Rhino, 5999872, 9999.79, 0, 0.00, 100000
2013-05-30 14:27:40 +1200, 1369880260253, 1369880860253, 600000, 2, Rhino-CGIN-Base, 0, 0.00, 0, 0.00, 100000
2013-05-30 14:27:40 +1200, 1369880260253, 1369880860253, 600000, 2, Rhino, 5999944, 9999.91, 0, 0.00, 100000
2013-05-30 14:37:40 +1200, 1369880860253, 1369881460253, 600000, 2, Rhino-CGIN-Base, 0, 0.00, 0, 0.00, 100000
2013-05-30 14:37:40 +1200, 1369880860253, 1369881460253, 600000, 2, Rhino, 5992571, 9987.62, 0, 0.00, 100000
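Because the fields after the timestamp are comma-separated, usage lines are straightforward to process with scripts. The following minimal Java sketch parses one license function usage line into the fields defined above; the class name, and the assumption that the line matches the format exactly, are illustrative only.

public class UsageLineParser {
    public static void main(String[] args) {
        // One "license function usage" line from the audit log (see the format above)
        String line = "2013-05-31 02:17:40 +1200, 1369922860253, 1369923460253, "
                + "600000, 2, Rhino, 5999195, 9998.66, 0, 0.00, 10000";
        // The timestamp contains spaces but no commas, so a simple split works
        String[] f = line.split(",\\s*");
        String timestamp = f[0];
        long intervalMillis = Long.parseLong(f[3]);
        int nodeCount = Integer.parseInt(f[4]);
        String function = f[5];
        long totalAccounted = Long.parseLong(f[6]);
        long capacity = Long.parseLong(f[10]);
        System.out.println(timestamp + ": " + function + " used " + totalAccounted
                + " accounted units over " + intervalMillis + " ms on " + nodeCount
                + " nodes (licensed capacity " + capacity + ")");
    }
}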
Listing Current Licenses
To list current licenses, use the following rhino-console command or related MBean operation.
Console command: listlicenses
Command |
listlicenses <brief|verbose>

Description
Displays a summary of the currently installed licenses |
---|---|
Example |
$ ./rhino-console listLicenses
Installed licenses:
[LicenseInfo serial=107baa31c0e,validFrom=Wed Nov 23 14:00:50 NZDT 2008,
 validUntil=Fri Dec 02 14:00:50 NZDT 2008,capacity=400,hardLimited=false,
 valid=false,functions=[Rhino],versions=[2.1],supersedes=[]]
[LicenseInfo serial=10749de74b0,validFrom=Tue Nov 01 16:28:34 NZDT 2008,
 validUntil=Mon Jan 30 16:28:34 NZDT 2009,capacity=450,hardLimited=false,
 valid=true,functions=[Rhino,Rhino-IN-SIS],versions=[2.1,2.1],
 supersedes=[]]
Total: 2

In this example, two licenses are installed:

-

license 107baa31c0e, with a capacity of 400, which is no longer valid (it has expired)

-

license 10749de74b0, with a capacity of 450, which is valid (and also covers the Rhino-IN-SIS function).

Both are for Rhino version 2.1. |
MBean operation: getLicenses
MBean |
|
---|---|
Rhino operation |
public String[] getLicenses() |
Getting Licensed Capacity
To get the licensed capacity for a specified product and function (to determine how much throughput the Rhino cluster has), use the following rhino-console command or related MBean operation.
Console command: getlicensedcapacity
Command |
getlicensedcapacity <function> <version>

Description
Gets the currently licensed capacity for the specified function and version |
---|---|
Example |
$ ./rhino-console getlicensedcapacity Rhino 2.1
Licensed capacity for function 'Rhino' and version '2.1': 450 |
MBean operation: getLicensedCapacity
MBean |
|
---|---|
Rhino operation |
public long getLicensedCapacity(String function, String version) |
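As a sketch only: the operation can be invoked from any standard JMX client. The snippet below assumes you already have an MBeanServerConnection to Rhino, and that the licenseMBean variable holds the ObjectName of the license management MBean — both are assumptions; the actual ObjectName is Rhino-specific and not shown here.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class GetLicensedCapacityExample {
    // Returns the licensed capacity for function "Rhino", version "2.1"
    static long getLicensedCapacity(MBeanServerConnection conn,
                                    ObjectName licenseMBean) throws Exception {
        return (Long) conn.invoke(
                licenseMBean,
                "getLicensedCapacity",
                new Object[] { "Rhino", "2.1" },
                new String[] { String.class.getName(), String.class.getName() });
    }
}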
Installing a License
To install a license, use the following rhino-console command or related MBean operation.
Console command: installlicense
Command |
installlicense <license file>

Description
Install the license at the specified SLEE-accessible filename or file: URL |
---|---|
Example |
$ ./rhino-console installLicense file:/home/user/rhino/rhino.license
Installing license from file:/home/user/rhino/rhino.license |
License files must be on the local filesystem of the host where the node is running. |
MBean operation: install
MBean |
|
---|---|
Rhino operation |
Install a license from a raw byte array
public void install(byte[] filedata) throws LicenseException, LicenseAlreadyInstalledException, ConfigurationException |
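As a hedged sketch of driving this operation from a JMX client: read the license file into a byte array and pass it to install. As in the earlier example, the connection and the MBean's ObjectName (the licenseMBean variable) are assumed to be obtained elsewhere.

import java.nio.file.Files;
import java.nio.file.Paths;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class InstallLicenseExample {
    static void installLicense(MBeanServerConnection conn, ObjectName licenseMBean,
                               String licensePath) throws Exception {
        byte[] filedata = Files.readAllBytes(Paths.get(licensePath)); // raw license file contents
        conn.invoke(licenseMBean,
                    "install",
                    new Object[] { filedata },
                    new String[] { byte[].class.getName() }); // "[B" in JMX signature terms
    }
}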
Uninstalling a License
To uninstall a license, use the following rhino-console command or related MBean operation.
Console command: uninstalllicense
Command |
uninstalllicense <serial id>

Description
Uninstalls the specified license |
---|---|
Example |
To uninstall the license with serial ID "105563b8895":

$ ./rhino-console uninstalllicense 105563b8895
Uninstalling license with serial ID: 105563b8895 |
MBean operation: uninstall
MBean |
|
---|---|
Rhino operation |
public void uninstall(String serial) throws UnrecognizedLicenseException, ConfigurationException |
Rate Limiting (Overload Control)
Rhino’s carrier-grade overload control manages the rate of work Rhino accepts, processes, and generates for other systems. A Rhino administrator can:
-
design query, transaction, and session/message rates for the entire end-to-end system
-
configure fine-grained processing rates, so Rhino can participate in the entire end-to-end system without overloading other network equipment.
Rate limiting can be useful in high-load situations, to ensure that once an activity starts it can run to completion without overloading the SLEE or related systems. |
Granular control using "rate limiters"
Rhino administrators use rate limiters to set specific "flow rates" for different types of messages. These limit the number of messages per second that the SLEE will process or generate, and throttle back the flow of certain messages in favor of others. Rhino also lets administrators monitor and refine rate-limiter parameters, to achieve desired throughput.
This section includes the following topics:
About Rate Limiting
For rate limiting with the Rhino SLEE:
-
An administrator creates limiters and assembles them into hierarchies.
-
The administrator connects those limiters to limiter endpoints.
-
RAs and SBBs determine the number of units needed for a particular piece of work.
-
RAs, SBBs, and Rhino code use limiter endpoints to determine if a piece of work can be done (for example, if a message can be processed or sent).
Per-node configuration
Some limiter properties can be overridden on a per-node basis (a value set this way is called a per-node value). For example, a rate limiter’s maximum allowed rate could be set differently for different sized machines. Each node always independently maintains the working state of each limiter (counts of units used and so on). |
What are limiters?
A limiter is an object that decides if a piece of work can be done or not. How the decision is made depends on the type of limiter. Limiters are always created and removed "globally". That is, they always exist on all nodes in the cluster.
Limiter names
Each limiter has a name. A limiter’s name must be globally unique within the scope of the Rhino instance.
Name character restriction
The limiter name cannot include the "/" character. |
See also Limiter Types for details on limiter properties, and Managing Limiters for procedures to create, remove, set properties, inspect, and list limiters. |
Limiter hierarchies
Limiters can optionally be linked to a single parent limiter and/or multiple child limiters. A limiter only allows a piece of work if all of its ancestors (its parent, its parent’s parent, and so on) also allow the work. You configure a hierarchy by setting the parent property on each limiter.
The limiter hierarchy is the same on all nodes — per-node hierarchies are not possible. (Nor is it possible to create a cycle among parent/child limiters.) |
Bypassing a limiter
All limiters have a bypassed property. If the flag is true, then the limiter itself takes no part in the decision about allowing work. If it has a parent, it delegates the question to the parent. If it doesn’t have a parent, it always allows all work.
Rhino has no concept of enabling or disabling a limiter. Instead, you use the bypassed property.
Default limiter hierarchy
By default Rhino has two limiters, with the following configuration:
Name | Type | Parent | Bypassed | Configuration |
---|---|---|---|---|
QueueSaturation |
QUEUE_SATURATION |
<none> |
❌ |
maxSaturation=85% |
SystemInput |
RATE |
QueueSaturation |
✅ |
maxRate=0 timeUnit=seconds depth=1 |
So by default, limiting only happens when the event staging queue is 85% or more full. Both limiters can be reconfigured as necessary. QueueSaturation can be removed, but SystemInput cannot (although it doesn’t have to be used for anything).
See also Listing Limiters and Limiter Hierarchies. |
Limiter endpoints
A limiter endpoint is the interface between code that uses rate limiting, and the rate-limiting system itself. Administrators cannot create limiter endpoints — they are created as part of RA entities and SBBs. The only configuration available for a limiter endpoint is whether or not it is connected to a limiter. Limiter endpoints are not the same as SLEE endpoints — they are different concepts.
Endpoints in RAs and SBBs
RAs and SBBs may have any number of limiter endpoints, and there is no restriction on what they can be used for. Documentation with each RA or SBB should list and explain the purpose of its limiter endpoints. Typical uses include throttling output of messages to external resources and throttling input messages before passing them to the SLEE.
RA "Input" endpoints
The SLEE automatically creates a limiter endpoint named RAEntity/<entityname>/Input for every RA entity. These endpoints let the SLEE throttle incoming messages from RA entities. By default each "Input" endpoint is connected to the built-in "SystemInput" limiter, but the administrator can disconnect it or connect it to another limiter.
The SLEE will try to use one unit on the "Input" endpoint every time a new activity is started. If the endpoint denies the unit, the SLEE rejects the activity. The SLEE will forcibly use one unit every time the RA passes in an event or ends an activity. This functionality is built into Rhino, and automatically happens for all RA entities, regardless of whether or not they use other limiter endpoints.
See also Managing Limiter Endpoints, for procedures to list limiter endpoints, connect them to and disconnect them from a limiter, and find which limiter is connected to them. |
What are units?
Units are an abstract concept representing the cost of doing a piece of work. For example, one unit might represent a normal piece of work, so three units indicate a piece of work that needs three times as much processing.
The RA or SBB determines the number of units used for a particular piece of work. Some might be configurable through configuration properties or deployment descriptors. This information should be documented for each individual RA or SBB.
Using units
Code can ask an endpoint "Can I do x units of work?". If the endpoint is connected to a limiter, the limiter will answer yes or no. If not connected to a limiter, the answer is always yes. If the answer is yes, the units are said to have been used. If the answer is no, the units are said to have been rejected.
Code can also tell the endpoint "I am doing x units of work that cannot be throttled". The endpoint passes this message to a limiter if connected (otherwise, it ignores the message). The units in this case are said to have been forcibly used.
Future limiter decisions do not differentiate between units used and those forcibly used. Rhino just counts both as having been "used".
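The interaction can be pictured with a short sketch. The LimiterEndpoint interface and method names below are purely illustrative inventions, not a documented Rhino API; they simply mirror the two questions described above.

// Hypothetical interface -- invented for illustration only
interface LimiterEndpoint {
    boolean tryUseUnits(int units);  // "Can I do x units of work?"
    void forceUseUnits(int units);   // "I am doing x units of work that cannot be throttled"
}

class LimiterEndpointUsage {
    void onNewWork(LimiterEndpoint endpoint) {
        int cost = 1; // the RA or SBB decides the unit cost of this piece of work
        if (endpoint.tryUseUnits(cost)) {
            // units "used" -- proceed with the work
        } else {
            // units "rejected" -- throttle (for example, reject the incoming message)
        }
    }

    void onUnthrottleableWork(LimiterEndpoint endpoint) {
        endpoint.forceUseUnits(1); // units "forcibly used" -- counted, but never denied
    }
}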
Example
The following diagram illustrates an example rate limiting configuration, with two limiter hierarchies. Incoming traffic from the MSC is limited by the "FromSwitch" limiter and the limiters further up the chain. The Location and SRI services have defined limiter endpoints, which the administrator has connected (directly or indirectly) to the "To HLR" limiter to limit the total rate of requests to the HLR.
Limiter Types
Rhino comes with two types of limiter: rate limiter and queue-saturation limiter.
Rate limiter
A rate limiter limits the rate of work. It is typically used to limit the rate of incoming events or outgoing requests.
Type Name |
RATE |
---|---|
Rejects work when… |
the number of units used (or forced) during a given timeUnit exceeds maxRate. The timeUnit can be one second, one minute, one hour, or one day (a 24-hour period, not a calendar day). Rhino implements rate limiters with a token bucket algorithm, where the depth property determines the bucket size. The actual bucket size is maxRate * depth. The default setting for depth is 1.0, so "50/sec" means "allow 50 per second". When depth is 2, "50/sec" means "allow an initial burst of 100, and then 50 per second". The recommended maxRate is the rate at which CPU usage reaches about 85%. |
Example |
Configured as |
Properties |
|
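The following sketch illustrates the token-bucket semantics described above (bucket size maxRate * depth, refilled at maxRate per time unit). It assumes timeUnit is seconds and is an illustration of the behaviour only, not Rhino's actual implementation; the class and method names are invented for the example.

// Illustrative token bucket: starts full (allowing an initial burst of
// maxRate * depth units), then refills continuously at maxRate per second.
class TokenBucket {
    private final double maxRate; // units allowed per second
    private final double depth;   // bucket size multiplier (default 1.0)
    private double tokens;
    private long lastRefillNanos = System.nanoTime();

    TokenBucket(double maxRate, double depth) {
        this.maxRate = maxRate;
        this.depth = depth;
        this.tokens = maxRate * depth;
    }

    synchronized boolean tryUse(int units) {
        refill();
        if (tokens >= units) {
            tokens -= units;
            return true;   // work allowed
        }
        return false;      // work rejected
    }

    private void refill() {
        long now = System.nanoTime();
        double elapsedSeconds = (now - lastRefillNanos) / 1e9;
        tokens = Math.min(maxRate * depth, tokens + elapsedSeconds * maxRate);
        lastRefillNanos = now;
    }
}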
Queue-saturation limiter
The queue-saturation limiter rejects work when the event-staging queue (explained in the Staging section) passes a given saturation. It provides some overload protection, by limiting incoming activities in cases where too much work is backlogged, while allowing enough headroom to process existing activities.
For example, the default configuration has the QueueSaturation limiter configured with an allowed capacity of 85%. With the default maximum queue size of 3000, this limiter starts rejecting new activities when 2550 or more items are in the queue (leaving 15% headroom for processing existing activities).
Type Name |
QUEUE_SATURATION |
---|---|
Rejects work when… |
the number of items in the staging queue reaches maxSaturation, expressed as a percentage of the queue’s capacity. |
Example |
Configured as |
Properties |
|
Migrating Rate Limiting from Rhino 2.0 to 2.1
The normal Rhino import processes automatically migrate a 2.0 configuration to the new 2.1 features.
Rhino 2.0 provided a single rate limiter that could throttle new activities based on the rate of inbound activities and events. Each RA entity could optionally be included in or excluded from this process. The allowed rate could be configured to ramp up from a small value when the node started to the specified maximum.
How features translate
When you migrate from Rhino 2.0 to 2.1, rate-limiting features translate as follows:
Rhino 2.0 | Rhino 2.1 |
---|---|
Rate limiting enabled |
Bypassed parameter of SystemInput limiter |
Included/excluded RAs |
RA entity "Input" limiter endpoints connected to SystemInput limiter (or not) |
Ramp-up |
Ramp-up of SystemInput limiter |
Managing Rate Limiting
Below is a summary of the MBeans and a listing of the procedures available for managing rate limiting with the Rhino SLEE.
Rate-limiting MBeans
Rate limiting exposes several MBean classes:
MBean | What it does | Where to get it |
---|---|---|
The main limiting MBean. |
||
|
Each limiter has one. Both classes extend |
|
Controls ramp-up of the SystemInput limiter. |
For convenience, you can get the MBean for the SystemInput limiter through LimiterManagementMBean.getSystemInputLimiterMBean() . |
Managing Limiters
This section includes instructions for performing the following Rhino SLEE procedures with explanations, examples, and links to related javadocs:
Procedure | rhino-console command(s) | MBean(s) → Operation |
---|---|---|
createlimiter |
LimiterManagementMBean → createLimiter |
|
removelimiter |
LimiterManagementMBean → removeLimiter |
|
configureratelimiter |
LimiterMBean |
|
getlimiterinfo |
LimiterMBean |
|
listlimiters |
LimiterManagementMBean → getLimiters |
Creating a Limiter
To create a limiter, use the following rhino-console command or related MBean operation.
Name character restriction
The limiter name cannot include the "/" character. |
Console command: createlimiter
Command |
createlimiter <limitername> [-type <limitertype>]

Description
Creates a new limiter with the given name, and of the given type if specified. If no type is specified, then a RATE limiter is created by default. |
---|---|
Example |
To create a queue-saturation type limiter named saturation1:

$ ./rhino-console createlimiter saturation1 -type QUEUE_SATURATION
Successfully created queue_saturation limiter 'saturation1'. |
MBean operation: createLimiter
MBean |
|
---|---|
Rhino operation |
void createLimiter(String type, String name) throws NullPointerException, InvalidArgumentException, ConfigurationException, ManagementException, LimitingManagementException; |
See also About Rate Limiting and Limiter Types. |
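For a JMX client, a sketch of the same operation (assuming, as in the earlier examples, an existing connection and that limiterManagement holds the LimiterManagementMBean's Rhino-specific ObjectName):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class CreateLimiterExample {
    // Creates a QUEUE_SATURATION limiter named "saturation1"
    static void createLimiter(MBeanServerConnection conn,
                              ObjectName limiterManagement) throws Exception {
        conn.invoke(limiterManagement,
                    "createLimiter",
                    new Object[] { "QUEUE_SATURATION", "saturation1" },
                    new String[] { String.class.getName(), String.class.getName() });
    }
}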
Removing a Limiter
To remove a limiter, use the following rhino-console command or related MBean operation.
A limiter cannot be removed if it is the parent of any limiters or if any limiter endpoints are connected to it. Also, the SystemInput limiter cannot be removed. |
Console command: removelimiter
Command |
removelimiter <limitername>

Description
Remove the specified limiter |
---|---|
Example |
To remove limiter saturation1:

$ ./rhino-console removelimiter saturation1
The Limiter saturation1 has been successfully removed. |
MBean operation: removeLimiter
MBean |
|
---|---|
Rhino operation |
void removeLimiter(String name) throws NullPointerException, InvalidArgumentException, ConfigurationException, LimitingManagementException; |
Setting Limiter Properties
To set limiter properties, use the following rhino-console commands or related MBean operations.
Limiters can only be configured administratively — RAs or services cannot configure limiters. |
Console commands
For details of available properties for each limiter type, see Limiter Types. |
configureratelimiter
Command |
configureratelimiter <limitername> [-nodes node1,node2...] <[-property value] [-property value] ... >

Description
Sets the values of the specified configuration properties of the limiter on the given node(s) |
---|---|
Example |
To set rate limiter properties:

$ ./rhino-console configureratelimiter SystemInput -nodes 101 -bypassed false -maxrate 100
Updated config properties for limiter 'SystemInput': maxrate=100 bypassed=false |
configuresaturationlimiter
Command |
configuresaturationlimiter <limitername> [-nodes node1,node2...] <[-property value] [-property value] ... >

Description
Sets the values of the specified configuration properties of the limiter on the given node(s) |
---|---|
Example |
To set saturation limiter properties:

$ ./rhino-console configuresaturationlimiter QueueSaturation -maxsaturation 75
Updated config properties for limiter 'QueueSaturation': maxsaturation=75 |
You cannot change the name or type of a limiter — these are set when the limiter is created. |
MBean operations
Limiter
MBean operations
Operation | Usage |
---|---|
void setBypassedDefault(boolean bypassed) throws ConfigurationException; |
|
void setBypassedForNode(boolean[] bypassed, int[] nodeIDs) throws NullPointerException, ConfigurationException, InvalidArgumentException; |
|
void setParent(String parentName) throws ConfigurationException, ValidationException, NullPointerException, InvalidArgumentException; |
RateLimiter
MBean operations
Operation | Usage |
---|---|
void setDepthDefault(double depth) throws ConfigurationException, ValidationException; |
|
void setDepthForNode(double[] depth, int[] nodeIDs) throws ConfigurationException, ValidationException, NullPointerException, InvalidArgumentException; |
|
void setMaxRateDefault(double depth) throws ConfigurationException, ValidationException; |
|
void setMaxRateForNode(double[] depth, int[] nodeIDs) throws ConfigurationException, ValidationException, NullPointerException, InvalidArgumentException; |
|
void setTimeUnit(String timeUnit) throws ConfigurationException, ValidationException; |
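As a sketch, a per-node override like the configureratelimiter example above could be applied through JMX using the array-valued signatures, again assuming an existing connection and that rateLimiter holds the rate limiter's Rhino-specific ObjectName:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class SetMaxRateExample {
    // Sets maxRate to 100.0 on node 101 only
    static void setMaxRateForNode(MBeanServerConnection conn,
                                  ObjectName rateLimiter) throws Exception {
        conn.invoke(rateLimiter,
                    "setMaxRateForNode",
                    new Object[] { new double[] { 100.0 }, new int[] { 101 } },
                    new String[] { double[].class.getName(), int[].class.getName() });
    }
}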
QueueSaturationLimiter
MBean operations
Operation | Usage |
---|---|
void setMaxSaturationDefault(double maxSaturation) throws ConfigurationException, ValidationException; |
|
void setMaxSaturationForNode(double[] maxSaturation, int[] nodeIDs) throws ConfigurationException, ValidationException,NullPointerException, InvalidArgumentException; |
Inspecting a Limiter
To inspect a limiter, use the following rhino-console command or related MBean operations.
Console command
getlimiterinfo
Command |
getlimiterinfo <limitername> [-c]

Description
Displays the current configuration settings of the specified limiter. If the -c flag is provided, all stored default and per-node settings for the limiter are listed. Otherwise, the current configuration of all event-routing nodes (as derived from the stored settings) is listed.
|
---|---|
Examples |
To view all configuration properties stored for a limiter:

$ ./rhino-console getlimiterinfo SystemInput -c
limiter-name  node-id   bypassed  depth  maxrate  parent           time-unit  type
------------- --------- --------- ------ -------- ---------------- ---------- -----
SystemInput   defaults  true      1.0    0.0      QueueSaturation  SECONDS    RATE
n/a           101       false     *      100.0    n/a              n/a        n/a
2 rows
'*' means no value set
'n/a' means setting not configurable per node
NOTE: Ramp up of SystemInput limiter is currently disabled

To view the effective configuration for a limiter:

$ ./rhino-console getlimiterinfo SystemInput
limiter-name  node-id  bypassed  depth  maxrate  parent           time-unit  type
------------- -------- --------- ------ -------- ---------------- ---------- -----
SystemInput   101      false     1.0    100.0    QueueSaturation  SECONDS    RATE
1 rows
'*' means no value set
NOTE: Ramp up of SystemInput limiter is currently disabled |
MBean operations
Limiter
MBean operations
Operation | Usage |
---|---|
TabularData getConfigSummary(); |
|
TabularData getInfoSummary(int[] nodeIDs) throws NullPointerException, ConfigurationException, InvalidArgumentException; |
|
String[] getChildLimiters() throws ConfigurationException; |
|
String[] getConnectedEndPoints() throws ConfigurationException; |
|
String getName() throws ConfigurationException; |
|
String getParent() throws ConfigurationException; |
|
String getType() throws ConfigurationException; |
|
boolean isBypassedDefault() throws ConfigurationException; |
|
boolean[] isBypassedForNode(int[] nodeIDs) throws NullPointerException, ConfigurationException, InvalidArgumentException; |
RateLimiter
MBean operations
Operation | Usage |
---|---|
double getDepthDefault() throws ConfigurationException; |
|
double[] getDepthForNode(int[] nodeIDs) throws ConfigurationException, NullPointerException, InvalidArgumentException; |
|
double getMaxRateDefault() throws ConfigurationException; |
|
double[] getMaxRateForNode(int[] nodeIDs) throws ConfigurationException, NullPointerException, InvalidArgumentException; |
|
String getTimeUnit() throws ConfigurationException; |
SaturationLimiter
MBean operations
Operation | Usage |
---|---|
double getMaxSaturationDefault() throws ConfigurationException; |
|
double[] getMaxSaturationForNode(int[] nodeIDs) throws ConfigurationException, NullPointerException, InvalidArgumentException; |
Listing Limiters and Limiter Hierarchies
To list all limiters and limiter hierarchies, use the following rhino-console command or related MBean operations.
Console command: listlimiters
Command |
listlimiters [-v]

Description
Lists all limiters. If the '-v' flag is provided, all limiter hierarchies and connected endpoints are displayed. |
---|---|
Example |
To list all limiters:

$ ./rhino-console listlimiters
QueueSaturation
rate1
SystemInput

To display all limiter hierarchies and connected endpoints:

$ ./rhino-console listlimiters -v
QueueSaturation
+- SystemInput
   +- Endpoint:RAEntity/entity1/Input
   +- Endpoint:RAEntity/entity2/Input
rate1
(Has no children or endpoints) |
MBean operation: getLimiters
MBean |
|
---|---|
Rhino operation |
String[] getLimiters() throws ConfigurationException; |
MBean operation: getHierarchySummary
MBean |
|
---|---|
Rhino operation |
String getHierarchySummary() throws ConfigurationException, ManagementException; |
Managing Limiter Endpoints
This section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:
Procedure | rhino-console command(s) | MBean(s) → Operation |
---|---|---|
connectlimiterendpoint |
LimiterManagementMBean→ connectLimiterEndpoint |
|
disconnectlimiterendpoint |
LimiterManagementMBean→ disconnectLimiterEndpoint |
|
listlimiterendpoints |
LimiterManagementMBean→ getLimiterEndpoints |
|
getlimiterforlimiterendpoint |
LimiterManagementMBean→ getLimiterForEndpoint |
Connecting a Limiter Endpoint to a Limiter
To connect a limiter endpoint to a limiter, use the following rhino-console command or related MBean operation.
Console command: connectlimiterendpoint
Command |
connectlimiterendpoint <limiterendpoint> <limiter>

Description
Sets the limiter endpoint to use the specified limiter |
---|---|
Example |
To connect limiter endpoint RAEntity/entity1/Input to limiter rate1:

$ ./rhino-console connectlimiterendpoint RAEntity/entity1/Input rate1
Connected limiter endpoint 'RAEntity/entity1/Input' to limiter 'rate1' |
MBean operation: connectLimiterEndpoint
MBean |
|
---|---|
Rhino operation |
void connectLimiterEndpoint(String limiterEndpointID, String limiterName) throws NullPointerException, InvalidArgumentException, ConfigurationException, ManagementException, LimitingManagementException; |
Disconnecting a Limiter Endpoint from a Limiter
To disconnect a limiter endpoint from a limiter, use the following rhino-console command or related MBean operation.
Console command: disconnectlimiterendpoint
Command |
disconnectlimiterendpoint <limiterendpoint>

Description
Removes the limiter for a limiter endpoint |
---|---|
Example |
To disconnect limiter endpoint RAEntity/entity1/Input:

$ ./rhino-console disconnectlimiterendpoint RAEntity/entity1/Input
Disconnected limiter endpoint 'RAEntity/entity1/Input' |
MBean operation: disconnectLimiterEndpoint
MBean |
|
---|---|
Rhino operation |
void disconnectLimiterEndpoint(String limiterEndpointID) throws NullPointerException, InvalidArgumentException, ConfigurationException, LimitingManagementException, ManagementException; |
Listing Limiter Endpoints
To list all limiter endpoints, use the following rhino-console command or related MBean operation.
Console command: listlimiterendpoints
Command |
listlimiterendpoints [-v]

Description
Lists all available limiter endpoints. If the '-v' flag is provided, the limiter each endpoint is connected to is also displayed. |
---|---|
Example |
$ ./rhino-console listlimiterendpoints
RAEntity/entity1/Input
RAEntity/entity1/inbound |
MBean operation: getLimiterEndpoints
MBean |
|
---|---|
Rhino operation |
String[] getLimiterEndpoints() throws ConfigurationException, ManagementException; |
Finding which Limiter is Connected to a Limiter Endpoint
To find which limiter is connected to a limiter endpoint, use the following rhino-console command or related MBean operation.
Console command: getlimiterforlimiterendpoint
Command |
getlimiterforlimiterendpoint <limiterendpoint>

Description
Returns the name of the limiter that the limiter endpoint is using |
---|---|
Example |
To find which limiter is connected to limiter endpoint RAEntity/entity1/Input:

$ ./rhino-console getlimiterforlimiterendpoint RAEntity/entity1/Input
LimiterEndpoint 'RAEntity/entity1/Input' is using the limiter 'rate1' |
MBean operation: getLimiterForEndpoint
MBean |
|
---|---|
Rhino operation |
String getLimiterForEndpoint(String limiterEndpointID) throws NullPointerException, InvalidArgumentException, ConfigurationException, ManagementException; |
Managing the SystemInput Limiter Ramp-up
As well as an overview of ramp-up of the SystemInput limiter, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples, and links to related javadocs:
Procedure | rhino-console command(s) | MBean(s) → Operation |
---|---|---|
enablerampup |
LimiterRampUpMBean → enableRampUp |
|
disablerampup |
LimiterRampUpMBean → disableRampUp |
|
getrampupconfiguration |
LimiterRampUpMBean → isEnabled |
About the SystemInput Limiter Ramp-up
Ramp-up is an optional procedure that gradually increases the rate that the SystemInput limiter allows — from a small value when a node starts, up to the configured maximum. This allows time for events such as Just-In-Time compilation and cache-loading, before the maximum work rate applies to the node.
Enabling or disabling ramp-up
Enabled | Disabled |
---|---|
|
Nothing special happens when the node starts — the maximum rate the SystemInput limiter allows is simply its configured maxRate. |
Ramp-up has no effect if the SystemInput limiter’s bypassed flag is true . |
You configure ramp-up globally, but each node ramps up independently. So if a node restarts, it ramps up again — without affecting other already running nodes. |
Enabling SystemInput Limiter Ramp-up
To enable SystemInput limiter ramp-up, use the following rhino-console command or related MBean operation.
Console command: enablerampup
Command |
enablerampup <startrate> <rateincrement> <eventsperincrement>

Description
Enables rampup of the SystemInput limiter rate with the provided startrate, rateincrement, and eventsperincrement |
---|---|
Example |
To enable ramp-up of the SystemInput limiter:

$ ./rhino-console enablerampup 10 10 1000
Enabled rampup of the SystemInput limiter rate with config properties: startrate=10.0 rateincrement=10.0 eventsperincrement=1000 |
MBean operation: enableRampUp
MBean |
|
---|---|
Rhino operation |
void enableRampUp(double startRate, double rateIncrement, int eventsPerIncrement) throws ConfigurationException; |
The SystemInput limiter’s bypassed flag must be false for ramp-up to have any effect. |
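To illustrate what these parameters do, the sketch below assumes a simple stepwise interpretation of ramp-up: the allowed rate starts at startRate and increases by rateIncrement for every eventsPerIncrement events processed, capped at the limiter's configured maxRate. Rhino's exact internal behaviour may differ; this is arithmetic for intuition only.

public class RampUpSchedule {
    static double allowedRate(double startRate, double rateIncrement,
                              int eventsPerIncrement, long eventsProcessed,
                              double maxRate) {
        double rate = startRate + rateIncrement * (eventsProcessed / eventsPerIncrement);
        return Math.min(rate, maxRate);
    }

    public static void main(String[] args) {
        // With the values from the example above (10, 10, 1000) and maxRate=100,
        // the node reaches the full rate after roughly 9000 events.
        for (long events = 0; events <= 10000; events += 1000) {
            System.out.println(events + " events -> allowed rate "
                    + allowedRate(10, 10, 1000, events, 100));
        }
    }
}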
Disabling SystemInput Limiter Ramp-up
To disable SystemInput limiter ramp-up, use the following rhino-console command or related MBean operation.
Console command: disablerampup
Command |
disablerampup

Description
Disables rampup of the SystemInput limiter rate |
---|---|
Example |
To disable ramp-up of the SystemInput limiter:

$ ./rhino-console disablerampup
Disabled rampup of the SystemInput limiter rate |
MBean operation: disableRampUp
MBean |
|
---|---|
Rhino operation |
void disableRampUp() throws ConfigurationException; |
Inspecting the SystemInput Limiter Ramp-up Configuration
To inspect the SystemInput limiter’s ramp-up configuration, use the following rhino-console command or related MBean operation.
Console command: getrampupconfiguration
Command |
getrampupconfiguration

Description
Retrieves the limiter rampup configuration settings, if ramp-up is enabled |
---|---|
Example |
To inspect the ramp-up configuration of the SystemInput limiter:

$ ./rhino-console getrampupconfiguration
Rampup of the SystemInput limiter is active with the following config properties: startrate=10.0 rateincrement=10.0 eventsperincrement=100 |
LimiterRampUp
MBean operations
Operation |
Usage |
---|---|
boolean isEnabled() throws ConfigurationException; |
|
double getStartRate() throws ConfigurationException; |
|
double getRateIncrement() throws ConfigurationException; |
|
int getEventsPerIncrement() throws ConfigurationException; |
Monitoring Limiter Statistics
You can monitor limiter statistics using the rhino-stats tool or the StatsManagement MBean.
The root parameter set is called Limiters, and it has one child parameter set for each limiter; for example, Limiters.SystemInput or Limiters.QueueSaturation.
Limiter parameters recorded
Rhino records the following limiter parameters:
Records the total number of… | Increments… | Decrements… |
---|---|---|
Total units successfully used or forced: units allowed to be used and units forced to be used. | …whenever units are allowed to or forced to be used. | …never. |
Total units denied by a limiter’s parent: units not allowed to be used, because the parent of the limiter denied their use. This includes units denied because of any ancestor (such as the parent of the parent). | …whenever the parent of a limiter denies unit use. | …never. |
Total units denied by a limiter: units not allowed to be used, because a limiter or its parent denied their use. This includes units denied because of any ancestor (such as the parent of the parent). | …whenever a limiter denies unit use. | …never. |
Example
The following excerpt shows the number of units a limiter allows and rejects, second by second.
$ ./rhino-stats -m Limiters.SystemInput
2009-03-11 06:57:43.903 INFO [rhinostat] Connecting to localhost:1199
2009-03-11 06:57:44.539 INFO [dispatcher] Establish direct session DirectSession[host=server1 port=17400 id=56928181032135173]
2009-03-11 06:57:44.542 INFO [dispatcher] Connecting to localhost/127.0.0.1:17400

Limiters.SystemInput
time                     rejected  rejectedByParent  used
-----------------------  --------------------------------
2009-03-11 06:57:46.604         -                 -     -
2009-03-11 06:57:47.604        14                 0   103
2009-03-11 06:57:48.604        14                 0   102
2009-03-11 06:57:49.604        11                 0   101
2009-03-11 06:57:50.604        12                 0    99
2009-03-11 06:57:51.604        13                 0   102
2009-03-11 06:57:52.604        14                 0   101
2009-03-11 06:57:53.604         8                 0    96
(In this example, rejectedByParent is 0, as SystemInput has no parent.)
Using Alarms with Limiters
Threshold Alarms can be configured for a limiter based on any limiter statistics.
See the Configuring Rules section for general instructions on installing threshold alarm rules, and the configuration example on this page. |
Pre-existing alarms
By default Rhino has two threshold alarms pre-configured to indicate when one of the two pre-configured limiters rejects work: the SystemInput Rejecting Work alarm for the SystemInput limiter, and the QueueSaturation Rejecting Work alarm for the QueueSaturation limiter. Each rate limiter may also generate a Negative capacity alarm if it reaches a limit on the amount of forced work it can keep track of.
SystemInput rejecting work
Alarm Message |
SystemInput rate limiter is rejecting work |
---|---|
Type |
LIMITING |
Instance ID |
system-input-limiter-rejecting-work |
Level |
MAJOR |
Raised if… |
…the SystemInput limiter is rejecting work for more than one second. |
Cleared if… |
…the SystemInput limiter has not rejected any work for five seconds. |
Example output |
2009-03-02 17:13:43.893 Major [rhino.facility.alarm.manager] <Timer-2> Alarm 101:136455512705:8 [SubsystemNotification[subsystem=ThresholdAlarms],LIMITING,system-input-limiter-rejecting-work] was raised at 2009-03-02 17:13:43.893 to level Major SystemInput rate limiter is rejecting work |
QueueSaturation Rejecting Work
Alarm Message |
QueueSaturation limiter is rejecting work |
---|---|
Type |
LIMITING |
Instance ID |
queue-saturation-limiter-rejecting-work |
Level |
MAJOR |
Raised if… |
…the QueueSaturation limiter is rejecting work for more than one second. |
Cleared if… |
…the QueueSaturation limiter has not rejected any work for five seconds. |
Example output |
2009-03-02 17:16:37.697 Major [rhino.facility.alarm.manager] <Timer-1> Alarm 101:136455512705:10 [SubsystemNotification[subsystem=ThresholdAlarms],LIMITING,queue-saturation-limiter-rejecting-work] was raised at 2009-03-02 17:16:34.592 to level Major QueueSaturation limiter is rejecting work |
Negative capacity alarm
Alarm Message |
Token count in rate limiter "<LIMITER_NAME>" capped at negative saturation point on node <NODE_ID>. Too much work has been forced. Alarm will clear once token count >= 0. |
---|---|
Type |
ratelimiter.below_negative_capacity |
Instance ID |
nodeID=<NODE_ID>,limiter=<LIMITER_NAME> |
Level |
WARNING |
Raised if… |
…a very large number of units have been forcibly used and the internal token counter has reached the biggest possible negative number (-2,147,483,648). |
Cleared if… |
…token counter >= 0 |
Example output |
2009-03-05 01:14:59.597 Warning [rhino.facility.alarm.manager] <Receiver for switchID 1236168893> Alarm 101:136654511648:16 [SubsystemNotification[subsystem=LimiterManager],limiting.ratelimiter.below_negative_capacity,nodeID=101,limiter=SystemInput] was raised at 2009-03-05 01:14:59.596 to level Warning Token count in rate limiter "SystemInput" capped at negative saturation point on node 101. Too much work has been forced. Alarm will clear once token count >= 0. |
Threshold alarm example
The following configuration example defines the pre-existing system-input-limiter-rejecting-work alarm.
<threshold-rules active="true" name="system-input-limiter-rejecting-work">
<trigger-conditions name="Trigger conditions" operator="OR" period="1000">
<simple-threshold operator=">" value="0.0">
<select-statistic calculate-delta="true" parameter-set="Limiters.SystemInput" statistic="unitsRejected"/>
</simple-threshold>
</trigger-conditions>
<reset-conditions name="Reset conditions" operator="OR" period="5000">
<simple-threshold operator="==" value="0.0">
<select-statistic calculate-delta="true" parameter-set="Limiters.SystemInput" statistic="unitsRejected"/>
</simple-threshold>
</reset-conditions>
<trigger-actions>
<raise-alarm-action level="Major" message="SystemInput rate limiter is rejecting work" type="LIMITING"/>
</trigger-actions>
<reset-actions>
<clear-raised-alarm-action/>
</reset-actions>
</threshold-rules>
The default threshold alarms can be modified or removed as needed. |
Security
Security is an essential feature of the JAIN SLEE standard and Rhino.
It provides access control for: m-lets (management applets), JAIN SLEE components (including resource adaptors, services and libraries), and Rhino node and cluster administration. Rhino’s security subsystem implements a pessimistic security model, to prevent untrusted resource adaptors, m-lets, services or human users from performing restricted functions.
Transport-layer security and the general security of the remote host and server are important considerations when interconnecting with third-party servers. Any security planning can be foiled by an intruder who holds a key! |
The Rhino security model is based on: the standard Java security model, the Java Authentication and Authorisation Service (JAAS), and the SLEE specification default permission sets for components.
Key features of Rhino security include:
Configuring Java Security of Rhino
The following standard Java security policy file defines the Rhino codebase security configuration.
As Rhino starts, it:
|
Disabling or debugging security
There may be times when you want to disable security (for example, during development), or enable fine-grained security tracing in Rhino (for example, to track down security-related issues in Rhino).
Disabling security completely
You can disable security in two ways:
-
Insert a rule into the policy file that grants
AllPermission
to all code:grant { permission java.security.AllPermission; };
-
Disable the use of a security manager — edit
$RHINO_HOME/node-XXX/read-config-variables
, commenting out the following line:#OPTIONS="$OPTIONS -Djava.security.manager"
Enable security when running Rhino
OpenCloud recommends you always run Rhino with security enabled. |
Debugging security
You can debug Rhino’s security configuration by enabling security tracing (so that the security manager produces trace logs) — edit $RHINO_NODE_HOME/read-config-variables
, adding the following line:
OPTIONS="$OPTIONS -Djava.security.debug=access,failure"
This option will produce a lot of console output. To capture it, redirect the standard output and standard error streams from Rhino to a file. For example:

$ start-rhino.sh > out 2>&1 |
Excerpt of rhino.policy
Below is an excerpt of $RHINO_HOME/node-XXX/config/rhino.policy:
grant {
permission java.io.FilePermission "${java.home}${/}lib${/}rt.jar", "read";
permission java.io.FilePermission "${java.home}${/}lib${/}jaxp.properties","read";
// Needed by default logging configuration.
permission java.io.FilePermission "${rhino.dir.work}${/}log${/}-","read,write";
// Java "standard" properties that can be read by anyone
permission java.util.PropertyPermission "java.version", "read";
permission java.util.PropertyPermission "java.vendor", "read";
permission java.util.PropertyPermission "java.vendor.url", "read";
permission java.util.PropertyPermission "java.class.version", "read";
permission java.util.PropertyPermission "os.name", "read";
permission java.util.PropertyPermission "os.version", "read";
permission java.util.PropertyPermission "os.arch", "read";
permission java.util.PropertyPermission "file.separator", "read";
permission java.util.PropertyPermission "path.separator", "read";
permission java.util.PropertyPermission "line.separator", "read";
permission java.util.PropertyPermission "java.specification.version", "read";
permission java.util.PropertyPermission "java.specification.vendor", "read";
permission java.util.PropertyPermission "java.specification.name", "read";
permission java.util.PropertyPermission "java.vm.specification.version", "read";
permission java.util.PropertyPermission "java.vm.specification.vendor", "read";
permission java.util.PropertyPermission "java.vm.specification.name", "read";
permission java.util.PropertyPermission "java.vm.version", "read";
permission java.util.PropertyPermission "java.vm.vendor", "read";
permission java.util.PropertyPermission "java.vm.name", "read";
};
// Standard extensions get all permissions by default
grant codeBase "file:///${java.home}/lib/ext/*" {
permission java.security.AllPermission;
};
// ...
Java Security Properties
Configuration introduced in Rhino 2.6.0.1 |
A per-node configuration file, $RHINO_NODE_HOME/config/rhino.java.security, allows overriding of JVM security settings. This file includes default values for the following networking security properties:
networkaddress.cache.ttl=30
networkaddress.negative.cache.ttl=10
The values of these properties control how long resource adaptors and Rhino-based applications cache network addresses after successful and unsuccessful DNS queries. These values override the ones specified in the JVM’s java.security file. See Oracle’s Networking Properties documentation for more details. The JVM default for networkaddress.cache.ttl is to cache forever (-1). Therefore, the introduction of this file to Rhino’s per-node configuration will alter an application’s caching behavior on upgrade to a newer Rhino version.
To use a different java.security configuration file, modify the following line in $RHINO_NODE_HOME/read-config-variables:
OPTIONS="$OPTIONS -Djava.security.properties=${SCRIPT_WORK_DIR}/config/rhino.java.security"
Secure Access for OA&M Staff
Rhino provides a set of management tools for OA&M staff, including the Rhino Element Manager and various command-line tools.
The following topics explain how you can:
Authentication
The Java Authentication and Authorization Service (JAAS) allows integration with enterprise systems, identity servers, databases, and password files.
JAAS configuration
The file rhino.jaas defines the JAAS modules Rhino uses for authentication.
See the Javadoc for the JAAS Configuration class for details about flags such as REQUIRED . |
The system property java.security.auth.login.config defines the location of rhino.jaas (in read-config-variables for a production Rhino instance, and jvm_args for the Rhino SDK).
File login module
The FileLoginModule reads login credentials and roles from a file. It is the default login module for a new Rhino installation.
The parameters to the FileLoginModule are:
-

file — specifies the location of the password file.

-

hash — the password hashing algorithm. Use none for clear text passwords, or a valid java.security.MessageDigest algorithm name (e.g. md5 or sha-1). If not specified, clear text passwords are used.
Password File Format
<username>:<password>:<role,role...>
-

username — the user’s name

-

password — the user’s password (or hashed password). May be prefixed by the hash method in {}.

-

roles — comma-separated list of role names that the user belongs to, e.g. rhino,view.
Using flags and hashed passwords
By default, Rhino stores passwords in cleartext in the password file. For increased security, store a secure one-way hash of the password instead:
|
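As an example of producing such a hash, the sketch below uses java.security.MessageDigest with the md5 algorithm, matching hash="md5" in the module configuration. The username, password, and roles shown are placeholders only.

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class HashPassword {
    public static void main(String[] args) throws Exception {
        String password = "changeit"; // placeholder password
        MessageDigest md = MessageDigest.getInstance("md5");
        byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
        String hex = String.format("%032x", new BigInteger(1, digest)); // lowercase hex
        // A resulting password-file line, using the {hash-method} prefix described above:
        System.out.println("admin:{md5}" + hex + ":rhino,view");
    }
}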
LDAP login module
The LdapLoginModule reads login credentials and roles from an LDAP directory server.
To use this module, edit the JAAS configuration file ${RHINO_HOME}/config/rhino.jaas, and add an entry to the jmxr-adaptor declaration:
jmxr-adaptor {
com.opencloud.rhino.security.auth.LdapLoginModule SUFFICIENT
properties="config/ldapauth.properties";
/* a "backup" login module would typically go here */
};
Configuration Properties
The properties file contains LDAP connection parameters. The properties that this module uses are documented in the example ldapauth.properties file, along with default values and examples.
The file config/ldapauth.properties defines the LDAP-connection configuration:
### Properties for JAAS LDAP login module (LdapLoginModule)
#
# The commented values are the default values that will be used if the given property is not specified.
# The ldap.url property has no default and must be specified.
#
# This properties file should be supplied to the LdapLoginModule using the "properties" property, e.g.
#
# jmxr-adaptor {
#   com.opencloud.rhino.security.auth.LdapLoginModule SUFFICIENT
#     properties="config/ldapauth.properties";
# };
#

### Connection properties

# An LDAP URL of the form ldap://[host[:port]]/basedn or ldaps://host[:port]/basedn
# Some examples:
# Connect to local directory server
#ldap.url=ldap:///dc=example,dc=com
# Connect to remote directory server
#ldap.url=ldap://remoteserver/dc=example,dc=com
# Connect to remote directory server using SSL
#ldap.url=ldaps://remoteserver/dc=example,dc=com
ldap.url=

# Use TLS. When set to true, the LdapLoginModule attempts a "Start TLS" request when it connects to the
# directory server. This should NOT be set to true when using an ldaps:// (SSL) URL.
#ldap.usetls=false

# To use TLS or SSL, you must have your directory server's X509 certificate installed in Rhino's trust
# store, located at $RHINO_BASE/rhino-server.keystore.

### Authentication properties

## Direct mode
# In "direct mode", the login module attempts to bind using a DN calculated from the pattern property.
# Direct mode is used if the ldap.userdnpattern property is specified.

# A DN pattern that can be used to directly login users to LDAP. This pattern is used for creating a DN string for
# 'direct' user authentication, where the pattern is relative to the base DN in the LDAP URL.
# {0} will be replaced with the submitted username.
# A typical value for this property might be "uid={0},ou=People"
#ldap.userdnpattern=

## Search mode
# In "search mode", the login module binds using the given manager credentials and searches for the user.
# Authentication to LDAP will be done from the DN found if successful.
# Search mode is used if the ldap.userdnpattern property is not specified.

# Bind credentials to search for the user. May be blank if the directory server allows anonymous connections, or if
# using direct mode.
#ldap.managerdn=
#ldap.managerpw=

# A filter expression used to search for the user DN that will be used in LDAP authentication.
# {0} will be replaced by the submitted username.
#ldap.searchfilter=(uid={0})

# Context name to search in, relative to the base DN in the LDAP URL.
#ldap.searchbase=

### Role resolution properties
# A search is performed using the search base (ldap.role.searchbase), and filter (ldap.role.filter). The results of
# the search define the Rhino roles. The role name is in the specified attribute (ldap.roles.nameattr) and must match
# role definitions in Rhino configuration. The members of each role are determined by examining the values of the
# member attribute (ldap.role.memberattr) and must contain the DN of the authenticated user.

# Attribute on the group entry which denotes the group name.
#ldap.rolenameattr=cn

# A multi-value attribute on the group entry which contains user DNs or ids of the group members (e.g. uniqueMember,member)
#ldap.rolememberattr=uniqueMember

# The LDAP filter used to search for group entries.
#ldap.rolefilter=(objectclass=groupOfUniqueNames)

# A search base for group entry DNs, relative to the DN that already exists on the LDAP server's URL.
#ldap.rolesearchbase=ou=Groups
SLEE profile login module
The ProfileLoginModule reads login credentials and roles from a SLEE profile table.
To use this module, edit the JAAS configuration file ${RHINO_HOME}/config/rhino.jaas, and add an entry to the jmxr-adaptor declaration:
jmxr-adaptor {
com.opencloud.rhino.security.auth.ProfileLoginModule SUFFICIENT
profiletable="UserLoginProfileTable"
passwordattribute="HashedPassword"
rolesattribute="Roles"
hash="md5";
/* a "backup" login module would typically go here */
};
ProfileLoginModule supports the following options:
Option | Description | Default |
---|---|---|
profiletable | name of the profile table to use | UserLoginProfileTable |
passwordattribute | profile attribute to compare the password against | HashedPassword |
rolesattribute | profile attribute to load the roles from | Roles |
hash | hashing algorithm to use for the password | md5 |
The profile login module:
-
finds the profile in a specified table with a name matching the supplied username
-
compares the supplied password with the password stored in the profile; if authentication succeeds, retrieves the roles for that user from the profile.
Rhino comes with a profile specification that you can use to create a profile table for the profile login module (in $RHINO_HOME/lib/user-login-profile-du.jar). It contains a profile specification called UserLoginProfileSpec. You can install it using the rhino-console:
[Rhino@localhost (#3)] installlocaldu ../../lib/user-login-profile-du.jar
installed: DeployableUnitID[url=file:/tmp/rhino/lib/user-login-profile-du.jar]
[Rhino@localhost (#4)] listprofilespecs
ProfileSpecificationID[name=AddressProfileSpec,vendor=javax.slee,version=1.0]
ProfileSpecificationID[name=AddressProfileSpec,vendor=javax.slee,version=1.1]
ProfileSpecificationID[name=ResourceInfoProfileSpec,vendor=javax.slee,version=1.0]
ProfileSpecificationID[name=UserLoginProfileSpec,vendor=Open Cloud,version=1.0]
A profile table named UserLoginProfileTable created using this specification will work with the default configuration values listed above. |
Creating a profile table fallback
OpenCloud recommends configuring a file login module as a fallback mechanism, in case the profile table is accidentally deleted or renamed, or the admin user profile is deleted or changed.
Without a fallback you would not be able to fix the profile table problem, since no user would be able to login using a management client! |
To create a profile table fallback, give the ProfileLoginModule a SUFFICIENT flag and the FileLoginModule a REQUIRED flag:
jmxr-adaptor {
    com.opencloud.rhino.security.auth.ProfileLoginModule SUFFICIENT
        profiletable="UserLoginProfileTable"
        passwordattribute="HashedPassword"
        rolesattribute="Roles"
        hash="md5";

    com.opencloud.rhino.security.auth.FileLoginModule REQUIRED
        file="$${rhino.dir.base}/rhino.passwd"
        hash="md5";
};
Encrypted Communication with SSL
By default, the interconnection between Rhino and a management client uses the Secure Sockets Layer (SSL) protocol.
(You can disable SSL by editing the JMX Remote Adaptor m-let configuration.)
How does SSL work?
An SSL connection protects data in transit by encrypting it, which prevents eavesdropping and tampering. SSL uses a cryptographic system with two keys: a public key known to everyone, and a private (or "secret") key known only to the recipient of the message. For more about SSL, please see SSL Certificates HOWTO from the Linux Documentation Project, and Java SE Security Documentation from Oracle. |
SSL in Rhino
Several keystores store the keys Rhino uses during user authentication. For example, a Rhino SDK installation includes:
Keystore | Used by… | To… |
---|---|---|
$RHINO_HOME/rhino-public.keystore |
clients |
identify themselves, and confirm the server’s identity |
$RHINO_HOME/rhino-private.keystore |
Rhino |
identify itself, confirm a client’s identity |
$RHINO_HOME/client/rhino-public.keystore |
Rhino OA&M clients (like command line console) |
duplicate |
The installation process generates keystores, keys, and certificates for Rhino. |
Using keytool to manage keystores
You can use keytool to manage keystores. For example:
$ keytool -list -keystore rhino-public.keystore
Enter keystore password: <password>

Keystore type: jks
Keystore provider: SUN

Your keystore contains 2 entries

jmxr-ssl-server, Jul 2, 2008, trustedCertEntry,
Certificate fingerprint (MD5): 8F:A4:F1:68:59:DC:66:C0:67:D8:91:C8:18:F5:C7:14
jmxr-ssl-client, Jul 2, 2008, keyEntry,
Certificate fingerprint (MD5): 99:8F:53:66:D9:BD:AE:3C:86:9C:0F:CD:42:6F:DA:83
Change the default passphrase
Rhino keystores and keys ship with a default passphrase. Change it with keytool: keytool -storepasswd -keystore rhino-public.keystore |
Enabling Remote Access
By default, only Rhino’s management tools (such as the command-line console or stats console) can run on the same host as Rhino. You can, however, securely manage Rhino from a remote host.
As discussed in the preceding topic, Rhino uses SSL to secure its interconnect with management clients. To configure Rhino to support remote management clients:
-
Copy the client directory to the remote machine.
-
Allow the remote host to connect to the JMX remote adaptor.
Set up the client directory on the remote machine
The client directory (and its subdirectories) contains all the scripts, configuration files and other dependencies needed for Rhino management clients. To set up the client directory on the remote machine:
-
Copy the entire directory structure to the remote host:
$ scp -r client <user>@<host>:<destination>/
-
Edit
client/etc/client.properties
and changerhino.remote.host
:# RMI properties, file names are relative to client home directory rhino.remote.host=<rhino host> rhino.remote.port=1199 # ...
Allow the remote host to connect to the JMX remote adaptor
All management tools connect to Rhino using the JMX Remote Adaptor m-let. By default this component only permits access from the same host that Rhino is running on.
The security-spec
section of the node-XXX/config/permachine-mlet.conf
and node-XXX/config/pernode-mlet.conf
files defines the security environment of an m-let. To allow a remote host to connect to the JMX remote adaptor, edit the security-permission-spec
sections of the node-XXX/config/permachine-mlet.conf
file, to enable remote access with appropriate java.net.SocketPermission
:
<mlet enabled="true">
  <classpath>
    <jar-url>@FILE_URL@@RHINO_BASE@/lib/jmxr-adaptor.jar</jar-url>
    <security-permission-spec>
      grant {
        ...
        permission java.net.SocketPermission "<REMOTE_HOST>","accept";
        ...
      };
    </security-permission-spec>
    ...
  </classpath>
</mlet>
If you would like to connect to Rhino SDK, the file that defines the m-let configuration is $RHINO_SDK/config/mlet.conf . |
Configuring the SLEE Component Java Sandbox
All JAIN SLEE components run within a "sandbox" defined by a set of Java security permissions. This section:
-
defines a default set of security permissions for each SLEE component jar (such as SBB jars, resource adaptor jars and library jars).
-
explains how you can grant additional security permissions for SBB, profile specification, resource adaptor and library components (over and above the default set).
This section draws heavily from material in the JAIN SLEE 1.1 specification. |
Default Security Permissions for SLEE Components
The following table defines the Java platform security permissions that Rhino grants to the instances of SLEE component classes at runtime.
The term "grant" means that Rhino grants the permission, the term "deny" means that Rhino denies the permission.
Permission name | SLEE policy |
---|---|
java.security.AllPermission |
deny |
java.awt.AWTPermission |
deny |
java.io.FilePermission |
deny |
java.net.NetPermission |
deny |
java.util.PropertyPermission |
grant |
java.lang.reflect.ReflectPermission |
deny |
java.lang.RuntimePermission |
deny |
java.lang.SecurityPermission |
deny |
java.io.SerializablePermission |
deny |
java.net.SocketPermission |
deny |
This permission set is defined by section 12.1.1.1 (SLEE Component Security Permissions) of the JAIN SLEE 1.1 specification. The following section explains how SBB, profile specification, resource adaptor and library components can be granted additional security permissions over and above the default set. |
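To illustrate what the deny policy means at runtime, here is a minimal hypothetical sketch (the class and file name are illustrative assumptions): a component helper method attempts file I/O, which is denied by default, and the security manager rejects it before any I/O occurs:

import java.io.FileOutputStream;
import java.io.IOException;

public class SandboxExample {
    // Without a java.io.FilePermission grant, opening a file from a SLEE
    // component fails with a SecurityException from the security manager.
    public void tryWriteFile() {
        try (FileOutputStream out = new FileOutputStream("/tmp/sbb-output.log")) {
            out.write("hello".getBytes());
        } catch (SecurityException e) {
            // Denied by the component sandbox; a security-permission-spec
            // grant in the deployment descriptor would allow it.
        } catch (IOException e) {
            // Ordinary I/O failure (reachable only if the permission was granted).
        }
    }
}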
Adding Security Permissions to SLEE Components
SBB, profile specification, resource adaptor and library components can be granted additional security permissions over and above the default set of security permissions granted by the SLEE — by using the security-permissions
element in their respective deployment descriptor.
Each security-permissions
element contains the following sub-elements:
-
description
— an optional informational element -
security-permission-spec
— an element that identifies the security permission policies used by component jar file classes. (For thesecurity-permission-spec
element syntax definition, please see the J2SE security documentation).
If the
|
Below are a sample component jar deployment descriptor with added security permissions, and a table of security requirements that apply to methods invoked on classes loaded from different types of component jars with added permissions.
Sample component jar deployment descriptor with added security permissions
Below is an example of a resource adaptor component jar with added security permissions:
<resource-adaptor-jar>
<resource-adaptor>
<description> ... </description>
<resource-adaptor-name> Foo JCC </resource-adaptor-name>
<resource-adaptor-vendor> com.foo </resource-adaptor-vendor>
<resource-adaptor-version> 10/10/20 </resource-adaptor-version>
...
</resource-adaptor>
<security-permissions>
<description>
Allow the resource adaptor to modify thread groups and connect to remotehost on port 1234
</description>
<security-permission-spec>
grant {
permission java.lang.RuntimePermission "modifyThreadGroup";
permission java.net.SocketPermission "remotehost:1234", "connect";
};
</security-permission-spec>
</security-permissions>
</resource-adaptor-jar>
Security requirements for methods invoked on classes loaded from component jars
The following table describes the security requirements that apply to methods invoked on classes loaded from different types of component jars:
Component jar type | Security requirements |
---|---|
SBB |
|
Profile spec |
|
Resource adaptor |
|
Library |
|
External Databases
The Rhino SLEE requires the use of an external database for persistence of management and profile data. Rhino can also provide SLEE applications with access to an external database for persistence of their own data.
Rhino can connect to any external database which has support for JDBC 2.0 and JDBC 2.0’s standard extensions. The JDBC API is the industry standard for database-independent connectivity between the Java programming language and a wide range of databases. The JDBC API provides a call-level API for SQL-based database access. JDBC technology lets you use the Java programming language to exploit "Write Once, Run Anywhere" capabilities for applications that require access to enterprise data. For more information, please see https://docs.oracle.com/javase/tutorial/jdbc.
External database integration is managed in Rhino using the following configurable entities:
Configurable entity | What it does |
---|---|
Persistence instance |
Defines the parameters Rhino needs to be able to connect to an external database using the database vendor’s database driver code. |
Persistence resource |
Links a Rhino in-memory database with one or more persistence instances. |
JDBC resource |
Provides a SLEE application with access to a persistence instance. |
This section includes instructions and details on:
-
adding the JDBC driver for a database
-
managing persistence instances
-
managing persistence resources
-
managing JDBC resources
-
the external persistence configuration file format
Adding the JDBC Driver
The JDBC driver for an external database needs to be added to Rhino’s runtime environment before Rhino can connect to it. You’ll need the JDBC 2.0 driver from the database vendor. (You’ll only need to do this once per Rhino installation and database vendor.)
To install the driver, you need to add it to Rhino’s runtime environment and grant permissions to the classes in the JDBC driver. Rhino needs to be restarted after making these changes for them to take effect.
Add the library
To add the library to Rhino’s runtime environment, copy the JDBC driver jar file to $RHINO_BASE/lib
, then add the jar to Rhino’s classpath. The method for adding classpath entries differs between the Rhino SDK and Rhino production versions.
Rhino SDK
For the Rhino SDK, add an entry for the JDBC driver jar file into the rhino.runtime.classpath
system property in $RHINO_HOME/config/jvm_args
.
Below is an example that includes the PostgreSQL and Oracle JDBC drivers.
# Required classpath
-Drhino.runtime.classpath=${RHINO_BASE}/lib/postgresql.jar;${RHINO_BASE}/lib/derby.jar;${RHINO_BASE}/lib/ojdbc6.jar;${JAVA_HOME}/lib/tools.jar
Rhino Production
For the production version of Rhino, add an entry for the JDBC driver jar file into the RUNTIME_CLASSPATH
environment variable in $RHINO_BASE/defaults/read-config-variables
, and the $RHINO_HOME/read-config-variables
file in any node directory that has already been created.
Below is an example for adding the Oracle JDBC driver:
# Set classpath
LIB=$RHINO_BASE/lib
CLASSPATH="${CLASSPATH:+${CLASSPATH}:}$LIB/RhinoBoot.jar"
RUNTIME_CLASSPATH="$LIB/postgresql.jar"

# Add Oracle JDBC driver to classpath
RUNTIME_CLASSPATH="$RUNTIME_CLASSPATH:$LIB/ojdbc6.jar"
Grant permissions to the JDBC driver
To grant permissions to the classes in the JDBC driver, edit the Rhino security policy file, adding an entry for the JDBC driver jar file.
In the Rhino SDK, the policy file is $RHINO_HOME/config/rhino.policy
. In the production version, the policy files are $RHINO_BASE/defaults/config/rhino.policy
, and $RHINO_HOME/config/rhino.policy
in any node directory that has already been created.
Below is an example for the Oracle JDBC driver:
// Add permissions to Oracle JDBC driver
grant codeBase "file:$${rhino.dir.base}/lib/ojdbc6.jar" {
    permission java.net.SocketPermission "*", "connect,resolve";
    permission java.lang.RuntimePermission "getClassLoader";
    permission java.util.PropertyPermission "oracle.*", "read";
    permission java.util.PropertyPermission "javax.net.ssl.*", "read";
    permission java.util.PropertyPermission "user.name", "read";
    permission javax.management.MBeanPermission "oracle.jdbc.driver.OracleDiagnosabilityMBean", "registerMBean";
};
Persistence Instances
As well as an overview of persistence instances, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:
Procedure | rhino-console command(s) | MBean(s) → Operation |
---|---|---|
Creating persistence instances |
createpersistenceinstance |
Persistence Management → createPersistenceInstance |
Displaying persistence instances |
listpersistenceinstances, dumppersistenceinstance |
Persistence Management → getPersistenceInstances, getPersistenceInstance |
Updating persistence instances |
updatepersistenceinstance |
Persistence Management → updatePersistenceInstance |
Removing persistence instances |
removepersistenceinstance |
Persistence Management → removePersistenceInstance |
About Persistence Instances
A persistence instance defines how Rhino connects to an external database endpoint.
A persistence instance requires the following configuration properties:
-
A unique name that identifies the persistence instance in the SLEE.
-
The fully-qualified name of the Java class from the database driver that implements the
javax.sql.DataSource
interface or thejavax.sql.ConnectionPoolDataSource
interface. For more information on the distinction between these interfaces and their implications for application-level JDBC connection pooling in Rhino, please see Managing database connections. -
Configuration properties for the datasource. Each datasource has a number of JavaBean properties (as defined by the JDBC specification). For each configured property, its name, expected Java type, and value must be specified.
Variables may be used within JavaBean property values. Variables are indicated using the ${...} syntax, where the value between the braces is the variable name. Rhino attempts to resolve the variable name by looking in the following places, in this order (see the sketch after this list):
-
The content of the $RHINO_HOME/config/config_variables file
-
Java system properties
-
User environment variables
At a minimum, configuration properties that inform the JDBC driver where to connect to the database server must be specified. |
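As an illustration of that lookup order only (not Rhino’s actual implementation), a resolver might look like this:

import java.util.Properties;

public class VariableResolver {
    private final Properties configVariables; // parsed from $RHINO_HOME/config/config_variables

    public VariableResolver(Properties configVariables) {
        this.configVariables = configVariables;
    }

    // Resolve a variable name using the documented lookup order.
    public String resolve(String name) {
        String value = configVariables.getProperty(name);    // 1. config_variables file
        if (value == null) value = System.getProperty(name); // 2. Java system properties
        if (value == null) value = System.getenv(name);      // 3. user environment variables
        return value; // null if the variable cannot be resolved
    }
}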
Creating Persistence Instances
To create a persistence instance, use the following rhino-console command or related MBean operation.
Console command: createpersistenceinstance
Command |
createpersistenceinstance <name> <datasource-class-name> [(<property-name> <property-type> <property-value>)*] Description Create a persistence instance configuration. A datasource class name must be specified along with any configuration parameters necessary for the datasource to connect with the persistence instance. |
---|---|
Example |
This example creates a new persistence instance with the following configuration properties:
$ ./rhino-console createpersistenceinstance oracle \
    oracle.jdbc.pool.OracleDataSource \
    URL java.lang.String jdbc:oracle:thin:@oracle_host:1521:db \
    user java.lang.String ${MANAGEMENT_DATABASE_USER} \
    password java.lang.String ${MANAGEMENT_DATABASE_PASSWORD} \
    loginTimeout java.lang.Integer 30
Created persistence instance oracle |
MBean operation: createPersistenceInstance
MBean |
|
---|---|
Rhino operation |
public void createPersistenceInstance(String name, String dsClassName, ConfigProperty[] configProperties) throws NullPointerException, InvalidArgumentException, DuplicateNameException, ConfigurationException; |
Displaying Persistence Instances
To list current persistence instances or display the configuration parameters of a persistence instance, use the following rhino-console commands or related MBean operations.
listpersistenceinstances
Command |
listpersistenceinstances Description List all currently configured persistence instances. |
---|---|
Example |
$ ./rhino-console listpersistenceinstances
oracle
postgres
postgres-jdbc |
dumppersistenceinstance
Command |
dumppersistenceinstance <name> [-expand] Description Dump the current configuration for the named persistence instance. The -expand option will cause any property values containing variables to be expanded with their resolved value (if resolvable). |
---|---|
Example |
|
MBean operations
getPersistenceInstances
MBean |
|
---|---|
Rhino operation |
public String[] getPersistenceInstances() throws ConfigurationException; This operation returns an array containing the names of the persistence instances. |
getPersistenceInstance
MBean |
|
---|---|
Rhino operation |
public CompositeData getPersistenceInstance(String name) throws NullPointerException, NameNotFoundException, ConfigurationException; This operation returns a JMX CompositeData object describing the configuration of the named persistence instance. |
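For scripted management outside rhino-console, such an operation can also be invoked over JMX. The sketch below is illustrative only: the JMX service URL and the Persistence Management MBean object name are installation-specific assumptions passed in as arguments:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Usage: java ShowPersistenceInstance <jmx-service-url> <persistence-mbean-name> <instance-name>
public class ShowPersistenceInstance {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(args[0]);
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName persistence = new ObjectName(args[1]);
            // getPersistenceInstance takes the instance name and returns CompositeData
            CompositeData config = (CompositeData) mbs.invoke(
                    persistence, "getPersistenceInstance",
                    new Object[] { args[2] },
                    new String[] { String.class.getName() });
            System.out.println(config);
        }
    }
}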
Updating Persistence Instances
The configuration properties of an existing persistence instance can be updated at runtime. If the persistence instance is in use at the time of a reconfiguration, then new connections will be established with the new configuration properties, and any existing connections to the database will be closed when they become idle.
To update an existing persistence instance, use the following rhino-console command or related MBean operation.
Console command: updatepersistenceinstance
Command |
updatepersistenceinstance <name> [-ds <datasource-class-name>] [-set-property <property-name> <property-type> <property-value>]* [-remove-property <property-name>]* Description Update a persistence instance configuration. |
---|---|
Example |
$ ./rhino-console updatepersistenceinstance oracle \
    -set-property URL java.lang.String jdbc:oracle:thin:@oracle_backup:1521:db \
    -set-property user java.lang.String ${MANAGEMENT_DATABASE_USER} \
    -set-property password java.lang.String ${MANAGEMENT_DATABASE_PASSWORD}
Updated persistence instance oracle |
MBean operation: updatePersistenceInstance
MBean |
|
---|---|
Rhino operation |
public void updatePersistenceInstance(String name, String dsClassName, ConfigProperty[] configProperties) throws NullPointerException, InvalidArgumentException, NameNotFoundException, ConfigurationException; |
Removing Persistence Instances
To remove an existing persistence instance, use the following rhino-console command or related MBean operation.
A persistence instance cannot be removed while it is referenced by a persistence resource or JDBC resource. |
Console command: removepersistenceinstance
Command |
removepersistenceinstance <name> Description Remove an existing persistence instance configuration. |
---|---|
Example |
$ ./rhino-console removepersistenceinstance oracle
Removed persistence instance oracle |
MBean operation: removePersistenceInstance
MBean |
|
---|---|
Rhino operation |
public void removePersistenceInstance(String name) throws NullPointerException, NameNotFoundException, InvalidStateException, ConfigurationException; |
Persistence Resources
As well as an overview of persistence resources, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:
Procedure | rhino-console command(s) | MBean(s) → Operation |
---|---|---|
Managing persistence resources |
||
createdatabaseresource |
Persistence Management → createPersistenceResource |
listdatabaseresources |
Persistence Management → getPersistenceResources |
removedatabaseresource |
Persistence Management → removePersistenceResource |
Managing persistence instance references |
||
addpersistenceinstanceref |
Persistence Management → addPersistenceResourcePersistenceInstanceRef |
listpersistenceinstancerefs |
Persistence Management → getPersistenceResourcePersistenceInstanceRefs |
removepersistenceinstanceref |
Persistence Management → removePersistenceResourcePersistenceInstanceRef |
About Persistence Resources
A persistence resource links a Rhino in-memory database with one or more persistence instances. State stored in the in-memory database is replicated to the external databases for persistence, so that if the Rhino SLEE cluster is shut down then management and provisioned data can be restored.
The persistence resources that Rhino requires are defined in the config/rhino-config.xml
file. An in-memory database, identified by a <memdb>
element in this file, that persists its state externally contains a reference to a persistence resource using a <persistence-resource-ref>
element. In the default configuration, Rhino requires the persistence resources named below:
Persistence Resource | What it’s used for |
---|---|
management |
Persistence of installed deployable units, component activation states, configuration information, and so on. |
profiles |
Persistence of all provisioned data in profile tables. |
While it is possible to add and remove persistence resources from Rhino, there is typically never a need to do so. Rhino only utilises the persistence resources named in config/rhino-config.xml
, and all must exist for Rhino to function correctly.
Active session state is stored in an in-memory database that is not backed by a persistence resource. |
Persistence resources and persistence instances
A persistence resource can be associated with zero or more persistence instances. By associating a persistence resource with a persistence instance, in-memory database state corresponding with that resource will be persisted to the external database endpoint identified by that persistence instance. Any given persistence instance may be used concurrently by multiple persistence resources. Each persistence resource uses a unique set of tables such that overlap in a single database will not occur.
Upon a successful connection, Rhino will keep each persistence instance synchronised with the state of the persistence resource. Naturally, at least one persistence instance reference must be configured for persistence to occur.
When the first node of a cluster boots, Rhino will attempt to connect to all persistence instances used by a persistence resource, and will initialise corresponding in-memory database state from a connected persistence instance that contains the most recent data. The node will fail to boot if it cannot successfully connect to at least one persistence instance for each required persistence resource.
If Rhino connects to any persistence instance that contains out-of-date data, that instance will be resynchronised with the latest data. Native database replication should not be used between the persistence instances that Rhino connects to — Rhino handles the synchronisation itself.
A persistence resource should never be associated with two persistence instances that connect to the same physical database. Due to table locking this causes a deadlock when the first Rhino cluster node boots, and it can also cause corruption to database state. |
Using multiple persistence instances for a persistence resource
While only a single PostgreSQL or Oracle database is required for the entire Rhino SLEE cluster, the Rhino SLEE supports communications with multiple database servers.
Multiple servers add an extra level of fault tolerance for the runtime configuration and working state of the Rhino SLEE. Rhino’s in-memory databases are continually synchronised to each persistence instance, so if the cluster is restarted it can restore state from whichever databases remain operational. If a persistence instance database fails or becomes unreachable over the network, Rhino will continue to persist updates to the other instances associated with the persistence resource. Updates will be queued for unreachable instances and stored when those instances come back online.
Configuring multiple instances
Prepare the database servers
Before adding a database to a persistence resource you must prepare the database, by executing $RHINO_NODE_HOME/init-management-db.sh
for each server.
$ init-management-db.sh -h dbhost-1 -p dbport -u dbuser -d database postgres
$ init-management-db.sh -h dbhost-2 -p dbport -u dbuser -d database postgres
You will be prompted for a password on the command line.
Create persistence instances for the databases
Once the databases are initialised on each database server, configure new persistence instances in Rhino and attach them to the persistence resources. To create persistence instances, follow the instructions at Creating persistence instances.
Add the new persistence instances to the configured persistence resources
When persistence instances have been created for each database, add them to the persistence resources. Instructions to do so are at Adding persistence instances to a persistence resource. An example of the procedure is shown below:
$ ./rhino-console createpersistenceinstance oracle \
oracle.jdbc.pool.OracleDataSource \
URL java.lang.String jdbc:oracle:thin:@oracle_host:1521:db \
user java.lang.String ${MANAGEMENT_DATABASE_USER} \
password java.lang.String ${MANAGEMENT_DATABASE_PASSWORD} \
loginTimeout java.lang.Integer 30
Created persistence instance oracle
$ ./rhino-console addpersistenceinstanceref persistence management oracle
Added persistence instance reference 'oracle' to persistence resource management
$ ./rhino-console addpersistenceinstanceref persistence profiles oracle
Added persistence instance reference 'oracle' to persistence resource profiles
It is also possible to configure the persistence instances before starting Rhino by editing the persistence.xml
configuration file. This is useful for initial setup of the cluster but should not be used to change a running configuration as changes to the file cannot be reloaded without restarting. An example persistence.xml
is shown below:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE rhino-persistence-config PUBLIC "-//Open Cloud Ltd.//DTD Rhino Persistence Config 2.3//EN" "rhino-persistence-config-2.3.dtd">
<rhino-persistence-config config-version="2.3" rhino-version="Rhino (version='2.5', release='0-TRUNK.0-SNAPSHOT.1-DEV13-pburrowes', build='201610251631', revision='6c862fc (dirty)')" timestamp="1477629656508">
<!--Generated Rhino configuration file: 2016-10-28 17:40:56.507-->
<persistence>
<jdbc-resource jndi-name="jdbc">
<persistence-instance-ref name="postgres-jdbc"/>
<connection-pool connection-pool-timeout="5000" idle-check-interval="30" max-connections="15" max-idle-connections="15" max-idle-time="600" min-connections="0"/>
</jdbc-resource>
<persistence-instances>
<persistence-instance datasource-class-name="org.postgresql.ds.PGSimpleDataSource" name="postgres-1">
<parameter name="serverName" type="java.lang.String" value="${MANAGEMENT_DATABASE_HOST}"/>
<parameter name="portNumber" type="java.lang.Integer" value="${MANAGEMENT_DATABASE_PORT}"/>
<parameter name="databaseName" type="java.lang.String" value="${MANAGEMENT_DATABASE_NAME}"/>
<parameter name="user" type="java.lang.String" value="${MANAGEMENT_DATABASE_USER}"/>
<parameter name="password" type="java.lang.String" value="${MANAGEMENT_DATABASE_PASSWORD}"/>
<parameter name="loginTimeout" type="java.lang.Integer" value="30"/>
<parameter name="socketTimeout" type="java.lang.Integer" value="15"/>
<parameter name="prepareThreshold" type="java.lang.Integer" value="1"/>
</persistence-instance>
<persistence-instance datasource-class-name="org.postgresql.ds.PGSimpleDataSource" name="postgres-2">
<parameter name="serverName" type="java.lang.String" value="${MANAGEMENT_DATABASE_HOST2}"/>
<parameter name="portNumber" type="java.lang.Integer" value="${MANAGEMENT_DATABASE_PORT}"/>
<parameter name="databaseName" type="java.lang.String" value="${MANAGEMENT_DATABASE_NAME}"/>
<parameter name="user" type="java.lang.String" value="${MANAGEMENT_DATABASE_USER}"/>
<parameter name="password" type="java.lang.String" value="${MANAGEMENT_DATABASE_PASSWORD}"/>
<parameter name="loginTimeout" type="java.lang.Integer" value="30"/>
<parameter name="socketTimeout" type="java.lang.Integer" value="15"/>
<parameter name="prepareThreshold" type="java.lang.Integer" value="1"/>
</persistence-instance>
</persistence-instances>
<persistence-resource name="management">
<persistence-instance-ref name="postgres-1"/>
<persistence-instance-ref name="postgres-2"/>
</persistence-resource>
<persistence-resource name="profiles">
<persistence-instance-ref name="postgres-1"/>
<persistence-instance-ref name="postgres-2"/>
</persistence-resource>
</persistence>
</rhino-persistence-config>
Creating Persistence Resources
To create a persistence resource, use the following rhino-console command or related MBean operation.
Console command: createdatabaseresource
Command |
createdatabaseresource <resource-type> <name> Description Create a database resource. The resource-type parameter must be either 'persistence' or 'jdbc'. |
---|---|
Example |
$ ./rhino-console createdatabaseresource persistence myresource
Created persistence resource myresource |
MBean operation: createPersistenceResource
MBean |
|
---|---|
Rhino operation |
public void createPersistenceResource(String name) throws NullPointerException, InvalidArgumentException, DuplicateNameException, ConfigurationException; |
Displaying Persistence Resources
To list the current persistence resources use the following rhino-console command or related MBean operation.
Console command: listdatabaseresources
Command |
listdatabaseresources <resource-type> Description List all currently configured database resources. The resource-type parameter must be either 'persistence' or 'jdbc'. |
---|---|
Example |
$ ./rhino-console listdatabaseresources persistence
management
profiles |
MBean operation: getPersistenceResources
MBean |
|
---|---|
Rhino operation |
public String[] getPersistenceResources() throws ConfigurationException; This operation returns an array containing the names of the persistence resources that have been created. |
Removing Persistence Resources
To remove an existing persistence resource, use the following rhino-console command or related MBean operation.
Console command: removedatabaseresource
Command |
removedatabaseresource <resource-type> <name> Description Remove an existing database resource. The resource-type parameter must be either 'persistence' or 'jdbc'. |
---|---|
Example |
$ ./rhino-console removedatabaseresource persistence myresource
Removed persistence resource myresource |
MBean operation: removePersistenceResource
MBean |
|
---|---|
Rhino operation |
public void removePersistenceResource(String name) throws NullPointerException, NameNotFoundException, ConfigurationException; |
Adding Persistence Instances to a Persistence Resource
To add a persistence instance to a persistence resource, use the following rhino-console command or related MBean operation.
Console command: addpersistenceinstanceref
Command |
addpersistenceinstanceref <resource-type> <resource-name> <persistence-instance-name> Description Add a persistence instance reference to a database resource. The resource-type parameter must be either 'persistence' or 'jdbc'. |
---|---|
Example |
$ ./rhino-console addpersistenceinstanceref persistence management oracle
Added persistence instance reference 'oracle' to persistence resource management |
MBean operation: addPersistenceResourcePersistenceInstanceRef
MBean |
|
---|---|
Rhino operation |
public void addPersistenceResourcePersistenceInstanceRef(String persistenceResourceName, String persistenceInstanceName) throws NullPointerException, NameNotFoundException, DuplicateNameException, ConfigurationException; |
Displaying a Persistence Resource’s Persistence Instances
To display the persistence instances that have been added to a persistence resource, use the following rhino-console command or related MBean operation.
Console command: listpersistenceinstancerefs
Command |
listpersistenceinstancerefs <resource-type> <resource-name> Description List the persistence instance references for a database resource. The resource-type parameter must be either 'persistence' or 'jdbc'. |
---|---|
Example |
$ ./rhino-console listpersistenceinstancerefs persistence management
postgres |
MBean operation: getPersistenceResourcePersistenceInstanceRefs
MBean |
|
---|---|
Rhino operation |
public String[] getPersistenceResourcePersistenceInstanceRefs(String persistenceResourceName) throws NullPointerException, NameNotFoundException, ConfigurationException; This operation returns an array containing the names of the persistence instances used by the persistence resource. |
Removing Persistence Instances from a Persistence Resource
To remove a persistence instance from a persistence resource, use the following rhino-console command or related MBean operation.
Console command: removepersistenceinstanceref
Command |
removepersistenceinstanceref <resource-type> <resource-name> <persistence-instance-name> Description Remove a persistence instance reference from a database resource. The resource-type parameter must be either 'persistence' or 'jdbc'. |
---|---|
Example |
$ ./rhino-console removepersistenceinstanceref persistence management oracle
Removed persistence instance reference 'oracle' from persistence resource management |
MBean operation: removePersistenceResourcePersistenceInstanceRef
MBean |
|
---|---|
Rhino operation |
public void removePersistenceResourcePersistenceInstanceRef(String persistenceResourceName, String persistenceInstanceName) throws NullPointerException, NameNotFoundException, ConfigurationException; |
JDBC Resources
JDBC resources are used by application components such as service building blocks (SBBs) to execute SQL statements against an external database. A systems administrator can configure new external database resources for applications to use.
As well as an overview on how SBBs can use JDBC to execute SQL and an overview on managing physical database connections, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:
Procedure | rhino-console command(s) | MBean(s) → Operation |
---|---|---|
Managing JDBC resources |
||
createdatabaseresource |
Persistence Management → createJdbcResource |
listdatabaseresources |
Persistence Management → getJdbcResources |
removedatabaseresource |
Persistence Management → removeJdbcResource |
Managing persistence instance references |
||
addpersistenceinstanceref |
Persistence Management → setJdbcResourcePersistenceInstanceRef |
listpersistenceinstancerefs |
Persistence Management → getJdbcResourcePersistenceInstanceRef |
removepersistenceinstanceref |
Persistence Management → setJdbcResourcePersistenceInstanceRef |
Managing database connections |
||
createjdbcresourceconnectionpoolconfig |
Persistence Management → createJdbcResourceConnectionPoolConfig |
dumpjdbcresourceconnectionpoolconfig |
Persistence Management → getJdbcResourceConnectionPoolConfigMBean, JDBC Resource Connection Pool Management → configuration getters |
setjdbcresourceconnectionpoolconfig |
JDBC Resource Connection Pool Management → configuration setters |
removejdbcresourceconnectionpoolconfig |
Persistence Management → removeJdbcResourceConnectionPoolConfig |
How SBBs use JDBC to execute SQL
An SBB can use JDBC to execute SQL statements. It must declare this intent in an extension deployment descriptor: the oc-sbb-jar.xml
file (contained in the SBB
jar file in the META-INF
directory). The <resource-ref>
element (which must be inside the <sbb>
element of oc-sbb-jar.xml
) defines the JDBC datasource it will use.
Sample <resource-ref>
Below is a sample <resource-ref>
element defining a JDBC datasource:
<resource-ref>
<!-- Name under the SBB's java:comp/env tree where this datasource will be bound -->
<res-ref-name>foo/datasource</res-ref-name>
<!-- Resource type - must be javax.sql.DataSource -->
<res-type>javax.sql.DataSource</res-type>
<!-- Only Container auth supported -->
<res-auth>Container</res-auth>
<!-- Only Shareable scope supported -->
<res-sharing-scope>Shareable</res-sharing-scope>
<!-- JNDI name of target JDBC resource, relative to Rhino's java:resource tree. -->
<res-jndi-name>jdbc/myresource</res-jndi-name>
</resource-ref>
In the above example, the <res-jndi-name> element has the value jdbc/myresource, which maps to the JDBC resource myresource created in the example in Creating JDBC Resources. |
How an SBB obtains a JDBC connection
An SBB can get a reference to an object that implements the datasource interface using a JNDI lookup. Using that object, the SBB can then obtain a connection to the database. The SBB uses that connection to execute SQL queries and updates.
For example:
import javax.naming.*;
import javax.slee.*;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;
...
public abstract class SimpleSbb implements Sbb {
    public void setSbbContext(SbbContext context) {
        try {
            Context myEnv = (Context)new InitialContext().lookup("java:comp/env");
            ds = (DataSource)myEnv.lookup("foo/datasource");
        }
        catch (NamingException e) {
            // JNDI lookup failed
        }
    }

    public void onSimpleEvent(SimpleEvent event, ActivityContextInterface context) {
        Connection conn;
        try {
            conn = ds.getConnection();
        }
        catch (SQLException e) {
            // could not get database connection
            return;
        }
        ...
    }
    ...

    private DataSource ds;
    ...
}
SQL programming
When an SBB executes in a transaction and invokes SQL statements, the SLEE controls transaction management of the JDBC connection. This lets the SLEE perform last-resource-commit
optimisation.
JDBC methods that affect transaction management have no effect, or have undefined semantics, when called from an application component running within a SLEE transaction. The methods (including any overloaded forms) on the java.sql.Connection interface that affect transaction management are listed below. These methods should not be invoked by SLEE components:
-
close
-
commit
-
rollback
-
setAutoCommit
-
setTransactionIsolation
-
setSavepoint
-
releaseSavepoint
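With those restrictions in mind, a typical SBB query closes its statements and result sets but leaves connection lifecycle and transaction boundaries to the SLEE. The following is an illustrative sketch only; the table and column names are hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class SubscriberLookup {
    private final DataSource ds;

    public SubscriberLookup(DataSource ds) {
        this.ds = ds;
    }

    public String lookupName(String subscriberId) throws SQLException {
        // Do NOT call close(), commit() or rollback() on the connection;
        // the SLEE manages the connection's transaction.
        Connection conn = ds.getConnection();
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT name FROM subscribers WHERE id = ?")) {
            ps.setString(1, subscriberId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}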
Creating JDBC Resources
JDBC resources are identified by a unique name that identifies where in the JNDI tree the JDBC resource will be bound. This name is relative to the java:resource/jdbc
namespace, for example the JNDI name oracle/db1
will result in the JDBC resource being bound to the name java:resource/jdbc/oracle/db1
.
The JNDI location is not accessible to SBBs directly. Each SBB links to the JNDI name in the SBB deployment descriptor. (For more on SBB deployment descriptor entries please see how SBBs use JDBC to execute SQL.) |
All JDBC resources required by the SBBs in a service must exist before that service can be activated. A JDBC resource must also have a persistence instance associated with it in order for it to be able to provide database connections to SBBs that request them. |
To create a JDBC resource, use the following rhino-console command or related MBean operation.
Console command: createdatabaseresource
Command |
createdatabaseresource <resource-type> <name> Description Create a database resource. The resource-type parameter must be either 'persistence' or 'jdbc'. |
---|---|
Example |
$ ./rhino-console createdatabaseresource jdbc myresource
Created JDBC resource myresource |
MBean operation: createJdbcResource
MBean |
|
---|---|
Rhino operation |
public void createJdbcResource(String jndiName) throws NullPointerException, InvalidArgumentException, DuplicateNameException, ConfigurationException; |
Displaying JDBC Resources
To list current JDBC resources, use the following rhino-console command or related MBean operation.
Console command: listdatabaseresources
Command |
listdatabaseresources <resource-type> Description List all currently configured database resources. The resource-type parameter must be either 'persistence' or 'jdbc'. |
---|---|
Example |
$ ./rhino-console listdatabaseresources jdbc
jdbc
myresource |
MBean operation: getJdbcResources
MBean |
|
---|---|
Rhino operation |
public String[] getJdbcResources() throws ConfigurationException; This operation returns an array containing the names of the JDBC resources that have been created. |
Removing JDBC Resources
To remove an existing JDBC resource, use the following rhino-console command or related MBean operation.
A JDBC resource cannot be removed while it is referenced by an SBB in an activated service. |
Console command: removedatabaseresource
Command |
removedatabaseresource <resource-type> <name> Description Remove an existing database resource. The resource-type parameter must be either 'persistence' or 'jdbc'. |
---|---|
Example |
$ ./rhino-console removedatabaseresource jdbc myresource
Removed JDBC resource myresource |
MBean operation: removeJdbcResource
MBean |
|
---|---|
Rhino operation |
public void removeJdbcResource(String jndiName) throws NullPointerException, NameNotFoundException, InvalidStateException, ConfigurationException; |
Adding A Persistence Instance to a JDBC Resource
A JDBC resource can be associated with at most one persistence instance.
Rhino SLEE treats different JDBC resources as different resource managers, even if they use the same persistence instance. A single transaction that uses two JDBC resources backed by the same persistence instance therefore still spans multiple resource managers. |
To add a persistence instance to a JDBC resource, use the following rhino-console command or related MBean operation.
Console command: addpersistenceinstanceref
Command |
addpersistenceinstanceref <resource-type> <resource-name> <persistence-instance-name> Description Add a persistence instance reference to a database resource. The resource-type parameter must be either 'persistence' or 'jdbc'. |
---|---|
Example |
$ ./rhino-console addpersistenceinstanceref jdbc myresource oracle
Added persistence instance reference 'oracle' to JDBC resource myresource |
MBean operation: setJdbcResourcePersistenceInstanceRef
MBean |
|
---|---|
Rhino operation |
public void setJdbcResourcePersistenceInstanceRef(String jdbcResourceJndiName, String persistenceInstanceName) throws NullPointerException, NameNotFoundException, ConfigurationException; |
Displaying a JDBC Resource’s Persistence Instance
To display the persistence instance that has been added to a JDBC resource, use the following rhino-console command or related MBean operation.
Console command: listpersistenceinstancerefs
Command |
listpersistenceinstancerefs <resource-type> <resource-name> Description List the persistence instance references for a database resource. The resource-type parameter must be either 'persistence' or 'jdbc'. |
---|---|
Example |
$ ./rhino-console listpersistenceinstancerefs jdbc myresource
oracle |
MBean operation: getJdbcResourcePersistenceInstanceRef
MBean |
|
---|---|
Rhino operation |
public String getJdbcResourcePersistenceInstanceRef(String jndiName) throws NullPointerException, NameNotFoundException, ConfigurationException; This operation returns the name of any persistence instance that has been associated with the JDBC resource. |
Removing the Persistence Instance from a JDBC Resource
To remove the persistence instance from a JDBC resource, use the following rhino-console command or related MBean operation.
Console command: removepersistenceinstanceref
Command |
removepersistenceinstanceref <resource-type> <resource-name> <persistence-instance-name> Description Remove a persistence instance reference from a database resource. The resource-type parameter must be either 'persistence' or 'jdbc'. |
---|---|
Example |
$ ./rhino-console removepersistenceinstanceref jdbc myresource oracle
Removed persistence instance reference 'oracle' from JDBC resource myresource |
MBean operation: setJdbcResourcePersistenceInstanceRef
MBean |
|
---|---|
Rhino operation |
public void setJdbcResourcePersistenceInstanceRef(String jdbcResourceJndiName, String persistenceInstanceName) throws NullPointerException, NameNotFoundException, ConfigurationException; To remove an existing persistence instance reference, pass null as the persistence instance name. |
Managing Database Connections
JDBC 2.0 with standard extensions provides two mechanisms for connecting to the database:
-
The
javax.sql.DataSource
interface provides unmanaged physical connections. -
The
javax.sql.ConnectionPoolDataSource
interface provides managed physical connections. To connect to a connection pooling data source, you need a managedConnectionPoolDataSource
connection.
Using a connection pool with a JDBC resource
By default, a JDBC resource does not use connection pooling. A connection pool may, however, be attached to a JDBC resource to improve efficiency. When a JDBC resource uses connection pooling, the way Rhino manages connections depends on which interface the datasource class of the JDBC resource’s persistence instance implements:
Interface | How Rhino manages connections |
---|---|
javax.sql.DataSource |
uses an internal connection pool implementation to manage the physical connections obtained from the datasource |
javax.sql.ConnectionPoolDataSource |
uses managed connections obtained from the datasource’s own connection pooling implementation |
Connection pool configurable parameters
A connection pool has the following configurable parameters:
Parameter | What it specifies |
---|---|
max-connections |
Maximum number of active connections a Rhino process can use at any one time. |
max-idle-connections |
Maximum number of inactive connections that should be maintained in the connection pool. This value must be less than or equal to max-connections. |
min-connections |
Minimum number of connections that should be maintained in the connection pool. |
max-idle-time |
Time in seconds after which an inactive connection may become eligible for discard. An idle connection will not be discarded if doing so would reduce the number of idle connections below the min-connections value. If this parameter has the value 0, idle connections are never discarded. |
idle-check-interval |
Time in seconds between idle connection discard checks. |
connection-pool-timeout |
Maximum time in milliseconds an SBB will wait for a free connection before a timeout error occurs. |
Adding a Connection Pool Configuration to a JDBC Resource
To add a connection pool configuration to a JDBC resource, use the following rhino-console command or related MBean operation.
Console command: createjdbcresourceconnectionpoolconfig
Command |
createjdbcresourceconnectionpoolconfig <name> Description Create a connection pool configuration for a JDBC resource. |
---|---|
Example |
$ ./rhino-console createjdbcresourceconnectionpoolconfig myresource
Connection pool configuration created |
MBean operation: createJdbcResourceConnectionPoolConfig
MBean |
|
---|---|
Rhino operation |
public ObjectName createJdbcResourceConnectionPoolConfig(String jndiName) throws NullPointerException, NameNotFoundException, InvalidStateException, ConfigurationException; This method returns the JMX ObjectName of the newly created connection pool configuration MBean. |
Displaying a JDBC Resource’s Connection Pool Configuration
To display the connection pool configuration for a JDBC resource, use the following rhino-console command or related MBean operation.
Console command: dumpjdbcresourceconnectionpoolconfig
Command |
dumpjdbcresourceconnectionpoolconfig <name> Description Dump the connection pool configuration of a JDBC resource. |
---|---|
Example |
$ ./rhino-console dumpjdbcresourceconnectionpoolconfig myresource
connection-pool-timeout : 5000
idle-check-interval : 30
max-connections : 2147483647
max-idle-connections : 2147483647
max-idle-time : 0
min-connections : 0 |
MBean operations:
getJdbcResourceConnectionPoolConfigMBean
MBean |
|
---|---|
Rhino operation |
public ObjectName getJdbcResourceConnectionPoolConfigMBean(String jndiName) throws NullPointerException, NameNotFoundException, InvalidStateException, ConfigurationException; This method returns the JMX ObjectName of the connection pool configuration MBean for the JDBC resource. |
JDBC Resource Connection Pool Management
MBean |
|
---|---|
Rhino operations |
public int getMaxConnections() throws ConfigurationException;
public int getMinConnections() throws ConfigurationException;
public int getMaxIdleConnections() throws ConfigurationException;
public int getMaxIdleTime() throws ConfigurationException;
public int getIdleCheckInterval() throws ConfigurationException;
public long getConnectionPoolTimeout() throws ConfigurationException;

These methods return the current value of the corresponding connection pool configuration parameter.

public CompositeData getConfiguration() throws ConfigurationException;

This operation returns a JMX CompositeData object containing the complete connection pool configuration. |
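Reading the pool configuration over JMX is a two-step pattern: obtain the pool configuration MBean’s ObjectName from the Persistence Management MBean, then query it. The sketch below is illustrative only; the service URL and MBean name arguments are installation-specific assumptions, and it reads whatever attributes the MBean actually exposes rather than assuming their names:

import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Usage: java DumpPoolConfig <jmx-service-url> <persistence-mbean-name> <jdbc-resource-jndi-name>
public class DumpPoolConfig {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(args[0]);
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName persistence = new ObjectName(args[1]);

            // Step 1: resolve the pool configuration MBean for the JDBC resource.
            ObjectName poolConfig = (ObjectName) mbs.invoke(persistence,
                    "getJdbcResourceConnectionPoolConfigMBean",
                    new Object[] { args[2] },
                    new String[] { String.class.getName() });

            // Step 2: list and read the exposed attributes.
            for (MBeanAttributeInfo attr : mbs.getMBeanInfo(poolConfig).getAttributes()) {
                System.out.println(attr.getName() + " = "
                        + mbs.getAttribute(poolConfig, attr.getName()));
            }
        }
    }
}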
Updating a JDBC Resource’s Connection Pool Configuration
To update the connection pool configuration for a JDBC resource, use the following rhino-console command or related MBean operation.
Console command: setjdbcresourceconnectionpoolconfig
Command |
setjdbcresourceconnectionpoolconfig <name> [-max-connections max-size] [-min-connections size] [-max-idle-connections size] [-max-idle-time time] [-idle-check-interval time] [-connection-pool-timeout time] Description Update the connection pool configuration of a JDBC resource. Size parameters must be integer values. The max-idle-time and idle-check-interval parameters are measured in seconds and must be integer values. The connection-pool-timeout parameter is measured in milliseconds and must be a long value. |
---|---|
Example |
In the example below, the maximum idle connections is set to 20, the maximum number of connections is set to 30, and the maximum time an idle connection remains in the connection pool is set to 60s:

$ ./rhino-console setjdbcresourceconnectionpoolconfig myresource \
    -max-idle-connections 20 -max-connections 30 -max-idle-time 60
Connection pool configuration updated for JDBC resource myresource

$ ./rhino-console dumpjdbcresourceconnectionpoolconfig myresource
connection-pool-timeout : 5000
idle-check-interval : 30
max-connections : 30
max-idle-connections : 20
max-idle-time : 60
min-connections : 0 |
MBean operations:
getJdbcResourceConnectionPoolConfigMBean
MBean |
|
---|---|
Rhino operation |
public ObjectName getJdbcResourceConnectionPoolConfigMBean(String jndiName) throws NullPointerException, NameNotFoundException, InvalidStateException, ConfigurationException; This method returns the JMX ObjectName of the connection pool configuration MBean for the JDBC resource. |
JDBC Resource Connection Pool Management
MBean |
|
---|---|
Rhino operations |
public void setMaxConnections(int maxConnections) throws ConfigurationException; public void setMinConnections(int minConnections) throws ConfigurationException; public void setMaxIdleConnections(int maxIdleConnections) throws ConfigurationException; public void setMaxIdleTime(int maxIdleTime) throws ConfigurationException; public void setIdleCheckInterval(int idleCheckInterval) throws ConfigurationException; public void setConnectionPoolTimeout(long timeout) throws ConfigurationException; These methods set a new value for the corresponding connection pool configuration parameter. |
Removing the Connection Pool Configuration from a JDBC Resource
To remove the connection pool configuration from a JDBC resource, use the following rhino-console command or related MBean operation.
Console command: removejdbcresourceconnectionpoolconfig
Command |
removejdbcresourceconnectionpoolconfig <name> Description Remove the connection pool configuration from a JDBC resource. |
---|---|
Example |
$ ./rhino-console removejdbcresourceconnectionpoolconfig myresource
Connection pool configuration removed |
MBean operation: removeJdbcResourceConnectionPoolConfig
MBean |
|
---|---|
Rhino operation |
public void removeJdbcResourceConnectionPoolConfig(String jndiName) throws NullPointerException, NameNotFoundException, InvalidStateException, ConfigurationException; |
Persistence Configuration File Format
In most circumstances it is never necessary to manually edit the external persistence configuration file. The most likely reason to do so is to use another vendor’s database, for example Oracle: the default Rhino installation is configured to connect to a PostgreSQL database, and Rhino will not start unless it has an external database that it can connect to.
This section describes the format of the persistence configuration file.
Persistence configuration file location
The persistence configuration file can be found in ${RHINO_HOME}/config/persistence.xml. However, this file only exists if the Rhino node has been started at least once. If the node has yet to be started, or if the persistence.xml file is deleted, then the persistence configuration is obtained from ${RHINO_HOME}/config/defaults.xml.
Every node in a Rhino cluster has the same persistence configuration. A Rhino node that boots and joins an existing cluster will obtain its persistence configuration from the other nodes in the cluster. The cluster configuration will be saved into the node’s ${RHINO_HOME}/config/persistence.xml file, potentially overwriting any local changes that may have been made to it. |
XML Format of a Persistence Configuration
The persistence configuration is contained within the <persistence>
element in the configuration file. The <persistence>
element may contain any number of the following elements:
Element | Description |
---|---|
<persistence-instance> |
Contains the configuration information for a single persistence instance |
<persistence-resource> |
Contains the configuration information for a single persistence resource |
<jdbc-resource> |
Contains the configuration information for a single jdbc resource |
Persistence instance configuration
A persistence instance configuration is contained in a <persistence-instance>
element. This element must have the following attributes:
Attribute | Description |
---|---|
name |
The name of the persistence instance. This name must be unique between all persistence instance configurations. |
datasource-class-name |
The fully-qualified name of the Java class from the database driver that implements the javax.sql.DataSource or javax.sql.ConnectionPoolDataSource interface |
A <persistence-instance>
element may also include zero or more <parameter>
elements. Each <parameter>
element identifies the name, Java type, and value of a configuration property of the datasource class using the following element attributes:
Attribute | Description |
---|---|
name |
The name of a JavaBean property defined by the datasource class. |
type |
The fully-qualified Java class name of the JavaBean property’s type. |
value |
The value that should be assigned to the configuration property.
|
Example
Below is an example of the default configuration that connects to a PostgreSQL database:
<persistence-instance datasource-class-name="org.postgresql.ds.PGSimpleDataSource" name="postgres">
  <parameter name="serverName" type="java.lang.String" value="${MANAGEMENT_DATABASE_HOST}"/>
  <parameter name="portNumber" type="java.lang.Integer" value="${MANAGEMENT_DATABASE_PORT}"/>
  <parameter name="databaseName" type="java.lang.String" value="${MANAGEMENT_DATABASE_NAME}"/>
  <parameter name="user" type="java.lang.String" value="${MANAGEMENT_DATABASE_USER}"/>
  <parameter name="password" type="java.lang.String" value="${MANAGEMENT_DATABASE_PASSWORD}"/>
  <parameter name="loginTimeout" type="java.lang.Integer" value="30"/>
  <parameter name="socketTimeout" type="java.lang.Integer" value="15"/>
  <parameter name="prepareThreshold" type="java.lang.Integer" value="1"/>
</persistence-instance>
Persistence resource configuration
A persistence resource configuration is contained in a <persistence-resource>
element. This element must have a name
attribute, which specifies the name of the persistence resource. The name must be unique between all persistence resource configurations.
A <persistence-resource>
element may also include zero or more <persistence-instance-ref>
elements. Each <persistence-instance-ref>
element must have a name
attribute, which must be the name of a persistence instance defined elsewhere in the configuration file. The persistence resource will store relevant in-memory database state into each referenced persistence instance.
JDBC resource configuration
A JDBC resource configuration is contained in a <jdbc-resource>
element. This element must have a jndi-name
attribute, which specifies the JNDI name relative to the java:resource/jdbc
namespace where the resource will be bound in the JNDI tree. The JNDI name must be unique between all JDBC resource configurations.
A <jdbc-resource>
element may also optionally include a <persistence-instance-ref>
element and a <connection-pool>
element.
The <persistence-instance-ref>
element must have a name
attribute, which must be the name of a persistence instance defined elsewhere in the configuration file. The JDBC resource will use the database identified by the referenced persistence instance to execute SQL queries.
The presence of a <connection-pool>
element indicates to Rhino that a connection pool should be used to manage the physical connections used by the JDBC resource. The element may define attributes with the names of the connection pool configurable parameters. If a given parameter is absent in the element’s attribute list then the default value for that parameter is assumed.
Example
Below is an example of a JDBC resource:
<jdbc-resource jndi-name="jdbc">
  <persistence-instance-ref name="postgres-jdbc"/>
  <connection-pool connection-pool-timeout="5000" idle-check-interval="30"
                   max-connections="15" max-idle-connections="15"
                   max-idle-time="600" min-connections="0"/>
</jdbc-resource>
Cluster Membership
Rhino maintains a single system image by preventing inconsistent nodes from forming a cluster. It determines cluster membership based on the set of cluster nodes reachable within a time-out period.
This page explains the strategies available for managing and configuring cluster membership.
Cluster membership is not a concern when using the Rhino SDK (see the Getting Started Guide), where the cluster membership is always just the single SDK node. |
Below are descriptions of:
How nodes "go primary"
What is primary component selection?
A cluster node runs a primary component selection algorithm to determine whether the component it belongs to is primary or non-primary — without a priori global knowledge of the cluster. |
The primary component is the authoritative set of nodes in the cluster. A node can only perform work when in the primary component. When a node enters the primary component, we say it "goes primary". Likewise, when a node leaves the primary component, we say it "goes non-primary".
The component selector manages which nodes are in the primary component. Rhino provides a choice of two component selectors: DLV or 2-node. The component selector needs to maintain a consistent view of the primary component in several scenarios, to maintain the single system image provided by Rhino.
Segmentation and split-brain
Nodes can become isolated from each other if some networking failure causes a network segmentation. This carries the risk of a "split brain" scenario, where nodes on both sides of the segment consider themselves primary. Rhino, which is managed as a single system image, does not allow split brain scenarios. The DLV and 2-node selectors use different strategies for avoiding split-brain scenarios.
Starting and stopping nodes
Nodes may stop and start in the following ways:
-
node failure — Individual cluster nodes may fail, for example due to a hardware failure. From the point of view of the remaining nodes, node failures are indistinguishable from network segmentation. Behaviour of the surviving members is determined by the component selector.
-
automatic shutdown with restart — There are cases described in this guide where the component selector "shuts down" a node, for example to prevent split-brain scenarios. It does this by shifting the node from primary to non-primary. Whenever a node goes from primary to non-primary, it self-terminates. The node will still restart if the -k flag was passed to the
start-rhino.sh
script. The node will become primary again as soon as the component selector determines it’s safe to do so. -
node start or restart — When a booting node enters a cluster which is primary, the node will also go primary, and will receive state from existing nodes.
-
remerge — A remerge happens after a network segmentation, when connectivity between network segments is restored. When a network segment of non-primary nodes merges with a segment of primary nodes, the non-primary nodes will also go primary, and receive state from the other nodes. In the unlikely case that two primary segments try to merge, Rhino will shut down the nodes in one of the segments, to maintain the single system image. This should only happen if two sides of a network segment are manually activated using the
-p
flag when using DLV (an administrative error), or after a network failure when using the 2-node selector.
Specifying the component selector
The main configuration choice related to cluster membership is the choice of component selector. If no component selector is specified, Rhino uses DLV as the default.
To specify the component selector, set the system property com.opencloud.rhino.component_selection_strategy
to 2node
or dlv
on each node. To use the 2-node strategy, add the following line near the end of the read-config-variables file under the node directory:
OPTIONS="$OPTIONS -Dcom.opencloud.rhino.component_selection_strategy=2node"
This property must be set consistently on every node in the cluster. Rhino will shut down a node trying to enter a cluster using a different component selector. |
The DLV component selector
What is DLV?
The DLV component selector is inspired by the dynamic-linear voting (DLV) algorithm described by Jajodia and Mutchler in their research paper Dynamic voting algorithms for maintaining the consistency of a replicated database. |
DLV is the default primary component strategy. It is suitable for most deployments, and recommended when using three or more Rhino nodes, or two nodes plus a quorum node.
The DLV component selector uses a voting algorithm where the membership of previous primary components plays a role in the selection of the next primary component. Each node persists its knowledge of the last known primary component. When the cluster membership changes, each node exchanges a voting message that contains its own knowledge of previous primary components. Once voting completes, each node, independently, uses these votes to make the same decision on whether to be primary or non-primary. A component can be primary if there are enough members present from the last known configuration to form a quorum.
The DLV component selector guarantees that, in the case of a network segmentation (where sets of nodes are isolated from each other), at most one of the segments remains primary, avoiding a 'split-brain' scenario where two segments consider themselves primary. This is achieved by considering any component smaller than cluster_size/2 to be non-primary. In the case of an exactly even split (for example, a 4-node cluster segmenting into two 2-node components), the component containing the lowest node ID survives.
Manually activating DLV
Upon first starting a cluster using DLV, the primary component must be activated. You do this by passing the -p flag to start-rhino.sh when booting the first node. DLV persists the primary/non-primary state to disk, so the -p flag is not required on subsequent boots.
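For example, the first node of a new DLV cluster might be booted like this (a sketch; subsequent boots, and all other nodes, omit the flag):

$ ./start-rhino.sh -p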
Using quorum nodes to distinguish node failure from network segmentation
What is a quorum node?
A quorum node is a lightweight node added to distinguish between network segmentation and node failure (as described above). It does not process events or run SLEE services (nodes that are not quorum nodes are sometimes called "event-router nodes"). Quorum nodes have much lighter hardware requirements than event-router nodes. To start a quorum node, pass the -q flag to start-rhino.sh. |
A quorum node is useful to help distinguish between node failure and network segmentation when using just two event-router nodes. Given a cluster of nodes {1,2}
, there are two node-failure cases:
-
If node
{2}
fails, the remaining node{1}
will stay primary because it is the distinguished node (having the lowest node ID). -
If node
{1}
fails, the remaining node{2}
will go non-primary and shut down. DLV can’t distinguish this from network segmentation, so it shuts down node{2}
to prevent the possibility of a split-brain scenario. This usually isn't desirable; there are two approaches to solving this case: use the 2-node component selector, or add a single quorum node to the cluster.
The 2-node component selector
The 2-node selector is designed exclusively for configurations with exactly two Rhino nodes, with a redundant network connection between them. It differs from DLV in how it handles node failures. When one node fails, the other node stays primary, regardless of which of the nodes failed. Conceptually, the responsibility of avoiding a split-brain scenario shifts to the redundant network connection. For this reason, this strategy should only be used when a redundant connection is available. If network segmentation happens, and two primary components remerge, one side of the segment will be shut down.
Quorum nodes cannot be used with a 2-node selector. If you choose a 2-node selector, Rhino will prevent quorum nodes from booting. |
Activating 2-node selectors automatically
2-node selectors automatically go primary when booting. (The -p flag is not necessary when using the 2-node selector, and is ignored.) When both nodes are visible, they become primary without delay. When a single node boots alone, it waits for a short time (five seconds by default) before going primary. This prevents a different split-brain case when introducing new nodes.
Communications mode
Cluster membership runs over exactly one of two communication modes. The communication mode must be chosen at cluster creation time and cannot be reconfigured live.
Multicast
This communication mode uses UDP multicast for communication between nodes. This requires that UDP multicast be available and correctly working on all hosts in the cluster.
Scattercast
This communication mode uses UDP unicast in a mesh topology for communication between nodes. This mode is intended for use where multicast support is not available, such as in the cloud. Scattercast requires significantly more complex configuration, and incurs some network overhead. Thus we do not recommend scattercast where multicast is available.
About Multicast
Multicast is the default communications mode used in Rhino.
This mode allows for automated cluster membership discovery by using the properties of multicast behaviour. When the network supports multicast this is the preferred communication mode, as it is much easier to configure in Rhino.
Nodes communicate by sending messages to well-known multicast groups. These are received by all nodes within the same network.
Configuring multicast
Configuration of multicast is very simple. A multicast address range must be specified; addresses from this range are used for different internal groups. Configuring this is handled by the rhino-install.sh
script.
Troubleshooting multicast
A troubleshooting guide for multicast clusters can be found in Clustering.
About Scattercast
Scattercast is implemented as a replacement for UDP multicast clustering in environments that do not support multicast.
Cluster-wide communication mode
Choosing a cluster communication mode is a cluster-wide decision that should be made before installation begins. If the communication mode is inconsistent across nodes, the cluster cannot form correctly; instead, two independent primary clusters will form. |
How does it work?
Normally Savanna will send UDP datagrams to a well-known multicast group address / port combination to maintain cluster membership. Message transfer happens on separate multicast group address / port combinations that are allocated at runtime from a pool.
Scattercast replaces each multicast UDP datagram with multiple unicast UDP datagrams, one to each involved node. Each node has a unique unicast address / port combination (its "scattercast endpoint") used for cluster membership. A separate unicast address is used for each message group, and another for co-ordinating state transfer between members. The state-transfer endpoint is derived from the membership endpoint: membership IP address, membership port + state_distribution_port_offset (default 100, configured in {$NODE_HOME}/config/savanna/cluster.properties). For example, with the default offset, a node whose membership endpoint is 192.168.0.127:12000 uses 192.168.0.127:12100 for state transfer.
All nodes must know in advance the endpoint addresses of every other node in the same cluster. To achieve this, a configuration file, scattercast.endpoints, is stored on each node. This file is created during installation and is subsequently managed using the Scattercast Management commands.
Separate endpoints for message transfer are allocated at runtime based on the membership address and a port chosen from the range first_port to last_port (defaults 46700 and 46800, configured in {$NODE_HOME}/config/savanna/cluster.properties).
Scattercast uses separate groups for message transfer and membership. Ports used for the membership group in scattercast endpoints must not overlap with the port range used for message groups. UDP broadcast addresses are not supported in scattercast endpoints; they are not rejected by the installer, the scattercast commands, or the recovery tool, so users must avoid them. |
Scattercast endpoint configuration is versioned, and hashed to ensure consistency. The cluster will prefer the newest version if multiple versions are detected when a node tries to join the cluster. Nodes that detect an out-of-date local version will shut down immediately. Nodes that detect a hash mismatch will also shut down immediately, as this indicates corrupt or manually modified contents.
All clustering configuration is stored per node, and must be updated on all nodes to remain in sync. It is expected that this should not be changed often, if at all.
What’s the downside?
Scattercast requires sending additional traffic. For an N-node cluster, scattercast will generate about (N-1) times as many datagrams as the equivalent multicast cluster. That is, there is no penalty for a 2-node cluster; a 3-node cluster will generate about 2x traffic; a 4-node cluster will generate 3x traffic; and so on. At high loads you may run out of network bandwidth sooner; also, there is some CPU overhead involved in sending the extra datagrams.
Scattercast cannot automatically discover nodes; you must explicitly provide endpoint information for all nodes in the cluster. To add, remove, or update nodes at runtime, use the online management commands.
Manual editing of the configuration file scattercast.endpoints is not supported. Manual editing will cause edited nodes to fail to boot. |
Initial setup
A cluster must be seeded with an initial scattercast endpoints file containing valid mappings for all initial nodes. Without a valid scattercast endpoints file, a node cannot boot in scattercast communications mode. This initial endpoints set may be generated by the Rhino installer: when installing in scattercast mode, the installer script must be provided with an initial endpoints set. Details can be found in Unpack and Gather Information.
If the initial cluster size is known at installation time, providing the full endpoint set here is recommended, as doing so avoids any later manual steps.
Troubleshooting guide
A troubleshooting guide for scattercast can be found in Scattercast Clustering.
Scattercast Management
Online management
Once a cluster has been established following the procedures in initial setup, online management of the scattercast endpoints becomes possible. There are four basic management commands, to get, add, delete, or update scattercast endpoints.
Each command applies its result to all currently running nodes. A node that is not running but requires the new endpoints set must be updated manually: copy {$NODE_HOME}/config/savanna/scattercast.endpoints from any up-to-date node to the matching path for the offline node. All currently running nodes should hold an up-to-date copy of this file; this can be verified with the getscattercastendpoints command.
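For example, the file could be copied with scp (a sketch; host names and node directory paths are illustrative):

$ scp host-a:/opt/rhino/node-101/config/savanna/scattercast.endpoints \
      host-b:/opt/rhino/node-104/config/savanna/scattercast.endpoints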
scattercast.endpoints cannot be manually edited. Nodes will not boot with a manually edited scattercast.endpoints . |
Multicast, localhost, and wildcard addresses are not permitted in scattercast endpoints. Because an endpoint address is used both to send and to receive, a localhost address would confine the cluster to a single host, and wildcard addresses can only be used to listen on all interfaces, not to send from. Both are therefore invalid in scattercast endpoints.
Inconsistent states
When manually copying scattercast endpoint sets to all cluster members, the cluster will reject all write-management commands until it is rebooted. This occurs because the persistent and in-memory states are no longer identical across all nodes.
The same situation can arise in other ways, and can be resolved with the following steps:
-
If the disk state is correct on all nodes, reboot the cluster.
-
If the disk state is not correct or not the same on all nodes:
-
If the disk state is incorrect, use the recover-scattercast-endpoints.sh tool (see Repair below) to create a new, correct file, and copy it to all nodes before rebooting.
-
If the disk state is correct on some but not all nodes, copy the file from a correct node to all other nodes.
Repair
In most cases where the scattercast configuration is inconsistent, the faulty nodes can be restored by copying the scattercast.endpoints
file from an operational node. If the node has been deleted from the current configuration, it should first be re-added using the addscattercastendpoints rhino-console command. The configuration file can be found at $RHINO_HOME/$NODE/config/savanna/scattercast.endpoints
.
If no nodes are operational, such as after a major change to network addressing, the tool recover-scattercast-endpoints.sh
can be used to rebuild the configuration from scratch.
After running recover-scattercast-endpoints.sh
, you must copy the generated file to $RHINO_HOME/$NODE/config/savanna/scattercast.endpoints
for each node.
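For example, the recovered file might be distributed like this (a sketch; the output file name and node directories are illustrative):

for NODE in node-101 node-102 node-103; do
  cp recovered.endpoints "$RHINO_HOME/$NODE/config/savanna/scattercast.endpoints"
done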
Recovering scattercast endpoints
The recover-scattercast-endpoints.sh
script is used to rebuild the scattercast configuration file after a major network change or loss of configuration data. It can be run interactively, prompting for node,IP,port tuples, or with the new configuration supplied on the command line. Options must be provided before the list of endpoints. To use automatic port assignment, you must provide the baseport and offset options to allow calculation of a valid port set.
Usage:
$ cd rhino
$ ./recover-scattercast-endpoints.sh -?
Usage: ./recover-scattercast-endpoints.sh [options] [node,ip-address[,port]]*
Creates a seed scattercast endpoints file. The generated file needs to be copied to {$NODE_HOME}/config/savanna/scattercast.endpoints in all cluster nodes.
If no endpoints are provided, enters interactive mode.
arguments:
-f, --file Relative path to output file.
-b, --baseport The scattercast base port, used to derive a port when no port is specified in endpoints.
-o, --offset The scattercast port offset, used to derive a port when no port is specified in endpoints.
-?, --help Displays this message.
Example:
$RHINO_HOME/recover-scattercast-endpoints.sh -b 19000 -o 100 101,192.168.1.1,19000 102,192.168.1.2 103,192.168.1.2,19003
If baseport and offset are provided, they are used only for the recovery tool. Nodes added or updated with management commands will continue to use values in cluster.properties . |
Add Scattercast Endpoint(s)
addscattercastendpoints
adds one or more new endpoints to the scattercast endpoints set.
This must be done before the new node is booted, because a node cannot boot if it is not in the scattercast endpoints set. After the add command completes successfully, the scattercast endpoints file must be copied from an existing node to the new node; this copy cannot be performed with Rhino management commands.
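Putting these steps together, adding a new node 104 might look like this (a sketch; the host name and paths are illustrative):

$ ./rhino-console addscattercastendpoints 104,192.168.0.128
$ scp node-101/config/savanna/scattercast.endpoints \
      host-b:/opt/rhino/node-104/config/savanna/scattercast.endpoints
$ ssh host-b 'cd /opt/rhino/node-104 && ./start-rhino.sh -k'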
If an endpoint is added with the wrong ip/port, this can be resolved by deleting and re-adding the endpoint. |
Command |
addscattercastendpoints <node,ip-address[,port]>* Description Add scattercast endpoints for new cluster members. If port is omitted, one will be assigned automatically. |
---|---|
Examples |
Add endpoints for nodes [Rhino@localhost (#1)] addscattercastendpoints 102,192.168.0.127 103,192.168.0.127 Endpoints added successfully. Displaying new scattercast endpoints mappings: NodeID Address ------- -------------------- 101 192.168.0.127:12000 102 192.168.0.127:12001 103 192.168.0.127:12002 3 rows |
Attempt to add an invalid address: [Rhino@localhost (#4)] addscattercastendpoints 104,224.0.101.1 Multicast addresses are not permitted in scattercast endpoints: 224.0.101.1 Invalid usage for command 'addscattercastendpoints'. Usage: addscattercastendpoints <node,ip-address[,port]>* Add scattercast endpoints for new cluster members. If port is omitted, one will be assigned automatically. |
|
Add a node while node [Rhino@localhost (#7)] addscattercastendpoints 104,192.168.0.127 Failed to add endpoints: Node 102 reports: Disk state does not match memory state. No write commands available. |
Delete Scattercast Endpoint(s)
deletescattercastendpoints
removes endpoints for shut-down nodes.
A node’s endpoint cannot be deleted while in use. This means that the node must be shut down and have left the cluster before a delete can be issued.
A node that has been deleted cannot rejoin the cluster unless it is re-added, and the new scattercast endpoints file copied over. Copying an older scattercast endpoints file will not work, as the cluster uses versioning to protect against out-of-sync endpoints files.
Command |
deletescattercastendpoints <-nodes node1,node2,...> Description Delete scattercast endpoints for cluster members being removed. |
---|---|
Examples |
Delete a shut-down node: [Rhino@localhost (#3)] deletescattercastendpoints -nodes 104 Endpoints deleted successfully, removed nodes shut down. New scattercast endpoints mappings: NodeID Address ------- -------------------- 101 192.168.0.127:12000 102 192.168.0.127:12001 103 192.168.0.127:12002 3 rows |
Delete a running node: [Rhino@localhost (#9)] deletescattercastendpoints -nodes 101 Failed to delete scattercast endpoints due to: Node: 101 currently running. Please shutdown nodes before deleting. |
Get Scattercast Endpoints
getscattercastendpoints
reports the set of scattercast endpoints known to all currently running cluster members.
This command may be issued at any time. If the cluster membership changes while the command runs, the command fails immediately, reporting that the cluster membership changed. This behaviour is consistent with the write commands.
Command |
getscattercastendpoints Description Get the scattercast endpoints for the cluster. |
---|---|
Examples |
Single node read with consistent endpoints: [Rhino@localhost (#1)] getscattercastendpoints [Consensus] Disk Mapping : Coherent [Consensus] Memory Mapping : [101] Address : 192.168.0.127:12000 [102] Address : 192.168.0.127:12001 |
Two nodes read, where node [Rhino@localhost (#2)] getscattercastendpoints [101] Disk Mapping : Coherent [101] Memory Mapping : [101] Address : 192.168.0.127:12000 [102] Address : 192.168.0.127:12001 [102] Disk Mapping : [101] Address : 192.168.0.127:12000 [102] Address : 192.168.0.127:12001 [103] Address : 192.168.0.127:18000 [102] Memory Mapping : [101] Address : 192.168.0.127:12000 [102] Address : 192.168.0.127:12001 |
|
Read failed due to a cluster-membership change: [Rhino@localhost (#3)] getscattercastendpoints [Consensus] Disk Mapping : Cluster membership change detected, command aborting [Consensus] Memory Mapping : Cluster membership change detected, command aborting |
Update Scattercast Endpoint(s)
updatescattercastendpoints
updates the endpoints for currently running nodes.
In order to update scattercast endpoints, the SLEE must be stopped cluster-wide. If the update succeeds, it triggers an immediate cluster restart to reload scattercast state.
Update commands make a best-effort attempt to validate that the updated value will be usable after the cluster reboot. This is done by attempting to bind the new address.
Updates cannot be applied to cluster nodes that are not running. To update a node that is out of the cluster, delete the node and re-add it with the new address.
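A typical update therefore stops the SLEE first (a sketch; the endpoint values are illustrative, and stop here is assumed to stop the SLEE cluster-wide):

$ ./rhino-console stop
$ ./rhino-console updatescattercastendpoints 101,192.168.0.127,18000

On success, the cluster shuts down and restarts automatically to reload the scattercast state.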
Command |
updatescattercastendpoints <node,ip-address[,port]>* Description Update scattercast endpoints for existing cluster members. If port is omitted, one will be assigned automatically. WARNING: This command will cause a cluster restart. |
---|---|
Examples |
Update with the whole cluster in a stopped state: [Rhino@localhost (#0)] updatescattercastendpoints 101,192.168.0.127,18000 Update executed successfully, cluster shutting down now. |
Update while the SLEE is in a running state: [Rhino@localhost (#3)] updatescattercastendpoints 101,192.168.0.127,12000 Failed to update scattercast endpoints due to: Cannot update scattercast endpoints while SLEE is running. |
|
Update a non-running node: [Rhino@localhost (#5)] updatescattercastendpoints 102,192.168.0.127,12000 Failed to update scattercast endpoints due to: 102 is not currently alive. Updates can only be done against currently live nodes |
Errors
If the update command fails partway through execution, the update has most likely not been applied. In rare circumstances, such as multiple disk errors or filesystem-access problems, the update may have been applied to only some nodes. Run the getscattercastendpoints command to verify that the on-disk configuration is either coherent with the in-memory configuration (the command rolled back cleanly) or consistent across all nodes (the command failed after writing all changes). Identify and fix the fault that caused the command to fail, then reboot the cluster. If the update failed before writing the new configuration, rerun the update after fixing the fault.
Alarms
If the updatescattercastendpoints command is unable to reboot the cluster automatically, for example due to a timeout writing state to the persistent database, it raises a CRITICAL alarm of type rhino.scattercast.update-reboot-required with the message:
Scattercast endpoints have been updated. A cluster reboot is required to apply the update as soon as possible otherwise a later partial reboot e.g. due to network segmentation could result in a split-brain cluster.
Static Replication Domaining
This section covers which resources are domainable in Rhino 2.3 and later, instructions for configuring basic and advanced features of static replication domaining, and how to display the current domaining configuration.
What is static replication domaining?
Static replication domaining means partitioning Rhino’s replication mechanisms to perform replication only between selected subsets of nodes. A subset of nodes is called a "domain". This provides better scaling for larger clusters, while still providing a level of replication to ensure fault tolerance. Prior to Rhino 2.3.0, a cluster could be considered as having one and only one domain, and every node could be considered a member of that domain. Domain configuration consists of a set of domain definitions, each associated with one or more domainable resources and one or more cluster nodes. |
Domainable resources
Rhino includes two types of domainable resources:
-
persistence resources — instances of MemDB (Rhino’s in-memory database) that act as storage for SBB, RA, or profile replication
-
activity handler resources — the existence and the state of Activity Context Interfaces, Activity Contexts, and associated attributes.
Activity Handlers, SBB Persistence, and RA Persistence replicated resources are domainable in Rhino 2.3.0 and later |
The Null Activity Factory and Activity Context Naming are not domainable. This means that these resources are replicated cluster-wide. |
Configuring Static Replication Domaining
Below are instructions for configuring static replication domaining.
Configuring basic domaining settings
To configure domaining, edit config/rhino-config.xml in each Rhino node directory. The domain definitions in those files look like this:
<domain name="domain-name" nodes="101,102,...,n"> ... resources associated with the domain ... </domain>
Domainable resources
Inside each domain configuration block, each resource is defined using the following format and resource names:
Persistence resources | Activity Handler resources | |
---|---|---|
Format |
Inside a <memdb-resource> ...memory database name... </memdb-resource> |
Inside an <ah-resource> ...activity handler name... </ah-resource> |
Name |
Same as the jndi-name used in its declaration in <memdb> <jndi-name>DomainedMemoryDatabase</jndi-name> <message-id>10005</message-id> <group-name>rhino-db</group-name> <committed-size>100M</committed-size> <resync-rate>100000</resync-rate> </memdb> |
Same as its group-name in rhino-config.xml: <activity-handler> <group-name>rhino-ah</group-name> <message-id>10000</message-id> <resync-rate>100000</resync-rate> </activity-handler> |
It is extremely important that the domaining configuration section of rhino-config.xml is identical on every node in the cluster. Some persistence resources are not domainable, as they contain data which either makes no sense to domain or which must be global to the entire cluster. The current undomainable persistence resources are |
Example configuration
rhino-config.xml includes the following sample domaining configuration, commented out by default. It configures an 8-node cluster into 4 domains of 2 nodes each, specifying that replication of SBB and RA shared state only happens between each pair of nodes.
<!-- Example replication domain configuration. This example splits the cluster into several 2-node domain pairs for the purposes of service state replication. This example does not cover replication domaining for writeable profiles. --> <domain name="domain-1" nodes="101,102"> <memdb-resource>DomainedMemoryDatabase</memdb-resource> <ah-resource>rhino-ah</ah-resource> </domain> <domain name="domain-2" nodes="201,202"> <memdb-resource>DomainedMemoryDatabase</memdb-resource> <ah-resource>rhino-ah</ah-resource> </domain> <domain name="domain-3" nodes="301,302"> <memdb-resource>DomainedMemoryDatabase</memdb-resource> <ah-resource>rhino-ah</ah-resource> </domain> <domain name="domain-4" nodes="401,402"> <memdb-resource>DomainedMemoryDatabase</memdb-resource> <ah-resource>rhino-ah</ah-resource> </domain>
This example contains node IDs which start with the same number as their corresponding domain. While it’s not required, OpenCloud recommends this naming scheme as it clarifies which nodes are associated with a particular domain. |
Default domain
The default domain (named domain-0
) is not configurable and contains all replicated resources which are not explicitly domained as part of the configuration in rhino-config.xml
. If a node is booted into the cluster and has no domain configuration associated with it, it will use the default domain for all persistence resources. If no domains are configured at all, all resources belong to the default domain.
Advanced configuration
It is possible, though less usual, to configure overlapping domains with different resources. The only constraint on the domaining configuration is that, for each domainable resource, any given node may appear in only one domain containing that resource. For example, the following configuration is valid, even though multiple domains contain the same node IDs.
This example builds on the basic example, adding two more domains (domain-profiles-1 and domain-profiles-2 ). These additional domains allow replication of writeable profiles (backed by MyWriteableProfileDatabase ) across a larger set of nodes than the domains used for service replication. |
<domain name="domain-profiles-1" nodes="101,102,201,202"> <memdb-resource>MyWriteableProfileDatabase</memdb-resource> </domain> <domain name="domain-profiles-2" nodes="301,302,401,402"> <memdb-resource>MyWriteableProfileDatabase</memdb-resource> </domain> <domain name="domain-services-1" nodes="101,102"> <memdb-resource>DomainedMemoryDatabase</memdb-resource> <ah-resource>rhino-ah</ah-resource> </domain> <domain name="domain-services-2" nodes="201,202"> <memdb-resource>DomainedMemoryDatabase</memdb-resource> <ah-resource>rhino-ah</ah-resource> </domain> <domain name="domain-services-3" nodes="301,302"> <memdb-resource>DomainedMemoryDatabase</memdb-resource> <ah-resource>rhino-ah</ah-resource> </domain> <domain name="domain-services-4" nodes="401,402"> <memdb-resource>DomainedMemoryDatabase</memdb-resource> <ah-resource>rhino-ah</ah-resource> </domain>
The configuration and setup of the memory database for use with writeable profiles is beyond the scope of this documentation. |
Displaying the Current Domaining Configuration
To display the current domaining configuration, use the following rhino-console command or MBean operation.
Console command: getdomainstate
Command |
getdomainstate Description Display the current state of all configured domains |
---|---|
Output |
Display the current state of all configured domains. |
Example |
$ ./rhino-console getDomainState domain-1: DomainedMemoryDatabase, rhino-ah 101 Running 102 Running domain-2: DomainedMemoryDatabase, rhino-ah 201 Running 202 Running domain-3: DomainedMemoryDatabase, rhino-ah 301 Stopped 302 - domain-4: DomainedMemoryDatabase, rhino-ah 401 - 402 - |
Nodes which are configured with domain information but are not currently part of the cluster are represented by a - . |
MBean operation: getDomainConfig
MBean |
|
---|---|
Rhino extension |
public TabularData getDomainConfig() throws ManagementException; (See the |
Data Striping
This section covers which MemDB instances support data striping, instructions for configuring basic and advanced features of MemDB data striping, how to display the current striping configuration, and striping-related statistics.
What is MemDB data striping?
MemDB data striping means dividing a MemDB instance into partially independent "stripes". This can remove bottlenecks in MemDB, letting Rhino make better use of the available cores on machines with many CPU cores. In other words, the primary purpose of data striping is to increase vertical scalability. MemDB data striping was introduced in Rhino 2.3.1. |
MemDB instances
Rhino includes two types of MemDB (Rhino’s in-memory database):
-
local MemDB — contains state local to the Rhino node, used by non-replicated applications running in "high-availability mode".
-
replicated MemDB — contains state replicated across the cluster, domain, or sub-cluster, used by replicated applications running in "fault-tolerant mode".
Both local MemDB and replicated MemDB support data striping. |
MemDB instances backed by disk storage — including the profile database and management database — do not support striping. |
Configuring Data Striping
Below are instructions for configuring data striping.
Configuring basic striping settings
The number of stripes can be configured for each instance of MemDB.
How does the stripe count work?
To scale well on increasingly multi-core systems, it's important to understand how the stripe count works. In summary, the stripe count is a measure of commit concurrency. |
Below are details on the default settings for stripe counts, and how to choose and set the stripe count for your MemDB instances.
Choosing a stripe count
The stripe count must be 1 or greater, and must be a power of two (1, 2, 4, 8, 16, …). The stripe count should be proportional to the number of CPU cores in a server; a good rule of thumb is about half the number of CPU cores. For example, a 16-core server suggests a stripe count of 8.
To disable striping, use a stripe count of 1.
In some cases, when nodes are regularly leaving and joining a cluster, enabling striping can cause all cluster nodes to be restarted.
We recommend that you consult OpenCloud before enabling striping, to ensure it is configured correctly in a stable and consistent network. |
Setting the stripe count
Each MemDB instance has its own stripe count. To configure the stripe count for a particular MemDB instance, you edit the MemDB configuration for that instance, in the config/rhino-config.xml
file in each Rhino node directory.
The stripe count for a MemDB instance must be the same on all nodes in the cluster. A new node will not start if it contains a stripe count which is inconsistent with other nodes in the cluster. Therefore, the stripe count cannot be changed while a cluster is running. |
The striping configuration for a local MemDB instance looks like this:
<memdb-local> ... <stripe-count>8</stripe-count> </memdb-local>
The striping configuration for a replicated MemDB instance looks like this:
<memdb> ... <stripe-count>8</stripe-count> </memdb>
Advanced configuration: using a stripe offset
It is possible (though very uncommon), to configure a stripe offset for replicated MemDB instances.
What is a "stripe offset"?
Each replicated MemDB declares which Savanna group it runs on. For example: <memdb> ... <group-name>rhino-db</group-name> ... </memdb> When using data striping, each stripe runs on a separate Savanna group. By default, these groups are zero-indexed, taking names (in this example) derived from rhino-db plus the stripe index. A stripe offset raises the index of the first stripe. So, for example, a stripe offset of 4 means the stripe group indices start at 4 rather than 0. |
Below are details on when to consider using and how to set a stripe offset.
When should I use a stripe offset?
Very likely, never. The combination of Savanna group name and stripe offset allows an administrator greater flexibility in mapping data stripes to groups.
Typically, when choosing which Savanna groups to use for MemDB, use of the group-name
should be sufficient. To run multiple MemDB instances on the same Savanna group, configure them with the same group-name
value. To run multiple MemDB instances on different Savanna groups, configure them with different group-name
values.
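For example, two replicated MemDB instances could share one Savanna group by declaring the same group-name (a sketch based on the memdb declaration format shown above; the jndi-name values are illustrative):

<memdb>
  <jndi-name>FirstMemoryDatabase</jndi-name>
  <group-name>rhino-db</group-name>
  ...
</memdb>
<memdb>
  <jndi-name>SecondMemoryDatabase</jndi-name>
  <group-name>rhino-db</group-name>
  ...
</memdb>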
How do I set the stripe offset?
The default stripe offset is 0.
To change the offset for a particular MemDB instance, edit the MemDB configuration for that instance in the config/rhino-config.xml
file, in each Rhino node directory, adding a stripe-offset
element.
The stripe offset for a MemDB instance must be the same on all nodes in the cluster. A new node will not start if it contains a stripe offset which is inconsistent with other nodes in the cluster. Therefore, the stripe offset cannot be changed while a cluster is running. |
The stripe offset configuration for a replicated MemDB instance looks like this:
<memdb> ... <stripe-count>8</stripe-count> <stripe-offset>4</stripe-offset> </memdb>
Displaying the Current Striping Configuration
To display the current striping configuration, use the following rhino-console command or MBean operations.
Console command: getstripingstate
Command |
getstripingstate Description Display the striping configuration of all MemDB instances |
---|---|
Output |
Display the striping configuration of all MemDB instances. |
Example |
$ ./rhino-console getstripingstate Striping configuration for replicated MemDB instances: memdb-resource stripe-count stripe-offset ------------------------- ------------- -------------- ManagementDatabase 1 0 ProfileDatabase 1 0 ReplicatedMemoryDatabase 8 0 3 rows Striping configuration for local MemDB instances on node 101: memdb-resource stripe-count -------------------- ------------- LocalMemoryDatabase 8 1 rows |
MBean operation: getReplicatedMemDBStripingConfig
MBean |
|
---|---|
Rhino extension |
public TabularData getReplicatedMemDBStripingConfig() throws ManagementException; (See the |
MBean operation: getLocalMemDBStripingNodeConfig
MBean |
|
---|---|
Rhino extension |
public TabularData getLocalMemDBStripingNodeConfig() throws ManagementException; (See the |
MemDB and striping statistics
There are two sets of statistics related to MemDB data striping: MemDB statistics and striping-statistics.
MemDB statistics and striping
MemDB collects statistics under the MemDB-Replicated
and MemDB-Local
parameter sets, within each data stripe. They can be monitored on a per-stripe basis, or viewed as an aggregate across all stripes.
The parameter set names of the per-stripe statistics end with a suffix of the form .stripe-N. For example, the stats for the first stripe have the suffix .stripe-0.
Striping statistics
MemDB maintains atomicity, consistency and isolation of data across stripes. This involves managing the versions of data exposed to various client transactions. The MemDB-Timestamp
parameter set contains the relevant statistics.
This is a listing of the statistics available for a particular MemDB instance, within the MemDB-Timestamp
parameter set:
Counter type statistics: Id: Name: Label: Description: 0 waitingThreads waiting The number of threads waiting for a timestamp to become safe 1 unexposedCommits unexposed The number of commits which are not yet safe to expose
A database transaction containing at least one write is considered "safe to expose to client transactions" when (as shown by these statistics) all its changes — as well as all the write transactions that precede them — are available across all stripes.
These statistics are expected to have low values even under load (often with value zero), and should stay at zero when Rhino is not under load. |
MetaView Service Assurance Server (SAS) Tracing
MetaView Service Assurance Server (SAS) is a platform that records traces of network flows and service logic.
Rhino provides a SAS facility for components to send events to SAS.
This section describes the commands for managing the SAS facility and resource bundles.
SAS Configuration
This page describes the commands used to configure and enable the SAS facility. SAS configuration is namespace-aware: all these commands apply to the current namespace for the client (selected with setactivenamespace
).
Events sent to SAS are associated with a resource identifier. All components within a Rhino namespace use the same resource identifier. The resource identifier can be set with the setsasresourceid
command.
The resource identifier is included in the generated resource bundle that is imported into SAS.
Rhino supports connecting to some or all SAS server instances in a federation. These are maintained as an internal list of servers and ports. Servers may be added with addsasserver and removed with removesasserver. By default SAS listens on port 6761; if the port is omitted from the add command, the default port is used.
Console command: addsasserver
Command |
addsasserver <servers> Description Add one or more servers to the set of configured SAS servers Required Arguments servers Comma delimited list of host:port pairs for SAS servers |
---|---|
Example |
[Rhino@localhost (#1)] addsasserver localhost:12000 Added server(s) to SAS configuration properties: servers=localhost:12000 [Rhino@localhost (#2)] addsasserver 127.0.0.1:12001,127.0.0.2 Added server(s) to SAS configuration properties: servers=127.0.0.1:12001,127.0.0.2 |
Console command: removesasserver
Command |
removesasserver <servers> Description Remove one or more servers from the set of configured SAS servers Required Arguments servers Comma delimited list of host:port pairs for SAS servers |
---|---|
Example |
[Rhino@localhost (#1)] removesasserver localhost:12000 Removed server(s) from SAS configuration properties: servers=localhost:12000 [Rhino@localhost (#2)] removesasserver 127.0.0.1:12001,127.0.0.2 Removed server(s) from SAS configuration properties: servers=127.0.0.1:12001,127.0.0.2 |
Console command: setsassystemname
Command |
setsassystemname <systemName> [-appendID <appendID>] Description Configure the SAS system name. Required Arguments systemName The unique system name to use. Cluster wide Options -appendID If true, append node ID to system name
---|---|
Example |
$ ./rhino-console setsassystemname mmtel Set SAS system name: systemName=mmtel $ ./rhino-console setsassystemname mmtel -appendID true Set SAS system name: systemName=mmtel appendID=true |
Console command: setsasresourceid
Command |
setsasresourceid <resourceIdentifier> Description Configure the SAS resource identifier. Required Arguments resourceIdentifier The resource identifier to use. |
---|---|
Example |
$ ./rhino-console setsasresourceid com.metaswitch.rhino Set SAS resource identifier: resourceIdentifier=com.metaswitch.rhino |
Console command: setsasqueuesize
Command |
setsasqueuesize <queueSize> Description Configure the per server SAS message queue limit. Required Arguments queueSize The maximum number of messages to queue for sending to the SAS server. |
---|---|
Example |
$ ./rhino-console setsasqueuesize 100000 Set SAS queue size: queueSize=100000 |
Console command: getsasconfiguration
Command |
getsasconfiguration Description Display SAS tracing configuration |
---|---|
Example |
$ ./rhino-console getsasconfiguration SAS tracing is currently disabled. Configuration properties for SAS: servers=[sas-server] systemName=mmtel appendNodeIdToSystemName=true resourceIdentifier=com.metaswitch.rhino queueSize=10000 per server |
Enabling and Disabling SAS Tracing
SAS tracing can be enabled and disabled using the setsasenabled
command. The Rhino SAS facility must be configured with both a resource identifier and server list before being enabled. SAS tracing state is namespace-aware: this command applies to the current namespace for the client (selected with setactivenamespace
).
Disabling SAS tracing on a running SLEE requires the -force option. When the SLEE is running, there may be activities actively tracing to SAS, and live reconfiguration of the SAS facility will break all trails started before the reconfiguration. If this is acceptable, the -force option allows a clean shutdown of SAS tracing for reconfiguration.
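For example, a first-time enablement might run the following commands in order (a sketch; the server host is illustrative, and the resource identifier value matches the examples above):

$ ./rhino-console setsasresourceid com.metaswitch.rhino
$ ./rhino-console addsasserver sas.example.com
$ ./rhino-console setsasenabled true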
Console command: setsasenabled
Command |
setsasenabled <enable> [-force <force>] Description Enable or disable SAS tracing. Configure SAS before enabling. Required Arguments enable True to enable SAS tracing, false to disable. Options -force True to override the SLEE state check when disabling SAS tracing state. SAS tracing state cannot normally be disabled when the SLEE is not in the Stopped state, because this may cause incomplete trails to be created in SAS for sessions that are in progress. |
---|---|
Example |
To enable SAS tracing: $ ./rhino-console setsasenabled true SAS tracing enabled |
SAS Bundle Mappings
Rhino uses a prefix per mini-bundle to generate the full event IDs included in the final exported SAS resource bundle. Generated event IDs are 24 bits, composed of the assigned bundle mapping prefix in the upper 16 bits and the mini-bundle event index in the lower 8 bits. For example, a mini-bundle assigned prefix 0x0001 produces event IDs in the range 0x000100 to 0x0001FF.
By default, these mappings need to be manually assigned. To have Rhino automatically assign prefixes to unmapped bundles, set assignbundlemappings
to true
when installing a deployable unit.
Events reported from mini-bundles that have not been assigned a prefix are ignored and not sent to SAS. To check for any mini-bundles with missing prefix mappings, use the listunmappedsasbundles console command.
To manage these mappings, Rhino provides the following methods.
Console command: listsasbundlemappings
Command |
listsasbundlemappings [-sortBy <sortBy>] Description Lists all the SAS bundle mappings. Options -sortBy The column to sort the bundle mappings by for display. Either 'name' or 'prefix' |
---|---|
Example |
[Rhino@localhost (#1)] listsasbundlemappings name prefix ---------------------------------------- ------- com.opencloud.slee.services.example.sas 0x0001 1 rows |
Console command: setsasbundlemapping
Command |
setsasbundlemapping <name> <prefix> Description Sets a SAS bundle mapping. Required Arguments name The fully qualified name of the bundle. prefix The prefix for the bundle mapping, as a decimal, hex, or octal string. |
---|---|
Example |
[Rhino@localhost (#1)] setsasbundlemapping com.opencloud.slee.services.example.sas 0x0001 Added a SAS bundle mapping from com.opencloud.slee.services.example.sas to 0x0001. |
Console command: removesasbundlemapping
Command |
removesasbundlemapping <name> Description Removes a SAS bundle mapping. Required Arguments name The fully qualified name of the bundle. |
---|---|
Example |
[Rhino@localhost (#1)] removesasbundlemapping com.opencloud.slee.services.example.sas Prefix for com.opencloud.slee.services.example.sas removed. |
SAS Bundle Generation
SAS requires at least one resource bundle file containing definitions of all events that will be sent to the server. These definitions show SAS how to display and interpret data sent to the server.
Rhino verifies that a SAS-enabled deployable unit includes a resource bundle containing definitions for all events it uses. These per-DU resource bundles are called mini-bundles.
Rhino provides console commands to export a resource bundle suitable for use by SAS, containing mini-bundles from all installed deployable units.
Console command: exportsasbundle
Command |
exportsasbundle <bundleFileName> Description Export SAS bundle. Required Arguments bundleFileName The bundle file name. |
---|---|
Example |
[Rhino@localhost (#1)] exportsasbundle my-bundle.yaml Wrote combined bundle to: ~/my-bundle.yaml
Exported bundle
info: identifier: my-rhino minimum_sas_version: '9.1' version: '1522714397579' events: 0x000100: summary: Test Event level: 100 enums: { } |
System Properties
Below is a list of system properties that can be used to modify Rhino behaviour.
Name | Description |
---|---|
Interval between sending per-group heartbeat messages |
|
com.opencloud.savanna2.framework.GroupHeartbeat.loss_threshold |
Number of unreceived pending heartbeats before the group heartbeat watchdog condition triggers |
Transaction timeout for eventrouter transactions |
|
Log level to set on the Log4j status logger |
|
Maximum number of pending JMX notifications before notifications are dropped |
|
Number of JMX notification threads to run |
|
Set the max number of AH GC threads |
|
Set the maximum lock timeout for local activity handler |
|
Set the maximum lock timeout for replicated activities |
|
Default ah migration timeout |
|
Format of CSV management audit log |
|
Distributed lock timeout used during node bootup |
|
Number of resource adaptor entity notifier (callback) threads |
|
Maximum number of threads available to handle group RMI operations. |
|
The timeout in milliseconds before a JVM shutdown is forcibly terminated with a Runtime.halt(). |
|
Location of default Rhino license file |
|
Set the default timeout for lock acquisitions |
|
Timeout used for write operations during exceptionally long SLEE starts (during Rhino boot) |
|
Queue size for Distributed Resource Manager’s misc. runnable stage |
|
Number of threads used in Distributed Resource Manager’s misc. runnable stage |
|
Enable the embedded Rhino Element Manager (SDK only). |
|
Whether to skip the check that prevents a resource adaptor from creating an activity in the STOPPING state. |
|
Queue size for Transaction Manager’s executor for blocking resource callbacks |
|
Number of threads used in Transaction Manager’s executor for blocking resource callbacks |
|
Queue size for Transaction Manager’s executor for synchronization callbacks |
|
Number of threads used in Transaction Manager’s executor for synchronization callbacks |
|
Print exception stack traces for DU verification errors |
|
Interval in milliseconds to wait before clearing the queue-full alarm |
|
Maximum time a Savanna receive-thread may remain busy before it is considered stuck |
|
Default lock acquisition timeout for SNMP config update thread |
|
Interval between scans of the LIFO queue’s tail to check for expired items |
|
Minimum percentage of staging threads that must remain alive to prevent a watchdog restart |
|
Default transaction age in milliseconds before a long-running transaction is aborted |
|
Interval in milliseconds between checks for transactions that need timing out |
|
Transaction age (as a percentage of transaction timeout) to warn about long-running transactions at |
|
Interval in milliseconds between watchdog checks |
|
Maximum delay in watchdog scheduling before a warning is displayed |
|
Override the default behaviour of the watchdog to disable terminating the JVM. Do not use in a production deployment. An alarm will be raised when this mode is active. |
|
Number of consecutive times below the limit required to clear the failed count |
|
Number of times above the limit required to trigger the condition |
|
Minimum free memory or 0 to disable (deprecated: use watchdog.oom_percent_used instead, this value takes precedence) |
|
Maximum used memory or 0 to disable |
|
Maximum watchdog 'early wakeup' in milliseconds before a reverse-timewarp warning is displayed |
|
Minimum interval in milliseconds between displaying timewarp warnings |
Description |
Interval between sending per-group heartbeat messages |
---|---|
Valid values |
time in milliseconds |
Default value |
5000 |
Description |
Number of unreceived pending heartbeats before the group heartbeat watchdog condition triggers |
---|---|
Valid values |
positive integer |
Default value |
10 |
Description |
Transaction timeout for eventrouter transactions |
---|---|
Valid values |
milliseconds |
Default value |
30000 |
Description |
Log level to set on the Log4j status logger |
---|---|
Valid values |
ERROR,WARN,INFO,DEBUG,TRACE |
Default value |
ERROR |
Description |
Maximum number of pending JMX notifications before notifications are dropped |
---|---|
Valid values |
number of pending notifications, >= 0 |
Default value |
500 |
Description |
Number of JMX notification threads to run |
---|---|
Valid values |
number of threads; <= 0 implies same-thread delivery |
Default value |
1 |
Description |
Set the max number of AH GC threads |
---|---|
Valid values |
>= 2 |
Default value |
2 |
Description |
Set the maximum lock timeout for local activity handler |
---|---|
Valid values |
time in milliseconds |
Default value |
15000 |
Description |
Set the maximum lock timeout for replicated activities |
---|---|
Valid values |
time in milliseconds |
Default value |
15000 |
Description |
Default ah migration timeout |
---|---|
Valid values |
time in milliseconds |
Default value |
60000 |
Description |
Format of CSV management audit log |
---|---|
Valid values |
2.4 (old format) or 2.5 (includes an extra namespace field) |
Default value |
2.5 |
Description |
Distributed lock timeout used during node bootup |
---|---|
Valid values |
Positive integer (seconds) |
Default value |
120 |
Description |
Number of resource adaptor entity notifier (callback) threads |
---|---|
Valid values |
Positive integer |
Default value |
1 |
Description |
Maximum number of threads available to handle group RMI operations. |
---|---|
Valid values |
Positive integer |
Default value |
10 |
Description |
The timeout in milliseconds before a JVM shutdown is forcibly terminated with a Runtime.halt(). |
---|---|
Valid values |
time in milliseconds |
Default value |
60000 |
Description |
Location of default Rhino license file |
---|---|
Valid values |
absolute or relative file path |
Default value |
../rhino.license (rhino-sdk.license for Rhino SDK) |
Description |
Set the default timeout for lock acquisitions |
---|---|
Valid values |
time in milliseconds |
Default value |
60000 |
Description |
Timeout used for write operations during exceptionally long SLEE starts (during Rhino boot) |
---|---|
Valid values |
Positive integer (seconds) |
Default value |
120 |
Description |
Queue size for Distributed Resource Manager’s misc. runnable stage |
---|---|
Valid values |
Positive integer |
Default value |
100 |
Description |
Number of threads used in Distributed Resource Manager’s misc. runnable stage |
---|---|
Valid values |
Positive integer |
Default value |
2 |
Description |
Enable the embedded Rhino Element Manager (SDK only). |
---|---|
Valid values |
true,false |
Default value |
true |
Description |
Whether to skip the check that prevents a resource adaptor from creating an activity in the STOPPING state. |
---|---|
Extended Description |
This property should be set to When set to See the documentation reference for more details. |
Valid values |
true,false |
Default value |
false |
Reference |
Description |
Queue size for Transaction Manager’s executor for blocking resource callbacks |
---|---|
Valid values |
Positive integer |
Default value |
100 |
Description |
Number of threads used in Transaction Manager’s executor for blocking resource callbacks |
---|---|
Valid values |
Positive integer |
Default value |
2 |
Description |
Queue size for Transaction Manager’s executor for synchronization callbacks |
---|---|
Valid values |
Positive integer |
Default value |
500 |
Description |
Number of threads used in Transaction Manager’s executor for synchronization callbacks |
---|---|
Valid values |
Positive integer |
Default value |
2 |
Description |
Print exception stack traces for DU verification errors |
---|---|
Valid values |
true, false |
Default value |
false |
Description |
Interval in milliseconds to wait before clearing the queue-full alarm |
---|---|
Valid values |
positive integer |
Default value |
5000 |
Description |
Maximum time a Savanna receive-thread may remain busy before it is considered stuck |
---|---|
Valid values |
time in milliseconds |
Default value |
5000 |
Description |
Default lock acquisition timeout for SNMP config update thread |
---|---|
Valid values |
time in milliseconds |
Default value |
30000 |
Description |
Interval between scans of the LIFO queue’s tail to check for expired items |
---|---|
Valid values |
time in milliseconds |
Default value |
1000 |
Description |
Minimum percentage of staging threads that must remain alive to prevent a watchdog restart |
---|---|
Valid values |
0 - 100 |
Default value |
25 |
Description |
Default transaction age in milliseconds before a long-running transaction is aborted |
---|---|
Valid values |
|
Default value |
120000 |
Description |
Interval in milliseconds between checks for transactions that need timing out |
---|---|
Valid values |
time in milliseconds |
Default value |
10000 |
Description |
Transaction age (as a percentage of transaction timeout) to warn about long-running transactions at |
---|---|
Valid values |
0 - 100 |
Default value |
75 |
Description |
Interval in milliseconds between watchdog checks |
---|---|
Valid values |
positive integer |
Default value |
1000 |
Description |
Maximum delay in watchdog scheduling before a warning is displayed |
---|---|
Valid values |
|
Default value |
1000 |
Description |
Override the default behaviour of the watchdog to disable terminating the JVM. Do not use in a production deployment. An alarm will be raised when this mode is active. |
---|---|
Valid values |
true,false |
Default value |
false |
Description |
Number of consecutive times below the limit required to clear the failed count |
---|---|
Valid values |
Positive integer |
Default value |
3 |
Description |
Number of times above the limit required to trigger the condition |
---|---|
Valid values |
Positive integer |
Default value |
5 |
Description |
Minimum free memory or 0 to disable (deprecated: use watchdog.oom_percent_used instead, this value takes precedence) |
---|---|
Valid values |
Positive integer (bytes) |
Default value |
33554432 (32MB) |
Description |
Maximum used memory or 0 to disable |
---|---|
Valid values |
Positive integer (percentage of heap) |
Default value |
95 |
Description |
Maximum watchdog 'early wakeup' in milliseconds before a reverse-timewarp warning is displayed |
---|---|
Valid values |
|
Default value |
500 |
Description |
Minimum interval in milliseconds between displaying timewarp warnings |
---|---|
Valid values |
|
Default value |
15000 |
Application-State Maintenance
As well as an overview of application-state maintenance, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:
Procedure | rhino-console command(s) | MBean → Operation(s) |
---|---|---|
|
Rhino Housekeeping → getClusterHousekeeping Rhino Housekeeping → getNodeHousekeeping |
|
findactivities |
Housekeeping → getActivities |
|
getactivityinfo |
Housekeeping → getActivityInfo |
|
removeactivity |
Housekeeping → removeActivity |
|
removeallactivities |
Rhino Housekeeping → markAllActivitiesForRemoval |
|
findsbbs |
Housekeeping → getSbbs |
|
getsbbinfo |
Housekeeping → getSbbInfo |
|
removesbb |
Housekeeping → removeSbb |
|
removeallsbbs |
Rhino Housekeeping → removeAllSbbs |
|
getenventries |
Deployment → getEnvEntries |
|
setenventries |
Deployment → setEnvEntries |
|
getsecuritypolicy |
Deployment → getSecurityPolicy |
|
setsecuritypolicy |
Deployment → setSecurityPolicy |
About Application-State Maintenance
During normal operation, Rhino removes SBB entities when they are no longer needed to process events on the activities they are attached to — usually when all those activities have ended.
Sometimes, however, the normal SBB lifecycle is interrupted and obsolete entities remain. For example:
-
An SBB might be attached to an activity that didn’t end correctly, due to a problem in the resource adaptor entity that created it.
-
The
sbbRemove
method might throw an exception.
Unexpected problems such as these, with deployed resource adaptors or services, may cause resource leaks. Rhino provides an administration interface, the Node Housekeeping MBean
, which lets you find and remove stale or problematic:
-
activities
-
SBB entities
-
activity context naming bindings
-
timers.
See also How-to Use Rhino Housekeeping on the OpenCloud Developer Portal. |
Finding Housekeeping MBeans
To find Node or Cluster Housekeeping MBeans, if using MBean operations directly, use the Rhino Housekeeping
MBean, as follows.
Cluster vs Node Housekeeping
Rhino includes two types of Housekeeping MBean, which provide the same set of functions for either an entire cluster or a single node:
Many of the housekeeping commands available in rhino-console accept a |
MBean operation: getClusterHousekeeping
MBean |
|
---|---|
Rhino operation |
public ObjectName getClusterHousekeeping() throws ManagementException; This operation returns the JMX Object Name of a Cluster Housekeeping MBean. |
MBean operation: getNodeHousekeeping
MBean |
|
---|---|
Rhino operation |
public ObjectName getNodeHousekeeping(int) throws InvalidArgumentException, ManagementException; This operation returns the JMX Object Name of a Node Housekeeping MBean for the given node. |
Both the Cluster Housekeeping MBean and Node Housekeeping MBean expose the NodeHousekeepingMBean interface. |
Finding Activities
To find activities in the SLEE, use the following rhino-console command or related MBean operations.
Console command: findactivities
Command |
findactivities [-all] [-maxpernode maxrows] [-node nodeid] [-ra <ra-entity>] [-created-after date|time|offset] [-created-before date|time|offset] [-updated-after date|time|offset] [-updated-before date|time|offset] Description Find activities. Use -all to display activities removed but not garbage collected. |
---|---|
Options |
Times for the above options may be entered in either absolute or relative format:
|
Examples |
To display all activities in the SLEE:
$ ./rhino-console findactivities pkey attach-count handle node ra-entity ref-count replicated submission-time update-time -- -- -- -- -- -- -- -- ------------------ 0.101:108896044881:1.1 0 ServiceActivity[ServiceID[name=Simp 101 Rhino Inte 0 true 20080417 16:48:19 20080417 16:48:19 A.101:108896044898:100.1 1 SAH[switchID=1208407667,connectionI 101 simple 1 true 20080417 17:32:19 20080417 17:32:19 A.101:108896044898:104.1 1 SAH[switchID=1208407667,connectionI 101 simple 1 true 20080417 17:32:20 20080417 17:32:20 A.101:108896044898:106.1 1 SAH[switchID=1208407667,connectionI 101 simple 1 true 20080417 17:32:21 20080417 17:32:21 A.101:108896044898:108.1 1 SAH[switchID=1208407667,connectionI 101 simple 1 true 20080417 17:32:21 20080417 17:32:21 A.101:108896044898:110.1 1 SAH[switchID=1208407667,connectionI 101 simple 1 true 20080417 17:32:23 20080417 17:32:23 A.101:108896044898:112.1 1 SAH[switchID=1208407667,connectionI 101 simple 1 true 20080417 17:32:24 20080417 17:32:24 A.101:108896044898:116.1 1 SAH[switchID=1208407667,connectionI 101 simple 1 true 20080417 17:32:25 20080417 17:32:25 ... 100 rows
Finding stale activities
A common search is for stale activities. Rhino performs a periodic activity-liveness scan, checking all active activities and ending those detected as stale. Sometimes, however, a failure in the network or inside a resource adaptor might prevent the liveness scan from detecting and ending some activities. In that case, the administrator must locate and end the stale activities manually.
To narrow the search:
To search for activities belonging to node 101 (replicated or non-replicated activities owned by 101) that are more than one hour old, you would use the parameters $ ./rhino-console findactivities -node 101 -cb 1h pkey attach-count handle node ra-entity ref-count replicated submission-time update-time -- -- -- -- -- -- -- -- ------------------ A.101:108896044898:104.1 1 SAH[switchID=1208407667,connectionI 101 simple 1 true 20080417 17:32:20 20080417 17:32:20 A.101:108896044898:106.1 1 SAH[switchID=1208407667,connectionI 101 simple 1 true 20080417 17:32:21 20080417 17:32:21 2 rows (This example returned two activities.) |
MBean operation: getActivities
MBean |
|
---|---|
Rhino operations |
Get summary information for all activities
public TabularData getActivities(int maxPerNode, boolean includeRemoved) throws ManagementException; This operation returns tabular data summarising all activities.
Get summary information for activities belonging to a resource adaptor entity
public TabularData getActivities(int maxPerNode, String entityName, boolean includeRemoved) throws ManagementException, UnrecognizedResourceAdaptorEntityException; This operation returns tabular data summarising the activities owned by the given resource adaptor entity.
Get summary information for activities using time-based criteria
public TabularData getActivities(int maxPerNode, String entityName, long createdAfter, long createdBefore, long updatedAfter, long updatedBefore, boolean includeRemoved) throws ManagementException, UnrecognizedResourceAdaptorEntityException; This operation returns tabular data summarising the activities owned by the given resource adaptor entity using the time-based criteria specified (in milliseconds, as used by java.util.Date, or the value 0 to ignore a particular parameter).
|
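As a rough illustration of the second form above, the following sketch lists activity primary keys for a resource adaptor entity. The housekeeping ObjectName is assumed to come from getClusterHousekeeping or getNodeHousekeeping, and the pkey column name is an assumption based on the findactivities output shown earlier.
---
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.TabularData;

public class ListActivities {
    // Print the primary key of each activity owned by the given RA entity,
    // up to 100 rows per node, excluding removed activities.
    public static void printActivities(MBeanServerConnection conn,
                                       ObjectName housekeeping,
                                       String raEntity) throws Exception {
        TabularData rows = (TabularData) conn.invoke(housekeeping, "getActivities",
                new Object[] { 100, raEntity, false },
                new String[] { "int", "java.lang.String", "boolean" });
        for (Object row : rows.values()) {
            System.out.println(((CompositeData) row).get("pkey")); // assumed column name
        }
    }
}
---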
Inspecting Activities
To get detailed information about an activity, use the following rhino-console command or related MBean operation.
Console command: getactivityinfo
Command |
getactivityinfo [-v] <activity pkey>* Description Get activity information [-v = verbose] |
---|---|
Example |
To display activity information for activity $ ./rhino-console getactivityinfo A.102:108976017309:355.1 pkey : A.102:108976017309:355.1 activity : SAH[switchID=1208487638,connectionID=11261,address=1] creating-gen : 74295 ending : false events-submitted : 4 flags : 0x0 handle : SAH[switchID=1208487638,connectionID=11261,address=1] head-event : null last-event-time : 20080418 15:39:16 node : 102 ra-entity : simple replicated : true submission-time : 20080418 15:38:45 submitting-node : 102 update-time : 20080418 15:39:47 event-queue : no rows generations : [76343] ending : true [76343] refcount : 0 [76343] removed : true [76343] attached-sbbs : no rows [76343] timers : no rows This command returns a snapshot of the activity’s state at the time you execute it. Some values (such as fields
|
MBean operation: getActivityInfo
MBean |
|
---|---|
Rhino operation |
public CompositeData getActivityInfo(String activityPKey, boolean showAllGenerations) throws ManagementException, InvalidPKeyException, UnknownActivityException; This operation returns tabular data with detailed information on the given activity.
|
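A comparable sketch for this operation, again assuming a housekeeping ObjectName obtained as described in Finding Housekeeping MBeans; the ending and ra-entity field names are assumptions based on the getactivityinfo output above.
---
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

public class InspectActivity {
    // Fetch the detail record for one activity (all generations) and read two fields.
    public static void printInfo(MBeanServerConnection conn,
                                 ObjectName housekeeping,
                                 String activityPKey) throws Exception {
        CompositeData info = (CompositeData) conn.invoke(housekeeping, "getActivityInfo",
                new Object[] { activityPKey, true },
                new String[] { "java.lang.String", "boolean" });
        // Field names assumed to mirror the console output.
        System.out.println("ending=" + info.get("ending")
                + " ra-entity=" + info.get("ra-entity"));
    }
}
---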
Activity Information Fields
The getactivityinfo
console command displays information about:
Activity information
The getactivityinfo
console command displays the following values about an activity:
Field | Description |
---|---|
pkey |
The activity’s primary key. Uniquely identifies this activity within Rhino. |
activity |
The activity object, in string form. Its exact content is resource adaptor dependent (and may or may not contain useful human-readable information). |
creating-gen |
The database generation in which the activity was created. |
ending |
Boolean flag indicating if the activity is ending. |
events-submitted |
The number of events that have been submitted for processing on the activity. |
flags |
Hexadecimal value of the flags the activity was created with (if any). |
handle |
The activity handle assigned by the activity’s resource adaptor entity, in string form. The exact content is resource adaptor dependent (and may or may not contain useful human-readable information). |
head-event |
The event at the head of the activity’s event queue (the next event to be processed on the activity). |
last-event-time |
When the most recent event was submitted on the activity. |
node |
The Rhino cluster node that currently owns the activity. If this value is different to the submission-node value, ownership of the activity has moved since it was created (for example, after a node failure). |
ra-entity |
The resource adaptor entity that created this activity. |
replicated |
Boolean flag indicating if the activity is a replicated activity. |
submission-time |
When the activity was created. |
submission-node |
The Rhino cluster node that created the activity. |
update-time |
When the activity was last updated (when the most recent database generation record was created). Useful in some situations for evaluating whether an activity is still live. |
event-queue |
A list of events queued for processing on the activity. |
generations |
A list of generational information stored in the database for the activity. If |
Event-queue information
The getactivityinfo
console command displays the following values for each event in an activity’s event queue:
Field | Description |
---|---|
position |
The position of the event in the queue. |
event-type |
The event-type component identifier of the event. |
event |
The event object, in string form. Its exact content is resource adaptor dependent (and may or may not contain useful human-readable information). |
flags |
Hexadecimal value of the flags the event was fired with (if any). |
Generational information
The getactivityinfo
console command displays values for the following fields, in an activity’s generational information:
Field | Description |
---|---|
generation |
Not displayed as a field but included in square brackets before the rest of the generational information, for example: [76343]. |
ending |
Boolean flag indicating if the activity is ending. |
refcount |
The number of references made to the activity by the Timer Facility and the Activity Context Naming Facility. |
removed |
Boolean flag indicating if the activity no longer exists in the SLEE. Only |
attached-sbbs |
A list of SBBs attached to the activity. |
timers |
A list of Timer Facility timers set on the activity. |
Attached-SBB information
The getactivityinfo
console command displays values for the following fields, for each SBB entity attached to an activity:
Field | Description |
---|---|
pkey |
The primary key of the SBB entity. |
sbb-component-id |
The component identifier of the SBB for the SBB entity. |
service-component-id |
The component identifier of the service the SBB belongs to. |
Activity-timer information
The getactivityinfo
console command displays values for the following fields, for each timer active on an activity:
Field | Description |
---|---|
pkey |
The primary key of the timer. |
activity-pkey |
The primary key of the activity the timer is set on. |
submission-time |
The time the timer was initially set. |
period |
The timer period (for periodic timers). |
repetitions |
The number of repetitions the timer will fire before it expires. |
preserve-missed |
Boolean flag indicating if missed timers should still fire an event into the SLEE. |
replicated |
Boolean flag indicating whether the timer is replicated. (Always equals the replication status of the activity the timer was set on.) |
Removing Activities
To forcibly remove an activity, use the following rhino-console command or related MBean operation.
Consult the spec before ending an activity
The JAIN SLEE 1.1 specification provides detailed rules for ending no-longer-required activities. |
Console command: removeactivity
Command |
removeactivity <activity pkey>* Description Remove activities |
---|---|
Example |
To remove the activities with the primary keys $ ./rhino-console removeactivity A.103:108976019104:36.1 A.103:108976019104:56.1 2 activities removed |
MBean operation: removeActivity
MBean |
|
---|---|
Rhino operation |
public void removeActivity(String activityPKey) throws InvalidPKeyException, UnknownActivityException, ManagementException; This operation removes the activity with the given primary key. |
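A minimal sketch of the equivalent JMX call, assuming a housekeeping ObjectName obtained as described in Finding Housekeeping MBeans:
---
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class RemoveActivity {
    // Forcibly remove one activity by primary key (see the caution above).
    public static void remove(MBeanServerConnection conn,
                              ObjectName housekeeping,
                              String activityPKey) throws Exception {
        conn.invoke(housekeeping, "removeActivity",
                new Object[] { activityPKey },
                new String[] { "java.lang.String" });
    }
}
---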
Removing All Activities
To mark all activities of a resource adaptor entity for removal, use the following rhino-console command or related MBean operation.
Use extreme care when removing forcibly
Occasionally an administrator will want to remove all activities belonging to a resource adaptor entity. Typically, this would be to deactivate a resource adaptor when upgrading or reconfiguring. Under normal conditions, these actions would be performed automatically, by allowing existing activities to drain over time. Rhino provides the following housekeeping commands to forcibly speed up the draining process, although these should be used with extreme care on production systems — they will interrupt service for any existing network activities belonging to the resource adaptor entity. |
Console command: removeallactivities
Command |
removeallactivities <ra-entity> [-nodes node1,node2,...] Description Remove all activities belonging to a resource adaptor entity in the Stopping state (on the specified nodes) |
---|---|
Example |
To remove all activities owned by the resource adaptor entity called $ ./rhino-console removeallactivities sipra -nodes 101,102 Activities marked for removal on node(s) [101,102] |
MBean operation: markAllActivitiesForRemoval
MBean |
|
---|---|
Rhino operation |
public void markAllActivitiesForRemoval(String entityName, int[] nodeIDs) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, ManagementException; This operation marks all the activities owned by the given resource adaptor entity on the given nodes for removal. |
Resource adaptor entity (or SLEE) must be STOPPING
As a safeguard, this command (or MBean operation) cannot be run unless the specified resource adaptor entity, or the SLEE, is in the STOPPING state on the specified nodes. For convenience in asymmetric cluster configurations, it may also be run against nodes where the resource adaptor entity is in the INACTIVE state (or the SLEE is in the STOPPED state), but it has no effect on such nodes, since no activities exist there for the resource adaptor entity. |
Why "mark" (instead of just ending)?
This command does not remove all activities immediately, because that might overload the system (from processing too many activity-end events at once). Instead, it marks the activities for removal, and Rhino ends them progressively in the background. |
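A minimal sketch of the equivalent JMX call, mirroring the removeallactivities console example above. The cluster housekeeping ObjectName is assumed to come from getClusterHousekeeping, and "[I" is the JMX signature string for int[].
---
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class MarkActivitiesForRemoval {
    // Mark all activities of a stopping RA entity on nodes 101 and 102 for removal.
    public static void markAll(MBeanServerConnection conn,
                               ObjectName clusterHousekeeping,
                               String raEntity) throws Exception {
        conn.invoke(clusterHousekeeping, "markAllActivitiesForRemoval",
                new Object[] { raEntity, new int[] { 101, 102 } },
                new String[] { "java.lang.String", "[I" });
    }
}
---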
Finding SBB Entities
To find SBB entities in the SLEE, use the following rhino-console command or related MBean operation.
Console command: findsbbs
Command |
findsbbs [-maxpernode maxrows] [-node nodeid] <-service service> [-sbb sbb] [-created-after date|time|offset] [-created-before date|time|offset] Description Find SBBs |
---|---|
Options |
Times for the above options may be entered in either absolute or relative format:
Rhino assumes relative time format is in the past. For example, 1h30m means 1 hour and 30 minutes ago. |
Examples |
To display all SBB entities owned by the JCC VPN service in the SLEE:
$ ./rhino-console findsbbs -service "name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0" pkey creation-time parent-pkey replicated sbb-component-id service-component-id -- -- -- -- -- -------------------------------------------------- 101:109504372474:0 20080424 18:12:15 101:109504372533:1 false SbbID[name=AnytimeInterrogation sbb,vendor=Open Cl ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,versi 101:109504372474:1 20080424 18:12:38 101:109504372533:4 false SbbID[name=AnytimeInterrogation sbb,vendor=Open Cl ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,versi 101:109504372474:10 20080424 18:12:44 101:109504372533:23 false SbbID[name=AnytimeInterrogation sbb,vendor=Open Cl ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,versi 101:109504372474:11 20080424 18:12:45 101:109504372533:24 false SbbID[name=AnytimeInterrogation sbb,vendor=Open Cl ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,versi 101:109504372474:12 20080424 18:12:45 101:109504372533:26 false SbbID[name=AnytimeInterrogation sbb,vendor=Open Cl ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,versi 101:109504372474:13 20080424 18:12:45 101:109504372533:27 false SbbID[name=AnytimeInterrogation sbb,vendor=Open Cl ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,versi ... 101:109504372533:0 20080424 18:12:15 false SbbID[name=JCC 1.1 VPN sbb,vendor=Open Cloud,versi ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,versi 101:109504372533:101 20080424 18:13:00 false SbbID[name=JCC 1.1 VPN sbb,vendor=Open Cloud,versi ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,versi 101:109504372533:106 20080424 18:13:02 false SbbID[name=JCC 1.1 VPN sbb,vendor=Open Cloud,versi ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,versi 101:109504372533:107 20080424 18:13:02 false SbbID[name=JCC 1.1 VPN sbb,vendor=Open Cloud,versi ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,versi 101:109504372533:112 20080424 18:13:02 false SbbID[name=JCC 1.1 VPN sbb,vendor=Open Cloud,versi ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,versi ... 100 rows
To narrow the search:
To search for SBB entities belonging to node 101 (replicated or non-replicated SBB entities owned by 101) that are more than one hour old, you would use the parameters $ ./rhino-console findsbbs -service "name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0" -node 101 -cb 1h pkey creation-time parent-pkey replicated sbb-component-id service-component-id -- -- -- -- -- -------------------------------------------------- 101:109504172475:229 20080424 17:17:33 101:109504172535:443 false SbbID[name=AnytimeInterrogation sbb,vendor=Open Cl ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,versi 101:109504172535:443 20080424 17:17:33 false SbbID[name=JCC 1.1 VPN sbb,vendor=Open Cloud,versi ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,versi 2 rows This example returned two SBB entities, one the parent SBB entity of the other. |
MBean operations: getSbbs
MBean |
|
---|---|
Rhino operations |
Get summary information for all SBB entities owned by a service
public TabularData getSbbs(int maxPerNode, ServiceID serviceID) throws UnrecognizedServiceException, ManagementException; This operation returns tabular data summarising all SBB entities in the given service.
Get summary information for all SBB entities owned by a service using time-based criteria
public TabularData getSbbs(int maxPerNode, ServiceID serviceID, long createdAfter, long createdBefore) throws ManagementException, UnrecognizedServiceException; This operation returns tabular data summarising the SBB entities owned by the given service using the time-based criteria specified (in milliseconds, as used by java.util.Date, or the value 0 to ignore a particular parameter).
Get summary SBB entity information for a particular SBB in a service using time-based criteria
public TabularData getSbbs(int maxPerNode, ServiceID serviceID, SbbID sbbType, long createdAfter, long createdBefore) throws ManagementException, UnrecognizedSbbException, UnrecognizedServiceException; This operation returns tabular data summarising only SBB entities of the given SBB in the given service using the time-based criteria specified (in milliseconds, as used by java.util.Date, or the value 0 to ignore a particular parameter).
|
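As a rough illustration of the first form above, the following sketch lists SBB entity primary keys for the JCC VPN service. It requires the javax.slee classes on the client classpath; the housekeeping ObjectName is assumed to be obtained as described in Finding Housekeeping MBeans, and the pkey column name is an assumption based on the findsbbs output.
---
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.TabularData;
import javax.slee.ServiceID;

public class ListSbbs {
    // Print the primary key of each SBB entity in the JCC VPN service,
    // up to 100 rows per node.
    public static void printSbbs(MBeanServerConnection conn,
                                 ObjectName housekeeping) throws Exception {
        ServiceID service = new ServiceID("JCC 1.1 VPN", "Open Cloud", "1.0");
        TabularData rows = (TabularData) conn.invoke(housekeeping, "getSbbs",
                new Object[] { 100, service },
                new String[] { "int", "javax.slee.ServiceID" });
        for (Object row : rows.values()) {
            System.out.println(((CompositeData) row).get("pkey")); // assumed column name
        }
    }
}
---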
Inspecting SBB Entities
To get detailed information about an SBB entity, use the following rhino-console command or related MBean operation.
Console command: getsbbinfo
Command |
getsbbinfo <serviceid> <sbbid> <sbb pkeys>* Description Get SBB information |
---|---|
Example |
To display information for SBB entity $ ./rhino-console getsbbinfo "name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0" "name=JCC 1.1 VPN sbb,vendor=Open Cloud,version=1.0" 101:109823584382:145 parent-pkey : pkey : 101:109823584382:145 convergence-name : :::::131:JccCall[call=CapDialog[appContext=2,appID=26918,dialogID=194,incarnationID=0],provider=JccProvider[capra:default,state=IN_SERVICE]] creating-node-id : 101 creation-time : 20080428 10:31:04 priority : 0 replicated : false sbb-component-id : SbbID[name=JCC 1.1 VPN sbb,vendor=Open Cloud,version=1.0] service-component-id : ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0] attached-activities : > pkey handle ra-entity replicated > -- -- -- ----------- > C.101:109823577380:4.0 ConnectionHandle[ConnectionID[call=CallHandle[CapDialog[appContext=2,appID=26918,dialogID=194,incarnationID=0]],address=63941012 capra false > 1 rows This command returns a snapshot of the SBB entity’s state at the time you execute it. Some values (such as |
See SBB Entity Information Fields for a description of the fields getsbbinfo returns. |
MBean operation: getSbbInfo
MBean |
|
---|---|
Rhino operation |
public CompositeData getSbbInfo(ServiceID serviceID, SbbID sbbID, String sbbPKey) throws ManagementException, InvalidPKeyException, UnrecognizedSbbException, UnrecognizedServiceException, UnknownSbbEntityException; This operation returns tabular data with detailed information on the given SBB entity. |
For a description of the format of the tabular data that this operation returns, see the javadoc . |
SBB Entity Information Fields
The getsbbinfo
console command displays information about an SBB entity and each activity it is attached to.
SBB entity information
The getsbbinfo
console command displays the following values about an SBB entity:
Field | Description |
---|---|
pkey |
The primary key of the SBB entity. Unique to the SBB within the service (SBB entities of other SBBs in the same or another service may have the same primary key). |
parent-pkey |
The primary key of the SBB entity’s parent SBB entity. (For a root SBB entity, with no parent, this field is empty.) |
convergence-name |
The convergence name, for a root SBB entity. (If not a root SBB entity, this field is empty.) |
creating-node-id |
The Rhino node that created the SBB entity. |
creation-time |
The date and time the SBB entity was created. |
priority |
The SBB entity’s current event-delivery priority. |
replicated |
Boolean flag indicating if the SBB entity’s state is replicated. |
sbb-component-id |
The SBB-component identifier, in string form. Identifies the SBB entity’s SBB component. |
service-component-id |
The service-component identifier, in string form. Identifies the service that the SBB entity is providing a function for. |
attached-activities |
A list of the activities the SBB entity is attached to. |
Attached-activity information
The getsbbinfo
console command displays the following values about each activity the SBB entity is attached to:
Field | Description |
---|---|
pkey |
The primary key of the activity. Uniquely identifies this activity within Rhino. |
handle |
The activity handle assigned by the activity’s resource adaptor entity, in string form. Its exact content is resource adaptor dependent (and may or may not contain useful human-readable information). |
ra-entity |
The resource adaptor entity that created this activity. |
replicated |
Boolean flag indicating if the activity is a replicated activity. |
Diagnosing SBB Entities
New development-and-diagnostics tool in Rhino 2.3.1.12
SBB Diagnostics lets you pull service-defined diagnostic information from SBB objects at runtime, using the getsbbdiagnostics console command or the getSbbDiagnostics MBean operation described below. For an SBB to provide this diagnostics information, it must implement a diagnostics method such as the ocGetSbbDiagnostics(StringBuilder) method shown in the example usage below. |
To get detailed diagnostics information about an SBB entity, use the following rhino-console command or related MBean operation.
Console command: getsbbdiagnostics
Command |
getsbbdiagnostics <serviceid> <sbbid> <sbb pkeys>* Description Get SBB info and diagnostics (if supported by sbb implementation). |
---|---|
Example |
To display information for SBB entity $ ./rhino-console getsbbdiagnostics name=sentinel.sip,vendor=OpenCloud,version=1.0 name=sentinel.sip,vendor=OpenCloud,version=1.0 101:146698001375232/-815184026 parent-pkey : pkey : 101:146698001375232/-815184026 convergence-name : APK[ah=19,id=101:146697745919051]:::::-1 creating-node-id : 101 creation-time : 20131203 14:01:43 priority : 0 replicated : false sbb-component-id : SbbID[name=sentinel.sip,vendor=OpenCloud,version=1.0] service-component-id : ServiceID[name=sentinel.sip,vendor=OpenCloud,version=1.0] attached-activities : > pkey handle ra-entity replicated > -- -- -- ----------- > 13.101:146697745919051.0 SessionAH[3] sip-sis-ra false > 1 rows SBB diagnostics: SentinelSipFeatureAndOcsSbbSupportImpl Child SBBs ================================================= SubscriberDataLookup SBB: No diagnostics available for this feature sbb. SipAdhocConference SBB: No child SBB currently exists for SipAdhocConference. DiameterRoOcs SBB: DiameterRoOcsMultiFsmSbb Service FSM States =========================================== DiameterIECFSM [current state = NotCharging, InputRegister[scheduled=[], execution=[]], Endpoints[Endpoint[local],Endpoint[DiameterMediation],Endpoint[DiameterToOcs]]] DiameterSCURFSM [previous state = WaitForInitialCreditCheckAnswer, current state = WaitForNextCreditControlRequest, InputRegister[scheduled=[local_errorsEndSession], execution=[]], Endpoints[Endpoint[local],Endpoint[DiameterMediation],Endpoint[DiameterToOcs,aci=[set,sbb-attached]]]] SentinelSipSessionStateAggregate Session State ============================================== Account: 6325 ActivityTestHasFailed: false AllowPresentationOfDivertedToUriToOriginatingUser: false AllowPresentationOfServedUserUriToDivertedToUser: false AllowPresentationOfServedUserUriToOriginatingUser: false AnnouncementCallIds: null AnnouncementID: 0 AnytimeFreeDataPromotionActive: false CFNRTimerDuration: 0 CallHasBeenDiverted: false CallType: MobileOriginating CalledPartyAddress: tel:34600000001 CalledPartyCallId: null CallingPartyAddress: tel:34600000002 CallingPartyCallId: null ChargingResult: 2001 ClientChargingType: sessionCharging ClientEventChargingMethod: null ClosedUserGroupCall: null ClosedUserGroupDropIfNoGroupMatch: null ClosedUserGroupEnabled: true ClosedUserGroupIncomingAccessAllowed: null ClosedUserGroupList: [CUG1Profile] ClosedUserGroupOutgoingAccessAllowed: null ... etc. This command returns a snapshot of the SBB entity’s state and SBB-defined diagnostics information at the time you execute it. Some values (such as The diagnostics output (from the "SBB diagnostics:" line onwards) is free-form and SBB defined. The above output is only representative of a single service-defined diagnostics method. |
See SBB Entity Information Fields for a description of the fields getsbbdiagnostics returns. |
MBean operation: getSbbDiagnostics
MBean |
|
---|---|
Rhino operation |
public CompositeData getSbbDiagnostics(ServiceID serviceID, SbbID sbbID, String sbbPKey) throws ManagementException, InvalidPKeyException, UnrecognizedSbbException, UnrecognizedServiceException, UnknownSbbEntityException; This operation returns tabular data with detailed information on the given SBB entity, including SBB-defined diagnostics information. |
For a description of the format of the tabular data that this operation returns, see the javadoc . |
Example usage
The following is a basic example showing an auto-generated ocGetSbbDiagnostics(StringBuilder sb)
method. In this case, the method was generated based on CMP fields declared by the SBB, and demonstrates diagnostics information being obtained from both the current class and the super class:
---
public abstract class ExampleSessionStateImpl extends com.opencloud.sentinel.ant.BaseSbb implements ExampleSessionState {

    public void ocGetSbbDiagnostics(StringBuilder sb) {
        final String header = "ExampleSessionState Session State";
        sb.append(header).append("\n");
        for (int i = 0; i < header.length(); i++)
            sb.append("=");
        sb.append("\n\n");

        // diagnostics: ClashingType (from com.opencloud.sentinel.ant.ExampleSessionStateInterface)
        if (getClashingType() == null) {
            sb.append("ClashingType: null\n");
        } else {
            sb.append("ClashingType: ").append(getClashingType()).append("\n");
        }

        // diagnostics: ExampleField (from com.opencloud.sentinel.ant.ExampleSessionStateInterface)
        if (getExampleField() == null) {
            sb.append("ExampleField: null\n");
        } else {
            sb.append("ExampleField: ").append(getExampleField()).append("\n");
        }

        sb.append("\n");
        super.ocGetSbbDiagnostics(sb);
    }

    ...
---
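For comparison, a minimal hand-written implementation might look like the following sketch. The class and its getSessionState() CMP accessor are hypothetical; only the ocGetSbbDiagnostics(StringBuilder) method signature comes from the generated example above.
---
public abstract class DiagnosticExampleSbb /* extends your SBB base class */ {

    // Hypothetical CMP field accessor declared by this SBB.
    public abstract String getSessionState();

    // Append free-form diagnostics text; Rhino includes this output in the
    // "SBB diagnostics:" section of getsbbdiagnostics.
    public void ocGetSbbDiagnostics(StringBuilder sb) {
        sb.append("DiagnosticExampleSbb Session State\n");
        sb.append("SessionState: ").append(getSessionState()).append("\n");
    }
}
---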
Removing SBB Entities
To forcibly remove an SBB entity, use the following rhino-console command or related MBean operation.
Only forcibly remove SBB entities if necessary
SBB entities should only be forcibly removed if they do not remove themselves due to some unforeseen error during event processing. |
Console command: removesbb
Command |
removesbb <serviceid> <sbbid> <sbb pkey>* Description Remove SBBs |
---|---|
Example |
To remove the SBB entity of the JCC VPN SBB in the JCC VPN service with the primary key $ ./rhino-console removesbb "name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0" \ "name=JCC 1.1 VPN sbb,vendor=Open Cloud,version=1.0" 101:109823584382:145 SBB entity 101:109823584382:145 removed |
MBean operation: removeSbb
MBean |
|
---|---|
Rhino operation |
public void removeSbb(ServiceID serviceID, SbbID sbbID, String sbbPKey) throws UnrecognizedServiceException, UnrecognizedSbbException, InvalidPKeyException, UnknownSbbEntityException, ManagementException; This operation removes the SBB entity with the given primary key from the given service. |
Removing All SBB Entities
To remove all SBB entities of a service, use the following rhino-console command or related MBean operation.
Use extreme care when removing forcibly
Occasionally an administrator will want to remove all SBB entities in a particular service. Typically, this would be to deactivate the service when upgrading or reconfiguring. Under normal conditions, these actions would be performed automatically, by allowing existing SBB entities to drain over time. Rhino provides the following housekeeping commands to forcibly speed up the draining process, although these should be used with extreme care on production systems — they will interrupt service for any existing network activities belonging to the service.
Service (or SLEE) must be STOPPING
As a safeguard, this command (or MBean operation) cannot be run unless the specified service, or the SLEE, is in the STOPPING state on the specified nodes. |
Console command: removeallsbbs
Command |
removeallsbbs <serviceid> [-nodes node1,node2,...] Description Remove all SBBs from a service in the Stopping state (on the specified nodes) |
---|---|
Example |
To remove all SBB entities for the JCC VPN service on nodes 101 and 102: $ ./rhino-console removeallsbbs "name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0" -nodes 101,102 SBB entities removed from node(s) [101,102] |
MBean operation: removeAllSbbs
MBean |
|
---|---|
Rhino operation |
public void removeAllSbbs(ServiceID serviceID, int[] nodeIDs) throws NullPointerException, UnrecognizedServiceException, InvalidStateException, ManagementException; This operation removes all SBB entities for the given service on the given nodes. |
Runtime Component Configuration
As of Rhino 2.3, Rhino supports modifying environment entry configuration and security permissions for deployed components. |
To configure runtime components, see the procedures below.
Inspecting Environment Entries
To inspect a component’s environment entries, use the following rhino-console command or related MBean operation.
Console command: getenventries
Command |
getenventries <ComponentID> [<true|false>] Description Returns the env entries for the specified SbbID or ProfileSpecificationID. The original env entries will be returned if the final argument is 'true'. |
---|---|
Example |
To list all environment entries for the SIP Registrar SBB: ./rhino-console getenventries SbbID[name=RegistrarSbb,vendor=OpenCloud,version=1.8] Getting env entries for component: SbbID[name=RegistrarSbb,vendor=OpenCloud,version=1.8] sipACIFactoryName:slee/resources/ocsip/acifactory sipProviderName:slee/resources/ocsip/provider |
MBean operation: getEnvEntries
MBean |
|
---|---|
Rhino operation |
public Map<String, String> getEnvEntries(ComponentID id, boolean original) throws NullPointerException, UnrecognizedComponentException, ManagementException, IllegalArgumentException; This operation returns the environment entries associated with a component as a map of strings. |
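A minimal sketch of the equivalent JMX call for the Registrar SBB example above. The Deployment MBean's ObjectName is an assumption (obtain it from your management client), and the javax.slee classes must be on the client classpath.
---
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.SbbID;

public class GetEnvEntries {
    // Read the current (not original) env entries of the Registrar SBB.
    @SuppressWarnings("unchecked")
    public static Map<String, String> envEntries(MBeanServerConnection conn,
                                                 ObjectName deployment) throws Exception {
        SbbID sbb = new SbbID("RegistrarSbb", "OpenCloud", "1.8");
        return (Map<String, String>) conn.invoke(deployment, "getEnvEntries",
                new Object[] { sbb, false },
                new String[] { "javax.slee.ComponentID", "boolean" });
    }
}
---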
Setting Environment Entries
To modify a component’s environment entries, use the following rhino-console command or related MBean operation.
Console command: setenventries
Command |
setenventries <ComponentID> <name1:value1> <name2:value2> ... Description Sets the env entries associated with the specified SbbID or ProfileSpecificationID. |
---|---|
Example |
To modify the environment entries for the SIP Registrar SBB: ./rhino-console setenventries SbbID[name=RegistrarSbb,vendor=OpenCloud,version=1.8] sipACIFactoryName:slee/resources/ocsip/mycustomacifactory,sipProviderName:slee/resources/ocsip/mycustomprovider Setting env entries for component: SbbID[name=RegistrarSbb,vendor=OpenCloud,version=1.8] |
MBean operation: setEnvEntries
MBean |
|
---|---|
Rhino operation |
public void setEnvEntries(ComponentID id, Map<String, String> entries) throws NullPointerException, UnrecognizedComponentException, ManagementException, IllegalArgumentException; This operation sets the environment entries associated with a component. |
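A minimal sketch of the equivalent JMX call, mirroring the setenventries console example above; the Deployment MBean's ObjectName is again an assumption.
---
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.SbbID;

public class SetEnvEntries {
    // Point the Registrar SBB at custom SIP RA entity JNDI names.
    public static void set(MBeanServerConnection conn,
                           ObjectName deployment) throws Exception {
        SbbID sbb = new SbbID("RegistrarSbb", "OpenCloud", "1.8");
        Map<String, String> entries = new HashMap<>();
        entries.put("sipACIFactoryName", "slee/resources/ocsip/mycustomacifactory");
        entries.put("sipProviderName", "slee/resources/ocsip/mycustomprovider");
        conn.invoke(deployment, "setEnvEntries",
                new Object[] { sbb, entries },
                new String[] { "javax.slee.ComponentID", "java.util.Map" });
    }
}
---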
Inspecting Security Permissions
To inspect a component’s security permissions, use the following rhino-console command or related MBean operation.
The security permissions for a component may be shared with multiple other components. For example, SBBs in the same jar may share their permissions. |
Console command: getsecuritypolicy
Command |
getsecuritypolicy (<ComponentID> | <LibraryID> [jarname]) [true|false] Description Returns the security policy associated with the specified ComponentID. The optional 'jarname' argument can be used to specify a nested library jar for LibraryIDs. The original policy will be returned if the final argument is 'true'. |
---|---|
Example |
To list the security permissions for the SIP resource adaptor: ./rhino-console getsecuritypolicy ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1] grant { permission java.util.PropertyPermission "opencloud.sip.*", "read"; permission java.io.FilePermission "/etc/resolv.conf", "read"; permission java.net.SocketPermission "*", "resolve"; permission java.net.SocketPermission "*:1024-", "listen,resolve"; permission java.net.SocketPermission "*:1024-", "accept,connect"; permission java.lang.RuntimePermission "modifyThread"; permission java.io.FilePermission "sip-ra-ssl.truststore", "read"; permission java.util.PropertyPermission "javax.sip.*", "read"; permission java.io.FilePermission "sip-ra-ssl.keystore", "read"; permission java.net.SocketPermission "*:53", "connect,accept"; }; |
MBean operation: getSecurityPolicy
MBean |
|
---|---|
Rhino operation |
public String getSecurityPolicy(ComponentID id, String subId, boolean original) throws NullPointerException, UnrecognizedComponentException, IllegalArgumentException, ManagementException; This operation returns the security permissions associated with a component. |
Modifying Security Permissions
To modify a component’s security permissions, use the following rhino-console command or related MBean operation.
The security permissions for a component may be shared with multiple other components. For example, SBBs in the same jar may share their permissions. |
Console command: setsecuritypolicy
Command |
setsecuritypolicy (<ComponentID> | <LibraryID> [jarname]) <SecurityPolicy> Description Sets the current security policy associated with the specified ComponentID. |
---|---|
Example |
To set the security permissions for the SIP resource adaptor: ./rhino-console setsecuritypolicy 'ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1]' 'grant { permission java.net.SocketPermission "*:53", "connect,accept"; };' Setting security policy for component: ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1] |
The command-line console only supports a single line as an argument. To easily modify multi-line security policies, use the Rhino Element Manager instead. |
MBean operation: setSecurityPolicy
MBean |
|
---|---|
Rhino operation |
public void setSecurityPolicy(ComponentID id, String subId, String policyText) throws NullPointerException, UnrecognizedComponentException, IllegalArgumentException, ManagementException; This operation sets the security permissions associated with a component. |
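A minimal sketch of the equivalent JMX call, mirroring the setsecuritypolicy console example above. The Deployment MBean's ObjectName is an assumption, and passing null for subId (which selects a nested library jar) is assumed to be acceptable when no nested jar is targeted.
---
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.resource.ResourceAdaptorID;

public class SetSecurityPolicy {
    // Grant the SIP RA a single DNS socket permission.
    public static void set(MBeanServerConnection conn,
                           ObjectName deployment) throws Exception {
        ResourceAdaptorID ra = new ResourceAdaptorID("OCSIP", "OpenCloud", "2.3.1");
        String policy = "grant { permission java.net.SocketPermission \"*:53\", "
                + "\"connect,accept\"; };";
        conn.invoke(deployment, "setSecurityPolicy",
                new Object[] { ra, null, policy },
                new String[] { "javax.slee.ComponentID", "java.lang.String",
                               "java.lang.String" });
    }
}
---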
Backup and Restore
This section includes an overview of backup and restore procedures, and instructions for:
About Backup and Restore
Backup strategies for your configuration
To back up a Rhino SLEE, an administrator will typically use a combination of utilities, depending on backup requirements and the layout of Rhino nodes and the external persistence database. Common approaches include:
Backing up and restoring… | using… | to support… | ||
---|---|---|---|---|
individual nodes |
OS-level facilities such as LVM |
…recovery after a node failure, by creating a snapshot of the volumes containing the Rhino installation and persistence database data files.
|
||
cluster-wide SLEE state |
the rhino-export utility (to save the management state of the SLEE), and the rhino-import utility (to restore the state to a new cluster) |
…recovery from a cluster failure, recovery from data corruption, rolling back state to an earlier time, and migrating a cluster to a new version of Rhino.
|
||
subset of the SLEE state |
a partial import (rhino-import with selected targets) |
…restoring a subset of SLEE state, for example during development, after updating a set of SLEE components. |
SLEE profile state |
profile snapshots (see Profile Snapshots) |
…backing up only the state of SLEE profiles (a subset of the management state of the SLEE). |
external persistence database |
…backing up the contents of the external persistence database. |
For many deployments, the combination of disk-volume backups and rhino-export backups is sufficient. |
Rhino SLEE State
As well as an overview of Rhino exports, this section includes instructions, explanations and examples of performing the following Rhino SLEE procedures:
Procedure | Utility |
---|---|
Exporting a SLEE |
rhino-export |
Importing a SLEE |
rhino-import |
Partial imports |
rhino-import |
About Rhino Exports
Why export Rhino?
Administrators and programmers can export a Rhino SLEE’s deployment and configuration state to a set of human-readable text files, which they can then import into another (or the same) Rhino SLEE instance. This is useful for backing up the state of the SLEE, migrating the state of one Rhino SLEE to another Rhino SLEE instance, or migrating the SLEE state between different versions of the Rhino SLEE. |
Using rhino-export
to backup SLEE state
The rhino-export tool exports the entire current state of the cluster, preserving all management state in a human-readable form, which:
-
is fairly easy to modify or examine
-
consists of an Ant build script and supporting files
-
includes management commands to restore the entire cluster state.
To restore exported SLEE state, reinstall Rhino and use the rhino-import tool with the saved export. |
What’s exported?
The exported image records the following state from the SLEE:
-
for each namespace:
-
all deployable units
-
all profile tables
-
all profiles
-
all resource adaptor entities
-
configured trace levels for all components
-
current activation state of all services and resource adaptor entities
-
-
runtime configuration:
-
external database resources
-
per-node/symmetric activation state mode
-
access control roles and permissions
-
logging
-
rate limiter
-
staging queue dimensions
-
object pool dimensions
-
threshold alarms
-
SNMP mappings.
-
The exporter uses profile specification references to determine relationships between profiles. It does not recognise undeclared relationships expressed in the profile validation methods. If profile spec X contains a dependency reference to profile spec Y, then any create-profile-table targets for profile tables of X will include a dependency on any create-profile-table targets for any profile tables of Y. If there is no profile specification reference between two profile specifications that have a functional dependency, the exporter will not handle the dependency between the profile tables. |
Exported profile data DTD
The format of the exported profile data is defined by the DTD file exported_profile_data_1_1.dtd
, which can be found in the doc/dtd/
directory of your Rhino installation.
An administrator can write custom XML scripts, or modify exported profile data, according to the structure defined by this DTD.
The included profile
element can be used to create, replace or remove a profile.
Refer to the documentation in exported_profile_data_1_1.dtd for more details.
You should also back up any modified or created files that are not stored in Rhino’s management database (for example, using cron and tar). This applies to modified files under the Rhino installation (such as changes to security permissions or JVM options) and to any important generated output (such as CDR files). |
Exporting a SLEE
To export a Rhino SLEE using rhino-export
, see the following instructions, example, and list of files exported.
Invoke rhino-export
To export a Rhino SLEE, use the $RHINO_HOME/client/bin/rhino-export
shell script.
You cannot run rhino-export unless the Rhino SLEE is available and ready to accept management commands, and you include at least one argument — the name of the directory to write the export image to. |
Command-line arguments
You can include the following command-line arguments with rhino-export
:
$ client/bin/rhino-export Valid command line options are: -h <host> - The hostname to connect to. -p <port> - The port to connect to. -u <username> - The user to authenticate as. -w <password> - The password used for authentication. -D - Display connection debugging messages. -J - Export profile tables using JMX and write as XML. (default) -R - Take a snapshot of profile tables and write as raw data. The raw data files can be decoded later using snapshot-to-export. -s - Only export DUs and component states. Does not export configuration (object pools, logging, licenses). This is useful for creating exports to migrate data between Rhino versions. -q - Quiet mode. Disables profile export progress bar. This is useful in cases where the terminal is too small for the progress bar to display properly, or when console output is being redirected to a pipe or file. <output-directory> - The destination directory for the export. Usually, only the <output-directory> argument must be specified. All other arguments will be read from 'client.properties'.
Sample export
For example, an export might run as follows:
user@host:~/rhino/client/bin$ ./rhino-export ../../rhino_export Connecting to localhost:1199 Exporting state from the default namespace... 9 deployable units found to export Establishing dependencies between deployable units... Exporting file:jars/sip-profile-location-service.jar... Exporting file:jars/sip-presence-event.jar... Exporting file:du/jsip-library-1.2.du.jar... Exporting file:du/jsip-ratype-1.2.du.jar... Exporting file:du/ocsip-ratype-2.2.du.jar... Exporting file:du/ocsip-ra-2.3.1.19.du.jar... Exporting file:jars/sip-registrar-service.jar... Exporting file:jars/sip-presence-service.jar... Exporting file:jars/sip-proxy-service.jar... Generating import build file... Exporting 1 profile(s) from profile table sip-registrations Export complete
Exported files
The exporter creates a new sub-directory, such as rhino_export
(as specified by the command-line argument), that contains all the deployable units installed in the SLEE, and an Ant script called build.xml
which can be used later to initiate the import process.
If there are any user-defined namespaces, each is exported into a separate Ant script named namespace-<namespace-name>.xml
which can be used individually to restore only the corresponding namespace.
Here is an example export subdirectory:
user@host:~/rhino$ cd rhino_export/ user@host:~/rhino/rhino_export$ ls build.xml configuration import.properties profiles rhino-ant-management_2_3.dtd units
Exported files and directories include:
File or directory | Description |
---|---|
build.xml |
The main Ant build file, which gives Ant the information it needs to import all the components of this export directory into the SLEE. |
namespace-<namespace-name>.xml |
An Ant build file, as above, but specific to a user-defined namespace. |
import.properties |
Contains configuration information, specifically the location of the Rhino "client" directory where required Java libraries are found. |
configuration |
A directory containing the licenses and configured state that the SLEE should have. |
units |
A directory containing deployable units. |
profiles |
A directory containing XML files with the contents of profile tables. |
(snapshots) |
Directories containing "snapshots" of profile tables. These are binary versions of the XML files in the profiles directory, created only by the export process (and not used for importing). See Profile Snapshots. |
Importing a SLEE
To import an exported Rhino SLEE, using rhino-import
, see the following instructions and example.
Run rhino-import
To import the state of an export directory into a Rhino SLEE:
-
change the current working directory to the export directory (
cd
into it) -
run
$RHINO_HOME/client/bin/rhino-import
.
(You can also manually run Ant directly from the export directory, provided that the import.properties
file has been correctly configured to point to the location of your $RHINO_HOME/client
directory.)
Generally, OpenCloud recommends importing into a SLEE with no existing deployed components — except when you need to merge exported state with existing state in the SLEE. (Some import operations may fail if the SLEE already includes components or objects with the same identity. These failures will not halt the build process, however, if the failonerror property of the management subtasks is set to false in the build file.) |
Sample Rhino import
$ cd /path/to/export $ rhino-import Buildfile: ./build.xml management-init: [echo] Open Cloud Rhino SLEE Management tasks defined login: [slee-management] Establishing new connection to localhost:1199 [slee-management] Connected to localhost:1199 (101) [Rhino-SDK (version='2.5', release='0.0', build='201609201052', revision='eb71e6f')] import-persistence-cfg: import-snmp-cfg: import-snmp-node-cfg: import-snmp-parameter-set-type-cfg: import-snmp-configuration: set-symmetric-activation-state-mode: [slee-management] Disabling symmetric activation state mode [slee-management] [Failed] Symmetric activation state mode is already disabled pre-deploy-config: init: [slee-management] Setting the active namespace to the default namespace install-du-javax-slee-standard-types.jar: install-du-jsip-library-1.2.du.jar: [slee-management] Install deployable unit file:du/jsip-library-1.2.du.jar install-du-jsip-ratype-1.2.du.jar: [slee-management] Install deployable unit file:du/jsip-ratype-1.2.du.jar install-du-ocsip-ra-2.3.1.19.du.jar: [slee-management] Install deployable unit file:du/ocsip-ra-2.3.1.19.du.jar install-du-ocsip-ratype-2.2.du.jar: [slee-management] Install deployable unit file:du/ocsip-ratype-2.2.du.jar install-du-sip-presence-event.jar: [slee-management] Install deployable unit file:jars/sip-presence-event.jar install-du-sip-presence-service.jar: [slee-management] Install deployable unit file:jars/sip-presence-service.jar install-du-sip-profile-location-service.jar: [slee-management] Install deployable unit file:jars/sip-profile-location-service.jar install-du-sip-proxy-service.jar: [slee-management] Install deployable unit file:jars/sip-proxy-service.jar install-du-sip-registrar-service.jar: [slee-management] Install deployable unit file:jars/sip-registrar-service.jar install-all-dus: install-deps-of-du-javax-slee-standard-types.jar: verify-du-javax-slee-standard-types.jar: install-deps-of-du-jsip-library-1.2.du.jar: verify-du-jsip-library-1.2.du.jar: [slee-management] Verifying deployable unit file:du/jsip-library-1.2.du.jar install-deps-of-du-jsip-ratype-1.2.du.jar: verify-du-jsip-ratype-1.2.du.jar: [slee-management] Verifying deployable unit file:du/jsip-ratype-1.2.du.jar install-deps-of-du-ocsip-ratype-2.2.du.jar: install-deps-of-du-ocsip-ra-2.3.1.19.du.jar: verify-du-ocsip-ra-2.3.1.19.du.jar: [slee-management] Verifying deployable unit file:du/ocsip-ra-2.3.1.19.du.jar verify-du-ocsip-ratype-2.2.du.jar: [slee-management] Verifying deployable unit file:du/ocsip-ratype-2.2.du.jar install-deps-of-du-sip-presence-event.jar: verify-du-sip-presence-event.jar: [slee-management] Verifying deployable unit file:jars/sip-presence-event.jar install-deps-of-du-sip-profile-location-service.jar: install-deps-of-du-sip-presence-service.jar: verify-du-sip-presence-service.jar: [slee-management] Verifying deployable unit file:jars/sip-presence-service.jar verify-du-sip-profile-location-service.jar: [slee-management] Verifying deployable unit file:jars/sip-profile-location-service.jar install-deps-of-du-sip-proxy-service.jar: verify-du-sip-proxy-service.jar: [slee-management] Verifying deployable unit file:jars/sip-proxy-service.jar install-deps-of-du-sip-registrar-service.jar: verify-du-sip-registrar-service.jar: [slee-management] Verifying deployable unit file:jars/sip-registrar-service.jar verify-all-dus: deploy-du-javax-slee-standard-types.jar: deploy-du-jsip-library-1.2.du.jar: [slee-management] Deploying deployable unit file:du/jsip-library-1.2.du.jar deploy-du-jsip-ratype-1.2.du.jar: 
[slee-management] Deploying deployable unit file:du/jsip-ratype-1.2.du.jar deploy-du-ocsip-ra-2.3.1.19.du.jar: [slee-management] Deploying deployable unit file:du/ocsip-ra-2.3.1.19.du.jar deploy-du-ocsip-ratype-2.2.du.jar: [slee-management] Deploying deployable unit file:du/ocsip-ratype-2.2.du.jar deploy-du-sip-presence-event.jar: [slee-management] Deploying deployable unit file:jars/sip-presence-event.jar deploy-du-sip-presence-service.jar: [slee-management] Deploying deployable unit file:jars/sip-presence-service.jar [slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1],sbb=SbbID[name=EventStateCompositorSbb,vendor=OpenCloud,version=1.0]] root tracer to Info [slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1],sbb=SbbID[name=NotifySbb,vendor=OpenCloud,version=1.1]] root tracer to Info [slee-management] Setting service ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1] starting priority to 0 [slee-management] Setting service ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1] stopping priority to 0 [slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1],sbb=SbbID[name=EventStateCompositorSbb,vendor=OpenCloud,version=1.0]] root tracer to Info [slee-management] Setting service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] starting priority to 0 [slee-management] Setting service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] stopping priority to 0 [slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0],sbb=SbbID[name=ProfileLocationSbb,vendor=OpenCloud,version=1.0]] root tracer to Info [slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0],sbb=SbbID[name=PublishSbb,vendor=OpenCloud,version=1.0]] root tracer to Info [slee-management] Setting service ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0] starting priority to 0 [slee-management] Setting service ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0] stopping priority to 0 deploy-du-sip-profile-location-service.jar: [slee-management] Deploying deployable unit file:jars/sip-profile-location-service.jar [slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Profile Location Service,vendor=OpenCloud,version=1.0],sbb=SbbID[name=ProfileLocationSbb,vendor=OpenCloud,version=1.0]] root tracer to Info [slee-management] Setting service ServiceID[name=SIP Profile Location Service,vendor=OpenCloud,version=1.0] starting priority to 0 [slee-management] Setting service ServiceID[name=SIP Profile Location Service,vendor=OpenCloud,version=1.0] stopping priority to 0 deploy-du-sip-proxy-service.jar: [slee-management] Deploying deployable unit file:jars/sip-proxy-service.jar [slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8],sbb=SbbID[name=ProfileLocationSbb,vendor=OpenCloud,version=1.0]] root tracer to Info [slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8],sbb=SbbID[name=ProxySbb,vendor=OpenCloud,version=1.8]] root tracer to Info [slee-management] Setting service ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8] 
starting priority to 0 [slee-management] Setting service ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8] stopping priority to 0 deploy-du-sip-registrar-service.jar: [slee-management] Deploying deployable unit file:jars/sip-registrar-service.jar [slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8],sbb=SbbID[name=ProfileLocationSbb,vendor=OpenCloud,version=1.0]] root tracer to Info [slee-management] Set trace level of SbbNotification[service=ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8],sbb=SbbID[name=RegistrarSbb,vendor=OpenCloud,version=1.8]] root tracer to Info [slee-management] Setting service ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8] starting priority to 0 [slee-management] Setting service ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8] stopping priority to 0 deploy-all-dus: create-sip-registrations-profile-table: [slee-management] Create profile table sip-registrations from specification ComponentID[name=SipRegistrationProfile,vendor=OpenCloud,version=1.0] [slee-management] Importing profiles into profile table: sip-registrations [slee-management] 1 profile(s) processed: 0 created, 0 replaced, 0 removed, 1 skipped [slee-management] Set trace level of ProfileTableNotification[table=sip-registrations] root tracer to Info create-all-profile-tables: create-ra-entity-sipra: [slee-management] Deploying ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1] [slee-management] [Failed] Component ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1] already deployed [slee-management] Create resource adaptor entity sipra from ComponentID[name=OCSIP,vendor=OpenCloud,version=2.3.1] [slee-management] Bind link name OCSIP to sipra [slee-management] Set trace level of RAEntityNotification[entity=sipra] root tracer to Info [slee-management] Setting resource adaptor entity sipra starting priority to 0 [slee-management] Setting resource adaptor entity sipra stopping priority to 0 create-all-ra-entities: deploy: deploy-all: set-subsystem-tracer-levels: [slee-management] Set trace level of SubsystemNotification[subsystem=AbnormalExecutionAlarms] root tracer to Info [slee-management] Set trace level of SubsystemNotification[subsystem=ActivityHandler] root tracer to Info [slee-management] Set trace level of SubsystemNotification[subsystem=ClusterStateListener] root tracer to Info [slee-management] Set trace level of SubsystemNotification[subsystem=ConfigManager] root tracer to Info [slee-management] Set trace level of SubsystemNotification[subsystem=DatabaseStateListener] root tracer to Info [slee-management] Set trace level of SubsystemNotification[subsystem=ElementManager] root tracer to Info [slee-management] Set trace level of SubsystemNotification[subsystem=LicenseManager] root tracer to Info [slee-management] Set trace level of SubsystemNotification[subsystem=LimiterManager] root tracer to Info [slee-management] Set trace level of SubsystemNotification[subsystem=MLetStarter] root tracer to Info [slee-management] Set trace level of SubsystemNotification[subsystem=RhinoStarter] root tracer to Info [slee-management] Set trace level of SubsystemNotification[subsystem=ServiceStateManager] root tracer to Info [slee-management] Set trace level of SubsystemNotification[subsystem=SNMP] root tracer to Info [slee-management] Set trace level of SubsystemNotification[subsystem=ThresholdAlarms] root tracer to Info import-runtime-cfg: 
import-logging-cfg: import-auditing-cfg: import-license-cfg: import-object-pool-cfg: import-staging-queues-cfg: import-limiting-cfg: import-threshold-cfg: import-threshold-rules-cfg: import-access-control-cfg: import-container-configuration: post-deploy-config: activate-service-SIP Notification Service-OpenCloud-1.1: [slee-management] Activate service ComponentID[name=SIP Notification Service,vendor=OpenCloud,version=1.1] on nodes [101] activate-service-SIP Presence Service-OpenCloud-1.1: [slee-management] Activate service ComponentID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] on nodes [101] activate-service-SIP Profile Location Service-OpenCloud-1.0: [slee-management] Activate service ComponentID[name=SIP Profile Location Service,vendor=OpenCloud,version=1.0] on nodes [101] activate-service-SIP Proxy Service-OpenCloud-1.8: [slee-management] Activate service ComponentID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8] on nodes [101] activate-service-SIP Publish Service-OpenCloud-1.0: [slee-management] Activate service ComponentID[name=SIP Publish Service,vendor=OpenCloud,version=1.0] on nodes [101] activate-service-SIP Registrar Service-OpenCloud-1.8: [slee-management] Activate service ComponentID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8] on nodes [101] activate-all-services: activate-ra-entity-sipra: [slee-management] Activate RA entity sipra on node(s) [101] activate-all-ra-entities: activate: activate-all: all: BUILD SUCCESSFUL Total time: 27 seconds
Partial Imports
To perform a partial import, first view the available targets in the export, then select the target to import.
Why import only part of an export?
A partial import is useful when you only need to import selected items from an export, such as:
For example, you can use a partial import to recreate resource adaptor entities without also activating them. |
It is also possible to exclude certain configuration steps by setting an exclusion property. For example, to exclude the persistence configuration, you would use -Dexclude-persistence-cfg=true
as part of the command. These exclusion properties exist for all configuration targets.
View targets
To list the available targets in the export, use rhino-import -l. For example:
$ cd /path/to/export $ rhino-import -l Buildfile: ./build.xml Main targets: Other targets: activate activate-all activate-all-ra-entities activate-all-services activate-ra-entity-sipra activate-service-SIP Notification Service-OpenCloud-1.1 activate-service-SIP Presence Service-OpenCloud-1.1 activate-service-SIP Profile Location Service-OpenCloud-1.0 activate-service-SIP Proxy Service-OpenCloud-1.8 activate-service-SIP Publish Service-OpenCloud-1.0 activate-service-SIP Registrar Service-OpenCloud-1.8 all create-all-profile-tables create-all-ra-entities create-ra-entity-sipra create-sip-registrations-profile-table deploy deploy-all deploy-all-dus deploy-du-javax-slee-standard-types.jar deploy-du-jsip-library-1.2.du.jar deploy-du-jsip-ratype-1.2.du.jar deploy-du-ocsip-ra-2.3.1.19.du.jar deploy-du-ocsip-ratype-2.2.du.jar deploy-du-sip-presence-event.jar deploy-du-sip-presence-service.jar deploy-du-sip-profile-location-service.jar deploy-du-sip-proxy-service.jar deploy-du-sip-registrar-service.jar import-access-control-cfg import-auditing-cfg import-container-configuration import-license-cfg import-limiting-cfg import-logging-cfg import-object-pool-cfg import-persistence-cfg import-runtime-cfg import-snmp-cfg import-snmp-configuration import-snmp-node-cfg import-snmp-parameter-set-type-cfg import-staging-queues-cfg import-threshold-cfg import-threshold-rules-cfg init install-all-dus install-deps-of-du-javax-slee-standard-types.jar install-deps-of-du-jsip-library-1.2.du.jar install-deps-of-du-jsip-ratype-1.2.du.jar install-deps-of-du-ocsip-ra-2.3.1.19.du.jar install-deps-of-du-ocsip-ratype-2.2.du.jar install-deps-of-du-sip-presence-event.jar install-deps-of-du-sip-presence-service.jar install-deps-of-du-sip-profile-location-service.jar install-deps-of-du-sip-proxy-service.jar install-deps-of-du-sip-registrar-service.jar install-du-javax-slee-standard-types.jar install-du-jsip-library-1.2.du.jar install-du-jsip-ratype-1.2.du.jar install-du-ocsip-ra-2.3.1.19.du.jar install-du-ocsip-ratype-2.2.du.jar install-du-sip-presence-event.jar install-du-sip-presence-service.jar install-du-sip-profile-location-service.jar install-du-sip-proxy-service.jar install-du-sip-registrar-service.jar login management-init post-deploy-config pre-deploy-config set-subsystem-tracer-levels set-symmetric-activation-state-mode verify-all-dus verify-du-javax-slee-standard-types.jar verify-du-jsip-library-1.2.du.jar verify-du-jsip-ratype-1.2.du.jar verify-du-ocsip-ra-2.3.1.19.du.jar verify-du-ocsip-ratype-2.2.du.jar verify-du-sip-presence-event.jar verify-du-sip-presence-service.jar verify-du-sip-profile-location-service.jar verify-du-sip-proxy-service.jar verify-du-sip-registrar-service.jar Default target: all
Select target
To import a selected target, run rhino-import
with the target specified.
If you don’t specify a target, this operation imports the default target (all). |
For example, to recreate all resource adaptor entities included in the export:
$ cd /path/to/export $ rhino-import . create-all-ra-entities Buildfile: ./build.xml management-init: [echo] Open Cloud Rhino SLEE Management tasks defined login: [slee-management] Establishing new connection to localhost:1199 [slee-management] Connected to localhost:1199 (101) [Rhino-SDK (version='2.5', release='0.0', build='201609201052', revision='eb71e6f')] init: [slee-management] Setting the active namespace to the default namespace install-du-jsip-library-1.2.du.jar: [slee-management] Install deployable unit file:du/jsip-library-1.2.du.jar install-deps-of-du-jsip-library-1.2.du.jar: install-du-jsip-ratype-1.2.du.jar: [slee-management] Install deployable unit file:du/jsip-ratype-1.2.du.jar install-deps-of-du-jsip-ratype-1.2.du.jar: install-du-ocsip-ratype-2.2.du.jar: [slee-management] Install deployable unit file:du/ocsip-ratype-2.2.du.jar install-deps-of-du-ocsip-ratype-2.2.du.jar: install-du-ocsip-ra-2.3.1.19.du.jar: [slee-management] Install deployable unit file:du/ocsip-ra-2.3.1.19.du.jar install-deps-of-du-ocsip-ra-2.3.1.19.du.jar: verify-du-ocsip-ra-2.3.1.19.du.jar: [slee-management] Verifying deployable unit file:du/ocsip-ra-2.3.1.19.du.jar deploy-du-ocsip-ra-2.3.1.19.du.jar: [slee-management] Deploying deployable unit file:du/ocsip-ra-2.3.1.19.du.jar create-ra-entity-sipra: [slee-management] Deploying ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1] [slee-management] [Failed] Component ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1] already deployed [slee-management] Create resource adaptor entity sipra from ComponentID[name=OCSIP,vendor=OpenCloud,version=2.3.1] [slee-management] Bind link name OCSIP to sipra [slee-management] Set trace level of RAEntityNotification[entity=sipra] root tracer to Info [slee-management] Setting resource adaptor entity sipra starting priority to 0 [slee-management] Setting resource adaptor entity sipra stopping priority to 0 create-all-ra-entities: BUILD SUCCESSFUL Total time: 5 seconds
Profile Snapshots
As well as an overview of profile snapshots, this section includes instructions, explanations and examples of performing the following Rhino SLEE procedures:
Procedure | Script | Console command |
---|---|---|
Saving a profile snapshot | rhino-snapshot | |
Inspecting a profile snapshot | snapshot-decode | |
Preparing a snapshot for importing | snapshot-to-export | |
Importing a profile snapshot | | importprofiles |
About SLEE Profile Snapshots
The rhino-snapshot
script exports the state of some or all profile tables out of the SLEE in binary format. These binary snapshots can then be inspected, converted into a form suitable for re-importing, and re-imported later.
You extract and convert binary images of profile tables using the following commands, available in client/bin/:
For usage information on any of these scripts, run them without arguments. |
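As an end-to-end sketch, the scripts chain together as follows (the table name ActivityTestConfigurationTable and the snapshots-backup directory are illustrative only):

# 1. Save a binary snapshot of one profile table
$ ~/rhino/client/bin/rhino-snapshot -h localhost --snapshot --outputdir snapshots-backup ActivityTestConfigurationTable
# 2. Inspect the snapshot contents
$ ~/rhino/client/bin/snapshot-decode snapshots-backup/ActivityTestConfigurationTable
# 3. Convert the snapshot to an importable XML file
$ ~/rhino/client/bin/snapshot-to-export snapshots-backup/ActivityTestConfigurationTable ActivityTestConfigurationTable.xml
# 4. Re-import the profiles into a SLEE
$ ~/rhino/client/bin/rhino-console importprofiles ActivityTestConfigurationTable.xml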
Saving a Profile Snapshot
To create a snapshot (for example, to extract the state of an individual profile table to a snapshot directory), run the rhino-snapshot
script.
Options |
$ ~/rhino/client/bin/rhino-snapshot
Rhino Snapshot Client
This tool creates snapshots of currently installed profile tables.
Usage: rhino-snapshot [options] <action> [options] [<profile table name>*|--all]
Valid options:
  -? or --help   - Display this message
  -h <host>      - Rhino host
  -p <port>      - Rhino port
  -u <username>  - Rhino username
  -w <password>  - Rhino password
Valid actions:
  --list         - list all profile tables
  --info         - get profile table info only (do not save any data)
  --snapshot     - snapshot and save profile table data
Action modifier options:
  --namespace <name>       - restrict action to the given namespace
                             use an empty string to denote the default namespace
  --outputdir <directory>  - sets the directory where snapshot files are created
                             defaults to the current working directory
  --zip                    - save to .zip archives instead of directories
  --all                    - snapshot all profile tables |
---|---|
Example |
ubuntu@ip-172-31-25-31:~/RhinoSDK/client/bin$ ./rhino-snapshot -h localhost --snapshot --outputdir snapshot-backup --all Rhino Snapshot Client Connecting to node: localhost:1199 Connected to node: 101 Snapshot server port is: 22000 Taking snapshot for OpenCloud_ShortCode_AddressListEntryTable Saving OpenCloud_ShortCode_AddressListEntryTable.jar (1,335kb) Streaming profile table 'OpenCloud_ShortCode_AddressListEntryTable' snapshot to OpenCloud_ShortCode_AddressListEntryTable.data (3 entries) [###############################################################################################################################################################################################################################] 3/3 entries Taking snapshot for SCCCamelToIMSReOriginationConfigProfileTable Saving SCCCamelToIMSReOriginationConfigProfileTable.jar (4,937kb) Streaming profile table 'SCCCamelToIMSReOriginationConfigProfileTable' snapshot to SCCCamelToIMSReOriginationConfigProfileTable.data (2 entries) [###############################################################################################################################################################################################################################] 2/2 entries Taking snapshot for OpenCloud_RegistrarPublicIdToPrivateIdTable Saving OpenCloud_RegistrarPublicIdToPrivateIdTable.jar (1,398kb) Streaming profile table 'OpenCloud_RegistrarPublicIdToPrivateIdTable' snapshot to OpenCloud_RegistrarPublicIdToPrivateIdTable.data (1 entries) [###############################################################################################################################################################################################################################] 1/1 entries ... Extracted 1,626 of 1,626 entries (251kb) Snapshot timestamp 2016-10-31 15:08:46.917 (1477926526917) Critical region time : 0.003 s Request preparation time : 0.090 s Data extraction time : 113.656 s Total time : 113.746 s |
Inspecting a Profile Snapshot
To print the contents of a snapshot directory or zip file, run the snapshot-decode
script.
Options |
$ ~/rhino/client/bin/snapshot-decode
Syntax: snapshot-decode <snapshot .zip | snapshot directory> [max# of records to print, default=all] |
---|---|
Example |
$ ~/rhino/client/bin/snapshot-decode snapshots-backup/ActivityTestConfigurationTable
OpenCloud::::,2000,1.5,300,10000,chargingPeriodMultiple
OpenCloud:OpenCloud:sipcall::,2000,1.5,1800,10000,chargingPeriodMultiple |
Notes |
If you exported all profile tables (by passing the --all flag to rhino-snapshot):
~/rhino/client/bin/rhino-snapshot -h localhost --outputdir snapshots-backup --all
…then you would run commands such as the following to inspect all the tables:
~/rhino/client/bin/snapshot-decode snapshots-backup/ActivityTestConfigurationTable
~/rhino/client/bin/snapshot-decode snapshots-backup/AnnouncementProfileTable
... |
Preparing a Snapshot for Importing
To convert a snapshot to XML (so that it can be re-imported into another SLEE), run the snapshot-to-export script. To convert the raw snapshot data included in a rhino-export result, run the convert-export-snapshots-to-xml script.
snapshot-to-export
Options |
$ ~/rhino/client/bin/snapshot-to-export
Snapshot .zip file or directory required
Syntax: snapshot-to-export <snapshot .zip | snapshot directory> <output .xml file> [--max max records, default=all] |
---|---|
Example |
$ ~/rhino/client/bin/snapshot-to-export snapshots-backup/snapshots/ActivityTestConfigurationTable ActivityTestConfigurationTable.xml
Creating profile export file ActivityTestConfigurationTable.xml
[###########################################################################################################] converted 3 of 3
[###########################################################################################################] converted 3 of 3
Created export for 3 profiles in 0.1 seconds |
Notes |
If you exported all profile tables (by passing the --all flag to rhino-snapshot):
~/rhino/client/bin/rhino-snapshot -h localhost --outputdir snapshots-backup --all
…then you would run commands such as the following to convert all the tables:
~/rhino/client/bin/snapshot-to-export snapshots-backup/ActivityTestConfigurationTable
Creating profile export file snapshots-backup/ActivityTestConfigurationTable.xml
[############################################################################################################] converted 3 of 3
[############################################################################################################] converted 3 of 3
Created export for 3 profiles in 0.1 seconds
~/rhino/client/bin/snapshot-to-export snapshots-backup/sis_configs_sip
Creating profile export file snapshots-backup/sis_configs_sip.xml
[############################################################################################################] converted 2 of 2
[############################################################################################################] converted 2 of 2
Created export for 2 profiles in 0.9 seconds
... |
convert-export-snapshots-to-xml
Options |
$ ~/rhino/client/bin/convert-export-snapshots-to-xml
Export directory name must be specified
Syntax: convert-export-snapshots-to-xml <export directory> |
---|---|
Example |
$ ~/rhino/client/bin/convert-export-snapshots-to-xml exports/
Converting table test-profile-table from snapshot exports/snapshots/test_profile_table to XML test_profile_table.xml |
Importing a Profile Snapshot
To import a converted snapshot XML file into a Rhino SLEE, run the importprofiles
command in rhino-console.
Options |
importprofiles <filename.xml> [-table table-name] [-replace] [-max profiles-per-transaction] [-noverify]
Description
Import profiles from xml data |
---|---|
Example |
$ ~/rhino/client/bin/rhino-console importprofiles snapshots-backup/ActivityTestConfigurationTable.xml
Interactive Rhino Management Shell
Connecting as user admin
Importing profiles into profile table: ActivityTestConfigurationTable
3 profile(s) processed: 3 created, 0 replaced, 0 removed, 0 skipped |
External Persistence Database Backups
During normal operation, all SLEE management and profile data resides in Rhino’s own in-memory distributed database. The memory database is fault tolerant and can survive the failure of a node. However, for management and profile data to survive a total restart of the cluster, it must be persisted to a permanent, disk-based data store. OpenCloud Rhino SLEE uses an external database for this purpose; both PostgreSQL and Oracle databases are supported.
When to export instead of backing up
You can only successfully restore database backups using the same Rhino SLEE version as the backup was made from. For backups that can reliably be used for restoring to different versions of the Rhino SLEE, create an export image of the SLEE. |
The procedures for backing up and restoring the external database in which Rhino stores persistent state differ depending on the database vendor. Consult the documentation provided by your database vendor.
PostgreSQL documentation for PostgreSQL 9.6 can be found at https://www.postgresql.org/docs/9.6/static/backup.html
Oracle documentation for Oracle Database 12C R2 can be found at https://docs.oracle.com/database/122/BRADV/toc.htm
When installing Rhino, a database schema is initialised. To back up the Rhino database you must dump a copy of all the tables in this schema. The schema to be backed up is the database name chosen during the Rhino installation. This value is stored as the MANAGEMENT_DATABASE_NAME variable in the file $RHINO_HOME/config/config_variables, where $RHINO_HOME is the path of a Rhino node directory.
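As a minimal sketch for a PostgreSQL installation, assuming the database name chosen at installation was rhino, the database user is also rhino, and config_variables uses VAR=value shell syntax:

# Confirm the schema/database name chosen during installation
$ grep MANAGEMENT_DATABASE_NAME $RHINO_HOME/config/config_variables
MANAGEMENT_DATABASE_NAME=rhino
# Dump all tables in that database to a backup file
$ pg_dump -U rhino -h localhost rhino > rhino-backup.sql
# Later, restore into an empty database of the same name
$ psql -U rhino -h localhost rhino < rhino-backup.sql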
Database schema
The tables below are typical of a Rhino installation. Depending on your deployed services and configuration, the schema may contain more tables than these. Backups should always include all the tables, to allow restoration to a usable state without additional manual operations.
Table | Contents |
---|---|
keyspaces | Names of tables holding data for MemDB keyspaces and config options for these keyspaces. Each MemDB instance contains a number of keyspaces for different types of stored data. These are mapped to tables in the persistent DB.
timestamps | Snapshot timestamps and MemDB generation IDs for persistent MemDB databases. This table records the current timestamp for each persistent MemDB so nodes can determine which backing database holds the most recent version when starting up. See About Persistence Resources for an explanation of how to use multiple redundant backing databases for persistent storage.
domain-0-rhino_management_RHINO internal metadata suppocfa90e0c, domain-0-rhino_management_RHINO internal metadata suppo302ee56d, … | Rhino configuration data. Each table contains one keyspace for a different type of configuration data, e.g. SLEE state.
domain-0-rhino_management_rhino:deployment | Deployed service classes. Entries correspond to jars, metadata files and checksums for deployed components.
domain-0-rhino_profiles_Profile Table 1:ProfileOCBB, … | Profile table data. Each table corresponds to a profile table in the SLEE. Each record corresponds to a profile in the profile table.
domain-0-rhino_profiles_Profile Table 1:ProfileIndexAddressOCBB, … | Index data for the profile tables. Each indexed field in a profile has a matching table.
Managing the Rhino SNMP Subsystem
Rhino includes an SNMP agent, for interoperability with external SNMP-aware management clients. The Rhino SNMP agent provides a read-only view of Rhino’s statistics (through SNMP polling), and supports sending SNMP notifications for platform events to an external monitoring system.
In a clustered environment, individual Rhino nodes will run their own instances of the SNMP agent, so that statistics and notifications can still be accessed in the event of node failure. When multiple Rhino nodes exist on the same host, Rhino assigns each SNMP agent a unique port, so that the SNMP agents for each node can be distinguished. The port assignments are also persisted by default, so that in the event of node failure or cluster restart the Rhino SNMP agent for a given node will resume operation on the same port it was using previously.
This section includes the following topics:
Accessing SNMP Statistics and Notifications
Below is an overview of the statistics and notifications available from the Rhino SNMP agent, and the OIDs they use.
SNMP statistics
The Rhino SNMP agent provides access to all non-sample based Rhino statistics (all gauges and counters), in the form of SNMP tables that each represent a single parameter set type. The values in each table represent statistics from the individual parameter sets associated with the table’s parameter set type. Each table uses the name of a parameter set, converted to an OID, as a table index. Individual table rows represent parameter sets, while the table columns represent statistics from the parameter set type. The first column is special, as it contains the parameter set index value as a string. For the purposes of SNMP, the name of the root parameter set for each parameter set type can be considered to be "(root)". All other parameter sets use their normal parameter set names, converted to OIDs, as index keys.
For example, you can use snmpwalk to view the JVM parameter set type’s representation in the SNMP agent as OIDs.
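A hypothetical invocation, assuming the agent for the node is listening on port 16100 and that the JVM parameter set type is mapped to its default base OID (see Rhino SNMP OID Mappings below):

$ snmpwalk -v 2c -c public localhost:16100 1.3.6.1.4.1.19808.2.1.14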
Exceptionally long parameter set names may be truncated if their OID representation is longer than 255 characters. This is to prevent pathname length problems with management clients which store SNMP statistics in files named after the index OID. |
The statistics provided by this interface can be considered the "raw" statistics values for individual Rhino nodes. They will not, in general, reflect activity occurring on Rhino nodes other than the one the SNMP agent is providing statistics for. Unlike the command-line statistics client, the SNMP agent does not collate statistics from other nodes. |
For usage parameter set types, the base OID and individual statistics can be specified in annotations described in Annotations. For statistics parameter set types in a resource adaptor, the base OID can be specified in the stats-parameter-set-type element in oc-resource-adaptor-jar.xml. Otherwise, they will be dynamically allocated according to the range specified in Dynamic Rhino monitoring parameter sets. |
You can get the mapping of table columns to statistics by exporting the parameter set type specific MIB, using the Rhino command-line console. |
SNMP notifications
The Rhino SNMP agent supports sending SNMP notifications to a designated host/port. It forwards the following notifications, which include all standard JAIN SLEE 1.1 notifications plus several Rhino-specific notifications. See Configuring SNMP Notifications for instructions on how to configure Rhino to send SNMP notifications.
Notifications | Sent when… | Details | Trap type OID (1.3.6.1.4.1.19808.2.101.x) |
---|---|---|---|
Alarms | …alarms are raised or cleared (only once per alarm) | | 1.3.6.1.4.1.19808.2.101.1
Resource Adaptor | …an RA entity changes state | | 1.3.6.1.4.1.19808.2.101.2
Service | …a service changes state | | 1.3.6.1.4.1.19808.2.101.3
SLEE | …the SLEE changes state | | 1.3.6.1.4.1.19808.2.101.4
Trace | …a trace message is generated by a component | | 1.3.6.1.4.1.19808.2.101.5
Usage | …usage notifications are required | | 1.3.6.1.4.1.19808.2.101.6
Log | …log messages exceed a specified threshold | | 1.3.6.1.4.1.19808.2.101.7
Logfile Rollover | …a log file rolls over | | 1.3.6.1.4.1.19808.2.101.8
The notification MIB structure is a set of SNMP variable bindings containing the time ticks since Rhino started, the notification message, type, timestamp, node IDs and additional data. These are also documented in the RHINO-NOTIFICATIONS.MIB file generated by rhino-console exportmibs
.
Common Notification VarBinds (Notification argument data)
Name | OID | Description |
---|---|---|
message | 1.3.6.1.4.1.19808.2.102.1 | Notification message. For alarms this is the alarm message text
type | 1.3.6.1.4.1.19808.2.102.2 | Notification type, in dotted hierarchical notation, e.g. "javax.slee.management.alarm.raentity"
sequence | 1.3.6.1.4.1.19808.2.102.3 | An incrementing sequence number, indexed for each notification type
timestamp | 1.3.6.1.4.1.19808.2.102.4 | A timestamp in ms since 1-Jan-1970
nodeIDs | 1.3.6.1.4.1.19808.2.102.5 | The node IDs reporting this notification. An array of Rhino node IDs represented as a string [101,102,…]
source | 1.3.6.1.4.1.19808.2.102.9 | The source of the notification. This can be an SBB, RA entity or subsystem.
namespace | 1.3.6.1.4.1.19808.2.102.13 | The namespace of the notification. This field will be an empty string for the default namespace.
Component state change
Name | OID | Description |
---|---|---|
oldState | 1.3.6.1.4.1.19808.2.102.6 | Old state of the component
newState | 1.3.6.1.4.1.19808.2.102.7 | New state of the component
component | 1.3.6.1.4.1.19808.2.102.8 | ID of the component. This can be a service or RA entity name
Alarm
Name | OID | Description |
---|---|---|
alarmID | 1.3.6.1.4.1.19808.2.102.10 | Alarm ID
alarmLevel | 1.3.6.1.4.1.19808.2.102.11 | Alarm level (Critical, Major, Minor, etc)
alarmInstance | 1.3.6.1.4.1.19808.2.102.12 | Alarm instance ID.
alarmType | 1.3.6.1.4.1.19808.2.102.14 | Alarm type ID. The value of this depends on the source of the alarm; for example, a failed connection alarm from the DB Query RA would have a value like dbquery.ds.failure
Tracer
Name | OID | Description |
---|---|---|
traceLevel | 1.3.6.1.4.1.19808.2.102.50 | Tracer level (Error, Warning, Info, …)
Usage parameter notification (Rhino Statistics)
Name | OID | Description |
---|---|---|
usageName | 1.3.6.1.4.1.19808.2.102.60 | Usage parameter name, one per parameter in the parameter set.
usageSetName | 1.3.6.1.4.1.19808.2.102.61 | Parameter set name
usageValue | 1.3.6.1.4.1.19808.2.102.62 | Value of the usage parameter at the moment the notification was generated
Logging
Name | OID | Description |
---|---|---|
logName | 1.3.6.1.4.1.19808.2.102.70 | Log key
logLevel | 1.3.6.1.4.1.19808.2.102.71 | Log level (ERROR, WARN, INFO, …)
logThread | 1.3.6.1.4.1.19808.2.102.72 | Thread the message was logged from
Log file rollover
Name | OID | Description |
---|---|---|
oldFile | 1.3.6.1.4.1.19808.2.102.80 | The log file that was rolled over
newFile | 1.3.6.1.4.1.19808.2.102.81 | The new name of the log file
A sample SNMP trap for an alarm follows. The OctetString values are text strings containing the alarm notification data.
Simple Network Management Protocol
version: v2c (1)
community: public
data: snmpV2-trap (7)
snmpV2-trap
request-id: 1760530310
error-status: noError (0)
error-index: 0
variable-bindings: 11 items
1.3.6.1.2.1.1.3.0: 559356
Object Name: 1.3.6.1.2.1.1.3.0 (iso.3.6.1.2.1.1.3.0)
Value (Timeticks): 559356
1.3.6.1.6.3.1.1.4.1.0: 1.3.6.1.4.1.19808.2.101.1 (iso.3.6.1.4.1.19808.2.101.1)
Object Name: 1.3.6.1.6.3.1.1.4.1.0 (iso.3.6.1.6.3.1.1.4.1.0)
Value (OID): 1.3.6.1.4.1.19808.2.101.1 (iso.3.6.1.4.1.19808.2.101.1)
1.3.6.1.4.1.19808.2.102.1: 44617461536f7572636520686173206661696c6564
1.3.6.1.4.1.19808.2.102.2: 6a617661782e736c65652e6d616e6167656d656e742e616c...
Object Name: 1.3.6.1.4.1.19808.2.102.2 (iso.3.6.1.4.1.19808.2.102.2)
Value (OctetString): 6a617661782e736c65652e6d616e6167656d656e742e616c...
1.3.6.1.4.1.19808.2.102.3: 34
1.3.6.1.4.1.19808.2.102.4: 31343634313535323331333630
1.3.6.1.4.1.19808.2.102.5: 5b3130315d
1.3.6.1.4.1.19808.2.102.9: 5241456e746974794e6f74696669636174696f6e5b656e74...
1.3.6.1.4.1.19808.2.102.10: 3130313a313836363934363631333134353632
1.3.6.1.4.1.19808.2.102.11: 4d616a6f72
1.3.6.1.4.1.19808.2.102.12: 6c6f63616c686f737420284f7261636c6529
1.3.6.1.4.1.19808.2.102.14: 646271756572792e64732e6661696c757265
Notification trap type OID: 1.3.6.1.4.1.19808.2.101.1 (Alarm) | |
Message: "DataSource has failed" | |
Notification type: javax.slee.management.alarm.raentity | |
Sequence number: 34 | |
Timestamp: 1464155231360 | |
Node IDs: [101] | |
Source: RAEntityNotification[entity=dbquery-0] | |
Alarm ID: 101:186694661314562 | |
Alarm level: Major | |
Alarm instance: localhost (Oracle) | |
Alarm type: dbquery.ds.failure |
|
Log notification appender
Rhino 2.2 introduced a log notification appender for use with the SNMP agent. This appender will create notifications for all log messages above its configured threshold. These notifications will in turn be forwarded by the SNMP agent to the designated host/port. This is intended as a catch-all for any errors or warnings which don’t have specific alarms associated with them. |
OID hierarchy
The Rhino SNMP agent uses the following OIDs. (All statistics and notifications that it provides use these OIDs as a base.)
.1.3.6.1.4.1.19808 | OpenCloud Enterprise OID
.1.3.6.1.4.1.19808.2 | Rhino
.1.3.6.1.4.1.19808.2.1 | Rhino Statistics
.1.3.6.1.4.1.19808.2.101 | Rhino Notifications
.1.3.6.1.4.1.19808.2.102 | Rhino Notification VarBinds
Configuring the Rhino SNMP Agent
This section includes instructions for configuring the Rhino SNMP agent, with explanations and examples.
Procedure | rhino-console command(s) |
---|---|
Configuring authentication details | setsnmpcommunity setsnmpuserdetails setsnmpuserengineid getsnmpuserengineid listsnmpengineids enablesnmpversion disablesnmpversion
Managing SNMP status | snmpstatus activatesnmp deactivatesnmp restartsnmp
Configuring port and interface bindings | setsnmpsubnet setsnmpportrange setloopbackaddressesallowed setaddressbindingssaved setportbindingssaved
Managing per-node state | clearsnmpsavedconfig setsnmpsavedconfig
Setting SNMP system information | setsnmpdetails
Exporting MIB files | exportmibs
Configuring SNMP notifications | enablesnmpnotifications disablesnmpnotifications addsnmptarget removesnmptarget listsnmpnotificationtypes setsnmpnotificationenabled
Removing the log notification appender | removeappenderref
Configuring SNMP OID mappings | listsnmpoidmappings setsnmpoidmapping createsnmpmappingconfig removesnmpmappingconfig removeinactivesnmpmappingconfigs
Configuring SNMP counter mappings | listsnmpcountermappings setsnmpcountermapping
Configuring Authentication Details
To configure authentication details for accessing Rhino’s SNMP subsystem, use the following rhino-console commands.
setsnmpcommunity
Command |
setsnmpcommunity <community> Description Sets the SNMP community. |
---|---|
Example |
|
setsnmpuserdetails
Command |
setsnmpuserdetails <username> <authenticationProtocol> <authenticationKey> <privacyProtocol> <privacyKey> Description Sets the SNMP v3 user and authentication details. |
---|---|
Example |
|
setsnmpuserengineid
Command |
setsnmpuserengineid <hex string> Description Sets the user configurable portion of the SNMP LocalEngineID.
|
---|---|
Example |
|
getsnmpuserengineid
Command |
getsnmpuserengineid Description Returns the user configurable portion of the SNMP LocalEngineID. |
---|---|
Example |
|
listsnmpengineids
Command |
listsnmpengineids Description Lists the SNMP EngineIDs for each node.
|
---|---|
Example |
|
Managing the Status of the SNMP Agent
To display the status of the SNMP agent, and to activate, deactivate, and restart it, use the following rhino-console commands.
snmpstatus
Command |
snmpstatus Description Provides an overview of all current SNMP agent state including the current SNMP defaults, per-node SNMP agent state, and saved per-node SNMP configuration. |
---|---|
Example |
The initial status for a freshly installed single-node cluster looks like this:
|
activatesnmp
Command |
activatesnmp Description Activates the Rhino SNMP agent. |
---|---|
Example |
$ ./rhino-console activatesnmp Rhino SNMP agent enabled. |
deactivatesnmp
Command |
deactivatesnmp Description Deactivates the Rhino SNMP agent. |
---|---|
Example |
$ ./rhino-console deactivatesnmp Rhino SNMP agent disabled. |
restartsnmp
Command |
restartsnmp Description Deactivates (if required) and then reactivates the Rhino SNMP agent. |
---|---|
Example |
$ ./rhino-console restartsnmp
Stopped SNMP agent.
Starting SNMP agent.
Rhino SNMP agent restarted. |
If the SNMP agent could not be started successfully, alarms will be raised. Check active alarms using the listactivealarms console command for details. |
Configuring Port and Interface Bindings
To manage port and interface bindings when a new Rhino node joins the cluster, use the following rhino-console commands to configure the Rhino SNMP system settings for the default subnet and port range, whether loopback addresses are allowed, and whether address and port bindings are saved.
These settings only affect nodes which don’t have previously saved interface/port configuration settings. If a node has previously saved settings, it will attempt to use those values. If it cannot (for example, if the port is in use by another application), then the SNMP agent will not start on that node. |
Any changes to these settings will require a restart of the SNMP agent to take effect.
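For example, a sketch of changing the default port range and then restarting the agent so the change takes effect (the port values are illustrative):

$ ./rhino-console setsnmpportrange 16100 16200
$ ./rhino-console restartsnmp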
setsnmpsubnet
Command |
setsnmpsubnet <x.x.x.x/y> Description Sets the default subnet used by the Rhino SNMP agent when initially determining addresses to bind to.
|
---|---|
Example |
|
setsnmpportrange
Command |
setsnmpportrange <low port> <high port> Description Sets the default port range used by the Rhino SNMP agent.
|
---|---|
Example |
setloopbackaddressesallowed
Command |
setloopbackaddressesallowed <true|false> Description Specifies whether loopback interfaces will be considered (true) or not (false) when binding the SNMP agent to addresses. This setting will be ignored if only loopback addresses are available. |
---|---|
Example |
|
setaddressbindingssaved
Command |
setaddressbindingssaved <true|false> Description Specifies whether the address bindings used by the SNMP agent for individual Rhino nodes are persisted in the Rhino configuration.
|
---|---|
Example |
|
setportbindingssaved
Command |
setportbindingssaved <true|false> Description Specifies whether the port bindings used by the SNMP agent for individual Rhino nodes are persisted in the Rhino configuration.
|
---|---|
Example |
|
Managing Per-Node State
To clear or modify the saved per-node SNMP configuration, use the following rhino-console commands.
Viewing the saved per-node state
Saved per-node state displays in the output of the snmpstatus command. For example:
Saved per-node configuration
=============================
101     <default>:16100
Here node 101 has a saved address/port binding of <default>:16100, i.e. the default addresses with port 16100. |
clearsnmpsavedconfig
Command |
clearsnmpsavedconfig <node1,node2,...|all> Description Clears saved per-node SNMP configuration for the specified nodes (or all nodes). |
---|---|
Example |
To clear the saved configuration for node 101:
$ ./rhino-console clearsnmpsavedconfig 101
Per-node SNMP configurations cleared for nodes: 101 |
setsnmpsavedconfig
Command |
setsnmpsavedconfig <node-id> <addresses|default> <port|default> Description Sets the saved address and port configuration used by a node.
|
---|---|
Example |
To set the SNMP agent’s address and port for node 101:
$ ./rhino-console setsnmpsavedconfig 101 localhost 16100
SNMP configuration for node 101 updated. |
Setting SNMP System Information
To set SNMP system information, use the following rhino-console command.
Each Rhino SNMP agent exposes the standard SNMPv2-MIB system variables (sysName , sysDescr , sysLocation , and sysContact ). |
setsnmpdetails
Command |
setsnmpdetails <name> <description> <location> <contact> Description Sets all SNMP text strings (name, description, location, contact).
|
---|
If you need different settings for individual agents, use system property references, in the form: ${property.name} . (These substitute for their associated value, on a per-node basis.) |
The ${node-id} property is synthetic and not a real system property — it will be replaced by the node ID of the Rhino node each agent is running in. |
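For example, a hypothetical invocation that gives each node a distinct sysName via the synthetic ${node-id} property (all other values are illustrative):

$ ./rhino-console setsnmpdetails 'rhino-${node-id}' 'Rhino SLEE' 'Server room A' 'ops@example.com'

The single quotes prevent the shell from expanding ${node-id} before Rhino sees it.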
Exporting MIB Files
To export MIBs, use the following rhino-console command.
What are MIBs?
Management Information Base (MIB) files contain descriptions of the OID hierarchy that SNMP uses to interact with statistics and notifications. |
exportmibs
Command |
exportmibs <dir> Description Exports current SNMP statistics configuration as MIB files to the specified directory.
Once deployed, a service always gets the same OIDs (see Configuring SNMP OID mappings). SNMP management clients usually provide a tool for using or importing the information from MIBs. |
---|---|
Example |
[Rhino@localhost (#19)] exportmibs mibs Writing MIB exports to: /home/user/rhino/client/bin/mibs - writing parameter set type "ActivityHandler" to ACTIVITYHANDLER.MIB - writing parameter set type "ObjectPools" to OBJECTPOOLS.MIB - writing parameter set type "Events" to EVENTS.MIB - writing RHINO-NOTIFICATIONS.MIB - writing parameter set type "Transactions" to TRANSACTIONS.MIB - writing parameter set type "Limiters" to LIMITERS.MIB - writing parameter set type "Savanna-Membership" to SAVANNA-MEMBERSHIP.MIB - writing parameter set type "SystemInfo" to SYSTEMINFO.MIB - writing parameter set type "Services" to SERVICES.MIB - writing parameter set type "EventRouter" to EVENTROUTER.MIB - writing parameter set type "MemDB-Local" to MEMDB-LOCAL.MIB - writing parameter set type "JDBCDatasource" to JDBCDATASOURCE.MIB - writing parameter set type "LockManagers" to LOCKMANAGERS.MIB - writing parameter set type "Activities" to ACTIVITIES.MIB - writing parameter set type "StagingThreads-TM" to STAGINGTHREADS-TM.MIB - writing parameter set type "StagingThreads-Misc" to STAGINGTHREADS-MISC.MIB - writing parameter set type "StagingThreads" to STAGINGTHREADS.MIB - writing parameter set type "LicenseAccounting" to LICENSEACCOUNTING.MIB - writing parameter set type "JVM" to JVM.MIB - writing parameter set type "Savanna-Group" to SAVANNA-GROUP.MIB - writing parameter set type "MemDB-Replicated" to MEMDB-REPLICATED.MIB 21 MIBs exported.
|
Configuring SNMP Notifications
To enable, disable, and specify where to send SNMP notifications, use the following rhino-console commands.
enablesnmpnotifications
Command |
enablesnmpnotifications Description Enables SNMP notification sending (while the SNMP agent is active). |
---|
disablesnmpnotifications
Command |
disablesnmpnotifications Description Disables SNMP notification sending. |
---|
addsnmptarget
Command |
addsnmptarget <v2c|v3> <address> Description Adds the target address for SNMP notifications. |
||
---|---|---|---|
Example |
To send version v2c notifications to 127.0.0.1 port 162:
[Rhino@localhost (#8)] addsnmptarget v2c udp:127.0.0.1/162
Added SNMP notifications target: v2c:udp:127.0.0.1/162
|
removesnmptarget
Command |
removesnmptarget <target> Description Removes the specified SNMP notification target. |
---|---|
Example |
To remove a target:
[Rhino@localhost (#10)] removesnmptarget v2c:udp:127.0.0.1/162
Removed SNMP notifications target: v2c:udp:127.0.0.1/162 |
listsnmpnotificationtypes
Command |
listsnmpnotificationtypes Description Lists the notification types supported for SNMP notification type filtering. |
---|---|
Example |
To list notification types:
[Rhino@localhost (#1)] listsnmpnotificationtypes
Supported SNMP Notification types:
AlarmNotification
LogNotification
LogRolloverNotification
ResourceAdaptorEntityStateChangeNotification
ServiceStateChangeNotification
SleeStateChangeNotification
TraceNotification
UsageNotification |
setsnmpnotificationenabled
Command |
setsnmpnotificationenabled <type> <true|false> Description Specifies whether the notification type should be forwarded by the SNMP subsystem. |
---|---|
Example |
To disable forwarding of SleeStateChangeNotification:
[Rhino@localhost (#4)] setsnmpnotificationenabled SleeStateChangeNotification false
SNMP notifications for type 'SleeStateChangeNotification' are now disabled. |
These settings have no effect if SNMP notification delivery is disabled globally. |
The notification types that can be configured to generate SNMP traps are:
- AlarmNotification
- LogNotification
- LogRolloverNotification
- ResourceAdaptorEntityStateChangeNotification
- ServiceStateChangeNotification
- SleeStateChangeNotification
- TraceNotification
- UsageNotification
Frequently it is desirable to send SNMP traps only for AlarmNotification and UsageNotification. In other deployments, ResourceAdaptorEntityStateChangeNotification, ServiceStateChangeNotification, and SleeStateChangeNotification are also wanted.
LogNotification and, to a lesser degree, TraceNotification will cause performance degradation due to additional platform load. |
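For example, a sketch that keeps only alarm and usage traps by turning off two of the higher-volume types (which types to disable is deployment specific):

$ ./rhino-console setsnmpnotificationenabled TraceNotification false
$ ./rhino-console setsnmpnotificationenabled LogNotification false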
Notification configuration and snmpstatus
Below is an example of output from the snmpstatus command showing notification configuration. Notification configuration ========================== Notification targets: v2c:udp:127.0.0.1/162 v3:udp:127.0.0.1/163 TraceNotification forwarded UsageNotification forwarded ResourceAdaptorEntityStateChangeNotification forwarded LogNotification forwarded SleeStateChangeNotification forwarded ServiceStateChangeNotification forwarded LogRolloverNotification forwarded AlarmNotification forwarded |
Thread pooling
By default, Rhino uses a thread pool for SNMP notifications. The pool can be configured by setting the notifications.notification_threads
system property.
Value | Effect
---|---|
0 | Disable notification thread pooling; deliver notifications on the calling thread.
1 | Default: a single dedicated thread for notification delivery.
>1 | Thread pool of N threads for notification delivery.
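For example, a hypothetical configuration that uses a pool of four delivery threads, assuming the property is supplied to the node as a standard JVM -D argument:

-Dnotifications.notification_threads=4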
Removing the Log Notification Appender
To safely remove the Log Notification Appender, use the following rhino-console commands.
What is the Log Notification Appender?
Introduced in Rhino 2.2, the Log Notification Appender is a log appender that generates SNMP notifications from log messages at or above a specified threshold (by default: |
removeappenderref
Command |
removeappenderref <logKey> <appenderName> Description Removes an appender for a log key. Required Arguments logKey The log key of the logger. appenderName The name of the Appender. |
---|---|
Example |
[Rhino@localhost (#40)] removeappenderref root LogNotification Done. |
Specific log keys and levels
If you only want notifications from specific log keys, after removing the appender reference from the root log key, you can add the appender back for just those log keys. The appender can also be configured with a different notification-generation log level. However, we strongly recommend keeping the Log Notification Appender configured at its default threshold, as a catch-all for errors and warnings without specific alarms. OpenCloud does not support Log Notification Appender configurations that cause excessive notification generation (such as a very low notification threshold). |
Configuring SNMP OID Mappings
To list, set, create, remove and remove inactive parameter set type → OID mappings, use the following rhino-console commands.
listsnmpoidmappings
Command |
listsnmpoidmappings [parameter set type name|-inactive] Description Lists the current Parameter Set Type -> OID mappings. If a parameter set type name is specified, then only the OID mapping associated with that parameter set type are listed, otherwise all mappings are listed. The -inactive option limits the listing to only inactive mappings. |
---|---|
Example |
[Rhino@localhost (#20)] listsnmpoidmappings Parameter Set Type OID Mappings ================================ 1.3.6.1.4.1.19808.2.1.1 (active) Events 1.3.6.1.4.1.19808.2.1.2 (active) Activities 1.3.6.1.4.1.19808.2.1.3 (active) StagingThreads 1.3.6.1.4.1.19808.2.1.4 (active) LockManagers 1.3.6.1.4.1.19808.2.1.5 (active) Services 1.3.6.1.4.1.19808.2.1.6 (active) Transactions 1.3.6.1.4.1.19808.2.1.7 (active) ObjectPools 1.3.6.1.4.1.19808.2.1.8 (active) SystemInfo 1.3.6.1.4.1.19808.2.1.9 (active) MemDB-Local 1.3.6.1.4.1.19808.2.1.10 (active) MemDB-Replicated 1.3.6.1.4.1.19808.2.1.12 (active) LicenseAccounting 1.3.6.1.4.1.19808.2.1.13 (active) ActivityHandler 1.3.6.1.4.1.19808.2.1.14 (active) JVM 1.3.6.1.4.1.19808.2.1.15 (active) EventRouter 1.3.6.1.4.1.19808.2.1.16 (active) JDBCDatasource 1.3.6.1.4.1.19808.2.1.17 (active) Limiters 1.3.6.1.4.1.19808.2.1.18 (active) Savanna-Membership 1.3.6.1.4.1.19808.2.1.19 (active) Savanna-Group 1.3.6.1.4.1.19808.2.1.20 (active) StagingThreads-TM 1.3.6.1.4.1.19808.2.1.21 (active) StagingThreads-Misc |
setsnmpoidmapping
Command |
setsnmpoidmapping [-namespace] <parameter set type name> <-oid <oid>|-auto|-none> Description Sets or clears the OID used for the specified parameter set type. The -oid option sets the mapping to a specific OID, while the -auto option auto-assigns an available OID. The -none option clears any existing mapping.
|
||
---|---|---|---|
Example |
[Rhino@localhost (#22)] setsnmpoidmapping JVM 1.3.6.1.4.1.19808.2.1.14 |
createsnmpmappingconfig
Command |
createsnmpmappingconfig [-namespace] <parameter set type name> Description Create a new SNMP mapping configuration for the specified parameter set type. The mapping is created in the global environment unless the optional -namespace argument is used, in which case the mapping is created in the current namespace instead. |
---|---|
Example |
[Rhino@localhost (#23)] createsnmpmappingconfig Usage.Services.SbbID[name=UsageTestSbb,vendor=OpenCloud,version=1.0] SNMP mapping config for parameter set type Usage.Services.SbbID[name=UsageTestSbb,vendor=OpenCloud,version=1.0] created |
After creation, the OID mapping for the specified parameter set type is in the cleared state. The setsnmpoidmapping command can then be used to set the OID mapping. |
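For example, a sketch combining the two commands, using the parameter set type from the example above and letting Rhino auto-assign an OID (the prompt numbers are illustrative):

[Rhino@localhost (#26)] createsnmpmappingconfig Usage.Services.SbbID[name=UsageTestSbb,vendor=OpenCloud,version=1.0]
[Rhino@localhost (#27)] setsnmpoidmapping Usage.Services.SbbID[name=UsageTestSbb,vendor=OpenCloud,version=1.0] -auto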
removesnmpmappingconfig
Command |
removesnmpmappingconfig <parameter set type name> Description Remove the SNMP mapping configuration for the specified parameter set type |
---|---|
Example |
[Rhino@localhost (#24)] removesnmpmappingconfig Usage.Services.SbbID[name=UsageTestSbb,vendor=OpenCloud,version=1.0] SNMP mapping config for parameter set type Usage.Services.SbbID[name=UsageTestSbb,vendor=OpenCloud,version=1.0] removed |
removeinactivesnmpmappingconfigs
Command |
removeinactivesnmpmappingconfigs Description Removes all SNMP mapping configurations that are currently inactive |
---|---|
Example |
[Rhino@localhost (#25)] removeinactivesnmpmappingconfigs
Removing mapping configuration for parameter set type Activities
Removing mapping configuration for parameter set type Events
Removed 2 mapping configurations |
Configuring SNMP Counter Mappings
listsnmpcountermappings
Command |
listsnmpcountermappings [parameter set type name] Description Lists the current Parameter Set Type + Counter Name -> Index mappings. If a parameter set type name is specified, then only the mappings associated with that parameter set type are listed, otherwise all mappings are listed. |
---|---|
What it does |
Lists the current Parameter Set Type + Counter Name to Index mappings used to represent SNMP statistics from each parameter set. Includes three columns of information:
|
Example |
[Rhino@localhost (#20)] listsnmpcountermappings Transactions
Counter Mappings
================
Transactions 2 active
Transactions 3 committed
Transactions 4 rolledBack
Transactions 5 started |
setsnmpcountermapping
Command |
setsnmpcountermapping [-namespace] <parameter set type name> <counter name> <-index <index>|-auto|-none> Description Sets or clears the index used for the specified parameter set type and counter. The -index option sets the mapping to a specific index, while the -auto option auto-assigns an available index. The -none option clears any existing mapping.
|
---|---|
What it does |
Sets or clears SNMP counter mappings. |
Example |
[Rhino@localhost (#22)] setsnmpcountermapping Services activeRootSbbs 3 SNMP counter mapping for activeRootSbbs set to 3 |
Rhino SNMP OID Mappings
Static Rhino monitoring parameter sets
All of the statically defined parameter set OID mappings that are available in a clean install of Rhino are listed below.
This is the list of the base OIDs that will be used to represent stats from each parameter set.
Parameter Set Type OID Mappings
================================
1.3.6.1.4.1.19808.2.1.1 (active) Events
1.3.6.1.4.1.19808.2.1.2 (active) Activities
1.3.6.1.4.1.19808.2.1.3 (active) StagingThreads
1.3.6.1.4.1.19808.2.1.4 (active) LockManagers
1.3.6.1.4.1.19808.2.1.5 (active) Services
1.3.6.1.4.1.19808.2.1.6 (active) Transactions
1.3.6.1.4.1.19808.2.1.7 (active) ObjectPools
1.3.6.1.4.1.19808.2.1.9 (active) MemDB-Local
1.3.6.1.4.1.19808.2.1.10 (active) MemDB-Replicated
1.3.6.1.4.1.19808.2.1.12 (active) LicenseAccounting
1.3.6.1.4.1.19808.2.1.13 (active) ActivityHandler
1.3.6.1.4.1.19808.2.1.14 (active) JVM
1.3.6.1.4.1.19808.2.1.15 (active) EventRouter
1.3.6.1.4.1.19808.2.1.16 (active) JDBCDatasource
1.3.6.1.4.1.19808.2.1.17 (active) Limiters
1.3.6.1.4.1.19808.2.1.21 (active) StagingThreads-Misc
1.3.6.1.4.1.19808.2.1.22 (active) EndpointLimiting
1.3.6.1.4.1.19808.2.1.23 (active) ExecutorStats
1.3.6.1.4.1.19808.2.1.24 (active) TimerFacility
1.3.6.1.4.1.19808.2.1.25 (active) MemDB-Timestamp
1.3.6.1.4.1.19808.2.1.26 (active) PooledByteArrayBuffer
1.3.6.1.4.1.19808.2.1.27 (active) UnpooledByteArrayBuffer
Static Rhino monitoring parameter set counter mappings
The static counter mappings are listed below.
Counter Mappings ================ Activities 2 created Activities 3 ended Activities 4 rejected Activities 5 active Activities 6 startSuspended Activities 7 suspendActivity ActivityHandler 2 txCreate ActivityHandler 3 txFire ActivityHandler 4 txEnd ActivityHandler 5 nonTxCreate ActivityHandler 6 nonTxFire ActivityHandler 7 nonTxEnd ActivityHandler 8 nonTxLookup ActivityHandler 9 txLookup ActivityHandler 10 nonTxLookupMiss ActivityHandler 11 txLookupMiss ActivityHandler 12 ancestorCount ActivityHandler 13 gcCount ActivityHandler 14 generationsCollected ActivityHandler 15 activitiesCollected ActivityHandler 16 activitiesUnclean ActivityHandler 17 activitiesScanned ActivityHandler 18 administrativeRemove ActivityHandler 19 livenessQueries ActivityHandler 20 timersSet ActivityHandler 21 timersCancelled ActivityHandler 22 localLockRequests ActivityHandler 23 foreignLockRequests ActivityHandler 24 create ActivityHandler 25 end ActivityHandler 26 fire ActivityHandler 27 lookup ActivityHandler 28 lookupMiss ActivityHandler 29 churn ActivityHandler 30 liveCount ActivityHandler 31 tableSize ActivityHandler 32 timerCount ActivityHandler 33 lockRequests EndpointLimiting 2 submitted EndpointLimiting 3 accepted EndpointLimiting 4 userAccepted EndpointLimiting 5 userRejected EndpointLimiting 6 licenseRejected EventRouter 2 eventHandlerStages EventRouter 3 rollbackHandlerStages EventRouter 4 cleanupStages EventRouter 5 badGuyHandlerStages EventRouter 6 vEPs EventRouter 7 rootSbbFinds EventRouter 8 sbbsResolved EventRouter 9 sbbCreates EventRouter 10 sbbExceptions EventRouter 11 processingRetrys Events 2 accepted Events 3 rejected Events 4 failed Events 5 successful Events 6 rejectedQueueFull Events 7 rejectedQueueTimeout Events 8 rejectedOverload ExecutorStats 2 executorTasksExecuted ExecutorStats 3 executorTasksExecuting ExecutorStats 4 executorTasksRejected ExecutorStats 5 executorTasksSubmitted ExecutorStats 6 executorTasksWaiting ExecutorStats 7 executorThreadsIdle ExecutorStats 8 executorThreadsTotal JDBCDatasource 2 create JDBCDatasource 3 removeIdle JDBCDatasource 4 removeOverflow JDBCDatasource 5 removeError JDBCDatasource 6 getRequest JDBCDatasource 7 getSuccess JDBCDatasource 8 getTimeout JDBCDatasource 9 getError JDBCDatasource 10 putOk JDBCDatasource 11 putOverflow JDBCDatasource 12 putError JDBCDatasource 13 inUseConnections JDBCDatasource 14 idleConnections JDBCDatasource 15 pendingConnections JDBCDatasource 16 totalConnections JDBCDatasource 17 maxConnections JVM 2 heapUsed JVM 3 heapCommitted JVM 4 heapInitial JVM 5 heapMaximum JVM 6 nonHeapUsed JVM 7 nonHeapCommitted JVM 8 nonHeapInitial JVM 9 nonHeapMaximum JVM 10 classesCurrentLoaded JVM 11 classesTotalLoaded JVM 12 classesTotalUnloaded LicenseAccounting 2 accountedUnits LicenseAccounting 3 unaccountedUnits Limiters 2 unitsUsed Limiters 3 unitsRejected Limiters 4 unitsRejectedByParent LockManagers 2 locksAcquired LockManagers 3 locksReleased LockManagers 4 lockWaits LockManagers 5 lockTimeouts LockManagers 6 knownLocks LockManagers 7 acquireMessages LockManagers 8 abortMessages LockManagers 9 releaseMessages LockManagers 10 migrationRequestMessages LockManagers 11 migrationReleaseMessages MemDB-Local 2 committedSize MemDB-Local 3 maxCommittedSize MemDB-Local 4 churnSize MemDB-Local 5 cleanupCount MemDB-Replicated 2 committedSize MemDB-Replicated 3 maxCommittedSize MemDB-Replicated 4 churnSize MemDB-Replicated 5 cleanupCount MemDB-Timestamp 2 waitingThreads MemDB-Timestamp 3 unexposedCommits ObjectPools 2 added ObjectPools 3 removed 
ObjectPools 4 overflow ObjectPools 5 miss ObjectPools 6 size ObjectPools 7 capacity ObjectPools 8 pruned PooledByteArrayBuffer 2 out PooledByteArrayBuffer 3 in PooledByteArrayBuffer 4 added PooledByteArrayBuffer 5 removed PooledByteArrayBuffer 6 overflow PooledByteArrayBuffer 7 miss PooledByteArrayBuffer 8 poolSize PooledByteArrayBuffer 9 bufferSize PooledByteArrayBuffer 10 poolCapacity Services 2 rootSbbsCreated Services 3 rootSbbsRemoved Services 4 activeRootSbbs StagingThreads 2 itemsAdded StagingThreads 3 itemsCompleted StagingThreads 4 queueSize StagingThreads 5 numThreads StagingThreads 6 availableThreads StagingThreads 7 minThreads StagingThreads 8 maxThreads StagingThreads 9 activeThreads StagingThreads 10 peakThreads StagingThreads 11 dropped StagingThreads-Misc 2 itemsAdded StagingThreads-Misc 3 itemsCompleted StagingThreads-Misc 4 queueSize StagingThreads-Misc 5 numThreads StagingThreads-Misc 6 availableThreads StagingThreads-Misc 7 minThreads StagingThreads-Misc 8 maxThreads StagingThreads-Misc 9 activeThreads StagingThreads-Misc 10 peakThreads StagingThreads-Misc 11 dropped TimerFacility 2 cascadeOverflow TimerFacility 3 cascadeWheel1 TimerFacility 4 cascadeWheel2 TimerFacility 5 cascadeWheel3 TimerFacility 6 jobsExecuted TimerFacility 7 jobsRejected TimerFacility 8 jobsScheduled TimerFacility 9 jobsToOverflow TimerFacility 10 jobsToWheel0 TimerFacility 11 jobsToWheel1 TimerFacility 12 jobsToWheel2 TimerFacility 13 jobsToWheel3 TimerFacility 14 jobsWaiting TimerFacility 15 tasksCancelled TimerFacility 16 tasksFixedDelay TimerFacility 17 tasksFixedRate TimerFacility 18 tasksImmediate TimerFacility 19 tasksOneShot TimerFacility 20 tasksRepeated TimerFacility 21 ticks Transactions 2 active Transactions 3 started Transactions 4 committed Transactions 5 rolledBack UnpooledByteArrayBuffer 2 out UnpooledByteArrayBuffer 3 in UnpooledByteArrayBuffer 4 bytesAllocated UnpooledByteArrayBuffer 5 bytesDiscarded
Management Tools
This section provides an overview of tools included with Rhino for system administrators to manage the Rhino SLEE.
Topics
Using the command-line console, Rhino Element Manager, Apache Ant scripting and the Rhino Remote API. |
|
JMX M-lets including the JMX remote adaptor. |
|
Also review the memory considerations when using the management tools, especially when running the Rhino cluster and management tools on the same host. |
Command-Line Console (rhino-console)
The Rhino SLEE command console (rhino-console
) is a command-line shell which supports both interactive and batch-file commands to manage and configure the Rhino SLEE.
See also the instructions to configure, log into, select a management command from, and configure failover for the command console. |
Below are details on the usage of the command-line console, the available commands, the Java archives required to run the command line console, and the security configuration.
rhino-console usage
The command console takes the following arguments:
Usage: rhino-console <options> <command> <parameters>
Valid options:
  -? or --help   - Display this message
  -h <host>      - Rhino host
  -p <port>      - Rhino port
  -u <username>  - Username
  -w <password>  - Password, or "-" to prompt
  -D             - Display connection debugging messages
  -r <timeout>   - Initial reconnection retry period (in seconds). May be 0 to indicate that the client should reconnect forever.
  -n <namespace> - Set the initial active namespace
If no command is specified, client will start in interactive mode.
The help command can be run without connecting to Rhino.
If you don’t specify a command argument, the client starts in interactive mode. If you do give rhino-console
a command argument, it runs in non-interactive mode. (./rhino-console install
is the equivalent of entering install
in the interactive command shell.)
In interactive mode, the client reports alarms when they occur and includes the SLEE state and alarm count in the prompt. It only reports the SLEE state if the SLEE on any event-routing node is not RUNNING. This behaviour can be disabled by setting the system property "rhino.console.disable-listeners" to true in $CLIENT_HOME/etc/rhino-client-common.
|
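A guess at the corresponding change, assuming $CLIENT_HOME/etc/rhino-client-common is a shell fragment that assembles the client JVM options:

# hypothetical: append to the JVM options assembled in rhino-client-common
JVM_ARGS="$JVM_ARGS -Drhino.console.disable-listeners=true"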
Command categories
Below are the available categories of rhino-console
commands. Enter help <category name | command name substring>
for a list of available commands in each category.
Category | Description |
---|---|
auditing | Manage Rhino’s auditing subsystem
bindings | Manage component bindings
config | Import, export, and manage Rhino configuration
deployment | Deploy, undeploy, and view SLEE deployable units
general | Rhino console help and features
housekeeping | Housekeeping and maintenance functions
license | Install, remove, and view Rhino licenses
limiting | Manage Rhino’s limiting subsystem
logging | Configure Rhino’s internal logging subsystem
persistence | Configure Rhino’s external database persistence subsystem
profile | Manage profiles and profile tables
resources | Manage resource adaptors
security | Manage Rhino security
services | Manage services running in the SLEE
sleestate | Query and manage Rhino SLEE state
snmp | Manage Rhino SNMP agent configuration
thresholdrules | Manage threshold alarm rules
trace | View and set SLEE trace levels using the trace MBean
usage | View SLEE service usage statistics
usertransaction | Manage client-demarcated transaction boundaries
Java archives
The classes required to run the command console are packaged as a set of Java libraries. They include:
File/directory | Description |
---|---|
rhino-console.jar | Command-line client implementation
rhino-remote.jar | Rhino remote API
rhino-logging.jar | Rhino logging system
slee.jar | JAIN SLEE 1.1 API
$RHINO_HOME/client/lib | Third-party libraries such as jline, log4j and other dependencies
Security
The command-line console relies on the JMX Remote Adaptor for security.
For a detailed description of JMX security and MBean permission format, see Chapter 12 of the JMX 1.2 specification. See also Security. |
Configuring the Command Console
Generally, you will not need to configure the command console for Rhino (the instructions below are for custom use). |
Below are instructions on configuring ports and usernames and passwords for rhino-console
.
Configure rhino-console ports
If another application is occupying the default command-console port (1199), you can change the configuration to use a different port instead. For example, to use port 1299:
- Go to the $RHINO_BASE/client directory. (This directory will hereafter be referred to as $CLIENT_HOME.)
- Edit the $CLIENT_HOME/etc/client.properties file, to configure the RMI properties as follows:
rhino.remote.port=1299
- Edit the $RHINO_BASE/etc/defaults/config/config_variables file (and $RHINO_BASE/node-XXX/config/config_variables for any node directory that has already been created) to specify port numbers as follows:
RMI_MBEAN_REGISTRY_PORT=1299
You need to restart each Rhino node for these changes to take effect. |
Configure rhino-console usernames and passwords
To edit or add usernames and passwords for accessing Rhino with the command console, edit the $RHINO_BASE/rhino.passwd
file. Its format is:
username:password:rolelist
The role names must match roles defined in the $RHINO_BASE/etc/defaults/config/defaults.xml
file or those otherwise configured at runtime (see Security).
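For example, a hypothetical entry for a user jsmith assigned two illustrative roles might look like the following (the username, password, and role names are placeholders, not shipped defaults):
jsmith:changeme:admin,view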
You need to restart the Rhino node for these changes to take effect. |
Logging into the Command Console
Local
To log into the command console on the same host:
1 |
Go to the $CLIENT_HOME/bin directory. |
---|---|
2 |
Run ./rhino-console. You should see: Interactive Rhino Management Shell Rhino management console, enter 'help' for a list of commands [Rhino@localhost (#0)] |
Remote
To log into the command console on a remote machine:
1 |
On the remote Rhino machine, edit the security policy to allow connections from the machine where you want to build and deploy (by default, only local connections are allowed). To do this, edit the node's config/config_variables file and add the IP address of the build machine to the LOCALIPS variable: LOCALIPS="192.168.0.1 127.0.0.1 <other-IP-address>" |
---|---|
2 |
Copy the $RHINO_BASE/client directory from the Rhino machine to the build machine. |
3 |
On the build machine, edit the $CLIENT_HOME/etc/client.properties file to set: rhino.remote.host=<Address-of-the-Rhino-Machine> Now the rhino-console script will connect to the remote Rhino instance. |
Alternatively, you can run the client/bin/rhino-console -h <Address-of-the-Rhino-Machine> -p 1199 script on the build machine. |
Selecting a Management Command from the Command Console
As summarised on the Command-Line Console (rhino-console) page, you can view:
-
rhino-console
command usage and a list of command categories, by entering thehelp
command with the rhino-console script (./rhino-console --help
). -
help on a particular command, by entering
help
, specifying the command, within the console:help [command | command-type] get help on available commands
-
a list of
rhino-console
commands in a particular category, by enteringhelp <category name | command name substring>
. For example:[Rhino@localhost (#1)] help getclusterstate getclusterstate Display the current state of the Rhino Cluster
Version-specific commands
Console commands may depend on the Rhino version
Some rhino-console commands depend on the JAIN SLEE specification version that the target component supports, as the tracing example below illustrates. |
As an example of version-specific rhino-console
commands: between SLEE 1.0 and SLEE 1.1, underlying tracing has changed significantly. As per the SLEE 1.1 specification, the settracerlevel
command can only be used for SBBs, profile abstract classes and resource adaptors (and potentially other SLEE subsystems based on SLEE 1.1-compliant specifications).
As detailed below, the settracelevel
command has been deprecated in SLEE 1.1, replaced by settracerlevel
. However, you can still use settracelevel
to set the trace level of a SLEE 1.0-compliant component.
settracerlevel
(SLEE 1.1)
Console command: settracerlevel
Command |
settracerlevel <type> <notif-source> <tracer> <level> Set the trace level for a notification source's tracer |
---|---|
Example |
$ ./rhino-console settracerlevel sbb "service=ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8], sbb=SbbID[name=ProxySbb,vendor=OpenCloud,version=1.8]" "" Finest Set trace level of SbbNotification[service=ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8], sbb=SbbID[name=ProxySbb,vendor=OpenCloud,version=1.8]] root tracer to Finest |
MBean operation: TraceMBean
The TraceMBean
management interface has been extended in SLEE 1.1 so that management clients can easily enable tracing for a particular NotificationSource
and tracer name:
MBean |
|
---|---|
SLEE-defined |
public void setTraceLevel(NotificationSource notificationSource, String tracerName, TraceLevel level) throws NullPointerException, UnrecognizedNotificationSourceException, InvalidArgumentException, ManagementException; |
Arguments |
For this operation, you need to specify the:
|
settracelevel
(SLEE 1.0)
Console command: settracelevel
Command |
settracelevel <type> <id> <level> Set the trace level for a component |
---|---|
Example |
$ ./rhino-console settracelevel sbb "name=FT PingSbb,vendor=OpenCloud,version=1.0" Finest set trace level of SbbID[name=FT PingSbb,vendor=OpenCloud,version=1.0] to Finest |
MBean operation: TraceMBean
This method has been deprecated, since it uses a ComponentID to identify a notification source (which is not compatible with the changes made to the tracing subsystem in SLEE 1.1). It has been replaced with setTracerLevel(NotificationSource, String, TraceLevel). |
MBean |
|
---|---|
SLEE-defined |
public void setTraceLevel(ComponentID id, Level traceLevel) throws NullPointerException, UnrecognizedComponentException, ManagementException |
Arguments |
For this operation, you need to specify the:
|
Configuring Failover for the Command Console
To configure the rhino-console
to connect automatically to another node in the cluster if the current node fails, edit $CLIENT_HOME/etc/client.properties
as follows (replacing hostN
with the host names of your cluster and 1199
with the respective port numbers):
rhino.remote.serverlist=host1:1199,host2:1199
Now, if a node in the cluster fails, the command console will automatically connect to the next node in the list. The following example shows failover from node-101
to node-102
:
Cluster state before failover
[Rhino@host1 (#2)] getclusterstate node-id active-alarms host node-type slee-state start-time up-time -- -- -- -- -- -- ------------------ 101 0 host1 event-router Running 20080430 18:02:35 0days,23h,16m,34s 102 0 host2 event-router Running 20080430 18:02:17 0days,23h,16m,16s 2 rows
Cluster state after failover
[Rhino@host2 (#2)] getclusterstate node-id active-alarms host node-type slee-state start-time up-time -- -- -- -- -- -- ------------------ 102 0 host2 event-router Running 20080430 18:02:17 0days,23h,16m,26s 1 rows
Command-console failover is only available for the Production version of Rhino. |
Rhino Element Manager (REM)
The Rhino Element Manager (REM) is a web-based console for monitoring, configuring, and managing a Rhino SLEE. REM provides a graphical user interface (GUI) for many of the management features documented in the Rhino Administration and Deployment Guide.
You can use REM to:
-
monitor a Rhino element (cluster nodes, activities, events, SBBs, alarms, resource adaptor entities, services, trace notifications, statistics, logs)
-
manage a Rhino element (SLEE state, alarms, deployment, profiles, resources), instances available in REM, and REM users
-
configure threshold rules, rate limiters, licenses, logging, and object pools
-
inspect activities, SBBs, timers, transactions, and threads
-
scan key information about multiple Rhino elements on a single screen.
For details, please see the Rhino Element Manager documentation. |
Scripting with Apache Ant
Apache Ant is a Java-based build tool (similar to Make). Ant projects are contained in XML files which specify a number of Ant targets and their dependencies. The body of each Ant target consists of a collection of Ant tasks. Each Ant task is a small Java program for performing common build operations (such as Java compilation and packaging).
Ant features
Ant includes the following features:
-
The configuration files are XML-based.
-
At runtime, a user can specify which Ant target(s) they want to run, and Ant will generate and execute tasks from a dependency tree built from the target(s).
-
Instead of a model extended with shell-based commands, Ant is extended using Java classes. Each task is run by an object that implements a particular task interface.
-
Ant build files are written in XML (and have the default name
build.xml
). -
Each build file contains one project and at least one (default) target.
-
Targets contain task elements.
-
Each task element of the build file can have an
id
attribute and can later be referred to by that value, which must be unique. -
A project can have a set of properties. These might be set in the build file by the
property
task, or might be set outside Ant. -
Dynamic or configurable build properties (such as path names or version numbers) are often handled through the use of a properties file associated with the Ant build file (often named
build.properties
).
For more about Ant, see the Ant project page. |
Writing an Ant build file by example
It is generally easier to write Ant build files by starting from a working example. The sample applications bundled with the Rhino SDK use Ant management scripts. They are good examples of how to automate the compilation and deployment steps. See rhino-connectivity/sip-*/build.xml
in your Rhino SDK installation folder. Two Rhino tools can be used to create the build.xml
file:
-
The Eclipse plugin creates a
build.xml
file that helps with building components and creating deployable unit jar files. -
The rhino-export tool creates a
build.xml
file that can redeploy deployed Rhino components to another Rhino instance. This feature is very useful during development — a typical approach is to manually install and configure a number of SLEE components, then use rhino-export to create a build.xml file, to automate provisioning steps.
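To make the overall shape of such a build file concrete, below is a minimal sketch. The project name, target layout, and deployable unit path are illustrative only; the Rhino task definitions and the body of the login target should be copied from a build.xml generated by rhino-export or the Eclipse plugin rather than written from scratch:
<project name="example-deploy" default="deploy">
    <!-- The <taskdef> declarations for <slee-management> and its sub-tasks,
         and the body of the login target, come from a generated build.xml. -->
    <target name="login">
        <!-- Establishes the management connection used by <slee-management>. -->
    </target>

    <target name="deploy" depends="login">
        <slee-management>
            <install srcfile="units/example-du.jar"
                     url="file:lib/example-du.jar"/>
        </slee-management>
    </target>
</project>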
Sample Rhino Ant Tasks
OpenCloud has developed custom Ant tasks for Rhino which can be used in Ant build scripts to deploy and configure SLEE components, including: packaging, deployment, services, resource adaptors, and profiles.
Below are examples of specific Ant tasks: install, createraentity, and activateservice.
The full set of Rhino Ant management tasks is available in the Rhino Management API. |
install
install
is a Rhino management sub-task for installing deployable units.
install
takes the following Ant parameters (attributes for Ant tasks).
Parameter | Description | Required | Default |
---|---|---|---|
failonerror |
Flag to control failure behaviour.
|
No. |
Taken from the Rhino management parent task. |
url |
URL of the deployable unit to install. |
Not if srcfile is specified. |
|
srcfile |
Path to the deployable unit to install. |
Not if url is specified. |
For example, to install a deployable unit with a SIP resource adaptor, the build.xml
file would contain:
<target name="install-ocjainsip-1.2-ra-du" depends="login">
<slee-management>
<install srcfile="units/ocjainsip-1.2-ra.jar"
url="file:lib/ocjainsip-1.2-ra.jar"/>
</slee-management>
</target>
createraentity
createraentity
is a Rhino management sub-task for creating resource adaptor entities.
createraentity
takes the following Ant parameters:
Parameter | Description | Required | Default |
---|---|---|---|
failonerror |
Flag to control failure behaviour.
|
No. |
Taken from the Rhino management parent task. |
entityname |
Name of the resource adaptor entity to create — must be unique within the SLEE. |
Yes. |
|
resourceadaptorid |
Canonical name of the resource adaptor component from which the entity should be created. |
Only required (or allowed) if the component nested element is not present. |
|
properties |
Properties to be set on the resource adaptor. |
No. |
|
component |
Element that identifies the resource adaptor component from which the resource adaptor entity should be created. Available as a nested element. (See sleecomponentelement.) |
Only required (or allowed) if the resourceadaptorid attribute is not present. |
For example, to create a SIP resource adaptor entity, the build.xml file would contain:
<target name="create-ra-entity-sipra" depends="install-ocjainsip-1.2-ra-du">
<slee-management>
<createraentity entityname="sipra"
properties="ListeningPoints=0.0.0.0:5060/udp;0.0.0.0:5060/tcp,ExtensionMethods=,OutboundProxy=, UDPThreads=1,TCPThreads=1,OffsetPorts=False,PortOffset=101,RetransmissionFilter=False, AutomaticDialogSupport=False,Keystore=sip-ra-ssl.keystore,KeystoreType=jks,KeystorePassword=, Truststore=sip-ra-ssl.truststore,TruststoreType=jks,TruststorePassword=,CRLURL=,CRLRefreshTimeout=86400, CRLLoadFailureRetryTimeout=900,CRLNoCRLLoadFailureRetryTimeout=60,ClientAuthentication=NEED, MaxContentLength=131072">
<component name="OCSIP" vendor="Open Cloud" version="1.2"/>
</createraentity>
<bindralinkname entityname="sipra" linkname="OCSIP"/>
</slee-management>
</target>
sleecomponentelement
A sleecomponentelement
is an XML element that can be nested as a child of some other Ant tasks, to give them a SLEE-component reference. It takes the following form:
<component name="name" vendor="vendor" version="version"/>
Below is the DTD definition:
<!ELEMENT component EMPTY>
<!ATTLIST component
    id      ID    #IMPLIED
    version CDATA #IMPLIED
    name    CDATA #IMPLIED
    type    CDATA #IMPLIED
    vendor  CDATA #IMPLIED>
activateservice
activateservice
is a Rhino management sub-task for activating services.
activateservice
takes the following Ant parameters.
Parameter | Description | Required | Default |
---|---|---|---|
failonerror |
Flag to control failure behaviour.
|
No. |
Taken from the Rhino management parent task. |
serviceid |
Canonical name of the service to activate. |
Only required (or allowed) if the component nested element is not present. |
|
component |
Element that identifies the service to activate. Available as a nested element. (See sleecomponentelement.) |
Only required (or allowed) if the serviceid attribute is not present. |
|
nodes |
Comma-separated list of node IDs on which the service should be activated. |
No. |
If not specified, the service is activated on all currently live Rhino event router nodes. |
For example, to activate three services based on the SIP protocol, the build.xml
file might contain:
<target name="activate-services" depends="install-sip-ac-location-service-du,install-sip-registrar-service-du,install-sip-proxy-service-du">
<slee-management>
<activateservice>
<component name="SIP AC Location Service" vendor="Open Cloud" version="1.5"/>
</activateservice>
<activateservice>
<component name="SIP Registrar Service" vendor="Open Cloud" version="1.5"/>
</activateservice>
<activateservice>
<component name="SIP Proxy Service" vendor="Open Cloud" version="1.5"/>
</activateservice>
</slee-management>
</target>
abstractbase
Abstract base class extended by other sub tasks.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
activateraentity
A Rhino management sub task for activating Resource Adaptor Entities.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
entityname |
Name of the resource adaptor entity to activate. |
Yes. |
nodes |
Comma-separated list of node IDs on which the resource adaptor entity should be activated. |
No. If omitted an attempt is made to activate the resource adaptor entity on all current cluster members. |
This task will throw a NonFatalBuildException if:
-
The task is run targeting an already active resource adaptor entity.
activateservice
A Rhino management sub task for activating Services.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
serviceid |
Canonical name of the service to activate. |
Only required/allowed if the component nested element is not present. |
nodes |
Comma-separated list of node IDs on which the service should be activated. |
No. If omitted an attempt is made to activate the service on all current cluster members. |
Element | Description | Required |
---|---|---|
component |
Identifies the service to activate. See |
Only required/allowed if the serviceid attribute is not present. |
This task will throw a NonFatalBuildException if:
-
The task is run targeting an already active service.
addappenderref
A Rhino management sub task for adding an appender to a log key.
Attribute | Description | Required |
---|---|---|
logkey |
Name of the log key to add the appender to. |
Yes. |
appendername |
Name of the appender to add. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
This task will throw a
NonFatalBuildException
if the appender cannot be added to the log key, e.g. the appender has already been added.
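A minimal usage sketch, with an illustrative appender name (the log key rhino.snmp is used as an example here; the appender would have been created by one of the createappender tasks described below):
<slee-management>
    <addappenderref logkey="rhino.snmp" appendername="MyFileAppender"/>
</slee-management>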
addloggercomponent
A Rhino management sub task for adding a component to a Logger.
Attribute | Type | Description | Required |
---|---|---|---|
logkey |
String |
Name of the log key to add the component to. |
Yes. |
pluginname |
String |
The Log4J plugin for this component. |
Yes. |
properties |
String |
A comma-separated list of configuration properties for the component. Each property is a key=value pair. Use this or a nested propertyset. |
No. |
failonerror |
boolean |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element |
Description |
Required |
component |
The components to add to this logger. |
No. |
propertyset |
Configuration properties for this component. Alternative to the properties attribute. |
No. |
To use this task provide the configuration of the appender as attributes and sub-elements:
<slee-management>
<addloggercomponent logkey="rhino.snmp" pluginname="DynamicThresholdFilter" properties="key=rhinoKey, onMatch=ACCEPT, onMismatch=NEUTRAL">
<component pluginname="KeyValuePair" properties="key=rhino, value=INFO"/>
</addloggercomponent>
</slee-management>
addpermissionmapping
A Rhino management sub task for adding a permission mapping.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
objectnamepattern |
MBean object name pattern as specified in javax.management.ObjectName |
Yes. |
member |
A MBean member (attribute or operation) |
Only if rhinopermissionsubcategory is specified. |
rhinopermissioncategory |
Primary part of the Rhino permission name |
Yes. |
rhinopermissionsubcategory |
Secondary (optional) part of the Rhino permission name |
Only if member is specified. |
This task will throw a NonFatalBuildException if:
-
The permission mapping already exists.
addpermissiontorole
A Rhino management sub task for adding a permission to a role.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
role |
Role name |
Yes. |
permissionName |
Permission name (taken from a permission mapping target as either PermissionCategory or PermissionCategory#PermissionSubcategory) |
Yes. |
permissionActions |
Permission actions to add, either "read" or "read,write" |
Yes. |
This task has no conditions under which it throws a NonFatalBuildException.
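A usage sketch; the role name is illustrative and must match an existing role, and the permission name follows the documented PermissionCategory#PermissionSubcategory form:
<slee-management>
    <addpermissiontorole role="deployers"
                         permissionName="PermissionCategory#PermissionSubcategory"
                         permissionActions="read,write"/>
</slee-management>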
addpersistenceinstanceref
A Rhino management sub task for adding a persistence instance reference to a database resource.
Attribute | Description | Required |
---|---|---|
resourcetype |
Type of resource to add the reference to. Must be one of "persistence" or "jdbc". |
Yes. |
resourcename |
Name of the resource to add the reference to. |
Yes. |
persistenceinstancename |
Name of the persistence instance to reference. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
This task will throw a
NonFatalBuildException
if the persistence instance is already referenced by the resource.
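A usage sketch with illustrative resource and persistence instance names:
<slee-management>
    <addpersistenceinstanceref resourcetype="jdbc"
                               resourcename="example-resource"
                               persistenceinstancename="example-persistence-instance"/>
</slee-management>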
addservicebindings
A Rhino management sub task for adding bindings to a service.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
service |
Identifies the service component. See |
Yes. |
binding |
Identifies a binding descriptor component. See |
Yes. May be repeated as many times as needed to add multiple bindings. |
mapping |
Specifies a mapping for a copied component. If the source component identifier equals a component that will be copied as a result of the binding, then the copied component will have the identity given by the target identifier, rather than a default value generated by the SLEE. See |
Yes. May be repeated as many times as needed to add multiple mappings. |
This task will throw a NonFatalBuildException if:
-
The task is run targeting a binding descriptor that has already been added to the service.
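A structural sketch with illustrative component names; the service and binding elements identify components in the same name/vendor/version form as a component element:
<slee-management>
    <addservicebindings>
        <service name="Example Service" vendor="Example" version="1.0"/>
        <binding name="example-bindings" vendor="Example" version="1.0"/>
    </addservicebindings>
</slee-management>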
bindralinkname
A Rhino management sub task for binding Resource Adaptor Entity Link Names.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
entityname |
Canonical name of the resource adaptor entity to bind to the link name. This attribute can reference the Ant property saved by a previous createraentity sub task. |
Yes. |
linkname |
The link name to bind to the resource adaptor entity. |
Yes. |
This task will throw a NonFatalBuildException if:
-
The task is run targeting an already bound link name.
cascadeuninstall
A Rhino management sub task for uninstalling a deployable unit and all dependencies recursively.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
url |
URL of deployable unit to uninstall. |
Either a url or component element must be declared. |
Element | Description | Required |
---|---|---|
component |
Identifies a component to be removed. The component must be a copied component. See |
Either a url or component element must be declared. |
This task will throw a NonFatalBuildException if:
-
The task is run targeting a non-existent deployable unit or component.
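A usage sketch with an illustrative deployable unit URL:
<slee-management>
    <cascadeuninstall url="file:lib/example-du.jar"/>
</slee-management>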
checkalarms
A Rhino management sub task for checking active alarms in the SLEE.
Lists any active alarms, and fails the build only if the failonerror attribute is set to true.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
commandline
A Rhino management sub task for interacting directly with the command line client.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
argument |
Used to specify individual command line arguments. See |
Yes. |
-
This task will never throw a
NonFatalBuildException
. It will always fail (throw aBuildException
) on errors.
configurelogger
A Rhino management sub task for configuring a logger.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
logKey |
The name of the logger to configure. |
Yes. |
level |
Logger level. |
No. |
additivity |
Boolean value indicating logger additivity. |
No. If not specified, the default value for loggers is used. |
asynchronous |
Boolean value indicating if the logger should be asynchronous. |
No. If not specified, the default value for loggers is used. |
Element |
Description |
Required |
appenderref |
The name of an appender to attach to the logger. Multiple appender references may be specified. See |
No. |
component |
A plugin component for this logger. Multiple components may be specified. See |
No. |
-
This task will never throw a
NonFatalBuildException
. It will always fail (throw aBuildException
) on errors.
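A usage sketch that sets an illustrative logger to the DEBUG level with additivity disabled (the log key is a placeholder):
<slee-management>
    <configurelogger logKey="rhino.example" level="DEBUG" additivity="false"/>
</slee-management>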
configureobjectpools
A Rhino management sub task for configuring object pools.
Attribute | Description | Required |
---|---|---|
initialPooledPoolSize |
The initial size of the object pool for objects in the pooled pool. |
No. |
pooledPoolSize |
The current size of the object pool for objects in the pooled pool. |
No. |
statePoolSize |
The current size of the object pool for objects in the state pool. |
No. |
persistentStatePoolSize |
The current size of the object pool for objects in the persistent state pool. |
No. |
readyPoolSize |
The current size of the object pool for objects in the ready pool. |
No. |
stalePoolSize |
The current size of the object pool for objects in the stale pool. |
No. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
configureratelimiter
A Rhino management sub task for configuring a rate limiter.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
name |
The name of the rate limiter to be configured. |
Yes. |
maxrate |
The maximum rate of tokens per second the rate limiter will allow. |
No. |
bypassed |
Whether this rate limiter will be used for limiting rate, if |
No. |
timeunit |
The rate limiter will allow maxrate tokens per timeunit. Allowed values are |
No. |
depth |
Controls the amount of "burstiness" allowed by the rate limiter. |
No. |
parent |
Sets the parent of the limiter, adding the limiter to its parent’s limiter hierarchy. |
No. |
nodes |
Comma-delimited list of nodes to apply this configuration to. Only the maxrate and bypassed configuration properties may be set on a per-node basis; all other properties are set uniformly across all nodes. |
No. |
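A usage sketch; the limiter name and values are illustrative:
<slee-management>
    <configureratelimiter name="ExampleRateLimiter" maxrate="100" depth="10"/>
</slee-management>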
configuresas
A Rhino management sub task for configuring SAS.
Attribute | Description | Required |
---|---|---|
server |
The hostname/address of the SAS server. If set, will override values in nested |
No. |
resourceIdentifier |
The resource-identifier of the SAS resource bundle to associate with events sent to the SAS server. |
No. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element |
Description |
Required |
server |
A SAS server host and optional port specification. See |
No. |
configuresaturationlimiter
A Rhino management sub task for configuring a queue saturation limiter.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
name |
The name of the queue saturation limiter to be configured. |
Yes. |
maxsaturation |
The maximum amount of saturation allowed in the staging queue before rejecting work, expressed as a percentage. |
No. |
bypassed |
Whether this limiter will be used for limiting. If |
No. |
parent |
Sets the parent of the limiter, adding the limiter to its parent’s limiter hierarchy. |
No. |
nodes |
Comma-delimited list of nodes to apply this configuration to. Only the bypassed configuration property may be set on a per-node basis; all other properties are set uniformly across all nodes. |
No. |
configurestagingqueues
A Rhino management sub task for configuring the staging queue.
Attribute | Description | Required |
---|---|---|
maximumSize |
Maximum size of the staging queue. |
No. If not specified, the last specified value is used; if there is no previously specified value, the default size is used (3000). |
maximumAge |
Maximum possible age of staging items, in milliseconds. Specify an age of -1 to ignore the age of staging items (i.e. staging items will never be discarded due to their age). |
No. If not specified, the last specified value is used; if there is no previously specified value, the default age is used (10000). |
threadCount |
Number of staging threads in the thread pool. |
No. If not specified, the last specified value is used; if there is no previously specified value, the default size is used (30). |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
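A usage sketch with illustrative values, enlarging the staging queue and thread pool:
<slee-management>
    <configurestagingqueues maximumSize="5000" maximumAge="15000" threadCount="50"/>
</slee-management>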
connectraentitylimiterendpoint
A Rhino management sub task for connecting an RA Entity limiter endpoint to a limiter.
Attribute | Description | Required |
---|---|---|
entityname |
Name of the resource adaptor entity. |
Yes. |
endpointname |
Name of the endpoint. |
Yes. |
limitername |
Name of the limiter to connect to. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
connectservicelimiterendpoint
A Rhino management sub task for connecting a Service limiter endpoint to a limiter.
Attribute | Description | Required |
---|---|---|
servicename |
Name of the service. |
Yes. |
servicevendor |
Vendor of the service. |
Yes. |
serviceversion |
Version of the service. |
Yes. |
sbbname |
Name of the sbb. |
Yes. |
sbbvendor |
Vendor of the sbb. |
Yes. |
sbbversion |
Version of the sbb. |
Yes. |
endpointname |
Name of the endpoint. |
Yes. |
limitername |
Name of the limiter to connect to. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
copycomponent
A Rhino management sub task for copying a component to a new target identity.
Attribute | Description | Required |
---|---|---|
type |
The component type. See |
Yes. |
installLevel |
The target install level for the copied component. Allowed values are: INSTALLED, VERIFIED, DEPLOYED. |
No. If not specified, defaults to DEPLOYED. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
source |
Identifies the source component. See |
Yes. |
target |
Identifies the component to create as a copy of the source component. See |
Yes. |
This task will throw a NonFatalBuildException if:
-
The task is run targeting a component that has already been copied from the given source.
createconsoleappender
A Rhino management sub task for creating a log appender with output directed to the console.
Attribute | Type | Description | Required |
---|---|---|---|
appendername |
String |
Name of the appender to create. This name must be unique. |
Yes. |
direct |
boolean |
Log directly to the output stream instead of via the System.out/err PrintWriter |
No. |
follow |
boolean |
Follow changes to the destination stream of System.out/err. Incompatible with |
No. |
target |
String |
Either "SYSTEM_OUT" or "SYSTEM_ERR". The default is "SYSTEM_OUT". |
No. |
ignoreexceptions |
boolean |
Log exceptions thrown by this appender, then ignore them. If set to false, propagate them to the caller (used to support selective appenders, e.g. FailoverAppender). |
No. |
failonerror |
boolean |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element |
Description |
Required |
filter |
A filter to select events that will be reported by this appender. |
No. |
layout |
The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used. |
No. |
-
This task will throw a
NonFatalBuildException
if the appender cannot be created, e.g. an appender with the same name already exists.
createdatabaseresource
A Rhino management sub task for creating a database resource.
Attribute | Description | Required |
---|---|---|
resourcetype |
Type of resource to create. Must be one of "persistence" or "jdbc". |
Yes. |
resourcename |
Name of the resource to create. This name must be unique. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
This task will throw a
NonFatalBuildException
if a resource with the same type and name already exists.
createfileappender
A Rhino management sub task for creating a log appender writing to a file opened in write-only mode.
Attribute | Type | Description | Required |
---|---|---|---|
appendername |
String |
Name of the appender to create. This name must be unique. |
Yes. |
filename |
String |
Name of log file to write to. |
Yes. |
append |
boolean |
When true (the default), records will be appended to the end of the file. When set to false, the file will be cleared before new records are written. |
No. If not specified, defaults to true. |
bufferedio |
boolean |
When true (the default), records will be written to a buffer, and the data will be written to disk when the buffer is full or, if immediateFlush is set, when the record is written. File locking cannot be used with bufferedIO. Performance tests have shown that using buffered I/O significantly improves performance, even if immediateFlush is enabled. |
No. If not specified, defaults to true. |
buffersize |
int |
When bufferedIO is true, this is the buffer size; the default is 8192 bytes. |
No. |
createondemand |
boolean |
The appender creates the file on demand: the file is only created when a log event passes all filters and is routed to this appender. |
No. If not specified, defaults to false. |
immediateflush |
boolean |
When set to true (the default), each write will be followed by a flush. This will guarantee the data is written to disk but could impact performance. |
No. If not specified, defaults to true. |
locking |
boolean |
When set to true, I/O operations will occur only while the file lock is held allowing FileAppenders in multiple JVMs and potentially multiple hosts to write to the same file simultaneously. This will significantly impact performance so should be used carefully. Furthermore, on many systems the file lock is "advisory" meaning that other applications can perform operations on the file without acquiring a lock. |
No. If not specified, defaults to false. |
ignoreexceptions |
boolean |
When set to true (the default), exceptions encountered while appending events are internally logged and then ignored. When set to false, exceptions are propagated to the caller instead. |
No. If not specified, defaults to true. |
pattern |
String |
The pattern to use for logging output. |
No. If not specified, the default is %m%n. |
failonerror |
boolean |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element |
Description |
Required |
filter |
A filter to select events that will be reported by this appender. |
No. |
layout |
The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used. |
No. |
-
This task will throw a
NonFatalBuildException
if the appender cannot be created, e.g. an appender with the same name already exists.
creategenericappender
A Rhino management sub task for creating a log appender.
Attribute | Type | Description | Required |
---|---|---|---|
appendername |
String |
Name of the appender to create. This name must be unique. |
Yes. |
pluginname |
String |
The Log4J plugin for this appender |
Yes. |
properties |
String |
A comma-separated list of configuration properties for the appender. Each property is a key=value pair. Use this or a nested propertyset. |
No. |
failonerror |
boolean |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element |
Description |
Required |
propertyset |
Configuration properties for this appender. Alternative to the properties attribute. |
No. |
filter |
A filter to select events that will be reported by this appender. |
No. |
layout |
The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used. |
No. |
component |
Additional components such as loggerFields for SyslogAppender, KeyValuePairs, etc. Multiple components may be specified. See |
No |
-
This task will throw a
NonFatalBuildException
if the appender cannot be created, e.g. an appender with the same name already exists.
To use this task provide the configuration of the appender as attributes and subelements:
<property name="fileName" value="baz"/>
<slee-management>
<createappender name="foo" properties="fileName=baz" pluginname="File"/>
<createappender name="bar" pluginname="File">
<propertyset>
<propertyref name="fileName"/>
</propertyset>
<component pluginname="filters">
<component pluginname="BurstFilter" properties="level=WARN,rate=3"/>
<component pluginname="ThresholdFilter">
<component pluginname="KeyValuePair" properties="key=rhino, value=INFO"/>
</component>
</component>
<component pluginname="PatternLayout" properties="pattern=%m"/>
</createappender>
</slee-management>
creategenericcomponent
Defines a logging component.
Attribute | Type | Description | Required |
---|---|---|---|
pluginname |
String |
The Log4J plugin name for this component. |
Yes. |
properties |
String |
A comma-separated list of configuration properties for the appender. Each property is a key=value pair. Use this or a nested propertyset. |
No. |
Element |
Description |
Required |
propertyset |
Configuration properties for this appender. Alternative to the properties attribute. |
No. |
component |
Additional components such as KeyValuePairs, etc. Multiple components may be specified. |
No |
createjdbcresourceconnectionpool
A Rhino management sub task for adding a connection pool configuration to a JDBC resource.
Attribute | Description | Required |
---|---|---|
resourcename |
Name of the JDBC resource. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
This task will throw a
NonFatalBuildException
if the JDBC resource already has a connection pool configuration.
createlimiter
A Rhino management sub task for creating a limiter.
Attribute | Description | Required |
---|---|---|
name |
Name of the limiter to create. |
Yes. |
limiterType |
The type of limiter to create. |
No. If not specified, defaults to RATE. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
This task will throw a
NonFatalBuildException
if a limiter already exists with the same name.
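A usage sketch creating a hypothetical rate limiter, which could then be configured with configureratelimiter:
<slee-management>
    <createlimiter name="ExampleRateLimiter" limiterType="RATE"/>
</slee-management>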
createlinkedcomponent
A Rhino management sub task for creating a virtual component that is a link to another component.
Attribute | Description | Required |
---|---|---|
type |
The component type. See |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
source |
Identifies the component to create as a link to the target component. See |
Yes. |
target |
Identifies the target component of the link. See |
Yes. |
This task will throw a NonFatalBuildException if:
-
The task is run with a source component that has already been linked to the given target.
creatememorymappedfileappender
A Rhino management sub task for creating a log appender writing to a memory-mapped file.
Attribute | Type | Description | Required |
---|---|---|---|
appendername |
String |
Name of the appender to create. This name must be unique. |
Yes. |
filename |
String |
The file to write to |
Yes. |
append |
boolean |
Append to the file if true, otherwise clear the file on open. |
No. |
immediateflush |
boolean |
Flush to disk after every message. Reduces the risk of data loss on system crash at the cost of performance. |
No. |
regionlength |
Integer |
The length of the mapped region. 256B-1GB. Default 32MB. |
No. |
ignoreexceptions |
boolean |
Log exceptions thrown by this appender, then ignore them. If set to false, propagate them to the caller (used to support selective appenders, e.g. FailoverAppender). |
No. |
failonerror |
boolean |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element |
Description |
Required |
filter |
A filter to select events that will be reported by this appender. |
No. |
layout |
The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used. |
No. |
-
This task will throw a
NonFatalBuildException
if the appender cannot be created, e.g. an appender with the same name already exists.
createnamespace
A Rhino management sub task for creating a deployment namespace.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
namespace |
The name of the namespace to create. |
Yes. |
This task will throw a NonFatalBuildException if:
-
The task is run with the name of a namespace that already exists.
createpersistenceinstance
A Rhino management sub task for creating a persistence instance that can be used by a database resource.
Attribute | Description | Required |
---|---|---|
name |
Name of the persistence instance to create. This name must be unique. |
Yes. |
datasourceclass |
Fully-qualified class name of the datasource class to be used by the persistence instance. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
configproperty |
Identifies a configuration property of the datasource class. See |
One configproperty element must be specified per config property. |
-
This task will throw a
NonFatalBuildException
if a persistence instance with the same name already exists.
createprofile
A Rhino management sub task for creating Profiles inside tables.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
profilename |
Name of the profile to create. |
Yes. |
tablename |
Name of the profile table in which the profile will be created. |
Yes. |
Element | Description | Required |
---|---|---|
profilevalue |
Assigns a value to a profile attribute once the profile has been created. See |
No. |
This task will throw a NonFatalBuildException if:
-
The task is run targeting an already existing profile.
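A minimal sketch with illustrative table and profile names; attribute values would be assigned with nested profilevalue elements:
<slee-management>
    <createprofile tablename="ExampleProfileTable" profilename="example-profile"/>
</slee-management>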
createprofiletable
A Rhino management sub task for creating Profile Tables.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
profilespec |
Canonical name of the profile specification from which the profile table should be created. |
Only required/allowed if the component nested element is not present. |
tablename |
Name of the profile table to create; this name must be unique. |
Yes. |
Element | Description | Required |
---|---|---|
component |
Identifies the profile specification component from which the profile table should be created. See |
Only required/allowed if the profilespec attribute is not present. |
This task will throw a NonFatalBuildException if:
-
The task is run targeting an already existing table.
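A usage sketch with a nested component element identifying a hypothetical profile specification:
<slee-management>
    <createprofiletable tablename="ExampleProfileTable">
        <component name="ExampleProfileSpec" vendor="Example" version="1.0"/>
    </createprofiletable>
</slee-management>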
createraentity
A Rhino management sub task for creating Resource Adaptor Entities.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
entityname |
Name of the resource adaptor entity to create; this name must be unique within the SLEE. |
Yes. |
resourceadaptorid |
Canonical name of the resource adaptor component from which the entity should be created. |
Only required/allowed if the component nested element is not present. |
properties |
Properties to be set on the resource adaptor. |
No. |
Element | Description | Required |
---|---|---|
component |
Identifies the resource adaptor component from which the resource adaptor entity should be created. See |
Only required/allowed if the resourceadaptorid attribute is not present. |
This task will throw a NonFatalBuildException if:
-
The task is run targeting an already existing resource adaptor entity.
createrandomaccessfileappender
A Rhino management sub task for creating a log appender writing to a file opened in RW mode.
Attribute | Type | Description | Required |
---|---|---|---|
appendername |
String |
Name of the appender to create. This name must be unique. |
Yes. |
filename |
String |
The file to write to |
Yes. |
append |
boolean |
Append to the file if true, otherwise clear the file on open. |
No. |
buffersize |
Integer |
The size of the write buffer. Defaults to 256kB. |
No. |
immediateflush |
boolean |
Flush to disk after every message. Reduces the risk of data loss on system crash at the cost of performance. |
No. |
ignoreexceptions |
boolean |
Log exceptions thrown by this appender, then ignore them. If set to false, propagate them to the caller (used to support selective appenders, e.g. FailoverAppender). |
No. |
failonerror |
boolean |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element |
Description |
Required |
filter |
A filter to select events that will be reported by this appender. |
No. |
layout |
The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used. |
No. |
-
This task will throw a
NonFatalBuildException
if the appender cannot be created, e.g. an appender with the same name already exists.
createrole
A Rhino management sub task for creating a role.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
role |
Role name |
Yes. |
baseRole |
Role name to copy permissions from |
No. |
This task will throw a NonFatalBuildException if:
-
The role already exists.
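A usage sketch creating a hypothetical role that copies its initial permissions from an existing role (both role names are illustrative):
<slee-management>
    <createrole role="deployers" baseRole="existing-role"/>
</slee-management>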
createrollingfileappender
A Rhino management sub task for creating a log appender writing to a series of files opened in write-only mode.
Attribute | Type | Description | Required |
---|---|---|---|
appendername |
String |
Name of the appender to create. This name must be unique. |
Yes. |
filename |
String |
The file to write to |
Yes. |
filepattern |
String |
The pattern of file names for archived log files. Dependent on the rollover policy used, typically contains a date pattern or %i for integer counter. |
Yes. |
append |
boolean |
Append to the file if true, otherwise clear the file on open. |
No. |
bufferedio |
boolean |
Write to an intermediate buffer to reduce the number of write() syscalls. |
No. |
buffersize |
Integer |
The size of the write buffer. Defaults to 256kB. |
No. |
createondemand |
boolean |
Only create the file when data is written |
No. |
immediateflush |
boolean |
Flush to disk after every message. Reduces the risk of data loss on system crash at the cost of performance. |
No. |
ignoreexceptions |
boolean |
Log exceptions thrown by this appender, then ignore them. If set to false, propagate them to the caller (used to support selective appenders, e.g. FailoverAppender). |
No. |
failonerror |
boolean |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element |
Description |
Required |
filter |
A filter to select events that will be reported by this appender. |
No. |
layout |
The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used. |
No. |
policy |
The rollover policy to determine when rollover should occur |
Yes. |
strategy |
The strategy for archiving log files. Strategies determine the name, location, number and compression of the archived logs. |
No. |
-
This task will throw a
NonFatalBuildException
if the appender cannot be created, e.g. an appender with the same name already exists.
createrollingrandomaccessfileappender
A Rhino management sub task for creating a log appender writing to a series of files opened in RW mode.
Attribute | Type | Description | Required |
---|---|---|---|
appendername |
String |
Name of the appender to create. This name must be unique. |
Yes. |
filename |
String |
The file to write to |
Yes. |
filepattern |
String |
The pattern of file names for archived log files. Dependent on the rollover policy used, typically contains a date pattern or %i for integer counter. |
Yes. |
append |
boolean |
Append to the file if true, otherwise clear the file on open. |
No. |
buffersize |
Integer |
The size of the write buffer. Defaults to 256kB. |
No. |
createondemand |
boolean |
Only create the file when data is written |
No. |
immediateflush |
boolean |
Flush to disk after every message. Reduces the risk of data loss on system crash at the cost of performance. |
No. |
ignoreexceptions |
boolean |
Log exceptions thrown by this appender, then ignore them. If set to false, propagate them to the caller (used to support selective appenders, e.g. FailoverAppender). |
No. |
failonerror |
boolean |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element |
Description |
Required |
filter |
A filter to select events that will be reported by this appender. |
No. |
layout |
The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used. |
No. |
policy |
The rollover policy to determine when rollover should occur |
Yes. |
strategy |
The strategy for archiving log files. Strategies determine the name, location, number and compression of the archived logs. |
No. |
-
This task will throw a
NonFatalBuildException
if the appender cannot be created, e.g. an appender with the same name already exists.
createsocketappender
A Rhino management sub task for creating a log socket appender.
Attribute | Type | Description | Required |
---|---|---|---|
appendername |
String |
Name of the appender to create. This name must be unique. |
Yes. |
remotehost |
String |
Name or IP address of the remote host to connect to. |
Yes. |
port |
int |
Port on the remote host to connect to. |
Yes. |
protocol |
String |
"TCP" (default), "SSL" or "UDP". |
No. If not specified, the default is TCP |
immediatefail |
boolean |
When set to true, log events will not wait to try to reconnect and will fail immediately if the socket is not available. |
Yes. |
immediateflush |
boolean |
When set to true, each write will be followed by a flush. This will guarantee the data is written to disk but could impact performance. |
No. If not specified, defaults to true. |
bufferedio |
boolean |
When true, events are written to a buffer and the data will be written to the socket when the buffer is full or, if immediateFlush is set, when the record is written. |
No. If not specified, defaults to true. |
buffersize |
int |
When bufferedIO is true, this is the buffer size; the default is 8192 bytes. |
No. |
reconnectiondelaymillis |
int |
If set to a value greater than 0, after an error the SocketManager will attempt to reconnect to the server after waiting the specified number of milliseconds. If the reconnect fails then an exception will be thrown (which can be caught by the application if ignoreExceptions is set to false). |
No. If not specified, the default is 0 |
connecttimeoutmillis |
int |
The connect timeout in milliseconds. The default is 0 (infinite timeout, like Socket.connect() methods). |
No. |
keystorelocation |
String |
The location of the KeyStore which is used to create an SslConfiguration |
No. |
keystorepassword |
String |
The password to access the KeyStore. |
No. |
truststorelocation |
String |
The location of the TrustStore which is used to create an SslConfiguration |
No. |
truststorepassword |
String |
The password of the TrustStore |
No. |
ignoreexceptions |
boolean |
The default is true, causing exceptions encountered while appending events to be internally logged and then ignored. When set to false, exceptions will be propagated to the caller instead. |
No. |
failonerror |
boolean |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element |
Description |
Required |
filter |
A filter to select events that will be reported by this appender. |
No. |
layout |
The layout to use to format log events. If no layout is supplied the default pattern layout of "%m%n" will be used. |
No. |
-
This task will throw a
NonFatalBuildException
if the appender cannot be created, e.g. an appender with the same name already exists.
createsyslogappender
A Rhino management sub task for creating a log socket appender with output formatted for consumption by a syslog daemon.
Attribute | Type | Description | Required |
---|---|---|---|
appendername |
String |
Name of the appender to create. This name must be unique. |
Yes. |
remotehost |
String |
Name or IP address of the remote host to connect to. |
Yes. |
port |
int |
Port on the remote host to connect to. |
Yes. |
advertise |
boolean |
Should the appender be advertised |
No |
appname |
String |
RFC-5424 APP-NAME to use if using the RFC-5424 record layout |
No |
enterprisenumber |
String |
The IANA enterprise number |
No |
facility |
String |
The facility to classify messages as. One of "KERN", "USER", "MAIL", "DAEMON", "AUTH", "SYSLOG", "LPR", "NEWS", "UUCP", "CRON", "AUTHPRIV", "FTP", "NTP", "AUDIT", "ALERT", "CLOCK", "LOCAL0", "LOCAL1", "LOCAL2", "LOCAL3", "LOCAL4", "LOCAL5", "LOCAL6", or "LOCAL7". |
No |
format |
String |
RFC-5424 or BSD |
No |
structureddataid |
String |
The RFC-5424 structured data ID to use if not present in the log message |
No |
includemdc |
boolean |
If true, include MDC fields in the RFC-5424 syslog record. Defaults to true. |
No |
mdcexcludes |
String |
A comma-separated list of MDC fields to exclude. Mutually exclusive with mdcincludes. |
No |
mdcincludes |
String |
A comma separated list of MDC fields to include. Mutually exclusive with mdcexcludes. |
No |
mdcrequired |
String |
A comma separated list of MDC fields that must be present in the log event for it to be logged. If any of these are not present the event will be rejected with a LoggingException. |
No |
mdcprefix |
String |
A string that will be prepended to each MDC key. |
No |
messageid |
String |
The default value to be used in the MSGID field of RFC-5424 records. |
No |
newline |
boolean |
Write a newline on the end of each syslog record. Defaults to false. |
No |
protocol |
String |
TCP, UDP or SSL. Defaults to TCP. |
No. |
buffersize |
Integer |
The size of the write buffer. Defaults to 256kB. |
No. |
connecttimeoutmillis |
Integer |
Maximum connection wait time in milliseconds if greater than 0. |
No. |
reconnectiondelaymillis |
Integer |
Maximum time to attempt reconnection for before throwing an exception. The default, 0, is to try forever. |
No. |
immediatefail |
boolean |
When set to true, log events will be rejected immediately if the socket is unavailable, instead of queuing. |
No. |
immediateflush |
boolean |
Flush to disk after every message. Reduces the risk of data loss on system crash at the cost of performance. |
No. |
ignoreexceptions |
boolean |
Log exceptions thrown by this appender, then ignore them. If set to false, propagate them to the caller (used to support selective appenders, e.g. FailoverAppender). |
No. |
failonerror |
boolean |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element |
Description |
Required |
filter |
A filter to select events that will be reported by this appender. |
No. |
layout |
The layout to use to format log events. Overrides the format attribute if set. Defaults to SyslogLayout. |
No. |
component |
Additional components such as loggerFields |
No |
-
This task will throw a
NonFatalBuildException
if the appender cannot be created, e.g. an appender with the same name already exists.
createusageparameterset
A Rhino management sub task for creating usage parameter sets.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
name |
Name of the usage parameter set to create. |
Yes. |
Element | Description | Required |
---|---|---|
sbbNotificationSource |
Identifies an SBB notification source. See |
One and only one of |
raEntityNotificationSource |
Identifies a resource adaptor entity notification source. See |
One and only one of |
profileTableNotificationSource |
Identifies a profile table notification source. See |
One and only one of |
This task will throw a NonFatalBuildException if:
-
The usage parameter set to be created already exists.
deactivateraentity
A Rhino management sub task for deactivating Resource Adaptor Entities.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
entityname |
Name of the resource adaptor entity to deactivate. |
Yes. |
nodes |
Comma-separated list of node IDs on which the resource adaptor entity should be deactivated. |
No. If omitted an attempt is made to deactivate the resource adaptor entity on all current cluster members. |
This task will throw a NonFatalBuildException if:
-
The task is run targeting an already deactivated entity.
-
The task is run targeting a non-existent entity.
deactivateservice
A Rhino management sub task for deactivating Services.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
serviceid |
Canonical name of the service to deactivate. |
Only required/allowed if the |
nodes |
Comma-separated list of node IDs on which the service should be deactivated. |
No. If omitted an attempt is made to deactivate the service on all current cluster members. |
Element | Description | Required |
---|---|---|
component |
Identifies the service to deactivate. See |
Only required/allowed if the |
-
The task is run targeting a service which is not active.
-
The task is run targeting a non-existent service.
deploycomponent
A Rhino management sub task for deploying an installed component across the cluster.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
component |
Identifies the component to deploy. See |
Yes. |
-
The task is run targeting an already deployed component.
deploydeployableunit
A Rhino management sub task for deploying components in an installed deployable unit across the cluster.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
url |
URL of deployable unit to deploy. |
Yes. |
disablerampup
A Rhino management sub task that disables the ramp up of limiter rate for the system input limiter.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
disablesymmetricactivationstatemode
A Rhino management sub task for disabling symmetric activation state mode.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
The task is run when the symmetric activation state mode is already disabled.
disconnectraentitylimiterendpoint
A Rhino management sub task for disconnecting an RA Entity limiter endpoint from a limiter.
Attribute | Description | Required |
---|---|---|
entityname |
Name of the resource adaptor entity. |
Yes. |
endpointname |
Name of the endpoint. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
disconnectservicelimiterendpoint
A Rhino management sub task for disconnecting a Service limiter endpoint.
Attribute | Description | Required |
---|---|---|
servicename |
Name of the service. |
Yes. |
servicevendor |
Vendor of the service. |
Yes. |
serviceversion |
Version of the service. |
Yes. |
sbbname |
Name of the sbb. |
Yes. |
sbbvendor |
Vendor of the sbb. |
Yes. |
sbbversion |
Version of the sbb. |
Yes. |
endpointname |
Name of the endpoint. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
enablerampup
A Rhino management sub task that configures and enables ramp up of limiter rate for the system input limiter.
Attribute | Description | Required |
---|---|---|
startrate |
The initial number of events per second for the system input limiter (a double). |
Yes. |
rateincrement |
The incremental number of events per second added to the allowed rate if Rhino is successfully processing work (a double). |
Yes. |
eventsperincrement |
The number of events processed before Rhino will add rateincrement events to the allowed rate (an integer). |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
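As an illustration, a sketch using hypothetical numbers: start the input limiter at 50 events per second, and add 10 events per second after every 1000 successfully processed events:
<enablerampup startrate="50.0" rateincrement="10.0" eventsperincrement="1000"/>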
enablesymmetricactivationstatemode
A Rhino management sub task for enabling symmetric activation state mode.
Attribute | Description | Required |
---|---|---|
templatenode |
The ID of the node to base symmetric state on. May be the string value 'any' to allow any node to be selected. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
The task is run when the symmetric activation state mode is already enabled.
importprofiles
A Rhino management sub task for importing previously exported profiles.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
filename |
Source file containing profiles to be imported. |
Yes. |
profile-table |
Name of the profile table to import into. If not specified, profiles are imported into the profile table specified in the profile XML data. |
No. |
maxprofiles |
Maximum number of profiles to handle in one transaction. |
No. |
replace |
Flag indicating whether any existing profiles should be replaced with the new profile data. |
No. |
verify |
Flag indicating whether the profileVerify() method will be invoked on each of the imported profiles. |
No. Default value is true. |
-
This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
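For example, a sketch of an import that replaces existing profiles and commits at most 500 profiles per transaction; the file and table names are hypothetical:
<importprofiles filename="subscriber-profiles.xml" profile-table="SubscriberTable" maxprofiles="500" replace="true"/>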
install
A Rhino management sub task for installing Deployable Units.
Attribute | Description | Required |
---|---|---|
type |
Type of deployable unit. Default supported types: "du", "bindings" |
No. Defaults to "deployable unit". |
url |
URL of deployable unit to install. |
Not required if srcfile is specified. |
installlevel |
The install level to which the deployable unit should be installed. Must be one of "INSTALLED", "VERIFIED", or "DEPLOYED". |
No. Defaults to "DEPLOYED". |
assignbundlemappings |
If true, assign bundle prefixes to any SAS mini-bundles in the DU. |
No. Defaults to 'false'. |
srcfile |
Path to deployable unit to install. |
Not required if url is specified. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
The location of the deployable unit JAR file to read, and the URL to associate with it when passing it to Rhino is determined as follows:
-
If both srcfile and url parameters are specified, the JAR file is read from the file indicated by srcfile and the URL used is that specified by url.
-
If only srcfile is specified, the JAR file is read from this file and the filename is also used to construct a URL.
-
If only url is specified, the JAR file is read from this location and deployed using the specified URL, as shown in the sketch below.
-
The task is run targeting an already deployed deployable unit.
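To illustrate these rules, a sketch showing both forms; the paths and URL are hypothetical:
<!-- JAR read from a local file; the filename is also used to construct the URL. -->
<install srcfile="units/example-service-du.jar" installlevel="VERIFIED"/>
<!-- JAR read from, and associated with, the given URL; installed to the default DEPLOYED level. -->
<install url="file:units/example-ra-du.jar"/>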
notificationsourcebased
Abstract base class for all sub tasks that accept a notification source element.
profilebased
Abstract base class extended by other sub tasks which deal with profiles.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
profilename |
Name of the profile targeted by this task. |
Yes. |
tablename |
Name of the profile table in which the profile will be created. |
Yes. |
Element | Description | Required |
---|---|---|
profilevalue |
Assigns a value to a profile attribute. See |
Implementation dependent. |
-
Implementation dependent.
profilevaluebased
Abstract base class for all sub tasks that accept attribute elements.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
profilevalue |
Assigns a value to a profile attribute. See |
Implementation dependent. |
-
Implementation dependent.
raentitybased
Abstract base class for all sub tasks that take a resource adaptor entity.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
entityname |
Name of the resource adaptor entity targeted by this task. |
Yes. |
-
Implementation dependent.
removeappenderref
A Rhino management sub task for removing an appender from a log key.
Attribute | Description | Required |
---|---|---|
logkey |
Name of the log key to remove the appender from. |
Yes. |
appendername |
Name of the appender to remove. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
This task will throw a NonFatalBuildException if the appender cannot be removed from the log key, e.g. if the appender is not attached to the log key.
removecopiedcomponents
A Rhino management sub task for removing copied components.
Components can be removed by either specifying the URL of a deployable unit, in which case all copied components in the deployable unit will be removed, or by specifying one or more nested <component> elements.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
url |
URL of deployable unit to remove copied components from. |
Only required/allowed if no nested component elements are present. |
Element | Description | Required
---|---|---|
component |
Identifies a component to be removed. See com.opencloud.slee.mlet.ant.SleeComponentElement. (Note that for the removecopiedcomponents sub task the optional type attribute of component is required.) |
Only required/allowed if the url attribute is not present. Multiple component elements are allowed. |
-
The task is run targeting a non-existent deployable unit or component.
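For illustration, a sketch of the URL form described above, removing all copied components from a hypothetical deployable unit:
<removecopiedcomponents url="file:units/example-copies-du.jar"/>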
removedatabaseresource
A Rhino management sub task for removing a database resource.
Attribute | Description | Required |
---|---|---|
resourcetype |
Type of resource to remove. Must be one of "persistence" or "jdbc". |
Yes. |
resourcename |
Name of the resource to remove. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
This task will throw a NonFatalBuildException if a resource with the given type and name doesn't exist.
removejdbcresourceconnectionpool
A Rhino management sub task for removing a connection pool configuration from a JDBC resource.
Attribute | Description | Required |
---|---|---|
resourcename |
Name of the JDBC resource. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
This task will throw a NonFatalBuildException if the JDBC resource doesn't have a connection pool configuration.
removelimiter
A Rhino management sub task for removing a limiter.
Attribute | Description | Required |
---|---|---|
name |
The name of the limiter to be removed. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
This task will throw a NonFatalBuildException if the named limiter does not exist.
removelinkedcomponent
A Rhino management sub task for removing a virtual component that is a link to another component.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
component |
Identifies the linked component to be removed. See |
Yes. |
-
The task is run targeting a non-existent component.
removeloggerconfig
A Rhino management sub task for removing a configured logger.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
logKey |
Name of the logger config to remove. |
Yes. |
-
This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
removeloggingproperty
A Rhino management sub task for removing logging properties. This task will always fail when attempting to remove in-use properties.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
propertyName |
Name of the logging property to remove. This property will not be removed if it is in use. |
Yes. |
-
The task is run targeting an unknown logging property.
removenamespace
A Rhino management sub task for removing a deployment namespace.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
namespace |
The name of the namespace to remove. |
Yes. |
-
The task is run with the name of a namespace that doesn’t exist.
removepermissionfromrole
A Rhino management sub task for removing a permission from a role.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
role |
Role name |
Yes. |
permissionName |
Permission name (taken from a permission mapping target as either PermissionCategory or PermissionCategory#PermissionSubcategory) |
Yes. |
permissionActions |
Permission actions to remove, either "write" or "read,write" |
Yes. |
-
Role does not exist
removepermissionmapping
A Rhino management sub task for removing a permission mapping.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
objectnamepattern |
MBean object name pattern as specified in javax.management.ObjectName |
Yes. |
member |
An MBean member (attribute or operation) |
Only if rhinopermissionsubcategory is specified. |
rhinopermissioncategory |
Primary part of the Rhino permission name |
Yes. |
rhinopermissionsubcategory |
Secondary (optional) part of the Rhino permission name |
Only if member is specified. |
-
Permission mapping does not exist
removepersistenceinstance
A Rhino management sub task for removing a persistence instance.
Attribute | Description | Required |
---|---|---|
name |
Name of the persistence instance to remove. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
This task will throw a NonFatalBuildException if a persistence instance with the given name doesn't exist.
removepersistenceinstanceref
A Rhino management sub task for removing a persistence instance reference from a database resource.
Attribute | Description | Required |
---|---|---|
resourcetype |
Type of resource to remove the reference from. Must be one of "persistence" or "jdbc". |
Yes. |
resourcename |
Name of the resource to remove the reference from. |
Yes. |
persistenceinstancename |
Name of the persistence instance reference to remove. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
This task will throw a NonFatalBuildException if the persistence instance is not referenced by the resource.
removeprofile
A Rhino management sub task for removing a Profile from a table.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
profilename |
Name of the profile to remove. |
Yes. |
tablename |
Name of the profile table containing the profile. |
Yes. |
-
The task is run targeting a non-existent profile.
-
The task is run targeting a non-existent profile table.
removeprofiletable
A Rhino management sub task for removing Profile Tables.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
tablename |
Name of the profile table to remove. |
Yes. |
-
The task is run targeting a non-existent profile table.
removeraentity
A Rhino management sub task for removing Resource Adaptor Entities.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
entityname |
Name of the resource adaptor entity to remove. |
Yes. |
-
The task is run targeting a non-existent resource adaptor entity.
removerole
A Rhino management sub task for removing a role.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
role |
Role name |
Yes. |
-
Role does not exist
removesasbundlemapping
A Rhino management sub task for removing a SAS bundle mapping.
Attribute | Description | Required |
---|---|---|
name |
The name of the SAS bundle. |
Yes. |
removeservicebindings
A Rhino management sub task for removing bindings from a service.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
service |
Identifies the service component. See |
Yes. |
binding |
Identifies a binding descriptor component. See |
Yes. May be repeated as many times as needed to remove multiple bindings. |
-
The task is run targeting a binding descriptor that is not currently added to the service.
removeusageparameterset
A Rhino management sub task for removing usage parameter sets.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
name |
Name of the usage parameter set to remove. |
Yes. |
Element | Description | Required |
---|---|---|
sbbNotificationSource |
Identifies an SBB notification source. See |
One and only one of |
raEntityNotificationSource |
Identifies a resource adaptor entity notification source. See |
One and only one of |
profileTableNotificationSource |
Identifies a profile table notification source. See |
One and only one of |
-
The usage parameter set to be removed doesn’t exist.
setactivenamespace
A Rhino management sub task for setting the active deployment namespace.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
namespace |
The namespace to make active. Use an empty string to denote the default namespace. |
Yes. |
setadditivity
A Rhino management sub task for setting the additivity of a log key.
Attribute | Description | Required |
---|---|---|
logkey |
Name of the log key to set the additivity of. |
Yes. |
additivity |
Appender additivity of the log. |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
This task will never throw a NonFatalBuildException.
setjdbcresourceconnectionpoolconfig
A Rhino management sub task for updating the connection pool configuration of a JDBC resource.
Attribute | Description | Required |
---|---|---|
resourcename |
Name of the JDBC resource. |
Yes. |
maxconnections |
The maximum total number of connections that may exist. |
No. |
minconnections |
The minimum total number of connections that should exist. |
No. |
maxidleconnections |
The maximum number of idle connections that may exist at any one time. |
No. |
maxidletime |
The time period (in seconds) after which an idle connection may become eligible for discard. |
No. |
idlecheckinterval |
The time period (in seconds) between idle connection discard checks. |
No. |
connectionpooltimeout |
The maximum time (in milliseconds) that a SLEE application will block for a free connection before a timeout error occurs. |
No. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
-
This task will throw a NonFatalBuildException if the JDBC resource does not have a connection pool configuration.
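A sketch with illustrative pool sizes and timeouts; the resource name is hypothetical:
<setjdbcresourceconnectionpoolconfig resourcename="ExampleResource"
    maxconnections="30" minconnections="5" maxidleconnections="10"
    maxidletime="600" idlecheckinterval="60" connectionpooltimeout="5000"/>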
setloggingproperties
A Rhino management sub task for setting logging properties.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required
---|---|---|
property |
A logging property to set. |
Yes. |
-
This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
setloglevel
A Rhino management sub task for setting the log level of a particular log key.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
logkey |
Name of the log key to change log level of. |
Yes. |
loglevel |
Log level to set the key to. |
Yes. |
-
This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
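For example, a sketch that raises the level of one log key; both the log key and the level value shown are hypothetical:
<setloglevel logkey="rhino.example.subsystem" loglevel="debug"/>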
setprofileattributes
A Rhino management sub task for modifying profile attributes.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
profilename |
Name of the profile to modify. |
Yes. |
tablename |
Name of the profile table containing the profile. |
Yes. |
Element | Description | Required |
---|---|---|
profilevalue |
Assigns a value to a profile attribute. See |
Yes. |
-
This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
setraentitystartingpriority
A Rhino management sub task for setting the starting priority of a Resource Adaptor Entity.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
entityname |
Name of the resource adaptor entity to update. |
Yes. |
priority |
The new starting priority for the resource adaptor entity. Must be between -128 and 127. Higher priority values have precedence over lower priority values. |
Yes. |
-
This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
setraentitystoppingpriority
A Rhino management sub task for setting the stopping priority of a Resource Adaptor Entity.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
entityname |
Name of the resource adaptor entity to update. |
Yes. |
priority |
The new stopping priority for the resource adaptor entity. Must be between -128 and 127. Higher priority values have precedence over lower priority values. |
Yes. |
-
This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
setsasbundlemapping
A Rhino management sub task for setting a SAS bundle prefix mapping.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
bundlemappingfile |
A file of SAS bundle mappings to be installed. Formatted as one <key>: <value> pair per line. |
Only if there are no bundlemapping elements. |
Element | Description | Required |
---|---|---|
bundlemapping |
Maps a fully qualified class name to a prefix |
Only if there is no bundlemappingfile attribute. |
-
The task sets mappings that already map to the requested prefixes.
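For illustration, a sketch of the file-based form; the file name is hypothetical, and its contents would be one <key>: <value> pair per line mapping fully qualified class names to prefixes:
<setsasbundlemapping bundlemappingfile="sas-bundle-mappings.txt"/>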
setsastracingenabled
A Rhino management task for setting the state of the SAS tracing.
Attribute | Description | Required |
---|---|---|
tracingEnabled |
Boolean flag indicating if SAS tracing should be enabled (true) or disabled (false). |
Yes. |
force |
If true, override the SLEE state check when disabling SAS tracing. SAS tracing cannot normally be disabled when the SLEE is not in the Stopped state, because this may cause incomplete trails to be created in SAS for sessions that are in progress. |
No. Defaults to false. |
setservicemetricsrecordingenabled
A Rhino management sub task for updating the SBB metrics recording status of a service.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
enabled |
Boolean flag indicating if SBB metrics recording should be enabled (true) or disabled (false). |
Yes. |
Element | Description | Required |
---|---|---|
component |
Identifies the service to update. See |
Yes. |
-
This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
setservicestartingpriority
A Rhino management sub task for setting the starting priority of a Service.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
priority |
The new starting priority for the service. Must be between -128 and 127. Higher priority values have precedence over lower priority values. |
Yes. |
Element | Description | Required |
---|---|---|
component |
Identifies the service to update. See |
Yes. |
-
This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
setservicestoppingpriority
A Rhino management sub task for setting the stopping priority of a Service.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
priority |
The new stopping priority for the service. Must be between -128 and 127. Higher priority values have precedence over lower priority values. |
Yes. |
Element | Description | Required |
---|---|---|
component |
Identifies the service to update. See |
Yes. |
-
This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
settracelevel
A Rhino management sub task for setting the trace level of components.
This ant task has been deprecated, since it uses a ComponentID to identify a notification source (which is not compatible with the changes made to the tracing subsystem in SLEE 1.1). It has been replaced with com.opencloud.slee.mlet.ant.tasks.SetTracerLevelTask . |
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
componentid |
Canonical name of the component. |
Only required/allowed if the |
type |
Indicates the type of component referenced by |
Only required/allowed if |
level |
Requested trace level, can be one of 'finest', 'finer', 'fine', 'config', 'info', 'warning', 'severe', 'off'. |
Yes. |
Element | Description | Required |
---|---|---|
component |
Identifies the component. See |
Only required/allowed if the |
Component Types
The following names are valid identifiers for a component type in the type
attribute of the settracelevel task or a component
nested element.
-
pspec - profile specification
-
ra - resource adaptor
-
service - service
-
sbb - sbb
-
This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
settracerlevel
A Rhino management sub task for setting the trace level of notification sources.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
tracerName |
Name of the tracer whose level is to be set. |
Yes. |
level |
Requested trace level, can be one of 'finest', 'finer', 'fine', 'config', 'info', 'warning', 'severe', 'off'. |
Yes. |
Element | Description | Required |
---|---|---|
sbbNotificationSource |
Identifies an SBB notification source. See |
One and only one of |
raEntityNotificationSource |
Identifies a resource adaptor entity notification source. See |
One and only one of |
profileTableNotificationSource |
Identifies a profile table notification source. See |
One and only one of |
subsystemNotificationSource |
Identifies a subsystem notification source. See |
One and only one of |
-
This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
shadowcomponent
A Rhino management sub task for shadowing an existing component with another component.
Attribute | Description | Required |
---|---|---|
type |
The component type. See |
Yes. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
source |
Identifies the component to be shadowed. See |
Yes. |
target |
Identifies the component that will shadow the source component. See |
Yes. |
-
The task is run with a source component that is already shadowed by the given target.
unbindralinkname
A Rhino management sub task for unbinding Resource Adaptor Entity Link Names.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
linkname |
The link name to unbind. |
Yes. |
-
The task is run targeting an unbound linkname.
undeploycomponent
A Rhino management sub task for undeploying a component across the cluster.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
component |
Identifies the component to undeploy. See |
Yes. |
-
The task is run targeting a component that is not currently deployed.
uninstall
A Rhino management sub task for uninstalling Deployable Units.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
url |
URL of deployable unit to uninstall. |
Yes. |
-
The task is run targeting a non-existent deployable unit.
unsetalltracers
A Rhino management sub task for unsetting the trace level assigned to all tracers of notification sources.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
sbbNotificationSource |
Identifies an SBB notification source. See |
One and only one of |
raEntityNotificationSource |
Identifies a resource adaptor entity notification source. See |
One and only one of |
profileTableNotificationSource |
Identifies a profile table notification source. See |
One and only one of |
subsystemNotificationSource |
Identifies a subsystem notification source. See |
One and only one of |
-
This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
unshadowcomponent
A Rhino management sub task for removing the shadow from a shadowed component.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
component |
Identifies the component whose shadow is to be removed. See |
Yes. |
-
The task is run targeting a component that is not shadowed.
unverifycomponent
A Rhino management sub task for unverifying a verified component.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
component |
Identifies the component to unverify. See |
Yes. |
-
The task is run targeting a component that is not currently verified.
updatepersistenceinstance
A Rhino management sub task for updating the settings of a persistence instance.
Attribute | Description | Required |
---|---|---|
name |
Name of the persistence instance to update. |
Yes. |
datasourceclass |
Fully-qualified class name of the datasource class to be used by the persistence instance. |
No. Only needs to be specified if a different datasource class should be used by the persistence instance. |
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
configproperty |
Identifies a configuration property of the datasource class that should be updated. See |
One ConfigPropertyElement must be specified per config property. |
removeconfigproperty |
Identifies an existing configuration property of the datasource class that should be removed. See |
One RemoveConfigPropertyElement must be specified per config property to be removed. |
-
This task will never throw a NonFatalBuildException. It will always fail (throw a BuildException) on errors.
usertransaction
A Rhino management sub task that allows its own subtasks to be executed in the context of a single client-demarcated transaction.
This task starts a user transaction then executes the nested subtasks. If all the subtasks complete successfully, the user transaction is committed. If any task fails with a fatal BuildException, or fails with a NonFatalBuildException but its failonerror flag is set to true
, the user transaction is rolled back.
The following sub task elements can be provided in any number and in any order. The User Transaction task will execute these sub tasks in the specified order until a sub task fails by throwing an org.apache.tools.ant.BuildException
which will be re-thrown to Ant with some contextual information regarding the sub task that caused it.
|
Create Profiles inside tables. |
|
Remove a Profile from a table. |
|
Modify profile attributes. |
|
Import profile XML data. |
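For illustration, a sketch combining two of the profile sub tasks documented elsewhere in this reference inside a single transaction; the table, profile, and file names are hypothetical. If either sub task fails, both operations are rolled back:
<usertransaction>
    <removeprofile tablename="ExampleTable" profilename="ObsoleteProfile"/>
    <importprofiles filename="profiles.xml" profile-table="ExampleTable" replace="true"/>
</usertransaction>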
verifycomponent
A Rhino management sub task for verifying an installed component across the cluster.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
Element | Description | Required |
---|---|---|
component |
Identifies the component to verify. See |
Yes. |
-
The task is run targeting an already verified component.
verifydeployableunit
A Rhino management sub task for verifying components in an installed deployable unit across the cluster.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
url |
URL of deployable unit to verify. |
Yes. |
waittilraentityisinactive
A Rhino management sub task for waiting on Resource Adaptor Entity deactivation. This task will block while waiting for the specified resource adaptor entity to deactivate.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
entityname |
Name of the resource adaptor entity targeted by this task. |
Yes. |
-
The task is run targeting a non-existent resource adaptor entity.
waittilserviceisinactive
A Rhino management sub task for waiting on Service deactivation. This task will block while waiting for the specified Service to deactivate.
Attribute | Description | Required |
---|---|---|
failonerror |
Flag to control failure behaviour. If 'true', the sub task will throw a |
No. Default value is taken from the Rhino management parent task. |
serviceid |
Canonical name of the service to wait on. |
Only required/allowed if the |
Element | Description | Required |
---|---|---|
component |
Identifies the service to wait on. See |
Only required/allowed if the |
-
The task is run targeting a non-existent service.
Building Custom OA&M Tools with Rhino Remote API
The Rhino remote library is a collection of utility classes to help write remote management clients for Rhino. This library does not implement any proprietary interface to Rhino — it makes using the standard JMX interfaces easier. Rhino implements the JMX Remote API (as per JSR 160) using the Rhino JMX Remote Adaptor. A JMX client can also be written to connect to Rhino using the JMX Remote API directly, without using the Rhino remote library.
When would I use the Rhino remote library?
The most common reason to use the Rhino remote library is to implement a service-specific operations-administration-and-maintenance (OA&M) interface. For example:
-
a web interface to provision profiles interactively
-
a graphical application to monitor statistics in real time
-
a command-line tool to refresh a profile data cache from a master database
-
a daemon process to listen for alarm notifications and forward them to a third-party O&M platform.
How would I use the Rhino remote library?
The basic steps to using the Rhino remote library are:
-
Create a connection using
RhinoConnectionFactory
. -
Create a proxy object for the MBean you want to invoke operations on.
-
Invoke the MBean operation by invoking the method on the proxy object.
For example, to stop the SLEE:
MBeanServerConnection connection = RhinoConnectionFactory.connect(properties);
SleeManagementMBean sleeMBean = SleeManagement.getSleeManagementMBean(connection);
sleeMBean.stop();
See also the Rhino Remote API Javadoc. |
Connecting to Rhino
The Rhino remote library contains a class that represents a connection to Rhino (RhinoConnection
). Below are descriptions of various ways of getting and configuring an instance of this class, including:
-
using RhinoConnectionFactory to create connections to Rhino
-
using RhinoConnection for connection objects and an ordered list of servers to try
-
using ExecutionContext to control whether a particular operation or set of operations can failover to another node
-
if Rhino is using the default SSL connection factory to connect to the JMX Remote interface, ensuring that it gets a keystore, a trust store, and a password for each
-
debugging by logging trace messages from connection objects as they try to connect.
RhinoConnectionFactory
Use the RhinoConnectionFactory
class to create connections to Rhino. RhinoConnectionFactory
has connect
methods that let you specify connection parameters many ways. The RhinoConnectionFactory
connect
methods return objects that implement RhinoConnection
. See the Rhino Remote API for details.
/** * Factory class to create connections to Rhino. * @author Open Cloud */
public class RhinoConnectionFactory {
public static RhinoConnection connect(File propertiesFile)
throws ConnectionException { /* ... */ }
public static RhinoConnection connect(Properties properties)
throws ConnectionException { /* ... */ }
public static RhinoConnection connect(String host, int port, String username, String password)
throws ConnectionException { /* ... */ }
public static RhinoConnection connect(String serverList, String username, String password)
throws ConnectionException { /* ... */ }
public static RhinoConnection connect(String[] servers, String username, String password)
throws ConnectionException { /* ... */ }
public static RhinoConnection connect(String host, int port, String username, String password, Properties sslProps)
throws ConnectionException { /* ... */ }
public static RhinoConnection connect(String[] servers, String username, String password, Properties sslProps)
throws ConnectionException { /* ... */ }
}
RhinoConnection
The RhinoConnection
class represents a connection from an external OA&M tool to Rhino. RhinoConnection
extends MBeanServerConnection
so you can use it anywhere in the JMX Remote API that uses MBeanServerConnection
. Other Rhino remote library classes can use MBeanServerConnection
objects from other sources (such as a local MBeanServer
) however such objects do not have the additional RhinoConnection
features.
You create a RhinoConnection with a list of servers. The OA&M client tries each server in the list — in order — as it connects to Rhino. A connection-failure exception is thrown if no connection can be made with any server in the list. If a connection fails whilst executing a command, the OA&M client tries the next server on the list and executes the pending command. |
/** * Rhino-specific connection interface. * @author OpenCloud */
public interface RhinoConnection extends MBeanServerConnection {
/** * Try to connect this connection. * @throws ConnectionException if the connection attempt failed */
void connect() throws ConnectionException;
/** * Disconnect this connection. */
void disconnect();
/** * Disconnect then connect this connection. * @throws ConnectionException if the connection attempt failed */
void reconnect() throws ConnectionException;
/** * Test this connection by invoking a basic MBean operation. If the test fails, a reconnect * attempt will be triggered automatically. * @throws ConnectionException if the test and the attempt to automatically reconnect failed */
void testConnection() throws ConnectionException;
/** * Get the JMXConnector associated with this MBeanServer connection. * @return a JMXConnector instance */
JMXConnector getJmxConnector();
/** * Whether or not this connection is connected. * @return state of the connection */
boolean getConnectedState();
/** * The host name of the current connection. Useful if this connection is configured to try * multiple hosts. * @return host where this connection object is currently connected to Rhino */
String getServerHost();
/** * The port of the current connection. Useful if this connection is configured to try * multiple hosts. * @return port where this connection object is currently connected to Rhino */
int getServerPort();
/** * The node ID of the server the current connection is to. Useful if this connection is * configured to try multiple hosts. * @return node id of Rhino where this connection object is currently connected to */
int getServerNodeId();
/** * Tell this connection to only allow particular connection contexts. * See {@link com.opencloud.slee.remote.ExecutionContext} for more details. * @param ctx the context to allow */
void setAllowedExecutionContext(ExecutionContext ctx);
}
ExecutionContext
Occasionally you may want control over whether or not a particular operation or set of operations can failover to another node. You can do this using the ExecutionContext
mechanism. (See RhinoConnection.setAllowedExecutionContext(ExecutionContext)
and ExecutionContext
.)
/** * Defines execution contexts for MBean operations. An execution context can be used to control the * behaviour during connection failover. * <p/> * The execution context can be set at any time on a {@link com.opencloud.slee.remote.RhinoConnection} * using {@link com.opencloud.slee.remote.RhinoConnection#setAllowedExecutionContext(ExecutionContext)} * and will remain in force until it is reset. * <p/> * The default value is CLUSTER. * @author OpenCloud */
public enum ExecutionContext {
/** * Allow commands to execute only on the current connection. If the connection fails, an exception * will be thrown. */
CONNECTION,
/** * Allow commands to execute on any connection, as long as a new connection is made to the same * node as the previous command was executed on. <br/> * If a new connection is made to a different node, then an exception will be thrown. */
NODE,
/** * Allow commands to execute on any connection and on any node. An exception will only be thrown * if the client cannot connect to any node. */
CLUSTER
}
Configuring an SSL connection
If Rhino is configured to require SSL connections to the JMX Remote interface (the default setting), then the SSL connection factory that the JMX client uses will need a keystore, a trust store, and a password for each. You can provide these two ways:
-
Setting system properties when starting the JVM that is running the JMX client:
-Djavax.net.ssl.keyStore=$RHINO_CLIENT_HOME/rhino-public.keystore -Djavax.net.ssl.keyStorePassword=xxxxxxxx \
-Djavax.net.ssl.trustStore=$RHINO_CLIENT_HOME/rhino-public.keystore -Djavax.net.ssl.trustStorePassword=xxxxxxxx
-
Putting these settings into a properties file or properties object, and using one of the connection factory methods that takes such a parameter. For example, you could have a properties file containing the following lines:
javax.net.ssl.trustStore=rhino-public.keystore
javax.net.ssl.trustStorePassword=xxxxxxxx
javax.net.ssl.keyStore=rhino-public.keystore
javax.net.ssl.keyStorePassword=xxxxxxxx
and then create a connection as follows:
File propertiesFile = new File("remote.properties");
MBeanServerConnection connection = RhinoConnectionFactory.connect(propertiesFile);
Connection logging
Exceptions that the connection factory methods throw generally contain sufficient detail to determine the cause of any connection problems.
If you need more fine-grained tracing, you can provide a PrintWriter
to the connection factory, so that all connection objects write trace messages to it while trying to connect.
For examples of how to write connection trace messages to stdout or a Log4J logger, see RhinoConnectionFactory.setLogWriter(PrintWriter) . |
MBean Proxies
The API uses MBean proxies extensively. This allows MBean operations to be invoked on the remote MBean server by a direct method call on the proxy object.
Retrieving JSLEE MBean proxies
The SleeManagement
class is used to create proxy instances for SLEE-standard MBeans with well-known Object Names. An MBeanServerConnection
must be obtained first, then one of the methods on the SleeManagement class can be called.
MBeanServerConnection connection = RhinoConnectionFactory.connect( ... );
SleeManagementMBean sleeManagement = SleeManagement.getSleeManagementMBean(connection);
Retrieving Rhino MBean proxies
The RhinoManagement
class is used to create proxy instances for Rhino-specific MBeans. An MBeanServerConnection
must be obtained first, then one of the methods on the RhinoManagement
class can be called.
MBeanServerConnection connection = RhinoConnectionFactory.connect( ... );
RhinoHousekeepingMBean housekeeping = RhinoManagement.getRhinoHousekeepingMBean(connection);
Working with Profiles
The RemoteProfiles
class contains a number of utility methods to greatly ease working with SLEE profile management operations.
There are methods to:
-
get proxies to ProfileMBeans
-
create and commit a new profile
-
create an uncommitted profile that can have its attributes set before it is committed
-
get attribute names, values and types.
These methods are in addition to the standard management operations available on ProfileProvisioningMBean
.
Creating a profile table
This can be done using the ProfileProvisioningMBean
, but RemoteProfiles
has a utility method to check if a profile table exists:
ProfileSpecificationID profileSpecID =
new ProfileSpecificationID("AddressProfileSpec", "javax.slee", "1.0");
if(RemoteProfiles.profileTableExists(connection, "TstProfTbl")) {
profileProvisioning.removeProfileTable("TstProfTbl");
}
profileProvisioning.createProfileTable(profileSpecID, "TstProfTbl");
Creating a profile
Option 1: Supply the attributes when creating the profile and have it committed
AttributeList list = new AttributeList();
list.add(new Attribute("Addresses",
new Address[] { new Address(AddressPlan.IP, "127.0.0.1") }));
RemoteProfiles.createCommittedProfile(connection, "TstProfTbl", "TstProfile1", list);
Option 2: Create the profile in the uncommitted state, and use a proxy to the Profile Management interface to set the attributes, then call commitProfile
RemoteProfiles.UncommittedProfile<AddressProfileManagement> uncommittedA;
uncommittedA = RemoteProfiles.createUncommittedProfile(connection, "TstProfTbl", "TstProfile2",
AddressProfileManagement.class);
AddressProfileManagement addressProfile = uncommittedA.getProfileProxy();
addressProfile.setAddresses(new Address[] { new Address(AddressPlan.IP, "127.0.0.2") });
uncommittedA.getProfileMBean().commitProfile();
Option 3: Create the profile in the uncommitted state, and use the setAttribute method on the connection, then call commitProfile
RemoteProfiles.UncommittedProfile uncommittedB;
uncommittedB = RemoteProfiles.createUncommittedProfile(connection, "TstProfTbl", "TstProfile3");
connection.setAttribute(uncommittedB.getObjectName(),
new Attribute("Addresses",
new Address[] { new Address(AddressPlan.IP, "127.0.0.3") }));
uncommittedB.getProfileMBean().commitProfile();
Editing a profile
Using the profile management interface as a proxy to the profile object allows set methods to be invoked directly:
ProfileMBean profileMBean
= RemoteProfiles.getProfileMBean(connection, "TstProfTbl", profileName);
profileMBean.editProfile();
AddressProfileManagement addrProfMng
= RemoteProfiles.getProfile(connection, "TstProfTbl", profileName,
AddressProfileManagement.class);
addrProfMng.setAddresses(new Address[] { new Address(AddressPlan.IP, "127.0.1.1") });
profileMBean.commitProfile();
Inspecting profile tables
Using RemoteProfiles
methods to get the attribute names and types for a given profile table:
String[] names = RemoteProfiles.getAttributeNames(connection, "TstProfTbl");
System.out.println("Profile attributes:");
for (String name : names) {
String type = RemoteProfiles.getAttributeType(connection, "TstProfTbl", name);
System.out.println(" " + name + " (" + type + ")");
}
SLEE Management Script
The slee.sh
script provides functionality to operate on the SLEE state either for nodes on this host, or on the cluster as a whole.
It provides a global control point for all nodes in the cluster.
For convenience of administration the script can discover the running set of nodes; however for more control, or if managing multiple clusters, the set of nodes can be configured in the environment. The environment variables used are:
RHINO_SLEE_NODES   - List of Rhino SLEE event router node IDs on this host. If not specified, will discover nodes automatically.
RHINO_QUORUM_NODES - List of Rhino quorum node IDs on this host.
The values of these variables can be specified, if necessary, in the file rhino.env
.
Commands
The commands below control the state of the SLEE on nodes of a Rhino cluster.
They are run by executing slee.sh <command> <arguments>
, for example:
slee.sh start -nodes 101,102
Command | What it does |
---|---|
start |
Starts the SLEE on the cluster or on a set of nodes. Use with no arguments to start the cluster, or with the argument |
stop |
Stops the SLEE on the cluster or a set of nodes. Use with no arguments to stop the cluster, or with the argument |
reboot |
Reboots the cluster or a set of nodes. Use the -nodes and -states arguments to select the nodes and the state each should return to after reboot. For example, slee.sh reboot -nodes 102,103 -states running,running would reboot nodes 102 and 103 and return them to the Running state, while slee.sh reboot -cluster -states stopped,stopped,running,running would reboot all four nodes, and set the states to |
shutdown |
Shuts down the cluster or a set of nodes, stopping them if required.
|
state |
Prints the state of all the nodes in the cluster. Also available as the alias |
console |
Runs a management command using the Rhino console. Also available as the aliases |
Rhino Management Script
The script rhino.sh
provides functionality to control and monitor the processes for the Rhino nodes on this host.
It does not connect to a Rhino management node to operate on the SLEE state (except for the stop
command), nor does it affect nodes on other hosts.
For convenience of administration the script can discover the running set of nodes; however for more control, or if managing multiple clusters, the set of nodes can be configured in the environment. The environment variables used are:
RHINO_START_INTERVAL - How long to delay between starting each Rhino node. It is helpful to stagger node startup because a distributed lock is required during boot, and acquisition of that lock may time out if a very large number of components are deployed.
RHINO_SLEE_NODES     - List of Rhino SLEE event router node IDs on this host. If not specified, will discover nodes automatically.
RHINO_QUORUM_NODES   - List of Rhino quorum node IDs on this host.
The values of these variables can be specified, if necessary, in the file rhino.env
.
Commands
The commands below control the state of the Rhino nodes.
They are run by executing rhino.sh <command> <arguments>
, for example:
rhino.sh start -nodes 101,102
Command | What it does |
---|---|
start |
Starts a set of nodes that are not operational. Use with no arguments to start all local nodes, or with the argument |
stop |
Stops a set of nodes that are operational. Use with no arguments to stop all local nodes or with the argument This command connects to a management node in order to stop the SLEE on the affected nodes and send them the shutdown command. |
kill |
Kills a set of operational nodes. Use with no arguments to kill all local nodes or with the argument |
restart |
Kills a set of operational nodes and starts them again. Use with no arguments to restart all local nodes or with the argument |
Java Management Extension Plugins (JMX M-lets)
You can extend Rhino’s OA&M features in many ways, including deploying a management component called a management applet (m-let) in the JMX MBean server running in each Rhino node. Below is an introduction to the JMX model, how JMX m-lets work, and how Rhino uses them.
What is JMX?
The Java Management Extensions (JMX) specification defines a standard way of instrumenting and managing Java applications. The JMX Instrumentation and Agent Specification, v1.2 (October 2002) summarises JMX like this:
How does JAIN SLEE use JMX?
The JAIN SLEE 1.1 specification mandates using the JMX 1.2.1 MLet (management applet) specification for management clients to gain access to the SLEE MBean server and SLEE MBean objects.
What are m-lets?
An m-let is a management applet service that lets you instantiate and register one or more Java Management Beans (MBeans), from a remote URL, in the MBean server. The server loads a text-based m-let configuration file that specifies information about the MBeans to be loaded.
OpenCloud typically uses m-lets to implement JMX protocol adaptors. |
How does Rhino use m-lets?
Each node in a Rhino cluster runs an MBean server (Rhino uses the Java VM MBean server). When Rhino starts, it dynamically loads m-lets into those MBean servers, based on m-let text files stored in the following places:
-
Rhino SDK —
$RHINO_HOME/config/mlet.conf
-
Production Rhino —
$RHINO_HOME/node-XXX/config/permachine-mlet.conf
,$RHINO_HOME/node-XXX/config/pernode-node-mlet.conf
.
These configuration files conform to the OpenCloud M-Let Config 1.1 DTD.
See the OpenCloud M-Let Config 1.1 DTD:
<?xml version="1.0" encoding="ISO-8859-1"?>
<!-- Use: <!DOCTYPE mlets PUBLIC "-//Open Cloud Ltd.//DTD JMX MLet Config 1.1//EN" "http://www.opencloud.com/dtd/mlet_1_1.dtd"> -->
<!ELEMENT mlets (mlet*)>
<!-- The mlet element describes the configuration of an MLet. It contains an optional description, an optional object name, an optional classpath, mandatory class information, and optional class constructor arguments. Constructor arguments must be specified in the order they are defined in the class constructor. -->
<!ELEMENT mlet (description?, object-name?, classpath?, class, arg*)>
<!-- The description element may contain any descriptive text about the parent element. Used in: mlet -->
<!ELEMENT description (#PCDATA)>
<!-- The object-name element contains the JMX object name of the MLet. If the name starts with a colon (:), the domain part of the object name is set to the domain of the agent registering the MLet. Used in: mlet Example: <object-name>Adaptors:name=MyMLet</object-name> -->
<!ELEMENT object-name (#PCDATA)>
<!-- The classpath element contains zero or more jar-url elements specifying jars to be included in the classpath of the MLet and an optional specification identifying security permissions that should be granted to classes loaded from the specifed jars. Used in: mlet -->
<!ELEMENT classpath (jar-url*, security-permission-spec?)>
<!-- The jar-url element contains a URL of a jar file to be included in the classpath of the MLet. Used in: classpath Example: <jar-url>file:/path/to/location/of/file.jar</jar-url> -->
<!ELEMENT jar-url (#PCDATA)>
<!-- The security-permission-spec element specifies security permissions based on the security policy file syntax. Refer to the following URL for definition of Sun's security policy file syntax: http://java.sun.com/j2se/1.3/docs/guide/security/PolicyFiles.html#FileSyntax The security permissions specified here are granted to classes loaded from the jar files identified in the jar-url elements in the classpath of the MLet. Used in: jar Example: <security-permission-spec> grant { permission java.lang.RuntimePermission "modifyThreadGroup"; }; </security-permission-spec> -->
<!ELEMENT security-permission-spec (#PCDATA)>
<!-- The class element contains the fully-qualified name of the MLet's MBean class. Used in: mlet Example: <class>com.opencloud.slee.mlet.mymlet.MyMlet</class> -->
<!ELEMENT class (#PCDATA)>
<!-- The arg element contains the type and value of a parameter of the MLet class' constructor. Used in: mlet -->
<!ELEMENT arg (type, value)>
<!-- The type element contains the fully-qualified name of the parameter type. The currently supported types for MLets are: Java primitive types, object wrappers for Java primitive types, and java.lang.String. Used in: arg Example: <type>int</type> -->
<!ELEMENT type (#PCDATA)>
<!-- The value element contains the value for a parameter. The value must be appropriate for the corresponding parameter type. Used in: arg Example: <value>8055</value> -->
<!ELEMENT value (#PCDATA)>
<!ATTLIST mlet enabled CDATA #IMPLIED >
Structure of the m-let text file
<mlets>
    <mlet enabled="true">
        <classpath>
            <jar-url> </jar-url>
            <security-permission-spec>
            </security-permission-spec>
        </classpath>
        <class> </class>
        <arg>
            <type></type>
            <value></value>
        </arg>
    </mlet>
    <mlet enabled="true">
        ...
    </mlet>
</mlets>
The m-let text file can contain any number of mlet elements, each for instantiating a different MBean.

- classpath — defines the code source of the MBean to be loaded.
  - jar-url — the URL to be used for loading the MBean classes.
  - security-permission-spec — defines the security environment of the MBean.
- class — the main class of the MBean to be instantiated.
- arg — there may be zero or more arguments to the MBean. Each argument is defined by an arg element. The set of arguments must correspond to a constructor defined by the MBean main class.
  - type — the Java type of the argument.
  - value — the value of the argument.
For details on m-lets included in OpenCloud Rhino, see JMX Remote Adaptor M-let.
JMX Remote Adaptor M-let
The JMX Remote Adaptor m-let is a fundamental component of Rhino.
All OpenCloud management tools use the JMX Remote Adaptor to connect to Rhino. This component must be present and active, or Rhino cannot be managed! The JMX Remote API is defined by the Java Management Extensions (JMX) Remote API (JSR 160). The JMX Remote Adaptor uses the JMX Remote API to expose a management port at the following URL:
service:jmx:rmi:///jndi/rmi://<rhino host>:<rhino jmx-r port>/opencloud/rhino
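For example, with the default RMI registry port of 1199 (see the argument table below) on a host named rhino1 (hostname illustrative), the management URL would be:

service:jmx:rmi:///jndi/rmi://rhino1:1199/opencloud/rhino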
JMX Remote and OpenCloud tools
All OpenCloud management tools (the command-line console, the Rhino Element Manager, the stats client, the import-export tool) use the JMX Remote API to connect to Rhino via the JMX Remote Adaptor. (See also Building Custom OA&M Tools with Rhino Remote API.)
JMX Remote Adaptor configuration options
Under normal conditions you should not need to change the configuration of this component!
<mlet enabled="true">
<classpath>
<jar-url>
@FILE_URL@@RHINO_BASE@/lib/jmxr-adaptor.jar
</jar-url>
<security-permission-spec>
...
</security-permission-spec>
</classpath>
<class>
com.opencloud.slee.mlet.jmxr.JMXRAdaptor
</class>
<!-- the local rmi registry port -->
<arg>
<type>int</type>
<value>@RMI_MBEAN_REGISTRY_PORT@</value>
</arg>
<!-- the local jmx connector port -->
<arg>
<type>int</type>
<value>@JMX_SERVICE_PORT@</value>
</arg>
<!-- enable ssl -->
<arg>
<type>boolean</type>
<value>true</value>
</arg>
</mlet>
As Rhino starts, it pre-processes m-let configuration files, substitutes configuration variables and creates a working m-let configuration file in the node-XXX/work/config
subdirectory. Note the following arguments:
 | Argument | Description | Default |
---|---|---|---|
1 | rmi registry port | The port of the RMI registry that the JMX Adaptor registers itself with. | 1199 |
2 | local JMX connector port | The JRMP port the JMX Remote Adaptor listens on. | 1202 |
3 | enable SSL | Whether SSL is enabled. | true |
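As a sketch of this substitution, assuming the default ports above and an installation under /opt/rhino (path illustrative), the working copy of the adaptor entry would contain concrete values in place of the @…@ variables:

<jar-url>
file:/opt/rhino/lib/jmxr-adaptor.jar
</jar-url>
...
<!-- the local rmi registry port -->
<arg>
<type>int</type>
<value>1199</value>
</arg>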
Monitoring and System-Reporting Tools
This section includes details and sample output for the following monitoring and system-reporting tools.
Script | What it does |
---|---|
rhino-stats | captures statistical-performance data about the cluster and displays it in tabular-text form on the console, or graphed on a GUI |
 | generates a report of useful system information |
 | sends a signal to the JVM to produce a thread dump |
 | shows dependencies between SLEE components |
rhino-stats
Rhino provides monitoring facilities for capturing statistical-performance data about the cluster, using the client-side application rhino-stats
. This application connects to Rhino using JMX and samples requested statistics in real time. You can display extracted statistics in tabular-text form on the console, or graphed on a GUI using various graphing modes.
When correctly configured, monitored, and tuned using Rhino’s performance-monitoring tools, Rhino SLEEs can deliver industry-leading performance and stability.
For service developers and administrators
Much of the statistical information that rhino-stats
gathers is useful to both service developers and administrators:
-
Service developers can use performance data, such as event-processing time statistics, to evaluate the impact of SBB-code changes on overall performance.
-
Administrators can use statistics to evaluate settings for tunable performance parameters. For example, the following can help determine appropriate configuration parameters:
Parameter set type | Tunable parameters |
---|---|
Object pools | Object pool sizing |
Staging threads | Staging configuration |
Memory database sizing | Memory database size limits |
System memory usage | JVM heap size |
About Rhino Statistics
Rhino’s statistics subsystem collects three types of statistic:
-
counters — the number of occurrences of a particular event (unbounded and can only increase); for example, lock waits or rejected events
-
gauges — the amount of a particular object or item (can increase and decrease within some arbitrary bound, typically between 0 and some positive number); for example, free memory or active activities
-
samples — values recorded every time a particular event or action occurs; for example, event-processing time or lock-manager wait time.
Rhino records and reports counter- and gauge-type statistics as absolute values. It tallies sample-type statistics, however, in a frequency distribution (which it reports to statistics clients such as rhino-stats).
Parameter sets
Rhino defines a set of related statistics as a parameter set. Many of the available parameter sets are organised hierarchically. Child parameter sets that represent related statistics from a particular source may contribute to parent parameter sets that summarise statistics from a group of sources.
For example, the Events parameter set summarises event statistics from all resource adaptor entities. Below this is a parameter set for each resource adaptor entity which summarises statistics for all the event types produced by that resource adaptor entity. Further below each of these is another parameter set for each event type fired by the resource adaptor entity. These last parameter sets record the raw statistics for each fired event type as summarised by the parent parameter sets. This means, when examining the performance of an application, you can drill down to analyse statistics on a per-event-type basis.
Running rhino-stats
Below are the requirements and options for running rhino-stats
.
Requirements for running rhino-stats
The requirements and recommendations for running the Rhino statistics-gathering tool (rhino-stats
) are as follows.
Run on a workstation, not cluster node
Rhino’s statistics-gathering tool (rhino-stats
) should be run only on a workstation (not a cluster node).
Impact on CPU usage
Executing the statistics client on a production cluster node is not recommended. The statistics client’s GUI can impact CPU usage, such that a cluster may drop calls. (The exact impact depends on the number of distinct parameter sets monitored, the number of simultaneous users, and the sample frequency.)
Direct connections
The host running the rhino-stats
client requires a direct TCP connection to each of the Rhino cluster nodes it gets statistics from. Moreover, the client asks each node to create a TCP connection back to it, for the express purpose of sending it statistics data. Therefore, any intermediary firewalls between the client host and the Rhino cluster must be configured to allow these connections.
Single outgoing JMX connection to a cluster node (deprecated)
Versions of the statistics client prior to Rhino 1.4.4 retrieved statistics by creating a single outgoing JMX connection to one of the cluster nodes. This statistics-retrieval method was deprecated in favour of the newer direct-connection method, since the old method had a greater performance impact on the Rhino cluster. The single-connection method is still available, however, through the use of the -j
option.
Performance implications (minimal impact)
Rhino’s statistics subsystem is designed to have minimal impact on performance. Gathering counter- or gauge-type statistics should not have any noticeable impact on CPU usage or latency. Gathering sample-type statistics is more costly, and will usually result in a 1-2% impact on CPU usage when several parameter sets are monitored.
rhino-stats
options
rhino-stats
includes the following options:
$ ./rhino-stats
One (and only one) of -g (Start GUI), -m (Monitor Parameter Set), -l (List Available Parameter Sets) required.
Available command line format:
  -h <argument> : hostname
  -p <argument> : port
  -u <argument> : username
  -w <argument> : password
  -D : display connection debugging messages
  -g : gui mode
  -l <argument> : query available statistics parameter sets
  -m <argument> : monitor a statistics parameter set on the console
  -s <argument> : sample period in milliseconds
  -i <argument> : internal polling period in milliseconds
  -d : display actual value in addition to deltas for counter stats
  -n <argument> : name a tab for display of subsequent graph configuration files
  -f <argument> : full path name of a saved graph configuration .xml file to redisplay
  -j : use JMX remote option for statistics download in place of direct statistics download
  -t <argument> : runtime in seconds (console mode only)
  -q : quiet mode - suppresses informational messages
  -T : disable display of timestamps (console mode only)
  -R : display raw timestamps (console mode only)
  -C : use comma separated output format (console mode only)
  -S : no per second conversion of counter deltas
  -k <argument> : number of hours samples to keep in gui mode (default=6)
  -r : output one row per node (console mode only)
Gathering Statistics in Console Mode
In console mode, you can run the rhino-stats client with options to list root parameter sets, display parameter set descriptions, monitor parameters in real time, and configure the console output format.
List root parameter sets
To list the different types of statistics that can be monitored in Rhino, run rhino-stats
with the -l
parameter. For example:
$ ./rhino-stats -l
The following root parameter sets are available for monitoring:
    Activities, ActivityHandler, EventRouter, Events, JVM, LicenseAccounting, LockManagers,
    MemDB-Local, MemDB-Replicated, ObjectPools, SLEE-Usage, Savanna-Protocol, Services,
    StagingThreads, SystemInfo, Transactions
For parameter set type descriptions and a list of available parameter sets use the -l <root parameter set name> option
Display parameter set descriptions
The output below illustrates the root parameter set (Events) with many different child parameter sets. You can use this information to select the level of granularity at which you want statistics reported. (See the instructions for monitoring parameters in real time.)
To query the available child parameter sets within a particular root parameter set, use -l <root parameter set name>
. For example, for the root parameter set Events
:
$ ./rhino-stats -l Events
Parameter Set: Events
Parameter Set Type: Events
Description: Event Statistics
Counter type statistics:
  Id: Name:        Label:  Description:
  0   accepted     acc     Accepted events
  1   rejected     rej     Events rejected due to overload
  2   failed       fail    Events that failed in event processing
  3   successful   succ    Event processed successfully
Sample type statistics:
  Id: Name:                  Label:  Description:
  4   eventRouterSetupTime   ERT     Event router setup time
  5   sbbProcessingTime      SBBT    SBB processing time
  6   eventProcessingTime    EPT     Total event processing time
Found 164 parameter sets under 'Events':
-> "Events"
-> "Events.Rhino Internal"
-> "Events.Rhino Internal.[javax.slee.ActivityEndEvent javax.slee, 1.0]"
-> "Events.Rhino Internal.[javax.slee.facilities.TimerEvent javax.slee, 1.0]"
-> "Events.Rhino Internal.[javax.slee.profile.ProfileAddedEvent javax.slee, 1.0]"
-> "Events.Rhino Internal.[javax.slee.profile.ProfileRemovedEvent javax.slee, 1.0]"
-> "Events.Rhino Internal.[javax.slee.profile.ProfileUpdatedEvent javax.slee, 1.0]"
-> "Events.Rhino Internal.[javax.slee.serviceactivity.ServiceStartedEvent javax.slee, 1.0]"
-> "Events.Rhino Internal.[javax.slee.serviceactivity.ServiceStartedEvent javax.slee, 1.1]"
-> "Events.insis-cap1a"
-> "Events.insis-cap1a.[com.opencloud.slee.resources.in.dialog.CloseInd OpenCloud, 2.0]"
-> "Events.insis-cap1a.[com.opencloud.slee.resources.in.dialog.DelimiterInd OpenCloud, 2.0]"
-> "Events.insis-cap1a.[com.opencloud.slee.resources.in.dialog.NoticeInd OpenCloud, 2.0]"
-> "Events.insis-cap1a.[com.opencloud.slee.resources.in.dialog.OpenConf OpenCloud, 2.0]"
-> "Events.insis-cap1a.[com.opencloud.slee.resources.in.dialog.OpenInd OpenCloud, 2.0]"
-> "Events.insis-cap1a.[com.opencloud.slee.resources.in.dialog.ProviderAbortInd OpenCloud, 2.0]"
-> "Events.insis-cap1a.[com.opencloud.slee.resources.in.dialog.UserAbortInd OpenCloud, 2.0]"
...
Parameter set types — required for monitoring
A parameter set can only be monitored by a statistics client such as rhino-stats if it has a parameter set type.
A parameter set’s type is listed in its description. Most parameter sets have a type, such as the Events parameter set, which has the type "Events". However, the SLEE-Usage
root parameter set, for example, does not have a type, as shown below:
$ ./rhino-stats -l SLEE-Usage
Parameter Set: SLEE-Usage
(no parameter set type defined for this parameter set)
Found 16 parameter sets under 'SLEE-Usage':
-> "SLEE-Usage"
-> "SLEE-Usage.ProfileTables"
-> "SLEE-Usage.RAEntities"
-> "SLEE-Usage.Services"
-> "SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2]"
-> "SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2]"
-> "SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2].(default)"
-> "SLEE-Usage.Services.ServiceID[name=Call Duration Logging Service,vendor=OpenCloud,version=0.2]"
-> "SLEE-Usage.Services.ServiceID[name=Call Duration Logging Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Duration Logging SBB,vendor=OpenCloud,version=0.2]"
-> "SLEE-Usage.Services.ServiceID[name=Call Duration Logging Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Duration Logging SBB,vendor=OpenCloud,version=0.2].(default)"
Neither the SLEE-Usage
parameter set, nor its immediate child parameter sets (SLEE-Usage.ProfileTables
, SLEE-Usage.RAEntities
, and SLEE-Usage.Services
), have a parameter set type — as usage parameters are defined by SLEE components. The parameter set representing usage for an SBB within a particular service does however have a parameter set type and can be monitored:
$ ./rhino-stats -l "SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2]" Parameter Set: SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2] Parameter Set Type: Usage.Services.SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2] Description: Usage stats for SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2] Counter type statistics: Id: Name: Label: Description: 0 missingParameters n/a missingParameters 1 tCallAttempts n/a tCallAttempts 2 unknownSubscribers n/a unknownSubscribers 3 oCallAttempts n/a oCallAttempts 4 callsBarred n/a callsBarred 5 callsAllowed n/a callsAllowed Sample type statistics: (none defined) Found 2 parameter sets under 'SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2]': -> "SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2]" -> "SLEE-Usage.Services.ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2].SbbID[name=Call Barring SBB,vendor=OpenCloud,version=0.2].(default)"
Rhino guarantees that if a particular parameter set has a non-null parameter set type, then all its child parameter sets will also have a non-null parameter set type and can therefore also be individually monitored.
Monitor parameters in real time
Once started, rhino-stats will continue to extract and print the latest statistics once per second. This period can be changed using the -s option.
To monitor a parameter set in real time using the console interface, run rhino-stats
with the -m
command-line argument followed by the parameter set name. For example:
$ ./rhino-stats -m "Events.insis-cap1a.[com.opencloud.slee.resources.incc.operation.InitialDPInd OpenCloud, 3.0]" 2008-05-01 17:37:20.687 INFO [rhinostat] Connecting to localhost:1199 2008-05-01 17:37:21.755 INFO [dispatcher] Establish direct session DirectSession[host=x.x.x.x port=17400 id=56914320286194693] 2008-05-01 17:37:21.759 INFO [dispatcher] Connecting to localhost/127.0.0.1:17400 Events.insis-cap1a.[com.opencloud.slee.resources.incc.operation.InitialDPI time acc fail rej succ EPT ERT SBBT 50% 90% 95% 50% 90% 95% 50% 90% 95% ----------------------- -------------------------------------------------------------------------- 2008-05-01 17:37:25.987 69 0 0 69 0.7 1.2 1.4 0.2 0.4 0.4 0.5 0.8 1.0 2008-05-01 17:37:26.989 69 0 0 69 0.9 1.2 1.4 0.2 0.4 0.4 0.5 0.8 1.0 2008-05-01 17:37:27.991 61 0 0 61 0.9 1.3 1.6 0.3 0.4 0.4 0.6 0.9 1.0 2008-05-01 17:37:28.993 67 0 0 67 0.9 1.3 1.4 0.3 0.4 0.4 0.6 0.9 1.0 2008-05-01 17:37:29.996 69 0 0 69 0.9 1.3 1.4 0.3 0.4 0.4 0.6 0.9 1.0 2008-05-01 17:37:30.996 63 0 0 63 0.9 1.3 1.4 0.3 0.4 0.4 0.6 0.9 1.0 2008-05-01 17:37:31.999 71 0 0 71 0.9 1.3 1.4 0.3 0.4 0.4 0.6 0.9 1.0 2008-05-01 17:37:33.001 64 0 0 64 0.9 1.3 1.4 0.3 0.4 0.4 0.6 0.9 1.0 2008-05-01 17:37:34.002 68 0 0 68 0.9 1.3 1.4 0.3 0.4 0.4 0.6 0.9 1.0 2008-05-01 17:37:35.004 60 0 0 60 0.9 1.3 1.4 0.3 0.4 0.4 0.6 0.9 1.0 2008-05-01 17:37:36.006 64 0 0 64 1.0 1.3 1.4 0.3 0.4 0.4 0.6 0.9 1.0 2008-05-01 17:37:37.008 67 0 0 66 1.0 1.3 1.5 0.3 0.4 0.4 0.6 0.9 1.0 2008-05-01 17:37:38.010 61 0 0 62 1.0 1.4 1.5 0.3 0.4 0.4 0.6 0.9 1.0 2008-05-01 17:37:39.012 61 0 0 61 1.0 1.4 1.5 0.3 0.4 0.4 0.6 0.9 1.0
The "50% 90% 95%" headers indicate percentile buckets for sample type statistics. |
Configure console output
The default console output is not particularly useful when you want to do automated processing of the logged statistics. To make post-processing of the statistics easier, rhino-stats
supports a number of command-line arguments to modify the format of statistics output:
-
-R
— outputs raw (single number) timestamps -
-C
— outputs comma-separated statistics -
-d
— display actual value in addition to deltas for counter stats -
-S
— no per second conversion of counter deltas -
-r
— output one row per node -
-q
— suppresses printing non-statistics information
For example, to output a comma-separated log of event statistics:
$ ./rhino-stats -m "Events.insis-cap1a.[com.opencloud.slee.resources.incc.operation.InitialDPInd OpenCloud, 3.0]" -R -C -q time,acc,fail,rej,succ,EPT,ERT,SBBT 1209620311166,64,0,0,64,0.9 1.2 1.3,0.3 0.4 0.4,0.6 0.8 0.9 1209620312168,63,0,0,63,0.9 1.3 1.3,0.3 0.4 0.4,0.6 0.9 0.9 1209620313169,67,0,0,67,0.9 1.3 1.3,0.3 0.4 0.4,0.6 0.9 0.9 1209620314171,66,0,0,66,0.9 1.3 1.3,0.3 0.4 0.4,0.6 0.9 0.9 1209620315174,65,0,0,65,0.9 1.3 1.3,0.3 0.4 0.4,0.6 0.9 0.9 1209620316176,65,0,0,65,0.9 1.3 1.5,0.3 0.4 0.4,0.6 0.9 1.0 1209620317177,62,0,0,62,0.9 1.3 1.4,0.3 0.4 0.4,0.6 0.9 0.9 1209620318179,66,0,0,66,1.0 1.3 1.5,0.3 0.4 0.4,0.6 0.9 1.0 1209620319181,58,0,0,58,1.0 1.3 1.6,0.3 0.4 0.4,0.6 0.9 1.1 1209620320181,69,0,0,69,1.0 1.3 1.6,0.3 0.4 0.4,0.6 0.9 1.0 1209620321182,68,0,0,68,1.0 1.3 1.6,0.3 0.4 0.4,0.6 0.9 1.0 1209620322183,65,0,0,65,1.0 1.3 1.5,0.3 0.4 0.4,0.6 0.9 1.0 1209620323184,67,0,0,67,1.0 1.3 1.5,0.3 0.4 0.4,0.6 0.9 1.0 ...
Gathering Statistics in Graphical Mode
To create a graph start the rhino-stats
client in graphical mode using the -g
option:
$ ./rhino-stats -g
After the client has downloaded parameter set information from Rhino, the main application window displays. Below are some of the options available for a graph, and instructions for creating a quick or complex graph.
Graphing options
When run in graphical mode, rhino-stats
offers a range of options for interactively extracting and graphically displaying statistics gathered from Rhino, including:
-
counter/gauge plot — displays the values of gauges, or the change in values of counters, over time; displays multiple counters or gauges using different colours; stores an hour’s worth of statistics history
-
sample distribution plot — displays the 5th, 25th, 50th, 75th and 95th percentiles of a sample distribution as it changes over time, either as a bar-and-whisker type graph or as a series of line plots
-
sample distribution histogram — displays a constantly updating histogram of a sample distribution in both logarithmic and linear scales.
Quick graph
The client includes a browser panel at left, with the available parameter sets displayed in a hierarchy tree. To quickly create a simple graph, right-click a parameter set, and select a parameter and type of graph; for example, a quick plot of lockTimeouts.
Complex graph
To create more complex graphs, comprising multiple statistics, use the "graph creation wizard". The following steps are illustrated with an example that creates a plot of event-processing counter statistics from the IN-SIS.
1. Start the wizard — The wizard presents a selection of graph types.
2. Select statistics — The wizard presents a selection of graph components. This screen displays a table of statistics selected for the line plot. Initially, this is empty. To add statistics, click Add.
3. Select parameter sets — The Select Parameter Set dialog displays.
4. Select colours — The Select Graph Components screen redisplays with the components added.
5. Name the graph — The wizard prompts you to name the graph.
6. View the graph — The client creates the graph and begins populating it with statistics extracted from Rhino. The client will continue collecting statistics periodically from Rhino and adding them to the graph. By default the graph displays only the last minute of information. This can be changed using the timescale drop-down list on the toolbar, or by clicking the magnify or demagnify buttons either side of the drop-down list; the x-axis scales from 30 seconds up to 10 minutes. Each line graph stores approximately one hour of data (at the default sample frequency of 1 second). To view stored data that is not currently visible in the graph window, click and drag the scrollbar underneath the graph, or click directly on a position within the scrollbar.
Details of Available Statistics
Further details about the Rhino SNMP OID mappings can be found in the SNMP section of this documentation.
Activities
Activity Statistics
OID: 1.3.6.1.4.1.19808.2.1.2
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
created | | 2 | Activities created | counter | | | | |
ended | | 3 | Activities ended | counter | | | | |
rejected | | 4 | Activities rejected | counter | | | | |
active | | 5 | Activity count | counter | | gauge | | |
startSuspended | stsusp | 6 | Number of activities started suspended | counter | activities | | | |
suspendActivity | susp | 7 | Number of activities suspended | counter | activities | | | |
ActivityHandler
Rhino Activity Handler statistics
OID: 1.3.6.1.4.1.19808.2.1.13
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
gCTime | gct | | GC time | sample | µsec | | time/nanoseconds | time/milliseconds |
txCreate | txCre | 2 | Transacted Activity Creation | counter | activities | | | |
txFire | txFire | 3 | Events Fired Transacted | counter | events | | | |
txEnd | txEnd | 4 | Transacted Activity Ends | counter | activities | | | |
nonTxCreate | NtxCre | 5 | Non-transacted Activity Creation | counter | activities | | | |
nonTxFire | NtxFire | 6 | Events Fired Non-transacted | counter | events | | | |
nonTxEnd | NtxEnd | 7 | Non-transacted Activity Ends | counter | activities | | | |
nonTxLookup | NtxLook | 8 | Non-transacted lookups | counter | lookups | | | |
txLookup | txLook | 9 | Transacted lookups | counter | lookups | | | |
nonTxLookupMiss | NtxLkMiss | 10 | Misses in non-transacted lookups | counter | lookups | | | |
txLookupMiss | txLkMiss | 11 | Misses in transacted lookups | counter | lookups | | | |
ancestorCount | ances | 12 | Ancestor activities created | counter | activities | | | |
gcCount | agcs | 13 | Number of activities queried by GC | counter | sweeps | | | |
generationsCollected | gensC | 14 | Activity MVCC generations collected | counter | generations | | | |
activitiesCollected | actsC | 15 | Number of activities collected | counter | activities | | | |
activitiesUnclean | uncln | 16 | Number of activities not cleaned by GC and retained for next GC | counter | activities | | | |
activitiesScanned | scan | 17 | Number of activities scanned by GC | counter | activities | | | |
administrativeRemove | admRem | 18 | Number of activities administratively removed | counter | activities | | | |
livenessQueries | qlive | 19 | Number of activity liveness queries performed | counter | queries | | | |
timersSet | tmset | 20 | Number of SLEE timers created | counter | timers | | | |
timersCancelled | tmcanc | 21 | Number of SLEE timers cancelled | counter | timers | | | |
localLockRequests | llock | 22 | Number of activity state locks requested for activities owned by the same node | counter | locks | | | |
foreignLockRequests | flock | 23 | Number of activity state locks requested for activities owned by another node | counter | locks | | | |
create | | 24 | Activities created | counter | activities | | | |
end | | 25 | Activities ended | counter | activities | | | |
fire | fire | 26 | Events fired | counter | events | | | |
lookup | look | 27 | Activity lookups attempted | counter | lookups | | | |
lookupMiss | lkMiss | 28 | Activity lookups failed | counter | lookups | | | |
churn | churn | 29 | Activity state churn | counter | units | gauge | | |
liveCount | live | 30 | Activity handler live activities count | counter | activities | gauge | | |
tableSize | tblsz | 31 | Activity lookup table size | counter | activities | gauge | | |
timerCount | timers | 32 | Number of SLEE timers | counter | timers | gauge | | |
lockRequests | locks | 33 | Number of activity state locks requested | counter | locks | | | |
ClassLoading
JVM Class Loading Statistics
OID: 1.3.6.1.4.1.19808.2.1.28
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
loadedClassCount | loadClasses | 2 | Number of classes currently loaded | counter | | gauge | | |
totalLoadedClassCount | totLoadClasses | 3 | Total number of classes loaded since JVM start | counter | | gauge | | |
unloadedClassCount | unloadClasses | 4 | Total number of classes unloaded since JVM start | counter | | gauge | | |
Compilation
JVM Compilation Statistics
OID: 1.3.6.1.4.1.19808.2.1.29
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
totalCompilationTime | totCompTime | 2 | The total compilation time | counter | | gauge | | |
EndpointLimiting
SLEE Endpoint Limiting Statistics
OID: 1.3.6.1.4.1.19808.2.1.22
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
submitted | sub | 2 | Activities and events submitted to a SLEE endpoint | counter | | | | |
accepted | acc | 3 | Activities and events accepted by a SLEE endpoint | counter | | | | |
userAccepted | usrAcc | 4 | Activities and events accepted by the user rate limiter (SystemInput) | counter | | | | |
userRejected | usrRej | 5 | Activities and events rejected by the user rate limiter (SystemInput) | counter | | | | |
licenseRejected | licRej | 6 | Activities and events rejected due to the SDK license limit | counter | | | | |
EventRouter
EventRouter Statistics
OID: 1.3.6.1.4.1.19808.2.1.15
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
eventHandlerStages | evh | 2 | Event handler stages executed | counter | | | | |
rollbackHandlerStages | rbh | 3 | Rollback handler stages executed | counter | | | | |
cleanupStages | clh | 4 | Cleanup stages executed | counter | | | | |
badGuyHandlerStages | bgh | 5 | Badguy handler stages executed | counter | | | | |
vEPs | vep | 6 | Event router setup (virgin events) | counter | | | | |
rootSbbFinds | rootf | 7 | Number of root SBBs resolved | counter | | | | |
sbbsResolved | res | 8 | Number of SBBs resolved | counter | | | | |
sbbCreates | cre | 9 | Number of SBBs created | counter | | | | |
sbbExceptions | exc | 10 | Number of SBB thrown exceptions caught | counter | | | | |
processingRetrys | retr | 11 | Number of event processing retries due to concurrent activity updates | counter | | | | |
Events
Event Statistics
OID: 1.3.6.1.4.1.19808.2.1.1
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
eventRouterSetupTime | ERT | | Event router setup time | sample | µsec | | time/nanoseconds | time/milliseconds |
sbbProcessingTime | SBBT | | SBB processing time | sample | µsec | | time/nanoseconds | time/milliseconds |
eventProcessingTime | EPT | | Total event processing time | sample | µsec | | time/nanoseconds | time/milliseconds |
accepted | acc | 2 | Accepted events | counter | | | | |
rejected | rej | 3 | Events rejected due to overload (total) | counter | | | | |
failed | fail | 4 | Events that failed in event processing | counter | | | | |
successful | succ | 5 | Event processed successfully | counter | | | | |
rejectedQueueFull | rejqf | 6 | Events rejected due to overload (queue full) | counter | | | | |
rejectedQueueTimeout | rejqt | 7 | Events rejected due to overload (queue timeout) | counter | | | | |
rejectedOverload | rejso | 8 | Events rejected due to overload (system overload) | counter | | | | |
ExecutorStats
Executor pool statistics
OID: 1.3.6.1.4.1.19808.2.1.23
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
executorTaskExecutionTime | etExecTime | | Time a task spends executing | sample | | | time/nanoseconds | time/milliseconds |
executorTaskWaitingTime | etWaitTime | | Time a task spends waiting for execution | sample | | | time/nanoseconds | time/milliseconds |
executorTasksExecuted | etExecuted | 2 | Number of executor tasks executed | counter | | delta | | |
executorTasksExecuting | etExecuting | 3 | Number of executor tasks currently executing | counter | | gauge | | |
executorTasksRejected | etRejected | 4 | Number of executor tasks rejected | counter | | delta | | |
executorTasksSubmitted | etSubmitted | 5 | Number of executor tasks submitted | counter | | delta | | |
executorTasksWaiting | etWaiting | 6 | Number of executor tasks waiting to execute | counter | | gauge | | |
executorThreadsIdle | thrIdle | 7 | Number of idle executor threads | counter | | gauge | | |
executorThreadsTotal | thrTotal | 8 | Total number of executor threads | counter | | gauge | | |
GarbageCollector
JVM Garbage Collector Statistics
OID: 1.3.6.1.4.1.19808.2.1.30
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
collectionCount | collCount | 2 | Garbage Collector Collection Count | counter | | gauge | | |
collectionTime | collTime | 3 | Garbage Collector Collection Time | counter | | gauge | | |
JDBCDatasource
JDBC Datasource Statistics
OID: 1.3.6.1.4.1.19808.2.1.16
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
getWait | getWait | | Time spent by threads waiting for a connection (that eventually succeeded) | sample | msec | | time/milliseconds | time/milliseconds |
duration | duration | | Time that connections spent in use (allocated to a client) | sample | msec | | time/milliseconds | time/milliseconds |
create | create | 2 | Number of new connections created | counter | | | | |
removeIdle | remIdle | 3 | Number of connections removed from the pool due to being idle | counter | | | | |
removeOverflow | remOver | 4 | Number of connections removed from the pool due to the idle pool being full | counter | | | | |
removeError | remErr | 5 | Number of connections removed from the pool due to a connection error | counter | | | | |
getRequest | getReq | 6 | Number of getConnection() requests made | counter | | | | |
getSuccess | getOk | 7 | Number of getConnection() requests that succeeded | counter | | | | |
getTimeout | getTO | 8 | Number of getConnection() requests that failed due to a timeout | counter | | | | |
getError | getErr | 9 | Number of getConnection() requests that failed due to a connection error | counter | | | | |
putOk | putOk | 10 | Number of connections returned to the pool that were retained | counter | | | | |
putOverflow | putOver | 11 | Number of connections returned to the pool that were closed because the idle pool was at maximum size | counter | | | | |
putError | putErr | 12 | Number of connections returned to the pool that were closed due to a connection error | counter | | | | |
inUseConnections | cInUse | 13 | Number of in use connections | counter | | gauge | | |
idleConnections | cIdle | 14 | Number of idle, pooled, connections | counter | | gauge | | |
pendingConnections | cPend | 15 | Number of connections in the process of being created | counter | | gauge | | |
totalConnections | cTotal | 16 | Total number of open connections | counter | | gauge | | |
maxConnections | cMax | 17 | Maximum number of open connections | counter | | gauge | | |
JVM
JVM Statistics
OID: 1.3.6.1.4.1.19808.2.1.14
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
heapUsed | husd | 2 | Used heap memory | counter | | gauge | | |
heapCommitted | hcomm | 3 | Committed heap memory | counter | | gauge | | |
heapInitial | hinit | 4 | Initial heap memory | counter | | gauge | | |
heapMaximum | hmax | 5 | Maximum heap memory | counter | | gauge | | |
nonHeapUsed | nhusd | 6 | Used non-heap memory | counter | | gauge | | |
nonHeapCommitted | nhcomm | 7 | Committed non-heap memory | counter | | gauge | | |
nonHeapInitial | nhinit | 8 | Initial non-heap memory | counter | | gauge | | |
nonHeapMaximum | nhmax | 9 | Maximum non-heap memory | counter | | gauge | | |
classesCurrentLoaded | cLoad | 10 | Number of classes currently loaded | counter | | gauge | | |
classesTotalLoaded | cTotLoad | 11 | Total number of classes loaded since JVM start | counter | | gauge | | |
classesTotalUnloaded | cTotUnload | 12 | Total number of classes unloaded since JVM start | counter | | gauge | | |
LicenseAccounting
License Accounting Information
OID: 1.3.6.1.4.1.19808.2.1.12
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
accountedUnits | acc | 2 | Accounted License Units Consumed | counter | units | | | |
unaccountedUnits | unacc | 3 | Unaccounted License Units Consumed | counter | units | | | |
Limiters
Limiter Statistics
OID: 1.3.6.1.4.1.19808.2.1.17
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
unitsUsed | used | 2 | Number of units used | counter | | | | |
unitsRejected | rejected | 3 | Number of units rejected (both here and by parent) | counter | | | | |
unitsRejectedByParent | rejectedByParent | 4 | Number of units rejected by parent limiter | counter | | | | |
LockManagers
Lock Manager Statistics
OID: 1.3.6.1.4.1.19808.2.1.4
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
lockAcquisitionTime | LAT | | Lock acquisition time | sample | µsec | | time/nanoseconds | time/milliseconds |
lockWaitTime | LWT | | Time waited for contested locks | sample | µsec | | time/nanoseconds | time/milliseconds |
locksAcquired | acq | 2 | Locks acquired | counter | | | | |
locksReleased | rel | 3 | Locks released | counter | | | | |
lockWaits | wait | 4 | Lock waits occurred | counter | | | | |
lockTimeouts | timeout | 5 | Lock timeouts occurred | counter | | | | |
knownLocks | locks | 6 | Total number of locks with state | counter | | gauge | | |
acquireMessages | msgAcquire | 7 | LOCK_ACQUIRE messages sent | counter | | | | |
abortMessages | msgAbort | 8 | LOCK_ABORT_ACQUIRE messages sent | counter | | | | |
releaseMessages | msgRelease | 9 | LOCK_RELEASE_TRANSACTION messages sent | counter | | | | |
migrationRequestMessages | msgMigReq | 10 | LOCK_MIGRATION_REQUEST messages sent | counter | | | | |
migrationReleaseMessages | msgMigRel | 11 | LOCK_MIGRATION_RELEASE messages sent | counter | | | | |
MemDB-Local
Local Memory Database Statistics
OID: 1.3.6.1.4.1.19808.2.1.9
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
committedSize | csize | 2 | Current committed size in kilobytes | counter | kb | gauge | | |
maxCommittedSize | max | 3 | Maximum allowed committed size in kilobytes | counter | kb | gauge | | |
churnSize | churn | 4 | Churn space used by the database, in bytes | counter | b | | | |
cleanupCount | cleanups | 5 | Number of state cleanups performed by the database | counter | # | | | |
MemDB-Replicated
Replicated Memory Database Statistics
OID: 1.3.6.1.4.1.19808.2.1.10
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
committedSize | csize | 2 | Current committed size in kilobytes | counter | kb | gauge | | |
maxCommittedSize | max | 3 | Maximum allowed committed size in kilobytes | counter | kb | gauge | | |
churnSize | churn | 4 | Churn space used by the database, in bytes | counter | b | | | |
cleanupCount | cleanups | 5 | Number of state cleanups performed by the database | counter | # | | | |
MemDB-Timestamp
Memory Database Timestamp Statistics
OID: 1.3.6.1.4.1.19808.2.1.25
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
waitingThreads | waiting | 2 | The number of threads waiting for a timestamp to become safe | counter | | gauge | | |
unexposedCommits | unexposed | 3 | The number of commits which are not yet safe to expose | counter | | gauge | | |
Memory
JVM Memory Statistics
OID: 1.3.6.1.4.1.19808.2.1.31
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
heapInitial | heapInit | 2 | Memory Heap Usage Initial | counter | | gauge | | |
heapUsed | heapUsed | 3 | Memory Heap Usage Used | counter | | gauge | | |
heapMax | heapMax | 4 | Memory Heap Usage Max | counter | | gauge | | |
heapCommitted | heapComm | 5 | Memory Heap Usage Committed | counter | | gauge | | |
nonHeapInitial | nonheapInit | 6 | Memory Non Heap Usage Initial | counter | | gauge | | |
nonHeapUsed | nonheapUsed | 7 | Memory Non Heap Usage Used | counter | | gauge | | |
nonHeapMax | nonheapMax | 8 | Memory Non Heap Usage Max | counter | | gauge | | |
nonHeapCommitted | nonheapComm | 9 | Memory Non Heap Usage Committed | counter | | gauge | | |
MemoryPool
JVM Memory Pool Statistics
OID: 1.3.6.1.4.1.19808.2.1.32
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
collectionUsageInitial | collUsageInit | 2 | Memory Pool Collection Usage Initial | counter | | gauge | | |
collectionUsageUsed | collUsageUsed | 3 | Memory Pool Collection Usage Used | counter | | gauge | | |
collectionUsageMax | collUsageMax | 4 | Memory Pool Collection Usage Max | counter | | gauge | | |
collectionUsageCommitted | collUsageComm | 5 | Memory Pool Collection Usage Committed | counter | | gauge | | |
collectionUsageThreshold | collUsageThresh | 6 | Memory Pool Collection Usage Threshold | counter | | gauge | | |
collectionUsageThresholdCount | collUseThrCount | 7 | Memory Pool Collection Usage Threshold Count | counter | | gauge | | |
peakUsageInitial | peakUsageInit | 8 | Memory Pool Peak Usage Initial | counter | | gauge | | |
peakUsageUsed | peakUsageUsed | 9 | Memory Pool Peak Usage Used | counter | | gauge | | |
peakUsageMax | peakUsageMax | 10 | Memory Pool Peak Usage Max | counter | | gauge | | |
peakUsageCommitted | peakUsageComm | 11 | Memory Pool Peak Usage Committed | counter | | gauge | | |
usageThreshold | usageThresh | 12 | Memory Pool Usage Threshold | counter | | gauge | | |
usageThresholdCount | usageThreshCount | 13 | Memory Pool Usage Threshold Count | counter | | gauge | | |
usageInitial | usageInit | 14 | Memory Pool Usage Initial | counter | | gauge | | |
usageUsed | usageUsed | 15 | Memory Pool Usage Used | counter | | gauge | | |
usageMax | usageMax | 16 | Memory Pool Usage Max | counter | | gauge | | |
usageCommitted | usageComm | 17 | Memory Pool Usage Committed | counter | | gauge | | |
ObjectPools
Object Pool Statistics
OID: 1.3.6.1.4.1.19808.2.1.7
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
added | | 2 | Freed objects that were accepted to the pool | counter | | | | |
removed | | 3 | Allocation requests that were satisfied from the pool | counter | | | | |
overflow | | 4 | Freed objects that were discarded because the pool was full | counter | | | | |
miss | | 5 | Allocation requests not satisfied by the pool because it was empty | counter | | | | |
size | | 6 | Current number of objects in the pool | counter | | gauge | | |
capacity | | 7 | Maximum object pool capacity | counter | | gauge | | |
pruned | | 8 | Objects in the pool that were discarded because they fell off the end of the LRU queue | counter | | | | |
OperatingSystem
JVM Operating System Statistics
OID: 1.3.6.1.4.1.19808.2.1.33
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
availableProcessors | availProc | 2 | Operating System Available Processors | counter | | gauge | | |
committedVirtualMemorySize | commVirtMem | 3 | Operating System Committed Virtual Memory | counter | | gauge | | |
freePhysicalMemorySize | freePhysMem | 4 | Operating System Free Physical Memory Size | counter | | gauge | | |
freeSwapSpaceSize | freeSwapSpc | 5 | Operating System Free Swap Space Size | counter | | gauge | | |
processCpuTime | procCpuTime | 6 | Operating System Process Cpu Time | counter | | gauge | | |
totalPhysicalMemorySize | totPhysMem | 7 | Operating System Total Physical Memory Size | counter | | gauge | | |
totalSwapSpaceSize | totSwapSpc | 8 | Operating System Total Swap Space Size | counter | | gauge | | |
maxFileDescriptorCount | maxFileDesc | 9 | Operating System Max File Descriptor Count | counter | | gauge | | |
openFileDescriptorCount | openFileDesc | 10 | Operating System Open File Descriptor Count | counter | | gauge | | |
PooledByteArrayBuffer
Byte Array Buffer Pool Statistics
OID: 1.3.6.1.4.1.19808.2.1.26
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
out | out | 2 | Total buffer allocation requests | counter | | | | |
in | in | 3 | Total freed buffers | counter | | | | |
added | added | 4 | Freed buffers that were accepted to the pool | counter | | | | |
removed | removed | 5 | Buffer allocation requests that were satisfied from the pool | counter | | | | |
overflow | overflow | 6 | Freed buffers that were discarded because the pool was full | counter | | | | |
miss | miss | 7 | Buffer allocation requests not satisfied by the pool because it was empty | counter | | | | |
poolSize | psize | 8 | Current number of buffers in the pool | counter | | gauge | | |
bufferSize | bsize | 9 | Buffer size | counter | | gauge | | |
poolCapacity | capacity | 10 | Maximum pool capacity | counter | | gauge | | |
Runtime
JVM Runtime Statistics
OID: 1.3.6.1.4.1.19808.2.1.34
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
uptime | uptime | 2 | Runtime Uptime | counter | | gauge | | |
startTime | startTime | 3 | Runtime Start Time | counter | | gauge | | |
Services
Service Statistics
OID: 1.3.6.1.4.1.19808.2.1.5
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
sbbLifeTime | lifeT | | Root SBB lifetime | sample | msec | | time/milliseconds | time/seconds |
rootSbbsCreated | created | 2 | Root sbbs created | counter | | | | |
rootSbbsRemoved | removed | 3 | Root sbbs removed | counter | | | | |
activeRootSbbs | active | 4 | # of active root sbbs | counter | | gauge | | |
StagingThreads
Staging Thread Statistics
OID: 1.3.6.1.4.1.19808.2.1.3
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
waitTime | waitT | | Time spent waiting on stage queue | sample | msec | | time/nanoseconds | time/milliseconds |
itemsAdded | | 2 | Items of work submitted to the thread pool | counter | | | | |
itemsCompleted | | 3 | Items of work completed by the thread pool | counter | | | | |
queueSize | qSize | 4 | Size of the work item queue | counter | | gauge | | |
numThreads | numthrd | 5 | Current size of the thread pool | counter | | gauge | | |
availableThreads | avail | 6 | Number of idle worker threads | counter | | gauge | | |
minThreads | min | 7 | Configured minimum size of the thread pool | counter | | gauge | | |
maxThreads | max | 8 | Configured maximum size of the thread pool | counter | | gauge | | |
activeThreads | activ | 9 | Worker threads currently active processing work | counter | | gauge | | |
peakThreads | peak | 10 | The most threads that were ever active in the thread pool in the current configuration | counter | | gauge | | |
dropped | drop | 11 | Number of dropped stage items | counter | | | | |
StagingThreads-Misc
Distributed Resource Manager Runnable Stage Statistics
OID: 1.3.6.1.4.1.19808.2.1.21
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
waitTime | waitT | | Time spent waiting on stage queue | sample | msec | | time/nanoseconds | time/milliseconds |
itemsAdded | | 2 | Items of work submitted to the thread pool | counter | | | | |
itemsCompleted | | 3 | Items of work completed by the thread pool | counter | | | | |
queueSize | qSize | 4 | Size of the work item queue | counter | | gauge | | |
numThreads | numthrd | 5 | Current size of the thread pool | counter | | gauge | | |
availableThreads | avail | 6 | Number of idle worker threads | counter | | gauge | | |
minThreads | min | 7 | Configured minimum size of the thread pool | counter | | gauge | | |
maxThreads | max | 8 | Configured maximum size of the thread pool | counter | | gauge | | |
activeThreads | activ | 9 | Worker threads currently active processing work | counter | | gauge | | |
peakThreads | peak | 10 | The most threads that were ever active in the thread pool in the current configuration | counter | | gauge | | |
dropped | drop | 11 | Number of dropped stage items | counter | | | | |
Thread
JVM Thread Statistics
OID: 1.3.6.1.4.1.19808.2.1.35
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
currentThreadCpuTime | currThrdCpuTm | 2 | Thread Current Thread Cpu Time | counter | | gauge | | |
currentThreadUserTime | currThrdUsrTm | 3 | Thread Current Thread User Time | counter | | gauge | | |
daemonThreadCount | daeThrds | 4 | Thread Daemon Thread Count | counter | | gauge | | |
peakThreadCount | peakThrds | 5 | Thread Peak Thread Count | counter | | gauge | | |
threadCount | threads | 6 | Thread Thread Count | counter | | gauge | | |
totalStartedThreadCount | totStartThrds | 7 | Thread Total Started Thread Count | counter | | gauge | | |
TimerFacility
Timer Facility’s timing-wheel execution statistics
OID: 1.3.6.1.4.1.19808.2.1.24
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
cascadeOverflow | cOverflow | 2 | Number of scheduled jobs that were cascaded from the overflow list to wheel 3 | counter | | delta | | |
cascadeWheel1 | cWheel1 | 3 | Number of scheduled jobs that were cascaded from wheel 1 to wheel 0 | counter | | delta | | |
cascadeWheel2 | cWheel2 | 4 | Number of scheduled jobs that were cascaded from wheel 2 to wheel 1 | counter | | delta | | |
cascadeWheel3 | cWheel3 | 5 | Number of scheduled jobs that were cascaded from wheel 3 to wheel 2 | counter | | delta | | |
jobsExecuted | jExecuted | 6 | Number of jobs that reached their expiry time and were submitted for execution | counter | | delta | | |
jobsRejected | jRejected | 7 | Number of submitted jobs that were rejected by the executor | counter | | delta | | |
jobsScheduled | jScheduled | 8 | Number of jobs scheduled onto a timer queue for later execution | counter | | delta | | |
jobsToOverflow | jToOverflow | 9 | Number of scheduled jobs that were initially placed on the overflow list | counter | | delta | | |
jobsToWheel0 | jToWheel0 | 10 | Number of scheduled jobs that were initially placed on wheel 0 | counter | | delta | | |
jobsToWheel1 | jToWheel1 | 11 | Number of scheduled jobs that were initially placed on wheel 1 | counter | | delta | | |
jobsToWheel2 | jToWheel2 | 12 | Number of scheduled jobs that were initially placed on wheel 2 | counter | | delta | | |
jobsToWheel3 | jToWheel3 | 13 | Number of scheduled jobs that were initially placed on wheel 3 | counter | | delta | | |
jobsWaiting | jWaiting | 14 | Number of submitted jobs that are currently waiting to reach their expiry time | counter | | delta | | |
tasksCancelled | tCancelled | 15 | Number of tasks successfully cancelled | counter | | delta | | |
tasksFixedDelay | tRepDelay | 16 | Number of fixed-delay repeated execution tasks submitted | counter | | delta | | |
tasksFixedRate | tRepRate | 17 | Number of fixed-rate repeated execution tasks submitted | counter | | delta | | |
tasksImmediate | tNow | 18 | Number of immediate-execution tasks submitted | counter | | delta | | |
tasksOneShot | tOnce | 19 | Number of one-shot delayed execution tasks submitted | counter | | delta | | |
tasksRepeated | tRepeated | 20 | Number of times a repeated-execution task was rescheduled | counter | | delta | | |
ticks | ticks | 21 | Number of timer ticks elapsed | counter | | delta | | |
Transactions
Transaction Manager Statistics
OID: 1.3.6.1.4.1.19808.2.1.6
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
active | | 2 | Number of active transactions | counter | | gauge | | |
started | started | 3 | Transactions started | counter | | | | |
committed | commit | 4 | Transactions committed | counter | | | | |
rolledBack | rollback | 5 | Transactions rolled back | counter | | | | |
UnpooledByteArrayBuffer
Unpooled Byte Array Buffer Statistics
OID: 1.3.6.1.4.1.19808.2.1.27
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
out | out | 2 | Total buffer allocation requests | counter | | | | |
in | in | 3 | Total freed buffers | counter | | | | |
bytesAllocated | allocated | 4 | Total number of bytes allocated to buffers | counter | | | | |
bytesDiscarded | discarded | 5 | Total number of bytes discarded by freed buffers | counter | | | | |
Savanna-Group
Per-group Savanna statistics
OID: 1.3.6.1.4.1.19808.2.1.19
Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
---|---|---|---|---|---|---|---|---|
tokenRotationTime | tokRotation | | Token rotation time samples | sample | | | time/milliseconds | time/milliseconds |
regularMessageSize | rgmSize | | Sent regular message size samples | sample | | | count/bytes | count/bytes |
messageDeliveryTime | dyMsgTime | | Time taken to locally deliver a single message | sample | | | time/nanoseconds | time/milliseconds |
transmitBatchBytes | txBatchBytes | | Size of messages sent per token rotation | sample | | | count/bytes | count/bytes |
appMessageSize | appMsgSize | | Sent application message size samples | sample | | | count/bytes | count/bytes |
appRoundTripTime | appRTT | | Time from sending a message to the message being delivered to application code on the same node | sample | | | time/nanoseconds | time/milliseconds |
appTransmitDelay | appXmitDelay | | Time from sending an application message to savanna, to it being sent at the network level | sample | | | time/nanoseconds | time/milliseconds |
appDeliveryDelay | appDelivDelay | | Time from a received message being eligible for delivery, to it being delivered to the application | sample | | | time/nanoseconds | time/milliseconds |
fragmentsPerAppMessage | appMsgFrags | | Number of fragments making up a single application message | sample | | | count/fragments | count/fragments |
fragmentSize | appFragSize | | Size of sent message fragments | sample | | | count/bytes | count/bytes |
fragmentsPerRegularMessage | rgmFrags | | Number of application message fragments in a single regular message | sample | | | count/fragments | count/fragments |
udpBytesSent | udpBytesTx | 2 | Total UDP bytes sent | counter | bytes | | | |
udpBytesReceived | udpBytesRx | 3 | Total UDP bytes received | counter | bytes | | | |
udpDatagramsSent | udpTx | 4 | Number of UDP datagrams successfully sent | counter | count | | | |
udpDatagramsReceived | udpRx | 5 | Number of valid UDP datagrams received | counter | count | | | |
udpInvalidDatagramsReceived | udpErrorRx | 6 | Number of invalid UDP datagrams received | counter | count | | | |
udpDatagramSendErrors | udpErrorTx | 7 | Number of UDP datagrams that failed to be sent | counter | count | | | |
tokenRetransmits | tokRetrans | 8 | Number of token retransmits | counter | count | | | |
activityEstimate | activityEst | 9 | Cluster activity time estimate | counter | msec | gauge | | |
regularMessagesSent | rgmTx | 10 | Number of regular messages sent | counter | count | | | |
regularMessagesReceived | rgmRx | 11 | Number of regular messages received | counter | count | | | |
recoveryMessagesSent | rcmTx | 12 | Number of recovery messages sent | counter | count | | | |
recoveryMessagesReceived | rcmRx | 13 | Number of recovery messages received | counter | count | | | |
restartGroupMessagesSent | rsmTx | 14 | Number of restart group messages sent | counter | count | | | |
restartGroupMessagesReceived | rsmRx | 15 | Number of restart group messages received | counter | count | | | |
restartGroupMessageRetransmits | rsmRetrans | 16 | Number of restart group messages retransmitted | counter | count | | | |
regularTokensSent | rtokTx | 17 | Number of regular tokens sent | counter | count | | | |
regularTokensReceived | rtokRx | 18 | Number of regular tokens received | counter | count | | | |
installTokensSent | itokTx | 19 | Number of install tokens sent | counter | count | | | |
installTokensReceived | itokRx | 20 | Number of install tokens received | counter | count | | | |
groupIdles | idles | 21 | Number of times group has gone idle | counter | count | | | |
messagesLessThanARU | belowARU | 22 | Number of messages seen less than ARU | counter | count | | | |
shiftToInstall | toInstall | 23 | Number of times group has shifted to install | counter | count | | | |
shiftToRecovery | toRecovery | 24 | Number of times group has shifted to recovery | counter | count | | | |
shiftToOperational | toOper | 25 | Number of times group has shifted to operational | counter | count | | | |
messageRetransmits | msgRetrans | 26 | Number of message retransmits | counter | count | | | |
fcReceiveBufferSize | fcRcvBuf | 27 | Flowcontrol receive buffer size | counter | bytes | gauge | | |
fcSendWindowSize | fcSendWin | 28 | Flowcontrol send window size | counter | bytes | gauge | | |
fcCongestionWindowSize | fcConWin | 29 | Flowcontrol congestion window size | counter | bytes | gauge | | |
fcTokenRotationEstimate | fcTokEst | 30 | Flow control token rotation time estimate | counter | msec | gauge | | |
fcRetransmissionRequests | fcRetrans | 31 | Number of current retransmission requests | counter | count | gauge | | |
fcLimitedSends | fcLimited | 32 | Number of token rotations where we wanted to send more data than flowcontrol allowed | counter | count | | | |
deliveryQueueSize | dyQSize | 33 | Size of messages waiting to be delivered locally | counter | bytes | gauge | | |
deliveryQueueBytes | dyQBytes | 34 | Size of messages waiting to be delivered locally | counter | bytes | gauge | | |
transmitQueueSize | txQSize | 35 | Number of messages waiting to send | counter | bytes | gauge | | |
transmitQueueBytes | txQBytes | 36 | Size of messages waiting to send | counter | bytes | gauge | | |
appBytesSent | appBytesTx | 37 | Number of application message bytes sent | counter | bytes | | | |
appBytesReceived | appBytesRx | 38 | Number of application message bytes received | counter | bytes | | | |
appMessagesSent | appTx | 39 | Number of application messages sent | counter | count | | | |
appMessagesReceived | appRx | 40 | Number of application messages received and fully reassembled | counter | count | | | |
appPartialMessagesReceived |
appPartialRx |
41 |
Number of application messages received and partially reassembled |
counter |
count |
|||
appSendErrors |
appErrorTx |
42 |
Number of application messages dropped due to full buffers |
counter |
count |
|||
fragStartSent |
fragStartTx |
43 |
Number of start message fragments sent |
counter |
count |
|||
fragMidSent |
fragMidTx |
44 |
Number of middle message fragments sent |
counter |
count |
|||
fragEndSent |
fragEndTx |
45 |
Number of final message fragments sent |
counter |
count |
|||
fragNonSent |
fragNonTx |
46 |
Number of messages sent unfragmented |
counter |
count |
Savanna-Membership
Membership ring Savanna statistics
OID: 1.3.6.1.4.1.19808.2.1.18
| Name | Short Name | Mapping | Description | Type | Unit Label | Default View | Source Units | Default Display Units |
|---|---|---|---|---|---|---|---|---|
| tokenRotationTime | tokRotation | | Token rotation time samples | sample | | | time/milliseconds | time/milliseconds |
| udpBytesSent | udpBytesTx | 2 | Total UDP bytes sent | counter | bytes | | | |
| udpBytesReceived | udpBytesRx | 3 | Total UDP bytes received | counter | bytes | | | |
| udpDatagramsSent | udpTx | 4 | Number of UDP datagrams successfully sent | counter | count | | | |
| udpDatagramsReceived | udpRx | 5 | Number of valid UDP datagrams received | counter | count | | | |
| udpInvalidDatagramsReceived | udpErrorRx | 6 | Number of invalid UDP datagrams received | counter | count | | | |
| udpDatagramSendErrors | udpErrorTx | 7 | Number of UDP datagrams that failed to be sent | counter | count | | | |
| tokenRetransmits | tokRetrans | 8 | Number of token retransmits | counter | count | | | |
| activityEstimate | activityEst | 9 | Cluster activity time estimate | counter | msec | gauge | | |
| joinMessagesSent | joinTx | 10 | Number of join messages sent | counter | count | | | |
| joinMessagesReceived | joinRx | 11 | Number of join messages received | counter | count | | | |
| membershipTokensSent | mtokTx | 12 | Number of membership tokens sent | counter | count | | | |
| membershipTokensReceived | mtokRx | 13 | Number of membership tokens received | counter | count | | | |
| commitTokensSent | ctokTx | 14 | Number of commit tokens sent | counter | count | | | |
| commitTokensReceived | ctokRx | 15 | Number of commit tokens received | counter | count | | | |
| shiftToGather | toGather | 16 | Number of times group has shifted to gather | counter | count | | | |
| shiftToInstall | toInstall | 17 | Number of times group has shifted to install | counter | count | | | |
| shiftToCommit | toCommit | 18 | Number of times group has shifted to commit | counter | count | | | |
| shiftToOperational | toOper | 19 | Number of times group has shifted to operational | counter | count | | | |
| tokenRetransmitTimeouts | tokTimeout | 20 | Number of token retransmission timeouts | counter | count | | | |
Metrics.Services.cmp
SBB CMP field metrics. These metrics are generated for every CMP field in each SBB; for example, a hypothetical CMP field named callState would produce statistics named callStateReads, callStateWrites, and so on.
The metrics recording can be turned on/off with rhino-console commands.

| Name | Description | Type | Unit Label | View | Source Units | Default Display Units |
|---|---|---|---|---|---|---|
| <cmpfield>Reads | CMP field <cmpfield> reads | Counter | | | | |
| <cmpfield>Writes | CMP field <cmpfield> writes | Counter | | | | |
| <cmpfield>ReferenceCacheHits | CMP field <cmpfield> reference cache hits during field reads | Counter | | | | |
| <cmpfield>ReferenceCacheMisses | CMP field <cmpfield> reference cache misses during field reads | Counter | | | | |
| <cmpfield>WriteTime | CMP field <cmpfield> serialisation time | Sample | | | | |
| <cmpfield>ReadTime | CMP field <cmpfield> deserialisation time | Sample | | | | |
| <cmpfield>Size | CMP field <cmpfield> serialised size | Sample | | | | |
Metrics.Services.lifecycle
SBB lifecycle method metrics. These stats record invocations of SBB lifecycle methods.
The metrics recording can be turned on/off with rhino-console commands.

| Name | Description | Type | Unit Label | View | Source Units | Default Display Units |
|---|---|---|---|---|---|---|
| sbbSetContexts | Total number of setSbbContext invocations | Counter | | | | |
| sbbUnsetContexts | Total number of unsetSbbContext invocations | Counter | | | | |
| sbbCreates | Total number of sbbCreate invocations | Counter | | | | |
| sbbRemoves | Total number of sbbRemove invocations | Counter | | | | |
| sbbLoads | Total number of sbbLoad invocations | Counter | | | | |
| sbbStores | Total number of sbbStore invocations | Counter | | | | |
| sbbActivates | Total number of sbbActivate invocations | Counter | | | | |
| sbbPassivates | Total number of sbbPassivate invocations | Counter | | | | |
| sbbRollbacks | Total number of sbbRolledBack invocations | Counter | | | | |
| sbbExceptionsThrown | Total number of sbbExceptionThrown invocations | Counter | | | | |
generate-system-report
The `generate-system-report` script generates a tarball of useful system information for the OpenCloud support team. Below is an example of its output:
```
$ ./generate-system-report.sh
This script generates a report tarball which can be useful when remotely
diagnosing problems with this installation.

The created tarball contains information on the current Rhino installation
(config files, license details), as well as various system settings
(operating system, program versions, and network settings).

It is recommended that you include the generated 'report.tar' file when
contacting Open Cloud for support.

Generating report using /home/user/rhino/node-101/work/report for temporary files.

IMPORTANT: It is a good idea to run the start-rhino.sh script before this script.
Otherwise, important run-time configuration information will be missing from the
generated report.

Done. 'report.tar' generated.
```
dumpthreads
The `dumpthreads` script sends a QUIT signal to the JVM process that Rhino is running in, causing the JVM to produce a thread dump.
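The same signal can be sent manually to capture a thread dump on demand. A minimal sketch, assuming you can identify the Rhino JVM's process ID (the ps filter and <pid> placeholder are illustrative):

```
$ ps ax | grep -i rhino   # locate the Rhino JVM process ID
$ kill -QUIT <pid>        # SIGQUIT asks the JVM to write a thread dump to its console log
```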
The script itself has no output. It is used internally by Rhino (via the Watchdog) to produce a Java thread dump from the Rhino JVM in certain error scenarios (such as stuck event-processing threads). Below is a partial example of thread-dump output:

```
"StageWorker/TM/1" prio=1 tid=0x082bb5c0 nid=0x192 in Object.wait() [0x9aae9000..0x9aaea060]
    at java.lang.Object.wait(Native Method)
    - waiting on <0x9f4154d8> (a [Lcom.opencloud.ob.RhinoSDK.mM;)
    at java.lang.Object.wait(Object.java:474)
    at com.opencloud.ob.RhinoSDK.oS$a.run(13520:68)
    - locked <0x9f4154d8> (a [Lcom.opencloud.ob.RhinoSDK.mM;)
    at java.lang.Thread.run(Thread.java:595)

"Timer-2" prio=1 tid=0x9ac06988 nid=0x18e in Object.wait() [0x9ab6a000..0x9ab6afe0]
    at java.lang.Object.wait(Native Method)
    - waiting on <0x9f4bff28> (a java.util.TaskQueue)
    at java.util.TimerThread.mainLoop(Timer.java:509)
    - locked <0x9f4bff28> (a java.util.TaskQueue)
    at java.util.TimerThread.run(Timer.java:462)
```
dependency-graph
`dependency-graph` is a command-line utility to show the dependencies between SLEE components in a running SLEE. It can either list them to the console (useful with `grep`), or write a DOT file for use with Graphviz. To use it, call the `dependency-graph` script in the `rhino/client/bin` directory (`dependency-graph.bat` for Windows users).
Options:

Invoke with no arguments to view command-line options:

```
bin$ ./dependency-graph
Valid command line options are:
  -h <host>     - The hostname to connect to.
  -p <port>     - The port to connect to.
  -u <username> - The user to authenticate as.
  -w <password> - The password used for authentication.
  -D            - Display connection debugging messages.
  Exactly one of the following two options:
  -c            - Write the dependencies to the console.
  -o <output>   - Draws the graph to the given output file in "DOT" format
                  (see: http://en.wikipedia.org/wiki/DOT_language).
  Graph options only (when using -o option):
  -e            - Exclude events.
  -f            - Write full component IDs (including vendor and version).
  -g            - Group by deployable unit.
  -m            - Monochrome (turn colors off).
```

Exactly one of -c or -o should be specified.
Examples:

Below are some example sessions against a Rhino SDK with the SIP examples installed. They illustrate how the level of detail can be controlled using the command-line flags.

With -e, -f, -g flags

```
$ cd rhino/client/bin
bin$ ./dependency-graph -o sip-dependencies.dot -e -f -g
Connecting to localhost:1199
Fetching dependency info from SLEE...
Processing dependencies...
Writing dependency graph in DOT format to sip-dependencies.dot...
Finished generating file.
If you have graphviz installed, this command should generate a PNG image file:
dot -Tpng sip-dependencies.dot -o sip-dependencies.dot.png
bin$ dot -Tpng sip-dependencies.dot -o SipExamples-EFG.png
```

This excludes events (-e), writes full component IDs (-f) and groups components by deployable unit (-g).

Just -f and -g

This is the equivalent graph after dropping the -e flag, so events are included:

```
$ ./dependency-graph -o sip-dependencies.dot -f -g
...
$ dot -Tpng sip-dependencies.dot -o SipExamples-most-detail.png
```

Just -e

This is the equivalent graph with the least detail possible, using just the -e flag:

```
$ ./dependency-graph -o sip-dependencies.dot -e
...
$ dot -Tpng sip-dependencies.dot -o SipExamples-least-detail.png
```
Utilities
This section includes details and sample output for the following Rhino utilities:
| Script | What it does |
|---|---|
| init-management-db | cleans out the Rhino disk-based database |
| generate-client-configuration | generates configuration files for Rhino’s management clients |
| rhino-passwd | outputs a hash for password authentication |
| cascade-uninstall | uninstalls a deployable unit along with everything that depends on it |
init-management-db
The `init-management-db` script cleans out the Rhino disk-based database. The primary effect of this is the deletion of all SLEE state, including deployed components and activation states. For the SDK, this means deleting and regenerating the embedded database.
Below are examples of `init-management-db` output for the production and SDK versions of Rhino:
Production:

```
$ ./init-management-db.sh
Initializing database..
Connected to jdbc:postgresql://localhost:5432/template1 (PostgreSQL 8.4.9)
Connected to jdbc:postgresql://localhost:5432/rhino (PostgreSQL 8.4.9)
Database initialized.
```

SDK:

```
Initializing database..
Connected to jdbc:derby:rhino_sdk;create=true (Apache Derby 10.6.1.0 - (938214))
Database initialized.
```
generate-client-configuration
The `generate-client-configuration` script generates configuration files for Rhino’s management clients, based on the Rhino configuration specified as a command-line argument.
The purpose of this script is to allow regeneration of the client configuration if the Rhino configuration is ever updated.
Below are examples of `generate-client-configuration` output for the production and SDK versions of Rhino:
Production:

```
$ ./generate-client-configuration ../../node-101/config/config_variables
Using configuration in ../../node-101/config/config_variables
```

SDK:

```
$ ./generate-client-configuration ../../config/config_variables
Using configuration in ../../config/config_variables
```
The list of files regenerated by this script can be found in the `etc/templates` directory:

```
$ ls etc/templates/
client.properties    jetty-file-auth.xml    jetty-jmx-auth.xml
web-console.passwd   web-console.properties
```
rhino-passwd
The `rhino-passwd` script outputs a password hash for the given password, for use with management-authentication methods such as the file login module.
Below is an example of `rhino-passwd` output:

```
This utility reads passwords from the console and displays the hashed
password that must be put in the rhino.passwd file.
Enter a blank line to exit.
Password:
acbd18db4cc2f85cedef654fccc4a4d8
```
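The hash in the example above is the MD5 hex digest of the password foo. Assuming the file login module stores plain MD5 hex digests (an assumption worth verifying for your Rhino version), a hash can be cross-checked with standard tools:

```
$ echo -n foo | md5sum
acbd18db4cc2f85cedef654fccc4a4d8  -
```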
cascade-uninstall
The `cascade-uninstall` script uninstalls a deployable unit from Rhino, and everything that depends on it, including:
- other deployable units that directly or indirectly depend on components contained in the deployable unit being uninstalled
- services defined in deployable units that are being uninstalled
- profile tables created from profile specifications defined in deployable units that are being uninstalled
- resource adaptor entities created from resource adaptors defined in deployable units that are being uninstalled.
The script deactivates all services and resource adaptor entities that are to be removed and are in the ACTIVE state, and waits for them to reach the INACTIVE state before proceeding to uninstall the deployable unit.
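Because the script handles deactivation and waiting itself, it can also be run unattended. A minimal sketch using the -y option described below (which skips the confirmation prompt) and the deployable unit URL from the examples below:

```
$ ./cascade-uninstall -y -d file:rhino/units/insis-base-du_2.0.jar
```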
Below are command-line options and sample uses of `cascade-uninstall`:
Options:

```
$ ./cascade-uninstall
Valid command line options are:
  -h <host>      - The hostname to connect to.
  -p <port>      - The port to connect to.
  -u <username>  - The user to authenticate as.
  -w <password>  - The password used for authentication.
  -D             - Display connection debugging messages.
  -n <namespace> - Select namespace to perform the actions on.
  -l             - List installed deployable units.
  -d <url>       - Uninstall specified deployable unit.
  -c <id>        - Uninstall specified copied or linked component.
  -s <id>        - Remove the shadow from the specified shadowed component.
  -y             - Assume a yes response to the uninstall confirmation.
                   Information about what will be removed from the SLEE prior
                   to removal is not displayed and components are removed
                   without user confirmation.
```
Examples:

To list all deployable units installed in Rhino:

```
$ ./cascade-uninstall -l
Connecting to localhost:1199
The following deployable units are installed:
  file:/home/rhino/rhino/lib/javax-slee-standard-types.jar
  file:rhino/units/in-common-du_2.0.jar
  file:rhino/units/incc-callbarring-service-du.jar
  file:rhino/units/incc-callduration-service-du.jar
  file:rhino/units/incc-callforwarding-service-du.jar
  file:rhino/units/incc-ratype-du_3.0.jar
  file:rhino/units/incc-vpn-service-du.jar
  file:rhino/units/insis-base-du_2.0.jar
  file:rhino/units/insis-caprovider-ratype-du_1.0.jar
  file:rhino/units/insis-scs-ratype-du_2.0.jar
  file:rhino/units/insis-scs-test-service-du.jar
  file:rhino/units/insis-swcap-du_2.0.jar
```

To uninstall the deployable unit with the URL file:rhino/units/insis-base-du_2.0.jar, and everything that depends on it:

```
$ ./cascade-uninstall -d file:rhino/units/insis-base-du_2.0.jar
Connecting to localhost:1199
Cascade removal of deployable unit file:rhino/units/insis-base-du_2.0.jar
requires the following operations to be performed:
  Deployable unit file:rhino/units/insis-scs-test-service-du.jar will be uninstalled
    SBB with SbbID[name=IN-SIS Test Service Composition Selector SBB,vendor=OpenCloud,version=0.2] will be uninstalled
    Service with ServiceID[name=IN-SIS Test Service Composition Selector Service,vendor=OpenCloud,version=0.2] will be uninstalled
      This service will first be deactivated
  Deployable unit file:rhino/units/insis-swcap-du_2.0.jar will be uninstalled
    Resource adaptor with ResourceAdaptorID[name=IN-SIS Signalware CAP,vendor=OpenCloud,version=2.0] will be uninstalled
      Resource adaptor entity insis-cap1b will be removed
        This resource adaptor entity will first be deactivated
      Resource adaptor entity insis-cap1a will be removed
        This resource adaptor entity will first be deactivated
  Deployable unit file:rhino/units/insis-scs-ratype-du_2.0.jar will be uninstalled
    Resource adaptor type with ResourceAdaptorTypeID[name=IN-SIS Service Composition Selection,vendor=OpenCloud,version=2.0] will be uninstalled
    Event type with EventTypeID[name=com.opencloud.slee.resources.sis.script.in.scs.INSCSEvent,vendor=OpenCloud,version=2.0] will be uninstalled
  Deployable unit file:rhino/units/insis-base-du_2.0.jar will be uninstalled
    Profile specification with ProfileSpecificationID[name=IN-SIS Initial Trigger Rule Profile,vendor=OpenCloud,version=1.0] will be uninstalled
      Profile table initial-trigger-selection-rules will be removed
    Profile specification with ProfileSpecificationID[name=IN-SIS Service Composition Profile,vendor=OpenCloud,version=1.0] will be uninstalled
      Profile table service-compositions will be removed
    Profile specification with ProfileSpecificationID[name=IN-SIS Macro Profile,vendor=OpenCloud,version=1.0] will be uninstalled
      Profile table initial-trigger-selection-macros will be removed
    Profile specification with ProfileSpecificationID[name=IN-SIS Configuration,vendor=OpenCloud,version=2.0] will be uninstalled
      Profile table insis-configs will be removed
    Profile specification with ProfileSpecificationID[name=IN-SIS Service Configuration,vendor=OpenCloud,version=2.0] will be uninstalled
      Profile table service-configs will be removed
    Profile specification with ProfileSpecificationID[name=IN-SIS Address Subscription,vendor=OpenCloud,version=2.0] will be uninstalled
      Profile table address-subscriptions will be removed
    Profile specification with ProfileSpecificationID[name=IN-SIS Trigger Address Tracing,vendor=OpenCloud,version=2.0] will be uninstalled
      Profile table trigger-address-tracing will be removed
    Profile specification with ProfileSpecificationID[name=IN-SIS Service Key Subscription,vendor=OpenCloud,version=2.0] will be uninstalled
      Profile table service-key-subscriptions will be removed
    Library with LibraryID[name=IN-SIS Scripting Provider,vendor=OpenCloud,version=1.0] will be uninstalled
Continue? (y/n): y
Deactivating service ServiceID[name=IN-SIS Test Service Composition Selector Service,vendor=OpenCloud,version=0.2]
All necessary services are inactive
Deactivating resource adaptor entitiy insis-cap1b
Deactivating resource adaptor entitiy insis-cap1a
All necessary resource adaptor entities are inactive
Uninstalling deployable unit file:rhino/units/insis-scs-test-service-du.jar
Removing resource adaptor entity insis-cap1b
Removing resource adaptor entity insis-cap1a
Uninstalling deployable unit file:rhino/units/insis-swcap-du_2.0.jar
Uninstalling deployable unit file:rhino/units/insis-scs-ratype-du_2.0.jar
Removing profile table initial-trigger-selection-rules
Removing profile table service-compositions
Removing profile table initial-trigger-selection-macros
Removing profile table insis-configs
Removing profile table service-configs
Removing profile table address-subscriptions
Removing profile table trigger-address-tracing
Removing profile table service-key-subscriptions
Uninstalling deployable unit file:rhino/units/insis-base-du_2.0.jar
```
Export-Related Tools
Rhino includes the following export- and import-related scripts:

| Script | What it does |
|---|---|
| rhino-export | export the state of the SLEE |
| rhino-import | import SLEE state saved using rhino-export |
| rhino-snapshot | save a profile snapshot |
| snapshot-decode | inspect a profile snapshot |
| snapshot-to-export | prepare a snapshot for importing |

For details on using these scripts, see the Backup and Restore section.
Memory Considerations
The Rhino Management and Monitoring Tools default to memory settings that will allow operation on most systems without error.
It may occasionally be necessary to tune the memory requirements for each tool. In particular, exporting and importing very large profile tables or deployable units may require increasing the heap limit above the default values for the `rhino-console`, `rhino-export` or `rhino-import` tools.
Memory settings can be configured for each tool separately by editing the tool script in `$RHINO_HOME/client/bin` and adding a `GC_OPTIONS=` line. For example:

```
GC_OPTIONS="-Xmx256m"
```
The memory settings can be set globally for all tools by editing the existing `GC_OPTIONS` line in `$RHINO_HOME/client/etc/rhino-client-common`.
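For example, to give `rhino-export` a larger heap for a very large profile table export, a `GC_OPTIONS` line such as the following could be added near the top of `$RHINO_HOME/client/bin/rhino-export` (the 1024 MB value is illustrative; size it to your data):

```
# Illustrative per-tool override; adjust -Xmx to suit the export size.
GC_OPTIONS="-Xmx1024m"
```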
OpenCloud recommends not decreasing the default values unless you need to run the tools in a memory-constrained environment.