This document details basic procedures for system administrators managing, maintaining, configuring and deploying Rhino 3.2 using the command-line console. To manage using a web interface, see the Rhino Element Manager documentation.
Topics
Administrative tasks for day-to-day management of the Rhino SLEE, its components and entities deployed in it, including: operational state, deployable units, services, resource adaptor entities, profile tables and profiles, alarms, usage, user transactions, and component activation priorities. |
|
Procedures for configuring Rhino upon installation, and as needed (for example to tune performance), including: logging, staging, object pools, licenses, rate limiting, security and external databases. |
|
Finding Housekeeping MBeans, and finding, inspecting and removing one or all activities or SBB entities. |
|
Backing up and restoring the database, and exporting and importing SLEE deployment state. |
|
Managing the SNMP subsystem in Rhino, including: configuring the agent, managing MIB files, viewing OID mappings. |
|
Configuring supplementary replication support services such as the session ownership store. |
|
Tools included with Rhino for system administrators to manage Rhino. |
Other documentation for the Rhino TAS can be found on the Rhino TAS product page.
Notices
Copyright © 2024 Microsoft. All rights reserved
This manual is issued on a controlled basis to a specific person on the understanding that no part of the Metaswitch Networks product code or documentation (including this manual) will be copied or distributed without prior agreement in writing from Metaswitch Networks.
Metaswitch Networks reserves the right to, without notice, modify or revise all or part of this document and/or change product features or specifications and shall not be responsible for any loss, cost, or damage, including consequential damage, caused by reliance on these materials.
Metaswitch and the Metaswitch logo are trademarks of Metaswitch Networks. Other brands and products referenced herein are the trademarks or registered trademarks of their respective holders.
SLEE Management
This section covers general administrative tasks for day-to-day management of the Rhino SLEE, its components and entities deployed in it.
JMX MBeans
Rhino SLEE uses Java Management Extensions (JMX) MBeans for management functionality. This includes both functions defined in the JAIN SLEE 1.1 specification and Rhino extensions that provide additional functionality beyond the specification.
Rhino’s command-line console is a front end for these MBeans, providing access to functions for managing the following (a minimal JMX connection sketch follows the list below):
- Namespaces
- Operational State
- Deployable Units
- Services
- Resource Adaptor Entities
- Profile Tables and Profiles
- Alarms
- Usage
- User Transactions
- Auditing Management Operations
- Linked and Shadowed Components
- Component Activation Priorities
- Declarative Configuration
Management may also be performed via the Rhino Element Manager web interface.
See also Management Tools, and the Rhino Management Extension APIs . |
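The MBean operations documented throughout this manual can be invoked from any JMX client as well as from rhino-console. The sketch below is a minimal, illustrative Java client that connects to a Rhino node and locates the SLEE-defined SleeManagementMBean. The JMX service URL and credentials are placeholders: the real values depend on how your installation's JMX connector and security are configured, and Rhino normally requires SSL and an authenticated management user.
  import java.util.HashMap;
  import java.util.Map;
  import javax.management.MBeanServerConnection;
  import javax.management.ObjectName;
  import javax.management.remote.JMXConnector;
  import javax.management.remote.JMXConnectorFactory;
  import javax.management.remote.JMXServiceURL;

  public class RhinoJmxClient {
      public static void main(String[] args) throws Exception {
          // Placeholder address: substitute the JMX connector URL of your Rhino node.
          JMXServiceURL url = new JMXServiceURL(
              "service:jmx:rmi:///jndi/rmi://rhino-host:1199/rhino");

          // Placeholder credentials: Rhino normally requires an authenticated subject
          // (and usually SSL), configured per installation.
          Map<String, Object> env = new HashMap<>();
          env.put(JMXConnector.CREDENTIALS, new String[] { "admin", "password" });

          try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
              MBeanServerConnection connection = connector.getMBeanServerConnection();

              // The JAIN SLEE 1.1 specification defines this well-known object name.
              ObjectName sleeManagement =
                  new ObjectName("javax.slee.management:name=SleeManagement");
              System.out.println("SleeManagementMBean registered: "
                  + connection.isRegistered(sleeManagement));
          }
      }
  }
The later MBean sketches in this section are written as helper methods that take the MBeanServerConnection obtained here, and follow the same placeholder conventions.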
From Rhino 3.0.0, state management commands and JMX methods for setting SLEE, Resource Adaptor Entity, or Service state that do not take node arguments accept a state change if at least one node in the cluster will change state. These commands delete any per-node state and set the default desired state to the target state. This behaviour is similar to enabling symmetric activation state mode for the component being updated in versions of Rhino prior to 3.0.0. Cluster nodes that are already in the required desired state are ignored; those that need to change will transition. This behaviour is like the "-ifneeded" flag, except that the operation fails if no nodes are in the prerequisite state to transition to the target state. The with-node-arg variants create or update (as necessary) per-node state for the requested nodes; all specified nodes must be in the required prerequisite state to transition to the target state. This affects the start/stop/activate/deactivate/wait-til rhino-console commands and Ant tasks. |
Pool clustering mode was introduced in Rhino 3.2. When using this mode of operation, deployment, management, and configuration state is not replicated between individual cluster nodes as it is in the pre-existing Savanna clustering mode. To avoid confusion, almost all with-node-arg variants of management commands and JMX methods restrict the node (or node array) argument such that only the node the management operation is being performed on may be specified. This avoids situations such as executing an operation on node 101 to set the per-node SLEE state for node 102, only for the operation to have no effect because node 101 does not replicate the change to other pool cluster nodes. |
Namespaces
As well as an overview of Rhino namespaces, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples, and links to related javadocs:
Procedure | rhino-console command | MBean → Operation |
---|---|---|
createnamespace |
Namespace Management → |
|
removenamespace |
Namespace Management → |
|
listnamespaces |
Namespace Management → |
|
Setting the active namespace |
-n <namespace> (command-line option) setactivenamespace (interactive command) |
Namespace Management → |
Getting the active namespace |
|
Namespace Management → |
About Namespaces
A namespace is an independent deployment environment that is isolated from other namespaces.
A namespace has:
-
its own SLEE operational state
-
its own set of deployable units
-
its own set of instantiated profile tables, profiles, and resource adaptor entities
-
its own set of component configuration state
-
its own set of desired and actual states for services and resource adaptor entities.
All of these things can be managed within an individual namespace without affecting the state of any other namespace.
A namespace can be likened to a SLEE in itself, where Rhino with multiple namespaces is a container of SLEEs. |
A Rhino cluster always has a default namespace that cannot be deleted. Any number of user-defined namespaces may also be created, managed, and subsequently deleted when no longer needed.
User-defined namespaces are not supported when using Rhino configured in pool clustering mode. Only the default namespace may be used in this configuration. |
Management clients by default interact with the default namespace unless they explicitly request to interact with a different namespace.
Creating a Namespace
User-defined namespaces cannot be created when using Rhino configured in pool clustering mode. |
To create a new namespace, use the following rhino-console command or related MBean operation.
Console command: createnamespace
Command |
createnamespace <name> [-replication-resource <resource-name>] [-with-session-ownership-facility] Description Create a new deployment namespace. If the optional replication resource is not specified then the resource used for this namespace is the same as that used in the default namespace. |
---|---|
Example |
$ ./rhino-console createnamespace testnamespace Namespace testnamespace created |
MBean operation: createNamespace
MBean |
|
---|---|
Rhino extension |
public void createNamespace(String name, NamespaceOptions options) throws NullPointerException, InvalidArgumentException, NamespaceAlreadyExistsException, ManagementException; |
Removing a Namespace
To remove an existing user-defined namespace, use the following rhino-console command or related MBean operation.
The default namespace cannot be removed. |
All deployable units (other than the deployable unit containing the standard JAIN SLEE-defined types) must be uninstalled and all profile tables removed from a namespace before that namespace can be removed. |
Console command: removenamespace
Command |
removenamespace <name> Description Remove an existing deployment namespace |
---|---|
Example |
$ ./rhino-console removenamespace testnamespace Namespace testnamespace removed |
MBean operation: removeNamespace
MBean |
|
---|---|
Rhino extension |
public void removeNamespace(String name) throws NullPointerException, UnrecognizedNamespaceException, InvalidStateException, ManagementException; |
Listing Namespaces
To list all user-defined namespaces in a SLEE, use the following rhino-console command or related MBean operation.
Console command: listnamespaces
Command |
listnamespaces [-v] Description List all deployment namespaces. If the -v (verbose) option is given then the options that each namespace was created with are also given |
---|---|
Example |
$ ./rhino-console listnamespaces testnamespace |
MBean operation: getNamespaces
MBean |
|
---|---|
Rhino extension |
public String[] getNamespaces() throws ManagementException; This operation returns the names of the user-defined namespaces that have been created. |
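As an illustration, the sketch below (a helper for the JMX client shown under JMX MBeans) invokes getNamespaces(). The object name used for the namespace management MBean is a placeholder rather than the real name; the actual name is documented in the Rhino Management Extension APIs, and the result may alternatively be exposed as a read-only Namespaces attribute depending on how the MBean is registered.
  static String[] listNamespaces(MBeanServerConnection connection) throws Exception {
      // Placeholder object name for Rhino's namespace management MBean.
      ObjectName namespaceManagement =
          new ObjectName("com.example.rhino:type=NamespaceManagement");
      return (String[]) connection.invoke(
          namespaceManagement, "getNamespaces", new Object[0], new String[0]);
  }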
Setting the Active Namespace
Each individual authenticated client connection to Rhino is associated with a namespace.
This setting, known as the active namespace, controls which namespace is affected by management commands such as those that install deployable units or change operational states.
To change the active namespace for a client connection, use the following rhino-console command or related MBean operation.
Console command: setactivenamespace
Command and command-line option |
Interactive mode
In interactive mode, use the setactivenamespace <-default|name> command. Description Set the active namespace
Non-interactive mode
In non-interactive mode, use the -n <namespace> command-line option when invoking rhino-console. |
---|---|
Example |
Interactive mode
$ ./rhino-console Interactive Rhino Management Shell Rhino management console, enter 'help' for a list of commands [Rhino@localhost (#0)] setactivenamespace testnamespace The active namespace is now testnamespace [Rhino@localhost [testnamespace] (#1)] setactivenamespace -default The active namespace is now the default namespace [Rhino@localhost (#2)]
Non-interactive mode
$ ./rhino-console -n testnamespace start The active namespace is now testnamespace Starting SLEE on node(s) [101] SLEE transitioned to the Starting state on node 101 |
MBean operation: setActiveNamespace
MBean |
|
---|---|
Rhino extension |
public void setActiveNamespace(String name) throws NoAuthenticatedSubjectException, UnrecognizedNamespaceException, ManagementException; This operation sets the active namespace for the client connection. A |
Getting the Active Namespace
To get the active namespace for a client connection, use the following rhino-console information and related MBean operation.
Console:
Command prompt information |
The currently active namespace is reported in the command prompt within square brackets. If no namespace is reported, then the default namespace is active. |
---|---|
Example |
$ ./rhino-console Interactive Rhino Management Shell Rhino management console, enter 'help' for a list of commands [Rhino@localhost (#0)] setactivenamespace testnamespace The active namespace is now testnamespace [Rhino@localhost [testnamespace] (#1)] setactivenamespace -default The active namespace is now the default namespace [Rhino@localhost (#2)] |
MBean operation: getActiveNamespace
MBean |
|
---|---|
Rhino extension |
public String getActiveNamespace() throws NoAuthenticatedSubjectException, ManagementException; This operation returns the name of the namespace currently active for the client connection. |
Operational State
As well as an overview of SLEE operational states, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples and links to related javadocs:
Procedure | rhino-console command | MBean → Operation |
---|---|---|
setsleedesiredstate |
SLEE Management → |
|
Retrieving the basic operational state of nodes |
getsleeactualstate, getsleedesiredstate |
SLEE Management → |
Retrieving detailed information for every node in the cluster |
getClusterState |
Rhino Housekeeping → |
Gracefully shutting nodes down and, optionally, rebooting them |
shutdown |
SLEE Management → |
Forcefully terminating nodes |
kill |
SLEE Management → |
Listing nodes with per-node desired state |
getnodeswithpernodedesiredstate |
Node Housekeeping → |
Copying per-node desired state to another node |
copypernodedesiredstate |
Node Housekeeping → |
Removing per-node desired state |
removepernodedesiredstate |
Node Housekeeping → |
About SLEE Operational States
The SLEE specification defines the operational lifecycle of a SLEE; its states are defined and summarised below.
SLEE lifecycle states
The SLEE lifecycle states are:
State | Definition |
---|---|
STOPPED |
The SLEE has been configured and initialised, and is ready to be started. Active resource adaptor entities have been loaded and initialised, and SBBs corresponding to active services have been loaded and are ready to be instantiated. The entire event-driven subsystem, however, is idle: resource adaptor entities and the SLEE are not actively producing events, the event router is not processing work, and the SLEE is not creating SBB entities. |
STARTING |
Resource adaptor entities in the SLEE that have been recorded in the management database as being in the ACTIVE state are started. The SLEE still does not create SBB entities. The node automatically transitions from this state to the RUNNING state when all startup tasks are complete, or to the STOPPING state if a startup task fails. |
RUNNING |
Activated resource adaptor entities in the SLEE can fire events, and the SLEE creates SBB entities and delivers events to them as needed. |
STOPPING |
Identical to the RUNNING state, except resource adaptor entities do not create (and the SLEE does not accept) new activity objects. Existing activity objects can end (according to the resource adaptor specification). The node automatically transitions out of this state, returning to the STOPPED state, when all SLEE activities have ended. The node can transition to this state directly from the STARTING state, effective immediately, if it has no activity objects. |
Independent SLEE states
Each namespace in each event-router node in a Rhino cluster maintains its own SLEE-lifecycle state machine, independently of other namespaces on the same or other nodes in the cluster. For example:
-
the default namespace on one node in a cluster might be in the RUNNING state
-
while a user-defined namespace on the same node is in the STOPPED state
-
while the default namespace on another node is in the STOPPING state
-
and the user-defined namespace on that node is in the RUNNING state.
The operational state of each namespace on each cluster node persists to the disk-based database.
Bootup SLEE state
After completing bootup and initialisation, a namespace on a node will enter the STOPPED state if:
-
the database has no persistent operational state information for that namespace on that node;
-
the namespace’s persistent operational state is STOPPED on that node; or
-
the node was started with the -x option (see Start Rhino in the Rhino Getting Started Guide).
Otherwise, the namespace will return to the same operational state that it was last in, as recorded in persistent storage.
Changing a namespace’s operational state
When using the Savanna clustering mode, you can change the operational state of any namespace on any node at any time, as long as at least one node in the cluster is available to perform the management operation (regardless of whether the node whose operational state is being changed is a current cluster member). For example, you might set the operational state of the default namespace on node 103 to RUNNING before node 103 is started — then, when node 103 is started, after it completes initialising, the default namespace will enter the RUNNING state.
Changing a quorum node’s operational state
You can also change the operational state of a node that is currently a member of the cluster as a quorum node, but quorum nodes make no use of operational state information stored in the database and will not respond to operational state changes. (A node only uses operational state information if it starts as a regular event-router node.) |
When using the pool clustering mode, only the default namespace is supported. You can change the operational state of this namespace at any time, but only on the pool cluster node that the management operation is invoked on. To change the operational state of any other node, a management client needs to connect directly to that node.
Starting the SLEE
To start a SLEE on one or more nodes, use the following rhino-console command or related MBean operations.
When using the pool clustering mode, it is only possible to change the operational state of the SLEE on the node the management operation is invoked on. To change the operational state of another node, a management client needs to connect directly to that node. |
If executed without a list of nodes, all per-node desired state for the SLEE is removed and the default desired state of the SLEE is set to running (if it is not already). |
Console command: start
Command |
start [-nodes node1,node2,...] [-ifneeded] Description Start the SLEE (on the specified nodes) |
---|---|
Example |
To start nodes 101 and 102: $ ./rhino-console start -nodes 101,102 Starting SLEE on node(s) [101,102] SLEE transitioned to the Starting state on node 101 SLEE transitioned to the Starting state on node 102 |
MBean operation: setPerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on specific nodes
public void setPerNodeDesiredState(int[] nodeIDs, SleeDesiredState desiredState) throws NullPointerException, InvalidArgumentException, SLEEManagementException; Rhino provides an extension to set the desired state for a SLEE on a set of nodes. |
MBean operation: setDefaultDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on nodes that do not have per-node SLEE state configured
public void setDefaultDesiredState(SleeDesiredState desiredState) throws NullPointerException, InvalidArgumentException, SLEEManagementException; Rhino provides an extension to set the desired state for a SLEE on nodes that do not have a per-node desired state configured. |
MBean operation: removePerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on nodes that have per-node SLEE state configured that is different from the default state
public void removePerNodeDesiredState(int[] nodeIDs) throws NullPointerException, InvalidArgumentException, SLEEManagementException; Rhino provides an extension to clear the desired state for a SLEE on a set of nodes. Nodes that do not have a per-node desired state configured use the default desired state. |
MBean operation: start
MBean |
|
---|---|
SLEE-defined |
Start all nodes
public void start() throws InvalidStateException, ManagementException; Rhino’s implementation of the SLEE-defined |
Rhino extension |
Start specific nodes
public void start(int[] nodeIDs) throws NullPointerException, InvalidArgumentException, InvalidStateException, ManagementException; Rhino provides an extension that adds an argument which lets you control which nodes to start (by specifying node IDs). For this to work, the specified nodes must be in the STOPPED state. |
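As an illustration, the sketch below (a helper for the JMX client shown under JMX MBeans) starts the SLEE on specific nodes using the Rhino start(int[]) extension described above. The node IDs are illustrative, and "[I" is the standard JMX type descriptor for an int[] parameter.
  static void startSleeOnNodes(MBeanServerConnection connection, int... nodeIDs) throws Exception {
      ObjectName sleeManagement =
          new ObjectName("javax.slee.management:name=SleeManagement");
      // Rhino extension: start only the given nodes (they must be in the STOPPED state).
      // The SLEE-defined variant takes no arguments:
      //   connection.invoke(sleeManagement, "start", new Object[0], new String[0]);
      connection.invoke(sleeManagement, "start",
          new Object[] { nodeIDs }, new String[] { "[I" });
  }
For example, startSleeOnNodes(connection, 101, 102) corresponds to the rhino-console example above.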
Stopping the SLEE
To stop SLEE event-routing functions on one or more nodes, use the following rhino-console command or related MBean operations.
When using the pool clustering mode, it is only possible to change the operational state of the SLEE on the node the management operation is invoked on. To change the operational state of another node, a management client needs to connect directly to that node. |
If executed without a list of nodes, all per-node desired state for the SLEE is removed and the default desired state of the SLEE is set to stopped (if it is not already). |
Console command: stop
Command |
stop [-nodes node1,node2,...] [-reassignto node3,node4,...] [-ifneeded] Description Stop the SLEE (on the specified nodes (reassigning replicated activities to the specified nodes)) |
||
---|---|---|---|
Examples |
To stop nodes 101 and 102: $ ./rhino-console stop -nodes 101,102 Stopping SLEE on node(s) [101,102] SLEE transitioned to the Stopping state on node 101 SLEE transitioned to the Stopping state on node 102 To stop only node 101 and reassign replicated activities to node 102: $ ./rhino-console stop -nodes 101 -reassignto 102 Stopping SLEE on node(s) [101] SLEE transitioned to the Stopping state on node 101 Replicated activities reassigned to node(s) [102] To stop node 101 and distribute replicated activities of each replicating resource adaptor entity to all other eligible nodes (those on which the resource adaptor entity is in the ACTIVE state and the SLEE is in the RUNNING state), specify an empty (zero-length) argument for the -reassignto option: $ ./rhino-console stop -nodes 101 -reassignto "" Stopping SLEE on node(s) [101] SLEE transitioned to the Stopping state on node 101 Replicated activities reassigned to node(s) [102,103]
|
MBean operation: setPerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on specific nodes
public void setPerNodeDesiredState(int[] nodeIDs, SleeDesiredState desiredState) throws NullPointerException, InvalidArgumentException, SLEEManagementException; Rhino provides an extension to set the desired state for a SLEE on a set of nodes. |
MBean operation: setDefaultDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on nodes that do not have per-node SLEE state configured
public void setDefaultDesiredState(SleeDesiredState desiredState) throws NullPointerException, InvalidArgumentException, SLEEManagementException; Rhino provides an extension to set the desired state for a SLEE on nodes that do not have a per-node desired state configured. |
MBean operation: removePerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on nodes that have per-node state configured that is different from the default state
public void removePerNodeDesiredState(int[] nodeIDs) throws NullPointerException, InvalidArgumentException, SLEEManagementException; Rhino provides an extension to clear the desired state for a SLEE on a set of nodes. Nodes that do not have a per-node desired state configured use the default desired state. |
MBean operation: stop
MBean |
|
---|---|
SLEE-defined |
Stop all nodes
public void stop() throws InvalidStateException, ManagementException; Rhino’s implementation of the SLEE-defined |
Rhino extensions |
Stop specific nodes
public void stop(int[] nodeIDs) throws NullPointerException, InvalidArgumentException, InvalidStateException, ManagementException; Rhino provides an extension that adds an argument which lets you control which nodes to stop (by specifying node IDs). For this to work, specified nodes must begin in the RUNNING state.
Reassign activities to other nodes
public void stop(int[] stopNodeIDs, int[] reassignActivitiesToNodeIDs) throws NullPointerException, InvalidArgumentException, InvalidStateException, ManagementException; Rhino also provides an extension that adds another argument, which lets you reassign ownership of replicated activities (from replicating resource adaptor entities) from the stopping nodes, distributing the activities of each resource adaptor entity equally among other event-router nodes where the resource adaptor entity is eligible to adopt them. With a smaller set of activities, the resource adaptor entities on the stopping nodes can more quickly return to the INACTIVE state (which is required for the SLEE to transition from the STOPPING to the STOPPED state). This only works for resource adaptor entities that are replicating activity state (see the description of the "Rhino-defined configuration property" on the MBean tab on Creating a Resource Adaptor Entity). See also Reassigning a Resource Adaptor Entity’s Activities to Other Nodes, in particular the Requirements tab. |
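As an illustration, the sketch below (a helper for the JMX client shown under JMX MBeans) stops the SLEE on node 101 and reassigns its replicated activities to node 102 using the extension described above. Both parameters are int arrays, so both JMX type descriptors are "[I"; the node IDs are illustrative.
  static void stopAndReassign(MBeanServerConnection connection) throws Exception {
      ObjectName sleeManagement =
          new ObjectName("javax.slee.management:name=SleeManagement");
      // Stop node 101 and reassign its replicated activities to node 102.
      connection.invoke(sleeManagement, "stop",
          new Object[] { new int[] { 101 }, new int[] { 102 } },
          new String[] { "[I", "[I" });
  }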
Basic Operational State of a Node
Retrieving actual state
To retrieve the actual operational state of a node, use the following rhino-console command or related MBean operation. For an explanation of the terms "actual state" and "desired state" see Concepts and Terminology.
Console command: getsleeactualstate
Command |
getsleeactualstate <-all|-nodes node1,node2,...> Description Get the actual SLEE state for the specified nodes. If -all is specified, query the state of all current event router cluster members. |
---|---|
Output |
The |
Examples |
To display the actual state of only node 101: $ ./rhino-console getsleeactualstate -nodes 101 Node 101: Stopped To display the actual state of every event-router node: $ ./rhino-console getsleeactualstate -all Getting actual SLEE state for node(s) [101,102] Node 101: Stopped Node 102: Running |
MBean operation: getActualState
MBean |
|
---|---|
Rhino extension |
Return actual state of a set of nodes
public SleeActualState getActualState(int[] nodeIDs) throws ManagementException; |
Retrieving desired state
To retrieve the desired operational state of a node, use the following rhino-console command or related MBean operation.
Console command: getsleedesiredstate
Command |
getsleedesiredstate <-default|-all|-nodes node1,node2,...> Description Get the default or per-node desired SLEE state. If -all is specified, query the state of all current event router nodes as well as all nodes with saved per-node state. |
---|---|
Output |
The |
Examples |
To display the desired state of only node 101: $ ./rhino-console getsleedesiredstate -nodes 101 Node 101: Stopped To display the desired state of every event-router node and configured node: $ ./rhino-console getsleedesiredstate -all Node 101: Stopped Node 102: Running (default) Node 103: Running To display the default desired state that unconfigured event router nodes will inherit: $ ./rhino-console getsleedesiredstate -default Getting default SLEE state Default SLEE state is: running |
MBean operation: getPerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Return desired state of a set of nodes
public SleeActualState getPerNodeDesiredState(int[] nodeIDs) throws ManagementException; |
MBean operation: getDefaultDesiredState
MBean |
|
---|---|
Rhino extension |
Return the default desired state used by nodes that do not have a configured per-node state
public SleeActualState getDefaultDesiredState() throws ManagementException; |
Retrieving SLEE-defined state
To retrieve the basic operational state of a node in a form compatible with the JAIN SLEE specification, use the following rhino-console command or related MBean operation.
This command has been retrofitted to support reporting the current SLEE state of all pool cluster members when using the pool clustering mode. |
Console command: state
Command |
state [-nodes node1,node2,...] Description Get the state of the SLEE (on the specified nodes) |
---|---|
Output |
The |
Examples |
To display the state of only node 101: $ ./rhino-console state -nodes 101 Node 101 is Stopped To display the state of every event-router node: $ ./rhino-console state Node 101 is Stopped Node 102 is Running |
MBean operation: getState
MBean |
|||
---|---|---|---|
SLEE-defined |
Return state of current node
public SleeState getState() throws ManagementException; Rhino’s implementation of the SLEE-defined
|
||
Rhino extension |
Return state of specific nodes
public SleeState[] getState(int[] nodeIDs) throws NullPointerException, InvalidArgumentException, ManagementException; Rhino provides an extension that adds an argument which lets you control which nodes to examine (by specifying node IDs). |
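As an illustration, the sketch below (a helper for the JMX client shown under JMX MBeans) retrieves the SLEE-defined state of specific nodes using the getState(int[]) extension. The returned values are javax.slee.management.SleeState instances, so the JAIN SLEE and Rhino management client libraries need to be on the client classpath for the result to deserialise; node IDs are illustrative.
  static void printSleeState(MBeanServerConnection connection, int... nodeIDs) throws Exception {
      ObjectName sleeManagement =
          new ObjectName("javax.slee.management:name=SleeManagement");
      Object[] states = (Object[]) connection.invoke(sleeManagement, "getState",
          new Object[] { nodeIDs }, new String[] { "[I" });
      for (int i = 0; i < nodeIDs.length; i++) {
          // SleeState.toString() reports the state name, e.g. "Running" or "Stopped".
          System.out.println("Node " + nodeIDs[i] + " is " + states[i]);
      }
  }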
Detailed Information for Every Node in the Cluster
To retrieve detailed information for every node in the cluster (including quorum nodes), use the following rhino-console commands or related MBean operations.
Console command: getclusterstate
This command has been retrofitted to report state information for all pool cluster members when using the pool clustering mode; however, in that mode it is recommended to use the getpoolstate command instead. |
Command |
getclusterstate Description Display the current state of the Rhino Cluster |
---|---|
Output |
For every node in the cluster, retrieves detailed information on the:
|
Example |
$ ./rhino-console getclusterstate node-id active-alarms host node-type slee-state start-time up-time -------- -------------- ----------------- ------------- ----------- ------------------ ----------------- 101 0 host1.domain.com event-router Stopped 20080327 12:16:26 0days,2h,40m,3s 102 0 host2.domain.com event-router Running 20080327 12:16:30 0days,2h,39m,59s 103 0 host3.domain.com quorum n/a 20080327 14:36:25 0days,0h,20m,4s |
MBean operation: getClusterState
MBean |
|
---|---|
Rhino extension |
public TabularData getClusterState() throws ManagementException; (Refer to the |
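As an illustration, the sketch below (a helper for the JMX client shown under JMX MBeans) iterates over the TabularData returned by getClusterState and prints two columns. The housekeeping MBean's object name is a placeholder (see Finding Housekeeping MBeans for how to locate the real one), and the item names used here are inferred from the column headers in the example output above, so they may differ from the actual names in the returned table. It uses javax.management.openmbean.CompositeData and TabularData.
  static void printClusterState(MBeanServerConnection connection) throws Exception {
      ObjectName housekeeping =
          new ObjectName("com.example.rhino:type=Housekeeping"); // placeholder object name
      TabularData clusterState = (TabularData) connection.invoke(
          housekeeping, "getClusterState", new Object[0], new String[0]);
      for (Object row : clusterState.values()) {
          CompositeData node = (CompositeData) row;
          // Item names assumed from the column headers in the example output.
          System.out.println(node.get("node-id") + ": " + node.get("slee-state"));
      }
  }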
Console command: getpoolstate
This command can only be used with Rhino nodes configured to use the pool clustering mode. |
Command |
getpoolstate [-verbose] Description Display the current state of the Rhino Pool |
---|---|
Output |
For every node in the pool, retrieves detailed information on the:
If the
The command will also summarise the number of rows output along with the number of known-live, assumed-live, and dead nodes (where the count of each type is greater than zero). |
Examples |
$ ./rhino-console getpoolstate node-id liveness-state node-state actual-slee-state boot-time up-time -------- --------------- ------------ ------------------ ------------------ ----------------- 101 alive OPERATIONAL running 20221117 09:19:36 0days,4h,31m,30s 102 alive OPERATIONAL stopped 20221117 12:30:59 0days,1h,20m,7s 2 rows 2 known-live nodes $ ./rhino-console getpoolstate -verbose node-id liveness-state node-state actual-slee-state boot-time up-time last-heartbeat-update-time last-metadata-update-time jmx-address interconnect-address rhino-version -------- --------------- ------------ ------------------ ------------------ ----------------- --------------------------- -------------------------- --------------------------------------------------- ------------------------------------------------------ ----------------------------------------------------------------------------------------- 101 alive OPERATIONAL running 20221117 09:19:36 0days,4h,34m,30s 20221117 13:54:06 20221117 12:00:02 172.17.0.1:1199,172.18.0.1:1199,172.21.71.230:1199 172.17.0.1:22020,172.18.0.1:22020,172.21.71.230:22020 Rhino (version='3.2', release='1-SNAPSHOT', build='202211161536', revision='a8987e9bc6') 102 alive OPERATIONAL stopped 20221117 12:30:59 0days,1h,23m,7s 20221117 13:54:05 20221117 13:51:05 172.17.0.1:1299,172.18.0.1:1299,172.21.71.230:1299 172.18.0.1:22021,172.17.0.1:22021,172.21.71.230:22021 Rhino (version='3.2', release='1-SNAPSHOT', build='202211161536', revision='a8987e9bc6') 2 rows 2 known-live nodes |
MBean operation: getNodeMetadata
This operation can only be used with Rhino nodes configured to use the pool clustering mode. |
MBean |
|
---|---|
Rhino extension |
public TabularData getNodeMetadata() throws IllegalStateException, ManagementException; (Refer to the |
See also Basic operational state of a node. |
Terminating Nodes
To terminate cluster nodes, you can:
What’s the difference between "stopping", "shutting down" and "killing" a node?
You can stop functions on nodes and nodes themselves, by:
See also Stop Rhino in the Getting Started Guide, which details using the |
Shut Down Gracefully
To gracefully shut down one or more nodes, use the following rhino-console commands or related MBean operations.
When using the pool clustering mode, only the node the management command is invoked on may be shut down or rebooted. Other cluster nodes can only be shut down or rebooted by connecting a management client directly to them. |
Console command: shutdown
Command |
shutdown [-nodes node1,node2,...] [-timeout timeout] [-restart] Description Gracefully shutdown and terminate the cluster (or the specified nodes). If the SLEE is running in any namespace on any target node, event routing functions are allowed to complete before termination without affecting existing desired state. The optional timeout is specified in seconds. Optionally restart the node(s) after shutdown |
---|---|
Examples |
To shut down the entire cluster (when using Savanna clustering): $ ./rhino-console shutdown Shutting down the SLEE Shutdown successful To shut down only node 102: $ ./rhino-console shutdown -nodes 102 Shutting down node(s) [102] Shutdown successful |
Since Rhino 3.0.0 the shutdown console command will shut down the specified nodes regardless of the desired SLEE state. If the SLEE is running in any namespace on any target node, event routing functions are allowed to complete before termination without affecting existing desired state. |
When using the pool clustering mode, the unadorned shutdown console command will only shut down the node that the console is connected to, as if the command had been given a -nodes <node-id-of-connected-node> argument, and not the entire cluster of pool nodes. It is not possible to shut down all pool cluster nodes in one operation using rhino-console . |
MBean operation: shutdownCluster
MBean |
|
---|---|
Rhino extension |
Shut down all nodes
public void shutdownCluster(boolean restart) throws InvalidStateException, ManagementException; When using the Savanna clustering mode, the shutdownCluster operation terminates every node in the cluster. When using the pool clustering mode, the shutdownCluster operation will only terminate the node the operation is executed on. If the restart flag is set, terminated nodes will be restarted to the currently configured desired state. |
Rhino extension |
Shut down all nodes with a timeout
public void shutdownCluster(boolean restart, long timeout) throws InvalidStateException, ManagementException; When using the Savanna clustering mode, the shutdownCluster operation terminates every node in the cluster. When using the pool clustering mode, the shutdownCluster operation will only terminate the node the operation is executed on. If the restart flag is set, terminated nodes will be restarted to the currently configured desired state. If the timeout argument is greater than zero, any nodes that still have live activities after the timeout expires will be shut down anyway. This may result in call failures. |
MBean operation: shutdownNodes
MBean |
|
---|---|
Rhino extension |
Shut down specific nodes
public void shutdownNodes(int[] nodeIDs, boolean restart) throws InvalidStateException, ManagementException; The shutdownNodes operation terminates the specified set of nodes. When using the pool clustering mode, the only node ID that may be legally specified is the node ID of the node the command is executed on. If the restart flag is set, terminated nodes will be restarted to the currently configured desired state. |
Rhino extension |
Shut down specific nodes with a timeout
public void shutdownNodes(int[] nodeIDs, boolean restart, long timeout) throws NullPointerException, InvalidArgumentException, InvalidStateException, ManagementException; The shutdownNodes operation terminates the specified set of nodes. When using the pool clustering mode, the only node ID that may be legally specified for this operation is the node ID of the node the command is executed on. If the restart flag is set, terminated nodes will be restarted to the currently configured desired state. If the timeout argument is greater than zero, any nodes that still have live activities after the timeout expires will be shut down anyway. This may result in call failures. |
MBean operation: reboot
MBean |
|
---|---|
Rhino extension |
Reboot all nodes
public void reboot(SleeState[] states) throws InvalidArgumentException, InvalidStateException, ManagementException; When using the Savanna clustering mode, this operation reboots every node in the cluster to the state specified. When using the pool clustering mode, this operation will only reboot the node the operation is executed on. |
Rhino extension |
Reboot specific nodes
public void reboot(int[] nodeIDs, SleeState[] states) throws NullPointerException, InvalidArgumentException, InvalidStateException, ManagementException; Extension to reboot that adds an argument which lets you control which nodes to reboot (by specifying node IDs). When using the pool clustering mode, the only node ID that may be legally specified is the node ID of the node the command is executed on. |
Event-router nodes can restart to either the RUNNING state or the STOPPED state. Quorum nodes must have a state provided but do not use this in operation. |
Forcefully Terminate
To forcefully terminate a cluster node that is in any state where it can respond to management operations, use the following rhino-console command or related MBean operation.
When using the pool clustering mode, this operation can only be used to terminate the node the management operation is invoked on. |
Console command: kill
Command |
kill -nodes node1,node2,... Description Forcefully terminate the specified nodes (forces them to become non-primary) |
---|---|
Example |
To forcefully terminate nodes 102 and 103: $ ./rhino-console kill -nodes 102,103 Terminating node(s) [102,103] Termination successful |
MBean operation: kill
MBean |
|
---|---|
Rhino operation |
public void kill(int[] nodeIDs) throws NullPointerException, InvalidArgumentException, ManagementException; Rhino’s |
Application state may be lost
Killing a node is not recommended — forcibly terminated nodes lose all non-replicated application state. |
Activation State
This section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples, and links to related javadocs:
Procedure | rhino-console command | MBean → Operation |
---|---|---|
Listing all default and per-node desired states |
listdesiredstates |
Node Housekeeping → |
Listing nodes with per-node desired state |
getnodeswithpernodedesiredstate |
Node Housekeeping → |
Copying per-node desired state to another node |
copypernodedesiredstate |
Node Housekeeping → |
Removing per-node desired state |
removepernodedesiredstate |
Node Housekeeping → |
It also describes the deprecated activation state modes that have been functionally replaced by default and per-node desired state.
About Activation State Modes
Rhino versions prior to 3.0.0 had two modes of operation for managing the activation state of services and resource adaptor entities: per-node and symmetric. From Rhino 3.0.0 these two modes were combined and superseded by a default desired state that can be overridden by per-node desired state: per-node desired state takes precedence where it exists, and the default desired state applies otherwise.
When using Rhino 3.2 or later configured in pool clustering mode, the symmetric activation state mode is not available at all. Pool cluster nodes only support the configuration of desired state, and any given pool cluster node can only have per-node desired state set for itself; for example, node 101 can have per-node desired state set for itself, but not for node 102. This is because management state such as desired state is not automatically replicated between pool cluster nodes, so setting per-node desired state for node 102 on node 101 would have no effect on node 102 and would therefore be misleading. This also means that default desired state set for a pool cluster node only applies to that node, and different pool cluster nodes can have different default desired state. Because of this, using per-node state in a pool clustering configuration is somewhat redundant, as default desired state could be used instead, but Rhino still allows both to be set.
The actual state for all functions is always maintained on a per-node basis.
Per-node activation state
In per-node activation state mode, Rhino maintained activation state for the installed services and created resource adaptor entities in a namespace on a per-node basis. That is, the SLEE recorded separate activation state information for each individual cluster node.
The per-node activation state mode was the default mode in a newly installed Rhino cluster.
Symmetric activation state
In the symmetric activation state mode, Rhino maintained a single cluster-wide activation state view for each installed service and created resource adaptor entity. So, for example, if a service was activated, then it was simultaneously activated on every cluster node. If a new node joined the cluster, then the services and resource adaptor entities on that node each entered the same operational state as for existing cluster nodes.
Default and per-node desired state and actual state
In Rhino 3.0.0 and later, a default activation state for the SLEE, an installed service, or a created resource adaptor entity is configured for all nodes in the cluster with optional overrides configured on a per-node basis. The effective desired state for a node is the per-node state, or the default state if no per-node state exists for a given function. If it is desired to manage the state of a cluster in the way previously served by symmetric activation state mode, the default state should be used and per-node state left unconfigured. Commands for managing per-node desired state can be found under the topic Per-Node Desired State.
In operation, Rhino nodes have an actual state that is the current operational state. The actual state follows the desired state with a per-node convergence subsystem managing transitions between actual states as the lifecycle rules of system functions allow.
These terms are defined under Declarative Configuration Concepts and Terminology.
Listing All Desired States
To obtain a report detailing all the default and per-node desired states for the SLEE, services, and resource adaptor entities, use the following rhino-console command or related MBean operation.
Console command: listdesiredstates
Command |
listdesiredstates [-o filename] Description List all default and per-node desired states for the SLEE, services, and resource adaptor entities. The -o option will output the raw json-formatted report to the specified file instead of a human-readable report being output to the console. |
---|---|
Examples |
$ ./rhino-console listdesiredstates SLEE desired state: Default desired state: running Per-node desired states: node 103: stopped Service desired states: Service: ServiceID[name=SIS-IN Test Service Composition Selector Service,vendor=OpenCloud,version=0.3] Default desired state: active Per-node desired states: node 103: inactive Service: ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.3] Default desired state: active Per-node desired states: node 103: inactive Service: ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.3] Default desired state: active Per-node desired states: node 103: inactive Service: ServiceID[name=Call Duration Logging Service,vendor=OpenCloud,version=0.3] Default desired state: active Per-node desired states: node 103: inactive Service: ServiceID[name=VPN Service,vendor=OpenCloud,version=0.3] Default desired state: active Per-node desired states: node 103: inactive Resource adaptor entity desired states: Resource adaptor entity: insis-ptc-1a Default desired state: active Resource adaptor entity: insis-ptc-1b Default desired state: active Resource adaptor entity: insis-ptc-external Default desired state: active To save the report to a file in JSON format: $ ./rhino-console listdesiredstates -o desired-states.json Output written to file: desired-states.json |
MBean operation: getDesiredStates
MBean |
|
---|---|
Rhino operation |
public String getDesiredStates() throws ManagementException; This operation returns a JSON-formatted string that reports the default desired state and any per-node desired state, where it exists, for the SLEE and each service and resource adaptor entity. |
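As an illustration, the sketch below (a helper for the JMX client shown under JMX MBeans) retrieves the JSON report and writes it to a file, mirroring the console command's -o option. The object name is a placeholder for whichever Rhino MBean exposes getDesiredStates, and Files.writeString requires Java 11 or later.
  static void saveDesiredStates(MBeanServerConnection connection) throws Exception {
      ObjectName stateMBean =
          new ObjectName("com.example.rhino:type=StateManagement"); // placeholder object name
      String json = (String) connection.invoke(
          stateMBean, "getDesiredStates", new Object[0], new String[0]);
      // Save the raw JSON report, equivalent to "listdesiredstates -o desired-states.json".
      java.nio.file.Files.writeString(
          java.nio.file.Path.of("desired-states.json"), json);
  }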
Per-Node Desired State
This section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples, and links to related javadocs.
Procedure | rhino-console command | MBean → Operation |
---|---|---|
getnodeswithpernodedesiredstate |
Node Housekeeping → |
|
copypernodedesiredstate |
Node Housekeeping → |
|
removepernodedesiredstate |
Node Housekeeping → |
See also Finding Housekeeping MBeans. |
Listing Nodes with Per-Node Desired State
To get a list of nodes with per-node desired state, use the following rhino-console command or related MBean operation.
Console command: getnodeswithpernodedesiredstate
Command |
getnodeswithpernodedesiredstate Description Get the set of nodes for which per-node desired state exists |
---|---|
Example |
$ ./rhino-console getnodeswithpernodedesiredstate Nodes with per-node desired state: [101,102,103] |
MBean operation: getNodesWithPerNodeActivationState
MBean |
|||
---|---|---|---|
Rhino operation |
public int[] getNodesWithPerNodeActivationState() throws ManagementException; This operation returns an array listing the cluster node IDs for nodes that have per-node desired state recorded in the database.
|
Copying Per-Node Desired State to Another Node
This operation is not supported when using Rhino configured in pool clustering mode as a pool cluster node may only maintain per-node desired state for itself. |
To copy per-node desired state from one node to another, use the following rhino-console command or related MBean operation. This replaces any configured desired state for the node and triggers state convergence to update the actual state for the SLEE and all Services and Resource Adaptor Entities. Copying the state from a node that does not have per-node desired state configured will remove the state configuration for the target node. When a node has no per-node desired state configured it uses the default desired state.
Console command: copypernodedesiredstate
Command |
copypernodedesiredstate <from-node-id> <to-node-id> Description Copy per-node desired state from one node to another |
---|---|
Example |
To copy the per-node desired state from node 101 to node 102: $ ./rhino-console copypernodedesiredstate 101 102 Per-node desired state copied from 101 to 102 |
MBean operation: copyPerNodeActivationState
MBean |
|||
---|---|---|---|
Rhino operation |
public boolean copyPerNodeActivationState(int targetNodeID) throws UnsupportedOperationException, InvalidArgumentException, InvalidStateException, ManagementException; This operation:
|
The start-rhino.sh command with the Production version of Rhino also includes an option (-c) to copy per-node desired state from another node to the booting node as it initialises. (See Start Rhino in the Getting Started Guide.) |
Removing Per-Node Desired State
To remove per-node desired state, use the following rhino-console command or related MBean operation. This removes any configured desired state for the node and triggers state convergence to update the actual state for the SLEE and all Services and Resource Adaptor Entities. When a node has no per-node desired state configured it uses the default desired state.
Console command: removepernodedesiredstate
Command |
removepernodedesiredstate <-all|-nodes node1,node2,...> Description Removes all per-node desired state from either all nodes (with -all), or specific nodes (with -nodes). This can remove per-node desired state from offline nodes. |
---|---|
Example |
To remove per-node desired state from node 103: $ ./rhino-console removepernodedesiredstate 103 Per-node desired state removed from 103 |
MBean operation: removePerNodeActivationState
MBean |
|
---|---|
Rhino operation |
public boolean removePerNodeActivationState() throws UnsupportedOperationException, InvalidStateException, ManagementException; This operation:
|
The start-rhino.sh command with the Production version of Rhino also includes an option (-d) to remove per-node desired state from the booting node as it initialises. (See Start Rhino in the Getting Started Guide.) |
Startup and Shutdown Priority
Startup and shutdown priorities should be set when resource adaptors and services need to be activated or deactivated in a particular order when the SLEE is started or stopped. For example, the resource adaptors responsible for writing Call Detail Records often need to be deactivated last.
Valid priorities are between -128 and 127. Startup and shutdown both proceed from the highest priority to the lowest, so a component that must be activated or deactivated last (such as a resource adaptor entity that writes Call Detail Records) should be given a low priority such as -128.
Console commands
Console command: getraentitystartingpriority
Command |
getraentitystartingpriority <entity-name> Description Get the starting priority for a resource adaptor entity |
---|---|
Examples |
./rhino-console getraentitystartingpriority sipra Resource adaptor entity sipra activation priority is currently 0 |
Console command: getraentitystoppingpriority
Command |
getraentitystoppingpriority <entity-name> Description Get the stopping priority for a resource adaptor entity |
---|---|
Examples |
./rhino-console getraentitystoppingpriority sipra Resource adaptor entity sipra deactivation priority is currently 0 |
Console command: getservicestartingpriority
Command |
getservicestartingpriority <service-id> Description Get the starting priority for a service |
---|---|
Examples |
./rhino-console getservicestartingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1 Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] activation priority is currently 0 |
Console command: getservicestoppingpriority
Command |
getservicestoppingpriority <service-id> Description Get the stopping priority for a service |
---|---|
Examples |
./rhino-console getservicestoppingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1 Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] deactivation priority is currently 0 |
Console command: setraentitystartingpriority
Command |
setraentitystartingpriority <entity-name> <priority> Description Set the starting priority for a resource adaptor entity. The priority must be between -128 and 127 and higher priority values have precedence over lower priority values |
---|---|
Examples |
./rhino-console setraentitystartingpriority sipra 127 Resource adaptor entity sipra activation priority set to 127 ./rhino-console setraentitystartingpriority sipra -128 Resource adaptor entity sipra activation priority set to -128 |
Console command: setraentitystoppingpriority
Command |
setraentitystoppingpriority <entity-name> <priority> Description Set the stopping priority for a resource adaptor entity. The priority must be between -128 and 127 and higher priority values have precedence over lower priority values |
---|---|
Examples |
./rhino-console setraentitystoppingpriority sipra 127 Resource adaptor entity sipra deactivation priority set to 127 ./rhino-console setraentitystoppingpriority sipra -128 Resource adaptor entity sipra deactivation priority set to -128 |
Console command: setservicestartingpriority
Command |
setservicestartingpriority <service-id> <priority> Description Set the starting priority for a service. The priority must be between -128 and 127 and higher priority values have precedence over lower priority values |
---|---|
Examples |
./rhino-console setservicestartingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1 127 Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] activation priority set to 127 ./rhino-console setservicestartingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1 -128 Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] activation priority set to -128 |
Console command: setservicestoppingpriority
Command |
setservicestoppingpriority <service-id> <priority> Description Set the stopping priority for a service. The priority must be between -128 and 127 and higher priority values have precedence over lower priority values |
---|---|
Examples |
./rhino-console setservicestoppingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1 127 Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] deactivation priority set to 127 ./rhino-console setservicestoppingpriority name=SIP\ Presence\ Service,vendor=OpenCloud,version=1.1 -128 Service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] deactivation priority set to -128 |
MBean operations
Services
MBean |
|
---|---|
Rhino extensions |
getStartingPriority
byte getStartingPriority(ServiceID service) throws NullPointerException, UnrecognizedServiceException, ManagementException;
getStartingPriorities
Byte[] getStartingPriorities(ServiceID[] services) throws NullPointerException, ManagementException;
getStoppingPriority
byte getStoppingPriority(ServiceID service) throws NullPointerException, UnrecognizedServiceException, ManagementException;
getStoppingPriorities
Byte[] getStoppingPriorities(ServiceID[] services) throws NullPointerException, ManagementException;
setStartingPriority
void setStartingPriority(ServiceID service, byte priority) throws NullPointerException, UnrecognizedServiceException, ManagementException;
setStoppingPriority
void setStoppingPriority(ServiceID service, byte priority) throws NullPointerException, UnrecognizedServiceException, ManagementException; |
Resource Adaptors
MBean |
|
---|---|
Rhino extensions |
getStartingPriority
byte getStartingPriority(String entityName) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException;
getStartingPriorities
Byte[] getStartingPriorities(String[] entityNames) throws NullPointerException, ManagementException;
getStoppingPriority
byte getStoppingPriority(String entityName) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException;
getStoppingPriorities
Byte[] getStoppingPriorities(String[] entityNames) throws NullPointerException, ManagementException;
setStartingPriority
void setStartingPriority(String entityName, byte priority) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException;
setStoppingPriority
void setStoppingPriority(String entityName, byte priority) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException; |
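As an illustration, the sketch below (a helper for the JMX client shown under JMX MBeans) sets a resource adaptor entity's stopping priority and a service's starting priority. It assumes these Rhino extensions are exposed on the MBeans obtained from the SLEE-defined ResourceManagementMBean and ServiceManagementMBean attributes of the SleeManagementMBean; the entity name, service identity, and priority values are illustrative.
  static void setPriorities(MBeanServerConnection connection) throws Exception {
      ObjectName sleeManagement =
          new ObjectName("javax.slee.management:name=SleeManagement");

      // Make the "sipra" resource adaptor entity deactivate last (lowest stopping priority).
      ObjectName resourceMgmt =
          (ObjectName) connection.getAttribute(sleeManagement, "ResourceManagementMBean");
      connection.invoke(resourceMgmt, "setStoppingPriority",
          new Object[] { "sipra", (byte) -128 },
          new String[] { "java.lang.String", "byte" });

      // Make the SIP Presence Service activate first (highest starting priority).
      ObjectName serviceMgmt =
          (ObjectName) connection.getAttribute(sleeManagement, "ServiceManagementMBean");
      connection.invoke(serviceMgmt, "setStartingPriority",
          new Object[] { new javax.slee.ServiceID("SIP Presence Service", "OpenCloud", "1.1"), (byte) 127 },
          new String[] { "javax.slee.ServiceID", "byte" });
  }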
Deployable Units
As well as an overview of deployable units, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:
Procedure | rhino-console command(s) | MBean → Operation |
---|---|---|
install installlocaldu |
DeploymentMBean → |
|
uninstall |
DeploymentMBean → |
|
listdeployableunits |
DeploymentMBean → |
|
lockdeploymentstate |
PlatformRestrictionsConfigManagementMBean → |
About Deployable Units
Below are a definition, preconditions for installing and uninstalling, and an example of a deployable unit.
What is a deployable unit?
A deployable unit is a jar file that can be installed in the SLEE. It contains:
-
a deployment descriptor
-
constituent jar files, with Java class files and deployment descriptors for components such as:
-
SBBs
-
events
-
profile specifications
-
resource adaptor types
-
resource adaptors
-
libraries
-
-
XML files for services.
The JAIN SLEE 1.1 specification defines the structure of a deployable unit. |
Installing and uninstalling deployable units
You must install and uninstall deployable units in a particular order, according to the dependencies of the SLEE components they contain. You cannot install a deployable unit unless either it contains all of its dependencies, or they are already installed. For example, if your deployable unit contains an SBB which depends on a library jar, the library jar must either already be installed in the SLEE, or be included in that same deployable unit jar.
Pre-conditions
A deployable unit cannot be installed if any of the following is true:
-
A deployable unit with the same URL has already been installed in the SLEE.
-
The deployable unit contains a component with the same name, vendor and version as a component of the same type that is already installed in the SLEE.
-
The deployable unit contains a component that references other components that are not yet installed in the SLEE and are not included in the deployable unit jar. (For example, an SBB component may reference event-type components and profile-specification components that are not included or pre-installed.)
A deployable unit cannot be uninstalled if either of the following is true:
-
There are any dependencies on any of its components from components in other installed deployable units. For example, if a deployable unit contains an SBB jar that depends on a profile-specification jar contained in a second deployable unit, the deployable unit containing the profile-specification jar cannot be uninstalled while the deployable unit containing the SBB jar remains installed.
-
There are "instances" of components contained in the deployable unit. For example, a deployable unit containing a resource adaptor cannot be uninstalled if the SLEE includes resource adaptor entities of that resource adaptor.
Deployable unit example
The following example illustrates the deployment descriptor for a deployable unit jar file:
<deployable-unit> <description> ... </description> ... <jar> SomeProfileSpec.jar </jar> <jar> BarAddressProfileSpec.jar </jar> <jar> SomeCustomEvent.jar </jar> <jar> FooSBB.jar </jar> <jar> BarSBB.jar </jar> ... <service-xml> FooService.xml </service-xml> ... </deployable-unit>
The content of the deployable unit jar file is as follows:
META-INF/deployable-unit.xml META-INF/MANIFEST.MF ... SomeProfileSpec.jar BarAddressProfileSpec.jar SomeCustomEvent.jar FooSBB.jar BarSBB.jar FooService.xml ...
Installing Deployable Units
To install a deployable unit, use the following rhino-console command or related MBean operation.
Console commands: install
, installlocaldu
Commands |
Installing from a URL
install <url> [-type <type>] [-installlevel <level>] Description Install a deployable unit jar or other artifact. To install something other than a deployable unit, the -type option must be specified. The -installlevel option controls to what degree the deployable artifact is installed
Installing from a local file
installlocaldu <file url> [-type <type>] [-installlevel <level>] [-url url] Description Install a deployable unit or other artifact. This command will attempt to forward the file content (by reading the file) to rhino if the management client is on a different host. To install something other than a deployable unit, the -type option must be specified. The -installlevel option controls to what degree the deployable artifact is installed. The -url option allows the deployment unit to be installed with an alternative URL identifier |
---|---|
Examples |
To install a deployable unit from a given URL: $ ./rhino-console install file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar installed: DeployableUnitID[url=file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar] To install a deployable unit from the local file system of the management client: $ ./rhino-console installlocaldu file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar installed: DeployableUnitID[url=file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar] |
MBean operation: install
MBean |
|
---|---|
SLEE-defined |
Install a deployable unit from a given URL
public DeployableUnitID install(String url) throws NullPointerException, MalformedURLException, AlreadyDeployedException, DeploymentException, ManagementException; Installs the given deployable unit jar file into the SLEE. The given URL must be resolvable from the Rhino node. |
Rhino extension |
Install a deployable unit from a given byte array
public DeployableUnitID install(String url, byte[] content) throws NullPointerException, MalformedURLException, AlreadyDeployedException, DeploymentException, ManagementException; Installs the given deployable unit jar file into the SLEE. The caller passes the actual file contents of the deployable unit in a byte array as a parameter to this method. The SLEE then installs the deployable unit as if it were from the URL. |
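As a rough sketch, the operations above might be driven from a JMX client as follows. The sketch assumes an MBeanServerConnection to Rhino has already been established, that the SLEE API classes are on the client classpath, and that the DeploymentMBean is registered under the SLEE 1.1 OBJECT_NAME constant (adjust for your installation). The byte-array variant is invoked by operation name because it is a Rhino extension not present on the standard javax.slee.management.DeploymentMBean interface.

import java.nio.file.Files;
import java.nio.file.Paths;
import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.management.DeployableUnitID;
import javax.slee.management.DeploymentMBean;

public class InstallExample {
    // Install a deployable unit from a URL that is resolvable by the Rhino node itself.
    public static DeployableUnitID installFromUrl(MBeanServerConnection conn, String url) throws Exception {
        DeploymentMBean deployment = JMX.newMBeanProxy(
                conn, new ObjectName(DeploymentMBean.OBJECT_NAME), DeploymentMBean.class);
        return deployment.install(url);
    }

    // Install a deployable unit whose content is read on the management client (Rhino extension).
    public static DeployableUnitID installFromLocalFile(MBeanServerConnection conn, String url, String localPath) throws Exception {
        ObjectName name = new ObjectName(DeploymentMBean.OBJECT_NAME);
        byte[] content = Files.readAllBytes(Paths.get(localPath));
        // "[B" is the JMX signature string for byte[].
        return (DeployableUnitID) conn.invoke(name, "install",
                new Object[] { url, content },
                new String[] { "java.lang.String", "[B" });
    }
}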
Uninstalling Deployable Units
To uninstall a deployable unit, use the following rhino-console command or related MBean operation.
A deployable unit cannot be uninstalled if it contains any components that any other deployable unit installed in the SLEE depends on. |
Console command: uninstall
Command |
uninstall <url> Description Uninstall a deployable unit jar |
---|---|
Examples |
To uninstall a deployable unit which was installed with the given URL: $ ./rhino-console uninstall file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar uninstalled: DeployableUnitID[url=file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar] |
Console command: cascadeuninstall
Command |
cascadeuninstall <type> <url|component-id> [-force] [-s] Description Cascade uninstall a deployable unit or copied component. The optional -force argument prevents the command from prompting for confirmation before the uninstall occurs. The -s argument removes the shadow from a shadowed component and is not valid for deployable units |
---|---|
Examples |
To uninstall a deployable unit which was installed with the given URL and all deployable units that depend on this: $ ./rhino-console cascadeuninstall du file:du/ocsip-ra-2.3.1.17.du.jar Cascade removal of deployable unit file:du/ocsip-ra-2.3.1.17.du.jar requires the following operations to be performed: Deployable unit file:jars/sip-registrar-service.jar will be uninstalled SBB with SbbID[name=RegistrarSbb,vendor=OpenCloud,version=1.8] will be uninstalled Service with ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8] will be uninstalled This service will first be deactivated Deployable unit file:jars/sip-presence-service.jar will be uninstalled SBB with SbbID[name=EventStateCompositorSbb,vendor=OpenCloud,version=1.0] will be uninstalled SBB with SbbID[name=NotifySbb,vendor=OpenCloud,version=1.1] will be uninstalled SBB with SbbID[name=PublishSbb,vendor=OpenCloud,version=1.0] will be uninstalled Service with ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1] will be uninstalled This service will first be deactivated Service with ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] will be uninstalled This service will first be deactivated Service with ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0] will be uninstalled This service will first be deactivated Deployable unit file:jars/sip-proxy-service.jar will be uninstalled SBB with SbbID[name=ProxySbb,vendor=OpenCloud,version=1.8] will be uninstalled Service with ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8] will be uninstalled This service will first be deactivated Deployable unit file:du/ocsip-ra-2.3.1.17.du.jar will be uninstalled Resource adaptor with ResourceAdaptorID[name=OCSIP,vendor=OpenCloud,version=2.3.1] will be uninstalled Resource adaptor entity sipra will be removed This resource adaptor entity will first be deactivated Link name OCSIP bound to this resource adaptor entity will be removed Continue? (y/n): y Deactivating service ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8] Deactivating service ServiceID[name=SIP Notification Service,vendor=OpenCloud,version=1.1] Deactivating service ServiceID[name=SIP Presence Service,vendor=OpenCloud,version=1.1] Deactivating service ServiceID[name=SIP Publish Service,vendor=OpenCloud,version=1.0] Deactivating service ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8] All necessary services are inactive Deactivating resource adaptor entity sipra All necessary resource adaptor entities are inactive Uninstalling deployable unit file:jars/sip-registrar-service.jar Uninstalling deployable unit file:jars/sip-presence-service.jar Uninstalling deployable unit file:jars/sip-proxy-service.jar Unbinding resource adaptor entity link name OCSIP Removing resource adaptor entity sipra Uninstalling deployable unit file:du/ocsip-ra-2.3.1.17.du.jar |
MBean operation: uninstall
MBean |
|
---|---|
SLEE-defined |
public void uninstall(DeployableUnitID id) throws NullPointerException, UnrecognizedDeployableUnitException, DependencyException, InvalidStateException, ManagementException; Uninstalls the given deployable unit jar file (along with all the components it contains) out of the SLEE. |
Listing Deployable Units
To list the installed deployable units, use the following rhino-console command or related MBean operation.
Console command: listdeployableunits
Command |
listdeployableunits Description List the current installed deployable units |
---|---|
Example |
To list the currently installed deployable units: $ ./rhino-console listdeployableunits DeployableUnitID[url=file:/home/rhino/rhino/examples/sip-examples-2.0/lib/jsip-library-du-1.2.jar] DeployableUnitID[url=file:/home/rhino/rhino/lib/javax-slee-standard-types.jar] |
MBean operation: getDeployableUnits
MBean |
|
---|---|
SLEE-defined |
public DeployableUnitID[] getDeployableUnits() throws ManagementException; Returns the set of deployable unit identifiers that identify all the deployable units installed in the SLEE. |
Locking component installation
The lockdowndeployableunits
command sets an internal flag in the Rhino management database to disable installation and uninstallation of deployable units. Once invoked, it is impossible for a Rhino administrator to unlock the install and uninstall operations without reinitialising the management database while the cluster is offline. Use this command when preparing a deployment image that should be resistant to alteration by the operator of the system.
The primary purpose of the 'lockdowndeployableunits' command is to create sealed deployment images that are resistant to tampering; however, it is of limited utility if the user managing the system has direct access to the underlying operating system. The principal benefits are:
-
To block people with access to user management tools from modifying the set of deployed binary components.
-
To make altering the deployment state more difficult and easier to detect - cluster restarts are highly visible and disruptive to operations.
-
To provide a simple process for locking deployments that does not require control of the deployed environment.
-
To support other security controls such as read-only deployment images and any future integrity checks such as code signing.
To lock the deployment state, use the following rhino-console command or related MBean operation.
Console command: lockdowndeployableunits
Command |
lockdowndeployableunits [-force] Description Lock down the Rhino deployment binaries. Deployable units cannot be installed or uninstalled, nor namespaces created or removed, once lockdown has been enabled. Lockdown cannot be reversed except by completely reinitialising the management database and reinstalling the deployable units. The optional -force argument prevents the command from prompting for confirmation before the lockdown occurs. |
---|
MBean operation: disableDeployableUnitModification
MBean |
|
---|---|
Rhino extension |
public void disableDeployableUnitModification() throws ConfigurationException; Locks code deployment in Rhino (installation and uninstallation of deployable units), and namespace creation and removal. |
MBean operation: isDeployableUnitModificationDisabled
MBean |
|
---|---|
Rhino extension |
public boolean isDeployableUnitModificationDisabled() throws ConfigurationException; Checks the deployment lockdown state of Rhino. Returns true if code deployment and namespace management are locked. |
Services
As well as an overview of SLEE services, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:
Procedure | rhino-console command(s) | MBean → Operation(s) |
---|---|---|
listservices |
Deployment → |
|
getserviceactualstate, getservicedesiredstate |
Service Management → |
|
listservicesbystate |
Service Management → |
|
activateservice |
Service Management → |
|
deactivateservice |
Service Management → |
|
deactivateandactivateservice + |
Service Management → |
|
listserviceralinks |
Deployment → |
|
listsbbs |
Deployment → |
|
|
Deployment → |
|
getservicemetricsrecordingenabled |
ServiceManagementMBean → |
|
setservicemetricsrecordingenabled |
ServiceManagementMBean → |
|
getservicereplicationselectors |
ServiceManagementMBean → |
|
setservicereplicationselectors |
ServiceManagementMBean → |
About Services
The SLEE specification presents the operational lifecycle of a SLEE service — illustrated, defined and summarised below.
What are SLEE services?
Services are SLEE components that provide the application logic to act on input from resource adaptors. |
Service lifecycle states
State | Definition |
---|---|
INACTIVE |
The service has been installed successfully and is ready to be activated, but not yet running. The SLEE will not create SBB entities of the service’s root SBB, to process events. |
ACTIVE |
The service is running. The SLEE will create SBB entities, of the service’s root SBB, to process initial events. The SLEE will also deliver events to SBB entities of the service’s SBBs, as appropriate. |
STOPPING |
The service is deactivating. Existing SBB entities of the service continue running and may complete their processing. But the SLEE will not create new SBB entities of the service’s root SBB, for new activities. |
When all of a service’s SBB entities have been reclaimed by the SLEE, the service transitions out of the STOPPING state and returns to the INACTIVE state. |
Independent operational states
As explained in About SLEE Operational States, each event-router node in a Rhino cluster maintains its own lifecycle state machine, independent of other nodes in the cluster. This is also true for each service: one service might be INACTIVE on one node in a cluster, ACTIVE on another, and STOPPING on a third. The operational state of a service on each cluster node also persists to the disk-based database.
A service will enter the INACTIVE state, after node bootup and initialisation completes, if the database’s persistent operational state information for that service is missing, or is set to INACTIVE or STOPPING.
And, like node operational states, when using the Savanna clustering mode, you can change the operational state of a service at any time, as long as at least one node in the cluster is available to perform the management operation (regardless of whether or not the node whose operational state is being changed is a current cluster member). For example, you might activate a service on node 103 before node 103 is booted — then, when node 103 boots, and after it completes initialisation, that service will transition to the ACTIVE state. When using the pool clustering mode, you can only change the state of services on the pool cluster node that the management operation is invoked on. To change the state of a service on any other node, a management client needs to connect directly to that node.
Configuring services
An administrator can configure a service before deployment by modifying its service-jar.xml
deployment descriptor (in its deployable unit). This includes specifying:
-
the address profile table to use when a subscribed address selects initial events for the service’s root SBB
-
the default event-router priority for the SLEE to give to root SBB entities of the service when processing initial events.
Individual SBBs used in a service can also have configurable properties or environment entries. Values for these environment entries are defined in the sbb-jar.xml
deployment descriptor included in the SBB’s component jar. Administrators can set or adjust the values for each environment entry before the SBB is installed in the SLEE.
The SLEE only reads the configurable properties defined for a service or SBB deployment descriptor at deployment time. If you need to change the value of any of these properties, you’ll need to:
-
uninstall the related component (service or SBB whose properties you want to configure) from the SLEE
-
change the properties
-
reinstall the component
-
uninstall and reinstall other components (as needed) to satisfy dependency requirements enforced by the SLEE.
Retrieving a Service’s State
Retrieving actual state
To retrieve the actual operational state of a Service, use the following rhino-console command or related MBean operation. For an explanation of the terms "actual state" and "desired state", see Concepts and Terminology.
Console command: getserviceactualstate
Command |
getserviceactualstate <service-id> <-all|-nodes node1,node2,...> Description Get the actual service state for the specified nodes. If -all is specified, query the state of all current event router cluster members. |
---|---|
Output |
The |
Examples |
To display the actual state of the service with the ServiceID $ ./rhino-console getserviceactualstate name=SimpleService1,vendor=Open Cloud,version=1.0 -nodes 101 Getting actual service state for node(s) [101] Node 101: Stopped To display the actual state of the service $ ./rhino-console getserviceactualstate name=SimpleService1,vendor=Open Cloud,version=1.0 -all Getting actual service state for node(s) [101,102] Node 101: Stopped Node 102: Running |
MBean operation: getActualState
MBean |
|
---|---|
Rhino extension |
Return actual state of a set of nodes
public ServiceActualState getActualState(ServiceID serviceID, int[] nodeIDs) throws ManagementException; |
Retrieving desired state
To retrieve the desired operational state of a Service, use the following rhino-console command or related MBean operation.
Console command: getservicedesiredstate
Command |
getservicedesiredstate <service-id> <-default|-all|-nodes node1,node2,...> Description Get the default or per-node desired service state. If -all is specified, query the state of all current event router nodes as well as all nodes with saved per-node state. |
---|---|
Output |
The |
Examples |
To display the desired state of only node 101: $ ./rhino-console getservicedesiredstate -nodes 101 Node 101: Stopped To display the desired state of the service $ ./rhino-console getservicedesiredstate -all Node 101: Stopped Node 102: Running (default) Node 103: Running To display the default desired state that unconfigured event router nodes will inherit: $ ./rhino-console getservicedesiredstate -default Getting default service state Default service state is: running |
MBean operation: getPerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Return desired state of a set of nodes
public ServiceDesiredState getPerNodeDesiredState(ServiceID serviceID, int[] nodeIDs) throws ManagementException; |
MBean operation: getDefaultDesiredState
MBean |
|
---|---|
Rhino extension |
Return the default desired state used by nodes that do not have a configured per-node state
public ServiceDesiredState getDefaultDesiredState() throws ManagementException; |
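The following is a minimal sketch of querying these states from a JMX client. It assumes an existing connection to Rhino, that these Rhino extension operations are exposed by the MBean registered under the SLEE 1.1 ServiceManagementMBean.OBJECT_NAME, and that the Rhino client library is on the classpath so the returned state objects can be deserialised.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.ServiceID;
import javax.slee.management.ServiceManagementMBean;

public class ServiceStateExample {
    // Print the actual state of a service on the given nodes, and the default desired state.
    public static void printStates(MBeanServerConnection conn, ServiceID service, int[] nodeIDs) throws Exception {
        ObjectName name = new ObjectName(ServiceManagementMBean.OBJECT_NAME);
        // "[I" is the JMX signature string for int[].
        Object actual = conn.invoke(name, "getActualState",
                new Object[] { service, nodeIDs },
                new String[] { "javax.slee.ServiceID", "[I" });
        Object defaultDesired = conn.invoke(name, "getDefaultDesiredState",
                new Object[] {}, new String[] {});
        System.out.println("Actual state: " + actual);
        System.out.println("Default desired state: " + defaultDesired);
    }
}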
Retrieving SLEE-defined state
To retrieve the operational state of a service in a form compatible with the JAIN SLEE specification, use the following rhino-console command or related MBean operation.
Console command: getservicestate
Command |
getservicestate <service-id> [-nodes node1,node2,...] Description Get the state of a service (on the specified nodes) |
---|---|
Output |
The |
Examples |
To display the state of the service with the ServiceID $ ./rhino-console getservicestate name=SimpleService1,vendor=Open Cloud,version=1.0 Service is Inactive on node 101 Service is Active on node 102 To display the state of the service on only node 101: $ ./rhino-console getservicestate name=SimpleService1,vendor=Open Cloud,version=1.0 -nodes 101 Service is Inactive on node 101 |
MBean operation: getState
MBean |
|||
---|---|---|---|
SLEE-defined |
Return state of service on current node
public ServiceState getState(ServiceID id) throws NullPointerException, UnrecognizedServiceException, ManagementException; Rhino’s implementation of the SLEE-defined
|
||
Rhino extension |
Return state of service on specified node(s)
public ServiceState[] getState(ServiceID id, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, ManagementException; Rhino provides an extension that adds an argument which lets you control the nodes on which to return the state of the service (by specifying node IDs). |
All Available Services
To list all available services installed in the SLEE, use the following rhino-console command or related MBean operation.
Console command: listservices
Command |
listservices Description List the current installed services |
---|---|
Example |
$ ./rhino-console listservices ServiceID[name=SIP AC Location Service,vendor=OpenCloud,version=1.7] ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8] ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8] |
MBean operation: getServices
MBean |
|
---|---|
SLEE-defined |
public ServiceID[] getServices() throws ManagementException; This operation returns an array of service component identifiers, identifying the services installed in the SLEE. |
See also Services by State. |
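For example, a JMX client could list the installed services as in the sketch below, assuming an established MBeanServerConnection and the standard SLEE 1.1 ObjectName for the Deployment MBean (adjust if your installation registers it differently).

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.ServiceID;
import javax.slee.management.DeploymentMBean;

public class ListServicesExample {
    // Print the component identifiers of all services installed in the SLEE.
    public static void listServices(MBeanServerConnection conn) throws Exception {
        DeploymentMBean deployment = JMX.newMBeanProxy(
                conn, new ObjectName(DeploymentMBean.OBJECT_NAME), DeploymentMBean.class);
        for (ServiceID service : deployment.getServices()) {
            System.out.println(service);
        }
    }
}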
Services by State
To list the services in a particular operational state, use the following rhino-console command or related MBean operation.
Console command: listservicesbystate
Command |
listservicesbystate <state> [-node node] Description List the services that are in the specified state (on the specified node) |
---|---|
Output |
The operational state of a service is node-specific. If the |
Example |
To list the services in the ACTIVE state on node 102: $ ./rhino-console listservicesbystate Active -node 102 Services in Active state on node 102: ServiceID[name=SIP Proxy Service,vendor=OpenCloud,version=1.8] ServiceID[name=SIP Registrar Service,vendor=OpenCloud,version=1.8] |
MBean operation: getServices
MBean |
|
---|---|
SLEE-defined |
Get services on all nodes
public ServiceID[] getServices(ServiceState state) throws NullPointerException, ManagementException; Rhino’s implementation of the SLEE-defined |
Rhino extension |
Get services on specific nodes
public ServiceID[] getServices(ServiceState state, int nodeID) throws NullPointerException, InvalidArgumentException, ManagementException; Rhino provides an extension that adds an argument that lets you control the nodes on which to list services in a particular state (by specifying node IDs). |
See also All Available Services. |
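A corresponding JMX sketch for the SLEE-defined operation, assuming an established connection and the standard ServiceManagementMBean ObjectName:

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.ServiceID;
import javax.slee.management.ServiceManagementMBean;
import javax.slee.management.ServiceState;

public class ServicesByStateExample {
    // Print all services currently in the ACTIVE state.
    public static void listActiveServices(MBeanServerConnection conn) throws Exception {
        ServiceManagementMBean serviceManagement = JMX.newMBeanProxy(
                conn, new ObjectName(ServiceManagementMBean.OBJECT_NAME), ServiceManagementMBean.class);
        for (ServiceID service : serviceManagement.getServices(ServiceState.ACTIVE)) {
            System.out.println(service);
        }
    }
}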
Activating Services
To activate one or more services, use the following rhino-console command or related MBean operations.
When using the pool clustering mode, it is only possible to change the state of a service on the node the management operation is invoked on. To change the state of a service on another node, a management client needs to connect directly to that node. |
If executed without a list of nodes, all per-node desired state for the service is removed and the default desired state of the service is set to active (if it is not already). |
Console command: activateservice
Command |
activateservice <service-id>* [-nodes node1,node2,...] [-ifneeded] Description Activate a service (on the specified nodes) |
---|---|
Example |
To activate the Call Barring and Call Forwarding services on nodes 101 and 102: $ ./rhino-console activateservice \ "name=Call Barring Service,vendor=OpenCloud,version=0.2" \ "name=Call Forwarding Service,vendor=OpenCloud,version=0.2" \ -nodes 101,102 Activating services [ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2], ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.2]] on node(s) [101,102] Services transitioned to the Active state on node 101 Services transitioned to the Active state on node 102 |
MBean operation: setPerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on specific nodes
public void setPerNodeDesiredState(ServiceID id, int[] nodeIDs, ServiceDesiredState desiredState) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, ManagementException; Rhino provides an extension to set the desired state for a service on a set of nodes. |
MBean operation: setDefaultDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on nodes that do not have per-node state configured for the specified service
public void setDefaultDesiredState(ServiceID id, ServiceDesiredState desiredState) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, ManagementException; Rhino provides an extension to set the desired state for a service on nodes that do not have a per-node desired state configured. |
MBean operation: removePerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on nodes that have per-node service state configured that is different from the default state
public void removePerNodeDesiredState(ServiceID id, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, ManagementException; Rhino provides an extension to clear the desired state for a service on a set of nodes. Nodes that do not have a per-node desired state configured use the default desired state. |
MBean operation: activate
MBean |
|
---|---|
SLEE-defined |
Activate on all nodes
public void activate(ServiceID id) throws NullPointerException, UnrecognizedServiceException, InvalidStateException, InvalidLinkNameBindingStateException, ManagementException; public void activate(ServiceID[] ids) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, InvalidLinkNameBindingStateException, ManagementException; Rhino’s implementation of the SLEE-defined |
Rhino extension |
Activate on specific nodes
public void activate(ServiceID id, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, ManagementException; public void activate(ServiceID[] ids, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, ManagementException; Rhino provides an extension that adds an argument to let you control the nodes on which to activate the specified services (by specifying node IDs). For this to work, the specified services must be in the INACTIVE state on the specified nodes. |
A service may require resource adaptor entity link names to be bound to appropriate resource adaptor entities before it can be activated. (See Getting Link Bindings Required by a Service and Managing Resource Adaptor Entity Link Bindings.) |
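As a sketch, the SLEE-defined activate operation can be invoked from a JMX client as shown below. The ServiceID values are taken from the console example above and are illustrative only; the node-targeted variant is a Rhino extension that is not part of the standard javax.slee.management.ServiceManagementMBean interface and is not shown here.

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.ServiceID;
import javax.slee.management.ServiceManagementMBean;

public class ActivateServiceExample {
    // Activate a service (default desired state becomes active on all nodes without per-node state).
    public static void activate(MBeanServerConnection conn) throws Exception {
        ServiceManagementMBean serviceManagement = JMX.newMBeanProxy(
                conn, new ObjectName(ServiceManagementMBean.OBJECT_NAME), ServiceManagementMBean.class);
        ServiceID service = new ServiceID("Call Barring Service", "OpenCloud", "0.2");
        serviceManagement.activate(service);
    }
}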
Deactivating Services
To deactivate one or more services on one or more nodes, use the following rhino-console command or related MBean operations.
When using the pool clustering mode, it is only possible to change the state of a service on the node the management operation is invoked on. To change the state of a service on another node, a management client needs to connect directly to that node. |
If executed without a list of nodes, all per-node desired state for the service is removed and the default desired state of the service is set to inactive (if it is not already). |
Console command: deactivateservice
Command |
deactivateservice <service-id>* [-nodes node1,node2,...] [-ifneeded] Description Deactivate a service (on the specified nodes) |
---|---|
Example |
To deactivate the Call Barring and Call Forwarding services on nodes 101 and 102: $ ./rhino-console deactivateservice \ "name=Call Barring Service,vendor=OpenCloud,version=0.2" \ "name=Call Forwarding Service,vendor=OpenCloud,version=0.2" \ -nodes 101,102 Deactivating services [ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2], ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.2]] on node(s) [101,102] Services transitioned to the Stopping state on node 101 Services transitioned to the Stopping state on node 102 |
MBean operation: setPerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on specific nodes
public void setPerNodeDesiredState(ServiceID id, int[] nodeIDs, ServiceDesiredState desiredState) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, ManagementException; Rhino provides an extension to set the desired state for a service on a set of nodes. |
MBean operation: setDefaultDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on nodes that do not have per-node state configured for the specified service
public void setDefaultDesiredState(ServiceID id, ServiceDesiredState desiredState) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, ManagementException; Rhino provides an extension to set the desired state for a service on nodes that do not have a per-node desired state configured. |
MBean operation: removePerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on nodes that have per-node state configured that is different from the default state
public void removePerNodeDesiredState(ServiceID id, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, ManagementException; Rhino provides an extension to clear the desired state for a service on a set of nodes. Nodes that do not have a per-node desired state configured use the default desired state. |
MBean operation: deactivate
MBean |
|
---|---|
SLEE-defined |
Deactivate on all nodes
public void deactivate(ServiceID id) throws NullPointerException, UnrecognizedServiceException, InvalidStateException, ManagementException; public void deactivate(ServiceID[] ids) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, ManagementException; Rhino’s implementation of the SLEE-defined |
Rhino extension |
Deactivate on specific nodes
public void deactivate(ServiceID id, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, ManagementException; public void deactivate(ServiceID[] ids, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, ManagementException; Rhino provides an extension that adds an argument that lets you control the nodes on which to deactivate the specified services (by specifying node IDs). For this to work, the specified services must be in the ACTIVE state on the specified nodes. |
Console command: waittilserviceisinactive
Command |
waittilserviceisinactive <service-id> [-timeout timeout] [-nodes node1,node2,...] Description Wait for a service to finish deactivating (on the specified nodes) (timing out after N seconds) |
---|---|
Example |
To wait for the Call Barring and Call Forwarding services to finish deactivating on nodes 101 and 102: $ ./rhino-console waittilserviceisinactive \ "name=Call Barring Service,vendor=OpenCloud,version=0.2" \ "name=Call Forwarding Service,vendor=OpenCloud,version=0.2" \ -nodes 101,102 Service ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2] is in the Inactive state on node(s) [101,102] Service ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.2] is in the Inactive state on node(s) [101,102] |
Upgrading (Activating & Deactivating) Services
To activate some services and deactivate others, use the following rhino-console command or related MBean operation.
Activating and deactivating in one operation
The SLEE specification defines the ability to deactivate some services and activate other services in a single operation. As one set of services deactivates, the existing activities being processed by those services continue to completion, while new activities (started after the operation is invoked) are processed by the activated services. The intended use of this is to upgrade a service or services with new versions (however the operation does not have to be used strictly for this purpose). |
When using the pool clustering mode, it is only possible to change the state of services on the node the management operation is invoked on. To change the state of services on another node, a management client needs to connect directly to that node. |
Console command: deactivateandactivateservice
Command |
deactivateandactivateservice Deactivate <service-id>* Activate <service-id>* [-nodes node1,node2,...] Description Deactivate some services and Activate some other services (on the specified nodes) |
---|---|
Example |
To deactivate version 0.2 of the Call Barring and Call Forwarding services and activate version 0.3 of the same services on nodes 101 and 102: $ ./rhino-console deactivateandactivateservice \ Deactivate "name=Call Barring Service,vendor=OpenCloud,version=0.2" \ "name=Call Forwarding Service,vendor=OpenCloud,version=0.2" \ Activate "name=Call Barring Service,vendor=OpenCloud,version=0.3" \ "name=Call Forwarding Service,vendor=OpenCloud,version=0.3" \ -nodes 101,102 On node(s) [101,102]: Deactivating service(s) [ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.2], ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.2]] Activating service(s) [ServiceID[name=Call Barring Service,vendor=OpenCloud,version=0.3], ServiceID[name=Call Forwarding Service,vendor=OpenCloud,version=0.3]] Deactivating service(s) transitioned to the Stopping state on node 101 Activating service(s) transitioned to the Active state on node 101 Deactivating service(s) transitioned to the Stopping state on node 102 Activating service(s) transitioned to the Active state on node 102 |
MBean operation: deactivateAndActivate
MBean |
|
---|---|
SLEE-defined |
Deactivate and activate on all nodes
public void deactivateAndActivate(ServiceID deactivateID, ServiceID activateID) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, InvalidLinkNameBindingStateException, ManagementException; public void deactivateAndActivate(ServiceID[] deactivateIDs, ServiceID[] activateIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, InvalidLinkNameBindingStateException, ManagementException; Rhino’s implementation of the SLEE-defined |
Rhino extension |
Deactivate and activate on specific nodes
public void deactivateAndActivate(ServiceID deactivateID, ServiceID activateID, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, ManagementException; public void deactivateAndActivate(ServiceID[] deactivateIDs, ServiceID[] activateIDs, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedServiceException, InvalidStateException, ManagementException; Rhino provides an extension that adds an argument that lets you control the nodes on which to activate and deactivate services (by specifying node IDs). For this to work, the services to deactivate must be in the ACTIVE state, and the services to activate must be in the INACTIVE state, on the specified nodes. |
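A minimal JMX sketch of the SLEE-defined upgrade operation, mirroring the console example above (service names and versions are illustrative; an established connection and the standard ServiceManagementMBean ObjectName are assumed):

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.ServiceID;
import javax.slee.management.ServiceManagementMBean;

public class UpgradeServiceExample {
    // Replace the 0.2 services with the 0.3 versions in a single operation: existing activities
    // complete on the old versions while new activities are processed by the new versions.
    public static void upgrade(MBeanServerConnection conn) throws Exception {
        ServiceManagementMBean serviceManagement = JMX.newMBeanProxy(
                conn, new ObjectName(ServiceManagementMBean.OBJECT_NAME), ServiceManagementMBean.class);
        ServiceID[] deactivate = {
            new ServiceID("Call Barring Service", "OpenCloud", "0.2"),
            new ServiceID("Call Forwarding Service", "OpenCloud", "0.2")
        };
        ServiceID[] activate = {
            new ServiceID("Call Barring Service", "OpenCloud", "0.3"),
            new ServiceID("Call Forwarding Service", "OpenCloud", "0.3")
        };
        serviceManagement.deactivateAndActivate(deactivate, activate);
    }
}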
Getting Link Bindings Required by a Service
To find the resource adaptor entity link name bindings needed for a service, and list the service’s SBBs, use the following rhino-console commands or related MBean operations.
Console commands
listserviceralinks
Command |
listserviceralinks service-id Description List resource adaptor entity links required by a service |
---|---|
Example |
To list the resource adaptor entity links that the JCC VPN service needs: $ ./rhino-console listserviceralinks "name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0" In service ServiceID[name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0]: SBB SbbID[name=AnytimeInterrogation sbb,vendor=Open Cloud,version=1.0] requires entity link bindings: slee/resources/map SBB SbbID[name=JCC 1.1 VPN sbb,vendor=Open Cloud,version=1.0] requires entity link bindings: slee/resources/cdr |
listsbbs
Command |
listsbbs [service-id] Description List the current installed SBBs. If a service identifier is specified only the SBBs in the given service are listed |
---|---|
Example |
To list the SBBs in the JCC VPN service: $ ./rhino-console listsbbs "name=JCC 1.1 VPN,vendor=Open Cloud,version=1.0" SbbID[name=AnytimeInterrogation sbb,vendor=Open Cloud,version=1.0] SbbID[name=JCC 1.1 VPN sbb,vendor=Open Cloud,version=1.0] SbbID[name=Proxy route sbb,vendor=Open Cloud,version=1.0] |
MBean operations: getServices
, getSbbs
, and getDescriptors
MBean |
|||
---|---|---|---|
SLEE-defined |
Get all services in the SLEE
public ServiceID[] getServices() throws ManagementException;
Get all SBBs in a service
public SbbID[] getSbbs(ServiceID service) throws NullPointerException, UnrecognizedServiceException, ManagementException;
Get the component descriptor for a component
public ComponentDescriptor[] getDescriptors(ComponentID[] ids) throws NullPointerException, ManagementException;
|
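These SLEE-defined operations can be combined in a JMX client to inspect a service's SBBs, as in the sketch below (an established connection and the standard DeploymentMBean ObjectName are assumed):

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.SbbID;
import javax.slee.ServiceID;
import javax.slee.management.ComponentDescriptor;
import javax.slee.management.DeploymentMBean;

public class ServiceSbbsExample {
    // List the SBBs of a service and print each SBB's component descriptor,
    // which includes information such as required resource adaptor entity link bindings.
    public static void listSbbs(MBeanServerConnection conn, ServiceID service) throws Exception {
        DeploymentMBean deployment = JMX.newMBeanProxy(
                conn, new ObjectName(DeploymentMBean.OBJECT_NAME), DeploymentMBean.class);
        SbbID[] sbbs = deployment.getSbbs(service);
        ComponentDescriptor[] descriptors = deployment.getDescriptors(sbbs);
        for (ComponentDescriptor descriptor : descriptors) {
            System.out.println(descriptor);
        }
    }
}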
Configuring service metrics recording status
To check and configure the status for recording service metrics, use the following rhino-console commands or related MBean operations.
Details of the available service metrics are listed in Metrics.Services.cmp and Metrics.Services.lifecycle.
Metrics recording is disabled by default for performance reasons. |
When using the pool clustering mode, like all configuration state, the status for recording service metrics is configured separately for each pool cluster node by invoking the relevant management operations on the node where the configuration needs to be queried or changed. |
Console commands
getservicemetricsrecordingenabled
Command |
getservicemetricsrecordingenabled <service-id> Description Determine if metrics recording for a service has been enabled |
---|---|
Example |
To check the status for recording metrics: $ ./rhino-console getservicemetricsrecordingenabled name=service1,vendor=OpenCloud,version=1.0 Metrics recording for ServiceID[name=service1,vendor=OpenCloud,version=1.0] is currently disabled |
setservicemetricsrecordingenabled
Command |
setservicemetricsrecordingenabled <service-id> <true|false> Description Enable or disable the recording of metrics for a service |
---|---|
Example |
To enable the recording of metrics: $ ./rhino-console setservicemetricsrecordingenabled name=service1,vendor=OpenCloud,version=1.0 true Metrics recording for ServiceID[name=service1,vendor=OpenCloud,version=1.0] has been enabled |
MBean operations: getServiceMetricsRecordingEnabled
and setServiceMetricsRecordingEnabled
MBean |
|
---|---|
Rhino extension |
Determine if the recording of metrics for a service is currently enabled or disabled.
public boolean getServiceMetricsRecordingEnabled(ServiceID service) throws NullPointerException, UnrecognizedServiceException, ManagementException;
Enable or disable the recording of metrics for a service.
public void setServiceMetricsRecordingEnabled(ServiceID service, boolean enabled) throws NullPointerException, UnrecognizedServiceException, ManagementException; |
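The sketch below shows one way to drive these Rhino extension operations from a JMX client by operation name. It assumes an established connection and that the operations are exposed by the MBean registered under the SLEE 1.1 ServiceManagementMBean.OBJECT_NAME; adjust the ObjectName if your installation differs.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.ServiceID;
import javax.slee.management.ServiceManagementMBean;

public class ServiceMetricsExample {
    // Enable metrics recording for a service if it is currently disabled.
    public static void enableMetrics(MBeanServerConnection conn, ServiceID service) throws Exception {
        ObjectName name = new ObjectName(ServiceManagementMBean.OBJECT_NAME);
        boolean enabled = (Boolean) conn.invoke(name, "getServiceMetricsRecordingEnabled",
                new Object[] { service }, new String[] { "javax.slee.ServiceID" });
        if (!enabled) {
            conn.invoke(name, "setServiceMetricsRecordingEnabled",
                    new Object[] { service, Boolean.TRUE },
                    new String[] { "javax.slee.ServiceID", "boolean" });
        }
    }
}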
Configuring service replication
The default replication behaviour of a service is defined by the service in its deployment descriptor, but may be overridden by an administrator after the service has been installed into the SLEE.
Default replication behaviour
Default replication behaviour is specified by a service in its oc-service.xml
extension service deployment descriptor. The service can specify the conditions under which the application state of the service will be replicated by using the following replication selectors:
-
Savanna — Service replication will occur if the namespace the service is installed in replicates application state over the traditional Savanna framework.
-
KeyValueStore — Service replication will occur if the namespace the service is installed in utilises a key/value store to persist application state.
-
Always — The service will always be replicated regardless of any underlying replication mechanism.
Zero or more replication selectors can be specified by the service. If any condition for replication is matched at deployment time then the service application state will be replicated. If not, no replication will take place for that service.
Configuring replication behaviour
The default replication selectors specified by a service can be changed by an administrator after the service is installed, but before it is deployed, using the following rhino-console commands or related MBean operations.
Console commands
getservicereplicationselectors
Command |
getservicereplicationselectors <service-id> Description Get the replication selectors for a service |
---|---|
Example |
To check the current replication selectors for a service: $ ./rhino-console getservicereplicationselectors name=service1,vendor=OpenCloud,version=1.0 Service ServiceID[name=service1,vendor=OpenCloud,version=1.0] current replication selectors are: [KEYVALUESTORE] |
setservicereplicationselectors
Command |
setservicereplicationselectors <service-id> -none|selector* Description Set the replication selectors for a service, valid selectors are: [ALWAYS, SAVANNA, KEYVALUESTORE] |
---|---|
Example |
To change the replication selectors for a service: $ ./rhino-console setservicereplicationselectors name=service1,vendor=OpenCloud,version=1.0 SAVANNA KEYVALUESTORE Service ServiceID[name=service1,vendor=OpenCloud,version=1.0] replication selectors set to [SAVANNA, KEYVALUESTORE] |
MBean operations: getReplicationSelectors
and setReplicationSelectors
MBean |
|
---|---|
Rhino extension |
Get the current replication selectors for a service.
public ReplicationSelector[] getReplicationSelectors(ServiceID id) throws NullPointerException, UnrecognizedServiceException, ManagementException;
Set the replication selectors for a service.
public void setReplicationSelectors(ServiceID id, ReplicationSelector[] selectors) throws NullPointerException, UnrecognizedServiceException, InvalidStateException, ManagementException; |
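For illustration, the current selectors can be read from a JMX client as sketched below. This assumes an established connection, that the operation is exposed by the MBean registered under the SLEE 1.1 ServiceManagementMBean.OBJECT_NAME, and that the Rhino client library is on the classpath so the Rhino-specific ReplicationSelector values can be deserialised.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.ServiceID;
import javax.slee.management.ServiceManagementMBean;

public class ReplicationSelectorsExample {
    // Print the replication selectors currently configured for a service.
    public static void printSelectors(MBeanServerConnection conn, ServiceID service) throws Exception {
        ObjectName name = new ObjectName(ServiceManagementMBean.OBJECT_NAME);
        Object[] selectors = (Object[]) conn.invoke(name, "getReplicationSelectors",
                new Object[] { service }, new String[] { "javax.slee.ServiceID" });
        for (Object selector : selectors) {
            System.out.println(selector);
        }
    }
}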
Resource Adaptor Entities
As well as an overview of resource adaptor entities, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:
Procedure | rhino-console command | MBean → Operation |
---|---|---|
listraconfigproperties |
Resource Management → |
|
createraentity |
Resource Management → |
|
removeraentity |
Resource Management → |
|
listraentityconfigproperties |
Resource Management → |
|
updateraentityconfigproperties |
Resource Management → |
|
activateraentity |
Resource Management → |
|
deactivateraentity |
Resource Management → |
|
reassignactivities |
Resource Management → |
|
getraentityactualstate, getraentitydesiredstate |
Resource Management → |
|
listraentitiesbystate |
Resource Management → |
|
bindralinkname |
Resource Management → |
|
unbindralinkname |
Resource Management → |
|
listralinknames |
Resource Management → |
About Resource Adaptor Entities
Resource adaptors (RAs) are SLEE components which let particular network protocols or APIs be used in the SLEE.
They typically include a set of configurable properties (such as address information of network endpoints, URLs to external systems, or internal timer-timeout values). These properties may include default values. A resource adaptor entity is a particular configured instance of a resource adaptor, with defined values for all of that RA’s configuration properties.
The resource adaptor entity lifecycle
The SLEE specification presents the operational lifecycle of a resource adaptor entity — illustrated, defined, and summarised below.
Resource adaptor entity lifecycle states
The SLEE lifecycle states are:
State | Definition |
---|---|
INACTIVE |
The resource adaptor entity has been configured and initialised. It is ready to be activated, but may not yet create activities or fire events to the SLEE. Typically, it is not connected to network resources. |
ACTIVE |
The resource adaptor entity is connected to the resources it needs to function (assuming they are available), and may create activities and fire events to the SLEE. |
STOPPING |
The resource adaptor entity may not create new activities in the SLEE, but may fire events to the SLEE on already existing activities. A resource adaptor entity transitions out of the STOPPING state, returning to the INACTIVE state, when all activities it owns have either ended or been assigned to another node for continued processing. |
Creating activities in the STOPPING state
By default, Rhino 3.2 prevents a resource adaptor from creating an activity in the STOPPING state. This behaviour is controlled by the When set to The default value in earlier versions of Rhino was |
Independent lifecycle state machines
As explained in About SLEE Operational States, each event-router node in a Rhino cluster maintains its own lifecycle state machine, independent of other nodes in the cluster. This is also true for each resource adaptor entity: one resource adaptor entity might be INACTIVE on one node in a cluster, ACTIVE on another, and STOPPING on a third. The operational state of a resource adaptor entity on each cluster node also persists to the disk-based database.
A resource adaptor entity will enter the INACTIVE state, after node bootup and initialisation completes, if the database’s persistent operational state information for that resource adaptor entity is missing, or is set to INACTIVE or STOPPING.
And, like node operational states, when using the Savanna clustering mode, you can change the operational state of a resource adaptor entity at any time, as long as at least one node in the cluster is available to perform the management operation (regardless of whether or not the node whose operational state is being changed is a current cluster member). For example, you might activate a resource adaptor entity on node 103 before node 103 is booted — then, when node 103 boots, and after it completes initialisation, that resource adaptor entity will transition to the ACTIVE state. When using the pool clustering mode, you can only change the state of resource adaptor entities on the pool cluster node that the management operation is invoked on. To change the state of a resource adaptor entity on any other node, a management client needs to connect directly to that node.
Finding RA Configuration Properties
To determine resource adaptor configuration properties (which you need to know when Creating a Resource Adaptor Entity) use the following rhino-console command or related MBean operation.
Console command: listraconfigproperties
Command |
listraconfigproperties <resource-adaptor-id> Description List the configuration properties (and any default values) for a resource adaptor |
---|---|
Example |
To list the configuration properties of the Metaswitch SIP Resource Adaptor: $ ./rhino-console listraconfigproperties name=OCSIP,vendor=OpenCloud,version=2.1 Configuration properties for resource adaptor name=OCSIP,vendor=OpenCloud,version=2.1: Automatic100TryingSupport (java.lang.Boolean): true CRLLoadFailureRetryTimeout (java.lang.Integer): 900 CRLNoCRLLoadFailureRetryTimeout (java.lang.Integer): 60 CRLRefreshTimeout (java.lang.Integer): 86400 CRLURL (java.lang.String): ... |
MBean operation: getConfigurationProperties
MBean |
|
---|---|
SLEE-defined |
public ConfigProperties getConfigurationProperties(ResourceAdaptorID id) throws NullPointerException, UnrecognizedResourceAdaptorException, ManagementException |
Output |
This operation returns a |
Creating a Resource Adaptor Entity
To create a resource adaptor entity use the following rhino-console command or related MBean operation.
Console command: createraentity
Command |
createraentity <resource-adaptor-id> <entity-name> [<config-params>|(<property-name> <property-value>)*] Description Create a resource adaptor entity with the given name. Optionally configuration properties can be specified, either as a single comma-separated string of name=value pairs, or as a series of separate name and value argument pairs
||
---|---|---|---|
Example |
To create an instance of the Metaswitch SIP resource adaptor, called $ ./rhino-console createraentity name=OCSIP,vendor=OpenCloud,version=2.1 sipra \ IPAddress=192.168.0.100,Port=5160,SecurePort=5161 Created resource adaptor entity sipra |
||
Notes |
Entering configuration properties
When creating a resource adaptor entity, determine its configuration properties and then enter them in
|
MBean operation: createResourceAdaptorEntity
MBean |
|||||
---|---|---|---|---|---|
SLEE-defined |
public void createResourceAdaptorEntity(ResourceAdaptorID id, String entityName, ConfigProperties properties) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorException, ResourceAdaptorEntityAlreadyExistsException, InvalidConfigurationException, ManagementException; |
||||
Arguments |
This operation requires that you specify the resource adaptor entity’s:
|
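The SLEE-defined operation can be invoked from a JMX client as in the following sketch, which mirrors the console example above (the resource adaptor identifier, entity name and property values are illustrative; an established connection and the standard ResourceManagementMBean ObjectName are assumed):

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.management.ResourceManagementMBean;
import javax.slee.resource.ConfigProperties;
import javax.slee.resource.ResourceAdaptorID;

public class CreateRaEntityExample {
    // Create a resource adaptor entity named "sipra" with explicit configuration properties.
    public static void createSipRaEntity(MBeanServerConnection conn) throws Exception {
        ResourceManagementMBean resourceManagement = JMX.newMBeanProxy(
                conn, new ObjectName(ResourceManagementMBean.OBJECT_NAME), ResourceManagementMBean.class);
        ResourceAdaptorID ra = new ResourceAdaptorID("OCSIP", "OpenCloud", "2.1");
        ConfigProperties properties = new ConfigProperties();
        // Property types must match those reported by listraconfigproperties.
        properties.addProperty(new ConfigProperties.Property("IPAddress", "java.lang.String", "192.168.0.100"));
        properties.addProperty(new ConfigProperties.Property("Port", "java.lang.Integer", Integer.valueOf(5160)));
        resourceManagement.createResourceAdaptorEntity(ra, "sipra", properties);
    }
}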
Removing a Resource Adaptor Entity
To remove a resource adaptor entity use the following rhino-console command or related MBean operation.
You can only remove a resource adaptor entity from the SLEE when it is in the INACTIVE state on all event-router nodes currently in the primary component. |
Console command: removeraentity
Command |
removeraentity <entity-name> Description Remove a resource adaptor entity |
---|---|
Example |
To remove the resource adaptor entity named $ ./rhino-console removeraentity sipra Removed resource adaptor entity sipra |
MBean operation: removeResourceAdaptorEntity
MBean |
|
---|---|
SLEE-defined |
public void removeResourceAdaptorEntity(String entityName) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, DependencyException, ManagementException; |
Listing configuration properties for a Resource Adaptor Entity
To list the configuration properties for a resource adaptor entity use the following rhino-console command or related MBean operation.
Console command: listraentityconfigproperties
Command |
listraentityconfigproperties <entity-name> Description List the configuration property values for a resource adaptor entity |
---|---|
Example |
To list the resource adaptor entity called $ ./rhino-console listraentityconfigproperties sipra Configuration properties for resource adaptor entity sipra: Automatic100TryingSupport (java.lang.Boolean): true AutomaticOptionsResponses (java.lang.Boolean): true CRLLoadFailureRetryTimeout (java.lang.Integer): 900 CRLNoCRLLoadFailureRetryTimeout (java.lang.Integer): 60 CRLRefreshTimeout (java.lang.Integer): 86400 CRLURL (java.lang.String): ClientAuthentication (java.lang.String): NEED EnableDialogActivityTests (java.lang.Boolean): false EnabledCipherSuites (java.lang.String): ExtensionMethods (java.lang.String): IPAddress (java.lang.String): AUTO Keystore (java.lang.String): sip-ra-ssl.keystore KeystorePassword (java.lang.String): KeystoreType (java.lang.String): jks MaxContentLength (java.lang.Integer): 131072 OffsetPorts (java.lang.Boolean): false Port (java.lang.Integer): 5060 PortOffset (java.lang.Integer): 101 ReplicatedDialogSupport (java.lang.Boolean): false RetryAfterInterval (java.lang.Integer): 5 SecurePort (java.lang.Integer): 5061 TCPIOThreads (java.lang.Integer): 1 Transports (java.lang.String): udp,tcp Truststore (java.lang.String): sip-ra-ssl.truststore TruststorePassword (java.lang.String): TruststoreType (java.lang.String): jks UseVirtualAddressInURIs (java.lang.Boolean): true ViaSentByAddress (java.lang.String): VirtualAddresses (java.lang.String): WorkerPoolSize (java.lang.Integer): 4 WorkerQueueSize (java.lang.Integer): 50 slee-vendor:com.opencloud.rhino_max_activities (java.lang.Integer): 0 slee-vendor:com.opencloud.rhino_replicate_activities (java.lang.String): mixed |
MBean operation: getConfigurationProperties
MBean |
|
---|---|
SLEE-defined |
public ConfigProperties getConfigurationProperties(String entityName) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException; |
Output |
This operation returns a |
Updating configuration properties for a Resource Adaptor Entity
To update configuration properties for a resource adaptor entity use the following rhino-console command or related MBean operation.
When is it appropriate to update configuration properties?
A resource adaptor may elect to support reconfiguration when its resource adaptor entities are active using the If the value of the If the value of the |
Console command: updateraentityconfigproperties
Command |
updateraentityconfigproperties <entity-name> [<config-params>|(<property-name> <property-value>)*] Description Update configuration properties for a resource adaptor entity. Properties can be specified either as a single comma-separated string of name=value pairs or as a series of separate name and value argument pairs |
---|---|
Example |
To update the $ ./rhino-console updateraentityconfigproperties sipra Port 5061 SecurePort 5062 Updated configuration parameters for resource adaptor entity sipra |
MBean operation: updateConfigurationProperties
MBean |
|
---|---|
SLEE-defined |
public void updateConfigurationProperties(String entityName, ConfigProperties properties) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, InvalidConfigurationException, ManagementException; |
Input |
This operation requires a |
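The following is a minimal JMX sketch of the same update shown in the console example above, assuming an established connection and the standard ResourceManagementMBean ObjectName; only the properties being changed need to be supplied.

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.management.ResourceManagementMBean;
import javax.slee.resource.ConfigProperties;

public class UpdateRaEntityExample {
    // Update the Port and SecurePort properties of the "sipra" resource adaptor entity.
    public static void updatePorts(MBeanServerConnection conn) throws Exception {
        ResourceManagementMBean resourceManagement = JMX.newMBeanProxy(
                conn, new ObjectName(ResourceManagementMBean.OBJECT_NAME), ResourceManagementMBean.class);
        ConfigProperties properties = new ConfigProperties();
        properties.addProperty(new ConfigProperties.Property("Port", "java.lang.Integer", Integer.valueOf(5061)));
        properties.addProperty(new ConfigProperties.Property("SecurePort", "java.lang.Integer", Integer.valueOf(5062)));
        resourceManagement.updateConfigurationProperties("sipra", properties);
    }
}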
Activating a Resource Adaptor Entity
To activate a resource adaptor entity on one or more nodes use the following rhino-console command or related MBean operations.
When using the pool clustering mode, it is only possible to change the state of a resource adaptor entity on the node the management operation is invoked on. To change the state of a resource adaptor entity on another node, a management client needs to connect directly to that node. |
If executed without a list of nodes, all per-node desired state for the resource adaptor entity is removed and the default desired state of the resource adaptor entity is set to active (if it is not already). |
Console command: activateraentity
Command |
activateraentity <entity-name> [-nodes node1,node2,...] [-ifneeded] Description Activate a resource adaptor entity (on the specified nodes) |
---|---|
Example |
To activate the resource adaptor entity called $ ./rhino-console activateraentity sipra -nodes 101,102 Activating resource adaptor entity sipra on node(s) [101,102] Resource adaptor entity transitioned to the Active state on node 101 Resource adaptor entity transitioned to the Active state on node 102 |
MBean operation: setPerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on specific nodes
public void setPerNodeDesiredState(String entityName, int[] nodeIDs, ResourceAdaptorEntityDesiredState desiredState) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, ManagementException; Rhino provides an extension to set the desired state for a resource adaptor entity on a set of nodes. |
MBean operation: setDefaultDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on nodes that do not have per-node state configured for the specified resource adaptor entity
public void setDefaultDesiredState(String entityName, ResourceAdaptorEntityDesiredState desiredState) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, ManagementException; Rhino provides an extension to set the desired state for a resource adaptor entity on nodes that do not have a per-node desired state configured. |
MBean operation: removePerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on nodes that have per-node state configured that is different from the default state
public void removePerNodeDesiredState(String entityName, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, ManagementException; Rhino provides an extension to clear the desired state for a resource adaptor entity on a set of nodes. Nodes that do not have a per-node desired state configured use the default desired state. |
MBean operation: activateResourceAdaptorEntity
MBean |
|
---|---|
SLEE-defined |
Activate on all nodes
public void activateResourceAdaptorEntity(String entityName) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, ManagementException; Rhino’s implementation of the SLEE-defined |
Rhino extension |
Activate on specific nodes
public void activateResourceAdaptorEntity(String entityName, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, ManagementException; Rhino provides an extension that adds an argument that lets you control the nodes on which to activate the resource adaptor entity (by specifying node IDs). For this to work, the resource adaptor entity must be in the INACTIVE state on the specified nodes. |
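A short JMX sketch of the SLEE-defined activation operation follows (the entity name "sipra" is taken from the console example; an established connection and the standard ResourceManagementMBean ObjectName are assumed). The node-targeted variant is a Rhino extension that is not part of the standard interface and is not shown here.

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.management.ResourceManagementMBean;

public class ActivateRaEntityExample {
    // Activate the "sipra" resource adaptor entity using the SLEE-defined operation.
    public static void activateSipRa(MBeanServerConnection conn) throws Exception {
        ResourceManagementMBean resourceManagement = JMX.newMBeanProxy(
                conn, new ObjectName(ResourceManagementMBean.OBJECT_NAME), ResourceManagementMBean.class);
        resourceManagement.activateResourceAdaptorEntity("sipra");
    }
}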
Deactivating a Resource Adaptor Entity
To deactivate a resource adaptor entity on one or more nodes use the following rhino-console command or related MBean operation.
When using the pool clustering mode, it is only possible to change the state of a resource adaptor entity on the node the management operation is invoked on. To change the state of a resource adaptor entity on another node, a management client needs to connect directly to that node. |
If executed without a list of nodes, all per-node desired state for the resource adaptor entity is removed and the default desired state of the resource adaptor entity is set to inactive (if it is not already). |
See also Reassigning a resource adaptor entity’s Activities to Other Nodes, particularly the Requirements tab. |
Console command: deactivateraentity
Command |
deactivateraentity <entity-name> [-nodes node1,node2,... [-reassignto node3,node4,...]] [-ifneeded] Description Deactivate a resource adaptor entity (on the specified nodes, optionally reassigning replicated activities to the specified reassignment nodes) |
||
---|---|---|---|
Examples |
To deactivate the resource adaptor entity named sipra on nodes 101 and 102: $ ./rhino-console deactivateraentity sipra -nodes 101,102 Deactivating resource adaptor entity sipra on node(s) [101,102] Resource adaptor entity transitioned to the Stopping state on node 101 Resource adaptor entity transitioned to the Stopping state on node 102 To deactivate the resource adaptor entity named sipra on node 101, reassigning its replicated activities to node 102: $ ./rhino-console deactivateraentity sipra -nodes 101 -reassignto 102 Deactivating resource adaptor entity sipra on node(s) [101] Resource adaptor entity transitioned to the Stopping state on node 101 Replicated activities reassigned to node(s) [102]
To deactivate the resource adaptor entity named sipra on node 101, reassigning its replicated activities to all other eligible nodes: $ ./rhino-console deactivateraentity sipra -nodes 101 -reassignto "" Deactivating resource adaptor entity sipra on node(s) [101] Resource adaptor entity transitioned to the Stopping state on node 101 Replicated activities reassigned to node(s) [102,103] |
MBean operation: setPerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on specific nodes
public void setPerNodeDesiredState(String entityName, int[] nodeIDs, ResourceAdaptorEntityDesiredState desiredState) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, ManagementException; Rhino provides an extension to set the desired state for a resource adaptor entity on a set of nodes. |
MBean operation: setDefaultDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on nodes that do not have per-node state configured for the specified resource adaptor entity
public void setDefaultDesiredState(String entityName, ResourceAdaptorEntityDesiredState desiredState) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, ManagementException; Rhino provides an extension to set the desired state for a resource adaptor entity on nodes that do not have a per-node desired state configured. |
MBean operation: removePerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Activate or deactivate on nodes that have per-node state configured that is different from the default state
public void removePerNodeDesiredState(String entityName, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, ManagementException; Rhino provides an extension to clear the desired state for a resource adaptor entity on a set of nodes. Nodes that do not have a per-node desired state configured use the default desired state. |
MBean operation: deactivateResourceAdaptorEntity
MBean |
|
---|---|
SLEE-defined |
Deactivate on all nodes
public void deactivateResourceAdaptorEntity(String entityName) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, ManagementException; Rhino’s implementation of the SLEE-defined |
Rhino extensions |
Deactivate on specific nodes
public void deactivateResourceAdaptorEntity(String entityName, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, ManagementException; Rhino provides an extension that adds an argument that lets you control the nodes on which to deactivate the resource adaptor entity (by specifying node IDs). For this to work, the resource adaptor entity must be in the ACTIVE state on the specified nodes.
Reassign deactivating activities to other nodes
public void deactivateResourceAdaptorEntity(String entityName, int[] nodeIDs, int[] reassignActivitiesToNodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, ManagementException; Rhino also provides an extension that adds another argument, that lets you reassign ownership of replicated activities (from a replicating resource adaptor entity), distributing them equally among other available event-router nodes. This reduces the set of activities on the nodes with the deactivating resource adaptor entity, so the resource adaptor entity can return to the INACTIVE state on those nodes quicker. This only works for resource adaptor entities that are replicating activity state (see the description of the "Rhino-defined configuration property" for the MBean on Creating a Resource Adaptor Entity). In addition, this only works when using the Savanna clustering mode. Activity reassignment using this operation is not supported when using the pool clustering mode. |
Reassigning a Resource Adaptor Entity’s Activities to Other Nodes
To reassign activities from a resource adaptor entity to a different node, use the following rhino-console command or related MBean operation, noting the requirements.
Why reassign replicating activities?
A resource adaptor entity in the STOPPING state cannot return to the INACTIVE state until all the activities that it owns have ended. You can let a deactivating resource adaptor entity return to the INACTIVE state quicker by reassigning its replicating activities to other eligible nodes. |
When using the pool clustering mode, it is not possible to reassign activities from one pool node to another using this operation. |
Console command: reassignactivities
Command |
reassignactivities <entity-name> -from node1,node2,... -to node3,node4,... Description Reassign replicated activities of a resource adaptor entity from the specified nodes to other nodes |
||
---|---|---|---|
Examples |
To reassign activities owned by the resource adaptor entity named sipra from node 101 to nodes 102 and 103: $ ./rhino-console reassignactivities sipra -from 101 -to 102,103 Replicated activities for sipra reassigned to node(s) [102,103]
To reassign activities owned by the resource adaptor entity named sipra from node 101 to all other eligible nodes: $ ./rhino-console reassignactivities sipra -from 101 -to "" Replicated activities for sipra reassigned to node(s) [102,103] |
MBean operation: reassignActivities
MBean |
|
---|---|
Rhino extension |
public void reassignActivities(String entityName, int[] fromNodeIDs, int[] toNodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, InvalidStateException, ManagementException; This operation reassigns replicated activities owned by the named resource adaptor entity, on the nodes specified, using Rhino’s standard failover algorithm, to the nodes specified by the |
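As a rough illustration of driving this Rhino extension from a management script, the sketch below uses the generic MBeanServerConnection.invoke form (so no Rhino-specific interface classes are needed) against the Resource Management MBean located via the SLEE management MBean. The entity name and node numbers follow the console examples above; this is a sketch under those assumptions, not a definitive client.

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.management.SleeManagementMBean;

public class ReassignActivitiesSketch {
    // Reassign sipra's replicated activities from node 101 to nodes 102 and 103.
    public static void reassign(MBeanServerConnection mbsc) throws Exception {
        SleeManagementMBean slee = JMX.newMBeanProxy(
                mbsc, new ObjectName(SleeManagementMBean.OBJECT_NAME), SleeManagementMBean.class);
        ObjectName resourceManagement = slee.getResourceManagementMBean();

        // reassignActivities is a Rhino extension, invoked here via the generic JMX form
        mbsc.invoke(resourceManagement, "reassignActivities",
                new Object[] { "sipra", new int[] { 101 }, new int[] { 102, 103 } },
                new String[] { String.class.getName(),
                               int[].class.getName(), int[].class.getName() });
    }
}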
Requirements for reassigning activities
You can only reassign replicated activities from a resource adaptor entity to other nodes if all of the following conditions are satisfied:
-
The Savanna clustering mode is being used.
-
The node is a current member of the primary component.
-
The node is an event-router node (not a quorum node).
-
The operational state of the SLEE on the node is RUNNING or STOPPING.
-
The operational state of the resource adaptor entity on the node is ACTIVE or STOPPING.
Further, a node can only take ownership of replicated activities if it satisfies all the following conditions:
-
The Savanna clustering mode is being used.
-
The node is a current member of the primary component.
-
The node is an event-router node (not a quorum node).
-
The operational state of the SLEE on the node is RUNNING.
-
The operational state of the resource adaptor entity on the node is ACTIVE.
Also, non-replicated activities cannot be reassigned to other nodes, and a resource adaptor entity must end any non-replicated activities it created itself.
You can choose to forcefully remove activities if a resource adaptor entity fails to end them in a timely manner. |
Retrieving a Resource Adaptor Entity’s State
Retrieving actual state
To retrieve the actual operational state of a Resource Adaptor Entity, use the following rhino-console command or related MBean operation. For an explanation of the terms "actual state" and "desired state", see Concepts and Terminology.
Console command: getraentityactualstate
Command |
getraentityactualstate <entity-name> <-all|-nodes node1,node2,...> Description Get the actual resource adaptor entity state for the specified nodes. If -all is specified, query the state of all current event router cluster members. |
---|---|
Output |
The |
Examples |
To display the actual state of the Resource Adaptor Entity sipra on node 101: $ ./rhino-console getraentityactualstate sipra -nodes 101 Getting actual Resource Adaptor Entity state for node(s) [101] Node 101: Stopped To display the actual state of the Resource Adaptor Entity sipra on all current cluster members: $ ./rhino-console getraentityactualstate sipra -all Getting actual Resource Adaptor Entity state for node(s) [101,102] Node 101: Stopped Node 102: Running |
MBean operation: getActualState
MBean |
|
---|---|
Rhino extension |
Return actual state of a set of nodes
public ResourceAdaptorEntityActualState getActualState(String entityName, int[] nodeIDs) throws ManagementException; |
Retrieving desired state
To retrieve the desired operational state of a Resource Adaptor Entity, use the following rhino-console command or related MBean operation.
Console command: getraentitydesiredstate
Command |
getraentitydesiredstate <entity-name> <-default|-all|-nodes node1,node2,...> Description Get the default or per-node desired resource adaptor entity state. If -all is specified, query the state of all current event router nodes as well as all nodes with saved per-node state. |
---|---|
Output |
The |
Examples |
To display the desired state of only node 101: $ ./rhino-console getraentitydesiredstate -nodes 101 Node 101: Stopped To display the desired state of the Resource Adaptor Entity $ ./rhino-console getraentitydesiredstate -all Node 101: Stopped Node 102: Running (default) Node 103: Running To display the default desired state that unconfigured event router nodes will inherit: $ ./rhino-console getraentitydesiredstate -default Getting default Resource state Default Resource state is: running |
MBean operation: getPerNodeDesiredState
MBean |
|
---|---|
Rhino extension |
Return the desired state of a set of nodes
public ResourceAdaptorEntityDesiredState getPerNodeDesiredState(String entityName, int[] nodeIDs) throws ManagementException; |
MBean operation: getDefaultDesiredState
MBean |
|
---|---|
Rhino extension |
Return the default desired state used by nodes that do not have a configured per-node state
public ResourceAdaptorEntityDesiredState getDefaultDesiredState(String entityName) throws ManagementException; |
Retrieving SLEE-defined state
To retrieve the operational state of a Resource Adaptor Entity in a form compatible with the JAIN SLEE specification, use the following rhino-console command or related MBean operation.
Console command: getraentitystate
Command |
getraentitystate <entity-name> [-nodes node1,node2,...] Description Get the state of a resource adaptor entity (on the specified nodes) |
---|---|
Output |
The |
Examples |
To display the state of the resource adaptor entity with the name sipra on all nodes: $ ./rhino-console getraentitystate sipra Resource Adaptor Entity is Inactive on node 101 Resource Adaptor Entity is Active on node 102 To display the state of the Resource Adaptor Entity on only node 101: $ ./rhino-console getraentitystate sipra -nodes 101 Resource Adaptor Entity is Inactive on node 101 |
MBean operation: getState
MBean |
|||
---|---|---|---|
SLEE-defined |
Return state of Resource Adaptor Entity on current node
public ResourceAdaptorEntityState getState(String entityName) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException; Rhino’s implementation of the SLEE-defined
|
||
Rhino extension |
Return state of Resource Adaptor Entity on specified node(s)
public ResourceAdaptorEntityState[] getState(String entityName, int[] nodeIDs) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, ManagementException; Rhino provides an extension that adds an argument which lets you control the nodes on which to return the state of the Resource Adaptor Entity (by specifying node IDs). |
Listing Resource Adaptor Entities by State
To list resource adaptor entities in a particular operational state, use the following rhino-console command or related MBean operation.
Console command: listraentitiesbystate
Command |
listraentitiesbystate <state> [-node node] Description List the resource adaptor entities that are in the specified state (on the specified node) |
---|---|
Examples |
To list the resource adaptor entities in the Active state on the node where the command is executed: $ ./rhino-console listraentitiesbystate Active No resource adaptor entities in Active state on node 101 To list the resource adaptor entities that are active on node 102: $ ./rhino-console listraentitiesbystate Active -node 102 Resource adaptor entities in Active state on node 102: sipra |
MBean operation: getResourceAdaptorEntities
MBean |
|
---|---|
SLEE-defined |
Return names of resource adaptor entities in specified state on current node
public String[] getResourceAdaptorEntities(ResourceAdaptorEntityState state) throws NullPointerException, ManagementException; Rhino’s implementation of the SLEE-defined |
Rhino extension |
Return names of resource adaptor entities in specified state on specified node
public String[] getResourceAdaptorEntities(ResourceAdaptorEntityState state, int nodeID) throws NullPointerException, InvalidArgumentException, ManagementException; Rhino provides an extension that lets you specify the nodes (by specifying node IDs) on which to return the names of resource adaptor entities in the specified state. |
Managing Resource Adaptor Entity Link Bindings
What are resource adaptor entity link name bindings?
When an SBB needs access to a resource adaptor entity, it uses JNDI to get references to Java objects that implement the resource adaptor interface (provided by the resource adaptor entity). The SBB declares (in its deployment descriptor) the resource adaptor type it expects, and an arbitrary link name. Before activating a service using the SBB, an administrator must bind a resource adaptor entity (of the type expected) to the specified link name. |
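As a rough sketch of how an SBB resolves such a binding at runtime, the snippet below performs the JNDI lookup from the component environment. The JNDI name slee/resources/sip and the returned type are illustrative only; both are determined by the resource-adaptor-entity-binding declared in the SBB's deployment descriptor.

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class LinkNameLookupSketch {
    // Inside an SBB: look up the resource adaptor interface object bound
    // under the link name declared in the SBB's deployment descriptor.
    public Object lookupResourceAdaptorInterface() throws NamingException {
        Context env = (Context) new InitialContext().lookup("java:comp/env");
        return env.lookup("slee/resources/sip");   // cast to the declared RA interface type
    }
}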
Rhino includes procedures for:
Binding a Resource Adaptor Entity to a Link Name
To bind a resource adaptor entity to a link name, use the following rhino-console command or related MBean operation.
Only one resource adaptor entity can be bound to a link name at any time. |
Console command: bindralinkname
Command |
bindralinkname <entity-name> <link-name> Description Bind a resource adaptor entity to a link name |
---|---|
Example |
To bind the resource adaptor entity with the name sipra to the link name sip: $ ./rhino-console bindralinkname sipra sip Bound sipra to link name sip |
MBean operation: bindLinkName
MBean |
|
---|---|
SLEE-defined |
public void bindLinkName(String entityName, String linkName) throws NullPointerException, InvalidArgumentException, UnrecognizedResourceAdaptorEntityException, LinkNameAlreadyBoundException, ManagementException; |
Unbinding Link Names
To unbind a resource adaptor entity from a link name, use the following rhino-console command or related MBean operation.
Console command: unbindralinkname
Command |
unbindralinkname <link-name> Description Unbind a resource adaptor entity from a link name |
---|---|
Example |
To unbind the link name sip: $ ./rhino-console unbindralinkname sip Unbound link name sip |
MBean operation: unbindLinkName
MBean |
|
---|---|
SLEE-defined |
public void unbindLinkName(String linkName) throws NullPointerException, UnrecognizedLinkNameException, DependencyException,ManagementException; |
Listing Link Name Bindings
To list resource adaptor entity link names that have been bound in the SLEE, use the following rhino-console command or related MBean operation.
Console command: listralinknames
Command |
listralinknames [entity name] Description List the bound link names (for the specified resource adaptor entity) |
---|---|
Examples |
To list all resource adaptor entity link name bindings: $ ./rhino-console listralinknames slee/resources/cdr -> cdrra slee/resources/map -> mapra To list all link name bindings for the resource adaptor entity named mapra: $ ./rhino-console listralinknames mapra slee/resources/map |
MBean operation: getLinkNames
MBean |
|
---|---|
SLEE-defined |
List all bound link names
public String[] getLinkNames() throws ManagementException; Rhino’s implementation of the SLEE-defined
List link names to which a specific resource adaptor entity has been bound
public String[] getLinkNames(String entityName) throws NullPointerException, UnrecognizedResourceAdaptorEntityException, ManagementException; The SLEE-defined operation also includes an argument for returning just link names to which a specified resource adaptor entity has been bound. If the resource adaptor entity has not been bound to any link names, the returned array is zero-length. |
Profile Tables and Profiles
As well as an overview of SLEE profiles, this section includes instructions for performing the following Rhino SLEE procedures with explanations, examples and links to related javadocs:
Procedure | rhino-console command(s) | MBean(s) → Operation |
---|---|---|
createprofiletable |
Profile Provisioning → |
|
createprofile |
Profile Provisioning → |
|
listprofiletables |
Profile Provisioning → |
|
listprofiles |
Profile Provisioning → |
|
listprofileattributes |
Profile Provisioning, Profile → |
|
setprofileattributes |
Profile Provisioning, Profile → |
|
listprofilesbyattribute + listprofilesbyindexedattribute |
Profile Provisioning → |
|
exportall + importprofiles |
Profile Provisioning → |
About Profiles
What are profiles? profile tables? profile specifications?
A profile is an entry in a profile table. It has a name, may have values (called "attributes") and may have indexed fields. It’s like a row in SQL, but may also include business and management logic. A profile table is a "container" for profiles. Its specification schema, the profile specification deployment descriptor, may define queries for the profile table. The SLEE specification defines the format and structure of profile specification schemas. A profile table’s default profile is the initial set of profile attribute values for newly created profiles within that table (if not specified explicitly with the profile-creation command). |
Before deploying a profile specification into the SLEE, an administrator can configure it. You do this by modifying values in the profile specification's profile-spec-jar.xml
deployment descriptor (in its deployable unit). For example, you can specify:
-
static queries available to SLEE components, and administrators using the management interface
-
profile specification environment entries
-
indexing hints for profile attributes.
For more on profile static queries, environment entries and indexing, see the SLEE 1.1 specification. |
Creating Profile Tables
To create a new profile table based on an already-deployed profile specification, use the following rhino-console command or related MBean operation.
Name character restriction
The profile table name cannot include the |
Console command: createprofiletable
Command |
createprofiletable <profile-spec-id> <table-name> Description Create a profile table |
---|---|
Example |
$ ./rhino-console createprofiletable name=AddressProfileSpec,vendor=javax.slee,version=1.1 testtable Created profile table testtable |
MBean operation: createProfileTable
MBean |
|
---|---|
SLEE-defined |
public void createProfileTable(javax.slee.profile.ProfileSpecificationID id, String newProfileTableName) throws NullPointerException, UnrecognizedProfileSpecificationException, InvalidArgumentException, ProfileTableAlreadyExistsException, ManagementException; |
Arguments |
This operation requires that you specify the profile table’s:
|
Creating Profiles
To create a profile in an existing profile table, use the following rhino-console command or related MBean operation.
Console command createprofile
Command |
createprofile <table-name> <profile-name> (<attr-name> <attr-value>)* Description Add a profile to a table, optionally setting attributes (see Setting Profile Attributes) |
---|---|
Example |
$ ./rhino-console createprofile testtable testprofile Profile testtable/testprofile created |
Notes |
Setting profile attributes
When creating a profile, you can optionally set attribute values by supplying them as <attr-name> <attr-value> pairs after the profile name.
White space, commas, quotes
If a profile or profile table name or an attribute name or value contains white space or a comma, you must quote the string. For example: $ ./rhino-console createprofile "testtable 2" "testprofile 2" SubscriberAddress "my address" forwarding true If the value requires quotes, you must escape them using a backslash "\" (to avoid them being removed by the parser). For example: $ ./rhino-console createprofile testtable testprofile attrib "\"The quick brown fox\""
Name uniqueness
The profile name must be unique within the scope of the profile table. |
MBean operation: createProfile
MBean |
|
---|---|
SLEE-defined |
public javax.management.ObjectName createProfile(String profileTableName, String newProfileName) throws NullPointerException, UnrecognizedProfileTableNameException, InvalidArgumentException, ProfileAlreadyExistsException, ManagementException; |
Arguments |
This operation requires that you specify the profile’s:
|
Notes |
Profile MBean commit state
This operation returns an ObjectName for a Profile MBean in the read-write state; the new profile is not added to the profile table until commitProfile is invoked on that MBean.
Name uniqueness
The profile name must be unique within the scope of the profile table. |
Listing Profile Tables
To list all profile tables in a SLEE, use the following rhino-console command or related MBean operation.
Console command: listprofiletables
Command |
listprofiletables Description List the current created profile tables |
---|---|
Example |
$ ./rhino-console listprofiletables callbarring callforwarding |
MBean operation: getProfileTables
MBean |
|
---|---|
SLEE-defined |
public Collection getProfileTables() throws ManagementException; |
Listing Profiles
To list all profiles of a specific profile table, use the following rhino-console command or related MBean operation.
Console command: listprofiles
Command |
listprofiles <table-name> Description List the profiles in a table |
---|---|
Example |
$ ./rhino-console listprofiles testtable testprofile |
MBean operation: getProfiles
MBean |
|
---|---|
SLEE-defined |
public Collection getProfiles(String profileTableName) throws NullPointerException, UnrecognizedProfileTableNameException, ManagementException; |
Arguments |
This operation requires that you specify the profile table’s:
|
Listing Profile Attributes
To list a profile’s attributes (names and current values), use the following rhino-console command or related MBean operation.
Console command: listprofileattributes
Command |
listprofileattributes <table-name> [profile-name] Description List the current values of a profile, or if no profile is specified the current values of the default profile are listed |
---|---|
Example |
$ ./rhino-console listprofileattributes testtable testprofile Address={null} |
MBean operation: getProfile
MBean |
|||
---|---|---|---|
SLEE-defined |
public javax.management.ObjectName getProfile(String profileTableName,String profileName) throws NullPointerException, UnrecognizedProfileTableNameException, UnrecognizedProfileNameException, ManagementException; |
||
Arguments |
This operation requires that you specify the profile table’s:
|
||
Notes |
Profile MBean state
This operation returns an ObjectName for a Profile MBean, which is initially in the read-only state.
|
Setting Profile Attributes
To set a profile’s attribute values, use the following rhino-console command or related MBean operation.
Console command: setprofileattributes
Command |
setprofileattributes <table-name> <profile-name> (<attr-name> <attr-value>)* Description Set the current values of a profile (use "" for default profile). The implementation supports only a limited set of attribute types that it can convert from strings to objects |
---|---|
Example |
$ ./rhino-console setprofileattributes testtable testprofile Address IP:192.168.0.1 Set attributes in profile testtable/testprofile |
Notes |
White space, commas, quotes
If a profile or profile table name or an attribute name or value contains white space or a comma, you must quote the string. For example: $ ./rhino-console setprofileattributes "testtable 2" "testprofile 2" SubscriberAddress "my address" forwarding true If the value requires quotes, you must escape them using a backslash "\" (to avoid them being removed by the parser). For example: $ ./rhino-console setprofileattributes testtable testprofile attrib "\"The quick brown fox\"" |
MBean operation: getProfile
MBean |
|||
---|---|---|---|
SLEE-defined |
public javax.management.ObjectName getProfile(String profileTableName,String profileName) throws NullPointerException, UnrecognizedProfileTableNameException, UnrecognizedProfileNameException, ManagementException; |
||
Arguments |
This operation requires that you specify the profile table’s:
|
||
Notes |
Profile MBean state
This operation returns an ObjectName for a Profile MBean, which is initially in the read-only state. To put the MBean into the read-write state, invoke its editProfile operation.
|
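For administrators setting attributes programmatically rather than through rhino-console, the following is a minimal sketch using the SLEE-defined Profile Provisioning and Profile MBeans. It assumes an existing MBeanServerConnection; the table, profile and attribute values follow the console example above, and error handling is omitted.

import javax.management.Attribute;
import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.Address;
import javax.slee.AddressPlan;
import javax.slee.management.ProfileProvisioningMBean;
import javax.slee.management.SleeManagementMBean;
import javax.slee.profile.ProfileMBean;

public class SetProfileAttributeSketch {
    public static void setAddress(MBeanServerConnection mbsc) throws Exception {
        SleeManagementMBean slee = JMX.newMBeanProxy(
                mbsc, new ObjectName(SleeManagementMBean.OBJECT_NAME), SleeManagementMBean.class);
        ProfileProvisioningMBean provisioning = JMX.newMBeanProxy(
                mbsc, slee.getProfileProvisioningMBean(), ProfileProvisioningMBean.class);

        // getProfile returns the ObjectName of a Profile MBean in the read-only state
        ObjectName profileObjectName = provisioning.getProfile("testtable", "testprofile");
        ProfileMBean profile = JMX.newMBeanProxy(mbsc, profileObjectName, ProfileMBean.class);

        profile.editProfile();    // switch the Profile MBean to the read-write state
        mbsc.setAttribute(profileObjectName,
                new Attribute("Address", new Address(AddressPlan.IP, "192.168.0.1")));
        profile.commitProfile();  // commit the attribute change
        profile.closeProfile();   // release the Profile MBean
    }
}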
Finding Profiles by Attribute Value
To retrieve all profiles with a specific attribute value, use the following rhino-console commands or related MBean operations:
Console command: listprofilesbyattribute
Command |
listprofilesbyattribute <table-name> <attr-name> <attr-value> [display-attributes (true/false)] Description List the profiles which have an attribute <attr-name> equal to <attr-value>. The implementation supports only a limited set of attribute types that it can convert from strings to objects |
---|---|
Example |
$ ./rhino-console listprofilesbyattribute testtable Address IP:192.168.0.1 1 profiles returned ProfileID[table=testtable,profile=testprofile] |
Notes |
SLEE 1.1- & SLEE 1.0-specific commands
Between SLEE 1.0 and SLEE 1.1, the underlying profile specification schema changed significantly. According to the SLEE 1.1 Specification, profile attributes no longer have to be indexed to be legally used by a find-by-attribute-value query. Therefore, the listprofilesbyattribute command applies to SLEE 1.1 profile tables, while listprofilesbyindexedattribute applies only to SLEE 1.0 profile tables, where attributes must be indexed to be queried.
Backwards compatibility
SLEE 1.1 demands backwards compatibility for SLEE 1.0-compliant profiles, which means a SLEE 1.0-compliant profile specification can be deployed into the SLEE; and profile tables and profiles can be successfully created and managed. |
Console command: listprofilesbyindexedattribute
Command |
listprofilesbyindexedattribute <table-name> <attr-name> <attr-value> [display-attributes (true/false)] Description List the profiles which have an indexed attribute <attr-name> equal to <attr-value>. The implementation supports only a limited set of attribute types that it can convert from strings to objects |
---|---|
Example |
$ ./rhino-console listprofilesbyindexedattribute testtable indexedAttrib someValue 1 profiles returned ProfileID[table=testtable,profile=testprofile] |
MBean operation: getProfilesByAttribute
MBean |
|
---|---|
SLEE-defined |
public Collection getProfilesByAttribute(String profileTableName, String attributeName, Object attributeValue) throws NullPointerException, UnrecognizedProfileTableNameException, UnrecognizedAttributeException, InvalidArgumentException, AttributeTypeMismatchException, ManagementException; |
Arguments |
This operation requires that you specify the:
|
Notes |
SLEE 1.1- & SLEE 1.0-specific commands
Between SLEE 1.0 and SLEE 1.1, the underlying profile specification schema changed significantly. According to the SLEE 1.1 Specification, profile attributes no longer have to be indexed to be legally used by a find-by-attribute-value query. Therefore, the getProfilesByAttribute operation applies to SLEE 1.1 profile tables, while getProfilesByIndexedAttribute applies only to SLEE 1.0 profile tables, where attributes must be indexed to be queried.
Backwards compatibility
SLEE 1.1 demands backwards compatibility for SLEE 1.0-compliant profiles, which means a SLEE 1.0-compliant profile specification can be deployed into the SLEE; and profile tables and profiles can be successfully created and managed. |
MBean operation: getProfilesByIndexedAttribute
MBean |
|
---|---|
SLEE-defined |
public Collection getProfilesByIndexedAttribute(String profileTableName, String attributeName, Object attributeValue) throws NullPointerException, UnrecognizedProfileTableNameException, UnrecognizedAttributeException, AttributeNotIndexedException, AttributeTypeMismatchException, ManagementException; |
Arguments |
This operation requires that you specify the:
|
Finding Profiles Using Static Queries
To retrieve all profiles that match a static query (pre-defined in a profile table's profile specification schema), use the following MBean operation.
The Rhino SLEE does not use a rhino-console command for this function. |
MBean operation: getProfilesByStaticQuery
MBean |
|
---|---|
SLEE-defined |
public Collection getProfilesByStaticQuery(String profileTableName, String queryName, Object[] parameters) throws NullPointerException, UnrecognizedProfileTableNameException, UnrecognizedQueryNameException, InvalidArgumentException, AttributeTypeMismatchException, ManagementException; |
Arguments |
This operation requires that you specify the:
|
For more about static query methods, please see chapter 10.8.2 "Static query methods" in the SLEE 1.1 specification. |
Finding Profiles Using Dynamic Queries
To retrieve all profiles that match a dynamic query (an expression the administrator constructs at runtime), use the following MBean operation.
The Rhino SLEE does not use a rhino-console command for this function. |
MBean operation: getProfilesByDynamicQuery
MBean |
|
---|---|
SLEE-defined |
public Collection getProfilesByDynamicQuery(String profileTableName, QueryExpression expr) throws NullPointerException, UnrecognizedProfileTableNameException, UnrecognizedAttributeException, AttributeTypeMismatchException, ManagementException; |
Arguments |
This operation requires that you specify the:
|
For more about dynamic query methods, please see chapter 10.20.3 "Dynamic Profile queries" in the SLEE 1.1 specification. |
Exporting Profiles
To export SLEE profiles, use the following rhino-console command or related MBean operation.
Console command: exportall
The Rhino command console currently does not have a command specific to profile exports. Instead you use a more general export function, which (apart from SLEE profiles) also exports deployable units for services and RAs currently installed in the SLEE. |
Command |
exportall <zip|directory> Description Export the internal state of the SLEE including deployable units, profile tables, and other component state as an imperative-style configuration export. Uses JMX to export profiles. Use of the standalone rhino-export utility is encouraged for deployments involving large profile sets. |
---|---|
Example |
$ ./rhino-console exportall /home/userXY/myexport Exporting file:jars/incc-callbarring-service.jar... Exporting file:jars/incc-callforwarding-service.jar... Taking snapshot for callforwarding Saving callforwarding.jar (183kb) Streaming profile table 'callforwarding' snapshot to callforwarding.data (2 entries) [################################################################################] 2/2 entries Taking snapshot for callbarring Saving callbarring.jar (177kb) Streaming profile table 'callbarring' snapshot to callbarring.data (2 entries) [################################################################################] 2/2 entries Extracted 4 of 4 entries (157 bytes) Snapshot timestamp 2008-05-07 15:17:42.325 (1210130262325) Critical region time : 0.002 s Request preparation time : 0.053 s Data extraction time : 0.302 s Total time : 0.355 s Converting 2 profile table snapshots... Converting callforwarding... bean class=class com.opencloud.deployed.Profile_Table_2.ProfileOCBB_Bean [###########################################################################] converted 2 of 2 [###########################################################################] converted 2 of 2 Converted 2 records Converting callbarring... bean class=class com.opencloud.deployed.Profile_Table_1.ProfileOCBB_Bean [###########################################################################] converted 2 of 2 [###########################################################################] converted 2 of 2 Converted 2 records Export complete |
Exported profile files
After the export, you will find the exported profiles as .xml files in the profiles subdirectory of the export directory.
Exporting "snapshots"
See also Profile Snapshots, to export profile snapshots in binary format and convert them into XML.
Exporting a SLEE
See also Exporting a SLEE, to export all deployed components and configuration of a Rhino SLEE. |
MBean operation: exportProfiles
MBean |
|||
---|---|---|---|
Rhino extension |
com.opencloud.rhino.management.profile.ProfileDataCollection exportProfiles(String profileTableName, String[] profileNames) throws NullPointerException, UnrecognizedProfileTableNameException, ManagementException; |
||
Arguments |
This operation requires that you specify the profile table’s:
|
Importing Profiles
To import SLEE profiles, use the following rhino-console command or related MBean operation.
Console command: importprofiles
Use the importprofiles command to import profile data from an xml file that has previously been created (for example, using the exportall command). |
Command |
importprofiles <filename.xml> [-table table-name] [-replace] [-max profiles-per-transaction] [-noverify] Description Import profiles from xml data |
---|---|
Example |
$ ./rhino-console exportall /home/userXY/myexport ... ./rhino-console importprofiles /home/userXY/myexport/profiles/testtable.xml Importing profiles into profile table: testtable 2 profile(s) processed: 1 created, 0 replaced, 0 removed, 1 skipped |
Notes |
Referenced profile table must exist
For the profile import to run successfully, the profile table the xml data refers to must exist before invoking the importprofiles command. |
MBean operation: importProfiles
MBean |
|||
---|---|---|---|
Rhino extension |
com.opencloud.rhino.management.profile.ProfileImportResult importProfiles(com.opencloud.rhino.management.profile.ProfileDataCollection profileData) throws NullPointerException, UnrecognizedProfileTableNameException, InvalidArgumentException, ProfileAlreadyExistsException, UnrecognizedProfileNameException, ManagementException; |
||
Arguments |
This operation requires that you specify the profile table’s:
|
Alarms
As well as an overview and list of alarms, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples and links to related javadocs.
Procedure | rhino-console command | MBean → Operations |
---|---|---|
listactivealarms |
Alarm → |
|
clearalarm |
Alarm → |
|
clearalarms |
Alarm → |
|
setalarmlogperiod getalarmlogperiod |
Logging Management → |
|
createthresholdrule removethresholdrule |
Threshold Rule Management → |
|
listthresholdrules |
Threshold Rule Management → |
|
|
getconfig exportconfig importconfig |
Threshold Rule → |
activatethresholdrule |
Threshold Rule → |
|
getthresholdrulescanperiod setthresholdrulescanperiod |
Threshold Rule Management → |
About Alarms
Alarms in Rhino alert the SLEE administrator to exceptional conditions.
Application components in the SLEE raise them, as does Rhino itself (upon detecting an error condition). Rhino clears some alarms automatically when the error conditions are resolved. The SLEE administrator must clear others manually.
When an alarm is raised or cleared, Rhino generates a JMX notification from the Alarm MBean
. Management clients may attach a notification listener to the Alarm MBean, to receive alarm notifications. Rhino logs all alarm notifications.
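A minimal sketch of such a listener is shown below, assuming a management client with an existing MBeanServerConnection; the Alarm MBean's object name is obtained from the SLEE management MBean, and only a couple of the notification's fields are printed.

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.Notification;
import javax.management.NotificationListener;
import javax.management.ObjectName;
import javax.slee.management.AlarmNotification;
import javax.slee.management.SleeManagementMBean;

public class AlarmListenerSketch {
    public static void listen(MBeanServerConnection mbsc) throws Exception {
        SleeManagementMBean slee = JMX.newMBeanProxy(
                mbsc, new ObjectName(SleeManagementMBean.OBJECT_NAME), SleeManagementMBean.class);

        NotificationListener listener = new NotificationListener() {
            public void handleNotification(Notification notification, Object handback) {
                if (notification instanceof AlarmNotification) {
                    AlarmNotification alarm = (AlarmNotification) notification;
                    System.out.println("Alarm [" + alarm.getAlarmType() + "]: "
                            + alarm.getMessage());
                }
            }
        };
        // Receive a notification each time an alarm is raised or cleared
        mbsc.addNotificationListener(slee.getAlarmMBean(), listener, null, null);
    }
}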
What’s new in SLEE 1.1?
While only SBBs could generate alarms in SLEE 1.0, other types of application components can also generate alarms in SLEE 1.1.
In SLEE 1.1, alarms are stateful — between being raised and cleared, an alarm persists in the SLEE, where an administrator may examine it. (In SLEE 1.0, alarms could be generated with a severity level that indicated a cleared alarm, but the fact that an error condition had occurred did not persist in the SLEE beyond the initial alarm generation.)
Configuring alarm log period
To set and get the interval between periodic active alarm logs, use the following rhino-console commands or related MBean operations.
Rhino periodically logs active alarms and the default interval is 60 seconds.
When using the pool clustering mode, like all configuration state, the alarm logging period is configured separately for each pool cluster node by invoking the relevant management operations on the node where the configuration needs to be queried or changed. Changing the alarm logging period will only affect the node the management operation is invoked on. |
setalarmlogperiod
Command |
setalarmlogperiod <seconds> Description Sets the interval between periodic active alarm logs. Required Arguments seconds The interval between periodic alarm logs. Setting to 0 will disable logging of periodic alarms. |
---|---|
Example |
To set log period to 30 seconds: $ ./rhino-console setalarmlogperiod 30 Active alarm logging period set to 30 seconds. |
getalarmlogperiod
Command |
getalarmlogperiod Description Returns the interval between periodic active alarm logs. |
---|---|
Example |
To get alarm log period: $ ./rhino-console getalarmlogperiod Active alarm logging period is currently 30 seconds. |
MBean operations: setAlarmLogPeriod
MBean |
|
---|---|
SLEE-defined |
Set the interval between periodic active alarm logs
public void setAlarmLogPeriod(int period) throws IllegalArgumentException, ConfigurationException; Sets the interval between periodic active alarm logs. Setting the period to 0 will disable periodic alarm logging.
Get the interval between periodic active alarm logs
public int getAlarmLogPeriod() throws ConfigurationException; Returns the interval between periodic active alarm logs. |
Viewing Active Alarms
To view active alarms, use the following rhino-console command or related MBean operation.
When using the pool clustering mode, it is only possible to view the alarms that have been raised on the node the management operation is invoked on. To view alarms raised on a different node, a management client needs to connect directly to that node. |
Console command: listactivealarms
Command |
listactivealarms [<type> <notif-source>] [-stack] Description List the alarms currently active in the SLEE (for a specific notification if provided). Use -stack to display stack traces for alarm cause exceptions. |
---|---|
Example |
To list all active alarms in the SLEE: $ ./rhino-console listactivealarms 1 alarm: Alarm 101:193215480667648 [diameter.peer.connectiondown] Level : Warning InstanceID : diameter.peer.hss-instance Source : (RA Entity) sh-cache-ra Timestamp : 20161019 14:02:58 (active 15m 30s) Message : Connection to hss-instance:3868 is down The number value on the first line "101:193215480667648" is the alarm identifier. The value in the square brackets "diameter.peer.connectiondown" is the alarm type. |
MBean operations: getAlarms
and getDescriptors
MBean |
|
---|---|
SLEE-defined |
Get identifiers of all active alarms in the SLEE
public String[] getAlarms() throws ManagementException; Rhino’s implementation of the SLEE-defined
Get identifiers of active alarms raised by a specific notification source
public String[] getAlarms(NotificationSource notificationSource) throws NullPointerException, UnrecognizedNotificationSourceException, ManagementException; This variant of
Get alarm descriptor for an alarm identifier
public Alarm[] getDescriptors(String[] alarmIDs) throws NullPointerException, ManagementException; This operation returns the alarm descriptors for the specified alarm identifiers. |
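Putting the two operations together, the sketch below lists the active alarms over JMX, assuming an existing MBeanServerConnection; the getter names used on the returned descriptors follow the javax.slee.management.Alarm class.

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.management.Alarm;
import javax.slee.management.AlarmMBean;
import javax.slee.management.SleeManagementMBean;

public class ListActiveAlarmsSketch {
    public static void printActiveAlarms(MBeanServerConnection mbsc) throws Exception {
        SleeManagementMBean slee = JMX.newMBeanProxy(
                mbsc, new ObjectName(SleeManagementMBean.OBJECT_NAME), SleeManagementMBean.class);
        AlarmMBean alarms = JMX.newMBeanProxy(mbsc, slee.getAlarmMBean(), AlarmMBean.class);

        String[] alarmIDs = alarms.getAlarms();               // identifiers of all active alarms
        for (Alarm alarm : alarms.getDescriptors(alarmIDs)) { // descriptors for those identifiers
            System.out.println(alarm.getAlarmID() + " [" + alarm.getAlarmType() + "] "
                    + alarm.getMessage());
        }
    }
}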
Clear Individual Alarms
To clear an alarm using its alarm identifier, use the following rhino-console command or related MBean operation.
When using the pool clustering mode, it is only possible to clear alarms raised on the same node that the management operation is invoked on. |
Console command: clearalarm
console command
Command |
clearalarm <alarmid> Description Clear the specified alarm. |
---|---|
Example |
To clear the alarm with the identifier 101:102916243593:1: $ ./rhino-console clearalarm 101:102916243593:1 Alarm 101:102916243593:1 cleared |
MBean operation: clearAlarm
MBean |
|
---|---|
SLEE-defined |
public boolean clearAlarm(String alarmID) throws NullPointerException, ManagementException; Rhino’s implementation of the SLEE-defined |
Clear Alarms Raised by a Particular Notification Source
To clear alarms raised by a particular notification source, use the following rhino-console command or related MBean operation.
When using the pool clustering mode, it is only possible to clear alarms raised on the same node that the management operation is invoked on. |
Console command: clearalarms
Command |
clearalarms <type> <notification-source> [<alarm type>] Description Clear all alarms raised by the notification source (of the specified alarm type). This command clears all alarms of the specified alarm type (or all alarms if no alarm type is specified) that have been raised by the specified notification source. |
---|---|
Example |
To clear all alarms raised by a resource adaptor entity named insis-cap: $ ./rhino-console clearalarms resourceadaptorentity insis-cap 2 alarms cleared To clear only "noconnection" alarms raised by the resource adaptor entity named insis-cap: $ ./rhino-console clearalarms resourceadaptorentity insis-cap noconnection 1 alarm cleared |
MBean operation: clearAlarms
MBean |
|
---|---|
SLEE-defined |
Clear all active alarms raised by a notification source
public int clearAlarms(NotificationSource notificationSource) throws NullPointerException, UnrecognizedNotificationSourceException, ManagementException Rhino’s implementation of the SLEE-defined
Clear active alarms of a specified type raised by a notification source
public int clearAlarms(NotificationSource notificationSource, String alarmType) throws NullPointerException, UnrecognizedNotificationSourceException, ManagementException; This variant of |
Threshold Alarms
To supplement standard alarms (which Rhino and installed components raise), an administrator may configure custom alarms (which Rhino will raise or clear automatically based on SLEE statistics).
These are known as threshold alarms, and you manage them using the Threshold Rule Management MBean
.
When using the pool clustering mode, like all configuration state, threshold alarms are node-specific and must be configured separately for each individual pool cluster node. |
Threshold rules
You configure the threshold rules governing each threshold alarm using a Threshold Rule MBean
.
Each threshold rule consists of:
-
a unique name identifying the rule
-
one or more trigger conditions
-
an alarm level, type and message text
-
and optionally:
-
one or more reset conditions
-
how long (in milliseconds) the trigger conditions must remain before Rhino raises the alarm
-
how long (in milliseconds) the reset conditions must remain before Rhino clears the alarm.
-
You can combine condition sets using either an AND or an OR operator. (AND means all conditions must be satisfied, whereas OR means any one of the conditions may be satisfied — to raise or clear the alarm.) |
Parameter sets
Threshold rules use the same parameter sets as the statistics client. You can discover them either by using the statistics client graphically or by using its command-line mode from a command shell as shown below.
$ ./rhino-stats -l The following root parameter sets are available for monitoring: Activities, ActivityHandler, ByteArrayBuffers, CGIN, DatabaseQuery, Diameter, EndpointLimiting, EventRouter, Events, HTTP, JDBCDatasource, JVM, LicenseAccounting, Limiters, LockManagers, MemDB-Local, MemDB-Replicated, MemDB-Timestamp, Metrics, ObjectPools, SIP, SIS-SIP, SLEE-Usage, Services, StagingThreads, StagingThreads-Misc, TimerFacility, TransactionManager For parameter set type descriptions and a list of available parameter sets use the -l <root parameter set name> option
$ ./rhino-stats -l JVM Connecting to localhost:1199 Parameter Set: JVM Parameter Set Type: JVM Description: JVM Statistics Counter type statistics: Id: Name: Label: Description: 0 heapUsed husd Used heap memory 1 heapCommitted hcomm Committed heap memory 2 heapInitial hinit Initial heap memory 3 heapMaximum hmax Maximum heap memory 4 nonHeapUsed nhusd Used non-heap memory 5 nonHeapCommitted nhcomm Committed non-heap memory 6 nonHeapInitial nhinit Initial non-heap memory 7 nonHeapMaximum nhmax Maximum non-heap memory 8 classesCurrentLoaded cLoad Number of classes currently loaded 9 classesTotalLoaded cTotLoad Total number of classes loaded since JVM start 10 classesTotalUnloaded cTotUnload Total number of classes unloaded since JVM start Sample type statistics: (none defined) Found 1 parameter sets under 'JVM': -> "JVM"
How Rhino evaluates threshold rules
Rhino periodically evaluates the trigger conditions of each configured rule. When a trigger condition is satisfied and its trigger period has been met or exceeded, Rhino raises the corresponding alarm. If the rule has reset conditions, Rhino evaluates those too, and when the reset condition is satisfied and the reset trigger period has been met or exceeded, clears the alarm. If the rule does not have reset conditions, an administrator must clear the alarm manually.
You can configure the frequency of threshold alarm rule evaluation, using the Threshold Rule Management MBean
. An administrator can specify a polling frequency in milliseconds, or enter 0
to disable rule evaluation. The Rhino default is 0
(which must be changed to enable threshold-rule evaluation). Ideal polling frequency is highly dependent on the nature of alarms configured.
Simple and relative rule conditions
There are two types of threshold rule conditions, explained in the tables below.
Simple rule conditions
What it compares | Operators for comparison | Conditions | Example |
---|---|---|---|
The value of a counter-type Rhino statistic against a constant value. |
>, >=, <, <=, ==, != |
The constant value to compare against may be any floating-point number. The condition can either compare against the absolute value of the statistic (suitable for gauge-type statistics), or against the observed difference between successive samples (suitable for pure counter-type statistics). |
A condition that selects the statistic |
Relative rule conditions
What it compares | Operators for comparison | Conditions | Example |
---|---|---|---|
The ratio between two monitored statistics against a constant value. |
>, >=, <, <=, ==, != |
The constant value to compare against may be any floating-point number. |
A condition that selects the statistics |
For definitions of counter, gauge and sample type statistics, see About Rhino Statistics. |
See also:
|
Creating and Removing Rules
To create or remove a threshold-alarm rule, use the following rhino-console commands or related MBean operations.
Console command: createthresholdrule
Command |
createthresholdrule <name> Description Create a threshold alarm rule |
---|---|
Example |
To create a rule named "low memory": $ ./rhino-console createthresholdrule "low memory" Threshold rule low memory created |
MBean operation: createRule
MBean |
|
---|---|
Rhino operation |
public ObjectName createRule(String ruleName) throws ConfigurationException, ValidationException; This operation creates a rule with the name given, and returns the JMX object name of a Threshold Rule MBean that can be used to configure the rule. |
Console command: removethresholdrule
Command |
removethresholdrule <name> Description Remove a threshold alarm rule |
---|---|
Example |
To remove a rule named "low memory": $ ./rhino-console removethresholdrule "low memory" Threshold rule low memory removed |
MBean operation: removeRule
MBean |
|
---|---|
Rhino operation |
public void removeRule(String ruleName) throws ConfigurationException, ValidationException; This operation removes the rule with the name given. |
Listing Rules
To list all threshold alarm rules, use the following rhino-console command or related MBean operation.
Console command: listthresholdrules
Command |
listthresholdrules Description List threshold alarm rules |
---|---|
Example |
To list all threshold alarm rules, with their activation states: $ ./rhino-console listthresholdrules Current threshold rules: low memory (active) low disk (inactive) testrule (inactive) |
MBean operation: getRules
MBean |
|
---|---|
Rhino operation |
public String[] getRules() throws ConfigurationException; |
Configuring Rules
To configure a threshold alarm rule:
-
use the following rhino-console commands to view available rules, export a rule to an XML file, edit the rule file, and then re-import the edited file into the SLEE
-
or use Threshold Rule MBean operations.
View rules
To view a current threshold alarm rule, use the getconfig
console command:
Command |
getconfig [-namespace] <configuration type> [configuration key] Description Extract and display content of a container configuration key. The optional -namespace argument must be used to get the config of a namespace-specific key. If no key is specified the configs of all keys of the given type are shown |
---|---|
Example |
To display the threshold alarm rule named "rhino-memory-usage-over-80": $ ./rhino-console getconfig threshold-rules "rule/rhino-memory-usage-over-80" <?xml version="1.0" encoding="UTF-8" standalone="no"?> <!DOCTYPE rhino-threshold-rules-config PUBLIC "-//Open Cloud Ltd.//DTD Rhino Threshold Rules Config 2.6//EN" "rhino-threshold-rules-config-2.6.dtd"> <rhino-threshold-rules-config config-version="2.6" rhino-version="Rhino (version='3.2', release='8', build='xxx', revision='xxx')" timestamp="xxx"> <!--Generated Rhino configuration file: xxxx-xx-xx xx:xx:xx.xxx--> <threshold-rules active="true" name="rhino-memory-usage-over-80"> <trigger-conditions name="Trigger conditions" operator="OR" period="5000"> <relative-threshold operator=">" value="0.8"> <first-statistic calculate-delta="false" parameter-set="JVM" statistic="heapUsed"/> <second-statistic calculate-delta="false" parameter-set="JVM" statistic="heapCommitted"/> </relative-threshold> </trigger-conditions> <reset-conditions name="Reset conditions" operator="OR" period="0"/> <trigger-actions> <raise-alarm-action level="Critical" message="Memory Heap used over 80%" type="MEMORY"/> </trigger-actions> <reset-actions/> </threshold-rules> </rhino-threshold-rules-config> |
Export rules
To save a threshold rule configuration to a file for editing, use the exportconfig
console command:
Command |
exportconfig [-namespace] <configuration type> [configuration key] <filename> Description Extract content of a container configuration key and save it to a file. The optional -namespace argument must be used to export the config of a namespace-specific key |
---|---|
Example |
To export the threshold alarm rule named "rhino-memory-usage-over-80" to the file rule_rhino-memory-usage-over-80.xml: $ ./rhino-console exportconfig threshold-rules "rule/rhino-memory-usage-over-80" rule_rhino-memory-usage-over-80.xml Export threshold-rules: (rule/rhino-memory-usage-over-80 to rule_rhino-memory-usage-over-80.xml Wrote rule_rhino-memory-usage-over-80.xml |
The structure of the exported data in the XML file is identical to that displayed by the getconfig command. |
Edit rules
You can modify a rule using a text editor. In the following example, a reset condition has been added to the rule previously exported, so that the alarm raised will automatically clear when heap memory utilisation falls below 80% for a continuous 30s period. (Previously the reset-conditions
element in this rule had no conditions.)
<?xml version="1.0" encoding="UTF-8" standalone="no"?> <!DOCTYPE rhino-threshold-rules-config PUBLIC "-//Open Cloud Ltd.//DTD Rhino Threshold Rules Config 2.6//EN" "rhino-threshold-rules-config-2.6.dtd"> <rhino-threshold-rules-config config-version="2.6" rhino-version="Rhino (version='3.2', release='8', build='xxx', revision='xxx')" timestamp="xxx"> <!--Generated Rhino configuration file: xxxx-xx-xx xx:xx:xx.xxx--> <threshold-rules active="true" name="rhino-memory-usage-over-80"> <trigger-conditions name="Trigger conditions" operator="OR" period="1000"> <relative-threshold operator=">" value="0.8"> <first-statistic calculate-delta="false" parameter-set="JVM" statistic="heapUsed"/> <second-statistic calculate-delta="false" parameter-set="JVM" statistic="heapCommitted"/> </relative-threshold> </trigger-conditions> <reset-conditions name="Reset conditions" operator="OR" period="30000"> <relative-threshold operator="<" value="0.8"> <first-statistic calculate-delta="false" parameter-set="JVM" statistic="heapUsed"/> <second-statistic calculate-delta="false" parameter-set="JVM" statistic="heapCommitted"/> </relative-threshold> </reset-conditions> <trigger-actions> <raise-alarm-action level="Critical" message="Memory Heap used over 80%" type="MEMORY"/> </trigger-actions> <reset-actions> <clear-raised-alarm-action/> </reset-actions> </threshold-rules> </rhino-threshold-rules-config>
Import rules
To import the modified threshold alarm rule file, use the importconfig
console command:
Command |
importconfig [-namespace] <configuration type> <filename> [-replace] Description Import a container configuration key. The optional -namespace argument must be used to import a config for a namespace-specific key |
---|---|
Example |
To import the threshold alarm rule from the file rule_rhino-memory-usage-over-80.xml: $ ./rhino-console importconfig threshold-rules rule_rhino-memory-usage-over-80.xml -replace Configuration successfully imported. |
The -replace option is required when importing a rule with the same name as an existing rule, as there can be only one rule configuration with a given name present at any one time. |
Threshold Rule MBean Operations
To configure a threshold alarm rule, use the following MBean operations (defined on the Threshold Rule MBean
interface), for:
-
adding, removing and getting trigger conditions, and getting and setting their operators and periods
-
adding, removing and getting reset conditions, and getting and setting their operators and periods
-
setting the alarm
-
getting an alarm’s level, type, and message.
See also Configuring Rules. |
Trigger conditions
To add, remove and get threshold alarm trigger conditions, and get and set their operators and periods, use the following MBean operations:
Operations | Usage |
---|---|
|
To add a trigger condition to the rule:
public void addTriggerCondition(String parameterSetName, String statistic, String operator, double value) throws ConfigurationException, UnknownStatsParameterSetException, UnrecognizedStatisticException, ValidationException; public void addTriggerCondition(String parameterSetName1, String statistic1, String parameterSetName2, String statistic2, String operator, double value) throws ConfigurationException, UnknownStatsParameterSetException, UnrecognizedStatisticException, ValidationException; The first operation adds a simple trigger condition to the rule. The second operation adds a relative condition between two parameter set statistics (see Simple and relative rule conditions).
To get the current trigger conditions:
public String[] getTriggerConditions() throws ConfigurationException;
To remove a trigger condition:
public void removeTriggerCondition(String key) throws ConfigurationException, ValidationException;
To get or set the trigger condition operator:
public String getTriggerConditionsOperator() throws ConfigurationException; public void setTriggerConditionsOperator(String operator) throws ConfigurationException, ValidationException; The operator must be one of the logical operators AND or OR.
To get or set the trigger condition period:
public long getTriggerPeriod() throws ConfigurationException; public void setTriggerPeriod(long period) throws ConfigurationException, ValidationException; The trigger period is measured in milliseconds. If it is |
Reset conditions
To add, remove and get threshold alarm reset conditions, and get and set their operators and periods, use the following MBean operations:
Operations | Usage |
---|---|
|
To add a reset condition to the rule:
public void addResetCondition(String parameterSetName, String statistic, String operator, double value) throws ConfigurationException, UnknownStatsParameterSetException, UnrecognizedStatisticException, ValidationException; public void addResetCondition(String parameterSetName1, String statistic1, String parameterSetName2, String statistic2, String operator, double value) throws ConfigurationException, UnknownStatsParameterSetException, UnrecognizedStatisticException, ValidationException; The first operation adds a simple reset condition to the rule. The second operation adds a relative condition between two parameter set statistics (see Simple and relative rule conditions).
To get the current reset conditions:
public String[] getResetConditions() throws ConfigurationException;
To remove a reset condition:
public void removeResetCondition(String key) throws ConfigurationException, ValidationException;
To get or set the reset condition operator:
public String getResetConditionsOperator() throws ConfigurationException; public void setResetConditionsOperator(String operator) throws ConfigurationException, ValidationException; The operator must be one of the logical operators AND or OR.
To get or set the reset condition period:
public long getResetPeriod() throws ConfigurationException; public void setResetPeriod(long period) throws ConfigurationException, ValidationException; The reset period is measured in milliseconds. If it is |
Setting alarms
To set the alarm to be raised by a threshold rule, use the following MBean operation:
Operations | Usage |
---|---|
public void setAlarm(AlarmLevel level, String type, String message) throws ConfigurationException, ValidationException; The alarm level may be any level other than CLEAR. |
Getting alarm information
To get a threshold alarm’s level, type, and message, use the following MBean operations:
Operations | Usage |
---|---|
public AlarmLevel getAlarmLevel() throws ConfigurationException; public String getAlarmType() throws ConfigurationException; public String getAlarmMessage() throws ConfigurationException; |
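A corresponding sketch for configuring the alarm raised by a rule follows, with the same assumptions about the JMX connection and the rule MBean's object name; the alarm type string and message are purely illustrative.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.facilities.AlarmLevel;

public class ConfigureRuleAlarmSketch {
    public static void configureAlarm(MBeanServerConnection connection, ObjectName ruleMBean) throws Exception {
        // Set the level, type and message of the alarm the rule will raise when triggered.
        connection.invoke(ruleMBean, "setAlarm",
                new Object[] { AlarmLevel.MAJOR, "example.memory.low", "Free memory is low" },
                new String[] { AlarmLevel.class.getName(), String.class.getName(), String.class.getName() });
        // Read the configured alarm details back (invoked as operations, as documented above).
        String type = (String) connection.invoke(ruleMBean, "getAlarmType", null, null);
        String message = (String) connection.invoke(ruleMBean, "getAlarmMessage", null, null);
        System.out.println(type + ": " + message);
    }
}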
Activating and Deactivating Rules
To activate or deactivate a threshold-alarm rule, use the following rhino-console commands or related MBean operations.
Activate Rules
Console command: activatethresholdrule
Command |
activatethresholdrule <name> Description Activate a threshold alarm rule |
---|---|
Example |
To activate the rule with the name "low memory": $ ./rhino-console activatethresholdrule "low memory" Threshold rule low memory activated |
You can also activate a rule by exporting it, modifying the XML, and then reimporting it (assuming the active parameter in the rule is set to true — see Configuring Rules). |
MBean operation: activateRule
MBean |
|
---|---|
Rhino operation |
public void activateRule() throws ConfigurationException; This operation activates the threshold-alarm rule represented by the MBean. |
The threshold rule scan period must be configured to a non-zero value before Rhino will evaluate active threshold-alarm rules. |
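For completeness, a one-method sketch of calling this operation over JMX, under the same assumptions about the connection and the rule MBean's object name as in the earlier sketches:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ActivateRuleSketch {
    public static void activate(MBeanServerConnection connection, ObjectName ruleMBean) throws Exception {
        // No arguments and no return value; the rule's conditions and alarm should already be
        // configured, and the scan period must be non-zero before the rule is actually evaluated.
        connection.invoke(ruleMBean, "activateRule", null, null);
    }
}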
Deactivate rules
Console command: deactivatethresholdrule
Command |
deactivatethresholdrule <name> Description Deactivate a threshold alarm rule |
---|---|
Example |
To deactivate the rule with the name "low memory": $ ./rhino-console deactivatethresholdrule "low memory" Threshold rule low memory deactivated |
MBean operation: deactivateRule
MBean |
|
---|---|
Rhino operation |
public void deactivateRule() throws ConfigurationException; This operation deactivates the threshold-alarm rule represented by the MBean. |
Setting and Getting Rule-Scan Periods
To set or get the threshold rule scan period, use the following rhino-console commands or MBean operations.
What is a rule-scan period?
A threshold-alarm rule-scan period determines when Rhino’s threshold-rule scanner evaluates active threshold-alarm rules. The scan period must be set to a valid non-zero value for Rhino to evaluate the rules. At the beginning of each scan period, Rhino evaluates each active threshold-alarm rule as follows:
(The same process applies to the reset conditions once a rule has been triggered.) |
Console command: setthresholdrulescanperiod
Command |
setthresholdrulescanperiod <period> Description Set the threshold alarm rule scan period, measured in ms. Must be > 500 or 0 to disable rule checking |
---|---|
Example |
To set the threshold rule scan period to 30000ms (30s): $ ./rhino-console setthresholdrulescanperiod 30000 Threshold rule scan period set to 30000ms To disable threshold rule scanning: $ ./rhino-console setthresholdrulescanperiod 0 Threshold rule scanning disabled |
MBean operation: setScanPeriod
MBean |
|
---|---|
Rhino operation |
public void setScanPeriod(int scanPeriod) throws ConfigurationException, ValidationException; The scan period is measured in milliseconds. |
Console command: getthresholdrulescanperiod
Command |
getthresholdrulescanperiod Description Get the threshold alarm rule scan period |
---|---|
Example |
$ ./rhino-console getthresholdrulescanperiod Threshold rule scan period set to 30000ms |
MBean operation: getScanPeriod
MBean |
|
---|---|
Rhino operation |
public int getScanPeriod() throws ConfigurationException; The scan period is measured in milliseconds. |
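The sketch below sets the scan period to 30 seconds and reads it back over JMX. The variable configMBean stands for the object name of the Rhino MBean that exposes setScanPeriod and getScanPeriod; it is an assumption here and must be looked up from the MBean server.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ScanPeriodSketch {
    public static void setAndGet(MBeanServerConnection connection, ObjectName configMBean) throws Exception {
        // Set the scan period to 30000ms (30 seconds).
        connection.invoke(configMBean, "setScanPeriod",
                new Object[] { 30000 },
                new String[] { int.class.getName() });
        // Read the configured scan period back.
        int period = (Integer) connection.invoke(configMBean, "getScanPeriod", null, null);
        System.out.println("Threshold rule scan period: " + period + "ms");
    }
}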
Runtime Alarm List
To list all alarms that may be raised by Rhino and installed components (including their messages, and when raised and cleared), use the following rhino-console command.
Console command: alarmcatalog
Command |
alarmcatalog [-v] Description List the alarms that may be raised by Rhino and installed components. Using the -v flag will display more detail. |
---|---|
Example |
Using the -v flag displays more detail.
|
Rhino Alarm List
This is a list of all alarms raised by this version of Rhino. For the management command that lists all alarms that may be raised by Rhino and installed components, see Runtime Alarm List.
Alarm Type | Description |
---|---|
Category: AbnormalExecution (Alarms raised as a result of an abnormal execution condition being detected) |
|
An uncaught exception has been detected. |
|
Category: Activity Handler (Alarms raised by Rhino activity handler) |
|
The oldest activity handler snapshot is too old. |
|
Category: Cassandra Key/Value Store (Alarms raised by the Cassandra key/value store) |
|
All database nodes for all persistence instances have failed or are otherwise unreachable. |
|
The local database driver cannot connect to the configured persistence instance. |
|
The local database driver cannot connect to a database node. |
|
A required database keyspace does not exist and runtime data definition updates are disallowed. |
|
A required database table does not exist and runtime data definition updates are disallowed. |
|
The volume of committed but not yet persisted application state has exceeded the configured pending size limit threshold. State generated for new transactions will be ignored by the key/value store and not buffered for persisting until sufficient state has been persisted to reduce the pending size volume below the limit again |
|
rhino.cassandra-kv-store.scan-persist-time-threshold-reached |
The allowed pending transaction scan or persist time has exceeded the configured thresholds due to overload. State generated for new transactions will be ignored by the key/value store and not buffered for persisting until sufficient state has been persisted to reduce the load on the pending transaction scanner |
Category: Cassandra Session Ownership Store (Alarms raised by the Cassandra session ownership store) |
|
All database nodes for all persistence instances have failed or are otherwise unreachable. |
|
The local database driver cannot connect to the configured persistence instance. |
|
The local database driver cannot connect to a database node. |
|
A required database keyspace does not exist and runtime data definition updates are disallowed. |
|
A required database table does not exist and runtime data definition updates are disallowed. |
|
Category: Cluster Clock Synchronisation (Alarms raised by the cluster clock synchronisation monitor) |
|
Another cluster node is reporting a system clock deviation relative to the local node beyond the maximum permitted threshold. The status of external processes maintaining the system clock on that node (e.g. NTP) should be checked. |
|
Category: Clustering (Alarms raised by Rhino cluster state changes) |
|
A node left the cluster for some reason other than a management-initiated shutdown. |
|
Category: Configuration Management (Alarms raised by the Rhino configuration manager) |
|
An error occurred while trying to write the file-based configuration for the configuration type specified in the alarm instance. |
|
An error occurred while trying to read the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file. |
|
An error occurred while trying to activate the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file. |
|
Category: Database (Alarms raised during database communications) |
|
A persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
|
A persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
|
Rhino requires a backing database for persistence of state for failure recovery purposes. A persistent instance defines a connection to a database backend. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |
|
Rhino requires a backing database for persistence of state for failure recovery purposes. If no connection to the database backend is available, state cannot be persisted. |
|
A persistent instance defines the connection to the database backend. If the persistent instance cannot be instantiated then JDBC connections cannot be made. |
|
Category: Event Router State (Alarms raised by event router state management) |
|
A licensing problem was detected during SLEE start. |
|
A licensing problem was detected during service activation. |
|
A licensing problem was detected during resource adaptor entity activation. |
|
A component reported an unexpected error during convergence |
|
A component has not transitioned to the effective desired state after the timeout period |
|
A resource adaptor entity is of a type that does not support active reconfiguration but has a desired state that contains configuration properties different from those in the actual state |
|
Category: GroupRMI (Alarms raised by the GroupRMI server) |
|
A group RMI invocation completed without committing or rolling back a transaction that it started. The dangling transaction will be automatically rolled back by the group RMI server to prevent future issues but these occurrences are software bugs that should be reported. |
|
Category: Key/Value Store (Alarms raised by key/value store persistence resource managers) |
|
A persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
|
A persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
|
A persistence instance used by a key/value store cannot be instantiated. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |
|
Category: Licensing (Alarms raised by Rhino licensing) |
|
Rate limiter throttling is active. This throttling and hence this alarm only happens in SDK versions of Rhino, not production versions. |
|
A license installed in Rhino has passed its expiry time. |
|
A license installed in Rhino is within seven days of its expiry time. |
|
The hardware addresses listed in a host-based license only partially match those on the host. |
|
The hardware addresses listed in a host-based license do not match those on the host. |
|
Rhino does not have a valid license installed. |
|
The work done by a function exceeds licensed capacity. |
|
A particular function is not licensed. |
|
Category: Limiting (Alarms raised by Rhino limiting) |
|
A rate limiter is below negative capacity. |
|
A stat limiter is misconfigured. |
|
Category: Logging (Alarms raised by Rhino logging) |
|
An appender has thrown an exception when attempting to pass log messages from a logger to it. |
|
Category: M-lets Startup (Alarms raised by the M-let starter) |
|
The M-Let starter component could not register itself with the platform MBean server. This normally indicates a serious JVM misconfiguration. |
|
The M-Let starter component could not register an MBean for a configured m-let. This normally indicates an error in the m-let configuration file. |
|
Category: Pool Maintenance Provider (Alarms raised by pool maintenance provider persistence resource managers) |
|
The persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
|
The persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
|
rhino.pool-maintenance-provider.persistence-instance-instantiation-failure |
A persistence instance used by the pool-maintenance-provider cannot be instantiated. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |
|
rhino.rhino.pool-maintenance-provider.missing-heartbeat
An unexpected heartbeat timestamp for this node was encountered when querying the heartbeat table. This could mean, for example, that multiple pool nodes are configured with the same node id. |
|
rhino.rhino.pool-maintenance-provider.invalid-node-update-time |
A pool node is refreshing its heartbeat timestamps but using a clock time that exceeds the permitted delta from this node’s clock time. |
Category: REM Startup (Alarms raised by embedded REM starter) |
|
This version of Rhino is supposed to contain an embedded instance of REM but it was not found, most likely due to a packaging error. |
|
There was an unexpected problem while starting the embedded REM. This could be because of a port conflict or packaging problem. |
|
Category: Runtime Environment (Alarms related to the runtime environment) |
|
This JVM is not a supported JVM. |
|
SLEE event-routing functions failed to start after node restart |
|
Filenames with the maximum length expected by Rhino are unsupported on this filesystem. Unexpected deployment errors may occur as a result |
|
Category: SAS facility (Alarms raised by Rhino SAS Facility) |
|
Attempting to reconnect to the SAS server. |
|
SAS message queue is full. Some events have not been reported to SAS |
|
Category: SLEE State (Alarms raised by SLEE state management) |
|
An unexpected exception was caught during SLEE start. |
|
Category: SNMP (Alarms raised by Rhino SNMP) |
|
The SNMP agent listens for requests received on all network interfaces that match the requested SNMP configuration. If no suitable interfaces can be found that match the requested configuration, then the SNMP agent cannot process any SNMP requests. |
|
The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If no ports could be bound, the SNMP agent cannot process any SNMP requests. |
|
The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If this succeeds on some (but not all) interfaces, the SNMP agent can only process requests received via the interfaces that succeeded. |
|
This is a catchall alarm for unexpected failures during agent startup. If an unexpected failure occurs, the state of the SNMP agent is unpredictable and requests may not be successfully processed. |
|
This alarm represents a failure to determine an address from the notification target configuration. This can occur if the notification hostname is not resolvable, or if the specified hostname is not parseable. |
|
Multiple parameter set type configurations for in-use parameter set types map to the same OID. All parameter set type mappings will remain inactive until the conflict is resolved. |
|
Multiple counters in the parameter set type configuration map to the same index. The parameter set type mappings will remain inactive until the conflict is resolved. |
|
Category: Scattercast Management (Alarms raised by Rhino scattercast management operations) |
|
Reboot needed to make scattercast update active. |
|
Category: Service State (Alarms raised by service state management) |
|
The service threw an exception during service activation, or an unexpected exception occurred while attempting to activate the service. |
|
Category: Session Ownership Store (Alarms raised by session ownership store persistence resource managers) |
|
The persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
|
The persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
|
rhino.session-ownership-store.persistence-instance-instantiation-failure |
A persistence instance used by the session ownership store cannot be instantiated. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |
Category: Threshold Rules (Alarms raised by the threshold alarm rule processor) |
|
A threshold rule trigger or reset rule failed. |
|
A threshold rule trigger or reset rule refers to an unknown statistics parameter set. |
|
Category: Watchdog (Alarms raised by the watchdog) |
|
The system property watchdog.no_exit is set, enabling override of default node termination behaviour on failed watchdog conditions. This can cause catastrophic results and should never be used. |
|
A forward timewarp was detected. |
|
A reverse timewarp was detected. |
|
A long JVM garbage collector pause has been detected. |
Category: AbnormalExecution
Alarms raised as a result of an abnormal execution condition being detected
Alarm Type |
rhino.uncaught-exception |
---|---|
Level |
WARNING |
Message |
Uncaught exception thrown by thread %s: %s |
Description |
An uncaught exception has been detected. |
Raised |
When an uncaught exception has been thrown. |
Cleared |
Never, must be cleared manually or Rhino restarted with the source of the uncaught exception corrected. |
Category: Activity Handler
Alarms raised by Rhino activity handler
Alarm Type |
rhino.ah.snapshot-age |
---|---|
Level |
WARNING |
Message |
Oldest activity handler snapshot is older than %s, snapshot is %s (from %d), creating thread: %s |
Description |
The oldest activity handler snapshot is too old. |
Raised |
When the age of the oldest activity handler snapshot is greater than the threshold set by the rhino.ah.snapshot_age_warn system property (30s default). |
Cleared |
When the age of the oldest snapshot is less than or equal to the threshold. |
Category: Cassandra Key/Value Store
Alarms raised by the Cassandra key/value store
Alarm Type |
rhino.cassandra-kv-store.connection-error |
---|---|
Level |
CRITICAL |
Message |
Connection error for persistence instance %s |
Description |
The local database driver cannot connect to the configured persistence instance. |
Raised |
When communication with the database fails, for example because no node is available to execute a query. |
Cleared |
When the connection error is resolved. |
Alarm Type |
rhino.cassandra-kv-store.db-node-failure |
---|---|
Level |
MAJOR |
Message |
Connection lost to database node %s in persistence instance %s |
Description |
The local database driver cannot connect to a database node. |
Raised |
When communication with the database node fails. |
Cleared |
When the connection error is resolved or the node is removed from the cluster. |
Alarm Type |
rhino.cassandra-kv-store.missing-keyspace |
---|---|
Level |
CRITICAL |
Message |
Database keyspace %s does not exist |
Description |
A required database keyspace does not exist and runtime data definition updates are disallowed. |
Raised |
When a required database keyspace is found to be missing. |
Cleared |
When the database keyspace is detected to be present. |
Alarm Type |
rhino.cassandra-kv-store.missing-table |
---|---|
Level |
CRITICAL |
Message |
Database table %s does not exist |
Description |
A required database table does not exist and runtime data definition updates are disallowed. |
Raised |
When a required database table is found to be missing. |
Cleared |
When the database table is detected to be present. |
Alarm Type |
rhino.cassandra-kv-store.no-nodes-available |
---|---|
Level |
CRITICAL |
Message |
No database node in any persistence instance is available to execute queries |
Description |
All database nodes for all persistence instances have failed or are otherwise unreachable. |
Raised |
When an attempted database query execution fails because no node is available to accept it in any persistence instance. |
Cleared |
When one or more nodes become available to accept queries. |
Alarm Type |
rhino.cassandra-kv-store.pending-size-limit-reached |
---|---|
Level |
WARNING |
Message |
Not-yet-persisted application state has exceeded the configured pending size limit, newly committed state is being discarded |
Description |
The volume of committed but not yet persisted application state has exceeded the configured pending size limit threshold. State generated for new transactions will be ignored by the key/value store and not buffered for persisting until sufficient state has been persisted to reduce the pending size volume below the limit again |
Raised |
When the pending size volume exceeds the pending size limit. |
Cleared |
When the pending size volume falls below the pending size limit again. |
Alarm Type |
rhino.cassandra-kv-store.scan-persist-time-threshold-reached |
---|---|
Level |
WARNING |
Message |
Pending transaction scan or persist time has exceeded the configured maximum thresholds, newly committed state is being discarded |
Description |
The allowed pending transaction scan or persist time has exceeded the configured thresholds due to overload. State generated for new transactions will be ignored by the key/value store and not buffered for persisting until sufficient state has been persisted to reduce the load on the pending transaction scanner |
Raised |
When the pending transaction scan or persist times exceed the configured maximum thresholds. |
Cleared |
When the pending transaction scan and persist times fall below the configured maximum thresholds again. |
Category: Cassandra Session Ownership Store
Alarms raised by the Cassandra session ownership store
Alarm Type |
rhino.cassandra-session-ownership-store.connection-error |
---|---|
Level |
CRITICAL |
Message |
Connection error for persistence instance %s |
Description |
The local database driver cannot connect to the configured persistence instance. |
Raised |
When communication with the database fails, for example because no node is available to execute a query. |
Cleared |
When the connection error is resolved. |
Alarm Type |
rhino.cassandra-session-ownership-store.db-node-failure |
---|---|
Level |
MAJOR |
Message |
Connection lost to database node %s in persistence instance %s |
Description |
The local database driver cannot connect to a database node. |
Raised |
When communication with the database node fails. |
Cleared |
When the connection error is resolved or the node is removed from the cluster. |
Alarm Type |
rhino.cassandra-session-ownership-store.missing-keyspace |
---|---|
Level |
CRITICAL |
Message |
Database keyspace %s does not exist |
Description |
A required database keyspace does not exist and runtime data definition updates are disallowed. |
Raised |
When a required database keyspace is found to be missing. |
Cleared |
When the database keyspace is detected to be present. |
Alarm Type |
rhino.cassandra-session-ownership-store.missing-table |
---|---|
Level |
CRITICAL |
Message |
Database table %s does not exist |
Description |
A required database table does not exist and runtime data definition updates are disallowed. |
Raised |
When a required database table is found to be missing. |
Cleared |
When the database table is detected to be present. |
Alarm Type |
rhino.cassandra-session-ownership-store.no-nodes-available |
---|---|
Level |
CRITICAL |
Message |
No database node in any persistence instance is available to execute queries |
Description |
All database nodes for all persistence instances have failed or are otherwise unreachable. |
Raised |
When an attempted database query execution fails because no node is available to accept it in any persistence instance. |
Cleared |
When one or more nodes become available to accept queries. |
Category: Cluster Clock Synchronisation
Alarms raised by the cluster clock synchronisation monitor
Alarm Type |
rhino.monitoring.clocksync |
---|---|
Level |
WARNING |
Message |
Node %d is reporting a local clock time deviation beyond the maximum expected threshold of %dms |
Description |
Another cluster node is reporting a system clock deviation relative to the local node beyond the maximum permitted threshold. The status of external processes maintaining the system clock on that node (e.g. NTP) should be checked. |
Raised |
When a cluster node reports a local clock time deviation relative to the local node beyond the maximum permitted threshold. |
Cleared |
When the clock deviation returns to a value at or below the threshold. |
Category: Clustering
Alarms raised by Rhino cluster state changes
Alarm Type |
rhino.node-failure |
---|---|
Level |
MAJOR |
Message |
Node %d has left the cluster |
Description |
A node left the cluster for some reason other than a management-initiated shutdown. |
Raised |
When the cluster state listener detects a node has left the cluster unexpectedly. |
Cleared |
When the failed node returns to the cluster. |
Category: Configuration Management
Alarms raised by the Rhino configuration manager
Alarm Type |
rhino.config.activation-failure |
---|---|
Level |
MAJOR |
Message |
Error activating configuration from file %s. Configuration was replaced with defaults and old configuration file was moved to %s. |
Description |
An error occurred while trying to activate the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file. |
Raised |
When an exception occurs while activating a file-based configuration. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.config.read-error |
---|---|
Level |
MAJOR |
Message |
Error reading configuration from file %s. Configuration was replaced with defaults and old configuration file was moved to %s. |
Description |
An error occurred while trying to read the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file. |
Raised |
When an exception occurs while reading a configuration file. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.config.save-error |
---|---|
Level |
MAJOR |
Message |
Error saving file based configuration: %s |
Description |
An error occurred while trying to write the file-based configuration for the configuration type specified in the alarm instance. |
Raised |
When an exception occurs while writing to a configuration file. |
Cleared |
Never, must be cleared manually. |
Category: Database
Alarms raised during database communications
Alarm Type |
rhino.database.connection-lost |
---|---|
Level |
MAJOR |
Message |
Connection to %s database failed: %s |
Description |
Rhino requires a backing database for persistence of state for failure recovery purposes. If no connection to the database backend is available, state cannot be persisted. |
Raised |
When the connection to a database backend is lost. |
Cleared |
When the connection is restored. |
Alarm Type |
rhino.database.no-persistence-config |
---|---|
Level |
CRITICAL |
Message |
Persistence resource config for %s has been removed |
Description |
A persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
Raised |
When an in-use persistence resource configuration is removed by a configuration update. |
Cleared |
When the persistence resource configuration is restored. |
Alarm Type |
rhino.database.no-persistence-instances |
---|---|
Level |
CRITICAL |
Message |
Persistence resource config for %s has no active persistence instances |
Description |
A persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
Raised |
When an in-use persistence resource configuration has no active persistence instances. |
Cleared |
When at least one active persistence instance exists for the persistence resource configuration. |
Alarm Type |
rhino.database.persistence-instance-instantiation-failure |
---|---|
Level |
MAJOR |
Message |
Unable to instantiate persistence instance %s for database %s |
Description |
Rhino requires a backing database for persistence of state for failure recovery purposes. A persistent instance defines a connection to a database backend. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |
Raised |
When a persistent instance configuration change occurs but instantiation of that persistent instance fails. |
Cleared |
When a correct configuration is installed. |
Alarm Type |
rhino.jdbc.persistence-instance-instantiation-failure |
---|---|
Level |
MAJOR |
Message |
Unable to instantiate persistence instance %s for JDBC configuration with JNDI name %s |
Description |
A persistent instance defines the connection to the database backend. If the persistent instance cannot be instantiated then JDBC connections cannot be made. |
Raised |
When a persistent instance configuration change occurs but instantiation of that persistent instance fails. |
Cleared |
When a correct configuration is installed. |
Category: Event Router State
Alarms raised by event router state management
Alarm Type |
rhino.state.convergence-failure |
---|---|
Level |
MAJOR |
Message |
State convergence failed for "%s". The component remains in the "%s" state. |
Description |
A component reported an unexpected error during convergence |
Raised |
When a configuration change requiring a component to change state does not complete convergence due to an error. |
Cleared |
When the component transitions to the configured desired state. |
Alarm Type |
rhino.state.convergence-timeout |
---|---|
Level |
MINOR |
Message |
State convergence timed out for "%s". The component remains in the "%s" state. Convergence will be retried periodically until it reaches the desired state. |
Description |
A component has not transitioned to the effective desired state after the timeout period |
Raised |
When a configuration change requiring a component to change state does not complete convergence in the expected time. |
Cleared |
When the component transitions to the configured desired state. |
Alarm Type |
rhino.state.raentity.active-reconfiguration |
---|---|
Level |
MINOR |
Message |
Resource adaptor entity "%s" does not support active reconfiguration. Configuration changes will not take effect until the resource adaptor entity is deactivated and reactivated |
Description |
A resource adaptor entity is of a type that does not support active reconfiguration but has a desired state that contains configuration properties different from those in the actual state |
Raised |
When a configuration change requires a resource adaptor entity to be reconfigured but the resource adaptor does not support active reconfiguration. |
Cleared |
When the resource adaptor entity is deactivated and convergence has updated the configuration properties. |
Alarm Type |
rhino.state.unlicensed-raentity |
---|---|
Level |
MAJOR |
Message |
No valid license for resource adaptor entity "%s" found. The resource adaptor entity has not been activated. |
Description |
A licensing problem was detected during resource adaptor entity activation. |
Raised |
When a node attempts to transition a resource adaptor entity from an actual state of INACTIVE to an actual state of ACTIVE but absence of a valid license prevents that transition. |
Cleared |
When a valid license is installed. |
Alarm Type |
rhino.state.unlicensed-service |
---|---|
Level |
MAJOR |
Message |
No valid license for service "%s" found. The service has not been activated. |
Description |
A licensing problem was detected during service activation. |
Raised |
When a node attempts to transition a service from an actual state of INACTIVE to an actual state of ACTIVATING but absence of a valid license prevents that transition. |
Cleared |
When a valid license is installed. |
Alarm Type |
rhino.state.unlicensed-slee |
---|---|
Level |
CRITICAL |
Message |
No valid license for the SLEE found. The SLEE has not been started. |
Description |
A licensing problem was detected during SLEE start. |
Raised |
When a node attempts to transition its SLEE from an actual state of STOPPED to an actual state of STARTING but the absence of a valid license prevents that transition. |
Cleared |
When a valid license is installed. |
Category: GroupRMI
Alarms raised by the GroupRMI server
Alarm Type |
rhino.group-rmi.dangling-transaction |
---|---|
Level |
WARNING |
Message |
Group RMI invocation %s completed leaving an active transaction dangling: %s. Please report this bug to Metaswitch support. |
Description |
A group RMI invocation completed without committing or rolling back a transaction that it started. The dangling transaction will be automatically rolled back by the group RMI server to prevent future issues but these occurrences are software bugs that should be reported. |
Raised |
When a group RMI invocation completes leaving an active transaction dangling. |
Cleared |
Never, must be cleared manually. |
Category: Key/Value Store
Alarms raised by key/value store persistence resource managers
Alarm Type |
rhino.kv-store.no-persistence-config |
---|---|
Level |
CRITICAL |
Message |
Persistence resource config for %s has been removed |
Description |
A persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
Raised |
When an in-use persistence resource configuration is removed by a configuration update. |
Cleared |
When the persistence resource configuration is restored. |
Alarm Type |
rhino.kv-store.no-persistence-instances |
---|---|
Level |
CRITICAL |
Message |
Persistence resource config for %s has no active persistence instances |
Description |
A persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
Raised |
When an in-use persistence resource configuration has no active persistence instances. |
Cleared |
When at least one active persistence instance exists for the persistence resource configuration. |
Alarm Type |
rhino.kv-store.persistence-instance-instantiation-failure |
---|---|
Level |
MAJOR |
Message |
Unable to instantiate persistence instance %s for key/value store %s |
Description |
A persistence instance used by a key/value store cannot be instantiated. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |
Raised |
When a persistent instance configuration change occurs but instantiation of that persistent instance fails. |
Cleared |
When a correct configuration is installed. |
Category: Licensing
Alarms raised by Rhino licensing
Alarm Type |
rhino.license.expired |
---|---|
Level |
MAJOR |
Message |
License with serial "%s" has expired |
Description |
A license installed in Rhino has passed its expiry time. |
Raised |
When a license expires and there is no superseding license installed. |
Cleared |
When the license is removed or a superseding license is installed. |
Alarm Type |
rhino.license.over-licensed-capacity |
---|---|
Level |
MAJOR |
Message |
Over licensed capacity for function "%s". |
Description |
The work done by a function exceeds licensed capacity. |
Raised |
When the amount of work processed by the named function exceeds the licensed capacity. |
Cleared |
When the amount of work processed by the function becomes less than or equal to the licensed capacity. |
Alarm Type |
rhino.license.over-limit |
---|---|
Level |
MAJOR |
Message |
Rate limiter throttling active, throttled to %d events/second |
Description |
Rate limiter throttling is active. This throttling and hence this alarm only happens in SDK versions of Rhino, not production versions. |
Raised |
When there is more incoming work than allowed by the licensed limit so Rhino starts rejecting some. |
Cleared |
When the total input rate (both accepted and rejected work) drops below the licensed limit. |
Alarm Type |
rhino.license.partially-licensed-host |
---|---|
Level |
MINOR |
Message |
Host "%s" is not fully licensed. Not all hardware addresses on this host match those licensed. Please request a new license for host "%s". |
Description |
The hardware addresses listed in a host-based license only partially match those on the host. |
Raised |
When a host-based license with invalid host addresses is installed. |
Cleared |
When the license is removed, or a superseding license is installed. |
Alarm Type |
rhino.license.pending-expiry |
---|---|
Level |
MAJOR |
Message |
License with serial "%s" is due to expire on %s |
Description |
A license installed in Rhino is within seven days of its expiry time. |
Raised |
Seven days before a license is due to expire, if there is no superseding license installed. |
Cleared |
When the license expires, the license is removed, or a superseding license is installed. |
Alarm Type |
rhino.license.unlicensed-function |
---|---|
Level |
MAJOR |
Message |
There are no valid licenses installed for function "%s" and version "%s". |
Description |
A particular function is not licensed. |
Raised |
When a unit of an unlicensed function is requested. |
Cleared |
When a license is installed that licenses a particular function, and another unit is requested. |
Alarm Type |
rhino.license.unlicensed-host |
---|---|
Level |
MINOR |
Message |
"%s" is not licensed. Hardware addresses on this host did not match those licensed, or hostname has changed. Please request a new license for host "%s". |
Description |
The hardware addresses listed in a host-based license do not match those on the host. |
Raised |
When a host-based license with invalid host addresses is installed. |
Cleared |
When the license is removed, or a superseding license is installed. |
Alarm Type |
rhino.license.unlicensed-rhino |
---|---|
Level |
MAJOR |
Message |
Rhino platform is no longer licensed |
Description |
Rhino does not have a valid license installed. |
Raised |
When a license expires or is removed leaving Rhino in an unlicensed state. |
Cleared |
When an appropriate license is installed. |
Category: Limiting
Alarms raised by Rhino limiting
Alarm Type |
rhino.limiting.below-negative-capacity |
---|---|
Level |
WARNING |
Message |
Token count in rate limiter "%s" capped at negative saturation point on node %d. Too much work has been forced. Alarm will clear once token count >= 0. |
Description |
A rate limiter is below negative capacity. |
Raised |
By a rate limiter when a very large number of units have been forcibly used and the internal token counter has reached the biggest possible negative number (-2,147,483,648). |
Cleared |
When the token count becomes greater than or equal to zero. |
Alarm Type |
rhino.limiting.stat-limiter-misconfigured |
---|---|
Level |
MAJOR |
Message |
Stat limiter "%s" is misconfigured: %s. All unit requests will be allowed by this limiter until the error is resolved. |
Description |
A stat limiter is misconfigured. |
Raised |
By a stat limiter that has been asked for one or more units and has been unable to find the configured parameter set or statistic name. |
Cleared |
When the stat limiter is reconfigured or the configured parameter set that was missing is deployed. |
Category: Logging
Alarms raised by Rhino logging
Alarm Type |
rhino.logging.appender-error |
---|---|
Level |
MAJOR |
Message |
An error occurred logging to an appender: %s |
Description |
An appender has thrown an exception when attempting to pass log messages from a logger to it. |
Raised |
When an appender throws an AppenderLoggingException when a logger tries to log to it. |
Cleared |
When the problem with the given appender has been resolved and the logging configuration is updated. |
Category: M-lets Startup
Alarms raised by the M-let starter
Alarm Type |
rhino.mlet.loader-failure |
---|---|
Level |
MAJOR |
Message |
Error registering MLetLoader MBean |
Description |
The M-Let starter component could not register itself with the platform MBean server. This normally indicates a serious JVM misconfiguration. |
Raised |
During Rhino startup if an error occurred registering the m-let loader component with the MBean server. |
Cleared |
Never, must be cleared manually or Rhino restarted. |
Alarm Type |
rhino.mlet.registration-failure |
---|---|
Level |
MINOR |
Message |
Could not create or register MLet: %s |
Description |
The M-Let starter component could not register an MBean for a configured m-let. This normally indicates an error in the m-let configuration file. |
Raised |
During Rhino startup, if an error occurred starting a configured m-let. |
Cleared |
Never, must be cleared manually or Rhino restarted with updated configuration. |
Category: Pool Maintenance Provider
Alarms raised by pool maintenance provider persistence resource managers
Alarm Type |
rhino.pool-maintenance-provider.no-persistence-config |
---|---|
Level |
CRITICAL |
Message |
Persistence resource config has been removed |
Description |
The persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
Raised |
When an in-use persistence resource configuration is removed by a configuration update. |
Cleared |
When the persistence resource configuration is restored. |
Alarm Type |
rhino.pool-maintenance-provider.no-persistence-instances |
---|---|
Level |
CRITICAL |
Message |
Persistence resource config has no active persistence instances |
Description |
The persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
Raised |
When an in-use persistence resource configuration has no active persistence instances. |
Cleared |
When at least one active persistence instance exists for the persistence resource configuration. |
Alarm Type |
rhino.pool-maintenance-provider.persistence-instance-instantiation-failure |
---|---|
Level |
MAJOR |
Message |
Unable to instantiate persistence instance %s |
Description |
A persistence instance used by the pool-maintenance-provider cannot be instantiated. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |
Raised |
When a persistent instance configuration change occurs but instantiation of that persistent instance fails. |
Cleared |
When a correct configuration is installed. |
Alarm Type |
rhino.rhino.pool-maintenance-provider.invalid-node-update-time |
---|---|
Level |
WARNING |
Message |
Node %s is reporting heartbeat timestamps that exceed the maximum permitted delta from current time; current delta is %sms in the %s |
Description |
A pool node is refreshing its heartbeat timestamps but using a clock time that exceeds the permitted delta from this node’s clock time. |
Raised |
When a node’s heartbeat timestamps are noticed to exceed the permitted time delta from this node’s clock time for longer than the configured grace period. |
Cleared |
When the node’s heartbeat timestamps no longer exceed the permitted time delta. |
Alarm Type |
rhino.rhino.pool-maintenance-provider.missing-heartbeat |
---|---|
Level |
MAJOR |
Message |
Expected to find my node with a heartbeat timestamp one of %s but found a timestamp of %s instead |
Description |
An unexpected heartbeat timestamp for this node was encountered when querying the heartbeat table. This could mean, for example, that multiple pool nodes are configured with the same node id. |
Raised |
When an unexpected heartbeat timestamp for this node is encountered after a heartbeat table query. |
Cleared |
When an expected timestamp is encountered. |
Category: REM Startup
Alarms raised by embedded REM starter
Alarm Type |
rhino.rem.missing |
---|---|
Level |
MINOR |
Message |
Rhino Element Manager classes not found, embedded REM is disabled. |
Description |
This version of Rhino is supposed to contain an embedded instance of REM but it was not found, most likely due to a packaging error. |
Raised |
During Rhino startup if the classes could not be found to start the embedded REM. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.rem.startup |
---|---|
Level |
MINOR |
Message |
Could not start embedded Rhino Element Manager |
Description |
There was an unexpected problem while starting the embedded REM. This could be because of a port conflict or packaging problem. |
Raised |
During Rhino startup if an error occurred starting the embedded REM. |
Cleared |
Never, must be cleared manually or Rhino restarted with updated configuration. |
Category: Runtime Environment
Alarms related to the runtime environment
Alarm Type |
rhino.runtime.long-filenames-unsupported |
---|---|
Level |
WARNING |
Message |
Filenames with a length of %s characters are unsupported on this filesystem. Unexpected deployment errors may occur as a result |
Description |
Filenames with the maximum length expected by Rhino are unsupported on this filesystem. Unexpected deployment errors may occur as a result |
Raised |
During Rhino startup if the long filename check fails. |
Cleared |
Never, must be cleared manually or Rhino restarted after being installed on a filesystem supporting long filenames. |
Alarm Type |
rhino.runtime.slee |
---|---|
Level |
CRITICAL |
Message |
SLEE event-routing functions failed to start after node restart |
Description |
SLEE event-routing functions failed to start after node restart |
Raised |
During Rhino startup if SLEE event-routing functions fail to restart. |
Cleared |
Never, must be cleared manually or the node restarted. |
Alarm Type |
rhino.runtime.unsupported.jvm |
---|---|
Level |
WARNING |
Message |
This JVM (%s) is not supported. Supported JVMs are: %s |
Description |
This JVM is not a supported JVM. |
Raised |
During Rhino startup if an unsupported JVM was detected. |
Cleared |
Never, must be cleared manually or Rhino restarted with a supported JVM. |
Category: SAS facility
Alarms raised by Rhino SAS Facility
Alarm Type |
rhino.sas.connection.lost |
---|---|
Level |
MAJOR |
Message |
Connection to SAS server at %s:%d is down |
Description |
Attempting to reconnect to the SAS server. |
Raised |
When the SAS client loses its connection to the server. |
Cleared |
On reconnect |
Alarm Type |
rhino.sas.queue.full |
---|---|
Level |
WARNING |
Message |
SAS message queue is full |
Description |
SAS message queue is full. Some events have not been reported to SAS |
Raised |
When the SAS facility’s outgoing message queue is full. |
Cleared |
When the queue is not full for at least sas.queue_full_interval |
Category: SLEE State
Alarms raised by SLEE state management
Alarm Type |
rhino.state.slee-start |
---|---|
Level |
CRITICAL |
Message |
The SLEE failed to start successfully. |
Description |
An unexpected exception was caught during SLEE start. |
Raised |
When a node attempts to transition its SLEE from an actual state of STOPPED to an actual state of STARTING but an unexpected exception occurred while fulfilling that request. |
Cleared |
After the desired state of the SLEE is reset to STOPPED. |
Category: SNMP
Alarms raised by Rhino SNMP
Alarm Type |
rhino.snmp.bind-failure |
---|---|
Level |
MAJOR |
Message |
The SNMP agent could not be started on node %d: no addresses were successfully bound. |
Description |
The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If no ports could be bound, the SNMP agent cannot process any SNMP requests. |
Raised |
When the SNMP Agent attempts to start listening for requests, but no port in the configured range on any configured interface could be used. |
Cleared |
When the SNMP Agent is stopped. |
Alarm Type |
rhino.snmp.duplicate-counter-mapping |
---|---|
Level |
WARNING |
Message |
Duplicate counter mappings in parameter set type %s |
Description |
Multiple counters in the parameter set type configuration map to the same index. The parameter set type mappings will remain inactive until the conflict is resolved. |
Raised |
When an in-use parameter set type has a configuration with duplicate counter mappings. |
Cleared |
When the conflict is resolved, either by changing the relevant counter mappings, or if the parameter set type is removed from use. |
Alarm Type |
rhino.snmp.duplicate-oid-mapping |
---|---|
Level |
WARNING |
Message |
Duplicate parameter set type mapping configurations for OID %s |
Description |
Multiple parameter set type configurations for in-use parameter set types map to the same OID. All parameter set type mappings will remain inactive until the conflict is resolved. |
Raised |
When multiple in-use parameter set types have configurations that map to the same OID. |
Cleared |
When the conflict is resolved, either by changing the OID mappings in the relevant parameter set type configurations, or if a parameter set type in conflict is removed from use. |
Alarm Type |
rhino.snmp.general-failure |
---|---|
Level |
MINOR |
Message |
The SNMP agent encountered an error during startup: %s |
Description |
This is a catchall alarm for unexpected failures during agent startup. If an unexpected failure occurs, the state of the SNMP agent is unpredictable and requests may not be successfully processed. |
Raised |
When the SNMP Agent attempts to start listening for requests, but there is an unexpected failure not covered by other alarms. |
Cleared |
When the SNMP Agent is stopped. |
Alarm Type |
rhino.snmp.no-bind-addresses |
---|---|
Level |
MAJOR |
Message |
The SNMP agent could not be started on node %d: no suitable bind addresses available. |
Description |
The SNMP agent listens for requests received on all network interfaces that match the requested SNMP configuration. If no suitable interfaces can be found that match the requested configuration, then the SNMP agent cannot process any SNMP requests. |
Raised |
When the SNMP Agent attempts to start listening for requests, but no suitable network interface addresses can be found to bind to. |
Cleared |
When the SNMP Agent is stopped. |
Alarm Type |
rhino.snmp.notification-address-failure |
---|---|
Level |
MAJOR |
Message |
Failed to create notification target for address "%s". |
Description |
This alarm represents a failure to determine an address from the notification target configuration. This can occur if the notification hostname is not resolvable, or if the specified hostname is not parseable. |
Raised |
During SNMP agent start if a notification target address cannot be determined (e.g. due to a hostname resolution failing). |
Cleared |
When the SNMP Agent is stopped. |
Alarm Type |
rhino.snmp.partial-failure |
---|---|
Level |
MINOR |
Message |
The SNMP agent failed to bind to the following addresses: %s |
Description |
The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If this succeeds on some (but not all) interfaces, the SNMP agent can only process requests received via the interfaces that succeeded. |
Raised |
When the SNMP Agent attempts to start listening for requests, and only some of the configured interfaces successfully bound a UDP port. |
Cleared |
When the SNMP Agent is stopped. |
Category: Scattercast Management
Alarms raised by Rhino scattercast management operations
Alarm Type |
rhino.scattercast.update-reboot-required |
---|---|
Level |
CRITICAL |
Message |
Scattercast endpoints have been updated. A cluster reboot is required to apply the update. An automatic reboot has been triggered, Manual intervention required if the reboot fails. |
Description |
Reboot needed to make scattercast update active. |
Raised |
When scattercast endpoints are updated. |
Cleared |
On node reboot. |
Category: Service State
Alarms raised by service state management
Alarm Type |
rhino.state.service-activation |
---|---|
Level |
MAJOR |
Message |
Service "%s" failed to activate successfully. |
Description |
The service threw an exception during service activation, or an unexpected exception occurred while attempting to activate the service. |
Raised |
When a node attempts to transition a service from an actual state of INACTIVE to an actual state of ACTIVATING but the service rejected the activation request or an unexpected exception occurred while fulfilling that request. |
Cleared |
After the desired state of the service is reset to INACTIVE. |
Category: Session Ownership Store
Alarms raised by session ownership store persistence resource managers
Alarm Type |
rhino.session-ownership-store.no-persistence-config |
---|---|
Level |
CRITICAL |
Message |
Persistence resource config has been removed |
Description |
The persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
Raised |
When an in-use persistence resource configuration is removed by a configuration update. |
Cleared |
When the persistence resource configuration is restored. |
Alarm Type |
rhino.session-ownership-store.no-persistence-instances |
---|---|
Level |
CRITICAL |
Message |
Persistence resource config has no active persistence instances |
Description |
The persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
Raised |
When an in-use persistence resource configuration has no active persistence instances. |
Cleared |
When at least one active persistence instance exists for the persistence resource configuration. |
Alarm Type |
rhino.session-ownership-store.persistence-instance-instantiation-failure |
---|---|
Level |
MAJOR |
Message |
Unable to instantiate persistence instance %s |
Description |
A persistence instance used by the session ownership store cannot be instantiated. If the persistent instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |
Raised |
When a persistent instance configuration change occurs but instantiation of that persistent instance fails. |
Cleared |
When a correct configuration is installed. |
Category: Threshold Rules
Alarms raised by the threshold alarm rule processor
Alarm Type |
rhino.threshold-rules.rule-failure |
---|---|
Level |
WARNING |
Message |
Threshold rule %s trigger or reset condition failed to run |
Description |
A threshold rule trigger or reset rule failed. |
Raised |
When a threshold rule condition cannot be evaluated, for example it refers to a statistic that does not exist. |
Cleared |
When the threshold rule condition is corrected. |
Alarm Type |
rhino.threshold-rules.unknown-parameter-set |
---|---|
Level |
WARNING |
Message |
Threshold rule %s refers to unknown statistics parameter set '%s' |
Description |
A threshold rule trigger or reset rule refers to an unknown statistics parameter set. |
Raised |
When a threshold rule condition cannot be evaluated because it refers to a statistics parameter set that does not exist. |
Cleared |
When the threshold rule condition is corrected. |
Category: Watchdog
Alarms raised by the watchdog
Alarm Type |
rhino.watchdog.forward-timewarp |
---|---|
Level |
WARNING |
Message |
Forward timewarp of %sms detected at %s |
Description |
A forward timewarp was detected. |
Raised |
When the system clock is detected to have progressed by an amount exceeding the sum of the watchdog check interval and the maximum pause margin. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.watchdog.gc |
---|---|
Level |
CRITICAL |
Message |
Long JVM %s GC of %sms detected |
Description |
A long JVM garbage collector pause has been detected. |
Raised |
When the Java Virtual Machine performs a garbage collection that stops all application threads for longer than the configured acceptable threshold. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.watchdog.no-exit |
---|---|
Level |
CRITICAL |
Message |
System property watchdog.no_exit is set, watchdog will be terminated rather than killing the node if a failed watchdog condition occurs |
Description |
The system property watchdog.no_exit is set, enabling override of default node termination behaviour on failed watchdog conditions. This can cause catastrophic results and should never be used. |
Raised |
When the watchdog.no_exit system property is set. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.watchdog.reverse-timewarp |
---|---|
Level |
WARNING |
Message |
Reverse timewarp of %sms detected at %s |
Description |
A reverse timewarp was detected. |
Raised |
When the system clock is detected to have progressed by an amount less than the difference between the watchdog check interval and the reverse timewarp margin. |
Cleared |
Never, must be cleared manually. |
Usage
As well as an overview of usage, this section includes instructions for performing the following Rhino SLEE procedures, with explanations, examples and links to related javadocs:
Procedure | rhino-console command | MBean → Operations |
---|---|---|
View current values of usage parameters | dumpusagestats | Usage → get<usage-parameter-name> |
Enable or disable usage notifications | setusagenotificationsenabled | UsageNotificationManager → set<usage-parameter-name>NotificationsEnabled |
List which usage notifications are enabled | listusagenotificationsenabled | UsageNotificationManager → get<usage-parameter-name>NotificationsEnabled |
Create named usage parameter sets | createusageparameterset | ServiceUsage → createUsageParameterSet |
List named usage parameter sets | listusageparametersets | ServiceUsage → getUsageParameterSets |
Remove named usage parameter sets | removeusageparameterset | ServiceUsage → removeUsageParameterSet |
About Usage
A usage parameter is a parameter that an object in the SLEE can update, to provide usage information.
There are two types:
- Counter-type usage parameters have values that can be incremented or decremented.
- Sample-type usage parameters accumulate sample data.
Accessing usage parameters
Administrators can access usage parameters through the SLEE’s management interface.
Management clients can access usage parameters through the usage parameters interface declared in an SBB, resource adaptor or profile specification. Usage parameters cannot be created through the management interface; instead, a usage parameters interface must be declared in the SLEE component. For example, an SBB declares an sbb-usage-parameters-interface element in its deployment descriptor (similar procedures apply for resource adaptors and profile specifications).
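As a minimal sketch only, a usage parameters interface is a Java interface whose method names determine the usage parameters: each increment<Name> method declares a counter-type parameter and each sample<Name> method declares a sample-type parameter. The interface and parameter names below (VpnUsageParameters, callAttempts, callDuration) are hypothetical examples, not part of any shipped component:

```java
// Hypothetical usage parameters interface for an SBB (names are examples only).
// Each increment<Name> method declares a counter-type usage parameter, and each
// sample<Name> method declares a sample-type usage parameter.
package com.example.vpn;

public interface VpnUsageParameters {

    // counter-type usage parameter "callAttempts"
    void incrementCallAttempts(long value);

    // sample-type usage parameter "callDuration"
    void sampleCallDuration(long value);
}
```

The sbb-usage-parameters-interface element in the SBB deployment descriptor then names this interface.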
You can also use notifications to output usage parameters to management clients.
Creating named usage parameter sets
By default, the SLEE creates unnamed usage parameter sets for a notification source. You can also create named usage parameter sets, for example to hold multiple values of usage parameters for the same notification source.
Rhino usage extensions
To alleviate the limitations of the SLEE-defined usage mechanism, Rhino provides a usage extension mechanism that allows an SBB or resource adaptor to declare multiple usage parameters interfaces, and defines a Usage facility with which SBBs and resource adaptors can manage and access their own usage parameter sets.
Viewing Usage Parameters
To view the current value of a usage parameter, use the following rhino-console command or related MBean operation.
Whereas the MBean operation below can only get individual usage parameter values, the console command outputs current values of all usage parameters for a specified notification source. |
Console command: dumpusagestats
Command |
dumpusagestats <type> <notif-source> [param-set-name] [reset] Description Dump the current values of the usage parameters for the specified notification source. The usage parameter set name is optional and if not specified the values for the unnamed (or root) parameter set are returned. If [reset] is specified, the values of the usage parameters are reset after being obtained |
---|---|
Example |
$ ./rhino-console dumpusagestats sbb \ "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]" parameter-name counter-value sample-stats type ------------------- -------------- ------------- -------- callAttempts 0 counter missingParameters 0 counter offNetCalls 0 counter onNetCalls 0 counter unknownShortCode 0 counter unknownSubscribers 0 counter 6 rows |
MBean operation: get<usage-parameter-name>
MBean |
|
---|---|
SLEE-defined |
Counter-type usage parameters
public long get<usage-parameter-name>(boolean reset) throws ManagementException;
Sample-type usage parameters
public SampleStatistics get<usage-parameter-name>(boolean reset) throws ManagementException; |
Arguments |
This operation requires that you specify whether the values are to be reset after being read:
|
Return value |
Operations for counter-type usage parameters return the current value of the counter. Operations for sample-type usage parameters return a SampleStatistics object summarising the accumulated sample data. |
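For illustration only, a management client could invoke such a generated operation through a standard JMX connection. In this sketch the parameter name callAttempts and the way the Usage MBean's ObjectName is obtained are assumptions, not Rhino-specific API:

```java
// Sketch: reading a counter-type usage parameter through a generated
// get<usage-parameter-name> operation. "connection" is assumed to be an
// established MBeanServerConnection, and "usageMBeanName" the ObjectName of
// the usage parameter set's Usage MBean (obtained by the management client).
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ReadUsageCounter {
    public static long readCallAttempts(MBeanServerConnection connection,
                                         ObjectName usageMBeanName) throws Exception {
        // false = do not reset the counter after reading it
        Object value = connection.invoke(usageMBeanName,
                "getCallAttempts",                     // hypothetical parameter name
                new Object[] { Boolean.FALSE },
                new String[] { "boolean" });
        return ((Long) value).longValue();
    }
}
```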
Usage Notifications
You can enable or disable usage notifications, and list which usage notifications are enabled:
Enabling and Disabling Usage Notifications
To enable or disable usage notifications, use the following rhino-console command or related MBean operation.
The notifications-enabled flag
To enable notifications to output usage parameters to management clients, set the usage parameter's notifications-enabled flag. |
When using the pool clustering mode, like all configuration state, whether usage notifications are enabled or disabled is configured separately for each pool cluster node. |
Console command: setusagenotificationsenabled
Command |
setusagenotificationsenabled <type> <notif-source> [upi-type] <param-name> <flag> Description Set the usage notifications-enabled flag for specified usage notification source's usage parameter. The usage parameters interface type is optional and if not specified the root usage parameters interface type is used |
---|---|
Example |
$ ./rhino-console setusagenotificationsenabled sbb \ "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]" \ callAttempts true Usage notifications for usage parameter callAttempts for SbbNotification[service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]] have been enabled |
MBean operation: set<usage-parameter-name>NotificationsEnabled
MBean |
|
---|---|
SLEE-defined |
public void set<usage-parameter-name>NotificationsEnabled(boolean enabled) throws ManagementException; |
Arguments |
|
Notes |
Enabling usage notification
Usage notifications are enabled or disabled on a per-usage-parameter basis for each notification source. This means that if usage notifications are enabled for a particular usage parameter, then whenever that usage parameter is updated in any usage parameter set belonging to the notification source, the SLEE generates a usage notification. |
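As a sketch under the assumption that the management client already holds the ObjectName of the relevant Usage MBean, receiving these notifications uses the standard JMX notification mechanism:

```java
// Sketch: receiving usage notifications through the standard JMX notification
// mechanism. "connection" is assumed to be an established MBeanServerConnection
// and "usageMBeanName" the ObjectName of a Usage MBean that emits notifications
// for the notification source; both are obtained outside this sketch.
import javax.management.MBeanServerConnection;
import javax.management.Notification;
import javax.management.NotificationListener;
import javax.management.ObjectName;

public class UsageNotificationLogger implements NotificationListener {
    @Override
    public void handleNotification(Notification notification, Object handback) {
        // Usage notifications are delivered as JMX notifications
        // (instances of javax.slee.usage.UsageNotification).
        System.out.println("received " + notification.getType() + ": " + notification);
    }

    public static void register(MBeanServerConnection connection,
                                ObjectName usageMBeanName) throws Exception {
        connection.addNotificationListener(usageMBeanName,
                new UsageNotificationLogger(), null, null);
    }
}
```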
Viewing Usage Notification Status
To list usage parameter status, use the following rhino-console command or related MBean operation.
To see which usage parameters management clients are receiving through notifications, you can list usage parameter status. |
When using the pool clustering mode, like all configuration state, whether usage notifications are enabled or disabled is configured separately for each pool cluster node. |
Console command: listusagenotificationsenabled
Command |
listusagenotificationsenabled <type> <notif-source> [upi-type] Description List the usage notification manager flags for the specified notification source. The usage parameters interface type is optional and if not specified the flags for the root usage parameters interface type are returned |
---|---|
Example |
$ ./rhino-console listusagenotificationsenabled sbb \ "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]" parameter-name notifications-enabled ------------------- ---------------------- callAttempts true missingParameters false offNetCalls false onNetCalls false unknownShortCode false unknownSubscribers false 6 rows |
MBean operation: get<usage-parameter-name>NotificationsEnabled
MBean |
|
---|---|
SLEE-defined |
public boolean get<usage-parameter-name>NotificationsEnabled() throws ManagementException; |
Arguments |
|
Named Usage Parameter Sets
By default, the SLEE creates unnamed usage parameter sets for a notification source. You can also create named usage parameter sets, for example to hold multiple values of usage parameters for the same notification source.
Rhino includes facilities for creating, listing and removing named usage parameter sets for services, resource adaptor entities and profile tables.
This section includes the following procedures:
Usage parameter sets for internal subsystems (not listed using console command)
The SLEE specification also includes usage parameter sets for "internal subsystems". You can list these, but not create or remove them, since they are part of the SLEE implementation. However, Rhino uses its own statistics API to collect statistics from internal subsystems — so if you try to list usage parameter set names for an internal subsystem using |
Creating Usage Parameter Sets
To create a named usage parameter set for services, resource adaptor entities or profile tables, use the following rhino-console command or related MBean operations.
Services
Console command: createusageparameterset
Command |
createusageparameterset <type> <notif-source> <param-set-name> Description Create a new usage parameter set with the specified name for the specified notification source |
---|---|
Example |
$ ./rhino-console createusageparameterset sbb \ "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]" \ firstLook created usage parameter set firstLook for SbbNotification[service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]] |
MBean operation: createUsageParameterSet
MBean |
|
---|---|
SLEE-defined |
public void createUsageParameterSet(SbbID id, String paramSetName) throws NullPointerException, UnrecognizedSbbException, InvalidArgumentException, UsageParameterSetNameAlreadyExistsException, ManagementException; |
Arguments |
|
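A hedged sketch of invoking this operation from a JMX management client follows; the ObjectName lookup is assumed to have been done elsewhere, and the SBB identity and set name simply mirror the console example above:

```java
// Sketch: creating the named usage parameter set "firstLook" for the VPN SBB
// via the SLEE-defined ServiceUsage MBean operation shown above.
// "connection" is assumed to be an MBeanServerConnection and
// "serviceUsageMBeanName" the ObjectName of the service's ServiceUsage MBean.
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.slee.SbbID;

public class CreateNamedUsageParameterSet {
    public static void create(MBeanServerConnection connection,
                              ObjectName serviceUsageMBeanName) throws Exception {
        SbbID sbbId = new SbbID("VPN SBB", "OpenCloud", "0.2");
        connection.invoke(serviceUsageMBeanName,
                "createUsageParameterSet",
                new Object[] { sbbId, "firstLook" },
                new String[] { "javax.slee.SbbID", "java.lang.String" });
    }
}
```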
Resource adaptor entities
Console command: createusageparameterset
Command |
createusageparameterset <type> <notif-source> <param-set-name> Description Create a new usage parameter set with the specified name for the specified notification source |
---|---|
Example |
$ ./rhino-console createusageparameterset resourceadaptorentity \ "entity=cdr" \ cdr-usage created usage parameter set cdr-usage for RAEntityNotification[entity=cdr] |
MBean operation: createUsageParameterSet
MBean |
|
---|---|
SLEE-defined |
public void createUsageParameterSet(String paramSetName) throws NullPointerException, InvalidArgumentException, UsageParameterSetNameAlreadyExistsException, ManagementException; |
Arguments |
|
Profile tables
Console command: createusageparameterset
Command |
createusageparameterset <type> <notif-source> <param-set-name> Description Create a new usage parameter set with the specified name for the specified notification source |
---|---|
Example |
$ ./rhino-console createusageparameterset profiletable \ "table=PostpaidChargingPrefixTable" \ ppprefix-usage created usage parameter set ppprefix-usage for ProfileTableNotification[table=PostpaidChargingPrefixTable] |
MBean operation: createUsageParameterSet
MBean |
|
---|---|
SLEE-defined |
public void createUsageParameterSet(String paramSetName) throws NullPointerException, InvalidArgumentException, UsageParameterSetNameAlreadyExistsException, ManagementException; |
Arguments |
|
Listing Usage Parameter Sets
To list named usage parameter sets for services, resource adaptor entities or profile tables, use the following rhino-console command or related MBean operations.
Services
Console command: listusageparametersets
Command |
listusageparametersets <type> <notif-source> Description List the usage parameter sets for the specified notification source. The unnamed (or root) parameter set is not included in this list |
---|---|
Example |
$ ./rhino-console listusageparametersets sbb \ "service=ServiceID[name=VPN Service,vendor=OpenCloud,version=0.2],sbb=SbbID[name=VPN SBB,vendor=OpenCloud,version=0.2]" firstLook secondLook |
MBean operation: getUsageParameterSets
MBean |
|
---|---|
SLEE-defined |
public String[] getUsageParameterSets(SbbID id) throws NullPointerException, UnrecognizedSbbException, InvalidArgumentException, ManagementException |
Arguments |
|
Resource adaptor entities
Console command: listusageparametersets
Command |
listusageparametersets <type> <notif-source> Description List the usage parameter sets for the specified notification source. The unnamed (or root) parameter set is not included in this list |
---|---|
Example |
$ ./rhino-console listusageparametersets resourceadaptorentity \ "entity=cdr" cdr-usage |
MBean operation: getUsageParameterSets
MBean |
|
---|---|
SLEE-defined |
public String[] getUsageParameterSets() throws ManagementException |
Profile tables
Console command: listusageparametersets
Command |
listusageparametersets <type> <notif-source> Description List the usage parameter sets for the specified notification source. The unnamed (or root) parameter set is not included in this list |
---|---|
Example |
$ ./rhino-console listusageparametersets profiletable \ "table=PostpaidChargingPrefixTable" ppprefix-usage |