This is a list of all alarms raised by this version of Rhino. For the management command that lists all alarms that may be raised by Rhino and installed components, see Runtime Alarm List.
Category: AbnormalExecution (Alarms raised as a result of an abnormal execution condition being detected)

Alarm Type | Description |
---|---|
rhino.uncaught-exception | An uncaught exception has been detected. |

Category: Activity Handler (Alarms raised by the Rhino activity handler)

Alarm Type | Description |
---|---|
rhino.ah.snapshot-age | The oldest activity handler snapshot is too old. |

Category: Cassandra Key/Value Store (Alarms raised by the Cassandra key/value store)

Alarm Type | Description |
---|---|
rhino.cassandra-kv-store.no-nodes-available | All database nodes for all persistence instances have failed or are otherwise unreachable. |
rhino.cassandra-kv-store.connection-error | The local database driver cannot connect to the configured persistence instance. |
rhino.cassandra-kv-store.db-node-failure | The local database driver cannot connect to a database node. |
rhino.cassandra-kv-store.missing-keyspace | A required database keyspace does not exist and runtime data definition updates are disallowed. |
rhino.cassandra-kv-store.missing-table | A required database table does not exist and runtime data definition updates are disallowed. |
rhino.cassandra-kv-store.pending-size-limit-reached | The volume of committed but not yet persisted application state has exceeded the configured pending size limit threshold. State generated for new transactions will be ignored by the key/value store and not buffered for persisting until sufficient state has been persisted to reduce the pending size volume below the limit again. |
rhino.cassandra-kv-store.scan-persist-time-threshold-reached | The allowed pending transaction scan or persist time has exceeded the configured thresholds due to overload. State generated for new transactions will be ignored by the key/value store and not buffered for persisting until sufficient state has been persisted to reduce the load on the pending transaction scanner. |

Category: Cassandra Session Ownership Store (Alarms raised by the Cassandra session ownership store)

Alarm Type | Description |
---|---|
rhino.cassandra-session-ownership-store.no-nodes-available | All database nodes for all persistence instances have failed or are otherwise unreachable. |
rhino.cassandra-session-ownership-store.connection-error | The local database driver cannot connect to the configured persistence instance. |
rhino.cassandra-session-ownership-store.db-node-failure | The local database driver cannot connect to a database node. |
rhino.cassandra-session-ownership-store.missing-keyspace | A required database keyspace does not exist and runtime data definition updates are disallowed. |
rhino.cassandra-session-ownership-store.missing-table | A required database table does not exist and runtime data definition updates are disallowed. |

Category: Cluster Clock Synchronisation (Alarms raised by the cluster clock synchronisation monitor)

Alarm Type | Description |
---|---|
rhino.monitoring.clocksync | Another cluster node is reporting a system clock deviation relative to the local node beyond the maximum permitted threshold. The status of external processes maintaining the system clock on that node (e.g. NTP) should be checked. |

Category: Clustering (Alarms raised by Rhino cluster state changes)

Alarm Type | Description |
---|---|
rhino.node-failure | A node left the cluster for some reason other than a management-initiated shutdown. |

Category: Configuration Management (Alarms raised by the Rhino configuration manager)

Alarm Type | Description |
---|---|
rhino.config.save-error | An error occurred while trying to write the file-based configuration for the configuration type specified in the alarm instance. |
rhino.config.read-error | An error occurred while trying to read the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file. |
rhino.config.activation-failure | An error occurred while trying to activate the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file. |

Category: Database (Alarms raised during database communications)

Alarm Type | Description |
---|---|
rhino.database.no-persistence-config | A persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
rhino.database.no-persistence-instances | A persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
rhino.database.persistence-instance-instantiation-failure | Rhino requires a backing database for persistence of state for failure recovery purposes. A persistence instance defines a connection to a database backend. If the persistence instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |
rhino.database.connection-lost | Rhino requires a backing database for persistence of state for failure recovery purposes. If no connection to the database backend is available, state cannot be persisted. |
rhino.jdbc.persistence-instance-instantiation-failure | A persistence instance defines the connection to the database backend. If the persistence instance cannot be instantiated then JDBC connections cannot be made. |

Category: Event Router State (Alarms raised by event router state management)

Alarm Type | Description |
---|---|
rhino.state.unlicensed-slee | A licensing problem was detected during SLEE start. |
rhino.state.unlicensed-service | A licensing problem was detected during service activation. |
rhino.state.unlicensed-raentity | A licensing problem was detected during resource adaptor entity activation. |
rhino.state.convergence-failure | A component reported an unexpected error during convergence. |
rhino.state.convergence-timeout | A component has not transitioned to the effective desired state after the timeout period. |
rhino.state.raentity.active-reconfiguration | A resource adaptor entity is of a type that does not support active reconfiguration but has a desired state that contains configuration properties different from those in the actual state. |

Category: GroupRMI (Alarms raised by the GroupRMI server)

Alarm Type | Description |
---|---|
rhino.group-rmi.dangling-transaction | A group RMI invocation completed without committing or rolling back a transaction that it started. The dangling transaction will be automatically rolled back by the group RMI server to prevent future issues, but these occurrences are software bugs that should be reported. |

Category: Key/Value Store (Alarms raised by key/value store persistence resource managers)

Alarm Type | Description |
---|---|
rhino.kv-store.no-persistence-config | A persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
rhino.kv-store.no-persistence-instances | A persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
rhino.kv-store.persistence-instance-instantiation-failure | A persistence instance used by a key/value store cannot be instantiated. If the persistence instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |

Category: Licensing (Alarms raised by Rhino licensing)

Alarm Type | Description |
---|---|
rhino.license.over-limit | Rate limiter throttling is active. This throttling, and hence this alarm, only happens in SDK versions of Rhino, not production versions. |
rhino.license.expired | A license installed in Rhino has passed its expiry time. |
rhino.license.pending-expiry | A license installed in Rhino is within seven days of its expiry time. |
rhino.license.partially-licensed-host | The hardware addresses listed in a host-based license only partially match those on the host. |
rhino.license.unlicensed-host | The hardware addresses listed in a host-based license do not match those on the host. |
rhino.license.unlicensed-rhino | Rhino does not have a valid license installed. |
rhino.license.over-licensed-capacity | The work done by a function exceeds licensed capacity. |
rhino.license.unlicensed-function | A particular function is not licensed. |

Category: Limiting (Alarms raised by Rhino limiting)

Alarm Type | Description |
---|---|
rhino.limiting.below-negative-capacity | A rate limiter is below negative capacity. |
 | A stat limiter is misconfigured. |

Category: Logging (Alarms raised by Rhino logging)

Alarm Type | Description |
---|---|
 | An appender has thrown an exception while processing log messages passed to it from a logger. |

Category: M-lets Startup (Alarms raised by the M-let starter)

Alarm Type | Description |
---|---|
 | The M-Let starter component could not register itself with the platform MBean server. This normally indicates a serious JVM misconfiguration. |
 | The M-Let starter component could not register an MBean for a configured m-let. This normally indicates an error in the m-let configuration file. |

Category: Pool Maintenance Provider (Alarms raised by pool maintenance provider persistence resource managers)

Alarm Type | Description |
---|---|
 | The persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
 | The persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
rhino.pool-maintenance-provider.persistence-instance-instantiation-failure | A persistence instance used by the pool maintenance provider cannot be instantiated. If the persistence instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |
 | An unexpected heartbeat timestamp for this node was encountered when querying the heartbeat table. This could mean, for example, that multiple pool nodes are configured with the same node id. |
rhino.pool-maintenance-provider.invalid-node-update-time | A pool node is refreshing its heartbeat timestamps but using a clock time that exceeds the permitted delta from this node's clock time. |

Category: REM Startup (Alarms raised by the embedded REM starter)

Alarm Type | Description |
---|---|
 | This version of Rhino is supposed to contain an embedded instance of REM but it was not found, most likely due to a packaging error. |
 | There was an unexpected problem while starting the embedded REM. This could be because of a port conflict or a packaging problem. |

Category: Runtime Environment (Alarms related to the runtime environment)

Alarm Type | Description |
---|---|
 | This JVM is not a supported JVM. |
 | SLEE event-routing functions failed to start after a node restart. |
 | Filenames with the maximum length expected by Rhino are unsupported on this filesystem. Unexpected deployment errors may occur as a result. |

Category: SAS facility (Alarms raised by the Rhino SAS facility)

Alarm Type | Description |
---|---|
 | Attempting to reconnect to the SAS server. |
 | The SAS message queue is full. Some events have not been reported to SAS. |

Category: SLEE State (Alarms raised by SLEE state management)

Alarm Type | Description |
---|---|
 | An unexpected exception was caught during SLEE start. |

Category: SNMP (Alarms raised by Rhino SNMP)

Alarm Type | Description |
---|---|
 | The SNMP agent listens for requests received on all network interfaces that match the requested SNMP configuration. If no suitable interfaces can be found that match the requested configuration, then the SNMP agent cannot process any SNMP requests. |
 | The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If no ports could be bound, the SNMP agent cannot process any SNMP requests. |
 | The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If this succeeds on some (but not all) interfaces, the SNMP agent can only process requests received via the interfaces that succeeded. |
 | This is a catch-all alarm for unexpected failures during agent startup. If an unexpected failure occurs, the state of the SNMP agent is unpredictable and requests may not be successfully processed. |
 | This alarm represents a failure to determine an address from the notification target configuration. This can occur if the notification hostname is not resolvable, or if the specified hostname is not parseable. |
 | Multiple parameter set type configurations for in-use parameter set types map to the same OID. All parameter set type mappings will remain inactive until the conflict is resolved. |
 | Multiple counters in the parameter set type configuration map to the same index. The parameter set type mappings will remain inactive until the conflict is resolved. |

Category: Scattercast Management (Alarms raised by Rhino scattercast management operations)

Alarm Type | Description |
---|---|
 | A reboot is needed to make a scattercast update active. |

Category: Service State (Alarms raised by service state management)

Alarm Type | Description |
---|---|
 | The service threw an exception during service activation, or an unexpected exception occurred while attempting to activate the service. |

Category: Session Ownership Store (Alarms raised by session ownership store persistence resource managers)

Alarm Type | Description |
---|---|
 | The persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
 | The persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
rhino.session-ownership-store.persistence-instance-instantiation-failure | A persistence instance used by the session ownership store cannot be instantiated. If the persistence instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |

Category: Threshold Rules (Alarms raised by the threshold alarm rule processor)

Alarm Type | Description |
---|---|
 | A threshold rule trigger or reset rule failed. |
 | A threshold rule trigger or reset rule refers to an unknown statistics parameter set. |

Category: Watchdog (Alarms raised by the watchdog)

Alarm Type | Description |
---|---|
 | The system property watchdog.no_exit is set, enabling override of the default node termination behaviour on failed watchdog conditions. This can cause catastrophic results and should never be used. |
 | A forward timewarp was detected. |
 | A reverse timewarp was detected. |
 | A long JVM garbage collector pause has been detected. |
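Timewarp alarms of the kind listed above are typically detected by comparing elapsed wall-clock time against a monotonic clock. The sketch below illustrates the general technique only; it is not Rhino's implementation, and the function name and tolerance value are invented:

```python
def detect_timewarp(wall_delta_ms, mono_delta_ms, tolerance_ms=100):
    """Compare wall-clock elapsed time with monotonic elapsed time.

    A forward timewarp means the system clock jumped ahead of real
    elapsed time; a reverse timewarp means it jumped backwards.
    """
    drift = wall_delta_ms - mono_delta_ms
    if drift > tolerance_ms:
        return "forward"
    if drift < -tolerance_ms:
        return "reverse"
    return None  # clocks agree within tolerance

# 1.5s of wall-clock time passed but only 1s of monotonic time: forward warp
print(detect_timewarp(1500, 1000))
```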
Category: AbnormalExecution
Alarms raised as a result of an abnormal execution condition being detected

Alarm Type | rhino.uncaught-exception |
---|---|
Level | WARNING |
Message | Uncaught exception thrown by thread %s: %s |
Description | An uncaught exception has been detected. |
Raised | When an uncaught exception has been thrown. |
Cleared | Never; must be cleared manually, or Rhino restarted with the source of the uncaught exception corrected. |
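The Message field of each alarm is a template whose %s and %d placeholders are filled in when the alarm is raised, following the usual printf-style conventions. An illustrative expansion (the thread name and exception below are made up):

```python
# Alarm message templates use printf-style placeholders; Python's %
# operator follows the same conventions, so it serves to illustrate.
template = "Uncaught exception thrown by thread %s: %s"
message = template % ("worker-1", "java.lang.NullPointerException")
print(message)
```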
Category: Activity Handler
Alarms raised by the Rhino activity handler

Alarm Type | rhino.ah.snapshot-age |
---|---|
Level | WARNING |
Message | Oldest activity handler snapshot is older than %s, snapshot is %s (from %d), creating thread: %s |
Description | The oldest activity handler snapshot is too old. |
Raised | When the age of the oldest activity handler snapshot is greater than the threshold set by the rhino.ah.snapshot_age_warn system property (30s by default). |
Cleared | When the age of the oldest snapshot is less than or equal to the threshold. |
Category: Cassandra Key/Value Store
Alarms raised by the Cassandra key/value store

Alarm Type | rhino.cassandra-kv-store.connection-error |
---|---|
Level | CRITICAL |
Message | Connection error for persistence instance %s |
Description | The local database driver cannot connect to the configured persistence instance. |
Raised | When communication with the database fails, for example because no node is available to execute a query. |
Cleared | When the connection error is resolved. |

Alarm Type | rhino.cassandra-kv-store.db-node-failure |
---|---|
Level | MAJOR |
Message | Connection lost to database node %s in persistence instance %s |
Description | The local database driver cannot connect to a database node. |
Raised | When communication with the database node fails. |
Cleared | When the connection error is resolved or the node is removed from the cluster. |

Alarm Type | rhino.cassandra-kv-store.missing-keyspace |
---|---|
Level | CRITICAL |
Message | Database keyspace %s does not exist |
Description | A required database keyspace does not exist and runtime data definition updates are disallowed. |
Raised | When a required database keyspace is found to be missing. |
Cleared | When the database keyspace is detected to be present. |

Alarm Type | rhino.cassandra-kv-store.missing-table |
---|---|
Level | CRITICAL |
Message | Database table %s does not exist |
Description | A required database table does not exist and runtime data definition updates are disallowed. |
Raised | When a required database table is found to be missing. |
Cleared | When the database table is detected to be present. |

Alarm Type | rhino.cassandra-kv-store.no-nodes-available |
---|---|
Level | CRITICAL |
Message | No database node in any persistence instance is available to execute queries |
Description | All database nodes for all persistence instances have failed or are otherwise unreachable. |
Raised | When an attempted database query execution fails because no node is available to accept it in any persistence instance. |
Cleared | When one or more nodes become available to accept queries. |

Alarm Type | rhino.cassandra-kv-store.pending-size-limit-reached |
---|---|
Level | WARNING |
Message | Not-yet-persisted application state has exceeded the configured pending size limit, newly committed state is being discarded |
Description | The volume of committed but not yet persisted application state has exceeded the configured pending size limit threshold. State generated for new transactions will be ignored by the key/value store and not buffered for persisting until sufficient state has been persisted to reduce the pending size volume below the limit again. |
Raised | When the pending size volume exceeds the pending size limit. |
Cleared | When the pending size volume falls below the pending size limit again. |
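The raise/clear behaviour of the pending-size-limit alarm can be pictured as a simple hysteresis guard: once the pending volume exceeds the limit, newly committed state is discarded until persistence catches up. This is an illustrative sketch only, not Rhino code; the class and method names are invented:

```python
class PendingSizeGuard:
    """Track not-yet-persisted state; discard new state while over the limit."""

    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.pending = 0
        self.alarm_raised = False

    def commit(self, size):
        """Buffer newly committed state; returns False if it was discarded."""
        if self.alarm_raised:
            return False  # over the limit: state is ignored, not buffered
        self.pending += size
        if self.pending > self.limit:
            self.alarm_raised = True  # raise pending-size-limit-reached
        return True

    def persisted(self, size):
        """Record state written to the database; may clear the alarm."""
        self.pending -= size
        if self.pending <= self.limit:
            self.alarm_raised = False  # volume back below the limit
```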
Alarm Type | rhino.cassandra-kv-store.scan-persist-time-threshold-reached |
---|---|
Level | WARNING |
Message | Pending transaction scan or persist time has exceeded the configured maximum thresholds, newly committed state is being discarded |
Description | The allowed pending transaction scan or persist time has exceeded the configured thresholds due to overload. State generated for new transactions will be ignored by the key/value store and not buffered for persisting until sufficient state has been persisted to reduce the load on the pending transaction scanner. |
Raised | When the pending transaction scan or persist times exceed the configured maximum thresholds. |
Cleared | When the pending transaction scan and persist times fall below the configured maximum thresholds again. |
Category: Cassandra Session Ownership Store
Alarms raised by the Cassandra session ownership store

Alarm Type | rhino.cassandra-session-ownership-store.connection-error |
---|---|
Level | CRITICAL |
Message | Connection error for persistence instance %s |
Description | The local database driver cannot connect to the configured persistence instance. |
Raised | When communication with the database fails, for example because no node is available to execute a query. |
Cleared | When the connection error is resolved. |

Alarm Type | rhino.cassandra-session-ownership-store.db-node-failure |
---|---|
Level | MAJOR |
Message | Connection lost to database node %s in persistence instance %s |
Description | The local database driver cannot connect to a database node. |
Raised | When communication with the database node fails. |
Cleared | When the connection error is resolved or the node is removed from the cluster. |

Alarm Type | rhino.cassandra-session-ownership-store.missing-keyspace |
---|---|
Level | CRITICAL |
Message | Database keyspace %s does not exist |
Description | A required database keyspace does not exist and runtime data definition updates are disallowed. |
Raised | When a required database keyspace is found to be missing. |
Cleared | When the database keyspace is detected to be present. |

Alarm Type | rhino.cassandra-session-ownership-store.missing-table |
---|---|
Level | CRITICAL |
Message | Database table %s does not exist |
Description | A required database table does not exist and runtime data definition updates are disallowed. |
Raised | When a required database table is found to be missing. |
Cleared | When the database table is detected to be present. |

Alarm Type | rhino.cassandra-session-ownership-store.no-nodes-available |
---|---|
Level | CRITICAL |
Message | No database node in any persistence instance is available to execute queries |
Description | All database nodes for all persistence instances have failed or are otherwise unreachable. |
Raised | When an attempted database query execution fails because no node is available to accept it in any persistence instance. |
Cleared | When one or more nodes become available to accept queries. |
Category: Cluster Clock Synchronisation
Alarms raised by the cluster clock synchronisation monitor

Alarm Type | rhino.monitoring.clocksync |
---|---|
Level | WARNING |
Message | Node %d is reporting a local clock time deviation beyond the maximum expected threshold of %dms |
Description | Another cluster node is reporting a system clock deviation relative to the local node beyond the maximum permitted threshold. The status of external processes maintaining the system clock on that node (e.g. NTP) should be checked. |
Raised | When a cluster node reports a local clock time deviation relative to the local node beyond the maximum permitted threshold. |
Cleared | When the clock deviation returns to a value at or below the threshold. |
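The raise condition is a per-node threshold comparison: a node's reported clock offset relative to the local node is checked against the permitted maximum. A minimal sketch of that check (names and numbers are invented, not Rhino's):

```python
def clock_deviation_exceeded(node_offsets_ms, max_deviation_ms):
    """Return the ids of nodes whose reported clock offset from the
    local node exceeds the permitted threshold (illustrative only)."""
    return [node for node, offset in node_offsets_ms.items()
            if abs(offset) > max_deviation_ms]

# Node 102's clock is 750ms behind the local node, beyond a 500ms threshold.
print(clock_deviation_exceeded({101: 20, 102: -750}, 500))
```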
Category: Clustering
Alarms raised by Rhino cluster state changes

Alarm Type | rhino.node-failure |
---|---|
Level | MAJOR |
Message | Node %d has left the cluster |
Description | A node left the cluster for some reason other than a management-initiated shutdown. |
Raised | When the cluster state listener detects a node has left the cluster unexpectedly. |
Cleared | When the failed node returns to the cluster. |
Category: Configuration Management
Alarms raised by the Rhino configuration manager

Alarm Type | rhino.config.activation-failure |
---|---|
Level | MAJOR |
Message | Error activating configuration from file %s. Configuration was replaced with defaults and old configuration file was moved to %s. |
Description | An error occurred while trying to activate the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file. |
Raised | When an exception occurs while activating a file-based configuration. |
Cleared | Never, must be cleared manually. |

Alarm Type | rhino.config.read-error |
---|---|
Level | MAJOR |
Message | Error reading configuration from file %s. Configuration was replaced with defaults and old configuration file was moved to %s. |
Description | An error occurred while trying to read the file-based configuration for the configuration type specified in the alarm instance. Rhino will use defaults from defaults.xml, move the broken configuration aside, and overwrite the config file. |
Raised | When an exception occurs while reading a configuration file. |
Cleared | Never, must be cleared manually. |

Alarm Type | rhino.config.save-error |
---|---|
Level | MAJOR |
Message | Error saving file based configuration: %s |
Description | An error occurred while trying to write the file-based configuration for the configuration type specified in the alarm instance. |
Raised | When an exception occurs while writing to a configuration file. |
Cleared | Never, must be cleared manually. |
Category: Database
Alarms raised during database communications

Alarm Type | rhino.database.connection-lost |
---|---|
Level | MAJOR |
Message | Connection to %s database failed: %s |
Description | Rhino requires a backing database for persistence of state for failure recovery purposes. If no connection to the database backend is available, state cannot be persisted. |
Raised | When the connection to a database backend is lost. |
Cleared | When the connection is restored. |

Alarm Type | rhino.database.no-persistence-config |
---|---|
Level | CRITICAL |
Message | Persistence resource config for %s has been removed |
Description | A persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
Raised | When an in-use persistence resource configuration is removed by a configuration update. |
Cleared | When the persistence resource configuration is restored. |

Alarm Type | rhino.database.no-persistence-instances |
---|---|
Level | CRITICAL |
Message | Persistence resource config for %s has no active persistence instances |
Description | A persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
Raised | When an in-use persistence resource configuration has no active persistence instances. |
Cleared | When at least one active persistence instance exists for the persistence resource configuration. |

Alarm Type | rhino.database.persistence-instance-instantiation-failure |
---|---|
Level | MAJOR |
Message | Unable to instantiate persistence instance %s for database %s |
Description | Rhino requires a backing database for persistence of state for failure recovery purposes. A persistence instance defines a connection to a database backend. If the persistence instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |
Raised | When a persistence instance configuration change occurs but instantiation of that persistence instance fails. |
Cleared | When a correct configuration is installed. |

Alarm Type | rhino.jdbc.persistence-instance-instantiation-failure |
---|---|
Level | MAJOR |
Message | Unable to instantiate persistence instance %s for JDBC configuration with JNDI name %s |
Description | A persistence instance defines the connection to the database backend. If the persistence instance cannot be instantiated then JDBC connections cannot be made. |
Raised | When a persistence instance configuration change occurs but instantiation of that persistence instance fails. |
Cleared | When a correct configuration is installed. |
Category: Event Router State
Alarms raised by event router state management

Alarm Type | rhino.state.convergence-failure |
---|---|
Level | MAJOR |
Message | State convergence failed for "%s". The component remains in the "%s" state. |
Description | A component reported an unexpected error during convergence. |
Raised | When a configuration change requiring a component to change state does not complete convergence due to an error. |
Cleared | When the component transitions to the configured desired state. |

Alarm Type | rhino.state.convergence-timeout |
---|---|
Level | MINOR |
Message | State convergence timed out for "%s". The component remains in the "%s" state. Convergence will be retried periodically until it reaches the desired state. |
Description | A component has not transitioned to the effective desired state after the timeout period. |
Raised | When a configuration change requiring a component to change state does not complete convergence in the expected time. |
Cleared | When the component transitions to the configured desired state. |

Alarm Type | rhino.state.raentity.active-reconfiguration |
---|---|
Level | MINOR |
Message | Resource adaptor entity "%s" does not support active reconfiguration. Configuration changes will not take effect until the resource adaptor entity is deactivated and reactivated |
Description | A resource adaptor entity is of a type that does not support active reconfiguration but has a desired state that contains configuration properties different from those in the actual state. |
Raised | When a configuration change requires a resource adaptor entity to be reconfigured but the resource adaptor does not support active reconfiguration. |
Cleared | When the resource adaptor entity is deactivated and convergence has updated the configuration properties. |

Alarm Type | rhino.state.unlicensed-raentity |
---|---|
Level | MAJOR |
Message | No valid license for resource adaptor entity "%s" found. The resource adaptor entity has not been activated. |
Description | A licensing problem was detected during resource adaptor entity activation. |
Raised | When a node attempts to transition a resource adaptor entity from an actual state of INACTIVE to an actual state of ACTIVE but the absence of a valid license prevents that transition. |
Cleared | When a valid license is installed. |

Alarm Type | rhino.state.unlicensed-service |
---|---|
Level | MAJOR |
Message | No valid license for service "%s" found. The service has not been activated. |
Description | A licensing problem was detected during service activation. |
Raised | When a node attempts to transition a service from an actual state of INACTIVE to an actual state of ACTIVATING but the absence of a valid license prevents that transition. |
Cleared | When a valid license is installed. |

Alarm Type | rhino.state.unlicensed-slee |
---|---|
Level | CRITICAL |
Message | No valid license for the SLEE found. The SLEE has not been started. |
Description | A licensing problem was detected during SLEE start. |
Raised | When a node attempts to transition its SLEE from an actual state of STOPPED to an actual state of STARTING but the absence of a valid license prevents that transition. |
Cleared | When a valid license is installed. |
Category: GroupRMI
Alarms raised by the GroupRMI server

Alarm Type | rhino.group-rmi.dangling-transaction |
---|---|
Level | WARNING |
Message | Group RMI invocation %s completed leaving an active transaction dangling: %s. Please report this bug to Metaswitch support. |
Description | A group RMI invocation completed without committing or rolling back a transaction that it started. The dangling transaction will be automatically rolled back by the group RMI server to prevent future issues, but these occurrences are software bugs that should be reported. |
Raised | When a group RMI invocation completes leaving an active transaction dangling. |
Cleared | Never, must be cleared manually. |
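The server's cleanup behaviour described above can be sketched as a guard around each invocation: if the invocation returns with its transaction still active, the guard rolls it back and flags the occurrence. This is an illustrative Python sketch, not the Group RMI server's actual implementation; the Tx class and function names are invented:

```python
class Tx:
    """Minimal stand-in for a transaction handle."""

    def __init__(self):
        self.active = True

    def commit(self):
        self.active = False

    def rollback(self):
        self.active = False


def guard_invocation(tx, body):
    """Run body(tx); roll back any transaction it left neither
    committed nor rolled back. Returns (result, was_dangling)."""
    result = body(tx)
    dangling = tx.active
    if dangling:
        tx.rollback()  # automatic cleanup; the occurrence is still a bug
    return result, dangling
```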
Category: Key/Value Store
Alarms raised by key/value store persistence resource managers

Alarm Type | rhino.kv-store.no-persistence-config |
---|---|
Level | CRITICAL |
Message | Persistence resource config for %s has been removed |
Description | A persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
Raised | When an in-use persistence resource configuration is removed by a configuration update. |
Cleared | When the persistence resource configuration is restored. |

Alarm Type | rhino.kv-store.no-persistence-instances |
---|---|
Level | CRITICAL |
Message | Persistence resource config for %s has no active persistence instances |
Description | A persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
Raised | When an in-use persistence resource configuration has no active persistence instances. |
Cleared | When at least one active persistence instance exists for the persistence resource configuration. |

Alarm Type | rhino.kv-store.persistence-instance-instantiation-failure |
---|---|
Level | MAJOR |
Message | Unable to instantiate persistence instance %s for key/value store %s |
Description | A persistence instance used by a key/value store cannot be instantiated. If the persistence instance cannot be instantiated then that connection cannot be made and state cannot be persisted to that instance. |
Raised | When a persistence instance configuration change occurs but instantiation of that persistence instance fails. |
Cleared | When a correct configuration is installed. |
Category: Licensing
Alarms raised by Rhino licensing

Alarm Type | rhino.license.expired |
---|---|
Level | MAJOR |
Message | License with serial "%s" has expired |
Description | A license installed in Rhino has passed its expiry time. |
Raised | When a license expires and there is no superseding license installed. |
Cleared | When the license is removed or a superseding license is installed. |

Alarm Type | rhino.license.over-licensed-capacity |
---|---|
Level | MAJOR |
Message | Over licensed capacity for function "%s". |
Description | The work done by a function exceeds licensed capacity. |
Raised | When the amount of work processed by the named function exceeds the licensed capacity. |
Cleared | When the amount of work processed by the function becomes less than or equal to the licensed capacity. |

Alarm Type | rhino.license.over-limit |
---|---|
Level | MAJOR |
Message | Rate limiter throttling active, throttled to %d events/second |
Description | Rate limiter throttling is active. This throttling, and hence this alarm, only happens in SDK versions of Rhino, not production versions. |
Raised | When there is more incoming work than the licensed limit allows, causing Rhino to start rejecting some. |
Cleared | When the total input rate (both accepted and rejected work) drops below the licensed limit. |
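The raise/clear condition for the over-limit alarm can be illustrated with a simple model: the licensed limit caps the accepted rate, and the alarm is active while any work is being rejected. The function and figures below are illustrative only, not Rhino's limiter:

```python
def throttle(input_rate, licensed_limit):
    """Model one second of input against a licensed events/second limit.

    Returns (accepted, rejected, alarm_active): work above the limit is
    rejected, and the alarm stays active while anything is rejected.
    """
    accepted = min(input_rate, licensed_limit)
    rejected = input_rate - accepted
    return accepted, rejected, rejected > 0

# 150 events/s against a 100 events/s limit: 50 rejected, alarm active.
print(throttle(150, 100))
```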
Alarm Type |
rhino.license.partially-licensed-host |
---|---|
Level |
MINOR |
Message |
Host "%s" is not fully licensed. Not all hardware addresses on this host match those licensed. Please request a new license for host "%s". |
Description |
The hardware addresses listed in a host-based license only partially match those on the host. |
Raised |
When a host-based license with invalid host addresses is installed. |
Cleared |
When the license is removed, or a superseding license is installed. |
Alarm Type |
rhino.license.pending-expiry |
---|---|
Level |
MAJOR |
Message |
License with serial "%s" is due to expire on %s |
Description |
A license installed in Rhino is within seven days of its expiry time. |
Raised |
Seven days before a license will expire and there is no superseding license installed. |
Cleared |
When the license expires, the license is removed, or a superseding license is installed. |
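The seven-day warning window described above is simple timestamp arithmetic. The sketch below is illustrative only; the function names are assumptions, not Rhino APIs:

```python
from datetime import datetime, timedelta

def pending_expiry_raise_time(expiry: datetime) -> datetime:
    """Time at which rhino.license.pending-expiry would be raised:
    seven days before the license's expiry time."""
    return expiry - timedelta(days=7)

def should_raise_pending_expiry(now: datetime, expiry: datetime,
                                superseded: bool = False) -> bool:
    # Raised seven days before expiry, but only while the license is
    # still valid and no superseding license is installed. Once the
    # license actually expires, rhino.license.expired takes over.
    return (not superseded
            and pending_expiry_raise_time(expiry) <= now < expiry)
```
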
Alarm Type |
rhino.license.unlicensed-function |
---|---|
Level |
MAJOR |
Message |
There are no valid licenses installed for function "%s" and version "%s". |
Description |
A particular function is not licensed. |
Raised |
When a unit of an unlicensed function is requested. |
Cleared |
When a license that covers the function is installed and another unit is requested. |
Alarm Type |
rhino.license.unlicensed-host |
---|---|
Level |
MINOR |
Message |
"%s" is not licensed. Hardware addresses on this host did not match those licensed, or hostname has changed. Please request a new license for host "%s". |
Description |
The hardware addresses listed in a host-based license do not match those on the host. |
Raised |
When a host-based license with invalid host addresses is installed. |
Cleared |
When the license is removed, or a superseding license is installed. |
Alarm Type |
rhino.license.unlicensed-rhino |
---|---|
Level |
MAJOR |
Message |
Rhino platform is no longer licensed |
Description |
Rhino does not have a valid license installed. |
Raised |
When a license expires or is removed leaving Rhino in an unlicensed state. |
Cleared |
When an appropriate license is installed. |
Category: Limiting (Alarms raised by Rhino limiting) |
Alarm Type |
rhino.limiting.below-negative-capacity |
---|---|
Level |
WARNING |
Message |
Token count in rate limiter "%s" capped at negative saturation point on node %d. Too much work has been forced. Alarm will clear once token count >= 0. |
Description |
A rate limiter is below negative capacity. |
Raised |
By a rate limiter when a very large number of units have been forcibly used and the internal token counter has reached the most negative representable value (-2,147,483,648). |
Cleared |
When the token count becomes greater than or equal to zero. |
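The saturation behaviour described above can be sketched as a counter that clamps at the 32-bit minimum rather than wrapping around. This is an illustrative sketch only; the class and method names are assumptions, not Rhino APIs:

```python
INT32_MIN = -2_147_483_648  # the saturation point named in the alarm text

class TokenCounter:
    """Sketch of a rate limiter token counter that saturates instead of
    overflowing when too much work is forced through."""
    def __init__(self, tokens: int = 0):
        self.tokens = tokens

    def force_use(self, units: int) -> None:
        # Forced work may drive the count negative; clamp at INT32_MIN
        # rather than wrapping. The alarm is raised while clamped.
        self.tokens = max(self.tokens - units, INT32_MIN)

    def alarm_cleared(self) -> bool:
        # Per the alarm text, the alarm clears once the count is >= 0.
        return self.tokens >= 0
```
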
Alarm Type |
rhino.limiting.stat-limiter-misconfigured |
---|---|
Level |
MAJOR |
Message |
Stat limiter "%s" is misconfigured: %s. All unit requests will be allowed by this limiter until the error is resolved. |
Description |
A stat limiter is misconfigured. |
Raised |
By a stat limiter that has been asked for one or more units and has been unable to find the configured parameter set or statistic name. |
Cleared |
When the stat limiter is reconfigured or the configured parameter set that was missing is deployed. |
Category: Logging (Alarms raised by Rhino logging) |
Alarm Type |
rhino.logging.appender-error |
---|---|
Level |
MAJOR |
Message |
An error occurred logging to an appender: %s |
Description |
An appender has thrown an exception when attempting to pass log messages from a logger to it. |
Raised |
When an appender throws an AppenderLoggingException when a logger tries to log to it. |
Cleared |
When the problem with the given appender has been resolved and the logging configuration is updated. |
Category: M-lets Startup (Alarms raised by the M-let starter) |
Alarm Type |
rhino.mlet.loader-failure |
---|---|
Level |
MAJOR |
Message |
Error registering MLetLoader MBean |
Description |
The M-Let starter component could not register itself with the platform MBean server. This normally indicates a serious JVM misconfiguration. |
Raised |
During Rhino startup if an error occurred registering the m-let loader component with the MBean server. |
Cleared |
Never, must be cleared manually or Rhino restarted. |
Alarm Type |
rhino.mlet.registration-failure |
---|---|
Level |
MINOR |
Message |
Could not create or register MLet: %s |
Description |
The M-Let starter component could not register an MBean for a configured m-let. This normally indicates an error in the m-let configuration file. |
Raised |
During Rhino startup if an error occurred starting a configured m-let. |
Cleared |
Never, must be cleared manually or Rhino restarted with updated configuration. |
Category: Pool Maintenance Provider (Alarms raised by pool maintenance provider persistence resource managers) |
Alarm Type |
rhino.pool-maintenance-provider.no-persistence-config |
---|---|
Level |
CRITICAL |
Message |
Persistence resource config has been removed |
Description |
The persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
Raised |
When an in-use persistence resource configuration is removed by a configuration update. |
Cleared |
When the persistence resource configuration is restored. |
Alarm Type |
rhino.pool-maintenance-provider.no-persistence-instances |
---|---|
Level |
CRITICAL |
Message |
Persistence resource config has no active persistence instances |
Description |
The persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
Raised |
When an in-use persistence resource configuration has no active persistence instances. |
Cleared |
When at least one active persistence instance exists for the persistence resource configuration. |
Alarm Type |
rhino.pool-maintenance-provider.persistence-instance-instantiation-failure |
---|---|
Level |
MAJOR |
Message |
Unable to instantiate persistence instance %s |
Description |
A persistence instance used by the pool-maintenance-provider cannot be instantiated. If the persistence instance cannot be instantiated, no connection can be made to it and state cannot be persisted to that instance. |
Raised |
When a persistence instance configuration change occurs but instantiation of that persistence instance fails. |
Cleared |
When a correct configuration is installed. |
Alarm Type |
rhino.rhino.pool-maintenance-provider.invalid-node-update-time |
---|---|
Level |
WARNING |
Message |
Node %s is reporting heartbeat timestamps that exceed the maximum permitted delta from current time; current delta is %sms in the %s |
Description |
A pool node is refreshing its heartbeat timestamps but using a clock time that exceeds the permitted delta from this node’s clock time. |
Raised |
When a node’s heartbeat timestamps are noticed to exceed the permitted time delta from this node’s clock time for longer than the configured grace period. |
Cleared |
When the node’s timestamps no longer exceed the permitted time delta. |
Alarm Type |
rhino.rhino.pool-maintenance-provider.missing-heartbeat |
---|---|
Level |
MAJOR |
Message |
Expected to find my node with a heartbeat timestamp one of %s but found a timestamp of %s instead |
Description |
An unexpected heartbeat timestamp for this node was encountered when querying the heartbeat table. This could mean, for example, that multiple pool nodes are configured with the same node id. |
Raised |
When an unexpected heartbeat timestamp for this node is encountered after a heartbeat table query. |
Cleared |
When an expected timestamp is encountered. |
Category: REM Startup (Alarms raised by the embedded REM starter) |
Alarm Type |
rhino.rem.missing |
---|---|
Level |
MINOR |
Message |
Rhino Element Manager classes not found, embedded REM is disabled. |
Description |
This version of Rhino is supposed to contain an embedded instance of REM but it was not found, most likely due to a packaging error. |
Raised |
During Rhino startup if the classes could not be found to start the embedded REM. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.rem.startup |
---|---|
Level |
MINOR |
Message |
Could not start embedded Rhino Element Manager |
Description |
There was an unexpected problem while starting the embedded REM. This could be because of a port conflict or packaging problem. |
Raised |
During Rhino startup if an error occurred starting the embedded REM. |
Cleared |
Never, must be cleared manually or Rhino restarted with updated configuration. |
Category: Runtime Environment (Alarms related to the runtime environment) |
Alarm Type |
rhino.runtime.long-filenames-unsupported |
---|---|
Level |
WARNING |
Message |
Filenames with a length of %s characters are unsupported on this filesystem. Unexpected deployment errors may occur as a result |
Description |
Filenames with the maximum length expected by Rhino are unsupported on this filesystem. Unexpected deployment errors may occur as a result. |
Raised |
During Rhino startup if the long filename check fails. |
Cleared |
Never, must be cleared manually or Rhino restarted after being installed on a filesystem supporting long filenames. |
Alarm Type |
rhino.runtime.slee |
---|---|
Level |
CRITICAL |
Message |
SLEE event-routing functions failed to start after node restart |
Description |
SLEE event-routing functions failed to start after a node restart. |
Raised |
During Rhino startup if SLEE event-routing functions fail to restart. |
Cleared |
Never, must be cleared manually or the node restarted. |
Alarm Type |
rhino.runtime.unsupported.jvm |
---|---|
Level |
WARNING |
Message |
This JVM (%s) is not supported. Supported JVMs are: %s |
Description |
This JVM is not a supported JVM. |
Raised |
During Rhino startup if an unsupported JVM was detected. |
Cleared |
Never, must be cleared manually or Rhino restarted with a supported JVM. |
Category: SAS Facility (Alarms raised by the Rhino SAS facility) |
Alarm Type |
rhino.sas.connection.lost |
---|---|
Level |
MAJOR |
Message |
Connection to SAS server at %s:%d is down |
Description |
The connection to the SAS server is down; the SAS client is attempting to reconnect. |
Raised |
When the SAS client loses its connection to the SAS server. |
Cleared |
When the connection is re-established. |
Alarm Type |
rhino.sas.queue.full |
---|---|
Level |
WARNING |
Message |
SAS message queue is full |
Description |
The SAS message queue is full. Some events have not been reported to SAS. |
Raised |
When the SAS facility's outgoing message queue is full. |
Cleared |
When the queue has not been full for at least sas.queue_full_interval. |
Category: SLEE State (Alarms raised by SLEE state management) |
Alarm Type |
rhino.state.slee-start |
---|---|
Level |
CRITICAL |
Message |
The SLEE failed to start successfully. |
Description |
An unexpected exception was caught during SLEE start. |
Raised |
When a node attempts to transition its SLEE from an actual state of STOPPED to an actual state of STARTING, but an unexpected exception occurred while fulfilling that request. |
Cleared |
After the desired state of the SLEE is reset to STOPPED. |
Category: SNMP (Alarms raised by Rhino SNMP) |
Alarm Type |
rhino.snmp.bind-failure |
---|---|
Level |
MAJOR |
Message |
The SNMP agent could not be started on node %d: no addresses were successfully bound. |
Description |
The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If no ports could be bound, the SNMP agent cannot process any SNMP requests. |
Raised |
When the SNMP Agent attempts to start listening for requests, but no port in the configured range on any configured interface could be used. |
Cleared |
When the SNMP Agent is stopped. |
Alarm Type |
rhino.snmp.duplicate-counter-mapping |
---|---|
Level |
WARNING |
Message |
Duplicate counter mappings in parameter set type %s |
Description |
Multiple counters in the parameter set type configuration map to the same index. The parameter set type mappings will remain inactive until the conflict is resolved. |
Raised |
When an in-use parameter set type has a configuration with duplicate counter mappings. |
Cleared |
When the conflict is resolved, either by changing the relevant counter mappings, or if the parameter set type is removed from use. |
Alarm Type |
rhino.snmp.duplicate-oid-mapping |
---|---|
Level |
WARNING |
Message |
Duplicate parameter set type mapping configurations for OID %s |
Description |
Multiple parameter set type configurations for in-use parameter set types map to the same OID. All parameter set type mappings will remain inactive until the conflict is resolved. |
Raised |
When multiple in-use parameter set types have configurations that map to the same OID. |
Cleared |
When the conflict is resolved, either by changing the OID mappings in the relevant parameter set type configurations, or if a parameter set type in conflict is removed from use. |
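The OID conflict described above amounts to finding OIDs claimed by more than one parameter set type. A minimal sketch, assuming mappings are represented as a simple name-to-OID dictionary (the representation is an assumption, not Rhino's internal model):

```python
from collections import Counter

def duplicate_oids(mappings: dict) -> set:
    """mappings: parameter set type name -> OID string.
    Returns the OIDs that more than one in-use parameter set type
    maps to, i.e. the conflicts this alarm reports."""
    counts = Counter(mappings.values())
    return {oid for oid, n in counts.items() if n > 1}
```

While this set is non-empty, every parameter set type mapping remains inactive; resolving the conflict (or removing one of the conflicting types from use) clears the alarm.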
Alarm Type |
rhino.snmp.general-failure |
---|---|
Level |
MINOR |
Message |
The SNMP agent encountered an error during startup: %s |
Description |
This is a catchall alarm for unexpected failures during agent startup. If an unexpected failure occurs, the state of the SNMP agent is unpredictable and requests may not be successfully processed. |
Raised |
When the SNMP Agent attempts to start listening for requests, but there is an unexpected failure not covered by other alarms. |
Cleared |
When the SNMP Agent is stopped. |
Alarm Type |
rhino.snmp.no-bind-addresses |
---|---|
Level |
MAJOR |
Message |
The SNMP agent could not be started on node %d: no suitable bind addresses available. |
Description |
The SNMP agent listens for requests received on all network interfaces that match the requested SNMP configuration. If no suitable interfaces can be found that match the requested configuration, then the SNMP agent cannot process any SNMP requests. |
Raised |
When the SNMP Agent attempts to start listening for requests, but no suitable network interface addresses can be found to bind to. |
Cleared |
When the SNMP Agent is stopped. |
Alarm Type |
rhino.snmp.notification-address-failure |
---|---|
Level |
MAJOR |
Message |
Failed to create notification target for address "%s". |
Description |
This alarm represents a failure to determine an address from the notification target configuration. This can occur if the notification hostname is not resolvable, or if the specified hostname is not parseable. |
Raised |
During SNMP agent start if a notification target address cannot be determined (e.g. due to a hostname resolution failing). |
Cleared |
When the SNMP Agent is stopped. |
Alarm Type |
rhino.snmp.partial-failure |
---|---|
Level |
MINOR |
Message |
The SNMP agent failed to bind to the following addresses: %s |
Description |
The SNMP agent attempts to bind a UDP port on each configured SNMP interface to receive requests. If this succeeds on some (but not all) interfaces, the SNMP agent can only process requests received via the interfaces that succeeded. |
Raised |
When the SNMP Agent attempts to start listening for requests, and only some of the configured interfaces successfully bound a UDP port. |
Cleared |
When the SNMP Agent is stopped. |
Category: Scattercast Management (Alarms raised by Rhino scattercast management operations) |
Alarm Type |
rhino.scattercast.update-reboot-required |
---|---|
Level |
CRITICAL |
Message |
Scattercast endpoints have been updated. A cluster reboot is required to apply the update. An automatic reboot has been triggered, Manual intervention required if the reboot fails. |
Description |
A cluster reboot is needed to make the scattercast update active. |
Raised |
When scattercast endpoints are updated. |
Cleared |
On node reboot. |
Category: Service State (Alarms raised by service state management) |
Alarm Type |
rhino.state.service-activation |
---|---|
Level |
MAJOR |
Message |
Service "%s" failed to activate successfully. |
Description |
The service threw an exception during service activation, or an unexpected exception occurred while attempting to activate the service. |
Raised |
When a node attempts to transition a service from an actual state of INACTIVE to an actual state of ACTIVATING but the service rejected the activation request or an unexpected exception occurred while fulfilling that request. |
Cleared |
After the desired state of the service is reset to INACTIVE. |
Category: Session Ownership Store (Alarms raised by session ownership store persistence resource managers) |
Alarm Type |
rhino.session-ownership-store.no-persistence-config |
---|---|
Level |
CRITICAL |
Message |
Persistence resource config has been removed |
Description |
The persistence resource configuration referenced in rhino-config.xml has been removed at runtime. |
Raised |
When an in-use persistence resource configuration is removed by a configuration update. |
Cleared |
When the persistence resource configuration is restored. |
Alarm Type |
rhino.session-ownership-store.no-persistence-instances |
---|---|
Level |
CRITICAL |
Message |
Persistence resource config has no active persistence instances |
Description |
The persistence resource configuration referenced in rhino-config.xml has no persistence instances configured, or no configured persistence instances could be instantiated. |
Raised |
When an in-use persistence resource configuration has no active persistence instances. |
Cleared |
When at least one active persistence instance exists for the persistence resource configuration. |
Alarm Type |
rhino.session-ownership-store.persistence-instance-instantiation-failure |
---|---|
Level |
MAJOR |
Message |
Unable to instantiate persistence instance %s |
Description |
A persistence instance used by the session ownership store cannot be instantiated. If the persistence instance cannot be instantiated, no connection can be made to it and state cannot be persisted to that instance. |
Raised |
When a persistence instance configuration change occurs but instantiation of that persistence instance fails. |
Cleared |
When a correct configuration is installed. |
Category: Threshold Rules (Alarms raised by the threshold alarm rule processor) |
Alarm Type |
rhino.threshold-rules.rule-failure |
---|---|
Level |
WARNING |
Message |
Threshold rule %s trigger or reset condition failed to run |
Description |
A threshold rule's trigger or reset condition failed to run. |
Raised |
When a threshold rule condition cannot be evaluated, for example, because it refers to a statistic that does not exist. |
Cleared |
When the threshold rule condition is corrected. |
Alarm Type |
rhino.threshold-rules.unknown-parameter-set |
---|---|
Level |
WARNING |
Message |
Threshold rule %s refers to unknown statistics parameter set '%s' |
Description |
A threshold rule trigger or reset rule refers to an unknown statistics parameter set. |
Raised |
When a threshold rule condition cannot be evaluated because it refers to a statistics parameter set that does not exist. |
Cleared |
When the threshold rule condition is corrected. |
Category: Watchdog (Alarms raised by the watchdog) |
Alarm Type |
rhino.watchdog.forward-timewarp |
---|---|
Level |
WARNING |
Message |
Forward timewarp of %sms detected at %s |
Description |
A forward timewarp was detected. |
Raised |
When the system clock is detected to have progressed by an amount exceeding the sum of the watchdog check interval and the maximum pause margin. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.watchdog.gc |
---|---|
Level |
CRITICAL |
Message |
Long JVM %s GC of %sms detected |
Description |
A long JVM garbage collector pause has been detected. |
Raised |
When the Java Virtual Machine performs a garbage collection which stops all application threads for longer than the configured acceptable threshold. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.watchdog.no-exit |
---|---|
Level |
CRITICAL |
Message |
System property watchdog.no_exit is set, watchdog will be terminated rather than killing the node if a failed watchdog condition occurs |
Description |
The system property watchdog.no_exit is set, enabling override of default node termination behaviour on failed watchdog conditions. This can cause catastrophic results and should never be used. |
Raised |
When the watchdog.no_exit system property is set. |
Cleared |
Never, must be cleared manually. |
Alarm Type |
rhino.watchdog.reverse-timewarp |
---|---|
Level |
WARNING |
Message |
Reverse timewarp of %sms detected at %s |
Description |
A reverse timewarp was detected. |
Raised |
When the system clock is detected to have progressed by an amount less than the difference between the watchdog check interval and the reverse timewarp margin. |
Cleared |
Never, must be cleared manually. |
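The forward and reverse timewarp conditions described above compare the observed system-clock progress between two watchdog checks against the check interval plus or minus a margin. The sketch below is illustrative only; the function and parameter names are assumptions, not Rhino configuration properties:

```python
def classify_timewarp(elapsed_ms: int, check_interval_ms: int,
                      pause_margin_ms: int, reverse_margin_ms: int):
    """elapsed_ms is the system-clock progress observed between two
    consecutive watchdog checks. Returns which timewarp alarm, if any,
    the observation would raise."""
    # Forward: clock progressed by more than the check interval plus
    # the maximum pause margin (clock jumped ahead, or the JVM stalled).
    if elapsed_ms > check_interval_ms + pause_margin_ms:
        return "forward-timewarp"
    # Reverse: clock progressed by less than the check interval minus
    # the reverse timewarp margin (clock was stepped backwards).
    if elapsed_ms < check_interval_ms - reverse_margin_ms:
        return "reverse-timewarp"
    return None
```
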