3.2.12 - 2024-08-13
Improvements
- Added a new watchdog, the GC Watchdog, to monitor for adverse GC events. It monitors GC activity and raises an alarm if a long GC pause is detected. Enabled only in pool mode. (#1713035)
Bug fixes
- MemDB migrating locks still present for removed keyspaces are now excluded from the extracted state stream used to resynchronise booting cluster nodes, so they don't cause a `NullPointerException` during state installation on the booting node. (#1685609)
3.2.11 - 2024-07-12
Improvements
- Added commented-out properties to Rhino start scripts and security properties to simplify enabling X.509 Certificate revocation checks. (#1635614)
Bug fixes
- Added missing netty classes to the cassandra support jar, required when using TLS connections to the database. (#1641997)
3.2.10 - 2024-06-16
Bug fixes
- Fixed a threshold rule processor issue that caused rules using the same counter statistic in different modes (absolute vs. delta) to not evaluate correctly for one of the modes. Also fixed a small memory leak when threshold rules were deleted. (#1620902)
- Fixed an issue with a missing client utilities class that prevented creation of declarative exports. (#1535946)
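The absolute/delta distinction above is worth spelling out: an absolute-mode rule compares the counter's current value against the threshold, while a delta-mode rule compares the change since the previous sample. A minimal sketch of the two modes (the class and method names here are illustrative, not Rhino's rule processor API):

```java
// Illustrates the two evaluation modes for a counter threshold rule.
// Hypothetical names; Rhino's actual threshold rule processor differs.
public class ThresholdModes {

    /** Absolute mode: trigger when the current counter value reaches the threshold. */
    public static boolean absoluteTrigger(long current, long threshold) {
        return current >= threshold;
    }

    /** Delta mode: trigger when the change since the previous sample reaches the threshold. */
    public static boolean deltaTrigger(long previous, long current, long threshold) {
        return (current - previous) >= threshold;
    }
}
```

Note that delta mode needs per-rule state (the previous sample) while absolute mode does not, so two rules watching the same counter in different modes must keep their evaluation state separate.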
3.2.9 - 2024-05-30
Improvements
- Rhino now recognises deployable units signed using the RSA and ECDSA algorithms (in addition to the DSA algorithm it already supported). (#1570188)
Bug fixes
- The Rhino console now supports the escape sequences generated by PuTTY's application numpad mode. (#1316710)
- Disabled JVM assertions in a PostgreSQL JDBC driver class to work around a rarely-occurring bug in the driver code. (#1394784)
Dependency Updates
- Updated the org.postgresql:postgresql library to 42.7.2.
3.2.8 - 2024-02-29
Major changes
- This release of Rhino introduces support for Java 17 and has been tested with the Microsoft build of OpenJDK 17 (https://learn.microsoft.com/en-us/java/openjdk/download#openjdk-17). Existing support for Java 11 is unchanged.
Improvements
- Optimised several internal Rhino debugging code paths. (#1087376)
3.2.7 - 2024-01-24
Improvements
- Removed a source of contention when enqueueing SAS events for transmission. (#1053165)
- Removed repeated logging of a duplicate address warning in rhino-stats when multiple Rhino nodes report the same JMX IP address, which can occur when Rhino nodes are running in containers. The warning is now reported only once. (#830359)
Bug fixes
- Fixed the rhino-stats client so that CGIN stats can also be collected when the wrong version is specified in the file defining the stats to be collected. (#1067916)
- Fixed a bug where the rhino-stats client could not collect stats if the client was on RVT version 4.1 while the server was on 4.0. (#1068980)
- Fixed a bug where cancelling a remote timer that was set on a recently restarted pool node could fail. (#1127179)
Dependency Updates
- Updated the netty libraries to 4.1.103.Final.
3.2.6 - 2023-11-13
Improvements
- The stats collection feature can now continue to collect stats even if the wrong versions are specified in the file defining the stats to be collected. This means that MAG nodes no longer need to be restarted after upgrades of RVT nodes. (#751400)
Dependency Updates
- Updated the SLEE Annotations dependency to 3.1.4.
3.2.5 - 2023-10-12
Major changes
- Rhino SDK and Production now use the Java G1 garbage collector by default. (#443219)
  - The CMS collector is deprecated in Java 11 and is no longer recommended.
  - The production installer now includes a question to allow G1 or CMS to be selected during installation.
  - In Rhino SDK, the collector options can be customized in `config/jvm_args`.
  - In Rhino Production, the collector options can be customized in `node-xxx/config/options`.
Bug fixes
- Fixed a `rhino-stats` bug where quiet mode (`-q`) used in conjunction with rolling stats output mode (`-o`) would prevent any stats being written to the output file. Also improved the command line help description for rolling file output mode. (#826721)
- Fixed a bug where a lock manager statistics parameter set may be unavailable for reading statistics after undeploying and then redeploying the same SLEE service or profile specification. (#923567)
- Fixed a rare race condition where undeploying and then redeploying a SLEE service or profile specification could result in the associated lock manager incorrectly removing locks. (#915873)
Improvements
- Added tolerance to the rhino-stats collection so that available stats will be collected even if any of the selected stats are unavailable. (#751400)
- Added a 5s timeout to RMI client connection health checks as a workaround for rare connection hangs in JDK library code. (#835614)
- A maximum file size in terms of code points is now enforced for declarative YAML documents. The default is 10,485,760, which may be increased or decreased using the system property `rhino.config.yaml_code_point_limit`. (#920468)
- When running in pool mode, Rhino nodes will automatically restart if an OutOfMemoryError is encountered. (#751195)
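The YAML code-point limit above can be pictured as a simple pre-parse guard. This sketch is illustrative: only the `rhino.config.yaml_code_point_limit` property name and the 10,485,760 default come from this changelog; the class and method are hypothetical.

```java
// Sketch of enforcing a code-point budget on a YAML document before parsing.
// Only the property name and default value come from the changelog.
public class YamlSizeGuard {

    static final int DEFAULT_LIMIT = 10_485_760;

    /** Reads the limit from the system property, falling back to the default. */
    public static int configuredLimit() {
        return Integer.getInteger("rhino.config.yaml_code_point_limit", DEFAULT_LIMIT);
    }

    /** Rejects documents whose code-point count exceeds the configured limit. */
    public static void check(String document) {
        int codePoints = document.codePointCount(0, document.length());
        if (codePoints > configuredLimit()) {
            throw new IllegalArgumentException("YAML document has " + codePoints
                + " code points, limit is " + configuredLimit());
        }
    }
}
```

Counting code points rather than `char`s (or bytes) means the limit is independent of how many surrogate pairs or multi-byte characters the document contains.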
Dependencies
- Updated org.yaml:snakeyaml to 2.0.
- Updated org.postgresql:postgresql to 42.6.0.
- Updated com.google.protobuf:protobuf-java to 3.19.6.
- Updated io.grpc libraries to 1.58.0.
- Updated org.xerial.snappy:snappy-java to 1.1.10.5.
3.2.3 - 2023-07-17
Major changes
- When running in pool mode, management functionality that is not relevant for this mode has been disabled. This disallows management clients from specifying foreign node IDs in operations where it would serve no useful purpose or be misleading, for example: trying to obtain a `NodeHousekeepingMBean` for a node other than the one the client is connected to, setting or querying per-node activation state for a foreign node, shutting down or restarting foreign nodes, etc. In all these cases, the foreign node may be a pool cluster member but will not be affected by any of these operations originating on a different pool node. Additional checks have also been added to declarative config import to prevent per-node activation state being imported for foreign nodes while running in pool mode. The rhino-console commands `state`, `getclusterstate`, and `waitonstate` have been updated to support pool mode clusters. (#222043)
- Removed the PoolMembershipFacility API in favour of an enhanced ClusterStateChangeFacility. Adding pool mode support directly to the ClusterStateChangeFacility allows resource adaptors that depend on the types of notification generated by this facility to continue working with approximately the same behaviour, without any code changes, when Rhino is running in pool mode. (#370019, #368243)
New features
- Support for clients obtaining Rhino statistics using the direct connection mode is now optional and can be disabled if desired (previously it was always enabled). The network interface that direct connection sockets are bound to in Rhino can now also be configured in rhino-config.xml. New questions have been added to the production installer to determine the initial deployment configuration, with the install defaults being the same as set in previous Rhino versions. (#665937)
- Support for allowing profile snapshots by management clients is now optional and can be disabled if desired (previously it was always enabled). Configuration for the snapshot server has moved from a runtime system property to rhino-config.xml. New configuration options for the snapshot server have been added, including allowing the network interface that server sockets are bound to to be specified. New questions have been added to the production installer to determine the initial deployment configuration, with the install defaults being the same as set in previous Rhino versions. (#665934)
Bug fixes
- Fixed a rare race condition that allowed resource adaptor entities to be created by convergence tasks (as a result of declarative config import) while the actual SLEE state was `starting`, which later led to an `AssertionError` when the actual state of the SLEE changed again. (#822565)
- Fixed an issue with `rhino-export -R` producing an invalid export when writing to a destination path containing multiple path segments. (#474399)
- Fixed a bug where rhino-console would ignore the password command line parameter (-w) if it appeared before the username (-u). (#731287)
- Fixed a bug in rhino-console where the port command line argument (-p) would be ignored if the `rhino.remote.serverlist` property is defined in `client.properties`. (#366785)
- Fixed a memory leak when direct stats connections between cluster nodes, used for reporting local stats to the session-controlling node, are closed after a stats session ends. (#500827)
- Fixed precedence of the rhino-stats command line connection options for host (-h) and port (-p) so they override any serverlist or adhoc addresses specified in the `client.properties` file. (#270120)
- The `rhino.remote.serverlist` property is no longer used by `rhino-stats` when creating connections to pool nodes. (#319909)
- When running in pool mode, the `rhino-stats` client will no longer make multiple connections to the same JMX endpoint address when multiple nodes report using the same address. Instead, duplicate addresses will be ignored. (#220533)
- Fixed a bug in `rhino-stats` where identifiers set with the `-I` option were not quoted in CSV output despite containing commas. (#441687)
- The `rhino-stats` client will no longer display the previous value for a stat if there is no new value for the stat in the latest update from Rhino. (#384080)
- Fixed a `NullPointerException` that occurred in the `rhino-stats` GUI when double-clicking on namespace grouping nodes in the left-hand side parameter set tree. (#646650)
- Fixed a `NullPointerException` that occurred in the `rhino-stats` GUI if a parameter set with no parameter set type is selected in the "Select Parameter Set" dialog box (when adding new statistics to an existing graph). Also fixed the enabled/disabled behaviour of the "OK" button in this dialog box so that it is only enabled when at least one statistic is selected. (#646742)
- Fixed some session ownership store alarm names to refer to session ownership rather than the key/value store. Renamed the "table-missing" alarm names for the key/value store, session ownership store, and pool maintenance provider to "missing-table" for consistency with the "missing-keyspace" alarms. (#376162)
- Missing keyspace or table alarms raised as a result of runtime query failures will now be correctly cleared when the problem is resolved. The alarm instance identifier of missing table alarms raised as a result of runtime query failures has also been corrected (they were not being qualified with the associated keyspace name). (#511248)
- Added a schema-change listener to the Cassandra database sessions used for persisting resources (key/value store, session ownership store, and pool maintenance provider). This means that if a database keyspace or table used by one of these resources is manually dropped from Cassandra, the resource will notice immediately and take appropriate action: either recreate the missing keyspace and/or tables if data definition updates are allowed, or raise an alarm if not. As a side effect, this fixes related issues such as the pool metadata table not being repopulated if the metadata table is dropped and recreated. (#530772)
- Fixed the cleanup of replicated storage keyspaces when a resource adaptor entity using them is removed while the keyspace is replicated using the key/value store. This bug also caused a discrepancy between nodes about which Cassandra tables should and should not exist. (#721572)
- The alarms raised by persisting resources (key/value store, session ownership store, and pool maintenance provider) are now included in the Rhino alarm catalog. (#378603)
- Exceptions from the com.opencloud.rhino.resource package thrown by subtypes of PersistingResourceMBean are now rendered correctly by remote clients. (#386568)
- Additional exceptions that can be thrown by gRPC services (remote timers, pool mode message facility) are now caught and handled appropriately. (#370127)
- Fixed a `NullPointerException` that could occur during convergence if a resource adaptor entity was removed while the SLEE was starting.
- Added missing javadoc in the install packages for the key/value store, session ownership store, and pool maintenance subsystem. (#426216)
- Fixed a bug where errors in the setup of the rhino-stats collection caused a memory leak. (#751399)
Improvements
- The log4j logger configurations representing tracer levels for services and profile tables will now be updated during desired state convergence checks if it's determined that the logger configurations do not match previously configured tracer level desired state. (#644804)
- The logging polled memory appender now supports CIDR notation for its bind address, allowing a network range to be specified. The appender will bind to the network interface it finds within that range. An error will occur if exactly one network interface cannot be found within the range. (#665936)
- Added a new resource adaptor extension configuration property to `RhinoExtensionProperties` that returns the local Rhino node ID as a `java.lang.String`. (#260512)
- Static OIDs have been assigned to the stats parameter set types of the key/value store, session ownership store, and pool maintenance provider. (#421461)
- The `rhino-stats` client now logs warning and error messages to STDERR instead of mixing them with output in STDOUT. (#384075)
- The upper limit of the `rhino-stats` pool period config has been removed. (#375849)
- Added new parameter sets for Rhino interconnect gRPC client and server statistics. (#235852)
- Added a new counter statistic to Cassandra resources that records any internal exceptions raised while handling result sets from a query operation. (#379164)
- Replaced the Cassandra persisting resource driver timeout and driver error alarms with specific logging and statistics. This prevents driver-related alarms from remaining uncleared after the issue is resolved. (#376653)
- The pool mode MessageFacility implementation now imposes a timeout for responses to sent messages. (#385131)
- IPv6 addresses are now supported by the Rhino client connection library.
- The Cassandra Datastax driver reference configuration is now included in the Rhino install documentation subdirectory. (#422004)
- Added more restrictive permissions for temporary files. (#412308)
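The CIDR bind-address support mentioned above boils down to testing whether an interface address falls within a configured range. A rough sketch of IPv4 CIDR matching under that assumption (illustrative code, not the appender's actual implementation):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch of IPv4 CIDR range matching, the check underlying a bind address
// configured as a network range (e.g. "192.168.1.0/24"). Illustrative only.
public class CidrMatch {

    /** Returns true if the given IPv4 address falls within the CIDR range. */
    public static boolean inRange(String cidr, String address) {
        try {
            String[] parts = cidr.split("/");
            int prefixLength = Integer.parseInt(parts[1]);
            int network = toInt(InetAddress.getByName(parts[0]).getAddress());
            int candidate = toInt(InetAddress.getByName(address).getAddress());
            // A /0 prefix matches everything; otherwise keep the top prefixLength bits.
            int mask = prefixLength == 0 ? 0 : -1 << (32 - prefixLength);
            return (network & mask) == (candidate & mask);
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException("Not an IPv4 literal: " + address, e);
        }
    }

    private static int toInt(byte[] b) {
        return ((b[0] & 0xFF) << 24) | ((b[1] & 0xFF) << 16)
             | ((b[2] & 0xFF) << 8) | (b[3] & 0xFF);
    }
}
```

A component using such a check would enumerate the local interface addresses and, as the improvement above notes, fail if the range matches anything other than exactly one of them.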
Other changes
- Remote timer internode communication now uses the default interconnect request timeout configured in `rhino-config.xml` (default 5s) rather than a separately configured system property (default was 2s). (#396582)
New Alarms
New alarms have been added for the pool maintenance and interconnect subsystems.
For more detail or a full list of alarms raised by Rhino, see the Rhino Administration and Deployment Guide or use the `alarmcatalog` rhino-console command.
| Alarm type | Raised | Cleared |
|---|---|---|
| Interconnect subsystem | | |
| rhino.interconnect | When gRPC is failing to send messages to other nodes. | When there have been no failed messages to other nodes for a while (five seconds by default). |
| Pool maintenance subsystem | | |
| rhino.cassandra-pool-maintenance.heartbeat.unstable | When attempted heartbeat table query executions fail even though there is a node available to accept them in any persistence instance. | When one or more nodes become able to accept queries, or when no nodes are available. |
Removed Alarms
| Alarm type | Reason |
|---|---|
| Pool maintenance subsystem | |
| rhino.cassandra-pool-maintenance-provider.driver-timeout | The driver timeout alarm is replaced by logging at WARN level. |
| rhino.cassandra-pool-maintenance-provider.driver-error | The driver error alarm is replaced by logging at WARN level. |
| Key-Value Store | |
| rhino.cassandra-kv-store.driver-timeout | The driver timeout alarm is replaced by logging at WARN level. |
| rhino.cassandra-kv-store.driver-error | The driver error alarm is replaced by logging at WARN level. |
| Session Ownership Store | |
| rhino.cassandra-session-ownership-store.driver-timeout | The driver timeout alarm is replaced by logging at WARN level. |
| rhino.cassandra-session-ownership-store.driver-error | The driver error alarm is replaced by logging at WARN level. |
New monitoring statistics
This release adds the following statistics for tracking persisting resource errors encountered by Rhino.
| Parameter set name | Name | Description |
|---|---|---|
| PoolMaintenanceProvider.heartbeat.cassandra | executorDriverExceptions | Exceptions thrown from the Cassandra driver while executing statements. |
| PoolMaintenanceProvider.heartbeat.cassandra | resultHandlingExceptions | Exceptions thrown while handling statement results. |
| PoolMaintenanceProvider.nodemetadata.cassandra | executorDriverExceptions | Exceptions thrown from the Cassandra driver while executing statements. |
| PoolMaintenanceProvider.nodemetadata.cassandra | resultHandlingExceptions | Exceptions thrown while handling statement results. |
| KeyValueStore.cassandra.global | executorDriverExceptions | Exceptions thrown from the Cassandra driver while executing statements. |
| KeyValueStore.cassandra.global | resultHandlingExceptions | Exceptions thrown while handling statement results. |
| SessionOwnershipStore | executorDriverExceptions | Exceptions thrown from the Cassandra driver while executing statements. |
| SessionOwnershipStore | resultHandlingExceptions | Exceptions thrown while handling statement results. |
3.2.2 - 2022-12-09
Improvements
- Improved error messages reported when attempting to invoke scattercast management operations when scattercast is not configured. (#262419)
- Updated the LDAP login module to use TLS by default. (#412311)
Bug fixes
- Fixed known issues in Cassandra resources with missing keyspace and table alarms not being cleared when a corresponding keyspace or table is no longer required due to deployment changes. All alarms related to a particular Cassandra persistence instance are also now cleared if that persistence instance is removed from a persistence resource. The clearing of Cassandra driver timeout alarms is now managed on a per-session basis rather than globally, i.e. a timeout alarm raised for a particular session will now be cleared after the configured period if there are no more timeouts reported by that session, irrespective of timeouts that may occur on other database sessions. (#370627)
- Fixed alarms for failures in Cassandra resources where missing table alarms were not raised in certain failure cases. (#220529)
- Fixed hardcoded appender ref modification problems. Logging will now continue to the audit logs after an appender ref is added. The logger will now correctly have additivity set to false. (#377792)
- Fixed the runtime exception thrown by the `MessageFacility` when trying to send broadcast messages when using Rhino Pools mode. (#370122)
- Fixed the parsing of a configured SNMP notification target to allow hyphens in the middle of the hostname. (#423328)
Dependencies
- Updated com.fasterxml.jackson artifacts to 2.14.1.
- Updated org.apache.commons:commons-text to 1.10.0.
3.2.1 - 2022-10-13
Rhino Pools
Rhino 3.2 introduces an optional new high availability approach referred to as 'Rhino Pools' to better support cloud deployments. In this new approach, rather than using the Savanna clustering system, individual Rhino nodes are independent and coordinate through an external Cassandra database instead. Rhino Pools eliminates most inter-node communication, aside from proxied signaling and session failover related data.
When Rhino Pools is combined with declarative configuration and externalized session data, the result is better support for scalable cloud-oriented deployments.
The key benefits of the new Rhino Pools clustering mode are:
- Improved reliability in various failure cases. The restrictions of quorum and a distributed consistent view between cluster members are removed. Rhino nodes are independent, so if more than half of the nodes fail simultaneously, the remainder continue to function.
- No distributed locking amongst the Rhino nodes.
- A more permissive node liveness model. Rhino nodes will remain available in more cases of solution component failure.
- Increased scalability. The maximum size of a Rhino Pool is much higher than that of a Savanna-based Rhino cluster.
Rhino Pools is optional, and the 'classic' Savanna clustering mode remains available in Rhino 3.2. New solutions built on Rhino should strongly consider adopting Rhino Pools rather than Savanna clusters.
Rhino Pools introduces new installer questions and configuration options.
Installing Rhino now requires setting the clustering mode to one of `SAVANNA` or `POOL`.
Different configuration options are available depending on the clustering mode selected.
Of particular note, schema-modifying operations on a Cassandra database (i.e. creating and removing keyspaces and tables) are explicitly disallowed by Rhino Pool nodes and thus the option to enable or disable this feature in Cassandra-based persistent resources is not configurable when this mode is selected.
The configuration file `rhino-config.xml` has a new configuration entry for clustering mode, with supported values of `SAVANNA` and `POOL`.
A new section is added to this file for the pool maintenance service provider to configure the Cassandra keyspace used for this subsystem.
Node interconnect
Rhino 3.2 includes a new node interconnect implementation to unify previously separate inter-node communication channels for services and resource adaptors and provide support for unclustered Rhino pools. This interconnect uses the gRPC protocol framework as the underlying transport.
Prior to version 3.2, Rhino used an internal HTTP API as an interconnect for the remote timer facility and Savanna for the resource adaptor message facility. In Rhino Pools, both the message facility and remote timers use the new node interconnect. Broadcast messages are not supported by the interconnect, therefore the message facility only allows unicast node-to-node messages in Rhino Pools.
When Rhino 3.2 uses the Savanna clustering system:
- The Savanna transport is used for the message facility.
- The new node interconnect is used for remote timers.
Configuration for the remote timer port range is removed from the Rhino installer and replaced by configuration for a bind address and port range for the new node interconnect. The address can be assigned either as a single address or a range in CIDR notation that contains one of the addresses assigned to a network interface. The configured listen address should only be associated with a single network interface. Valid bind address strings are a simple IP address, e.g. 127.0.0.1, or a CIDR range, e.g. 192.168.1.0/24.
A port range is required for deployments with multiple Rhino nodes on one host. Each Rhino node will bind to the first available port in the range. For deployments with only one node per host a single port is sufficient. The default port configuration allows up to 10 nodes per host.
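The "first available port in the range" behaviour described above can be sketched as follows (illustrative only; Rhino's interconnect performs its own binding logic):

```java
import java.io.IOException;
import java.net.ServerSocket;

// Sketch of "bind to the first available port in the range" for co-located
// nodes sharing one host. Illustrative; not Rhino's actual implementation.
public class PortRangeBind {

    /** Binds a server socket to the first free port in [minPort, maxPort]. */
    public static ServerSocket bindFirstAvailable(int minPort, int maxPort) {
        for (int port = minPort; port <= maxPort; port++) {
            try {
                return new ServerSocket(port);
            } catch (IOException portInUse) {
                // Port taken, e.g. by another node on this host; try the next one.
            }
        }
        throw new IllegalStateException("No free port in range " + minPort + "-" + maxPort);
    }
}
```

With the default range of 22020-22029, the first node on a host binds 22020, the second 22021, and so on, which is how a single 10-port range supports up to 10 nodes per host.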
A new section has been added to the `rhino-config.xml` configuration file for configuration of the Rhino interconnect, with substitution variables based on these configuration properties.
Installer changes
Rhino Pools, the new node interconnect, and support for keyspace name configuration each add questions to the Rhino installer.
| Option | Description | Config variable name | Default value |
|---|---|---|---|
| Clustering mode | The model Rhino uses for managing nodes and operational membership. | CLUSTERING_MODE | SAVANNA |
| Key/value store keyspace name prefix | The prefix used for names of Cassandra keyspaces persisting data from key/value store backed MemDB instances. | KEY_VALUE_STORE_KEYSPACE_PREFIX | Varies by clustering mode |
| Session ownership store keyspace name prefix | The prefix used for the names of Cassandra keyspaces storing session ownership records. | SESSION_OWNERSHIP_STORE_KEYSPACE_PREFIX | Varies by clustering mode |
| Pool maintenance keyspace | The name of the Cassandra keyspace used by the pool maintenance subsystem for storing pool membership information. | POOL_MAINTENANCE_KEYSPACE | "rhino_pool_maintenance" |
| Interconnect listen address | The address or CIDR range containing the address of the interface for the node interconnect to bind to. | RHINO_INTERCONNECT_LISTEN_ADDRESS | 0.0.0.0 |
| Interconnect port range | The range of ports (min and max port number) for the node interconnect to try binding to. | RHINO_INTERCONNECT_LISTEN_PORT_RANGE_MIN, RHINO_INTERCONNECT_LISTEN_PORT_RANGE_MAX | 22020-22029 |
Other improvements
- Added `exportpersistingresourcedatadefinitions` rhino-console command and associated MBean method. This command is used to export Cassandra resource database schemas for importing into a Cassandra database. (#185430)
- Cassandra persisting resource keyspace names may now be configured at install time. (#185433)
- Added `dumppersistingresourcetable` rhino-console command and associated MBean method. This command allows dumping persisting resource (i.e. database) contents for diagnostic purposes. (#222039)
- Added `listpersistingresources` rhino-console command and associated MBean method. This command lists all configured persisting resources. (#222039)
- Improvements to the rhino-stats client to allow collection of stats from Rhino Pools and from nodes in different clusters in an 'adhoc' mode. (#256356) When run in adhoc mode, the nodes' addresses are given as a comma-separated list of `host:port` (JMX MBean server addresses) either:
  - after the `-a` command line option, or
  - as the value of the `rhino.adhoc.addresses` property in the `client/etc/client.properties` file of an installation.
- Modified Rhino's internal handling of MemDB keyspaces to allow keyspace names to be stabilized. The keyspace names can be reused and generated consistently. (#338386)
- Added flags to disable execution of DDU queries (data definition updates). This enables deployments where the Cassandra installation is managed separately from the Rhino cluster with restrictive access controls, or when Rhino is deployed in pool mode. This behaviour is configurable at install time; the default is to automatically update schemas unless using pool mode, where automatic updates are prohibited. (#185432)
- The remote timer `CompositeData` and `TabularData` MBean objects now return the remaining repetitions of a timer with the `repetitions` field, instead of the expiry time with the `expiry-time` field. (#334324)
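The adhoc address list described above is a comma-separated series of host:port entries. A sketch of parsing such a value follows; only the format and the `rhino.adhoc.addresses` property name come from this changelog, the parsing code itself is illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of parsing an adhoc address list of the form "host:port,host:port",
// as passed via -a or the rhino.adhoc.addresses property. Illustrative only.
public class AdhocAddresses {

    /** Splits a comma-separated host:port list into [host, port] pairs. */
    public static List<String[]> parse(String value) {
        List<String[]> endpoints = new ArrayList<>();
        for (String entry : value.split(",")) {
            int colon = entry.lastIndexOf(':');
            if (colon < 0) {
                throw new IllegalArgumentException("Expected host:port, got: " + entry);
            }
            endpoints.add(new String[] {
                entry.substring(0, colon).trim(),
                entry.substring(colon + 1).trim()
            });
        }
        return endpoints;
    }
}
```

Using `lastIndexOf(':')` keeps the split unambiguous if a host string itself contains colons.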
Bug fixes
- Fixed node restart failure when deployed library components are referred to by other SLEE components that have been verified but not deployed. (#358369)
- Removed client fallback to insecure JMX connections if SSL negotiation fails. The Rhino management interface does not support insecure connections. (#195604)
- Fixed a sporadic management lock timeout when shutting down the SNMP agent during reconfiguration. (#360655)
- Fixed an issue where sample stats distributions were calculated between the bin distribution min and max, which could lead to odd stats (most notably negative percentile stats) if min < observed min or max > observed max. (RHI-3505)
- Fixed MemDB retained size accounting when the key for a deleted but not yet GC'd entry is reused. (#337944)
Dependencies
- Added gRPC 1.47.0 and associated dependencies.
- Java 11.0.15 is now the minimum supported version. This version disables the insecure TLS 1.0 and 1.1 protocol versions in the SSL socket implementation.
New Alarms
New alarms have been added for the pool maintenance subsystem, key-value store and session ownership store.
For more detail or a full list of alarms raised by Rhino, see the Rhino Administration and Deployment Guide or use the `alarmcatalog` rhino-console command.
| Alarm type | Raised | Cleared |
|---|---|---|
| Pool maintenance subsystem | | |
| rhino.cassandra-pool-maintenance-provider.no-nodes-available | When an attempted database query execution fails because no node is available to accept it in any persistence instance. | When one or more nodes become available to accept queries. |
| rhino.cassandra-pool-maintenance-provider.connection-error | When communication with the database fails, for example because no node is available to execute a query. | When the connection error is resolved. |
| rhino.cassandra-pool-maintenance-provider.driver-timeout | When a timeout occurs trying to execute a statement in the database. | When no new timeout has occurred for the period of time specified by the cassandra.driver_timeout_interval system property. |
| rhino.cassandra-pool-maintenance-provider.driver-error | When an error occurs with the database but communication with it still works, for example a keyspace is not available to execute a query on. | Must be cleared manually. |
| rhino.cassandra-pool-maintenance-provider.db-node-failure | When communication with the database node fails. | When the connection error is resolved or the node is removed from the cluster. |
| rhino.cassandra-pool-maintenance-provider.missing-keyspace | When a required database keyspace is found to be missing. | When the database keyspace is detected to be present. |
| rhino.cassandra-pool-maintenance-provider.missing-table | When a required database table is found to be missing. | When the database table is detected to be present. |
| rhino.pool-maintenance-provider.no-persistence-config | When an in-use persistence resource configuration is removed by a configuration update. | When the persistence resource configuration is restored. |
| rhino.pool-maintenance-provider.no-persistence-instances | When an in-use persistence resource configuration has no active persistence instances. | When at least one active persistence instance exists for the persistence resource configuration. |
| rhino.rhino.pool-maintenance-provider.missing-heartbeat | When an unexpected heartbeat timestamp for this node is encountered after a heartbeat table query. | When an expected timestamp is encountered. |
| rhino.rhino.pool-maintenance-provider.invalid-node-update-time | When a node's heartbeat timestamps are noticed to exceed the permitted time delta from this node's clock time for longer than the configured grace period. | When the node's timestamps no longer exceed the permitted time delta. |
| Key-Value Store | | |
| rhino.cassandra-kv-store.driver-timeout | When a timeout occurs trying to execute a statement in the database. | When no new timeout has occurred for the period of time specified by the cassandra.driver_timeout_interval system property. |
| rhino.cassandra-kv-store.driver-error | When an error occurs with the database but communication with it still works, for example a keyspace is not available to execute a query on. | Must be cleared manually. |
| rhino.cassandra-kv-store.missing-keyspace | When a required database keyspace is found to be missing. | When the database keyspace is detected to be present. |
| Session Ownership Store | | |
| rhino.cassandra-session-ownership-store.driver-timeout | When a timeout occurs trying to execute a statement in the database. | When no new timeout has occurred for the period of time specified by the cassandra.driver_timeout_interval system property. |
| rhino.cassandra-session-ownership-store.driver-error | When an error occurs with the database but communication with it still works, for example a keyspace is not available to execute a query on. | Must be cleared manually. |
| rhino.cassandra-session-ownership-store.missing-keyspace | When a required database keyspace is found to be missing. | When the database keyspace is detected to be present. |
| rhino.cassandra-session-ownership-store.missing-table | When a required database table is found to be missing. | When the database table is detected to be present. |
Removed Alarms
| Alarm type | Reason |
|---|---|
| rhino.remote-timer-server.bind-error | The remote timer server has been replaced by the node interconnect, which prevents node startup if binding fails. |
New monitoring statistics
Rhino 3.2 adds statistics for the pool maintenance subsystem. For more detail or a full list of statistics reported by Rhino, see the Rhino Administration and Deployment Guide.
Pool maintenance statistics are reported in the following parameter sets:
| Parameter set name | Description |
|---|---|
| PoolMaintenanceProvider.heartbeat | Heartbeat subsystem statistics |
| PoolMaintenanceProvider.heartbeat.cassandra | Cassandra-specific heartbeat table statistics |
| PoolMaintenanceProvider.nodemetadata.cassandra | Cassandra node metadata table statistics |
For earlier changes, see the Rhino 3.1 Changelog. Rhino 3.2 includes all changes up to version 3.1.1.