Below are troubleshooting steps -- symptoms, diagnostic steps, and workarounds or resolutions -- for Rhino configuration.

Rhino provides a secured environment for its management infrastructure and for deployed components such as SBBs, Resource Adaptors, Profiles, and Mlets. If any of these components attempts to perform an unauthorized operation, an `AccessControlException` is reported in Rhino's logging output. The two most common forms are described in this section.

Symptoms

The default installation of Rhino is strict about which remote machines may connect to perform management operations, and exceptions are common when a machine that has not been explicitly permitted attempts to connect. Examples of the two most common messages are as follows.

A refused JMX-Remote connection produces the following message:

Exception in thread "RMI TCP Connection(6)-192.168.0.38" java.security.AccessControlException: access denied (java.net.SocketPermission 192.168.0.38:48070 accept,resolve)
        at java.security.AccessControlContext.checkPermission(AccessControlContext.java:264)
        at java.security.AccessController.checkPermission(AccessController.java:427)
        at java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
        at java.lang.SecurityManager.checkAccept(SecurityManager.java:1157)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.checkAcceptPermission(TCPTransport.java:560)
        at sun.rmi.transport.tcp.TCPTransport.checkAcceptPermission(TCPTransport.java:208)
        at sun.rmi.transport.Transport$1.run(Transport.java:152)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.rmi.transport.Transport.serviceCall(Transport.java:149)
        at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:460)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:701)
        at java.lang.Thread.run(Thread.java:595)

This can be caused by client/bin/rhino-console, client/bin/rhino-stats, and external clients such as the Rhino Element Manager or any other application that uses Rhino’s JMX-Remote connection support.

The following message is caused by the rhino-console connecting from a host which is not in the list of accepted IP addresses:

2016-12-01 12:18:38.444  ERROR   [rhino.main]   <RMI TCP Connection(idle)> Uncaught exception detected in thread Thread[RMI TCP Connection(idle),5,RMI Runtime]: java.security.AccessControlException: access denied ("java.net.SocketPermission" "192.168.2.33:34611" "accept,resolve")
java.security.AccessControlException: access denied ("java.net.SocketPermission" "192.168.2.33:34611" "accept,resolve")
        at java.security.AccessControlContext.checkPermission(AccessControlContext.java:372)
        at java.security.AccessController.checkPermission(AccessController.java:559)
        at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
        at java.lang.SecurityManager.checkAccept(SecurityManager.java:1170)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.checkAcceptPermission(TCPTransport.java:668)
        at sun.rmi.transport.tcp.TCPTransport.checkAcceptPermission(TCPTransport.java:305)
        at sun.rmi.transport.Transport$2.run(Transport.java:201)
        at sun.rmi.transport.Transport$2.run(Transport.java:199)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.rmi.transport.Transport.serviceCall(Transport.java:198)
        at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:567)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:828)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.access$400(TCPTransport.java:619)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:684)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:681)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:681)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Resolution

In this case, the components can be given permission to accept connections from certain IP addresses or hostnames by entering those addresses in the appropriate section of config/mlet-permachine.conf for a production system, or config/mlet.conf for the SDK. This is described in more detail in the Rhino Administrators Guide.
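
These files contain permission grants written in the standard Java policy grammar. As a hedged sketch only (the address and port range are illustrative, and the surrounding file structure differs between Rhino versions), a grant allowing management connections from a single host looks something like:

grant {
    // illustrative: accept JMX-Remote connections from this management host
    permission java.net.SocketPermission "192.168.0.38:1024-", "accept,resolve";
};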

If the connection is local, it may be necessary to add the local interface's address to the LOCALIPS variable in the config/config_variables file. Both the IPv4 and IPv6 addresses will need to be specified, and the node or SDK will need to be restarted.
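
For example (a sketch only; the addresses are placeholders, and the exact variable format may vary between releases):

# config/config_variables: list both the IPv4 and IPv6 addresses of the local interface
LOCALIPS="127.0.0.1 ::1 192.168.0.38"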

Symptoms

Components inside Rhino, including SBBs, Resource Adaptors, Profiles, and Mlets, run in a security context. If a component attempts an operation that requires a security permission the component does not hold, an exception is generated; for example, a Resource Adaptor attempting to write to the filesystem without permission to do so.

...
WARN [rhino.remotetx] <StageWorker/RTM/1> Prepare failed:
Node 101 failed: com.opencloud.slee.ext.resource.ResourceException: Could not create initial CDR file ...
at java.lang.Thread.run(Thread.java:534)
Caused by: java.io.IOException: Access denied trying to create a new CDR logfile
...
at java.security.AccessController.doPrivileged(Native Method)
... 14 more
Caused by: java.security.AccessControlException: access denied (java.io.FilePermission ... read)
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:269)
at java.security.AccessController.checkPermission(AccessController.java:401)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:524)
at java.lang.SecurityManager.checkRead(SecurityManager.java:863)
at java.io.File.exists(File.java:678)
... 19 more
...

Diagnosis and Resolution

Typically this indicates that the component has not been granted the permissions it needs.

As a step in diagnosing security-related problems, the security manager can be disabled by commenting out the following line in the read-config-variables script:

OPTIONS="$OPTIONS -Djava.security.manager"

This is not recommended as a permanent solution and should not be used on a production system; disable the security manager only temporarily, as a means of diagnosing the problem in a test environment. To determine which permissions are required, uncomment this line in $RHINO_HOME/read-config-variables:

#OPTIONS="$OPTIONS \-Djava.security.debug=access:failure"

Rhino will then print the permissions that fail security checks to the console log. Add the required permissions to the <security-permissions> section of the component deployment descriptor. For more information about granting additional permissions to an RA or SBB, refer to the sections on RA and SBB Deployment Descriptors in the Rhino Administration Manual.
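
As a hedged illustration (the file path in the grant is a placeholder, and the enclosing descriptor elements depend on the component type), a descriptor entry granting an RA access to its CDR directory might look like:

<security-permissions>
    <security-permission-spec>
        grant {
            // illustrative: read and write CDR files anywhere under /var/log/cdr
            permission java.io.FilePermission "/var/log/cdr/-", "read,write";
        };
    </security-permission-spec>
</security-permissions>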

Memory Database Full

Rhino’s in-memory database (MemDB) has a fixed, configured capacity. Transactions executing against a MemDB will not commit if they would take the MemDB past that capacity. This typically occurs due to inadequate sizing of a Rhino installation, but may also indicate faulty logic in a service or resource adaptor. Log messages containing "Unable to prepare due to size limits of the DB" or "Unable to prepare due to committed size limits of MemDB Local" are a clear indicator of this problem. The symptoms vary and are described below.

Profile Management and/or provisioning failing

Symptoms

Creating or importing profiles fails. The Rhino log contains a message with a log key profile.* similar to the following:

2016-12-01 13:41:57.081  WARN    [profile.mbean]   <RMI TCP Connection(2)-192.168.0.204> [foo:8] Error committing profile:
javax.slee.management.ManagementException: Cannot commit transaction
        at com.opencloud.rhino.management.TxSupport.commitTx(TxSupport.java:36)
        at com.opencloud.rhino.management.SleeSupport.commitTx(SleeSupport.java:28)
        at com.opencloud.rhino.impl.profile.GenericProfile.commitProfile(GenericProfile.java:127)
        at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.opencloud.rhino.management.DynamicMBeanSupport.doInvoke(DynamicMBeanSupport.java:157)
        at com.opencloud.rhino.management.DynamicMBeanSupport$1.run(DynamicMBeanSupport.java:121)
        at java.security.AccessController.doPrivileged(Native Method)
        at com.opencloud.rhino.management.DynamicMBeanSupport.invoke(DynamicMBeanSupport.java:113)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
        at sun.reflect.GeneratedMethodAccessor81.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.opencloud.rhino.management.BaseMBeanInterceptor.intercept(BaseMBeanInterceptor.java:12)
        at com.opencloud.rhino.management.AuditingMBeanInterceptor.intercept(AuditingMBeanInterceptor.java:245)
        at com.opencloud.rhino.management.ObjectNameNamespaceQualifierMBeanInterceptor.intercept(ObjectNameNamespaceQualifierMBeanInterceptor.java:74)
        at com.opencloud.rhino.management.NamespaceAssociatorMBeanInterceptor.intercept(NamespaceAssociatorMBeanInterceptor.java:66)
        at com.opencloud.rhino.management.RhinoPermissionCheckInterceptor.intercept(RhinoPermissionCheckInterceptor.java:56)
        at com.opencloud.rhino.management.CompatibilityMBeanInterceptor.intercept(CompatibilityMBeanInterceptor.java:130)
        at com.opencloud.rhino.management.StartupManagementMBeanInterceptor.intercept(StartupManagementMBeanInterceptor.java:97)
        at com.opencloud.rhino.management.RemoteSafeExceptionMBeanInterceptor.intercept(RemoteSafeExceptionMBeanInterceptor.java:26)
        at com.opencloud.rhino.management.SleeMBeanServerBuilder$MBeanServerInvocationHandler.invoke(SleeMBeanServerBuilder.java:38)
        at com.sun.proxy.$Proxy9.invoke(Unknown Source)
        at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
        at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
        at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
        at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
        at sun.reflect.GeneratedMethodAccessor80.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
        at sun.rmi.transport.Transport$2.run(Transport.java:202)
        at sun.rmi.transport.Transport$2.run(Transport.java:199)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.rmi.transport.Transport.serviceCall(Transport.java:198)
        at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:567)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:828)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.access$400(TCPTransport.java:619)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:684)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:681)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:681)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: com.opencloud.transaction.TransientRollbackException: Unable to prepare due to size limits of the DB
        at com.opencloud.transaction.local.OCTransactionImpl.commit(OCTransactionImpl.java:107)
        at com.opencloud.transaction.local.OCTransactionManagerImpl.commit(OCTransactionManagerImpl.java:190)
        at com.opencloud.rhino.management.TxSupport.commitTx(TxSupport.java:33)
        ... 48 more

Resolution and monitoring

If profile management or provisioning commands are unsuccessful due to the size restriction of the profile database installed in Rhino, the size of the database should be increased. To resize it, follow the instructions in Resizing MemDB Instances to alter the size of the ProfileDatabase.

The following command can be used to monitor the size of the ProfileDatabase installed in Rhino.

./client/bin/rhino-stats -m MemDB-Replicated.domain-0-ProfileDatabase

Deployment failing

Symptoms

If deployment is unsuccessful, this may be due to the size restriction of the ManagementDatabase installed in Rhino.

The error message and Rhino log will look similar to:

javax.slee.management.ManagementException: File storage error
        at com.opencloud.rhino.node.FileManager.store(FileManager.java:119)
        at com.opencloud.rhino.management.deployment.Deployment.install(Deployment.java:400)
        at com.opencloud.rhino.management.deployment.Deployment.install(Deployment.java:267)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.opencloud.rhino.management.DynamicMBeanSupport.doInvoke(DynamicMBeanSupport.java:157)
        at com.opencloud.rhino.management.DynamicMBeanSupport$1.run(DynamicMBeanSupport.java:121)
        at java.security.AccessController.doPrivileged(Native Method)
        at com.opencloud.rhino.management.DynamicMBeanSupport.invoke(DynamicMBeanSupport.java:113)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
        at sun.reflect.GeneratedMethodAccessor83.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.opencloud.rhino.management.BaseMBeanInterceptor.intercept(BaseMBeanInterceptor.java:12)
        at com.opencloud.rhino.management.AuditingMBeanInterceptor.intercept(AuditingMBeanInterceptor.java:245)
        at com.opencloud.rhino.management.ObjectNameNamespaceQualifierMBeanInterceptor.intercept(ObjectNameNamespaceQualifierMBeanInterceptor.java:74)
        at com.opencloud.rhino.management.NamespaceAssociatorMBeanInterceptor.intercept(NamespaceAssociatorMBeanInterceptor.java:66)
        at com.opencloud.rhino.management.RhinoPermissionCheckInterceptor.intercept(RhinoPermissionCheckInterceptor.java:56)
        at com.opencloud.rhino.management.CompatibilityMBeanInterceptor.intercept(CompatibilityMBeanInterceptor.java:130)
        at com.opencloud.rhino.management.StartupManagementMBeanInterceptor.intercept(StartupManagementMBeanInterceptor.java:97)
        at com.opencloud.rhino.management.RemoteSafeExceptionMBeanInterceptor.intercept(RemoteSafeExceptionMBeanInterceptor.java:26)
        at com.opencloud.rhino.management.SleeMBeanServerBuilder$MBeanServerInvocationHandler.invoke(SleeMBeanServerBuilder.java:38)
        at com.sun.proxy.$Proxy9.invoke(Unknown Source)
        at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
        at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
        at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
        at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
        at sun.reflect.GeneratedMethodAccessor82.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
        at sun.rmi.transport.Transport$2.run(Transport.java:202)
        at sun.rmi.transport.Transport$2.run(Transport.java:199)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.rmi.transport.Transport.serviceCall(Transport.java:198)
        at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:567)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:828)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.access$400(TCPTransport.java:619)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:684)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:681)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:681)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: javax.slee.TransactionRolledbackLocalException: Cannot commit transaction
        at com.opencloud.rhino.facilities.FacilitySupport.commitTx(FacilitySupport.java:47)
        at com.opencloud.rhino.node.FileManager.store(FileManager.java:112)
        ... 49 more
Caused by: com.opencloud.transaction.TransientRollbackException$$_Safe: Unable to prepare due to size limits of the DB
        at com.opencloud.transaction.local.OCTransactionImpl.commit(OCTransactionImpl.java:107)
        at com.opencloud.transaction.local.OCTransactionManagerImpl.commit(OCTransactionManagerImpl.java:190)
        at com.opencloud.rhino.facilities.FacilitySupport.commitTx(FacilitySupport.java:44)
        ... 50 more

Resolution and monitoring

If deployment commands are unsuccessful due to the size restriction of the management database installed in Rhino, the size of the database should be increased. To resize it, follow the instructions in Resizing MemDB Instances to alter the size of the ManagementDatabase.

The following command can be used to monitor the size of the ManagementDatabase installed in Rhino.

./client/bin/rhino-stats -m MemDB-Replicated.ManagementDatabase

Calls not being set up successfully

If calls are not being set up successfully, the failures may be caused by the size restriction of either or both of the LocalMemoryDatabase and ReplicatedMemoryDatabase installed in Rhino.

If the service does not use replicated transactions, the Rhino log will contain messages similar to this:

2016-11-30 09:08:01.732  WARN    [rhino.er.stage.eh]   <jr-74> Event router transaction failed to commit
com.opencloud.transaction.TransientRollbackException: Unable to prepare due to committed size limits of MemDB Local
        at com.opencloud.ob.Rhino.bO.commit(2.3-1.20-85576:108)
        at com.opencloud.ob.Rhino.me.commit(2.3-1.20-85576:191)
        at com.opencloud.ob.Rhino.AB.run(2.3-1.20-85576:95)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at com.opencloud.ob.Rhino.bq$a$a$a.run(2.3-1.20-85576:440)

Replicated services will produce log entries similar to this:

2016-11-30 09:08:01.732  WARN    [rhino.er.stage.eh]   <jr-74> Event router transaction failed to commit
com.opencloud.transaction.TransientRollbackException: Unable to prepare due to size limits of the DB
        at com.opencloud.ob.Rhino.bO.commit(2.3-1.20-85576:108)
        at com.opencloud.ob.Rhino.me.commit(2.3-1.20-85576:191)
        at com.opencloud.ob.Rhino.AB.run(2.3-1.20-85576:95)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at com.opencloud.ob.Rhino.bq$a$a$a.run(2.3-1.20-85576:440)

Resolution and monitoring

If calls are failing due to the size restriction of an in-memory database installed in Rhino, the size of that database should be increased. Collect data before resizing to determine which database is reaching its size limit.

The following commands can be used to monitor the size of these databases installed in Rhino.

./client/bin/rhino-stats -m MemDB-Local.LocalMemoryDatabase
./client/bin/rhino-stats -m MemDB-Replicated.domain-0-DomainedMemoryDatabase
./client/bin/rhino-stats -m MemDB-Replicated.domain-0-ReplicatedMemoryDatabase

Note that it may also be necessary to monitor the output of the MemDB-Replicated.ManagementDatabase and MemDB-Replicated.ProfileDatabase memory databases in Rhino.

If Rhino is configured to use striping, the DomainedMemoryDatabase is divided into stripes, each limited to a fraction of the size allocated to the MemDB instance. When using a striped MemDB it is possible for individual stripes to become full without the whole database filling. This is typically a symptom of a highly asymmetric workload or a poorly designed service. To check for full stripes, monitor the statistics for every stripe in the database displaying problems (see the command sketch after the list below). For example, the DomainedMemoryDatabase stripes in an 8-stripe configuration are:

 "MemDB-Replicated.domain-0-DomainedMemoryDatabase.domain-0-DomainedMemoryDatabase.stripe-0"
 "MemDB-Replicated.domain-0-DomainedMemoryDatabase.domain-0-DomainedMemoryDatabase.stripe-1"
 "MemDB-Replicated.domain-0-DomainedMemoryDatabase.domain-0-DomainedMemoryDatabase.stripe-2"
 "MemDB-Replicated.domain-0-DomainedMemoryDatabase.domain-0-DomainedMemoryDatabase.stripe-3"
 "MemDB-Replicated.domain-0-DomainedMemoryDatabase.domain-0-DomainedMemoryDatabase.stripe-4"
 "MemDB-Replicated.domain-0-DomainedMemoryDatabase.domain-0-DomainedMemoryDatabase.stripe-5"
 "MemDB-Replicated.domain-0-DomainedMemoryDatabase.domain-0-DomainedMemoryDatabase.stripe-6"
 "MemDB-Replicated.domain-0-DomainedMemoryDatabase.domain-0-DomainedMemoryDatabase.stripe-7"

If the reported committed size of the whole MemDB instance is close to the maximum, increase the configured size of that instance. If the committed size of only one stripe is close to the maximum, the service is a poor match for striping; reduce the number of stripes.

To resize a MemDB instance follow the instructions in Resizing MemDB Instances to alter the size of the LocalMemoryDatabase, DomainedMemoryDatabase or ReplicatedMemoryDatabase.

When monitoring the MemDB, watch the output of the churn, committedSize and maxCommittedSize fields. The churn field is the total change in the content of the Memory Database, in bytes. The committedSize field is the current committed size of the Memory Database, in kilobytes, and the maxCommittedSize field is the maximum allowed committed size, also in kilobytes. If the difference between committedSize and maxCommittedSize is close to the churn, or less than 100kB, it is likely that the Memory Database is failing to commit transactions that would grow it past its specified limit.
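
For example, with illustrative numbers: if maxCommittedSize is 102400kB and committedSize is 102330kB, only 70kB of headroom remains, so any transaction that would grow the committed size by more than 70kB will fail to prepare.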

If the committedSize increases over time, or does not approximately track the system load, this may indicate a fault in a resource adaptor or service. Increase the MemDB size to prevent call failures, and contact your solution provider for assistance.

For other problems that cause dropped calls, refer to the sections Dropped Calls, Java Virtual Machine heap issues, and Application or resource adaptor heap issues.

Resizing MemDB Instances

The general workaround for MemDB sizing problems is to make the appropriate Memory Database larger. To do this, edit the node-*/config/rhino-config.xml file and look for the <memdb> or <memdb-local> entry whose <jndi-name> contains the appropriate database name (e.g. ProfileDatabase). Increase the <committed-size> of the database to increase its maximum committed size.
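
As a hedged sketch (the size values are illustrative, and real entries may contain additional child elements not shown here), the relevant fragment of rhino-config.xml looks something like:

<memdb>
    <jndi-name>ManagementDatabase</jndi-name>
    <!-- illustrative: committed-size raised from 128M to 256M -->
    <committed-size>256M</committed-size>
</memdb>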

Increasing the size of a Memory Database also requires increasing the maximum heap size of the JVM to accommodate the larger database. To do this, edit the node-*/config/config_variables file and increase HEAP_SIZE by the same amount.
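
For example (illustrative values; the variable syntax is an assumption based on a typical config_variables file), after adding 128M to a MemDB's committed size:

# HEAP_SIZE was previously 3072m; increased by the same 128M given to the MemDB
HEAP_SIZE=3200m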

These steps will need to be performed for all nodes, and the nodes will need to be restarted for the change to take effect.

Initially, try doubling the size of a database and monitor usage to determine the relationship between load and usage. Assuming usage is proportional to call load, you may then need to alter the configured size to accommodate the highest sustained load peak. If usage does not appear to be proportional to load, or the problem is not solved after a second increase in size, please contact your solution provider for support.

Resource Adaptors refuse to connect using TCP/IP

Make sure that your SLEE is in the “Running” state. A SLEE that is not running will not activate resource adaptors, so they will neither listen for nor initiate connections.

If you have IPv6 installed on your machine, Java may be attempting to use IPv6 rather than IPv4. Depending on the configuration of the network services and the host computer’s interfaces, Java may resolve hostnames differently from the other network components. If this is the case, the resource adaptor may attempt to connect to the remote system using IPv6 when that system is only listening on an IPv4 address. Less frequently, the reverse may be true.

Diagnostic steps

Use the network diagnostic tools provided with your OS distribution to check the current connected and listening ports for running programs. One common tool for this purpose is netstat:

$  netstat --inet6 -anp
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp6       0      0 :::1202                 :::*                    LISTEN      6429/java
tcp6       0      0 :::1203                 :::*                    LISTEN      6429/java
tcp6       0      0 :::53                   :::*                    LISTEN      -
tcp6       0      0 :::22                   :::*                    LISTEN      -
tcp6       0      0 ::1:631                 :::*                    LISTEN      -
tcp6       0      0 ::1:5432                :::*                    LISTEN      -
tcp6       0      0 :::51673                :::*                    LISTEN      -
tcp6       0      0 :::9474                 :::*                    LISTEN      6429/java
tcp6       0      0 :::1199                 :::*                    LISTEN      6429/java
tcp6       0      0 :::111                  :::*                    LISTEN      -
tcp6       0      0 :::22000                :::*                    LISTEN      6429/java
tcp6       0      0 127.0.0.1:55682         127.0.0.1:5432          ESTABLISHED 6429/java
tcp6       0      0 127.0.0.1:55684         127.0.0.1:5432          ESTABLISHED 6429/java
udp6       0      0 ::1:35296               ::1:35296               ESTABLISHED -
udp6       0      0 :::45751                :::*                                -
udp6       0      0 :::5353                 :::*                                7991/plugins --disa
udp6       0      0 192.168.0.204:16100     :::*                                6429/java
udp6       0      0 :::53                   :::*                                -
udp6       0      0 :::111                  :::*                                -
udp6       0      0 fe80::9eeb:e8ff:fe0:123 :::*                                -
udp6       0      0 ::1:123                 :::*                                -
udp6       0      0 :::123                  :::*                                -
udp6       0      0 :::704                  :::*                                -
raw6       0      0 :::58                   :::*                    7           -

Check the firewall configuration on both the host running Rhino and the host providing the network function the RA connects to. Also check any intermediate network components that provide filtering functions.
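
On Linux hosts, for example, the active packet-filtering rules can be listed as follows (a sketch; other firewall tools such as nftables or firewalld have equivalent commands):

# list all filtering rules with numeric addresses (requires root)
iptables -L -n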

Workaround or Resolution

Try adding `-Djava.net.preferIPv4Stack=true` as a command-line argument to Rhino. This option can be set by adding the line EXTRA_OPTIONS=-Djava.net.preferIPv4Stack=true to the RHINO_HOME/config_variables file, as sketched below. Other Java programs that connect to Rhino may also need this argument. For other programs an RA connects to, consult that program's documentation to learn how to configure the address family it uses. If a program cannot be configured appropriately, you may need to disable support for IPv6 in the OS configuration.
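
A minimal sketch of the change (the quoting is illustrative; match the style of the existing file):

# RHINO_HOME/config_variables
EXTRA_OPTIONS="-Djava.net.preferIPv4Stack=true"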

If a firewall is configured to block the addresses used by the resource adaptor it must be reconfigured to allow connections.

Local hostname not resolved properly

In some cases the local name of a host (e.g. test-vm.example.com) is wrongly resolved to 127.0.0.1 instead of its external address (e.g. 192.168…). This results in remote connections being refused when management clients on a different host try to connect.

Symptoms

  • Connections via management clients are refused

Diagnostic steps

First check that the proper permissions have been given in the appropriate sections of $RHINO_HOME/config/mlet.conf (SDK) or $RHINO_HOME/etc/default/config/mlet-permachine.conf (production).

Then check whether you get an exception similar to the one below, regardless of whether you connect through the Standalone Webconsole or the Commandline Console application:

...
Connection failed to mycomputer:1199
Connection failed to mycomputer:1199 (unreachable at Thu Apr 26 11:32:10 NZST 2007) javax.security.auth.login.LoginException: Could not connect
...
Caused by: com.opencloud.slee.client.SleeClientException: Could not connect to a Rhino management host
    at com.opencloud.slee.mlet.shell.spi.jmx.RhinoJmxClient.connect(RhinoJmxClient.java:186)
    at com.opencloud.slee.mlet.shell.spi.jmx.RhinoJmxClient.connect(RhinoJmxClient.java:124)
    at com.opencloud.slee.mlet.shell.spi.jmx.RhinoJmxClient.login(RhinoJmxClient.java:242)

Workaround or Resolution

Edit the contents of the file /etc/hosts. It should contain entries as shown below.

...
127.0.0.1 localhost
192.168.123.123 mycomputer
...

Entries that resolve the machine’s name to the loopback address should be commented out:

...
#127.0.0.1 localhost.localdomain localhost mycomputer
...

Logging configuration errors

Symptoms

  • Reconfiguring the logging subsystem fails

Diagnostic steps

First check your command parameters for errors, guided by the error message returned.

If the error is unclear or the parameters appear to be correct, read the logging status log, node-???/work/logging-status.log. The messages in this file may provide sufficient detail to resolve the problem. Also check for known issues.

Workaround or resolution

Fix the configuration command parameters, avoiding configuration combinations known to be faulty.

If the configuration appears to be correct, contact your solution provider for assistance.
