This document provides API-level details for developers to extend SLEE services and Metaswitch Sentinel features.

Note Introduced in Rhino 2.4.0

SBB Parts

This page describes what SBB parts are and how to use them.

What are SBB parts?

An "SBB part", as its name implies, can be thought of as a sub-component of a JAIN SLEE SBB (Service Building Block). An SBB part is an installable SLEE component in its own right. It may have dependencies, and other components may depend on it. The concept of the SBB part component was borne from a use case where the developer wanted to put a number of common shared classes into a library-type component; but the classes had dependencies on various profile specifications, event types, and resource adaptor types — all of which a standard JAIN SLEE library component cannot provide (a library component may only depend on other library components). We thought it might be desirable to extend the definition of the standard library component with the option to declare these other types of references, mainly because a library component was always seen as a passive "primitive" type of component that sat at the top of the dependency hierarchy. Adding dependency links to other component types would significantly complicate the SLEE component dependency hierarchy. In particular, it would create the possibility of cyclic dependencies between different component types — which must be avoided, as in practise they cannot be mapped to a realistic runtime class loader hierarchy. So instead, a new component type was created to fulfill this need: the SBB part.

SBB part components have one other feature that makes them distinct from library components: at runtime, the SBB part component classes are included in the same class loader as the SBB classes for the service, rather than in a parent class loader (as standard library component classes are).

SBB part components

An SBB part component may define:

  • dependencies — on libraries, event types, resource adaptor types, profile specifications and other SBB parts

  • per-instance state — held in Container Managed Persistence (CMP) fields, which maintain state that should persist across failures

  • usage statistics — collected through usage parameters interface types declared by the SBB part component (see Usage in the Rhino Administration and Deployment Guide for a description of Rhino’s usage extension mechanism)

  • event handler methods — provided by the SBB part component for each event type it receives, with application logic to process events of a specific event type

  • shareable data — defined by the SBB part component to share with other components as a set of activity context attributes (each activity context attribute has a name and a type, and is stored in one or more activity contexts — an SBB part component defines an activity context interface interface that provides type-safe accessor methods to get and set these attributes).

An SBB part component can only be referenced by SBB components or other SBB part components.

SBBs and SBB parts

An SBB may depend on one or more SBB parts. An SBB part may also depend on other SBB parts. An SBB that depends on one or more SBB parts, whether directly or indirectly, has its definition implicitly extended in the following ways:

per-instance state

The per-instance persistent state of the SBB is defined by the union of the CMP fields declared by the SBB and the CMP fields declared by each dependent SBB part. All CMP fields declared by these components share the same namespace. A CMP field declared by both the SBB and a dependent SBB part, or by two different dependent SBB parts, must be declared with the same field type, as each declaration refers to the same piece of per-instance state.

usage statistics

The usage parameters interface types available to the SBB are the union of the usage parameters interface types declared by the SBB and the usage parameters interface types declared by each dependent SBB part. All usage parameters interface types declared by these components share the same namespace. A usage parameters interface type declared by both the SBB and a dependent SBB part, or by two different dependent SBB parts, must be declared with the same usage parameters interface class name. Amongst the SBB and all dependent SBB parts, at most one usage parameters interface type may be declared as the root usage parameter set type.

event handler methods

The event handler methods available to the SBB are the union of the event handler methods defined by the SBB and the event handler methods defined by each dependent SBB part. Any given event type can have at most one event handler method defined between the SBB and the SBB parts — if the SBB declares an event handler method for a given event type, then no dependent SBB part may also declare an event handler method for the same event type.

shareable data

The activity context attributes of the SBB are the union of the activity context attributes declared by the SBB and the activity context attributes declared by each dependent SBB part. All activity context attributes declared by these components share the same namespace. An activity context attribute declared by both the SBB and a dependent SBB part, or by two different dependent SBB parts, must be declared with the same type, as each declaration refers to the same attribute. The SBB and dependent SBB parts may each declare an alias for the same activity context attribute, but each alias declaration must use the same alias name. If any of the components declare an alias for a given activity context attribute, then the alias applies to all of the components.

JNDI environment

The JNDI bindings available to the SBB are the union of all JNDI bindings declared by the SBB and all JNDI bindings declared by each dependent SBB part. A binding to a given name declared by both the SBB and a dependent SBB part, or by two different dependent SBB parts, must be declared with the same value. For example, if the binding is made to a resource adaptor entity, then each binding declaration must have the same resource adaptor type reference.

Put simply, an SBB becomes the union of the declarations made by itself and all its dependent SBB parts; and there must be no conflict between components that make the same declarations.

SBB part objects

An SBB part may optionally declare an SBB part class. The SBB part class contains the event-processing logic of the SBB part component. An instance of the SBB part class is known as an SBB part object.

The lifecycle of an SBB part object is tightly coupled to the lifecycle of an SBB object. When an SBB object is created, an SBB part object for each dependent SBB part that declares an SBB part class is also created; and the SBB object maintains a reference to this SBB part object for the lifetime of the SBB object. When the SBB object undergoes a lifecycle state transition, for example from the Pooled state to the Ready state, each dependent SBB part object also undergoes the same lifecycle transition. If the SLEE determines that the SBB object is no longer required and becomes eligible for JVM garbage collection, then so too do the dependent SBB part objects.

During the lifetime of an SBB part object, it may be assigned to different SBB entities. When the SBB part object is assigned to an SBB entity, it can receive events destined for the SBB entity and can manipulate the persistent state of the SBB entity. It can also access the relationships of the SBB entity.

SBB part object lifecycle

An SBB part object can be in one of the following three states:

  • Does Not Exist — The SBB part object does not exist. It may not have been created or it may have been deleted.

  • Pooled — The SBB part object exists but is not assigned to any particular SBB entity.

  • Ready — The SBB part object is assigned to an SBB entity. It is ready to receive events through its event handler methods.

The following steps describe the lifecycle of an SBB part object:

  1. An SBB part object’s lifecycle starts when the SLEE creates the object using newInstance. The SLEE passes an SbbPartContext object to the constructor if the constructor declares such an argument. The SbbPartContext object allows the SBB part object to invoke functions provided by the SLEE. Once the SBB part object is created, the SLEE then injects values into SBB part class fields annotated for dependency injection.

  2. The SBB part object is bound to an owning SBB object. While the SBB object is in the Pooled state, the SBB part object is also in the Pooled state, and is not associated with any particular SBB entity.

  3. An SBB part object transitions from the Pooled state to the Ready state when the SLEE selects the owning SBB object to process an event or to service a local object invocation. There are two possible transitions from the Pooled state to the Ready state: through the @PostCreate method, or through the @OnActivate method.

    • The SLEE invokes the @PostCreate method when the SBB part object is assigned to a new SBB entity that has just been created explicitly by an invocation of the create method on a ChildRelation object, or implicitly by the SLEE to process an initial event.

    • The SLEE invokes the @OnActivate method on an SBB part object when the owning SBB object needs to be activated to receive a method invocation on an existing SBB entity. This occurs when there is no existing SBB object in the Ready state assigned to the SBB entity available to receive the invocation.

  4. When an SBB part object is in the Ready state, the SBB part object is associated with a specific SBB entity. While the SBB part object is in the Ready state, the SLEE can synchronise the transient state held in the SBB part object with the persistent state of the SBB entity whenever it determines the need to, by invoking the @PostLoad and @PreStore methods zero or more times. Event handler and exception callback methods can be invoked on the SBB part object zero or more times. Invocations of the @PostLoad and @PreStore methods can be arbitrarily mixed with invocations of the event handler and exception callback methods, subject to the SBB part object lifecycle.

  5. The SLEE can choose to passivate an SBB object. Passivating an SBB object disassociates the SBB object from the SBB entity it is currently assigned to. When an SBB object is passivated, its dependent SBB part objects are also passivated and therefore disassociated from the SBB entity. When an SBB part object is passivated, the SLEE first invokes the @PreStore method to allow the SBB part object to prepare itself for the synchronisation of the SBB entity’s persistent state with the SBB part object’s transient state; then the SLEE invokes the @OnPassivate method to return the SBB part object to the Pooled state.

  6. Eventually, the SLEE will transition the SBB part object to the Pooled state. There are two possible normal transitions from the Ready state to the Pooled state: through the @OnPassivate method, and through the @PreRemove method.

    • The SLEE invokes the @OnPassivate method when the SLEE wants to disassociate the SBB object and its dependent SBB part objects from the SBB entity without removing the SBB entity.

    • The SLEE invokes the @PreRemove method when the SLEE wants to remove the SBB entity (in other words, when the SBB entity is removed as part of cascading removal of an SBB entity sub-tree).

  7. When the SBB object and its dependent SBB part objects are put back into the pool, they are no longer associated with the SBB entity. The SLEE can assign the SBB object and SBB part objects to any SBB entity of the same SBB component.

  8. The SLEE may release its references to an SBB object in the pool, along with its dependent SBB part objects, allowing them to be garbage collected. It may do this after calling the unsetSbbContext method on the SBB object, and the @PreDispose method on each SBB part object.

Warning
  • The SbbPartContext object passed by the SLEE to the SBB part object in the constructor, or by dependency injection, is not an object that contains static information. For example, the result of the getSbbLocalObject method might be different each time an SBB part object moves from the Pooled state to the Ready state, and the result of the getActivities method may be different in different event handler method invocations.

  • An SBB part object is only ever associated with one SBB in one service. This means that the getService, getSbb, and getSbbPart methods of an SbbPartContext object always return the same result during the lifetime of the SBB part object.

  • The order in which the SLEE invokes dependent SBB part objects for a given lifecycle callback method is not defined. The SBB part objects may be invoked in any order; however the SBB part object callbacks will always be made before or after the corresponding SBB object lifecycle callback method invocation, as defined in the Lifecycle methods section.

  • A RuntimeException thrown from any method of an SBB part object (including the event handler methods and the lifecycle callbacks invoked by the SLEE) results in the transition to the Does Not Exist state after the appropriate exception handler methods have been invoked. The SLEE will not invoke any method except the @OnException method on the SBB part object after a RuntimeException has been caught. The corresponding SBB entity continues to exist. The SBB entity can continue to receive events and synchronous invocations because the SLEE can use different SBB and SBB part objects to process events and synchronous invocations that should be sent to the SBB entity. For more on exception handling, see the Exception callback method section.

SBB part class

An SBB part must define an SBB part class if the SBB part declares event handler methods. In all other cases, the definition of an SBB part class is optional. An SBB part class must:

  • be defined in a named package; that is, the class must have a package declaration

  • be defined as public

  • not be abstract or final

  • not define the finalize method.

The SBB developer implements the SBB part class. In it, they may define lifecycle methods, event handler methods, and an exception callback. The SBB part class may also use dependency injection for various types of fields.

Lifecycle methods

Each SBB part object has a lifecycle. The SLEE invokes the lifecycle methods of the SBB part object to make the SBB part object aware of its lifecycle state. The SLEE invokes a given lifecycle method on an SBB part object at the same time that it invokes the corresponding lifecycle method on the owning SBB object; however the order in which different dependent SBB parts are invoked for a given lifecycle method invocation is not defined.

The lifecycle methods are described below. In the case of lifecycle methods denoted by annotations, at most one of each lifecycle method may be declared in an SBB part class.

Constructor

The SBB part class must define a public constructor that takes either no arguments or a single argument of type com.opencloud.rhino.slee.sbbpart.SbbPartContext. The SLEE invokes the constructor of an SBB part class to create a new SBB part object. The SLEE creates a new SBB part object when the owning SBB object transitions from the Does Not Exist state to the Pooled state. If the SBB part object needs to use the SbbPartContext object during its lifetime, it should keep a reference to the SbbPartContext object in an instance variable. Alternatively, the SBB part object may obtain a reference to an SbbPartContext object using Dependency Injection.

During the constructor invocation, the SBB part object is not assigned an SBB entity. The SBB part object can use the constructor to allocate and initialise state or connect to resources that are to be held by the SBB part object during its lifetime. Such state and resources cannot be specific to an SBB entity, because the SBB part object might be reused during its lifetime to service multiple SBB entities.

The SBB part object constructor invocation corresponds to the setSbbContext lifecycle method of the SBB abstract class, and is invoked after the setSbbContext method returns successfully.

If both supported constructors are defined, then the one-argument constructor is used.
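
For illustration, below is a minimal sketch of an SBB part class constructor that retains its SbbPartContext for later use; the class name is hypothetical:

import com.opencloud.rhino.slee.sbbpart.SbbPartContext;

public class MySbbPart {
    private final SbbPartContext context;

    // Invoked by the SLEE when the owning SBB object transitions from the
    // Does Not Exist state to the Pooled state. No SBB entity is assigned yet.
    public MySbbPart(SbbPartContext context) {
        this.context = context;  // keep the reference for the object's lifetime
    }
}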

Warning
  • The SBB part class constructor must not define a throws clause.

  • The SBB part object must not attempt to access its persistent state using the CMP field accessor methods during the constructor invocation.

  • The constructor is invoked with an unspecified transaction context.

@PreDispose

The SBB part class may optionally implement a lifecycle callback method invoked by the SLEE before the SLEE terminates the life of the SBB part object. This method must be annotated with the @PreDispose annotation. This method is invoked when the SBB part object transitions from the Pooled state to the Does Not Exist state. During this method, no SBB entity is assigned to the SBB part object. The SBB part object can use this method to free state or resources held by the SBB part object. This state and these resources were typically allocated by the SBB part class constructor.

This method corresponds to the unsetSbbContext lifecycle method of the SBB abstract class, and is invoked before the unsetSbbContext method is invoked on the owning SBB object.

Warning
  • A method annotated with @PreDispose must be declared as public and cannot be static, abstract, or final.

  • This method must not take any arguments, must have a void return type, and must not define a throws clause.

  • The SBB part object must not attempt to access its persistent state using the CMP field accessor methods during this method.

  • This method is invoked with an unspecified transaction context.

@PostCreate

The SBB part class may optionally implement a lifecycle callback method invoked when a new SBB entity is created. The method must be annotated with the @PostCreate annotation. This method is invoked when the SBB part object transitions from the Pooled state to the Ready state as a result of SBB entity creation, and is invoked after the persistent representation of the SBB entity has been created and the SBB part object is assigned to the created SBB entity. This method can be used to initialise any transient state and acquire any resources that the SBB part needs while it is in the Ready state.

Warning
  • A method annotated with @PostCreate must be declared as public and cannot be static, abstract, or final.

  • This method must not take any arguments and must have a void return type.

  • This method may throw a javax.slee.CreateException when there is an application-level problem (rather than a SLEE or system-level problem). The SLEE will propagate the CreateException unchanged to the caller that requested the creation of the SBB entity. The caller may be the SLEE or an SBB object. The throws clause is optional.

  • This method may not define a throws clause that includes any exception other than CreateException.

  • The SLEE guarantees that the values initially returned from the getter methods of any CMP field declared exclusively by the SBB part will be the Java language defaults (such as 0 for integer, or null for object references), unless those CMP fields have been annotated with initial value fields (see CMP Field Enhancements), in which case the initial value returned by the CMP field getter method will be as defined by the initial value field.

  • The SLEE invokes this method with the transaction context used to invoke the sbbCreate and sbbPostCreate methods on the owning SBB object.

  • The SBB part object enters the Ready state after this method returns normally. If this method returns by throwing an exception, the SBB part object does not become Ready.

This method corresponds to the sbbPostCreate lifecycle method of the SBB abstract class, and is invoked after the sbbPostCreate method returns successfully.

Note that there is no SBB part lifecycle method equivalent to the SBB abstract class sbbCreate method.

@OnActivate

The SBB part class may optionally implement a lifecycle callback method invoked when the SLEE needs to assign an SBB part object in the Pooled state to an existing SBB entity. This method must be annotated with the @OnActivate annotation. The SBB part object transitions to the Ready state after this method returns. This method gives the SBB part object a chance to initialise additional transient state and acquire additional resources that it needs while it is in the Ready state.

Warning
  • A method annotated with @OnActivate must be declared as public and cannot be static, abstract, or final.

  • This method must not take any arguments, must have a void return type, and must not define a throws clause.

  • The SBB part object must not attempt to access its persistent state using the CMP field accessor methods during this method.

  • This method executes with an unspecified transaction context.

This method corresponds to the sbbActivate lifecycle method of the SBB abstract class, and is invoked after the sbbActivate method returns successfully.

@OnPassivate

The SBB part class may optionally implement a lifecycle callback method invoked when the SLEE decides to disassociate an SBB part object in the Ready state from the SBB entity it is currently associated with. This method must be annotated with the @OnPassivate annotation. The SBB part object transitions to the Pooled state after this method returns. This method gives the SBB part object the chance to release any state or resources that should not be held while the SBB part object is in the Pooled state. This state and these resources were typically allocated during the @OnActivate method.

Warning
  • A method annotated with @OnPassivate must be declared as public and cannot be static, abstract, or final.

  • This method must not take any arguments, must have a void return type, and must not define a throws clause.

  • The SBB part object must not attempt to access its persistent state using the CMP field accessor methods during this method.

  • This method executes with an unspecified transaction context.

This method corresponds to the sbbPassivate lifecycle method of the SBB abstract class, and is invoked after the sbbPassivate method returns successfully.

@PreRemove

The SBB part class may optionally implement a lifecycle callback method invoked by the SLEE when the SBB entity assigned to the SBB part object is about to be removed. This method must be annotated with the @PreRemove annotation. The SBB part object is in the Ready state when this method is invoked, and it will transition to the Pooled state after this method returns. This method can be used to implement any actions that must be done before the SBB entity’s persistent representation is removed.

Warning
  • A method annotated with @PreRemove must be declared as public and cannot be static, abstract, or final.

  • This method must not take any arguments, must have a void return type, and must not define a throws clause.

  • The SLEE synchronises the SBB part object’s state before it invokes the @PreRemove method. This means that the CMP state of the SBB part object at the beginning of this method is the same as it would be at the beginning of an event handler method.

  • This method is invoked with the same transaction context as used to invoke the sbbRemove method on the owning SBB object.

  • Since the SBB part object will transition to the Pooled state, the state of the SBB part object at the end of this method must be equivalent to the state of a passivated SBB part object. This means that the SBB part object must free any state and release any resource that it would normally release in the @OnPassivate method (if declared).

This method corresponds to the sbbRemove lifecycle method of the SBB abstract class, and is invoked before the sbbRemove method is invoked on the owning SBB object.

@PostLoad

The SBB part class may optionally implement a lifecycle callback method invoked by the SLEE to synchronise the state of the SBB part object with its assigned SBB entity’s persistent state. This method must be annotated with the @PostLoad annotation. The SBB developer can assume that the persistent state of the SBB entity the SBB part object is assigned to has been loaded just before this method is invoked. It is the responsibility of the SBB developer to use this method to re-compute or initialise the values of any transient instance variables in the SBB part object that depend on the SBB entity’s persistent state. In general, any transient state that depends on the persistent state of an SBB entity should be recalculated in this method. The SBB developer can use this method, for instance, to perform some computation on the values returned by the CMP field accessor methods, such as converting text fields to more convenient objects or binary representations.

Warning
  • A method annotated with @PostLoad must be declared as public and cannot be static, abstract, or final.

  • This method must not take any arguments, must have a void return type, and must not define a throws clause.

  • The SLEE invokes this method within a transaction context.

This method corresponds to the sbbLoad lifecycle method of the SBB abstract class, and is invoked after the sbbLoad method returns successfully.

@PreStore

The SBB part class may optionally implement a lifecycle callback method invoked by the SLEE to synchronise the SBB entity’s persistent state with the state of the SBB part object. This method must be annotated with the @PreStore annotation. The SBB developer should use this method to update the SBB entity using the CMP field accessor methods before its persistent state is synchronised. For example, this method may perform conversion of object or binary data representations to text. The SBB developer can assume that after this method returns, the persistent state is synchronised.

Warning
  • A method annotated with @PreStore must be declared as public and cannot be static, abstract, or final.

  • This method must not take any arguments, must have a void return type, and must not define a throws clause.

  • The SLEE invokes this method within a transaction context.

This method corresponds to the sbbStore lifecycle method of the SBB abstract class, and is invoked after the sbbStore method returns successfully.
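
To illustrate how @PostLoad and @PreStore pair up, below is a hedged sketch that maintains a transient value derived from a text CMP field. The lifecycle annotation package is assumed to follow the @PostCreate import used in the example at the end of this page, and BalanceCMPInterface is a hypothetical CMP extension interface with getBalanceText and setBalanceText accessor methods:

import java.math.BigDecimal;
import javax.inject.Inject;
import com.opencloud.rhino.slee.lifecycle.PostLoad;
import com.opencloud.rhino.slee.lifecycle.PreStore;

public class BalanceSbbPart {
    @Inject
    private BalanceCMPInterface cmpFields;  // hypothetical CMP extension interface

    // transient state derived from the SBB entity's persistent state
    private BigDecimal balance;

    @PostLoad
    public void load() {
        // recompute the transient representation after the persistent state is loaded
        balance = new BigDecimal(cmpFields.getBalanceText());
    }

    @PreStore
    public void store() {
        // convert the transient representation back to text before the state is stored
        cmpFields.setBalanceText(balance.toPlainString());
    }
}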

Event handler methods

An SBB part class may receive an event through one of its event handler methods. An SBB part declares event handler methods in the same way as an SBB. For each event type received by the SBB part, you must:

  • provide an event element in the SBB part’s sbb-part deployment descriptor element — the value of the event-direction attribute of the event element must be Receive. It must also include an event-name element and an event-type-ref element.

    • The event-name element provides the SBB part scoped name used within the SBB part class to identify the event type, and determines the name of the event handler method.

    • The event-type-ref element references an event-definition element that provides the event type and the event class.

    • The event-definition element is provided and defined by the event producer of the event type.

    • The initial-event attribute of the event element may optionally be set to True.

    • The event element may optionally include an event-resource-option element.

  • implement the event handler method in the SBB part class — this method contains the application logic that will be invoked to process events of this event type.

The method name of the event handler method is derived by adding an on prefix to the event name assigned to the event type received by the method. The event handler method has one of the following method signatures:

public void on<event name>(<event class> event,
                           <SBB Part Activity Context Interface interface> activity);
public void on<event name>(<event class> event,
                           <SBB Part Activity Context Interface interface> activity,
                           EventContext eventContext);
Warning
  • The first method signature, without an event context argument, is used if the SBB part does not need access to the event context.

  • The second method signature provides access to the event context associated with the event.

  • An SBB part can only have one event handler method for each event type.

  • The event handler method must be declared as public and cannot be static, abstract, or final.

  • The event handler method is a mandatory transactional method. Hence, the SLEE always invokes this method within a transaction.

  • An SBB part object as an event consumer receives an event on an activity context. This activity context is the activity context on which the event was fired. In the case of an event handler method, an SBB part activity context interface object (if the SBB part defines an SBB part activity context interface interface) or a generic activity context interface object represents the activity context on which the event was fired.

  • An event context is associated with the event that is fired if the SBB part implements an event handler method that includes an event context argument. The event context can be used to suspend and resume further event processing of this event.

  • The event handler method may return inadvertently by throwing a RuntimeException. See RuntimeException handling for transactional methods for details on how the SLEE handles this situation.

Event handler methods declared by SBB parts have the same rules and restrictions as event handler methods declared by SBBs.

An SBB part can manage the event types that it may receive on a particular activity context to which it is attached, by altering the event mask as an SBB would. The maskEvent and getEventMask methods defined in the SbbPartContext interface behave identically to the same methods defined in the SbbContext interface. An individual SBB or SBB part may only mask the events that it itself receives. For example, an SBB cannot mask an event received by a dependent SBB part.
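
As a sketch, an SBB part might stop receiving a particular event type on an activity context like this; it assumes an injected SbbPartContext field named context, a Tracer named tracer, and an event named TimerEvent declared in the sbb-part deployment descriptor:

private void ignoreFurtherTimerEvents(javax.slee.ActivityContextInterface aci) {
    try {
        // mask the event using the event name declared in the deployment descriptor
        context.maskEvent(new String[] { "TimerEvent" }, aci);
    } catch (javax.slee.UnrecognizedEventException e) {
        tracer.warning("could not mask timer event", e);
    }
}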

Initial event selector methods

An SBB part class may define an initial event selector method for any event declared as an initial event. The behaviour and function of an initial event selector declared by an SBB part is identical to the behaviour and function of an initial event selector method declared by an SBB. The method signature of the initial event selector method is as follows:

public InitialEventSelector <initial event selector method name>(InitialEventSelector ies);
Warning
  • The initial event selector method must be declared as public and cannot be static, abstract, or final.

  • The method name is declared in the SBB part deployment descriptor. The method name must not begin with sbbPart, and must be a valid Java identifier.

  • This method is a non-transactional method.

  • It is only invoked on SBB part objects in the Pooled state.
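
For example, below is a sketch of an initial event selector method that converges related events on the calling party address; the event class IncomingCallEvent and its accessor method are hypothetical:

public InitialEventSelector selectInitialCallEvent(InitialEventSelector ies) {
    IncomingCallEvent event = (IncomingCallEvent) ies.getEvent();  // hypothetical event class
    // events that produce the same custom name converge on the same SBB entity tree
    ies.setCustomName(event.getCallingPartyAddress());
    return ies;
}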

Exception callback method

The SBB part class may optionally implement a callback method invoked by the SLEE to handle RuntimeExceptions thrown by the SBB part’s event handler methods and the mandatory transactional lifecycle callback methods: @PostCreate, @PreRemove, @PostLoad, and @PreStore. This method must be annotated with the @OnException annotation.

Warning
  • A method annotated with @OnException must be declared as public and cannot be static, abstract, or final.

  • This method must declare three arguments in the following order:

    1. an Exception argument — this is the exception thrown by one of the methods invoked by the SLEE, such as an event handler method or a lifecycle method.

    2. an Object argument — this is the event argument passed to the event handler method, if the exception was thrown by an event handler method. If the exception was not thrown by an event handler method, this argument is null.

    3. an ActivityContextInterface argument — this is the ActivityContextInterface argument passed to the event handler method, if the exception was thrown by an event handler method. If the exception was not thrown by an event handler method, this argument is null. The specific type of this argument may be any subclass of ActivityContextInterface that is assignable to the SBB part’s declared activity context interface type.

  • The method must have a void return type, and must not define a throws clause.

  • This method is a mandatory transactional method.

  • The SLEE does not invoke this method if a non-transactional method invocation returns by throwing a RuntimeException.

  • If this method is invoked on an SBB part object in the Ready state, the state of the SBB part object remains as it was at the point that the RuntimeException was thrown. The SBB part object moves to the Does Not Exist state after the @OnException method has been invoked.
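
A sketch of an exception callback method follows; tracer is an assumed injected Tracer field, and the import for the @OnException annotation is omitted as its package is not shown on this page:

@OnException
public void onException(Exception e, Object event, javax.slee.ActivityContextInterface aci) {
    // event and aci are non-null only if the exception came from an event handler method
    if (event != null) {
        tracer.severe("error processing event " + event, e);
    } else {
        tracer.severe("error in lifecycle callback method", e);
    }
}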

Tip A well-written SBB part should not throw any RuntimeExceptions from any of its SLEE-invoked methods. Instead, the SBB part should place exception handling logic inside a try { } catch (Throwable) block and handle RuntimeExceptions within each invoked method.
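
Following the tip above, an event handler might be structured like this (a sketch; handleTimerEvent and the tracer field are assumptions):

public void onTimerEvent(TimerEvent event, ExampleSbbPartActivityContextInterface aci) {
    try {
        handleTimerEvent(event, aci);  // the real application logic
    } catch (Throwable t) {
        // handle the failure locally rather than letting it propagate to the SLEE
        tracer.warning("error processing timer event", t);
    }
}
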
RuntimeException handling for transactional methods

When a SLEE-originated mandatory transactional method is invoked on an SBB part object and the invocation returns with a RuntimeException thrown, the SLEE performs the following actions:

  • The SLEE logs this condition.

  • The SLEE marks the transaction of the invocation for rollback.

  • The SLEE invokes the @OnException method, if declared, of the same SBB part object with the same transaction. The SBB part object may be in the Pooled state or in the Ready state. For example, if a @PostCreate method throws the RuntimeException, then the SBB part object remains in the Pooled state when the SLEE invokes the @OnException method on the SBB part object. If an event handler method throws the RuntimeException, then the SBB part object remains in the Ready state when the SLEE invokes the @OnException method on the SBB part object.

  • The SLEE moves the SBB part object, along with the owning SBB object and any other dependent SBB part objects, to the Does Not Exist state.

  • If the @OnException method of the SBB part object returns with another RuntimeException thrown, the SLEE logs this condition. The @OnException method is not reinvoked in this case.

RuntimeException handling for non-transactional methods

When the SLEE invokes a non-transactional method of an SBB part object and the invocation returns by throwing a RuntimeException, the SLEE performs the following sequence of actions:

  • The SLEE logs this occurrence.

  • The SLEE moves the SBB part object, along with the owning SBB object and any other dependent SBB part objects, to the Does Not Exist state.

Transaction rollback processing

An SBB part object is not involved with transaction rollback processing. If transaction rollback occurs after the SLEE invokes a mandatory transactional method on an SBB part object, such as an event handler method, the sbbRolledBack callback method will be invoked on an SBB object of the SBB part’s owning SBB in accordance with the normal SLEE rules for rollback processing.

Method name restrictions

Non-private methods (that is, public, protected, or package private methods) defined in an SBB part class must not begin with sbbPart.

Dependency injection

When implementing an SBB part class, the SBB developer has the option to use dependency injection to initialise the value of certain types of class fields. Dependency injection eliminates the need for the typical boilerplate code associated with the initialisation of these fields.

Dependency injection is supported using the API provided by JSR 330 Dependency Injection for Java, in particular the @javax.inject.Inject and @javax.inject.Named annotations. Any SBB part class field where dependency injection is required must be annotated with @Inject. The @Named annotation may also be used, where permitted, to provide an additional parameter to the injector.

The @Inject annotation may be used on SBB part class fields of the following types:

  • com.opencloud.rhino.slee.sbbpart.SbbPartContext

  • com.opencloud.rhino.cmp.CMPFields, or any type that can be assigned a CMP extension interface defined by the SBB part

  • javax.slee.facilities.Tracer or com.opencloud.rhino.facilities.Tracer

    • The @Named annotation may be used to specify the name of the tracer to assign to the field. The named value must be a valid SLEE tracer name.

  • javax.slee.facilities.ActivityContextNamingFacility

  • javax.slee.facilities.AlarmFacility

  • javax.slee.facilities.TimerFacility

  • javax.slee.profile.ProfileFacility or com.opencloud.rhino.facilities.profile.ProfileFacility

  • com.opencloud.rhino.facilities.childrelations.ChildRelationFacility

  • com.opencloud.rhino.facilities.usage.UsageFacility

    • The UsageFacility is only available to SBB parts that declare at least one usage parameters interface.

    • If the SBB part declares a root usage parameter set type, the @Inject annotation may also be used on a field whose type can be assigned the usage parameters interface of the root usage parameter set type. Such a field will be assigned the root usage parameter set for the SBB part.

  • com.opencloud.rhino.license.LicenseFacility

  • javax.slee.profile.ProfileTableActivityContextInterfaceFactory

  • javax.slee.nullactivity.NullActivityFactory

    • The @Named annotation may be used to indicate the specific type of null activity factory to assign to the field. The named value must be one of replicated, non-replicated, or the empty string. The empty string results in the default null activity factory for the service being used, and is equivalent to omitting the @Named annotation.

  • javax.slee.nullactivity.NullActivityContextInterfaceFactory

  • javax.slee.serviceactivity.ServiceActivityFactory

  • javax.slee.serviceactivity.ServiceActivityContextInterfaceFactory

An injected field must not be static or final. Any access modifier (public, protected, package private, or private) is permitted.

If an SBB part class makes use of dependency injection, the SLEE injects these references after the SBB part object is created; in other words, after the constructor invocation has returned, and before any other methods are invoked on the object. Fields are injected beginning with the topmost superclass that requests injection, then working down through each subclass as required.
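
As an illustrative sketch combining several of the injectable types listed above (the class and field names are arbitrary):

import javax.inject.Inject;
import javax.inject.Named;
import javax.slee.facilities.TimerFacility;
import javax.slee.nullactivity.NullActivityFactory;
import com.opencloud.rhino.slee.sbbpart.SbbPartContext;

public class InjectionExampleSbbPart {
    @Inject
    private SbbPartContext context;

    @Inject
    private TimerFacility timerFacility;

    // a replicated null activity factory, selected using @Named as described above
    @Inject @Named("replicated")
    private NullActivityFactory nullActivityFactory;
}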

SBB abstract class abstract method replacements

The SBB abstract class allows the declaration of abstract methods in order to provide various functionality to the SBB code. Since the SBB part class cannot be abstract, alternative mechanisms are provided so that the same functionality is available to SBB parts. These mechanisms are described below.

Per-instance state

An SBB declares its per-instance state by defining abstract getter and setter methods in the SBB abstract class and indicating that they relate to CMP fields using <cmp-field> entries in the SBB deployment descriptor. Rhino also allows an SBB to declare CMP fields using CMP extension interfaces. An SBB part can also define its per-instance state using CMP extension interfaces. The SBB part obtains access to the CMP fields defined in CMP extension interfaces using a com.opencloud.rhino.cmp.CMPFields object, obtainable from its SbbPartContext object. The CMPFields object may be typecast to any CMP extension interface declared by the SBB part, thus exposing the CMP field accessor methods defined by the interface.
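
For example (a sketch, assuming a declared CMP extension interface named CounterCMPInterface and an SbbPartContext field named context):

// obtain the CMPFields object and typecast it to a declared CMP extension interface
CounterCMPInterface cmp = (CounterCMPInterface) context.getCMPFields();
cmp.setCounter(cmp.getCounter() + 1);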

Activity context interface narrow method

An SBB that declares an activity context interface that is a subtype of javax.slee.ActivityContextInterface is expected to define an abstract activity context interface narrow method in the SBB abstract class. This method converts or "narrows" a generic javax.slee.ActivityContextInterface object to an object implementing the SBB’s activity context interface. An SBB part that declares an activity context interface that is a subtype of javax.slee.ActivityContextInterface can similarly narrow a generic javax.slee.ActivityContextInterface object using the asSbbPartActivityContextInterface method on its SbbPartContext object. This method returns an activity context interface object that implements the SBB part’s declared activity context interface.

Child relations

For each child relation that an SBB has, an abstract child relation accessor method must be declared in the SBB abstract class. This method returns a javax.slee.ChildRelation object that allows child SBBs to be created, inspected, and removed.

As SBB parts cannot declare their own SBB child relations, SBB parts do not need to define child relation accessor methods.

Profile CMP interface accessor method

The profile CMP interface accessor method was deprecated in the JAIN SLEE 1.1 specification. As a replacement, SBBs can use the profile facility and ProfileTable objects to query and access profiles.

There is no equivalent to the profile CMP interface accessor method for SBB parts. Like SBBs, SBB parts can use the profile facility to query and access profiles.

Usage parameters interface accessor methods

An SBB that declares a usage parameters interface is expected to declare at least one abstract usage parameters interface accessor method in the SBB abstract class. This method returns an object implementing the SBB’s usage parameters interface allowing usage statistics for the SBB to be accumulated.

Rhino provides an extension mechanism that allows an SBB to declare more than one usage parameters interface, and defines a usage facility with which SBBs can manage and access their usage parameter sets. The usage facility eliminates the need for an SBB to declare the usage parameters interface accessor methods. An SBB part may also declare usage parameters interfaces using the same extension mechanism, and may manage and access its usage parameter sets using the usage facility.

Fire event methods

If an SBB needs to fire an event as part of its application logic, it must declare an abstract fire event method to do so. Event firing is not supported for SBB parts, so an SBB part has no equivalent to a fire event method.

SbbPartContext interface

The SLEE provides each SBB part object, if requested through an SBB part class constructor argument or dependency injection, with an SbbPartContext object. The SbbPartContext object gives the SBB part object access to the SBB part object’s context maintained by the SLEE, allows the SBB part object to invoke functions provided by the SLEE, and provides information about the SBB entity assigned to the SBB part object.

An SbbPartContext object is associated with one service and one SBB; and the associated service and SBB do not change during the lifetime of that SbbPartContext object.

The SbbPartContext object implements the SbbPartContext interface. The SbbPartContext interface extends the JAIN SLEE defined SbbContext interface with additional functionality, as described below.

Methods inherited from SbbContext

The methods inherited from the JAIN SLEE defined SbbContext interface have the same meaning and purpose when used by SBB parts.

SbbPartContext interface getSbbPart method

The getSbbPart method returns an SbbPartID object that encapsulates the component identity of the SBB part.

SbbPartContext interface getTracer method

The getTracer method overrides the same method from SbbContext to return a Rhino-specific extension of the Tracer interface.

Tip For more about this Tracer extension, please see SLEE Facilities.

SbbPartContext interface asSbbPartActivityContextInterface method

The asSbbPartActivityContextInterface method is used by the SBB part to narrow an object that implements the generic ActivityContextInterface to an object that implements the SBB part activity context interface so that the SBB part can access the activity context interface attributes defined in the SBB part activity context interface.

This method takes as its input parameter an activity context interface object and returns an object that implements the SBB part activity context interface interface of the SBB part. The SBB part activity context interface interface provides the accessor methods that allow an SBB part object to access the shareable attributes of the SBB part that are stored in the activity context interface.

If the SBB part does not define an SBB part activity context interface interface, then this method returns the same object passed in as a parameter.
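
For example (a sketch; the activity context interface type and its attribute accessors are hypothetical, and the cast assumes the method’s declared return type is the generic interface):

// narrow a generic activity context interface to the SBB part's declared type
ExampleSbbPartActivityContextInterface myAci =
        (ExampleSbbPartActivityContextInterface) context.asSbbPartActivityContextInterface(aci);
myAci.setSharedCounter(myAci.getSharedCounter() + 1);  // access a shareable attribute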

SbbPartContext interface getActivities methods

The SbbPartContext interface defines two getActivities methods:

  • getActivities() — This method overrides the same method from SbbContext to return a Rhino-specific extension of the ActivityContextInterface interface. Otherwise, this method behaves in the same way as defined by the JAIN SLEE specification for SBBs.

Tip For more about this ActivityContextInterface extension, please see Miscellaneous SLEE API Enhancements.
  • getActivities(Class) — This method behaves similarly to the no-argument version; however it only returns activity context interface objects where the type of the underlying activity object is assignable to the class argument. For example, if this method was invoked with NullActivity.class as an argument, then only activity context interface objects for the null activities that the SBB entity currently associated with the SBB part object is attached to would be returned.
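
For example, a sketch that detaches the SBB entity from every null activity it is attached to, assuming an injected SbbPartContext field named context:

// only activity contexts whose underlying activity is a NullActivity are returned
for (ActivityContextInterface aci : context.getActivities(NullActivity.class)) {
    aci.detach(context.getSbbLocalObject());
}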

SbbPartContext interface getConvergenceName method

The getConvergenceName method returns the convergence name that the SBB entity the SBB part is associated with was created with. The value returned from this method is a vendor-specific string that uniquely identifies the initial event selector conditions that led to the SBB entity’s creation.

This method only returns a non-null value if invoked on an SbbPartContext object belonging to a root SBB entity.

SbbPartContext interface getCMPFields method

The getCMPFields method provides the SBB part with access to its per-instance state.

SbbPartContext interface getJndiBindings method

The getJndiBindings method returns a map describing the JNDI bindings available to the SBB part.

Tip For more about this method, please see SLEE Facilities.

SbbPartContext interface enableEntityTreePersistence method

The enableEntityTreePersistence method enables persistence of application state to external replicated storage resources such as a key/value store. Initial replicated persistence of application state can be disabled using the service-properties element in the extension service deployment descriptor, then enabled using this method when the SBB entity has reached a stable state.

Tip For more about application initiated persistence, please see Application initiated persistence.

SbbPartContext interface getConvergenceNameSessionOwnershipRecord methods

These methods obtain a reference to the convergence name session ownership record for the SBB entity tree.

Tip For more about convergence name session ownership records, please see Convergence name session ownership record.

SBB part component environment

An SBB part has access to the same JNDI environment bindings as its owning SBB. All the SLEE facilities, environment entries, and resource adaptor type bindings that are available to an SBB are also available to all its dependent SBB parts. An SBB part accesses its JNDI environment in exactly the same way as an SBB.

The current specification of SBB parts does not yet allow an SBB part component to declare its own environment entries. An SBB part component may, however, define its own resource adaptor type bindings.

SBB part example

Below is an example of an SBB part class. The SBB part declares an event handler method that receives SLEE timer events; the handler logs each event and increments a CMP field and a usage counter:

import javax.inject.Inject;
import javax.inject.Named;
import javax.slee.CreateException;
import javax.slee.facilities.TimerEvent;
import com.opencloud.rhino.facilities.Tracer;
import com.opencloud.rhino.slee.lifecycle.PostCreate;

public class ExampleSbbPart {
    @PostCreate
    public void onCreate() throws CreateException {
        rootTracer.info("SBB part created");
    }

    public void onTimerEvent(TimerEvent event, ExampleSbbPartActivityContextInterface aci) {
        // get CMP counter
        int count = cmpFields.getCounter() + 1;

        // log event
        timerTracer.info("received timer event: " + count);

        // increment CMP counter
        cmpFields.setCounter(count);

        // record usage stats
        rootUsage.incrementTimerEvents(1);

        ...
    }


    @Inject
    private Tracer rootTracer;

    @Inject @Named("timer")
    private Tracer timerTracer;

    @Inject
    private ExampleSbbPartCMPInterface cmpFields;

    @Inject
    private ExampleSbbPartUsageInterface rootUsage;
}

CMP Field Enhancements

This page describes the following CMP field enhancements: array support and serialization enhancements.

Array support

As an extension to the JAIN SLEE specification, Rhino supports CMP field declarations of arrays for the following SLEE-defined types:

  • javax.slee.ActivityContextInterface, and any subclass of this interface

  • javax.slee.SbbLocalObject, and any subclass of this interface

    • If the abstract getter and setter methods for the CMP field are defined in the SBB abstract class (as opposed to a CMP extension interface), the corresponding <cmp-field> declaration in the deployment descriptor may not include an <sbb-alias-ref> element.

  • javax.slee.EventContext

  • javax.slee.profile.ProfileLocalObject, and any subclass of this interface.

Arrays may be declared with any dimension. Array support is automatic wherever the basic type is supported in CMP fields; in other words, no special declaration or directive is necessary.
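
A hedged sketch of a CMP extension interface declaring an array-typed CMP field; the interface and field names are hypothetical, and the deployment descriptor wiring is omitted:

import javax.slee.ActivityContextInterface;

public interface WatchedActivitiesCMPInterface {
    // a CMP field storing a one-dimensional array of activity context interface objects
    ActivityContextInterface[] getWatchedActivities();
    void setWatchedActivities(ActivityContextInterface[] watched);
}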

Serialization enhancements

Rhino introduces a number of enhancements that offer significantly more control and flexibility over how CMP field values are serialized, and much better serialization performance, when compared with standard Java serialization. These include:

FastSerializable

The com.opencloud.util.FastSerializable interface provides a simple alternative to the standard Java java.io.Serializable interface where the exact type of an object to be serialized is known at compile time.

The FastSerializable interface is defined as follows:

package com.opencloud.util;

public interface FastSerializable {
    public void toStream(java.io.DataOutput stream)
        throws java.io.IOException;
}

A class implementing the FastSerializable interface must provide a public constructor that takes either:

  • a single java.io.DataInput argument; or

  • a java.io.DataInput argument and a java.lang.ClassLoader argument.

If both constructors are declared by a given class then the two-argument constructor is used.

Serialization of the object is performed by an invocation of the toStream method. Object state must be written to the DataOutput passed as the input argument. Deserialization is performed by new object instantiation and constructor invocation. Object state can be read from the DataInput argument. The ClassLoader argument, if present, can be used to resolve any application-specific classes stored in the stream.

Rhino recognises and supports the FastSerializable contract on all CMP fields. Arrays of FastSerializable types of any dimension are handled automatically by Rhino, and no special treatment is necessary.

While the FastSerializable contract has some similarity to the java.io.Externalizable contract, there are some differences that warrant discussion:

  • FastSerializable operates in terms of data I/O streams, while Externalizable operates in terms of object I/O streams. The reason for this is that FastSerializable aims to avoid the generally costly serialization overhead incurred when serializing arbitrary objects. By limiting the stream I/O to basic datatypes, it forces the user to think about the most performance-efficient manner in which the object state can be serialized and deserialized.

  • When FastSerializable types are used in CMP fields, Rhino’s CMP implementation typically determines the type of object to reconstruct during deserialization at code-generation time, not run time; so care must be taken that objects are restored using the correct type. Problems occur, for example, if a CMP field is declared in terms of FastSerializable type Foo, but an object of subclass Bar is stored in the CMP field. The CMP implementation assumes that the CMP field will only store a Foo object, and will instantiate a Foo object when the CMP field is deserialized, leading to deserialization failures. Unless this issue is taken into consideration, it is good practice to make classes that implement FastSerializable final.

  • No handling of shared references is performed by the implementation. If the same object is encountered twice during serialization, then two copies of the object will be stored and subsequently deserialized.

FastSerialize

The com.opencloud.util.FastSerialize class provides some utility functions that may be useful to application developers implementing their own serialization logic based around FastSerializable.

Example

Below is an example of a FastSerializable type:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import com.opencloud.util.FastSerializable;

public final class Person implements FastSerializable {
    public Person(String firstName, String lastName, int age) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.age = age;
    }

    public Person(DataInput in, ClassLoader classLoader) throws IOException {
        firstName = in.readUTF();
        lastName = in.readUTF();
        age = in.readInt();
    }

    @Override
    public void toStream(DataOutput out) throws IOException {
       out.writeUTF(firstName);
       out.writeUTF(lastName);
       out.writeInt(age);
    }

    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
    public int getAge() { return age; }

    private final String firstName;
    private final String lastName;
    private final int age;
}

Encodable

The com.opencloud.rhino.cmp.Encodable interface provides a similar contract to that of FastSerializable, but is specifically targeted at types stored in SBB CMP fields. It provides access to utility functions that encode and decode SLEE-specific datatypes not defined as serializable, such as SBB and profile local object references, and to an encodable context that can be set by an SBB to pass information to the encoding and decoding methods, such as references to resource adaptor provider objects.

The Encodable interface is defined as follows:

package com.opencloud.rhino.cmp;

import com.opencloud.rhino.cmp.codecs.EncoderUtils;

public interface Encodable {
    public void encode(java.io.DataOutput out, EncoderUtils utils)
        throws java.io.IOException;
}

A class implementing the Encodable interface must provide a public constructor that takes, in this order: a java.io.DataInput argument, a java.lang.ClassLoader argument, and a com.opencloud.rhino.cmp.codecs.DecoderUtils argument.

Serialization of the object is performed by an invocation of the encode method. Object state must be written to the DataOutput passed as the input argument. Deserialization is performed by new object instantiation and constructor invocation. Object state can be read from the DataInput argument. The ClassLoader argument can be used to resolve any application-specific classes stored in the stream.

The constructor and encode methods of an Encodable type are always invoked with the same transaction context used to access or update the CMP field. This is typically only of consequence if an encodable context is used to provide access to other SBB CMP fields.

EncoderUtils / DecoderUtils

SLEE-defined datatypes such as SBB and profile local objects, activity context interface objects, and event context objects are not defined by the SLEE specification as being serializable. The SLEE specification does make provision for storing objects of these types directly into CMP fields; but the lack of implicit serializability means that, for example, an SBB local object reference cannot be encapsulated within some other object which is stored into CMP, as object serialization will fail when it reaches the unserializable SBB local object reference.

The EncoderUtils object passed to the encode method provides access to methods that can serialize these SLEE-defined datatypes, allowing classes that implement the Encodable contract to encapsulate objects of these datatypes and still be storable into CMP fields. The corresponding DecoderUtils object passed to the decoding constructor provides access to methods that can deserialize these datatypes, allowing correct object reconstruction during deserialization.

Rhino recognises and supports the Encodable contract on all CMP fields; however, the utility methods provided by SleeDatatypeEncoder and SleeDatatypeDecoder only function under certain conditions:

  • Encode and decode of all SLEE-defined datatypes is supported for SBB and SBB Part CMP fields.

  • Encode and decode of EventContext objects is supported for Activity Context Interface attributes.

  • Encode and decode of SLEE-defined datatypes is unsupported in any other case, and invoked methods will throw a java.lang.UnsupportedOperationException.

Arrays of Encodable types of any dimension are handled automatically by Rhino, and no special treatment is necessary.

Unlike FastSerializable types, CMP fields that store Encodable types may, at runtime, store a subclass of the declared CMP field type without issue. For example, if a CMP field is declared in terms of Encodable type Foo, an object of subclass Bar may be stored in the CMP field and it will serialize and deserialize as expected. Deserialization is, however, more efficient if the type of the stored object is the same as the CMP field type, as reflection must be used to reconstruct a stored object if the type of the stored object differs from the expected type.
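For illustration, below is a minimal sketch of this behaviour using the Foo and Bar types described above (where Foo implements Encodable and Bar extends Foo):

public abstract class MySbb implements Sbb {
    // CMP field declared in terms of the Encodable type Foo
    public abstract void setFoo(Foo foo);
    public abstract Foo getFoo();

    public void someMethod() {
        // storing a subclass of the declared CMP field type is permitted
        setFoo(new Bar());

        // the stored object is reconstructed as a Bar; reflection is used
        // because the stored type differs from the declared field type
        Foo foo = getFoo();

        ...
    }

    ...
}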

Like FastSerializable, no handling of shared references is performed by the implementation. If the same object is encountered twice during serialization, then two copies of the object will be stored and subsequently deserialized.

Example

Below is an example of an Encodable type:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import javax.slee.ActivityContextInterface;
import com.opencloud.rhino.cmp.Encodable;
import com.opencloud.rhino.cmp.codecs.DecoderUtils;
import com.opencloud.rhino.cmp.codecs.EncoderUtils;
import com.opencloud.rhino.cmp.codecs.SleeDatatypeDecoder;
import com.opencloud.rhino.cmp.codecs.SleeDatatypeEncoder;

public final class Relay implements Encodable {
    public Relay(IncomingSbbLocalObject incoming, OutgoingSbbLocalObject outgoing, ActivityContextInterface aci) {
        this.incoming = incoming;
        this.outgoing = outgoing;
        this.aci = aci;
    }

    public Relay(DataInput in, ClassLoader classLoader, DecoderUtils utils) throws IOException {
        SleeDatatypeDecoder decoder = utils.getSleeDatatypeDecoder();
        incoming = decoder.decodeSbbLocalObject(in);
        outgoing = decoder.decodeSbbLocalObject(in);
        aci = decoder.decodeActivityContextInterface(in);
        messageCount = in.readInt();
    }

    @Override
    public void encode(DataOutput out, EncoderUtils utils) throws IOException {
       SleeDatatypeEncoder encoder = utils.getSleeDatatypeEncoder();
       encoder.encodeSbbLocalObject(incoming, out);
       encoder.encodeSbbLocalObject(outgoing, out);
       encoder.encodeActivityContextInterface(aci, out);
       out.writeInt(messageCount);
    }

    public IncomingSbbLocalObject getIncomingSbb() { return incoming; }
    public OutgoingSbbLocalObject getOutgoingSbb() { return outgoing; }
    public ActivityContextInterface getActivityContextInterface() { return aci; }

    public void incMessageCount() { messageCount++; }
    public int getMessageCount() { return messageCount; }

    private final IncomingSbbLocalObject incoming;
    private final OutgoingSbbLocalObject outgoing;
    private final ActivityContextInterface aci;
    private int messageCount;
}

Datatype codecs

The Encodable contract provides an in-line mechanism for object serialization. That is, code for serialization and deserialization forms part of the class itself. There may be times, however, when it is desired or necessary for the serialization code to be separated from the class being serialized. For example, the serialization logic may have common components that can be shared between multiple classes, or the source code for the class being serialized may not be available to be enhanced to support the FastSerializable or Encodable serialization contracts.

To support these situations, Rhino allows a datatype codec to be defined and associated with either a CMP field or a serializable class directly. The datatype codec specifies how objects of the target type are serialized and deserialized, essentially providing a third-person perspective to the Encodable contract.

A datatype codec must implement the com.opencloud.rhino.cmp.codecs.DatatypeCodec interface. This interface is defined as follows:

package com.opencloud.rhino.cmp.codecs;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public interface DatatypeCodec<T> {
    public void encode(T value, DataOutput out, EncoderUtils utils)
        throws IOException;
    public T decode(DataInput in, ClassLoader classLoader, DecoderUtils utils)
        throws IOException;
}

A datatype codec class must be public (and static, if an inner class) and provide a public no-argument constructor. The encode method functions identically to the encode method defined by the Encodable interface, but takes the object to be serialized as an additional argument. The decode method functions identically to the deserialization constructor required by the Encodable contract, but returns the deserialized object as the method's return value. The generic type parameter T identifies the type of object that the datatype codec is capable of serializing and deserializing. Generally, a datatype codec does not need to concern itself with encoding or decoding null values, as Rhino will only invoke the codec for non-null values. There is one exception to this rule, discussed in the Datatype codecs for collections section below.

The encode and decode methods are always invoked with the same transaction context used to access or update the CMP field. This is typically only of consequence if an encodable context is used to provide access to other SBB CMP fields.

A datatype codec is associated with the corresponding datatype using the @DatatypeCodecType annotation. This annotation requires the datatype codec class to be specified as an argument. The annotation can be used either directly on the target class to be serialized, or attached to a CMP field getter or setter method. If attached to a CMP field getter or setter method of an array type, then the datatype codec need only be defined in terms of the base array component type. The codec will be invoked for each non-null array element encountered during serialization or deserialization.

Datatype codecs are supported on all CMP fields where Encodable types are supported, and have the same conditions of use.

The @DatatypeCodecType annotation may not be used on a CMP field getter or setter method where the CMP field is one of the following types, or is an array of any dimension of one of the following types:

  • javax.slee.ActivityContextInterface, and any subclass of this interface

  • javax.slee.SbbLocalObject, and any subclass of this interface

  • javax.slee.EventContext

  • javax.slee.profile.ProfileLocalObject, and any subclass of this interface.

Examples

Below is an example of a datatype codec handling the serialization of class type Customer:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import com.opencloud.rhino.cmp.codecs.DatatypeCodec;
import com.opencloud.rhino.cmp.codecs.DecoderUtils;
import com.opencloud.rhino.cmp.codecs.EncoderUtils;

public class CustomerCodec implements DatatypeCodec<Customer> {
    @Override
    public void encode(Customer value, DataOutput out, EncoderUtils utils) throws IOException {
        ...
    }

    @Override
    public Customer decode(DataInput in, ClassLoader classLoader, DecoderUtils utils) throws IOException {
        return ...
    }
}

The datatype codec can be associated directly with the Customer datatype by annotating the class itself as shown below:

@DatatypeCodecType(CustomerCodec.class)
public class Customer ...

Alternatively, a CMP field storing a Customer object can be annotated to declare the datatype codec to be used for that CMP field, as shown below:

public abstract class MySbb implements Sbb {
    ...

    @DatatypeCodecType(CustomerCodec.class)
    public abstract void setCustomer(Customer customer);
    public abstract Customer getCustomer();
}

The datatype codec can be equally used on a CMP field storing an array of Customer objects, as shown below:

public abstract class MySbb implements Sbb {
    ...

    @DatatypeCodecType(CustomerCodec.class)
    public abstract void setCustomers(Customer[] customers);
    public abstract Customer[] getCustomers();
}

Encodable context

There are a number of use cases where serialization and/or deserialization of a CMP object requires access to resources that the Encodable and DatatypeCodec encode and decode methods may not natively have access to. For example, deserialization of a stored network message may require access to the owning resource adaptor provider object to reconstruct the correct object graph. To solve this problem, Rhino defines the concept of an encodable context. An encodable context can provide access to the resources that the encode and decode methods need in order to fulfill their function.

Encodable context definition

The encodable context required for any given Encodable or DatatypeCodec type may be defined as either a class or an interface, though it is strongly recommended that the context be defined as an interface. This allows the provider of the context to more easily combine the contexts required by multiple Encodable or DatatypeCodec types used in different CMP fields into a single implementation object. An encodable context may provide read access to other CMP state, but should not provide write access to any CMP state; SLEE behaviour is undefined if arbitrary CMP fields are modified during the encode or decode of another CMP field. Otherwise, there are no specific requirements or restrictions on what an encodable context may provide access to.

Below is an example of an encodable context interface:

import javax.slee.facilities.Tracer;
import org.jainslee.resources.diameter.ro.RoProviderFactory;

public interface RoProviderContext {
    public RoProviderFactory getRoProviderFactory();
    public Tracer getTracer();
    public int getSomeContextValue();
}

An Encodable or DatatypeCodec object obtains an encodable context object from the EncoderUtils argument passed to the encode method, or from the DecoderUtils argument passed to the deserialization constructor (for Encodables) or decode method (for DatatypeCodecs).
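For illustration, below is a hedged sketch of a hypothetical Encodable type (the RatedMessage class is invented for this example) that obtains its encodable context via the getEncodableContext method and typecasts it to the RoProviderContext interface shown above:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import com.opencloud.rhino.cmp.Encodable;
import com.opencloud.rhino.cmp.codecs.DecoderUtils;
import com.opencloud.rhino.cmp.codecs.EncoderUtils;

public final class RatedMessage implements Encodable {
    public RatedMessage(String payload) {
        this.payload = payload;
    }

    // deserialization constructor required by the Encodable contract
    public RatedMessage(DataInput in, ClassLoader classLoader, DecoderUtils utils) throws IOException {
        RoProviderContext context = (RoProviderContext) utils.getEncodableContext();
        context.getTracer().fine("decoding RatedMessage");
        payload = in.readUTF();
    }

    @Override
    public void encode(DataOutput out, EncoderUtils utils) throws IOException {
        // typecast the context object to the expected encodable context type
        RoProviderContext context = (RoProviderContext) utils.getEncodableContext();
        context.getTracer().fine("encoding RatedMessage");
        out.writeUTF(payload);
    }

    private final String payload;
}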

Encodable context provider

If an Encodable or DatatypeCodec type is a consumer of an encodable context, then there must be a corresponding provider of that context. Currently, only SBBs support providing an encodable context object, for use by CMP fields declared by the SBB itself or any dependent SBB part. An SBB sets the encodable context using the setEncodableContext method on its com.opencloud.rhino.slee.RhinoSbbContext object (a Rhino extension of javax.slee.SbbContext):

package com.opencloud.rhino.slee;

public interface RhinoSbbContext extends SbbContext {
    public <T> void setEncodableContext(T context)
        throws SLEEException;

    ...
}

The encodable context object set by the SBB must implement all encodable context types expected by the Encodable or DatatypeCodec types that it uses, so that each Encodable or DatatypeCodec type may typecast the object to the encodable context type that it expects. An encodable context object is scoped to an SBB object; therefore the recommended place to initialise the encodable context is in the SBB’s setSbbContext method.

Below is an example implementation of the RoProviderContext encodable context interface shown above:

import javax.naming.InitialContext;
import javax.slee.Sbb;
import javax.slee.SbbContext;
import javax.slee.facilities.Tracer;
import com.opencloud.rhino.slee.RhinoSbbContext;
import org.jainslee.resources.diameter.ro.RoProviderFactory;

public abstract class MySbb implements Sbb {
    public void setSbbContext(SbbContext context) {
        RhinoSbbContext rhinoContext = (RhinoSbbContext)context;

        // JNDI lookup requires a typecast; exception handling elided for brevity
        final RoProviderFactory roProviderFactory = (RoProviderFactory)new InitialContext().lookup(...);
        final Tracer encodableContextTracer = context.getTracer(...);
        rhinoContext.setEncodableContext(new RoProviderContext() {
            @Override
            public RoProviderFactory getRoProviderFactory() {
                // return provider factory from JNDI
                return roProviderFactory;
            }
            @Override
            public Tracer getTracer() {
                // return tracer for encode/decode methods to use
                return encodableContextTracer;
            }
            @Override
            public int getSomeContextValue() {
                // return value from SBB CMP
                return MySbb.this.getSomeContextValue();
            }
        });

        ...
    }

    // cmp field declaration
    public abstract void setSomeContextValue(int value);
    public abstract int getSomeContextValue();

    ...
}

Since other persistent entities, such as profiles and activity context interface objects, do not currently support the provision of an encodable context object, Encodable or DatatypeCodec types that require an encodable context cannot be used with these persistent entities. The getEncodableContext method defined in the EncoderUtils and DecoderUtils interfaces will always return null for these types of persistent entities.

Codecs for Java collection types

Rhino natively supports FastSerializable and Encodable types, and types using a DatatypeCodec, on array-type CMP fields; each non-null array element is individually serialized and deserialized as appropriate. However, there are times when using a Java Collections Framework type, such as a List or Set, in a CMP field is preferable to using an array, while still retaining the serialization benefits provided by the element type.

To address this, Rhino provides a set of base classes that form a framework for efficient serialization of List, Set, and Map types via the Encodable contract, along with additional datatype codec annotations to simplify the common use cases.

Encodable collections

Rhino provides three base abstract classes to support efficiently serialized collections:

  • com.opencloud.rhino.util.EncodableList

  • com.opencloud.rhino.util.EncodableSet

  • com.opencloud.rhino.util.EncodableMap

Each of these classes wraps an implementation of the corresponding collection type, and implements the Encodable contract to manage the serialization and deserialization of that collection. To use any of these as a CMP field type, the application developer must implement a concrete class extending the relevant base class, taking into account the following rules and considerations.

Constructors

A concrete subclass must provide a public constructor satisfying the Encodable contract which delegates to the equivalent protected constructor in the base class. The subclass should also provide at least one general user constructor that delegates to one of the base class public constructors, such as the no-argument constructor. A subclass of EncodableList may also need to provide a constructor suitable for use with the implementation of the abstract newInstance method.

Implementation of writer / reader methods

A concrete subclass must provide the implementation of the abstract writer and reader methods defined by the base class. These methods are responsible for the encoding and decoding of individual collection elements, map keys, or map values, as appropriate.

Implementation of EncodableList newInstance method

The java.util.List interface includes a method, subList, which returns a view of a portion of the source list. EncodableList defines an abstract newInstance method, which a subclass must implement, to facilitate the implementation of subList. The newInstance method should return a new instance of the concrete class that wraps the list provided by the method argument. Delegating to the EncodableList constructor defined with the same arguments as the newInstance method is the recommended approach.

Managing null elements, keys, and values

The default implementation of EncodableList and EncodableSet assumes that null elements will not occur in the collection. The default implementation of EncodableMap assumes that null keys will not occur in the map, but that null values might. The consequence of this is that if a null element does occur in a list or set, or a null key occurs in a map, then during serialization the corresponding writer method will be asked to encode the null object. While this is not inherently problematic, it means that the writer and reader methods must perform additional work to handle the presence of null objects in the stream they write to or read from.

To simplify the code required of the writer and reader methods when null objects are expected by a given datatype, a subclass may change the default behaviour of the base class by overriding the manageNullElements method in EncodableList and EncodableSet, or the manageNullKeys and manageNullValues methods in EncodableMap. If these methods return true, then the base class will check for null objects of the corresponding type and handle them internally, only invoking the writer and reader methods for non-null objects. If null objects are never expected, or not supported by the underlying backing store (or the writer and reader methods will handle null objects), then these methods may return false, resulting in a slightly smaller serialization data stream that doesn’t include the extra information required for null checks.

Below is an example of a subclass of EncodableList that stores a list of strings with possible null elements:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.List;
import com.opencloud.rhino.cmp.codecs.DecoderUtils;
import com.opencloud.rhino.cmp.codecs.EncoderUtils;
import com.opencloud.rhino.cmp.codecs.SimpleDatatypeCodecs;
import com.opencloud.rhino.util.EncodableList;

public class StringList extends EncodableList<String> {
    // general user constructor
    // since we implicitly delegate to the default superclass constructor,
    // the underlying collection will be an ArrayList
    public StringList() {}

    // constructor required by the Encodable contract
    public StringList(DataInput in, ClassLoader cl, DecoderUtils utils) throws IOException {
        super(in, cl, utils);
    }

    // constructor used by newInstance()
    protected StringList(BackingStore backingStore, List<String> list) {
        super(backingStore, list);
    }

    @Override
    protected StringList newInstance(BackingStore backingStore, List<String> list) {
        return new StringList(backingStore, list);
    }

    @Override
    protected void writeElement(String element, DataOutput out, EncoderUtils utils) throws IOException {
        CODEC.encode(element, out, utils);
    }

    @Override
    protected String readElement(DataInput in, ClassLoader cl, DecoderUtils utils) throws IOException {
        return CODEC.decode(in, cl, utils);
    }

    @Override
    protected boolean manageNullElements() {
        // null elements are possible
        // tell the superclass to deal with them so we don't have to
        return true;
    }


    // use predefined datatype codec for encode and decode
    private static final SimpleDatatypeCodecs.StringCodec CODEC = new SimpleDatatypeCodecs.StringCodec();
}

Below is an example of a CMP field declared in terms of this datatype:

public abstract void setNames(StringList names);
public abstract StringList getNames();

Datatype codecs for collections

In many use cases, the implementation of an Encodable collection type would look strikingly similar to the example shown in the previous section. Most of the implementation code is boilerplate, even more so if the writer and reader methods simply delegate to another datatype codec class. To simplify application development, Rhino provides an alternative through the use of annotations on the CMP field getter or setter method:

  • An EncodableList implementation can be substituted with a @ListCodecType annotation.

  • An EncodableSet implementation can be substituted with a @SetCodecType annotation.

  • An EncodableMap implementation can be substituted with a @MapCodecType annotation.

These annotations have only two minor restrictions:

  1. Element, key, and value encoding and decoding must be implemented using datatype codecs.

  2. Generic type parameters aside, an annotated CMP field must be declared in terms of the base collection interface type (java.util.List, java.util.Set, or java.util.Map), or an array of any dimension of one of these types.

    • If the CMP field type is scalar, any implementation class of the declared CMP field type may be passed as a parameter to the CMP field setter method, for example: a java.util.ArrayList or java.util.HashSet. However, no assumption can be made about the type of object returned from the CMP field getter method, other than it implements the interface declared as the CMP field type. For example, if the CMP field type is java.util.List, a java.util.ArrayList may be passed to the CMP field setter method; but it cannot be assumed that a java.util.ArrayList will be returned from the CMP field getter method.

    • If the CMP field type is an array, any implementation class of the base component type of the declared CMP field array type may be passed as array elements. However, no assumption can be made about the type of objects returned as array elements from the CMP field getter method, other than they implement the interface declared as the array base component type.

Below is an example of how the StringList CMP field example from the previous section could be simplified by using the @ListCodecType annotation:

@ListCodecType(codec = SimpleDatatypeCodecs.StringCodec.class, manageNullElements = true)
public abstract void setNames(List<String> names);
public abstract List<String> getNames();

The annotations provide a level of configurability similar to that achievable with a manual implementation. For example, the underlying collection type and the management of null objects can be specified as annotation parameters.

Datatype codecs and null objects

If a collection datatype codec annotation indicates that null objects should not be managed internally by Rhino, and null objects occur in the corresponding data set, then this is the one specific case where the datatype codec is expected to handle and manage null values itself during serialization and deserialization. For simplicity, it is highly recommended that Rhino be asked, via the appropriate annotation parameters, to manage null objects whenever they are expected.

Predefined datatype codecs

Rhino provides two sets of predefined datatype codecs:

These predefined codecs can be particularly useful when using datatype codecs for collections that store simple types such as java.lang.Integer or java.lang.String.

Initial values

Often it is necessary, when a new persistent entity (such as an SBB entity) is created, to initialise various CMP fields to a value different from the standard Java default. For example: a CMP field storing a counter may need to be initialised to 1 instead of the default value of 0; a CMP field storing an array may need to be initialised to an empty array rather than the default null; or a CMP field storing a list may need to be initialised to an empty list. The standard way to achieve this is to add the necessary CMP setter method invocations to the entity’s create lifecycle method. This is acceptable, but it separates the initial value from the CMP field declaration, making the code less obvious than it could be. This technique also fails if, for example, an arbitrary CMP extension interface is linked in to the SBB — the SBB needs to know a priori which CMP fields need initialising — which means CMP fields in CMP extension interfaces dynamically added after compilation cannot be initialised this way.

To solve this problem, Rhino introduces an annotation that can be used to specify the initial value for a CMP field.

Initial value fields

The initial value for a CMP field can be declared by annotating either the CMP field getter or setter method with the @com.opencloud.rhino.cmp.InitialValueField annotation. The annotation references, either implicitly or explicitly by parameter, a class field that contains the initial value for the CMP field. This mechanism means that the initial value can be constructed by any legal Java means, and initial values for any arbitrarily complex CMP field type can be declared without issue.

If the annotation does not explicitly name a class field to use as the initial value, then a default name of _initial + the CMP field name, with the first letter capitalised, is assumed. For example, for a CMP field with the name foo, the default initial value field name is _initialFoo.

The initial value class field must:

  • be public, static, and final

  • be visible in the scope of the annotated CMP field accessor method

  • have a type that is assignable to the CMP field.

The @InitialValueField annotation may be used on:

  • an SBB abstract class CMP field getter or setter method

  • an SBB or SBB part CMP extension interface CMP field getter or setter method

  • a profile CMP interface CMP field getter or setter method.

It is illegal to use this annotation on both the getter and setter methods for the same CMP field, or to declare an initial value for the same CMP field multiple times. For example, if a given CMP field is declared both in an SBB abstract class and in an SBB CMP extension interface, only one of those CMP field declarations may be annotated with an initial value declaration. It is legal, however, for two separate SBB CMP extension interfaces to both extend from a common interface that declares a CMP field with an initial value, as the initial value is only declared once, by the parent interface.

The @InitialValueField annotation may not be used on a CMP field getter or setter method where the CMP field is one of the following types, or is an array of any dimension of one of the following types:

  • javax.slee.ActivityContextInterface, and any subclass of this interface

  • javax.slee.SbbLocalObject, and any subclass of this interface

  • javax.slee.EventContext

  • javax.slee.profile.ProfileLocalObject, and any subclass of this interface.

An error occurs at deployment time if any of the above constraints are violated.

SLEE behaviour is undefined if an object referenced by an initial value class field is mutated at any time. A newly created persistent entity containing the CMP field could be initialised with either the original initial value or the modified initial value, at the discretion of the SLEE implementation. It is generally advised that initial values be immutable, though in some cases this will not be possible.

Examples

Below is an example where the initial value field is explicitly named by the annotation:

@InitialValueField("aValue")
public abstract void setIntValue(int value);
public abstract int getIntValue();
public static final int aValue = 42;

Below is an example where the initial value field is not explicitly specified, so the default field name is assumed:

@InitialValueField
public abstract void setListValue(List<String> value);
public abstract List<String> getListValue();
public static final List<String> _initialListValue = new ArrayList<String>(5);

Pass-by-reference

The JAIN SLEE 1.1 specification, as written, prescribes that CMP fields have pass-by-value semantics. This means that when an object value is stored in a CMP field, the SLEE will make and store a copy of that value, rather than store the original value. Similarly, when a stored value is retrieved from a CMP field, the SLEE will make a copy of the stored value and return the copied value. The effect of this is that the stored object is unaffected by any subsequent changes made by the application to the original or retrieved object — a value retrieved from the CMP field will always have exactly the same state as it did when it was stored in the CMP field.

Generally speaking, this is desired behaviour, and it makes application code easier to understand. However there are times when pass-by-value semantics get in the way of efficient programming, requiring additional coding to work around the limitations of these semantics. As an example, consider an application that needs to use some unchanging session state across multiple transactions. The correct SLEE technique here is to store the state in a CMP field, then retrieve that state in each transaction as required. Unfortunately the pass-by-value semantics mean that the application incurs the overhead of a stored value copy every time it retrieves the value from the CMP field, even though the stored value is never changed. A typical workaround to avoid this overhead is to:

  1. create an instance variable which caches a reference to the stored value

  2. add alternative getter and setter methods which check the cache first

  3. include code in the relevant lifecycle callback methods to clear any cached reference when appropriate…

…all of which is cumbersome, particularly when there are many CMP fields that require this treatment. A sketch of this workaround appears below.
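Below is a minimal sketch of this workaround, assuming a hypothetical CMP field named session of invented type SessionState:

public abstract class MySbb implements Sbb {
    // CMP field declaration
    public abstract void setSession(SessionState session);
    public abstract SessionState getSession();

    // instance variable caching a reference to the stored value
    private SessionState cachedSession;

    // alternative setter that updates both the CMP field and the cache
    public void setCachedSession(SessionState session) {
        setSession(session);
        cachedSession = session;
    }

    // alternative getter that checks the cache first
    public SessionState getCachedSession() {
        if (cachedSession == null) cachedSession = getSession();
        return cachedSession;
    }

    // clear the cached reference in the relevant lifecycle callbacks
    public void sbbLoad() { cachedSession = null; }
    public void sbbPassivate() { cachedSession = null; }

    ...
}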

To simplify the application developer’s effort in situations like these, Rhino introduces the option to declare pass-by-reference semantics for CMP fields.

Reference scopes

Pass-by-reference may be declared with one of three reference scopes: TRANSACTIONAL, WHILE_READY, or PERMANENT.

The TRANSACTIONAL and WHILE_READY reference scopes operate in similar ways. An intermediate reference cache is used to store the CMP field value during a transaction. If the CMP field is written to, the cache simply stores a reference to the stored value. If the CMP field is read, then the reference stored by the cache is returned. The actual stored value is only written back to persistent storage (and thus a copy made) when the transaction commits; and that value is only read back from persistent storage if the CMP field is read by the application and the cache has been invalidated. The difference between the two reference scopes lies in when the cache is invalidated. With the TRANSACTIONAL reference scope, the reference cache is invalidated whenever a bounding transaction completes, either by commit or rollback. With the WHILE_READY reference scope, the reference cache is invalidated only when the owning container object (such as an SBB object) leaves the READY state and is thus disassociated from the persistent entity, or when transaction rollback occurs and the owning container object needs to resynchronise with persistent state.

The PERMANENT reference scope completely removes any pass-by-value semantics. Any value stored in the CMP field is stored by reference only. Unlike the other reference scopes, a permanent reference loses all transactional semantics: any changes made to the referenced object will persist irrespective of whether the current transaction commits or rolls back. As such, permanent references are best suited to objects that are unlikely to change during the lifetime of an application entity, for example, parts of incoming network messages that must be retained across event-handler transaction boundaries but whose serialization is undesirable. Replication of permanent pass-by-reference CMP fields also needs careful thought, as noted in the API documentation.

Declaration

Pass-by-reference semantics are declared using the @com.opencloud.rhino.cmp.PassByReference annotation. This annotation may be used on:

  • an SBB abstract class CMP field getter or setter method

  • an SBB or SBB Part CMP extension interface CMP field getter or setter method

  • an SBB abstract class itself

  • an SBB or SBB part CMP extension interface.

If the annotation is used on a class or interface declaration, then its meaning is applied to all CMP fields declared in the class or interface, and also any subclass if the annotation’s inherited attribute is set to true. An individual CMP field may override any pass-by-reference semantics inherited from a class or interface annotation by being annotated itself with a different reference scope. The reference scope DISABLED can also be used in this case to remove any inherited pass-by-reference semantics from the CMP field.

It is illegal for a single CMP field’s getter and setter methods to be annotated with different reference scopes.

A @PassByReference annotation with WHILE_READY or PERMANENT scope may not be used on a CMP field getter or setter method where the CMP field is one of the following types, or is an array of any dimension of one of the following types:

  • javax.slee.ActivityContextInterface, and any subclass of this interface

  • javax.slee.SbbLocalObject, and any subclass of this interface

  • javax.slee.EventContext

  • javax.slee.profile.ProfileLocalObject, and any subclass of this interface.

Additional considerations
  • If pass-by-reference semantics are desired for a CMP field that holds, either directly or indirectly, a reference to a SLEE-defined object that is only valid in the transaction in which it was materialised (for example an SbbLocalObject, ProfileLocalObject, ActivityContextInterface, or EventContext), then the reference scope must be limited to TRANSACTIONAL to avoid runtime application failures. Note that if a CMP field directly storing one of these types inherits pass-by-reference semantics from a class or interface annotation, Rhino will automatically limit the scope to TRANSACTIONAL for that CMP field. This means it is not necessary to specifically annotate a CMP field directly storing one of these types if the inherited scope is wider — the correct scope will be used automatically.

  • The @PassByReference annotation naturally has no effect on CMP fields that store primitive types. Primitive types are always stored by value.

  • The @PassByReference annotation also has no effect on CMP fields that store types that Rhino understands to be immutable, such as java.lang.Integer (and the other primitive type wrappers), java.lang.String, javax.slee.Address, and so on, nor on any class that implements com.opencloud.util.Immutable. Immutable types are always stored by reference.

Examples

Below is an example of how a CMP field annotation declaring pass-by-reference semantics can be applied using the default (WHILE_READY) reference scope:

@PassByReference
public abstract void setValue(FooValue value);
public abstract FooValue getValue();

Below is an example of how a CMP field annotation declaring pass-by-reference semantics can be applied using the TRANSACTIONAL reference scope:

@PassByReference(scope = PassByReference.Scope.TRANSACTIONAL)
public abstract void setValue(FooValue value);
public abstract FooValue getValue();

Below is an example of how an SBB abstract class annotation declaring pass-by-reference semantics can be used to apply the semantics to all CMP fields, unless otherwise indicated, using the default (WHILE_READY) reference scope. These pass-by-reference semantics will also be inherited by any subclass of this SBB abstract class.

@PassByReference(inherited = true)
public abstract class MySbb implements Sbb {
    ...

    // this CMP field will automatically be demoted to use the TRANSACTIONAL reference scope
    public abstract void setSbbObject(SbbLocalObject sbb);
    public abstract SbbLocalObject getSbbObject();

    // this CMP field will revert to pass by value semantics
    @PassByReference(scope = PassByReference.Scope.DISABLED)
    public abstract void setValue(FooValue value);
    public abstract FooValue getValue();
}

CMP field replication suppression

Normally, when an SBB entity of a replicated service is replicated between Rhino cluster nodes, the values of all SBB CMP fields form part of the replicated state. There may be cases, though, where the replication of certain CMP fields is not meaningful; for example, a CMP field value might refer to a resource whose identity is only meaningful on the node where that identity was obtained.

Replication of individual CMP fields can be suppressed using the @com.opencloud.rhino.cmp.CMPFieldReplication annotation. This annotation may be used on:

  • an SBB abstract class CMP field getter or setter method

  • an SBB or SBB part CMP extension interface CMP field getter or setter method.

It is illegal for a single CMP field’s getter and setter methods to be annotated with different replication suppression modes.

If replication of a CMP field is suppressed by this annotation then the state of that CMP field is excluded from the replicated state stream if and when replication of the SBB occurs. If entities of the SBB are never replicated, then this annotation has no effect.

In the event that the CMP state of an SBB entity is retrieved from a replicated storage resource, CMP fields with their replication suppressed will revert to the initial value defined by their @InitialValueField annotation, if present; otherwise to the default initial value for the field type: null for reference types, 0 for numeric types, and so on.

Below is an example of a CMP field with replication suppressed:

@CMPFieldReplication(suppressed=true)
public abstract void setValue(FooValue value);
public abstract FooValue getValue();

CMP field tagging

CMP field tagging allows arbitrarily-named tags to be assigned to CMP fields at compile time so that they can be iterated over and processed at runtime.

It can be useful, for example, to group a set of CMP fields that have a particular function or purpose that needs to be interrogated during application execution, without the application code having to know the names of each individual CMP field, something that may not even be possible if the application is composed using dynamic bindings.

Declaration

CMP field tags are declared using the @com.opencloud.rhino.cmp.CMPFieldTag annotation. This annotation may be used on:

  • an SBB abstract class CMP field getter or setter method

  • an SBB or SBB Part CMP extension interface CMP field getter or setter method

  • an SBB abstract class itself

  • an SBB or SBB part CMP extension interface.

If the annotation is used on a class or interface declaration, then its meaning is applied to all CMP fields declared in the class or interface.

The annotation defines:

  • a list of tags to add

  • a list of tags to remove

  • inheritance behaviour.

For annotations defined at the class or interface level, the inheritance behaviour determines whether or not tags defined by a superclass or superinterfaces will be inherited.

For annotations defined on a CMP field accessor method, if inheritance is enabled then tags defined in the following locations will be inherited:

  • Tags defined at the class or interface level in the bounding class or interface, and any tags that that class or interface inherits.

  • Tags defined by duplicate definitions of the CMP field accessor methods in superclasses.

The option of removing tags is only relevant if inheritance is enabled; if inheritance is disabled, the CMP field will only be tagged with the names added by the annotation.

Runtime processing

Two methods in the CMPFields interface are provided specifically for dealing with tagged CMP fields in application code:

  • public Set<String> tags()
    This method returns the names of all CMP field tags used in the CMP fields defined in the SBB, including the SBB abstract class and any CMP extension interfaces it uses.

  • public void visit(String tag, CMPFieldVisitor visitor)
    This method iterates over each CMP field tagged with the specified name, invoking the visit method on the provided visitor object for each one. The visit method is passed a CMPField object representing a CMP field. The CMPField object can be used to inspect and/or update the CMP field’s state.
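For example, the following minimal sketch resets every CMP field carrying a hypothetical tag named session-state, assuming a CMPFields object named cmpFields has already been obtained (see The CMPFields interface below):

// reset every CMP field tagged with the (hypothetical) "session-state" tag
if (cmpFields.tags().contains("session-state")) {
    cmpFields.visit("session-state", new CMPFieldVisitor() {
        @Override
        public void visit(CMPField field) {
            field.reset();   // return the field to its initial value
        }
    });
}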

CMP extension interfaces

The SLEE specification requires that SBB CMP fields be defined using abstract getter and setter methods. For example, an SBB CMP field named firstName of type java.lang.String must have the following method declarations in the SBB abstract class:

public abstract void setFirstName(String firstName);
public abstract String getFirstName();

Rhino allows SBB CMP fields alternatively to be defined in separate CMP extension interfaces. A CMP extension interface is simply an interface that declares methods related only to CMP fields. One use of a CMP extension interface is to allow an SBB to store additional state that may not be known when the SBB abstract class is developed. For example, the SBB build process may allow additional components to be "plugged in" to the base SBB, each of which may require its own CMP state. The plug-in components can define the CMP state they need using CMP extension interfaces; then these interfaces can be declared in the SBB deployment descriptor when the SBB is packaged.

CMP extension interfaces are also the sole mechanism by which an SBB part component can define CMP fields.

The firstName CMP field example above could be defined in a CMP extension interface as shown below:

public interface MySbbCMPInterface {
    public void setFirstName(String firstName);
    public String getFirstName();

    ...
}

A CMP extension interface may also optionally include has and reset methods for each CMP field.

  • The has method determines whether a value for the CMP field is present. Primitive types always have a value, so this method will always return true for a CMP field of a primitive type. For CMP fields storing object types, this method returns true if the CMP field has been assigned a non-null value. While this method is generally for convenience only, a has method can offer better-performing code when only the presence of a value in the CMP field needs to be known, as the implementation does not necessarily need to deserialize the stored CMP field value in order to test for a non-null value.

  • The reset method resets the CMP field value to its initial value. The initial value is determined by an @InitialValueField annotation, if present. Otherwise, the Java-defined default initial value for the field type is used; for example: 0 for numeric primitives, null for object references.

Adding the has and reset methods to the MySbbCMPInterface example shown above results in the interface shown below:

public interface MySbbCMPInterface {
    public void setFirstName(String firstName);
    public String getFirstName();

    // returns true if firstName has an assigned non-null value
    // effectively equivalent to: getFirstName() != null
    public boolean hasFirstName();

    // effectively equivalent to: setFirstName(null)
    public void resetFirstName();

    ...
}

Below is an example of how the reset method is influenced by an @InitialValueField annotation:

public interface MySecondSbbCMPInterface {
    @InitialValueField
    public void setNames(String[] names);
    public String[] getNames();
    public static final String[] _initialNames = new String[0];

    // effectively equivalent to: setNames(_initialNames)
    public void resetNames();

    ...
}

A CMP extension interface must be public, and optionally may extend the com.opencloud.rhino.cmp.CMPFields interface. A single SBB or SBB part may declare as many CMP extension interfaces as desired. An SBB declares its CMP extension interfaces in the oc-sbb-jar.xml extension deployment descriptor, while an SBB part declares its CMP extension interfaces in the oc-sbb-part-jar.xml deployment descriptor. Since a CMP extension interface defines only CMP fields, an SBB deployment descriptor does not need to specify <cmp-field> elements for any CMP field defined only in a CMP extension interface. The CMP fields will be determined by class introspection.

All methods defined in a CMP extension interface are mandatory transactional methods. If they are invoked without a valid transaction context, a javax.slee.TransactionRequiredLocalException will be thrown. In addition, these methods may only be invoked on an SBB or SBB part object that has been assigned to an SBB entity, or is in the process of being assigned to an SBB entity using the sbbCreate method. If the SBB or SBB part object is not assigned to an SBB entity (with the exclusion of the sbbCreate method), a java.lang.IllegalStateException is thrown.

An SBB or SBB part obtains access to the CMP fields defined in CMP extension interfaces using a com.opencloud.rhino.cmp.CMPFields object. The CMPFields object may be typecast to any CMP extension interface declared by the SBB or SBB part, regardless of whether or not the CMP extension interface extends the CMPFields interface, thus exposing the CMP field accessor methods defined by the interface.

The CMP fields defined by an SBB in its SBB abstract class, in any CMP extension interfaces, and in any CMP extension interfaces used by dependent SBB parts, all share the same namespace. As such, if the same CMP field is defined in multiple places, for example in the SBB abstract class and in a CMP extension interface, then it must be declared with the same type. All these CMP accessor methods will refer to the same underlying SBB CMP field.
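As an illustration, below is a hedged sketch of an SBB using a CMPFields object typecast to the MySbbCMPInterface extension interface shown earlier; the getCMPFields() accessor name on RhinoSbbContext is an assumption of this sketch:

public abstract class MySbb implements Sbb {
    private RhinoSbbContext rhinoContext;

    public void setSbbContext(SbbContext context) {
        rhinoContext = (RhinoSbbContext) context;
    }

    public void someMethod() {
        // accessor method name assumed for illustration
        CMPFields cmpFields = rhinoContext.getCMPFields();

        // typecast to a declared CMP extension interface to expose its accessors
        MySbbCMPInterface cmp = (MySbbCMPInterface) cmpFields;
        cmp.setFirstName("Alice");
        if (cmp.hasFirstName()) {
            String firstName = cmp.getFirstName();   // "Alice"
        }
    }

    ...
}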

Arbitrary CMP fields

There are occasions when an SBB needs to store arbitrary CMP state that cannot be predetermined at development time. For example, arbitrary session state could be created in response to interactions with some other network element. The typical approach is to define a CMP field that stores a map, and store the session state in the map as key/value pairs. This is an acceptable solution; however, serialization of the stored map value can be a performance hit if not explicitly managed, and even then a change to any mapped value requires serialization of the entire map, as it is read from CMP, updated, then rewritten to CMP.

To help in these situations, Rhino allows an SBB or SBB part to optionally permit the use of arbitrary CMP fields. Arbitrary CMP fields are simply CMP fields that have not been explicitly defined in the SBB abstract class or any CMP extension interface. Support for arbitrary CMP fields is not permitted by default, but can be enabled using the arbitrary-cmp-fields-allowed attribute in the oc-sbb-jar.xml SBB extension deployment descriptor or the oc-sbb-part-jar.xml SBB part deployment descriptor. Note that if an SBB has a dependency on an SBB part that has permitted support for arbitrary CMP fields, then that permission extends back to the SBB and all its dependent SBB parts. It is not possible for an SBB or SBB part to override permission for arbitrary CMP fields granted by some other dependent SBB part (or by the SBB itself, in the case of an SBB part).

Arbitrary CMP fields have only a few basic rules and restrictions:

  • Arbitrary CMP fields must have a non-null name. Any non-null name is valid, including a zero-length name.

  • An arbitrary CMP field cannot have the same name as a CMP field predefined in the SBB abstract class or any CMP extension interface.

  • An arbitrary CMP field is only deemed to exist if it has an assigned non-null value. An existing arbitrary CMP field ceases to exist if assigned a null value.

  • Any non-null value assigned to an arbitrary CMP field must be serializable using standard Java serialization. FastSerializable types are also supported. Encodable types, and types annotated with @DatatypeCodecType, are not currently supported; FastSerializable should be used instead where possible.

  • Arbitrary CMP fields always exhibit pass-by-value semantics as per standard JAIN SLEE-defined CMP field behaviour. Pass-by-reference is not supported for these CMP fields.

Arbitrary CMP fields are accessed and managed using a CMPFields object.

The CMPFields interface

The CMPFields interface provides a means to access CMP fields indirectly by name, access metadata about CMP fields, and determine which CMP fields currently have a value. Arbitrary CMP fields are also managed using a CMPFields object.

The CMPFields interface is shown below:

package com.opencloud.rhino.cmp;

public interface CMPFields {
    public <T> T get(String name)
        throws NullPointerException, UnrecognisedFieldNameException, ClassCastException;
    public <T> void set(String name, T value)
        throws NullPointerException, UnrecognisedFieldNameException, ClassCastException;
    public boolean has(String name)
        throws NullPointerException, UnrecognisedFieldNameException;
    public void reset(String name)
        throws NullPointerException, UnrecognisedFieldNameException;
    public void reset();
    public Class<?> typeOf(String name)
        throws NullPointerException, UnrecognisedFieldNameException;
    public boolean isPredefined(String name)
        throws NullPointerException;
    public Set<String> predefinedNames();
    public Set<String> keySet();
    public Set<Map.Entry<String,Object>> entrySet();
    public CMPField cmpField(String name)
        throws NullPointerException, UnrecognisedFieldNameException;
    public Set<String> tags();
    public void visit(String tag, CMPFieldVisitor visitor)
        throws NullPointerException, UnrecognisedTagNameException;
    public void visit(boolean includeArbitraries, CMPFieldVisitor visitor)
        throws NullPointerException;
}

The get, set, has, and parameterised reset methods allow any CMP field to be accessed by name. These methods provide an alternative access mechanism for CMP fields predefined in the SBB abstract class or a CMP extension interface. They also allow arbitrary CMP fields to be created and managed, if support for that feature has been permitted by the SBB or SBB part. Predefined and arbitrary CMP fields all share the same namespace: if any of these methods is invoked with a name parameter that matches a predefined CMP field name, then the name refers to the predefined CMP field; otherwise it refers to an arbitrary CMP field. These methods throw an UnrecognisedFieldNameException if the name refers to an arbitrary CMP field and arbitrary CMP fields are not permitted by the component.

When any of these methods is used to access a predefined CMP field, the invocation is equivalent to invoking the corresponding CMP field accessor method. The example below illustrates this point:

public abstract class MySbb implements Sbb {
    public abstract void setFirstName(String firstName);
    public abstract String getFirstName();

    public void someMethod() {
        CMPFields cmpFields = ...

        // set CMP field to "Alice"
        setFirstName("Alice");

        // returns "Alice"
        String firstName = cmpFields.get("firstName");

        // set CMP field to "Bob"
        cmpFields.set("firstName", "Bob");

        // returns "Bob"
        firstName = getFirstName();

        // as the CMP field currently contains a value,
        // this will reset the CMP field to its initial value null
        if (cmpFields.has("firstName")) {
            cmpFields.reset("firstName");
        }

        // returns null
        firstName = getFirstName();

        ...
    }

    ...
}
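Arbitrary CMP fields, where permitted, are created and managed through the same methods. Below is a minimal sketch using a hypothetical arbitrary CMP field named session.language:

// create (or update) an arbitrary CMP field by assigning a non-null value
cmpFields.set("session.language", "en-NZ");

// an arbitrary CMP field exists only while it holds a non-null value
if (cmpFields.has("session.language")) {
    String language = cmpFields.get("session.language");
}

// assigning null removes the arbitrary CMP field
cmpFields.set("session.language", null);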

The unparameterised reset method resets all predefined CMP fields to their initial values and removes all arbitrary CMP fields that exist. This method effectively returns the SBB's CMP state to what it was when the SBB entity was first created.

The typeOf method returns the Java class type of the named CMP field. For a predefined CMP field, the return value is equal to the class used in its CMP field method declarations. For an arbitrary CMP field, this method returns the class of the value stored in the CMP field. The arbitrary CMP field must exist for this method to return a successful result; otherwise an UnrecognisedFieldNameException is thrown.

The isPredefined method returns true if the name corresponds with a predefined CMP field, and false otherwise. The predefinedNames method returns a set containing the names of all predefined CMP fields.

The keySet method returns a set containing the names of all CMP fields that currently have a value. The entrySet method returns a set of map entries containing the names and values of all CMP fields that currently have a value. The CMP fields that are deemed to have a value include:

  • any predefined CMP field of a primitive type

  • any predefined CMP field of an object type that currently has a non-null value

  • any arbitrary CMP field that currently exists.
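For example, below is a minimal sketch that traces the name and value of every CMP field currently holding a value (the cmpFields and tracer variables are assumed to have been obtained elsewhere):

for (Map.Entry<String,Object> entry : cmpFields.entrySet()) {
    // log each populated CMP field, predefined or arbitrary
    tracer.fine(entry.getKey() + " = " + entry.getValue());
}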

The cmpField method returns a CMPField object that allows direct inspection and/or update to the named CMP field.

The tags method returns the names of all CMP field tags used in the CMP fields defined in the SBB abstract class and CMP extension interfaces.

There are two visit methods. One can be used to iterate over the set of CMP fields tagged with a specified name, invoking the visit method on the provided visitor object for each one. The second can be used to iterate over all CMP fields defined in the SBB abstract class and CMP extension interfaces, optionally including any arbitrarily defined CMP fields.

All methods defined in the CMPFields interface, with the exception of isPredefined, predefinedNames, and tags, are mandatory transactional methods. If they are invoked without a valid transaction context, a javax.slee.TransactionRequiredLocalException will be thrown. In addition, these methods may only be invoked on an SBB or SBB part object that has been assigned to an SBB entity, or is in the process of being assigned to an SBB entity using the sbbCreate method. If the SBB or SBB part object is not assigned to an SBB entity (with the exclusion of the sbbCreate method), a java.lang.IllegalStateException is thrown.

CMPFields object

An SBB obtains a CMPFields object from its com.opencloud.rhino.slee.RhinoSbbContext object (a Rhino extension of javax.slee.SbbContext). An SBB part obtains a CMPFields object from its com.opencloud.rhino.slee.sbbpart.SbbPartContext object.

A CMPFields object may be typecast to any CMP extension interface declared by the SBB or any of its dependent SBB parts, regardless of whether or not the CMP extension interface extends the com.opencloud.rhino.cmp.CMPFields interface.

The CMPField interface

The CMPField interface provides a means to inspect and/or update an individual CMP field.

The CMPField interface is shown below:

package com.opencloud.rhino.cmp;

public interface CMPField {
    public String getName();
    public Class<?> getType();
    public Set<String> getTags();
    public <T> T getValue()
        throws ClassCastException;
    public <T> void setValue(T value)
        throws NullPointerException, ClassCastException;
    public boolean hasValue();
    public void reset();
    public boolean isPredefined();
}

The getName and getType methods return the name and type of the CMP field respectively. For an arbitrary CMP field, the type is determined from the CMP field’s current value.

The getTags method returns the set of tags, if any, that have been assigned to the CMP field.

The getValue, setValue, hasValue, and reset methods perform the named function against the CMP field. For predefined CMP fields, these methods are equivalent to the get, set, has, and reset methods respectively that can be defined for a CMP field, and invoking any of these methods is equivalent to invoking the corresponding CMP field method. For example, invoking getValue() on a CMPField object for a field named firstName is equivalent to invoking the getFirstName() CMP field method.

The isPredefined method determines whether the CMP field was defined in the SBB abstract class or a CMP extension interface, or is an arbitrary CMP field. This method returns true if the CMPField object represents a predefined CMP field, and false otherwise.

The getName, getTags, and isPredefined methods are non-transactional methods. This means that a valid transaction context is not required in order to invoke these methods. The getType method is also a non-transactional method if the CMPField object represents a predefined CMP field.

All other methods, and getType when invoked on a CMPField object representing an arbitrary CMP field, are mandatory transactional methods. If they are invoked without a valid transaction context, a javax.slee.TransactionRequiredLocalException will be thrown. In addition, these methods may only be invoked on an SBB or SBB part object that has been assigned to an SBB entity, or is in the process of being assigned to an SBB entity using the sbbCreate method. If the SBB or SBB part object is not assigned to an SBB entity (with the exclusion of the sbbCreate method), a java.lang.IllegalStateException is thrown.

CMPField object

An SBB can obtain a CMPField object for any CMP field using the cmpField(String name) method on a CMPFields object.

A CMPField object is also passed to the CMPFieldVisitor.visit(CMPField cmpField) method when a visit operation is invoked on a CMPFields object.

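As an illustrative sketch, the following shows an SBB inspecting and updating a CMP field generically. The RhinoSbbContext accessor name getCMPFields() and the CMP field name firstName are assumptions for illustration:

import com.opencloud.rhino.cmp.CMPField;
import com.opencloud.rhino.cmp.CMPFields;
import com.opencloud.rhino.slee.RhinoSbbContext;

// Must be invoked with a valid transaction context on an SBB object
// assigned to an SBB entity, as described above.
private void touchFirstName(RhinoSbbContext context) {
    CMPFields cmpFields = context.getCMPFields();     // accessor name assumed
    CMPField field = cmpFields.cmpField("firstName"); // hypothetical CMP field
    if (!field.hasValue()) {
        field.setValue("unknown");    // equivalent to setFirstName("unknown")
    }
    String current = field.getValue(); // equivalent to getFirstName()
}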

Application initiated persistence

Under normal conditions, the transactional state of replicated activities and SBBs that is stored in Rhino’s internal in-memory database is automatically scheduled for persistence to an external key/value store, if one is configured as the replication resource for the application namespace. At the early stages of a call or session, though, it may not make sense to replicate intermediate state changes, as the state changes rapidly (for example, during preconditions negotiation). Therefore, it may only be meaningful to replicate state changes once a call or session is in a so-called stable state, where the state is not expected to change immediately. Typically a session is in a semi-stable state during the alerting phase; the next stable state is answered.

An application can choose to defer the initial replication of its state at deployment time using the initial-persistence-enabled attribute of the service-properties element in the extension service deployment descriptor. Alternatively, an administrator can change the initial replication persistence mode of an already deployed service using the ServiceManagementMBean.setInitialPersistence management operation.

At an appropriate time in its lifecycle, an SBB entity can enable replicated persistence of its state using RhinoSbbContext.enableEntityTreePersistence().

Alternatively, an SBB entity may decide that replication of its state should not occur at all. An SBB entity may permanently disable persistence of the entity tree state for its lifetime using RhinoSbbContext.disableEntityTreePersistence().
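For example, an SBB might enable replicated persistence once its session reaches a stable state (a minimal sketch; when and where this is called is application-specific):

import com.opencloud.rhino.slee.RhinoSbbContext;

private void onSessionStable(RhinoSbbContext context) {
    // The session has reached a stable state (for example, answered);
    // begin replicating the SBB entity tree's state.
    context.enableEntityTreePersistence();
}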

Persistence behaviour

Persistence of application state to a replicated storage resource such as a key/value store is decided on a transaction by transaction basis. If an SBB entity has disabled initial persistence, then all transactional state changes made during a transaction are buffered in memory by the replicated storage resource but are not scheduled for write to external storage. This means that in the event of node death prior to a write to external storage, state for those transactions will be lost and a remaining cluster node will be unable to recover that session state.

Session state includes the CMP fields of the SBB entity tree, attachment relationships to any activities, state of those activities where they are also replicated, SLEE timers, activity context name bindings, and so on.

Once the SBB entity decides that the session has reached a point where replicated persistence is appropriate, it can enable persistence using the enableEntityTreePersistence() method. This causes the current transaction to be flagged as "persistable"; updates made in that transaction, along with any previously buffered, unexpired, unpersisted transactions related to it, will be scheduled for external persistence.

Alternatively, the SBB entity can permanently disable persistence using the disableEntityTreePersistence() method. This causes the current transaction to be flagged as "disabled for persistence"; updates made in that transaction, along with the state of any previously buffered, unexpired, unpersisted transactions related to it, will be discarded. Future updates for that SBB entity tree and its related state will also be excluded from persistence.

Note

There may be cases where some state for an SBB entity with initial persistence disabled has already been persisted at the point where the SBB decides that persistence should be disabled. This may happen, for example, if the maximum persist-deferred transaction age is reached in buffered state and it is force-written to external storage.

In these cases, the state in external storage will remain, without further update, until the corresponding entries are removed in Rhino, for example, as SBB entities are removed, activities end, etc.

Rules and restrictions

An SBB entity can only either enable or disable replicated persistence. Once such a decision is made, it is permanent and cannot be reversed (unless the corresponding transaction rolls back).

Replicated persistence is managed and decided at an SBB entity tree level. This means that if a separate SBB entity tree is used to manage separate calls/sessions, then each SBB entity tree can decide at which point the state for the call/session that it itself manages should begin to be persisted or discarded.

It doesn’t matter which SBB entity in an SBB entity tree invokes the enableEntityTreePersistence() or disableEntityTreePersistence() method. Any SBB entity in the SBB entity tree may do so with the same effect.

The enableEntityTreePersistence() and disableEntityTreePersistence() methods are transactional methods. If the transaction in which one of these methods is invoked happens to roll back, then future transactions for the same SBB entity will run as if this method had never been invoked.

SLEE Facilities

Rhino provides many extensions to the standard facilities and application functions provided by the JAIN SLEE specification, as detailed below.

SBB child relations

Tracer extensions

Usage extensions

Profile Facility extensions

Null Activity Factory extensions

Activity Context Naming Facility extensions

Lock Facility

Logging Context Facility

SAS Facility

Session Ownership Facility

JNDI environment

Warning The AsyncLogging facility is deprecated; the asynchronous tracers provided as part of the logging framework should be used instead.

SBB Child Relations

The JAIN SLEE specification defines that an SBB child relation requires:

  • an abstract get child relation method declared in the SBB abstract class; and

  • a <get-child-relation-method> deployment descriptor entry that binds the get child relation method to a particular SBB type.

The get child relation method returns a ChildRelation object, which the SBB developer uses to create, remove, or manage the SBBs in the child relation.

A difficulty with this approach is that child relations cannot be added to an SBB without code changes to the SBB abstract class. For example, consider an SBB that delegates call processing to different child SBBs based on the protocol in use for the call (CAPv2, CAPv3, ETSI INAP CS1, and so on). Since the child SBB for each protocol needs its own get child relation method in the SBB abstract class, the set of supported protocols must be fixed at build time and cannot be changed without modifying the SBB abstract class and recompiling. Code changes are also extremely undesirable when service bindings are used to change the child relationships of the SBB after the SBB has been installed in the SLEE.

Rhino provides an alternative mechanism to declare and use SBB child relations, which eliminates the recompilation part of the build cycle in these types of use cases. Child relations are still declared in the deployment descriptor, but the SBB accesses the child relations using a Child Relation Facility provided by the SLEE.

Extended child relation declarations

Rhino allows an SBB to declare a child relation with an extension deployment descriptor entry only; in other words, no corresponding get child relation method is needed in the SBB abstract class. As such, these child relations are termed "declarative" child relations, to differentiate them from the standard child relations defined by the SLEE specification. Declarative child relations may be added to or removed from an SBB without the need to recompile any code.

Declarative child relations are declared in the oc-sbb-jar.xml extension deployment descriptor using the <child-relations> element. The child-relations element contains a child-relation element that defines each declarative child relation. A child-relation element contains the following sub-elements:

Sub-element What it does
 description

Provides information (optional).

 child-relation-name

Defines the name of the declarative child relation. The SBB uses this name with the Child Relation Facility to access the child relation. This name must be unique within the scope of the SBB’s declarative child relations.

 sbb-alias-ref

References an SBB by its sbb-alias that is specified within the corresponding sbb element in the standard sbb-jar.xml deployment descriptor. This element defines the type of the child SBB.

 default-priority

Specifies the default event delivery priority of the child SBB relative to its sibling SBBs.

Child Relation Facility

The Child Relation Facility is a Rhino extension that SBBs use to gain access to their declarative child relations. The Child Relation Facility removes the need for the SBB developer to declare a get child relation method in the SBB abstract class for each child SBB relationship the SBB requires.

ChildRelationFacility interface

SBB objects access the Child Relation Facility through a ChildRelationFacility object that implements the ChildRelationFacility interface. A ChildRelationFacility object can be obtained from the SBB’s RhinoSbbContext object (an extension of the standard SbbContext object).

The ChildRelationFacility interface is as follows:

package com.opencloud.rhino.facilities.childrelations;

import java.util.Collection;
import javax.slee.ChildRelation;
import javax.slee.SbbLocalObject;
import javax.slee.TransactionRequiredLocalException;
import javax.slee.facilities.FacilityException;
import com.opencloud.rhino.slee.RhinoSbbContext;
import com.opencloud.rhino.slee.RhinoSbbLocalHome;

public interface ChildRelationFacility {
    public Collection<String> getChildRelationNames()
        throws FacilityException;

    public ChildRelation getChildRelation(String name)
        throws NullPointerException, TransactionRequiredLocalException, IllegalStateException,
               UnrecognizedChildRelationException, FacilityException;

    public Collection<SbbLocalObject> getChildSbbs()
        throws TransactionRequiredLocalException, IllegalStateException, FacilityException;

    public <T> Collection<T> getChildSbbs(Class<T> type)
        throws NullPointerException, TransactionRequiredLocalException,
               IllegalStateException, FacilityException;

    public RhinoSbbLocalHome getChildSbbLocalHome(String name)
        throws NullPointerException, UnrecognizedChildRelationException, FacilityException;
}
Note
  • All methods of the ChildRelationFacility interface, except for the getChildRelationNames method and the getChildSbbLocalHome method, are required transactional methods. The getChildRelationNames method and the getChildSbbLocalHome method are non-transactional.

  • The SLEE provides a concrete class implementing the ChildRelationFacility interface.

  • The methods of this interface throw the javax.slee.facilities.FacilityException if the requested operation cannot be completed because of a system-level failure.

getChildRelationNames method

The getChildRelationNames method returns the set of declarative child relation names declared by the SBB. Each name contained by this set corresponds with a name contained by a <child-relation-name> element in the oc-sbb-jar.xml extension deployment descriptor.

getChildRelation method

The getChildRelation method returns a standard ChildRelation object for the named declarative child relation. The specified name argument must be one of the names contained by the <child-relation-name> elements in the oc-sbb-jar.xml extension deployment descriptor; that is, it must be one of the names contained in the set of names returned by the getChildRelationNames method.

This method performs the same function as the get child relation methods declared in the SBB abstract class for standard JAIN SLEE child relation declarations. A ChildRelation object returned from this method can be used in exactly the same way as a ChildRelation object returned by a get child relation method.

This method throws a NullPointerException if the name argument is null. If the name argument does not correspond with a declarative child relation, then this method throws an UnrecognizedChildRelationException. If this method is invoked without a valid transaction context, then the method throws a TransactionRequiredLocalException. If the method is invoked by an SBB object that is not assigned to an SBB entity, then the method throws an IllegalStateException.
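A sketch of typical usage follows. The child relation name capv2 and the RhinoSbbContext accessor name getChildRelationFacility() are assumptions for illustration:

import javax.slee.ChildRelation;
import javax.slee.CreateException;
import javax.slee.SbbLocalObject;
import com.opencloud.rhino.facilities.childrelations.ChildRelationFacility;
import com.opencloud.rhino.facilities.childrelations.UnrecognizedChildRelationException;
import com.opencloud.rhino.slee.RhinoSbbContext;

// Must be invoked with a valid transaction context by an SBB object
// assigned to an SBB entity.
private SbbLocalObject createProtocolChild(RhinoSbbContext context)
        throws CreateException, UnrecognizedChildRelationException {
    ChildRelationFacility facility = context.getChildRelationFacility(); // accessor name assumed
    ChildRelation relation = facility.getChildRelation("capv2");         // hypothetical relation name
    return relation.create(); // standard ChildRelation create semantics
}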

getChildSbbs methods

The getChildSbbs methods each return a collection of child SBB local interface objects. The no-argument variant returns a collection of all child SBBs. The one-argument variant returns a collection of all child SBBs where the child SBB’s local interface is assignable to the specified Class argument. Both these methods will consider all SBB child relations; that is, child relations declared both in the standard JAIN SLEE manner and declarative child relations.

If either of these methods are invoked without a valid transaction context then a TransactionRequiredLocalException is thrown. If invoked by an SBB object that is not assigned to an SBB entity, then an IllegalStateException is thrown. If the one-argument method variant is invoked with a null argument, then the method throws a NullPointerException.

getChildSbbLocalHome method

The getChildSbbLocalHome method returns an object implementing the local home interface of the child SBB of the named declarative child relation. For more information on SBB local home interfaces, please see Miscellaneous SLEE Application API Enhancements.

This method throws a NullPointerException if the name argument is null. If the name argument does not correspond with a declarative child relation, then this method throws an UnrecognizedChildRelationException.

Tracer Extensions

Rhino extends the standard javax.slee.Tracer interface with the com.opencloud.rhino.facilities.Tracer interface, which adds additional functionality over what the JAIN SLEE specification provides.

Tracer interface

The com.opencloud.rhino.facilities.Tracer extended tracer interface adds two new methods to the standard tracer. Like all methods defined by the standard interface, both new methods are non-transactional; that is, they do not require an active transaction to return a successful result.

getParentTracer method

The getParentTracer method returns a Tracer object for the tracer’s parent tracer. The parent tracer is the tracer with the name returned by the getParentTracerName method defined in the standard Tracer interface.

If this method is invoked on a Tracer object for a root tracer, then null is returned.

getChildTracer method

The getChildTracer method returns a Tracer object for a tracer that is a descendant (in terms of a parent-child relationship) of the invoked tracer. The name argument specifies the name of the child tracer.

Formally:

  • if the invoked tracer is a root tracer, then this method returns a tracer with the name specified by the name argument

  • otherwise, this method returns a tracer with the name: invokedTracer.getName() + "." + name.

The name argument must be a valid tracer name. Since any valid name can be specified, this method can be used to create any descendant tracer — child, grandchild, and so on.

This method throws a NullPointerException if the name argument is null. It throws an IllegalArgumentException if the name argument would result in an invalid tracer name. It throws a javax.slee.facilities.FacilityException if the child tracer cannot be returned because of a system-level failure.
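For instance (a minimal sketch; the tracer names are illustrative):

import com.opencloud.rhino.facilities.Tracer;

// Given a tracer named "service", returns the grandchild tracer
// named "service.db.query" in a single call.
private Tracer deriveQueryTracer(Tracer serviceTracer) {
    return serviceTracer.getChildTracer("db.query");
}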

Extended Tracer Interface

See the ExtendedTracer interface Javadoc.

Rhino extensions to the SLEE-defined Tracer interface.

Since 2.6.0, Rhino tracer extensions support a string interpolation format that allows final string construction to be deferred until it is certain that the message will be logged somewhere.

Methods taking object parameters and a string message expect the string to use "{}" to mark object insertion points. Parameters will be inserted in argument order.

tracer.info("An {} string with {} object parameters", "example", 2);

will log "An example string with 2 object parameters".

Rhino tracer extensions were written to support garbage-free tracing. To support this, unrolled overloads for up to 10 object parameters are provided. Passing more than 10 object parameters to these methods should be avoided, as such calls are incompatible with garbage-free tracing.

For situations where simple string interpolation is not sufficient, we offer the full flexibility of java.util.Formatter string formatting using the printf method.

MDC (Mapped Diagnostic Context) support was added in 2.6.0. This provides a mechanism to associate logging context with activities. Associated context is automatically included in every trace/log message in {name=value, name=value} format. As a consequence, some care must be taken when deciding what to add as context, to avoid overly long and unwieldy log messages.

Simple RA ping service tracing with bad MDC usage
 2017-11-29 12:18:42.236+1300 Fine    [trace.HA_PingService.1_1.HA_PingSbb.1_1] <jr-19> {simpleRa connection=1, txID=101:211148435325409} START
Tip Since Rhino 2.6.0

Usage Extensions

The JAIN SLEE specification allows SLEE components such as SBBs and resource adaptors to define a single usage parameters interface for the collection of runtime statistics. Statistics may be collected in different usage parameter sets — essentially named buckets each containing the same set of usage parameters as defined by the usage parameters interface. Creation and removal of named usage parameter sets is only supported through JMX management clients.

When building large SLEE applications or complex resource adaptors, the limitations of the SLEE-defined usage mechanism quickly become apparent. A single usage parameters interface lacks flexibility and forces statistics from all parts of the system to be lumped into a single view; and an application’s inability to control its own named usage parameter sets can create a discord between dynamic application behaviour and usage parameter set management requirements.

To alleviate these problems, Rhino provides a usage extension mechanism that allows an SBB or resource adaptor to declare multiple usage parameters interfaces, and defines a usage facility with which SBBs and resource adaptors can manage and access their own usage parameter sets. This section describes that extension mechanism.

Usage parameter types

The JAIN SLEE specification defines two types of usage parameters: counter-type and sample-type. Rhino’s extension mechanism does not add any new type of usage parameter, but does allow counter-type usage parameters to be set to a specific value rather than only incremented or decremented.

Usage parameter sets

A usage parameter set is a set that contains a usage parameter for each usage parameter name declared in the usage parameters interface of the corresponding SLEE component. Each method of the usage parameters interface declares the usage parameter name and type of a single usage parameter in this set. A SLEE component that generates usage information may access multiple usage parameters with the same lowest-level usage parameter name component, by using multiple usage parameter sets. The JAIN SLEE specification defines: a default usage parameter set, automatically available to any SLEE component that defines a usage parameters interface; and named usage parameter sets — which can also be used by the SLEE component, but can only be created and removed using the JMX management interface. Usage parameter sets in the JAIN SLEE specification occupy a flat namespace, and there is no relationship between any two usage parameter sets.

Rhino’s extension mechanism introduces a hierarchical structure and namespace for usage parameter sets. The SLEE-defined default usage parameter set is replaced with a root usage parameter set, and each usage parameter set can have zero or more child usage parameter sets. A usage parameter set name must be unique amongst its sibling usage parameter sets, but in any other case usage parameter set names may be reused.

A SLEE component creates, removes, and otherwise manages its own usage parameter sets using the usage facility and the methods defined on its usage parameters interfaces.

Usage parameter set types

Each usage parameter set may or may not have a type. Usage parameter set types are declared in the deployment descriptor, each with a corresponding usage parameters interface. A usage parameter set with no type has no usage parameters, and can be used as a structural placeholder in the usage parameter set hierarchy.

The type of the root usage parameter set is also declared in the deployment descriptor. This declaration is optional. If declared, the root usage parameter set will be created with the specified type; otherwise the root usage parameter set will be created with no type.

The type of a child usage parameter set is specified at runtime when the usage parameter set is created by the SLEE component. A child usage parameter set may be created with any recognised usage parameter set type, or may be created with no type.

A SLEE component must declare at least one usage parameter set type in order to use the usage facility and manage its usage parameter sets.

Aggregation and extension

Under certain conditions, a usage parameter update to a usage parameter set may aggregate to its parent usage parameter set. Aggregation simply means that the update is also applied to the parent usage parameter set, and then its parent, and so on, so long as an aggregation relationship holds between the parent and child usage parameter sets, or the root usage parameter set is reached. Aggregation is useful, for example, to record total usage in a parent usage parameter set where individual child usage parameter sets record usage for different conditions, such as the triggering protocol of the session.

Aggregation for a given usage parameter name can only occur from a child usage parameter set to a parent usage parameter set if:

  • the parent usage parameter set and the child usage parameter set have the same usage parameter set type; or

  • the child usage parameter set type extends, either directly or indirectly, the parent usage parameter set type; and both usage parameter set types declare a usage parameter of the same type (counter-type or sample-type) with that usage parameter name.

Usage parameter set type extension is declarative rather than programmatic. A usage parameter set type declares in the deployment descriptor if it extends another usage parameter set type. The usage parameters interface of a usage parameter set type that extends another usage parameter set type is not required to extend or otherwise be related in any way to the usage parameters interface of the extended usage parameter set type. As long as the two usage parameters interfaces declare a usage parameter with the same name and type (counter-type or sample-type), then aggregation may occur between the two usage parameter set types for that usage parameter name.

Aggregation is enabled by default for all usage parameters. Aggregation can be disabled on a per usage parameter name basis using the relevant annotation on each usage parameters interface usage parameter method declaration.

Usage parameters interfaces

SLEE components declare their usage parameters using one or more usage parameters interfaces. Each usage parameters interface must be defined according to the following rules:

  • A usage parameters interface must be defined in a named package; in other words, the class must have a package declaration.

  • A usage parameters interface must be declared as public.

  • A usage parameters interface may optionally extend the com.opencloud.rhino.facilities.usage.UsageParametersInterface interface.

  • Each increment, set, or sample method within the usage parameters interface must declare a lowest-level usage parameter name relevant to the SLEE component.

    • The SLEE derives the usage parameter type associated with this usage parameter name from the method name of the declared method.

  • Each get accessor method within the usage parameters interface provides access to the current approximate value or sample statistics for the lowest-level usage parameter name.

  • A single usage parameter name can only be associated with a single usage parameter type. The SLEE will reject a usage parameters interface that declares both a sample method and an increment or set method for the same usage parameter name.

    • It is legal to declare both increment and set methods for the same usage parameter name. These two methods simply offer alternative ways to update the same counter-type usage parameter.

  • A usage parameter name must be a valid Java identifier and begin with a lowercase letter, as determined by java.lang.Character.isLowerCase.

Counter-type usage parameter increment methods, sample-type usage parameter sample methods, and all usage parameter accessor methods, are declared in the usage parameters interface as defined in the SLEE specification.
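Putting these rules together, the following is a minimal sketch of a usage parameters interface. The interface and usage parameter names are hypothetical, and the accessor return types (long for a counter, SampleStatistics for a sample) are assumed from the description above:

package com.example.usage; // hypothetical package

import javax.slee.usage.SampleStatistics;
import com.opencloud.rhino.facilities.usage.UsageParametersInterface;

public interface CallUsage extends UsageParametersInterface {
    public abstract void incrementCallAttempts(long value); // counter-type increment method
    public abstract void setCallAttempts(long value);       // set method for the same counter
    public abstract long getCallAttempts();                 // counter accessor
    public abstract void sampleSetupTime(long value);       // sample-type method
    public abstract SampleStatistics getSetupTime();        // sample statistics accessor
}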

Counter-type usage parameter set method

A usage parameter set method must be defined in the usage parameters interface to declare the presence of and to permit updates to a counter-type usage parameter. The method name of the set method is derived by adding a "set" prefix to the usage parameter name. The set method has the following method signature:

public abstract void set<usage parameter name>(long value);
Note
  • The set method must be declared as public and abstract.

  • The first letter of the usage parameter name is uppercased in the definition of the set method.

  • The set method does not have a throws clause.

  • This method runs in an unspecified transaction context. Counter-type usage parameter updates do not require an active transaction. Counter-type usage parameter updates occur regardless of the outcome of any transaction active at the time of the update. If multiple threads update the same usage parameter at the same time, these updates are applied as if the updates were serial.

  • The method throws a javax.slee.SLEEException if the requested operation cannot be performed due to a system-level failure.

UsageParametersInterface interface

A usage parameters interface may optionally extend the UsageParametersInterface. By extending this interface, a usage parameters interface provides its corresponding usage parameter sets with easy access to methods reporting metadata about themselves and methods to manage their child usage parameter sets. The UsageParametersInterface is shown below:

package com.opencloud.rhino.facilities.usage;

import java.util.Collection;
import javax.slee.SLEEException;
import javax.slee.usage.SampleStatistics;

public interface UsageParametersInterface {
    public String name();
    public String type();
    public String key();
    public <T extends UsageParametersInterface> T getOrCreateChild(String name)
        throws NullPointerException, IllegalArgumentException, SLEEException;
    public <T extends UsageParametersInterface> T getOrCreateChild(String name, String type)
        throws NullPointerException, IllegalArgumentException,
               UnrecognizedUsageParameterSetTypeException, SLEEException;
    public boolean hasChild(String name)
        throws NullPointerException, SLEEException;
    public Collection<? extends UsageParametersInterface> children()
        throws SLEEException;
    public <T extends UsageParametersInterface> T parent()
        throws SLEEException;
    public void remove()
        throws SLEEException;
}

All methods in the UsageParametersInterface are non-transactional methods; that is, they do not require an active transaction to return a successful result, and their effects persist regardless of the outcome of any transaction active at the time of the method call.

The methods throw a SLEEException if the requested operation cannot be performed due to a system-level failure.

name method

The name method returns the name of the usage parameter set that the UsageParametersInterface object is providing usage access to. This name is the name that the usage parameter set was created with. The root usage parameter set has no name; therefore this method will return null if invoked on the root usage parameter set.

type method

The type method returns the type name of the usage parameter set that the UsageParametersInterface object is providing usage access to. If the usage parameter set was created with no type, this method returns null.

key method

The key method returns a unique identifier that identifies the usage parameter set that the UsageParametersInterface object is providing usage access to. The key differs from the parameter set name in that the key is an absolute identifier that takes into account the usage parameter set’s place in the usage parameter set hierarchy, and is able to identify the usage parameter set without any other context; whereas the usage parameter set name is relative to the usage parameter set’s parent usage parameter set only.

getOrCreateChild methods

The getOrCreateChild methods return the child usage parameter set with the name specified by the name argument. If the usage parameter set already exists, then the existing usage parameter set is returned; otherwise a new usage parameter set is created. If a new usage parameter set is created, and the type argument is specified, then the child usage parameter set is created with the specified type; otherwise it is created with the same type as the usage parameter set the method is invoked on.

These methods throw a NullPointerException if the name argument is null. The methods throw an IllegalArgumentException if the name argument is zero-length. If the type argument is specified and not null, but is not recognised as a defined usage parameters interface type, the method throws an UnrecognizedUsageParameterSetTypeException.
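For example, reusing the hypothetical CallUsage interface sketched earlier, a component might maintain one child usage parameter set per protocol (a sketch; the set names are illustrative):

// Creates the child set on first use; with no type argument it takes
// the same type as the parent set.
private void countAttempt(CallUsage parent, String protocol) {
    CallUsage perProtocol = parent.getOrCreateChild(protocol);
    perProtocol.incrementCallAttempts(1); // may aggregate to the parent set
}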

hasChild method

The hasChild method returns a Boolean value indicating if a child usage parameter set with a name equal to the name argument currently exists.

This method throws a NullPointerException if the name argument is null.

children method

The children method returns a collection containing all the child usage parameter sets of the usage parameter set that the UsageParametersInterface object is providing usage access to.

parent method

The parent method returns the usage parameter set that is the parent of the usage parameter set that the UsageParametersInterface object is providing usage access to. If the UsageParametersInterface object represents the root usage parameter set, then this method returns null.

remove method

The remove method removes the usage parameter set that the UsageParametersInterface object is providing usage access to. All child usage parameter sets are also removed, recursively.

The root usage parameter set cannot be removed; however, this method may be invoked on a root usage parameter set, in which case the following behaviour is observed:

  • All child usage parameter sets are removed as usual.

  • All usage parameters in the root usage parameter set are reset to their initial value, as if the root usage parameter set had been removed and recreated in a single atomic action.

Annotations

A usage parameters interface and its usage parameter methods may all be annotated to provide additional information to Rhino’s statistics and SNMP subsystems. This is supported in both the SLEE-defined usage mechanism and Rhino’s extension mechanism. Information provided to the statistics subsystem helps clients display statistics appropriately, whereas information provided to the SNMP subsystem is used to configure the OIDs included in SNMP notifications for usage parameter set updates.

@UsageParameters annotation

A usage parameters interface may be annotated with the @UsageParameters annotation. The @UsageParameters annotation is shown below:

package com.opencloud.rhino.facilities.usage;

import java.lang.annotation.*;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@Documented
public @interface UsageParameters {
    String description() default "";
    String oid() default "";
}
Note
  • The description parameter provides a general description for the usage parameters interface.

  • The oid parameter defines the base SNMP Object Identifier (OID) to use for all the usage parameter sets created from the usage parameters interface. The base OID must be specified using dotted string notation, such as 1.3.6.1.4.1.19808.2.1.1001. If a base OID is not specified, or is specified as a zero-length string, then a base OID is dynamically generated for the usage parameters interface. See SNMP statistics for a detailed explanation of OIDs.

Warning When installing a SLEE component that has an oid parameter value specified, make sure the base OID mapping is not already in use; otherwise a duplicate OID mapping alarm will be raised. If the mapping is in use, the Rhino console commands setsnmpoidmapping or removesnmpmappingconfig can be used to clear or remove it.

@UsageCounter annotation

A counter-type usage parameter increment or set method may be annotated with the @UsageCounter annotation. The @UsageCounter annotation is shown below:

package com.opencloud.rhino.facilities.usage;

import java.lang.annotation.*;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Documented
public @interface UsageCounter {
    String description() default "";
    CounterType counterType();
    boolean aggregate() default true;
    String shortName() default "";
    String unitLabel() default "";
    int mapping() default -1;
}
Note
  • The description parameter provides a general description for the usage parameter.

  • The counterType parameter identifies the specific type of counter that the usage parameter represents. A counter-type usage parameter may be one of the following subtypes:

    • counter — an unbounded counter that typically only either increments or decrements.

    • gauge — a counter that typically has a lower and/or upper bound with a value that may oscillate within the bounds.

  • The aggregate parameter indicates whether or not updates to the usage parameter may aggregate to the parent usage parameter set.

  • The shortName parameter defines a short, possibly abbreviated version of the usage parameter name.

  • The unitLabel parameter specifies a label for the counter’s unit type.

  • The mapping parameter specifies a numeric SNMP ID for the usage parameter. This ID is appended to the usage parameter interface’s base SNMP OID to form the OID of the usage parameter. Defining a static ID for each usage parameter can eliminate renumbering issues if the usage parameters interface is later expanded with new usage parameters.

If both an increment method and a set method are defined for a single counter-type usage parameter, and both methods are annotated with @UsageCounter, then Rhino will arbitrarily choose one of the annotations to use for the usage parameter and ignore the other. In this case, it is recommended that only one of the methods be annotated.

@UsageSample annotation

A sample-type usage parameter sample method may be annotated with the @UsageSample annotation. The @UsageSample annotation is shown below:

package com.opencloud.rhino.facilities.usage;

import java.lang.annotation.*;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Documented
public @interface UsageSample {
    String description() default "";
    SampleUnits sourceUnits();
    SampleUnits displayUnits();
    boolean aggregate() default true;
    String shortName() default "";
    String unitLabel() default "";
}
Note
  • The description parameter provides a general description for the usage parameter.

  • The sourceUnits parameter identifies the units that the sample values are recorded with. Units must be one of: time in seconds, time in milliseconds, time in nanoseconds, or a dimensionless count.

  • The displayUnits parameter identifies the units with which the sample statistics should be displayed.

    • This parameter has no effect on how sample values are reported when using a JMX Usage MBean interface to inspect usage parameters. The parameter is only meaningful to statistics clients using Rhino’s proprietary statistics API.

  • The aggregate parameter indicates whether or not updates to the usage parameter may aggregate to the parent usage parameter set.

  • The shortName parameter defines a short, possibly abbreviated, version of the usage parameter name.

  • The unitLabel parameter specifies a label for the sample’s unit type.
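A sketch combining these annotations is shown below. The enum constant names (CounterType.COUNTER, SampleUnits.MILLISECONDS) are assumptions for illustration; consult the annotation Javadoc for the actual values:

package com.example.usage; // hypothetical package

import com.opencloud.rhino.facilities.usage.*;

@UsageParameters(description = "Per-call statistics")
public interface AnnotatedCallUsage extends UsageParametersInterface {
    @UsageCounter(description = "Total call attempts",
                  counterType = CounterType.COUNTER, // constant name assumed
                  shortName = "attempts")
    public abstract void incrementCallAttempts(long value);

    @UsageSample(description = "Call setup time",
                 sourceUnits = SampleUnits.MILLISECONDS,  // constant name assumed
                 displayUnits = SampleUnits.MILLISECONDS,
                 unitLabel = "ms")
    public abstract void sampleSetupTime(long value);
}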

SBB usage parameters interface deployment descriptor

If an SBB declares one or more usage parameters interfaces using the Rhino usage extension mechanism, the oc-sbb-jar.xml Rhino extension deployment descriptor of the SBB must identify the usage parameters interfaces. The sbb-usage-parameters-interfaces element of the SBB extension deployment descriptor identifies these interfaces. It contains the following sub-elements:

Sub-element What it does
 description

This is an optional informational element.

 sbb-usage-parameters-interface

Each usage parameters interface type defined by the SBB must be identified by a sbb-usage-parameters-interface element.

Each sbb-usage-parameters-interface element has the following attributes and sub-elements:

Sub-element or attribute What it does
 root

This attribute indicates if this usage parameters interface should be used as the root usage parameter set type. At most one usage parameters interface may be declared as the root usage parameter set type.

 description

This is an optional informational element.

 sbb-usage-parameters-interface-type

This element specifies the usage parameters interface type name. This type name identifies the usage parameters interface when creating new child usage parameter sets. Each usage parameters interface declared in the deployment descriptor must have a unique type name.

 sbb-usage-parameters-interface-name

This element identifies the interface name of the usage parameters interface.

The sbb-usage-parameters-interface-type element has the following attribute:

Attribute What it does
 extends

This attribute indicates if the usage parameters interface type extends another usage parameters interface type. The extends attribute contains the type name of the usage parameters interface that this usage parameters interface type extends.

Resource adaptor usage parameters interface deployment descriptor

If a resource adaptor declares one or more usage parameters interfaces using the Rhino usage extension mechanism, the oc-resource-adaptor-jar.xml Rhino extension deployment descriptor of the resource adaptor must identify the usage parameters interfaces. The resource-adaptor-usage-parameters-interfaces element of the resource adaptor extension deployment descriptor identifies these interfaces. It contains the following sub-elements:

Sub-element What it does
 description

This is an optional informational element.

 resource-adaptor-usage-parameters-interface

Each usage parameters interface type defined by the resource adaptor must be identified by a resource-adaptor-usage-parameters-interface element.

Each resource-adaptor-usage-parameters-interface element has the following attributes and sub-elements:

Sub-element or attribute What it does
 root

This attribute indicates if this usage parameters interface should be used as the root usage parameter set type. At most one usage parameters interface may be declared as the root usage parameter set type.

 description

This is an optional informational element.

 resource-adaptor-usage-parameters-interface-type

This element specifies the usage parameters interface type name. This type name identifies the usage parameters interface when creating new child usage parameter sets. Each usage parameters interface declared in the deployment descriptor must have a unique type name.

 resource-adaptor-usage-parameters-interface-name

This element identifies the interface name of the usage parameters interface.

The resource-adaptor-usage-parameters-interface-type element has the following attribute:

Attribute What it does
 extends

This attribute indicates if the usage parameters interface type extends another usage parameters interface type. The extends attribute contains the type name of the usage parameters interface that this usage parameters interface type extends.

Usage facility

The usage facility is used by SBBs and resource adaptors to obtain access to their root usage parameter set and to create and manage child usage parameter sets. The usage facility is defined by the com.opencloud.rhino.facilities.usage.UsageFacility interface. An SBB obtains access to a UsageFacility object using a JNDI name lookup. A resource adaptor obtains access to a UsageFacility object from the ConfigProperties object passed to it by the SLEE.

A UsageFacility object is only made available to SLEE components that declare usage parameter interfaces using the Rhino usage extension mechanism. A SLEE component that does not declare any usage parameters interface, or declares a usage parameters interface using the SLEE-defined mechanism, will not be able to access the usage facility.

UsageFacility interface

The com.opencloud.rhino.facilities.usage.UsageFacility interface is shown below:

package com.opencloud.rhino.facilities.usage;

import java.util.Collection;
import javax.slee.facilities.FacilityException;

public interface UsageFacility {
    public static final String JNDI_NAME = "java:comp/env/slee/facilities/usage";
    public static final String CONFIG_PROPERTY_NAME = "slee-vendor:com.opencloud.rhino.facilities.usage";

    public <T extends UsageParametersInterface> T getRootUsageParameterSet();

    public <T extends UsageParametersInterface> T getUsageParameterSet(String key)
        throws NullPointerException, UnrecognizedUsageParameterSetException, FacilityException;

    public <T extends UsageParametersInterface> T getOrCreateChild(UsageParametersInterface parent, String name)
        throws NullPointerException, IllegalArgumentException,
               UnrecognizedUsageParameterSetException, FacilityException;

    public <T extends UsageParametersInterface> T getOrCreateChild(UsageParametersInterface parent, String name, String type)
        throws NullPointerException, IllegalArgumentException, UnrecognizedUsageParameterSetException,
               UnrecognizedUsageParameterSetTypeException, FacilityException;

    public boolean hasChild(UsageParametersInterface parent, String name)
        throws NullPointerException, UnrecognizedUsageParameterSetException, FacilityException;

    public Collection<? extends UsageParametersInterface> getChildren(UsageParametersInterface parent)
        throws NullPointerException, FacilityException;

    public void removeUsageParameterSet(UsageParametersInterface paramSet)
        throws NullPointerException, UnrecognizedUsageParameterSetException, FacilityException;
}
Note
  • The JNDI_NAME constant specifies the JNDI location where a UsageFacility object may be located by an SBB component in its component environment.

  • The CONFIG_PROPERTY_NAME constant specifies the configuration property name where a UsageFacility object may be located by a resource adaptor component in the ConfigProperties object passed to it in the raVerifyConfiguration, raConfigure, and raConfigurationUpdate methods.

  • All methods of the UsageFacility interface are non-transactional methods.

  • The SLEE provides a concrete class implementing the UsageFacility interface.

  • The methods of this interface throw the javax.slee.facilities.FacilityException if the requested operation cannot be completed because of a system-level failure.
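The following sketch shows an SBB locating and caching the facility when its SBB context is set (a class fragment; the error handling is illustrative only):

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.slee.SbbContext;
import com.opencloud.rhino.facilities.usage.UsageFacility;

private UsageFacility usageFacility;

public void setSbbContext(SbbContext context) {
    try {
        // JNDI_NAME is the full java:comp/env path, so it can be looked
        // up directly from the initial context.
        usageFacility = (UsageFacility)
            new InitialContext().lookup(UsageFacility.JNDI_NAME);
    } catch (NamingException e) {
        throw new RuntimeException("Unable to locate the usage facility", e);
    }
}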

getRootUsageParameterSet method

The getRootUsageParameterSet method returns the root usage parameter set for the SLEE component. If the SLEE component declares a root usage parameter set type, then the object returned from this method will be castable to the corresponding usage parameters interface for that type.

getUsageParameterSet method

The getUsageParameterSet method returns the usage parameter set with the key specified by the key argument. A usage parameter set’s identifying key can be obtained using the key method of the UsageParametersInterface interface.

The usage parameter set object returned from this method will be castable to the usage parameters interface corresponding with its type, as returned by the type method of the UsageParametersInterface interface.

This method throws a NullPointerException if the key argument is null. The method throws an UnrecognizedUsageParameterSetException if no usage parameter set currently exists with the specified key.

getOrCreateChild methods

The getOrCreateChild methods return the child usage parameter set of the usage parameter set specified by the parent argument with the name specified by the name argument. If the usage parameter set already exists, then the existing usage parameter set is returned; otherwise a new usage parameter set is created. If a new usage parameter set is created, and the type argument is specified, then the child usage parameter set is created with the specified type; otherwise it is created with the same type as the parent usage parameter set.

These methods throw a NullPointerException if the name argument is null. The methods throw an UnrecognizedUsageParameterSetException if the parent argument is not recognised by this usage facility object, for example if the usage parameter set was created by some other usage facility object. The methods throw an IllegalArgumentException if the name argument is zero-length. If the type argument is specified and not null, but is not recognised as a defined usage parameters interface type, the method throws an UnrecognizedUsageParameterSetTypeException.

hasChild method

The hasChild method returns a boolean value indicating if the usage parameter set identified by the parent argument contains a child usage parameter set with a name equal to the name argument.

This method throws a NullPointerException if the name argument is null. This method throws an UnrecognizedUsageParameterSetException if the parent argument is not recognised by this usage facility object, for example if the usage parameter set was created by some other usage facility object.

getChildren method

The getChildren method returns a collection containing all the child usage parameter sets of the usage parameter set identified by the parent argument.

This method throws a NullPointerException if the parent argument is null.

removeUsageParameterSet method

The removeUsageParameterSet method removes the usage parameter set identified by the paramSet argument. All child usage parameter sets are also removed, recursively.

This method throws a NullPointerException if the paramSet argument is null. This method throws an UnrecognizedUsageParameterSetException if the paramSet argument is not recognised by this usage facility object, for example if the usage parameter set was created by some other usage facility object.

Timer Facility Extensions

The timer facility is used by SBBs and SBB parts to manage single-shot and periodic timers. The default replication behaviour of Rhino when a timer is created depends on the replication mode of the activity that the timer is created against. Non-replicated timers are created for non-replicated activities, and replicated timers are created for activities that are replicated using the traditional savanna framework.

However, activities whose state is replicated using a key/value store only exist on one node at a time, so it does not make sense to replicate a timer created on such an activity across the entire cluster. Rhino’s default behaviour for timers created on a key/value store replicated activity is therefore not to replicate those timers at all.

There are cases, however, where an application desires that such a timer survives a node failure. The firing of the timer may even be expected to trigger the retrieval of application state from the key/value store and adoption of the SBB entity tree on a surviving node after the node originally owning the SBB entity tree has failed.

In order to support replication of timers on key/value store replicated activities, Rhino extends the standard javax.slee.facilities.TimerOptions class with the com.opencloud.rhino.facilities.RhinoTimerOptions class. The RhinoTimerOptions class defines two additional timer options: replication factor and convergence name session ownership record.

Replication Factor

The replication factor specifies the number of remote timer server nodes that the timer will be replicated to.

The default value of this option is 0, meaning the timer will not be replicated.

The maximum supported replication factor in the current version of Rhino is 1. A larger replication factor may be specified by an application without error, but it will be treated as if the application had specified a replication factor of 1.

Also, in the current version of Rhino, the remote timer server is implemented as an internal component of a Rhino node; creating a replicated timer by the mechanism described here therefore arms the timer on some other Rhino node as well as the local node. The remote timer server node is chosen at random; the application has no control over which node is selected. If there is only one node in the cluster when a timer with a non-zero replication factor is created, the local timer will still be created but a remote timer will not.

A replication factor greater than 0 can only be specified for timers set on replicated activities, but the value is ignored for activities that are not replicated using a key/value store. A replication factor less than 0 is treated as if it was 0.

If a replication factor greater than 0 is specified, then the convergence name session ownership record option must also be set in the RhinoTimerOptions object.

Convergence Name Session Ownership Record

This option contains a reference to the convergence name session ownership record of the SBB entity tree arming the timer. This record is used to identify which Rhino node within the cluster, if any, currently owns the SBB entity tree when the replicated timer fires.

A convergence name session ownership record can be obtained from the SBB context of the SBB, or SBB part context of the SBB part.

This option must be specified if the replication factor option is set to a value greater than 0; otherwise it is ignored.
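The following sketch arms a replicated single-shot timer. The RhinoTimerOptions setter names and the record parameter type are assumptions based on the option names described above; consult the RhinoTimerOptions Javadoc for the actual API:

import javax.slee.ActivityContextInterface;
import javax.slee.facilities.TimerFacility;
import javax.slee.facilities.TimerID;
import com.opencloud.rhino.facilities.RhinoTimerOptions;

// 'record' is the convergence name session ownership record obtained
// from the SBB or SBB part context.
private TimerID setReplicatedTimer(TimerFacility timerFacility,
                                   ActivityContextInterface aci,
                                   Object record) {
    RhinoTimerOptions options = new RhinoTimerOptions();
    options.setReplicationFactor(1);                          // setter name assumed
    options.setConvergenceNameSessionOwnershipRecord(record); // setter name and type assumed
    // Standard SLEE single-shot timer, firing 30 seconds from now.
    return timerFacility.setTimer(aci, null, System.currentTimeMillis() + 30000, options);
}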

Profile Facility Extensions

Rhino extends the standard javax.slee.facilities.ProfileFacility interface with the com.opencloud.rhino.facilities.profile.ProfileFacility interface, which adds additional functionality over what the JAIN SLEE specification provides.

ProfileFacility interface

The com.opencloud.rhino.facilities.profile.ProfileFacility interface is shown below:

package com.opencloud.rhino.facilities.profile;

import javax.slee.facilities.FacilityException;
import javax.slee.profile.UnrecognizedProfileTableNameException;

public interface ProfileFacility extends javax.slee.profile.ProfileFacility {
    public boolean profileTableExists(String profileTableName)
        throws NullPointerException, FacilityException;

    public ProfileTableDescriptor getProfileTableDescriptor(String profileTableName)
        throws NullPointerException, UnrecognizedProfileTableNameException, FacilityException;

    public Class<?> getProfileLocalInterface(String profileTableName)
        throws NullPointerException, UnrecognizedProfileTableNameException, FacilityException;
}

The extended interface is implemented by all ProfileFacility objects provided by Rhino to SBBs and resource adaptors.

The extended interface adds new methods to the profile facility. These methods are non-transactional; that is, they do not require an active transaction to return a successful result. The methods throw the javax.slee.facilities.FacilityException if the requested operation cannot be completed because of a system-level failure.

profileTableExists method

The profileTableExists method returns a boolean value that reports whether or not a profile table with the name specified by the profileTableName argument currently exists in the SLEE.

This method throws a NullPointerException if the profileTableName argument is null.

getProfileTableDescriptor method

The getProfileTableDescriptor method returns a metadata object that provides information about the profile table with the name specified by the profileTableName argument. The ProfileTableDescriptor interface is described below.

This method throws a NullPointerException if the profileTableName argument is null. The method throws an UnrecognizedProfileTableNameException if no profile table exists with the name specified by the profileTableName argument.

ProfileTableDescriptor interface

The ProfileTableDescriptor interface provides metadata information about a profile table. This information might be useful, for example, to determine if a profile table contains profiles of an expected type before querying the profile table or retrieving profiles from it.

The ProfileTableDescriptor interface is shown below:

package com.opencloud.rhino.facilities.profile;

import javax.slee.profile.ProfileSpecificationID;
import javax.slee.profile.ProfileTable;

public interface ProfileTableDescriptor {
    public ProfileSpecificationID getProfileSpecification();

    public Class<? extends ProfileTable> getProfileTableInterface();

    public Class<?> getProfileLocalInterface();
}

All methods in the ProfileTableDescriptor interface are non-transactional methods.

getProfileSpecification method

The getProfileSpecification method returns the component identifier of the profile specification of the profile table the metadata object describes.

getProfileTableInterface method

The getProfileTableInterface method returns the Class object of the profile table interface declared by the profile specification of the profile table the metadata object describes. If the profile specification does not declare a profile table interface, then this method returns the Class object for the default SLEE-defined javax.slee.profile.ProfileTable interface instead.

getProfileLocalInterface method

The getProfileLocalInterface method returns the Class object of the profile local interface declared by the profile specification of the profile table the metadata object describes. If the profile specification does not declare a profile local interface, then this method returns the Class object of the profile CMP interface instead.
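A sketch of this metadata being used defensively follows; the profile table and profile specification names are hypothetical:

import javax.slee.profile.ProfileSpecificationID;
import javax.slee.profile.UnrecognizedProfileTableNameException;
import com.opencloud.rhino.facilities.profile.ProfileFacility;
import com.opencloud.rhino.facilities.profile.ProfileTableDescriptor;

private boolean isSubscriberTable(ProfileFacility profileFacility)
        throws UnrecognizedProfileTableNameException {
    if (!profileFacility.profileTableExists("SubscriberProfiles")) return false;
    ProfileTableDescriptor descriptor =
        profileFacility.getProfileTableDescriptor("SubscriberProfiles");
    ProfileSpecificationID spec = descriptor.getProfileSpecification();
    // Verify the table holds profiles of the expected specification
    // before querying it or retrieving profiles from it.
    return "SubscriberProfileSpec".equals(spec.getName());
}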

Null Activity Factory Extensions

Rhino does not introduce any API changes to the standard javax.slee.nullactivity.NullActivityFactory interface, but instead offers multiple variants with different behaviours.

Each of these variants, which all implement the standard interface, can be found at different JNDI locations in an SBB’s environment.

JNDI location Behaviour

java:comp/env/rhino/replicated/nullactivity/factory

A Null Activity Factory implementation that always creates replicated Null Activities.

These activities are replicated using the namespace’s replication resource, for example savanna or key/value store replication.

java:comp/env/rhino/nonreplicated/nullactivity/factory

A Null Activity Factory implementation that always creates non-replicated Null Activities.

These activities will only ever exist on the Rhino node that they were created on.

java:comp/env/slee/nullactivity/factory

At the default JAIN SLEE-defined JNDI location is a Null Activity Factory whose behaviour depends on the replication mode of the service it belongs to.

  • If the service is replicated, then a Null Activity Factory that always returns replicated Null Activities will be bound here.

  • If the service is not replicated, then a Null Activity Factory that always creates non-replicated Null Activities will be bound here instead.
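For example (a sketch), an SBB can explicitly request a non-replicated null activity regardless of the service’s replication mode:

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.slee.nullactivity.NullActivity;
import javax.slee.nullactivity.NullActivityFactory;

private NullActivity createLocalOnlyNullActivity() throws NamingException {
    NullActivityFactory factory = (NullActivityFactory) new InitialContext()
        .lookup("java:comp/env/rhino/nonreplicated/nullactivity/factory");
    // This activity will only ever exist on the local Rhino node.
    return factory.createNullActivity();
}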

Activity Context Naming Facility Extensions

Rhino does not introduce any API changes to the standard javax.slee.facilities.ActivityContextNamingFacility interface, but instead offers multiple variants with different behaviours.

Each of these variants, which all implement the standard interface, can be found at different JNDI locations in an SBB’s environment.

The available JNDI locations and their behaviours are:

java:comp/env/rhino/replicated/facilities/activitycontextnaming

An Activity Context Naming Facility implementation that will bind non-replicated activities as well as those activities replicated using the namespace’s replication resource. It will only look up names in non-replicated storage and the storage associated with the namespace’s replication resource.

  • If the namespace is replicated using savanna, then this facility behaves no differently than the Activity Context Naming Facility found at the default SLEE-defined JNDI location and can handle any activity.

  • If the namespace is replicated using a key/value store, then this facility will only handle non-replicated and key/value store replicated activities. Attempting to bind, for example, a Service Activity or Profile Table Activity, which are always replicated using savanna, using this facility will cause a FacilityException to be thrown.

java:comp/env/rhino/nonreplicated/facilities/activitycontextnaming

An Activity Context Naming Facility implementation that will only bind non-replicated activities, and will only look up bindings in non-replicated storage.

Attempting to bind a replicated activity using this facility will cause a FacilityException to be thrown.

java:comp/env/slee/facilities/activitycontextnaming

At the default JAIN SLEE-defined JNDI location is an Activity Context Naming Facility that will handle any type of activity. Name lookups will consult non-replicated storage as well as all replicated storage resources.

  • If the namespace is replicated using savanna, then this facility behaves identically to the facility found at java:comp/env/rhino/replicated/facilities/activitycontextnaming as described above.

  • If the namespace is replicated using a key/value store, then this facility will perform name lookups in all of non-replicated storage, key/value store-replicated storage, and savanna-replicated storage.
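
As a hedged sketch, the fragment below binds an activity context name using the non-replicated variant; aci is assumed to reference the ActivityContextInterface of a non-replicated activity, and NamingException handling is elided:

// A minimal sketch: create a binding stored only in non-replicated storage.
ActivityContextNamingFacility nonReplicatedNaming = (ActivityContextNamingFacility)
    new InitialContext().lookup("java:comp/env/rhino/nonreplicated/facilities/activitycontextnaming");

// Binding a replicated activity here would cause a FacilityException.
nonReplicatedNaming.bind(aci, "example-binding-name");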

Tip The Rhino-specific JNDI locations for the Activity Context Naming Facility are available from Rhino 2.7.0.

Lock Facility

The lock facility allows resource adaptors to obtain transaction-based distributed locks.

In order to use the lock facility, one must obtain:

  • a reference to the LockFacility itself; and

  • a reference to the SLEE Transaction Manager, as all locks must be obtained from within a transaction.

The following code fragment illustrates how you can obtain these references in a resource adaptor:

package ...

import javax.slee.resource.ConfigProperties;
import javax.slee.resource.ResourceAdaptor;
import javax.slee.resource.ResourceAdaptorContext;
import javax.slee.transaction.SleeTransactionManager;
import com.opencloud.rhino.facilities.lock.LockFacility;

public class FooResourceAdaptor implements ResourceAdaptor {
    @Override
    public void setResourceAdaptorContext(ResourceAdaptorContext context) {
        // save context ref
        this.context = context;

        // ...
    }

    @Override
    public void raConfigure(ConfigProperties configProps) {
        // get refs to transaction manager and lock facility
        txManager = context.getSleeTransactionManager();
        lockFacility = (LockFacility)configProps.getProperty(LockFacility.CONFIG_PROPERTY_NAME).getValue();

        // ...
    }

    // ...

    private ResourceAdaptorContext context;
    private SleeTransactionManager txManager;
    private LockFacility lockFacility;
}

With these references, one can then proceed to acquire locks as necessary. For example:

private void doSomeWorkThatRequiresALock() {
    // start a transaction
    SleeTransaction tx = txManager.beginSleeTransaction();
    try {
        // acquire exclusive lock
        lockFacility.acquireExclusive("SomeLock");

        // do the work
        // ...

        // successfully completed work - commit transaction
        // automatically causes the lock to be released
        tx = null;
        txManager.commit();
    }
    catch (Exception e) {
        if (tx != null) {
            // failed to complete work - rollback transaction
            // automatically causes the lock to be released
            tx.rollback();
        }
    }
}

Logging Context Facility

This package provides functionality for Resource Adaptors to interact with the ThreadLocal-stored MDC (Mapped Diagnostic Context), also known as the logging context.

What is MDC

MDC is a Map<String, String> of keys and values that is made available to the logging subsystem for various purposes. The most common and straightforward use of MDC is to print it with every log message; this provides context that would otherwise have to be passed around manually. A simple SIP example of useful context is the P-Charging-Vector header: as it uniquely identifies a single call, it becomes trivial to identify all log messages related to handling an individual call.

By default Rhino logs the entire MDC on every log message.

Rhino provides transparent management of MDC in the event router thread pool: each Activity stores a copy of the creating thread’s MDC at activity creation, and this stored context is restored to the event router thread when event processing begins.

Services (and service components) have a separate interface for managing MDC elements.

This package gives Resource Adaptors the tools necessary to manipulate MDC appropriately. Rhino does not transparently replicate MDC across thread transitions in RA-internal thread pools; that is the responsibility of the RA that owns the thread pool.

Thread Boundaries

As MDC is stored in a ThreadLocal map, it does not follow execution across thread boundaries. For example, an RA that provides an asynchronous API will have some mechanism for handing work off to another thread (the handling thread). For logging done by the handling thread to include the MDC, the handoff mechanism must support this by copying the MDC to the handling thread and clearing it from that thread when done.
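
The copy/restore/clear pattern might look like the following sketch. The MdcAccess helper is a hypothetical stand-in for the accessors this package provides; only the pattern itself is being illustrated.

// Hypothetical sketch of an MDC-preserving handoff for an RA-owned thread pool.
// MdcAccess stands in for this package's real MDC accessor API.
private Runnable withCallerMdc(final Runnable work) {
    // Copy the submitting thread's MDC at handoff time.
    final Map<String, String> callerMdc = MdcAccess.copyOfCurrentMdc();
    return new Runnable() {
        @Override
        public void run() {
            // Restore the copied MDC on the handling thread.
            MdcAccess.setCurrentMdc(callerMdc);
            try {
                work.run();
            }
            finally {
                // Clear the MDC so it does not leak into unrelated work.
                MdcAccess.clearCurrentMdc();
            }
        }
    };
}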

What to include

As the entire MDC map is logged unconditionally with every log message, human-readable logs require that the map be kept small: both a small number of total entries, and the shortest practicable entries. A key should only be added to the MDC if it will provide significant value in tracking or sorting log messages.

For example, Rhino includes by default two MDC entries:

  • The current namespace (if not the default) as ns=foo

  • The current transaction ID (if present) as txId=101:203462114705319

The included txId is used for Rhino diagnostics, to track exactly what occurred within a given SLEE transaction at every stage, from service code through to the Rhino internals of transaction handling.

Identifiers that are short-lived or not meaningful outside the RA should generally not be added to MDC at all, and certainly not in a persistent manner except in extraordinary circumstances.

As an example of an identifier that should not be added persistently, consider the SLEE transaction ID. It is not persisted within the backing Activity across multiple events, as each event takes place within a single transaction. It is a rare exception where an ephemeral key makes sense in MDC: processing each event involves at least one transaction, and being able to track the flow of logic within a single transaction can be extremely useful in error conditions. However, the transaction ID becomes meaningless on transaction completion, whether by commit or rollback, so the txId key is removed from MDC as soon as the transaction completes.

Tip Since Rhino 2.6.0

SAS Facility

The SAS facility provides Resource Adaptor and service developers an interface for integrating with the Metaswitch Service Assurance Server, an end-to-end tracing system. SAS provides an integrated end-to-end view of calls passing through an operator’s network. It combines traces from all network elements with reporting capability into a complete trace of the call that can be examined at multiple levels of detail to determine how the call was processed.

The principal interface for reporting data to SAS is the Trail. Trails are created by the SAS facility, either explicitly, when an RA calls startTrail(), or implicitly, when an RA or service calls getOrCreateTrail() with an activity reference. Trails typically last the lifetime of an activity but may be shared by multiple activities; for example, a database lookup during call setup will use the trail of the SIP dialog or transaction.

SAS trails are composed of two data message types reported by the network element: Events and Markers. Each event and marker in a trail is reported asynchronously to the SAS server. SAS events are functional events that affect the processing of a call, e.g. a network message, or a decision made by a service and the data that was used. SAS markers are informational data about a trail, typically used for searching or for correlation between trails. Both events and markers can contain parameters to provide information for display, search, or correlation.

Bundles and Mini-bundles

A bundle file is a YAML document mapping event names to human-readable descriptions. SAS requires event decoding bundle files to display the events received. Correlation and storage do not require the bundle file; it is only used at display time.

Rhino extends the SAS bundle model to use composable mini-bundles that are assembled at runtime into a bundle to export for loading into SAS. Each network element may only use one bundle when reporting to a SAS instance, so Rhino builds this bundle from all the mini-bundles found in components deployed to a namespace. Developers of resource adaptors and services that use the SAS facility must write mini-bundle files to describe the events their components report.

Mini-bundles contain a set of named event descriptions and enumerated values used to construct a SAS bundle. These are combined with system-identifying information configured in Rhino to produce the bundle SAS will use to decode messages. Each mini-bundle file starts with a version, followed by a set of events and, optionally, enums listing values that can be expanded from integer "static" parameters in event messages. A component may report events from multiple mini-bundles. Events have symbolic names to support use by multiple components. They must be packaged in a deployment jar that is installed into Rhino with the component that reports the events. This may be the same jar as the component or another that the component depends on.

After deployment, the system operator exports the merged bundle and installs it into the SAS UI. At this time the component bundles are combined with the configured Resource ID and written with numeric IDs to form a bundle file SAS can load. If multiple versions of a mini-bundle are deployed, only the latest version will be used. For more information on how Rhino combines mini-bundles, see SAS Bundle Generation and SAS Bundle Mappings in the Rhino Administration and Deployment Guide.

Note It is not safe to alter the meaning or order of event or enum values between versions of a bundle. This includes removing old events; a bundle may only grow larger.

Rhino comes with an Ant task, generate-bundle-enums, for creating enum classes from bundle files. Java enums are created for the SAS events and enums in the bundle file provided. The bundle files must be named for the package the enums will be created in.

Each event in a bundle must have a summary and a level. It may optionally contain details and call-flow data. Call-flow descriptions must contain data, protocol, and direction; other attributes are optional but should be provided where available. All text attributes of events can contain parameterised text using the Liquid templating language. For full details of the event structure, contact your Metaswitch representative.

For an example of a mini-bundle file, see Service Assurance Server (SAS) Tracing in the HTTP resource adaptor guide.

Invoking Trail Accessor

On event delivery to a service, the invoking Trail is made available through the Invoking Trail Accessor. Calling getInvokingTrail() will return the SAS trail attached to the ACI on which the event was fired. On subsequent downcalls into RAs that result in a new SLEE activity being created via SleeEndpoint.startSuspended, the invoking trail (if one exists) will automatically be attached to the new activity. This behaviour means that a SAS trail will automatically be passed along through any number of downcalls into RAs and asynchronous events as long as the service always attaches to the ACI for the new activity.

In most cases this behaviour is desired, and it saves one from having to add code that explicitly gets a trail from the invoking ACI and attaches it to a new ACI. If necessary, however, one can call setInvokingTrail(Trail) or remove() to manually set or clear the invoking trail before calling into an RA.
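
A hedged sketch of that manual override follows; it assumes the accessor methods named above are statically accessible on the Invoking Trail Accessor class (assumed here to be called InvokingTrailAccessor):

// Detach the invoking trail so a downcall starts a fresh, untrailed activity,
// then restore it afterwards.
Trail invokingTrail = InvokingTrailAccessor.getInvokingTrail();
InvokingTrailAccessor.remove();
try {
    // downcall into the RA
    // ...
}
finally {
    InvokingTrailAccessor.setInvokingTrail(invokingTrail);
}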

Trail Association and Colocation

SAS Trails that form part of the same call can be associated by calling associate(Trail). If the Rhino SAS facility is configured with multiple SAS servers, different trails may not all be using the same server.

Associating trails that use the same SAS server is more efficient, as a single trail association message can be sent to that server informing it that the trails form part of the same trail group. If the trails being associated are using different servers, then a generic correlation marker is sent to each SAS server with the UUID of one of the trails. SAS then needs to perform additional work to correlate the trails.

To ensure related trails are colocated on the same SAS server, SLEE components can call startColocatedTrail(Trail) or startAndAssociateTrail(Trail, Scope). These methods will create a new SAS trail using the same server as the given trail and, in the second case, will automatically send a trail association message to the server.
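
For example, a component holding an existing trail might start a colocated, associated trail as in this hedged sketch (sasFacility is an assumed reference to the SAS facility, and scope an assumed association scope value from the facility API):

// Start a new trail on the same SAS server as 'existingTrail' and send a
// trail association message in one step.
Trail relatedTrail = sasFacility.startAndAssociateTrail(existingTrail, scope);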

Entry point to the SAS event reporting facility.

Users of this facility create a Trail, then use that to create and report events. The Trail interface provides two ways to create and then report Events and Markers:

  1. create a message, add parameters, then report the message

  2. convenience methods that create a Marker or Event with various combinations of parameters and report it, all in one call.

The SAS Facility supports communication with a federation of SAS servers. As such, trails created with startTrail() are distributed between servers by simple round-robin. When starting a new trail that will be associated with an existing one, use startColocatedTrail(Trail) or startAndAssociateTrail(Trail, Scope) to start a new trail on the same server as the given trail.

Tip Since Rhino 2.6.0

The primary interface for creating and reporting markers and events. A trail is a sequence of related events and markers representing the processing sequence for a dialog or transaction.

Tip Since Rhino 2.6.0

Provides access to the InvokingTrail. The InvokingTrail is the SAS trail attached to the activity owning the current event, at the time the event is delivered.

Tip Since Rhino 2.6.0

Superinterface of EventMessage and MarkerMessage with functionality common to both.

A Message sent to SAS may contain parameters to be used when decoding the message on the SAS server for display or correlation.

Variable-length parameters

Thread-safety

There are some critical thread-safety issues to consider for parameters added to messages.

Parameters are not copied or marshalled into the message until report() is called, and then only if SAS tracing is enabled.

This means two things:

  1. Parameters must not be modified until the report() method is called.

  2. Large or complex objects can be passed as parameters; if tracing is disabled, or the event is discarded and not reported, they will incur negligible memory or CPU cost.

The threadSafeParam(byte[]) method allows callers to add a parameter that will not be copied, even after report() is called. This should be used for parameters which the caller plans to modify and has therefore defensively copied or marshalled into a new byte array.

Object parameter handling

Parameters passed to varParam(Object) and the multi-parameter equivalents will be handled differently depending on their type:

  • null — encoded as a zero-length byte array

  • byte[] — copied directly into the message

  • java.nio.ByteBuffer — Warning: unsupported. It will be coerced to a zero-length byte array; use threadSafeParam(byte[]) instead.

  • java.lang.String — encoded as UTF-8 and copied into the message

  • implements EncodeableParameter — EncodeableParameter#encode(ByteBuffer) is called and the bytes written to the buffer are copied into the message

  • implements MarshalableParameter — MarshalableParameter#marshal() is called and the returned byte[] is copied into the message

  • any other type — Object#toString() is called and the result is handled as for java.lang.String

The varParam method should not be used for parameters that implement EnumParameter; it will still add the parameter, but will log an error message. Use staticParam(EnumParameter) for enum parameters instead.

As noted above, this marshalling/copying happens when the report() method is called, not when the parameters are added to the message. The exceptions are the following conversions, which happen immediately when the parameter is added (a usage sketch follows the list):

  1. null parameter to empty byte array

  2. ByteBuffer to empty byte array

  3. EnumParameter to its integer value
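
A hedged sketch of the overall create/add/report pattern follows. How the EventMessage is obtained from the Trail is elided, and MyEvents.EXAMPLE stands in for a hypothetical generated bundle enum that implements EnumParameter:

EventMessage event = ...; // created via one of the Trail's event-creation methods

event.varParam("sip:alice@example.com");  // String: encoded as UTF-8 when report() is called
event.threadSafeParam(marshalledBytes);   // defensively copied byte[]: never copied again, even at report()
event.staticParam(MyEvents.EXAMPLE);      // hypothetical EnumParameter: converted to its integer value immediately
event.report();                           // remaining parameters are marshalled/copied here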

Tip Since Rhino 2.6.0

Methods to set message fields specific to Events.

An EventMessage is a message to SAS describing a network event or processing step within a network service. EventMessages have an ID used to look up a description from a bundle deployed to the SAS server and contain parameters holding information about the state of the system that led to the event being reported.

Tip Since Rhino 2.6.0

Methods to set message fields specific to Markers.

A MarkerMessage is a message sent to SAS to provide context about a trail. The marker may be used for correlating multiple trails into a trace or branch. It may also be used for searching for a trace in the SAS UI.

Some markers have special meaning to the SAS server: the START, END, and FLUSH markers indicate when a trail is started, when one is ended, and when no more data is expected within the next few seconds.

Tip Since Rhino 2.6.0

Ant task to generate Java enums from mini-bundle files. Contains an implicit fileset parameter that identifies the set of SAS bundle files to create Java enums for.

Task attributes are:

  • dir — the base directory to search for bundle files

  • destDir — the output base directory where enum classes will be created

  • eventsClassName — optional; the classname for the generated events enum

See Fileset for attributes to control the selection of files parsed by the task.

Generate enums from a bundle file com/opencloud/slee/services/example/sas/sas-bundle.yaml found in ${resources}/sas-bundles.

The output classes will be created in ${src}, including com.opencloud.slee.services.example.sas.SasEvent and any enums representing SAS enums in the bundle.

<generate-bundle-enums destDir="${src}" dir="${resources}/sas-bundles" includes="com/opencloud/slee/services/example/sas/sas-bundle.yaml"/>

Generate enums from a bundle file at the path sas/com.opencloud.test.rhino.sas.bundle.ra.yaml relative to ${resources}.

The output classes will be created in ${src}, including com.opencloud.test.rhino.sas.bundle.ra.RaSasEvent and any enums representing SAS enums in the bundle.

<oc:generate-bundle-enums destDir="${src}"
                              dir="${resources}"
                              eventsClassname="RaSasEvent">
   <include name="sas/com.opencloud.test.rhino.sas.bundle.ra.yaml"/>
</oc:generate-bundle-enums>

Session Ownership Facility

The session ownership facility allows SLEE resource adaptors to interact with the session ownership subsystem.

The principal interface to the session ownership facility is SessionOwnershipFacility. A resource adaptor uses this facility to store, retrieve, or delete session ownership records. Each of these operations executes asynchronously. If the resource adaptor is interested in the result of an operation (for retrieve operations this is almost certainly the case), it can provide a listener object when invoking the operation. The listener object receives a callback, typically from another thread, when the operation result becomes available.

Full API documentation for the session ownership facility is available here.

Tip Since Rhino 2.6.1

JNDI Environment

The JAIN SLEE specification defines that an SBB component has access to a JNDI API namespace where it may obtain access to various SLEE facilities and factories. Using the JNDI API for name lookups in SBB code works well when you know exactly what you’re looking for. An SBB that uses SBB parts, however, may have entries in its JNDI namespace introduced by those SBB parts that are not known at SBB compile time but are still of interest to the SBB. The JNDI API does not differentiate one binding from another, so it is not easy for an SBB to, for example, find all resource adaptor entity bindings.

As an extension, Rhino provides all JNDI environment bindings to an SBB or SBB part in a separate map structure. The map is keyed on the fully qualified binding name, with map values containing metadata about the type of binding as well as the bound object itself.

An SBB obtains access to the JNDI bindings map from its RhinoSbbContext object (a Rhino extension of SbbContext):

package com.opencloud.rhino.slee;

import java.util.Map;
import javax.slee.SLEEException;
import javax.slee.SbbContext;
import com.opencloud.rhino.slee.environment.JndiBinding;

public interface RhinoSbbContext extends SbbContext {
    public Map<String,JndiBinding> getJndiBindings()
        throws SLEEException;

    ...
}

An SBB part obtains access to the JNDI bindings map from its SbbPartContext object:

package com.opencloud.rhino.slee.sbbpart;

import java.util.Map;
import javax.slee.SLEEException;
import javax.slee.SbbContext;
import com.opencloud.rhino.slee.environment.JndiBinding;

public interface SbbPartContext extends SbbContext {
    public Map<String,JndiBinding> getJndiBindings()
        throws SLEEException;

    ...
}

JndiBinding class

The com.opencloud.rhino.slee.environment.JndiBinding class is shown below:

package com.opencloud.rhino.slee.environment;

public abstract class JndiBinding {
    public abstract BindingType getType();
    public String getJndiName() { ... }
    public Object getValue() { ... }

    public boolean equals(Object o) { ... }
    public int hashCode() { ... }
}

getType method

The getType method returns the type of the JNDI binding. The BindingType enumeration is shown below:

package com.opencloud.rhino.slee.environment;

public enum BindingType {
    ENV_ENTRY,
    ACTIVITY_CONTEXT_INTERFACE_FACTORY,
    RESOURCE_ADAPTOR_ENTITY,
    FACILITY,
    LIMITER_ENDPOINT
}

A JNDI binding may be one of:

  • an environment entry

  • an activity context interface factory, such as the null activity context interface factory

  • a resource adaptor entity link binding

  • a SLEE facility, such as the timer facility and alarm facility

    Note Activity factories, such as the null activity factory and service activity factory, are considered to be SLEE facilities for this purpose.
  • a configured Rhino limiter endpoint.

getJndiName method

The getJndiName method returns the fully qualified name of the JNDI binding. This value is equal to the key that the JndiBinding object is stored with in the map of bindings.

getValue method

The getValue method returns the object bound to the JNDI name. For example, this could be an environment entry value, a SLEE facility, and so on.
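
As a minimal sketch, an SBB could use the bindings map to find every resource adaptor entity link binding in its environment, something that is awkward with plain JNDI lookups ('context' is the SBB's RhinoSbbContext):

// Collect all resource adaptor entity link bindings, keyed by JNDI name.
Map<String, Object> raEntityBindings = new HashMap<>();
for (JndiBinding binding : context.getJndiBindings().values()) {
    if (binding.getType() == BindingType.RESOURCE_ADAPTOR_ENTITY) {
        // getJndiName() equals the key the binding is stored under in the map.
        raEntityBindings.put(binding.getJndiName(), binding.getValue());
    }
}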

Subclasses of JndiBinding

The JndiBinding class is an abstract class. A subclass of JndiBinding exists for each different type of binding. Each of these is described below.

EnvEntry class

The EnvEntry class is used to describe a JNDI binding for an environment entry. The EnvEntry class is shown below:

package com.opencloud.rhino.slee.environment;

public final class EnvEntry extends JndiBinding {
    public EnvEntry(String jndiName, Object value) { ... }
    public BindingType getType() { return BindingType.ENV_ENTRY; }
    public String toString() { ... }
}

Facility class

The Facility class is used to describe a JNDI binding for a SLEE facility such as the timer facility as well as activity factories such as the null activity factory. The Facility class is shown below:

package com.opencloud.rhino.slee.environment;

public final class Facility extends JndiBinding {
    public Facility(String jndiName, Object value) { ... }
    public BindingType getType() { return BindingType.FACILITY; }
    public String toString() { ... }
}

LimiterEndpoint class

The LimiterEndpoint class is used to describe a JNDI binding for a configured limiter endpoint. The LimiterEndpoint class is shown below:

package com.opencloud.rhino.slee.environment;

public final class LimiterEndpoint extends JndiBinding {
    public LimiterEndpoint(String jndiName, Object value) { ... }
    public BindingType getType() { return BindingType.LIMITER_ENDPOINT; }
    public String toString() { ... }
}

ResourceAdaptorTypeBinding class

The ResourceAdaptorTypeBinding class is the superclass for JNDI binding metadata classes related to resource adaptor types. The ResourceAdaptorTypeBinding class is shown below:

package com.opencloud.rhino.slee.environment;

import javax.slee.resource.ResourceAdaptorTypeID;

public abstract class ResourceAdaptorTypeBinding extends JndiBinding {
    public ResourceAdaptorTypeID getResourceAdaptorType() { ... }
    public int hashCode() { ... }
}

The getResourceAdaptorType method returns the component identifier of the resource adaptor type that the binding is associated with.

ActivityContextInterfaceFactoryBinding class

The ActivityContextInterfaceFactoryBinding class is used to describe a JNDI binding for an activity context interface factory. The ActivityContextInterfaceFactoryBinding class is shown below:

package com.opencloud.rhino.slee.environment;

import javax.slee.resource.ResourceAdaptorTypeID;

public final class ActivityContextInterfaceFactoryBinding extends ResourceAdaptorTypeBinding {
    public ActivityContextInterfaceFactoryBinding(String jndiName, ResourceAdaptorTypeID raType, Object value) { ... }
    public BindingType getType() { return BindingType.ACTIVITY_CONTEXT_INTERFACE_FACTORY; }
    public String toString() { ... }
}

ResourceAdaptorEntityBinding class

The ResourceAdaptorEntityBinding class is used to describe a JNDI binding for a resource adaptor entity. The ResourceAdaptorEntityBinding class is shown below:

package com.opencloud.rhino.slee.environment;

import javax.slee.resource.ResourceAdaptorTypeID;

public final class ResourceAdaptorEntityBinding extends ResourceAdaptorTypeBinding {
    public ResourceAdaptorEntityBinding(String jndiName, ResourceAdaptorTypeID raType, Object value) { ... }
    public BindingType getType() { return BindingType.RESOURCE_ADAPTOR_ENTITY; }
    public String toString() { ... }
}

Bindings

Rhino takes a dynamic approach to dependency specification and binding, supported by multiple install levels for components, virtual copies of installed components, and dynamic dependency specification and application.

Dynamic component reference bindings

The JAIN SLEE specification states that every deployable component has an identity specified by a name, vendor, and version tuple. This identity must be unique within the scope of a given component type. A component may reference other components of various types by specifying their type and identity in the component deployment descriptor.

These references are static: they are defined prior to installation in the SLEE, and once the component is installed the references cannot be changed. Static references generally function adequately; however, they can make the application upgrade process more complicated than it needs to be.

For example, if a library component used by an application has a bug fixed and a new version of the library is produced, then all dependent components such as SBBs and profile specifications need to have their references updated and new versions released; and all components dependent on those, such as other SBBs and service components, also need updating and releasing; and so on through the chain of dependencies.

A more dynamic approach to dependency specification and binding helps make the upgrade process easier. Rhino provides such an approach, supported by the concepts described in detail below.

Install levels

The JAIN SLEE specification defines that a deployable unit and the components it contains are either installed or not installed in the SLEE. When a deployable unit is installed, its components are verified for correctness, and if verification is successful the components are made ready for use; for example, service components are initialised ready for activation.

To facilitate the management of dynamic component bindings, Rhino provides finer-grained control over the degree to which a deployable unit and its components are installed. Installed components each have one of the following install levels:

  • INSTALLED — The component is installed in the SLEE. The deployment descriptor has been validated for syntax and has been parsed; however, any component and class references, configuration parameters, and other relevant information contained in the deployment descriptor have not been verified for correctness.

  • VERIFIED — The component is installed and has successfully passed all verification checks.

  • DEPLOYED — The component is installed, it has passed verification, any necessary implementation code has been generated, and it has been deployed to each cluster node ready for immediate use (such as service activation).

Here are some ways you can manage install levels:

Specify the level

The initial install level for a deployable unit and the components it contains can be specified when the deployable unit is installed.

installlocaldu <file url> [-type <type>] [-installlevel <level>] [-url url]
  Description
    Install a deployable unit or other artifact. This command will attempt to
    forward the file content (by reading the file) to rhino if the management client
    is on a different host.  To install something other than a deployable unit, the
    -type option must be specified.  The -installlevel option controls to what
    degree the deployable artifact is installed.  The -url option allows the
    deployment unit to be installed with an alternative URL identifier

For example, using rhino-console:

[Rhino@localhost:2199 (#0)] installlocaldu /path/to/deployable-unit.jar -installlevel INSTALLED
Installed: DeployableUnitID[url=file:/path/to/deployable-unit.jar]
Note The -installlevel parameter is optional. If an install level is not specified when a deployable unit is installed, an install level of DEPLOYED is assumed.

Verify an installed component

An installed component can subsequently be verified using the verify management operation.

verify <type> <url|component-id>
  Description
    Verify an installed deployable unit or component

For example, using the rhino-console command:

[Rhino@localhost:2199 (#0)] verify sbb name=MySBB,vendor=OpenCloud,version=1.0
SbbID[name=MySBB,vendor=OpenCloud,version=1.0] verified
The following components were also verified:
  ProfileSpecificationID[name=MyProfileSpec,vendor=OpenCloud,version=1.0]
  LibraryID[name=MyLibrary,vendor=OpenCloud,version=1.0]

If the component being verified depends on other components that are installed but yet to be verified, then Rhino will automatically verify those dependent components first. (See the note about install levels below.)

Verify an installed DU

An installed deployable unit may also be "verified". Deployable unit verification is a convenience mechanism to verify all the components contained in the deployable unit with a single command. For example, using rhino-console:

[Rhino@localhost:2199 (#0)] verify du file:/path/to/deployable-unit.jar
Deployable unit file:/path/to/deployable-unit.jar verified
The following components were verified:
  SbbID[name=MySBB,vendor=OpenCloud,version=1.0]
  ProfileSpecificationID[name=MyProfileSpec,vendor=OpenCloud,version=1.0]
  LibraryID[name=MyLibrary,vendor=OpenCloud,version=1.0]

Only the deployable unit components that have yet to be verified are affected by this command. As such, verifying a deployable unit that only contains already verified components has no further effect.

Deploy a component

A component can be deployed using the deploy management operation.

deploy <type> <url|component-id>
  Description
    Deploy an installed deployable unit or component

For example, using rhino-console:

[Rhino@localhost:2199 (#0)] deploy sbb name=MySBB,vendor=OpenCloud,version=1.0
SbbID[name=MySBB,vendor=OpenCloud,version=1.0] deployed
No other components were affected by this operation

If the component being deployed depends on other components that are not yet deployed, then Rhino will automatically deploy those dependent components first, and so on recursively. (See the note about install levels below.)

If any component being deployed has not yet been verified, then Rhino will automatically verify the component before deploying it. The deploy operation will fail if any verification errors are found in this case.

Deploy an installed DU

An installed deployable unit may also be "deployed". Deployable unit deployment is, again, a convenience mechanism to deploy all the components contained in the deployable unit with a single command. A deployable unit can be deployed in rhino-console, for example, as follows:

[Rhino@localhost:2199 (#0)] deploy du file:/path/to/deployable-unit.jar
Deployable unit file:/path/to/deployable-unit.jar deployed
The following components were deployed:
  SbbID[name=MySBB,vendor=OpenCloud,version=1.0] deployed

Only the deployable unit components that have yet to be deployed are affected by this command. As such, deploying a deployable unit that only contains already deployed components has no further effect.

Undeploy a deployed component

A deployed component can subsequently be undeployed. When a component is undeployed it reverts to the VERIFIED install level. Dynamic changes to a component’s bindings cannot be made while a component is DEPLOYED; therefore it is necessary to undeploy a component if it is currently deployed and its bindings need to be changed.

A deployed component is undeployed using the undeploy management operation.

undeploy <type> <component-id>
  Description
    Undeploy a deployed component

For example, using rhino-console:

[Rhino@localhost:2199 (#0)] undeploy service name=MyService,vendor=OpenCloud,version=1.0
ServiceID[name=MyService,vendor=OpenCloud,version=1.0] undeployed
No other components were affected by this operation

If there are any components that depend on the component being undeployed that themselves have an install level of DEPLOYED, then those components too will also be undeployed by this operation, and so on recursively. (See the note about install levels below.)

An undeploy operation will fail if any component that would be affected by the operation does not satisfy the following requirements:

  • A service component can only be undeployed if it is in the INACTIVE state.

  • A profile specification component can only be undeployed if there are no profile tables created from it currently in existence.

  • A resource adaptor component can only be undeployed if there are no resource adaptor entities created from it currently in existence.

Unverify a component

A verified or deployed component can subsequently be unverified. When a component is unverified it reverts to the INSTALLED install level; if the component was deployed, it is undeployed first. Dynamic changes to a component’s bindings can only be made while its install level is INSTALLED. Rhino will automatically unverify (but not undeploy) a verified component affected by a binding operation; however, the unverify management operation exists to allow this transition to be controlled manually. Manually unverifying a component is also necessary if, for example, dependent linked or shadowed components need to be updated.

A component is unverified using the unverify management operation.

unverify <type> <component-id>
  Description
    Unverify a verified component

For example, using rhino-console:

[Rhino@localhost:2199 (#0)] unverify service name=MyService,vendor=OpenCloud,version=1.0
ServiceID[name=MyService,vendor=OpenCloud,version=1.0] unverified
No other components were affected by this operation

If there are any components that depend on the component being unverified that themselves have an install level of VERIFIED or DEPLOYED, then those components too will also be unverified by this operation, and so on recursively. (See the note about install levels below.)

Note
About install levels

The general rule for install levels is that any given component cannot have an install level greater than any of the components that it depends on.

If a component desires to transition to a higher install level then all its dependent components must successfully transition to the new install level first.

If a component desires to transition to a lower install level then all components that depend on it must successfully transition to the new install level first.

Component copy

In order to maintain compatibility with the JAIN SLEE specification, Rhino does not permit the component references of components installed using a standard SLEE deployable unit — hereinafter termed "original" components — to be dynamically modified. Rather, a virtual copy of each component to be modified must be made, and the references of the copied components modified instead. A copied component is still a SLEE component in its own right, and must have a unique identity within the scope of the component type as for any other component. The only difference between an original SLEE component and a copied component is from where the component originates in the SLEE. A copied component uses the same set of interfaces and classes, and inherits the configuration properties such as environment entries, from the component it is copied from.

Here are some commands for managing copies:

Copy

A component can be copied using the copyComponent management operation.

copycomponent <type> <source-id> <target-name> <target-vendor> <target-version>
[-installlevel <level>]
  Description
    Make a copy of the source component with the target identity.  The -installlevel
    option controls to what degree the copied component is installed after it is
    created

Example, using rhino-console:

[Rhino@localhost:2199 (#0)] help copycomponent
copycomponent <type> <source-id> <target-name> <target-vendor> <target-version> [-installlevel <level>]
   Make a copy of the source component with the target identity. The -installlevel
   option controls to what degree the copied component is installed after it is
   created

[Rhino@localhost:2199 (#1)] copycomponent sbb name=MySBB,vendor=OpenCloud,version=1.0 MySBB OpenCloud 1.0.1
Component SbbID[name=MySBB,vendor=OpenCloud,version=1.0] copied to SbbID[name=MySBB,vendor=OpenCloud,version=1.0.1]

The install level of the copied component can optionally be specified when the copy is made. If not specified, a default install level of DEPLOYED is used. Specifying an install level of INSTALLED is typically most efficient when making a copy of a component for use in a later binding operation, and is necessary when copying a component that itself will not pass verification checks.

Show copy history

A copied component may subsequently be copied again, leading to a treelike structure of copies branching out from the original. The ComponentDescriptorExtensions class in rhino-management, included in a component’s metadata descriptor, provides information about the source of a component and the copies that have been made from it.

getdescriptor <type> <id>
  Description
    Displays the descriptor for the specified deployable unit or component

The getdescriptor command in rhino-console reports this information, as shown in the example below:

[Rhino@localhost:2199 (#0)] getdescriptor sbb name=MySBB,vendor=OpenCloud,version=1.0
For component SbbID[name=MySBB,vendor=OpenCloud,version=1.0]:
 Deployable unit: DeployableUnitID[url=file:/path/to/deployable-unit.jar]
 Component source: my-sbb.jar
 Defined using SLEE version: 1.1
 ...
 Copies made from this component:
  SbbID[name=MySBB,vendor=OpenCloud,version=1.0.1]
 ...

[Rhino@localhost:2199 (#1)] getdescriptor sbb name=MySBB,vendor=OpenCloud,version=1.0.1
For component SbbID[name=MySBB,vendor=OpenCloud,version=1.0.1]:
 Copied from: SbbID[name=MySBB,vendor=OpenCloud,version=1.0]
 ...
 Original component: SbbID[name=MySBB,vendor=OpenCloud,version=1.0]
 Copies made from this component: none
 ...

Remove copied components

A copied component has an implicit dependency on its original. This means that a deployable unit that has copied components cannot be uninstalled from the SLEE until all its copied components have been removed. Copied components are removed using the removeCopiedComponents management operation.

removecopiedcomponents [<type> <url|component-id>]*
  Description
    Remove components that have been copied from another component.  Either
    individual components or a single deployable unit identifier can be specified.
    In the latter case, all copied components of the DU will be removed

For example, in rhino-console:

[Rhino@localhost:2199 (#0)] help removecopiedcomponents
removecopiedcomponents [<type> <url|component-id>]*
   Remove components that have been copied from another component. Either
   individual components or a single deployable unit identifier can be specified.
   In the latter case, all copied components of the DU will be removed

[Rhino@localhost:2199 (#1)] removecopiedcomponents sbb name=MySBB,vendor=OpenCloud,version=1.0.1
1 component removed

(or)

[Rhino@localhost:2199 (#2)] removecopiedcomponents du file:/path/to/deployable-unit.jar
The following copied components were removed:
 SbbID[name=MySBB,vendor=OpenCloud,version=1.0.1]

Find orphaned copied components

After some copied components have been removed, other copied components may remain that are no longer referenced by any other component. The findOrphanedCopiedComponents command in rhino-console can be helpful in finding these components, as shown in the example below:

[Rhino@localhost:2199 (#0)] help findorphanedcopiedcomponents
findorphanedcopiedcomponents
   Find copied components that are no longer referenced by any other component.
   Components such as services and resource adaptors that sit at the top of the
   dependency hierarchy are not included

[Rhino@localhost (#1)] findorphanedcopiedcomponents
Copied components not used by any other component:
  SbbID[name=MySBB,vendor=OpenCloud,version=1.0.2]
  ...

Dynamic dependency specification

Dynamic component dependencies are specified using a binding descriptor. A binding descriptor is a JSON document that describes the changes that should be made to the deployment descriptors of one or more components. For example, a binding descriptor may change the root SBB of a service, or may add a new library reference to an SBB. A binding descriptor can only add to or change existing information contained in a deployment descriptor; it cannot remove any existing information.

A binding descriptor is a new component type in Rhino. As a component type, a binding descriptor has an identity described by the name, vendor, and version tuple — like all other SLEE component types. A binding descriptor document may be installed in Rhino as a new type of deployable entity.

Binding descriptor format

A binding descriptor is a JSON document that must conform to the schema defined by Rhino. The binding descriptor schema can be found in the doc/dtd directory of a Rhino install. Loosely speaking, a binding descriptor document must contain the declaration of a single JSON object with:

  • an optional description property containing an arbitrary string description

  • mandatory name, vendor, and version properties

  • an optional service property containing a service descriptor

  • an optional sbbs property containing an array of zero or more SBB descriptors

  • an optional sbbParts property containing an array of zero or more SBB part descriptors

  • an optional profileSpecs property containing an array of zero or more profile specification descriptors

  • an optional libraries property containing an array of zero or more library descriptors.

Each component descriptor that can be contained by a binding descriptor has a structure based on the corresponding SLEE-defined DTD for that component type. Only component properties that can be modified by bindings are defined by the schema and can be included in a binding descriptor.

Below is an example of a binding descriptor that can be used to change the root SBB of a service:

{
    "description": "Change service's root SBB",

    "name": "Example binding descriptor",
    "vendor": "OpenCloud",
    "version": "1.0",

    "service": {
        "rootSbb": {
            "name": "MyOtherSBB",
            "vendor": "OpenCloud",
            "version": "1.0"
        }
    }
}

Managing binding descriptors

A binding descriptor is installed into Rhino much like a SLEE deployable unit. The -type option must be used to indicate that the type of object being installed is a binding descriptor.

Here’s how to install and uninstall them:

Install a binding descriptor

The example below shows how a binding descriptor can be installed using rhino-console:

install <url> [-type <type>] [-installlevel <level>]
  Description
    Install a deployable unit jar or other artifact.  To install something other
    than a deployable unit, the -type option must be specified.  The -installlevel
    option controls to what degree the deployable artifact is installed

[Rhino@localhost:2199 (#0)] install file:/path/to/my-binding-descriptor.json -type bindings
Installed: DeployableUnitID[url=file:/path/to/my-binding-descriptor.json]

Install levels are not relevant for binding descriptors. Any install level specified when a binding descriptor is installed is ignored.

Note An installed binding descriptor document is a deployable unit that contains one binding descriptor component with an identity as specified by the name, vendor, and version properties in the JSON document. Binding descriptor components do not support Component copy operations, as such an operation has little meaning.

Uninstall a binding descriptor

An installed binding descriptor can be uninstalled in the same way as any other SLEE deployable unit. For example, using rhino-console:

[Rhino@localhost:2199 (#1)] uninstall file:/path/to/my-binding-descriptor.json
uninstalled: DeployableUnitID[url=file:/path/to/my-binding-descriptor.json]

Binding descriptor application

Binding descriptors are applied to components within the scope of a service. That is, a binding descriptor can be associated with a service, and its effects are propagated to the affected components used by the service. Binding descriptors can be associated with, and subsequently disassociated from, any service with an install level of INSTALLED or VERIFIED. A service with an install level of DEPLOYED must be undeployed before its binding descriptor associations can be changed.

Binding descriptors can only be associated with a copied service component. If a command is given to associate a binding descriptor with an original service component, Rhino automatically makes a new copy of the service component and associates the binding descriptor with that copy instead.

A binding descriptor associated with a service that affects dependent components of the service, such as SBBs or libraries, requires that those components be copied and the effects of the binding descriptor applied to the copied components. Rhino will automatically copy components where necessary to fulfil this requirement, according to the following rules:

  • If an original component needs its bindings modified, a copy is first made and the copy modified.

  • If a copied component needs its bindings modified, and the copied component is not used in any other service, then the copied component is reused for the new modifications.

  • If a copied component needs its bindings modified, but the copied component is in use by some other service, then a new copy is made and the new copy modified.

  • The ripple effect may cause other copies to be generated. For example, if SBB A references SBB B, and SBB B is copied and its bindings modified, then SBB A also needs to be modified to reference the copied SBB B. To do this, SBB A may itself need to be copied first, as described above.

The component identifiers of copied components may be specified as part of the binding operation, if specific identifiers are desired. If a component needs to be copied but a component identifier has not been specified for the copy, then Rhino will automatically generate a new unique component identifier based on the original component’s identifier.

Service binding capabilities

A binding descriptor applied to a service may cause any of the following actions:

  • change the root SBB of the service

  • modify one or more dependent SBBs by:

  • modify one or more dependent SBB parts by:

  • modify one or more dependent profile specifications by:

    • adding new library and/or profile specification references

    • adding new environment entries, or changing the values of existing environment entries

    • changing the definition or options of static queries

  • modify one or more dependent libraries by:

    • adding new library references.

Warning Due to classloader limitations, a profile specification can only have its bindings modified if there are no profile tables, resource adaptor entities, or services with an install level of DEPLOYED that depend on any profile specification in the same profile specification component jar present in the SLEE. Attempting to change the bindings of a profile specification that does not meet these criteria will result in failure of the binding operation.

Binding conflicts

It is possible that a conflict may arise with a binding descriptor that is associated with a service:

  • A binding descriptor may declare a usage parameters interface of type X, as extending a different usage parameters interface type than a previous definition of X (either in the original deployment descriptor or in another associated binding descriptor).

    Note No conflict arises if X has previously been declared with no extends type, but an associated binding descriptor specifies an extends type for X. In this case, the SLEE assumes that X should now extend the specified type rather than extend nothing.
  • A binding may declare an environment entry with name X with a different Java type than a previous definition of X.

If either of these types of conflicts occur, the binding descriptor association fails. The conflict must be resolved before the binding descriptor can be successfully associated.

Duplicate definitions such as component references do not cause a conflict. For example, if the deployment descriptor and one or more binding descriptors all declare the same library reference, they are simply merged into a single reference.

Associating a binding with a service

Here’s how to associate a binding descriptor with a service, map target component identifiers for copied components, and disassociate a binding descriptor from a service:

Associate

A binding descriptor is associated with a service using the addBindings operation defined on the ServiceManagementMBean.

addservicebinding <service-id> [-binding <binding-descriptor-id>]* [-mapping
<map-name>] [-dryrun]
  Description
    Add one or more bindings to a service.  The -mapping option specifies a
    component mapping created with the createbindingcomponentmap command.  The
    -dryrun option will display the affects the binding operation will make but will
    not actually commit the changes

This operation may be invoked in rhino-console, for example, as shown below:

[Rhino@localhost:2199 (#0)] help addservicebinding
addservicebinding <service-id> [-binding <binding-descriptor-id>]* [-mapping <map-name>] [-dryrun]
   Add one or more bindings to a service. The -mapping option specifies a component
   mapping created with the createbindingcomponentmap command. The -dryrun option
   will display the affects the binding operation will make but will not actually
   commit the changes

The example below associates a binding descriptor with a service:

[Rhino@localhost:2199 (#0)] addservicebinding name=MyService,vendor=OpenCloud,version=1.0 -binding name=MyBinding,vendor=OpenCloud,version=1.0
Bindings added to service ServiceID[name=MyService,vendor=OpenCloud,version=1.0]
The following new components were created:
  SbbID[name=MySbb,vendor=OpenCloud,version=1.0-copy#1]
  ServiceID[name=MyService,vendor=OpenCloud,version=1.0-copy#1]
No components were removed

Create mappings

The -mapping argument can be used to specify the target component identifiers of copied components, rather than have Rhino autogenerate them. In rhino-console such a map is managed using the following additional commands:

createbindingcomponentmap <map-name>
  Description
    Create a component mapping that can be used with the addservicebinding command.
    Mappings can be added using the addbindingcomponentmapping command.  Created
    mappings exist only in the client, and will be lost when the client terminates
removebindingcomponentmap <map-name>
  Description
    Remove an existing bindings component mapping
listbindingcomponentmaps
  Description
    List the current bindings component maps
addbindingcomponentmapping <map-name> <source-id> <target-name> <target-vendor>
<target-version>
  Description
    Add a bindings mapping from the source component to the target identity
removebindingcomponentmapping <map-name> <source-id>
  Description
    Remove a bindings component mapping
dumpbindingcomponentmap <map-name>
  Description
    Dump the current mappings in the specified bindings component maps

The following example creates a mapping and uses it to control the component identifier of the SBB copied by the binding operation:

[Rhino@localhost:2199 (#1)] createbindingcomponentmap mymap
Bindings component mapping mymap created

[Rhino@localhost:2199 (#2)] addbindingcomponentmapping mymap sbb name=MySbb,vendor=OpenCloud,version=1.0 MySbb OpenCloud 1.0.1
Mapping SbbID[name=MySbb,vendor=OpenCloud,version=1.0] -> SbbID[name=MySbb,vendor=OpenCloud,version=1.0.1] added to mapping mymap

[Rhino@localhost:2199 (#3)] dumpbindingcomponentmap mymap
Component mappings for mymap:
  SbbID[name=MySbb,vendor=OpenCloud,version=1.0] -> SbbID[name=MySbb,vendor=OpenCloud,version=1.0.1]

[Rhino@localhost:2199 (#4)] addservicebinding name=MyService,vendor=OpenCloud,version=1.0 -binding name=MyBinding,vendor=OpenCloud,version=1.0 -mapping mymap
Bindings added to service ServiceID[name=MyService,vendor=OpenCloud,version=1.0]
The following new components were created:
  SbbID[name=MySbb,vendor=OpenCloud,version=1.0.1]
  ServiceID[name=MyService,vendor=OpenCloud,version=1.0-copy#2]
No components were removed
Note As part of a binding operation, the service and all its dependent components transition to the VERIFIED install level. This means that the addition (or removal) of a binding to (or from) a service must leave the service in a state that will pass all SLEE verification checks. If any of these checks fail, then the binding operation will also fail.

Disassociate

A binding descriptor can be disassociated from a service using the removeBindings operation defined on the ServiceManagementMBean.

removeservicebinding <service-id> [-binding <binding-descriptor-id>]* [-dryrun]
  Description
    Remove one or more bindings from a service.  The -dryrun option will display the
    effects the binding operation will have but will not actually commit the changes

This operation may be invoked in rhino-console, for example, as shown below:

[Rhino@localhost:2199 (#0)] help removeservicebinding
removeservicebinding <service-id> [-binding <binding-descriptor-id>]* [-dryrun]
  Remove one or more bindings from a service. The -dryrun option will display the
  effects the binding operation will have but will not actually commit the changes

An example of a binding descriptor being disassociated from a service is shown below:

[Rhino@localhost:2199 (#0)] removeservicebinding name=MyService,vendor=OpenCloud,version=1.0-copy#1 -binding name=MyBinding,vendor=OpenCloud,version=1.0
Bindings removed from service ServiceID[name=MyService,vendor=OpenCloud,version=1.0-copy#1]
No new components were created
The following components were no longer required and were removed:
  SbbID[name=MySbb,vendor=OpenCloud,version=1.0-copy#1]
  Service ServiceID[name=MyService,vendor=OpenCloud,version=1.0-copy#1] now has no bindings and may be removed if no longer required

Miscellaneous SLEE Application API Enhancements

This page details the following enhancements to SLEE APIs, which Rhino provides for SLEE applications:

SbbContext interface extensions

Rhino provides an extension to the standard javax.slee.SbbContext interface with the com.opencloud.rhino.slee.RhinoSbbContext interface. The RhinoSbbContext interface provides additional functionality to SBBs running in Rhino, and is as follows:

package com.opencloud.rhino.slee;

import java.util.Map;
import javax.slee.SLEEException;
import javax.slee.Sbb;
import javax.slee.SbbContext;
import javax.slee.TransactionRequiredLocalException;
import javax.slee.TransactionRolledbackLocalException;
import com.opencloud.rhino.cmp.CMPFields;
import com.opencloud.rhino.cmp.Encodable;
import com.opencloud.rhino.cmp.codecs.DatatypeCodec;
import com.opencloud.rhino.cmp.codecs.DecoderUtils;
import com.opencloud.rhino.cmp.codecs.EncoderUtils;
import com.opencloud.rhino.facilities.Tracer;
import com.opencloud.rhino.facilities.childrelations.ChildRelationFacility;
import com.opencloud.rhino.facilities.sessionownership.ConvergenceNameSessionOwnershipRecord;
import com.opencloud.rhino.slee.environment.JndiBinding;

public interface RhinoSbbContext extends SbbContext {
    public String getConvergenceName()
        throws TransactionRolledbackLocalException, IllegalStateException, SLEEException;

    public Tracer getTracer(String tracerName)
        throws NullPointerException, IllegalArgumentException, SLEEException;

    public RhinoActivityContextInterface[] getActivities()
        throws TransactionRequiredLocalException, IllegalStateException, SLEEException;

    public RhinoActivityContextInterface[] getActivities(Class<?> type)
        throws NullPointerException, TransactionRequiredLocalException, IllegalStateException, SLEEException;

    public ChildRelationFacility getChildRelationFacility()
        throws SLEEException;

    public CMPFields getCMPFields()
        throws SLEEException;

    public Map<String,JndiBinding> getJndiBindings()
        throws SLEEException;

    public <T> void setServiceContext(T context)
        throws SLEEException;

    public <T> T getServiceContext()
        throws SLEEException;

    public <T> void setEncodableContext(T context)
        throws SLEEException;

    public void enableEntityTreePersistence()
        throws TransactionRequiredLocalException, SLEEException;

    public ConvergenceNameSessionOwnershipRecord getConvergenceNameSessionOwnershipRecord()
        throws TransactionRequiredLocalException, IllegalStateException, SLEEException;

    public ConvergenceNameSessionOwnershipRecord getConvergenceNameSessionOwnershipRecord(long ttl)
        throws TransactionRequiredLocalException, IllegalStateException, SLEEException;
}

RhinoSbbContext interface getConvergenceName method

The getConvergenceName method returns the convergence name that the SBB entity was created with. The value returned from this method is a vendor-specific string that uniquely identifies the initial event selector conditions that led to the SBB entity’s creation.

This method only returns a non-null value if invoked on a RhinoSbbContext object belonging to a root SBB entity.

RhinoSbbContext interface getTracer method

The getTracer method overrides the same method from SbbContext to return a Rhino-specific extension of the Tracer interface.

Tip For more about this Tracer extension, please see Tracer extensions.

RhinoSbbContext interface getActivities methods

The RhinoSbbContext interface defines two getActivities methods:

  • getActivities() — This method overrides the same method from SbbContext to return a Rhino-specific extension of the ActivityContextInterface interface. This ActivityContextInterface extension is described in more detail below. Otherwise, this method behaves in the same way as defined by the JAIN SLEE specification.

  • getActivities(Class) — This method behaves similarly to the no-argument version; however it only returns activity context objects where the type of the underlying activity object is assignable to the class argument. For example, if this method was invoked with NullActivity.class as an argument, then only activity context objects for the null activities the SBB entity is attached to would be returned.
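For example, the minimal sketch below uses the class-filtered variant to operate on just the null activities (javax.slee.nullactivity.NullActivity) the SBB entity is attached to; the context field is assumed to hold the SBB's RhinoSbbContext:

// A minimal sketch: detach this SBB entity from every null activity it is
// currently attached to. 'context' is assumed to hold the SBB's RhinoSbbContext.
private void detachFromNullActivities() {
    for (RhinoActivityContextInterface aci : context.getActivities(NullActivity.class)) {
        aci.detach(context.getSbbLocalObject());
    }
}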

RhinoSbbContext interface getChildRelationFacility method

The getChildRelationFacility method returns a Child Relation Facility object for the SBB.

Tip For more about the Child Relation Facility, please see Child Relation Facility.

RhinoSbbContext interface getCMPFields method

The getCMPFields method provides the SBB with access to its per-instance state.

Tip For details on the CMPFields object returned from this method, please see The CMPFields interface.

RhinoSbbContext interface getJndiBindings method

The getJndiBindings method returns a map describing the JNDI bindings available to the SBB.

Tip For more about this method, please see JNDI environment.

RhinoSbbContext interface setServiceContext and getServiceContext methods

The setServiceContext and getServiceContext methods allow setting and retrieving the service context object for the SBB. The service context provides an alternative storage mechanism to using static class fields for arbitrary, typically constant data that the SBB wants to share between SBB objects in the same service. The use of static class fields to store shared data becomes problematic when the encapsulating class resides in a library component jar rather than an SBB jar. In this case the static fields end up being shared between all uses of that library, rather than being scoped to a single service. The service context provides similar functionality to that of a static class field, but it is guaranteed to have visibility only within the service.

A typical use of the service context is to store data calculated by the SBB during the service lifecycle callback methods (see SBB service lifecycle callbacks).

Any object may be stored in the service context. The service context object is stored by reference, and is never serialised. As the service context object may be accessed concurrently by different SBB objects, care must be taken that the service context object provides thread-safe access where necessary.

A service context object will persist across service deactivation and reactivation cycles unless the SBB explicitly resets the service context to null at the appropriate time.

The visibility of a service context object is scoped to a single SBB type within a service. All SBB objects of the same SBB type share the same service context. Different SBB types within the same service may each store their own service context object without conflict.
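The fragment below is a minimal sketch of this pattern. It assumes the SBB abstract class narrows its context to RhinoSbbContext in setSbbContext; the class name and cache contents are illustrative only.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.slee.SbbContext;
import com.opencloud.rhino.slee.RhinoSbbContext;

public abstract class RoutingSbb implements javax.slee.Sbb {
    private RhinoSbbContext context;

    public void setSbbContext(SbbContext context) {
        this.context = (RhinoSbbContext) context;
    }

    // Thread-safe cache shared by all SBB objects of this SBB type in the service.
    private ConcurrentMap<String, String> routeCache() {
        ConcurrentMap<String, String> cache = context.getServiceContext();
        if (cache == null) {
            // benign race: two SBB objects may both create a map, but the cache
            // contents are recomputable, so losing one initial map is harmless
            cache = new ConcurrentHashMap<>();
            context.setServiceContext(cache);
        }
        return cache;
    }
}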

RhinoSbbContext interface setEncodableContext method

The setEncodableContext method sets the encodable context for the SBB.

Tip For more about encodable contexts, please see Encodable context.

RhinoSbbContext interface enableEntityTreePersistence method

The enableEntityTreePersistence method enables persistence of application state to external replicated storage resources such as a key/value store. Initial replicated persistence of application state can be disabled using the service-properties element in the extension service deployment descriptor, then enabled using this method when the SBB entity has reached a stable state.

Tip For more about application initiated persistence, please see Application initiated persistence.

RhinoSbbContext interface getConvergenceNameSessionOwnershipRecord methods

These methods obtain a reference to the convergence name session ownership record for the SBB entity tree.

Tip For more about convergence name session ownership records, please see Convergence name session ownership record.

Activity context Suspend/Resume Delivery extensions

The JAIN SLEE specification allows delivery of events to be suspended and resumed using the methods defined in the javax.slee.EventContext interface. An obvious restriction is that suspending event delivery requires an EventContext object, and the only way to obtain the EventContext object of an unsuspended event is as an event handler method argument; therefore an SBB can only suspend delivery of an event that is being delivered to it. An SBB that attaches to multiple activities may at times need to suspend delivery of events on a selected subset of these activities while waiting for the result of some asynchronous action. The SLEE specification does not provide a trivial solution to this problem, instead requiring a complicated process of suspending events on those activities as they are received, then resuming and processing those events at an appropriate time in a later transaction.

Rhino simplifies this problem by allowing event delivery on any activity context to be suspended and resumed at any time. Rhino defines the com.opencloud.rhino.slee.RhinoActivityContextInterface interface, an extension to javax.slee.ActivityContextInterface, with methods providing this functionality. The RhinoActivityContextInterface interface is shown below:

package com.opencloud.rhino.slee;

import javax.slee.ActivityContextInterface;
import javax.slee.SLEEException;
import javax.slee.TransactionRequiredLocalException;

public interface RhinoActivityContextInterface extends ActivityContextInterface {
    public void suspendDelivery()
        throws IllegalStateException, TransactionRequiredLocalException, SLEEException;

    public void suspendDelivery(int timeout)
        throws IllegalArgumentException, IllegalStateException,
               TransactionRequiredLocalException, SLEEException;

    public void resumeDelivery()
        throws IllegalStateException, TransactionRequiredLocalException, SLEEException;

    public boolean isSuspended()
        throws TransactionRequiredLocalException, SLEEException;
}

All activity context objects that Rhino provides to SBBs implement RhinoActivityContextInterface; therefore a typecast of an activity context object to this interface will always succeed. Rhino will also recognise an event handler method that declares this type, instead of the standard ActivityContextInterface, as its second method argument, removing the need to perform a typecast within the method body when the extensions are required.

An SBB or SBB part activity context interface may be declared as extending RhinoActivityContextInterface rather than ActivityContextInterface if desired.
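For example, the hypothetical event handler below declares RhinoActivityContextInterface directly; InviteEvent and startAsynchronousLookup are illustrative names only.

// Because the second argument is declared as RhinoActivityContextInterface,
// no typecast is needed to access the suspend/resume extensions.
public void onInviteEvent(InviteEvent event, RhinoActivityContextInterface aci) {
    // suspend delivery of further events on this activity context while an
    // asynchronous lookup completes; delivery resumes automatically after
    // two seconds unless resumeDelivery is invoked earlier
    aci.suspendDelivery(2000);
    startAsynchronousLookup(event);
}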

RhinoActivityContextInterface interface suspendDelivery methods

The suspendDelivery methods suspend further delivery of events on the invoked activity context. An activity context is only ever suspended for a specific period of time. The no-argument variant of this method suspends event delivery until some system specific default timeout is reached, while the one-argument variant suspends event delivery for a specific timeout period in milliseconds. The timeout period is measured from the time the suspendDelivery method is invoked. Some time after the timeout period expires, delivery of events on the activity context is automatically resumed. Event delivery can also be manually resumed by an SBB before the timeout period expires, using the resumeDelivery method.

If an SBB suspends delivery of events on an activity context for which it is currently processing an event, then event delivery of that event is suspended in the same way as if suspended using the event context associated with the event.

If an SBB suspends delivery of events on an activity context for which it is not currently processing an event, and the SLEE is already asynchronously delivering an event on that activity context to an SBB, then event delivery suspension of that activity context takes effect after the event handler method invoked on that SBB returns. If the SLEE is not currently delivering an event on that activity context, then event delivery suspension takes immediate effect.

These methods are mandatory transactional methods. The delivery of events fired on the activity context is only suspended if the enclosing transaction commits. If the transaction does not commit, then event delivery will not be suspended.

These methods throw the following exceptions:

  • java.lang.IllegalArgumentException: The timeout argument is zero or a negative value.

  • java.lang.IllegalStateException: Event delivery has already been suspended on the activity context, either by a suspendDelivery method invocation on the activity context itself or by a suspendDelivery method invocation on an EventContext object associated with an event delivered on the activity context.

  • javax.slee.TransactionRequiredLocalException: This method is invoked without a valid transaction context.

  • javax.slee.SLEEException: Event delivery on the activity context could not be suspended due to a system-level failure.

RhinoActivityContextInterface interface resumeDelivery method

The resumeDelivery method resumes the delivery of events on the invoked activity context.

This method is a mandatory transactional method. The delivery of events occurring on the activity context is only resumed if the enclosing transaction commits. If the transaction does not commit, then event delivery will not be resumed.

This method throws the following exceptions:

  • java.lang.IllegalStateException: Delivery of events on the activity context is not currently suspended; for example, event delivery was never suspended or has already been resumed.

  • javax.slee.TransactionRequiredLocalException: This method is invoked without a valid transaction context.

  • javax.slee.SLEEException: Event delivery on the activity context could not be resumed due to a system-level failure.

RhinoActivityContextInterface interface isSuspended method

The isSuspended method determines whether the delivery of events on the activity context is currently suspended, returning true if event delivery is suspended and false otherwise.

This method is a mandatory transactional method, and throws the following exceptions:

  • javax.slee.TransactionRequiredLocalException: This method is invoked without a valid transaction context.

  • javax.slee.SLEEException: The event delivery status of the activity context could not be determined due to a system-level failure.

Relationship to event context suspend/resume

The event delivery status of an event context is linked to the event delivery status of its associated activity context. If an event context is suspended, then event delivery on the activity context is also suspended, and vice versa. Similarly, if an event context is resumed, then event delivery on the activity context is also resumed. Therefore, an SBB that wants to suspend delivery of events on an activity context for which it is currently processing an event may do so in one of two ways — by invoking a suspendDelivery method on either the event context for the received event or on the activity context that the event was delivered on.

Both these operations have the same effect. Of particular note, after either method has been invoked, both the event context and the activity context will return true from their respective isSuspended methods.

An SBB may also resume delivery of events on an activity context in one of two ways — by invoking the resumeDelivery method on either the event context of the suspended event (if the event context is available) or on the activity context.

Again, both these operations will have the same effect; and after either method has been invoked, both the event context and the activity context will return false from their respective isSuspended methods.

SBB local home interface

The JAIN SLEE specification allows an SBB component to declare a local interface. An SBB entity can invoke a target SBB entity in a synchronous manner through the SBB local interface of the target SBB. Rhino also allows an SBB to declare a local home interface. Methods invoked on the local home interface are not specific to any SBB entity and are executed by an SBB object in the Pooled state. An SBB local home object is an instance of a SLEE-implemented class that implements an SBB local home interface.

How to get an SBB local home object

An SBB can only get an SBB local home object for its child SBBs. An SBB can get an SBB local home object for each of its child SBBs using the Child Relation Facility. The getChildSbbLocalHome method defined by the ChildRelationFacility interface returns an SBB local home object for the specified child SBB relation.
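For example, the hypothetical fragment below obtains the local home object for a child SBB relation. The relation name "workerSbb" and the lookup-by-name form of getChildSbbLocalHome are assumptions, so check the ChildRelationFacility API for the exact signature; the RhinoSbbLocalHome type is described below.

// A minimal sketch. 'context' is assumed to hold the SBB's RhinoSbbContext,
// and "workerSbb" is assumed to be a child SBB relation declared in this
// SBB's deployment descriptor.
ChildRelationFacility facility = context.getChildRelationFacility();
RhinoSbbLocalHome workerHome = facility.getChildSbbLocalHome("workerSbb");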

The RhinoSbbLocalHome interface

This interface is the base interface of all SBB local home interfaces. Rhino does not currently allow an SBB to extend this interface, so all SBB local home interfaces, if declared, must be declared as this interface. If an SBB does not declare an SBB local home interface, then the SBB local home interface defaults to RhinoSbbLocalHome.

The RhinoSbbLocalHome interface is shown below:

package com.opencloud.rhino.slee;

import javax.slee.TransactionRequiredLocalException;

public interface RhinoSbbLocalHome {
    public Object verifyConfiguration()
        throws InvalidConfigurationException, TransactionRequiredLocalException;

    public void serviceActivating(Object config)
        throws TransactionRequiredLocalException;

    public void serviceDeactivating()
        throws TransactionRequiredLocalException;
}

If an SBB declares a local home interface in its deployment descriptor, then the local home interface methods must be implemented in the SBB abstract class, using a method name constructed by capitalising the first letter of the method name as defined in this interface and prefixing it with sbbHome. The method parameters and return type must be identical, and the throws clause of the implemented method must be the same as, or a subset of, the interface method declaration, excluding any runtime exceptions.

Although an SBB that does not declare an SBB local home interface receives RhinoSbbLocalHome as its default local home interface, such an SBB is not required to implement the local home interface methods in the SBB abstract class. The SLEE will instead provide a default no-operation implementation of these methods.

SBB service lifecycle callbacks

The RhinoSbbLocalHome interface defines methods that allow an SBB to receive callbacks when a service it is used in is activated or deactivated. These methods are described below.

RhinoSbbLocalHome interface verifyConfiguration method

The SLEE invokes the verifyConfiguration method on the root SBB of a service when the service is about to transition from Inactive to Active. This callback can be used by an SBB, for example, to check that its configuration in environment entries or elsewhere is valid. The SBB can throw an InvalidConfigurationException from this method if a configuration error or other reason means that the service should not be activated at this time.

Child SBBs in the service may also receive this callback method invocation, as described in the Lifecycle callback method invocation cascade section below.

In Rhino, this method is invoked on a service on each cluster node where the service is about to transition to the Active state.

If an SBB declares a local home interface, then a corresponding method with the following signature must be implemented in the SBB abstract class:

public Object sbbHomeVerifyConfiguration() throws InvalidConfigurationException;

The throws clause is optional.

  • If the method throws an InvalidConfigurationException when the administrator has requested activation of the service on a node that is currently operational, then the activation request fails and the service state remains unchanged.

  • If the method throws an InvalidConfigurationException when a node (re)starts, where the per-node state for that node indicates that the service should be reactivated, then the reactivation attempt fails, the service transitions back to the Inactive state on that node, and an alarm is raised to indicate that the service could not be activated.

The return result from this method is passed as an argument to the serviceActivating method if the service successfully activates. This object can be used, for example, to pass configuration information calculated during this method to the serviceActivating method, where that method would otherwise need to recalculate the same information.

This method is a mandatory transactional method. When this method is invoked by the SLEE, it is invoked with an active transaction context. If this method is invoked by an SBB without a valid transaction context then a TransactionRequiredLocalException is thrown.

RhinoSbbLocalHome interface serviceActivating method

The SLEE invokes the serviceActivating method on the root SBB of a service when the service is about to transition from Inactive to Active. This method is invoked after the verifyConfiguration method has returned successfully and the SLEE has determined that the service activation can proceed to completion. The SBB can use this callback, for example, to initialise common state shared between SBB objects (such as the service context of the same SBB).

Child SBBs in the service may also receive this callback method invocation, as described in the Lifecycle callback method invocation cascade section below.

In Rhino, this method is invoked on a service on each cluster node where the service is about to transition to the Active state. The service transitions to the Active state on the corresponding node after this method returns.

If an SBB declares a local home interface, then a corresponding method with the following signature must be implemented in the SBB abstract class:

public void sbbHomeServiceActivating(Object config);

When this method is invoked by the SLEE, the object passed as an argument to this method is the return result from the previous corresponding verifyConfiguration method invocation.

This method is a mandatory transactional method. When this method is invoked by the SLEE, it is invoked with an active transaction context. If this method is invoked by an SBB without a valid transaction context then a TransactionRequiredLocalException is thrown.
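To make the hand-off between the two callbacks concrete, the sketch below shows one possible implementation of both methods in an SBB abstract class. The "serviceTimeout" environment entry is a hypothetical example, and InvalidConfigurationException is assumed to accept a message string.

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Configuration computed and validated in sbbHomeVerifyConfiguration is
// returned to the SLEE, which passes it to sbbHomeServiceActivating.
public Object sbbHomeVerifyConfiguration() throws InvalidConfigurationException {
    try {
        Context env = (Context) new InitialContext().lookup("java:comp/env");
        Long timeout = (Long) env.lookup("serviceTimeout"); // hypothetical environment entry
        if (timeout == null || timeout <= 0) {
            throw new InvalidConfigurationException("serviceTimeout must be positive");
        }
        return timeout;
    } catch (NamingException e) {
        throw new InvalidConfigurationException("unable to read environment entries: " + e);
    }
}

public void sbbHomeServiceActivating(Object config) {
    // config is the value returned by sbbHomeVerifyConfiguration above,
    // so the JNDI lookup does not need to be repeated here
    long serviceTimeout = (Long) config;
    // ... initialise shared state, e.g. store it in the service context
}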

RhinoSbbLocalHome interface serviceDeactivating method

The SLEE invokes the serviceDeactivating method on the root SBB of a service when no SBB entity trees remain in the service and it is about to transition from the Stopping state to the Inactive state. An SBB can use this callback, for example, to clean up any shared state that is no longer required after the service has deactivated.

Child SBBs in the service may also receive this callback method invocation, as described in the Lifecycle callback method invocation cascade section below.

In Rhino, this method is invoked on a service on each cluster node where the service is about to transition to the Inactive state. The service transitions to the Inactive state on the corresponding node after this method returns.

If an SBB declares a local home interface, then a corresponding method with the following signature must be implemented in the SBB abstract class:

public void sbbHomeServiceDeactivating();

This method is a mandatory transactional method. When this method is invoked by the SLEE, it is invoked with an active transaction context. If this method is invoked by an SBB without a valid transaction context, then a TransactionRequiredLocalException is thrown.

Lifecycle callback method invocation cascade

The SBB service lifecycle callback methods are initially invoked by the SLEE on the root SBB of a service. If the root SBB has child SBB relations where the cascade-service-lifecycle-callbacks option in the SBB extension deployment descriptor is set to True, then the SLEE will also automatically invoke the same lifecycle callback method on each of those child SBBs. This process repeats or "cascades" to each child SBB in the service where the cascade-service-lifecycle-callbacks option in the parent SBB requests it. However, the SLEE will invoke a given lifecycle callback method at most once for each SBB type present in the service, regardless of how many times that SBB appears as a child SBB in the service.

Lifecycle callback method invocation cascade is enabled by default on all child relations. Cascade may be disabled for a given child SBB relation by setting the corresponding cascade-service-lifecycle-callbacks option in the SBB extension deployment descriptor to False. As an alternative to automatic cascade, an SBB can manually invoke the lifecycle callback methods directly on any of its child SBBs by obtaining the child SBB’s local home object from the Child Relation Facility. Lifecycle callback methods invoked by an SBB, rather than the SLEE, do not cascade to other child SBBs, even if those child SBB relations are flagged for cascade.

Lifecycle callback method invocations proceed to cascade through eligible child SBB relations irrespective of whether or not the root SBB or any given child SBB declares a local home interface. An SBB that does not declare a local home interface is simply unaware that the callback method invocation occurred.

Per-Node Service Activity and Lifecycle Events

Tip Since Rhino 2.6.1

Service Node Activities can be used by SBBs to monitor the managed lifecycle of the service they are part of on each individual cluster node. Service Node Activities perform the same function as SLEE 1.1 Service Activities, but for each node in a Rhino cluster instead of only once for the whole cluster. This is useful when a service needs to initialise local state on each node in a cluster, or perform shutdown tasks, without storing replicated SBB state.

When a service is started on a node, a Service Node Activity is created on that node and a com.opencloud.rhino.slee.servicenodeactivity.ServiceNodeStartedEvent is fired on the activity to the service. When the service is stopped on a node, the service node activity on that node is ended and a javax.slee.ActivityEndEvent is fired on the activity. A service starts on a node when either of the following occurs on that node:

  • the SLEE is in the Running state and the service is activated via the ServiceManagementMBean; or

  • the persistent state of the service says that the service should be active, and a previously stopped SLEE transitions to the Running state.

The event type name of Service Node Started events is com.opencloud.rhino.slee.servicenodeactivity.ServiceNodeStartedEvent, the vendor is com.opencloud, and the version is 1.0. The event, when fired by the SLEE, will only be delivered to SBBs in the service that is starting, and not to any other service.

Service Node Activities are never replicated. Each node has its own Service Node Activity that does not share state with any other node.

The Service Node Activity allows the SBBs making up the service to identify the service. The Service Node Activity for an SBB entity can be obtained by looking it up using the ServiceNodeActivityFactory class, an instance of which can be obtained by JNDI lookup using the name java:comp/env/rhino/servicenodeactivity/factory. An SBB can create an Activity Context Interface for the Service Node Activity using the ServiceNodeActivityContextInterfaceFactory class. An instance of this can be looked up using the JNDI name java:comp/env/rhino/servicenodeactivity/activitycontextinterfacefactory.
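A minimal sketch of these lookups is shown below. The factory classes are assumed to live in the com.opencloud.rhino.slee.servicenodeactivity package, alongside ServiceNodeStartedEvent.

import javax.naming.InitialContext;
import javax.naming.NamingException;
import com.opencloud.rhino.slee.servicenodeactivity.ServiceNodeActivityContextInterfaceFactory;
import com.opencloud.rhino.slee.servicenodeactivity.ServiceNodeActivityFactory;

// Typically performed once, in the SBB's setSbbContext method.
void lookupServiceNodeActivityFacilities() throws NamingException {
    InitialContext ic = new InitialContext();
    ServiceNodeActivityFactory activityFactory = (ServiceNodeActivityFactory)
        ic.lookup("java:comp/env/rhino/servicenodeactivity/factory");
    ServiceNodeActivityContextInterfaceFactory aciFactory = (ServiceNodeActivityContextInterfaceFactory)
        ic.lookup("java:comp/env/rhino/servicenodeactivity/activitycontextinterfacefactory");
    // ... store both references in instance fields for later use
}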

Unchecked throwable propagation

The JAIN SLEE specification states that if a method invocation on an SBB or a profile returns by throwing an unchecked exception, then (amongst other things) the transaction is marked for rollback and a javax.slee.TransactionRolledbackLocalException is propagated back to the caller. In certain applications, however, this behaviour may be undesirable: the caller would rather catch and handle the exception itself than have the transaction forcibly rolled back.

To address this need, Rhino provides the @PropagateUncheckedThrowables annotation. This annotation can be used on any SBB local interface method or profile local interface method to indicate that any unchecked throwable (a RuntimeException or an Error) produced by the method must be propagated back to the caller as-is. The transaction is not marked for rollback, and the invoked SBB or profile object remains in the same state; in other words, it is not discarded by a transition to the Does Not Exist state.

An SBB local interface or profile local interface class declaration may also be annotated with @PropagateUncheckedThrowables. When used in this way, the annotation indicates that all methods defined in the interface, and all inherited methods, shall exhibit the behaviour defined by the annotation, as if all these methods were annotated individually.
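A sketch of a method-level use is shown below; the annotation's package in the import is an assumption, so check the Rhino API documentation for its actual location.

import javax.slee.SbbLocalObject;
// assumed package for the annotation; verify against the Rhino API docs
import com.opencloud.rhino.slee.annotations.PropagateUncheckedThrowables;

public interface BillingSbbLocalObject extends SbbLocalObject {
    // unchecked throwables from this method reach the caller as-is;
    // the transaction is not marked for rollback
    @PropagateUncheckedThrowables
    void applyCharge(long amountMillicents);

    // this method keeps the default SLEE behaviour: an unchecked
    // exception marks the transaction for rollback
    void refund(long amountMillicents);
}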

The propagation of unchecked throwables only applies to exceptions produced by the invoked method itself. The annotation has no effect on container-generated exceptions that cause rollback, such as those raised when trying to invoke a local interface method on an SBB that no longer exists. The annotation is also ignored for methods originally defined in the base local interfaces: javax.slee.SbbLocalObject and javax.slee.profile.ProfileLocalObject.

Convergence name session ownership record

Tip Since Rhino 2.6.1

A convergence name session ownership record is a session ownership record related to an SBB entity tree. These records are managed by Rhino using the session ownership store on behalf of each SBB entity tree. This means that applications do not need to create, update, or store the record directly themselves; they simply ask for the record and modify attributes on it in a CMP-style fashion as desired. Modifications to the underlying record are stored automatically by Rhino during transaction commit, and the record is automatically deleted when the SBB entity tree is removed.

The convergence name session ownership record for the current SBB entity tree can be obtained from the SBB context of an SBB, or SBB part context of an SBB part.

An application interacts with a convergence name session ownership record using the ConvergenceNameSessionOwnershipRecord interface. The ConvergenceNameSessionOwnershipRecord interface is shown below:

package com.opencloud.rhino.facilities.sessionownership;

import java.util.Map;
import com.opencloud.rhino.slee.ConvergenceName;

public interface ConvergenceNameSessionOwnershipRecord {
    public String getPrimaryKey();

    public ConvergenceName getConvergenceName();

    public Map<String,String> getAttributes();

    public String getAttribute(String name);

    public void setAttributes(Map<String,String> attributes, boolean exclusive)
        throws NullPointerException;

    public void setAttribute(String name, String value)
        throws NullPointerException;

    public boolean removeAttribute(String name);

    public long getTimeToLive();

    public void setTimeToLive(int ttl)
        throws IllegalArgumentException;
}

Note that this interface does not extend, and is not related to, the SessionOwnershipRecord interface used by the session ownership resource adaptor type. This is intentional: it prevents a convergence name session ownership record — a Rhino-managed record — from being manipulated in the same way as a regular non-managed session ownership record. A convergence name session ownership record is intended to be manipulated by applications in a CMP-like manner, with the various getter and setter methods on the interface serving to reinforce this model.

ConvergenceNameSessionOwnershipRecord interface getPrimaryKey method

The getPrimaryKey method returns the primary key of the session ownership record.

The main reason for exposing this key is to allow other protocol-specific session ownership records to link to this record as a master record of session ownership.

Using this key with the session ownership resource adaptor type to retrieve, update, or otherwise manipulate the underlying session ownership record is strongly discouraged. Any manual update of the underlying record by an application will be overwritten by Rhino when it synchronises the managed record state with the underlying record state.

ConvergenceNameSessionOwnershipRecord interface getConvergenceName method

The getConvergenceName method returns a ConvergenceName object that describes the convergence name of the SBB entity tree associated with the record.

ConvergenceNameSessionOwnershipRecord interface get/set/removeAttribute methods

These methods get, set, or remove a single application-defined attribute in the record.

Only application-defined attributes may be manipulated by an application. While a convergence name session ownership record may define other attributes for internal use, these are not visible to application components.

ConvergenceNameSessionOwnershipRecord interface getAttributes method

The getAttributes method returns a map of attribute name to attribute value for all the application-defined attributes stored in the record.

ConvergenceNameSessionOwnershipRecord interface setAttributes method

The setAttributes method allows multiple application-defined attributes to be set in the record at once using a map of attribute names to attribute values.

If the exclusive method argument is true, then any existing application-defined attributes in the record are removed before the attributes specified in the map are set. In other words, after this method returns, the only application-defined attributes contained in the record will be those contained in the map passed as an argument to the method.

ConvergenceNameSessionOwnershipRecord interface get/setTimeToLive methods

All session ownership records have a time-to-live (TTL) period after which they expire and are deleted. The initial TTL of a convergence name session ownership record is set when the record is first obtained, but the TTL may be changed at any time by either:

  • reobtaining the record from the SBB context or SBB part context with the new TTL; or

  • invoking the setTimeToLive method on an already obtained ConvergenceNameSessionOwnershipRecord object with the new TTL.

The current record TTL, measured in milliseconds, can be obtained using the getTimeToLive method.
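The hypothetical fragment below refreshes the record's TTL and stores an application-defined attribute; the context field is assumed to hold the SBB's RhinoSbbContext, and the attribute name and values are illustrative only.

import com.opencloud.rhino.facilities.sessionownership.ConvergenceNameSessionOwnershipRecord;

// Changes made here are stored automatically by Rhino when the enclosing
// transaction commits.
void stampSessionOwnershipRecord(String dialogID) {
    ConvergenceNameSessionOwnershipRecord record =
        context.getConvergenceNameSessionOwnershipRecord();
    record.setAttribute("dialog-id", dialogID);
    if (record.getTimeToLive() < 60000) {
        record.setTimeToLive(300000); // five minutes, in milliseconds
    }
}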

Convergence name session ownership record continuity

Rhino will use the TTL of a convergence name session ownership record to determine if and when the record needs refreshing to ensure its continued survival for the lifetime of the SBB entity tree. However, such checks are only made after an SBB entity tree processes an event; an application should therefore set the TTL of this record to a value greater than the maximum expected time between received events.

If a convergence name session ownership record expires, then all application-defined attributes will be lost.

Coordinated session adoption

Many SLEE applications use multiple types of activities within a single service instance (or SBB entity tree). For example, there may be activities for call signaling, charging, and so on. The resource adaptors that manage each of these activities may maintain their own session ownership records in addition to the convergence name session ownership record managed by Rhino.

In a session failover scenario, there must be a coordinated approach to how a new node adopts a session such that race conditions do not occur. For example, if a signaling event is received on an activity by one surviving node at the same time a charging event for the same session is received on another related activity on another node, there must be a mechanism for resolution such that the session is adopted by only one of the nodes; otherwise application state becomes fragmented and inconsistent.

The purpose of the convergence name session ownership record is to allow the protocol sessions (and their internal state variables) and the application state (SBB entity CMP variables) to move between SLEE nodes together, preserving session stickiness. During failover, adoption of a session is controlled by a race for ownership, and the convergence name session ownership record is the single point of contact for that race.

The mechanism as a whole is intended to work as follows:

  • Each resource adaptor that supports failover creates and manages a session ownership record for each activity or set of related activities. A protocol-specific mapping is used to obtain a session ownership record primary key from an activity handle or identifier.

  • These resource adaptors also provide an API that allows an application to indicate that the protocol-specific session ownership record should be linked with its convergence name session ownership record. For example, the resource adaptor could provide a method that takes an activity object and a convergence name session ownership record primary key as arguments.

  • The resource adaptor then links its protocol-specific session ownership record with the convergence name session ownership record. It does this by setting an attribute on the protocol-specific record with the value of the convergence name record primary key passed in by the application. It is recommended that resource adaptors use the SessionOwnershipRecord.ASSOCIATED_RECORD_ATTRIBUTE_NAME attribute for this purpose, but this is not mandatory.

  • When the resource adaptor receives an event on an activity for which it has no local state, it retrieves the corresponding protocol-specific session ownership record and extracts the primary key of the convergence name session ownership record from that. The resource adaptor then uses the SessionOwnershipFacility.tryAdoptRecord method, passing in the convergence name record primary key, to try to claim ownership of the convergence name session ownership record.

    • If the adoption attempt results in the application session being owned by the current node, then the resource adaptor restores any activity session state that it needs to and continues to process the event locally.

    • If not, the adoption result may indicate that the event should be forwarded to another cluster node, or that the adoption attempt was inconclusive and may need to be retried in a short period of time. A resource adaptor should limit the maximum number of times it will retry an adoption attempt before giving up and aborting the session.

Session adoption pseudo-code

The pseudo-code shown below illustrates how a resource adaptor is expected to use the session ownership facility to adopt a session. Although the session ownership facility only provides asynchronous operations for record retrieval, this pseudo-code is written assuming synchronous operations for the purposes of brevity and clarity.

SessionOwnershipFacility sessionOwnershipFacility = ...;

// this method is called when a network event has been parsed
// but no other "receive logic" has executed yet
void onRequest(Request request, SessionID sessionID) {
    if (haveLocalState(sessionID)) {
        // no need to care about session ownership because the session state is already here
        processRequestHere(request, sessionID);
        return;
    }

    // note: the real retrieveRecord() operation here is asynchronous
    SessionOwnershipRecord protoSpecificRecord = sessionOwnershipFacility.retrieveRecord(sessionID.toTrackingKey());
    if (protoSpecificRecord == null || protoSpecificRecord.getAttribute(SessionOwnershipRecord.ASSOCIATED_RECORD_ATTRIBUTE_NAME) == null) {
        log.debug("do not have a protocol specific record, or don't have an associated-record attribute");
        sendTemporaryErrorResponse(request, sessionID);
        return;
    }

    boolean success = tryAdopt(request, sessionID, protoSpecificRecord);
    if (!success) {
        log.debug("adoption loop gave up");
        sendTemporaryErrorResponse(request, sessionID);
        return;
    }

    log.debug("successfully handled request");
}

// this method attempts to adopt the session
// it returns true if successful, false if we gave up
boolean tryAdopt(Request request, SessionID sessionID, SessionOwnershipRecord protoSpecificRecord) {
    String convergenceNamePKey = protoSpecificRecord.getAttribute(SessionOwnershipRecord.ASSOCIATED_RECORD_ATTRIBUTE_NAME);
    boolean finishedAdopt = false;
    int adoptCount = 0;

    // other nodes may have received a request for a related record too, so we need to race
    SessionOwnershipAdoptionResult adoptRes = null;
    while (!finishedAdopt && adoptCount <= 3) {
        adoptRes = sessionOwnershipFacility.tryAdoptRecord(convergenceNamePKey);
        log.debug("tryAdoptRecord(%s) result is %s", convergenceNamePKey, adoptRes);

        switch (adoptRes.getType()) {
            case ALREADY_OWNED_BY_THIS_NODE:
                log.debug("I am already owner for session %s", sessionID);
                finishedAdopt = true;
                processRequestHere(request, sessionID);
                return true;
            case ALREADY_OWNED_BY_OTHER_NODE:
                log.debug("Another node %s is already owner for session %s", adoptRes.getNodeID(), sessionID);
                finishedAdopt = true;
                break;
            case RACE_WON_BY_THIS_NODE:
                // we won the race - process it here
                // to process it here we have to
                // 1) adjust the protocol specific record to say this node is the owner
                // so that future reads of that record note the correct owner for the record
                // 2) process the request here
                log.debug("I won the race for session %s", adoptRes);
                finishedAdopt = true;
                String previousOwner = findLocalOwnerURI(protoSpecificRecord.getOwnerUris());
                String newOwner = computeOwnerURI(request);
                SessionOwnershipRecord newProtoSpecificRecord = protoSpecificRecord
                    .toBuilder()
                    .removeOwnerURI(previousOwner)
                    .addOwnerURI(newOwner)
                    .build();
                // we don't need to wait for the result of this operation
                // if it fails there's nothing we can do and if it succeeds we don't need the latency
                sessionOwnershipFacility.storeRecord(newProtoSpecificRecord, myListener);
                processRequestHere(request, sessionID);
                return true;
            case RACE_WON_BY_OTHER_NODE:
                // another node won the race, back off and start the read again
                break;
            case RECORD_VIEW_ID_NEWER:
                // current node is behind a more recent cluster view change
                // back off and try the read again
                break;
            case RECORD_NOT_APPROPRIATE:
                // the convergence name pkey did not identify an appropriate convergence name record for the race
                break;
            case RECORD_DOES_NOT_EXIST:
                // the convergence name record was identified but does not exist in the session ownership store
                break;
            case RECORD_NOT_SAME_CLUSTER:
                // the related record is not in the same cluster
                break;
            case SYSTEM_ISSUE:
                // an internal error occurred, back off and retry
                break;
        }
        if (!finishedAdopt) {
            Thread.sleep(INTER_CAS_SLEEP);
        }
        adoptCount++;
    }

    if (!finishedAdopt) {
        // we've retried several times and it didn't work, give up
        return false;
    }

    // now we know the adoption result
    // if it was won by this node, or already owned by this node, we'd have processed and returned
    // now we have to wait for the protocol specific record to catch up
    // first use the record we've already got and see if in sync
    if (protocolRecordAndAdoptionResultInSync(protoSpecificRecord, adoptRes)) {
        // they are in sync, proxy and return
        moveRequestSideways(request, protoSpecificRecord);
        return true;
    }

    // the first protocol specific record was not in sync, so loop on that waiting for it to be updated
    boolean inSyncProtocolRecord = false;
    int readCount = 0;
    while (!inSyncProtocolRecord && readCount <= 3) {
        // note: the real retrieveRecord() operation here is asynchronous
        protoSpecificRecord = sessionOwnershipFacility.retrieveRecord(sessionID.toTrackingKey());
        if (protoSpecificRecord == null || protoSpecificRecord.getAttribute(SessionOwnershipRecord.ASSOCIATED_RECORD_ATTRIBUTE_NAME) == null) {
            log.debug("do not have a protocol specific record, or don't have an associated-record attribute");
            return false;
        }
        if (protocolRecordAndAdoptionResultInSync(protoSpecificRecord, adoptRes)) {
            moveRequestSideways(request, protoSpecificRecord);
            return true;
        }
        else {
            Thread.sleep(INTER_PROTO_READ_TIMEOUT);
        }
        readCount++;
    }
    // give up
    return false;
}

void moveRequestSideways(Request request, SessionOwnershipRecord destinationRecord) {
   // do something protocol and resource adaptor specific to move the
   // request to the node identified by the destination record's owner URI
}

void processRequestHere(Request request, SessionID sessionID) {
   // go to higher layers in the receiving protocol stack
   // eventually calling SLEEEndpoint.fireEvent
}

String findLocalOwnerURI(Set<String> ownerURIs) {
    // returns the owner URI from the set that looks like an owner URI formatted by this resource adaptor
}

String computeOwnerURI(Request request) {
    // returns a string that identifies this node as the owner of the request
}

boolean protocolRecordAndAdoptionResultInSync(SessionOwnershipRecord record, SessionOwnershipAdoptionResult result) {
    // if the record's owner URI's encoded cluster ID and node ID match the AdoptionResult
    // then return true - they are in sync
}