Sentinel VoLTE includes a capability to store information related to ongoing SIP Sessions in Rhino’s Session Ownership Subsystem.

This enables global access from various Sentinel VoLTE cluster nodes to the same information.

Session Tracking is accessible to features through an API. It has two primary uses:

  1. in the SCC-AS, to support implementation of various Access Transfer mechanisms

  2. steering requests for related dialogs to the appropriate node when Sessions are replicated

Sessions are said to be Tracked Sessions if their existence and meta-data are stored in the Session Ownership Subsystem.

Use of Session Tracking by the SCC-AS

The SCC-AS tracks any originating or terminating session where the served user’s identity is registered by a UE that has a UE-SRVCC-Capability indicator. This occurs regardless of whether or not Session Replication is enabled.

In order to enable Re-INVITE based three-party conference setup, the SCC-AS will track the access leg for every originating or terminating session if the EnableSCCConfHandling attribute is set to true. For further information refer to SCC Conference Handling Configuration and Reuse of Access Transfer procedures for conferencing.

Use of Session Tracking for Session Replication

When Session Replication is enabled, all SIP Dialogs in the replicated Session are tracked, not just the access leg.

When Sentinel VoLTE is configured to use CAP charging the IM-SSF service (which is not a Sentinel service) will also track the SIP dialogs on each side of its B2BUA.

Tracked Session Information

A Tracked Session is a session for which Session Tracking is enabled. Tracking can be enabled at various "points" in a session’s lifecycle.

These are:

  • Half dialog - A SIP INVITE request has been sent, and no dialog-creating 1xx response has been received yet. This corresponds to the "Trying" and "Proceeding" states in RFC 4235.

  • Early - a SIP dialog is in the "early" state. This means that the INVITE request has received a dialog-creating 1xx response. Forks of this dialog may exist due to SIP forking.

  • Confirmed - a SIP dialog exists that has both local and remote tags, its INVITE request has been responded to with a 2xx, and the ACK for the 2xx has arrived at the TAS.

In the case of the SCC-AS, the tracking points that are enabled depend on the session and the UE’s capabilities:

  • "Confirmed" only - if the UE is SRVCC capable.

  • "Early" and "Confirmed" - if the session is a terminating attempt and the UE indicates support for alerting access transfer.

  • "Half dialog", "Early", and "Confirmed" - if the SCC-AS is triggered for an originating session and the UE indicates support for PS to CS SRVCC for originating calls in the pre-alerting state.
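The SCC-AS rules above can be sketched as a small decision function. This is purely illustrative; the function and flag names are assumptions for this example, not part of the Sentinel VoLTE API.

```python
# Illustrative sketch of the SCC-AS tracking-point rules described above.
# All names here are invented for the example, not Sentinel VoLTE APIs.

def scc_as_tracking_points(srvcc_capable: bool,
                           originating: bool,
                           alerting_transfer_supported: bool,
                           pre_alerting_transfer_supported: bool) -> set:
    """Return the set of tracking points the SCC-AS enables for a session."""
    points = set()
    if srvcc_capable:
        points.add("Confirmed")
    if not originating and alerting_transfer_supported:
        points |= {"Early", "Confirmed"}
    if originating and pre_alerting_transfer_supported:
        points |= {"Half dialog", "Early", "Confirmed"}
    return points
```

For example, a terminating session whose UE supports alerting access transfer is tracked from the "Early" point onwards.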

A tracked session writes information related to the SIP dialog(s) for the served user. In the case of an originating B2BUA, this is the dialog(s) towards the originating user. In the case of a terminating B2BUA this is the dialog(s) towards the terminating user.

State of Tracked Session: Does not exist
Input to session tracking: Initial INVITE request received, and a feature enables half dialog session tracking
Session tracking behaviour: Session tracking creates appropriate rows in the database. Each row is in the half dialog state.

State of Tracked Session: Half dialog
Input to session tracking: A dialog-creating 1xx response is received
Session tracking behaviour: Each half dialog row is removed from the database. A 'PRE_ALERTING' row is created if the Early dialog was established with a 183. An 'ALERTING' row is created if the Early dialog was established with a 180.

State of Tracked Session: Early
Input to session tracking: A dialog-creating 1xx response with a different remote tag (a fork) is received
Session tracking behaviour: New rows representing the Early dialog for the fork are created in the database.

State of Tracked Session: Early
Input to session tracking: An ACK to the 2xx-INVITE is received at the TAS
Session tracking behaviour: Any rows representing forked dialogs that did not receive the final response to the INVITE are removed from the database. The rows representing the established dialog are updated to state 'ACTIVE' in the database.

State of Tracked Session: Confirmed
Input to session tracking: A 2xx response to a hold request is received at the TAS
Session tracking behaviour: Any rows representing the confirmed dialog are updated to state 'HELD' in the database.

State of Tracked Session: Confirmed
Input to session tracking: A 2xx response to a resume request is received at the TAS
Session tracking behaviour: Any rows representing the confirmed dialog are updated to state 'ACTIVE' in the database.

State of Tracked Session: Confirmed
Input to session tracking: A BYE request is sent or received on the Served User’s Dialog
Session tracking behaviour: Any rows representing the confirmed dialog are removed.
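The row-state transitions in the table above can be summarised as a simple lookup. This is a sketch for illustration only; the state and event labels follow the table, while the function itself is hypothetical.

```python
# Sketch of the tracked-row state transitions from the table above.
# A resulting state of None means the row(s) are removed from the database.
TRANSITIONS = {
    ("HALF_DIALOG", "1xx with 183"): "PRE_ALERTING",
    ("HALF_DIALOG", "1xx with 180"): "ALERTING",
    ("PRE_ALERTING", "ACK to 2xx-INVITE"): "ACTIVE",
    ("ALERTING", "ACK to 2xx-INVITE"): "ACTIVE",
    ("ACTIVE", "2xx to hold"): "HELD",
    ("HELD", "2xx to resume"): "ACTIVE",
    ("ACTIVE", "BYE"): None,
    ("HELD", "BYE"): None,
}

def next_row_state(current: str, event: str):
    """Return the new row state, or None when the rows are removed."""
    return TRANSITIONS[(current, event)]
```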

Session Tracking Capabilities

Session Tracking is implemented by the Access Leg Tracking and External Session Tracking features.

SIP stack level request proxying

When Session Replication is enabled, every replicated SIP dialog has an entry in a Session Ownership store. This per-dialog entry contains a SIP URI of the preferred node for processing that dialog.

When a mid-dialog request arrives at a node, the node checks a local memory cache for the existence of the dialog. If the dialog is not in the cache, the Session Ownership store is queried to fetch the entry for the dialog. If there is no entry for the dialog, a 481 Call/Transaction Does Not Exist response is returned. Once the entry is returned, its URI is checked to see whether the node it identifies is a member of the cluster. If it is, the request is proxied to that node. If it is not, the entry for the dialog is updated in the Session Ownership Subsystem to point to the current node, and the request is processed locally.

This causes various Session States to be loaded into the Local Memory Database, at which point the session has been adopted locally.
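The routing decision above can be sketched as follows. The ownership store is modelled here as a plain dictionary mapping dialog id to owning node URI; the real Session Ownership APIs differ, so treat every name in this sketch as an assumption.

```python
# Sketch of SIP-stack-level mid-dialog request routing, as described above.
# ownership_store is modelled as a dict: dialog id -> owning node URI.

def route_mid_dialog_request(dialog_id, local_node, cluster_members,
                             local_cache, ownership_store):
    if dialog_id in local_cache:
        # Dialog already lives on this node.
        return ("process locally", local_node)
    owner = ownership_store.get(dialog_id)
    if owner is None:
        # No entry for the dialog anywhere.
        return ("481 Call/Transaction Does Not Exist", None)
    if owner in cluster_members:
        # Preferred node is a cluster member: proxy the request to it.
        return ("proxy", owner)
    # Preferred node has left the cluster: adopt the dialog locally.
    ownership_store[dialog_id] = local_node
    return ("process locally", local_node)
```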

The Session Ownership store is a "key-value store" that supplies:

  • Compare and set operations (CAS)

  • a "consistent read", provided that no failure occurred while writing and replicating the write
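A minimal in-memory model of that contract might look like the following. This is illustrative only; the real store is external and replicated, and these method names are assumptions.

```python
# Minimal in-memory model of the key/value contract described above.
class SessionOwnershipStoreModel:
    def __init__(self):
        self._rows = {}

    def compare_and_set(self, key, expected, new_value):
        """Atomically set `key` to `new_value` only if its current value is
        `expected` (use expected=None to claim an absent key).
        Returns True on success, False if another writer won the race."""
        if self._rows.get(key) != expected:
            return False
        self._rows[key] = new_value
        return True

    def read(self, key):
        """A consistent read: returns the most recently written value."""
        return self._rows.get(key)
```

CAS lets two nodes race to claim ownership of a dialog without a lock: exactly one `compare_and_set(key, None, ...)` succeeds.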

Components running on the Rhino platform use the Session Ownership APIs in order to access the Session Ownership store. That is, the components are not tightly coupled with the particular implementation of the Session Ownership store. This means that it is possible to add support for additional Session Ownership Store implementation(s).

The Cassandra Database is supported in the initial Session Ownership Store implementation.

Failure detection

A dialog is intended to be processed on a single node at any point in time. This is maintained by the Session Ownership Subsystem.

If a request arrives at the "wrong" node, that node may consult the Session Ownership Subsystem to retrieve the current location of the dialog. If the current owner of the dialog is not available, the dialog is processed locally; essentially, the dialog is failed over to the node receiving the request.

Rhino’s cluster membership subsystem delivers a voting result indicating that a node has failed. The SIS recognises and saves the current cluster membership for use when processing received requests, so it will not proxy requests to a failed node.

Session Tracking and Rhino’s Session Ownership Subsystem

Sentinel VoLTE version 2.6.0 introduced the concept of "Session Tracking". This concept is where certain SIP dialogs have meta-data about them written out to an external store. The meta-data includes a number of variables, including a URI for the home node of the dialog. Typically, Session Tracking is enabled in the SCC-AS for the purposes of Access Transfer. The Session Tracking features in 2.6.0 were implemented directly on top of the Cassandra Database. Further details can be found in the 2.6.0 Session Tracking description.

Sentinel VoLTE version 2.8.0 and later use a new Rhino capability called the "Session Ownership Subsystem". The Session Tracking features are now implemented on top of the Session Ownership Subsystem APIs. The SIS is also updated to use the Session Ownership APIs.

Area of difference: Dialogs recorded into the Database
Prior to 2.8.0: SIP Access Leg/Dialog only
In 2.8.0: All SIP Dialogs in a replicated session. In a non-replicated session only the SIP Access Leg is recorded.

Area of difference: Tracking key used
Prior to 2.8.0: Application controlled key for the Primary Key; application controlled keys for additional tracking keys
In 2.8.0: Platform controlled key for the Primary Key; application controlled keys for additional tracking keys

Area of difference: Point in call where a Dialog is recorded into the Database
Prior to 2.8.0: Application specific; typically used by the SCC-AS
In 2.8.0: By default only Established Dialogs are recorded into the Database; the SCC-AS will request recording of early and pre-early dialogs as necessary

Area of difference: Database schema
Prior to 2.8.0: Very specific attributes are part of the schema, including major application uses as direct attributes
In 2.8.0: Schema is more general; application attributes are part of a "bucket" per row

Area of difference: Layer of software implemented on top of
Prior to 2.8.0: Cassandra CQL API
In 2.8.0: Rhino’s Session Ownership RA Type

Area of difference: SLEE components that may access the API
Prior to 2.8.0: Sentinel features, SBBs, SBB parts
In 2.8.0: Sentinel features, SBBs, SBB parts, and Resource Adaptors

Area of difference: API abstraction between the API user and the Database
Prior to 2.8.0:
In 2.8.0:

Area of difference: Implementation provided by
Prior to 2.8.0: Sentinel Features
In 2.8.0: Rhino platform 2.6.1 and later, and Sentinel features

Capabilities required of a datastore

Rhino’s Session Ownership Subsystem provides an API to components. This API is intended to have a very thin implementation on top of any particular Key/Value store. The capabilities that the Key/Value store needs to provide are:

  • Compare and Set (CAS) operations for a single "row" or "value"

  • a "consistent read", provided that no failure occurred while writing and replicating the write

Use of Cassandra as the Session Ownership Database

Rhino’s Session Ownership Subsystem provides support for integration with various Key/Value "NoSQL" Databases. Out-of-the-box support for the Cassandra database is provided for Session Ownership due to its high availability and replication capabilities.

Each site runs a TAS cluster and a Cassandra cluster. The minimum number of nodes per site is three for the Cassandra cluster and two for the TAS cluster. Each additional site repeats this structure.

Row Time-to-Live

Each row that is created in Cassandra has a "Time-to-Live" (TTL) set. When a row’s TTL expires, the row is no longer visible in the database and its disk storage is eventually removed. Row TTLs are used to ensure that in various failure cases (such as communication failure between the TAS and Cassandra, TAS failure, or Cassandra failure) Cassandra does not "fill up" and then "overflow" its storage.

Tracked Sessions in different states have different TTL values set, as the expected frequency of signalling changes in different session phases.

State of tracked session: Half dialog
Default row TTL:
Configuration location: Not configurable

State of tracked session: Early
Default row TTL:
Configuration location: Not configurable

State of tracked session: Confirmed
Default row TTL: 1.5 × session refresh period
Configuration location: As per SessionRefresh configuration
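Assuming the 1.5 × session refresh period TTL applies to confirmed dialogs (an inference from the states listed earlier), the arithmetic is simple: a session-refresh failure leaves the row to expire shortly after the session itself would. The helper name below is illustrative, not a Sentinel VoLTE API.

```python
def confirmed_row_ttl_seconds(session_refresh_period_seconds: int) -> int:
    """Row TTL for a confirmed dialog: 1.5 x the session refresh period."""
    return session_refresh_period_seconds * 3 // 2

# e.g. a 1800 second session refresh period gives a 2700 second row TTL
```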

Consistency Level

Session Ownership uses a consistency level of LOCAL_QUORUM for reads and writes. This means that:

  • reads in a site will return the most recently written data in that site

  • to survive a single database failure, a minimum of three Cassandra database instances must be configured per site

  • database replication occurs synchronously within a site, and asynchronously between sites

  • there will be a replication lag between sites due to inter-site communication latency.
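These points follow from the quorum arithmetic. With a per-site replication factor of 3, LOCAL_QUORUM is 2, so read and write quorums overlap in at least one replica and one node can fail without losing availability. A sketch (the helper names are illustrative):

```python
def local_quorum(replication_factor: int) -> int:
    """Replicas that must acknowledge a LOCAL_QUORUM read or write."""
    return replication_factor // 2 + 1

def tolerated_failures(replication_factor: int) -> int:
    """Replicas that can fail while quorum operations still succeed."""
    return replication_factor - local_quorum(replication_factor)
```

With a replication factor of 3, `local_quorum` is 2 and one failure is tolerated; reads return the most recently written data in the site because 2 + 2 > 3 guarantees the read and write quorums share a replica.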

Cassandra Schema

The Rhino platform defines and manages the Cassandra database schema, including Keyspace and Table creation. Rhino provides a "Session Ownership Facility" API for Resource Adaptors to use, and a "Session Ownership Resource Adaptor Type" for SBBs and SBB-part components to use. The Session Ownership Subsystem implements these APIs on top of the Cassandra database.

Minimising the impact of Cassandra Database issues on Session processing

As the SCC-AS is involved in potentially all originating IMS INVITE sessions, and all terminating IMS sessions, care is taken to reduce the impact of a database failure.

When tracking sessions the TAS uses the following strategies:

  • Write-only statements for session tracking - session tracking only writes to the Cassandra Database. All data necessary to be written is held in local session state. Application features (such as SCC features) then add a read path as necessary. This enables global access to session tracking state.

  • Batch statements - any query that affects multiple rows is executed as a batch statement.

  • Asynchronous queries - threads used for processing signalling messages are not blocked waiting for a response from the database.

  • Signalling visibility after database transaction - signalling is only written "on the wire" after the database transaction has completed, or a guard timer has fired.

  • Per-query guard timeout - the TAS arms an internal timer for each Cassandra query, and if it fires before there is a response from Cassandra, signalling continues and the session is marked as "not tracked" so that further tracking of that session is disabled.
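The asynchronous-query and guard-timer strategies combine roughly as below: signalling proceeds as soon as the database answers or the guard timer fires, whichever comes first. This is a sketch using Python futures; the TAS implementation differs, and all names here are assumptions.

```python
import concurrent.futures

def guarded_tracking_write(submit_query, guard_timeout_seconds):
    """Submit an asynchronous session-tracking write and wait at most
    guard_timeout_seconds for it. Returns True if the write completed in
    time; False means the guard timer fired, signalling continues anyway,
    and the session would be marked "not tracked"."""
    future = submit_query()
    try:
        future.result(timeout=guard_timeout_seconds)
        return True
    except concurrent.futures.TimeoutError:
        return False
```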

The effect of the guard timeout firing is that the user’s session setup can still succeed, albeit with a small delay (the guard timer value). However, if PS to CS SRVCC is attempted, the access transfer may fail, because when reading from the Cassandra Database the SCC-AS is likely to obtain either:

  • no tracked session information - as rows may not have been created, or may have been removed due to TTL, or

  • out-of-date information - as there may have been successful queries prior to a query hitting its guard timer
