Staging refers to the fine-grained management of work within the Rhino SLEE.

This work is divided into items, which are executed by workers; each worker is a system-level thread. You can configure the number of threads available to process stage items, to minimise latency and so increase the performance capacity of the SLEE.

The staging-thread system

Rhino performs event delivery on a pool of threads, called staging threads. The staging-thread system operates a queue of units of work for Rhino to perform, called stage items. Typically, these units of work involve the delivery of SLEE events to SBBs. A stage item enters the staging system on a processing queue; the first available staging thread then removes it and performs its associated work. How long an item spends in the staging queue before a staging thread processes it contributes to the overall latency of handling the event. It is therefore important to make sure that the SLEE is using staging threads optimally.
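As an illustrative model only (not Rhino's implementation; the StagingModel class name is invented, and a stage item is reduced to a plain Runnable), the following Java sketch shows the shape of the system that the parameters below configure: a bounded queue of stage items serviced by a fixed pool of staging threads.

  import java.util.concurrent.BlockingDeque;
  import java.util.concurrent.LinkedBlockingDeque;

  // Illustrative model only -- not Rhino's implementation.
  // A bounded queue of stage items (modelled as Runnables) serviced by a
  // fixed pool of staging threads.
  class StagingModel {
      private final BlockingDeque<Runnable> queue;

      StagingModel(int maximumSize, int threadCount) {
          queue = new LinkedBlockingDeque<>(maximumSize);   // the staging queue
          for (int i = 0; i < threadCount; i++) {
              Thread worker = new Thread(() -> {
                  while (!Thread.currentThread().isInterrupted()) {
                      try {
                          // The first available staging thread removes the next
                          // item and performs its associated work.
                          queue.takeFirst().run();
                      } catch (InterruptedException e) {
                          return;                           // pool is shutting down
                      }
                  }
              }, "staging-thread-" + i);
              worker.setDaemon(true);
              worker.start();
          }
      }

      void submit(Runnable stageItem) {
          // Newest items go to the head of the deque; ordering and overflow
          // behaviour are governed by queueType, maximumSize, and maximumAge below.
          queue.offerFirst(stageItem);
      }
  }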

Tunable parameters

To improve performance, you can tune the following staging parameters: maximumSize, threadCount, maximumAge, queueType.

Warning The node must be restarted for any change in maximumSize, maximumAge, or queueType to take effect.
Tip For instructions on tuning staging parameters, see Configuring Staging Parameters. You can observe the effects of configuration changes in the statistics client by simulating heavy concurrency using a load simulator.

maximumSize

Description

Maximum size of the staging queue. Determines how many stage items may be queued awaiting processing. When the queue reaches maximum size, the SLEE automatically fails and removes the oldest item, to accommodate new items.
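As a minimal sketch of this overflow policy (illustrative only; stage items are again modelled as Runnables, and the failure handling shown is hypothetical), a bounded deque can fail and discard its oldest entry whenever a new item cannot be accepted:

  import java.util.concurrent.BlockingDeque;

  // Illustrative overflow policy only -- not Rhino's implementation.
  // When the staging queue is at maximumSize, the oldest item is failed and
  // removed so that the newest item can still be accepted.
  class DropOldestPolicy {
      void submit(BlockingDeque<Runnable> queue, Runnable newest) {
          while (!queue.offerFirst(newest)) {       // queue is full
              Runnable oldest = queue.pollLast();   // the oldest item waits at the tail
              if (oldest != null) {
                  // Hypothetical failure handling: a real system would report the
                  // failure back to the protocol that generated the event.
                  System.err.println("stage item failed: queue full");
              }
          }
      }
  }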

Default

3000

Recommendation

The default works well for most scenarios. The value should be high enough that the SLEE can ride out short bursts of peak traffic, but not so large that, under extreme overload, stage items wait in the queue for longer than they are useful to the protocol generating the event before being properly failed.

threadCount

Description

Number of staging threads in the thread pool.

Tip Of all staging parameters, this has the greatest impact on overall event-processing latency. To achieve optimal performance, give careful attention to tuning the thread count.

Default

30

Recommendation

The default works well for many applications on a wide range of hardware. However, for some applications, or on hardware with four or more CPUs, more staging threads may be useful. In particular, when the SLEE runs services that make high-latency blocking requests to an external system, more staging threads are often necessary.

For example, for a credit-check application that only allows a call setup to continue after performing a synchronous call to an external system:

  • If a credit check takes on average 150ms, the staging thread that processes the call-setup event will be blocked and unable to process other events for 150ms.

  • With the default configuration of 30 staging threads, such a system would be able to handle an input rate of approximately 200 events/second (see the worked calculation after this list). Above this rate, the staging threads will not be able to service event-processing stage items fast enough, and stage items will begin to back up in staging queues, eventually causing some calls to be dropped.

  • The problem is easily solved by configuring a higher number of staging threads.
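The 200 events/second figure is straightforward arithmetic: a staging thread blocked for 150ms completes at most 1 / 0.15 ≈ 6.7 such events per second, so 30 threads sustain roughly 30 / 0.15 = 200 events/second. The sketch below repeats this estimate and, using a hypothetical target rate, runs it in reverse to size the pool; the class name and the target rate are invented for illustration.

  // Back-of-the-envelope sizing for services that block on external requests.
  // Assumes every event blocks a staging thread for the full request latency.
  class ThroughputEstimate {
      public static void main(String[] args) {
          double blockingSeconds = 0.150;   // average credit-check latency (150 ms)
          int threadCount = 30;             // default number of staging threads

          // Each thread completes at most 1 / blockingSeconds events per second.
          double maxEventsPerSecond = threadCount / blockingSeconds;
          System.out.println(maxEventsPerSecond);   // ≈ 200 events/second

          // Sizing in reverse: threads needed for a hypothetical target rate.
          double targetEventsPerSecond = 500;
          int requiredThreads = (int) Math.ceil(targetEventsPerSecond * blockingSeconds);
          System.out.println(requiredThreads);      // 75 threads
      }
  }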

Warning In real-world applications, it is seldom a matter of applying a simple formula to work out the optimal number of staging threads. Instead, performance-monitoring tools would be used to examine the behaviour of staging, alongside such metrics as event-processing time and system-CPU usage, to find a suitable value for this parameter.

maximumAge

Description

Maximum possible age of a stage item, in milliseconds. Determines how long an item of work can remain in the staging queue and still be considered valid for processing. Staging threads automatically fail and remove stage items that stay in the staging queue for longer than this maximum age. Tuning this (along with maximumSize) helps determine your application’s behaviour under overload conditions.
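As an illustration only (not Rhino's code; the AgeCheck class and TimedItem record are hypothetical), a staging thread can compare how long an item has waited against maximumAge and fail it rather than process it:

  import java.util.concurrent.BlockingDeque;

  // Illustrative age check only -- not Rhino's implementation.
  // TimedItem is a hypothetical stage item that records when it was enqueued.
  class AgeCheck {
      record TimedItem(Runnable work, long enqueuedAtMillis) {}

      void processNext(BlockingDeque<TimedItem> queue, long maximumAgeMillis)
              throws InterruptedException {
          TimedItem item = queue.takeFirst();               // next stage item
          long waited = System.currentTimeMillis() - item.enqueuedAtMillis();
          if (waited > maximumAgeMillis) {
              // Too old to be useful to the protocol that generated the event: fail it.
              System.err.println("stage item failed after " + waited + " ms in the queue");
          } else {
              item.work().run();                            // still fresh: process it
          }
      }
  }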

Default

10000

queueType

Description

Determines ordering of the staging queue. These options are available:

  • LIFO ("Last In First Out") — the newest item in the queue is processed first

  • FIFO ("First In First Out") — the oldest item in the queue is processed first

  • transfer — acts as many FIFO queues, and may perform better under high load on systems with many processors (introduced in Rhino 2.3.1.7).

Default

LIFO

Recommendation

The default LIFO behaviour works well for most scenarios. When short bursts of work exceed capacity, newer work items are handled promptly at the expense of lengthened delays for items already waiting. In contrast, with FIFO behaviour the delays affect every item in the queue until the queue is cleared.
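The difference between the two orderings can be sketched with a double-ended queue (illustrative only, not Rhino's implementation):

  import java.util.ArrayDeque;
  import java.util.Deque;

  // Illustrative ordering sketch only -- not Rhino's implementation.
  // New items are added at the head; LIFO and FIFO differ only in which end
  // the next item to process is taken from.
  class QueueOrdering {
      public static void main(String[] args) {
          Deque<String> queue = new ArrayDeque<>();
          queue.addFirst("item-1");   // oldest
          queue.addFirst("item-2");
          queue.addFirst("item-3");   // newest

          System.out.println(queue.peekFirst());  // item-3 -- LIFO: newest processed first (default)
          System.out.println(queue.peekLast());   // item-1 -- FIFO: oldest processed first
      }
  }

Under sustained overload, the items waiting longest in a LIFO queue are also the ones most likely to exceed maximumAge and be failed, which is consistent with the overload behaviour described above.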
