Servicing is the iterative process of advancing the Execution in BRAHMS along Base Clock time. Both Process and Data objects are called to Service, but we describe only the Process point-of-view here. Development of new Data classes is currently not recommended.

The Process point-of-view (Short Version)

When EVENT_RUN_SERVICE is fired on a Process, it must service all of its input and output ports that are Due. For a simple process that has all of its inputs and outputs at the same sample rate, all of them will be Due on every call, and time->now will give the time at which servicing is expected. During this call, the Process can perform its own computations as necessary so that it knows what to write to its outputs at time->now. Thus, the Process is never explicitly called to "compute"; rather, it is expected to compute as necessary when called to "service" its outputs. See also "When to perform computations", below.

A simple Process might take one or more inputs at 20Hz, process the data received from them, and generate one or more outputs at 20Hz. Since all of its inputs and outputs are at the same rate, the Process should set F_NOT_RATE_CHANGER. In response to EVENT_RUN_SERVICE, it must simply perform as follows:

  • At time T=0, T=1/20, T=2/20, ...
    • Read data from input ports
    • Do computations with data
    • Write data into output ports
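These three steps can be sketched as follows. This is an illustrative Python sketch, not the BRAHMS C++ API; the Port class and all names here are hypothetical stand-ins:

```python
class Port:
    """Minimal stand-in for a BRAHMS port (illustration only)."""
    def __init__(self, value=None):
        self.value = value
    def read(self):
        return self.value
    def write(self, value):
        self.value = value

def service_simple_process(inputs, compute, outputs):
    """One EVENT_RUN_SERVICE call for a fixed-rate Process.

    With F_NOT_RATE_CHANGER set, every port is Due on every call,
    so we can read, compute, and write unconditionally.
    """
    data = [port.read() for port in inputs]    # read data from input ports
    results = compute(data)                    # do computations with data
    for port, value in zip(outputs, results):  # write data into output ports
        port.write(value)
```

Because every port is Due on every call, no timing logic appears anywhere in this sketch; that is the entire benefit of F_NOT_RATE_CHANGER.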

If a process wants to do something more advanced in terms of timing (that is, if it wants to allow one or more inputs or outputs at different sample rates), it must not set F_NOT_RATE_CHANGER, and it must only read or write data to ports that are Due on the current service call.

The Engine point-of-view (Long Version)

A major task of the Engine is servicing the input and output interfaces of the Processes being executed. This is a matter of servicing their constituent Ports.

Servicing an input Port consists of making the new data available on the input ports of a Process, and firing EVENT_RUN_SERVICE on the Process. Servicing an output Port consists of firing EVENT_RUN_SERVICE on the Process and propagating the newly written data on its output ports through any connected Links. Each Port has a set of service times which is the set of all sample boundaries of that Port that lie within the execution interval (usually 0, T, 2T, ..., with T the sample period of the Port). Note that a separate call to EVENT_RUN_SERVICE is not made for Ports due at the same time; rather, one call is made at time t, and all Ports (input or output) Due at time t are expected to be serviced within the context of that call.

Servicing an interface consists of servicing all the Ports on that interface. Therefore, the set of service times of an interface is the union of the service times of each Port on that interface. Servicing a Process consists of servicing its input and output interfaces. Therefore, the set of service times of a Process is the union of the service times of both of its interfaces (i.e. the union of the service times of each Port on either interface). Servicing a System consists of servicing all the Processes within that System. Therefore, the set of service times of a System is the union of the service times of all contained Processes (i.e. the union of the service times of all Ports on all Processes within that System).
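The construction of service times as unions can be sketched in Python. This is illustrative only; the rates and execution interval are hypothetical, and exact rational arithmetic stands in for the Engine's internal time representation:

```python
from fractions import Fraction

def port_service_times(rate_hz, end):
    """Sample boundaries 0, T, 2T, ... within [0, end], T = 1/rate."""
    T = Fraction(1, rate_hz)
    return {k * T for k in range(int(end / T) + 1)}

def union_of(times_sets):
    """Service times of an interface/Process/System: the union
    of the service times of its constituent parts."""
    out = set()
    for s in times_sets:
        out |= s
    return out
```

For instance, a Process with one 2Hz port and one 3Hz port has service times `union_of([port_service_times(2, 1), port_service_times(3, 1)])`, i.e. {0, 1/3, 1/2, 2/3, 1} over the first second.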

In general, then, not all Processes will be due for service at every System service time, and not all Ports of a Process will be due for service at every Process service time. That is, not all Ports of a Process will share sample rates (and, thus, sample boundaries). Ports that have a sample boundary at the current service time are said to be Due. At each service time (call to EVENT_RUN_SERVICE), the Process has two responsibilities:

  • Perform any necessary computations.
  • Service all input/output Ports that are Due (that have a sample boundary at time->now).
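The Due test itself reduces to arithmetic on sample periods. A minimal sketch (not the BRAHMS API; exact rational time values are an assumption for illustration):

```python
from fractions import Fraction

def is_due(now, rate_hz):
    """A Port with sample period T = 1/rate is Due at time `now`
    when `now` lies on a sample boundary, i.e. now mod T == 0."""
    return (now % Fraction(1, rate_hz)) == 0
```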

Note that for the purposes of the description given on this page, it makes no difference exactly how a System is being computed, whether using Solo or Concerto, multi-threading or not.

In general, then, the Engine's task consists of stepping through the System service times and servicing all Processes that have that time amongst their service times (i.e. all Processes that have at least one input or output Port Due). It is the responsibility of each Process to perform its actual computations, as appropriate, during these interface service calls.


One of the problems most commonly encountered by BRAHMS users so far is understanding the relationship between empty Ports, full Ports, Due Data, and timing logic. Process developers should make sure they understand what is going on here. A brief summary of the two main categories of Process (with respect to timing logic) follows.

  1. Many BRAHMS Processes are "not rate changers" - they set F_NOT_RATE_CHANGER and, thus, insist that their inputs and outputs all share their Sample Rate. In this case, on every call to EVENT_RUN_SERVICE all inputs and outputs are Due, and all Ports are full. Thus, Processes that set F_NOT_RATE_CHANGER do not need to consider whether their Ports are empty, because they never are.
  2. Some BRAHMS Processes (e.g. resamplers) will have to accept inputs or generate outputs that do not share the same Sample Rate. In such Processes, calls to EVENT_RUN_SERVICE will be made with some Data not Due, and thus some Ports empty. Before authoring such a Process, the developer must understand when Ports may be empty, and how to work with this state of affairs. Such Processes must either perform their own timing logic so that they know when Ports will be full, or they must check the value of Data objects returned when they resolve Ports, and deal with S_NULL (port empty) appropriately.
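The second category can be illustrated with a toy resampler in Python, where None stands in for S_NULL. This is a sketch of the idea only, not the BRAHMS API:

```python
class Resampler:
    """Toy rate-changing Process: the input may be empty (None,
    standing in for S_NULL) on calls where it is not Due. The last
    value read is held in state and re-emitted on every output call."""
    def __init__(self):
        self.state = None

    def service(self, input_value):
        if input_value is not None:  # port full: input Due this call
            self.state = input_value
        return self.state            # output written on every call
```

The key point is the explicit check before reading: a rate-changing Process must either compute for itself when its input will be full, or check for the empty condition on every call, as here.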


Example 1

At the top is a depiction of the System which consists of two processes, A and B. A has one output Port producing output at 1Hz. It is Linked (purple arrow) to an input Port on B, the sample rate of which is defined by the sample rate of the connected output Port, i.e. 1Hz also. There is no way that a Linked input Port and output Port can have different sample rates. The Link has a delay of 1 sample period (indicated by the value in the middle of the purple arrow), so the output of A at t0 will arrive at B at t0+1.

At the bottom is a timing diagram of the execution of this System in BRAHMS. One horizontal line represents the timeline of each of the executed Processes. One black vertical line represents each System service time. Where the intersection of a black line (System service time) and a coloured line (Process timeline) is marked with a black dot, that Process has a service time also. The number of Due inputs and outputs it has is indicated by the presence of input/output arrows attached to that black dot. The flow of data amongst those input/output instants is indicated by the magenta lines.

In this case, A and B and the System all share the same set of service times: every second starting at zero. At each service call to A it is expected to write its output. At each service call to B it is expected to read its input (written on the previous sample by A). Since the Link has a delay of 1 sample period, the first input to B (at t=0) was written by A at t=-1, and cannot be obtained from the Process itself. This value is specified in the SystemML document, either as a user-specified initial state, or as the last value produced by A during a previous execution.

As a result, on each call to EVENT_RUN_SERVICE, for either Process, all Ports will be Due. If this were the only configuration in which these Processes were ever to be used, they could both set F_NOT_RATE_CHANGER and assume Data was Due on every call, simplifying their code flow.
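Example 1's data flow can be traced in a few lines of Python (a sketch; A's output values and the initial Link value are hypothetical):

```python
def run_example_1(steps, initial_link_value):
    """Example 1: A writes at 1Hz into a Link with a delay of one
    sample period; B reads at 1Hz. B's input at t=0 is the Link's
    initial value (from the SystemML document), not a value A wrote
    during this execution. A's output at time t is simply t here."""
    link = initial_link_value
    received = []
    for t in range(steps):
        received.append(link)  # B reads the value A wrote at t-1
        link = t               # A writes its output for time t
    return received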

Example 2

This System is very similar to Example 1, with two changes. First, the output of A is running at 3Hz: this is reflected by the scale change of the timing diagram. Second, the Link has a zero sample delay, which has several effects, as follows.

First, outputs of A are redirected to the input of B at the same System service time; the first input B receives is timestamped with t=0, i.e. undelayed. Second, and as a direct result, there is no initial data required in the Link, and all data fed into B is generated during the execution by A. Third, and this is of only passing interest to the System developer, B cannot be serviced until A has returned from its first call to EVENT_RUN_SERVICE; i.e. the Engine must service these Processes sequentially. Note that this last does not mean that the Processes cannot be computed in parallel - it just means that A will always be one sample period ahead of B during parallel computation.
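The sequential-servicing constraint generalises: zero-delay Links impose an ordering on the Processes Due at a given service time, and a zero-delay cycle cannot be ordered at all (the Circular Reference of the Extra Credit below). A sketch of this ordering by topological sort (not the Engine's actual scheduler; process names are hypothetical):

```python
def service_order(processes, zero_delay_links):
    """Derive one valid servicing order given zero-sample-delay
    Links, each a (src, dst) pair: src must be serviced before dst
    at the same service time."""
    deps = {p: set() for p in processes}
    for src, dst in zero_delay_links:
        deps[dst].add(src)
    order = []
    while deps:
        # processes with no unserviced zero-delay predecessors
        ready = [p for p, d in deps.items() if not d]
        if not ready:
            raise ValueError("Circular Reference: zero-delay cycle")
        for p in sorted(ready):
            order.append(p)
            del deps[p]
        for d in deps.values():
            d.difference_update(ready)
    return order
```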

Extra Credit

If an additional Link was added connecting B to A, what would be the effect?

  • If the Link had a delay of one sample period, the effect would be to prevent parallel computation completely, since each Process would now always be waiting for the output of the other.
  • If the Link had a delay of zero samples, this would generate a Circular Reference, since neither Process could compute its first output until it had seen the output of the other (this corresponds to an arithmetic loop, B(t) = f(A(t)), A(t) = g(B(t)), which BRAHMS will not attempt to solve iteratively).

Example 3

This is similar to Example 1, except that B now produces an output too, at a sample rate of 3Hz. Comparing the timing diagram with that from Example 1, you will see that it is the same, but with the output service times of B overlaid.

The System service times are still the same as the service times of B, but the set of service times of A is smaller. At some System service times (1/3, 2/3, 4/3, 5/3) the Engine will service only B, and not A. Also, whilst A can proceed exactly as before, B must now switch its behaviour based on the context, since it is required to do different things on different service calls (either write its output, or read its input and write its output). It can detect whether it is time to read its input in two ways: the first is by checking whether its input should be Due using timing data (time modulo sample period will be zero); the second is simply to check with the framework whether the input is Due.
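B's context-switching behaviour in this example can be enumerated with the time-modulo-period test (an illustrative sketch; the action strings are hypothetical):

```python
from fractions import Fraction

def b_actions(end_seconds):
    """Example 3: B's output runs at 3Hz, its input at 1Hz.
    At each of B's service times, decide what B must do."""
    actions = []
    t, T_out, T_in = Fraction(0), Fraction(1, 3), Fraction(1)
    while t <= end_seconds:
        if t % T_in == 0:
            actions.append((t, "read input and write output"))
        else:
            actions.append((t, "write output"))
        t += T_out
    return actions
```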

Example 4

This example is similar to Example 3, but the sample rates are swapped so that A now produces output at 3Hz and B at 1Hz. Once again, A does the same thing on each call (which now come at 3Hz instead of 1Hz), but now B must decide on each call whether to write an output (it should write an output every three calls).

Example 5

In all the previous examples, the occurrence of service calls has been periodic for both Processes. Here, we introduce a third process, C, running at 2Hz, to illustrate how aperiodic service calls can arise. The rules do not change, but the service times of B are now the union of the service times of A (t=0, 1/3, 2/3, 1, ...) and those of C (t=0, 1/2, 1, ...). That is, t=0, 1/3, 1/2, 2/3, 1, ..., an aperiodic set.
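B's duties at each of these aperiodic service times can be enumerated as follows (an illustrative Python sketch; the port names follow the example):

```python
from fractions import Fraction

def b_duties(end):
    """Example 5: B's input from A is Due at multiples of 1/3 s,
    its input from C at multiples of 1/2 s, and its own output at
    multiples of 2 s. List what B must do at each service time."""
    Ta, Tc, Tout = Fraction(1, 3), Fraction(1, 2), Fraction(2)
    times = sorted({k * Ta for k in range(int(end / Ta) + 1)} |
                   {k * Tc for k in range(int(end / Tc) + 1)})
    duties = []
    for t in times:
        acts = []
        if t % Ta == 0:
            acts.append("read A")
        if t % Tc == 0:
            acts.append("read C")
        if t % Tout == 0:
            acts.append("write output")
        duties.append((t, acts))
    return duties
```

Over the first second this yields service times 0, 1/3, 1/2, 2/3, 1, with all three duties coinciding only at t=0.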

In addition, B now generates an output once every two seconds, and the set of different behaviours it may have to exhibit is expanding (read from A, read from C, read from A and C, read from A and C and write output). The simplest way to generate correct code flow for all these cases is to handle each aspect separately, in a series of steps such as these:

  • Service inputs
    • If input 1 is Due, read it.
    • If input 2 is Due, read it.
    • ...
  • Perform computations
    • If any computations are needed to bring our state up to date (time->now), do them.
  • Service outputs
    • If output 1 is Due, write it.
    • If output 2 is Due, write it.
    • ...
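The steps above can be sketched as a toy Process in Python (hypothetical rates and behaviour, not the BRAHMS API): one input at 1Hz, one output at 0.5Hz, accumulating input values and writing the running total when the output is Due.

```python
from fractions import Fraction

class ToyProcess:
    """A toy Process following the skeleton above: service Due
    inputs, perform computations, service Due outputs."""
    T_IN = Fraction(1)    # input sample period: 1 s (1Hz)
    T_OUT = Fraction(2)   # output sample period: 2 s (0.5Hz)

    def __init__(self):
        self.total = 0
        self.written = []

    def event_run_service(self, now, input_value):
        # service inputs: read only if Due
        if now % self.T_IN == 0:
            # perform computations (here, trivially, accumulate)
            self.total += input_value
        # service outputs: write only if Due
        if now % self.T_OUT == 0:
            self.written.append(self.total)
```

Because each aspect (each input, the computation, each output) is handled separately and guarded by its own Due test, the same code flow is correct whatever combination of ports happens to be Due on a given call.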

The fact that one Link has unit delay and the other has zero delay does not affect the above, but is reflected in the timing diagram by way of illustration.

Extra Credit

Aperiodic service calls can arise with only one Process in the system, in fact. How?

  • Simply if the Process has two output Ports with sample rates that are not factors of one another. For example, with one at 2Hz and one at 3Hz, the Process will be called at t=0, 1/3, 1/2, 2/3, 1, ..., an aperiodic set.
  • None of the above examples mentioned the sample rate of a Process itself. This is because Process sample rates are ignored by BRAHMS (what a Process does internally is of no interest to BRAHMS). In general, the Process sample rate governs the internal sample rate of a Process; in addition, most Processes will set their output sample rates equal to their internal sample rate (all Standard Library Processes, for instance, use this approach).
  • There is an equivalent of the System Service Time for any Subsystem (subset of Processes). For example, the Thread Service Time is the Subsystem Service Time for the Subsystem formed by all the Processes being computed in that Thread. This is relevant only to Framework developers, however (individual Processes don't know nor care which other Processes they share a thread or a memory with).

When to perform computations

All computations must be performed during interface service calls (EVENT_RUN_SERVICE). How much computation you do in each service call may, in some cases, be a design decision. Some likely strategies are outlined below, but it really depends on what your process needs to do to perform the required servicing.


Just In Time

One strategy is to perform all computations at the latest possible moment, i.e. just before their results are needed to write an output. This is perhaps the simplest approach, but may lead to unbalanced computation. For instance, if inputs are arriving at 10Hz, and outputs are required at only 1Hz, this strategy will concentrate all processing in one of every ten service calls. Whilst this will cause no degradation on a single-processor non-realtime system, it may reduce throughput when multi-processing, and may particularly cause a bottleneck in realtime systems.


Balanced

A second strategy is to perform computations to bring the process state up to t=time->now on every service call. This strategy has the advantage that it tends to balance the computation across calls. However, if, as in the above example, outputs are required only on every tenth call, this approach may actually involve more computation overall than "just-in-time", e.g. if the system being computed can be progressed forward in time by any period with an equal amount of computation.
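The load-distribution difference between the two strategies can be illustrated numerically (a sketch; the "units of work" are hypothetical, with inputs arriving on every call and outputs Due every tenth call):

```python
def work_per_call(strategy, calls_per_output=10):
    """Units of computation performed on each service call.
    'jit' defers all pending work to the call where the output is
    Due; 'balanced' does one unit of work on every call."""
    loads = []
    pending = 0
    for call in range(2 * calls_per_output):
        pending += 1  # one unit of work implied by this call's input
        if strategy == "balanced":
            loads.append(pending)
            pending = 0
        elif (call + 1) % calls_per_output == 0:  # output Due
            loads.append(pending)
            pending = 0
        else:
            loads.append(0)
    return loads
```

Both strategies do the same total work over twenty calls, but "jit" concentrates it into spikes of ten units while "balanced" spreads it as one unit per call.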

Variable Step

A third strategy is to progress the system in aperiodic time steps, as required by the computations being performed. This amounts to variable time-step integration. This strategy is the most complex, though it may lead to the most efficient computation. Of course, the overriding goal is to service inputs and outputs, so this approach may tend toward "balanced" in many cases.