
# Dispatcher tuning

The dispatcher decides when actor messages run on the JavaScript event loop. Three shapes ship:

| Dispatcher | Schedules via | Fits |
| --- | --- | --- |
| MicrotaskDispatcher | queueMicrotask | CPU-tight; no I/O |
| ImmediateDispatcher (default) | setImmediate / setTimeout(0) | HTTP servers + mixed I/O |
| ThroughputDispatcher | setImmediate with N-then-yield | Batch processing |

The default — ImmediateDispatcher — is right for most apps. This page covers when it isn’t, and how to pick a better fit.

```ts
const system = ActorSystem.create('my-app');
// ↑ uses ImmediateDispatcher by default
```

Actor messages run via setImmediate, which yields between each message — letting I/O callbacks (HTTP handlers, broker messages, timer fires) interleave naturally.

For HTTP servers + broker-actor clusters (the common case), this gives good HTTP latency at the cost of slightly higher per-message overhead.
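To make the scheduling model concrete, here is a minimal sketch of a one-message-per-turn mailbox — illustrative only, not actor-ts internals; the class and method names are invented for this example:

```ts
// Illustrative sketch (NOT actor-ts internals): each message gets its
// own setImmediate turn, so I/O callbacks queued in between can run.
type Handler<T> = (msg: T) => void;

class ImmediateStyleMailbox<T> {
  private queue: T[] = [];
  private scheduled = false;

  constructor(private readonly handler: Handler<T>) {}

  post(msg: T): void {
    this.queue.push(msg);
    this.schedule();
  }

  private schedule(): void {
    if (this.scheduled || this.queue.length === 0) return;
    this.scheduled = true;
    setImmediate(() => {
      this.scheduled = false;
      const msg = this.queue.shift();
      if (msg !== undefined) this.handler(msg); // exactly one message per turn
      this.schedule();                          // then yield back to the event loop
    });
  }
}
```

The key property is the yield between messages: an HTTP request that arrives while the queue is deep still gets an event-loop turn before the next actor message runs.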

## Symptom: high HTTP latency under actor load

P99 HTTP response time is 200ms; actors are processing tens
of thousands of messages/sec. The actor work isn't the
problem — it's that HTTP requests can't get a turn.

Cause: an actor (or group of actors) is processing messages so fast that HTTP handlers wait for a turn.

Fix: keep ImmediateDispatcher (the system default) and bound the busy actors with a per-actor ThroughputDispatcher:

```ts
import { ThroughputDispatcher } from 'actor-ts';

const heavyActor = system.actorOf(
  Props.create(() => new HeavyWorker())
    .withDispatcher(new ThroughputDispatcher({ throughput: 100 })),
);
```

The heavy actor processes 100 messages, yields, lets HTTP catch up, processes 100 more. HTTP latency drops; throughput on the heavy actor is barely affected.

## Symptom: low actor throughput, low CPU usage

The actor system is doing 1000 msg/sec on an idle CPU.
Profile shows time spent in setImmediate.

Cause: ImmediateDispatcher has per-message overhead from yielding to the event loop on every message. For tight-loop CPU work without I/O, this is wasted.

Fix: use MicrotaskDispatcher:

```ts
import { MicrotaskDispatcher } from 'actor-ts';

const cpuActor = system.actorOf(
  Props.create(() => new CpuIntensive())
    .withDispatcher(new MicrotaskDispatcher()),
);
```

Microtasks bypass the event loop's task queue, making scheduling roughly 50× faster. Caveat: a CPU-tight actor on microtasks can starve I/O (network reads, timers). Use it only when:

  • The actor doesn’t share the system with HTTP traffic (compute-only workers).
  • The actor itself doesn’t await I/O (purely CPU).
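The starvation risk is visible in plain Node: the microtask queue drains to exhaustion — including microtasks enqueued by other microtasks — before the event loop reaches the phase where setImmediate (and I/O) callbacks run.

```ts
// Microtasks drain completely before the event loop advances, so a
// microtask-scheduled actor can delay I/O callbacks indefinitely.
const order: string[] = [];

setImmediate(() => order.push('setImmediate'));    // I/O-adjacent phase

queueMicrotask(() => {
  order.push('microtask 1');
  queueMicrotask(() => order.push('microtask 2')); // chained — still beats setImmediate
});
queueMicrotask(() => order.push('microtask 3'));

// order ends up: microtask 1, microtask 3, microtask 2, setImmediate
```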
ThroughputDispatcher takes a configuration object:

```ts
new ThroughputDispatcher({
  throughput: 100,               // messages per batch
  yieldStrategy: 'setImmediate', // or 'setTimeout' or 'microtask'
});
```
  • throughput — messages processed per actor before yielding. Higher = more throughput, worse I/O interleaving. Common values: 10-1000.
  • yieldStrategy — how to yield between batches. setImmediate is the I/O-friendly default; microtask yields shorter but doesn’t release the event loop.

For a batch processor handling broker messages: throughput: 200 is a reasonable starting point.
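The batching itself is easy to picture. A sketch of the N-then-yield loop — function and parameter names are invented for illustration, not actor-ts internals:

```ts
// Sketch of N-then-yield: drain up to `throughput` messages, then
// setImmediate so HTTP/broker callbacks can interleave.
function drainInBatches<T>(
  queue: T[],
  handle: (msg: T) => void,
  throughput: number,
  done: () => void,
): void {
  let processed = 0;
  while (queue.length > 0 && processed < throughput) {
    handle(queue.shift() as T);
    processed++;
  }
  if (queue.length > 0) {
    setImmediate(() => drainInBatches(queue, handle, throughput, done)); // yield point
  } else {
    done();
  }
}
```

With throughput: 200 and a backlog of 1000 messages, the drain yields to the event loop four times, so I/O never waits longer than one 200-message batch.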

```ts
const heavy = system.actorOf(
  Props.create(() => new BulkProcessor())
    .withDispatcher(new ThroughputDispatcher({ throughput: 500 })),
);

const httpHandler = system.actorOf(
  Props.create(() => new HttpHandler()),
  // → uses system's default ImmediateDispatcher
);
```

Mix freely. Heavy actors get their own throughput-tuned dispatcher; HTTP handlers stay on the default. This is the recommended production pattern.

```ts
const system = ActorSystem.create('my-app', {
  dispatcher: new ThroughputDispatcher({ throughput: 100 }),
});
```

Override the default for every actor that doesn’t specify otherwise. Useful for batch-only systems with no HTTP traffic.

Use the stock metric `actor_message_duration_ms`:

  • P50 ≈ work time
  • P99 − P50 ≈ dispatcher latency (queueing)

If P99 is far higher than P50 with little work-time variance, dispatcher tuning helps.
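To turn raw duration samples into that comparison, a small hypothetical helper (names assumed for illustration; not part of actor-ts) using the nearest-rank percentile:

```ts
// Hypothetical helper: nearest-rank percentile over duration samples,
// used to estimate dispatcher queueing as P99 − P50.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.min(sorted.length - 1, Math.max(0, rank))];
}

function queueingEstimateMs(durationsMs: number[]): number {
  return percentile(durationsMs, 99) - percentile(durationsMs, 50);
}
```

For samples of 1–100 ms this yields P50 = 50, P99 = 99, so a queueing estimate of 49 ms.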

The `actor_mailbox_size` gauge under load shows whether actors are keeping up. A persistently growing depth means either a slow handler or a misconfigured dispatcher.

| Workload | Dispatcher |
| --- | --- |
| HTTP server + actors | ImmediateDispatcher (default) |
| Compute-heavy, no HTTP | MicrotaskDispatcher |
| Batch processing | ThroughputDispatcher (throughput 100-500) |
| Mixed: heavy actor + HTTP | ImmediateDispatcher default + ThroughputDispatcher per heavy actor |