Cluster router
The local Router creates its routees
as its own children — a fixed pool on one node. The
ClusterRouter is different: its routees are other nodes’
actors at a well-known path, and the routee set changes as
cluster membership changes.
```
              ClusterRouter on node-A
                        │
        ┌───────────────┼───────────────┐
        ▼               ▼               ▼
   node-A's        node-B's        node-C's
 /user/worker    /user/worker    /user/worker

   (one per up-member with role=compute)
```

Every up-member with the compute role (configurable) has a worker
at /user/worker; the router routes incoming messages across them
according to its strategy. Add a node and the router picks it up;
remove a node and the router stops sending to it — no restart needed.
See Pool vs group for the distinction between local pools and cluster groups.
A minimal example
```ts
import { ActorSystem, Cluster, ClusterRouter, Props, Actor } from 'actor-ts';

class Worker extends Actor<{ payload: string }> {
  override onReceive(msg: { payload: string }): void {
    this.log.info(`worked on ${msg.payload}`);
  }
}

const system = ActorSystem.create('my-app');
const cluster = await Cluster.join(system, { host, port, seeds, roles: ['compute'] });

// 1. Every node spawns its own worker at /user/worker
system.actorOf(Props.create(() => new Worker()), 'worker');

// 2. Any node can build a cluster router targeting these workers
const router = system.actorOf(
  ClusterRouter.props({
    cluster,
    routerType: 'round-robin',
    routeePath: '/user/worker',
    role: 'compute',
  }),
  'compute-router',
);

// 3. Tell the router — the message gets routed to one node's worker
router.tell({ payload: 'work-1' });
```

The pattern: each node deploys the routee actors locally; one (or
more) nodes spawn a ClusterRouter that targets them. The
router’s strategy decides which node’s worker handles each message.
Configuration
```ts
interface ClusterRouterOptions<TMsg> {
  cluster: Cluster;
  routerType: 'round-robin' | 'random' | 'consistent-hashing' | 'broadcast';
  routeePath: string;
  role?: string;
  extractKey?: (msg: TMsg) => string;
}
```

| Field | Required | What |
|---|---|---|
| `cluster` | Yes | The cluster — used for membership tracking and the wire transport. |
| `routerType` | Yes | One of the four strategies. |
| `routeePath` | Yes | The path the routee actor lives at on each node (typically `/user/<actorName>`). |
| `role` | No | If set, only members carrying this role are routees. |
| `extractKey` | When `routerType: 'consistent-hashing'` | Extracts the routing key from a message. |
The four strategies
| Strategy | What it does |
|---|---|
| `'round-robin'` | One routee per message, cycling. |
| `'random'` | One routee per message, uniformly random. |
| `'consistent-hashing'` | Pin the same `extractKey` to the same routee via rendezvous hashing. |
| `'broadcast'` | Send to every routee. |
The first three are 1-of-N routing; broadcast is fan-out. See Strategies for guidance on picking one — the same trade-offs apply, just spread across cluster nodes instead of pool members.
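The split between 1-of-N routing and fan-out can be sketched with plain functions over a routee list — the `Routee` shape and factory names below are illustrative, not the library’s internals:

```ts
type Routee = { address: string; tell: (msg: unknown) => void };

// Round-robin: one routee per message, cycling through the ordered set.
function makeRoundRobin(routees: Routee[]): (msg: unknown) => void {
  let next = 0;
  return (msg) => {
    routees[next % routees.length].tell(msg);
    next += 1;
  };
}

// Broadcast: every routee receives a copy of the message.
function makeBroadcast(routees: Routee[]): (msg: unknown) => void {
  return (msg) => routees.forEach((routee) => routee.tell(msg));
}
```

With three routees, round-robin sends messages 1, 2, 3, 4 to routees A, B, C, A; broadcast hands every message to all three.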
Consistent-hashing
```ts
ClusterRouter.props({
  cluster,
  routerType: 'consistent-hashing',
  routeePath: '/user/cache',
  extractKey: (msg) => msg.userId,
});
```

`extractKey` is required for `'consistent-hashing'`. The function pulls a string
key out of each message; the router pins messages with the same
key to the same node via rendezvous hashing.
Useful when each routee maintains per-key state: a cache of that user’s data, a session, an in-progress workflow. Topology changes shuffle a fraction of keys (proportional to the add/remove), not all of them.
If extractKey returns the same value forever, every message goes
to the same routee (effectively a singleton). Make sure it varies
across your actual workload.
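Rendezvous (highest-random-weight) hashing is small enough to sketch standalone. The hash (FNV-1a) and names here are illustrative rather than the framework’s actual implementation, but the pinning and partial-reshuffle behaviour is the same:

```ts
// FNV-1a: a tiny deterministic string hash, standing in for a real one.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

// Rendezvous hashing: score every node against the key; the highest
// score wins. Removing a node only remaps the keys it owned — every
// other key keeps its placement, because the other scores don't change.
function pickNode(key: string, nodes: string[]): string {
  let best = nodes[0];
  let bestScore = -1;
  for (const node of nodes) {
    const score = fnv1a(`${node}|${key}`);
    if (score > bestScore) {
      bestScore = score;
      best = node;
    }
  }
  return best;
}
```

For a fixed membership the same key always lands on the same node; when a node leaves, only the keys that node owned move.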
Routee discovery
Every gossip round, the router re-derives its routee set from the cluster’s up-members. A rebuild is triggered by:

- `MemberUp` — a new up-member matching the role. Add it.
- `MemberRemoved` — a removed member. Drop it.
The set is ordered deterministically (by address), so round-robin counters stay sane across rebuilds.
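A sketch of that rebuild step, assuming a simplified member shape (illustrative, not the real membership type): filter to up-members carrying the role, then sort by address.

```ts
type Member = { address: string; status: 'up' | 'removed'; roles: string[] };

// Re-derive the routee set from current membership. Sorting by address
// makes the order deterministic, so a round-robin index remains
// meaningful across rebuilds.
function deriveRoutees(members: Member[], role?: string): string[] {
  return members
    .filter((m) => m.status === 'up' && (role === undefined || m.roles.includes(role)))
    .map((m) => m.address)
    .sort();
}
```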
When the routee set is empty
```ts
router.tell({ payload: 'a' });
// → if no up-members match `role`, message is dropped with a warning log
```

Important: an empty routee set means messages drop to dead letters. The framework doesn’t queue while waiting for routees — that would silently grow without bound.
For “start serving once the pool has at least N routees,”
subscribe to MemberUp and gate request handling on a counter.
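A minimal sketch of such a gate. The counter is all there is to it; wiring `onMemberUp`/`onMemberRemoved` to the cluster’s actual membership events is left out, since that subscription API is an assumption here:

```ts
// Tracks how many matching members are up; callers check `ready`
// before handing work to the router, and reject or retry otherwise
// (rejecting preserves the router's own no-queue property).
class ReadinessGate {
  private upCount = 0;
  constructor(private readonly minRoutees: number) {}

  onMemberUp(): void {
    this.upCount += 1;
  }

  onMemberRemoved(): void {
    this.upCount = Math.max(0, this.upCount - 1);
  }

  get ready(): boolean {
    return this.upCount >= this.minRoutees;
  }
}
```

Usage is a one-liner at the request boundary, e.g. `if (gate.ready) router.tell(msg); else reject(msg)` — with `reject` standing in for whatever backpressure your edge uses.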
Role-filtering
```ts
ClusterRouter.props({
  cluster,
  role: 'compute',
  // ...
});
```

Only up-members carrying the compute role are candidates.
Useful for asymmetric clusters:
- Nodes with the `compute` role do heavy work.
- Nodes with the `gateway` role handle HTTP traffic.
- Nodes with the `coordinator` role host singletons.
The role is declared at Cluster.join time per node. The router’s
role field filters; without it, every up-member is a candidate.
Self-routing
If the local node is a candidate (matches the role), the router may route to a worker on the same node. Loopback delivery goes through the transport’s local-loopback path, so it is handled the same way as any cross-node delivery.
This means the router’s load distribution is symmetric — no preference for local routees, no penalty either. Round-robin puts you in the cycle like every other node.
Stopping
```ts
router.stop();
// or: router.tell(PoisonPill.instance);
```

Stops the router actor. Routees are unaffected — they’re on other nodes and keep running. This is the group-router model: the router owns the routing, not the routees themselves.
For comparison, a local pool router cascade-stops its routees on stop. See Pool vs group.
Where to next
- Routing overview — the bigger routing picture.
- Pool vs group — the conceptual difference from the local Router.
- Strategies — round-robin / random / consistent-hashing in detail.
- Sharding overview — for per-key actors with stronger placement guarantees.
- Cluster overview — the membership the router reads.

The ClusterRouter API reference covers the full options.