MultiNodeSpec

MultiNodeSpec runs N cluster nodes inside one test process. Each is a real ActorSystem; they communicate via InMemoryTransport (no TCP). Useful for testing cluster behavior — sharding, singletons, gossip convergence, failover — without Docker.

import { MultiNodeSpec } from 'actor-ts/testkit';
import { Props } from 'actor-ts'; // import path for Props assumed

const spec = await MultiNodeSpec.create({
  systemName: 'cluster-spec',
  nodes: 3,
});

// spec.nodes is an array of ActorSystems, all joined into one cluster:
const node1 = spec.nodes[0];
const node2 = spec.nodes[1];

// Spawn actors on different nodes:
const probe = spec.createTestProbe(0);
const remote = node2.actorOf(Props.create(() => new Worker()));
remote.tell({ kind: 'do', replyTo: probe });
await probe.expectMsg({ kind: 'done' });

await spec.shutdown();

Three primary use cases:

  1. Testing cluster behavior — sharding rebalances, singleton failover, gossip convergence — verifying these without real network setup.
  2. Reproducing distributed bugs — easier to isolate when the entire cluster runs in one process.
  3. CI-friendly cluster tests — fast (sub-second), no Docker, no ports. A typical test-runner setup is sketched below.
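In practice the spec usually lives in setup/teardown hooks. A minimal sketch using vitest-style hooks (the runner choice is an assumption; any runner with beforeAll/afterAll works the same way):

import { beforeAll, afterAll, test, expect } from 'vitest';
import { MultiNodeSpec } from 'actor-ts/testkit';

let spec: MultiNodeSpec;

beforeAll(async () => {
  spec = await MultiNodeSpec.create({ systemName: 'ci-spec', nodes: 3 });
});

afterAll(async () => {
  await spec.shutdown(); // always tear the cluster down, even on test failure
});

test('all nodes joined', () => {
  expect(spec.nodes).toHaveLength(3);
});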

For tests that need real network behavior (TCP semantics, TLS, cross-host latency), use ParallelMultiNodeSpec or an external Docker Compose setup.

interface MultiNodeSpecSettings {
  systemName: string;
  nodes: number;
  roles?: Array<string | undefined>; // per-node roles
  config?: Record<string, unknown>;
}
Field       Purpose
systemName  Logical name — appears in actor paths.
nodes       Number of cluster nodes to spin up.
roles       Per-node role tags.
config      HOCON overrides for all nodes.
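The config overrides apply uniformly to every node. A minimal sketch (the failure-detector key below is hypothetical; check your cluster settings for real key names):

const spec = await MultiNodeSpec.create({
  systemName: 'cluster-spec',
  nodes: 3,
  config: {
    // Hypothetical key, shown only to illustrate the override mechanism:
    'cluster.failure-detector.heartbeat-interval': '200ms',
  },
});

The roles field assigns one role tag per node: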
const spec = await MultiNodeSpec.create({
  systemName: 'cluster-spec',
  nodes: 3,
  roles: ['frontend', 'compute', 'compute'],
});
// Now:
// node 0 has role 'frontend'
// nodes 1 and 2 have role 'compute'

Useful for testing role-filtered allocation:

ClusterSharding.get(spec.nodes[1], /* cluster ref */).start({
  // ...
  role: 'compute',
  // → only nodes 1 and 2 host shards
});

All nodes share:

  • The same InMemoryTransport bus — they can talk to each other.
  • The same gossip protocol — membership converges.

Each node still runs an independent actor system, with its own dispatchers, schedulers, and supervisor trees.

This means:

  • Real cluster semantics — members come up, gossip converges, failure detector observes, sharding rebalances.
  • No serialization — messages between “nodes” pass by reference (in-process). Not a fit for testing serialization-dependent behavior; a sketch of the caveat follows below.
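Because delivery is by reference, shared mutable state can leak between nodes in ways a real network never allows, which is worth remembering when a test passes here but fails on real transport. A minimal sketch, reusing the remote actor from the first example (the 'load' message kind is hypothetical):

// Hypothetical message carrying a mutable payload:
const payload = { items: [] as string[] };
remote.tell({ kind: 'load', payload });
// The receiver sees this later mutation, because no copy was made in transit.
// Over real TCP the message would have been serialized at send time:
payload.items.push('mutated-after-send');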
// Verify singleton failover:
const spec = await MultiNodeSpec.create({ systemName: 'spec', nodes: 3 });
// ... start singleton ...

// Find the host node:
const hostNode = ...; // identify via cluster.state.leader

// Simulate failure:
await spec.terminateNode(hostNode);

// Wait for failover (fixed sleep; see the polling helper below):
await new Promise(r => setTimeout(r, 5_000));

// Verify singleton moved:
const newHost = ...;
expect(newHost).not.toBe(hostNode);

await spec.shutdown();

spec.terminateNode(index) terminates a specific node; the survivors observe it as unreachable, gossip the change, and trigger downing and failover.
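The fixed 5-second sleep above is the simplest option but can flake under load. A generic polling helper tightens this; plain TypeScript, not part of the testkit:

// Poll a condition until it holds or the timeout expires:
async function eventually(
  cond: () => Promise<boolean> | boolean,
  timeoutMs = 10_000,
  intervalMs = 100,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await cond()) return;
    await new Promise(r => setTimeout(r, intervalMs));
  }
  throw new Error(`condition not met within ${timeoutMs}ms`);
}

// Usage: await eventually(() => findSingletonHost() !== hostNode);
// (findSingletonHost is a hypothetical stand-in for your own lookup logic.)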

const probe0 = spec.createTestProbe(0); // probe on node 0
const probe1 = spec.createTestProbe(1); // probe on node 1

Each probe is bound to one node’s actor system. Useful when testing routing — verify that a message ends up on the expected node.
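A minimal sketch of a node-bound assertion, reusing Worker, Props, and the probes above (the message shape follows the first example):

const worker1 = spec.nodes[1].actorOf(Props.create(() => new Worker()));
worker1.tell({ kind: 'do', replyTo: probe1 });
await probe1.expectMsg({ kind: 'done' }); // reply observed via node 1's probe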

import { ask } from 'actor-ts'; // import path for ask assumed

const spec = await MultiNodeSpec.create({ systemName: 'sharding-spec', nodes: 3 });
const regions = spec.nodes.map(sys =>
  ClusterSharding.get(sys, /* cluster ref */).start<Cmd>({
    typeName: 'entity',
    entityProps: Props.create(() => new Entity()),
    extractEntityId: (msg) => msg.id,
  })
);

// Spawn entities; verify they spread across nodes:
for (const id of ['e1', 'e2', 'e3', 'e4', 'e5']) {
  regions[0].tell({ id, kind: 'wake-up' });
}

// Inspect placement:
const placement = await ask(regions[0], { kind: 'list-shards', replyTo: ... });
expect(placement.regions.size).toBeGreaterThan(1); // distributed

await spec.shutdown();

The framework’s shard coordinator distributes shards across the MultiNodeSpec nodes just as it would in a real cluster.
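Combined with terminateNode, this lets a test exercise rebalancing end to end. A sketch, reusing regions, ask, and the eventually helper from above (the list-shards reply shape is assumed to match the earlier snippet):

// Kill node 2; the coordinator should re-home its shards on the survivors:
await spec.terminateNode(2);

// Poll a surviving region until it reports only two hosting regions:
await eventually(async () => {
  const placement = await ask(regions[0], { kind: 'list-shards', replyTo: ... });
  return placement.regions.size === 2;
});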