
Worker mesh

JavaScript is single-threaded, so each ActorSystem runs all of its actors on one thread. For parallelism within one OS process, the framework's worker mesh runs multiple ActorSystems, one per worker thread, all participating in the same cluster via a MessageChannel transport.

Main process (single OS process)
├── ActorSystem 'main' (main thread)
├── ActorSystem 'w1' (Worker thread 1)
├── ActorSystem 'w2' (Worker thread 2)
└── ActorSystem 'w3' (Worker thread 3)

From the cluster's point of view, each system is a separate node: gossip, membership, and sharding all apply. Communication between them goes over an in-process MessageChannel (no serialization to bytes, no TCP).

Two main scenarios:

  1. CPU-bound parallelism in one process: each actor-ts system is single-threaded, so using multiple cores requires multiple systems; the worker mesh distributes them across threads.
  2. Isolation within one process: a failing worker system does not take down the main system.

For multi-process parallelism (separate OS processes), use regular cluster + TCP transport. Worker mesh is specifically for the in-process case.

// main.ts — main thread
import { MessageChannel, Worker } from 'node:worker_threads';
import { ActorSystem, Cluster, MessageChannelTransport } from 'actor-ts';

// One channel links main and w1; port2 is transferred to the worker.
const channel = new MessageChannel();
const w1 = new Worker('./worker.js', {
  workerData: { mainPort: channel.port2 },
  transferList: [channel.port2],
});

const transport = new MessageChannelTransport({
  self: 'main',
  ports: [channel.port1],
});

const system = ActorSystem.create('main');
await Cluster.join(system, {
  host: 'main',
  port: 0,
  seeds: ['main'],
  transport,
});
// worker.js — runs in the worker thread
import { workerData } from 'node:worker_threads';
import { ActorSystem, Cluster, MessageChannelTransport } from 'actor-ts';

const transport = new MessageChannelTransport({
  self: 'w1',
  ports: [workerData.mainPort],
});

const system = ActorSystem.create('w1');
await Cluster.join(system, {
  host: 'w1',
  port: 0,
  seeds: ['main'],
  transport,
});
// From here on, w1 is just another cluster node

For multiple workers, each pair of nodes needs its own MessageChannel. A fully connected mesh of 4 workers requires 6 channels (C(4, 2) = 6); counting the main system too, a 5-node full mesh needs 10.
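The pairwise wiring can be sketched with a small helper. `meshChannels` is a hypothetical name, not part of actor-ts; it just shows the C(n, 2) channel creation:

```typescript
import { MessageChannel } from 'node:worker_threads';

// Hypothetical helper (not part of actor-ts): one MessageChannel per node pair.
function meshChannels(nodeIds: string[]): Map<string, MessageChannel> {
  const channels = new Map<string, MessageChannel>();
  for (let i = 0; i < nodeIds.length; i++) {
    for (let j = i + 1; j < nodeIds.length; j++) {
      // port1 would go to nodeIds[i], port2 to nodeIds[j] (via transferList)
      channels.set(`${nodeIds[i]}~${nodeIds[j]}`, new MessageChannel());
    }
  }
  return channels;
}

const mesh = meshChannels(['w1', 'w2', 'w3', 'w4']);
console.log(mesh.size); // 6 channels for 4 nodes
```

Each worker then receives its share of ports at spawn time, the same way `channel.port2` is handed to `w1` in the two-node example above.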

The framework’s MessageChannelTransport accepts an array of ports:

new MessageChannelTransport({
  self: 'main',
  ports: [
    portToW1,
    portToW2,
    portToW3,
  ],
});

Each port targets one peer.

For larger meshes, the star topology (everyone talks to main; main relays) is simpler — only N-1 channels needed. But that makes main the bottleneck.
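Under the star layout, main keeps one end of a channel per worker. A minimal wiring sketch, with the actual `Worker` spawn and `transferList` handoff elided:

```typescript
import { MessageChannel, type MessagePort } from 'node:worker_threads';

// Star wiring sketch: main keeps port1 of every channel; each worker
// would receive its port2 via workerData + transferList when spawned.
const workerIds = ['w1', 'w2', 'w3'];
const mainPorts: MessagePort[] = [];
const portsForWorkers = new Map<string, MessagePort>();
for (const id of workerIds) {
  const { port1, port2 } = new MessageChannel();
  mainPorts.push(port1);          // stays on the main thread
  portsForWorkers.set(id, port2); // transfer to worker `id` at spawn time
}
// mainPorts then backs the main system's transport:
// new MessageChannelTransport({ self: 'main', ports: mainPorts });
console.log(mainPorts.length); // 3 channels: one per worker
```

Each worker sees only its single channel to main, which is why the relay through main becomes the bottleneck under load.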

TCP transport         | MessageChannelTransport
----------------------|----------------------------
Sockets, framing      | postMessage between threads
Serialized bytes      | Structured cloning (no JSON)
Network latency       | Sub-microsecond
Cross-host            | Same process only

Messages between worker systems go through structured clone — faster than JSON.stringify + parse, and preserves more types (Map, Set, Date, etc.).
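To see the difference concretely: Node's global `structuredClone` (v17+) applies the same structured clone algorithm that `postMessage` uses between threads, so it shows exactly what a JSON round-trip would lose:

```typescript
// Structured clone preserves types that a JSON round-trip loses.
const original = {
  ids: new Set([1, 2, 3]),
  placedAt: new Date('2024-01-01T00:00:00Z'),
  totals: new Map([['order-1', 10]]),
};
const viaClone = structuredClone(original);
const viaJson = JSON.parse(JSON.stringify(original));

console.log(viaClone.ids instanceof Set);     // true: the Set survives
console.log(viaClone.totals.get('order-1'));  // 10: the Map survives
console.log(viaJson.ids instanceof Set);      // false: JSON turned it into {}
```

Dates, Maps, Sets, typed arrays, and ArrayBuffers all survive the clone; functions and class prototypes do not, which is the usual constraint on message shapes.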

// 4-worker mesh; sharding distributes entities across them:
sharding.start({
  typeName: 'order',
  entityProps: ...,
  extractEntityId: (msg) => msg.id,
  numShards: 16,
});

The coordinator (on main) allocates shards to the 4 workers. CPU-bound entity work parallelizes across cores.
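The routing can be pictured as two steps: hash the entity id into one of `numShards` buckets, then look up which node owns that bucket. This is a hypothetical sketch; actor-ts's actual hash function and allocation strategy may differ:

```typescript
// Step 1 (assumed scheme): stable hash of the entity id into a shard bucket.
function shardFor(entityId: string, numShards: number): number {
  let h = 0;
  for (const ch of entityId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h % numShards;
}

// Step 2: the coordinator owns the shard -> node table; round-robin shown here.
const nodes = ['w1', 'w2', 'w3', 'w4'];
const ownerOf = (shard: number): string => nodes[shard % nodes.length];

const shard = shardFor('order-42', 16);
console.log(shard, ownerOf(shard)); // same id always routes to the same node
```

The important property is stability: as long as the shard table doesn't change, every message for a given entity id lands on the same worker, so the entity's single-threaded actor state stays on one thread.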

// Worker that handles GPU-bound jobs:
system.actorOf(Props.create(() => new GpuJobActor()), 'gpu-jobs');
// Crashes within this worker stay isolated from main + other workers

A worker crashing doesn't take down the main system: each worker thread has its own event loop and heap.
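The isolation is plain Node behavior and can be seen without actor-ts at all, using an inline eval worker whose code throws on startup:

```typescript
import { Worker } from 'node:worker_threads';

// An inline worker whose code throws immediately; only that thread dies.
const w = new Worker("throw new Error('boom')", { eval: true });
let failure = '';
w.on('error', (err) => { failure = err.message; }); // the worker's uncaught error
const code = await new Promise<number>((resolve) => w.on('exit', resolve));
console.log(`worker failed (${failure}), exit code ${code}; main still alive`);
```

In a real mesh, the main thread would typically watch for `exit` and respawn the worker, re-establishing its channel and letting it rejoin the cluster; the restart policy itself is up to the host process.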

  • Cluster overview — the cluster model worker-mesh participates in.
  • Transports — the transport interface MessageChannelTransport implements.
  • Sharding — the primary consumer of mesh parallelism.