Refs across nodes
In a single-node system, an ActorRef is a direct in-memory
handle. In a clustered system, an ActorRef can point at an
actor on any node — and the same tell works the same way:
```ts
const remote: ActorRef<Msg> = /* ref to an actor on a different node */;

remote.tell({ kind: 'do-it' });
// → serialized, sent over the cluster transport, delivered to the remote mailbox
```

The framework hides the network layer, but understanding how it works helps when something goes wrong (messages disappearing, latency spiking, refs failing to resolve).
The path-with-host format
A local-only ref’s path looks like:

```
actor-ts://my-app/user/api/sessions/user-42
```

A cluster-aware ref includes the host:port of its owning node:

```
actor-ts://my-app@10.0.0.5:2552/user/api/sessions/user-42
           │      └─────┬─────┘
           │            └── node's address (assigned at Cluster.join time)
           └── system name
```

The host fragment tells the local runtime which node hosts this
actor. When you tell a remote ref, the framework:
- Reads the host:port from the ref’s path.
- Looks up the cluster transport’s connection to that node.
- Serializes the message + the ref’s local-path segments.
- Writes a wire frame.
- The receiving node deserializes, resolves the path segments to a real actor, and tells the local ref.
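The sending-side steps can be sketched in plain TypeScript. This is an illustrative simulation, not the framework's internals; `parseRefPath` and `encodeFrame` are hypothetical names.

```typescript
type RefPath = { system: string; host: string; port: number; segments: string[] };

// Step 1: read the system name, host:port, and local path segments from the ref's path.
function parseRefPath(path: string): RefPath {
  const m = path.match(/^actor-ts:\/\/([^@/]+)@([^:/]+):(\d+)(\/.*)$/);
  if (!m) throw new Error(`not a cluster-aware path: ${path}`);
  return {
    system: m[1],
    host: m[2],
    port: Number(m[3]),
    segments: m[4].split('/').filter(Boolean),
  };
}

// Steps 3-4: serialize the message plus the path segments into a wire frame.
function encodeFrame(path: RefPath, message: unknown): string {
  return JSON.stringify({ to: path.segments, payload: message });
}

const ref = parseRefPath('actor-ts://my-app@10.0.0.5:2552/user/api/sessions/user-42');
console.log(ref.host, ref.port); // → 10.0.0.5 2552
console.log(encodeFrame(ref, { kind: 'do-it' }));
```

Step 2 (looking up the live connection to that host:port) is where the transport layer takes over.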
How refs cross the wire
Inside a message, an ActorRef field serializes as the
path-with-host string. On the receiving node, the
deserializer reconstructs a RemoteActorRef pointing back at the
original node:
```ts
type GetMsg = {
  kind: 'get';
  replyTo: ActorRef<number>; // ← will serialize as a path string
};

remoteRegistry.tell({
  kind: 'get',
  replyTo: this.context.self, // → "actor-ts://my-app@10.0.0.3:2552/user/asker"
});
```

The receiving node sees replyTo as a RemoteActorRef pointing
at 10.0.0.3:2552/.../asker. Calling tell on it goes through
the same encoding-and-transport machinery in reverse.
This means request/response between actors on different nodes just works — the reply travels back to the original asker over the cluster transport.
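That round trip can be simulated in a few self-contained lines. The stub ref below stands in for the deserialized RemoteActorRef; it is not the framework's real class, just the tell surface from the examples above.

```typescript
// Stub ActorRef: only the tell surface used in the examples above.
interface ActorRef<M> { tell(msg: M): void }

type GetMsg = { kind: 'get'; replyTo: ActorRef<number> };

// Records what the "remote" replyTo ref is told, standing in for the
// reply traveling back over the cluster transport.
const received: number[] = [];
const replyTo: ActorRef<number> = { tell: (n) => received.push(n) };

// Behavior of the registry actor on the other node: replyTo arrives as a
// ref pointing back at the asker, and telling it sends the reply the
// other way through the same machinery.
function registryReceive(msg: GetMsg): void {
  if (msg.kind === 'get') {
    msg.replyTo.tell(42);
  }
}

registryReceive({ kind: 'get', replyTo });
console.log(received); // [ 42 ]
```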
The cluster transport
The transport (TCP by default, in-memory in tests) carries:
- Cluster control traffic — gossip, heartbeats, downing signals.
- Envelope traffic — your tells wrapped in a routing envelope.
Both share the same TCP connection per peer pair. The framework multiplexes them; you don’t see this distinction.
See Transports for the TCP and in-memory implementations.
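As an illustration of that multiplexing, the two traffic classes could be modeled as tagged frame types. The real framing is an internal detail and may differ; these shapes are assumptions.

```typescript
// Hypothetical frame shapes for the two traffic classes sharing one connection.
type ControlFrame = { channel: 'control'; kind: 'heartbeat' | 'gossip' | 'down'; from: string };
type EnvelopeFrame = { channel: 'envelope'; to: string[]; payload: unknown };
type Frame = ControlFrame | EnvelopeFrame;

// The receiver demultiplexes on the channel tag: control frames feed the
// cluster machinery, envelope frames feed actor mailboxes.
function route(frame: Frame): 'cluster' | 'mailbox' {
  return frame.channel === 'control' ? 'cluster' : 'mailbox';
}

console.log(route({ channel: 'control', kind: 'heartbeat', from: '10.0.0.5:2552' })); // cluster
console.log(route({ channel: 'envelope', to: ['user', 'api'], payload: { kind: 'do-it' } })); // mailbox
```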
Resolving a remote ref by path
```ts
import { ActorSelection } from 'actor-ts';

const remote = await system
  .actorSelection('actor-ts://my-app@10.0.0.5:2552/user/api/sessions/user-42')
  .resolveOne(5_000);

remote.tell({ kind: 'do-it' });
```

actorSelection parses the path-with-host; resolveOne returns a
RemoteActorRef you can tell. Useful when:
- The target’s location is known by convention (a well-known sharding path or singleton proxy).
- A message contains a path string (e.g., from an HTTP request).
For routine cluster work, you’d usually hold a ref obtained from
the cluster’s own machinery (sharding region, singleton proxy,
event-stream subscriber) — actorSelection is the lookup
escape hatch.
What happens when the remote node disappears
```ts
const remote: ActorRef = /* on a node that just left the cluster */;

remote.tell({ kind: 'do-it' });
// → message routes to dead letters; sender doesn't see the failure
```

A tell to a remote ref on a stopped/unreachable node:
- The framework tries to write to the transport.
- The transport sees no live connection (or one in half-closed state).
- The message is dropped to dead letters.
Two ways to detect this:
- `context.watch(remoteRef)` — receive a `Terminated` notification when the remote actor stops or the remote node goes unreachable and is downed. The `Terminated.addressTerminated` flag distinguishes “actor stopped” from “node lost.”
- Subscribe to cluster events — `MemberRemoved`/`UnreachableMember` for the host’s address tells you the whole node is gone.
See Death watch for the per-actor variant.
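The addressTerminated distinction can be sketched as follows; the Terminated shape here is an assumption based on the description above, not the framework's exact type.

```typescript
// Assumed shape of the Terminated notification described above.
type Terminated = { ref: string; addressTerminated: boolean };

// Distinguish "actor stopped" from "node lost" when handling the notification.
function describeTermination(t: Terminated): string {
  return t.addressTerminated
    ? `node hosting ${t.ref} left the cluster`
    : `${t.ref} stopped`;
}

console.log(describeTermination({ ref: 'user/asker', addressTerminated: true }));
// node hosting user/asker left the cluster
```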
Serialization
Messages crossing the wire need to be serializable. The framework’s SerializationExtension handles this — JSON by default, CBOR when configured.
```ts
{
  kind: 'place',
  order: { items: ['book-1'], total: 19.99 },
  replyTo: this.context.self, // ← ref serializes as path string
}
```

Plain values (strings, numbers, arrays, plain objects) serialize trivially. Things that don’t survive:
- Functions / closures — can’t cross a process boundary.
- Class instances with methods — methods are lost; only data fields survive.
- `Map`/`Set` — JSON serializes them as `{}` (no entries). Use plain objects or arrays.
- Symbols, BigInts (without serializer registration).
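The Map/Set pitfall is easy to demonstrate with the default JSON wire format:

```typescript
// JSON.stringify silently drops Map and Set entries.
const items = new Map([['book-1', 1]]);
console.log(JSON.stringify(items)); // {}

// A plain object round-trips intact.
const itemsObj = { 'book-1': 1 };
console.log(JSON.stringify(itemsObj)); // {"book-1":1}
```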
See Messages for the immutability and wire-format conventions.
Performance
Rough numbers:

- Local tell (same actor system) — 50-200 ns.
- Cluster-local tell (loopback transport) — sub-millisecond.
- Cluster tell over LAN — 0.5-2 ms for small messages.
- Cluster tell over WAN — bounded by network RTT.
The framework batches envelopes opportunistically — many tells in quick succession share TCP writes when possible.
Where to next
- Cluster overview — the bigger picture: how nodes find each other, gossip, membership.
- Transports — TCP and in-memory implementations.
- Serialization overview — the wire format for cross-node messages.
- Messages — the conventions that make messages serializable.
- Death watch — detecting that a remote actor stopped.
The RemoteActorRef API reference covers the wire ref implementation.