
Delivery overview

By default, tell is fire-and-forget. Messages may be lost (stopped recipient, mailbox overflow, network drops in cluster setups). For workloads where loss is unacceptable, the framework provides reliable delivery via the ProducerController / ConsumerController pair.

Sender side                          Receiver side
     │                                    │
ProducerController                 ConsumerController
     │                                    │
     │             msg #1                 │
     ├───────────────────────────────────►│  buffer + dispatch to handler
     │                                    │
     │             ack #1                 │
     │◄───────────────────────────────────┤  handler succeeded
     │                                    │
     │             msg #2                 │
     ├───────────────────────────────────►│  ...

This adds sequence numbers + acks to the basic tell contract. The producer holds unacked messages; the consumer dedupes by seq.
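That contract can be pictured as a small envelope type. A hypothetical sketch only (the real DeliveryEnvelope in actor-ts may differ; Envelope, acked, and msg1 are illustrative names):

```typescript
// Hypothetical shape for the message the controllers exchange: the
// payload travels with its seq, and the consumer calls ack() once the
// handler succeeds so the producer can drop its buffered copy.
interface Envelope<T> {
  seq: number;      // strictly increasing, per producer
  payload: T;
  ack: () => void;  // tells the producer this seq is done
}

// e.g. msg #1 from the diagram, with ack #1 recorded on the sender side:
const acked: number[] = [];
const msg1: Envelope<string> = {
  seq: 1,
  payload: 'hello',
  ack: () => acked.push(1),
};
msg1.ack();
console.log(acked); // [ 1 ]
```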

Use reliable delivery for workloads where:

  • Loss is unacceptable — payment instructions, audit records, billing events.
  • Duplicates are tolerable but rare — at-least-once with consumer-side dedup is fine.
  • Order matters within a stream — sequence numbers preserve delivery order.

For workloads where loss is acceptable (telemetry, metrics, fire-and-forget UX updates), use plain tell.

The framework’s reliable delivery is at-least-once by default — a message may be redelivered if the producer crashes between sending and receiving the ack.
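The producer half of that at-least-once loop can be sketched in a few lines. This is a simplified stand-in, not the actual ProducerController (SketchProducer and its method names are invented for illustration): unacked messages stay buffered, so a lost ack leads to redelivery rather than loss.

```typescript
// Minimal at-least-once producer sketch: every message is buffered
// until its ack arrives; anything still unacked is a candidate for
// retransmission.
type Pending<T> = { seq: number; payload: T };

class SketchProducer<T> {
  private nextSeq = 1;
  private unacked = new Map<number, Pending<T>>();

  // Assign a sequence number and buffer until acked.
  send(payload: T): Pending<T> {
    const msg = { seq: this.nextSeq++, payload };
    this.unacked.set(msg.seq, msg);
    return msg; // would be transmitted to the consumer here
  }

  // Consumer acked `seq`: safe to drop the buffered copy.
  ack(seq: number): void {
    this.unacked.delete(seq);
  }

  // Everything still unacked is redelivered on timeout.
  retransmitCandidates(): Pending<T>[] {
    return Array.from(this.unacked.values());
  }
}

const p = new SketchProducer<string>();
p.send('a'); // seq 1
p.send('b'); // seq 2
p.ack(1);    // ack for seq 1 arrives; seq 2's ack is lost
console.log(p.retransmitCandidates().map((m) => m.seq)); // [ 2 ]
```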

For effectively-once, the consumer dedupes by sequence number:

class IdempotentConsumer extends Actor<DeliveryMsg> {
  private highWatermark = 0;

  override onReceive(msg: DeliveryMsg): void {
    if (msg.seq <= this.highWatermark) {
      // Duplicate — already processed
      return;
    }
    this.handle(msg);
    this.highWatermark = msg.seq;
  }
}

Combined with a persistent highWatermark, this gives exactly-once processing in the durable sense.

Controller            Role
ProducerController    Wraps the sender side — assigns sequence numbers, holds unacked messages, retransmits.
ConsumerController    Wraps the receiver side — orders messages by seq, dedupes, sends acks.

Pair them via a producer-consumer link — each producer talks to one consumer (or N consumers via routing, but the link is 1:1 per stream).

import {
  ProducerController,
  ConsumerController,
  type DeliveryEnvelope,
} from 'actor-ts';

// Producer side:
const producer = system.actorOf(
  ProducerController.props<OrderEvent>({
    producerId: 'order-producer-1',
    consumer: consumerRef,
    maxOutstanding: 100,
  }),
);

producer.tell({ kind: 'send', payload: { orderId: 'o-1', amount: 100 } });

// Consumer side:
class OrderProcessor extends Actor<DeliveryEnvelope<OrderEvent>> {
  private highWatermark = 0;

  override onReceive(msg: DeliveryEnvelope<OrderEvent>): void {
    if (msg.seq <= this.highWatermark) {
      msg.ack(); // already processed; acknowledge and skip
      return;
    }
    this.processOrder(msg.payload);
    this.highWatermark = msg.seq;
    msg.ack();
  }
}

const consumer = system.actorOf(
  ConsumerController.props<OrderEvent>({
    consumerId: 'order-consumer-1',
    delegate: processorRef,
  }),
);

The framework handles seq assignment, retransmission, ordering — your code handles the business logic + dedup.

Each producer assigns strictly increasing seq numbers, starting at 1:
msg 1, msg 2, msg 3, ...
The consumer sees them IN ORDER (once any retransmissions settle):
msg 1, msg 2, msg 3, ...
Duplicates (due to retransmit) appear with the SAME seq:
msg 1, msg 1 (dup), msg 2, ...
→ consumer dedupes by seq

The seq is per-producer — multiple producers have independent seq spaces.
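Because seq spaces are independent, deduping a stream fed by several producers needs one high watermark per producerId rather than a single counter. A minimal sketch (MultiProducerDedup is a hypothetical helper, not an actor-ts API):

```typescript
// Per-producer high watermarks: seq 1 from producer A and seq 1 from
// producer B are different messages, so dedup must key on the pair
// (producerId, seq), not on seq alone.
class MultiProducerDedup {
  private watermarks = new Map<string, number>();

  // Returns true if this (producerId, seq) is new and should be processed.
  accept(producerId: string, seq: number): boolean {
    const hw = this.watermarks.get(producerId) ?? 0;
    if (seq <= hw) return false; // duplicate from this producer
    this.watermarks.set(producerId, seq);
    return true;
  }
}

const d = new MultiProducerDedup();
console.log(d.accept('p1', 1)); // true
console.log(d.accept('p2', 1)); // true (independent seq space)
console.log(d.accept('p1', 1)); // false (duplicate)
```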

For full durability across producer / consumer crashes:

class Producer extends PersistentActor<...> {
  // ... persists state including unacked messages ...
}

class Consumer extends PersistentActor<...> {
  // ... persists the highWatermark ...
}

Persisting on both sides gives end-to-end exactly-once:

  • Producer crashes mid-send → recovers from journal → resumes from the last-acked seq.
  • Consumer crashes mid-process → recovers highWatermark → dedup still works.

Without persistence, both sides restart with their sequence state reset to zero.
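The consumer half of that guarantee can be shown with a toy journal. This sketch assumes nothing about the real PersistentActor API; the in-memory journal array is just a stand-in for whatever the journal actually is.

```typescript
// Process-then-persist: because the highWatermark is replayed from the
// journal on restart, duplicates redelivered after a crash are still
// filtered out.
const journal: number[] = [];

class DurableConsumer {
  private highWatermark = 0;

  // Recover state from the journal on (re)start.
  recover(): void {
    for (const hw of journal) this.highWatermark = hw;
  }

  // Returns true if the message was new and got processed.
  receive(seq: number): boolean {
    if (seq <= this.highWatermark) return false; // dedup survives restarts
    this.highWatermark = seq;
    journal.push(seq);
    return true;
  }
}

const c1 = new DurableConsumer();
c1.receive(1);
c1.receive(2);
// "crash": a fresh instance recovers the watermark from the journal...
const c2 = new DurableConsumer();
c2.recover();
console.log(c2.receive(2)); // false: redelivery of seq 2 is ignored
console.log(c2.receive(3)); // true
```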

Producer/ConsumerController: in-cluster reliable delivery
Kafka / RabbitMQ / NATS: external broker-mediated delivery

Both achieve at-least-once + dedup-via-seq. Differences:

Aspect                    In-cluster controllers                  External broker
Operational complexity    Low — part of the cluster               High — separate broker to run
Latency                   Sub-millisecond                         Network + broker overhead
Throughput                Bounded by single-actor processing      Higher (broker scales independently)
External consumers        No (cluster-internal)                   Yes
Persistence               Via PersistentActor                     Built into broker

For cluster-internal reliable delivery, use the controllers. For external systems or cross-cluster, use a broker (Kafka, etc.).