
Replicated event sourcing overview

Standard event sourcing has one writer per persistenceId — a single actor instance appends events; another instance (after failover) replays them. This is fine for sharded entities where the framework guarantees one-actor-per-key.

Replicated event sourcing removes that constraint. Multiple replicas of the same entity can be active at once — on different nodes, in different regions — and each persists independently. The framework uses vector clocks to detect concurrent edits + conflict resolvers to merge them.

Replica A (eu-west)                          Replica B (us-east)
        │                                            │
        │ persist event_A1                           │ persist event_B1
        ▼                                            ▼
  Shared journal  ← gossip / async replication →  Shared journal
        │                                            │
        │ read from journal                          │ read from journal
        ▼                                            ▼
         converge (via vector clock + resolver)
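How the framework decides that event_A1 and event_B1 are concurrent can be sketched with a toy vector clock (illustrative only, not the actor-ts implementation): each replica keeps a per-replicaId counter, and two events conflict exactly when neither clock dominates the other.

```typescript
// Minimal vector-clock sketch (illustrative; not the actor-ts implementation).
// A clock maps replicaId -> logical counter; two events are concurrent when
// neither clock dominates the other.
type VClock = Record<string, number>;

function dominates(a: VClock, b: VClock): boolean {
  // a dominates b when every counter in b is <= the matching counter in a.
  return Object.keys(b).every(id => (a[id] ?? 0) >= b[id]);
}

function concurrent(a: VClock, b: VClock): boolean {
  return !dominates(a, b) && !dominates(b, a);
}

// event_A1 from eu-west and event_B1 from us-east, persisted independently:
const clockA1: VClock = { 'eu-west': 1 };
const clockB1: VClock = { 'us-east': 1 };
console.log(concurrent(clockA1, clockB1)); // true: the writes conflict
```

Causally ordered events compare as dominated and replay normally; only genuinely concurrent pairs are handed to the conflict resolver.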

This is a niche persistence pattern; most apps shouldn’t need it. Typical use cases:

  • Multi-region active-active — same entity writable in EU + US. Network partition between regions doesn’t stop either side.
  • Edge-style replication — entities replicate close to users, reconcile centrally.
  • Cluster-spanning concurrent writers — same entity edited on multiple cluster nodes without singleton coordination.

For typical sharded-entity setups, ClusterSharding + PersistentActor gives exactly-one-writer per key automatically — simpler than this.

Replicated event sourcing trades simplicity for availability:

Single-writer ES                             Replicated ES
----------------                             -------------
Total event order per pid                    Partial order — concurrent events can be unordered
State is a deterministic fold                State is a fold + conflict resolution
Commands are validated against latest state  Commands are validated against local replica’s view
Restart replays the log                      Restart replays the log + reconciles concurrent branches

The mental model is CRDT-like for events — multi-writer convergence by design.

import {
  ReplicatedEventSourcedActor,
  vectorClock,
  type ConflictResolver,
} from 'actor-ts';

type State = { value: number };
type Event = { kind: 'set'; value: number };
// Command type referenced by Counter below (shape assumed for the example):
type Cmd = { kind: 'set'; value: number };

const resolver: ConflictResolver<State, Event> = {
  resolve(state, conflicts) {
    // When two replicas concurrently set different values, max wins:
    const values = conflicts.map(c => (c.event as Event).value);
    return { value: Math.max(...values) };
  },
};

class Counter extends ReplicatedEventSourcedActor<Cmd, Event, State> {
  readonly persistenceId = 'counter-42';
  readonly replicaId = process.env.REPLICA_ID!;
  readonly conflictResolver = resolver;

  initialState() { return { value: 0 }; }

  onEvent(state: State, event: Event) {
    return { value: event.value };
  }

  // ... onCommand etc.
}

The actor extends ReplicatedEventSourcedActor instead of PersistentActor. Three additional things to specify:

  • replicaId — this replica’s stable identifier (different from persistenceId).
  • conflictResolver — how to merge concurrent events.
  • The journal must be shared across replicas (Cassandra, shared object-storage, etc.).
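Why merging works regardless of delivery order: a resolver like the max-wins one above is commutative and idempotent, so each replica can fold concurrent events in whatever order it observes them and still converge on the same state. A self-contained sketch of that property (assumed semantics, not actor-ts code):

```typescript
// Order-independence of a max-wins merge (assumed semantics, not actor-ts code).
type State = { value: number };
type Event = { kind: 'set'; value: number };

// Fold one concurrent event into state; max wins, mirroring the resolver above.
const merge = (s: State, e: Event): State => ({ value: Math.max(s.value, e.value) });

const fromA: Event = { kind: 'set', value: 3 }; // persisted by replica A
const fromB: Event = { kind: 'set', value: 7 }; // persisted concurrently by replica B

// Replica A observes its own event first; replica B observes its own first:
const replicaAView = merge(merge({ value: 0 }, fromA), fromB);
const replicaBView = merge(merge({ value: 0 }, fromB), fromA);
console.log(replicaAView.value, replicaBView.value); // 7 7: both converge
```

A resolver without these properties (e.g. "first event wins" by arrival order) would leave replicas permanently divergent, which is why resolver design matters as much as the resolver hook itself.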

Component             Purpose
---------             -------
VectorClock           Tracks causality across replicas — detects concurrent writes.
ConflictResolver      Decides how to merge concurrent events into a single state.
Single-writer lease   Optional — gates writes via a lease for stronger consistency.
Replicated snapshots  Snapshots that include the vector clock for full recovery.

Each gets its own deep-dive page.
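As a preview of the snapshot page: a replicated snapshot has to persist the vector clock alongside the folded state, otherwise recovery cannot tell whether later journal events are causally new or concurrent with the snapshot. A plausible shape (hypothetical; the real actor-ts type may differ):

```typescript
// Hypothetical shape of a replicated snapshot: the folded state plus the
// vector clock at snapshot time, so recovery can classify later events as
// causally-after (apply normally) or concurrent (send through the resolver).
type VClock = Record<string, number>;

interface ReplicatedSnapshot<S> {
  persistenceId: string;
  state: S;             // the fold result so far
  clock: VClock;        // causal position of the snapshot
  sequenceNr: number;   // local journal offset to resume replay from (assumed field)
}

const snap: ReplicatedSnapshot<{ value: number }> = {
  persistenceId: 'counter-42',
  state: { value: 7 },
  clock: { 'eu-west': 3, 'us-east': 2 },
  sequenceNr: 5,
};

// The clock must survive serialization along with the state:
const restored = JSON.parse(JSON.stringify(snap)) as ReplicatedSnapshot<{ value: number }>;
console.log(restored.clock['eu-west']); // 3
```

Dropping the clock from the snapshot would silently turn every post-snapshot concurrent event into an apparent causal successor, skipping the resolver.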

                              Sharding + PersistentActor   Replicated ES
Multiple writers per entity?  No (exactly one)             Yes
Conflict resolution needed?   No                           Yes
Cross-region active-active?   Sharding favors one region   Yes
Operational complexity?       Low                          High
Use when                      Default                      You really need it