
Quorum reads and writes

DistributedData’s default update / get are local-only — they apply to the local replica immediately; propagation happens via gossip. For workloads needing stronger guarantees, the updateAsync and getAsync variants wait for explicit replica acknowledgments.

dd.update<GCounter>(
  'hits',
  GCounter.empty,
  (c) => c.increment(dd.selfReplicaId(), 1),
);
const counter = dd.get<GCounter>('hits');

These operations:

  • update — apply locally, fire-and-forget. Returns immediately.
  • get — read the local replica’s view.

Behind the scenes, gossip propagates updates to other replicas over the next few rounds.

This is the right default — 99% of DD operations use these.

Three common cases where it isn't enough:

  1. Read-after-write across nodes. Node A writes, node B immediately reads — the write might not have gossiped yet, so node B sees stale data.
  2. Confirmed write. The app wants to know “at least N other replicas have my update before I move on” — e.g., before replying to a client.
  3. Strongest-known read. When stale data is unacceptable for this particular read (a balance check before a payment), force the read to consult a majority of replicas.

updateAsync — wait for write acknowledgments

await dd.updateAsync<GCounter>(
  'hits',
  GCounter.empty,
  (c) => c.increment(dd.selfReplicaId(), 1),
  { consistency: 'majority', timeoutMs: 2_000 },
);

Applies the update locally and hands it to the gossip layer, then waits for acknowledgments from the configured number of replicas.

consistency               What it waits for
'local' (default)         Self only — the local apply is the ack.
'majority'                ⌊N/2⌋ + 1 replicas have acked.
'all'                     Every up-member replica has acked.
{ kind: 'count'; n: 3 }   Exactly n replicas have acked.
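For example, waiting for a fixed number of acknowledgments uses the object form from the table above (a sketch; the key, update, and timeout are illustrative):

// Wait until three replicas have acknowledged this write.
await dd.updateAsync<GCounter>(
  'hits',
  GCounter.empty,
  (c) => c.increment(dd.selfReplicaId(), 1),
  { consistency: { kind: 'count', n: 3 }, timeoutMs: 2_000 },
);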

The Promise:

  • Resolves when enough acks arrive within timeoutMs.
  • Rejects with a timeout error if not.

A timeout doesn’t undo the local write — the value is already applied locally and gossips normally. The rejection just signals “I’m not sure enough replicas saw it within the deadline.”
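A minimal sketch of that behavior, assuming a short deadline that the quorum misses:

try {
  await dd.updateAsync<GCounter>(
    'hits',
    GCounter.empty,
    (c) => c.increment(dd.selfReplicaId(), 1),
    { consistency: 'majority', timeoutMs: 500 },
  );
} catch (e) {
  // Not confirmed by a majority in time, but the increment is already in
  // the local replica and keeps gossiping; a local read still sees it.
  const counter = dd.get<GCounter>('hits');
}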

getAsync — wait for read responses

const counter = await dd.getAsync<GCounter>('hits',
  { consistency: 'majority' });

Same consistency options. The replicator queries other replicas for their views, merges the responses, and returns the merged value.

This means a majority read sees at least every write that was acknowledged by a majority — the latest “confirmed” value.
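Putting the two halves together: a majority write and a majority read overlap in at least one replica, so the read cannot miss a confirmed write. A sketch, with the two snippets running on different nodes:

// Node A: confirm the write with a majority before replying to the client.
await dd.updateAsync<GCounter>(
  'hits',
  GCounter.empty,
  (c) => c.increment(dd.selfReplicaId(), 1),
  { consistency: 'majority' },
);

// Node B: a majority read merges enough replica views to include every
// majority-acknowledged write, even if gossip hasn't reached B yet.
const counter = await dd.getAsync<GCounter>('hits', { consistency: 'majority' });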

Is stale data acceptable for THIS read/write?
├── Yes (most cases) → 'local'
├── No — must be confirmed by enough replicas → 'majority'
└── Absolutely no — every up-member must agree → 'all'

'majority' is the sweet spot for “important” reads/writes — covers most failure scenarios at modest latency. 'all' guarantees consistency but fails if any replica is down (or slow), making it brittle.
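One way to keep that choice visible at each call site is to name the option sets once; the constant names here are illustrative, not part of the API:

// Hypothetical shared option sets, chosen per call site rather than globally.
const CONFIRMED = { consistency: 'majority', timeoutMs: 2_000 } as const; // important writes/reads
const UNANIMOUS = { consistency: 'all', timeoutMs: 5_000 } as const;      // rare; brittle if a replica is down

await dd.updateAsync<GCounter>(
  'hits',
  GCounter.empty,
  (c) => c.increment(dd.selfReplicaId(), 1),
  CONFIRMED,
);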

// Write with majority — guarantees other replicas know
await dd.updateAsync('counter', ..., { consistency: 'majority' });
// Read locally — fast, and the value reflects (eventually) what
// other writers wrote with majority
const value = dd.get('counter');

This is the common production pattern. Writes pay the majority cost; reads are cheap. The cluster eventually converges.
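Wrapped in application-level helpers, the same pattern might look like this (the function names are illustrative):

// Writes pay the majority cost so other replicas are known to have the update.
async function recordHit(): Promise<void> {
  await dd.updateAsync<GCounter>(
    'hits',
    GCounter.empty,
    (c) => c.increment(dd.selfReplicaId(), 1),
    { consistency: 'majority', timeoutMs: 2_000 },
  );
}

// Reads stay local: cheap, and they eventually reflect all majority writes.
function readHits() {
  return dd.get<GCounter>('hits');
}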

// Read majority — see the latest known value across the cluster
const balance = await dd.getAsync<PNCounter>('balance',
  { consistency: 'majority' });
if (balance.value() < 0) {
  // ... reject the transaction
}

For one-off reads where staleness matters, pay the cost for a strong read. Don’t make every read a majority read — it defeats the local-first design.

await dd.updateAsync('config-locked', ..., { consistency: 'all' });

Rare — when you need every replica to have seen the change before moving forward. Brittle (a single down replica fails the operation), so reserve it for truly irrevocable state transitions.

await dd.updateAsync('x', ..., {
  consistency: 'majority',
  timeoutMs: 5_000,
});

Default timeout is gossipIntervalMs × 5 — five gossip rounds, typically 5 seconds. Override per call when:

  • Stricter latency budget — set lower; reject early if consistency can’t be achieved.
  • Looser — for cross-region clusters where gossip RTT is high, raise the timeout.
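For instance, two call sites with different budgets (the values are illustrative):

// Latency-sensitive path: reject early instead of waiting out five gossip rounds.
await dd.updateAsync<GCounter>(
  'hits',
  GCounter.empty,
  (c) => c.increment(dd.selfReplicaId(), 1),
  { consistency: 'majority', timeoutMs: 500 },
);

// Cross-region cluster: give acknowledgments more round-trip time.
await dd.updateAsync<GCounter>(
  'hits',
  GCounter.empty,
  (c) => c.increment(dd.selfReplicaId(), 1),
  { consistency: 'majority', timeoutMs: 10_000 },
);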

On timeout, the Promise rejects with an error you can catch:

try {
  await dd.updateAsync('x', ..., { consistency: 'majority', timeoutMs: 1_000 });
} catch (e) {
  // The write succeeded locally but didn't quorum in time.
  // Decide: retry, surface the error, or accept the eventual-consistency outcome.
}
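One way to act on that decision is to surface the outcome to the caller instead of treating the timeout as a hard failure; the helper name and return values are illustrative:

// Report whether the increment was quorum-confirmed. Either way the value
// is applied locally and continues to gossip to the rest of the cluster.
async function incrementHitsConfirmed(): Promise<'confirmed' | 'unconfirmed'> {
  try {
    await dd.updateAsync<GCounter>(
      'hits',
      GCounter.empty,
      (c) => c.increment(dd.selfReplicaId(), 1),
      { consistency: 'majority', timeoutMs: 1_000 },
    );
    return 'confirmed';
  } catch {
    return 'unconfirmed';
  }
}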