
Designing data

CRDTs are powerful — but only when the data fits. The wrong type produces correct merges of incorrect-for-your-app values. This page is the decision guide for picking the right CRDT, or deciding CRDTs aren’t right at all.

What's the data?
├── A single number that only grows
│ → GCounter
├── A single number that goes up and down
│ → PNCounter
├── A single value (latest writer wins)
│ → LWWRegister<T>
├── A single value (want to detect concurrent writes)
│ → MVRegister<T>
├── A set that only grows
│ → GSet<E>
├── A set with adds and removes
│ → ORSet<E>
├── A map of <key → single LWW value>
│ → LWWMap<K, V>
├── A map of <key → counter>
│ → GCounterMap<K>
├── A map of <key → CRDT>
│ → ORMap<K, C>
└── Anything else (rich/structured data, transactional updates, ordered lists)
    → Not a CRDT fit (see below)

Pin the decision early. Switching CRDT type later requires a migration — the wire format differs per type, and stored data isn’t compatible across them.

// Set of currently-online user IDs:
ORSet<string>

Users connect (add) and disconnect (remove). Concurrent connect-from-different-clients is “add wins” — exactly what you want when a user opens a second tab while the first is still active.
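To make "add wins" concrete, here is a minimal observed-remove set sketch — illustrative only, not the library's actual `ORSet` API or wire format. Each add is tagged with a unique ID; a remove tombstones only the tags it has observed, so a concurrent add (carrying a fresh tag) survives the merge.

```typescript
// TinyORSet: illustrative observed-remove set. Names and structure are
// assumptions for this sketch, not the real ORSet implementation.
class TinyORSet<E> {
  private adds = new Map<string, E>(); // unique tag → element
  private tombstones = new Set<string>(); // tags removed so far
  private counter = 0;
  constructor(private readonly replicaId: string) {}

  add(element: E): void {
    this.adds.set(`${this.replicaId}:${this.counter++}`, element);
  }

  remove(element: E): void {
    // Tombstone only the tags this replica has actually observed.
    for (const [tag, e] of this.adds) {
      if (e === element) this.tombstones.add(tag);
    }
  }

  merge(other: TinyORSet<E>): void {
    for (const [tag, e] of other.adds) this.adds.set(tag, e);
    for (const tag of other.tombstones) this.tombstones.add(tag);
  }

  values(): Set<E> {
    const out = new Set<E>();
    for (const [tag, e] of this.adds) {
      if (!this.tombstones.has(tag)) out.add(e);
    }
    return out;
  }
}

// Tab B removes alice while tab A concurrently re-adds her. The re-add's
// tag was never observed by the remove, so after merging both ways the
// element survives on both replicas: add wins.
const a = new TinyORSet<string>("tabA");
const b = new TinyORSet<string>("tabB");
a.add("alice");
b.merge(a);
b.remove("alice"); // observes only the original tag
a.add("alice"); // concurrent re-add with a fresh tag
a.merge(b);
b.merge(a);
```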

GCounterMap<string> // string = page URL or ID

Clicks only go up; you want per-page counts. Increment on each click; read by page URL.
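The merge rule that makes this safe is "per-replica maximum": each key tracks a count per node, and merging never loses an increment. A minimal sketch (method names here are assumptions, not the library's API):

```typescript
// TinyGCounterMap: illustrative grow-only counter map. Each key holds
// per-replica counts; merge takes the per-replica max, so increments from
// different nodes never clobber each other.
type Counts = Map<string, number>; // replicaId → count

class TinyGCounterMap {
  private entries = new Map<string, Counts>(); // key → per-replica counts
  constructor(private readonly replicaId: string) {}

  increment(key: string, by = 1): void {
    const counts = this.entries.get(key) ?? new Map<string, number>();
    counts.set(this.replicaId, (counts.get(this.replicaId) ?? 0) + by);
    this.entries.set(key, counts);
  }

  get(key: string): number {
    let total = 0;
    for (const n of this.entries.get(key)?.values() ?? []) total += n;
    return total;
  }

  merge(other: TinyGCounterMap): void {
    for (const [key, theirs] of other.entries) {
      const ours = this.entries.get(key) ?? new Map<string, number>();
      for (const [replica, n] of theirs) {
        ours.set(replica, Math.max(ours.get(replica) ?? 0, n));
      }
      this.entries.set(key, ours);
    }
  }
}

// Two nodes record clicks on the same page; merging sums both nodes' work.
const n1 = new TinyGCounterMap("node1");
const n2 = new TinyGCounterMap("node2");
n1.increment("/pricing");
n1.increment("/pricing");
n2.increment("/pricing");
n1.merge(n2);
```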

LWWMap<UserId, UserPrefs> // UserPrefs is your prefs shape

Per-user single-value blob. Latest write wins — fine for “user changed their theme in two tabs, the last save persists.”

For fine-grained pref editing where concurrent changes to different fields should both stick: use ORMap<UserId, LWWMap<FieldName, FieldValue>> — each field is independently LWW.
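The difference is the granularity of the timestamp. A sketch of field-level LWW (illustrative: timestamps here are caller-supplied numbers, whereas a real LWWMap manages its own clock and tie-breaking):

```typescript
// Per-field last-writer-wins: each field carries its own timestamp, so
// concurrent writes to *different* fields both survive a merge. This is an
// illustrative sketch, not the library's LWWMap implementation.
type Stamped<V> = { value: V; ts: number };
type Prefs = Map<string, Stamped<string>>; // fieldName → stamped value

function setField(prefs: Prefs, field: string, value: string, ts: number): void {
  const current = prefs.get(field);
  if (!current || ts > current.ts) prefs.set(field, { value, ts });
}

function mergePrefs(a: Prefs, b: Prefs): Prefs {
  const out: Prefs = new Map(a);
  for (const [field, theirs] of b) {
    const ours = out.get(field);
    if (!ours || theirs.ts > ours.ts) out.set(field, theirs);
  }
  return out;
}

// Tab A changes the theme while tab B changes the language, concurrently.
// With a single LWW blob one write would lose; per-field, both stick.
const tabA: Prefs = new Map();
const tabB: Prefs = new Map();
setField(tabA, "theme", "dark", 10);
setField(tabB, "language", "de", 11);
const merged = mergePrefs(tabA, tabB);
```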

ORMap<UserId, ORSet<ItemId>>

Per-user mutable set of items. Concurrent add-and-remove of the same item: add wins (user added on tab A while clearing on tab B → item stays).

PNCounter

Sessions come and go; the net count is interesting. Use PNCounter (not GCounter!) because sessions also disconnect.
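Under the hood a PN-counter is two grow-only tallies; the value is their difference. A minimal sketch (illustrative structure, not the library's implementation):

```typescript
// TinyPNCounter: one grow-only tally per replica for increments (connects)
// and one for decrements (disconnects); value = sum(P) - sum(N). Merge
// takes the per-replica maximum on each side, like a GCounter.
type Tally = Map<string, number>; // replicaId → total

class TinyPNCounter {
  private p: Tally = new Map(); // connects
  private n: Tally = new Map(); // disconnects
  constructor(private readonly replicaId: string) {}

  increment(): void {
    this.p.set(this.replicaId, (this.p.get(this.replicaId) ?? 0) + 1);
  }
  decrement(): void {
    this.n.set(this.replicaId, (this.n.get(this.replicaId) ?? 0) + 1);
  }
  get value(): number {
    const sum = (t: Tally) => [...t.values()].reduce((a, b) => a + b, 0);
    return sum(this.p) - sum(this.n);
  }
  merge(other: TinyPNCounter): void {
    const mergeTally = (mine: Tally, theirs: Tally) => {
      for (const [r, c] of theirs) mine.set(r, Math.max(mine.get(r) ?? 0, c));
    };
    mergeTally(this.p, other.p);
    mergeTally(this.n, other.n);
  }
}

// Node 1 sees 3 connects and 1 disconnect; node 2 sees 2 connects.
// Net active sessions after merging: 4.
const c1 = new TinyPNCounter("node1");
const c2 = new TinyPNCounter("node2");
c1.increment(); c1.increment(); c1.increment(); c1.decrement();
c2.increment(); c2.increment();
c1.merge(c2);
```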

Per-resource view counter that resets daily
// Not a great CRDT fit — "reset" isn't a natural CRDT op.

Instead: store the GCounter as-is; record the current day in a separate LWWRegister; subtract the value as of “start of day” when reading. The CRDT keeps growing; the user-visible number appears to reset.
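A sketch of that read path (illustrative: `counterValue` stands in for the GCounter's current total, and the register's shape is an assumption for this example):

```typescript
// The register stores the day plus the counter's value at the start of that
// day. The visible count is the delta since then; the GCounter itself never
// resets, it only grows.
type DayBaseline = { day: string; baseline: number }; // kept in an LWWRegister

function visibleCount(
  counterValue: number,
  reg: DayBaseline,
  today: string,
): { shown: number; reg: DayBaseline } {
  if (reg.day !== today) {
    // First read of a new day: re-baseline against the current total.
    reg = { day: today, baseline: counterValue };
  }
  return { shown: counterValue - reg.baseline, reg };
}

let reg: DayBaseline = { day: "2024-05-01", baseline: 0 };
let r1 = visibleCount(120, reg, "2024-05-01"); // 120 views so far today
let r2 = visibleCount(125, r1.reg, "2024-05-02"); // new day: appears to reset
```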

Some shapes look CRDT-friendly but don’t fit:

  • High-cardinality keys — one DD entry per session for 10M sessions = 10M entries × N nodes. Use sharding instead.
  • Large values — replicating a 50 MB doc to every node every gossip round is wasteful. Store the doc externally, replicate a pointer.
  • Frequent writes from a single source — DD’s gossip amortizes over many writers; with one hot writer, you’re paying gossip cost for no benefit. Use a local PersistentActor.
  • Strict transactional semantics — DD doesn’t provide atomic multi-key updates or isolation levels; if you need transactions, use a database.

Real apps often combine CRDTs:

// Per-tenant configuration:
// - tenantId → { features, quota }
// - features is a set
// - quota is LWW
type Features = ORSet<string>;
type Quota = LWWRegister<number>;
type Tenant = ORMap<string, Features | Quota>;
const tenants = ORMap.empty<string, Tenant>();

The pattern: pick the CRDT for each leaf based on its semantics; nest under ORMap (the most flexible container).

For homogeneous nests (every tenant has identical structure), multiple top-level maps often read more cleanly:

const features = ORMap.empty<string, ORSet<string>>(); // tenantId → features
const quotas = LWWMap.empty<string, number>(); // tenantId → quota

Trade-off: composition gives atomic per-tenant operations; split maps give simpler types but cross-map inconsistency is possible (quota updated, features not yet propagated).