# Cached snapshot store
`CachedSnapshotStore` is a decorator — wrap any
`SnapshotStore` to add an in-process LRU cache for reads.
```ts
import {
  CachedSnapshotStore,
  ObjectStorageSnapshotStore,
  PersistenceExtensionId,
} from 'actor-ts';

const underlying = new ObjectStorageSnapshotStore({ /* S3 bucket config */ });

const cached = new CachedSnapshotStore({
  underlying,
  maxEntries: 1_000, // LRU size
  ttlMs: 60_000,     // optional TTL
});

system.extension(PersistenceExtensionId).configure({
  journal: myJournal,
  snapshotStore: cached,
});
```

## When you need it

Three patterns:
- Slow underlying store — object storage with multi-hop network latency, encrypted state with expensive decryption.
- Frequent actor churn — sharded entities passivating / re-spawning constantly, each load re-reading the same snapshot.
- Recovery storms — full-cluster restart, every actor loads its snapshot at once. Cache reduces redundant loads when the same snapshot is queried during the storm.
For local SQLite-backed snapshots (sub-millisecond reads), the cache adds overhead with no benefit. Use it only when the underlying store has measurable read latency.
## Configuration

```ts
interface CachedSnapshotStoreSettings {
  underlying: SnapshotStore;
  maxEntries: number; // LRU capacity
  ttlMs?: number;     // optional expiration
}
```

| Field | What |
|---|---|
| `underlying` | The real snapshot store the cache fronts. |
| `maxEntries` | Maximum cached entries; LRU eviction beyond this. |
| `ttlMs` | Optional — entries older than this are dropped. Useful if snapshots may be deleted externally. |
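To illustrate how `maxEntries` and `ttlMs` interact, here is a toy LRU-with-TTL map. This is illustrative only — `TinyLru` is not part of the library; it just demonstrates the eviction and expiry semantics the two settings describe:

```ts
// Toy LRU-with-TTL cache, relying on Map's insertion-order iteration.
class TinyLru<V> {
  private entries = new Map<string, { value: V; storedAt: number }>();
  constructor(private maxEntries: number, private ttlMs?: number) {}

  get(key: string, now = Date.now()): V | undefined {
    const hit = this.entries.get(key);
    if (!hit) return undefined;
    if (this.ttlMs !== undefined && now - hit.storedAt > this.ttlMs) {
      this.entries.delete(key); // expired: treat as a miss
      return undefined;
    }
    // Refresh recency: re-inserting moves the key to the
    // "most recently used" end of the Map's iteration order.
    this.entries.delete(key);
    this.entries.set(key, hit);
    return hit.value;
  }

  set(key: string, value: V, now = Date.now()): void {
    this.entries.delete(key);
    this.entries.set(key, { value, storedAt: now });
    if (this.entries.size > this.maxEntries) {
      // Evict the least recently used key (first in iteration order).
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
  }
}

const lru = new TinyLru<number>(2, 1_000);
lru.set("a", 1, 0);
lru.set("b", 2, 0);
lru.get("a", 0);                      // touch "a", so "b" becomes LRU
lru.set("c", 3, 0);                   // over capacity: evicts "b"
const evicted = lru.get("b", 0);      // undefined (evicted by maxEntries)
const expired = lru.get("a", 2_000);  // undefined (past ttlMs)
```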
## Cache semantics

- `loadLatest(pid)` — check cache; on hit, return. On miss, load from underlying, cache the result, return.
- `save(pid, snapshot)` — write through to underlying, then update the cache.
- `deleteUpTo(pid, seqNr)` — write through, then invalidate the pid's cache entry.
The cache is consistent with writes — a `save` followed by
`loadLatest` returns the just-saved snapshot, never a stale value.
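The three operations above can be sketched as a decorator. The `Snapshot` and `SnapshotStore` shapes below are simplified assumptions for illustration, not the library's actual interfaces:

```ts
// Sketch of the write-through decorator flow, against a deliberately
// simplified (assumed) SnapshotStore interface.
interface Snapshot { seqNr: number; state: unknown }
interface SnapshotStore {
  loadLatest(pid: string): Promise<Snapshot | undefined>;
  save(pid: string, snapshot: Snapshot): Promise<void>;
  deleteUpTo(pid: string, seqNr: number): Promise<void>;
}

class CachingDecorator implements SnapshotStore {
  private cache = new Map<string, Snapshot>();
  constructor(private underlying: SnapshotStore) {}

  async loadLatest(pid: string): Promise<Snapshot | undefined> {
    const hit = this.cache.get(pid);
    if (hit) return hit; // cache hit: skip the underlying store entirely
    const loaded = await this.underlying.loadLatest(pid);
    if (loaded) this.cache.set(pid, loaded); // populate on miss
    return loaded;
  }

  async save(pid: string, snapshot: Snapshot): Promise<void> {
    await this.underlying.save(pid, snapshot); // write through first
    this.cache.set(pid, snapshot);             // then refresh the cache
  }

  async deleteUpTo(pid: string, seqNr: number): Promise<void> {
    await this.underlying.deleteUpTo(pid, seqNr);
    this.cache.delete(pid); // invalidate: next load re-reads underlying
  }
}

// In-memory underlying store that counts reads, to show the hit path.
let reads = 0;
const mem = new Map<string, Snapshot>();
const slow: SnapshotStore = {
  async loadLatest(pid) { reads++; return mem.get(pid); },
  async save(pid, s) { mem.set(pid, s); },
  async deleteUpTo(pid) { mem.delete(pid); },
};

const store = new CachingDecorator(slow);
const demo = (async () => {
  await store.save("pid-1", { seqNr: 1, state: "s1" });
  await store.loadLatest("pid-1"); // served from cache, not `slow`
  await store.loadLatest("pid-1"); // still cached
  return reads;                    // underlying store never read
})();
```

Note the write-through ordering in `save`: the underlying write happens before the cache update, which is what makes a `save` followed by `loadLatest` return the just-saved snapshot.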
## Performance

For a slow underlying store (say, 50 ms per load), the cache turns subsequent loads of the same snapshot into sub-microsecond operations. A typical sharded-entity workload sees an 80–95% hit rate after warm-up.
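The warm-up claim can be sanity-checked with a back-of-envelope expected-latency formula; the numbers below are illustrative, matching the 50 ms example:

```ts
// Expected per-load latency given a cache hit rate.
// hitCostMs ~0.001 ms models the sub-microsecond in-process hit.
function expectedLoadMs(hitRate: number, hitCostMs: number, missCostMs: number): number {
  return hitRate * hitCostMs + (1 - hitRate) * missCostMs;
}

const cold = expectedLoadMs(0.0, 0.001, 50); // every load misses: 50 ms
const warm = expectedLoadMs(0.9, 0.001, 50); // 90% hit rate: about 5 ms
```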
The cache itself is in-process — no shared cache across nodes. For a cluster of N nodes, each has its own cache; misses on a fresh node pay the full underlying-load cost.
## Pitfalls

- The cache only sees writes that flow through this store. If snapshots are deleted or replaced externally, cached entries can serve stale data until `ttlMs` expires; without a TTL, indefinitely.
- The cache is per-process: in a cluster, each node warms its own cache independently.

## Where to next
Section titled “Where to next”- Snapshots — the policy this is in service of.
- Object storage snapshot store — the common slow underlying store.
- In-memory snapshot store — for tests.
- SQLite snapshot store — the single-node default.