# Cache overview
The Cache interface is the framework’s opportunistic
key/value cache. Five operations cover ~95% of real cases:
get, set, atomic increment, set-if-absent, and delete.
Used by:
- Response cache middleware — GET-response caching.
- Rate limit middleware — per-key counters.
- Idempotency-key middleware — write-dedup.
- CachedSnapshotStore — snapshot read-through cache.
## The interface

```ts
interface Cache {
  get<V>(key: string): Promise<Option<V>>;
  set<V>(key: string, value: V, ttlMs?: number): Promise<void>;
  incr(key: string, ttlMs?: number): Promise<number>;
  setIfAbsent<V>(key: string, value: V, ttlMs?: number): Promise<boolean>;
  delete(...keys: string[]): Promise<void>;
  mget<V>(keys: string[]): Promise<Map<string, V>>;
  mset(entries: Array<[string, unknown, number?]>): Promise<void>;
}
```

A small surface — no pattern scans, no pub/sub (the cluster has its own pub/sub).
## Three backends

| Backend | Use |
|---|---|
| InMemoryCache | Single-pod / tests. In-process Map. |
| RedisCache | Multi-pod production. Wraps ioredis. |
| MemcachedCache | Multi-pod where Memcached fits. Wraps memjs. |
Pick by deployment shape:
- Single pod — InMemoryCache. Fast, no extra peer deps.
- Multi-pod with Redis already — RedisCache.
- Multi-pod with Memcached already — MemcachedCache.
- Multi-pod, no preference — RedisCache. More features (pub/sub, persistence, etc.) and the wider ecosystem.
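To make the contract concrete, here is a minimal in-memory sketch of the Cache interface. The `Option` shape, the `SketchCache` name, and the absence of LRU eviction are illustrative assumptions — the framework’s own InMemoryCache is the backend to actually use.

```typescript
// Assumed Option shape for this sketch.
type Option<V> = { isSome(): boolean; value?: V };
const Some = <V>(value: V): Option<V> => ({ isSome: () => true, value });
const None: Option<never> = { isSome: () => false };

class SketchCache {
  private store = new Map<string, { value: unknown; expiresAt?: number }>();

  // Return the entry only if it exists and has not expired.
  private live(key: string) {
    const e = this.store.get(key);
    if (!e) return undefined;
    if (e.expiresAt !== undefined && Date.now() >= e.expiresAt) {
      this.store.delete(key);
      return undefined;
    }
    return e;
  }

  async get<V>(key: string): Promise<Option<V>> {
    const e = this.live(key);
    return e ? Some(e.value as V) : None;
  }

  async set<V>(key: string, value: V, ttlMs?: number): Promise<void> {
    this.store.set(key, {
      value,
      expiresAt: ttlMs !== undefined ? Date.now() + ttlMs : undefined,
    });
  }

  async incr(key: string, ttlMs?: number): Promise<number> {
    const e = this.live(key);
    const next = e ? (e.value as number) + 1 : 1;
    if (e) e.value = next;                  // existing counter keeps its TTL
    else await this.set(key, next, ttlMs);  // TTL applied only on creation
    return next;
  }

  async setIfAbsent<V>(key: string, value: V, ttlMs?: number): Promise<boolean> {
    if (this.live(key)) return false;
    await this.set(key, value, ttlMs);
    return true;
  }

  async delete(...keys: string[]): Promise<void> {
    for (const k of keys) this.store.delete(k);
  }

  async mget<V>(keys: string[]): Promise<Map<string, V>> {
    const out = new Map<string, V>();
    for (const k of keys) {
      const e = this.live(k);
      if (e) out.set(k, e.value as V);
    }
    return out;
  }

  async mset(entries: Array<[string, unknown, number?]>): Promise<void> {
    for (const [k, v, ttl] of entries) await this.set(k, v, ttl);
  }
}
```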
## Opportunistic semantics

Caches are lossy by design. A get returning None means
“not cached” — the caller’s job is to fall back to the source
of truth:
```ts
const cached = await cache.get<User>(`user:${id}`);
if (cached.isSome()) return cached.value;

const user = await db.users.findById(id); // ← source of truth
await cache.set(`user:${id}`, user, 60_000);
return user;
```

If set fails (Redis down, network blip), the cache stays
empty — but the call still returns the right answer (the
source of truth was consulted).
Cache implementations return defaults on transient failures rather than throwing. Misuse (bad TTL, malformed value) throws.
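The get → source of truth → set pattern can be wrapped once in a read-through helper. This is a sketch under assumptions: the `getOrLoad` name and loader signature are illustrative, and the `Option` shape is assumed; any Cache backend plugs in the same way.

```typescript
// Assumed Option shape and the slice of Cache this helper needs.
type Option<V> = { isSome(): boolean; value?: V };

interface CacheLike {
  get<V>(key: string): Promise<Option<V>>;
  set<V>(key: string, value: V, ttlMs?: number): Promise<void>;
}

async function getOrLoad<V>(
  cache: CacheLike,
  key: string,
  ttlMs: number,
  load: () => Promise<V>,
): Promise<V> {
  const cached = await cache.get<V>(key);
  if (cached.isSome()) return cached.value as V;

  const fresh = await load(); // the source of truth always wins
  // Best-effort write-back: a failed set leaves the cache cold,
  // but the caller still gets the right answer.
  try {
    await cache.set(key, fresh, ttlMs);
  } catch {
    /* opportunistic: ignore transient cache failures */
  }
  return fresh;
}
```

Callers then write `getOrLoad(cache, `user:${id}`, 60_000, () => db.users.findById(id))` instead of repeating the fallback dance at every call site.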
## TTL semantics

```ts
await cache.set('key', value, 60_000); // expires in 60s
await cache.set('key', value);         // no expiry
```

- With ttlMs — the entry expires after the window.
- Without — the entry stays until evicted (LRU on InMemoryCache; backend policy on Redis / Memcached).

Most uses should always set a TTL. No-TTL entries grow until eviction; explicit TTLs are predictable.
## Atomic increment

```ts
const count = await cache.incr(`requests:${userId}`, 60_000);
if (count > 100) throw new Error('rate limit');
```

incr returns the new count after incrementing. When
ttlMs is given AND the counter was just created (count === 1),
the TTL is applied.
Used by rate-limit middleware for fixed-window counters.
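A fixed-window counter on top of incr can be sketched as follows. The `checkRateLimit` name, the `requests:` key scheme, and the window-derived key suffix are assumptions for illustration, not the middleware’s actual internals.

```typescript
// The slice of Cache this sketch needs.
interface IncrCache {
  incr(key: string, ttlMs?: number): Promise<number>;
}

async function checkRateLimit(
  cache: IncrCache,
  userId: string,
  limit: number,
  windowMs: number,
): Promise<boolean> {
  // One counter per (user, window); the TTL lets the backend
  // expire old windows on its own.
  const windowId = Math.floor(Date.now() / windowMs);
  const count = await cache.incr(`requests:${userId}:${windowId}`, windowMs);
  return count <= limit; // true → allow, false → reject
}
```

Because incr is atomic, concurrent requests across pods never lose updates to the same counter.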
## Set-if-absent

```ts
const got = await cache.setIfAbsent('lock:key', 'me', 5_000);
if (got) {
  // I won the race; do the work
} else {
  // Someone else has it
}
```

An atomic CAS-style write. Used by the idempotency-key middleware to detect “I’m the first request with this key.”
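The first-writer-wins pattern generalizes to a small dedup helper. This is a sketch: the `runOnce` name, the `idem:` key prefix, and the claimed-marker value are illustrative assumptions, not the middleware’s real shape.

```typescript
// The slice of Cache this sketch needs.
interface SiaCache {
  setIfAbsent<V>(key: string, value: V, ttlMs?: number): Promise<boolean>;
}

async function runOnce<T>(
  cache: SiaCache,
  idempotencyKey: string,
  ttlMs: number,
  work: () => Promise<T>,
): Promise<T | undefined> {
  // Only the first caller with this key wins the atomic write.
  const first = await cache.setIfAbsent(`idem:${idempotencyKey}`, 'claimed', ttlMs);
  if (!first) return undefined; // duplicate request: skip the work
  return work();
}
```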
## Bulk operations

```ts
const users = await cache.mget<User>(['user:1', 'user:2', 'user:3']);
// → Map<string, User> — missing keys are simply absent
```

A round-trip optimization. Critical for shared-entity-hydration patterns after a sharding rebalance — pull every active entity’s state in one Redis call instead of N.

mset is the dual:
```ts
await cache.mset([
  ['user:1', user1, 60_000],
  ['user:2', user2, 60_000],
]);
```

## When NOT to use the cache
## Where to next
Section titled “Where to next”- In-memory cache — the default backend.
- Memcached cache — for Memcached deployments.
- Redis cache — for Redis deployments.
- Response cache middleware — the primary consumer.
The Cache API reference
covers the full interface.