
InMemoryCache

Defined in: src/cache/InMemoryCache.ts:20

Generic distributed-cache abstraction — used by HTTP middleware (response-cache, rate-limit, idempotency-key) and the optional CachedSnapshotStore decorator. Three implementations ship:

  • InMemoryCache — single-process Map; default, ideal for tests/dev.
  • RedisCache — wraps ioredis (optional peer dependency).
  • MemcachedCache — wraps memjs (optional peer dependency).

The surface is intentionally small. Seven operations cover ~95% of the real cases in this codebase; we deliberately exclude pattern scans (an anti-pattern at scale) and pub/sub (already provided by the cluster layer). Bulk mget / mset (#14) cut round-trips for the hot sharded-entity-hydration path after a rebalance.
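
Taken together, the method pages below imply that small surface. The interface here is an assumption reconstructed from the documented signatures, and ToyCache is a deliberately simplified Map-backed stand-in for InMemoryCache (TTL bookkeeping elided), not the shipped implementation:

```typescript
// Hypothetical reconstruction of the Cache surface from the signatures below.
// Option<V> is modeled minimally; the codebase's own Option type may differ.
type Option<V> = { some: true; value: V } | { some: false };
const Some = <V>(value: V): Option<V> => ({ some: true, value });
const None: Option<never> = { some: false };

interface Cache {
  get<V>(key: string): Promise<Option<V>>;
  set<V>(key: string, value: V, ttlMs?: number): Promise<void>;
  setIfAbsent<V>(key: string, value: V, ttlMs?: number): Promise<boolean>;
  incr(key: string, ttlMs?: number): Promise<number>;
  delete(...keys: string[]): Promise<void>;
  mget<V>(keys: readonly string[]): Promise<Map<string, V>>;
  mset<V>(entries: ReadonlyMap<string, V>, ttlMs?: number): Promise<void>;
  close(): Promise<void>;
}

// Toy single-process backend: a plain Map; TTL arguments accepted but ignored.
class ToyCache implements Cache {
  private store = new Map<string, unknown>();

  async get<V>(key: string): Promise<Option<V>> {
    return this.store.has(key) ? Some(this.store.get(key) as V) : None;
  }
  async set<V>(key: string, value: V): Promise<void> {
    this.store.set(key, value);
  }
  async setIfAbsent<V>(key: string, value: V): Promise<boolean> {
    if (this.store.has(key)) return false;
    this.store.set(key, value);
    return true;
  }
  async incr(key: string): Promise<number> {
    const next = ((this.store.get(key) as number | undefined) ?? 0) + 1;
    this.store.set(key, next);
    return next;
  }
  async delete(...keys: string[]): Promise<void> {
    for (const k of keys) this.store.delete(k);
  }
  async mget<V>(keys: readonly string[]): Promise<Map<string, V>> {
    const out = new Map<string, V>();
    for (const k of keys) {
      if (this.store.has(k)) out.set(k, this.store.get(k) as V);
    }
    return out;
  }
  async mset<V>(entries: ReadonlyMap<string, V>): Promise<void> {
    for (const [k, v] of entries) this.store.set(k, v);
  }
  async close(): Promise<void> {
    this.store.clear();
  }
}
```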

Failure model: a cache is opportunistic by definition. Backends are encouraged to return a sensible default rather than throw on transient connection errors — get returning None on a network failure is fine, since the caller’s job is to fall back to the source of truth anyway. Exceptions are reserved for misuse (invalid TTL, etc.).
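
One way to honor this failure model from the caller's side is a read-through helper; the sketch below is illustrative (the `readThrough` name and `load` callback are not part of the library), using a minimal Option type in place of the codebase's own:

```typescript
// Read-through pattern: a cache miss (including a backend failure surfaced
// as None) falls back to the source of truth, then backfills best-effort.
type Option<V> = { some: true; value: V } | { some: false };

interface Cache {
  get<V>(key: string): Promise<Option<V>>;
  set<V>(key: string, value: V, ttlMs?: number): Promise<void>;
}

async function readThrough<V>(
  cache: Cache,
  key: string,
  load: () => Promise<V>, // the source of truth
  ttlMs?: number,
): Promise<V> {
  const cached = await cache.get<V>(key);
  if (cached.some) return cached.value;
  const fresh = await load();
  // Best-effort backfill: a failed set must never fail the read.
  await cache.set(key, fresh, ttlMs).catch(() => {});
  return fresh;
}
```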

new InMemoryCache(): InMemoryCache

Returns: InMemoryCache

close(): Promise<void>

Defined in: src/cache/InMemoryCache.ts:102

Best-effort teardown. Idempotent.

Returns: Promise<void>

Implementation of: Cache.close


delete(…keys): Promise<void>

Defined in: src/cache/InMemoryCache.ts:73

Delete one or many keys. Idempotent — missing keys are a no-op.

Parameters: …keys: string[]

Returns: Promise<void>

Implementation of: Cache.delete


get<V>(key): Promise<Option<V>>

Defined in: src/cache/InMemoryCache.ts:23

Get a value; returns None on miss, expiry, or transient backend failure.

Type parameters: V

Parameters: key: string

Returns: Promise<Option<V>>

Implementation of: Cache.get


incr(key, ttlMs?): Promise<number>

Defined in: src/cache/InMemoryCache.ts:41

Atomic increment by 1 — returns the new value. When ttlMs is supplied AND the key was newly created (the counter value is 1 after the call), the TTL is set; subsequent increments do not refresh it. These are the right semantics for a fixed-window rate limiter.

Parameters: key: string; ttlMs?: number

Returns: Promise<number>

Implementation of: Cache.incr
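
For illustration, a fixed-window limiter built on these semantics might look like this (the `allowRequest` helper and `rl:` key prefix are hypothetical, not the shipped rate-limit middleware):

```typescript
// Fixed-window rate limiting on top of incr: the TTL is applied only when
// the counter is created, so the window ends windowMs after the first hit,
// and later requests in the same window do not extend it.
interface Cache {
  incr(key: string, ttlMs?: number): Promise<number>;
}

async function allowRequest(
  cache: Cache,
  clientId: string,
  limit: number,
  windowMs: number,
): Promise<boolean> {
  // First call in a window creates the counter (value 1) and sets the TTL;
  // subsequent calls just increment the existing counter.
  const count = await cache.incr(`rl:${clientId}`, windowMs);
  return count <= limit;
}
```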


mget<V>(keys): Promise<Map<string, V>>

Defined in: src/cache/InMemoryCache.ts:77

Bulk get (#14) — fetch multiple keys in a single round-trip when the backend supports it. Returns a Map keyed by the input keys; misses (no entry, expired, malformed payload, transient backend failure) are simply absent from the result rather than mapped to undefined. Map.get(k) therefore returns V | undefined with the same “missing key” semantics as the single-key get.

Order of the returned Map matches the order of the input keys for backends that support it (Redis MGET); backends that fall back to parallel single-key reads (Memcached) may surface a different iteration order — don’t rely on it.

Type parameters: V

Parameters: keys: readonly string[]

Returns: Promise<Map<string, V>>

Implementation of: Cache.mget


mset<V>(entries, ttlMs?): Promise<void>

Defined in: src/cache/InMemoryCache.ts:92

Bulk set (#14) — write multiple key/value pairs with a shared TTL. The atomicity guarantee is per backend: Redis issues a single MSET (no TTL) or pipelined SET ... PX (with TTL); Memcached has no native bulk write, so the calls go out in parallel. Single-process backends (InMemoryCache) trivially see the whole batch at once. ttlMs applies to every entry.

Type parameters: V

Parameters: entries: ReadonlyMap<string, V>; ttlMs?: number

Returns: Promise<void>

Implementation of: Cache.mset
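
The two bulk operations combine into the hydration pattern mentioned earlier. The sketch below is an assumption (the `hydrate` helper and `loadMany` callback are illustrative, not the CachedSnapshotStore code); note how absent keys in the mget result drive the miss set:

```typescript
// Bulk read-through: one mget for the hot path, one mset to backfill what
// was missing — each a single round-trip on backends that support it.
interface Cache {
  mget<V>(keys: readonly string[]): Promise<Map<string, V>>;
  mset<V>(entries: ReadonlyMap<string, V>, ttlMs?: number): Promise<void>;
}

async function hydrate<V>(
  cache: Cache,
  keys: readonly string[],
  loadMany: (missing: readonly string[]) => Promise<Map<string, V>>,
  ttlMs?: number,
): Promise<Map<string, V>> {
  const result = await cache.mget<V>(keys);
  // Misses are absent from the Map, not mapped to undefined.
  const missing = keys.filter((k) => !result.has(k));
  if (missing.length > 0) {
    const loaded = await loadMany(missing);
    // Best-effort backfill; a failed mset must not fail the hydration.
    await cache.mset(loaded, ttlMs).catch(() => {});
    for (const [k, v] of loaded) result.set(k, v);
  }
  return result;
}
```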


set<V>(key, value, ttlMs?): Promise<void>

Defined in: src/cache/InMemoryCache.ts:33

Set a value with optional TTL (milliseconds). Omitting ttlMs means no expiry.

Type parameters: V

Parameters: key: string; value: V; ttlMs?: number

Returns: Promise<void>

Implementation of: Cache.set


setIfAbsent<V>(key, value, ttlMs?): Promise<boolean>

Defined in: src/cache/InMemoryCache.ts:61

Set only if the key does not yet exist. Returns true on success (the value was stored), false on collision (someone else got there first). Used as the kernel of idempotency-key dedup.

Type parameters: V

Parameters: key: string; value: V; ttlMs?: number

Returns: Promise<boolean>

Implementation of: Cache.setIfAbsent
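
A minimal sketch of that dedup kernel, assuming a `runOnce` helper and `idem:` key prefix that are illustrative rather than the shipped idempotency-key middleware:

```typescript
// Idempotency-key guard: the first caller wins the setIfAbsent race and does
// the work; concurrent or repeated callers see false and short-circuit.
interface Cache {
  setIfAbsent<V>(key: string, value: V, ttlMs?: number): Promise<boolean>;
}

async function runOnce<T>(
  cache: Cache,
  idempotencyKey: string,
  ttlMs: number,
  work: () => Promise<T>,
): Promise<T | undefined> {
  const first = await cache.setIfAbsent(`idem:${idempotencyKey}`, "in-flight", ttlMs);
  if (!first) return undefined; // duplicate request — already handled
  return work();
}
```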


sizeForTest(): number

Defined in: src/cache/InMemoryCache.ts:107

Test hook — current entry count, including expired-but-not-cleaned entries.

Returns: number