In-memory cache

InMemoryCache is the default Cache implementation. In-process, LRU eviction, per-entry TTL — fast and zero dependencies.

import { InMemoryCache } from 'actor-ts';

const cache = new InMemoryCache({
  maxEntries: 10_000,
});

Three scenarios:

  1. Tests — fast, no I/O, clean teardown.
  2. Single-pod production — no need to share cache state.
  3. Dev / local — same code without Redis on the laptop.

For multi-pod production, use Redis or Memcached instead; with an in-process cache, each pod would hold its own independent copy.

interface InMemoryCacheSettings {
  maxEntries?: number; // LRU cap; default 10_000
  cleanupMs?: number;  // expired-entry sweep cadence; default 60_000
}
Field      | Purpose
maxEntries | Bound on cache size; LRU eviction kicks in beyond this.
cleanupMs  | How often to sweep expired entries. Without the sweep, expired entries linger until accessed.

For most apps, defaults are fine.
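When you do override them, both knobs go through the settings object. A sketch (the values here are arbitrary, not recommendations):

```typescript
import { InMemoryCache } from 'actor-ts';

// Smaller LRU cap and a more frequent expired-entry sweep than the defaults.
const cache = new InMemoryCache({
  maxEntries: 1_000,  // evict LRU-first past 1k entries
  cleanupMs: 10_000,  // sweep expired entries every 10s
});
```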

maxEntries: 1000
→ 1001st distinct key inserted → least-recently-used entry evicted

LRU means frequently accessed entries stay; rarely accessed ones go. Good for read-heavy caches where the hot set fits in memory but the long tail doesn’t.
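The eviction rule is easy to picture with a minimal sketch in plain TypeScript. This is illustrative only (`TinyLru` is not the actor-ts implementation); it leans on the fact that a JS `Map` iterates in insertion order, so deleting and re-inserting a key on each access keeps recently used keys at the tail:

```typescript
// Minimal LRU sketch: the Map's head is always the least recently used key.
class TinyLru<K, V> {
  private map = new Map<K, V>();
  constructor(private maxEntries: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key); // move key to the tail (most recently used)
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      // Oldest entry sits at the head of the Map's iteration order.
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }
}

const lru = new TinyLru<string, number>(2);
lru.set('a', 1);
lru.set('b', 2);
lru.get('a');    // touch 'a', so 'b' is now the least recently used
lru.set('c', 3); // capacity exceeded → 'b' is evicted
console.log(lru.get('b')); // undefined
console.log(lru.get('a')); // 1
```

Touching `'a'` before inserting `'c'` is what saves it; `'b'`, untouched, is the entry that goes.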

await cache.set('key', value, 60_000); // expires at now + 60s

Two cleanup paths:

  1. Lazy — on get, expired entries return None (and are removed).
  2. Periodic sweep — every cleanupMs, the cache walks expired entries and removes them. Reduces memory for write-then-never-read keys.
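Both paths can be sketched together in plain TypeScript (illustrative, not the actor-ts source; the injectable clock exists only to make expiry deterministic in the example):

```typescript
// Per-entry TTL with lazy expiry on read plus a periodic sweep.
type Entry<V> = { value: V; expiresAt: number };

class TtlCache<K, V> {
  private entries = new Map<K, Entry<V>>();
  constructor(private now: () => number = Date.now) {}

  set(key: K, value: V, ttlMs: number): void {
    this.entries.set(key, { value, expiresAt: this.now() + ttlMs });
  }

  get(key: K): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt <= this.now()) {
      this.entries.delete(key); // lazy cleanup: expired entry removed on read
      return undefined;
    }
    return entry.value;
  }

  // Periodic sweep: reclaims entries that expired but were never read again.
  sweep(): void {
    const t = this.now();
    for (const [key, entry] of this.entries) {
      if (entry.expiresAt <= t) this.entries.delete(key);
    }
  }
}

// Fake clock so the expiry is observable without waiting.
let clock = 0;
const cache = new TtlCache<string, string>(() => clock);
cache.set('k', 'v', 60_000);
console.log(cache.get('k')); // 'v'
clock = 60_001;
console.log(cache.get('k')); // undefined — expired and removed
```

A real deployment would leave the default `Date.now` clock in place and run `sweep` on a `cleanupMs` interval.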

Register a default cache through the cache extension:

import { CacheExtensionId } from 'actor-ts';

system.extension(CacheExtensionId).configure({
  defaultCache: new InMemoryCache({ maxEntries: 50_000 }),
});

// Reach it via:
const cache = system.extension(CacheExtensionId).cache;

This is a system-wide cache that multiple consumers (HTTP middleware, projection actors, custom code) share. Useful when:

  • You want one configured cache instance, not one per consumer.
  • Cache stats accumulate across the system.

For per-route caches, instantiate new InMemoryCache() directly without going through the extension.