
Memcached cache

MemcachedCache is the Memcached-backed Cache implementation: a distributed cache shared across processes and pods.

import { MemcachedCache } from 'actor-ts';

const cache = new MemcachedCache({
  servers: ['memcached-1:11211', 'memcached-2:11211'],
});

There are two main reasons to choose MemcachedCache:

  1. Existing Memcached infrastructure — your team runs it already.
  2. Pure cache use case — you don’t need Redis’s extra features (persistence, pub/sub, scripting, sorted sets).

Memcached is simpler than Redis — fewer features, smaller operational footprint, smaller memory overhead. For pure key-value caching with TTLs, it’s plenty.

interface MemcachedCacheSettings {
  servers: string[]; // 'host:port'
  username?: string;
  password?: string;
  timeoutMs?: number;
  retries?: number;
}

memjs (the underlying client) supports SASL auth via username/password. Multiple servers are hash-distributed.
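A filled-in settings object might look like the sketch below. The interface is repeated so the snippet is self-contained; all values (hostnames, username, placeholder password, timeouts) are illustrative, not required defaults.

```typescript
// Mirrors the MemcachedCacheSettings interface above.
interface MemcachedCacheSettings {
  servers: string[]; // 'host:port'
  username?: string;
  password?: string;
  timeoutMs?: number;
  retries?: number;
}

const settings: MemcachedCacheSettings = {
  servers: ['memcached-1:11211', 'memcached-2:11211'], // hash-distributed
  username: 'cache-user',          // SASL auth, passed through to memjs
  password: 'example-password',    // placeholder — read from secrets in practice
  timeoutMs: 500,
  retries: 2,
};
```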

npm install memjs
# or: bun add memjs
| Cache method | Memcached implementation |
| --- | --- |
| get / set / delete | Direct Memcached commands. |
| incr | Memcached INCR/ADD combination. |
| setIfAbsent | Memcached ADD (atomic). |
| mget / mset | Parallel single-key operations. |
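The INCR/ADD combination exists because Memcached's INCR fails on a missing key. A common pattern: try INCR; on a miss, ADD the initial value; if the ADD loses a race to another writer, INCR again. A minimal sketch, against an assumed client interface (the framework's actual wiring may differ), with an in-memory fake for illustration:

```typescript
// Assumed minimal client shape — not the framework's real types.
interface CounterClient {
  increment(key: string, delta: number): Promise<number | null>; // null on miss
  add(key: string, value: string): Promise<boolean>;             // false if key exists
}

async function incr(client: CounterClient, key: string, delta = 1): Promise<number> {
  const bumped = await client.increment(key, delta);
  if (bumped !== null) return bumped;
  // Key missing: create it atomically with the initial value.
  if (await client.add(key, String(delta))) return delta;
  // Lost the ADD race — the key now exists, so increment it.
  return (await client.increment(key, delta))!;
}

// In-memory fake standing in for a Memcached server:
function fakeClient(): CounterClient {
  const store = new Map<string, string>();
  return {
    async increment(key, delta) {
      const cur = store.get(key);
      if (cur === undefined) return null;
      const next = Number(cur) + delta;
      store.set(key, String(next));
      return next;
    },
    async add(key, value) {
      if (store.has(key)) return false;
      store.set(key, value);
      return true;
    },
  };
}
```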

mget/mset aren’t single-round-trip on Memcached (no multi-key commands). The framework parallelizes the individual operations.
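That parallelization amounts to fanning one promise out per key. A sketch under the assumption that single-key get has the shape below — latency is roughly one round trip instead of N sequential ones:

```typescript
// Multi-key get built from parallel single-key gets.
// `get` is a stand-in for any single-key lookup, not the framework's real API.
async function mget<T>(
  get: (key: string) => Promise<T | undefined>,
  keys: string[],
): Promise<(T | undefined)[]> {
  // No multi-key command on the wire: one GET per key, issued concurrently.
  return Promise.all(keys.map((k) => get(k)));
}
```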

| Aspect | Memcached | Redis |
| --- | --- | --- |
| Memory overhead per entry | Lower | Higher |
| Multi-key operations | Slower (parallel single-key) | Faster (MGET, MSET) |
| Persistence | None | RDB / AOF available |
| Pub/sub | No | Yes |
| Data types | Strings only | Strings, lists, sets, hashes, sorted sets |
| Cluster support | Client-side hashing | Built-in clustering |
| Replication | No (or via sidecars) | Built-in replication |

For pure cache: either works. Redis wins for almost everything else the framework might want. Memcached wins for minimum operational footprint when caching is the only need.

// Memcached configured with --memory-limit=2048 (MB)
// → LRU eviction once full

Memcached’s eviction is LRU, configured at the Memcached-server level (not from the cache client). No client-side eviction policy.

For a fixed-size cache, this is fine — Memcached drops old entries to make room.
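Since the limit lives on the server, it's set when memcached starts. Standard memcached flags (not framework options):

```shell
# 2 GB cap; LRU eviction when full (the default behavior):
memcached -m 2048

# Same cap, but return errors on writes instead of evicting:
memcached -m 2048 -M
```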

memjs maintains its own connection pool; the framework doesn’t manage it directly. Tune via memjs options if needed (usually defaults are fine).
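If you do need to tune it, a direct memjs client shows the knobs — option names here follow memjs's documented API (note its `timeout` is in seconds, unlike the framework's `timeoutMs`):

```typescript
import { Client } from 'memjs';

// Standalone memjs client, bypassing the framework, for illustration.
const client = Client.create('memcached-1:11211,memcached-2:11211', {
  timeout: 0.5,    // seconds per operation
  retries: 2,      // retry attempts before failing
  keepAlive: true, // keep TCP connections warm
});
```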