# Memcached cache

`MemcachedCache` is the Memcached-backed `Cache` implementation: a distributed cache shared across pods and processes.
```ts
import { MemcachedCache } from 'actor-ts';

const cache = new MemcachedCache({
  servers: ['memcached-1:11211', 'memcached-2:11211'],
});
```

## When to use Memcached

Two main reasons:
- Existing Memcached infrastructure — your team runs it already.
- Pure cache use case — you don’t need Redis’s extra features (persistence, pub/sub, scripting, sorted sets).
Memcached is simpler than Redis — fewer features, smaller operational footprint, smaller memory overhead. For pure key-value caching with TTLs, it’s plenty.
## Configuration

```ts
interface MemcachedCacheSettings {
  servers: string[]; // 'host:port'
  username?: string;
  password?: string;
  timeoutMs?: number;
  retries?: number;
}
```

memjs (the underlying client) supports SASL auth via `username`/`password`. Multiple servers are hash-distributed.
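As an illustration, a settings object might look like this. The hostnames and values are hypothetical; the field names come from `MemcachedCacheSettings` above:

```ts
// Hypothetical example values — adjust to your deployment.
const settings = {
  servers: ['memcached-1:11211', 'memcached-2:11211'], // keys hash-distributed across both
  username: 'cacheuser',                    // SASL username (passed through to memjs)
  password: process.env.MEMCACHED_PASSWORD, // SASL password (passed through to memjs)
  timeoutMs: 500, // per-operation timeout
  retries: 2,     // retry attempts before an operation fails
};
```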
## Peer dependency

```sh
npm install memjs
# or: bun add memjs
```

## What works
Section titled “What works”| Cache method | Memcached implementation |
|---|---|
| get/set/delete | Direct Memcached commands. |
| incr | Memcached INCR/ADD combination. |
| setIfAbsent | Memcached ADD (atomic). |
| mget/mset | Parallel single-key operations. |
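The INCR/ADD combination for `incr` can be sketched as follows. This is an illustrative pattern, not the framework's actual code; the `CounterStore` interface is a stand-in for the underlying client. Memcached's INCR fails on a missing key, so the client falls back to an atomic ADD and retries INCR if another client wins the race:

```ts
interface CounterStore {
  // Returns the new value, or null if the key does not exist.
  increment(key: string, delta: number): Promise<number | null>;
  // Atomic create: returns false if the key already exists.
  add(key: string, value: string): Promise<boolean>;
}

async function incr(store: CounterStore, key: string, delta = 1): Promise<number> {
  const bumped = await store.increment(key, delta);
  if (bumped !== null) return bumped;
  // Key missing: try to create it with the initial value (atomic ADD).
  if (await store.add(key, String(delta))) return delta;
  // Lost the race to another client's ADD: increment the value it created.
  const retried = await store.increment(key, delta);
  if (retried === null) throw new Error('key vanished between ADD and INCR');
  return retried;
}
```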
mget/mset aren’t single-round-trip on Memcached (no
multi-key commands). The framework parallelizes the
individual operations.
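The parallelization described above can be sketched like this (a simplified stand-in, not the framework's actual implementation):

```ts
// Memcached has no multi-key GET command, so mget issues one
// single-key get per key and runs them all concurrently.
type Get = (key: string) => Promise<string | null>;

async function mget(get: Get, keys: string[]): Promise<(string | null)[]> {
  // n round trips, but all in flight at once, so latency is roughly
  // one round trip rather than n sequential ones.
  return Promise.all(keys.map((key) => get(key)));
}
```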
## Memcached vs Redis

| Aspect | Memcached | Redis |
|---|---|---|
| Memory overhead per entry | Lower | Higher |
| Multi-key operations | Slower (parallel single-key) | Faster (MGET, MSET) |
| Persistence | None | RDB / AOF available |
| Pub/sub | No | Yes |
| Data types | String only | Strings, lists, sets, hashes, sorted sets |
| Cluster support | Client-side hashing | Built-in clustering |
| Replication | No (or via sidecars) | Built-in replication |
For pure cache: either works. Redis wins for almost everything else the framework might want. Memcached wins for minimum operational footprint when caching is the only need.
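The client-side hashing mentioned in the table can be illustrated with a sketch. memjs handles this internally; the simplified modulo-of-hash version below (using a standard FNV-1a hash) just shows how a key deterministically picks a server with no coordination:

```ts
// Simplified illustration of client-side key distribution.
// Real clients typically use consistent hashing so that adding or removing
// a server remaps only a fraction of the keys; plain modulo remaps most.

function fnv1a(key: string): number {
  let hash = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (let i = 0; i < key.length; i++) {
    hash ^= key.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime, kept unsigned
  }
  return hash;
}

function pickServer(key: string, servers: string[]): string {
  // The same key always maps to the same server.
  return servers[fnv1a(key) % servers.length];
}
```

Because routing is purely client-side, every client must agree on the server list and the hash; that is why the table lists Memcached's cluster support as client-side hashing.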
## Eviction

```sh
# Memcached configured with --memory-limit=2048 (MB)
# → LRU eviction once full
```

Memcached's eviction is LRU, configured at the Memcached server level (not from the cache client); there is no client-side eviction policy. For a fixed-size cache, this is fine — Memcached drops the least-recently-used entries to make room.
## Connection pooling

memjs maintains its own connection pool; the framework doesn't manage it directly. Tune via memjs options if needed (the defaults are usually fine).
## Where to next

- Cache overview — the bigger picture.
- In-memory cache — for single-pod deployments.
- Redis cache — more features.