# Rate limit middleware
`rateLimitMiddleware` enforces per-key request limits on a
route subtree. Default: per-IP, fixed-window counter via the
cache. Returns 429 when the limit is exceeded.
```ts
import { rateLimitMiddleware } from 'actor-ts/http';
import { InMemoryCache } from 'actor-ts';

const routes = rateLimitMiddleware({
  cache: new InMemoryCache(),
  windowMs: 60_000,  // 1 minute window
  maxRequests: 100,  // 100 reqs per IP per minute
})(
  path('api', /* ... */),
);
```

Each unique IP gets a 100/minute budget. Excess requests get
429 with a `Retry-After` header.
## Configuration

```ts
interface RateLimitSettings {
  cache: Cache;
  windowMs: number;
  maxRequests: number;
  keyExtra?: (req: HttpRequest) => string;
  message?: string;
  statusCode?: number; // default 429
}
```

| Field | Purpose |
|---|---|
| `cache` | Backing store (in-memory or Redis). |
| `windowMs` | Fixed-window size in milliseconds. |
| `maxRequests` | Max requests per key per window. |
| `keyExtra(req)` | Override the key. Default: client IP. |
| `message` | Body of the 429 response. |
| `statusCode` | Override the status (default 429). |
## Custom keying

```ts
rateLimitMiddleware({
  cache,
  windowMs: 60_000,
  maxRequests: 100,
  keyExtra: (req) => {
    const userId = extractUserId(req);
    return userId ? `user:${userId}` : `ip:${req.ip}`;
  },
});
```

Per-user when authenticated; per-IP when not. Common pattern:
- Auth’d routes — per-user limit.
- Public routes — per-IP limit.
Combine with the framework’s Cache abstraction so the limiter works across pods with Redis backing.
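For illustration, the keying logic can be written as a standalone pure function. The `Req` shape and the Bearer-token parsing inside `extractUserId` are simplifying assumptions for this sketch, not the framework's types:

```typescript
// Simplified stand-in for the framework's HttpRequest.
interface Req {
  ip: string;
  headers: Record<string, string>;
}

// Hypothetical helper: pull a user id out of an already-verified auth
// header. Assumes "Bearer <userId>" purely for illustration.
function extractUserId(req: Req): string | undefined {
  const auth = req.headers['authorization'];
  return auth?.startsWith('Bearer ') ? auth.slice('Bearer '.length) : undefined;
}

// Per-user key when authenticated, per-IP key otherwise.
function rateLimitKey(req: Req): string {
  const userId = extractUserId(req);
  return userId ? `user:${userId}` : `ip:${req.ip}`;
}
```

Because the key is a plain string, authenticated and anonymous traffic land in disjoint namespaces (`user:` vs `ip:`), so a logged-in user never shares a budget with other clients behind the same NAT.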
## How counting works

Fixed window (default):

- The window starts when the first request for a key arrives.
- Each request increments a counter in the cache.
- Once the counter exceeds `maxRequests`, subsequent requests in the window get 429.
- A TTL on the counter ensures fresh windows start cleanly.

The implementation uses the cache's `incr` with a TTL, which is
atomic across pods when backed by Redis.
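The steps above can be sketched as a self-contained fixed-window limiter. A `Map` stands in for the framework's `Cache`, and the class name and injectable clock are illustrative, not the library's API:

```typescript
// One counter per key, plus the timestamp at which its window expires
// (simulating the cache TTL).
interface WindowEntry { count: number; resetAt: number }

class FixedWindowLimiter {
  private entries = new Map<string, WindowEntry>();

  constructor(
    private windowMs: number,
    private maxRequests: number,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  /** Returns true if the request is allowed, false if it should get a 429. */
  allow(key: string): boolean {
    const t = this.now();
    let entry = this.entries.get(key);
    if (!entry || t >= entry.resetAt) {
      // First request of a fresh window: reset the counter, set the "TTL".
      entry = { count: 0, resetAt: t + this.windowMs };
      this.entries.set(key, entry);
    }
    entry.count += 1;
    return entry.count <= this.maxRequests;
  }
}
```

In the real middleware the increment-and-check is a single cache `incr`, so two pods racing on the same key cannot both read a stale count; the sketch above has no such race only because it is single-threaded.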
## Response headers

The middleware sets:

```
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 47
X-RateLimit-Reset: 1716297600   ← unix timestamp when window resets
Retry-After: 23                 ← (on 429 only) seconds until reset
```

Clients use these to back off without retrying randomly.
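One way these headers could be derived from the window state. The `WindowState` shape and the helper name are assumptions for illustration, not the middleware's internals:

```typescript
// Snapshot of a key's window: the configured limit, the current count,
// and when the window expires (unix milliseconds).
interface WindowState { limit: number; count: number; resetAt: number }

function rateLimitHeaders(state: WindowState, nowMs: number): Record<string, string> {
  const remaining = Math.max(0, state.limit - state.count);
  const headers: Record<string, string> = {
    'X-RateLimit-Limit': String(state.limit),
    'X-RateLimit-Remaining': String(remaining),
    // Reset is reported in unix seconds, matching the example above.
    'X-RateLimit-Reset': String(Math.floor(state.resetAt / 1000)),
  };
  if (state.count > state.limit) {
    // Only on 429: seconds the client should wait before retrying.
    headers['Retry-After'] = String(Math.ceil((state.resetAt - nowMs) / 1000));
  }
  return headers;
}
```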
## Cluster-aware rate limiting

```ts
import { RedisCache } from 'actor-ts';

rateLimitMiddleware({
  cache: new RedisCache({ url: 'redis://...' }),
  windowMs: 60_000,
  maxRequests: 100,
});
```

With a Redis-backed cache, all pods share the counter: a client hitting any pod accumulates against the same budget.
With InMemoryCache, each pod has its own counter — a client
distributed across 4 pods gets effectively 4× the limit.
Acceptable for permissive limits; problematic for strict ones.
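A quick self-contained simulation (not library code) shows the multiplication: with round-robin load balancing, each pod's local counter grants the full budget.

```typescript
// Each pod keeps its own in-memory counter, so a client spread across
// pods spends against podCount separate budgets in the same window.
function simulateRoundRobin(podCount: number, maxRequests: number, attempts: number): number {
  const counters = new Array(podCount).fill(0); // one local counter per pod
  let allowed = 0;
  for (let i = 0; i < attempts; i++) {
    const pod = i % podCount; // round-robin load balancing
    if (counters[pod] < maxRequests) {
      counters[pod] += 1;
      allowed += 1;
    }
  }
  return allowed;
}
```

With 4 pods and a 100-request limit, 1000 attempts in one window yield 400 allowed requests rather than the intended 100.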
## When to use it

Three good fits:
- Public APIs — prevent abuse / DoS.
- Tier-based limits — a different `keyExtra` per tier (free / paid / enterprise).
- Per-action limits — login attempts, password resets, email-send rates.
## Layered rate limits

```ts
// Different limits per route subtree:
const apiRoutes = rateLimitMiddleware({
  cache,
  windowMs: 60_000,
  maxRequests: 1000,
  keyExtra: (req) => req.user?.id ?? req.ip,
})(
  concat(
    rateLimitMiddleware({
      cache,
      windowMs: 60_000,
      maxRequests: 10,
      keyExtra: (req) => `login:${req.ip}`,
    })(path('login', post(loginHandler))),
    path('orders', /* normal limit */),
  ),
);
```

Stricter limits on sensitive routes; more permissive on general traffic. Compose by nesting.
## Where to next

- HTTP overview — the bigger picture.
- Response cache middleware — complementary read-side middleware.
- Idempotency-key middleware — for write-dedup.
- Cache overview — the backing store.