Rate limit middleware

rateLimitMiddleware enforces per-key request limits on a route subtree. Default: per-IP, fixed-window counter via the cache. Returns 429 when the limit is exceeded.

```typescript
import { rateLimitMiddleware } from 'actor-ts/http';
import { InMemoryCache } from 'actor-ts';

const routes = rateLimitMiddleware({
  cache: new InMemoryCache(),
  windowMs: 60_000,   // 1 minute window
  maxRequests: 100,   // 100 reqs per IP per minute
})(
  path('api', /* ... */),
);
```

Each unique IP gets a 100-requests-per-minute budget. Excess requests receive a 429 with a Retry-After header.

```typescript
interface RateLimitSettings {
  cache: Cache;
  windowMs: number;
  maxRequests: number;
  keyExtra?: (req: HttpRequest) => string;
  message?: string;
  statusCode?: number; // default 429
}
```
| Field | Purpose |
| --- | --- |
| cache | Backing store (in-memory or Redis). |
| windowMs | Fixed-window size in milliseconds. |
| maxRequests | Max requests per key per window. |
| keyExtra(req) | Override the key. Default: client IP. |
| message | Body of the 429 response. |
| statusCode | Override the status (default 429). |
```typescript
rateLimitMiddleware({
  cache,
  windowMs: 60_000,
  maxRequests: 100,
  keyExtra: (req) => {
    const userId = extractUserId(req);
    return userId ? `user:${userId}` : `ip:${req.ip}`;
  },
});
```

Per-user when authenticated; per-IP when not. Common pattern:

  • Auth’d routes — per-user limit.
  • Public routes — per-IP limit.

Combine with the framework’s Cache abstraction so the limiter works across pods with Redis backing.

Fixed window (default):
- The window starts when the first request for a key arrives.
- Each request increments a counter in the cache.
- When the counter exceeds maxRequests, subsequent requests in the window get 429.
- A TTL on the counter ensures fresh windows start cleanly.

The implementation uses the cache’s incr with TTL — atomic across pods when backed by Redis.
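The fixed-window steps above can be sketched as a self-contained counter (an illustrative sketch, not actor-ts's implementation — here a plain Map with a resetAt timestamp stands in for the cache's atomic incr-with-TTL):

```typescript
// One window per key: how many requests were seen, and when the window resets.
type Window = { count: number; resetAt: number };

class FixedWindowLimiter {
  private windows = new Map<string, Window>();

  constructor(
    private windowMs: number,
    private maxRequests: number,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  // Record one request for `key`; returns true if allowed, false if over limit.
  hit(key: string): boolean {
    const t = this.now();
    let w = this.windows.get(key);
    if (!w || t >= w.resetAt) {
      // Fresh window: starts when the first request for a key arrives.
      w = { count: 0, resetAt: t + this.windowMs };
      this.windows.set(key, w);
    }
    w.count += 1;
    return w.count <= this.maxRequests;
  }
}
```

In the real middleware the Map lives in the Cache, so a Redis backing makes the increment atomic across pods.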

The middleware sets:

X-RateLimit-Limit: 100
X-RateLimit-Remaining: 47
X-RateLimit-Reset: 1716297600 ← unix timestamp when window resets
Retry-After: 23 ← (on 429 only) seconds until reset

Clients can use these headers to back off deterministically instead of retrying blindly.
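Deriving those header values from the counter state is straightforward (a sketch — the function name and signature are illustrative, not actor-ts's API; only the header names come from the output above):

```typescript
// count: requests seen in this window, including the current one.
// resetAtMs: wall-clock milliseconds when the window resets.
function rateLimitHeaders(
  count: number,
  maxRequests: number,
  resetAtMs: number,
  nowMs: number,
): Record<string, string> {
  const headers: Record<string, string> = {
    "X-RateLimit-Limit": String(maxRequests),
    "X-RateLimit-Remaining": String(Math.max(0, maxRequests - count)),
    "X-RateLimit-Reset": String(Math.floor(resetAtMs / 1000)), // unix seconds
  };
  if (count > maxRequests) {
    // Only on 429: whole seconds until the window resets.
    headers["Retry-After"] = String(Math.ceil((resetAtMs - nowMs) / 1000));
  }
  return headers;
}
```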

```typescript
import { RedisCache } from 'actor-ts';

rateLimitMiddleware({
  cache: new RedisCache({ url: 'redis://...' }),
  windowMs: 60_000,
  maxRequests: 100,
});
```

With Redis-backed cache, all pods share the counter — a client hitting any pod accumulates against the same budget.

With InMemoryCache, each pod has its own counter — a client distributed across 4 pods gets effectively 4× the limit. Acceptable for permissive limits; problematic for strict ones.
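The skew is easy to quantify (a back-of-envelope sketch, assuming the load balancer spreads one client's requests evenly across pods):

```typescript
// With a shared (Redis) counter there is one budget regardless of pod count;
// with per-pod (in-memory) counters, each pod grants its own budget.
function effectiveLimit(
  maxRequests: number,
  pods: number,
  sharedCounter: boolean,
): number {
  return sharedCounter ? maxRequests : maxRequests * pods;
}
```

With maxRequests: 100 and 4 pods, in-memory counters allow up to 400 requests per window, while a Redis-backed counter holds the line at 100.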

Three good fits:

  1. Public APIs — prevent abuse / DoS.
  2. Tier-based limits — different keyExtra per tier (free / paid / enterprise).
  3. Per-action limits — login attempts, password resets, email-send rates.
```typescript
// Different limits per route subtree:
const apiRoutes = rateLimitMiddleware({
  cache,
  windowMs: 60_000,
  maxRequests: 1000,
  keyExtra: (req) => req.user?.id ?? req.ip,
})(
  concat(
    rateLimitMiddleware({
      cache,
      windowMs: 60_000,
      maxRequests: 10,
      keyExtra: (req) => `login:${req.ip}`,
    })(path('login', post(loginHandler))),
    path('orders', /* normal limit */),
  ),
);
```

Stricter limits on sensitive routes; more permissive on general traffic. Compose by nesting.