# Idempotency-key middleware
`idempotencyKeyMiddleware` enforces at-most-once write processing via a client-provided header:
```
POST /api/payments
Idempotency-Key: tx-1684923847-abc

→ 201 Created (first time — processed + stored)
{ "txId": "tx-42" }

POST /api/payments
Idempotency-Key: tx-1684923847-abc   ← same key

→ 201 Created (second time — returned from cache, no re-processing)
{ "txId": "tx-42" }
```

The handler runs only on the first request. Subsequent requests with the same key return the cached response.
```ts
import { idempotencyKeyMiddleware, path, post } from 'actor-ts/http';
import { InMemoryCache } from 'actor-ts';

const routes = idempotencyKeyMiddleware({
  cache: new InMemoryCache(),
  ttlMs: 24 * 60 * 60_000, // 24 hours
  required: false,         // header is optional by default
})(
  path('api',
    path('payments', post(processPayment)),
  ),
);
```

## Why this matters
Network retries are common. Without idempotency:
```
Client → POST /payments ($100) → network timeout
         (request actually succeeded server-side)
Client → POST /payments ($100) ← retry; charges twice
```

With an idempotency key, the retry sees “key already processed, here’s the original response.” No double charge.
## Configuration

```ts
interface IdempotencyKeyMiddlewareSettings {
  cache: Cache;
  ttlMs: number;
  required?: boolean;       // require the header; reject if missing
  headerName?: string;      // default 'idempotency-key'
  scope?: (req) => string;  // namespace keys by tenant / user
}
```

| Field | Purpose |
|---|---|
| `cache` | Backing store. Redis is required for multi-pod deployments. |
| `ttlMs` | How long to remember each key. Typically 24–48 hours. |
| `required` | If `true`, missing header → 400. Default `false` (optional). |
| `headerName` | Customize the header name. |
| `scope(req)` | Namespace keys (e.g., per-user) so collisions don’t cross tenants. |
## What gets cached

```ts
{
  status: 201,
  headers: { 'content-type': 'application/json' },
  body: '{"txId":"tx-42"}',
}
```

The middleware stores the complete response. Subsequent requests with the same key get an identical response — same status, headers, body.
For error responses, the behavior is configurable. By default, 4xx and 5xx responses are also cached, so a “payment failed” reply isn’t re-processed into a “payment succeeded” on retry. This matches standard idempotency-key semantics.
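To make the replay semantics concrete, here is a minimal sketch of the check-then-cache flow. The names (`withIdempotency`, the `KV` shape) are hypothetical illustrations, not the library’s internals:

```typescript
// Hypothetical sketch of the idempotency check-then-cache flow.
type CachedResponse = { status: number; headers: Record<string, string>; body: string };

interface KV {
  get(key: string): CachedResponse | undefined;
  set(key: string, value: CachedResponse): void;
}

// Returns the cached response on a repeat key; otherwise runs the
// handler once and stores its full response (4xx/5xx included).
function withIdempotency(
  cache: KV,
  key: string,
  handler: () => CachedResponse,
): CachedResponse {
  const hit = cache.get(key);
  if (hit) return hit; // replay: same status, headers, body
  const res = handler();
  cache.set(key, res); // errors are cached too by default
  return res;
}
```

Note that the error response is stored just like a success: a retried “card declined” replays as “card declined” rather than charging the card a second time.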
## Per-tenant scoping

```ts
idempotencyKeyMiddleware({
  cache,
  ttlMs: 24 * 60 * 60_000,
  scope: (req) => req.headers['x-tenant-id'] ?? 'global',
});
```

Tenant A’s `key-123` is different from tenant B’s `key-123`.
Important when:
- Different tenants might pick the same key by chance.
- You’re billing or auditing per-tenant.
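One way to picture scoping is as a namespace prefix on the cache key. The `cacheKey` helper below is purely illustrative; the middleware’s actual key format is an implementation detail:

```typescript
// Illustrative sketch: combining the scope function's output with the
// header value so identical keys from different tenants never collide.
type Req = { headers: Record<string, string | undefined> };

function cacheKey(req: Req, scope?: (req: Req) => string): string {
  const header = req.headers['idempotency-key'] ?? '';
  const ns = scope ? scope(req) : 'global';
  return `${ns}:${header}`; // tenant A's key-123 !== tenant B's key-123
}
```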
## Multi-pod with Redis

```ts
import { RedisCache } from 'actor-ts';

idempotencyKeyMiddleware({
  cache: new RedisCache({ url: 'redis://...' }),
  ttlMs: 24 * 60 * 60_000,
});
```

With Redis backing, every pod sees the same idempotency state — a retry to pod-2 after the original hit pod-1 returns the cached response.
`InMemoryCache` means per-pod state, so retries that land on a different pod can be double-processed. Always use Redis in production.
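The contract any backing store must honor is simple: remember each key’s response for `ttlMs`, then forget it. A toy in-memory sketch of that contract (not the library’s actual `Cache` interface):

```typescript
// Toy TTL store illustrating the backing-store contract: entries are
// remembered for ttlMs milliseconds and forgotten afterwards.
class TtlStore<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  // `now` is injectable for testing; defaults to the wall clock.
  set(key: string, value: V, ttlMs: number, now = Date.now()): void {
    this.entries.set(key, { value, expiresAt: now + ttlMs });
  }

  get(key: string, now = Date.now()): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (now >= entry.expiresAt) {
      this.entries.delete(key); // expired: forget the key
      return undefined;
    }
    return entry.value;
  }
}
```

After expiry a reused key is treated as a brand-new request, which is why `ttlMs` should comfortably exceed your clients’ longest retry window.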
## Where to use

```
POST   /api/payments     ✓ idempotency-key recommended
POST   /api/orders       ✓ same
POST   /api/emails       ✓ avoid double-sends
PUT    /api/users/:id    ✓ retries safe
GET    /api/users/me     ✗ no need (idempotent already)
DELETE /api/orders/:id   ✓ retries safe
```

Apply to any mutating endpoint where double-processing is harmful.
## Client-side responsibility

```ts
const key = `${userId}-${operation}-${Date.now()}-${random}`;

fetch('/api/payments', {
  method: 'POST',
  headers: {
    'idempotency-key': key,
    'content-type': 'application/json',
  },
  body: JSON.stringify({ amount: 100 }),
});

// On retry: REUSE THE SAME KEY
fetch('/api/payments', {
  method: 'POST',
  headers: { 'idempotency-key': key }, // ← same key
  body: JSON.stringify({ amount: 100 }),
});
```

The client must generate the key and retry with the same key. If the client generates a fresh key per attempt, the middleware sees each attempt as a different request and processes every one.
Common bug: generating a key inside the retry loop instead of once before the first attempt.
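A sketch of the correct pattern, with the key generated once before the loop. `postWithRetry`, its key format, and the retry policy are illustrative, not part of the library:

```typescript
// Illustrative retry helper: the idempotency key is generated ONCE,
// before the first attempt, and reused on every retry.
async function postWithRetry(url: string, body: unknown, attempts = 3): Promise<Response> {
  // Outside the loop — this is the whole point.
  const key = `pay-${Date.now()}-${Math.random().toString(36).slice(2)}`;
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fetch(url, {
        method: 'POST',
        headers: { 'idempotency-key': key, 'content-type': 'application/json' },
        body: JSON.stringify(body),
      });
    } catch (err) {
      lastError = err; // network error: retry with the SAME key
    }
  }
  throw lastError;
}
```

Moving the `const key = …` line inside the `for` loop reproduces the bug: every attempt would carry a fresh key and each would be processed.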
## In-flight handling

If two requests with the same key arrive simultaneously (double-click, concurrent retry):
- The first to grab the cache lock processes; what the second sees while the first is still in flight varies by implementation.

This framework’s middleware locks per key: the second request waits for the first to complete, then returns the cached response. The wait is usually sub-second and is bounded by the handler’s runtime.
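The waiting behavior can be pictured as per-key single-flight. The sketch below shows the idea in-process; the middleware’s actual locking is internal and, with a Redis backend, distributed:

```typescript
// Sketch of per-key single-flight: concurrent requests with the same
// key share one handler invocation; later arrivals await the first.
const inFlight = new Map<string, Promise<string>>();

async function singleFlight(key: string, handler: () => Promise<string>): Promise<string> {
  const pending = inFlight.get(key);
  if (pending) return pending; // second request waits for the first
  const run = handler().finally(() => inFlight.delete(key));
  inFlight.set(key, run);
  return run;
}
```

This sketch only deduplicates requests that overlap in time within one process; replaying completed responses is the cache’s job, as described above.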
## Where to next

- HTTP overview — the bigger picture.
- Response cache middleware — complementary read-side.
- Rate limit middleware — per-key request limits.
- Cache overview — the backing store.