
Idempotency-key middleware

idempotencyKeyMiddleware enforces at-most-once write processing via a client-provided header:

POST /api/payments
Idempotency-Key: tx-1684923847-abc
→ 201 Created (first time — processed + stored)
{ "txId": "tx-42" }
POST /api/payments
Idempotency-Key: tx-1684923847-abc ← same key
→ 201 Created (second time — returned from cache, no re-processing)
{ "txId": "tx-42" }

The handler runs only on the first request. Subsequent requests with the same key return the cached response.
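The check-then-store flow can be sketched in a few lines. This is illustrative, not the library's actual implementation — the type and function names here (Resp, Req, withIdempotency) are assumptions for the sketch:

```typescript
// Minimal sketch of the check-then-store flow (illustrative, not the
// library's implementation): on a cache hit the stored response is
// replayed; on a miss the handler runs and its response is stored.
type Resp = { status: number; body: string };
type Req = { headers: Record<string, string>; body: string };
type Handler = (req: Req) => Resp;

function withIdempotency(cache: Map<string, Resp>, handler: Handler): Handler {
  return (req) => {
    const key = req.headers['idempotency-key'];
    if (!key) return handler(req);   // no key: process normally
    const cached = cache.get(key);
    if (cached) return cached;       // replay the stored response
    const res = handler(req);
    cache.set(key, res);             // remember it for future retries
    return res;
  };
}
```

The real middleware adds TTL expiry, per-key locking, and a pluggable cache on top of this core idea.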

import { idempotencyKeyMiddleware, path, post } from 'actor-ts/http';
import { InMemoryCache } from 'actor-ts';

const routes = idempotencyKeyMiddleware({
  cache: new InMemoryCache(),
  ttlMs: 24 * 60 * 60_000, // 24 hours
  required: false, // header is optional by default
})(
  path('api',
    path('payments', post(processPayment)),
  ),
);

Network retries are common. Without idempotency:

Client → POST /payments ($100)
→ network timeout (request actually succeeded server-side)
Client → POST /payments ($100) ← retry; charges twice

With idempotency-key, the retry sees “key already processed, here’s the original response.” No double charge.

interface IdempotencyKeyMiddlewareSettings {
  cache: Cache;
  ttlMs: number;
  required?: boolean; // require header; reject if missing
  headerName?: string; // default 'idempotency-key'
  scope?: (req) => string; // namespace keys by tenant / user
}
Field       Purpose
cache       Backing store. Redis is required for multi-pod deployments.
ttlMs       How long to remember each key. Typically 24-48 hours.
required    If true, missing header → 400. Default false (optional).
headerName  Customize the header name.
scope(req)  Namespace keys (e.g., per-user) so collisions don’t cross tenants.
A cached entry looks like this:

{
  status: 201,
  headers: { 'content-type': 'application/json' },
  body: '{"txId":"tx-42"}',
}

The middleware stores the complete response. Subsequent requests with the same key get an identical response — same status, headers, body.

For error responses, behavior is configurable. By default: 4xx + 5xx are also cached (so a “payment failed” reply isn’t re-processed into a “payment succeeded” on retry). This matches the standard idempotency-key semantic.
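This error-replay semantic can be demonstrated with a small sketch (the names store and handleOnce are illustrative, not the library's API):

```typescript
// Sketch of the default error-caching behavior: a 4xx response is stored
// like any other, so a retry replays the original failure instead of
// re-running the handler (illustrative, not the library's code).
type Stored = { status: number; body: string };
const store = new Map<string, Stored>();

function handleOnce(key: string, handler: () => Stored): Stored {
  const hit = store.get(key);
  if (hit) return hit;       // 4xx/5xx replay exactly like 2xx
  const res = handler();
  store.set(key, res);
  return res;
}

const first = handleOnce('pay-1', () => ({ status: 402, body: '{"error":"card declined"}' }));
const retry = handleOnce('pay-1', () => ({ status: 201, body: '{"txId":"tx-43"}' }));
// retry replays the original 402 — the second handler never runs
```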

Scoping keys per tenant:

idempotencyKeyMiddleware({
  cache,
  ttlMs: 24 * 60 * 60_000, // 24 hours
  scope: (req) => req.headers['x-tenant-id'] ?? 'global',
});

Tenant A’s key-123 is different from tenant B’s key-123. Important when:

  • Different tenants might pick the same key by chance.
  • You’re billing or auditing per-tenant.
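Conceptually, the scope value is prepended to the client's key before the cache lookup. A sketch (the cacheKey helper and the scope:key format are assumptions — the real key format is internal to the middleware):

```typescript
// Sketch of how scope might namespace cache keys (illustrative).
type Req = { headers: Record<string, string | undefined> };

// Same scope function as the configuration example above.
const scope = (req: Req) => req.headers['x-tenant-id'] ?? 'global';

// Hypothetical composite key: "<scope>:<client key>".
function cacheKey(req: Req, idempotencyKey: string): string {
  return `${scope(req)}:${idempotencyKey}`;
}
```

With this scheme, tenant-a:key-123 and tenant-b:key-123 are distinct cache entries even though both clients sent key-123.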
For multi-pod deployments, use a shared cache:

import { RedisCache } from 'actor-ts';

idempotencyKeyMiddleware({
  cache: new RedisCache({ url: 'redis://...' }),
  ttlMs: 24 * 60 * 60_000, // 24 hours
});

With Redis backing, every pod sees the same idempotency state — a retry to pod-2 after the original hit pod-1 returns the cached response.

InMemoryCache keeps per-pod state, so retries that hit a different pod could double-process. Always use Redis in production.

POST /api/payments ✓ idempotency-key recommended
POST /api/orders ✓ same
POST /api/emails ✓ avoid double-sends
PUT /api/users/:id ✓ retries safe
GET /api/users/me ✗ no need (idempotent already)
DELETE /api/orders/:id ✓ retries safe

Apply to any mutating endpoint where double-processing is harmful.

// Generate the key once, before the first attempt.
const random = Math.random().toString(36).slice(2);
const key = `${userId}-${operation}-${Date.now()}-${random}`;

fetch('/api/payments', {
  method: 'POST',
  headers: {
    'idempotency-key': key,
    'content-type': 'application/json',
  },
  body: JSON.stringify({ amount: 100 }),
});

// On retry: REUSE THE SAME KEY
fetch('/api/payments', {
  method: 'POST',
  headers: {
    'idempotency-key': key, // ← same key
    'content-type': 'application/json',
  },
  body: JSON.stringify({ amount: 100 }),
});

The client must generate the key and reuse the same key on every retry. If the client generates a fresh key per attempt, the middleware sees each attempt as a different request and processes all of them.

Common bug: generating a key inside the retry loop instead of once before the first attempt.
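A retry wrapper makes the correct placement obvious. In this sketch, postWithRetry and the injected doFetch parameter are illustrative (not the library's API) — the point is that the key is created once, outside the loop:

```typescript
// Sketch of a retrying client: the idempotency key is generated ONCE,
// before the retry loop, so every attempt sends the same key.
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string },
) => Promise<{ status: number }>;

async function postWithRetry(
  doFetch: FetchLike,
  url: string,
  body: string,
  attempts = 3,
): Promise<{ status: number }> {
  // Generate once — NOT inside the loop.
  const key = `${Date.now()}-${Math.random().toString(36).slice(2)}`;
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await doFetch(url, {
        method: 'POST',
        headers: { 'idempotency-key': key, 'content-type': 'application/json' },
        body,
      });
    } catch (err) {
      lastError = err; // network failure: retry with the SAME key
    }
  }
  throw lastError;
}
```

Injecting doFetch keeps the sketch testable; in application code you would pass the global fetch.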

If two requests with the same key arrive simultaneously (a double-click, a concurrent retry), only one of them should be processed — what the other one sees varies by implementation.

The framework’s middleware locks per key: the second request waits for the first to complete, then returns the cached response. The wait is usually sub-second and is bounded by the handler’s runtime.
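The wait-for-the-first-request behavior is a single-flight pattern: concurrent callers with the same key share one in-flight promise. A sketch, with illustrative names (not the library's internals):

```typescript
// Sketch of per-key single-flight: concurrent requests with the same key
// share one in-flight promise, so the handler runs once and the second
// caller awaits the first's result (illustrative, not the library's code).
const inFlight = new Map<string, Promise<string>>();

function singleFlight(key: string, handler: () => Promise<string>): Promise<string> {
  const pending = inFlight.get(key);
  if (pending) return pending;   // someone is already processing this key
  const p = handler().finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```

In the real middleware, once the first request finishes, its response lands in the response cache, so later requests with the same key are served from there rather than re-entering this lock.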