
HTTP overview

The HTTP module is separate from the I/O broker actors — HTTP servers don’t fit the “one connection many messages” shape, so they get their own DSL and backend abstraction.

```ts
import { ActorSystem, HttpExtensionId } from 'actor-ts';
import { path, get, post, concat, completeJson, entity } from 'actor-ts/http';

// Example request-body type for the POST handler below:
interface NewOrder { sku: string }

const system = ActorSystem.create('my-app');
const http = system.extension(HttpExtensionId);

const routes = path('api',
  path('orders',
    concat(
      get(async () => completeJson(200, { orders: [] })),
      post(async (req) => {
        const order = entity<NewOrder>(req);
        // ... handle ...
        return completeJson(201, { id: 'o-1' });
      }),
    ),
  ),
);

const binding = await http.newServerAt('0.0.0.0', 8080).bind(routes);
console.log(`bound on ${binding.host}:${binding.port}`);
```

Three things are going on here:

  • The route DSL: `path`, `get`, `post`, and `concat` compose a tree of routes. It is type-safe and compiles to a flat list at bind time.
  • The marshaller: `entity<T>(req)` decodes the request body by `Content-Type`; `completeJson` / `completeText` encode the response.
  • The extension: `system.extension(HttpExtensionId).newServerAt(host, port).bind(routes)` starts a server.
| Piece | Lives in | Page |
| --- | --- | --- |
| Routing | `actor-ts/http` exports `path`, `get`, `post`, `complete*`, etc. | Route DSL |
| Marshalling | `entity<T>(req)` decode + `completeJson` encode. | Marshalling |
| Backend | Pluggable HTTP server: Fastify by default, `Bun.serve`, Express. | Backends |
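To make the marshalling idea concrete in isolation, here is a toy model of content-type driven decoding. This is not the actor-ts implementation of `entity<T>`, just a self-contained sketch of the behavior the table describes (the `RawRequest` shape and `decodeEntity` name are our own):

```ts
// Toy model of content-type driven decoding: not actor-ts code,
// just the idea behind entity<T>(req).
interface RawRequest { headers: Record<string, string>; body: string }

function decodeEntity<T>(req: RawRequest): T {
  const contentType = req.headers['content-type'] ?? '';
  if (contentType.includes('application/json')) {
    return JSON.parse(req.body) as T;
  }
  if (contentType.includes('text/plain')) {
    return req.body as unknown as T;
  }
  // Unknown media types are rejected rather than silently passed through.
  throw new Error(`unsupported content-type: ${contentType}`);
}
```

The real marshaller adds more (charset handling, streaming bodies), but the dispatch-on-`Content-Type` shape is the core of it.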

The DSL builds an in-memory route tree; the extension’s newServerAt(...).bind(routes) flattens it and registers everything with the chosen backend.
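The "tree in, flat list out" step can be sketched as follows. This is an illustrative model only, not actor-ts internals: the `Route` union and `flatten` function here are assumptions made for the example.

```ts
// Illustrative model of what bind(routes) does: walk the route tree,
// accumulate path segments, emit one flat entry per method leaf.
type Handler = (req: unknown) => Promise<unknown>;

type Route =
  | { kind: 'path'; segment: string; child: Route }
  | { kind: 'method'; method: 'GET' | 'POST'; handler: Handler }
  | { kind: 'concat'; children: Route[] };

interface FlatRoute { method: string; path: string; handler: Handler }

function flatten(route: Route, prefix = ''): FlatRoute[] {
  switch (route.kind) {
    case 'path':
      return flatten(route.child, `${prefix}/${route.segment}`);
    case 'method':
      return [{ method: route.method, path: prefix || '/', handler: route.handler }];
    case 'concat':
      return route.children.flatMap((c) => flatten(c, prefix));
  }
}
```

The flat list is what gets handed to the backend, which is why the backend choice below has no effect on how routes are written.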

```ts
import { FastifyBackend, BunServeBackend, ExpressBackend } from 'actor-ts/http';

// Default (no setup needed): uses FastifyBackend.
await http.newServerAt(host, port).bind(routes);

// Explicit choice:
await http.newServerAt(host, port)
  .useBackend(new BunServeBackend())
  .bind(routes);
```
| Backend | Runtime fit | When |
| --- | --- | --- |
| `FastifyBackend` (default) | Bun, Node | Production default; Fastify is the most battle-tested. |
| `BunServeBackend` | Bun only | When you want zero peer dependencies; uses Bun's native `Bun.serve`. |
| `ExpressBackend` | Node, Bun | When the existing Express middleware ecosystem fits cleanly. |

All three implement the same `HttpServerBackend` interface: the route DSL compiles identically, and the backend owns only the listen/dispatch loop.
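The interface itself is not reproduced on this page (see the API reference for the real one). A plausible minimal shape, with names that are assumptions rather than the library's, plus a toy in-memory backend to show the contract:

```ts
// Hypothetical shape of HttpServerBackend: not copied from actor-ts,
// just what "owns the listen/dispatch loop" implies.
interface FlatRoute {
  method: string;
  path: string;
  handler: (req: { path: string; body?: string }) => Promise<{ status: number; body: string }>;
}

interface HttpServerBackend {
  // Receives the already-flattened routes and starts listening.
  listen(host: string, port: number, routes: FlatRoute[]): Promise<void>;
  close(): Promise<void>;
}

// A toy in-memory backend: dispatch() instead of real sockets.
class InMemoryBackend implements HttpServerBackend {
  private routes: FlatRoute[] = [];
  async listen(_host: string, _port: number, routes: FlatRoute[]) {
    this.routes = routes;
  }
  async close() {
    this.routes = [];
  }
  dispatch(method: string, path: string) {
    const match = this.routes.find((r) => r.method === method && r.path === path);
    return match
      ? match.handler({ path })
      : Promise.resolve({ status: 404, body: 'not found' });
  }
}
```

An in-memory backend like this is also a handy pattern for testing routes without opening a port.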

See the backend pages for the configuration options each accepts.

```ts
const response = await http.singleRequest({
  method: 'POST',
  url: 'https://api.example.com/orders',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({ sku: 'book-1' }),
});
console.log(response.status, response.body);
```

The shared client wraps the runtime's native `fetch`. It works identically on Bun, Node 20+, and Deno, since every supported runtime has `fetch` built in.

For more elaborate client patterns (retries, caching), wrap the client calls in retry and circuit-breaker logic.
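The library does not ship a retry helper on this page, but one is easy to write yourself. A minimal sketch (the `withRetry` helper is our own, not part of actor-ts) that you could put around `http.singleRequest`:

```ts
// A small generic retry helper with exponential backoff: not part of
// actor-ts. Wrap http.singleRequest (or any async call) with it.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off between attempts: baseDelayMs, 2x, 4x, ...
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Usage sketch, reusing the singleRequest shape from above:
// const response = await withRetry(() =>
//   http.singleRequest({ method: 'GET', url: 'https://api.example.com/orders' }),
// );
```

A circuit breaker composes the same way: another `() => Promise<T>` wrapper, placed outside or inside the retry depending on whether you want failed retries to count as one failure or several.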

```ts
import { concat, path, responseCacheMiddleware, rateLimitMiddleware } from 'actor-ts/http';

const routes = concat(
  rateLimitMiddleware({ rps: 100 })(
    responseCacheMiddleware({ ttlMs: 30_000 })(
      path('api', /* ... */),
    ),
  ),
);
```

The framework ships three middlewares:

| Middleware | What it does |
| --- | --- |
| Response cache | Caches GET responses keyed by URL + `Vary` headers. |
| Rate limit | Per-IP / per-key token-bucket rate limiter. |
| Idempotency key | De-duplicates writes based on an `Idempotency-Key` header. |

Each is a `Route -> Route` transformer: it wraps a sub-tree of routes with its behavior. Compose them by nesting. See the middleware pages.
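The same `Route -> Route` shape lets you write your own middleware. The sketch below models a route as a plain async handler, which is a simplification of the real actor-ts `Route` type, but the transformer shape is identical:

```ts
// Simplified model: here a Route is just an async handler. The real
// actor-ts Route type is richer, but the transformer shape is the same.
type Req = { path: string };
type Res = { status: number; body: string; headers?: Record<string, string> };
type Route = (req: Req) => Promise<Res>;

// A custom middleware is a function from Route to Route.
function withServerHeader(name: string) {
  return (inner: Route): Route =>
    async (req) => {
      const res = await inner(req);
      // Add a header to whatever the wrapped route returned.
      return { ...res, headers: { ...res.headers, server: name } };
    };
}

// Compose by nesting, exactly like the shipped middlewares above.
const route: Route = async () => ({ status: 200, body: 'ok' });
const wrapped = withServerHeader('actor-ts')(route);
```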

The route handler returns a `Promise<HttpResponse>`, so you can freely await actor calls inside:

```ts
import { ask } from 'actor-ts';
import { path, get, completeJson } from 'actor-ts/http';

const routes = path('orders',
  path(':id',
    get(async (req) => {
      const id = req.path.split('/').pop();
      const order = await ask(
        orderRegistry,
        { kind: 'get', id, replyTo: undefined as any },
        5_000,
      );
      return completeJson(200, order);
    }),
  ),
);
```

This is the common pattern — HTTP handlers act as a thin adapter: parse the request, ask an actor, marshal the reply. Keep business logic in actors; HTTP handlers stay short.

The HttpExtension API reference covers the full surface.