
Management overview

HttpManagement is a small HTTP server for operations. It runs separately from your app’s main HTTP server, on its own port (8558 by convention), and exposes:

  • Health probes — liveness + readiness for K8s.
  • Cluster info — members, leader, sharding regions.
  • Metrics — Prometheus exposition (optional).
  • Admin endpoints — leave / down (optional, off by default).
```ts
import { HttpManagement } from 'actor-ts';

await HttpManagement.start(system, {
  port: 8558,
  cluster, // optional — enables cluster endpoints
  enableMetricsEndpoint: true,
});
```

That’s it. The server runs on 0.0.0.0:8558, serving the endpoints below.

```ts
interface HttpManagementSettings {
  port: number;
  host?: string;                   // default '0.0.0.0'
  cluster?: Cluster | null;        // enables cluster routes
  enableLeaveEndpoint?: boolean;   // default false
  enableDownEndpoint?: boolean;    // default false
  enableMetricsEndpoint?: boolean; // default false
}
```

Most production deployments:

```ts
await HttpManagement.start(system, {
  port: 8558,
  cluster,
  enableMetricsEndpoint: true, // for Prometheus
  enableLeaveEndpoint: false,  // admin-only; gate behind auth
  enableDownEndpoint: false,
});
```
| Endpoint | Always on? | Purpose |
| --- | --- | --- |
| `GET /health` | Yes | Liveness — 200 iff health checks pass. |
| `GET /ready` | Yes | Readiness — 200 iff cluster up + checks pass. |
| `GET /cluster/members` | When `cluster` is set | Membership JSON. |
| `GET /cluster/leader` | When `cluster` is set | Leader address. |
| `GET /cluster/shards?type=<name>` | When `cluster` is set | Shard placement for a sharded type. |
| `POST /cluster/leave` | Opt-in (`enableLeaveEndpoint`) | Trigger graceful cluster-leave. |
| `POST /cluster/down` | Opt-in (`enableDownEndpoint`) | Force-down a peer by address. |
| `GET /metrics` | Opt-in (`enableMetricsEndpoint`) | Prometheus text format. |

See HTTP endpoints for the full surface + response shapes.
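For example, a deploy script can poll the readiness endpoint before routing traffic to a new node. A minimal sketch, with a hypothetical `waitUntilReady` helper and an injectable `get` function (neither is part of the framework):

```typescript
// Sketch: poll GET /ready until it returns 200 or we give up.
// `waitUntilReady` and `Get` are illustrations, not framework API.
type Get = (url: string) => Promise<{ status: number }>;

async function waitUntilReady(
  baseUrl: string,
  get: Get,
  { attempts = 30, delayMs = 1000 } = {},
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await get(`${baseUrl}/ready`);
      if (res.status === 200) return true; // node is ready for traffic
    } catch {
      // server not listening yet — fall through and retry
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false;
}
```

In production `get` would just be the global `fetch`; injecting it keeps the helper easy to test.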

```ts
const { health } = await HttpManagement.start(system, { port: 8558 });

health.addCheck('database', async () => {
  return (await db.ping()) ? { ok: true } : { ok: false, reason: 'db unreachable' };
});
health.addCheck('cache', async () => {
  return (await cache.ping()) ? { ok: true } : { ok: false };
});
```

Custom checks plug into /health and /ready. A failing check makes the endpoint return 503. See Health checks.
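Since checks run on every probe, a hung dependency can stall the probe itself. One common pattern is to wrap a check with a timeout so it fails fast instead. A sketch with a hypothetical `withTimeout` helper (local code, not framework API):

```typescript
// Sketch: race a health check against a timer, so a hung dependency makes
// the check fail fast instead of stalling the probe.
// `CheckResult` mirrors the { ok, reason? } shape used by addCheck above.
type CheckResult = { ok: boolean; reason?: string };

function withTimeout(
  check: () => Promise<CheckResult>,
  ms: number,
): () => Promise<CheckResult> {
  return () =>
    Promise.race([
      check(),
      new Promise<CheckResult>((resolve) =>
        setTimeout(() => resolve({ ok: false, reason: `timed out after ${ms}ms` }), ms),
      ),
    ]);
}

// Usage: health.addCheck('database', withTimeout(() => dbCheck(), 2000));
```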

```yaml
# In your pod spec:
readinessProbe:
  httpGet:
    path: /ready
    port: 8558
  initialDelaySeconds: 5
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /health
    port: 8558
  initialDelaySeconds: 30
  periodSeconds: 10
```

K8s polls these endpoints to decide if the pod should receive traffic (ready) or be restarted (live). See Kubernetes deployment for the full deployment recipe.

  • App port (8080): public, behind a load balancer.
  • Management port (8558): internal-only, firewalled off.

The management endpoints reveal internal state: cluster member addresses, metric values, and so on. Exposing them publicly is a security risk. Keep them on a separate port and firewall them off to the internal network.

In K8s, this is per-pod — probes hit :8558 from the kubelet (same node), but no Service exposes it externally.

For stricter access control:

```yaml
# In your Service / Ingress config:
# - 8080 → public
# - 8558 → not exposed publicly; mTLS internally
```

Some production setups expose management behind a side-car proxy that handles auth (Envoy + JWT, Linkerd + mTLS). The framework doesn’t bundle auth — keep management closed; let infrastructure handle access control.

Two cases where you might skip the standalone management server:

  1. You already have an HTTP server and want to mount the management routes inline. Use managementRoutes(system, cluster) directly and bind the result into your existing HTTP routes.
  2. You’re running a single-node app without a cluster, where health checks alone might not justify the extra port. Consider wiring just /health into your existing HTTP server.