Actor tracing

When the tracing extension is configured, the framework automatically creates one span per actor message plus infrastructure spans for cluster-wire envelopes. Spans chain across tells — a message handler that tells another actor passes the active span context, so the receiver’s span links back to the sender’s.

| Span name | When | Notable attributes |
| --- | --- | --- |
| actor.receive | Once per message delivered to onReceive. | actor.path, actor.class, messaging.message_kind |
| actor.persist | PersistentActor.persist() call. | persistence.id, persistence.sequence_nr |
| actor.ask | An ask(...) call. | actor.path of the target |
| cluster.envelope.send | Outbound cross-cluster envelope. | peer.address, messaging.message_kind |
| cluster.envelope.receive | Inbound cross-cluster envelope. | Same |

For a typical request flow:

HTTP request                                      ← root span
└── actor.receive (api-actor)                     ← child
    └── actor.ask (ask db-actor)                  ← grandchild
        └── cluster.envelope.send                 ← cross-wire
            └── cluster.envelope.receive (on db node)  ← peer side
                └── actor.receive (db-actor)      ← processes
                    └── actor.persist             ← writes journal

Each span carries the trace ID — your tracing backend stitches them into one trace.

// Within an actor:
override async onReceive(msg) {
  // tracer.activeSpan() returns the actor.receive span.
  // tell creates an envelope with traceparent set:
  this.downstream.tell({ kind: 'derived', from: msg.id });
  // The downstream actor's actor.receive span has THIS span as parent.
}

tell snapshots the active span context onto the envelope (via tracer.injectContext()). On receive, the framework runs tracer.withActiveSpan(span, ...) to make the actor’s onReceive see the chain.
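A minimal sketch of how this inject/with-active-span pair could work internally, using Node's AsyncLocalStorage. All names here (SpanContext, Envelope, injectContext, withActiveSpan) are illustrative, not the framework's actual internals:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Hypothetical minimal types for the sketch.
interface SpanContext { traceId: string; spanId: string; }
interface Envelope { payload: unknown; traceparent?: string; }

const activeSpan = new AsyncLocalStorage<SpanContext>();

// injectContext: snapshot the active span onto the outgoing envelope.
function injectContext(envelope: Envelope): Envelope {
  const ctx = activeSpan.getStore();
  if (ctx) {
    // W3C traceparent format: version-traceId-spanId-flags
    envelope.traceparent = `00-${ctx.traceId}-${ctx.spanId}-01`;
  }
  return envelope;
}

// withActiveSpan: run fn so that activeSpan lookups inside it see ctx.
function withActiveSpan<T>(ctx: SpanContext, fn: () => T): T {
  return activeSpan.run(ctx, fn);
}

// Sender side: a handler running under a span tells downstream.
const sent = withActiveSpan(
  { traceId: 'abc'.padEnd(32, '0'), spanId: 'def'.padEnd(16, '0') },
  () => injectContext({ payload: { kind: 'derived' } }),
);
console.log(sent.traceparent);
```

Because AsyncLocalStorage scopes survive awaits, the snapshot works even when tell happens deep inside async handler code.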

Across cluster nodes, the same traceparent rides on the wire envelope. The receiving node extracts it; its actor.receive span links back to the sender’s.
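A hedged sketch of what extraction could look like on the receiving node — parsing the four W3C traceparent fields is enough to re-parent the receive span. extractContext and RemoteParent are illustrative names, not the framework's API:

```typescript
// Hypothetical shape of the remote parent recovered from the wire.
interface RemoteParent { traceId: string; parentSpanId: string; }

// Parse a W3C traceparent header: version-traceId-spanId-flags.
function extractContext(traceparent: string | undefined): RemoteParent | undefined {
  if (!traceparent) return undefined;
  const parts = traceparent.split('-');
  if (parts.length !== 4) return undefined; // malformed: start a fresh trace instead
  return { traceId: parts[1], parentSpanId: parts[2] };
}

const parent = extractContext(
  '00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01',
);
// The node's actor.receive span would reuse parent.traceId and set
// parent.parentSpanId as its parent, linking back to the sender.
console.log(parent);
```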

// actor.receive attributes:
{
  'actor.path': 'actor-ts://my-app/user/api/sessions/user-42',
  'actor.class': 'SessionActor',
  'actor.system': 'my-app',
  'messaging.message_kind': 'login',
  // Set only when the handler throws:
  'error.message': '...',
}

For cluster envelopes:

{
  'peer.address': 'actor-ts://my-app@10.0.0.5:2552',
  'messaging.message_kind': 'login',
  'wire.bytes': 234,
}

These follow OpenTelemetry semantic conventions where applicable, so off-the-shelf dashboards (Honeycomb, Datadog, Grafana Tempo) work without customization.

system.extension(TracingExtensionId).configure({
  tracer: new OtelTracerAdapter(...),
  autoSpanReceive: false, // don't auto-span actor.receive
  autoSpanPersist: false,
  autoSpanClusterWire: true,
});

This is useful when you only want manual spans at specific code points, or to reduce span volume in very-high-throughput systems.

The framework's auto-instrumentation is enabled by default once a non-Noop tracer is set; disable individual categories as needed.

override async onReceive(msg) {
  const tracer = this.context.system.extension(TracingExtensionId).tracer;
  const span = tracer.startSpan('process-order', {
    attributes: {
      'order.id': msg.orderId,
      'order.amount': msg.amount,
    },
  });
  try {
    await tracer.withActiveSpan(span, async () => {
      // ... processing ...
    });
    span.setStatus('ok');
  } catch (e) {
    span.recordException(e as Error);
    span.setStatus('error', (e as Error).message);
    throw e;
  } finally {
    span.end();
  }
}

Application spans appear as children of the auto-generated actor.receive span — your custom logic sits naturally inside the actor’s processing.

When tracing is active, the framework merges traceId and spanId into the LogContext so every log line emitted during a span includes them:

[2025-05-13T12:00:00Z] INFO ... actor processing {traceId=abc, spanId=def, correlationId=...}

This means your logs and traces share IDs — you can click from a slow trace in Honeycomb to the matching log lines in Loki. Spans also carry the existing MDC keys as attributes.
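The merge itself is simple; a sketch of how the span IDs could be folded into the log context (LogContext and withSpanIds are hypothetical names for this illustration):

```typescript
// Hypothetical flat log context (MDC-style key/value map).
interface LogContext { [key: string]: string; }

// Overlay traceId/spanId onto the existing context when a span is active;
// pass the base context through unchanged when tracing is off.
function withSpanIds(
  base: LogContext,
  span: { traceId: string; spanId: string } | undefined,
): LogContext {
  return span ? { ...base, traceId: span.traceId, spanId: span.spanId } : base;
}

const line = withSpanIds({ correlationId: 'req-7' }, { traceId: 'abc', spanId: 'def' });
console.log(JSON.stringify(line));
```

Note the existing keys (like correlationId) survive the merge, which is what makes trace-to-log pivoting work alongside correlation IDs.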

When tracing is disabled (NoopTracer), the auto-instrumentation is zero overhead — the framework’s hot paths short-circuit without allocating spans or doing async-storage lookups.

When enabled, each message processed adds:

  • One span allocation (small object).
  • A few attribute writes.
  • One AsyncLocalStorage scope.

Total cost per message: roughly 5–10 microseconds — significant for systems handling millions of messages per second, negligible otherwise.
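A quick back-of-envelope check on why that matters at high throughput, taking the section's 5–10 µs figure at face value:

```typescript
// At N messages/second, per-message overhead of U microseconds consumes
// (N * U) microseconds of CPU time per wall-clock second, i.e. that many
// CPU cores fully busy on tracing alone.
const perMsgMicros = [5, 10];       // assumed overhead range from above
const msgsPerSec = 1_000_000;

const coresBusy = perMsgMicros.map((us) => (us * msgsPerSec) / 1_000_000);
console.log(coresBusy); // [ 5, 10 ] — whole cores spent on tracing at 1M msg/s
```

At a few thousand messages per second the same arithmetic yields a few hundredths of a core, which is why the overhead is negligible outside extreme throughputs.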