
Remember entities

By default, sharded entities are lazy: an entity actor exists only after it’s received its first message. Without rememberEntities, a cluster cold-start or coordinator failover means the active entity set is empty — entities respawn on demand as messages arrive.

For many workloads this is fine. But sometimes you want entities to be eagerly recreated:

  • They have scheduled background work that should run regardless of incoming messages.
  • They’ve subscribed to something external and need to be alive to process it.
  • Failover should restore the full working set instantly, not trickle through as users hit each entity.

rememberEntities: true is the toggle.

sharding.start({
  typeName: 'session',
  entityProps: Props.create(() => new SessionActor()),
  extractEntityId: (msg) => msg.userId,
  rememberEntities: true,
});

What it does:

  • Each entity’s existence (its ID) is persisted to a remember-entities store.
  • On cluster cold-start, or after a coordinator failover, the coordinator reads the set of known entity IDs and asks each shard region to re-spawn them eagerly.
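Conceptually, that recovery step is a small loop: read the persisted ID set, then spawn each entity without waiting for a message. The sketch below is illustrative only; `recoverEntities`, `EntityIdStore`, and the `spawn` callback are stand-in names, not the framework's internals.

```typescript
// Conceptual sketch of coordinator-side recovery (illustrative names only).
interface EntityIdStore {
  listEntities(typeName: string): Promise<ReadonlyArray<string>>;
}

async function recoverEntities(
  store: EntityIdStore,
  typeName: string,
  spawn: (entityId: string) => void, // "shard region, start this entity"
): Promise<number> {
  // Read the persisted set of known entity IDs...
  const ids = await store.listEntities(typeName);
  // ...and eagerly re-spawn each one; no incoming message required.
  for (const id of ids) spawn(id);
  return ids.length;
}
```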

The default store uses the system’s Journal, so you get this for free when persistence is configured. Override with rememberEntitiesStore if you need a different backing store.

What gets remembered is just the identities of active entities, not their state. Entity state is the entity's own responsibility: a PersistentActor journals its state changes, while remember-entities only tracks which IDs to bring back.

Two-store split:

Journal (your events)          RememberEntitiesStore (just IDs)
---------------------          --------------------------------
pid=cart-user-42               entityIds = ['cart-user-42', 'cart-user-43', ...]
  event 1
  event 2
  event 3

This is intentional — the remember store has different access patterns (small, frequent writes when entities start/stop) and benefits from being separable.

What is not remembered:

  • The shard a given entity belonged to. On respawn, allocation runs again (per the allocation strategy).
  • The entity’s state. If you want state to survive, the entity must persist it itself via PersistentActor or external storage.
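To make the split concrete, here is a minimal sketch of an entity that replays its own journaled events on spawn. `Journal` and `CartEntity` are illustrative stand-ins, not the framework's API; the point is that state recovery comes from the entity's own events, while remember-entities only decides that the entity gets spawned at all.

```typescript
// Illustrative stand-ins, not the framework's API.
type CartEvent = { kind: 'add'; item: string };

class Journal {
  private events = new Map<string, CartEvent[]>();
  append(pid: string, e: CartEvent): void {
    const list = this.events.get(pid) ?? [];
    list.push(e);
    this.events.set(pid, list);
  }
  read(pid: string): ReadonlyArray<CartEvent> {
    return this.events.get(pid) ?? [];
  }
}

class CartEntity {
  items: string[] = [];
  constructor(private pid: string, private journal: Journal) {
    // On (re)spawn, whether lazy or via remember-entities, state comes
    // back from the entity's own journal, not from the remember store.
    for (const e of journal.read(pid)) this.apply(e);
  }
  addItem(item: string): void {
    const e: CartEvent = { kind: 'add', item };
    this.journal.append(this.pid, e); // persist first, then apply
    this.apply(e);
  }
  private apply(e: CartEvent): void {
    this.items.push(e.item);
  }
}
```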

If you use rememberEntities: true but haven’t configured a journal:

sharding.start({
  typeName: '...',
  entityProps: ...,
  extractEntityId: ...,
  rememberEntities: true,
  rememberEntitiesStore: null, // explicit opt-out of persistence
});

Passing null falls back to the v1 in-memory-only behavior — the entity set is tracked in process memory and lost on coordinator failover. Useful for dev / tests; pointless in production.

Without rememberEntitiesStore: null and without a configured journal, the framework auto-instantiates a JournalRememberEntitiesStore backed by the (in-memory by default) journal — which is fine for testing but doesn’t survive restart. Make sure your journal is durable in production.

import type { RememberEntitiesStore } from 'actor-ts/cluster/sharding';
import type { RedisClientType } from 'redis'; // assumes node-redis v4

class RedisRememberEntitiesStore implements RememberEntitiesStore {
  constructor(private readonly redis: RedisClientType) {}

  async addEntity(typeName: string, entityId: string): Promise<void> {
    await this.redis.sAdd(`entities:${typeName}`, entityId);
  }
  async removeEntity(typeName: string, entityId: string): Promise<void> {
    await this.redis.sRem(`entities:${typeName}`, entityId);
  }
  async listEntities(typeName: string): Promise<ReadonlyArray<string>> {
    return this.redis.sMembers(`entities:${typeName}`);
  }
}

sharding.start({
  // ...
  rememberEntities: true,
  rememberEntitiesStore: new RedisRememberEntitiesStore(redisClient), // a connected node-redis client
});

The interface is small. Useful if you want the entity registry on a different backing store than the rest of your persistence (e.g., Redis for low-latency entity-set lookups while the journal is on Cassandra).

rememberEntities: true adds two write operations per entity lifetime:

  1. One write when the entity spawns (addEntity).
  2. One write when the entity passivates / stops (removeEntity).

For high-churn entities (millions of short-lived sessions per hour), this is real overhead. Consider:

  • Skip rememberEntities for short-lived workloads.
  • Use a fast store (in-memory Redis) instead of the default journal.
  • Batch writes if you’re implementing a custom store.
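For the batching option, one approach is a wrapper that buffers pending add/remove operations per entity and flushes them on a timer, so a quick spawn-then-passivate collapses into a single write. This is a sketch under assumptions: `BatchingStore`, the inline interface, and the `flushMs` knob are illustrative, not framework API.

```typescript
// Mirrors the RememberEntitiesStore shape; defined inline so the sketch
// is self-contained. BatchingStore and flushMs are illustrative names.
interface RememberEntitiesStore {
  addEntity(typeName: string, entityId: string): Promise<void>;
  removeEntity(typeName: string, entityId: string): Promise<void>;
  listEntities(typeName: string): Promise<ReadonlyArray<string>>;
}

class BatchingStore implements RememberEntitiesStore {
  // Pending net effect per (typeName, entityId): true = add, false = remove.
  private pending = new Map<string, Map<string, boolean>>();
  private timer?: ReturnType<typeof setTimeout>;

  constructor(private inner: RememberEntitiesStore, private flushMs = 100) {}

  async addEntity(typeName: string, entityId: string): Promise<void> {
    this.record(typeName, entityId, true);
  }
  async removeEntity(typeName: string, entityId: string): Promise<void> {
    this.record(typeName, entityId, false);
  }
  listEntities(typeName: string): Promise<ReadonlyArray<string>> {
    return this.inner.listEntities(typeName);
  }

  private record(typeName: string, entityId: string, add: boolean): void {
    let perType = this.pending.get(typeName);
    if (!perType) this.pending.set(typeName, (perType = new Map()));
    // Rapid add-then-remove (or the reverse) collapses to the final write.
    perType.set(entityId, add);
    this.timer ??= setTimeout(() => void this.flush(), this.flushMs);
  }

  async flush(): Promise<void> {
    if (this.timer) { clearTimeout(this.timer); this.timer = undefined; }
    const batch = this.pending;
    this.pending = new Map();
    for (const [typeName, entries] of batch) {
      for (const [entityId, add] of entries) {
        if (add) await this.inner.addEntity(typeName, entityId);
        else await this.inner.removeEntity(typeName, entityId);
      }
    }
  }
}
```

The trade-off is a small durability window: IDs recorded between flushes are lost if the process dies before `flush` runs.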

For low-churn workloads (10K stable entities), the cost is negligible.

Scenario                                                              Use rememberEntities?
--------------------------------------------------------------------  ---------------------
Per-user sessions, short-lived, re-spawn on next request              No
Long-running coordinators, must be alive to do scheduled work         Yes
IoT device handlers, must subscribe to MQTT topics on startup         Yes
Per-tenant cache, expensive to rebuild                                Yes
Per-order saga, must continue from where it left off after failover   Yes
Per-request transient actor, dies in seconds                          No

The rule: if “do I need this entity alive even without a recent message?” is yes, enable it.

Passivation stops idle entities. With rememberEntities:

entity active → passivate (idle) → entity stops → store: removeEntity?

The framework’s behavior:

  • Idle-timeout passivation (passivationIdleMs) — removes the entity from the store. Next message respawns it; the store receives an addEntity again.
  • maxEntities LRU passivation — same.
  • Passivate from the entity itself — same.

For workloads where recently-idle entities should stay in the working set rather than passivate and respawn, set passivationIdleMs: 0 to disable idle passivation so they stay around.
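As a sketch (DeviceActor and the message shape are placeholders, and this assumes passivationIdleMs is accepted alongside the sharding.start options shown earlier):

```typescript
sharding.start({
  typeName: 'device',
  entityProps: Props.create(() => new DeviceActor()),
  extractEntityId: (msg) => msg.deviceId,
  rememberEntities: true,
  passivationIdleMs: 0, // no idle passivation: entities stay resident
});
```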