# Lease API
The `Lease` interface is the framework’s distributed-lock abstraction. Implementations differ in backing store (in-memory, K8s, etcd); the contract is identical.
```ts
interface Lease {
  acquire(): Promise<boolean>;
  release(): Promise<void>;
  checkAlive(): boolean;
  onLost(handler: (reason: string) => void): () => void;
}
```

Three methods that do work + one that registers a callback.
## acquire(): Promise<boolean>

```ts
const got = await lease.acquire();
if (got) {
  // We hold the lease — proceed with leader-only work
} else {
  // Someone else has it — back off and retry later
}
```

The semantics:
- Resolves `true` if the lease was successfully acquired.
- Resolves `false` if another holder owns the lease.
- Rejects on transient errors (network, backend unavailable).
Implementations typically retry internally up to `acquireRetries` times before resolving `false`. A `false` result means “another holder definitively has it”; a rejection means “I don’t know.”
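A caller can treat the two outcomes differently. A minimal sketch (the helper name is illustrative, not a framework API):

```ts
import type { Lease } from 'actor-ts';

// Distinguish "definitively lost the race" (false) from "unknown"
// (rejection). On rejection, back off and retry rather than assuming
// another holder has the lease.
async function acquireOrBackOff(lease: Lease): Promise<boolean> {
  try {
    return await lease.acquire(); // false: another holder owns it
  } catch (err) {
    console.warn('lease acquire failed transiently:', err);
    return false; // caller schedules a retry
  }
}
```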
`acquire()` is idempotent when this caller already holds the lease — calling `acquire()` twice in a row by the same owner returns `true` both times.
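In sketch form:

```ts
// Assuming the first call wins, a repeat by the same owner also
// resolves true (no self-deadlock, still a single holder).
const first = await lease.acquire();  // true: we now hold the lease
const second = await lease.acquire(); // true: idempotent re-acquire
```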
## release(): Promise<void>

```ts
await lease.release();
```

Voluntarily drop ownership. Calling it without holding the lease is a no-op — no error. Resolves once the backend has confirmed the release.
The framework calls `release()`:
- When the singleton manager stops being leader (graceful hand-off to another node).
- When the actor system shuts down via coordinated shutdown.
For the non-graceful case — process crash — the backend’s
TTL handles cleanup automatically; no release is sent.
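The graceful path, in sketch form (`stopSingleton` is a hypothetical stand-in for the leader-only teardown):

```ts
// Graceful hand-off: stop leader-only work first, then release so
// another node can acquire immediately instead of waiting out the TTL.
await stopSingleton();  // hypothetical teardown of leader-only work
await lease.release();  // resolves once the backend confirms
```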
## checkAlive(): boolean

```ts
if (lease.checkAlive()) {
  // We still own the lease — proceed
}
```

A synchronous, local check. No network round trip. Returns the holder’s most recent knowledge of “do I still own this?”
Used by the framework to gate ownership-dependent work — e.g., before issuing a shard allocation, the coordinator calls `checkAlive()` and aborts if it returns `false`.
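In sketch form (the function is an illustrative stand-in for the coordinator’s internal check):

```ts
// Gate ownership-dependent work on the local flag; abort if lost.
function beforeAllocation(): void {
  if (!lease.checkAlive()) {
    throw new Error('lease no longer held; aborting allocation');
  }
  // ... safe to issue the allocation ...
}
```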
Implementations track ownership locally; the backend’s renewal loop updates the local flag. As a result, `checkAlive()` can be stale by up to one renewal interval — a sub-second window in which the lease may already be gone while `checkAlive()` still returns `true`.
To catch the loss as soon as the holder detects it, use `onLost(...)` and react to the notification rather than polling.
## onLost(handler): () => void

```ts
const unsubscribe = lease.onLost((reason) => {
  console.log(`lease lost: ${reason}`);
  // Stop leader-only work immediately
});

// Later:
unsubscribe();
```

Register a callback fired when ownership is lost unexpectedly:
- The backend reported the lease was taken over by another holder.
- The TTL expired without successful renewal (e.g., network partition).
- The backend itself reported a state inconsistency.
`onLost` fires once per loss. After it fires, `checkAlive()` returns `false` and a new `acquire()` is needed before regaining ownership.
The handler should drop ownership state immediately — stop work, release locks, signal interested actors. Don’t await expensive operations; the lease is gone and any other holder may already be acting.
Returns an unsubscribe function — call it to remove the handler when you no longer need it.
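A handler following those rules might look like this (the surrounding state and actor names are illustrative):

```ts
lease.onLost((reason) => {
  // Synchronous, cheap teardown: flip local state first, then signal.
  isLeader = false;                        // drop ownership state
  singletonRef?.tell({ type: 'stop' });    // signal; don't await
  console.warn(`lease lost (${reason}); leader-only work halted`);
});
```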
## How the framework uses each method

For a singleton with a lease:
```text
ClusterSingletonManager flow:
├── LeaderChanged event fires
├── If this node is now leader:
│   ├── lease.acquire()
│   │   ├── true  → spawn singleton, register onLost
│   │   └── false → retry after acquireRetryDelay
├── If this node is no longer leader:
│   ├── stop the singleton
│   └── lease.release()
└── onLost fires → stop singleton, await next LeaderChanged
```

Same pattern for the sharding coordinator:
```text
ShardCoordinator flow:
├── lease.acquire() before processing allocation requests
├── lease.checkAlive() before issuing each allocation
└── onLost → reject pending allocations, stop coordinator
```
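Condensed into code, the coordinator flow looks roughly like this (a sketch against the public `Lease` contract; the request stream and callbacks are illustrative stand-ins, not framework internals):

```ts
import type { Lease } from 'actor-ts';

async function runCoordinator<R>(
  lease: Lease,
  requests: AsyncIterable<R>,
  issue: (request: R) => void,
  stop: () => void,
): Promise<void> {
  if (!(await lease.acquire())) return; // another node coordinates
  const unsubscribe = lease.onLost(() => stop());
  try {
    for await (const request of requests) {
      if (!lease.checkAlive()) break;   // gate each allocation
      issue(request);
    }
  } finally {
    unsubscribe();
    await lease.release();
  }
}
```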
## Writing a custom backend

```ts
import type { Lease, LeaseSettings } from 'actor-ts';

class EtcdLease implements Lease {
  private alive = false;
  private onLostHandlers = new Set<(reason: string) => void>();
  private renewTimer: NodeJS.Timeout | null = null;

  constructor(private readonly settings: LeaseSettings & { /* etcd-specific */ }) {}

  async acquire(): Promise<boolean> {
    // Try to atomically CAS the etcd key from empty to this owner.
    // Start a renewal timer on success.
    // ...
    return false; // placeholder: return true once the CAS succeeds
  }

  async release(): Promise<void> {
    // Stop the renewal timer.
    // CAS the etcd key from this owner to empty.
    // ...
  }

  checkAlive(): boolean {
    return this.alive;
  }

  onLost(handler: (reason: string) => void): () => void {
    this.onLostHandlers.add(handler);
    return () => this.onLostHandlers.delete(handler);
  }

  private fireOnLost(reason: string): void {
    this.alive = false;
    for (const h of this.onLostHandlers) {
      try {
        h(reason);
      } catch {
        /* swallow handler errors */
      }
    }
  }
}
```

Three things any backend needs to get right:
- Atomicity on `acquire` — two concurrent `acquire()` calls from different owners must produce one winner. The backend’s own consistency model has to provide this (CAS, Paxos, or Raft-backed).
- Periodic renewal — keep the lease alive in the backend. Configurable interval, typically `ttl / 3` (see the sketch after this list).
- `onLost` accuracy — fire when ownership truly transitions away, including the TTL-expiry case.
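A minimal renewal-loop sketch, assuming the backend exposes a keep-alive call (`renew` below is a hypothetical stand-in; the TTL-based loss detection mirrors the semantics described above):

```ts
// Hypothetical renewal loop. `renew` resolves false when another
// holder has taken the lease; a rejection is a transient failure.
function startRenewal(
  renew: () => Promise<boolean>,
  onLoss: (reason: string) => void,
  ttlMs: number,
): NodeJS.Timeout {
  let lastSuccess = Date.now();
  const timer: NodeJS.Timeout = setInterval(async () => {
    try {
      if (await renew()) {
        lastSuccess = Date.now();
      } else {
        clearInterval(timer);
        onLoss('taken over by another holder');
      }
    } catch {
      // One failed renewal is not yet a loss; the TTL provides slack.
      if (Date.now() - lastSuccess > ttlMs) {
        clearInterval(timer);
        onLoss('ttl expired without renewal');
      }
    }
  }, ttlMs / 3); // renew well before expiry, per the guidance above
  return timer;
}
```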
Test the implementation against:
- Two concurrent acquires from different owners (sketched below).
- Network partition with both sides trying to renew.
- Holder crash + new acquire after TTL.
- Holder process pause (e.g., GC stall) longer than TTL.
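The first case might be exercised like this (a sketch: the vitest harness and `makeLease` factory are assumptions; wire them to your backend’s test setup):

```ts
import { expect, it } from 'vitest';
import type { Lease } from 'actor-ts';

// Hypothetical factory: Lease instances for distinct owners that
// share one backend (e.g. one etcd namespace).
declare function makeLease(owner: string): Lease;

it('grants exactly one winner to concurrent acquires', async () => {
  const a = makeLease('owner-a');
  const b = makeLease('owner-b');
  const results = await Promise.all([a.acquire(), b.acquire()]);
  expect(results.filter(Boolean)).toHaveLength(1); // one true, one false
});
```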
## Where to next

- Coordination overview — the bigger picture.
- InMemoryLease — the dev/test reference implementation.
- KubernetesLease — the production K8s backend.
- Singleton with lease — the main consumer.