Compression

Object storage supports at-rest compression: state bodies are compressed before each put and transparently decompressed on each get. Four algorithms are supported, trading CPU for storage and bandwidth savings.

import {
  ObjectStorageDurableStateStore,
  S3ObjectStorageBackend,
} from 'actor-ts';

const store = new ObjectStorageDurableStateStore({
  backend: new S3ObjectStorageBackend({ /* ... */ }),
  compression: {
    algorithm: 'gzip',
    level: 6, // 1 (fast) – 9 (best); default 6
  },
});

Now every persisted state is gzip-compressed before upload. Reads transparently decompress.

| Algorithm | Compression ratio | CPU cost | When |
|---|---|---|---|
| `gzip` | Good (typically 50-70 %) | Moderate | Safe default: universal support, well-tuned. |
| `brotli` | Better (typically 60-80 %) | Higher | When storage/bandwidth costs outweigh CPU. |
| `deflate` | Similar to gzip | Slightly less | Niche; gzip is the same algorithm plus a header. |
| `none` | None | Zero | Default. |

For text-heavy state (JSON), gzip easily gives a 70 % reduction. For already-compressed or high-entropy data (encrypted bytes, image data), compression yields almost no benefit and wastes CPU.
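This effect is easy to see with plain Node `zlib` (independent of actor-ts): repetitive JSON shrinks dramatically, while high-entropy bytes, a stand-in for encrypted payloads, barely shrink at all. A minimal sketch:

```typescript
import { gzipSync } from 'node:zlib';
import { randomBytes } from 'node:crypto';

// Text-heavy JSON: repeated keys and strings leave lots of redundancy.
const json = Buffer.from(JSON.stringify(
  Array.from({ length: 500 }, (_, i) => ({ id: i, status: 'active', note: 'lorem ipsum dolor sit amet' })),
));
const jsonRatio = gzipSync(json, { level: 6 }).length / json.length;

// High-entropy bytes: nothing for the compressor to exploit.
const random = randomBytes(json.length);
const randomRatio = gzipSync(random, { level: 6 }).length / random.length;

console.log(`JSON:   ${(jsonRatio * 100).toFixed(1)} % of original size`);
console.log(`random: ${(randomRatio * 100).toFixed(1)} % of original size`);
```

On a payload like this, the JSON typically compresses to well under 30 % of its original size, while the random bytes stay near (or slightly above) 100 %.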

interface CompressionConfig {
  algorithm: 'gzip' | 'brotli' | 'deflate' | 'none';
  level?: number; // algorithm-specific
}

Levels are algorithm-specific:

  • gzip: 1 (fastest, less compression) to 9 (slowest, best). Default 6.
  • brotli: 0-11. Default 4 (sweet spot for runtime).
  • deflate: 1-9. Default 6.

For most workloads, defaults are fine. Crank up only when storage cost dominates.
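If the store delegates to Node's built-in `zlib` (an assumption; the docs don't state the implementation), the level ranges above map directly onto its options. A sketch of that mapping, with the documented defaults:

```typescript
import { gzipSync, brotliCompressSync, deflateSync, constants } from 'node:zlib';

type Algorithm = 'gzip' | 'brotli' | 'deflate' | 'none';

// Hypothetical helper showing how each algorithm's level maps onto zlib.
function compress(algorithm: Algorithm, data: Buffer, level?: number): Buffer {
  switch (algorithm) {
    case 'gzip':
      return gzipSync(data, { level: level ?? 6 }); // 1-9, default 6
    case 'brotli':
      return brotliCompressSync(data, {
        params: { [constants.BROTLI_PARAM_QUALITY]: level ?? 4 }, // 0-11, default 4
      });
    case 'deflate':
      return deflateSync(data, { level: level ?? 6 }); // 1-9, default 6
    case 'none':
      return data;
  }
}
```

Note that brotli uses a different option mechanism than gzip/deflate, which is why the framework treats `level` as algorithm-specific rather than a single shared scale.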

On put:
serialize value → bytes → gzip(bytes) → S3.put(compressed, contentEncoding: 'gzip')
On get:
S3.get → bytes + contentEncoding header → decompress → bytes → deserialize

The framework writes Content-Encoding: gzip (or br, etc.) on the put, reads it back on the get, decompresses accordingly. If a stored object has no Content-Encoding (was written by an old version without compression), it’s read as-is.

This means mixing compressed + uncompressed objects in the same bucket works — the framework handles each per the header.

serialize → JSON bytes → gzip → S3

Compression runs after serialization. For maximum size reduction, combine CBOR serialization + gzip compression — CBOR shrinks the structure; gzip eliminates remaining redundancy.

new ObjectStorageDurableStateStore({
  backend,
  serializer: new CborSerializer(),
  compression: { algorithm: 'gzip', level: 6 },
});
| Scenario | Algorithm |
|---|---|
| Text-heavy state (large JSON, long descriptions) | `gzip` or `brotli` |
| Mixed text + numbers | `gzip` |
| Already-encrypted bytes | `none` (no benefit) |
| Storage-cost-bound | `brotli` (level 6-8) |
| CPU-bound write path | `none` or `gzip` level 1 |

Rough single-threaded timings per 100 KB blob:

  • gzip level 6 — ~1-3 ms encode, ~0.5 ms decode.
  • gzip level 1 — ~0.5 ms encode, ~0.3 ms decode.
  • brotli level 4 — ~2-5 ms encode, ~1 ms decode.
  • brotli level 11 — ~50-100 ms encode (!), ~1 ms decode.

Brotli level 11 is very slow on encode — useful for write-once read-many bulk archives, not for runtime persistence.

For typical state-store workloads (small objects, modest write rate), the CPU cost is invisible.
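These figures are easy to check against your own payloads. A minimal micro-benchmark sketch using plain Node `zlib` (independent of actor-ts; the payload here is an illustrative stand-in for your real state):

```typescript
import { gzipSync, brotliCompressSync, constants } from 'node:zlib';

// Stand-in for a real state blob; swap in your own serialized state.
const payload = Buffer.from(JSON.stringify({ log: 'entry '.repeat(20_000) })); // ~120 KB

const candidates: Array<[string, (b: Buffer) => Buffer]> = [
  ['gzip level 6', (b) => gzipSync(b, { level: 6 })],
  ['gzip level 1', (b) => gzipSync(b, { level: 1 })],
  ['brotli level 4', (b) => brotliCompressSync(b, {
    params: { [constants.BROTLI_PARAM_QUALITY]: 4 },
  })],
];

for (const [name, fn] of candidates) {
  const t0 = process.hrtime.bigint();
  const out = fn(payload);
  const ms = Number(process.hrtime.bigint() - t0) / 1e6;
  console.log(`${name}: ${((out.length / payload.length) * 100).toFixed(1)} % of original, ${ms.toFixed(2)} ms`);
}
```

Run this with a representative sample of your actual state to decide whether a higher level is worth the encode cost.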

class SmallStateActor extends DurableStateActor<...> {
  protected compression() { return { algorithm: 'none' as const }; }
}

class LogStateActor extends DurableStateActor<...> {
  protected compression() { return { algorithm: 'brotli' as const, level: 8 }; }
}

Override the store-level default per actor. Useful when:

  • Most actors have small state (no compression needed).
  • A few have large text-heavy state (high compression).

See Per-actor policies.