
v2.4.0 — Context Compression (15-40% Token Savings)


@1bcMax released this 04 Apr 14:12 · 78 commits to main since this release · fbc7137

7-Layer Context Compression

Integrated BlockRun's compression library directly into the agent loop, saving 15-40% of tokens automatically:

| Layer | Method | Savings |
| ----- | ------ | ------- |
| 1 | Deduplication | 2-5% |
| 2 | Whitespace normalization | 3-8% |
| 3 | Dictionary encoding (41 codes) | 4-8% |
| 4 | Path shortening | 1-3% |
| 5 | JSON compaction | 2-4% |
| 6 | Observation compression | 15-97% |
| 7 | Dynamic codebook | 5-15% |
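The layered design above can be sketched as a chain of `str -> str` passes applied in order. This is an illustrative sketch, not BlockRun's actual library: the function names, the `§`-prefixed codes, and the two-entry codebook are all assumptions.

```python
import re

def dedupe_lines(text: str) -> str:
    """Layer 1 sketch: drop exact repeats of non-blank lines already seen."""
    seen, out = set(), []
    for line in text.splitlines():
        if line.strip() and line in seen:
            continue
        seen.add(line)
        out.append(line)
    return "\n".join(out)

def normalize_whitespace(text: str) -> str:
    """Layer 2 sketch: collapse runs of spaces/tabs, strip trailing blanks."""
    lines = [re.sub(r"[ \t]+", " ", l).rstrip() for l in text.splitlines()]
    return "\n".join(lines)

# Layer 3 sketch: replace frequent phrases with short codes from a fixed
# dictionary (the release notes mention 41 codes; these two are made up).
CODEBOOK = {
    "Traceback (most recent call last):": "§TB",
    "ModuleNotFoundError": "§MNF",
}

def dictionary_encode(text: str) -> str:
    for phrase, code in CODEBOOK.items():
        text = text.replace(phrase, code)
    return text

LAYERS = [dedupe_lines, normalize_whitespace, dictionary_encode]

def compress(text: str) -> str:
    """Run every layer in order; each layer must be LLM-safe on its own."""
    for layer in LAYERS:
        text = layer(text)
    return text
```

Because each layer is independent, savings compound: a duplicated line removed by layer 1 never reaches the dictionary encoder at all.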

How it works:

  • Runs automatically when the conversation has more than 10 messages
  • Adds a header explaining the codes to the model
  • Layer 6 (observation compression) is the biggest win: it summarizes large tool results down to roughly 300 characters
  • All layers are LLM-safe: the model can still understand the compressed text
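The biggest-win layer can be illustrated with a minimal head-and-tail truncation. This is a hedged sketch, not BlockRun's implementation: the ~300-character budget comes from the release notes, but the helper name, the head/tail split, and the elision marker are assumptions.

```python
def compress_observation(text: str, budget: int = 300) -> str:
    """Layer 6 sketch: shrink an oversized tool result to roughly `budget`
    characters by keeping its head and tail and noting what was elided."""
    if len(text) <= budget:
        return text  # small observations pass through untouched
    head = text[: budget // 2]
    # Reserve ~30 chars of the tail's share for the elision marker itself.
    tail = text[-(budget // 2 - 30):]
    elided = len(text) - len(head) - len(tail)
    return f"{head}\n…[{elided} chars elided]…\n{tail}"
```

Keeping both ends (rather than only the head) preserves the two regions that matter most in tool output: the command or query at the top and the result or error at the bottom.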

This is the same compression used by BlockRun's backend, now running client-side for immediate token reduction.