Overview
The CompressionManager automatically compresses verbose tool results when the context grows too large, keeping your agent within its context window limits while preserving key information.
Quick Start
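A minimal sketch of the count-based trigger described below. The `CompressionManager` class shape, the `ToolResult` type, and the `shouldCompress` method are assumptions for illustration, not the library's confirmed API:

```typescript
// Hypothetical sketch: CompressionManager, ToolResult, and shouldCompress
// are illustrative names, not the library's confirmed API.

type ToolResult = { content: string; compressed: boolean };

class CompressionManager {
  constructor(private compressAfter = 3) {}

  // Count-based trigger: compress once N uncompressed tool results accumulate.
  shouldCompress(results: ToolResult[]): boolean {
    const uncompressed = results.filter((r) => !r.compressed).length;
    return uncompressed >= this.compressAfter;
  }
}

const manager = new CompressionManager(); // default: compressAfter = 3

const results: ToolResult[] = [
  { content: "a".repeat(300), compressed: false },
  { content: "b".repeat(300), compressed: false },
  { content: "c".repeat(300), compressed: false },
];

console.log(manager.shouldCompress(results)); // true: 3 uncompressed results
```

With default settings, compression fires as soon as three uncompressed tool results have accumulated.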
Custom Configuration
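A hypothetical configuration sketch. The option names mirror the table below, but the exact constructor shape is an assumption, and the `model` field (a `ModelProvider` in the table) is simplified to a string here:

```typescript
// Hypothetical options shape; field names follow the Configuration Options
// table, but the real type definitions may differ.
interface CompressionOptions {
  compressAfter?: number; // compress after N uncompressed tool results (default 3)
  tokenLimit?: number;    // compress when total context exceeds this token count
  model?: string;         // model for compression summaries (simplified; defaults to the agent's model)
  instructions?: string;  // custom compression prompt (defaults to the built-in prompt)
}

const options: CompressionOptions = {
  compressAfter: 5,
  tokenLimit: 50_000,
  instructions: "Summarize tool output; keep numbers, dates, IDs, URLs, and proper nouns.",
};
```

Any option you omit falls back to its default from the table below.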
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| compressAfter | number | 3 | Compress after N uncompressed tool results |
| tokenLimit | number | — | Compress when total context exceeds this token count |
| model | ModelProvider | Agent’s model | Model used for compression summaries |
| instructions | string | Built-in prompt | Custom compression prompt |
How It Works
- Threshold Detection: After each tool result, the manager checks if compression is needed (count-based or token-based)
- Selective Compression: Only tool results over 200 characters are compressed — short results pass through unchanged
- Parallel Compression: Multiple tool results are compressed concurrently for speed
- Fact Preservation: The built-in prompt preserves numbers, dates, IDs, URLs, and proper nouns
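The selective and parallel steps above can be sketched as follows. Here `summarize` is a stand-in for the real model-backed summarizer (which would use the fact-preserving prompt); the function names and the exact 200-character cutoff constant are assumptions drawn from the description above:

```typescript
// Sketch of selective + parallel compression. summarize() is a placeholder
// for the real model call and simply truncates for demonstration.

const MIN_COMPRESS_LENGTH = 200; // short results pass through unchanged

async function summarize(text: string): Promise<string> {
  // Placeholder: a real implementation would call the compression model
  // with the fact-preserving prompt (numbers, dates, IDs, URLs, proper nouns).
  return text.slice(0, 100) + "…";
}

async function compressResults(results: string[]): Promise<string[]> {
  // Parallel compression: all qualifying results are summarized concurrently.
  return Promise.all(
    results.map((r) =>
      r.length > MIN_COMPRESS_LENGTH ? summarize(r) : Promise.resolve(r),
    ),
  );
}
```

Because `Promise.all` runs the summarization calls concurrently, total latency is roughly that of the slowest single summary rather than the sum of all of them.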
Integration with Loop Hooks
CompressionManager integrates via the beforeLLMCall loop hook. It runs before the existing ContextCompactor, giving you layered context management:
- Compression (summarize individual tool results)
- Compaction (trim overall context if still too large)
- User hooks (custom transformations)
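The layered ordering above can be sketched as a sequence of context transforms run inside the hook. The `Context`, `Hook`, and `beforeLLMCall` shapes here are assumptions, and each stage body is a placeholder rather than the library's real implementation:

```typescript
// Sketch of the layered beforeLLMCall pipeline; stage internals are
// placeholders, not the library's real implementations.

type Context = { messages: string[] };
type Hook = (ctx: Context) => Promise<Context>;

const compression: Hook = async (ctx) => ctx; // 1. summarize individual tool results
const compaction: Hook = async (ctx) => ctx;  // 2. trim overall context if still too large
const userHooks: Hook[] = [];                 // 3. custom transformations

async function beforeLLMCall(ctx: Context): Promise<Context> {
  let current = ctx;
  // Fixed ordering: compression first, then compaction, then user hooks.
  for (const stage of [compression, compaction, ...userHooks]) {
    current = await stage(current);
  }
  return current;
}
```

Running compression first means compaction only has to act on contexts that are still too large after individual tool results have been summarized.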