DeterministicCompressor
org.llm4s.context.DeterministicCompressor
object DeterministicCompressor
Implements rule-based deterministic compression for conversation context.
This compressor applies predictable, reproducible transformations to reduce token usage while preserving semantic meaning. Unlike LLM-based compression, the output is deterministic and doesn't require API calls.
==Compression Pipeline==
Compression occurs in two phases:
- '''Tool Compaction''' (always applied):
  - Compresses large JSON/YAML tool outputs
  - Externalizes binary content
  - Truncates verbose logs and error traces
- '''Subjective Edits''' (optional, requires `enableSubjectiveEdits`):
  - Removes filler words from transcript-like content
  - Deduplicates repetitive sentences
  - Truncates overly verbose assistant responses
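The tool-compaction phase can be sketched as a pure, rule-based transformation. This is a minimal illustration only: the name `TruncateVerboseLog` and its `maxLines` parameter are assumptions for this sketch, not part of the documented API.

```scala
// Hypothetical sketch of one rule in the tool-compaction phase.
// TruncateVerboseLog and maxLines are illustrative names, not the real API.
object TruncateVerboseLog {
  // Keep the first and last maxLines/2 lines, replacing the middle with a
  // deterministic omission marker (same input always yields same output).
  def apply(log: String, maxLines: Int = 20): String = {
    val lines = log.linesIterator.toVector
    if (lines.length <= maxLines) log
    else {
      val keep    = maxLines / 2
      val omitted = lines.length - 2 * keep
      val body = (lines.take(keep) :+ s"... [$omitted lines omitted] ...") ++
        lines.takeRight(keep)
      body.mkString("\n")
    }
  }
}
```

Because the rule is a pure function of its input, re-running compression over the same conversation produces byte-identical output, which is what makes this phase reproducible without API calls.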
==Safety Guarantees==
The compressor is designed to be '''safe''' and '''conservative''':
- User messages are '''never''' modified
- Code blocks and JSON are preserved verbatim
- Filler word removal only applies to "transcript-like" content
- Truncation preserves first and last sentences
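The guarantees above amount to a guard around the subjective-edit phase. The sketch below illustrates that shape; the `Msg` type, role strings, filler set, and `isTranscriptLike` heuristic are all assumptions for illustration, not the library's actual types.

```scala
// Illustrative sketch only: Msg, the role strings, the fillers set, and the
// isTranscriptLike heuristic are assumptions, not DeterministicCompressor's API.
final case class Msg(role: String, content: String)

object SubjectiveEditGuard {
  private val fillers = Set("um", "uh", "basically")

  // Heuristic: treat text with no backticks or braces as "transcript-like",
  // so code blocks and JSON pass through verbatim.
  private def isTranscriptLike(text: String): Boolean =
    !text.contains("`") && !text.contains("{")

  def removeFillers(msg: Msg): Msg =
    if (msg.role == "user" || !isTranscriptLike(msg.content)) msg // user messages are never modified
    else {
      val cleaned = msg.content
        .split("\\s+")
        .filterNot(w => fillers.contains(w.toLowerCase))
        .mkString(" ")
      msg.copy(content = cleaned)
    }
}
```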
Attributes
- See also
  - ToolOutputCompressor for the tool compaction implementation
  - CompressionRule for individual compression rules
- Example
  -
    val compressed = DeterministicCompressor.compressToCap(
      messages = conversation.messages,
      tokenCounter = counter,
      capTokens = 4000,
      enableSubjectiveEdits = true
    )
- Supertypes
  - class Object
  - trait Matchable
  - class Any