org.llm4s.context.ContextManager
See the ContextManager companion object
class ContextManager(tokenCounter: ConversationTokenCounter, config: ContextConfig, llmClient: Option[LLMClient], artifactStore: Option[ArtifactStore])
Orchestrates a 4-step context management pipeline, exiting early if the token budget is met at any step:
- ToolDeterministicCompaction
  - Run DeterministicCompressor.compressToCap() with compressToolOutputs first (subjective edits OFF).
  - Goal: shrink/cap tool outputs (JSON/logs/binary) without touching user/assistant text.
- HistoryCompression
  - Run HistoryCompressor.compressToDigest(...):
    - Keep the last K semantic blocks as-is (K = config.maxSemanticBlocks).
    - Replace older blocks with a deterministic [HISTORY_SUMMARY] digest capped to config.summaryTokenTarget.
- LLMHistorySqueeze
  - If still over budget AND the LLM is enabled:
    - Compress only the digest string to summaryTokenTarget via LLMCompressor.squeezeDigest().
    - No whole-conversation LLM compression.
- FinalTokenTrim
  - TokenWindow.trimToBudget() with headroom.
  - Pin [HISTORY_SUMMARY] so it is never dropped; pack remaining messages newest-first.
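The early-exit behavior described above can be sketched as a fold over a list of compression steps, checking the budget before each one. This is a simplified, hypothetical illustration, not the actual llm4s implementation: the `Messages` type, `countTokens` heuristic, and `runPipeline` helper are all invented for the sketch.

```scala
// Hypothetical sketch of a 4-step early-exit pipeline (not the llm4s API).
object PipelineSketch {
  type Messages = List[String]

  // Stand-in token counter: roughly 1 token per 4 characters.
  def countTokens(ms: Messages): Int =
    ms.map(m => math.max(1, m.length / 4)).sum

  // Apply each step in order, but skip the rest as soon as the
  // conversation already fits within the budget (early exit).
  def runPipeline(messages: Messages, budget: Int, steps: List[Messages => Messages]): Messages =
    steps.foldLeft(messages) { (current, step) =>
      if (countTokens(current) <= budget) current // budget met: no further compression
      else step(current)
    }
}
```

In this shape, the four pipeline stages (tool-output compaction, history digesting, LLM digest squeeze, final token trim) would each be one `Messages => Messages` entry in `steps`, ordered from cheapest/deterministic to most aggressive.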
Attributes
- Companion
- object
- Supertypes
  - class Object
  - trait Matchable
  - class Any