ContextManager

org.llm4s.context.ContextManager
See the ContextManager companion object
class ContextManager(tokenCounter: ConversationTokenCounter, config: ContextConfig, llmClient: Option[LLMClient], artifactStore: Option[ArtifactStore])

Orchestrates a 4-step context management pipeline (early-exit if the budget is met at any step):

  1. ToolDeterministicCompaction
  • Run DeterministicCompressor.compressToCap() with compressToolOutputs first (subjective edits OFF).
  • Goal: shrink/cap tool outputs (JSON/logs/binary) without touching user/assistant text.
  2. HistoryCompression
  • Run HistoryCompressor.compressToDigest(...):
  • Keep the last K semantic blocks as is (K = config.maxSemanticBlocks).
  • Replace older blocks with a deterministic [HISTORY_SUMMARY] digest capped to config.summaryTokenTarget.
  3. LLMHistorySqueeze
  • If still over budget AND the LLM is enabled:
  • Compress the digest string only, down to summaryTokenTarget, via LLMCompressor.squeezeDigest().
  • No whole-conversation LLM compression.
  4. FinalTokenTrim
  • TokenWindow.trimToBudget() with headroom.
  • Pin [HISTORY_SUMMARY] so it's never dropped; pack remaining messages newest-first.
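
The early-exit structure of the steps above can be sketched as follows. This is a minimal, self-contained illustration, not the real llm4s implementation: the step names, the whitespace token counter, and the hard-coded K = 2 stand in for ConversationTokenCounter, ContextConfig, and the actual compressors.

```scala
// Hypothetical sketch of the 4-step early-exit pipeline. All names and
// behaviors here are illustrative stand-ins, not the llm4s API.
object PipelineSketch {
  type Conversation = List[String]
  type Step         = Conversation => Conversation

  // Stand-in token counter: one token per whitespace-separated word.
  def countTokens(c: Conversation): Int = c.map(_.split("\\s+").length).sum

  // Apply steps in order, but skip the rest as soon as the budget is met.
  def runPipeline(conv: Conversation, budget: Int, steps: List[Step]): Conversation =
    steps.foldLeft(conv) { (current, step) =>
      if (countTokens(current) <= budget) current // early exit: budget already met
      else step(current)
    }

  // Step 1: cap tool outputs only; user/assistant text is untouched.
  val toolCompaction: Step =
    _.map(m => if (m.startsWith("[TOOL]")) "[TOOL] (capped)" else m)

  // Step 2: keep the last K blocks, fold older ones into a digest.
  val historyCompression: Step = conv => {
    val keep = 2 // stand-in for config.maxSemanticBlocks
    if (conv.length <= keep) conv
    else "[HISTORY_SUMMARY] digest of older blocks" :: conv.takeRight(keep)
  }

  // Step 3: squeeze only the digest string, never the whole conversation.
  val llmSqueeze: Step =
    _.map(m => if (m.startsWith("[HISTORY_SUMMARY]")) "[HISTORY_SUMMARY] digest" else m)

  // Step 4: pin the summary so it is never dropped; keep the newest messages.
  val finalTrim: Step = conv => {
    val (summary, rest) = conv.partition(_.startsWith("[HISTORY_SUMMARY]"))
    summary ++ rest.takeRight(1)
  }

  val allSteps: List[Step] =
    List(toolCompaction, historyCompression, llmSqueeze, finalTrim)
}
```

Note the design point the fold makes explicit: the budget check runs before every step, so a conversation that already fits (or fits after an early step) passes through the remaining steps unchanged.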

Attributes

Companion
object
Graph
Supertypes
class Object
trait Matchable
class Any

Members list

Value members

Concrete methods

Apply 4-step context management pipeline to fit conversation within budget

Attributes