LLMCompressor

org.llm4s.context.LLMCompressor
object LLMCompressor

Implements digest-only compression for [HISTORY_SUMMARY] messages, replacing full LLM-powered conversation compression with targeted compression of history digests.

This is step 3 of the new 4-stage context management pipeline (a sketch of the full flow follows the list):

  1. Tool deterministic compaction
  2. History compression
  3. LLM digest squeeze (this module)
  4. Final token trim
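
A minimal sketch of how the digest squeeze fits between the other stages, assuming Result behaves like Scala's Either (Right/Left, usable in a for-comprehension). The stage-1, stage-2 and stage-4 helpers are hypothetical placeholders, not llm4s APIs; only LLMCompressor.squeezeDigest is the method documented on this page. Imports for Message, ConversationTokenCounter, LLMClient and Result are omitted because their paths may differ between llm4s versions.

    import org.llm4s.context.LLMCompressor

    // Hypothetical placeholders for pipeline steps 1, 2 and 4 (not llm4s APIs).
    def compactToolOutputs(ms: Seq[Message]): Result[Seq[Message]]         = Right(ms)
    def compressHistory(ms: Seq[Message]): Result[Seq[Message]]            = Right(ms)
    def trimToBudget(ms: Seq[Message], budget: Int): Result[Seq[Message]]  = Right(ms)

    def manageContext(
        messages: Seq[Message],
        tokenCounter: ConversationTokenCounter,
        llmClient: LLMClient,
        digestCap: Int,
        finalBudget: Int
    ): Result[Seq[Message]] =
      for {
        compacted  <- compactToolOutputs(messages)   // 1. tool deterministic compaction
        compressed <- compressHistory(compacted)     // 2. history compression
        squeezed   <- LLMCompressor.squeezeDigest(   // 3. LLM digest squeeze (this module)
                        compressed, tokenCounter, llmClient, digestCap)
        trimmed    <- trimToBudget(squeezed, finalBudget) // 4. final token trim
      } yield trimmed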

Attributes

Supertypes
class Object
trait Matchable
class Any
Self type
LLMCompressor.type

Members list

Value members

Concrete methods

def squeezeDigest(messages: Seq[Message], tokenCounter: ConversationTokenCounter, llmClient: LLMClient, capTokens: Int): Result[Seq[Message]]

Apply digest-only LLM compression to [HISTORY_SUMMARY] messages only. Leaves other message types unchanged.
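
A minimal usage sketch. It assumes Result unwraps like an Either, that messages, tokenCounter and llmClient are already in scope, and the 1500-token cap is purely illustrative.

    // Squeeze the [HISTORY_SUMMARY] digests to fit within capTokens;
    // all other message types are passed through untouched.
    val result: Result[Seq[Message]] =
      LLMCompressor.squeezeDigest(messages, tokenCounter, llmClient, capTokens = 1500)

    val history: Seq[Message] = result match {
      case Right(squeezed) => squeezed // continue the pipeline with the squeezed history
      case Left(_)         => messages // fall back to the unmodified history on failure
    }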

Deprecated methods

def compress(conversation: Conversation, tokenCounter: ConversationTokenCounter, llmClient: LLMClient, targetBudget: TokenBudget, customPrompt: Option[String]): Result[LLMCompressedConversation]

Attributes

Deprecated
[Since version 0.9.0] Use squeezeDigest for new context management pipeline

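A hedged migration sketch from the deprecated call. It assumes Conversation exposes its message sequence as conversation.messages, that Result unwraps like an Either, and that the surrounding values are already in scope; the cap value is illustrative.

    // Before (deprecated since 0.9.0): whole-conversation LLM compression.
    // val compressed: Result[LLMCompressedConversation] =
    //   LLMCompressor.compress(conversation, tokenCounter, llmClient, targetBudget, None)

    // After: squeeze only the [HISTORY_SUMMARY] digests in the message sequence.
    // Note: the `.messages` accessor on Conversation is an assumption.
    val squeezed: Result[Seq[Message]] =
      LLMCompressor.squeezeDigest(conversation.messages, tokenCounter, llmClient, capTokens = 1500)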