org.llm4s.context.ConversationTokenCounter
See the ConversationTokenCounter companion object
class ConversationTokenCounter
Counts tokens in conversations and messages using configurable tokenizers. Provides accurate token counting for context management and budget planning.
Token counting is essential for:
- Ensuring conversations fit within model context windows
- Budget planning for API costs (many providers charge per token)
- Context compression decisions in ContextManager
The counter applies fixed overheads to account for special tokens:
- Message overhead: 4 tokens per message (role markers, delimiters)
- Tool call overhead: 10 tokens per tool call (function markers)
- Conversation overhead: 10 tokens (conversation framing)
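The overhead arithmetic above can be sketched as follows. This is an illustrative sketch only, assuming per-message content token counts are already available from the tokenizer; the object and method names here (`OverheadSketch`, `estimateTotal`) are hypothetical and not part of the llm4s API.

```scala
// Hedged sketch of how fixed overheads combine with raw content tokens.
// The constants mirror the documented overheads; everything else is illustrative.
object OverheadSketch {
  val MessageOverhead      = 4  // role markers, delimiters (per message)
  val ToolCallOverhead     = 10 // function markers (per tool call)
  val ConversationOverhead = 10 // conversation framing (once per conversation)

  // contentTokens: raw tokenizer count per message
  // toolCalls: number of tool calls made in each message
  def estimateTotal(contentTokens: Seq[Int], toolCalls: Seq[Int]): Int = {
    val perMessage = contentTokens.zip(toolCalls).map { case (content, calls) =>
      content + MessageOverhead + calls * ToolCallOverhead
    }
    perMessage.sum + ConversationOverhead
  }
}

// Two messages of 100 and 50 content tokens, the second making one tool call:
// (100 + 4) + (50 + 4 + 10) + 10 = 178
val total = OverheadSketch.estimateTotal(Seq(100, 50), Seq(0, 1))
```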
Attributes
- See also:
  - ConversationTokenCounter.forModel for model-aware counter creation
  - TokenBreakdown for detailed per-message token analysis
- Example:
  val counter = ConversationTokenCounter.forModel("gpt-4o").getOrElse(???)
  val tokens = counter.countConversation(conversation)
  println(s"Conversation uses $tokens tokens")
- Companion: object
- Supertypes:
  - class Object
  - trait Matchable
  - class Any