ConversationTokenCounter

org.llm4s.context.ConversationTokenCounter
See the ConversationTokenCounter companion object

Counts tokens in conversations and messages using configurable tokenizers. Provides accurate token counting for context management and budget planning.

Token counting is essential for:

  • Ensuring conversations fit within model context windows
  • Budget planning for API costs (many providers charge per token)
  • Context compression decisions in ContextManager
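
A context-window fit check built on these counts might look like the following sketch; `fitsWindow`, `maxTokens`, and `reserveForReply` are illustrative names, not part of the llm4s API:

```scala
// Hedged sketch: decide whether a counted conversation still fits the
// model's context window, leaving headroom for the reply.
def fitsWindow(conversationTokens: Int, maxTokens: Int, reserveForReply: Int = 1024): Boolean =
  conversationTokens + reserveForReply <= maxTokens

// A 120,000-token conversation fits a 128,000-token window with room to spare;
// a 127,500-token one does not once reply headroom is reserved.
```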

The counter applies fixed overheads to account for special tokens:

  • Message overhead: 4 tokens per message (role markers, delimiters)
  • Tool call overhead: 10 tokens per tool call (function markers)
  • Conversation overhead: 10 tokens (conversation framing)
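
As a rough illustration of how these fixed overheads combine with raw text tokens (the names and structure below are hypothetical, not the library's internals):

```scala
// Illustrative only: sums per-message text tokens plus the documented
// fixed overheads (4 per message, 10 per tool call, 10 per conversation).
object OverheadSketch {
  val MessageOverhead      = 4
  val ToolCallOverhead     = 10
  val ConversationOverhead = 10

  def estimate(messageTextTokens: Seq[Int], toolCalls: Int): Int =
    messageTextTokens.map(_ + MessageOverhead).sum +
      toolCalls * ToolCallOverhead +
      ConversationOverhead
}

// Two messages of 20 and 35 text tokens plus one tool call:
// (20 + 4) + (35 + 4) + 10 + 10 = 83 tokens
```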

Attributes

See also

ConversationTokenCounter.forModel for model-aware counter creation

TokenBreakdown for detailed per-message token analysis

Example
val result = for {
  counter <- ConversationTokenCounter.forModel("gpt-4o")
} yield counter.countConversation(conversation)

result match {
  case Right(tokens) => println(s"Conversation uses $tokens tokens")
  case Left(error)   => println(s"Error: ${error.message}")
}
Companion
object
Supertypes
class Object
trait Matchable
class Any

Members list

Value members

Concrete methods

def countConversation(conversation: Conversation): Int

Count total tokens in a conversation

def countMessage(message: Message): Int

Count tokens in a single message

Get detailed token breakdown for debugging
