TokenUsage

org.llm4s.llmconnect.model.TokenUsage
case class TokenUsage(promptTokens: Int, completionTokens: Int, totalTokens: Int, thinkingTokens: Option[Int])

Token usage statistics for a completion request.

Value parameters

completionTokens

Number of tokens in the completion (output).

promptTokens

Number of tokens in the prompt (input).

thinkingTokens

Optional number of tokens used for thinking/reasoning. Present when using reasoning modes with Claude or o1/o3 models. These tokens count toward billing but are separate from completion tokens.

totalTokens

Total tokens (prompt + completion).
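
A minimal construction sketch (the token counts below are made-up values for illustration; a real provider adapter would read them from the provider's usage payload):

import org.llm4s.llmconnect.model.TokenUsage

// Illustrative numbers only: 1200 prompt + 350 completion = 1550 total,
// plus 800 thinking tokens reported separately by a reasoning mode.
val usage = TokenUsage(
  promptTokens = 1200,
  completionTokens = 350,
  totalTokens = 1550,
  thinkingTokens = Some(800)
)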

Attributes

Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any

Members list

Value members

Concrete methods

def hasThinkingTokens: Boolean

Check if thinking tokens were used.
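
A small usage sketch, reusing the illustrative usage value constructed above:

// Branch on whether the response included a reasoning phase.
if (usage.hasThinkingTokens)
  println(s"Reasoning used ${usage.thinkingTokens.getOrElse(0)} thinking tokens")
else
  println("No thinking tokens reported")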

Attributes

Total output tokens including thinking.

For billing purposes, thinking tokens are typically billed at the same rate as output tokens.
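
A sketch of that billing note using the fields directly (the per-token price below is a hypothetical placeholder, not something this class provides):

// Thinking tokens are billed like output tokens, so add them to the
// completion count when estimating output-side cost.
val billedOutputTokens  = usage.completionTokens + usage.thinkingTokens.getOrElse(0)
val pricePerOutputToken = 0.00001 // hypothetical USD rate per output token
val estimatedOutputCost = billedOutputTokens * pricePerOutputToken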

Attributes

Inherited methods

def productElementNames: Iterator[String]

Attributes

Inherited from:
Product
def productIterator: Iterator[Any]

Attributes

Inherited from:
Product