StreamingAccumulator

org.llm4s.llmconnect.streaming.StreamingAccumulator
See the StreamingAccumulator companion object

Accumulates streaming chunks into a complete response. Handles content accumulation, tool call accumulation, thinking content, and token tracking.

Attributes

Companion: object
Supertypes: class Object, trait Matchable, class Any
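
A minimal usage sketch follows. Only the methods documented in the members list below are used; the companion-object factory call and the import location of StreamedChunk are assumptions for illustration (see the companion object page for the actual constructors).

import org.llm4s.llmconnect.streaming.StreamingAccumulator
// StreamedChunk's package is not shown on this page; this import is an assumption.
import org.llm4s.llmconnect.streaming.StreamedChunk

// Sketch: fold each streamed chunk into the accumulator and read the result
// once the stream signals completion.
// StreamingAccumulator.create() is an assumed companion-object factory.
val accumulator = StreamingAccumulator.create()

def onChunk(chunk: StreamedChunk): Unit = {
  accumulator.addChunk(chunk)
  if (accumulator.isComplete) {
    val text = accumulator.getCurrentContent // full accumulated content
    println(s"assistant response: $text")
  }
}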

Members list

Value members

Concrete methods

def addChunk(chunk: StreamedChunk): Unit

Add a streaming chunk to the accumulator

def addThinkingDelta(delta: String): Unit

Add a thinking content delta directly
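
A short sketch of tracking thinking content alongside the regular response, reusing the import from the sketch above and only the methods documented on this page:

// Sketch: append a thinking delta and read back the accumulated reasoning text.
def recordThinking(acc: StreamingAccumulator, delta: String): Unit = {
  acc.addThinkingDelta(delta) // append the provider's reasoning text
  if (acc.hasThinking)
    acc.getCurrentThinking.foreach(t => println(s"thinking so far: $t"))
}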

def clear(): Unit

Clear the accumulator state

def getCurrentContent: String

Get the current accumulated content

def getCurrentThinking: Option[String]

Get the current accumulated thinking content

Get the current tool calls

def hasThinking: Boolean

Check if there is any thinking content

def isComplete: Boolean

Check if accumulation is complete
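
If a single accumulator instance is reused across requests, completion can be checked and the state reset between streams; a hedged sketch (imports as above):

// Sketch: capture the finished response, then reset the accumulator for reuse.
def finishAndReset(acc: StreamingAccumulator): Option[String] =
  if (acc.isComplete) {
    val text = acc.getCurrentContent // capture the finished response
    acc.clear()                      // drop accumulated state before reuse
    Some(text)
  } else None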

Get a snapshot of the current state

Convert accumulated data to a Completion

def updateTokens(prompt: Int, completion: Int): Unit

Update token counts

def updateTokensWithThinking(prompt: Int, completion: Int, thinking: Int): Unit

Update token counts including thinking tokens
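
A sketch of recording usage figures reported by the provider, choosing between the two update methods depending on whether thinking tokens are reported (imports as above; the counts come from the provider's usage payload):

// Sketch: record prompt/completion token counts, including thinking tokens when present.
def recordUsage(acc: StreamingAccumulator, prompt: Int, completion: Int, thinking: Option[Int]): Unit =
  thinking match {
    case Some(t) => acc.updateTokensWithThinking(prompt, completion, t) // includes reasoning tokens
    case None    => acc.updateTokens(prompt, completion)
  }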