Agent

org.llm4s.agent.Agent
class Agent(client: LLMClient)

Core agent implementation for orchestrating LLM interactions with tool calling.

The Agent class provides a flexible framework for running LLM-powered workflows with support for tools, guardrails, handoffs, and streaming events.

== Key Features ==

  • '''Tool Calling''': Automatically executes tools requested by the LLM
  • '''Multi-turn Conversations''': Maintains conversation state across interactions
  • '''Handoffs''': Delegates to specialist agents when appropriate
  • '''Guardrails''': Input/output validation with composable guardrail chains
  • '''Streaming Events''': Real-time event callbacks during execution

== Basic Usage ==

for {
 client <- LLMConnect.fromEnv()
 agent = new Agent(client)
 tools = new ToolRegistry(Seq(myTool))
 state <- agent.run("What is 2+2?", tools)
} yield state.conversation.messages.last.content

== With Guardrails ==

agent.run(
 query = "Generate JSON",
 tools = tools,
 inputGuardrails = Seq(new LengthCheck(1, 10000)),
 outputGuardrails = Seq(new JSONValidator())
)

== With Streaming Events ==

agent.runWithEvents("Query", tools) { event =>
 event match {
   case AgentEvent.TextDelta(text, _) => print(text)
   case AgentEvent.ToolCallCompleted(name, result, _, _, _, _) =>
     println(s"Tool $name returned: $result")
   case _ => ()
 }
}

Value parameters

  client: The LLM client for making completion requests

See also

  AgentState for state management during execution
  Handoff for agent-to-agent delegation

Concrete methods

def continueConversation(previousState: AgentState, newUserMessage: String, inputGuardrails: Seq[InputGuardrail], outputGuardrails: Seq[OutputGuardrail], maxSteps: Option[Int], traceLogPath: Option[String], contextWindowConfig: Option[ContextWindowConfig], debug: Boolean): Result[AgentState]

Continue an agent conversation with a new user message. This is the functional way to handle multi-turn conversations.

The previous state must be in Complete or Failed status; you cannot continue from InProgress or WaitingForTools. This ensures a clean turn boundary and prevents inconsistent state.

Value parameters

  contextWindowConfig: Optional configuration for automatic context pruning
  debug: Enable debug logging
  inputGuardrails: Validate new message before processing
  maxSteps: Optional limit on reasoning steps for this turn
  newUserMessage: The new user message to process
  outputGuardrails: Validate response before returning
  previousState: The previous agent state (must be Complete or Failed)
  traceLogPath: Optional path for trace logging

Returns: Result containing the new agent state after processing the message

Example
val result = for {
 providerCfg <- /* load provider config */
 client      <- org.llm4s.llmconnect.LLMConnect.getClient(providerCfg)
 tools       = new ToolRegistry(Seq(WeatherTool.tool))
 agent       = new Agent(client)
 state1     <- agent.run("What's the weather in Paris?", tools)
 state2     <- agent.continueConversation(state1, "And in London?")
 state3     <- agent.continueConversation(state2, "Which is warmer?")
} yield state3
def continueConversationWithEvents(previousState: AgentState, newUserMessage: String, onEvent: AgentEvent => Unit, inputGuardrails: Seq[InputGuardrail], outputGuardrails: Seq[OutputGuardrail], maxSteps: Option[Int], traceLogPath: Option[String], contextWindowConfig: Option[ContextWindowConfig], debug: Boolean): Result[AgentState]

Continue a conversation with streaming events.

Value parameters

  contextWindowConfig: Optional configuration for context pruning
  debug: Enable debug logging
  inputGuardrails: Validate new message before processing
  maxSteps: Optional limit on reasoning steps
  newUserMessage: The new user message to process
  onEvent: Callback for streaming events
  outputGuardrails: Validate response before returning
  previousState: The previous agent state (must be Complete or Failed)
  traceLogPath: Optional path for trace logging

Returns: Result containing the new agent state
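
A sketch of a streamed second turn, parallel to the continueConversation example above; `agent` and `state1` are assumed to come from an earlier run call, and all remaining parameters are left at their defaults:

```scala
import org.llm4s.agent.streaming.AgentEvent

// Continue the conversation from a completed turn, printing text
// deltas to stdout as they arrive.
val result = agent.continueConversationWithEvents(
  previousState = state1,             // must be Complete or Failed
  newUserMessage = "And in London?",
  onEvent = {
    case AgentEvent.TextDelta(text, _) => print(text)
    case _                             => ()
  }
)
```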

def continueConversationWithStrategy(previousState: AgentState, newUserMessage: String, toolExecutionStrategy: ToolExecutionStrategy, inputGuardrails: Seq[InputGuardrail], outputGuardrails: Seq[OutputGuardrail], maxSteps: Option[Int], traceLogPath: Option[String], contextWindowConfig: Option[ContextWindowConfig], debug: Boolean)(implicit ec: ExecutionContext): Result[AgentState]

Continue a conversation with a configurable tool execution strategy.

Value parameters

  contextWindowConfig: Optional configuration for automatic context pruning
  debug: Enable debug logging
  ec: ExecutionContext for async operations
  inputGuardrails: Validate new message before processing
  maxSteps: Optional limit on reasoning steps for this turn
  newUserMessage: The new user message to process
  outputGuardrails: Validate response before returning
  previousState: The previous agent state (must be Complete or Failed)
  toolExecutionStrategy: Strategy for executing multiple tool calls
  traceLogPath: Optional path for trace logging

Returns: Result containing the new agent state after processing the message
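
A sketch of a follow-up turn whose tool calls may run concurrently; `agent` and `state1` are assumed from an earlier run, and the `ToolExecutionStrategy` import path is an assumption (its values are documented under runWithStrategy below):

```scala
import scala.concurrent.ExecutionContext.Implicits.global

// Continue the conversation, letting the LLM's tool calls for this
// turn execute concurrently, capped at two at a time.
val result = agent.continueConversationWithStrategy(
  previousState = state1,
  newUserMessage = "Now check five more cities",
  toolExecutionStrategy = ToolExecutionStrategy.ParallelWithLimit(2)
)
```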

def formatStateAsMarkdown(state: AgentState): String

Formats the agent state as a markdown document for tracing

Value parameters

  state: The agent state to format as markdown

Returns: A markdown string representation of the agent state

def initialize(query: String, tools: ToolRegistry, handoffs: Seq[Handoff], systemPromptAddition: Option[String], completionOptions: CompletionOptions): AgentState

Initializes a new agent state with the given query

Value parameters

  completionOptions: Optional completion options for LLM calls (temperature, maxTokens, etc.)
  handoffs: Available handoffs (default: none)
  query: The user query to process
  systemPromptAddition: Optional additional text to append to the default system prompt
  tools: The registry of available tools

Returns: A new AgentState initialized with the query and tools

def run(initialState: AgentState, maxSteps: Option[Int], traceLogPath: Option[String], debug: Boolean): Result[AgentState]

Runs the agent from an existing state until completion, failure, or step limit is reached

Value parameters

  debug: Enable detailed debug logging for tool calls and agent loop iterations
  initialState: The initial agent state to run from
  maxSteps: Optional limit on the number of steps to execute
  traceLogPath: Optional path to write a markdown trace file

Returns: Either an error or the final agent state
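
This overload pairs with initialize when you want to build or inspect the state before running it. A sketch, assuming `agent` and `tools` are already in scope and initialize's remaining parameters take their defaults:

```scala
// Build the initial state explicitly, then drive it to completion
// with a step cap and a markdown trace file.
val initial = agent.initialize(
  query = "What's the weather in Paris?",
  tools = tools
)
val result = agent.run(
  initialState = initial,
  maxSteps = Some(10),
  traceLogPath = Some("/tmp/agent-trace.md"),
  debug = false
)
```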

def run(query: String, tools: ToolRegistry, inputGuardrails: Seq[InputGuardrail], outputGuardrails: Seq[OutputGuardrail], handoffs: Seq[Handoff], maxSteps: Option[Int], traceLogPath: Option[String], systemPromptAddition: Option[String], completionOptions: CompletionOptions, debug: Boolean): Result[AgentState]

Runs the agent with a new query until completion, failure, or step limit is reached

Value parameters

  completionOptions: Optional completion options for LLM calls (temperature, maxTokens, etc.)
  debug: Enable detailed debug logging for tool calls and agent loop iterations
  handoffs: Available handoffs (default: none)
  inputGuardrails: Validate query before processing (default: none)
  maxSteps: Optional limit on the number of steps to execute
  outputGuardrails: Validate response before returning (default: none)
  query: The user query to process
  systemPromptAddition: Optional additional text to append to the default system prompt
  tools: The registry of available tools
  traceLogPath: Optional path to write a markdown trace file

Returns: Either an error or the final agent state

def runCollectingEvents(query: String, tools: ToolRegistry, maxSteps: Option[Int], systemPromptAddition: Option[String], completionOptions: CompletionOptions, debug: Boolean): Result[(AgentState, Seq[AgentEvent])]

Collect all events during execution into a sequence.

Convenience method that runs the agent and returns both the final state and all events that were emitted during execution.

Value parameters

  completionOptions: Completion options
  debug: Enable debug logging
  maxSteps: Optional limit on the number of steps
  query: The user query to process
  systemPromptAddition: Optional system prompt addition
  tools: The registry of available tools

Returns: Tuple of (final state, all events)
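
A sketch of inspecting the collected events after a run; `agent` and `tools` are assumed to be in scope, and Result is treated as an Either-like type (consistent with the "Either an error or the final agent state" return docs on this page):

```scala
// Run once and get both the final state and every event that was
// emitted during execution, e.g. to audit activity after the fact.
val result = agent.runCollectingEvents(
  query = "What's the weather in Paris?",
  tools = tools
)
result.foreach { case (_, events) =>
  println(s"Run emitted ${events.size} events")
}
```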

def runMultiTurn(initialQuery: String, followUpQueries: Seq[String], tools: ToolRegistry, maxStepsPerTurn: Option[Int], systemPromptAddition: Option[String], completionOptions: CompletionOptions, contextWindowConfig: Option[ContextWindowConfig], debug: Boolean): Result[AgentState]

Run multiple conversation turns sequentially. Each turn waits for the previous to complete before starting. This is a convenience method for running a complete multi-turn conversation.

Value parameters

  completionOptions: Completion options
  contextWindowConfig: Optional configuration for automatic context pruning
  debug: Enable debug logging
  followUpQueries: Additional user messages to process in sequence
  initialQuery: The first user message
  maxStepsPerTurn: Optional step limit per turn
  systemPromptAddition: Optional system prompt addition
  tools: Tool registry for the conversation

Returns: Result containing the final agent state after all turns

Example
val result = agent.runMultiTurn(
 initialQuery = "What's the weather in Paris?",
 followUpQueries = Seq(
   "And in London?",
   "Which is warmer?"
 ),
 tools = tools
)
def runStep(state: AgentState, debug: Boolean): Result[AgentState]

Runs a single step of the agent's reasoning process
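
run drives runStep in a loop; a manual-stepping sketch is below. Result is treated as an Either here, and `isFinished` is a hypothetical helper: this page documents the terminal status values (Complete, Failed) but not how to read the status off an AgentState.

```scala
// Step the agent by hand, e.g. to log between iterations.
var state    = agent.initialize("What is 2+2?", tools)
var continue = true
while (continue) {
  agent.runStep(state, debug = false) match {
    case Right(next) =>
      println(s"step done, ${next.conversation.messages.size} messages so far")
      state = next
      continue = !isFinished(next) // hypothetical: true once Complete or Failed
    case Left(error) =>
      println(s"step failed: $error")
      continue = false
  }
}
```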

def runWithEvents(query: String, tools: ToolRegistry, onEvent: AgentEvent => Unit, inputGuardrails: Seq[InputGuardrail], outputGuardrails: Seq[OutputGuardrail], handoffs: Seq[Handoff], maxSteps: Option[Int], traceLogPath: Option[String], systemPromptAddition: Option[String], completionOptions: CompletionOptions, debug: Boolean): Result[AgentState]

Runs the agent with streaming events for real-time progress tracking.

This method provides fine-grained visibility into agent execution through a callback that receives org.llm4s.agent.streaming.AgentEvent instances as they occur. Events include:

  • Token-level streaming during LLM generation
  • Tool call start/complete notifications
  • Agent lifecycle events (start, step, complete, fail)

Value parameters

  completionOptions: Optional completion options for LLM calls
  debug: Enable detailed debug logging
  handoffs: Available handoffs (default: none)
  inputGuardrails: Validate query before processing (default: none)
  maxSteps: Optional limit on the number of steps to execute
  onEvent: Callback invoked for each event during execution
  outputGuardrails: Validate response before returning (default: none)
  query: The user query to process
  systemPromptAddition: Optional additional text to append to the default system prompt
  tools: The registry of available tools
  traceLogPath: Optional path to write a markdown trace file

Returns: Either an error or the final agent state

Example
import org.llm4s.agent.streaming.AgentEvent._
agent.runWithEvents(
 query = "What's the weather?",
 tools = weatherTools,
 onEvent = {
   case TextDelta(delta, _) => print(delta)
   case ToolCallStarted(_, name, _, _) => println(s"[Calling $name]")
   case AgentCompleted(_, steps, ms, _) => println(s"Done in $steps steps")
   case _ =>
 }
)
def runWithStrategy(query: String, tools: ToolRegistry, toolExecutionStrategy: ToolExecutionStrategy, inputGuardrails: Seq[InputGuardrail], outputGuardrails: Seq[OutputGuardrail], handoffs: Seq[Handoff], maxSteps: Option[Int], traceLogPath: Option[String], systemPromptAddition: Option[String], completionOptions: CompletionOptions, debug: Boolean)(implicit ec: ExecutionContext): Result[AgentState]

Runs the agent with a configurable tool execution strategy.

This method enables parallel or rate-limited execution of multiple tool calls, which can significantly improve performance when the LLM requests multiple independent tool calls (e.g., fetching weather for multiple cities).

Value parameters

  completionOptions: Optional completion options for LLM calls
  debug: Enable detailed debug logging
  ec: ExecutionContext for async operations
  handoffs: Available handoffs (default: none)
  inputGuardrails: Validate query before processing (default: none)
  maxSteps: Optional limit on the number of steps to execute
  outputGuardrails: Validate response before returning (default: none)
  query: The user query to process
  systemPromptAddition: Optional additional text to append to the default system prompt
  toolExecutionStrategy: Strategy for executing multiple tool calls:
    • Sequential: one at a time (default, safest)
    • Parallel: all tools simultaneously
    • ParallelWithLimit(n): at most n tools concurrently
  tools: The registry of available tools
  traceLogPath: Optional path to write a markdown trace file

Returns: Either an error or the final agent state

Example
import scala.concurrent.ExecutionContext.Implicits.global
// Execute weather lookups in parallel
val result = agent.runWithStrategy(
 query = "Get weather in London, Paris, and Tokyo",
 tools = weatherTools,
 toolExecutionStrategy = ToolExecutionStrategy.Parallel
)
// Limit concurrency to avoid rate limits
val limited = agent.runWithStrategy(
 query = "Search for 10 topics",
 tools = searchTools,
 toolExecutionStrategy = ToolExecutionStrategy.ParallelWithLimit(3)
)
def writeTraceLog(state: AgentState, traceLogPath: String): Unit

Writes the current state to a markdown trace file

Value parameters

  state: The agent state to write to the trace log
  traceLogPath: The path to write the trace log to

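
Example, combining this with formatStateAsMarkdown above; `agent` is assumed in scope and `finalState` is assumed to be the state returned by an earlier run:

```scala
// Render the trace as a markdown string yourself, or let the agent
// write it straight to a file.
val markdown: String = agent.formatStateAsMarkdown(finalState)
agent.writeTraceLog(finalState, "/tmp/agent-trace.md")
```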