Let’s start with the simplest possible LLM4S program - a “Hello World” that asks the LLM a question.
Create the File
Create HelloLLM.scala:
```scala
import org.llm4s.llmconnect.LLMConnect
import org.llm4s.llmconnect.model.UserMessage

object HelloLLM extends App {
  val result = for {
    client <- LLMConnect.create()
    response <- client.complete(
      messages = List(UserMessage("What is Scala?")),
      model = None
    )
  } yield response

  result match {
    case Right(completion) => println(s"Response: ${completion.content}")
    case Left(error)       => println(s"Error: $error")
  }
}
```
Run It
```bash
# Make sure your API key is configured
export LLM_MODEL=openai/gpt-4o
export OPENAI_API_KEY=sk-...

sbt run
```
Expected Output
```
Response: Scala is a high-level programming language that combines
object-oriented and functional programming paradigms. It runs on the
JVM and is known for its strong type system and concurrency support.
```
Understanding the Code
Let’s break down what’s happening:
1. LLMConnect.create()
```scala
client <- LLMConnect.create()
```
This creates an LLM client automatically, based on your LLM_MODEL environment variable. It:

- Reads configuration from environment variables
- Selects the appropriate provider (OpenAI, Anthropic, etc.)
- Returns a Result[LLMClient] (an alias for Either[LLMError, LLMClient]) - see the sketch below
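Because creation returns an Either rather than throwing, a missing or invalid configuration surfaces as a Left value. Here is a minimal sketch of checking the client on its own; rendering the error via string interpolation (i.e. toString) is an assumption about how LLMError prints:

```scala
import org.llm4s.llmconnect.LLMConnect

object ClientCheck extends App {
  // create() reads LLM_MODEL and the provider API key from the environment
  LLMConnect.create() match {
    case Right(client) => println(s"Client ready: $client")
    case Left(error)   => println(s"Configuration problem: $error") // relies on toString
  }
}
```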
2. Complete with Messages
```scala
response <- client.complete(
  messages = List(UserMessage("What is Scala?")),
  model = None
)
```
- messages: a list of conversation messages (User, Assistant, System)
- model: an optional model override; None uses the configured model (see the override sketch below)
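For example, you could override the configured model for a single call. Treat this as a sketch rather than confirmed API: it assumes the model parameter accepts an optional provider/model identifier string like the one in LLM_MODEL, and the model id itself is illustrative.

```scala
import org.llm4s.llmconnect.LLMConnect
import org.llm4s.llmconnect.model.UserMessage

object ModelOverrideExample extends App {
  val result = for {
    client <- LLMConnect.create()
    response <- client.complete(
      messages = List(UserMessage("What is Scala?")),
      // Assumption: the override is a model identifier string; adjust to the actual type
      model = Some("openai/gpt-4o-mini")
    )
  } yield response.content

  result.fold(
    error   => println(s"Error: $error"),
    content => println(s"Response: $content")
  )
}
```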
Multi-Turn Conversations

To carry a conversation, pass the full message history (system, user, and assistant messages) with each call:

```scala
import org.llm4s.llmconnect.LLMConnect
import org.llm4s.llmconnect.model._

object ConversationExample extends App {
  val result = for {
    client <- LLMConnect.create()
    response <- client.complete(
      messages = List(
        SystemMessage("You are a helpful programming tutor."),
        UserMessage("What is Scala?"),
        AssistantMessage("Scala is a high-level programming language..."),
        UserMessage("How does it compare to Java?")
      ),
      model = None
    )
  } yield response

  result.fold(
    error => println(s"Error: $error"),
    completion => println(s"Response: ${completion.content}")
  )
}
```
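Each complete call is stateless, so continuing a conversation means appending the model's reply to the history yourself before asking the follow-up. A minimal sketch, assuming AssistantMessage wraps the plain response text just as in the example above:

```scala
import org.llm4s.llmconnect.LLMConnect
import org.llm4s.llmconnect.model._

object FollowUpExample extends App {
  val history = List(
    SystemMessage("You are a helpful programming tutor."),
    UserMessage("What is Scala?")
  )

  val result = for {
    client <- LLMConnect.create()
    first  <- client.complete(messages = history, model = None)
    // Feed the model's own answer back as an AssistantMessage,
    // then ask the follow-up question with full context
    followUp <- client.complete(
      messages = history ++ List(
        AssistantMessage(first.content),
        UserMessage("How does it compare to Java?")
      ),
      model = None
    )
  } yield followUp.content

  result.fold(
    error   => println(s"Error: $error"),
    content => println(s"Follow-up answer: $content")
  )
}
```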
Using Tools

Tools let the model call back into your Scala code. Define a function, wrap it in a ToolFunction, register it, and hand the registry to an Agent:

```scala
import org.llm4s.llmconnect.LLMConnect
import org.llm4s.toolapi.{ToolFunction, ToolRegistry}
import org.llm4s.agent.Agent

object ToolExample extends App {
  // Define a simple tool
  def getWeather(location: String): String =
    s"The weather in $location is sunny and 72°F"

  val weatherTool = ToolFunction(
    name = "get_weather",
    description = "Get current weather for a location",
    function = getWeather _
  )

  val result = for {
    client <- LLMConnect.create()
    tools = new ToolRegistry(Seq(weatherTool))
    agent = new Agent(client)
    state <- agent.run("What's the weather in Paris?", tools)
  } yield state

  result.fold(
    error => println(s"Error: $error"),
    state => println(s"Final response: ${state.finalResponse}")
  )
}
```
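An agent isn't limited to a single tool. Below is a sketch registering two tools with one registry, assuming the same ToolFunction(name, description, function) shape used above; get_time is a hypothetical stub added for illustration:

```scala
import org.llm4s.llmconnect.LLMConnect
import org.llm4s.toolapi.{ToolFunction, ToolRegistry}
import org.llm4s.agent.Agent

object MultiToolExample extends App {
  def getWeather(location: String): String =
    s"The weather in $location is sunny and 72°F"

  // Hypothetical second tool, stubbed for illustration
  def getTime(timezone: String): String =
    s"The current time in $timezone is 12:00"

  val tools = new ToolRegistry(Seq(
    ToolFunction(name = "get_weather", description = "Get current weather for a location", function = getWeather _),
    ToolFunction(name = "get_time", description = "Get the current time in a timezone", function = getTime _)
  ))

  val result = for {
    client <- LLMConnect.create()
    agent = new Agent(client)
    state <- agent.run("What's the weather and local time in Paris?", tools)
  } yield state.finalResponse

  result.fold(
    error    => println(s"Error: $error"),
    response => println(s"Final response: $response")
  )
}
```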
Streaming Responses

For long outputs you can stream tokens as they arrive instead of waiting for the full completion:

```scala
import org.llm4s.llmconnect.LLMConnect
import org.llm4s.llmconnect.model.UserMessage

object StreamingExample extends App {
  val result = for {
    client <- LLMConnect.create()
    stream <- client.completeStreaming(
      messages = List(UserMessage("Write a short poem about Scala")),
      model = None
    )
  } yield {
    print("Response: ")
    stream.foreach { chunk =>
      print(chunk.content) // Print each token as it arrives
    }
    println()
  }

  result.fold(
    error => println(s"Error: $error"),
    _ => println("\nDone!")
  )
}
```
Output appears token by token in real time, like ChatGPT!
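If you also need the complete text once streaming finishes (for logging or further processing), you can accumulate chunks while printing them. A sketch using the same stream.foreach and chunk.content surface as above:

```scala
import org.llm4s.llmconnect.LLMConnect
import org.llm4s.llmconnect.model.UserMessage

object StreamCollectExample extends App {
  val builder = new StringBuilder

  val result = for {
    client <- LLMConnect.create()
    stream <- client.completeStreaming(
      messages = List(UserMessage("Write a short poem about Scala")),
      model = None
    )
  } yield {
    stream.foreach { chunk =>
      print(chunk.content)      // live output, token by token
      builder ++= chunk.content // accumulate the full text
    }
    builder.toString
  }

  result.fold(
    error => println(s"Error: $error"),
    full  => println(s"\nCollected ${full.length} characters")
  )
}
```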
Putting It All Together

This example combines client creation, a tool definition, and an agent run, with explicit success and error handling:

```scala
import org.llm4s.llmconnect.LLMConnect
import org.llm4s.toolapi.{ToolFunction, ToolRegistry}
import org.llm4s.agent.Agent

object ComprehensiveExample extends App {
  // Define tools
  def calculate(expression: String): String = {
    // Simple calculator (use proper eval in production!)
    s"Result: ${expression} = 42"
  }

  val calcTool = ToolFunction(
    name = "calculate",
    description = "Evaluate a mathematical expression",
    function = calculate _
  )

  // Main program
  println("🚀 Starting LLM4S Example...")

  val result = for {
    client <- LLMConnect.create()
    tools = new ToolRegistry(Seq(calcTool))
    agent = new Agent(client)
    // Run agent with tool support
    state <- agent.run("What is 6 times 7? Please use the calculator.", tools)
  } yield state

  result match {
    case Right(state) =>
      println(s"✅ Success!")
      println(s"Response: ${state.finalResponse}")
      println(s"Messages exchanged: ${state.messages.length}")
    case Left(error) =>
      println(s"❌ Error: $error")
      System.exit(1)
  }
}
```

Because the for-comprehension runs over Either, the first failing step short-circuits straight to the Left branch.
The Core Pattern

Every example above reduces to the same shape: create a client, send messages, yield what you need:

```scala
for {
  client <- LLMConnect.create()
  response <- client.complete(
    List(
      SystemMessage("You are an expert in..."),
      UserMessage("Question")
    ),
    None
  )
} yield response.content
```
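If you ask several one-off questions, this pattern is worth wrapping once. A hypothetical helper, leaving the return type to inference since the exact error type (LLMError, per the notes above) lives in the library:

```scala
import org.llm4s.llmconnect.LLMConnect
import org.llm4s.llmconnect.model._

object AskHelper extends App {
  // Hypothetical convenience wrapper around the core pattern above
  def ask(system: String, question: String) =
    for {
      client <- LLMConnect.create()
      response <- client.complete(
        List(SystemMessage(system), UserMessage(question)),
        None
      )
    } yield response.content

  ask("You are a concise assistant.", "Summarize Scala in one sentence.")
    .fold(error => println(s"Error: $error"), answer => println(answer))
}
```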