Installation
Get LLM4S up and running in minutes.
Table of contents
- Prerequisites
- Add LLM4S to Your Project
- Quick Start with the Starter Kit
- Optional Dependencies
- API Keys Setup
- Verify Installation
- Troubleshooting
- Next Steps
- Additional Resources
Prerequisites
Before installing LLM4S, ensure you have:
- Java Development Kit (JDK) 11 or higher (JDK 21 recommended)
- Scala 2.13.16 or Scala 3.7.1 (or both for cross-compilation)
- SBT 1.10.6 or higher
- An API key from at least one LLM provider (OpenAI, Anthropic, Azure OpenAI, or Ollama)
Verify Prerequisites
```shell
# Check Java version
java -version   # Should show 11 or higher

# Check Scala version
scala -version  # 2.13.16 or 3.7.1

# Check SBT version
sbt version     # 1.10.6 or higher
```
Add LLM4S to Your Project
SBT
Add LLM4S to your build.sbt:
```scala
// For Scala 2.13 or 3.x
libraryDependencies += "org.llm4s" %% "core" % "0.1.16"

// Cross-compile for both versions
ThisBuild / scalaVersion := "2.13.16" // or "3.7.1"
```
Maven
```xml
<!-- For Scala 3 -->
<dependency>
  <groupId>org.llm4s</groupId>
  <artifactId>core_3</artifactId>
  <version>0.1.16</version>
</dependency>

<!-- For Scala 2.13 -->
<dependency>
  <groupId>org.llm4s</groupId>
  <artifactId>core_2.13</artifactId>
  <version>0.1.16</version>
</dependency>
```
Multi-Module Project
If you have a multi-module project:
```scala
lazy val myProject = (project in file("."))
  .settings(
    name := "my-llm-project",
    scalaVersion := "2.13.16",
    libraryDependencies ++= Seq(
      "org.llm4s" %% "core" % "0.1.16"
    )
  )
```
Snapshot Versions
To use the latest development snapshot:
```scala
resolvers += Resolver.sonatypeRepo("snapshots")
libraryDependencies += "org.llm4s" %% "core" % "0.1.0-SNAPSHOT"
```
Quick Start with the Starter Kit
The fastest way to get started is using the llm4s.g8 template:
```shell
# Install the template
sbt new llm4s/llm4s.g8

# Follow the prompts
# name [My LLM Project]: my-awesome-agent
# organization [com.example]: com.mycompany
# scala_version [2.13.16]:
# llm4s_version [0.1.16]:

cd my-awesome-agent
sbt run
```
The starter kit includes:
- ✅ Pre-configured SBT build
- ✅ Example agent with tool calling
- ✅ Configuration templates
- ✅ Multi-provider setup
- ✅ Docker configuration for workspace
Optional Dependencies
Additional modules are coming soon. The core library includes most functionality. Check Maven Central for available artifacts.
For Workspace (Containerized Execution)
```scala
libraryDependencies += "org.llm4s" %% "workspaceClient" % "0.1.16"
```
And install Docker:
```shell
# macOS (Docker Desktop; the plain "docker" formula installs only the CLI)
brew install --cask docker

# Ubuntu/Debian
sudo apt-get install docker.io

# Verify
docker --version
```
API Keys Setup
LLM4S requires API keys for your chosen provider(s). You can configure these via:
- Environment variables (recommended)
- Configuration files (`application.conf`)
- System properties (`-D` flags)
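The system-property option mirrors the environment-variable names used throughout this guide; a minimal sketch (assuming LLM4S reads the same keys from JVM system properties — the key names here come from the `.env` example below):

```shell
# Pass configuration as JVM system properties instead of environment variables
sbt -DLLM_MODEL=openai/gpt-4o -DOPENAI_API_KEY=sk-proj-your-key run
```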
Environment Variables
Create a .env file in your project root (add to .gitignore!):
```shell
# Choose your provider
LLM_MODEL=openai/gpt-4o

# OpenAI
OPENAI_API_KEY=sk-proj-...
OPENAI_BASE_URL=https://api.openai.com/v1  # Optional

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_BASE_URL=https://api.anthropic.com  # Optional

# Azure OpenAI
AZURE_API_KEY=your-azure-key
AZURE_API_BASE=https://your-resource.openai.azure.com
AZURE_DEPLOYMENT_NAME=gpt-4o

# Ollama (local)
OLLAMA_BASE_URL=http://localhost:11434
```
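Keeping `.env` out of version control can be automated; a minimal, idempotent sketch:

```shell
# Add .env to .gitignore exactly once
touch .gitignore
grep -qxF '.env' .gitignore || echo '.env' >> .gitignore
grep -x '.env' .gitignore   # prints ".env" once it is listed
```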
Load the .env file before running. Note that a plain `source .env` sets the variables only in the current shell without exporting them, so wrap it in `set -a` to make them visible to sbt:

```shell
set -a; source .env; set +a
sbt run
```
Or use sbt-dotenv plugin:
```scala
// project/plugins.sbt
addSbtPlugin("au.com.onegeek" %% "sbt-dotenv" % "2.1.233")
```
Get API Keys
OpenAI
- Go to platform.openai.com
- Sign up or log in
- Navigate to API Keys
- Click Create new secret key
- Copy the key (starts with `sk-`)
Anthropic
- Go to console.anthropic.com
- Sign up or log in
- Navigate to API Keys
- Click Create Key
- Copy the key (starts with `sk-ant-`)
Azure OpenAI
- Create an Azure account
- Navigate to Azure OpenAI Service
- Create a resource
- Deploy a model (e.g., gpt-4o)
- Copy the API Key and Endpoint
Ollama (Local)
- Install Ollama: ollama.com
- Pull a model: `ollama pull llama2`
- Start the server: `ollama serve`
- No API key needed!
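Once `ollama serve` is running, you can sanity-check it over its local HTTP API (default port 11434, matching `OLLAMA_BASE_URL` above; `/api/tags` lists pulled models):

```shell
# Probe the local Ollama server
if curl -sf http://localhost:11434/api/tags >/dev/null; then
  echo "Ollama is up"
else
  echo "Ollama not reachable"
fi
```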
Verify Installation
Create a simple test file VerifyInstall.scala:
```scala
import org.llm4s.llmconnect.LLMConnect
import org.llm4s.llmconnect.model.UserMessage

object VerifyInstall extends App {
  println("Testing LLM4S installation...")

  val result = for {
    client   <- LLMConnect.create()
    response <- client.complete(
                  messages = List(UserMessage("Say 'LLM4S is working!'")),
                  model = None
                )
  } yield response

  result match {
    case Right(completion) =>
      println("✅ Success!")
      println(s"Response: ${completion.content}")
    case Left(error) =>
      println("❌ Error:")
      println(error)
  }
}
```
Run it:
```shell
sbt run
```
Expected output:
```
Testing LLM4S installation...
✅ Success!
Response: LLM4S is working!
```
Troubleshooting
“API key not found”
Problem: LLM4S can’t find your API key.
Solution:
- Verify the `.env` file exists and is in the project root
- Check you've sourced it: `source .env`
- Verify the variable name matches your provider (e.g., `OPENAI_API_KEY`)
- Check for typos in the key
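To see which keys are actually visible to your shell without printing the secrets themselves, a quick sketch using the variable names from this guide:

```shell
# Report which provider keys are set, showing only their length
for v in OPENAI_API_KEY ANTHROPIC_API_KEY AZURE_API_KEY; do
  val=$(printenv "$v" || true)
  if [ -n "$val" ]; then
    echo "$v is set (${#val} chars)"
  else
    echo "$v is NOT set"
  fi
done
```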
“Provider not supported”
Problem: Invalid LLM_MODEL format.
Solution: Use the correct format:
- OpenAI: `openai/gpt-4o`
- Anthropic: `anthropic/claude-sonnet-4-5-latest`
- Azure: `azure/gpt-4o`
- Ollama: `ollama/llama2`
Compilation Errors
Problem: Scala version mismatch.
Solution:
```shell
# Clean and recompile
sbt clean
sbt compile
```
Dependency Resolution Issues
Problem: Can’t resolve LLM4S dependency.
Solution:
- For release versions, no additional resolver needed (uses Maven Central)
- For snapshots, add the resolver:

```scala
resolvers += Resolver.sonatypeRepo("snapshots")
```
Next Steps
Now that LLM4S is installed:
- Write your first program →: create a simple LLM application
- Configure providers →: set up multiple LLM providers
- Explore examples →: browse 69 working examples
Additional Resources
- GitHub Repository: llm4s/llm4s
- Starter Kit: llm4s.g8
- Discord Community: Join us
- API Reference: Core API
Installation complete! Ready to write your first program →