org.llm4s.llmconnect.config

Members list

Type members

Classlikes

case class AnthropicConfig(apiKey: String, model: String, baseUrl: String, contextWindow: Int, reserveCompletion: Int) extends ProviderConfig

Configuration for the Anthropic Claude API.

Prefer AnthropicConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion automatically from the model name.

Value parameters

apiKey

Anthropic API key; redacted in toString.

baseUrl

API base URL, defaulting to "https://api.anthropic.com".

contextWindow

Model's total token capacity (prompt + completion combined).

model

Model identifier, e.g. "claude-sonnet-4-5-latest".

reserveCompletion

Tokens held back from prompt history for the completion.

Attributes

Companion
object
Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
object AnthropicConfig

Attributes

Companion
class
Supertypes
trait Product
trait Mirror
class Object
trait Matchable
class Any
Self type
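A construction sketch using the primary constructor and the fields documented above. The token budgets are illustrative placeholders, which is exactly why the companion's fromValues factory (whose signature is not shown on this page) is the preferred entry point:

```scala
import org.llm4s.llmconnect.config.AnthropicConfig

// Primary constructor: all fields supplied by hand. The contextWindow and
// reserveCompletion values below are illustrative only; AnthropicConfig.fromValues
// resolves the real values from the model name.
val anthropic = AnthropicConfig(
  apiKey            = sys.env("ANTHROPIC_API_KEY"),
  model             = "claude-sonnet-4-5-latest",
  baseUrl           = "https://api.anthropic.com",
  contextWindow     = 200000, // total prompt + completion capacity
  reserveCompletion = 4096    // tokens held back for the reply
)

// apiKey is redacted in toString, so the config can be logged safely.
println(anthropic)
```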
case class AzureConfig(endpoint: String, apiKey: String, model: String, apiVersion: String, contextWindow: Int, reserveCompletion: Int) extends ProviderConfig

Configuration for Azure OpenAI deployments.

Although Azure exposes an OpenAI-compatible API, it uses a different URL structure (per-deployment endpoint) and requires an apiVersion query parameter. org.llm4s.llmconnect.LLMConnect constructs an org.llm4s.llmconnect.provider.OpenAIClient internally; this config carries the Azure-specific fields that OpenAIConfig does not have.

Prefer AzureConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion automatically.

Value parameters

apiKey

Azure API key; redacted in toString.

apiVersion

Azure OpenAI API version string, e.g. "2025-01-01-preview".

contextWindow

Model's total token capacity (prompt + completion combined).

endpoint

Azure OpenAI deployment endpoint URL, e.g. "https://my-resource.openai.azure.com/openai/deployments/my-deploy".

model

Deployment name used as the model identifier.

reserveCompletion

Tokens held back from prompt history for the completion.

Attributes

Companion
object
Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
object AzureConfig

Attributes

Companion
class
Supertypes
trait Product
trait Mirror
class Object
trait Matchable
class Any
Self type
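A sketch of the Azure-specific shape: a per-deployment endpoint, the deployment name as model, and an apiVersion string. Field values other than the documented examples are placeholders:

```scala
import org.llm4s.llmconnect.config.AzureConfig

// Azure differs from plain OpenAI in two ways this config captures:
// a per-deployment endpoint URL and a mandatory apiVersion query parameter.
val azure = AzureConfig(
  endpoint          = "https://my-resource.openai.azure.com/openai/deployments/my-deploy",
  apiKey            = sys.env("AZURE_OPENAI_API_KEY"),
  model             = "my-deploy",    // deployment name, not a model family name
  apiVersion        = "2025-01-01-preview",
  contextWindow     = 128000,         // illustrative; fromValues resolves this
  reserveCompletion = 4096
)
```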
case class CohereConfig(apiKey: String, model: String, baseUrl: String, contextWindow: Int, reserveCompletion: Int) extends ProviderConfig

Configuration for the Cohere API.

Prefer CohereConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion automatically from the model name.

Value parameters

apiKey

Cohere API key; redacted in toString.

baseUrl

API base URL; defaults to CohereConfig.DEFAULT_BASE_URL.

contextWindow

Model's total token capacity (prompt + completion combined).

model

Model identifier, e.g. "command-r-plus".

reserveCompletion

Tokens held back from prompt history for the completion.

Attributes

Companion
object
Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
object CohereConfig

Attributes

Companion
class
Supertypes
trait Product
trait Mirror
class Object
trait Matchable
class Any
Self type

Centralized resolver for model context window and reserve completion tokens.

Replaces duplicated getContextWindowForModel logic across provider configs: performs a registry lookup first, then applies provider-specific fallbacks when the model is not found.

Attributes

Supertypes
class Object
trait Matchable
class Any
Self type
case class DeepSeekConfig(apiKey: String, model: String, baseUrl: String, contextWindow: Int, reserveCompletion: Int) extends ProviderConfig

Configuration for the DeepSeek API.

Prefer DeepSeekConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion automatically, and logs a warning for unknown or legacy model names.

Value parameters

apiKey

DeepSeek API key; redacted in toString.

baseUrl

API base URL; defaults to DeepSeekConfig.DEFAULT_BASE_URL.

contextWindow

Model's total token capacity (prompt + completion combined).

model

Model identifier, e.g. "deepseek-chat" or "deepseek-reasoner".

reserveCompletion

Tokens held back from prompt history for the completion.

Attributes

Companion
object
Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
object DeepSeekConfig

Attributes

Companion
class
Supertypes
trait Product
trait Mirror
class Object
trait Matchable
class Any
Self type
case class EmbeddingModelConfig(name: String, dimensions: Int)

Configuration for a text embedding model, pairing a model identifier with its output vector size.

Used by embedding providers and the model dimension registry to resolve the expected dimensionality of embeddings produced by a given model.

Value parameters

dimensions

Number of dimensions in the embedding vectors produced by this model.

name

Model identifier (e.g. "text-embedding-3-small", "voyage-3-large").

Attributes

Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
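A sketch of how the pairing can be used. The dimension value is the commonly published one for that model, not taken from this page, and the validation helper is hypothetical:

```scala
import org.llm4s.llmconnect.config.EmbeddingModelConfig

val small = EmbeddingModelConfig(name = "text-embedding-3-small", dimensions = 1536)

// Knowing the expected dimensionality up front lets callers validate
// vectors before indexing or storage (hypothetical helper):
def hasExpectedShape(vector: Array[Float], model: EmbeddingModelConfig): Boolean =
  vector.length == model.dimensions
```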
final case class EmbeddingProviderConfig(baseUrl: String, model: String, apiKey: String)

Connection settings for a remote embedding provider: the service base URL, the model identifier, and the API key used to authenticate requests.

Attributes

Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
case class GeminiConfig(apiKey: String, model: String, baseUrl: String, contextWindow: Int, reserveCompletion: Int) extends ProviderConfig

Configuration for the Google Gemini API.

Prefer GeminiConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion automatically from the model name.

Value parameters

apiKey

Google API key; redacted in toString.

baseUrl

API base URL.

contextWindow

Model's total token capacity (prompt + completion combined).

model

Model identifier, e.g. "gemini-2.0-flash".

reserveCompletion

Tokens held back from prompt history for the completion.

Attributes

Companion
object
Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
object GeminiConfig

Attributes

Companion
class
Supertypes
trait Product
trait Mirror
class Object
trait Matchable
class Any
Self type
case class LangfuseConfig(url: String, publicKey: Option[String], secretKey: Option[String], env: String, release: String, version: String)

Connection and metadata settings for the Langfuse tracing backend.

Normally obtained via org.llm4s.config.Llm4sConfig which reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_BASE_URL from the environment.

toString redacts both keys so that the config can be safely logged.

Value parameters

env

deployment environment tag attached to every trace; defaults to "production"

publicKey

Langfuse public API key; None disables tracing with a warning

release

application release identifier forwarded to Langfuse; defaults to "1.0.0"

secretKey

Langfuse secret API key; redacted in toString

url

Langfuse ingestion endpoint; defaults to https://cloud.langfuse.com/api/public/ingestion

version

SDK/integration version forwarded to Langfuse; defaults to "1.0.0"

Attributes

Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
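A sketch mirroring the environment-variable flow described above; reading the variables by hand here stands in for org.llm4s.config.Llm4sConfig:

```scala
import org.llm4s.llmconnect.config.LangfuseConfig

// Keys are Option[String]: a None publicKey disables tracing with a warning.
val langfuse = LangfuseConfig(
  url       = sys.env.getOrElse("LANGFUSE_BASE_URL",
                "https://cloud.langfuse.com/api/public/ingestion"),
  publicKey = sys.env.get("LANGFUSE_PUBLIC_KEY"),
  secretKey = sys.env.get("LANGFUSE_SECRET_KEY"),
  env       = "production",
  release   = "1.0.0",
  version   = "1.0.0"
)

// toString redacts both keys, so this is safe to log.
println(langfuse)
```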
final case class LocalEmbeddingModels(imageModel: String, audioModel: String, videoModel: String)

Configuration specifying local model names for non-text modality embedding.

Holds the model identifiers used by local encoders to produce embeddings for image, audio, and video content. These models run locally (e.g. via ONNX or stub implementations) rather than calling a remote API.

Value parameters

audioModel

Local model name for audio embeddings (e.g. "wav2vec2-base").

imageModel

Local model name for image embeddings (e.g. "openclip-vit-b32").

videoModel

Local model name for video embeddings (e.g. "timesformer-base").

Attributes

Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
case class MistralConfig(apiKey: String, model: String, baseUrl: String, contextWindow: Int, reserveCompletion: Int) extends ProviderConfig

Configuration for the Mistral API.

Prefer MistralConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion automatically from the model name.

Attributes

Companion
object
Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
object MistralConfig

Attributes

Companion
class
Supertypes
trait Product
trait Mirror
class Object
trait Matchable
class Any
Self type

Lookup service for embedding model vector dimensions.

Provides a single, authoritative mapping from (provider, model) pairs to the dimensionality of the vectors they produce. All configuration and encoding code should resolve dimensions through this registry to avoid duplicated or inconsistent dimension constants.

Attributes

Supertypes
class Object
trait Matchable
class Any
Self type
case class OllamaConfig(model: String, baseUrl: String, contextWindow: Int, reserveCompletion: Int) extends ProviderConfig

Configuration for a locally-running Ollama instance.

Ollama requires no API key; authentication is handled at the network level by controlling access to the Ollama endpoint. Prefer OllamaConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion automatically from the model name.

Value parameters

baseUrl

Ollama server URL, e.g. "http://localhost:11434".

contextWindow

Model's total token capacity (prompt + completion combined).

model

Model identifier as registered in Ollama, e.g. "llama3".

reserveCompletion

Tokens held back from prompt history for the completion.

Attributes

Companion
object
Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
object OllamaConfig

Attributes

Companion
class
Supertypes
trait Product
trait Mirror
class Object
trait Matchable
class Any
Self type
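A local-instance sketch; note the absence of any apiKey field. The token budgets are placeholders that OllamaConfig.fromValues would normally resolve:

```scala
import org.llm4s.llmconnect.config.OllamaConfig

// No API key: access control happens at the network level, by restricting
// who can reach the Ollama endpoint.
val ollama = OllamaConfig(
  model             = "llama3",
  baseUrl           = "http://localhost:11434",
  contextWindow     = 8192, // illustrative; fromValues resolves from the model name
  reserveCompletion = 1024
)
```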
case class OpenAIConfig(apiKey: String, model: String, organization: Option[String], baseUrl: String, contextWindow: Int, reserveCompletion: Int) extends ProviderConfig

Configuration for the OpenAI API and providers that implement the OpenAI-compatible REST interface.

baseUrl governs which backend is contacted: "https://api.openai.com/v1" reaches OpenAI directly, while a URL containing "openrouter.ai" causes org.llm4s.llmconnect.LLMConnect to route to OpenRouter. Azure OpenAI uses AzureConfig, not this class.

Prefer OpenAIConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion from the model name automatically.

Value parameters

apiKey

OpenAI API key; redacted in toString.

baseUrl

API base URL; determines provider routing in org.llm4s.llmconnect.LLMConnect.

contextWindow

Model's total token capacity (prompt + completion combined).

model

Model identifier, e.g. "gpt-4o".

organization

Optional OpenAI organisation ID.

reserveCompletion

Tokens held back from prompt history for the completion.

Attributes

Companion
object
Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
object OpenAIConfig

Attributes

Companion
class
Supertypes
trait Product
trait Mirror
class Object
trait Matchable
class Any
Self type
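A sketch of the routing behaviour described above: the same case class reaches OpenAI or OpenRouter depending on baseUrl. Token budgets are placeholders:

```scala
import org.llm4s.llmconnect.config.OpenAIConfig

// Direct OpenAI: the default public endpoint.
val direct = OpenAIConfig(
  apiKey            = sys.env("OPENAI_API_KEY"),
  model             = "gpt-4o",
  organization      = None,
  baseUrl           = "https://api.openai.com/v1",
  contextWindow     = 128000, // illustrative; fromValues resolves these
  reserveCompletion = 4096
)

// Same config shape, but a baseUrl containing "openrouter.ai" makes
// LLMConnect route the request through OpenRouter instead.
val viaOpenRouter = direct.copy(
  apiKey  = sys.env("OPENROUTER_API_KEY"),
  baseUrl = "https://openrouter.ai/api/v1"
)
```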
case class OpenTelemetryConfig(serviceName: String, endpoint: String, headers: Map[String, String])

Connection settings for an OpenTelemetry collector.

Spans are exported via OTLP/gRPC to endpoint. Set OTEL_EXPORTER_OTLP_ENDPOINT in the environment (picked up by org.llm4s.config.Llm4sConfig) to override the default local collector address.

Value parameters

endpoint

OTLP/gRPC collector address; defaults to "http://localhost:4317"

headers

additional HTTP headers sent with each OTLP export request (e.g. authentication tokens for hosted collectors)

serviceName

logical service name attached to every span as service.name; defaults to "llm4s-agent"

Attributes

Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
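A sketch with an authentication header for a hosted collector; the header name is purely illustrative:

```scala
import org.llm4s.llmconnect.config.OpenTelemetryConfig

val otel = OpenTelemetryConfig(
  serviceName = "llm4s-agent",                   // becomes service.name on every span
  endpoint    = sys.env.getOrElse("OTEL_EXPORTER_OTLP_ENDPOINT",
                  "http://localhost:4317"),      // OTLP/gRPC collector address
  headers     = Map("x-example-auth" -> "token") // illustrative auth header
)
```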
sealed trait ProviderConfig

Identifies a specific LLM provider, model, and connection details.

Each subtype carries the credentials, endpoint URL, and context-window metadata needed to construct an org.llm4s.llmconnect.LLMClient via org.llm4s.llmconnect.LLMConnect. Instances are normally obtained from org.llm4s.config.Llm4sConfig.provider, which reads standard environment variables (LLM_MODEL, OPENAI_API_KEY, etc.).

Prefer each subtype's fromValues factory over its primary constructor: fromValues resolves contextWindow and reserveCompletion automatically from the model name, so you only need to supply credentials and endpoint.

Attributes

Supertypes
class Object
trait Matchable
class Any
Known subtypes
class AnthropicConfig
class AzureConfig
class CohereConfig
class DeepSeekConfig
class GeminiConfig
class MistralConfig
class OllamaConfig
class OpenAIConfig
class ZaiConfig
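Because ProviderConfig is sealed, consumers can pattern-match over its subtypes exhaustively. A dispatch sketch using only fields documented on this page:

```scala
import org.llm4s.llmconnect.config._

// Sealed trait: the compiler can check exhaustiveness; the wildcard here
// covers the remaining provider configs in one arm.
def describe(cfg: ProviderConfig): String = cfg match {
  case a: AnthropicConfig => s"Anthropic ${a.model} at ${a.baseUrl}"
  case z: AzureConfig     => s"Azure deployment ${z.model} (api ${z.apiVersion})"
  case o: OllamaConfig    => s"local Ollama model ${o.model}"
  case other              => other.toString // API keys are redacted in toString
}
```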
case class TracingSettings(mode: TracingMode, langfuse: LangfuseConfig, openTelemetry: OpenTelemetryConfig)

Combined tracing configuration used by org.llm4s.trace.Tracing.

mode selects the active backend; the other fields supply backend-specific connection details. Only the sub-config matching the active mode is used — e.g. when mode = TracingMode.Langfuse, openTelemetry is ignored.

Value parameters

langfuse

Langfuse connection details; only used when mode = TracingMode.Langfuse

mode

selects the tracing backend (Langfuse, OpenTelemetry, Console, or NoOp)

openTelemetry

OpenTelemetry collector details; only used when mode = TracingMode.OpenTelemetry

Attributes

Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
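A sketch selecting the Langfuse backend; the openTelemetry field is still supplied but, per the rule above, ignored while mode is Langfuse. The import location of TracingMode is an assumption (alongside org.llm4s.trace.Tracing):

```scala
import org.llm4s.llmconnect.config.{LangfuseConfig, OpenTelemetryConfig, TracingSettings}
import org.llm4s.trace.TracingMode // assumed package for TracingMode

val tracing = TracingSettings(
  mode = TracingMode.Langfuse, // selects the active backend
  langfuse = LangfuseConfig(
    url       = "https://cloud.langfuse.com/api/public/ingestion",
    publicKey = sys.env.get("LANGFUSE_PUBLIC_KEY"),
    secretKey = sys.env.get("LANGFUSE_SECRET_KEY"),
    env       = "production",
    release   = "1.0.0",
    version   = "1.0.0"
  ),
  // Carried but ignored while mode = TracingMode.Langfuse.
  openTelemetry = OpenTelemetryConfig("llm4s-agent", "http://localhost:4317", Map.empty)
)
```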
case class ZaiConfig(apiKey: String, model: String, baseUrl: String, contextWindow: Int, reserveCompletion: Int) extends ProviderConfig

Configuration for the Z.ai GLM API.

Prefer ZaiConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion automatically from the model name.

Value parameters

apiKey

Z.ai API key; redacted in toString.

baseUrl

API base URL; defaults to ZaiConfig.DEFAULT_BASE_URL.

contextWindow

Model's total token capacity (prompt + completion combined).

model

Model identifier, e.g. "GLM-4.7".

reserveCompletion

Tokens held back from prompt history for the completion.

Attributes

Companion
object
Supertypes
trait Serializable
trait Product
trait Equals
class Object
trait Matchable
class Any
object ZaiConfig

Attributes

Companion
class
Supertypes
trait Product
trait Mirror
class Object
trait Matchable
class Any
Self type
ZaiConfig.type