org.llm4s.llmconnect.config
Members list
Type members
Classlikes
Configuration for the Anthropic Claude API.
Prefer AnthropicConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion automatically from the model name.
Value parameters
- apiKey: Anthropic API key; redacted in toString.
- baseUrl: API base URL, defaulting to "https://api.anthropic.com".
- contextWindow: Model's total token capacity (prompt + completion combined).
- model: Model identifier, e.g. "claude-sonnet-4-5-latest".
- reserveCompletion: Tokens held back from prompt history for the completion.
Attributes
- Companion: object
- Supertypes: trait Serializable, trait Product, trait Equals, trait ProviderConfig, class Object, trait Matchable, class Any
Attributes
- Companion: class
- Supertypes: trait Product, trait Mirror, class Object, trait Matchable, class Any
- Self type: AnthropicConfig.type
Configuration for Azure OpenAI deployments.
Although Azure exposes an OpenAI-compatible API, it uses a different URL structure (per-deployment endpoint) and requires an apiVersion query parameter. org.llm4s.llmconnect.LLMConnect constructs an org.llm4s.llmconnect.provider.OpenAIClient internally; this config carries the Azure-specific fields that OpenAIConfig does not have.
Prefer AzureConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion automatically.
Value parameters
- apiKey: Azure API key; redacted in toString.
- apiVersion: Azure OpenAI API version string, e.g. "2025-01-01-preview".
- contextWindow: Model's total token capacity (prompt + completion combined).
- endpoint: Azure OpenAI deployment endpoint URL, e.g. "https://my-resource.openai.azure.com/openai/deployments/my-deploy".
- model: Deployment name used as the model identifier.
- reserveCompletion: Tokens held back from prompt history for the completion.
Attributes
- Companion: object
- Supertypes: trait Serializable, trait Product, trait Equals, trait ProviderConfig, class Object, trait Matchable, class Any
Attributes
- Companion: class
- Supertypes: trait Product, trait Mirror, class Object, trait Matchable, class Any
- Self type: AzureConfig.type
Configuration for the Cohere API.
Prefer CohereConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion automatically from the model name.
Value parameters
- apiKey: Cohere API key; redacted in toString.
- baseUrl: API base URL; defaults to CohereConfig.DEFAULT_BASE_URL.
- contextWindow: Model's total token capacity (prompt + completion combined).
- model: Model identifier, e.g. "command-r-plus".
- reserveCompletion: Tokens held back from prompt history for the completion.
Attributes
- Companion: object
- Supertypes: trait Serializable, trait Product, trait Equals, trait ProviderConfig, class Object, trait Matchable, class Any
Attributes
- Companion: class
- Supertypes: trait Product, trait Mirror, class Object, trait Matchable, class Any
- Self type: CohereConfig.type
Centralized resolver for model context window and reserve completion tokens.
Replaces duplicated getContextWindowForModel logic across provider configs. Performs registry lookup, then applies provider-specific fallbacks when not found.
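The lookup-then-fallback flow can be sketched in a few lines. This is an illustration only: the registry contents, method name, and fallback values below are invented examples, not the resolver's actual API.

```scala
// Illustrative registry lookup with a provider-specific fallback, mirroring
// the described behaviour: consult the registry first, then fall back.
object ResolverSketch {
  // Example entries only; the real registry is maintained elsewhere.
  private val registry: Map[String, Int] = Map(
    "gpt-4o"        -> 128000,
    "deepseek-chat" -> 65536
  )

  // Returns the registered context window, or the provider's fallback
  // when the model name is unknown.
  def contextWindowFor(model: String, providerFallback: Int): Int =
    registry.getOrElse(model, providerFallback)
}

println(ResolverSketch.contextWindowFor("gpt-4o", 8192))      // 128000
println(ResolverSketch.contextWindowFor("my-finetune", 8192)) // 8192
```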
Attributes
- Supertypes: class Object, trait Matchable, class Any
- Self type
Configuration for the DeepSeek API.
Prefer DeepSeekConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion automatically, and logs a warning for unknown or legacy model names.
Value parameters
- apiKey: DeepSeek API key; redacted in toString.
- baseUrl: API base URL; defaults to DeepSeekConfig.DEFAULT_BASE_URL.
- contextWindow: Model's total token capacity (prompt + completion combined).
- model: Model identifier, e.g. "deepseek-chat" or "deepseek-reasoner".
- reserveCompletion: Tokens held back from prompt history for the completion.
Attributes
- Companion: object
- Supertypes: trait Serializable, trait Product, trait Equals, trait ProviderConfig, class Object, trait Matchable, class Any
Attributes
- Companion: class
- Supertypes: trait Product, trait Mirror, class Object, trait Matchable, class Any
- Self type: DeepSeekConfig.type
Configuration for a text embedding model, pairing a model identifier with its output vector size.
Used by embedding providers and the model dimension registry to resolve the expected dimensionality of embeddings produced by a given model.
Value parameters
- dimensions: Number of dimensions in the embedding vectors produced by this model.
- name: Model identifier (e.g. "text-embedding-3-small", "voyage-3-large").
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
Configuration for the Google Gemini API.
Prefer GeminiConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion automatically from the model name.
Value parameters
- apiKey: Google API key; redacted in toString.
- baseUrl: API base URL.
- contextWindow: Model's total token capacity (prompt + completion combined).
- model: Model identifier, e.g. "gemini-2.0-flash".
- reserveCompletion: Tokens held back from prompt history for the completion.
Attributes
- Companion: object
- Supertypes: trait Serializable, trait Product, trait Equals, trait ProviderConfig, class Object, trait Matchable, class Any
Attributes
- Companion: class
- Supertypes: trait Product, trait Mirror, class Object, trait Matchable, class Any
- Self type: GeminiConfig.type
Connection and metadata settings for the Langfuse tracing backend.
Normally obtained via org.llm4s.config.Llm4sConfig which reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_BASE_URL from the environment.
toString redacts both keys so that the config can be safely logged.
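The redaction behaviour can be imitated in a few lines. LangfuseSketch below is a hypothetical local stand-in, not the llm4s case class; it only demonstrates the "override toString so secrets never reach logs" idea.

```scala
// Stand-in illustrating key redaction: the default case-class toString would
// leak both keys, so it is overridden to mask them.
final case class LangfuseSketch(publicKey: Option[String], secretKey: String, url: String) {
  override def toString: String =
    s"LangfuseSketch(publicKey=${publicKey.map(_ => "***")}, secretKey=***, url=$url)"
}

val cfg = LangfuseSketch(Some("pk-demo"), "sk-demo", "https://cloud.langfuse.com/api/public/ingestion")
println(cfg) // key material never appears in the output
```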
Value parameters
- env: Deployment environment tag attached to every trace; defaults to "production".
- publicKey: Langfuse public API key; None disables tracing with a warning.
- release: Application release identifier forwarded to Langfuse; defaults to "1.0.0".
- secretKey: Langfuse secret API key; redacted in toString.
- url: Langfuse ingestion endpoint; defaults to https://cloud.langfuse.com/api/public/ingestion.
- version: SDK/integration version forwarded to Langfuse; defaults to "1.0.0".
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
Configuration specifying local model names for non-text modality embedding.
Holds the model identifiers used by local encoders to produce embeddings for image, audio, and video content. These models run locally (e.g. via ONNX or stub implementations) rather than calling a remote API.
Value parameters
- audioModel: Local model name for audio embeddings (e.g. "wav2vec2-base").
- imageModel: Local model name for image embeddings (e.g. "openclip-vit-b32").
- videoModel: Local model name for video embeddings (e.g. "timesformer-base").
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
Attributes
- Companion: object
- Supertypes: trait Serializable, trait Product, trait Equals, trait ProviderConfig, class Object, trait Matchable, class Any
Attributes
- Companion: class
- Supertypes: trait Product, trait Mirror, class Object, trait Matchable, class Any
- Self type: MistralConfig.type
Lookup service for embedding model vector dimensions.
Provides a single, authoritative mapping from (provider, model) pairs to the dimensionality of the vectors they produce. All configuration and encoding code should resolve dimensions through this registry to avoid duplicated or inconsistent dimension constants.
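The (provider, model) to dimensions mapping can be sketched as follows. The object name, method name, and map entries are illustrative assumptions, not the registry's real API; only the keyed-by-pair lookup shape comes from the description above.

```scala
// Illustrative single authoritative mapping, keyed by (provider, model),
// so callers never hard-code dimension constants.
object DimensionRegistrySketch {
  private val dims: Map[(String, String), Int] = Map(
    ("openai", "text-embedding-3-small") -> 1536,
    ("voyage", "voyage-3-large")         -> 1024
  )

  // None signals an unknown pairing rather than guessing a dimension.
  def dimensionsFor(provider: String, model: String): Option[Int] =
    dims.get((provider, model))
}

println(DimensionRegistrySketch.dimensionsFor("openai", "text-embedding-3-small")) // Some(1536)
```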
Attributes
- Supertypes: class Object, trait Matchable, class Any
- Self type
Configuration for a locally-running Ollama instance.
Ollama requires no API key — authentication is handled at the network level by controlling access to the Ollama endpoint. Prefer OllamaConfig.fromValues over the primary constructor.
Value parameters
- baseUrl: Ollama server URL, e.g. "http://localhost:11434".
- contextWindow: Model's total token capacity (prompt + completion combined).
- model: Model identifier as registered in Ollama, e.g. "llama3".
- reserveCompletion: Tokens held back from prompt history for the completion.
Attributes
- Companion: object
- Supertypes: trait Serializable, trait Product, trait Equals, trait ProviderConfig, class Object, trait Matchable, class Any
Attributes
- Companion: class
- Supertypes: trait Product, trait Mirror, class Object, trait Matchable, class Any
- Self type: OllamaConfig.type
Configuration for the OpenAI API and providers that implement the OpenAI-compatible REST interface.
baseUrl governs which backend is contacted: "https://api.openai.com/v1" reaches OpenAI directly, while a URL containing "openrouter.ai" causes org.llm4s.llmconnect.LLMConnect to route to OpenRouter. Azure OpenAI uses AzureConfig, not this class.
Prefer OpenAIConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion from the model name automatically.
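The routing rule above reduces to a substring check on baseUrl. The sketch below is a simplification of the described behaviour, not LLMConnect's actual code; the function name and return labels are invented for illustration.

```scala
// Simplified routing decision: OpenRouter when the base URL mentions
// openrouter.ai, plain OpenAI otherwise. (Azure uses AzureConfig instead.)
def routeFor(baseUrl: String): String =
  if (baseUrl.contains("openrouter.ai")) "openrouter" else "openai"

println(routeFor("https://api.openai.com/v1"))    // openai
println(routeFor("https://openrouter.ai/api/v1")) // openrouter
```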
Value parameters
- apiKey: OpenAI API key; redacted in toString.
- baseUrl: API base URL; determines provider routing in org.llm4s.llmconnect.LLMConnect.
- contextWindow: Model's total token capacity (prompt + completion combined).
- model: Model identifier, e.g. "gpt-4o".
- organization: Optional OpenAI organisation ID.
- reserveCompletion: Tokens held back from prompt history for the completion.
Attributes
- Companion: object
- Supertypes: trait Serializable, trait Product, trait Equals, trait ProviderConfig, class Object, trait Matchable, class Any
Attributes
- Companion: class
- Supertypes: trait Product, trait Mirror, class Object, trait Matchable, class Any
- Self type: OpenAIConfig.type
Connection settings for an OpenTelemetry collector.
Spans are exported via OTLP/gRPC to endpoint. Set OTEL_EXPORTER_OTLP_ENDPOINT in the environment (picked up by org.llm4s.config.Llm4sConfig) to override the default local collector address.
Value parameters
- endpoint: OTLP/gRPC collector address; defaults to "http://localhost:4317".
- headers: Additional HTTP headers sent with each OTLP export request (e.g. authentication tokens for hosted collectors).
- serviceName: Logical service name attached to every span as service.name; defaults to "llm4s-agent".
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
Identifies a specific LLM provider, model, and connection details.
Each subtype carries the credentials, endpoint URL, and context-window metadata needed to construct an org.llm4s.llmconnect.LLMClient via org.llm4s.llmconnect.LLMConnect. Instances are normally obtained from org.llm4s.config.Llm4sConfig.provider, which reads standard environment variables (LLM_MODEL, OPENAI_API_KEY, etc.).
Prefer each subtype's fromValues factory over its primary constructor: fromValues resolves contextWindow and reserveCompletion automatically from the model name, so you only need to supply credentials and endpoint.
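The interplay of contextWindow and reserveCompletion can be shown with a small self-contained sketch. The trait and field names below are local stand-ins, not the llm4s ProviderConfig; only the budget arithmetic is taken from the parameter descriptions.

```scala
// Local stand-in illustrating the budget arithmetic implied by
// contextWindow (total capacity) and reserveCompletion (tokens held back).
trait ProviderConfigSketch {
  def contextWindow: Int      // total token capacity (prompt + completion)
  def reserveCompletion: Int  // tokens held back for the completion
  // Tokens left for prompt history once the completion reserve is set aside.
  def promptBudget: Int = contextWindow - reserveCompletion
}

final case class SketchConfig(contextWindow: Int, reserveCompletion: Int)
    extends ProviderConfigSketch

println(SketchConfig(contextWindow = 128000, reserveCompletion = 4096).promptBudget) // 123904
```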
Attributes
- Supertypes: class Object, trait Matchable, class Any
- Known subtypes: class AnthropicConfig, class AzureConfig, class CohereConfig, class DeepSeekConfig, class GeminiConfig, class MistralConfig, class OllamaConfig, class OpenAIConfig, class ZaiConfig
Combined tracing configuration used by org.llm4s.trace.Tracing.
mode selects the active backend; the other fields supply backend-specific connection details. Only the sub-config matching the active mode is used — e.g. when mode = TracingMode.Langfuse, openTelemetry is ignored.
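Mode-based backend selection can be sketched with a pattern match. The types below are local stand-ins for TracingMode and its cases, not llm4s code; they only illustrate that exactly one branch (and hence one sub-config) is active per mode.

```scala
// Stand-in for the four tracing modes named in the docs.
sealed trait TracingModeSketch
case object LangfuseMode extends TracingModeSketch
case object OtelMode     extends TracingModeSketch
case object ConsoleMode  extends TracingModeSketch
case object NoOpMode     extends TracingModeSketch

// Only the branch matching the active mode runs; sub-configs for the
// other backends are carried along but ignored.
def activeBackend(mode: TracingModeSketch): String = mode match {
  case LangfuseMode => "langfuse"      // langfuse sub-config consulted
  case OtelMode     => "opentelemetry" // openTelemetry sub-config consulted
  case ConsoleMode  => "console"
  case NoOpMode     => "noop"
}

println(activeBackend(LangfuseMode)) // langfuse
```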
Value parameters
- langfuse: Langfuse connection details; only used when mode = TracingMode.Langfuse.
- mode: Selects the tracing backend (Langfuse, OpenTelemetry, Console, or NoOp).
- openTelemetry: OpenTelemetry collector details; only used when mode = TracingMode.OpenTelemetry.
Attributes
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any
Configuration for the Z.ai GLM API.
Prefer ZaiConfig.fromValues over the primary constructor; it resolves contextWindow and reserveCompletion automatically from the model name.
Value parameters
- apiKey: Z.ai API key; redacted in toString.
- baseUrl: API base URL; defaults to ZaiConfig.DEFAULT_BASE_URL.
- contextWindow: Model's total token capacity (prompt + completion combined).
- model: Model identifier, e.g. "GLM-4.7".
- reserveCompletion: Tokens held back from prompt history for the completion.
Attributes
- Companion: object
- Supertypes: trait Serializable, trait Product, trait Equals, trait ProviderConfig, class Object, trait Matchable, class Any