ProviderConfig

org.llm4s.llmconnect.config.ProviderConfig
sealed trait ProviderConfig

Identifies a specific LLM provider and model, together with the connection details needed to reach it.

Each subtype carries the credentials, endpoint URL, and context-window metadata needed to construct an org.llm4s.llmconnect.LLMClient via org.llm4s.llmconnect.LLMConnect. Instances are normally obtained from org.llm4s.config.Llm4sConfig.provider, which reads standard environment variables (LLM_MODEL, OPENAI_API_KEY, etc.).

Prefer each subtype's fromValues factory over its primary constructor: fromValues resolves contextWindow and reserveCompletion automatically from the model name, so you only need to supply credentials and endpoint.
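A minimal sketch of the fromValues pattern described above: the factory resolves context-window metadata from the model name, so the caller supplies only credentials and endpoint. The subtype name, model-to-window table, and fallback values below are illustrative assumptions, not the actual llm4s implementation.

```scala
sealed trait ProviderConfig {
  def model: String
  def contextWindow: Int
  def reserveCompletion: Int
}

// Hypothetical subtype carrying credentials and endpoint alongside
// the resolved window metadata.
final case class OpenAIConfig(
  model: String,
  apiKey: String,
  baseUrl: String,
  contextWindow: Int,
  reserveCompletion: Int
) extends ProviderConfig

object OpenAIConfig {
  // Resolve (contextWindow, reserveCompletion) from the model name;
  // unknown models fall back to conservative values (assumed figures,
  // for illustration only).
  private def windowFor(model: String): (Int, Int) = model match {
    case m if m.startsWith("gpt-4o") => (128000, 4096)
    case _                           => (8192, 1024)
  }

  // The fromValues-style factory: only credentials and endpoint are
  // supplied; window metadata is derived from the model name.
  def fromValues(model: String, apiKey: String, baseUrl: String): OpenAIConfig = {
    val (window, reserve) = windowFor(model)
    OpenAIConfig(model, apiKey, baseUrl, window, reserve)
  }
}
```

Calling the primary constructor instead would force the caller to look up and pass the window sizes by hand, which is why the doc recommends the factory.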

Attributes

Supertypes
class Object
trait Matchable
class Any
Known subtypes

Members list

Value members

Abstract methods

def contextWindow: Int

Maximum token capacity of the model across both prompt and completion combined.


def model: String

Model identifier forwarded verbatim to the provider API (e.g. "gpt-4o", "claude-sonnet-4-5-latest").


def reserveCompletion: Int

Tokens reserved for the model's completion response.

Context-compression logic caps the prompt history at contextWindow - reserveCompletion, ensuring the model always has at least this many tokens available to generate a reply.
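The prompt budget above is simple arithmetic; this sketch uses an assumed 128,000-token window with 4,096 tokens reserved, purely as example figures.

```scala
// Example figures (assumed, not tied to any particular model):
val contextWindow     = 128000
val reserveCompletion = 4096

// Maximum tokens the compressor allows the prompt history to occupy,
// leaving reserveCompletion tokens free for the reply.
val promptBudget = contextWindow - reserveCompletion  // 123904
```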
