org.llm4s.llmconnect.provider
Members list
Type members
Classlikes
Attributes
- Companion: object
- Supertypes
Attributes
- Companion: class
- Supertypes: class Object, trait Matchable, class Any
- Self type: AnthropicClient.type
DeepSeek LLM client implementation using the OpenAI-compatible API.
Provides access to DeepSeek models including DeepSeek-Chat (V3) with 64K context and DeepSeek-Reasoner (R1) with 128K context for advanced reasoning tasks.
Uses the same request/response format as OpenAI, making it compatible with standard OpenAI tooling and client code patterns.
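To make the "OpenAI-compatible" claim concrete, a client like this sends a request body with the standard OpenAI chat-completions shape. The sketch below is hypothetical (the `ChatMessage` case class and `chatRequestBody` helper are not part of llm4s); only the JSON field names follow the public OpenAI wire format.

```scala
// Hypothetical sketch of the OpenAI-compatible chat request body a
// DeepSeek client would send; field names follow the OpenAI wire format.
final case class ChatMessage(role: String, content: String)

def chatRequestBody(model: String, messages: Seq[ChatMessage]): String = {
  def quote(s: String) = "\"" + s.replace("\"", "\\\"") + "\""
  val msgs = messages
    .map(m => s"""{"role":${quote(m.role)},"content":${quote(m.content)}}""")
    .mkString("[", ",", "]")
  s"""{"model":${quote(model)},"messages":$msgs}"""
}

val body = chatRequestBody(
  "deepseek-chat",
  Seq(ChatMessage("system", "You are helpful."), ChatMessage("user", "Hi"))
)
```

Because the shape is identical to OpenAI's, the same body works against either endpoint by swapping the base URL and API key.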
Value parameters
- config: DeepSeek configuration containing API key, model, base URL, and context settings
- metrics: MetricsCollector for recording request metrics
Attributes
- Companion: object
- Supertypes
Attributes
- Companion: class
- Supertypes: class Object, trait Matchable, class Any
- Self type: DeepSeekClient.type
Text embedding provider interface for generating vector representations.
Provides a unified interface for different embedding services (OpenAI, VoyageAI, Ollama). Each implementation handles provider-specific API calls and response formats.
Text content is the primary input; multimedia content (images, audio) should be processed through the UniversalEncoder façade which handles content extraction before embedding.
== Usage Example ==
{{{
val provider: EmbeddingProvider = OpenAIEmbeddingProvider.fromConfig(config)
val request = EmbeddingRequest(
  input = Seq("Hello world", "How are you?"),
  model = EmbeddingModelName("text-embedding-3-small")
)
val result: Result[EmbeddingResponse] = provider.embed(request)
}}}
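To show how an implementation conforms to the interface in the usage example, here is a self-contained stand-in with a stub provider in place of a real service. All definitions below are illustrative simplifications, not the actual llm4s declarations.

```scala
// Illustrative stand-in types mirroring the interface described above;
// not the actual llm4s definitions.
type Result[A] = Either[String, A]
final case class EmbeddingModelName(name: String)
final case class EmbeddingRequest(input: Seq[String], model: EmbeddingModelName)
final case class EmbeddingResponse(vectors: Seq[Vector[Double]])

trait EmbeddingProvider {
  def embed(request: EmbeddingRequest): Result[EmbeddingResponse]
}

// A stub provider returning one fixed-length vector per input text,
// standing in for a provider-specific API call.
object StubEmbeddingProvider extends EmbeddingProvider {
  def embed(request: EmbeddingRequest): Result[EmbeddingResponse] =
    if (request.input.isEmpty) Left("input must be non-empty")
    else Right(EmbeddingResponse(request.input.map(t => Vector(t.length.toDouble, 0.0))))
}
```

The `Result`-based return type lets callers handle provider failures without exceptions, which is the pattern the real providers follow.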
Attributes
- See also
  - OpenAIEmbeddingProvider for OpenAI text-embedding models
  - VoyageAIEmbeddingProvider for VoyageAI embedding models
  - OllamaEmbeddingProvider for local Ollama embedding models
- Supertypes: class Object, trait Matchable, class Any
LLMClient implementation for Google Gemini models.
Provides access to Gemini 2.0, 1.5 Pro, 1.5 Flash and other Gemini models via Google's Generative AI API.
== Supported Features ==
- Chat completions
- Streaming responses
- Tool/function calling
- Large context windows (up to 1M+ tokens)
== Configuration ==
{{{
export LLM_MODEL=gemini/gemini-2.0-flash
export GOOGLE_API_KEY=your-api-key
}}}
== API Format ==
Gemini uses a different message format than OpenAI:
- Messages have `role` (user/model) and `parts` (array of content)
- System instructions are sent separately
- Tool calls use `functionDeclarations` format
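The format mapping described above can be sketched as a pure conversion step. The `OpenAIMsg` and `GeminiContent` case classes and the `toGemini` helper are hypothetical illustrations, not the client's real internals; only the role/parts structure and the separate system instruction follow the Gemini API shape.

```scala
// Hypothetical sketch of the format mapping described above: OpenAI-style
// messages become Gemini contents with role (user/model) and parts, while
// system messages are pulled out as a separate system instruction.
final case class OpenAIMsg(role: String, content: String)        // "system" | "user" | "assistant"
final case class GeminiContent(role: String, parts: Seq[String]) // "user" | "model"

def toGemini(messages: Seq[OpenAIMsg]): (Option[String], Seq[GeminiContent]) = {
  val (system, rest) = messages.partition(_.role == "system")
  val contents = rest.map { m =>
    // Gemini has no "assistant" role; model replies use role "model".
    val role = if (m.role == "assistant") "model" else "user"
    GeminiContent(role, Seq(m.content))
  }
  (system.headOption.map(_.content), contents)
}
```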
Value parameters
- config: Gemini configuration with API key, model, and base URL
- metrics: metrics collector for observability (default: noop)
Attributes
- See also: org.llm4s.llmconnect.config.GeminiConfig for configuration options
- Companion: object
- Supertypes
Attributes
- Companion: class
- Supertypes: class Object, trait Matchable, class Any
- Self type: GeminiClient.type
Enumeration of supported LLM providers.
Defines the available language model service providers that can be used with llm4s. Each provider has specific configuration requirements and API characteristics.
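As an illustration of the idea, such an enumeration might be modelled as a sealed trait resolved from the `provider/model` prefix convention used by `LLM_MODEL` (e.g. `gemini/gemini-2.0-flash`). This sketch is not the actual llm4s definition.

```scala
// Illustrative sketch, not the actual llm4s definition: a sealed
// enumeration of providers, resolved from a "provider/model" string.
sealed trait LLMProvider
object LLMProvider {
  case object OpenAI    extends LLMProvider
  case object Anthropic extends LLMProvider
  case object Gemini    extends LLMProvider
  case object DeepSeek  extends LLMProvider

  // Resolve the provider from the prefix before the first '/'.
  def fromPrefix(model: String): Option[LLMProvider] =
    model.takeWhile(_ != '/') match {
      case "openai"    => Some(OpenAI)
      case "anthropic" => Some(Anthropic)
      case "gemini"    => Some(Gemini)
      case "deepseek"  => Some(DeepSeek)
      case _           => None
    }
}
```

A sealed hierarchy makes provider dispatch exhaustive: the compiler flags any match that forgets a provider.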
Attributes
- See also: org.llm4s.llmconnect.config.ProviderConfig for provider-specific configuration
- Companion: object
- Supertypes: class Object, trait Matchable, class Any
- Known subtypes
Companion object providing LLM provider instances and utilities.
Attributes
- Companion: trait
- Supertypes: trait Sum, trait Mirror, class Object, trait Matchable, class Any
- Self type: LLMProvider.type
Helper trait for recording metrics consistently across all provider clients.
Extracts the common pattern of timing requests, observing outcomes, recording tokens, and calculating costs.
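The common pattern the trait extracts can be sketched as a timing wrapper. The `MetricsCollector` trait and `withMetrics` helper below are simplified stand-ins, not the real llm4s API.

```scala
// Simplified stand-in for the metrics-recording pattern described above;
// not the real llm4s API.
trait MetricsCollector {
  def observe(provider: String, millis: Long, succeeded: Boolean): Unit
}

// Time the call, observe its outcome, hand both to the collector, and
// return the original result unchanged.
def withMetrics[A](provider: String, metrics: MetricsCollector)(
    call: => Either[String, A]
): Either[String, A] = {
  val start  = System.nanoTime()
  val result = call
  val millis = (System.nanoTime() - start) / 1000000
  metrics.observe(provider, millis, result.isRight)
  result
}
```

Factoring the wrapper out this way keeps each provider client free of timing boilerplate: the client supplies only the raw call.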
Attributes
- Supertypes: class Object, trait Matchable, class Any
- Known subtypes: class AnthropicClient, class DeepSeekClient, class GeminiClient, class OllamaClient, class OpenAIClient, class OpenRouterClient, class ZaiClient
Attributes
- Companion: object
- Supertypes
Attributes
- Companion: class
- Supertypes: class Object, trait Matchable, class Any
- Self type: OllamaClient.type
Attributes
- Supertypes: class Object, trait Matchable, class Any
- Self type
LLMClient implementation supporting both OpenAI and Azure OpenAI services.
Provides a unified interface for interacting with OpenAI's API and Azure's OpenAI service. Handles message conversion between llm4s format and OpenAI format, completion requests, streaming responses, and tool calling (function calling) capabilities.
Uses Azure's OpenAI client library internally, which supports both direct OpenAI and Azure-hosted OpenAI endpoints.
== Extended Thinking / Reasoning Support ==
For OpenAI o1/o3/o4 models with reasoning capabilities, use OpenRouterClient instead, which fully supports the reasoning_effort parameter. The Azure SDK used by this client does not yet expose the reasoning_effort API parameter.
For Anthropic Claude models with extended thinking, use AnthropicClient which has full support for the thinking parameter with budget_tokens.
Value parameters
- client: configured Azure OpenAI client instance
- config: provider configuration containing context window and reserve completion settings
- metrics: metrics collector for observability (default: noop)
- model: the model identifier (e.g., "gpt-4", "gpt-3.5-turbo")
Attributes
- Companion: object
- Supertypes
Factory methods for creating OpenAIClient instances.
Provides safe construction of OpenAI clients with error handling via Result type.
Attributes
- Companion: class
- Supertypes: class Object, trait Matchable, class Any
- Self type: OpenAIClient.type
OpenAI embedding provider implementation.
Provides text embeddings using OpenAI's embedding API (text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002). Supports batch embedding of multiple texts in a single request.
== Supported Models ==
- `text-embedding-3-small` - Efficient, lower cost (recommended)
- `text-embedding-3-large` - Higher quality, higher cost
- `text-embedding-ada-002` - Legacy model
== Token Usage ==
The response includes token usage information when available from the API.
Attributes
- See also
  - EmbeddingProvider for the provider interface
  - org.llm4s.llmconnect.config.EmbeddingProviderConfig for configuration
- Supertypes: class Object, trait Matchable, class Any
- Self type
Attributes
- Companion: object
- Supertypes
Attributes
- Companion: class
- Supertypes: class Object, trait Matchable, class Any
- Self type: OpenRouterClient.type
Attributes
- Supertypes: class Object, trait Matchable, class Any
- Self type
Attributes
- Companion: object
- Supertypes