EmbeddingProvider

org.llm4s.llmconnect.provider.EmbeddingProvider

Text embedding provider interface for generating vector representations.

Provides a unified interface for different embedding services (OpenAI, VoyageAI, Ollama). Each implementation handles provider-specific API calls and response formats.

Text content is the primary input; multimedia content (images, audio) should be processed through the UniversalEncoder façade which handles content extraction before embedding.
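To make the provider abstraction above concrete, here is a minimal sketch of implementing the trait. The type definitions are simplified stand-ins for the library's actual types (the real `Result`, `EmbeddingRequest`, and `EmbeddingResponse` live in llm4s and may differ in detail), and `FakeEmbeddingProvider` is a hypothetical in-memory implementation, not part of the library:

```scala
// Simplified stand-ins for the llm4s types (illustrative only; the real
// definitions in org.llm4s may carry additional fields and error types).
type Result[A] = Either[String, A]

final case class EmbeddingModelName(value: String)
final case class EmbeddingRequest(input: Seq[String], model: EmbeddingModelName)
final case class EmbeddingResponse(embeddings: Seq[Vector[Double]])

trait EmbeddingProvider {
  def embed(request: EmbeddingRequest): Result[EmbeddingResponse]
}

// Hypothetical in-memory provider: returns one deterministic vector per
// input text, useful for tests that should not hit a real embedding API.
object FakeEmbeddingProvider extends EmbeddingProvider {
  def embed(request: EmbeddingRequest): Result[EmbeddingResponse] =
    if (request.input.isEmpty) Left("input must be non-empty")
    else Right(EmbeddingResponse(request.input.map(t => Vector(t.length.toDouble))))
}
```

A real implementation would perform the provider-specific HTTP call and response parsing inside `embed`, mapping transport or API errors into the `Left` channel.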

== Usage Example ==

val provider: EmbeddingProvider = OpenAIEmbeddingProvider.fromConfig(config)
val request = EmbeddingRequest(
  input = Seq("Hello world", "How are you?"),
 model = EmbeddingModelName("text-embedding-3-small")
)
val result: Result[EmbeddingResponse] = provider.embed(request)

See also

OpenAIEmbeddingProvider for OpenAI text-embedding models

VoyageAIEmbeddingProvider for VoyageAI embedding models

OllamaEmbeddingProvider for local Ollama embedding models

Supertypes: Object, Matchable, Any

== Abstract methods ==

def embed(request: EmbeddingRequest): Result[EmbeddingResponse]

Generates embeddings for the given text inputs.

Value parameters

request

embedding request containing input texts and model configuration

Returns

Right(EmbeddingResponse) containing one embedding vector per input text on success, or Left(error) describing the failure
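Because `embed` returns an `Either`-based result rather than throwing, callers typically pattern match on it. A sketch, again using simplified stand-ins for the library's types (here `Result` is assumed to alias `Either[String, A]`; the library's actual error type may be richer):

```scala
// Simplified stand-ins (illustrative only).
type Result[A] = Either[String, A]
final case class EmbeddingResponse(embeddings: Seq[Vector[Double]])

// Consume the result by matching on both channels: success carries the
// vectors, failure carries the error description.
def describe(result: Result[EmbeddingResponse]): String =
  result match {
    case Right(resp) => s"got ${resp.embeddings.size} vectors"
    case Left(err)   => s"embedding failed: $err"
  }
```

This keeps provider failures (network errors, invalid model names, quota limits) in the value channel, so they compose with `map`/`flatMap` like any other `Either`.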