RAGPipeline

org.llm4s.rag.benchmark.RAGPipeline
See the RAGPipeline companion class
object RAGPipeline

Attributes

Companion
class
Supertypes
class Object
trait Matchable
class Any
Self type
RAGPipeline.type

Members list

Value members

Concrete methods

def createEmbeddingClient(config: EmbeddingConfig, resolveEmbeddingProvider: String => Result[EmbeddingProviderConfig]): Result[EmbeddingClient]

Create an embedding client for a specific embedding config.

Reads API keys from configuration (environment or application.conf):

  • OpenAI: OPENAI_API_KEY
  • Voyage: VOYAGE_API_KEY
  • Ollama: No API key required

Value parameters

config

Embedding configuration

resolveEmbeddingProvider

Function resolving a provider name to its EmbeddingProviderConfig (API key, endpoint, etc.)
Attributes

Returns

Embedding client or error
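
Example (a minimal sketch: how EmbeddingConfig and EmbeddingProviderConfig are constructed is library-specific, the placeholder values below are assumptions, and Result is assumed to be an Either-style alias):

  import org.llm4s.rag.benchmark.RAGPipeline

  // Placeholders: build these from your own provider settings
  // (provider name, model, API key, endpoint, ...).
  val embeddingConfig: EmbeddingConfig = ???
  val resolveProvider: String => Result[EmbeddingProviderConfig] =
    providerName => ??? // look up the named provider's configuration

  // Assuming Result behaves like Either (Left = error, Right = value).
  RAGPipeline.createEmbeddingClient(embeddingConfig, resolveProvider) match {
    case Right(client) => println(s"embedding client ready: $client")
    case Left(error)   => println(s"failed to create client: $error") // e.g. missing OPENAI_API_KEY
  }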

def fromConfig(config: RAGExperimentConfig, llmClient: LLMClient, embeddingClient: EmbeddingClient, tracer: Option[Tracing]): Result[RAGPipeline]

Create a RAG pipeline from experiment configuration.

Value parameters

config

Experiment configuration

embeddingClient

Embedding client for vectorization

llmClient

LLM client for answer generation

tracer

Optional tracer for cost tracking

Attributes

Returns

Configured pipeline or error
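
Example (a sketch only: the RAGExperimentConfig, LLMClient and EmbeddingClient values are placeholders whose construction depends on your setup):

  import org.llm4s.rag.benchmark.RAGPipeline

  // Placeholders: build these from your own configuration and providers.
  val experimentConfig: RAGExperimentConfig = ???
  val llmClient: LLMClient = ???
  val embeddingClient: EmbeddingClient = ???

  // No tracer here; pass Some(tracer) to enable cost tracking.
  val pipelineResult: Result[RAGPipeline] =
    RAGPipeline.fromConfig(experimentConfig, llmClient, embeddingClient, tracer = None)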

def withStores(config: RAGExperimentConfig, llmClient: LLMClient, embeddingClient: EmbeddingClient, vectorStore: VectorStore, keywordIndex: KeywordIndex, tracer: Option[Tracing]): RAGPipeline

Create a RAG pipeline with custom stores.

Value parameters

config

Experiment configuration

embeddingClient

Embedding client

keywordIndex

Custom keyword index

llmClient

LLM client

tracer

Optional tracer for cost tracking

vectorStore

Custom vector store

Attributes

Returns

Configured pipeline
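
Example (a sketch of wiring in custom stores: the VectorStore and KeywordIndex values are placeholders, and the config and clients are reused from the fromConfig sketch above; note that withStores returns the pipeline directly rather than a Result):

  import org.llm4s.rag.benchmark.RAGPipeline

  // Placeholders: supply your own store implementations, for example an
  // in-memory store for tests or an external index in production.
  val vectorStore: VectorStore = ???
  val keywordIndex: KeywordIndex = ???

  val pipeline: RAGPipeline =
    RAGPipeline.withStores(
      experimentConfig,  // RAGExperimentConfig, as in the fromConfig sketch
      llmClient,
      embeddingClient,
      vectorStore,
      keywordIndex,
      tracer = None      // or Some(tracer) for cost tracking
    )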