org.llm4s.rag.benchmark.RAGPipeline
See the RAGPipeline companion object
A configurable RAG pipeline for benchmark experiments.
Wraps all RAG components (chunker, embeddings, vector store, keyword index) and provides a unified interface for indexing documents and answering queries.
Value parameters
chunker
Document chunker based on config strategy
config
Experiment configuration
embeddingClient
Embedding client for vectorization
embeddingModelConfig
Model config for embedding requests
hybridSearcher
Hybrid search with vector + keyword fusion
llmClient
LLM client for answer generation
tracer
Optional tracer for cost tracking
Attributes
Companion
object
Supertypes
class Object
trait Matchable
class Any
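The value parameters above suggest how a pipeline is wired together for an experiment. The sketch below is hypothetical: the concrete component types, their construction, and the use of a plain constructor (rather than a companion-object factory) are all assumptions, not confirmed by this page.

```scala
// Hypothetical wiring sketch based on the value parameters listed above.
// Every concrete component value here is an assumption; in practice each
// would be built from the experiment configuration.
val pipeline = new RAGPipeline(
  chunker = chunker,                     // chunking strategy from config
  config = experimentConfig,             // experiment configuration
  embeddingClient = embeddingClient,     // vectorizes chunks and queries
  embeddingModelConfig = embeddingModel, // model config for embedding requests
  hybridSearcher = hybridSearcher,       // vector + keyword fusion
  llmClient = llmClient,                 // generates the final answer
  tracer = None                          // optional tracer for cost tracking
)
```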
Members list
Generate an answer to a question using retrieved context.
Value parameters
question
The question to answer
topK
Number of chunks to retrieve (default from config)
Attributes
Returns
The generated answer and retrieved contexts
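A call to the answer method might look like the sketch below. The `Either`-based result and the `answer`/`contexts` field names are assumptions inferred from the "or error" returns elsewhere on this page, not confirmed API.

```scala
// Hypothetical usage sketch; result shape and field names are assumptions.
// topK falls back to the configured default when not supplied.
pipeline.answer("How does hybrid search fuse results?") match {
  case Right(result) =>
    println(s"Answer: ${result.answer}")
    result.contexts.foreach(ctx => println(s"Context: $ctx"))
  case Left(err) =>
    println(s"Answer generation failed: $err")
}
```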
Get the number of chunks indexed
Get the number of documents indexed
Index a single document.
Value parameters
content
Document text content
id
Document identifier
metadata
Optional metadata
Attributes
Returns
Number of chunks created or error
Index multiple documents.
Value parameters
documents
Sequence of (id, content, metadata) tuples
Attributes
Returns
Total chunks created or error
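Batch indexing with the documented `(id, content, metadata)` tuples might look like this sketch. The method name `indexDocuments`, the `Map`-based metadata type, and the `Either` result are assumptions based on this page's descriptions.

```scala
// Hypothetical sketch: indexing a small corpus before querying.
// Tuple shape (id, content, metadata) follows the doc; the metadata
// type and Either-based error handling are assumptions.
val docs = Seq(
  ("doc-1", "Chunking splits documents into passages.", Some(Map("source" -> "guide"))),
  ("doc-2", "Hybrid search fuses vector and keyword scores.", None)
)
pipeline.indexDocuments(docs) match {
  case Right(totalChunks) => println(s"Indexed $totalChunks chunks")
  case Left(err)          => println(s"Indexing failed: $err")
}
```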
Search for relevant chunks.
Value parameters
query
Search query
topK
Number of results (default from config)
Attributes
Returns
Search results
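Retrieval without answer generation might look like the sketch below. The method name `search` and the result fields (`score`, `content`) are assumptions; only the `query` and `topK` parameters are documented on this page.

```scala
// Hypothetical retrieval-only sketch; result field names are assumptions.
val hits = pipeline.search("keyword fusion", topK = 3)
hits.foreach { hit =>
  println(s"score=${hit.score} chunk=${hit.content.take(80)}")
}
```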