org.llm4s.agent.guardrails.rag.ContextRelevanceGuardrail
See the ContextRelevanceGuardrail companion object
class ContextRelevanceGuardrail(val llmClient: LLMClient, val threshold: Double, val minRelevantRatio: Double, val onFail: GuardrailAction) extends RAGGuardrail
LLM-based guardrail to validate that retrieved chunks are relevant to the query.
ContextRelevanceGuardrail uses an LLM to evaluate whether the chunks retrieved from a vector store are actually relevant to the user's original query. This is critical for RAG quality: retrieving irrelevant chunks leads to poor answers.
Evaluation process:
- Each chunk is evaluated for relevance to the query
- Relevance is scored from 0.0 (completely irrelevant) to 1.0 (highly relevant)
- Overall score is computed as average chunk relevance
- Context passes if enough chunks are relevant (see the sketch after this list)
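The pass criterion below is a minimal sketch of the logic just described, assuming per-chunk relevance scores have already been obtained from the LLM judge. The function names and signatures are illustrative, not part of the library API.

  // Hypothetical sketch of the pass criterion; not the library's implementation.
  // chunkScores: per-chunk relevance scores in [0.0, 1.0] from the LLM judge.
  def overallScore(chunkScores: Seq[Double]): Double =
    if (chunkScores.isEmpty) 0.0 else chunkScores.sum / chunkScores.size

  def contextPasses(
      chunkScores: Seq[Double],
      threshold: Double,       // per-chunk relevance cutoff
      minRelevantRatio: Double // required fraction of relevant chunks
  ): Boolean =
    if (chunkScores.isEmpty) false
    else {
      val relevantRatio = chunkScores.count(_ >= threshold).toDouble / chunkScores.size
      relevantRatio >= minRelevantRatio
    }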
Use cases:
- Detect retrieval failures before generating responses
- Filter out irrelevant chunks before sending to LLM
- Measure and monitor retrieval quality
Example usage:
  // Create the guardrail via the companion object's apply method
  val guardrail = ContextRelevanceGuardrail(llmClient, threshold = 0.6)

  val context = RAGContext(
    query = "What are the symptoms of diabetes?",
    retrievedChunks = Seq(
      "Diabetes symptoms include increased thirst...",
      "The history of the Roman Empire..." // Irrelevant
    )
  )

  // Validate that the retrieved context is relevant to the query;
  // `response` is the generated answer under validation
  guardrail.validateWithContext(response, context)
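With the default onFail action (Block), a response whose retrieved context fails the relevance check is blocked rather than passed along; the onFail parameter below controls this behavior.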
Value parameters
- llmClient: The LLM client used to evaluate chunk relevance
- threshold: Minimum relevance score for a chunk to be considered relevant (default: 0.5)
- minRelevantRatio: Minimum ratio of relevant chunks required for the context to pass (default: 0.5)
- onFail: Action to take when relevance is insufficient (default: Block); all four parameters are combined in the sketch below
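A hedged sketch of constructing the guardrail with all four parameters spelled out, assuming the companion object's apply mirrors the constructor shown above and that GuardrailAction exposes a Block case (as the documented onFail default suggests):

  val strictGuardrail = ContextRelevanceGuardrail(
    llmClient = llmClient,
    threshold = 0.7,               // chunks scoring below 0.7 count as irrelevant
    minRelevantRatio = 0.8,        // at least 80% of chunks must pass the threshold
    onFail = GuardrailAction.Block // assumed enum case, matching the documented default
  )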
Attributes
- Companion: object
- Supertypes:
  trait RAGGuardrail
  trait OutputGuardrail
  trait Guardrail[String]
  class Object
  trait Matchable
  class Any