org.llm4s.agent.guardrails.rag.GroundingGuardrail
See the GroundingGuardrail companion object
class GroundingGuardrail(val llmClient: LLMClient, val threshold: Double, val onFail: GuardrailAction, val strictMode: Boolean) extends RAGGuardrail
LLM-based grounding guardrail for RAG validation.
GroundingGuardrail uses an LLM to evaluate whether a response is factually grounded in the retrieved context. This is critical for RAG applications to prevent hallucination and ensure answer quality.
Evaluation process:
- Each claim in the response is checked against the retrieved chunks
- Claims are classified as: supported, not supported, or contradicted
- An overall grounding score is computed
- Response passes if score >= threshold
Scoring:
- 1.0: All claims are fully supported by the context
- 0.5-0.9: Most claims supported, some unverifiable
- 0.0-0.4: Many claims not supported or contradicted
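The scoring and threshold logic above can be illustrated with a minimal, self-contained sketch. Note this is not the library implementation: the `ClaimStatus` type, `groundingScore`, and `passes` below are hypothetical names invented for illustration, and the sketch simply treats the score as the fraction of supported claims.

```scala
sealed trait ClaimStatus
case object Supported extends ClaimStatus
case object Unsupported extends ClaimStatus
case object Contradicted extends ClaimStatus

// Fraction of claims supported by the retrieved context (1.0 if no claims).
def groundingScore(claims: Seq[ClaimStatus]): Double =
  if (claims.isEmpty) 1.0
  else claims.count(_ == Supported).toDouble / claims.size

// strictMode: every claim must be supported; otherwise compare score to threshold.
def passes(claims: Seq[ClaimStatus], threshold: Double, strictMode: Boolean): Boolean =
  if (strictMode) claims.forall(_ == Supported)
  else groundingScore(claims) >= threshold

@main def demo(): Unit = {
  val claims = Seq(Supported, Supported, Unsupported, Supported)
  println(groundingScore(claims))                              // 0.75
  println(passes(claims, threshold = 0.7, strictMode = false)) // true
  println(passes(claims, threshold = 0.7, strictMode = true))  // false
}
```

In the example, three of four claims are supported, so the score (0.75) clears a 0.7 threshold, but the single unsupported claim fails the check under strict mode.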
Example usage:
  val guardrail = GroundingGuardrail(llmClient, threshold = 0.8)

  // Use in RAG pipeline
  val context = RAGContext(
    query = "What causes climate change?",
    retrievedChunks = Seq(
      "Greenhouse gases trap heat in the atmosphere...",
      "Human activities release CO2 and methane..."
    )
  )
  guardrail.validateWithContext(response, context)
Value parameters
- llmClient: The LLM client used to evaluate grounding
- onFail: Action to take when grounding fails (default: Block)
- strictMode: If true, ANY ungrounded claim fails; if false, the score threshold is used
- threshold: Minimum grounding score to pass (default: 0.7)
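A stricter configuration can be sketched from the constructor signature above. This is an assumption-laden example: `llmClient` is assumed to be an already-configured LLMClient, and `GuardrailAction.Block` is assumed from the documented default; check the library for the actual GuardrailAction values and import paths.

```scala
import org.llm4s.agent.guardrails.rag.GroundingGuardrail

// Strict configuration sketch: any ungrounded claim fails the check,
// regardless of the aggregate score.
val strictGuardrail = new GroundingGuardrail(
  llmClient  = llmClient,
  threshold  = 0.9,                  // only consulted when strictMode = false
  onFail     = GuardrailAction.Block,
  strictMode = true
)
```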
Attributes
- Companion: object
- Supertypes: trait RAGGuardrail, trait OutputGuardrail, trait Guardrail[String], class Object, trait Matchable, class Any