org.llm4s.agent.guardrails.rag.SourceAttributionGuardrail
See the SourceAttributionGuardrail companion object
class SourceAttributionGuardrail(val llmClient: LLMClient, val requireAttributions: Boolean, val minAttributionScore: Double, val onFail: GuardrailAction) extends RAGGuardrail
LLM-based guardrail to validate that responses properly cite their sources.
SourceAttributionGuardrail ensures that RAG responses include proper citations to the source documents from which information was derived. This is important for transparency, verifiability, and trust.
Evaluation criteria:
- Does the response cite sources for factual claims?
- Are the citations accurate (pointing to the right chunks)?
- Are all major claims properly attributed?
Use cases:
- Ensure transparency in RAG responses
- Enable users to verify information
- Comply with requirements for attributing sources
- Detect when responses fail to cite available sources
Example usage:

  val guardrail = SourceAttributionGuardrail(llmClient)
  val context = RAGContext.withSources(
    query = "What causes climate change?",
    chunks = Seq("Human activities release greenhouse gases..."),
    sources = Seq("IPCC Report 2023.pdf")
  )

  // Response should cite sources
  val response = "According to the IPCC Report, human activities release greenhouse gases..."
  guardrail.validateWithContext(response, context)
Value parameters:
- llmClient: The LLM client used for evaluation
- minAttributionScore: Minimum attribution quality score (default: 0.5)
- onFail: Action to take when attribution is insufficient (default: Block)
- requireAttributions: Whether citations are required (default: true)
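The interaction between requireAttributions and minAttributionScore can be sketched as a simple gate: a response passes when attributions are not required, or when its attribution score meets the configured minimum. The names below (AttributionDecision, AttributionCheck, decide) are illustrative stand-ins, not the library's actual API:

```scala
// Hypothetical sketch of the pass/fail gate implied by the parameters above.
// The real guardrail obtains the score from an LLM evaluation; here it is
// supplied directly for illustration.
sealed trait AttributionDecision
case object Pass extends AttributionDecision
case object Fail extends AttributionDecision

object AttributionCheck {
  def decide(
      score: Double,
      requireAttributions: Boolean = true,
      minAttributionScore: Double = 0.5
  ): AttributionDecision =
    if (!requireAttributions || score >= minAttributionScore) Pass else Fail
}
```

With the defaults, a response scoring 0.8 passes and one scoring 0.3 fails; setting requireAttributions to false lets any response through regardless of score.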
Attributes:
- Companion: object SourceAttributionGuardrail
- Supertypes: trait RAGGuardrail, trait OutputGuardrail, trait Guardrail[String], class Object, trait Matchable, class Any