org.llm4s.rag.evaluation.metrics.ContextRecall
See the ContextRecall companion object
class ContextRecall(llmClient: LLMClient) extends RAGASMetric
Context Recall metric: measures whether all relevant information was retrieved.
Algorithm:
- Extract key facts/sentences from the ground truth answer
- For each fact, check if it can be attributed to the retrieved contexts
- Score = Number of facts covered by contexts / Total facts in ground truth
The intuition: if all facts needed to answer the question correctly are present in the retrieved contexts, recall is 1.0. Missing facts lower the score.
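The final scoring step above reduces to a simple ratio. The sketch below illustrates it, assuming the fact extraction and attribution have already been performed by the LLM; the names `ContextRecallSketch` and `score` are illustrative, not part of the library's API.

```scala
// Minimal sketch of the Context Recall scoring step (illustrative only).
// Each Boolean marks whether one ground-truth fact was judged attributable
// to the retrieved contexts by the LLM.
object ContextRecallSketch {
  def score(attributions: Seq[Boolean]): Double =
    if (attributions.isEmpty) 0.0
    else attributions.count(identity).toDouble / attributions.size

  def main(args: Array[String]): Unit = {
    // 4 facts in the ground truth; 3 are covered by the contexts
    val attributions = Seq(true, true, true, false)
    println(score(attributions)) // prints 0.75
  }
}
```

With all facts covered the score is 1.0; each missing fact lowers it proportionally.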
Value parameters
- llmClient – The LLM client for fact extraction and attribution
Example:

  val metric = ContextRecall(llmClient)
  val sample = EvalSample(
    question = "What are the symptoms of diabetes?",
    answer = "...", // answer not used for this metric
    contexts = Seq(
      "Diabetes symptoms include excessive thirst and frequent urination.",
      "Type 2 diabetes may cause fatigue and blurred vision."
    ),
    groundTruth = Some("Symptoms of diabetes include increased thirst, frequent urination, fatigue, and blurred vision.")
  )
  val result = metric.evaluate(sample)
  // Score = facts covered / total facts from ground truth