org.llm4s.rag.evaluation.metrics.ContextPrecision
See the ContextPrecision companion object
class ContextPrecision(llmClient: LLMClient) extends RAGASMetric
Context Precision metric: measures whether relevant contexts are ranked at the top of the retrieved list.
Algorithm:
- For each retrieved context, determine if it's relevant to the question/ground_truth
- Calculate precision@k for each position where a relevant doc appears
- Score = Average Precision (AP) = sum of (precision@k * relevance@k) / total_relevant
The intuition: if your retrieval system ranks relevant documents at the top, you get a higher score. Documents ranked lower contribute less to the score.
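The Average Precision computation described by the algorithm above can be sketched in Scala. This helper is illustrative only: the real metric obtains a per-context relevance verdict from the LLM client, whereas here the relevance flags are supplied directly as booleans.

```scala
// Illustrative sketch of Average Precision over ranked contexts.
// relevances(i) = true if the context at rank i+1 was judged relevant.
object APSketch {
  def averagePrecision(relevances: Seq[Boolean]): Double = {
    val totalRelevant = relevances.count(identity)
    if (totalRelevant == 0) 0.0
    else {
      // For each rank k where a relevant context appears, take precision@k.
      val precisions = relevances.zipWithIndex.collect { case (true, idx) =>
        val k = idx + 1
        relevances.take(k).count(identity).toDouble / k
      }
      precisions.sum / totalRelevant
    }
  }
}

// Relevant contexts at ranks 1 and 3: AP = (1/1 + 2/3) / 2 ≈ 0.833
APSketch.averagePrecision(Seq(true, false, true))
// Relevant contexts packed at the top: AP = (1/1 + 2/2) / 2 = 1.0
APSketch.averagePrecision(Seq(true, true, false))
```

Note how the same set of relevant contexts scores higher when they appear earlier, which is exactly the ranking sensitivity the metric rewards.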
Value parameters
- llmClient: The LLM client used for relevance assessment
Attributes
- Example
  val metric = ContextPrecision(llmClient)
  val sample = EvalSample(
    question = "What is the capital of France?",
    answer = "Paris is the capital of France.",
    contexts = Seq(
      "Paris is the capital and largest city of France.", // relevant
      "France has beautiful countryside.",                // less relevant
      "Paris has the Eiffel Tower."                       // relevant
    ),
    groundTruth = Some("The capital of France is Paris.")
  )
  val result = metric.evaluate(sample)
  // High score if relevant contexts are at positions 1 and 2 vs scattered
- Companion: object ContextPrecision
- Supertypes: RAGASMetric