org.llm4s.rag.evaluation.metrics.AnswerRelevancy
See the AnswerRelevancy companion object
class AnswerRelevancy(llmClient: LLMClient, embeddingClient: EmbeddingClient, modelConfig: EmbeddingModelConfig, numGeneratedQuestions: Int) extends RAGASMetric
Answer Relevancy metric: measures how well the answer addresses the question.
Algorithm:
- Generate N questions that the provided answer would address
- Compute the embedding of the original question
- Compute embeddings for the generated questions
- Calculate the cosine similarity between the original question's embedding and each generated question's embedding
- Score = average similarity across generated questions
The intuition: if the answer is relevant to the question, then questions generated from the answer should be semantically similar to the original question.
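The score itself reduces to a mean cosine similarity. The sketch below illustrates that computation in isolation, given the question embeddings; it is not the library's implementation, and the object name, method names, and the Seq[Double] embedding representation are assumptions for illustration.

object AnswerRelevancySketch {

  // Cosine similarity between two embedding vectors.
  def cosineSimilarity(a: Seq[Double], b: Seq[Double]): Double = {
    val dot   = a.zip(b).map { case (x, y) => x * y }.sum
    val normA = math.sqrt(a.map(x => x * x).sum)
    val normB = math.sqrt(b.map(x => x * x).sum)
    if (normA == 0.0 || normB == 0.0) 0.0 else dot / (normA * normB)
  }

  // Score = mean similarity between the original question's embedding and
  // the embeddings of the N questions generated from the answer.
  def score(original: Seq[Double], generated: Seq[Seq[Double]]): Double =
    if (generated.isEmpty) 0.0
    else generated.map(g => cosineSimilarity(original, g)).sum / generated.size
}

Averaging over several generated questions smooths out variance in the LLM's question generation, which is why numGeneratedQuestions trades evaluation cost for score stability.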
Value parameters
- embeddingClient: Client for computing embeddings
- llmClient: LLM client for generating questions from the answer
- modelConfig: Embedding model configuration
- numGeneratedQuestions: Number of questions to generate (default: 3)
Attributes
Example:

  val metric = AnswerRelevancy(llmClient, embeddingClient, modelConfig)
  val sample = EvalSample(
    question = "What is machine learning?",
    answer = "Machine learning is a subset of AI that enables systems to learn from data.",
    contexts = Seq("...") // contexts not used for this metric
  )
  val result = metric.evaluate(sample)
  // High score if generated questions are similar to "What is machine learning?"

Companion: object AnswerRelevancy