RAGASFactory

org.llm4s.rag.evaluation.RAGASFactory
object RAGASFactory

Factory for creating RAGAS evaluators and individual metrics.

Provides convenient methods to create evaluators from environment configuration or with specific settings.

Example

  // Create from environment configuration
  val evaluator = RAGASFactory.fromConfigs(providerCfg, embeddingCfg)

  // Create with specific metrics
  val basicEvaluator = RAGASFactory.withMetrics(
    llmClient, embeddingClient, embeddingConfig,
    Set("faithfulness", "answer_relevancy")
  )

  // Create individual metrics
  val faithfulness = RAGASFactory.faithfulness(llmClient)

Supertypes
  class Object
  trait Matchable
  class Any

Members list

Value members

Concrete methods

def answerRelevancy(llmClient: LLMClient, embeddingClient: EmbeddingClient, embeddingModelConfig: EmbeddingModelConfig, numGeneratedQuestions: Int): AnswerRelevancy

Create an Answer Relevancy metric.

Value parameters
  llmClient: LLM client for question generation
  embeddingClient: Embedding client for similarity calculations
  embeddingModelConfig: Configuration for the embedding model
  numGeneratedQuestions: Number of questions to generate

Returns
  A configured Answer Relevancy metric

def basic(llmClient: LLMClient, embeddingClient: EmbeddingClient, embeddingModelConfig: EmbeddingModelConfig): RAGASEvaluator

Create a basic evaluator with only Faithfulness and Answer Relevancy metrics. These metrics don't require ground truth, making them suitable for production evaluation.

Value parameters
  llmClient: LLM client for semantic evaluation
  embeddingClient: Embedding client for similarity calculations
  embeddingModelConfig: Configuration for the embedding model

Returns
  A basic evaluator without ground truth requirements
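As a quick sketch of the method above, the three client values are assumed to be constructed elsewhere in your application; only the factory call itself comes from this page.

```scala
// Sketch only: llmClient, embeddingClient and embeddingModelConfig
// are assumed to exist already.
val productionEvaluator: RAGASEvaluator =
  RAGASFactory.basic(llmClient, embeddingClient, embeddingModelConfig)
// Since neither metric needs ground truth, this evaluator can
// score live production traffic, not just labelled test sets.
```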

Create a basic evaluator from explicit configurations.

Create a Context Precision metric.

Value parameters
  llmClient: LLM client for relevance assessment

Returns
  A configured Context Precision metric

Create a Context Recall metric.

Value parameters
  llmClient: LLM client for fact extraction and attribution

Returns
  A configured Context Recall metric

def create(llmClient: LLMClient, embeddingClient: EmbeddingClient, embeddingModelConfig: EmbeddingModelConfig): RAGASEvaluator

Create an evaluator with all default metrics.

Value parameters
  llmClient: LLM client for semantic evaluation
  embeddingClient: Embedding client for similarity calculations
  embeddingModelConfig: Configuration for the embedding model

Returns
  A configured evaluator

def faithfulness(llmClient: LLMClient, batchSize: Int): Faithfulness

Create a Faithfulness metric.

Value parameters
  llmClient: LLM client for claim extraction and verification
  batchSize: Number of claims to verify per LLM call

Returns
  A configured Faithfulness metric
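A minimal sketch of creating this metric; the batch size of 5 is an illustrative value, not a documented default. Larger batches mean fewer LLM calls per sample at the cost of longer individual prompts.

```scala
// Sketch: verify extracted claims in batches of 5 per LLM call
// (5 is an assumed value chosen for illustration).
val faithfulnessMetric: Faithfulness =
  RAGASFactory.faithfulness(llmClient, batchSize = 5)
```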

def fromConfigs(providerCfg: ProviderConfig, embedding: (String, EmbeddingProviderConfig)): Result[RAGASEvaluator]

Create an evaluator with all default metrics from explicit configurations.
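Because `fromConfigs` returns a `Result[RAGASEvaluator]`, the failure case should be handled explicitly. The sketch below assumes `Result` supports an Either-style `fold`; check the llm4s `Result` API for the exact combinators.

```scala
// Sketch: handle both outcomes of the Result.
// The fold signature is an assumption about the Result type.
RAGASFactory.fromConfigs(providerCfg, embeddingCfg).fold(
  error     => println(s"Could not build evaluator: $error"),
  evaluator => println("Evaluator ready")
)
```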

def withMetrics(llmClient: LLMClient, embeddingClient: EmbeddingClient, embeddingModelConfig: EmbeddingModelConfig, metricNames: Set[String]): RAGASEvaluator

Create an evaluator with specific metrics only.

Value parameters
  llmClient: LLM client for semantic evaluation
  embeddingClient: Embedding client for similarity calculations
  embeddingModelConfig: Configuration for the embedding model
  metricNames: Names of metrics to enable (faithfulness, answer_relevancy, context_precision, context_recall)

Returns
  A configured evaluator with only the specified metrics
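For example, a retrieval-focused evaluator might enable only the two context metrics; this sketch assumes the client values already exist. Note that context_recall requires ground truth in the evaluation data.

```scala
// Sketch: enable only the retrieval-quality metrics.
val retrievalEvaluator: RAGASEvaluator = RAGASFactory.withMetrics(
  llmClient, embeddingClient, embeddingModelConfig,
  Set("context_precision", "context_recall")
)
```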

Concrete fields

val availableMetrics: Set[String]

Available metric names.

val metricsRequiringGroundTruth: Set[String]

Metrics that require ground truth.

val metricsWithoutGroundTruth: Set[String]

Metrics that work without ground truth.
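These fields can be used to validate or filter metric names before calling `withMetrics`. A sketch, assuming the client values exist; the `requested` set and its contents are illustrative.

```scala
// Sketch: guard against typos by intersecting with the published names,
// then keep only metrics that can run without ground truth.
val requested = Set("faithfulness", "answer_relevancy", "context_recall")
val validNames = requested.intersect(RAGASFactory.availableMetrics)
val noGroundTruthEvaluator = RAGASFactory.withMetrics(
  llmClient, embeddingClient, embeddingModelConfig,
  validNames.intersect(RAGASFactory.metricsWithoutGroundTruth)
)
```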