MetricsRecording

org.llm4s.llmconnect.provider.MetricsRecording

Helper trait for recording metrics consistently across all provider clients.

Extracts the common pattern of timing requests, observing outcomes, recording tokens, and reading costs from completion results.

Attributes

Supertypes
class Object
trait Matchable
class Any

Members list

Value members

Abstract methods

protected def metrics: MetricsCollector

The org.llm4s.metrics.MetricsCollector that receives timing, token, and cost events.

Injected by each concrete provider client. Defaults to MetricsCollector.noop in all public constructors, so callers that do not need metrics do not pay an allocation cost.
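The injection pattern described above can be sketched as follows. The `MetricsCollector` shape and `ExampleClient` below are simplified stand-ins for illustration, not the actual llm4s definitions:

```scala
// Simplified stand-in for the llm4s collector; the real one also
// receives latency, outcome, and cost events.
trait MetricsCollector {
  def recordTokens(provider: String, model: String, prompt: Long, completion: Long): Unit
}

object MetricsCollector {
  // Shared no-op instance: the default for clients that do not need
  // metrics, so they allocate no per-call recording machinery.
  val noop: MetricsCollector = new MetricsCollector {
    def recordTokens(provider: String, model: String, prompt: Long, completion: Long): Unit = ()
  }
}

trait MetricsRecording {
  protected def metrics: MetricsCollector
}

// A concrete provider client (hypothetical) receives the collector
// through its constructor, defaulting to the no-op instance.
final class ExampleClient(protected val metrics: MetricsCollector = MetricsCollector.noop)
    extends MetricsRecording {
  def usesNoop: Boolean = metrics eq MetricsCollector.noop
}
```

Because the default is a shared singleton rather than a freshly built collector, constructing a client without metrics stays allocation-free.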


Concrete methods

protected def withMetrics[A](provider: String, model: String, operation: => Result[A], extractUsage: A => Option[TokenUsage], extractCost: A => Option[Double]): Result[A]

Executes operation and records metrics for the call.

Latency and outcome (success or classified error) are recorded for every call regardless of result, while token counts and cost are recorded only on success. A Left result emits an org.llm4s.metrics.Outcome.Error event whose kind is derived from the org.llm4s.error.LLMError subtype via ErrorKind.fromLLMError.
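The recording flow can be sketched as below. Here `Result` is simplified to an `Either` with string error kinds and a `record` callback stands in for the collector, so this illustrates the control flow rather than the library's implementation:

```scala
type Result[A] = Either[String, A]
final case class TokenUsage(prompt: Long, completion: Long)

def withMetricsSketch[A](
    provider: String,
    model: String,
    operation: => Result[A],
    extractUsage: A => Option[TokenUsage],
    extractCost: A => Option[Double]
)(record: String => Unit): Result[A] = {
  val start  = System.nanoTime()
  val result = operation // run the LLM call exactly once
  val millis = (System.nanoTime() - start) / 1e6
  // Latency is recorded for every call, success or failure.
  record(f"$provider/$model latency_ms=$millis%.1f")
  result match {
    case Right(a) =>
      record("outcome=success")
      // Tokens and cost are recorded only on success, and only when present.
      extractUsage(a).foreach(u => record(s"tokens=${u.prompt}/${u.completion}"))
      extractCost(a).foreach(c => record(s"cost_usd=$c"))
    case Left(errorKind) =>
      // In llm4s the kind is derived via ErrorKind.fromLLMError; here the
      // error value itself plays that role.
      record(s"outcome=error kind=$errorKind")
  }
  result // the operation's result, unchanged
}
```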

Value parameters

extractCost

Extracts the pre-computed cost (USD) from a successful result; return None to skip cost recording.

extractUsage

Extracts prompt/completion token counts from a successful result; return None to skip token recording.

model

Model identifier forwarded to the collector.

operation

The LLM call to time and observe.

provider

Provider label forwarded to the collector (e.g. "openai").

Attributes

Returns

The result of operation, unchanged.
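A call site might look like the sketch below. `CompletionResponse`, `doComplete`, and the inlined stand-in types are hypothetical; they only show how the extractor parameters are wired up:

```scala
type Result[A] = Either[String, A]
final case class TokenUsage(prompt: Long, completion: Long)
// Hypothetical response shape carrying usage and a pre-computed cost.
final case class CompletionResponse(text: String, usage: Option[TokenUsage], costUsd: Option[Double])

trait MetricsRecording {
  // Recording elided in this stub; only the signature matters here.
  protected def withMetrics[A](
      provider: String,
      model: String,
      operation: => Result[A],
      extractUsage: A => Option[TokenUsage],
      extractCost: A => Option[Double]
  ): Result[A] = operation
}

final class ExampleClient extends MetricsRecording {
  // Stand-in for the actual network call.
  private def doComplete(request: String): Result[CompletionResponse] =
    Right(CompletionResponse(s"echo: $request", Some(TokenUsage(12, 7)), Some(0.0004)))

  def complete(request: String): Result[CompletionResponse] =
    withMetrics(
      provider = "openai",
      model = "gpt-4o",
      operation = doComplete(request),
      extractUsage = (r: CompletionResponse) => r.usage,
      extractCost = (r: CompletionResponse) => r.costUsd
    )
}
```

Returning `None` from either extractor simply skips that recording step, so responses without usage or cost data need no special handling at the call site.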