Migration Guide
MessageRole Enum Changes (v0.2.0)
Breaking Change
The MessageRole type has been converted from string-based constants to a proper enum type for better type safety.
Before (v0.1.x)
import org.llm4s.llmconnect.model.Message

val message = Message(role = "assistant", content = "Hello")

message.role match {
  case "assistant" => // handle assistant
  case "user"      => // handle user
  case _           => // handle other
}
After (v0.2.0)
import org.llm4s.llmconnect.model.{AssistantMessage, Message, MessageRole}

val message = AssistantMessage(content = "Hello")
// or
val message = Message(role = MessageRole.Assistant, content = "Hello")

message.role match {
  case MessageRole.Assistant => // handle assistant
  case MessageRole.User      => // handle user
  case MessageRole.System    => // handle system
  case MessageRole.Tool      => // handle tool
}
Migration Steps
- Update imports: Add MessageRole to your imports

import org.llm4s.llmconnect.model.MessageRole
- Replace string comparisons: Update pattern matches and comparisons
// Before
if (message.role == "assistant") { ... }

// After
if (message.role == MessageRole.Assistant) { ... }
- Update message creation: Use the typed constructors
// Before
Message(role = "user", content = "Hello")

// After
UserMessage(content = "Hello")
// or
Message(role = MessageRole.User, content = "Hello")
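The payoff of the enum conversion is compiler-checked exhaustiveness: with strings, a misspelled role silently falls into the catch-all, while with an enum the compiler warns about unhandled roles. A self-contained sketch of the pattern (the types below are simplified stand-ins, not the actual llm4s definitions):

```scala
// Simplified stand-ins for illustration, not the actual llm4s types.
sealed trait MessageRole
object MessageRole {
  case object Assistant extends MessageRole
  case object User      extends MessageRole
  case object System    extends MessageRole
  case object Tool      extends MessageRole
}

final case class Message(role: MessageRole, content: String)

def describe(m: Message): String = m.role match {
  case MessageRole.Assistant => "assistant"
  case MessageRole.User      => "user"
  case MessageRole.System    => "system"
  case MessageRole.Tool      => "tool"
  // No catch-all needed: the compiler warns if a role is left unhandled.
}

println(describe(Message(MessageRole.Assistant, "Hello"))) // prints "assistant"
```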
Error Hierarchy Changes (v0.2.0)
New Error Categorization
Errors are now categorized using traits for better type safety and recovery strategies.
Before (v0.1.x)
error match {
  case e: LLMError if e.isRecoverable => // retry logic
  case e: LLMError => // handle non-recoverable
}
After (v0.2.0)
error match {
  case e: RecoverableError => // retry logic
  case e: NonRecoverableError => // handle non-recoverable
}
Error Recovery Pattern
import org.llm4s.error._

def handleError(error: LLMError): Unit = error match {
  case _: RateLimitError => // wait and retry
  case _: TimeoutError => // retry with backoff
  case _: ServiceError with RecoverableError => // retry
  case _: AuthenticationError => // refresh token or fail
  case _: ValidationError => // fix input and retry
  case _ => // non-recoverable, fail
}
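One practical benefit of trait-based categorization is that a generic retry helper can decide recoverability from the type alone. A runnable sketch with a simplified stand-in error hierarchy (not the actual llm4s types):

```scala
// Simplified stand-in hierarchy for illustration, not the actual llm4s types.
sealed trait LLMError { def message: String }
trait RecoverableError    extends LLMError
trait NonRecoverableError extends LLMError

final case class RateLimitError(message: String)      extends RecoverableError
final case class AuthenticationError(message: String) extends NonRecoverableError

// Retries only recoverable failures, up to maxAttempts total attempts.
def withRetry[A](maxAttempts: Int)(op: () => Either[LLMError, A]): Either[LLMError, A] =
  op() match {
    case Left(_: RecoverableError) if maxAttempts > 1 =>
      withRetry(maxAttempts - 1)(op)
    case other => other // success or non-recoverable: stop immediately
  }

// Fails twice with a recoverable error, then succeeds on the third attempt.
var calls = 0
val result = withRetry(3) { () =>
  calls += 1
  if (calls < 3) Left(RateLimitError("slow down")) else Right("ok")
}
```

A non-recoverable error such as AuthenticationError short-circuits on the first attempt, since only RecoverableError triggers another try.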
Migration Steps
- Replace isRecoverable checks: Use pattern matching on traits

// Before
if (error.isRecoverable) { ... }

// After
error match {
  case _: RecoverableError => { ... }
  case _ => { ... }
}
- Update error handling: Use the new trait-based categorization
// Before
case e: ServiceError if e.isRecoverable =>

// After
case e: ServiceError with RecoverableError =>
- Use smart constructors: Create errors using the companion object methods
// Before
new RateLimitError(429, "Rate limit exceeded", Some(60.seconds))

// After
RateLimitError(429, "Rate limit exceeded", Some(60.seconds))
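The smart-constructor pattern behind this change can be sketched as follows; the class below is a hypothetical simplified type, not the actual llm4s error class:

```scala
import scala.concurrent.duration._

// Hypothetical simplified type for illustration, not the actual llm4s class.
final class RateLimitError(
  val status: Int,
  val message: String,
  val retryAfter: Option[FiniteDuration]
)

object RateLimitError {
  // The companion's apply validates input and keeps construction logic
  // in one place, so call sites drop `new`.
  def apply(status: Int, message: String, retryAfter: Option[FiniteDuration]): RateLimitError = {
    require(status == 429, s"rate limit errors use HTTP status 429, got $status")
    new RateLimitError(status, message, retryAfter)
  }
}

val err = RateLimitError(429, "Rate limit exceeded", Some(60.seconds))
```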
Configuration Changes (v0.2.0)
ConfigReader replaces EnvLoader
The EnvLoader has been replaced with a more flexible ConfigReader system.
Before (v0.1.x)
import org.llm4s.config.EnvLoader
val apiKey = EnvLoader.get("OPENAI_API_KEY")
val model = EnvLoader.getOrElse("LLM_MODEL", "gpt-4")
After (v0.2.0)
import org.llm4s.config.ConfigReader
import org.llm4s.config.ConfigReader.LLMConfig
import org.llm4s.llmconnect.LLMConnect
// Result-first config and client acquisition
val client = for {
reader <- LLMConfig() // Result[ConfigReader]
client <- LLMConnect.getClient(reader) // Result[LLMClient]
} yield client
// Access an individual key (Result)
val apiKey: org.llm4s.types.Result[String] =
  LLMConfig().flatMap(_.require("OPENAI_API_KEY"))
// Or create a custom config (pure, no env)
val customConfig: ConfigReader = ConfigReader(Map(
"OPENAI_API_KEY" -> "sk-...",
"LLM_MODEL" -> "openai/gpt-4o"
))
Migration Steps
- Replace EnvLoader imports: Update to use ConfigReader
// Before
import org.llm4s.config.EnvLoader

// After
import org.llm4s.config.ConfigReader
import org.llm4s.config.ConfigReader.LLMConfig
- Update configuration access: Use ConfigReader methods
// Before
EnvLoader.get("KEY")

// After
LLMConfig().flatMap(_.require("KEY"))
- Pass config to constructors: Many classes now take an implicit ConfigReader
// Before
val client = LLM.client()

// After (Result-first)
val client = for {
  reader <- LLMConfig()
  client <- org.llm4s.llmconnect.LLMConnect.getClient(reader)
} yield client

// or, equivalently
val client = LLMConfig().flatMap(reader => LLMConnect.getClient(reader))
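The idea behind the custom, map-backed configuration can be sketched in a self-contained way; the class below is a simplified stand-in for the llm4s ConfigReader, with `require` returning Either so missing keys surface as values rather than exceptions:

```scala
// Simplified stand-in for illustration, not the actual llm4s ConfigReader.
final case class ConfigError(message: String)

final class ConfigReader(values: Map[String, String]) {
  def get(key: String): Option[String] = values.get(key)

  // Missing keys become a Left value you can flatMap over, not a thrown error.
  def require(key: String): Either[ConfigError, String] =
    values.get(key).toRight(ConfigError(s"missing config key: $key"))

  def getOrElse(key: String, default: String): String =
    values.getOrElse(key, default)
}

val config = new ConfigReader(Map("OPENAI_API_KEY" -> "sk-test"))

val apiKey  = config.require("OPENAI_API_KEY")      // Right("sk-test")
val model   = config.getOrElse("LLM_MODEL", "openai/gpt-4o")
val missing = config.require("ANTHROPIC_API_KEY")   // Left(ConfigError(...))
```

Because lookups compose through Either, a pipeline that needs several keys can chain `require` calls with flatMap and fail with the first missing key.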