Next Steps

You’ve completed the getting started guide! Here’s where to go next.

Table of contents

  1. πŸŽ‰ Congratulations!
  2. Learning Paths
    1. πŸ€– Path 1: Build Agents
    2. πŸ› οΈ Path 2: Tool Integration
    3. πŸ’¬ Path 3: Conversational AI
    4. πŸ” Path 4: RAG & Knowledge
    5. πŸ“Š Path 5: Production Systems
  3. Quick Reference: Key Features
    1. Agents
    2. Tool Calling
    3. Multi-Turn Conversations
    4. Context Management
    5. Streaming
    6. Observability
    7. Embeddings
    8. MCP Integration
  4. Example Gallery
    1. Basic Examples (9)
    2. Agent Examples (6)
    3. Tool Examples (5)
    4. Context Management Examples (8)
    5. More Examples
  5. Common Recipes
    1. Recipe 1: Simple Q&A Bot
    2. Recipe 2: Agent with Custom Tools
    3. Recipe 3: Streaming Chat
    4. Recipe 4: Multi-Turn with Pruning
  6. Troubleshooting
    1. Common Issues
  7. Community & Support
    1. Get Help
    2. Stay Updated
    3. Contribute
  8. Recommended Learning Order
    1. Week 1: Fundamentals
    2. Week 2: Agents & Tools
    3. Week 3: Advanced Patterns
    4. Week 4: Production
  9. Quick Links
    1. Documentation
    2. Examples
    3. Reference
  10. What to Build?
  11. Ready to Build?

πŸŽ‰ Congratulations!

You’ve successfully:

βœ… Installed LLM4S
βœ… Written your first LLM program
βœ… Configured providers and API keys
βœ… Understood Result-based error handling

Now let’s explore what you can build with LLM4S!
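As a quick refresher on the Result-based error handling you just learned, here is a minimal, self-contained sketch using Scala's standard `Either` (llm4s's `Result` composes the same way in for-comprehensions; `loadApiKey` and `complete` are illustrative stand-ins, not library APIs):

```scala
// Illustrative stand-ins: plain Either[String, A] models success/failure
// the way a Result type does, so this sketch runs without llm4s.
def loadApiKey(env: Map[String, String]): Either[String, String] =
  env.get("OPENAI_API_KEY").toRight("OPENAI_API_KEY is not set")

def complete(key: String, prompt: String): Either[String, String] =
  if (key.startsWith("sk-")) Right(s"Answer to: $prompt")
  else Left("API key has wrong prefix")

// Failures short-circuit: the first Left stops the chain.
val answer: Either[String, String] = for {
  key      <- loadApiKey(Map("OPENAI_API_KEY" -> "sk-test"))
  response <- complete(key, "What is LLM4S?")
} yield response

println(answer.fold(err => s"Error: $err", identity))
```

The same shape appears in every recipe below: chain fallible steps in a for-comprehension, then handle success and failure once at the end.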


Learning Paths

Choose your path based on what you want to build:

πŸ€– Path 1: Build Agents

Best for: Interactive applications, chatbots, assistants

What you’ll learn:

  • Agent framework basics
  • Multi-turn conversations
  • Tool calling and integration
  • Conversation state management

Start here:

  1. Agent Framework Guide
  2. Single-Step Agent Example
  3. Multi-Turn Conversations
  4. Tool Calling Guide

Example project ideas:

  • Customer support chatbot
  • Code review assistant
  • Research assistant with web search
  • Interactive game master

πŸ› οΈ Path 2: Tool Integration

Best for: LLMs that interact with external systems

What you’ll learn:

  • Defining custom tools
  • Tool parameter schemas
  • Model Context Protocol (MCP)
  • Tool error handling

Start here:

  1. Tool Calling Guide
  2. Weather Tool Example
  3. MCP Integration
  4. Multi-Tool Example

Example project ideas:

  • Database query assistant
  • API integration agent
  • File system navigator
  • Task automation system

πŸ’¬ Path 3: Conversational AI

Best for: Chat applications, dialogue systems

What you’ll learn:

  • Context window management
  • Conversation persistence
  • History pruning strategies
  • Streaming responses

Start here:

  1. Multi-Turn Conversations
  2. Context Management
  3. Streaming Guide
  4. Long Conversation Example

Example project ideas:

  • Slack bot
  • Discord integration
  • Customer service chat
  • Educational tutor

πŸ” Path 4: RAG & Knowledge

Best for: Question answering, document search, knowledge bases

What you’ll learn:

  • Vector embeddings
  • Semantic search
  • Document processing
  • Retrieval-augmented generation

Start here:

  1. Embeddings Guide
  2. RAG Patterns
  3. Embedding Example
  4. Vector Search

Example project ideas:

  • Documentation Q&A system
  • PDF analyzer
  • Knowledge base search
  • Code search engine

πŸ“Š Path 5: Production Systems

Best for: Deploying LLM apps to production

What you’ll learn:

  • Error handling patterns
  • Observability and tracing
  • Performance optimization
  • Security best practices

Start here:

  1. Production Readiness
  2. Observability Guide
  3. Error Handling
  4. Security Guide

Example project ideas:

  • Scalable API service
  • Multi-tenant SaaS application
  • Enterprise integration
  • Monitoring dashboard

Quick Reference: Key Features

Agents

Build sophisticated multi-turn agents with automatic tool calling.

val agent = new Agent(client)
val state = agent.run("Your query", tools)

Learn more β†’


Tool Calling

Give LLMs access to external functions and APIs.

val tool = ToolFunction(
  name = "search",
  description = "Search the web",
  function = search _
)

Learn more β†’


Multi-Turn Conversations

Functional conversation management without mutation.

val state2 = agent.continueConversation(state1, "Next question")

Learn more β†’


Context Management

Automatically manage token windows and prune history.

val config = ContextWindowConfig(
  maxMessages = Some(20),
  pruningStrategy = PruningStrategy.OldestFirst
)

Learn more β†’


Streaming

Get real-time token-by-token responses.

val stream = client.completeStreaming(messages, None)
stream.foreach(chunk => print(chunk.content))

Learn more β†’


Observability

Trace LLM calls with Langfuse integration.

# Automatic tracing is enabled when this environment variable is set
TRACING_MODE=langfuse

Learn more β†’


Embeddings

Create and search vector embeddings.

val embeddings = embeddingsClient.embed(documents)
val results = search(query, embeddings)

Learn more β†’


MCP Integration

Connect to external Model Context Protocol servers.

val mcpTools = MCPClient.loadTools("mcp-server-name")

Learn more β†’


Example Gallery

Browse 69 working examples organized by category:

Basic Examples (9)

View all basic examples β†’

Agent Examples (6)

View all agent examples β†’

Tool Examples (5)

View all tool examples β†’

Context Management Examples (8)

View all context examples β†’

More Examples

Browse all examples β†’


Common Recipes

Recipe 1: Simple Q&A Bot

import org.llm4s.llmconnect.LLMConnect
import org.llm4s.llmconnect.model._

def askQuestion(question: String): String = {
  val result = for {
    client <- LLMConnect.create()
    response <- client.complete(
      List(
        SystemMessage("You are a helpful Q&A assistant."),
        UserMessage(question)
      ),
      None
    )
  } yield response.content

  result.getOrElse("Sorry, I couldn't process that question.")
}

Recipe 2: Agent with Custom Tools

import org.llm4s.agent.Agent
import org.llm4s.llmconnect.LLMConnect
import org.llm4s.toolapi.{ ToolFunction, ToolRegistry }

def getCurrentTime(): String =
  java.time.LocalDateTime.now().toString

val timeTool = ToolFunction(
  name = "get_time",
  description = "Get current date and time",
  function = getCurrentTime _
)

val result = for {
  client <- LLMConnect.create()
  tools = new ToolRegistry(Seq(timeTool))
  agent = new Agent(client)
  state <- agent.run("What time is it?", tools)
} yield state.finalResponse

Recipe 3: Streaming Chat

import org.llm4s.llmconnect.LLMConnect
import org.llm4s.llmconnect.model.UserMessage

def streamChat(message: String): Unit = {
  val result = for {
    client <- LLMConnect.create()
    stream <- client.completeStreaming(
      List(UserMessage(message)),
      None
    )
  } yield {
    stream.foreach(chunk => print(chunk.content))
    println()
  }

  result.left.foreach(error => println(s"Error: $error"))
}

Recipe 4: Multi-Turn with Pruning

import org.llm4s.agent.{ Agent, ContextWindowConfig, PruningStrategy }

val config = ContextWindowConfig(
  maxMessages = Some(20),
  preserveSystemMessage = true,
  pruningStrategy = PruningStrategy.OldestFirst
)

// Assumes `agent` and `tools` as set up in Recipe 2
val result = for {
  // Turn 1
  state1 <- agent.run("First question", tools)
  // Turn 2: history is pruned to the configured window before the call
  state2 <- agent.continueConversation(
    state1,
    "Follow-up question",
    contextWindowConfig = Some(config)
  )
} yield state2.finalResponse

Troubleshooting

Common Issues

Problem: API key errors

  • Check environment variables are set: echo $OPENAI_API_KEY
  • Verify .env file is sourced: source .env
  • Check key starts with correct prefix (sk- for OpenAI, sk-ant- for Anthropic)

Problem: Model not found

  • Verify LLM_MODEL format: provider/model-name
  • Check provider supports that model
  • Try a different model
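For reference, a working configuration follows the provider/model-name pattern; the model names below are illustrative examples, so check your provider's current catalog:

```shell
# .env — LLM_MODEL uses provider/model-name format
export LLM_MODEL=openai/gpt-4o        # or e.g. anthropic/claude-3-5-sonnet-latest
export OPENAI_API_KEY=sk-...          # key for the chosen provider
```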

Problem: Slow responses

  • Use streaming for real-time feedback
  • Consider using a faster model (gpt-3.5-turbo, claude-haiku)
  • Check your internet connection

Problem: Token limit errors

  • Implement context window pruning
  • Use shorter system prompts
  • Summarize conversation history

Full troubleshooting guide β†’


Community & Support

Get Help

Stay Updated

Contribute

  • Starter Kit: Use llm4s.g8 to scaffold projects
  • Share Examples: Post your projects in Discord
  • Contribute: See the contributing guide

Recommended Learning Order

Week 1: Fundamentals

  1. βœ… Complete Getting Started (you are here!)
  2. Read Basic Usage Guide
  3. Try Basic Examples
  4. Experiment with different providers

Week 2: Agents & Tools

  1. Read Agent Framework
  2. Build a simple agent with one tool
  3. Try Tool Examples
  4. Add multiple tools

Week 3: Advanced Patterns

  1. Implement Multi-Turn Conversations
  2. Add Context Management
  3. Set up Observability
  4. Try Long Conversation Example

Week 4: Production

  1. Read Production Guide
  2. Implement error handling
  3. Add monitoring and tracing
  4. Deploy your first production agent

Quick Links

Documentation

Examples

Reference


What to Build?

Need inspiration? Here are some project ideas:

Beginner Projects:

  • Simple Q&A bot
  • Code explainer
  • Translation service
  • Writing assistant

Intermediate Projects:

  • Multi-tool research agent
  • Database query interface
  • API integration bot
  • Document summarizer

Advanced Projects:

  • Multi-agent system
  • RAG-powered knowledge base
  • Production chatbot service
  • Custom tool ecosystem

Ready to Build?

Pick your learning path and start building:

πŸ€– Build Agents

Agent Framework β†’

πŸ› οΈ Add Tools

Tool Calling β†’

πŸ’¬ Chat Apps

Multi-Turn β†’

πŸ” RAG Systems

Embeddings β†’

Happy building with LLM4S! πŸš€

Questions? Join our Discord