GraphQAConfig
org.llm4s.knowledgegraph.query.GraphQAConfig
case class GraphQAConfig(maxHops: Int, maxContextNodes: Int, maxContextEdges: Int, useRanking: Boolean, rankingAlgorithm: RankingAlgorithm, includeCitations: Boolean, temperature: Double)
Configuration for the graph-guided question answering pipeline.
Value parameters
- maxHops: Maximum number of hops for graph traversal during context gathering
- maxContextNodes: Maximum number of nodes to include in the LLM context
- maxContextEdges: Maximum number of edges to include in the LLM context
- useRanking: Whether to use graph ranking algorithms to prioritize entities
- rankingAlgorithm: Which ranking algorithm to use (when useRanking is true)
- includeCitations: Whether to track and return source citations
- temperature: LLM temperature for answer generation (lower = more deterministic)
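The parameters above can be combined into a fully specified configuration. A minimal sketch, assuming the `RankingAlgorithm.PageRank` value shown in the example below; the concrete values are illustrative, not recommended defaults:

```scala
// Illustrative values only; tune for your graph size and latency budget.
val config = GraphQAConfig(
  maxHops = 2,                                    // traverse up to 2 hops from seed entities
  maxContextNodes = 50,                           // cap nodes passed to the LLM context
  maxContextEdges = 100,                          // cap edges passed to the LLM context
  useRanking = true,                              // prioritize entities before truncation
  rankingAlgorithm = RankingAlgorithm.PageRank,   // used only because useRanking = true
  includeCitations = true,                        // return source citations with the answer
  temperature = 0.0                               // deterministic answer generation
)
```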
Attributes
- Example

  ```scala
  val config = GraphQAConfig(
    maxHops = 3,
    useRanking = true,
    rankingAlgorithm = RankingAlgorithm.PageRank,
    includeCitations = true
  )
  val pipeline = new GraphQAPipeline(llmClient, graphStore, config)
  ```
- Supertypes: trait Serializable, trait Product, trait Equals, class Object, trait Matchable, class Any