org.llm4s.agent.guardrails.builtin.ProfanityFilter
See the ProfanityFilter companion object
class ProfanityFilter(customBadWords: Set[String], caseSensitive: Boolean) extends InputGuardrail, OutputGuardrail
Filters profanity and inappropriate content.
This is a basic implementation using a word list. For production, consider integrating with external APIs like:
- OpenAI Moderation API
- Google Perspective API
- Custom ML models
Can be used for both input and output validation.
Value parameters
- customBadWords: Additional words to filter beyond the default list
- caseSensitive: Whether matching should be case-sensitive
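A hypothetical usage sketch based on the constructor signature above. Only the constructor is documented here; the `validate` call is an assumption, so check the `Guardrail[String]` trait for the actual method name and return type:

```scala
import org.llm4s.agent.guardrails.builtin.ProfanityFilter

// Extend the default word list and match case-insensitively.
val filter = new ProfanityFilter(
  customBadWords = Set("darn", "heck"),
  caseSensitive = false
)

// Because ProfanityFilter mixes in both InputGuardrail and
// OutputGuardrail, one instance can screen user input and model
// output alike. `validate` is an assumed method name.
val result = filter.validate("Well, heck.")
```

For production workloads, the word-list approach can be swapped for one of the external moderation services listed above while keeping the same guardrail interface.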
Attributes
- Companion: object
- Supertypes:
  - trait OutputGuardrail
  - trait InputGuardrail
  - trait Guardrail[String]
  - class Object
  - trait Matchable
  - class Any