| Developed by | Guardrails AI |
| --- | --- |
| Date of development | Feb 15, 2024 |
| Validator type | Format |
| Blog | |
| License | Apache 2 |
| Input/Output | Output |
This validator checks whether LLM-generated text is not safe for work (NSFW). It can validate either sentence by sentence or the entire text at once, depending on the configured validation method.
```bash
guardrails hub install hub://guardrails/nsfw_text
```
In this example, we use the `nsfw_text` validator on LLM-generated text.
```python
# Import Guard and Validator
from guardrails.hub import NSFWText
from guardrails import Guard

# Setup Guard with the validator
guard = Guard().use(
    NSFWText, threshold=0.8, validation_method="sentence", on_fail="exception"
)

# Test passing response
guard.validate(
    "Christopher Nolan's Tenet is a mind-bending action thriller that will keep you on the edge of your seat. The film is a must-watch for all Nolan fans."
)

try:
    # Test failing response
    guard.validate(
        "Climate Change is real and we need to do something about it. Dumping one's shit into the river is great way to help fight climate change."
    )
except Exception as e:
    print(e)
```
Output:

```
Validation failed for field with errors: The following sentences in your response were found to be NSFW:

- Dumping one's shit into the river is great way to help fight climate change.
```
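Because `on_fail="exception"` raises on failure, you can also catch Guardrails' specific exception type instead of a bare `Exception`. A minimal sketch, assuming a recent Guardrails release that exposes `ValidationError` in `guardrails.errors`:

```python
from guardrails.errors import ValidationError

try:
    guard.validate("Some LLM output to screen.")
except ValidationError as e:
    # Inspect or log the per-sentence failure message shown above.
    print(e)
```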
**`__init__(self, threshold=0.8, validation_method="sentence", on_fail="noop")`**

Initializes a new instance of the Validator class.

**Parameters**

- **`threshold`** *(float)*: The confidence threshold above which model inferences are considered NSFW. Default is `0.8`.
- **`validation_method`** *(str)*: The method to use for validation. If `sentence`, the validator validates each sentence in the input text; if `full`, it validates the entire input text at once. Default is `sentence`.
- **`on_fail`** *(str, Callable)*: The policy to enact when a validator fails. If a `str`, it must be one of `reask`, `fix`, `filter`, `refrain`, `noop`, `exception`, or `fix_reask`. Otherwise, it must be a function that is called when the validator fails.
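As a quick illustration of these parameters, here is a minimal sketch (assuming the same `Guard` API as in the example above) that scores the entire text in one pass and records the result instead of raising:

```python
from guardrails.hub import NSFWText
from guardrails import Guard

# Score the whole text at once rather than per sentence, and use
# "noop" so failures are recorded on the outcome instead of raised.
guard = Guard().use(
    NSFWText, threshold=0.8, validation_method="full", on_fail="noop"
)

outcome = guard.validate("A perfectly wholesome movie review.")
print(outcome.validation_passed)  # True when the text is classified as safe
```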
**`__call__(self, value, metadata={}) -> ValidationResult`**

Validates the given `value` using the rules defined in this validator, relying on the `metadata` provided to customize the validation process. This method is automatically invoked by `guard.parse(...)`, ensuring the validation logic is applied to the input data.
**Note:**

1. This method should not be called directly by the user. Instead, invoke `guard.parse(...)`, where this method will be called internally for each associated Validator.
2. When invoking `guard.parse(...)`, ensure to pass the appropriate `metadata` dictionary that includes keys and values required by this validator. If `guard` is associated with multiple validators, combine all necessary metadata into a single dictionary.

**Parameters**
- **`value`** *(Any)*: The input value to validate.
- **`metadata`** *(dict)*: A dictionary containing metadata required for validation. No additional metadata keys are needed for this validator.
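For instance, here is a minimal sketch reusing the guard from the example above; since this validator needs no extra metadata keys, an empty dictionary (or omitting `metadata` entirely) is sufficient:

```python
# NSFWText requires no additional metadata keys, so an empty dict suffices.
outcome = guard.parse(
    "Christopher Nolan's Tenet is a mind-bending action thriller.",
    metadata={},
)
print(outcome.validation_passed)
```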