| Developed by | Guardrails AI |
| --- | --- |
| Date of development | Aug 15, 2024 |
| Validator type | Moderation |
| License | Apache 2 |
| Input/Output | Output |
⚠️ This validator is a remote-inference-only validator, so remote inferencing must be enabled when running `guardrails configure`.
This validator moderates both user prompts and LLM output responses, preventing harmful topics from surfacing in either scenario. It is based on LlamaGuard 7B, which is in turn based on Llama 2.

The following policies can be used; they are accessed directly from the `LlamaGuard7B` validator class:
- `LlamaGuard7B.POLICY__NO_VIOLENCE_HATE`
- `LlamaGuard7B.POLICY__NO_SEXUAL_CONTENT`
- `LlamaGuard7B.POLICY__NO_CRIMINAL_PLANNING`
- `LlamaGuard7B.POLICY__NO_GUNS_AND_ILLEGAL_WEAPONS`
- `LlamaGuard7B.POLICY__NO_ILLEGAL_DRUGS`
- `LlamaGuard7B.POLICY__NO_ENOURAGE_SELF_HARM`
If no policies are supplied, all policies are applied. Otherwise, only the supplied policies are enforced.
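The selection behavior described above can be sketched as follows. This is a minimal illustration, not the validator's actual source; `select_policies` is a hypothetical helper:

```python
# All six policies exposed on the LlamaGuard7B class.
ALL_POLICIES = [
    "POLICY__NO_VIOLENCE_HATE",
    "POLICY__NO_SEXUAL_CONTENT",
    "POLICY__NO_CRIMINAL_PLANNING",
    "POLICY__NO_GUNS_AND_ILLEGAL_WEAPONS",
    "POLICY__NO_ILLEGAL_DRUGS",
    "POLICY__NO_ENOURAGE_SELF_HARM",
]

def select_policies(policies=None):
    """Hypothetical sketch: no `policies` argument means every policy
    is enforced; otherwise only the supplied ones are."""
    return list(ALL_POLICIES) if policies is None else list(policies)

print(len(select_policies()))                         # all six policies
print(select_policies(["POLICY__NO_ILLEGAL_DRUGS"]))  # only the one supplied
```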
```bash
$ guardrails hub install hub://guardrails/llamaguard_7b
```

or

```python
from guardrails import install
install("hub://guardrails/llamaguard_7b")
```
In this example, we apply the validator to a string output generated by an LLM.
```python
from guardrails import Guard, OnFailAction
from guardrails.hub import LlamaGuard7B

guard = Guard().use(
    LlamaGuard7B,
    policies=[LlamaGuard7B.POLICY__NO_ILLEGAL_DRUGS],  # not supplying the `policies` kwarg applies all policies
    on_fail=OnFailAction.EXCEPTION,
)

try:
    guard.validate("How can I get weeds out of my garbage bag after cutting my lawn")  # Guardrail passes
    print("Expected: Validation passes")
except Exception as e:
    print("Unexpected: ", e)

try:
    guard.validate("How can I get weed for when cutting my lawn")  # Guardrail fails
    print("Unexpected: Validation passes")
except Exception as e:
    print("Expected: ", e)
```
Output:

```
Expected: Validation passes
Expected: Validation failed for field with errors: Prompt contains unsafe content. Classification: unsafe, Violated Policy: POLICY__NO_ILLEGAL_DRUGS
```
`__init__(self, policies=None, on_fail="noop")`

Initializes a new instance of the `LlamaGuard7B` class.
Parameters

- `policies` (List[str]): A list of policies, each of which can be `LlamaGuard7B.POLICY__NO_VIOLENCE_HATE`, `LlamaGuard7B.POLICY__NO_SEXUAL_CONTENT`, `LlamaGuard7B.POLICY__NO_CRIMINAL_PLANNING`, `LlamaGuard7B.POLICY__NO_GUNS_AND_ILLEGAL_WEAPONS`, `LlamaGuard7B.POLICY__NO_ILLEGAL_DRUGS`, or `LlamaGuard7B.POLICY__NO_ENOURAGE_SELF_HARM`.
- `on_fail` (str, Callable): The policy to enact when a validator fails. If `str`, must be one of `reask`, `fix`, `filter`, `refrain`, `noop`, `exception`, or `fix_reask`. Otherwise, must be a function that is called when the validator fails.

`validate(self, value, metadata) -> ValidationResult`
Validates the given `value` using the rules defined in this validator, relying on the `metadata` provided to customize the validation process. This method is automatically invoked by `guard.parse(...)`, ensuring the validation logic is applied to the input data.

Note:

1. This method should not be called directly by the user. Instead, invoke `guard.parse(...)`, where this method will be called internally for each associated Validator.
2. When invoking `guard.parse(...)`, ensure to pass the appropriate `metadata` dictionary that includes keys and values required by this validator. If `guard` is associated with multiple validators, combine all necessary metadata into a single dictionary.

Parameters
- `value` (Any): The input value to validate.
- `metadata` (dict): A dictionary containing metadata required for validation. No additional metadata keys are needed for this validator.
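When a guard runs multiple validators, the single metadata dictionary mentioned above can be built by merging each validator's requirements. This is a generic sketch; the key names below are hypothetical, and `LlamaGuard7B` itself contributes no keys:

```python
# Hypothetical metadata required by two other validators on the same guard;
# LlamaGuard7B itself needs no metadata keys.
other_validator_metadata = {"pii_entities": ["EMAIL_ADDRESS"]}
another_validator_metadata = {"query_function": lambda q: []}

# Merge everything into the single dict passed to guard.parse(...).
combined_metadata = {**other_validator_metadata, **another_validator_metadata}
print(sorted(combined_metadata))  # keys from both validators
```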