| Developed by | harrison@arize.com |
|---|---|
| Date of development | Jul 12, 2024 |
| Validator type | Format |
| License | Apache 2 |
| Input/Output | Output |
This validator checks whether a reference text is relevant to an original question by prompting an LLM (called via LiteLLM) to evaluate the output.
Dependencies:
Foundation model access keys:
```bash
$ guardrails hub install hub://arize-ai/relevancy_evaluator
```
In this example, we apply the validator to a string output generated by an LLM.
```python
# Import Guard and Validator
from guardrails.hub import RelevancyEvaluator
from guardrails import Guard

# Setup Guard
guard = Guard().use(
    RelevancyEvaluator(llm_callable="gpt-3.5-turbo")
)

# Example values
value = {
    "original_prompt": "What is the capital of France?",
    "reference_text": "The capital of France is Paris."
}

guard.validate(value)  # Validator passes if the text is relevant
```
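When the text is judged irrelevant, the validator invokes its `on_fail` callable. Below is a hedged sketch of such a handler; the real handler receives a `FailResult` object from guardrails, which is simplified here to a plain string for illustration:

```python
# A sketch of a custom on_fail handler for RelevancyEvaluator.
# The second argument is simplified to a string; in guardrails it
# would be a FailResult carrying the failure details.
def replace_irrelevant(value, fail_reason):
    """Log the failure and substitute a placeholder for the bad output."""
    print(f"Relevancy check failed for {value!r}: {fail_reason}")
    return "[output removed: not relevant to the question]"

# It could then be wired in as:
#   guard = Guard().use(RelevancyEvaluator(on_fail=replace_irrelevant))
fixed = replace_irrelevant("Bananas are yellow.", "unrelated to the question")
print(fixed)  # [output removed: not relevant to the question]
```

The handler's return value is what downstream code sees in place of the failing output, so a fix-style handler like this one keeps the pipeline running instead of raising.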
`__init__(self, llm_callable="gpt-3.5-turbo", on_fail=None)`

Initializes a new instance of the RelevancyEvaluator class.

Parameters

- `llm_callable` (str): The name of the LiteLLM model to use for validation. Defaults to `"gpt-3.5-turbo"`.
- `on_fail` (Callable, optional): A function to be called when validation fails. Defaults to None.

`validate(self, value, metadata) -> ValidationResult`

Validates the given `value` using the rules defined in this validator, relying on the `metadata` provided to customize the validation process.
Note: Rather than calling this method directly, invoke `guard.validate(...)`, where this method will be called internally for each associated Validator. When invoking `guard.validate(...)`, ensure to pass the appropriate `metadata` dictionary that includes keys and values required by this validator.

Parameters

- `value` (Any): The input value to validate. It must contain 'original_prompt' and 'reference_text' keys.
- `metadata` (dict): A dictionary containing metadata required for validation. Keys and values must match the expectations of this validator.
| Key | Type | Description | Default |
|---|---|---|---|
| original_prompt | String | The original question or prompt. | N/A |
| reference_text | String | The reference text to evaluate for relevancy. | N/A |
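To make the contract above concrete, here is a self-contained sketch of the kind of check this validator performs. The prompt wording and the `fake_llm` stub are assumptions for illustration, not the library's actual implementation, which sends the prompt to the configured LiteLLM model:

```python
from typing import Callable

def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in for a LiteLLM completion call; always
    # answers "relevant" so the sketch runs without API keys.
    return "relevant"

def check_relevancy(value: dict, llm_callable: Callable[[str], str] = fake_llm) -> bool:
    # Enforce the documented contract: both keys must be present.
    for key in ("original_prompt", "reference_text"):
        if key not in value:
            raise ValueError(f"'{key}' is required in value")
    prompt = (
        f"Question: {value['original_prompt']}\n"
        f"Text: {value['reference_text']}\n"
        "Answer 'relevant' if the text addresses the question, else 'irrelevant'."
    )
    return llm_callable(prompt).strip().lower() == "relevant"

print(check_relevancy({
    "original_prompt": "What is the capital of France?",
    "reference_text": "The capital of France is Paris.",
}))  # True
```

Swapping `fake_llm` for a real completion call would reproduce the pass/fail behavior described above: a "relevant" verdict yields a passing result, anything else triggers `on_fail`.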