| Developed by | Jonathan Bennion |
| --- | --- |
| Date of development | Mar 27, 2024 |
| Validator type | Format |
| License | Apache 2 |
| Input/Output | Output |
Checks for logical fallacies in model output. Fallacies can arise, among other causes, from using RAG over similar documents or from conflicts with optimized datasets.

Intended to be used by developers to ensure that model output is logically sound. One caveat: the validator may interfere with use cases where strict logic is not needed (for example, creative writing).
Dependencies:
Dev Dependencies:
Foundation model access keys:
```bash
$ guardrails hub install hub://guardrails/logic_check
```
In this example, we apply the validator to a string output generated by an LLM.
```python
# Import Guard and Validator
from guardrails.hub import LogicCheck
from guardrails import Guard

# Setup Guard (use() is called on a Guard instance)
guard = Guard().use(
    LogicCheck()
)

guard.validate("Science can prove how the world works.")  # Validator passes
guard.validate("The sky always contains clouds.")  # Validator fails
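The pass/fail flow above can be sketched with a self-contained stub that runs without API keys. Both `StubGuard` and its keyword rule are hypothetical stand-ins for illustration only; the real validator performs an LLM-backed logic check, not keyword matching.

```python
# Hypothetical stand-in for the Guard/LogicCheck pair above, so the
# pass/fail flow can be run locally. The "fallacy" check here is a toy
# rule flagging absolute quantifiers, not the real LLM-backed check.
class StubOutcome:
    def __init__(self, validation_passed):
        self.validation_passed = validation_passed

class StubGuard:
    # Words that, for this toy example, signal an over-generalization.
    ABSOLUTES = ("always", "never", "all", "none")

    def validate(self, text):
        passed = not any(w in text.lower().split() for w in self.ABSOLUTES)
        return StubOutcome(passed)

guard = StubGuard()
print(guard.validate("Science can prove how the world works.").validation_passed)  # True
print(guard.validate("The sky always contains clouds.").validation_passed)         # False
```

As with the real validator, the returned outcome object exposes whether validation passed, so callers can branch on the result or raise their own error.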