Guardrails is a Python framework that helps build reliable AI applications by performing two key functions:
1. Guardrails runs Input/Output Guards in your application that detect, quantify, and mitigate the presence of specific types of risks.
2. Guardrails helps you generate structured data from LLMs.
Guardrails Hub is a collection of pre-built measures of specific types of risks (called 'validators'). Multiple validators can be combined into Input and Output Guards that intercept the inputs and outputs of LLMs. Visit Guardrails Hub to see the full list of validators and their documentation.
pip install guardrails-ai
Download and configure the Guardrails Hub CLI.
pip install guardrails-ai
guardrails configure
Install a guardrail from Guardrails Hub.
guardrails hub install hub://guardrails/regex_match
Create a Guard from the installed guardrail.
from guardrails import Guard, OnFailAction
from guardrails.hub import RegexMatch
guard = Guard().use(
    RegexMatch, regex=r"\(?\d{3}\)?-? *\d{3}-? *-?\d{4}", on_fail=OnFailAction.EXCEPTION
)
guard.validate("123-456-7890") # Guardrail passes
try:
    guard.validate("1234-789-0000")  # Guardrail fails
except Exception as e:
    print(e)
Output:
Validation failed for field with errors: Result must match \(?\d{3}\)?-? *\d{3}-? *-?\d{4}
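The pattern is a plain Python regex (with backslashes: `\(?\d{3}\)?-? *\d{3}-? *-?\d{4}`), so you can sanity-check it with the standard `re` module before wiring it into a Guard. A quick standalone check, independent of Guardrails (note this assumes full-match semantics, consistent with the pass/fail behavior shown above):

```python
import re

# Same phone-number pattern the Guard uses: optional area-code parens,
# optional dashes and spaces between the digit groups.
PHONE = re.compile(r"\(?\d{3}\)?-? *\d{3}-? *-?\d{4}")

print(bool(PHONE.fullmatch("123-456-7890")))   # True  -> the guardrail passes
print(bool(PHONE.fullmatch("1234-789-0000")))  # False -> the guardrail raises
```

This kind of quick check is useful for iterating on the regex itself before running it through validation.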
Run multiple guardrails within a Guard. First, install the necessary guardrails from Guardrails Hub.
guardrails hub install hub://guardrails/competitor_check
guardrails hub install hub://guardrails/toxic_language
Then, create a Guard from the installed guardrails.
from guardrails import Guard, OnFailAction
from guardrails.hub import CompetitorCheck, ToxicLanguage
guard = Guard().use_many(
    CompetitorCheck(["Apple", "Microsoft", "Google"], on_fail=OnFailAction.EXCEPTION),
    ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail=OnFailAction.EXCEPTION)
)
guard.validate(
    """An apple a day keeps a doctor away.
    This is good advice for keeping your health."""
)  # Both the guardrails pass
try:
    guard.validate(
        """Shut the hell up! Apple just released a new iPhone."""
    )  # Both the guardrails fail
except Exception as e:
    print(e)
Output:
Validation failed for field with errors: Found the following competitors: [['Apple']]. Please avoid naming those competitors next time, The following sentences in your response were found to be toxic:
- Shut the hell up!
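To build intuition for what a competitor check does, here is a deliberately naive, stdlib-only sketch. This is an illustration, not CompetitorCheck's actual implementation (the real validator is more sophisticated); note it is case-sensitive, which matches the example above, where the lowercase "apple" in the proverb passed:

```python
import re

def naive_competitor_check(text, competitors):
    """Return the competitor names found as whole words (case-sensitive)."""
    found = []
    for name in competitors:
        if re.search(rf"\b{re.escape(name)}\b", text):
            found.append(name)
    return found

print(naive_competitor_check("Apple just released a new iPhone.",
                             ["Apple", "Microsoft", "Google"]))  # ['Apple']
```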
Let's go through an example where we ask an LLM to generate fake pet names. To do this, we'll create a Pydantic BaseModel that represents the structure of the output we want.
from pydantic import BaseModel, Field
class Pet(BaseModel):
    pet_type: str = Field(description="Species of pet")
    name: str = Field(description="a unique pet name")
Now, create a Guard from the Pet class. The Guard can be used to call the LLM in a manner so that the output is formatted to the Pet class. Under the hood, this is done by one of two methods:
1. Function calling: For LLMs that support function calling, structured data is generated using the function call syntax.
2. Prompt optimization: For LLMs that don't support function calling, the schema of the expected output is added to the prompt so that the LLM can generate structured data.
from guardrails import Guard
import openai
prompt = """
What kind of pet should I get and what should I name it?
${gr.complete_json_suffix_v2}
"""
guard = Guard.for_pydantic(output_class=Pet, prompt=prompt)
raw_output, validated_output, *rest = guard(
    llm_api=openai.completions.create,
    engine="gpt-3.5-turbo-instruct"
)
print(validated_output)
This prints:
{
    "pet_type": "dog",
    "name": "Buddy"
}
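Conceptually, the Guard parses the LLM's raw JSON output and checks it against the Pet schema. Here is a stdlib-only sketch of that validation step (`check_pet` is a hypothetical helper for illustration, not Guardrails' actual implementation):

```python
import json

raw_llm_output = '{"pet_type": "dog", "name": "Buddy"}'

def check_pet(raw):
    """Parse raw LLM output and verify the two string fields Pet declares."""
    data = json.loads(raw)
    for field in ("pet_type", "name"):
        if not isinstance(data.get(field), str):
            raise ValueError(f"missing or non-string field: {field}")
    return data

print(check_pet(raw_llm_output))  # {'pet_type': 'dog', 'name': 'Buddy'}
```

When the check fails, the Guard can re-ask the LLM or raise, depending on the configured on-fail action.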
Guardrails can be set up as a standalone service served by Flask with guardrails start, allowing you to interact with it via a REST API. This approach simplifies development and deployment of Guardrails-powered applications.
pip install "guardrails-ai"
guardrails configure
guardrails create --validators=hub://guardrails/two_words --name=two-word-guard
guardrails start --config=./config.py
# with the guardrails client
import guardrails as gr
gr.settings.use_server = True
guard = gr.Guard(name='two-word-guard')
guard.validate('this is more than two words')
# or with the openai sdk
import os
import openai
openai.base_url = "http://localhost:8000/guards/two-word-guard/openai/v1/"
os.environ["OPENAI_API_KEY"] = "youropenaikey"
messages = [
    {
        "role": "user",
        "content": "tell me about an apple with 3 words exactly",
    },
]
completion = openai.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
)
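Because the server exposes an OpenAI-compatible API, any OpenAI client can talk to it. For clarity, the request the snippet above sends looks roughly like this (a sketch that only constructs the URL and payload, assuming the standard chat-completions route; nothing is actually sent):

```python
import json

base_url = "http://localhost:8000/guards/two-word-guard/openai/v1/"
endpoint = base_url + "chat/completions"  # standard OpenAI-compatible route

# The JSON body an OpenAI-style client would POST to the guard server.
payload = json.dumps({
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "user", "content": "tell me about an apple with 3 words exactly"},
    ],
})
print(endpoint)
```

The guard server proxies the request to the LLM and applies the configured validators to the response before returning it.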
For production deployments, we recommend using Docker with Gunicorn as the WSGI server for improved performance and scalability.
If you run into issues with Guardrails, you can reach out to us on Discord or Twitter.
Guardrails can be used with both proprietary and open-source LLMs. Check out this guide on how to use Guardrails with any LLM.
You can also create your own validators and contribute them to Guardrails Hub. Check out this guide on how to create your own validators.
Guardrails can be used with Python and JavaScript. Check out the docs on how to use Guardrails from JavaScript. We are working on adding support for other languages. If you would like to contribute to Guardrails, please reach out to us on Discord or Twitter.
We welcome contributions to Guardrails!
Get started by checking out the GitHub issues and the Contributing Guide. Feel free to open an issue, or reach out if you would like to add to the project!