Guardrails is a Python framework that helps build reliable AI applications by performing two key functions: it runs Input/Output Guards in your application that detect and mitigate specific types of risk, and it helps you generate structured data from LLMs.
Guardrails Hub is a collection of pre-built measures for specific types of risk, called "validators". Multiple validators can be combined into Input and Output Guards that intercept the inputs and outputs of LLMs. Visit Guardrails Hub for the full list of validators and their documentation.
pip install guardrails-ai
Download and configure the Guardrails Hub CLI.
pip install guardrails-ai
guardrails configure
Install a guardrail from Guardrails Hub.
guardrails hub install hub://guardrails/regex_match
Create a Guard from the installed guardrail.
from guardrails import Guard, OnFailAction
from guardrails.hub import RegexMatch

guard = Guard().use(
    RegexMatch, regex="\(?\d{3}\)?-? *\d{3}-? *-?\d{4}", on_fail=OnFailAction.EXCEPTION
)

guard.validate("123-456-7890")  # Guardrail passes

try:
    guard.validate("1234-789-0000")  # Guardrail fails
except Exception as e:
    print(e)
Output:
Validation failed for field with errors: Result must match \(?\d{3}\)?-? *\d{3}-? *-?\d{4}
Run multiple guardrails within a single Guard. First, install the necessary guardrails from Guardrails Hub.
guardrails hub install hub://guardrails/competitor_check
guardrails hub install hub://guardrails/toxic_language
Then, create a Guard from the installed guardrails.
from guardrails import Guard, OnFailAction
from guardrails.hub import CompetitorCheck, ToxicLanguage

guard = Guard().use_many(
    CompetitorCheck(["Apple", "Microsoft", "Google"], on_fail=OnFailAction.EXCEPTION),
    ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail=OnFailAction.EXCEPTION)
)

guard.validate(
    """An apple a day keeps a doctor away.
    This is good advice for keeping your health."""
)  # Both the guardrails pass

try:
    guard.validate(
        """Shut the hell up! Apple just released a new iPhone."""
    )  # Both the guardrails fail
except Exception as e:
    print(e)
Output:
Validation failed for field with errors: Found the following competitors: [['Apple']]. Please avoid naming those competitors next time, The following sentences in your response were found to be toxic:
- Shut the hell up!
Let's go through an example where we ask an LLM to generate fake pet names. To do this, we'll create a Pydantic BaseModel that represents the structure of the output we want.
from pydantic import BaseModel, Field

class Pet(BaseModel):
    pet_type: str = Field(description="Species of pet")
    name: str = Field(description="a unique pet name")
Now, create a Guard from the Pet class. The Guard can be used to call the LLM in such a way that the output is formatted to the Pet class. Under the hood, this is done in one of two ways: function calling, for LLMs that support it, or prompt optimization, where the schema of the expected output is appended to the prompt.
from guardrails import Guard
import openai

prompt = """
What kind of pet should I get and what should I name it?

${gr.complete_json_suffix_v2}
"""

guard = Guard.for_pydantic(output_class=Pet, prompt=prompt)

raw_output, validated_output, *rest = guard(
    llm_api=openai.completions.create,
    engine="gpt-3.5-turbo-instruct"
)

print(validated_output)
This prints:
{
    "pet_type": "dog",
    "name": "Buddy"
}
Guardrails can be set up as a standalone service served by Flask with guardrails start, allowing you to interact with it via a REST API. This approach simplifies development and deployment of Guardrails-powered applications.
pip install "guardrails-ai"
guardrails configure
guardrails create --validators=hub://guardrails/two_words --name=two-word-guard
guardrails start --config=./config.py
# with the guardrails client
import guardrails as gr
gr.settings.use_server = True
guard = gr.Guard(name='two-word-guard')
guard.validate('this is more than two words')
# or with the openai sdk
import os
import openai

openai.base_url = "http://localhost:8000/guards/two-word-guard/openai/v1/"
os.environ["OPENAI_API_KEY"] = "youropenaikey"
messages = [
{
"role": "user",
"content": "tell me about an apple with 3 words exactly",
},
]
completion = openai.chat.completions.create(
model="gpt-4o-mini",
messages=messages,
)
For production deployments, we recommend using Docker with Gunicorn as the WSGI server for improved performance and scalability.
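As a minimal sketch only (the module path, app factory, and flags below are assumptions rather than the documented entrypoint; check the deployment docs for the exact command), the Gunicorn invocation inside such a container might look like:
# Assumption: the guardrails API server exposes a Flask app factory; adjust to your setup
gunicorn --bind 0.0.0.0:8000 --timeout 90 --workers 2 "guardrails_api.app:create_app()"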
You can reach out to us on Discord or Twitter.
Yes, Guardrails can be used with proprietary and open-source LLMs. Check out this guide on how to use Guardrails with any LLM.
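As a minimal sketch of that pattern, reusing the Pet model and prompt from the structured-data example above (my_llm_api, its signature, and its return value are hypothetical placeholders; the linked guide describes the exact interface Guardrails expects from a custom LLM callable):
from guardrails import Guard

# Hypothetical stand-in for any proprietary or open-source model:
# Guardrails calls it with the prompt and expects the generated text back.
def my_llm_api(prompt: str, **kwargs) -> str:
    # Call your model of choice here and return its raw text output.
    return '{"pet_type": "dog", "name": "Buddy"}'

guard = Guard.for_pydantic(output_class=Pet, prompt=prompt)
raw_output, validated_output, *rest = guard(llm_api=my_llm_api)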
Yes, you can create your own validators and contribute them to Guardrails Hub. Check out this guide on how to create your own validators.
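As a minimal sketch of what a custom validator looks like (the import path and hook names follow the public validator template and may vary between versions; the validator name and logic here are hypothetical):
from typing import Any, Dict

from guardrails.validator_base import (
    FailResult,
    PassResult,
    ValidationResult,
    Validator,
    register_validator,
)

# Hypothetical validator that checks whether a value ends with a period.
@register_validator(name="my-org/ends-with-period", data_type="string")
class EndsWithPeriod(Validator):
    def _validate(self, value: Any, metadata: Dict) -> ValidationResult:
        if str(value).strip().endswith("."):
            return PassResult()
        return FailResult(error_message="Value must end with a period.")
Once registered, such a validator can be attached to a Guard with Guard().use(...), just like the hub validators shown above.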
Guardrails can be used with Python and JavaScript. Check out the docs on how to use Guardrails from JavaScript. We are working on adding support for other languages. If you would like to contribute to Guardrails, reach out to us on Discord or Twitter.
We welcome contributions to Guardrails!
Get started by checking out the GitHub issues and the contributing guide. Feel free to open an issue, or reach out if you would like to add to the project!