Guardrails is a Python framework that helps build reliable AI applications by performing two key functions: running Input and Output Guards in your application that detect and mitigate specific types of risks, and helping you generate structured data from LLMs.
Guardrails Hub is a collection of pre-built measures for specific types of risks (called "validators"). Multiple validators can be combined together into Input and Output Guards that intercept the inputs and outputs of LLMs. Visit Guardrails Hub to see the full list of validators and their documentation.
pip install guardrails-ai
Download and configure the Guardrails Hub CLI.
pip install guardrails-ai
guardrails configure
Install a guardrail from Guardrails Hub.
guardrails hub install hub://guardrails/regex_match
Create a Guard from the installed guardrail.
from guardrails import Guard, OnFailAction
from guardrails.hub import RegexMatch

guard = Guard().use(
    RegexMatch, regex="\(?\d{3}\)?-? *\d{3}-? *-?\d{4}", on_fail=OnFailAction.EXCEPTION
)

guard.validate("123-456-7890")  # Guardrail passes

try:
    guard.validate("1234-789-0000")  # Guardrail fails
except Exception as e:
    print(e)
Output:
Validation failed for field with errors: Result must match \(?\d{3}\)?-? *\d{3}-? *-?\d{4}
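If you don't want a failed validation to raise, a guard can instead report its result. Below is a minimal sketch, assuming the ValidationOutcome returned by guard.validate() exposes validation_passed and validated_output fields, and using OnFailAction.NOOP so nothing is raised or changed on failure:

from guardrails import Guard, OnFailAction
from guardrails.hub import RegexMatch

# Assumption: OnFailAction.NOOP records the failure without raising.
phone_guard = Guard().use(
    RegexMatch, regex="\(?\d{3}\)?-? *\d{3}-? *-?\d{4}", on_fail=OnFailAction.NOOP
)

outcome = phone_guard.validate("1234-789-0000")
print(outcome.validation_passed)  # False for this input
print(outcome.validated_output)   # the value that was validated, left unchanged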
Run multiple guardrails within a single Guard. First, install the necessary guardrails from Guardrails Hub.
guardrails hub install hub://guardrails/competitor_check
guardrails hub install hub://guardrails/toxic_language
Then, create a Guard from the installed guardrails.
from guardrails import Guard, OnFailAction
from guardrails.hub import CompetitorCheck, ToxicLanguage

guard = Guard().use_many(
    CompetitorCheck(["Apple", "Microsoft", "Google"], on_fail=OnFailAction.EXCEPTION),
    ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail=OnFailAction.EXCEPTION)
)

guard.validate(
    """An apple a day keeps a doctor away.
    This is good advice for keeping your health."""
)  # Both the guardrails pass

try:
    guard.validate(
        """Shut the hell up! Apple just released a new iPhone."""
    )  # Both the guardrails fail
except Exception as e:
    print(e)
Output:
Validation failed for field with errors: Found the following competitors: [['Apple']]. Please avoid naming those competitors next time, The following sentences in your response were found to be toxic:
- Shut the hell up!
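Validators can also attempt to repair the output instead of raising. As a rough sketch, assuming OnFailAction.FIX is available and that these two validators implement a fix that removes the offending sentences, the same pair could be configured like this:

from guardrails import Guard, OnFailAction
from guardrails.hub import CompetitorCheck, ToxicLanguage

# Assumption: with on_fail=FIX, each validator rewrites the value instead of raising.
fix_guard = Guard().use_many(
    CompetitorCheck(["Apple", "Microsoft", "Google"], on_fail=OnFailAction.FIX),
    ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail=OnFailAction.FIX),
)

outcome = fix_guard.validate("Shut the hell up! Apple just released a new iPhone.")
print(outcome.validated_output)  # the repaired text, with flagged sentences dropped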
Let's look at an example where we ask an LLM to generate fake pet names. To do this, we'll create a Pydantic BaseModel that represents the structure of the output we want.
from pydantic import BaseModel, Field

class Pet(BaseModel):
    pet_type: str = Field(description="Species of pet")
    name: str = Field(description="a unique pet name")
Now, create a Guard from the Pet class. The Guard can be used to call the LLM so that the output is formatted to the Pet class. Under the hood, this is done in one of two ways: the output schema is compiled into the prompt, or function calling is used for LLMs that support it.
from guardrails import Guard
import openai

prompt = """
    What kind of pet should I get and what should I name it?

    ${gr.complete_json_suffix_v2}
"""

guard = Guard.for_pydantic(output_class=Pet, prompt=prompt)

raw_output, validated_output, *rest = guard(
    llm_api=openai.completions.create,
    engine="gpt-3.5-turbo-instruct"
)

print(validated_output)
This prints:
{
"pet_type": "dog",
"name": "Buddy
}
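If you already have raw LLM output as a string, a Guard built for structured output can also validate it directly without making another LLM call. A minimal sketch, reusing the Pet model defined above and assuming Guard.parse() accepts the raw output string and returns a validation result like the one shown earlier:

from guardrails import Guard

pet_guard = Guard.for_pydantic(output_class=Pet)

# Assumption: parse() validates a pre-generated string against the Pet schema.
result = pet_guard.parse('{"pet_type": "dog", "name": "Buddy"}')
print(result.validated_output)  # {'pet_type': 'dog', 'name': 'Buddy'}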
Guardrails can be set up as a standalone Flask service via guardrails start, letting you interact with it through a REST API. This approach simplifies the development and deployment of Guardrails-powered applications.
pip install " guardrails -ai"
guardrails configure
guardrails create --validators=hub://guardrails/two_words --name=two-word-guard
guardrails start --config=./config.py
# with the guardrails client
import guardrails as gr
gr.settings.use_server = True
guard = gr.Guard(name='two-word-guard')
guard.validate('this is more than two words')
# or with the openai sdk
import os
import openai
openai.base_url = "http://localhost:8000/guards/two-word-guard/openai/v1/"
os.environ["OPENAI_API_KEY"] = "youropenaikey"
messages = [
{
"role": "user",
"content": "tell me about an apple with 3 words exactly",
},
]
completion = openai.chat.completions.create(
model="gpt-4o-mini",
messages=messages,
)
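The endpoint above speaks the standard OpenAI chat completions protocol, so the response is read the same way as any other completion (standard OpenAI SDK usage; nothing here is Guardrails-specific):

# Print the guarded reply returned by the server.
print(completion.choices[0].message.content)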
For production deployments, we recommend using Docker with Gunicorn as the WSGI server for improved performance and scalability.
You can reach out to us on Discord or Twitter.
Yes, Guardrails can be used with proprietary and open-source LLMs. Check out this guide on how to use Guardrails with any LLM.
Yes, you can create your own validators and contribute them to Guardrails Hub. Check out this guide on how to create your own validators.
Guardrails can be used with Python and JavaScript. Check out the docs on how to use Guardrails from JavaScript. We are working on adding support for more languages. If you would like to contribute to Guardrails, reach out to us on Discord or Twitter.
We welcome contributions to Guardrails!
Get started by checking out the Github issues and the contributing guide. Feel free to open an issue, or reach out if you would like to add to the project!