⚡ Building language agents as graphs ⚡
Note
Looking for the JS version? Click here (JS docs).
LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Compared to other LLM frameworks, it offers these core benefits: cycles, controllability, and persistence. LangGraph allows you to define flows that involve cycles, essential for most agentic architectures, differentiating it from DAG-based solutions. As a very low-level framework, it provides fine-grained control over both the flow and state of your application, which is crucial for creating reliable agents. Additionally, LangGraph includes built-in persistence, enabling advanced human-in-the-loop and memory features.
LangGraph is inspired by Pregel and Apache Beam. The public interface draws inspiration from NetworkX. LangGraph is built by LangChain Inc, the creators of LangChain, but it can be used without LangChain.
LangGraph Platform is infrastructure for deploying LangGraph agents. It is a commercial solution for deploying agentic applications to production, built on the open-source LangGraph framework. The LangGraph Platform consists of several components that work together to support the development, deployment, debugging, and monitoring of LangGraph applications: LangGraph Server (APIs), LangGraph SDKs (clients for the APIs), the LangGraph CLI (a command line tool for building the server), and LangGraph Studio (a UI/debugger).
To learn more about LangGraph, check out our first LangChain Academy course, Introduction to LangGraph, available for free.
LangGraph Platform is a commercial solution for deploying agentic applications to production, built on the open-source LangGraph framework, and addresses common issues that arise in complex deployments.
```shell
pip install -U langgraph
```
One of LangGraph's central concepts is state. Each graph execution creates a state that is passed between the nodes in the graph as they execute, and each node updates this internal state with its return value after it runs. How the graph updates its internal state is defined by either the type of graph chosen or a custom function.
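The mechanics can be illustrated with a plain-Python sketch (hypothetical helper names, not the LangGraph API): each node receives the current state, returns a partial update, and the graph merges that update into the state before calling the next node.

```python
from typing import Callable

State = dict

def greet(state: State) -> State:
    # A node returns only the keys it wants to update.
    return {"log": state["log"] + ["greeted"]}

def farewell(state: State) -> State:
    return {"log": state["log"] + ["said goodbye"]}

def run_graph(nodes: list[Callable[[State], State]], state: State) -> State:
    for node in nodes:
        update = node(state)
        state = {**state, **update}  # merge the node's return value into state
    return state

final = run_graph([greet, farewell], {"log": []})
print(final["log"])  # ['greeted', 'said goodbye']
```

Real LangGraph state schemas can also declare per-key merge logic (as `MessagesState` does for its message list), rather than the simple overwrite shown here.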
Let's take a look at a simple example of an agent that can use a search tool.
```shell
pip install langchain-anthropic
```

```shell
export ANTHROPIC_API_KEY=sk-...
```
Optionally, we can set up LangSmith for best-in-class observability.
```shell
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=lsv2_sk_...
```
```python
from typing import Annotated, Literal, TypedDict

from langchain_core.messages import HumanMessage
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph, MessagesState
from langgraph.prebuilt import ToolNode


# Define the tools for the agent to use
@tool
def search(query: str):
    """Call to surf the web."""
    # This is a placeholder, but don't tell the LLM that...
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."


tools = [search]

tool_node = ToolNode(tools)

model = ChatAnthropic(model="claude-3-5-sonnet-20240620", temperature=0).bind_tools(tools)

# Define the function that determines whether to continue or not
def should_continue(state: MessagesState) -> Literal["tools", END]:
    messages = state["messages"]
    last_message = messages[-1]
    # If the LLM makes a tool call, then we route to the "tools" node
    if last_message.tool_calls:
        return "tools"
    # Otherwise, we stop (reply to the user)
    return END


# Define the function that calls the model
def call_model(state: MessagesState):
    messages = state["messages"]
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}


# Define a new graph
workflow = StateGraph(MessagesState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.add_edge(START, "agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
)

# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge("tools", "agent")

# Initialize memory to persist state between graph runs
checkpointer = MemorySaver()

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable.
# Note that we're (optionally) passing the memory when compiling the graph
app = workflow.compile(checkpointer=checkpointer)

# Use the Runnable
final_state = app.invoke(
    {"messages": [HumanMessage(content="what is the weather in sf")]},
    config={"configurable": {"thread_id": 42}}
)
final_state["messages"][-1].content
```
"Based on the search results, I can tell you that the current weather in San Francisco is:nnTemperature: 60 degrees FahrenheitnConditions: FoggynnSan Francisco is known for its microclimates and frequent fog, especially during the summer months. The temperature of 60°F (about 15.5°C) is quite typical for the city, which tends to have mild temperatures year-round. The fog, often referred to as "Karl the Fog" by locals, is a characteristic feature of San Francisco's weather, particularly in the mornings and evenings.nnIs there anything else you'd like to know about the weather in San Francisco or any other location?"
Now when we pass the same `"thread_id"`, the conversation context is retained via the saved state (i.e. the stored list of messages).
```python
final_state = app.invoke(
    {"messages": [HumanMessage(content="what about ny")]},
    config={"configurable": {"thread_id": 42}}
)
final_state["messages"][-1].content
```
"Based on the search results, I can tell you that the current weather in New York City is:nnTemperature: 90 degrees Fahrenheit (approximately 32.2 degrees Celsius)nConditions: SunnynnThis weather is quite different from what we just saw in San Francisco. New York is experiencing much warmer temperatures right now. Here are a few points to note:nn1. The temperature of 90°F is quite hot, typical of summer weather in New York City.n2. The sunny conditions suggest clear skies, which is great for outdoor activities but also means it might feel even hotter due to direct sunlight.n3. This kind of weather in New York often comes with high humidity, which can make it feel even warmer than the actual temperature suggests.nnIt's interesting to see the stark contrast between San Francisco's mild, foggy weather and New York's hot, sunny conditions. This difference illustrates how varied weather can be across different parts of the United States, even on the same day.nnIs there anything else you'd like to know about the weather in New York or any other location?"
1. Initialize the model and tools: we use `ChatAnthropic` as our LLM. Note: we need to make sure the model knows that it has these tools available to call. We do this by converting the LangChain tools into the format for OpenAI tool calling, using the `.bind_tools()` method.
2. Initialize the graph with state: we initialize the graph (`StateGraph`) by passing it a state schema (in our case, `MessagesState`). `MessagesState` is a prebuilt state schema with one attribute, a list of LangChain `Message` objects, along with logic for merging each node's updates into the state.
3. Define graph nodes. We need two main nodes:
    - The `agent` node: responsible for deciding what (if any) actions to take.
    - The `tools` node: if the agent decides to take an action, this node executes it.
4. Define the entry point and graph edges. First, we need to set the entry point for graph execution, the `agent` node. Then we define one normal edge and one conditional edge. A conditional edge means the destination depends on the contents of the graph's state (`MessagesState`); in our case, the destination is unknown until the agent (the LLM) decides.
5. Compile the graph. Compiling turns it into a LangChain Runnable, which automatically enables calling `.invoke()`, `.stream()`, and `.batch()` with your inputs. We also optionally pass a checkpointer for persisting state between graph runs; here we use `MemorySaver`, a simple in-memory checkpointer.
6. Execute the graph:
    - LangGraph adds the input message to the internal state, then passes the state to the entry point node, `"agent"`.
    - The `"agent"` node executes, invoking the chat model.
    - The chat model returns an `AIMessage`. LangGraph adds it to the state.
    - The graph cycles through the following steps until there are no more `tool_calls` on the `AIMessage`:
        - If the `AIMessage` has `tool_calls`, the `"tools"` node executes.
        - The `"agent"` node executes again and returns an `AIMessage`.
    - Execution progresses to the special `END` value and outputs the final state. As a result, we get a list of all our chat messages as output.
For more information on how to contribute, see here.