⚡ Building language agents as graphs ⚡
Note
Looking for the JS version? Click here (JS docs).
LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Compared to other LLM frameworks, it offers these core benefits: cycles, controllability, and persistence. LangGraph lets you define flows that involve cycles, which are essential for most agentic architectures and which distinguish it from DAG-based solutions. As a very low-level framework, it gives you fine-grained control over both the flow and the state of your application, which is crucial for creating reliable agents. Additionally, LangGraph includes built-in persistence, enabling advanced human-in-the-loop and memory features.
LangGraph is inspired by Pregel and Apache Beam. The public interface draws inspiration from NetworkX. LangGraph is built by LangChain Inc, the creators of LangChain, but it can be used without LangChain.
LangGraph Platform is infrastructure for deploying LangGraph agents. It is a commercial solution for deploying agentic applications to production, built on the open-source LangGraph framework. The LangGraph Platform consists of several components that work together to support the development, deployment, debugging, and monitoring of LangGraph applications: LangGraph Server (the API), LangGraph SDK (clients for the API), LangGraph CLI (a command-line tool for building the server), and LangGraph Studio (a UI/debugger).
To learn more about LangGraph, check out our first LangChain Academy course, Introduction to LangGraph, available for free.
LangGraph Platform is a commercial solution for deploying agentic applications to production, built on the open-source LangGraph framework, and it addresses common problems that arise in complex deployments.
pip install -U langgraph
One of the central concepts of LangGraph is state. Each graph execution creates a state that is passed between the nodes in the graph as they execute, and each node updates this internal state with its return value after it runs. The way the graph updates its internal state is defined either by the type of graph chosen or by a custom function.
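To make this concrete, here is a minimal sketch of a custom state, assuming only that langgraph is installed. The CounterState schema and the increment node are hypothetical names used purely for illustration, not part of LangGraph itself:

from typing import TypedDict

from langgraph.graph import END, START, StateGraph


# Hypothetical state schema: a single integer channel.
class CounterState(TypedDict):
    count: int


def increment(state: CounterState):
    # Each node returns a partial update; LangGraph merges it into the state.
    return {"count": state["count"] + 1}


builder = StateGraph(CounterState)
builder.add_node("increment", increment)
builder.add_edge(START, "increment")
builder.add_edge("increment", END)

graph = builder.compile()
print(graph.invoke({"count": 0}))  # -> {'count': 1}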
Let's take a look at a simple example of an agent that can use a search tool.
pip install langchain-anthropic
export ANTHROPIC_API_KEY=sk-...
Optionally, we can set up LangSmith for best-in-class observability.
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=lsv2_sk_...
from typing import Annotated, Literal, TypedDict

from langchain_core.messages import HumanMessage
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph, MessagesState
from langgraph.prebuilt import ToolNode


# Define the tools for the agent to use
@tool
def search(query: str):
    """Call to surf the web."""
    # This is a placeholder, but don't tell the LLM that...
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."


tools = [search]

tool_node = ToolNode(tools)

model = ChatAnthropic(model="claude-3-5-sonnet-20240620", temperature=0).bind_tools(tools)


# Define the function that determines whether to continue or not
def should_continue(state: MessagesState) -> Literal["tools", END]:
    messages = state["messages"]
    last_message = messages[-1]
    # If the LLM makes a tool call, then we route to the "tools" node
    if last_message.tool_calls:
        return "tools"
    # Otherwise, we stop (reply to the user)
    return END


# Define the function that calls the model
def call_model(state: MessagesState):
    messages = state["messages"]
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}


# Define a new graph
workflow = StateGraph(MessagesState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.add_edge(START, "agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
)

# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge("tools", "agent")

# Initialize memory to persist state between graph runs
checkpointer = MemorySaver()

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable.
# Note that we're (optionally) passing the memory when compiling the graph
app = workflow.compile(checkpointer=checkpointer)

# Use the Runnable
final_state = app.invoke(
    {"messages": [HumanMessage(content="what is the weather in sf")]},
    config={"configurable": {"thread_id": 42}}
)
final_state["messages"][-1].content
"Based on the search results, I can tell you that the current weather in San Francisco is:nnTemperature: 60 degrees FahrenheitnConditions: FoggynnSan Francisco is known for its microclimates and frequent fog, especially during the summer months. The temperature of 60°F (about 15.5°C) is quite typical for the city, which tends to have mild temperatures year-round. The fog, often referred to as "Karl the Fog" by locals, is a characteristic feature of San Francisco's weather, particularly in the mornings and evenings.nnIs there anything else you'd like to know about the weather in San Francisco or any other location?"
Now when we pass the same "thread_id", the conversation context is retained via the saved state (i.e. the stored list of messages).
final_state = app.invoke(
    {"messages": [HumanMessage(content="what about ny")]},
    config={"configurable": {"thread_id": 42}}
)
final_state["messages"][-1].content
"Based on the search results, I can tell you that the current weather in New York City is:nnTemperature: 90 degrees Fahrenheit (approximately 32.2 degrees Celsius)nConditions: SunnynnThis weather is quite different from what we just saw in San Francisco. New York is experiencing much warmer temperatures right now. Here are a few points to note:nn1. The temperature of 90°F is quite hot, typical of summer weather in New York City.n2. The sunny conditions suggest clear skies, which is great for outdoor activities but also means it might feel even hotter due to direct sunlight.n3. This kind of weather in New York often comes with high humidity, which can make it feel even warmer than the actual temperature suggests.nnIt's interesting to see the stark contrast between San Francisco's mild, foggy weather and New York's hot, sunny conditions. This difference illustrates how varied weather can be across different parts of the United States, even on the same day.nnIs there anything else you'd like to know about the weather in New York or any other location?"
Step-by-step breakdown:

1. Initialize the model and tools.
- We use ChatAnthropic as our LLM. Note: we need to make sure the model knows that it has these tools available to call. We do this by converting the LangChain tools into the format for tool calling, using the .bind_tools() method.
- We define the tools we want to use: a search tool in our case.

2. Initialize the graph with state.
- We initialize the graph (StateGraph) by passing a state schema (in our case, MessagesState).
- MessagesState is a prebuilt state schema that has one attribute, a list of LangChain Message objects, along with logic for merging each node's updates into the state (the schema can also be extended; see the sketch below).
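If an agent needs to track more than the message list, the prebuilt schema can be subclassed. A hypothetical sketch; the summary field is purely illustrative, not something MessagesState defines:

from langgraph.graph import MessagesState


# Extend the prebuilt schema with an extra key.
# Nodes can now return updates for "messages", "summary", or both.
class AgentState(MessagesState):
    summary: str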
3. Define the graph nodes. We need two main nodes:
- The agent node: responsible for deciding what (if any) actions to take.
- The tools node: if the agent decides to take an action, this node executes it.

4. Define the entry point and the graph edges.
First, we set the entry point for graph execution: the agent node.
Then we define one normal edge and one conditional edge. A conditional edge means that the destination depends on the contents of the graph's state (MessagesState); in our case, the destination is not known until the agent (the LLM) decides. After the agent runs, we either execute tools (if the agent requested an action) or finish and reply to the user, while the normal edge always routes from tools back to agent. An explicit form of the conditional edge is sketched below.
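add_conditional_edges also accepts an optional path map, which spells out how the router's return values map to node names. A sketch equivalent to the call in the example above (use one form or the other, not both):

# Same routing as before, with the mapping made explicit:
# "tools" -> the "tools" node, END -> finish.
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {"tools": "tools", END: END},
)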
5. Compile the graph.
- Compiling turns the graph into a LangChain Runnable, which automatically enables calling .invoke(), .stream(), and .batch() with your inputs (a streaming sketch follows this breakdown).
- We can optionally pass a checkpointer object to persist state between graph runs and enable memory. Here we use MemorySaver, a simple in-memory checkpointer.

6. Execute the graph.
1. LangGraph adds the input message to the internal state, then passes the state to the entry point node, "agent".
2. The "agent" node executes, invoking the chat model.
3. The chat model returns an AIMessage; LangGraph adds it to the state.
4. The graph cycles through the following steps until there are no more tool_calls on the AIMessage:
- If the AIMessage has tool_calls, the "tools" node executes.
- The "agent" node then executes again and returns an AIMessage.
5. Execution progresses to the special END value and outputs the final state. As a result, we get the full list of chat messages as output.
For more information on how to contribute, see here.