A flexible and powerful framework for managing multiple AI agents and handling complex conversations.
The Multi-Agent Orchestrator is a flexible framework for managing multiple AI agents and handling complex conversations. It intelligently routes queries and maintains context across interactions.
The system offers pre-built components for quick deployment, while also allowing easy integration of custom agents and conversation-message storage solutions.
This adaptability makes it suitable for a wide range of applications, from simple chatbots to sophisticated AI systems, accommodating diverse requirements and scaling efficiently.
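Conceptually, the orchestrator pairs a classifier (which picks the best agent for each query) with per-session conversation storage. The toy sketch below illustrates that flow in plain Python; the class names and keyword-based matching are illustrative assumptions only, not the library's actual implementation, which classifies with an LLM:

```python
# Illustrative sketch only -- NOT the library's real implementation.
# A classifier picks an agent for each query; per-session history
# preserves context across interactions.
from collections import defaultdict

class ToyAgent:
    def __init__(self, name, keywords):
        self.name = name
        self.keywords = keywords  # hypothetical keyword matching stands in
                                  # for the framework's LLM-based classifier

    def respond(self, text):
        return f"[{self.name}] handling: {text}"

class ToyOrchestrator:
    def __init__(self):
        self.agents = []
        self.history = defaultdict(list)  # (user_id, session_id) -> messages

    def add_agent(self, agent):
        self.agents.append(agent)

    def route_request(self, text, user_id, session_id):
        # Pick the agent whose keywords best match the query;
        # ties fall back to the first registered agent.
        best = max(self.agents,
                   key=lambda a: sum(k in text.lower() for k in a.keywords))
        self.history[(user_id, session_id)].append((best.name, text))
        return best.respond(text)

orch = ToyOrchestrator()
orch.add_agent(ToyAgent("Tech Agent", ["software", "ai", "cloud"]))
orch.add_agent(ToyAgent("Travel Agent", ["flight", "book", "travel"]))
print(orch.route_request("I want to book a flight", "user123", "session456"))
```

The real framework replaces the keyword heuristic with an LLM classifier and pluggable storage backends, but the routing-plus-session-context shape is the same.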
To get a feel for the Multi-Agent Orchestrator quickly, we've provided several basic agents in a Demo App. This interactive demo showcases the orchestrator's capabilities in a user-friendly interface. To learn more about setting up and running the demo app, please refer to our Demo App section.
In the screen recording below, we demonstrate an extended version of the demo app that uses six specialized agents:
Watch as the system seamlessly switches context between diverse topics, from booking flights to checking the weather, solving math problems, and providing health information. Notice how the appropriate agent is selected for each query, maintaining coherence even with brief follow-up inputs.
This demo highlights the system's ability to handle complex, multi-turn conversations while preserving context and leveraging specialized agents across various domains.
To get a quick feel for the Multi-Agent Orchestrator, check out our Demo App. Additional code examples are available in both the documentation and the examples folder.
Get hands-on experience with the Multi-Agent Orchestrator through our diverse set of examples:

Ready-to-run example implementations in the examples folder:
- chat-demo-app: Web-based chat interface with multiple specialized agents
- ecommerce-support-simulator: AI-powered customer support system
- chat-chainlit-app: Chat application built with Chainlit
- fast-api-streaming: FastAPI implementation with streaming support
- text-2-structured-output: Natural language to structured data
- bedrock-inline-agents: Amazon Bedrock inline agents sample

All examples are available in both Python and TypeScript implementations. Check out our documentation for comprehensive guides on setting up and using the Multi-Agent Orchestrator!
Discover creative implementations and diverse applications of the Multi-Agent Orchestrator:
From "Bonjour" to "Boarding Pass": Multilingual AI Chatbot for Flight Reservations
This article demonstrates how to build a multilingual chatbot using the Multi-Agent Orchestrator framework. It explains how to use an Amazon Lex bot as an agent, along with two other new agents, to make the chatbot work in multiple languages with only a few lines of code.
Beyond Auto-Replies: Building an AI-Powered E-commerce Support System
This article demonstrates how to build an AI-driven multi-agent system for automated e-commerce customer email support. It covers the architecture and setup of specialized AI agents using the Multi-Agent Orchestrator framework, integrating automated processing with human-in-the-loop oversight. The guide explores email ingestion, intelligent routing, automated response generation, and human validation, offering a comprehensive approach to balancing AI efficiency with human expertise in customer support.
Speak Up, AI: Voicing Your Agents with Amazon Connect, Lex, and Bedrock
This article demonstrates how to build an AI customer call center. It covers the architecture and setup of a voice-interactive multi-agent system using the Multi-Agent Orchestrator framework with Amazon Connect and Amazon Lex.
npm install multi-agent-orchestrator
The following example demonstrates the Multi-Agent Orchestrator with two different types of agents: a Bedrock LLM Agent with Converse API support and a Lex Bot Agent. This showcases the system's flexibility in integrating diverse AI services.
import { MultiAgentOrchestrator, BedrockLLMAgent, LexBotAgent } from "multi-agent-orchestrator";

const orchestrator = new MultiAgentOrchestrator();

// Add a Bedrock LLM Agent with Converse API support
orchestrator.addAgent(
  new BedrockLLMAgent({
    name: "Tech Agent",
    description:
      "Specializes in technology areas including software development, hardware, AI, cybersecurity, blockchain, cloud computing, emerging tech innovations, and pricing/costs related to technology products and services.",
    streaming: true
  })
);

// Add a Lex Bot Agent for handling travel-related queries
orchestrator.addAgent(
  new LexBotAgent({
    name: "Travel Agent",
    description: "Helps users book and manage their flight reservations",
    botId: process.env.LEX_BOT_ID,
    botAliasId: process.env.LEX_BOT_ALIAS_ID,
    localeId: "en_US",
  })
);

// Example usage
const response = await orchestrator.routeRequest(
  "I want to book a flight",
  "user123",
  "session456"
);

// Handle the response (streaming or non-streaming)
if (response.streaming === true) {
  console.log("\n** RESPONSE STREAMING **\n");
  // Send metadata immediately
  console.log(`> Agent ID: ${response.metadata.agentId}`);
  console.log(`> Agent Name: ${response.metadata.agentName}`);
  console.log(`> User Input: ${response.metadata.userInput}`);
  console.log(`> User ID: ${response.metadata.userId}`);
  console.log(`> Session ID: ${response.metadata.sessionId}`);
  console.log(`> Additional Parameters:`, response.metadata.additionalParams);
  console.log(`\n> Response:`);

  // Stream the content
  for await (const chunk of response.output) {
    if (typeof chunk === "string") {
      process.stdout.write(chunk);
    } else {
      console.error("Received unexpected chunk type:", typeof chunk);
    }
  }
} else {
  // Handle non-streaming response (AgentProcessingResult)
  console.log("\n** RESPONSE **\n");
  console.log(`> Agent ID: ${response.metadata.agentId}`);
  console.log(`> Agent Name: ${response.metadata.agentName}`);
  console.log(`> User Input: ${response.metadata.userInput}`);
  console.log(`> User ID: ${response.metadata.userId}`);
  console.log(`> Session ID: ${response.metadata.sessionId}`);
  console.log(`> Additional Parameters:`, response.metadata.additionalParams);
  console.log(`\n> Response: ${response.output}`);
}
# Optional: Set up a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
pip install multi-agent-orchestrator
Here's an equivalent Python example demonstrating the use of the Multi-Agent Orchestrator with a Bedrock LLM Agent and a Lex Bot Agent:
import os
import sys
import asyncio
from multi_agent_orchestrator.orchestrator import MultiAgentOrchestrator
from multi_agent_orchestrator.agents import BedrockLLMAgent, LexBotAgent, BedrockLLMAgentOptions, LexBotAgentOptions, AgentCallbacks

orchestrator = MultiAgentOrchestrator()

class BedrockLLMAgentCallbacks(AgentCallbacks):
    def on_llm_new_token(self, token: str) -> None:
        # handle response streaming here
        print(token, end='', flush=True)

tech_agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name="Tech Agent",
    streaming=True,
    description="Specializes in technology areas including software development, hardware, AI, "
                "cybersecurity, blockchain, cloud computing, emerging tech innovations, and pricing/costs "
                "related to technology products and services.",
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    callbacks=BedrockLLMAgentCallbacks()
))
orchestrator.add_agent(tech_agent)

# Add a Lex Bot Agent for handling travel-related queries
orchestrator.add_agent(
    LexBotAgent(LexBotAgentOptions(
        name="Travel Agent",
        description="Helps users book and manage their flight reservations",
        bot_id=os.environ.get('LEX_BOT_ID'),
        bot_alias_id=os.environ.get('LEX_BOT_ALIAS_ID'),
        locale_id="en_US",
    ))
)

async def main():
    # Example usage
    response = await orchestrator.route_request(
        "I want to book a flight",
        'user123',
        'session456'
    )

    # Handle the response (streaming or non-streaming)
    if response.streaming:
        print("\n** RESPONSE STREAMING **\n")
        # Send metadata immediately
        print(f"> Agent ID: {response.metadata.agent_id}")
        print(f"> Agent Name: {response.metadata.agent_name}")
        print(f"> User Input: {response.metadata.user_input}")
        print(f"> User ID: {response.metadata.user_id}")
        print(f"> Session ID: {response.metadata.session_id}")
        print(f"> Additional Parameters: {response.metadata.additional_params}")
        print("\n> Response:")

        # Stream the content
        async for chunk in response.output:
            if isinstance(chunk, str):
                print(chunk, end='', flush=True)
            else:
                print(f"Received unexpected chunk type: {type(chunk)}", file=sys.stderr)
    else:
        # Handle non-streaming response (AgentProcessingResult)
        print("\n** RESPONSE **\n")
        print(f"> Agent ID: {response.metadata.agent_id}")
        print(f"> Agent Name: {response.metadata.agent_name}")
        print(f"> User Input: {response.metadata.user_input}")
        print(f"> User ID: {response.metadata.user_id}")
        print(f"> Session ID: {response.metadata.session_id}")
        print(f"> Additional Parameters: {response.metadata.additional_params}")
        print(f"\n> Response: {response.output.content}")

if __name__ == "__main__":
    asyncio.run(main())
These examples showcase how the orchestrator routes a request to the appropriate agent and handles both streaming and non-streaming responses.
If you want to use Anthropic or OpenAI for the classifier and/or agents, make sure to install multi-agent-orchestrator with the relevant extras:
pip install "multi-agent-orchestrator[anthropic]"
pip install "multi-agent-orchestrator[openai]"
For a complete installation (including both Anthropic and OpenAI):
pip install "multi-agent-orchestrator[all]"
We welcome contributions! Please see our Contributing Guide for more details.
Big shout out to our contributors! Thank you for making this project better!
Please see our Contributing Guide for guidelines on how to propose bug fixes and improvements.
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
This project uses the JetBrainsMono NF font, licensed under the SIL Open Font License 1.1. For complete license details, see FONT-LICENSE.md.