A simple dynamic multi-agent framework based on atomic-agents and Instructor. Uses the power of Pydantic for data and schema validation and serialization.
- Compose agents from a system prompt, with a shared language of either Function Calls or GraphQL mutations
- A router uses an LLM to process complex 'composite' user prompts, and automatically routes them to the best sequence of your agents
- Generate via OpenAI, AWS Bedrock, or Groq
Note: the framework is at an early stage.
- Breaking changes will be indicated by increasing the minor version (the major version is still at zero).
An LLM based Agents Framework using an Agent Oriented Programming approach to orchestrate agents using a shared language.
The agent language can be either Function Calling based or GraphQL based.
The framework is generic and allows agents to be defined in terms of a name, description, accepted input calls, and allowed output calls.
The agents communicate indirectly using a blackboard. The language is composed of (Function or GraphQL mutation) calls: each agent specifies what it understands as input, and what calls it is able to generate. In this way, the agents can understand each other's output.
A router takes the user prompt and generates an agent execution plan.
The execution plan uses the best sequence of the most suitable agents, to handle the user prompt.
The router rewrites the user prompt to suit each agent, which improves quality and avoids unwanted output.
Note: optionally, the router can be run separately, allowing for human-in-the-loop feedback on the execution plan that the router generated. In this way, the user can collaborate with the router before the generative agents are actually executed.
Finally, the output is returned in the form of an ordered list of (Function or GraphQL) calls.
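The overall flow (router plan, agents writing to a shared blackboard, ordered list of calls as output) can be sketched in plain Python. This is an illustrative sketch only; the class and variable names below are not the framework's actual API:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these are NOT the framework's real classes.

@dataclass
class FunctionCall:
    agent_name: str
    function_name: str
    parameters: dict

@dataclass
class Blackboard:
    # Agents append calls here; later agents can read earlier agents' output.
    calls: list = field(default_factory=list)

    def add(self, call: FunctionCall) -> None:
        self.calls.append(call)

# The router's plan: an ordered list of (agent, rewritten prompt) steps.
plan = [
    ("Creature Creator", "Add a sheep"),
    ("Vegetation Creator", "Add grass"),
    ("Relationship Creator", "The sheep eats the grass"),
]

board = Blackboard()
for agent_name, rewritten_prompt in plan:
    # In the real framework each step is an LLM call; here we fake one result.
    board.add(FunctionCall(agent_name, "ExampleCall", {"prompt": rewritten_prompt}))

# Final output: the ordered list of generated calls.
assert [c.agent_name for c in board.calls] == [step[0] for step in plan]
```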
To read more about this approach, you can see this Medium article.
The framework is at an early stage. The Evaluator is not currently implemented. When integrating, the client's responsibilities depend on which kind of Agent Definitions are used.
This is a demo 'Sim Life' world builder. It uses 3 agents (Creature Creator, Vegetation Creator, Relationship Creator) to process user prompts. The agents are defined in terms of functions. The output is a series of Function Calls which can be implemented by the client, to build the Sim Life world.
The AddCreature function:
```python
function_add_creature = FunctionSpecSchema(
    agent_name=creature_agent_name,
    function_name="AddCreature",
    description="Adds a new creature to the world (not vegetation)",
    parameters=[
        ParameterSpec(name="creature_name", type=ParameterType.string),
        ParameterSpec(name="allowed_terrain", type=ParameterType.string, allowed_values=terrain_types),
        ParameterSpec(name="age", type=ParameterType.int),
        ParameterSpec(name="icon_name", type=ParameterType.string, allowed_values=creature_icons),
    ]
)
```
The AddCreatureRelationship function:
```python
function_add_relationship = FunctionSpecSchema(
    agent_name=relationship_agent_name,
    function_name="AddCreatureRelationship",
    description="Adds a new relationship between two creatures",
    parameters=[
        ParameterSpec(name="from_name", type=ParameterType.string),
        ParameterSpec(name="to_name", type=ParameterType.string),
        ParameterSpec(
            name="relationship_name",
            type=ParameterType.string,
            allowed_values=["eats", "buys", "feeds", "sells"],
        ),
    ],
)
```
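The allowed_values fields above constrain what the agent may generate. A minimal sketch of this kind of check in plain Python (illustrative only, not the framework's actual validation code):

```python
# Illustrative sketch: checking a generated call against allowed values.
allowed_relationships = ["eats", "buys", "feeds", "sells"]

def validate_relationship(call: dict) -> bool:
    """Return True if the generated call uses an allowed relationship_name."""
    return call.get("relationship_name") in allowed_relationships

good_call = {"from_name": "sheep", "to_name": "grass", "relationship_name": "eats"}
bad_call = {"from_name": "sheep", "to_name": "grass", "relationship_name": "flies"}

assert validate_relationship(good_call)
assert not validate_relationship(bad_call)
```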
The Creature Creator agent is defined declaratively in terms of its name, description, accepted input functions, and the functions it is allowed to generate. Agents can collaborate and exchange information indirectly, by reusing the same function definitions via a blackboard.
```python
def build_creature_agent():
    agent_definition = FunctionAgentDefinition(
        agent_name="Creature Creator",
        description="Creates new creatures given the user prompt. Ensures that ALL creatures mentioned by the user are created.",
        accepted_functions=[function_add_creature, function_add_relationship],
        input_schema=FunctionAgentInputSchema,
        initial_input=FunctionAgentInputSchema(
            functions_allowed_to_generate=[function_add_creature],
            previously_generated_functions=[]
        ),
        output_schema=FunctionAgentOutputSchema,
        topics=["creature"]
    )
    return agent_definition
```
Notes about the Creature Creator agent:
- It is only allowed to generate AddCreature calls (via function_add_creature), but it also accepts function_add_relationship as input, so it can understand relationships generated by other agents.
- See the example source code for more details.

This is a demo 'Sim Life' world builder. It uses 3 agents (Creature Creator, Vegetation Creator, Relationship Creator) to process user prompts. The agents are defined declaratively in terms of a GraphQL input schema and allowed generated mutations. The output is a series of GraphQL mutations which can be executed by the client, to build the Sim Life world.
The GraphQL schema:
```graphql
type Creature {
  id: ID!
  creature_name: String!
  allowed_terrain: TerrainType!
  age: Int!
  icon_name: IconType!
}

type Vegetation {
  id: ID!
  vegetation_name: String!
  icon_name: IconType!
  allowed_terrain: TerrainType!
}

type Relationship {
  id: ID!
  from_name: String!
  to_name: String!
  relationship_kind: RelationshipType!
}

...
```
The GraphQL mutations that we want the agents to generate are distinct for each agent:
Creature Creator agent:
```graphql
type Mutation {
  addCreature(input: CreatureInput!): Creature!
}

input CreatureInput {
  creature_name: String!
  allowed_terrain: TerrainType!
  age: Int!
  icon_name: IconType!
}
```
Vegetation Creator agent:
```graphql
type Mutation {
  addVegetation(input: VegetationInput!): Vegetation!
}

input VegetationInput {
  vegetation_name: String!
  icon_name: IconType!
  allowed_terrain: TerrainType!
}
```
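Given the mutation definitions above, a generated call for the Vegetation Creator might look like this (the values are illustrative):

```graphql
mutation {
  addVegetation(input: {
    vegetation_name: "Grass",
    icon_name: GRASS,
    allowed_terrain: LAND
  }) {
    vegetation_name
  }
}
```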
The Creature Creator agent is defined declaratively in terms of its accepted GraphQL schemas and the mutations it is allowed to generate.
An agent is basically a composition of input and output schemas, together with a prompt.
Agents collaborate and exchange information indirectly via a blackboard, by reusing the same GraphQL schemas and mutation calls.
```python
creatures_graphql = _read_schema("creature.graphql")
creature_mutations_graphql = _read_schema("creature.mutations.graphql")

def build_creature_agent():
    agent_definition = GraphQLAgentDefinition(
        agent_name="Creature Creator",
        description="Creates new creatures given the user prompt. Ensures that ALL creatures mentioned by the user are created.",
        accepted_graphql_schemas=[creatures_graphql, creature_mutations_graphql],
        input_schema=GraphQLAgentInputSchema,
        initial_input=GraphQLAgentInputSchema(
            mutations_allowed_to_generate=[creature_mutations_graphql],
            previously_generated_mutations=[]
        ),
        output_schema=GraphQLAgentOutputSchema,
        topics=["creature"]
    )
    return agent_definition
```
Notes about this agent:
- It is allowed to generate the mutations defined in creature_mutations_graphql (read from the file "creature.mutations.graphql").
- It accepts the schema defined in creatures_graphql (read from the file "creature.graphql").
The agents can be used together to form a chat bot:
```python
from gpt_multi_atomic_agents import functions_expert_service, config
from . import agents

def run_chat_loop(given_user_prompt: str | None = None) -> list:
    CHAT_AGENT_DESCRIPTION = "Handles users questions about an ecosystem game like Sim Life"

    agent_definitions = [
        build_creature_agent(), build_relationship_agent(), build_vegatation_agent()  # for more capabilities, add more agents here
    ]

    _config = config.Config(
        ai_platform=config.AI_PLATFORM_Enum.bedrock_anthropic,
        model=config.ANTHROPIC_MODEL,
        max_tokens=config.ANTHROPIC_MAX_TOKENS,
        is_debug=False
    )

    return functions_expert_service.run_chat_loop(
        agent_definitions=agent_definitions,
        chat_agent_description=CHAT_AGENT_DESCRIPTION,
        _config=_config,
        given_user_prompt=given_user_prompt,
    )
```
Note: if given_user_prompt is not set, then run_chat_loop() will wait for user input from the keyboard.
See the example source code for more details.
USER INPUT:
Add a sheep that eats grass
OUTPUT:
```
Generated 3 function calls
[Agent: Creature Creator] AddCreature( creature_name=sheep, icon_name=sheep-icon, land_type=prairie, age=1 )
[Agent: Plant Creator] AddPlant( plant_name=grass, icon_name=grass-icon, land_type=prairie )
[Agent: Relationship Creator] AddCreatureRelationship( from_name=sheep, to_name=grass, relationship_name=eats )
```
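The generated function calls can be implemented by the client with a simple dispatch table. Below is a minimal sketch in plain Python; the world structure and handler wiring are illustrative, not part of the framework:

```python
# Illustrative client-side dispatch: map generated function names to handlers.
world = {"creatures": [], "vegetation": [], "relationships": []}

handlers = {
    "AddCreature": lambda p: world["creatures"].append(p),
    "AddPlant": lambda p: world["vegetation"].append(p),
    "AddCreatureRelationship": lambda p: world["relationships"].append(p),
}

# Calls as (function_name, parameters), mirroring the sample output above.
generated_calls = [
    ("AddCreature", {"creature_name": "sheep", "icon_name": "sheep-icon"}),
    ("AddPlant", {"plant_name": "grass", "icon_name": "grass-icon"}),
    ("AddCreatureRelationship", {"from_name": "sheep", "to_name": "grass",
                                 "relationship_name": "eats"}),
]

# Execute each call in order, building up the world.
for function_name, params in generated_calls:
    handlers[function_name](params)

assert len(world["creatures"]) == 1 and len(world["relationships"]) == 1
```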
Because the framework has a dynamic router, it can handle more complex 'composite' prompts, such as:
Add a cow that eats grass. Add a human - the cow feeds the human. Add an alien that eats the human. The human also eats cows.
The router figures out which agents to use, what order to run them in, and what prompt to send to each agent.
Optionally, the router can be re-executed with user feedback on its generated plan, before actually executing the agents.
The recommended agents are then executed in order, building up their results in the shared blackboard.
Finally, the framework combines the resulting calls together and returns them to the client.
USER INPUT:
Add a sheep that eats grass
OUTPUT:
```
['mutation {', '  addCreature(input: {', '    creature_name: "sheep",', '    allowed_terrain: GRASSLAND,', '    age: 2,', '    icon_name: SHEEP', '  }) {', '    creature_name', '    allowed_terrain', '    age', '    icon_name', '  }', '}']
['mutation {', '  addVegetation(input: {', '    vegetation_name: "Grass",', '    icon_name: GRASS,', '    allowed_terrain: LAND', '  }) {', '    vegetation_name', '    icon_name', '    allowed_terrain', '  }', '}']
['mutation {', '  addCreatureRelationship(input: {', '    from_name: "Sheep",', '    to_name: "Grass",', '    relationship_kind: EATS', '  }) {', '    id', '  }', '}']
```
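Each GraphQL mutation is returned as a list of text fragments. A client can join the fragments back into a single mutation document before executing it against a GraphQL endpoint (a plain-Python sketch, not framework code):

```python
# The framework returns each mutation as a list of line fragments.
fragments = ['mutation {', '  addVegetation(input: {', '    vegetation_name: "Grass",',
             '    icon_name: GRASS,', '    allowed_terrain: LAND', '  }) {',
             '    vegetation_name', '  }', '}']

# Join the fragments into one executable GraphQL document.
mutation_document = "\n".join(fragments)

assert mutation_document.startswith("mutation {")
assert "addVegetation" in mutation_document
```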
Install Python 3.11 and Poetry.

Install dependencies:

```shell
poetry install
```

For OpenAI:

```shell
export OPENAI_API_KEY="xxx"
```

Add that to your shell initialization script (~/.zprofile or similar).

Load it in the current terminal:

```shell
source ~/.zprofile
```

Test script:

```shell
./test.sh
```
See the example source code for more details.