The Atomic Agents framework is designed to be modular, extensible, and easy to use. Its main goal is to eliminate redundant complexity, unnecessary abstractions, and hidden assumptions while still providing a flexible, powerful platform for building AI applications through atomicity. The framework provides a set of tools and agents that can be combined into larger applications. It is built on top of Instructor and leverages Pydantic for data and schema validation and serialization.
While existing frameworks for agentic AI focus on building autonomous multi-agent systems, they often lack the control and predictability required for real-world applications. Businesses need AI systems that produce consistent, reliable outputs aligned with their brand and objectives.
Atomic Agents addresses this need by providing:
Modularity: Build AI applications by combining small, reusable components.
Predictability: Define clear input and output schemas to ensure consistent behavior.
Extensibility: Easily swap out components or integrate new ones without disrupting the entire system.
Control: Fine-tune each part of the system individually, from system prompts to tool integrations.
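Since Atomic Agents builds on Pydantic, the predictability point can be illustrated with plain Pydantic models. The `ChatOutput` schema below is purely illustrative, not part of the framework's API:

```python
from typing import List
from pydantic import BaseModel, Field, ValidationError

# Illustrative output schema: the agent's reply must always have this shape.
class ChatOutput(BaseModel):
    chat_message: str = Field(..., description="The agent's reply.")
    suggested_questions: List[str] = Field(..., description="Follow-up questions.")

# Well-formed model output validates...
ok = ChatOutput(chat_message="Hello!", suggested_questions=["What next?"])

# ...while malformed output is rejected instead of silently propagating.
try:
    ChatOutput(chat_message="Hello!")  # missing suggested_questions
except ValidationError:
    print("rejected malformed output")
```

Because every agent and tool declares schemas like this, downstream code can rely on the shape of the data it receives.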
In Atomic Agents, an agent is composed of several key components:
System Prompt: Defines the agent's behavior and purpose.
Input Schema: Specifies the structure and validation rules for the agent's input.
Output Schema: Specifies the structure and validation rules for the agent's output.
Memory: Stores conversation history or other relevant data.
Context Providers: Inject dynamic context into the agent's system prompt at runtime.
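As a rough mental model (plain Python, not the framework's actual classes), the Memory component's job can be sketched as an ordered transcript that is replayed into each model call:

```python
# Framework-free sketch of the "Memory" component's job:
# keep an ordered transcript the agent can replay into each LLM call.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Message:
    role: str      # "user" or "assistant"
    content: str

@dataclass
class AgentMemorySketch:
    history: List[Message] = field(default_factory=list)

    def add_message(self, role: str, content: str) -> None:
        self.history.append(Message(role, content))

    def get_history(self) -> List[dict]:
        # Shape expected by chat-completion-style APIs.
        return [{"role": m.role, "content": m.content} for m in self.history]

mem = AgentMemorySketch()
mem.add_message("user", "Hi")
mem.add_message("assistant", "Hello!")
```

The framework's own memory class offers more than this (e.g., structured content), but the core idea is the same: accumulated history shapes each subsequent call.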
Here's a high-level architecture diagram:
To install Atomic Agents, you can use pip:
pip install atomic-agents
Make sure you also install the provider you want to use. For example, to use OpenAI and Groq, install the `openai` and `groq` packages:
pip install openai groq
This also installs the CLI Atomic Assembler, which can be used to download Tools (and soon also Agents and Pipelines).
For local development, you can install from the repository:
```shell
git clone https://github.com/BrainBlend-AI/atomic-agents.git
cd atomic-agents
poetry install
```
Atomic Agents uses a monorepo structure with the following main components:
`atomic-agents/`: The core Atomic Agents library
`atomic-assembler/`: The CLI tool for managing Atomic Agents components
`atomic-examples/`: Example projects showcasing Atomic Agents usage
`atomic-forge/`: A collection of tools that can be used with Atomic Agents
A complete list of examples can be found in the examples directory.
We strive to thoroughly document each example, but if something is unclear, please don't hesitate to open an issue or pull request to improve the documentation.
Here's a quick snippet demonstrating how easy it is to create a powerful agent with Atomic Agents:
```python
from typing import List

from pydantic import Field
from atomic_agents.agents.base_agent import BaseAgent, BaseAgentConfig, BaseIOSchema
from atomic_agents.lib.components.agent_memory import AgentMemory
from atomic_agents.lib.components.system_prompt_generator import SystemPromptGenerator

# Define a custom output schema
class CustomOutputSchema(BaseIOSchema):
    """Docstring for the custom output schema."""
    chat_message: str = Field(..., description="The chat message from the agent.")
    suggested_questions: List[str] = Field(..., description="Suggested follow-up questions.")

# Set up the system prompt
system_prompt_generator = SystemPromptGenerator(
    background=["This assistant is knowledgeable, helpful, and suggests follow-up questions."],
    steps=[
        "Analyze the user's input to understand the context and intent.",
        "Formulate a relevant and informative response.",
        "Generate 3 suggested follow-up questions for the user.",
    ],
    output_instructions=[
        "Provide clear and concise information in response to user queries.",
        "Conclude each response with 3 relevant suggested questions for the user.",
    ],
)

# Initialize the agent
agent = BaseAgent(
    config=BaseAgentConfig(
        client=your_openai_client,  # Replace with your actual client
        model="gpt-4o-mini",
        system_prompt_generator=system_prompt_generator,
        memory=AgentMemory(),
        output_schema=CustomOutputSchema,
    )
)

# Use the agent
response = agent.run(user_input)
print(f"Agent: {response.chat_message}")
print("Suggested questions:")
for question in response.suggested_questions:
    print(f"- {question}")
```
This snippet showcases how to create a customizable agent that responds to user queries and suggests follow-up questions. For full, runnable examples, please refer to the following files in the `atomic-examples/quickstart/quickstart/` directory:
Basic Chatbot: A minimal chatbot example to get you started.
Custom Chatbot: A more advanced example with a custom system prompt.
Custom Chatbot with Schema: An advanced example featuring a custom output schema.
Multi-Provider Chatbot: Demonstrates how to use different providers such as Ollama or Groq.
In addition to the quickstart examples, we have more complex examples demonstrating the power of Atomic Agents:
Web Search Agent: An intelligent agent that performs web searches and answers questions based on the results.
YouTube Summarizer: An agent that extracts and summarizes knowledge from YouTube videos.
For a complete list of examples, see the examples directory.
These examples provide a great starting point for understanding and using Atomic Agents.
Atomic Agents allows you to enhance your agents with dynamic context using Context Providers. Context Providers enable you to inject additional information into the agent's system prompt at runtime, making your agents more flexible and context-aware.
To use a Context Provider, create a class that inherits from `SystemPromptContextProviderBase` and implements the `get_info()` method, which returns the context string to be added to the system prompt.
Here's a simple example:
```python
from typing import List

from atomic_agents.lib.components.system_prompt_generator import SystemPromptContextProviderBase

class SearchResultsProvider(SystemPromptContextProviderBase):
    def __init__(self, title: str, search_results: List[str]):
        super().__init__(title=title)
        self.search_results = search_results

    def get_info(self) -> str:
        return "\n".join(self.search_results)
```
You can then register your Context Provider with the agent:
```python
# Initialize your context provider with dynamic data
search_results_provider = SearchResultsProvider(
    title="Search Results",
    search_results=["Result 1", "Result 2", "Result 3"],
)

# Register the context provider with the agent
agent.register_context_provider("search_results", search_results_provider)
```
This allows your agent to include the search results (or any other context) in its system prompt, enhancing its responses based on the latest information.
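The exact way the framework renders provider text into the system prompt is internal to `SystemPromptGenerator`; the dependency-free sketch below (with hypothetical formatting) just shows the general idea of runtime injection:

```python
# Framework-free sketch of how a context provider's get_info() text
# ends up appended to the system prompt at runtime.
# The heading format below is illustrative, not the framework's actual rendering.
base_prompt = "You are a helpful assistant."

class SearchResultsProviderSketch:
    def __init__(self, title, search_results):
        self.title = title
        self.search_results = search_results

    def get_info(self):
        return "\n".join(self.search_results)

provider = SearchResultsProviderSketch("Search Results", ["Result 1", "Result 2"])

# At call time, the provider's current data is folded into the prompt.
system_prompt = f"{base_prompt}\n\n# {provider.title}\n{provider.get_info()}"
```

Because `get_info()` runs each time the prompt is built, updating the provider's data between calls updates what the agent sees.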
Atomic Agents makes it easy to chain agents and tools together by aligning their input and output schemas. This design allows you to swap out components effortlessly, promoting modularity and reusability in your AI applications.
Suppose you have an agent that generates search queries and you want to use these queries with different search tools. By aligning the agent's output schema with the input schema of the search tool, you can easily chain them together or switch between different search providers.
Here's how you can achieve this:
```python
import instructor
import openai
from pydantic import Field
from atomic_agents.agents.base_agent import BaseIOSchema, BaseAgent, BaseAgentConfig
from atomic_agents.lib.components.system_prompt_generator import SystemPromptGenerator

# Import the search tool you want to use
from web_search_agent.tools.searxng_search import SearxNGSearchTool

# Define the input schema for the query agent
class QueryAgentInputSchema(BaseIOSchema):
    """Input schema for the QueryAgent."""
    instruction: str = Field(..., description="Instruction to generate search queries for.")
    num_queries: int = Field(..., description="Number of queries to generate.")

# Initialize the query agent
query_agent = BaseAgent(
    BaseAgentConfig(
        client=instructor.from_openai(openai.OpenAI()),
        model="gpt-4o-mini",
        system_prompt_generator=SystemPromptGenerator(
            background=[
                "You are an intelligent query generation expert.",
                "Your task is to generate a specified number of diverse and highly relevant queries based on a given instruction.",
            ],
            steps=[
                "Receive the instruction and the number of queries to generate.",
                "Generate the queries in JSON format.",
            ],
            output_instructions=[
                "Ensure each query is unique and relevant.",
                "Provide the queries in the expected schema.",
            ],
        ),
        input_schema=QueryAgentInputSchema,
        output_schema=SearxNGSearchTool.input_schema,  # Align output schema
    )
)
```
In this example:
Modularity: By setting the `output_schema` of the `query_agent` to match the `input_schema` of `SearxNGSearchTool`, you can directly use the output of the agent as input to the tool.
Swappability: If you decide to switch to a different search provider, you can import a different search tool and update the `output_schema` accordingly.
For instance, to switch to another search service:
```python
# Import a different search tool
from web_search_agent.tools.another_search import AnotherSearchTool

# Update the output schema
query_agent.config.output_schema = AnotherSearchTool.input_schema
```
This design pattern simplifies the process of chaining agents and tools, making your AI applications more adaptable and easier to maintain.
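The schema-alignment pattern can be sketched without the framework at all. Everything below (class names included) is a hypothetical, dependency-free illustration of why matching schemas make chaining safe:

```python
# Minimal, framework-free sketch of schema alignment.
# Atomic Agents itself expresses these schemas as Pydantic models.
from dataclasses import dataclass
from typing import List

@dataclass
class SearchToolInput:
    queries: List[str]

class QueryAgent:
    # The agent's output type is deliberately the tool's input type.
    output_schema = SearchToolInput

    def run(self, instruction: str, num_queries: int) -> SearchToolInput:
        # Stand-in for the LLM call: fabricate simple queries.
        return SearchToolInput(queries=[f"{instruction} ({i})" for i in range(num_queries)])

class SearchTool:
    input_schema = SearchToolInput

    def run(self, params: SearchToolInput) -> List[str]:
        return [f"result for: {q}" for q in params.queries]

agent, tool = QueryAgent(), SearchTool()
assert agent.output_schema is tool.input_schema  # schemas align, so chaining is safe

# The agent's output feeds straight into the tool, no glue code needed.
results = tool.run(agent.run("quantum computing", 2))
```

Swapping in a different tool only requires that its `input_schema` matches the agent's `output_schema`; nothing else in the pipeline changes.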
To run the CLI, simply run the following command:
atomic
Or if you installed Atomic Agents with Poetry, for example:
poetry run atomic
Or if you installed Atomic Agents with uv:
uv run atomic
After running this command, you will be presented with a menu allowing you to download tools.
Each tool has its own:
Input schema
Output schema
Usage example
Dependencies
Installation instructions
The `atomic-assembler` CLI gives you complete control over your tools, avoiding the clutter of unnecessary dependencies. It makes modifying tools straightforward; additionally, each tool comes with its own set of tests for reliability.
But you're not limited to the CLI! If you prefer, you can directly access the tool folders and manage them manually by simply copying and pasting as needed.
Atomic Agents depends on the Instructor package. This means that in all examples where OpenAI is used, any other API supported by Instructor can also be used—such as Ollama, Groq, Mistral, Cohere, Anthropic, Gemini, and more. For a complete list, please refer to the Instructor documentation on its GitHub page.
API documentation can be found here.
Atomic Forge is a collection of tools that can be used with Atomic Agents to extend its functionality. Current tools include:
Calculator
SearxNG Search
YouTube Transcript Scraper
For more information on using and creating tools, see the Atomic Forge README.
We welcome contributions! Please see the Developer Guide for detailed information on how to contribute to Atomic Agents. Here are some quick steps:
Fork the repository
Create a new branch (`git checkout -b feature-branch`)
Make your changes
Run tests (`pytest --cov atomic_agents`)
Format your code (`black atomic_agents atomic_assembler`)
Lint your code (`flake8 atomic_agents atomic_assembler`)
Commit your changes (`git commit -m 'Add some feature'`)
Push to the branch (`git push origin feature-branch`)
Open a pull request
For full development setup and guidelines, please refer to the Developer Guide.
This project is licensed under the MIT License—see the LICENSE file for details.