LangGraph Studio offers a new way to develop LLM applications by providing a specialized agent IDE that enables visualization, interaction, and debugging of complex agentic applications.
With visual graphs and the ability to edit state, you can better understand agent workflows and iterate faster. LangGraph Studio integrates with LangSmith so you can collaborate with teammates to debug failure modes.
While in Beta, LangGraph Studio is available for free to all LangSmith users on any plan tier. Sign up for LangSmith here.
Download the latest .dmg file of LangGraph Studio by clicking here or by visiting the releases page.
Currently, only macOS is supported; Windows and Linux support is coming soon. We also depend on Docker Engine to be running, and currently only the Docker Desktop and Orbstack runtimes are supported. LangGraph Studio requires docker-compose version 2.22.0 or higher. Please make sure you have Docker Desktop or Orbstack installed and running before continuing.
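You can check which Compose version you have from a terminal; depending on how Compose is installed, one of the following should print the version:
docker compose version
docker-compose --version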
To use LangGraph Studio, make sure you have a project with a LangGraph app set up.
For this example, we will use this example repository here, which uses a requirements.txt file for dependencies:
git clone https://github.com/langchain-ai/langgraph-example.git
If you would like to use a pyproject.toml file instead for managing dependencies, you can use this example repository.
git clone https://github.com/langchain-ai/langgraph-example-pyproject.git
You will then want to create a .env file with the relevant environment variables:
cp .env.example .env
You should then open up the .env file and fill it in with the relevant OpenAI, Anthropic, and Tavily API keys.
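Once filled in, the file will look something like this (the values below are placeholders for your own keys):
OPENAI_API_KEY=<your-openai-api-key>
ANTHROPIC_API_KEY=<your-anthropic-api-key>
TAVILY_API_KEY=<your-tavily-api-key>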
If you already have them set in your environment, you can save them to this .env file with the following commands:
echo "OPENAI_API_KEY="$OPENAI_API_KEY"" > .env
echo "ANTHROPIC_API_KEY="$ANTHROPIC_API_KEY"" >> .env
echo "TAVILY_API_KEY="$TAVILY_API_KEY"" >> .env
Note: do NOT add a LANGSMITH_API_KEY to the .env file. We will do this automatically for you when you authenticate, and manually setting this may cause errors.
Once you've set up the project, you can use it in LangGraph Studio. Let's dive in!
When you open the LangGraph Studio desktop app for the first time, you need to log in via LangSmith.
Once you have successfully authenticated, you can choose the LangGraph application folder to use: you can either drag and drop it or manually select it in the file picker. If you are using the example project, the folder would be langgraph-example.
Important
The application directory you select needs to contain a correctly configured langgraph.json file. See more information on how to configure it here and how to set up a LangGraph app here.
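For the example repository, the config looks roughly like this; the dependency path and the agent entry point shown here are assumptions based on that repo's layout:
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./agent.py:graph"
  },
  "env": ".env"
}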
Once you select a valid project, LangGraph Studio will start a LangGraph API server and you should see a UI with your graph rendered.
Now we can run the graph! LangGraph Studio lets you run your graph with different inputs and configurations.
To start a new run, first select the graph you want to run; in our example it is called agent (the list of graphs corresponds to the graphs keys in your langgraph.json configuration). Then edit the Input section and click Submit to invoke the selected graph. The following video shows how to start a new run:
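For the example agent graph, the Input section accepts a JSON object matching the graph state. A minimal sketch, assuming the messages-based state used in the example repo:
{"messages": [{"role": "human", "content": "what is the weather in sf?"}]}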
To change the configuration for a given graph run, press the Configurable button in the Input section, then click Submit to invoke the graph.
Important
In order for the Configurable menu to be visible, make sure to specify a config schema when creating your StateGraph. You can read more about how to add a config schema to your graph here.
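Here is a minimal sketch of what that looks like; the ConfigSchema fields and the model_name default are purely illustrative and not part of the example repo:
from typing import TypedDict

from langchain_core.runnables import RunnableConfig
from langgraph.graph import StateGraph, END

# Illustrative config schema: its fields show up in the Configurable menu
class ConfigSchema(TypedDict):
    model_name: str

class State(TypedDict):
    output: str

def call_model(state: State, config: RunnableConfig) -> State:
    # Values chosen in the Configurable menu arrive under config["configurable"]
    model_name = config.get("configurable", {}).get("model_name", "some-default-model")
    return {"output": f"would call {model_name}"}

workflow = StateGraph(State, config_schema=ConfigSchema)
workflow.add_node("model", call_model)
workflow.set_entry_point("model")
workflow.add_edge("model", END)
graph = workflow.compile()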
The following video shows how to edit configuration and start a new run:
When you open LangGraph Studio, you will automatically be in a new thread window. If you have an existing thread open, you can create a new one by pressing + to open a new thread menu. The following video shows how to create a thread:
To select a thread, click the New Thread / Thread label at the top of the right-hand pane to open a thread list dropdown, then pick the thread you wish to view. The following video shows how to select a thread:
LangGraph Studio allows you to edit the thread state and fork threads to create an alternative graph execution with the updated state. To do this, edit the state of the step you want to change, then click Fork to update the state and create a new graph execution with the updated state. The following video shows how to edit a thread in the studio:
You might want to execute your graph step by step, or stop graph execution before/after a specific node executes. You can do so by adding interrupts. Interrupts can be set for all nodes (i.e. walk through the agent execution step by step) or for specific nodes. An interrupt in LangGraph Studio means that the graph execution will be interrupted both before and after a given node runs.
To walk through the agent execution step by step, you can add interrupts to all or a subset of the nodes in the graph. To interrupt on every node, click Interrupt and select Interrupt on all. The following video shows how to add interrupts to all nodes:
To add an interrupt to a specific node, hover over the node in the graph until a + button shows up on its left side, and click it to add the interrupt. Then invoke the selected graph by filling in the Input / configuration and clicking Submit. The following video shows how to add interrupts to a specific node:
To remove the interrupt, simply follow the same step and press the x button on the left side of the node.
In addition to interrupting on a node and editing the graph state, you might want to support human-in-the-loop workflows with the ability to manually update state. Here is a modified version of agent.py with agent and human nodes, where the graph execution will be interrupted before the human node. This lets you send input as part of the human node, which can be useful when you want the agent to get user input. This essentially replaces how you might use input() if you were running this from the command line.
from typing import TypedDict, Annotated, Sequence, Literal

from langchain_core.messages import BaseMessage, HumanMessage
from langchain_anthropic import ChatAnthropic
from langgraph.graph import StateGraph, END, add_messages


class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]


model = ChatAnthropic(temperature=0, model_name="claude-3-sonnet-20240229")


def call_model(state: AgentState) -> AgentState:
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}


# no-op node that should be interrupted on
def human_feedback(state: AgentState) -> AgentState:
    pass


def should_continue(state: AgentState) -> Literal["agent", "end"]:
    messages = state["messages"]
    last_message = messages[-1]
    if isinstance(last_message, HumanMessage):
        return "agent"
    return "end"


workflow = StateGraph(AgentState)

workflow.add_node("agent", call_model)
workflow.add_node("human", human_feedback)

workflow.set_entry_point("agent")
workflow.add_edge("agent", "human")
workflow.add_conditional_edges(
    "human",
    should_continue,
    {
        "agent": "agent",
        "end": END,
    },
)

graph = workflow.compile(interrupt_before=["human"])
The following video shows how to manually send state updates (i.e. messages in our example) when interrupted:
LangGraph Studio allows you to modify your project config (langgraph.json) interactively. To modify the config from the studio, click Configure on the bottom right. This will open an interactive config menu with the values that correspond to the existing langgraph.json. Make your edits, then click Save and Restart to reload the LangGraph API server with the updated config. The following video shows how to edit the project config from the studio:
With LangGraph Studio you can modify your graph code and sync the changes live to the interactive graph. To modify your graph from the studio, click Open in VS Code on the bottom right. This will open the project that is currently opened in LangGraph Studio. You can then make changes to the .py files where the compiled graph is defined or to associated dependencies, and the changes will be synced to the interactive graph. The following video shows how to open the code editor from the studio:
After you modify the underlying code you can also replay a node in the graph. For example, if an agent responds poorly, you can update the agent node implementation in your code editor and rerun it. This can make it much easier to iterate on long-running agents.
LangGraph Studio relies on Docker Compose to run the API, Redis, and Postgres, which in turn creates its own network. Thus, to access local services you need to use host.docker.internal as the hostname instead of localhost. See #112 for more details.
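For example, if an environment variable in your .env points at a service running on your machine, rewrite the hostname; the variable name and port below are purely illustrative:
# works outside Docker, but not from inside the Studio containers
MY_LOCAL_SERVICE_URL=http://localhost:8000
# reachable from inside the LangGraph API container
MY_LOCAL_SERVICE_URL=http://host.docker.internal:8000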
By default, we try to make the image as small as possible, so some dependencies such as gcc or build-essential are missing from the base image. If you need to install additional dependencies, you can do so by adding extra Dockerfile instructions in the dockerfile_lines section of your langgraph.json file:
{
  "dockerfile_lines": [
    "RUN apt-get update && apt-get install -y gcc"
  ]
}
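If you also need the full build toolchain mentioned above, the same mechanism works, for example:
{
  "dockerfile_lines": [
    "RUN apt-get update && apt-get install -y gcc build-essential"
  ]
}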
See How to customize Dockerfile for more details.