Enable human intervention¶
To review, edit, and approve tool calls in an agent or workflow, use LangGraph's human-in-the-loop features.
Pause using `interrupt`¶
The `interrupt` function in LangGraph enables human-in-the-loop workflows by pausing the graph at a specific node, presenting information to a human, and resuming the graph with their input. It's useful for tasks like approvals, edits, or gathering additional context.
To use `interrupt` in your graph, you need to:

- Specify a checkpointer to save the graph state after each step.
- Call `interrupt()` in the appropriate place. See the Common Patterns section for examples.
- Run the graph with a thread ID until the `interrupt` is hit.
- Resume execution using `invoke`/`ainvoke`/`stream`/`astream` (see the `Command` primitive).
API Reference: interrupt | Command
from langgraph.types import interrupt, Command
def human_node(state: State):
value = interrupt( # (1)!
{
"text_to_revise": state["some_text"] # (2)!
}
)
return {
"some_text": value # (3)!
}
graph = graph_builder.compile(checkpointer=checkpointer) # (4)!
# Run the graph until the interrupt is hit.
config = {"configurable": {"thread_id": "some_id"}}
result = graph.invoke({"some_text": "original text"}, config=config) # (5)!
print(result['__interrupt__']) # (6)!
# > [
# > Interrupt(
# > value={'text_to_revise': 'original text'},
# > resumable=True,
# > ns=['human_node:6ce9e64f-edef-fe5d-f7dc-511fa9526960']
# > )
# > ]
print(graph.invoke(Command(resume="Edited text"), config=config)) # (7)!
# > {'some_text': 'Edited text'}
1. `interrupt(...)` pauses execution at `human_node`, surfacing the given payload to a human.
2. Any JSON-serializable value can be passed to the `interrupt` function. Here, a dict containing the text to revise.
3. Once resumed, the return value of `interrupt(...)` is the human-provided input, which is used to update the state.
4. A checkpointer is required to persist graph state. In production, this should be durable (e.g., backed by a database).
5. The graph is invoked with some initial state.
6. When the graph hits the interrupt, it returns an `Interrupt` object with the payload and metadata.
7. The graph is resumed with a `Command(resume=...)`, injecting the human's input and continuing execution.
Extended example: using interrupt
from typing import TypedDict
import uuid
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.constants import START
from langgraph.graph import StateGraph
from langgraph.types import interrupt, Command
class State(TypedDict):
some_text: str
def human_node(state: State):
value = interrupt( # (1)!
{
"text_to_revise": state["some_text"] # (2)!
}
)
return {
"some_text": value # (3)!
}
# Build the graph
graph_builder = StateGraph(State)
graph_builder.add_node("human_node", human_node)
graph_builder.add_edge(START, "human_node")
checkpointer = InMemorySaver() # (4)!
graph = graph_builder.compile(checkpointer=checkpointer)
# Pass a thread ID to the graph to run it.
config = {"configurable": {"thread_id": uuid.uuid4()}}
# Run the graph until the interrupt is hit.
result = graph.invoke({"some_text": "original text"}, config=config) # (5)!
print(result['__interrupt__']) # (6)!
# > [
# > Interrupt(
# > value={'text_to_revise': 'original text'},
# > resumable=True,
# > ns=['human_node:6ce9e64f-edef-fe5d-f7dc-511fa9526960']
# > )
# > ]
print(graph.invoke(Command(resume="Edited text"), config=config)) # (7)!
# > {'some_text': 'Edited text'}
1. `interrupt(...)` pauses execution at `human_node`, surfacing the given payload to a human.
2. Any JSON-serializable value can be passed to the `interrupt` function. Here, a dict containing the text to revise.
3. Once resumed, the return value of `interrupt(...)` is the human-provided input, which is used to update the state.
4. A checkpointer is required to persist graph state. In production, this should be durable (e.g., backed by a database).
5. The graph is invoked with some initial state.
6. When the graph hits the interrupt, it returns an `Interrupt` object with the payload and metadata.
7. The graph is resumed with a `Command(resume=...)`, injecting the human's input and continuing execution.
New in 0.4.0

`__interrupt__` is a special key returned when running the graph if the graph is interrupted. Support for `__interrupt__` in `invoke` and `ainvoke` was added in version 0.4.0. If you're on an older version, you will only see `__interrupt__` in the result when using `stream` or `astream`. You can also use `graph.get_state(config)` to read the interrupt value.
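For instance, here is a minimal sketch of reading pending interrupts from the saved state. The `snapshot.tasks[...].interrupts` attribute access is an assumption based on LangGraph's `StateSnapshot` and `PregelTask` shapes:

```python
# Inspect the thread's saved state after the run pauses (sketch).
snapshot = graph.get_state(config)
for task in snapshot.tasks:
    for intr in task.interrupts:
        print(intr.value)  # the payload that was passed to interrupt(...)
```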
Warning

Interrupts are both powerful and ergonomic. However, while they may resemble Python's `input()` function in terms of developer experience, they do not automatically resume execution from the interruption point. Instead, they rerun the entire node where the interrupt was used. For this reason, interrupts are typically best placed at the start of a node or in a dedicated node.
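To make the re-execution behavior concrete, here is a minimal sketch; `expensive_call` and `follow_up` are hypothetical helpers used only for illustration:

```python
from langgraph.types import interrupt

def node_with_interrupt(state: State):
    expensive_call()                # hypothetical; re-runs on every resume of this node
    answer = interrupt("Approve?")  # pauses here; on resume, returns the human's value
    follow_up(answer)               # hypothetical; runs once the human has responded
    return {"approved": answer}
```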
Resume using the `Command` primitive¶
Warning

Resuming from an `interrupt` is different from Python's `input()` function, where execution resumes from the exact point where the `input()` function was called.

When the `interrupt` function is used within a graph, execution pauses at that point and awaits user input. To resume execution, use the `Command` primitive, which can be supplied via the `invoke`, `ainvoke`, `stream`, or `astream` methods. The graph resumes execution from the beginning of the node where `interrupt(...)` was initially called. This time, the `interrupt` function will return the value provided in `Command(resume=value)` rather than pausing again. All code from the beginning of the node to the `interrupt` will be re-executed.
# Resume graph execution by providing the user's input.
graph.invoke(Command(resume={"age": "25"}), thread_config)
Resume multiple interrupts with one invocation¶
If you have multiple interrupts in the task queue, you can use `Command.resume` with a dictionary mapping interrupt IDs to resume values, resolving all of them in a single `invoke`/`stream` call.
For example, once your graph has been interrupted (possibly multiple times) and is paused:
resume_map = {
    i.interrupt_id: f"human input for prompt {i.value}"
    for i in parent_graph.get_state(thread_config).interrupts
}
parent_graph.invoke(Command(resume=resume_map), config=thread_config)
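Below is a minimal, self-contained sketch of this flow. It assumes two nodes that each call `interrupt` in the same parallel step; the node names, state keys, and edit strings are illustrative:

```python
import uuid
from typing import TypedDict

from langgraph.checkpoint.memory import InMemorySaver
from langgraph.constants import START
from langgraph.graph import StateGraph
from langgraph.types import interrupt, Command

class State(TypedDict):
    text_1: str
    text_2: str

# Two nodes that run in the same superstep, each raising its own interrupt.
def human_node_1(state: State):
    return {"text_1": interrupt({"text_to_revise": state["text_1"]})}

def human_node_2(state: State):
    return {"text_2": interrupt({"text_to_revise": state["text_2"]})}

builder = StateGraph(State)
builder.add_node("human_node_1", human_node_1)
builder.add_node("human_node_2", human_node_2)
builder.add_edge(START, "human_node_1")
builder.add_edge(START, "human_node_2")
graph = builder.compile(checkpointer=InMemorySaver())

config = {"configurable": {"thread_id": str(uuid.uuid4())}}
graph.invoke({"text_1": "first draft", "text_2": "second draft"}, config=config)

# One resume value per pending interrupt, keyed by interrupt ID.
resume_map = {
    i.interrupt_id: f"edited: {i.value['text_to_revise']}"
    for i in graph.get_state(config).interrupts
}
print(graph.invoke(Command(resume=resume_map), config=config))
# > {'text_1': 'edited: first draft', 'text_2': 'edited: second draft'}
```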
Common patterns¶
Below we show different design patterns that can be implemented using `interrupt` and `Command`.
Approve or reject¶
Pause the graph before a critical step, such as an API call, to review and approve the action. If the action is rejected, you can prevent the graph from executing the step, and potentially take an alternative action.
API Reference: interrupt | Command
from typing import Literal
from langgraph.types import interrupt, Command
def human_approval(state: State) -> Command[Literal["some_node", "another_node"]]:
is_approved = interrupt(
{
"question": "Is this correct?",
# Surface the output that should be
# reviewed and approved by the human.
"llm_output": state["llm_output"]
}
)
if is_approved:
return Command(goto="some_node")
else:
return Command(goto="another_node")
# Add the node to the graph in an appropriate location
# and connect it to the relevant nodes.
graph_builder.add_node("human_approval", human_approval)
graph = graph_builder.compile(checkpointer=checkpointer)
# After running the graph and hitting the interrupt, the graph will pause.
# Resume it with either an approval or rejection.
thread_config = {"configurable": {"thread_id": "some_id"}}
graph.invoke(Command(resume=True), config=thread_config)
Extended example: approve or reject with interrupt
from typing import Literal, TypedDict
import uuid
from langgraph.constants import START, END
from langgraph.graph import StateGraph
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import MemorySaver
# Define the shared graph state
class State(TypedDict):
llm_output: str
decision: str
# Simulate an LLM output node
def generate_llm_output(state: State) -> State:
return {"llm_output": "This is the generated output."}
# Human approval node
def human_approval(state: State) -> Command[Literal["approved_path", "rejected_path"]]:
decision = interrupt({
"question": "Do you approve the following output?",
"llm_output": state["llm_output"]
})
if decision == "approve":
return Command(goto="approved_path", update={"decision": "approved"})
else:
return Command(goto="rejected_path", update={"decision": "rejected"})
# Next steps after approval
def approved_node(state: State) -> State:
print("✅ Approved path taken.")
return state
# Alternative path after rejection
def rejected_node(state: State) -> State:
print("❌ Rejected path taken.")
return state
# Build the graph
builder = StateGraph(State)
builder.add_node("generate_llm_output", generate_llm_output)
builder.add_node("human_approval", human_approval)
builder.add_node("approved_path", approved_node)
builder.add_node("rejected_path", rejected_node)
builder.set_entry_point("generate_llm_output")
builder.add_edge("generate_llm_output", "human_approval")
builder.add_edge("approved_path", END)
builder.add_edge("rejected_path", END)
checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)
# Run until interrupt
config = {"configurable": {"thread_id": uuid.uuid4()}}
result = graph.invoke({}, config=config)
print(result["__interrupt__"])
# Output:
# Interrupt(value={'question': 'Do you approve the following output?', 'llm_output': 'This is the generated output.'}, ...)
# Simulate resuming with human input
# To test rejection, replace resume="approve" with resume="reject"
final_result = graph.invoke(Command(resume="approve"), config=config)
print(final_result)
Review and edit state¶
API Reference: interrupt
from langgraph.types import interrupt
def human_editing(state: State):
...
result = interrupt(
# Interrupt information to surface to the client.
# Can be any JSON serializable value.
{
"task": "Review the output from the LLM and make any necessary edits.",
"llm_generated_summary": state["llm_generated_summary"]
}
)
# Update the state with the edited text
return {
"llm_generated_summary": result["edited_text"]
}
# Add the node to the graph in an appropriate location
# and connect it to the relevant nodes.
graph_builder.add_node("human_editing", human_editing)
graph = graph_builder.compile(checkpointer=checkpointer)
...
# After running the graph and hitting the interrupt, the graph will pause.
# Resume it with the edited text.
thread_config = {"configurable": {"thread_id": "some_id"}}
graph.invoke(
Command(resume={"edited_text": "The edited text"}),
config=thread_config
)
Extended example: edit state with interrupt
from typing import TypedDict
import uuid
from langgraph.constants import START, END
from langgraph.graph import StateGraph
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import MemorySaver
# Define the graph state
class State(TypedDict):
summary: str
# Simulate an LLM summary generation
def generate_summary(state: State) -> State:
return {
"summary": "The cat sat on the mat and looked at the stars."
}
# Human editing node
def human_review_edit(state: State) -> State:
result = interrupt({
"task": "Please review and edit the generated summary if necessary.",
"generated_summary": state["summary"]
})
return {
"summary": result["edited_summary"]
}
# Simulate downstream use of the edited summary
def downstream_use(state: State) -> State:
print(f"✅ Using edited summary: {state['summary']}")
return state
# Build the graph
builder = StateGraph(State)
builder.add_node("generate_summary", generate_summary)
builder.add_node("human_review_edit", human_review_edit)
builder.add_node("downstream_use", downstream_use)
builder.set_entry_point("generate_summary")
builder.add_edge("generate_summary", "human_review_edit")
builder.add_edge("human_review_edit", "downstream_use")
builder.add_edge("downstream_use", END)
# Set up in-memory checkpointing for interrupt support
checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)
# Invoke the graph until it hits the interrupt
config = {"configurable": {"thread_id": uuid.uuid4()}}
result = graph.invoke({}, config=config)
# Output interrupt payload
print(result["__interrupt__"])
# Example output:
# Interrupt(
# value={
# 'task': 'Please review and edit the generated summary if necessary.',
# 'generated_summary': 'The cat sat on the mat and looked at the stars.'
# },
# resumable=True,
# ...
# )
# Resume the graph with human-edited input
edited_summary = "The cat lay on the rug, gazing peacefully at the night sky."
resumed_result = graph.invoke(
Command(resume={"edited_summary": edited_summary}),
config=config
)
print(resumed_result)
Review tool calls¶
To add a human approval step to a tool:

- Use `interrupt()` in the tool to pause execution.
- Resume with a `Command(resume=...)` to continue based on human input.
API Reference: InMemorySaver | interrupt | create_react_agent
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.types import interrupt
from langgraph.prebuilt import create_react_agent
# An example of a sensitive tool that requires human review / approval
def book_hotel(hotel_name: str):
"""Book a hotel"""
response = interrupt( # (1)!
f"Trying to call `book_hotel` with args {{'hotel_name': {hotel_name}}}. "
"Please approve or suggest edits."
)
if response["type"] == "accept":
pass
elif response["type"] == "edit":
hotel_name = response["args"]["hotel_name"]
else:
raise ValueError(f"Unknown response type: {response['type']}")
return f"Successfully booked a stay at {hotel_name}."
checkpointer = InMemorySaver() # (2)!
agent = create_react_agent(
model="anthropic:claude-3-5-sonnet-latest",
tools=[book_hotel],
checkpointer=checkpointer, # (3)!
)
1. The `interrupt` function pauses the agent graph at a specific node. In this case, we call `interrupt()` at the beginning of the tool function, which pauses the graph at the node that executes the tool. The information inside `interrupt()` (e.g., tool calls) can be presented to a human, and the graph can be resumed with the user input (tool call approval, edit, or feedback).
2. The `InMemorySaver` is used to store the agent state at every step in the tool-calling loop. This enables short-term memory and human-in-the-loop capabilities. In this example, we use `InMemorySaver` to store the agent state in memory. In a production application, the agent state will be stored in a database.
3. Initialize the agent with the `checkpointer`.
Run the agent with the `stream()` method, passing the `config` object to specify the thread ID. This allows the agent to resume the same conversation on future invocations.
config = {
"configurable": {
"thread_id": "1"
}
}
for chunk in agent.stream(
{"messages": [{"role": "user", "content": "book a stay at McKittrick hotel"}]},
config
):
print(chunk)
print("\n")
You should see that the agent runs until it reaches the `interrupt()` call, at which point it pauses and waits for human input.
Resume the agent with a `Command(resume=...)` to continue based on human input.
API Reference: Command
from langgraph.types import Command
for chunk in agent.stream(
Command(resume={"type": "accept"}), # (1)!
# Command(resume={"type": "edit", "args": {"hotel_name": "McKittrick Hotel"}}),
config
):
print(chunk)
print("\n")
1. The `interrupt` function is used in conjunction with the `Command` object to resume the graph with a value provided by the human.
Add interrupts to any tool¶
You can create a wrapper to add interrupts to any tool. The example below provides a reference implementation compatible with Agent Inbox UI and Agent Chat UI.
from typing import Callable
from langchain_core.tools import BaseTool, tool as create_tool
from langchain_core.runnables import RunnableConfig
from langgraph.types import interrupt
from langgraph.prebuilt.interrupt import HumanInterruptConfig, HumanInterrupt
def add_human_in_the_loop(
tool: Callable | BaseTool,
*,
    interrupt_config: HumanInterruptConfig | None = None,
) -> BaseTool:
"""Wrap a tool to support human-in-the-loop review."""
if not isinstance(tool, BaseTool):
tool = create_tool(tool)
if interrupt_config is None:
interrupt_config = {
"allow_accept": True,
"allow_edit": True,
"allow_respond": True,
}
@create_tool( # (1)!
tool.name,
description=tool.description,
args_schema=tool.args_schema
)
def call_tool_with_interrupt(config: RunnableConfig, **tool_input):
request: HumanInterrupt = {
"action_request": {
"action": tool.name,
"args": tool_input
},
"config": interrupt_config,
"description": "Please review the tool call"
}
response = interrupt([request])[0] # (2)!
# approve the tool call
if response["type"] == "accept":
tool_response = tool.invoke(tool_input, config)
# update tool call args
elif response["type"] == "edit":
tool_input = response["args"]["args"]
tool_response = tool.invoke(tool_input, config)
# respond to the LLM with user feedback
elif response["type"] == "response":
user_feedback = response["args"]
tool_response = user_feedback
else:
raise ValueError(f"Unsupported interrupt response type: {response['type']}")
return tool_response
return call_tool_with_interrupt
1. This wrapper creates a new tool that calls `interrupt()` before executing the wrapped tool.
2. `interrupt()` uses a special input and output format that's expected by Agent Inbox UI:
    - a list of `HumanInterrupt` objects is sent to Agent Inbox to render interrupt information to the end user;
    - the resume value is provided by Agent Inbox as a list (i.e., `Command(resume=[...])`).
You can use the `add_human_in_the_loop` wrapper to add `interrupt()` to any tool without having to add it inside the tool:
API Reference: InMemorySaver | create_react_agent
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.prebuilt import create_react_agent
checkpointer = InMemorySaver()
def book_hotel(hotel_name: str):
"""Book a hotel"""
return f"Successfully booked a stay at {hotel_name}."
agent = create_react_agent(
model="anthropic:claude-3-5-sonnet-latest",
tools=[
add_human_in_the_loop(book_hotel), # (1)!
],
checkpointer=checkpointer,
)
config = {"configurable": {"thread_id": "1"}}
# Run the agent
for chunk in agent.stream(
{"messages": [{"role": "user", "content": "book a stay at McKittrick hotel"}]},
config
):
print(chunk)
print("\n")
1. The `add_human_in_the_loop` wrapper is used to add `interrupt()` to the tool. This allows the agent to pause execution and wait for human input before proceeding with the tool call.
You should see that the agent runs until it reaches the `interrupt()` call, at which point it pauses and waits for human input.
Resume the agent with a `Command(resume=...)` to continue based on human input.
API Reference: Command
from langgraph.types import Command
for chunk in agent.stream(
Command(resume=[{"type": "accept"}]),
# Command(resume=[{"type": "edit", "args": {"args": {"hotel_name": "McKittrick Hotel"}}}]),
config
):
print(chunk)
print("\n")
Validate human input¶
If you need to validate the input provided by the human within the graph itself (rather than on the client side), you can achieve this by using multiple interrupt calls within a single node.
API Reference: interrupt
from langgraph.types import interrupt
def human_node(state: State):
"""Human node with validation."""
question = "What is your age?"
while True:
answer = interrupt(question)
        # Validate the answer; if it isn't valid, ask for input again.
        if not isinstance(answer, int) or answer < 0:
            question = f"'{answer}' is not a valid age. What is your age?"
            continue
        else:
            # If the answer is valid, we can proceed.
            break
print(f"The human in the loop is {answer} years old.")
return {
"age": answer
}
Extended example: validating user input
from typing import TypedDict
import uuid
from langgraph.constants import START, END
from langgraph.graph import StateGraph
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import MemorySaver
# Define graph state
class State(TypedDict):
age: int
# Node that asks for human input and validates it
def get_valid_age(state: State) -> State:
prompt = "Please enter your age (must be a non-negative integer)."
while True:
user_input = interrupt(prompt)
# Validate the input
try:
age = int(user_input)
if age < 0:
raise ValueError("Age must be non-negative.")
break # Valid input received
except (ValueError, TypeError):
prompt = f"'{user_input}' is not valid. Please enter a non-negative integer for age."
return {"age": age}
# Node that uses the valid input
def report_age(state: State) -> State:
print(f"✅ Human is {state['age']} years old.")
return state
# Build the graph
builder = StateGraph(State)
builder.add_node("get_valid_age", get_valid_age)
builder.add_node("report_age", report_age)
builder.set_entry_point("get_valid_age")
builder.add_edge("get_valid_age", "report_age")
builder.add_edge("report_age", END)
# Create the graph with a memory checkpointer
checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)
# Run the graph until the first interrupt
config = {"configurable": {"thread_id": uuid.uuid4()}}
result = graph.invoke({}, config=config)
print(result["__interrupt__"]) # First prompt: "Please enter your age..."
# Simulate an invalid input (e.g., string instead of integer)
result = graph.invoke(Command(resume="not a number"), config=config)
print(result["__interrupt__"]) # Follow-up prompt with validation message
# Simulate a second invalid input (e.g., negative number)
result = graph.invoke(Command(resume="-10"), config=config)
print(result["__interrupt__"]) # Another retry
# Provide valid input
final_result = graph.invoke(Command(resume="25"), config=config)
print(final_result) # Should include the valid age
Considerations¶
When using human-in-the-loop, there are some considerations to keep in mind.
Using with code with side-effects¶
Place code with side effects, such as API calls, after the `interrupt` or in a separate node to avoid duplication, as these are re-triggered every time the node is resumed.
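For example, here is a minimal sketch that keeps the side effect in its own node; `send_email` and the `draft`/`approved` state keys are hypothetical and used only for illustration:

```python
from langgraph.types import interrupt

def human_node(state: State):
    answer = interrupt("Send the drafted email?")
    return {"approved": answer == "yes"}

def send_email_node(state: State):
    # The side effect lives in its own node, so it is never re-run as part
    # of replaying `human_node` after a resume.
    if state["approved"]:
        send_email(state["draft"])  # hypothetical helper
    return state
```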
Using with subgraphs called as functions¶
When invoking a subgraph as a function, the parent graph will resume execution from the beginning of the node where the subgraph was invoked (and where the `interrupt` was triggered). Similarly, the subgraph will resume from the beginning of the node where the `interrupt()` function was called.
def node_in_parent_graph(state: State):
some_code() # <-- This will re-execute when the subgraph is resumed.
# Invoke a subgraph as a function.
# The subgraph contains an `interrupt` call.
subgraph_result = subgraph.invoke(some_input)
...
Extended example: parent and subgraph execution flow
Say we have a parent graph with 3 nodes:

Parent Graph: `node_1` → `node_2` (subgraph call) → `node_3`

And the subgraph has 3 nodes, where the second node contains an `interrupt`:

Subgraph: `sub_node_1` → `sub_node_2` (`interrupt`) → `sub_node_3`

When resuming the graph, the execution will proceed as follows:

- Skip `node_1` in the parent graph (already executed; graph state was saved in the snapshot).
- Re-execute `node_2` in the parent graph from the start.
- Skip `sub_node_1` in the subgraph (already executed; graph state was saved in the snapshot).
- Re-execute `sub_node_2` in the subgraph from the beginning.
- Continue with `sub_node_3` and subsequent nodes.
Here is abbreviated example code that you can use to understand how subgraphs work with interrupts. It counts the number of times each node is entered and prints the count.
import uuid
from typing import TypedDict
from langgraph.graph import StateGraph
from langgraph.constants import START
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import MemorySaver
class State(TypedDict):
"""The graph state."""
state_counter: int
counter_node_in_subgraph = 0
def node_in_subgraph(state: State):
"""A node in the sub-graph."""
global counter_node_in_subgraph
counter_node_in_subgraph += 1 # This code will **NOT** run again!
print(f"Entered `node_in_subgraph` a total of {counter_node_in_subgraph} times")
counter_human_node = 0
def human_node(state: State):
global counter_human_node
counter_human_node += 1 # This code will run again!
print(f"Entered human_node in sub-graph a total of {counter_human_node} times")
answer = interrupt("what is your name?")
print(f"Got an answer of {answer}")
checkpointer = MemorySaver()
subgraph_builder = StateGraph(State)
subgraph_builder.add_node("some_node", node_in_subgraph)
subgraph_builder.add_node("human_node", human_node)
subgraph_builder.add_edge(START, "some_node")
subgraph_builder.add_edge("some_node", "human_node")
subgraph = subgraph_builder.compile(checkpointer=checkpointer)
counter_parent_node = 0
def parent_node(state: State):
"""This parent node will invoke the subgraph."""
global counter_parent_node
counter_parent_node += 1 # This code will run again on resuming!
print(f"Entered `parent_node` a total of {counter_parent_node} times")
    # Invoke the subgraph as a function. The subgraph contains an
    # `interrupt` call, so execution pauses inside its `human_node`.
subgraph_state = subgraph.invoke(state)
return subgraph_state
builder = StateGraph(State)
builder.add_node("parent_node", parent_node)
builder.add_edge(START, "parent_node")
# A checkpointer must be enabled for interrupts to work!
checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)
config = {
"configurable": {
"thread_id": uuid.uuid4(),
}
}
for chunk in graph.stream({"state_counter": 1}, config):
print(chunk)
print('--- Resuming ---')
for chunk in graph.stream(Command(resume="35"), config):
print(chunk)
This will print out
Entered `parent_node` a total of 1 times
Entered `node_in_subgraph` a total of 1 times
Entered human_node in sub-graph a total of 1 times
{'__interrupt__': (Interrupt(value='what is your name?', resumable=True, ns=['parent_node:4c3a0248-21f0-1287-eacf-3002bc304db4', 'human_node:2fe86d52-6f70-2a3f-6b2f-b1eededd6348'], when='during'),)}
--- Resuming ---
Entered `parent_node` a total of 2 times
Entered human_node in sub-graph a total of 2 times
Got an answer of 35
{'parent_node': {'state_counter': 1}}
Using multiple interrupts¶
Using multiple interrupts within a single node can be helpful for patterns like validating human input. However, using multiple interrupts in the same node can lead to unexpected behavior if not handled carefully.
When a node contains multiple interrupt calls, LangGraph keeps a list of resume values specific to the task executing the node. Whenever execution resumes, it starts at the beginning of the node. For each interrupt encountered, LangGraph checks if a matching value exists in the task's resume list. Matching is strictly index-based, so the order of interrupt calls within the node is critical.
To avoid issues, refrain from dynamically changing the node's structure between executions. This includes adding, removing, or reordering interrupt calls, as such changes can result in mismatched indices. These problems often arise from unconventional patterns, such as mutating state via `Command(resume=..., update=SOME_STATE_MUTATION)` or relying on global variables to modify the node's structure dynamically. (A sketch of the safe pattern follows the example below.)
Extended example: incorrect code that introduces non-determinism
import uuid
from typing import TypedDict, Optional
from langgraph.graph import StateGraph
from langgraph.constants import START
from langgraph.types import interrupt, Command
from langgraph.checkpoint.memory import MemorySaver
class State(TypedDict):
"""The graph state."""
age: Optional[str]
name: Optional[str]
def human_node(state: State):
if not state.get('name'):
name = interrupt("what is your name?")
else:
name = "N/A"
if not state.get('age'):
age = interrupt("what is your age?")
else:
age = "N/A"
print(f"Name: {name}. Age: {age}")
return {
"age": age,
"name": name,
}
builder = StateGraph(State)
builder.add_node("human_node", human_node)
builder.add_edge(START, "human_node")
# A checkpointer must be enabled for interrupts to work!
checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)
config = {
"configurable": {
"thread_id": uuid.uuid4(),
}
}
for chunk in graph.stream({"age": None, "name": None}, config):
print(chunk)
for chunk in graph.stream(Command(resume="John", update={"name": "foo"}), config):
print(chunk)
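For contrast, here is a minimal sketch of the safe pattern: the interrupt sequence is fixed, so index-based resume matching stays aligned across executions of the node.

```python
def human_node(state: State):
    # Both interrupts always execute, in the same order, on every run of
    # this node, so resume values match the interrupt calls by index.
    name = interrupt("what is your name?")
    age = interrupt("what is your age?")
    return {"name": name, "age": age}
```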