Human-in-the-loop using Server API¶
To review, edit, and approve tool calls in an agent or workflow, use LangGraph's human-in-the-loop features.
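As a quick illustration, an approval gate is just a node that calls interrupt with the proposed action and acts on the human's reply. The sketch below is illustrative only (the state key and decision values are assumptions, not part of the deployed example that follows):
from langgraph.types import interrupt
def review_tool_call(state):
    # Pause the run and surface the proposed tool call to a human (sketch).
    decision = interrupt({
        "question": "Approve this tool call?",
        "tool_call": state["pending_tool_call"],  # illustrative state key
    })
    # Act on the human's reply once the run is resumed.
    return {"approved": decision == "approve"}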
LangGraph API invoke & resume¶
from langgraph_sdk import get_client
from langgraph_sdk.schema import Command
client = get_client(url=<DEPLOYMENT_URL>)
# Using the graph deployed with the name "agent"
assistant_id = "agent"
# create a thread
thread = await client.threads.create()
thread_id = thread["thread_id"]
# Run the graph until the interrupt is hit.
result = await client.runs.wait(
thread_id,
assistant_id,
input={"some_text": "original text"} # (1)!
)
print(result['__interrupt__']) # (2)!
# > [
# > {
# > 'value': {'text_to_revise': 'original text'},
# > 'resumable': True,
# > 'ns': ['human_node:fc722478-2f21-0578-c572-d9fc4dd07c3b'],
# > 'when': 'during'
# > }
# > ]
# Resume the graph
print(await client.runs.wait(
thread_id,
assistant_id,
command=Command(resume="Edited text") # (3)!
))
# > {'some_text': 'Edited text'}
- The graph is invoked with some initial state.
- When the graph hits the interrupt, it returns an interrupt object with the payload and metadata.
- The graph is resumed with a Command(resume=...), injecting the human's input and continuing execution.
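Before resuming, the pending interrupt can also be read back from the thread state. A minimal sketch using the SDK's get_state (the field names follow the SDK's thread-state schema):
# Inspect the paused thread before resuming (continues from the snippet above).
state = await client.threads.get_state(thread_id)
print(state["next"])                    # > ['human_node']
print(state["tasks"][0]["interrupts"])  # the pending interrupt payload and metadata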
import { Client } from "@langchain/langgraph-sdk";
const client = new Client({ apiUrl: <DEPLOYMENT_URL> });
// Using the graph deployed with the name "agent"
const assistantID = "agent";
// create a thread
const thread = await client.threads.create();
const threadID = thread["thread_id"];
// Run the graph until the interrupt is hit.
const result = await client.runs.wait(
threadID,
assistantID,
{ input: { "some_text": "original text" } } // (1)!
);
console.log(result['__interrupt__']); // (2)!
// > [
// > {
// > 'value': {'text_to_revise': 'original text'},
// > 'resumable': true,
// > 'ns': ['human_node:fc722478-2f21-0578-c572-d9fc4dd07c3b'],
// > 'when': 'during'
// > }
// > ]
// Resume the graph
console.log(await client.runs.wait(
threadID,
assistantID,
{ command: { resume: "Edited text" }} // (3)!
));
// > {'some_text': 'Edited text'}
- The graph is invoked with some initial state.
- When the graph hits the interrupt, it returns an interrupt object with the payload and metadata.
- The graph is resumed with a { resume: ... } command object, injecting the human's input and continuing execution.
Create a thread:
curl --request POST \
--url <DEPLOYMENT_URL>/threads \
--header 'Content-Type: application/json' \
--data '{}'
Run the graph until the interrupt is hit:
curl --request POST \
--url <DEPLOYMENT_URL>/threads/<THREAD_ID>/runs/wait \
--header 'Content-Type: application/json' \
--data "{
\"assistant_id\": \"agent\",
\"input\": {\"some_text\": \"original text\"}
}"
Resume the graph:
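The resume request passes the human's input back as a command payload, mirroring Command(resume=...) in the SDK examples above (the payload shape shown here follows the same runs/wait endpoint):
curl --request POST \
--url <DEPLOYMENT_URL>/threads/<THREAD_ID>/runs/wait \
--header 'Content-Type: application/json' \
--data "{
\"assistant_id\": \"agent\",
\"command\": {\"resume\": \"Edited text\"}
}"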
Extended example: using interrupt
This is an example graph you can run in the LangGraph API server. See the LangGraph Platform quickstart for more details.
from typing import TypedDict
import uuid
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.constants import START
from langgraph.graph import StateGraph
from langgraph.types import interrupt, Command
class State(TypedDict):
some_text: str
def human_node(state: State):
value = interrupt( # (1)!
{
"text_to_revise": state["some_text"] # (2)!
}
)
return {
"some_text": value # (3)!
}
# Build the graph
graph_builder = StateGraph(State)
graph_builder.add_node("human_node", human_node)
graph_builder.add_edge(START, "human_node")
graph = graph_builder.compile()
- interrupt(...) pauses execution at human_node, surfacing the given payload to a human.
- Any JSON-serializable value can be passed to the interrupt function. Here, a dict containing the text to revise.
- Once resumed, the return value of interrupt(...) is the human-provided input, which is used to update the state.
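To try the same graph outside the API server, a minimal local sketch (reusing the imports and graph_builder above) compiles with the InMemorySaver checkpointer and drives the interrupt/resume cycle directly:
# Local test run (sketch): outside the server you supply persistence yourself.
checkpointer = InMemorySaver()
local_graph = graph_builder.compile(checkpointer=checkpointer)
config = {"configurable": {"thread_id": str(uuid.uuid4())}}
# Run until the interrupt is hit.
result = local_graph.invoke({"some_text": "original text"}, config=config)
print(result["__interrupt__"])
# Resume with the human-provided value.
print(local_graph.invoke(Command(resume="Edited text"), config=config))
# > {'some_text': 'Edited text'}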
Once you have a running LangGraph API server, you can interact with it using the LangGraph SDK:
from langgraph_sdk import get_client
from langgraph_sdk.schema import Command
client = get_client(url=<DEPLOYMENT_URL>)
# Using the graph deployed with the name "agent"
assistant_id = "agent"
# create a thread
thread = await client.threads.create()
thread_id = thread["thread_id"]
# Run the graph until the interrupt is hit.
result = await client.runs.wait(
thread_id,
assistant_id,
input={"some_text": "original text"} # (1)!
)
print(result['__interrupt__']) # (2)!
# > [
# > {
# > 'value': {'text_to_revise': 'original text'},
# > 'resumable': True,
# > 'ns': ['human_node:fc722478-2f21-0578-c572-d9fc4dd07c3b'],
# > 'when': 'during'
# > }
# > ]
# Resume the graph
print(await client.runs.wait(
thread_id,
assistant_id,
command=Command(resume="Edited text") # (3)!
))
# > {'some_text': 'Edited text'}
- The graph is invoked with some initial state.
- When the graph hits the interrupt, it returns an interrupt object with the payload and metadata.
- The graph is resumed with a Command(resume=...), injecting the human's input and continuing execution.
import { Client } from "@langchain/langgraph-sdk";
const client = new Client({ apiUrl: <DEPLOYMENT_URL> });
// Using the graph deployed with the name "agent"
const assistantID = "agent";
// create a thread
const thread = await client.threads.create();
const threadID = thread["thread_id"];
// Run the graph until the interrupt is hit.
const result = await client.runs.wait(
threadID,
assistantID,
{ input: { "some_text": "original text" } } // (1)!
);
console.log(result['__interrupt__']); // (2)!
// > [
// > {
// > 'value': {'text_to_revise': 'original text'},
// > 'resumable': true,
// > 'ns': ['human_node:fc722478-2f21-0578-c572-d9fc4dd07c3b'],
// > 'when': 'during'
// > }
// > ]
// Resume the graph
console.log(await client.runs.wait(
threadID,
assistantID,
{ command: { resume: "Edited text" }} // (3)!
));
// > {'some_text': 'Edited text'}
- The graph is invoked with some initial state.
- When the graph hits the interrupt, it returns an interrupt object with the payload and metadata.
- The graph is resumed with a { resume: ... } command object, injecting the human's input and continuing execution.
Create a thread:
curl --request POST \
--url <DEPLOYMENT_URL>/threads \
--header 'Content-Type: application/json' \
--data '{}'
Run the graph until the interrupt is hit:
curl --request POST \
--url <DEPLOYMENT_URL>/threads/<THREAD_ID>/runs/wait \
--header 'Content-Type: application/json' \
--data "{
\"assistant_id\": \"agent\",
\"input\": {\"some_text\": \"original text\"}
}"
Resume the graph:
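As above, the resume request passes the human's input back as a command payload, mirroring Command(resume=...) from the SDK examples (same runs/wait endpoint):
curl --request POST \
--url <DEPLOYMENT_URL>/threads/<THREAD_ID>/runs/wait \
--header 'Content-Type: application/json' \
--data "{
\"assistant_id\": \"agent\",
\"command\": {\"resume\": \"Edited text\"}
}"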
Learn more¶
- Human-in-the-loop conceptual guide: learn more about LangGraph human-in-the-loop features.
- Common patterns: learn how to implement patterns like approving/rejecting actions, requesting user input, tool call review, and validating human input.