Mapping of node names to their PregelNode implementations
Mapping of channel names to their BaseChannel or ManagedValueSpec implementations
Type of configurable fields that can be passed to the graph
Type of input values accepted by the graph
Type of output values produced by the graph
autoValidate: Whether to automatically validate the graph structure when it is compiled. Defaults to true.
channels: The channels in the graph, mapping channel names to their BaseChannel or ManagedValueSpec instances.
checkpointer (Optional): Checkpointer for persisting graph state. When provided, a checkpoint of the graph state is saved at every superstep. When false or undefined, checkpointing is disabled and the graph cannot save or restore state. (Illustrated in the sketch after these property descriptions.)
config (Optional): The default configuration for graph execution; can be overridden on a per-invocation basis.
debug: Whether to enable debug logging. Defaults to false.
inputChannels: The input channels for the graph. These channels receive the initial input when the graph is invoked. Can be a single channel key or an array of channel keys.
interruptAfter (Optional): Array of node names, or "all", to interrupt after executing these nodes. Used for implementing human-in-the-loop workflows.
interruptBefore (Optional): Array of node names, or "all", to interrupt before executing these nodes. Used for implementing human-in-the-loop workflows.
lc_… (Protected)
name (Optional)
nodes: The nodes in the graph, mapping node names to their PregelNode instances.
outputChannels: The output channels for the graph. These channels contain the final output when the graph completes. Can be a single channel key or an array of channel keys.
retryPolicy (Optional): Retry policy for handling failures in node execution.
stepTimeout (Optional): Timeout in milliseconds for the execution of each superstep.
store (Optional): Long-term memory store for the graph; allows for persistence and retrieval of data across threads.
streamChannels (Optional): Channels to stream. If not specified, all channels will be streamed. Can be a single channel key or an array of channel keys.
streamMode: The streaming modes enabled for this graph. Defaults to ["values"]. Supported modes include "values" (emit the full state after each superstep), "updates" (emit only the updates written by each node), and "debug" (emit detailed events for debugging).
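The compile-time options above (checkpointer, interruptBefore/interruptAfter, streamMode) are typically supplied when a StateGraph is compiled into a Pregel instance. A minimal sketch, assuming an illustrative state shape, node names, and thread_id:

import { StateGraph, Annotation, MemorySaver, START, END } from "@langchain/langgraph";

// Illustrative state: a single string channel.
const State = Annotation.Root({
  value: Annotation<string>,
});

const graph = new StateGraph(State)
  .addNode("draft", async (state: typeof State.State) => ({ value: `draft of ${state.value}` }))
  .addNode("review", async (state: typeof State.State) => ({ value: `${state.value} (reviewed)` }))
  .addEdge(START, "draft")
  .addEdge("draft", "review")
  .addEdge("review", END)
  .compile({
    checkpointer: new MemorySaver(), // persist a checkpoint at every superstep
    interruptBefore: ["review"],     // pause for human review before this node
  });

// With a checkpointer configured, each run is identified by a thread_id.
const config = { configurable: { thread_id: "thread-1" } };
await graph.invoke({ value: "hello" }, config); // pauses before "review"

The later sketches on this page reuse this illustrative graph and config.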
lc_aliases: A map of aliases for constructor args. Keys are the attribute names, e.g. "foo". Values are the alias that will replace the key in serialization. This is used, for example, to make argument names match Python.
lc_attributes: A map of additional attributes to merge with constructor args. Keys are the attribute names, e.g. "foo". Values are the attribute values, which will be serialized. These attributes need to be accepted by the constructor as arguments.
lc_id: The final serialized identifier for the module.
lc_secrets: A map of secrets, which will be omitted from serialization. Keys are paths to the secret in constructor args, e.g. "foo.bar.baz". Values are the secret ids, which will be used when deserializing.
lc_serializable_keys: A manual list of keys that should be serialized. If not overridden, all fields passed into the constructor will be serialized.
Gets a list of all channels that should be streamed. If streamChannels is specified, returns those channels; otherwise, returns all channels in the graph.
Returns: Array of channel keys to stream.
Internal method that handles batching and configuration for a runnable. It takes a function, input values, and optional configuration, and returns a promise that resolves to the output values.
The function to be executed for each input value.
options (Optional)
batchOptions (Optional): RunnableBatchOptions
Returns: A promise that resolves to the output values.
_call… (Protected): options (Optional): Partial<…>
_get… (Protected)
_separate… (Protected): options (Optional): Partial<…>
_stream… (Protected)
_transform… (Protected): Helper method to transform an Iterator of Input values into an Iterator of Output values, with callbacks. Use this to implement stream() or transform() in Runnable subclasses. options (Optional): Partial<…>
Assigns new fields to the dict output of this runnable. Returns a new runnable.
Convert a runnable to a tool. Returns a new instance of RunnableToolLike, which contains the runnable, name, description, and schema.
description (Optional): string - The description of the tool. Falls back to the description on the Zod schema if not provided, or undefined if neither is provided.
name (Optional): string - The name of the tool. If not provided, it will default to the name of the runnable.
schema: The Zod schema for the input of the tool. Infers the Zod type from the input type of the runnable.
Returns: An instance of RunnableToolLike, which is a runnable that can be used as a tool.
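A brief sketch of asTool, wrapping an illustrative RunnableLambda; the tool name, description, and schema below are assumptions for the example:

import { z } from "zod";
import { RunnableLambda } from "@langchain/core/runnables";

// Hypothetical runnable that echoes a query; a compiled graph could be wrapped the same way.
const echo = RunnableLambda.from(async (input: { query: string }) => {
  return `You searched for: ${input.query}`;
});

const echoTool = echo.asTool({
  name: "echo_search",                      // defaults to the runnable's name if omitted
  description: "Echo a search query back.", // falls back to the schema description if omitted
  schema: z.object({ query: z.string() }),  // Zod schema describing the tool's input
});

// The resulting RunnableToolLike can be bound to a tool-calling chat model,
// e.g. model.bindTools([echoTool]).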
Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently.
inputs: Array of inputs to each batch call.
options (Optional): Either a single call options object to apply to each batch call or an array for each call.
batchOptions (Optional): RunnableBatchOptions & { returnExceptions?: false }
returnExceptions (Optional): false - Whether to return errors rather than throwing on the first one.
Returns: An array of RunOutputs, or mixed RunOutputs and errors if batchOptions.returnExceptions is set.
Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently.
inputs: Array of inputs to each batch call.
options (Optional): Either a single call options object to apply to each batch call or an array for each call.
batchOptions (Optional): RunnableBatchOptions & { returnExceptions: true }
returnExceptions: true - Whether to return errors rather than throwing on the first one.
Returns: An array of RunOutputs, or mixed RunOutputs and errors if batchOptions.returnExceptions is set.
Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently.
inputs: Array of inputs to each batch call.
options (Optional): Either a single call options object to apply to each batch call or an array for each call.
batchOptions (Optional): RunnableBatchOptions
Returns: An array of RunOutputs, or mixed RunOutputs and errors if batchOptions.returnExceptions is set.
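A sketch of batch with per-call options and returnExceptions, assuming graph is the compiled Pregel instance from the earlier sketch:

const results = await graph.batch(
  [{ value: "a" }, { value: "b" }],
  [
    { configurable: { thread_id: "batch-1" } }, // one options object per call
    { configurable: { thread_id: "batch-2" } },
  ],
  { returnExceptions: true } // failed runs are returned as Errors instead of throwing
);

for (const result of results) {
  if (result instanceof Error) {
    console.error("run failed:", result.message);
  } else {
    console.log("run output:", result);
  }
}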
Bind arguments to a Runnable, returning a new Runnable.
A new RunnableBinding that, when invoked, will apply the bound args.
suffix (Optional): string (parameter of getName)
Gets the current state of the graph. Requires a checkpointer to be configured.
config: Configuration for retrieving the state.
options (Optional): GetStateOptions - Additional options.
Returns: A snapshot of the current graph state.
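For example, reusing the illustrative graph and config from the compile sketch (compiled with a checkpointer and already invoked on that thread):

const snapshot = await graph.getState(config);
console.log(snapshot.values); // current channel values
console.log(snapshot.next);   // nodes scheduled to run next (empty when finished)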
Gets the history of graph states. Requires a checkpointer to be configured. Useful for inspecting how the graph reached its current state and for time-travel workflows.
config: Configuration for retrieving the history.
options (Optional): CheckpointListOptions - Options for filtering the history.
Returns: An async iterator of state snapshots.
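A sketch of walking the history for the same illustrative thread, newest checkpoint first:

for await (const snapshot of graph.getStateHistory(config)) {
  console.log(snapshot.config.configurable?.checkpoint_id, snapshot.values);
}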
Gets all subgraphs within this graph. A subgraph is a Pregel instance that is nested within a node of this graph.
namespace (Optional): string - Namespace to filter subgraphs.
recurse (Optional): boolean - Whether to recursively get subgraphs of subgraphs.
Returns: Generator yielding tuples of [name, subgraph].
Gets all subgraphs within this graph asynchronously. A subgraph is a Pregel instance that is nested within a node of this graph.
namespace (Optional): string - Namespace to filter subgraphs.
recurse (Optional): boolean - Whether to recursively get subgraphs of subgraphs.
Returns: AsyncGenerator yielding tuples of [name, subgraph].
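A small sketch, assuming graph contains compiled subgraphs added as nodes:

// Synchronously enumerate directly nested subgraphs.
for (const [name] of graph.getSubgraphs()) {
  console.log("subgraph:", name);
}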
Run the graph with a single input and config.
The input to the graph.
options (Optional): Partial<…> - The configuration to use for the run.
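An illustrative call, reusing the { value: string } state and thread_id conventions from the earlier sketch:

const output = await graph.invoke(
  { value: "hello" },
  { configurable: { thread_id: "thread-1" }, recursionLimit: 25 }
);
console.log(output); // final value of the output channels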
Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input.
Pick keys from the dict output of this runnable. Returns a new runnable.
Create a new runnable sequence that runs each individual runnable in series, piping the output of one runnable into another runnable or runnable-like.
A runnable, function, or object whose values are functions or runnables.
A new runnable sequence.
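For instance, a compiled graph can be piped into a post-processing runnable; the lambda below is illustrative:

import { RunnableLambda } from "@langchain/core/runnables";

const summarize = RunnableLambda.from(async (finalState: { value: string }) => {
  return `Result: ${finalState.value}`;
});

const pipeline = graph.pipe(summarize);
const summary = await pipeline.invoke(
  { value: "hello" },
  { configurable: { thread_id: "thread-3" } }
);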
Streams the execution of the graph, emitting state updates as they occur. This is the primary method for observing graph execution in real-time.
Stream modes include "values" (the full state after each superstep), "updates" (only the updates written by each node), and "debug" (detailed events for debugging).
For more details, see the Streaming how-to guides.
The input to start graph execution with
options (Optional): Partial<…> - Configuration options for streaming.
Returns: An async iterable stream of graph state updates.
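A sketch of streaming node-level updates, with the same illustrative state and thread_id conventions as above:

const stream = await graph.stream(
  { value: "hello" },
  { configurable: { thread_id: "thread-2" }, streamMode: "updates" }
);

for await (const chunk of stream) {
  // With streamMode "updates", each chunk maps a node name to the update it wrote.
  console.log(chunk);
}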
Generate a stream of events emitted by the internal steps of the runnable.
Use to create an iterator over StreamEvents that provide real-time information about the progress of the runnable, including StreamEvents from intermediate results.
A StreamEvent is a dictionary with the following schema:
- event: string - Event names are of the format: on_[runnable_type]_(start|stream|end).
- name: string - The name of the runnable that generated the event.
- run_id: string - Randomly generated ID associated with the given execution of the runnable that emitted the event. A child runnable that gets invoked as part of the execution of a parent runnable is assigned its own unique ID.
- tags: string[] - The tags of the runnable that generated the event.
- metadata: Record<string, any> - The metadata of the runnable that generated the event.
- data: Record<string, any>
Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.
ATTENTION This reference table is for the V2 version of the schema.
+----------------------+-----------------------------+------------------------------------------+
| event | input | output/chunk |
+======================+=============================+==========================================+
| on_chat_model_start | {"messages": BaseMessage[]} | |
+----------------------+-----------------------------+------------------------------------------+
| on_chat_model_stream | | AIMessageChunk("hello") |
+----------------------+-----------------------------+------------------------------------------+
| on_chat_model_end | {"messages": BaseMessage[]} | AIMessageChunk("hello world") |
+----------------------+-----------------------------+------------------------------------------+
| on_llm_start | {'input': 'hello'} | |
+----------------------+-----------------------------+------------------------------------------+
| on_llm_stream | | 'Hello' |
+----------------------+-----------------------------+------------------------------------------+
| on_llm_end | 'Hello human!' | |
+----------------------+-----------------------------+------------------------------------------+
| on_chain_start | | |
+----------------------+-----------------------------+------------------------------------------+
| on_chain_stream | | "hello world!" |
+----------------------+-----------------------------+------------------------------------------+
| on_chain_end | [Document(...)] | "hello world!, goodbye world!" |
+----------------------+-----------------------------+------------------------------------------+
| on_tool_start | {"x": 1, "y": "2"} | |
+----------------------+-----------------------------+------------------------------------------+
| on_tool_end | | {"x": 1, "y": "2"} |
+----------------------+-----------------------------+------------------------------------------+
| on_retriever_start | {"query": "hello"} | |
+----------------------+-----------------------------+------------------------------------------+
| on_retriever_end | {"query": "hello"} | [Document(...), ..] |
+----------------------+-----------------------------+------------------------------------------+
| on_prompt_start | {"question": "hello"} | |
+----------------------+-----------------------------+------------------------------------------+
| on_prompt_end | {"question": "hello"} | ChatPromptValue(messages: BaseMessage[]) |
+----------------------+-----------------------------+------------------------------------------+
The "on_chain_*" events are the default for Runnables that don't fit one of the above categories.
In addition to the standard events above, users can also dispatch custom events.
Custom events will only be surfaced in the v2 version of the API!
A custom event has the following format:
+-----------+------+------------------------------------------------------------+
| Attribute | Type | Description |
+===========+======+============================================================+
| name | str | A user defined name for the event. |
+-----------+------+------------------------------------------------------------+
| data | Any | The data associated with the event. This can be anything. |
+-----------+------+------------------------------------------------------------+
Here's an example:
import { RunnableLambda } from "@langchain/core/runnables";
import { dispatchCustomEvent } from "@langchain/core/callbacks/dispatch";
// Use this import for web environments that don't support "async_hooks"
// and manually pass config to child runs.
// import { dispatchCustomEvent } from "@langchain/core/callbacks/dispatch/web";
const slowThing = RunnableLambda.from(async (someInput: string) => {
  // Placeholder for some slow operation
  await new Promise((resolve) => setTimeout(resolve, 100));
  await dispatchCustomEvent("progress_event", {
    message: "Finished step 1 of 2",
  });
  await new Promise((resolve) => setTimeout(resolve, 100));
  return "Done";
});
const eventStream = await slowThing.streamEvents("hello world", {
  version: "v2",
});
for await (const event of eventStream) {
  if (event.event === "on_custom_event") {
    console.log(event);
  }
}
streamOptions (Optional): Omit<EventStreamCallbackHandlerInput, "autoClose">
Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state.
options (Optional): Partial<…>
streamOptions (Optional): Omit<LogStreamCallbackHandlerInput, "autoClose">
Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated.
Updates the state of the graph with new values. Requires a checkpointer to be configured.
This method is commonly used for human-in-the-loop workflows, where the state is edited while the graph is paused at an interrupt.
config: Configuration for the update.
values: The values to update the state with.
asNode (Optional): string | N - Node name to attribute the update to.
Returns: Updated configuration.
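A sketch of a human-in-the-loop edit while the illustrative graph above is paused at its interrupt, reusing that graph and config; the node name and values are assumptions for the example:

// Apply a manual edit, attributed to the "review" node as if that node had produced it.
await graph.updateState(config, { value: "human-edited draft" }, "review");

// Resume execution from the latest checkpoint by invoking with a null input.
await graph.invoke(null, config);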
Creates a new instance of the Pregel graph with updated configuration. This method follows the immutable pattern - instead of modifying the current instance, it returns a new instance with the merged configuration.
The configuration to merge with the current configuration
A new Pregel instance with the merged configuration
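For example, binding default configuration once and reusing the returned instance:

const boundGraph = graph.withConfig({
  recursionLimit: 50,   // merged into every subsequent invocation
  tags: ["production"],
});
await boundGraph.invoke({ value: "hello" }, { configurable: { thread_id: "thread-4" } });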
Create a new runnable from the current one that will try invoking other passed fallback runnables if the initial invocation fails.
Other runnables to call if the runnable errors.
A new RunnableWithFallbacks.
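A sketch using an illustrative RunnableLambda as the fallback for the compiled graph:

import { RunnableLambda } from "@langchain/core/runnables";

const fallback = RunnableLambda.from(async (_input: { value: string }) => ({
  value: "fallback result",
}));
const robust = graph.withFallbacks({ fallbacks: [fallback] });
await robust.invoke({ value: "hello" }, config);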
Bind lifecycle listeners to a Runnable, returning a new Runnable. The Run object contains information about the run, including its id, type, input, output, error, startTime, endTime, and any tags or metadata added to the run.
The object containing the callback functions.
onEnd (Optional): (run: Run, config?: RunnableConfig<Record<string, any>>) => void | Promise<void> - Called after the runnable finishes running, with the Run object.
onError (Optional): (run: Run, config?: RunnableConfig<Record<string, any>>) => void | Promise<void> - Called if the runnable throws an error, with the Run object.
onStart (Optional): (run: Run, config?: RunnableConfig<Record<string, any>>) => void | Promise<void> - Called before the runnable starts running, with the Run object.
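For example, logging run metadata around each execution of the illustrative graph:

const observed = graph.withListeners({
  onStart: (run) => console.log("started run", run.id),
  onEnd: (run) => console.log("finished run", run.id, "error:", run.error),
});
await observed.invoke({ value: "hello" }, config);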
Add retry logic to an existing runnable.
Optional
fields: {A new RunnableRetry that, when invoked, will retry according to the parameters.
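A sketch retrying the whole invocation, which is distinct from the per-node retryPolicy property above:

const resilient = graph.withRetry({ stopAfterAttempt: 3 });
await resilient.invoke({ value: "hello" }, config);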
is… (Static)
The Pregel class is the core runtime engine of LangGraph, implementing a message-passing graph computation model inspired by Google's Pregel system. It provides the foundation for building reliable, controllable agent workflows that can evolve state over time.
Key features include state channels with reducers, checkpointing for persistence, streaming of intermediate results, and support for human-in-the-loop interrupts.
The Pregel class is not intended to be instantiated directly by consumers. Instead, use the following higher-level APIs:
- StateGraph (Graph API): its compile() method returns a Pregel instance.
- entrypoint (Functional API): a Pregel instance is returned by the entrypoint function.
Example
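A minimal sketch of the Functional API, assuming the entrypoint and task helpers exported from @langchain/langgraph; the task name, workflow logic, and thread_id are illustrative:

import { entrypoint, task, MemorySaver } from "@langchain/langgraph";

// A task is a unit of work that is checkpointed as part of the workflow.
const addExclamation = task("addExclamation", async (text: string) => `${text}!`);

// entrypoint(...) returns a Pregel instance, just like StateGraph.compile().
const workflow = entrypoint(
  { name: "workflow", checkpointer: new MemorySaver() },
  async (input: string) => {
    const excited = await addExclamation(input);
    return excited.toUpperCase();
  }
);

const result = await workflow.invoke("hello", {
  configurable: { thread_id: "thread-1" },
});
console.log(result); // "HELLO!"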