LangGraph Blog
Introduction
One of the things we highlighted in our LangChain v0.1 announcement
was the introduction of a new library: LangGraph. LangGraph is built on
top of LangChain and completely interoperable with the LangChain
ecosystem. It adds new value primarily through the introduction of an
easy way to create cyclical graphs. This is often useful when creating
agent runtimes.
In this blog post, we will first walk through the motivations for LangGraph and then cover the basic functionality it provides. We will then spotlight two agent runtimes we've implemented already, highlight a few of the common modifications to these runtimes we've heard requests for (with examples of implementing them), and finish with a preview of what we will be releasing next.
Motivation
One of the big value props of LangChain is the ability to easily create
custom chains. We've invested heavily in the functionality for this with
LangChain Expression Language. However, so far we've lacked a
method for easily introducing cycles into these chains. Effectively, these
chains are directed acyclic graphs (DAGs) - as are most data
orchestration frameworks.
One of the common patterns we see when people are creating more
complex LLM applications is the introduction of cycles into the runtime.
These cycles often use the LLM to reason about what to do next in the
cycle. A big unlock of LLMs is the ability to use them for these reasoning
tasks. This can essentially be thought of as running an LLM in a for-loop.
These types of systems are often called agents. The simplest - but at
the same time most ambitious - form of these is a loop that essentially
has two steps:
1 Call the LLM to determine either (a) what actions to take, or (b)
what response to give the user
2 Take given actions, and pass back to step 1
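That two-step loop can be sketched in plain Python. Everything here is illustrative: `call_llm` and `take_action` are stand-ins for a real model call and real tool execution, stubbed out so the loop terminates.

```python
# Hypothetical sketch of the two-step agent loop; call_llm and
# take_action are stand-ins for a real model call and tool execution.

def call_llm(history):
    # A real implementation would call a chat model; this stub requests
    # one tool call, then finishes, so the loop always terminates.
    if any(step[0] == "search" for step in history):
        return ("finish", "final answer")
    return ("action", "search")

def take_action(tool_name):
    # A real implementation would execute the named tool.
    return (tool_name, "tool result")

def run_agent(user_input):
    history = []
    while True:
        kind, value = call_llm(history)     # step 1: reason about what to do
        if kind == "finish":
            return value                    # (b) respond to the user
        history.append(take_action(value))  # step 2: act, then loop back
```

The interesting property is that the number of iterations is decided by the model at runtime, not fixed in advance - which is exactly what a DAG cannot express.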
One thing we've seen in practice as we've worked with the community and
companies to put agents into production is that more control is often
needed. You may want to always force an agent to call a particular tool
first. You may want to have more control over how tools are called. You
may want to have different prompts for the agent, depending on the state
it is in.
Functionality
At its core, LangGraph exposes a pretty narrow interface on top of
LangChain.
StateGraph
StateGraph is a class that represents the graph. You initialize this class
by passing in a state definition, which represents a central state object
that is updated over time. Nodes in the graph update this state by
returning operations on its attributes (in the form of a key-value store).
import operator
from typing import Annotated, List, TypedDict
from langgraph.graph import StateGraph

class State(TypedDict):
    input: str
    # operator.add means node updates are appended to this list, not overwritten
    all_actions: Annotated[List[str], operator.add]

graph = StateGraph(State)
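The `Annotated[..., operator.add]` marker tells the graph how to merge a node's return value into the state: plain keys are overwritten, while annotated keys are combined with the given operator (here, list concatenation). A rough pure-Python sketch of that merge rule, using only the standard library (the `apply_update` helper is ours, not LangGraph's):

```python
import operator
from typing import Annotated, List, TypedDict, get_type_hints

class State(TypedDict):
    input: str
    all_actions: Annotated[List[str], operator.add]

def apply_update(state, update):
    # Sketch of the merge rule: keys annotated with an operator are
    # combined using it; unannotated keys are simply overwritten.
    hints = get_type_hints(State, include_extras=True)
    new_state = dict(state)
    for key, value in update.items():
        metadata = getattr(hints[key], "__metadata__", ())
        if metadata:
            new_state[key] = metadata[0](state[key], value)
        else:
            new_state[key] = value
    return new_state

state = {"input": "hi", "all_actions": []}
state = apply_update(state, {"all_actions": ["search"]})
state = apply_update(state, {"all_actions": ["lookup"]})
# all_actions accumulates across updates: ["search", "lookup"]
```

This is why a node can return just the piece of state it changed, rather than the whole object.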
Nodes
After creating a StateGraph , you then add nodes with
graph.add_node("model", model)
graph.add_node("tools", tool_executor)
There is also a special END node that is used to represent the end of the
graph. It is important that your cycles be able to end eventually!
Edges
After adding nodes, you can then add edges to create the graph. There
are a few types of edges.
The Starting Edge
This is the edge that connects the start of the graph to a particular
node, making that node the first one called when input is passed to the
graph. Pseudocode for that is:
graph.set_entry_point("model")
Normal Edges
These are edges where one node should ALWAYS be called after another.
An example of this may be in the basic agent runtime, where we always
want the model to be called after we call a tool.
graph.add_edge("tools", "model")
Conditional Edges
These are edges where a function is used to decide which node to call
next. An example of this could be that after a model is called we either
exit the graph and return to the user, or we call a tool - depending on
what the model decides! See an example in pseudocode below:
graph.add_conditional_edge(
    "model",
    should_continue,
    {
        "end": END,
        "continue": "tools"
    }
)
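The `should_continue` function in the pseudocode above is user-defined: it inspects the current state and returns one of the keys of the mapping ("end" or "continue"). A hypothetical sketch, where the `agent_outcome` key and the `"FINISH"` sentinel are illustrative:

```python
# Hypothetical router for the conditional edge: look at the model
# node's latest outcome and pick a branch key from the edge mapping.

def should_continue(state):
    # "agent_outcome" and the "FINISH" sentinel are illustrative; a real
    # implementation would check for something like an AgentFinish outcome.
    if state["agent_outcome"] == "FINISH":
        return "end"       # exit the graph via the END node
    return "continue"      # route to the "tools" node
```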
Compile
After we define our graph, we can compile it into a runnable! This simply
takes the graph definition we've created so far and returns a runnable.
This runnable exposes all the same methods as LangChain runnables
( .invoke , .stream , .astream_log , etc.), allowing it to be called in the
same way as a chain.
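To make the compile-and-invoke mechanics concrete without installing anything, here is a tiny pure-Python analogue of the flow above. It only illustrates the control flow - it is not the real LangGraph API, and the node names and stop condition are illustrative:

```python
# Tiny pure-Python analogue of compile/invoke: walk from the entry
# point, calling each node and following edges until END.

END = "__end__"

def compile_graph(nodes, edges, conditional, entry):
    def invoke(state):
        current = entry
        while current != END:
            state = {**state, **nodes[current](state)}   # node returns a state update
            if current in conditional:
                branch_fn, mapping = conditional[current]
                current = mapping[branch_fn(state)]      # conditional edge
            else:
                current = edges[current]                 # normal edge
        return state
    return invoke

# Illustrative two-node loop: "model" decides, "tools" acts.
def model(state):
    done = len(state.get("steps", [])) >= 1
    return {"outcome": "finish" if done else "act"}

def tools(state):
    return {"steps": state.get("steps", []) + ["searched"]}

app = compile_graph(
    nodes={"model": model, "tools": tools},
    edges={"tools": "model"},
    conditional={"model": (lambda s: "end" if s["outcome"] == "finish" else "continue",
                           {"end": END, "continue": "tools"})},
    entry="model",
)
result = app({"input": "hi"})
```

After one pass through the tools node, the model routes to END - the cycle that a DAG-based framework could not express.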
Agent Executor
We've recreated the canonical LangChain AgentExecutor with
LangGraph. This lets you use existing LangChain agents, while allowing
you to more easily modify the internals of the AgentExecutor. The state
of this graph by default contains concepts that should be familiar to
you if you've used LangChain agents: input , chat_history ,
agent_outcome , and intermediate_steps :
import operator
from typing import Annotated, TypedDict, Union
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage

class AgentState(TypedDict):
    input: str
    chat_history: list[BaseMessage]
    agent_outcome: Union[AgentAction, AgentFinish, None]
    # operator.add means each node's steps are appended over time
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]
Chat Agent Executor
We've also created a chat-focused agent runtime that works with a
different state: the input is a list of messages, and nodes simply add
to this list of messages over time.
import operator
from typing import Annotated, Sequence, TypedDict
from langchain_core.messages import BaseMessage

class AgentState(TypedDict):
    # the entire state is a list of messages that nodes append to
    messages: Annotated[Sequence[BaseMessage], operator.add]
Modifications
One of the big benefits of LangGraph is that it exposes the logic of
AgentExecutor in a far more natural and modifiable way. We've provided
a few examples of modifications that we've heard requests for:
- Forcing a tool call first: for when you always want to make an agent call a tool first. For Agent Executor and Chat Agent Executor.
- Human-in-the-loop
- Returning output in a specific format, using function calling. Only for Chat Agent Executor.
Future Work
We're incredibly excited about the possibility of LangGraph enabling
more custom and powerful agent runtimes. Some of the things we are
looking to implement in the near future:
Multi-agent workflows
If any of these resonate with you, please feel free to add an example
notebook in the LangGraph repo, or reach out to us at
[email protected] for more involved collaboration!
By LangChain