LangGraph Guide
Overview
LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent
workflows. Compared to other LLM frameworks, it offers these core benefits: cycles, controllability, and persistence.
LangGraph allows you to define flows that involve cycles, essential for most agentic architectures, differentiating it from
DAG-based solutions. As a very low-level framework, it provides fine-grained control over both the flow and state of your
application, crucial for creating reliable agents. Additionally, LangGraph includes built-in persistence, enabling advanced
human-in-the-loop and memory features.
LangGraph is inspired by Pregel and Apache Beam. The public interface draws inspiration from NetworkX. LangGraph is
built by LangChain Inc, the creators of LangChain, but can be used without LangChain.
LangGraph Platform is infrastructure for deploying LangGraph agents. It is a commercial solution for deploying agentic
applications to production, built on the open-source LangGraph framework. The LangGraph Platform consists of several
components that work together to support the development, deployment, debugging, and monitoring of LangGraph
applications: LangGraph Server (APIs), LangGraph SDKs (clients for the APIs), LangGraph CLI (command line tool for
building the server), and LangGraph Studio (UI/debugger).
To learn more about LangGraph, check out our first LangChain Academy course, Introduction to LangGraph, available for
free here.
Key Features
Human-in-the-Loop: Interrupt graph execution to approve or edit the next action planned by the agent.
Streaming Support: Stream outputs as they are produced by each node (including token streaming).
Integration with LangChain: LangGraph integrates seamlessly with LangChain and LangSmith (but does not require
them).
LangGraph Platform
LangGraph Platform is a commercial solution for deploying agentic applications to production, built on the open-source
LangGraph framework. Here are some common issues that arise in complex deployments, which LangGraph Platform
addresses:
Streaming support: LangGraph Server provides multiple streaming modes optimized for various application needs
Background runs: Runs agents asynchronously in the background
Support for long-running agents: Infrastructure that can handle long-running processes
Double texting: Handle the case where you get two messages from the user before the agent can respond
Handle burstiness: Task queue for ensuring requests are handled consistently without loss, even under heavy loads
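The last three items are behaviors of LangGraph Server that are selected per run through the LangGraph SDK. As a rough sketch only - the deployment URL and the thread/assistant IDs below are placeholders, and the call shape assumes the Python SDK's async client:

from langgraph_sdk import get_client

client = get_client(url="http://localhost:8123")  # placeholder deployment URL

async def kick_off(thread_id: str, assistant_id: str):
    # A background run: create() returns once the run is queued, while the
    # server executes the (possibly long-running) agent asynchronously.
    return await client.runs.create(
        thread_id,
        assistant_id,
        input={"messages": [{"role": "user", "content": "hi"}]},
        multitask_strategy="enqueue",  # double-texting policy: queue the second message
    )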
Installation
pip install -U langgraph
Example
One of the central concepts of LangGraph is state. Each graph execution creates a state that is passed between nodes in
the graph as they execute, and each node updates this internal state with its return value after it executes. The way that
the graph updates its internal state is defined by either the type of graph chosen or a custom function.
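A custom state schema is just a typed dictionary; annotating a field with a reducer function tells LangGraph how to merge each node's return value into that field. Here is a minimal sketch (the State and respond names are illustrative, not part of the example below):

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import END, START, StateGraph
from langgraph.graph.message import add_messages

class State(TypedDict):
    # add_messages appends (rather than overwrites) message updates
    messages: Annotated[list, add_messages]

def respond(state: State):
    # A node returns a partial state update, not the whole state
    return {"messages": [("assistant", "hello")]}

builder = StateGraph(State)
builder.add_node("respond", respond)
builder.add_edge(START, "respond")
builder.add_edge("respond", END)
graph = builder.compile()
print(graph.invoke({"messages": [("user", "hi")]}))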
Let's take a look at a simple example of an agent that can use a search tool. First, set the required API key (the LangSmith variables are optional and enable tracing):
export ANTHROPIC_API_KEY=sk-...
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY=lsv2_sk_...
tools = [search]
tool_node = ToolNode(tools)
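These two lines alone won't run, since search, the model, and the graph itself are still undefined. Below is a minimal runnable sketch of the full agent, assuming langchain-anthropic is installed alongside langgraph; the stubbed search tool and the model name are illustrative choices consistent with the outputs shown below:

from typing import Literal

from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, MessagesState, StateGraph
from langgraph.prebuilt import ToolNode

# A stub "search" tool that returns canned weather strings
@tool
def search(query: str) -> str:
    """Call to surf the web."""
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It's 60 degrees and foggy."
    return "It's 90 degrees and sunny."

tools = [search]
tool_node = ToolNode(tools)

# Bind the tools to the model so it knows it may call them
model = ChatAnthropic(model="claude-3-5-sonnet-20240620", temperature=0).bind_tools(tools)

# Router: send the graph to the tools node if the model requested a tool call
def should_continue(state: MessagesState) -> Literal["tools", END]:
    last_message = state["messages"][-1]
    return "tools" if last_message.tool_calls else END

# The agent node: call the model on the accumulated messages
def call_model(state: MessagesState):
    response = model.invoke(state["messages"])
    return {"messages": [response]}  # merged into state by MessagesState

workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)
workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", should_continue)
workflow.add_edge("tools", "agent")

# MemorySaver keeps per-thread state in memory, keyed by thread_id
checkpointer = MemorySaver()
app = workflow.compile(checkpointer=checkpointer)

final_state = app.invoke(
    {"messages": [HumanMessage(content="what is the weather in sf")]},
    config={"configurable": {"thread_id": 42}},
)
final_state["messages"][-1].content

Asked about San Francisco, the final message content comes back roughly like this: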
"Based on the search results, I can tell you that the current weather in San Francisco is:\n\nTemperature:
60 degrees Fahrenheit\nConditions: Foggy\n\nSan Francisco is known for its microclimates and frequent fog,
especially during the summer months. The temperature of 60°F (about 15.5°C) is quite typical for the city,
which tends to have mild temperatures year-round. The fog, often referred to as "Karl the Fog" by locals,
is a characteristic feature of San Francisco\'s weather, particularly in the mornings and evenings.\n\nIs
there anything else you\'d like to know about the weather in San Francisco or any other location?"
Now, when we pass the same "thread_id", the conversation context is retained via the saved state (i.e., the stored list of messages):
final_state = app.invoke(
{"messages": [HumanMessage(content="what about ny")]},
config={"configurable": {"thread_id": 42}}
)
final_state["messages"][-1].content
"Based on the search results, I can tell you that the current weather in New York City is:\n\nTemperature:
90 degrees Fahrenheit (approximately 32.2 degrees Celsius)\nConditions: Sunny\n\nThis weather is quite
different from what we just saw in San Francisco. New York is experiencing much warmer temperatures right
now. Here are a few points to note:\n\n1. The temperature of 90°F is quite hot, typical of summer weather
in New York City.\n2. The sunny conditions suggest clear skies, which is great for outdoor activities but
also means it might feel even hotter due to direct sunlight.\n3. This kind of weather in New York often
comes with high humidity, which can make it feel even warmer than the actual temperature suggests.\n\nIt's
interesting to see the stark contrast between San Francisco's mild, foggy weather and New York's hot, sunny
conditions. This difference illustrates how varied weather can be across different parts of the United
States, even on the same day.\n\nIs there anything else you'd like to know about the weather in New York or
any other location?"
Step-by-step Breakdown
1. Initialize the model and tools.
We use ChatAnthropic as our LLM. NOTE: we need to make sure the model knows that it has these tools available to call. We can do this by converting the LangChain tools into the model's tool-calling format using the .bind_tools() method.
We define the tools we want to use - a search tool in our case. It is really easy to create your own tools - see the documentation here on how to do that.
2. Initialize the graph with state.
We initialize the graph (StateGraph) by passing the state schema (in our case MessagesState).
MessagesState is a prebuilt state schema that has one attribute -- a list of LangChain Message objects -- as well as logic for merging the updates from each node into the state.
3. Define the graph nodes. There are two main nodes we need:
The agent node: responsible for deciding what (if any) actions to take.
The tools node: if the agent decides to take an action, this node will then execute that action.
4. Define the entry point and graph edges.
First, we need to set the entry point for graph execution - the agent node.
Then we define one normal and one conditional edge. A conditional edge means that the destination depends on the contents of the graph's state (MessagesState). In our case, the destination is not known until the agent (LLM) decides. After the agent is called, we should either:
a. Run tools if the agent asked to take an action, or
b. Finish (respond to the user) if the agent did not ask to run tools.
Normal edge: after the tools are invoked, the graph should always return to the agent to decide what to do next. The corresponding edge declarations are sketched below.
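In code, this entire step is three edge declarations. The sketch below mirrors the example; the explicit path map passed to add_conditional_edges is an optional, illustrative variant (the example relies on should_continue returning node names directly):

workflow.add_edge(START, "agent")  # entry point: always run the agent first
workflow.add_conditional_edges(
    "agent",
    should_continue,  # router: inspects the last message for tool_calls
    {"tools": "tools", END: END},  # optional map from router outputs to nodes
)
workflow.add_edge("tools", "agent")  # normal edge: tools always hand back to the agent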
5. Compile the graph.
When we compile the graph, we turn it into a LangChain Runnable, which automatically enables calling .invoke(), .stream() and .batch() with your inputs.
We can also optionally pass a checkpointer object for persisting state between graph runs, enabling memory, human-in-the-loop workflows, time travel and more. In our case we use MemorySaver - a simple in-memory checkpointer.
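Because the compiled app is a Runnable backed by a checkpointer, streaming and human-in-the-loop interrupts take almost no extra code. A sketch reusing app, workflow, and checkpointer from the example above; the interrupt_before variant is an assumption for illustration, not part of the original example:

# Stream full state snapshots as each node finishes ("values" mode);
# stream_mode="messages" streams LLM tokens instead.
for state in app.stream(
    {"messages": [HumanMessage(content="what is the weather in sf")]},
    config={"configurable": {"thread_id": 7}},
    stream_mode="values",
):
    state["messages"][-1].pretty_print()

# Human-in-the-loop: pause before the "tools" node runs so a human can
# inspect or edit the pending tool call. Invoking with input=None and the
# same thread_id resumes from the saved checkpoint.
app_hitl = workflow.compile(checkpointer=checkpointer, interrupt_before=["tools"])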
6. Execute the graph.
a. LangGraph adds the input message to the internal state, then passes the state to the entry point node, "agent".
b. The "agent" node executes, invoking the chat model.
c. The chat model returns an AIMessage. LangGraph adds this to the state.
d. The graph cycles through the following steps until there are no more tool_calls on the AIMessage: if the AIMessage has tool_calls, the "tools" node executes; then the "agent" node runs again and returns a new AIMessage.
e. Execution progresses to the special END value and outputs the final state. As a result, we get a list of all our chat messages as output.
Documentation
Tutorials: Learn to build with LangGraph through guided examples.
How-to Guides: Accomplish specific things within LangGraph, from streaming to adding memory & persistence to common design patterns (branching, subgraphs, etc.); these are the place to go if you want to copy and run a specific code snippet.
Conceptual Guides: In-depth explanations of the key concepts and principles behind LangGraph, such as nodes, edges,
state and more.
API Reference: Review important classes and methods, simple examples of how to use the graph and checkpointing
APIs, higher-level prebuilt components and more.
Cloud (beta): With one click, deploy LangGraph applications to LangGraph Cloud.
Contributing
For more information on how to contribute, see here.