
Agentic RAG
Retrieval Agents are useful when we want to make decisions about whether to retrieve from an index.

To implement a retrieval agent, we simply need to give an LLM access to a retriever tool.

We can incorporate this into LangGraph.

Setup
First, let's download the required packages and set our API keys:

%%capture --no-stderr
%pip install -U --quiet langchain-community tiktoken langchain-openai langchainhub chromadb langchain langgraph langchain-text-splitters beautifulsoup4

import getpass
import os


def _set_env(key: str):
    if key not in os.environ:
        os.environ[key] = getpass.getpass(f"{key}:")


_set_env("OPENAI_API_KEY")

Set up LangSmith for LangGraph development

Sign up for LangSmith to quickly spot issues and improve the performance of your
LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM
apps built with LangGraph — read more about how to get started here.
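
As a minimal sketch (not part of the original page; it assumes you already have a LangSmith API key, and the project name is just an illustrative placeholder), tracing can be switched on with environment variables:

# Assumption: you have a LangSmith account and an API key.
_set_env("LANGCHAIN_API_KEY")
os.environ["LANGCHAIN_TRACING_V2"] = "true"  # trace LangChain/LangGraph runs
os.environ["LANGCHAIN_PROJECT"] = "agentic-rag"  # hypothetical project name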

Retriever
First, we index 3 blog posts.

from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

urls = [
    "https://lilianweng.github.io/posts/2023-06-23-agent/",
    "https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/",
    "https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/",
]

docs = [WebBaseLoader(url).load() for url in urls]
docs_list = [item for sublist in docs for item in sublist]

text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=100, chunk_overlap=50
)
doc_splits = text_splitter.split_documents(docs_list)

# Add to vectorDB
vectorstore = Chroma.from_documents(
    documents=doc_splits,
    collection_name="rag-chroma",
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

API Reference: WebBaseLoader | Chroma | OpenAIEmbeddings | RecursiveCharacterTextSplitter
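
As a quick sanity check (a sketch, not from the original tutorial; the query string is arbitrary), the retriever can be invoked directly, since retrievers are Runnables:

# Sketch: .invoke() returns a list of Documents for the query
sample_docs = retriever.invoke("What are the types of agent memory?")
print(len(sample_docs), sample_docs[0].page_content[:200])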

Then we create a retriever tool.

from langchain.tools.retriever import create_retriever_tool

retriever_tool = create_retriever_tool(
    retriever,
    "retrieve_blog_posts",
    "Search and return information about Lilian Weng blog posts on LLM agents, prompt engineering, and adversarial attacks on LLMs.",
)

tools = [retriever_tool]

API Reference: create_retriever_tool
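
The tool can also be exercised on its own, as in this sketch (an assumption-laden example: it relies on the index built above, and on the tool taking a single query argument and returning the matched document contents as one string):

# Sketch: call the retriever tool directly and peek at its output
print(retriever_tool.invoke({"query": "types of agent memory"})[:300])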

Agent State
We will define a graph.

A state object is passed around to each node.

Our state will be a list of messages.


Each node in our graph will append to it.

from typing import Annotated, Sequence

from typing_extensions import TypedDict

from langchain_core.messages import BaseMessage

from langgraph.graph.message import add_messages


class AgentState(TypedDict):
    # The add_messages function defines how an update should be processed
    # Default is to replace. add_messages says "append"
    messages: Annotated[Sequence[BaseMessage], add_messages]

API Reference: BaseMessage | add_messages
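
To make the append semantics concrete, here is a small sketch (an illustration, not from the original page; the message contents are arbitrary):

from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph.message import add_messages

# add_messages appends the update to the existing list
# (messages with matching IDs would be replaced instead)
existing = [HumanMessage(content="hi")]
update = [AIMessage(content="hello")]
print(add_messages(existing, update))  # both messages, in order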

Nodes and Edges


We can lay out an agentic RAG graph like this:

The state is a set of messages

Each node will update (append to) state

Conditional edges decide which node to visit next

Using Pydantic with LangChain

This notebook uses Pydantic v2 BaseModel, which requires langchain-core >= 0.3. Using langchain-core < 0.3 will result in errors due to mixing of Pydantic v1 and v2 BaseModels.

from typing import Annotated, Literal, Sequence

from typing_extensions import TypedDict

from langchain import hub
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

from langgraph.prebuilt import tools_condition

### Edges


def grade_documents(state) -> Literal["generate", "rewrite"]:
    """
    Determines whether the retrieved documents are relevant to the question.

    Args:
        state (messages): The current state

    Returns:
        str: A decision for whether the documents are relevant or not
    """

    print("---CHECK RELEVANCE---")

    # Data model
    class grade(BaseModel):
        """Binary score for relevance check."""

        binary_score: str = Field(description="Relevance score 'yes' or 'no'")

    # LLM
    model = ChatOpenAI(temperature=0, model="gpt-4-0125-preview", streaming=True)

    # LLM with tool and validation
    llm_with_tool = model.with_structured_output(grade)

    # Prompt
    prompt = PromptTemplate(
        template="""You are a grader assessing relevance of a retrieved document to a user question. \n
        Here is the retrieved document: \n\n {context} \n\n
        Here is the user question: {question} \n
        If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant. \n
        Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question.""",
        input_variables=["context", "question"],
    )

    # Chain
    chain = prompt | llm_with_tool

    messages = state["messages"]
    last_message = messages[-1]

    question = messages[0].content
    docs = last_message.content

    scored_result = chain.invoke({"question": question, "context": docs})

    score = scored_result.binary_score

    if score == "yes":
        print("---DECISION: DOCS RELEVANT---")
        return "generate"

    else:
        print("---DECISION: DOCS NOT RELEVANT---")
        print(score)
        return "rewrite"

### Nodes


def agent(state):
    """
    Invokes the agent model to generate a response based on the current state. Given
    the question, it will decide to retrieve using the retriever tool, or simply end.

    Args:
        state (messages): The current state

    Returns:
        dict: The updated state with the agent response appended to messages
    """
    print("---CALL AGENT---")
    messages = state["messages"]
    model = ChatOpenAI(temperature=0, streaming=True, model="gpt-4-turbo")
    model = model.bind_tools(tools)
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}
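
The agent node can likewise be exercised standalone (again a sketch under the same assumptions; whether a tool call appears depends on the model's decision):

from langchain_core.messages import HumanMessage

# Sketch: run the agent node by itself and inspect any tool calls it produced
out = agent({"messages": [HumanMessage(content="What does Lilian Weng say about agent memory?")]})
print(out["messages"][-1].tool_calls)  # the retriever tool call, if the model chose one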

def rewrite(state):
    """
    Transform the query to produce a better question.

    Args:
        state (messages): The current state

    Returns:
        dict: The updated state with re-phrased question
    """

    print("---TRANSFORM QUERY---")
    messages = state["messages"]
    question = messages[0].content

    msg = [
        HumanMessage(
            content=f""" \n
    Look at the input and try to reason about the underlying semantic intent / meaning. \n
    Here is the initial question:
    \n ------- \n
    {question}
    \n ------- \n
    Formulate an improved question: """,
        )
    ]

    # Grader
    model = ChatOpenAI(temperature=0, model="gpt-4-0125-preview", streaming=True)
    response = model.invoke(msg)
    return {"messages": [response]}

def generate(state):
    """
    Generate answer

    Args:
        state (messages): The current state

    Returns:
        dict: The updated state with the generated answer appended to messages
    """
    print("---GENERATE---")
    messages = state["messages"]
    question = messages[0].content
    last_message = messages[-1]

    docs = last_message.content

    # Prompt
    prompt = hub.pull("rlm/rag-prompt")

    # LLM
    llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, streaming=True)

    # Post-processing
    def format_docs(docs):
        return "\n\n".join(doc.page_content for doc in docs)

    # Chain
    rag_chain = prompt | llm | StrOutputParser()

    # Run
    response = rag_chain.invoke({"context": docs, "question": question})
    return {"messages": [response]}


print("*" * 20 + "Prompt[rlm/rag-prompt]" + "*" * 20)


prompt = hub.pull("rlm/rag-prompt").pretty_print() # Show what the prompt
looks like

API Reference: BaseMessage | HumanMessage | StrOutputParser | PromptTemplate | ChatOpenAI | tools_condition

********************Prompt[rlm/rag-prompt]********************
================================ Human Message =================================

You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: {question}
Context: {context}
Answer:

Graph
Start with an agent, call_model

The agent makes a decision about whether to call a function

If so, an action node calls the tool (retriever)

Then the agent is called again with the tool output added to messages (state)

from langgraph.graph import END, StateGraph, START
from langgraph.prebuilt import ToolNode

# Define a new graph
workflow = StateGraph(AgentState)

# Define the nodes we will cycle between
workflow.add_node("agent", agent)  # agent
retrieve = ToolNode([retriever_tool])
workflow.add_node("retrieve", retrieve)  # retrieval
workflow.add_node("rewrite", rewrite)  # Re-writing the question
workflow.add_node(
    "generate", generate
)  # Generating a response after we know the documents are relevant
# Call agent node to decide to retrieve or not
workflow.add_edge(START, "agent")

# Decide whether to retrieve
workflow.add_conditional_edges(
    "agent",
    # Assess agent decision
    tools_condition,
    {
        # Translate the condition outputs to nodes in our graph
        "tools": "retrieve",
        END: END,
    },
)

# Edges taken after the `action` node is called.
workflow.add_conditional_edges(
    "retrieve",
    # Assess agent decision
    grade_documents,
)
workflow.add_edge("generate", END)
workflow.add_edge("rewrite", "agent")

# Compile
graph = workflow.compile()

API Reference: END | StateGraph | START | ToolNode

from IPython.display import Image, display

try:
    display(Image(graph.get_graph(xray=True).draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass


import pprint

inputs = {
    "messages": [
        ("user", "What does Lilian Weng say about the types of agent memory?"),
    ]
}
for output in graph.stream(inputs):
    for key, value in output.items():
        pprint.pprint(f"Output from node '{key}':")
        pprint.pprint("---")
        pprint.pprint(value, indent=2, width=80, depth=None)
        pprint.pprint("\n---\n")

---CALL AGENT---
"Output from node 'agent':"
'---'
{ 'messages': [ AIMessage(content='', additional_kwargs={'tool_calls':
[{'index': 0, 'id': 'call_z36oPZN8l1UC6raxrebqc1bH', 'function': {'arguments':
'{"query":"types of agent memory"}', 'name': 'retrieve_blog_posts'}, 'type':
'function'}]}, response_metadata={'finish_reason': 'tool_calls'}, id='run-
2bad2518-8187-4d8f-8e23-2b9501becb6f-0', tool_calls=[{'name':
'retrieve_blog_posts', 'args': {'query': 'types of agent memory'}, 'id':
'call_z36oPZN8l1UC6raxrebqc1bH'}])]}
'\n---\n'
---CHECK RELEVANCE---
---DECISION: DOCS RELEVANT---
"Output from node 'retrieve':"
'---'
{ 'messages': [ ToolMessage(content='Table of Contents\n\n\n\nAgent System
Overview\n\nComponent One: Planning\n\nTask Decomposition\n\nSelf-
Reflection\n\n\nComponent Two: Memory\n\nTypes of Memory\n\nMaximum Inner
Product Search (MIPS)\n\n\nComponent Three: Tool Use\n\nCase
Studies\n\nScientific Discovery Agent\n\nGenerative Agents
Simulation\n\nProof-of-Concept
Examples\n\n\nChallenges\n\nCitation\n\nReferences\n\nPlanning\n\nSubgoal and
decomposition: The agent breaks down large tasks into smaller, manageable
subgoals, enabling efficient handling of complex tasks.\nReflection and
refinement: The agent can do self-criticism and self-reflection over past
actions, learn from mistakes and refine them for future steps, thereby
improving the quality of final results.\n\n\nMemory\n\nMemory\n\nShort-term
memory: I would consider all the in-context learning (See Prompt Engineering)
as utilizing short-term memory of the model to learn.\nLong-term memory: This
provides the agent with the capability to retain and recall (infinite)
information over extended periods, often by leveraging an external vector
store and fast retrieval.\n\n\nTool use\n\nThe design of generative agents
combines LLM with memory, planning and reflection mechanisms to enable agents
to behave conditioned on past experience, as well as to interact with other
agents.', name='retrieve_blog_posts', id='d815f283-868c-4660-a1c6-
5f6e5373ca06', tool_call_id='call_z36oPZN8l1UC6raxrebqc1bH')]}
'\n---\n'
---GENERATE---
"Output from node 'generate':"
'---'
{ 'messages': [ 'Lilian Weng discusses short-term and long-term memory in '
