L2-Memory - Jupyter Notebook
Outline
ConversationBufferMemory
ConversationBufferWindowMemory
ConversationTokenBufferMemory
ConversationSummaryMemory
ConversationBufferMemory
In [1]: import os
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file
import warnings
warnings.filterwarnings('ignore')
Note: LLMs do not always produce the same results. When executing the code in your notebook, you may get slightly different answers than those in the video.
Current conversation:
"Hello Andrew, it's nice to meet you. My name is AI. How can I assist you today?"
Current conversation:
Human: Hi, my name is Andrew
AI: Hello Andrew, it's nice to meet you. My name is AI. How can I assist you today?
Human: What is 1+1?
AI:
Current conversation:
Human: Hi, my name is Andrew
AI: Hello Andrew, it's nice to meet you. My name is AI. How can I assist you today?
Human: What is 1+1?
AI: The answer to 1+1 is 2.
Human: What is my name?
AI:
In [8]: print(memory.buffer)
In [9]: memory.load_memory_variables({})
{'history': "Human: Hi, my name is Andrew\nAI: Hello Andrew, it's nice to meet you. My name is AI. How can I assist you today?\nHuman: What is 1+1?\nAI: The answer to 1+1 is 2.\nHuman: What is my name?\nAI: Your name is Andrew, as you mentioned earlier."}
In [12]: print(memory.buffer)
Human: Hi
AI: What's up
In [13]: memory.load_memory_variables({})
In [15]: memory.load_memory_variables({})
{'history': "Human: Hi\nAI: What's up\nHuman: Not much, just hanging\nAI: Cool"}
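The original code cells that create this memory are not shown above, so as an illustration here is a minimal plain-Python sketch of what a buffer memory does: it stores every exchange verbatim and replays the whole transcript. The `BufferMemory` class below is a hypothetical stand-in for this walkthrough, not LangChain's `ConversationBufferMemory` implementation, though the `save_context` / `load_memory_variables` method names mirror its interface.

```python
class BufferMemory:
    """Toy buffer memory: keeps the full conversation verbatim."""

    def __init__(self):
        self.turns = []  # list of (human, ai) exchanges

    def save_context(self, inputs, outputs):
        # Mirrors the shape memory.save_context({"input": ...}, {"output": ...})
        self.turns.append((inputs["input"], outputs["output"]))

    @property
    def buffer(self):
        # The full transcript, one "Human:"/"AI:" pair per exchange
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

    def load_memory_variables(self, _):
        return {"history": self.buffer}


memory = BufferMemory()
memory.save_context({"input": "Hi"}, {"output": "What's up"})
memory.save_context({"input": "Not much, just hanging"}, {"output": "Cool"})
print(memory.load_memory_variables({}))
# {'history': "Human: Hi\nAI: What's up\nHuman: Not much, just hanging\nAI: Cool"}
```

Because the transcript grows without bound, the prompt (and token cost) grows with every turn, which motivates the window, token, and summary variants below.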
ConversationBufferWindowMemory
In [16]: from langchain.memory import ConversationBufferWindowMemory
In [19]: memory.load_memory_variables({})
"Hello Andrew, it's nice to meet you. My name is AI. How can I assist you today?"
"I'm sorry, I don't have access to that information. Could you please tell me your name?"
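The window memory cells are elided above, but the output explains itself: with a window of one exchange, the name given earlier has already been dropped. A minimal plain-Python sketch of the idea (a hypothetical `WindowMemory`, not LangChain's `ConversationBufferWindowMemory`) keeps only the last `k` exchanges:

```python
class WindowMemory:
    """Toy window memory: remembers only the last k exchanges."""

    def __init__(self, k=1):
        self.k = k
        self.turns = []

    def save_context(self, inputs, outputs):
        self.turns.append((inputs["input"], outputs["output"]))
        self.turns = self.turns[-self.k:]  # discard everything older than k

    def load_memory_variables(self, _):
        history = "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)
        return {"history": history}


memory = WindowMemory(k=1)
memory.save_context({"input": "Hi, my name is Andrew"}, {"output": "Nice to meet you"})
memory.save_context({"input": "What is 1+1?"}, {"output": "2"})
print(memory.load_memory_variables({}))
# {'history': 'Human: What is 1+1?\nAI: 2'}  -- the name is gone
```

This is why the chain above answers "I don't have access to that information" when asked for the name: with `k=1`, only the 1+1 exchange is still in the prompt.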
ConversationTokenBufferMemory
In [24]: #!pip install tiktoken
In [27]: memory.load_memory_variables({})
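The token-buffer cells are also elided, so here is a plain-Python sketch of the idea: trim the oldest exchanges until the history fits a token budget. LangChain's `ConversationTokenBufferMemory` counts real tokens via `tiktoken`; in this hypothetical sketch a whitespace word count stands in as a crude approximation.

```python
def count_tokens(text):
    # Crude stand-in for a real tokenizer such as tiktoken
    return len(text.split())


class TokenBufferMemory:
    """Toy token-buffer memory: drops old exchanges to fit a token budget."""

    def __init__(self, max_token_limit=10):
        self.max_token_limit = max_token_limit
        self.turns = []

    def _buffer(self):
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

    def save_context(self, inputs, outputs):
        self.turns.append((inputs["input"], outputs["output"]))
        # Trim from the front while the transcript exceeds the budget
        while count_tokens(self._buffer()) > self.max_token_limit and len(self.turns) > 1:
            self.turns.pop(0)

    def load_memory_variables(self, _):
        return {"history": self._buffer()}


memory = TokenBufferMemory(max_token_limit=10)
memory.save_context({"input": "AI is what?!"}, {"output": "Amazing!"})
memory.save_context({"input": "Backpropagation is what?"}, {"output": "Beautiful!"})
memory.save_context({"input": "Chatbots are what?"}, {"output": "Charming!"})
print(memory.load_memory_variables({}))
# Only the most recent exchange fits the 10-"token" budget
```

Unlike the window variant, the cutoff here is by size rather than by number of exchanges, so long turns push out history faster than short ones.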
ConversationSummaryMemory
In [28]: from langchain.memory import ConversationSummaryBufferMemory
In [30]: memory.load_memory_variables({})
{'history': "System: The human and AI engage in small talk before discussing the day's schedule. The AI informs the human of a morning meeting with the product team, time to work on the LangChain project, and a lunch meeting with a customer interested in the latest AI developments."}
In [31]: conversation = ConversationChain(
llm=llm,
memory = memory,
verbose=True
)
Current conversation:
System: The human and AI engage in small talk before discussing the day's schedule. The AI informs the human of a morning meeting with the product team, time to work on the LangChain project, and a lunch meeting with a customer interested in the latest AI developments.
Human: What would be a good demo to show?
AI:
In [33]: memory.load_memory_variables({})
{'history': "System: The human and AI engage in small talk before discussing the day's schedule. The AI informs the human of a morning meeting with the product team, time to work on the LangChain project, and a lunch meeting with a customer interested in the latest AI developments. The human asks what would be a good demo to show.\nAI: Based on the customer's interest in AI developments, I would suggest showcasing our latest natural language processing capabilities. We could demonstrate how our AI can understand and respond to complex language queries, and even provide personalized recommendations based on the user's preferences. Additionally, we could highlight our machine learning algorithms and how they continuously improve the accuracy and efficiency of our AI. Would you like me to prepare a demo for the meeting?"}
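The output above shows the key behavior: older turns are collapsed into a `System:` summary while recent turns stay verbatim. LangChain's `ConversationSummaryBufferMemory` writes that summary with an LLM call; the hypothetical `summarize()` stand-in below just records what was condensed, to keep the sketch self-contained and runnable offline.

```python
def summarize(prev_summary, human, ai):
    # Stand-in for an LLM-written summary: note what was condensed
    note = f"The human said {human!r} and the AI replied."
    return (prev_summary + " " + note).strip()


class SummaryBufferMemory:
    """Toy summary-buffer memory: old turns fold into a running summary."""

    def __init__(self, max_turns=1):
        self.max_turns = max_turns  # exchanges kept verbatim
        self.summary = ""
        self.turns = []

    def save_context(self, inputs, outputs):
        self.turns.append((inputs["input"], outputs["output"]))
        while len(self.turns) > self.max_turns:
            h, a = self.turns.pop(0)
            self.summary = summarize(self.summary, h, a)

    def load_memory_variables(self, _):
        parts = []
        if self.summary:
            parts.append(f"System: {self.summary}")
        parts += [f"Human: {h}\nAI: {a}" for h, a in self.turns]
        return {"history": "\n".join(parts)}


memory = SummaryBufferMemory(max_turns=1)
memory.save_context({"input": "Hello"}, {"output": "What's up"})
memory.save_context({"input": "What is on the schedule today?"},
                    {"output": "A meeting with the product team."})
print(memory.load_memory_variables({})["history"])
# System: ...summary of older turns...
# Human: What is on the schedule today?
# AI: A meeting with the product team.
```

The trade-off mirrors the notebook's output: the summary keeps the prompt bounded in size while preserving the gist of earlier turns, at the cost of losing exact wording.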
Reminder: Download your notebook to your local computer to save your work.