Migrating off ConversationBufferMemory or ConversationStringBufferMemory

`ConversationBufferMemory` and `ConversationStringBufferMemory` were used to keep track of a conversation between a human and an AI assistant without any additional processing.

note

`ConversationStringBufferMemory` is equivalent to `ConversationBufferMemory`, but targeted LLMs that were not chat models.
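
For illustration, here is a minimal sketch of the difference, assuming the legacy `langchain.memory` module is installed: the string variant exposes the history as one formatted string, while the message variant can return a list of message objects. The example inputs are made up.

from langchain.memory import ConversationBufferMemory, ConversationStringBufferMemory

# String-based buffer: history is a single formatted string (for non-chat LLMs).
string_memory = ConversationStringBufferMemory()
string_memory.save_context({"input": "hi! I'm bob"}, {"output": "Hello Bob!"})
print(string_memory.load_memory_variables({}))
# -> roughly: {'history': "Human: hi! I'm bob\nAI: Hello Bob!"}

# Message-based buffer: history can be returned as a list of message objects.
message_memory = ConversationBufferMemory(return_messages=True)
message_memory.save_context({"input": "hi! I'm bob"}, {"output": "Hello Bob!"})
print(message_memory.load_memory_variables({}))
# -> roughly: {'history': [HumanMessage(...), AIMessage(...)]}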

The methods for handling conversation history using existing modern primitives are:

  1. Using LangGraph persistence along with appropriate processing of the message history
  2. Using LCEL with `RunnableWithMessageHistory` combined with appropriate processing of the message history (a short sketch of such processing follows below).

Most users will find LangGraph persistence both easier to use and configure than the equivalent LCEL, especially for more complex use cases.
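
"Appropriate processing" typically means bounding the history before each model call so it fits the model's context window. As a minimal, illustrative sketch (the message contents here are invented), `trim_messages` from `langchain_core` keeps only the most recent messages:

from langchain_core.messages import AIMessage, HumanMessage, SystemMessage, trim_messages

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="hi! I'm bob"),
    AIMessage(content="Hello Bob! How can I assist you today?"),
    HumanMessage(content="what was my name?"),
]

# Keep the system message plus the latest human message. Counting with
# `len` treats each message as one "token"; swap in a real token counter
# (e.g., a chat model) in production.
trimmed = trim_messages(
    messages,
    strategy="last",
    token_counter=len,
    max_tokens=2,
    include_system=True,
    start_on="human",
)
print(trimmed)

The same trimming step can be applied inside a LangGraph node or before a `RunnableWithMessageHistory` call in either approach below.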

Set up

%%capture --no-stderr
%pip install --upgrade --quiet langchain-openai langchain

import os
from getpass import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass()

Usage with LLMChain / ConversationChain

This section shows how to migrate off `ConversationBufferMemory` or `ConversationStringBufferMemory` that's used together with either an `LLMChain` or a `ConversationChain`.

Legacy

Below is example usage of `ConversationBufferMemory` with an `LLMChain` or an equivalent `ConversationChain`.

Details
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain_core.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate(
    [
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{text}"),
    ]
)

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

legacy_chain = LLMChain(
    llm=ChatOpenAI(),
    prompt=prompt,
    memory=memory,
)

legacy_result = legacy_chain.invoke({"text": "my name is bob"})
print(legacy_result)

legacy_result = legacy_chain.invoke({"text": "what was my name"})
{'text': 'Hello Bob! How can I assist you today?', 'chat_history': [HumanMessage(content='my name is bob', additional_kwargs={}, response_metadata={}), AIMessage(content='Hello Bob! How can I assist you today?', additional_kwargs={}, response_metadata={})]}
legacy_result["text"]
'Your name is Bob. How can I assist you today, Bob?'
note

Note that there is no support for separating conversation threads in a single memory object.
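
If you need multiple concurrent conversations with the legacy classes, the usual workaround is one memory object per session. A minimal sketch; `session_memories` and `get_memory` are hypothetical names for illustration:

from langchain.memory import ConversationBufferMemory

# Hypothetical registry mapping a session id to its own memory object.
session_memories = {}


def get_memory(session_id: str) -> ConversationBufferMemory:
    # Create one buffer per conversation thread, on first use.
    if session_id not in session_memories:
        session_memories[session_id] = ConversationBufferMemory(
            memory_key="chat_history", return_messages=True
        )
    return session_memories[session_id]

# Each chain/session then gets its own memory, e.g.:
# legacy_chain = LLMChain(llm=ChatOpenAI(), prompt=prompt, memory=get_memory("user-1"))

LangGraph's checkpointer handles this for you via the `thread_id`, as shown in the next section.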

LangGraph

The example below shows how to use LangGraph to implement a `ConversationChain` or `LLMChain` with `ConversationBufferMemory`.

This example assumes that you're already somewhat familiar with LangGraph. If you're not, then please see the LangGraph Quickstart Guide for more details.

LangGraph offers a lot of additional functionality (e.g., time travel and interrupts) and will work well for other more complex (and realistic) architectures.

Details
import uuid

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph

# Define a new graph
workflow = StateGraph(state_schema=MessagesState)

# Define a chat model
model = ChatOpenAI()


# Define the function that calls the model
def call_model(state: MessagesState):
    response = model.invoke(state["messages"])
    # The returned message gets appended to the existing message list
    return {"messages": response}


# Define the single node in the graph
workflow.add_edge(START, "model")
workflow.add_node("model", call_model)


# Adding memory is straightforward in LangGraph!
memory = MemorySaver()

app = workflow.compile(
    checkpointer=memory
)


# The thread id is a unique key that identifies
# this particular conversation.
# We'll just generate a random uuid here.
# This enables a single application to manage conversations among multiple users.
thread_id = uuid.uuid4()
config = {"configurable": {"thread_id": thread_id}}


input_message = HumanMessage(content="hi! I'm bob")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()

# Here, let's confirm that the AI remembers our name!
input_message = HumanMessage(content="what was my name?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================

hi! I'm bob
================================== Ai Message ==================================

Hello Bob! How can I assist you today?
================================ Human Message =================================

what was my name?
================================== Ai Message ==================================

Your name is Bob. How can I help you today, Bob?
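
Because the graph above was compiled with a checkpointer, the saved state for a thread can be inspected after the fact; this is the same mechanism that powers the time-travel functionality mentioned earlier. A minimal sketch using the `app` and `config` from the example above:

# Inspect the latest checkpointed state for this thread.
snapshot = app.get_state(config)
print(len(snapshot.values["messages"]), "messages in the latest checkpoint")

# Walk back through earlier checkpoints of the same conversation
# (most recent first).
for state in app.get_state_history(config):
    print(len(state.values["messages"]), "messages")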

LCEL RunnableWithMessageHistory

Alternatively, if you have a simple chain, you can wrap the chat model of the chain within a `RunnableWithMessageHistory`.

Please refer to the following migration guide for more information.
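
For reference, a minimal sketch of wrapping a chat model directly; the `store` dict and `get_session_history` helper are illustrative names, and the linked guide covers the full pattern:

from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.messages import HumanMessage
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

store = {}  # maps a session id to its chat history


def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]


chain = RunnableWithMessageHistory(ChatOpenAI(), get_session_history)

# Each session id gets its own, separate history.
chain.invoke(
    [HumanMessage(content="hi! I'm bob")],
    config={"configurable": {"session_id": "1"}},
)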

Usage with a pre-built agent

This example shows usage of an Agent Executor with a pre-built agent constructed using the `create_tool_calling_agent` function.

If you are using one of the old LangChain pre-built agents, you should be able to replace that code with the new LangGraph pre-built agent, which leverages native tool-calling capabilities of chat models and will likely work better out of the box.

Legacy Usage

Details
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.memory import ConversationBufferMemory
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

model = ChatOpenAI(temperature=0)


@tool
def get_user_age(name: str) -> str:
    """Use this tool to find the user's age."""
    # This is a placeholder for the actual implementation
    if "bob" in name.lower():
        return "42 years old"
    return "41 years old"


tools = [get_user_age]

prompt = ChatPromptTemplate.from_messages(
    [
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)

# Instantiate memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Construct the tool-calling agent
agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,  # Pass the memory to the executor
)

# Verify that the agent can use tools
print(agent_executor.invoke({"input": "hi! my name is bob what is my age?"}))
print()
# Verify that the agent has access to conversation history.
# The agent should be able to answer that the user's name is bob.
print(agent_executor.invoke({"input": "do you remember my name?"}))
{'input': 'hi! my name is bob what is my age?', 'chat_history': [HumanMessage(content='hi! my name is bob what is my age?', additional_kwargs={}, response_metadata={}), AIMessage(content='Bob, you are 42 years old.', additional_kwargs={}, response_metadata={})], 'output': 'Bob, you are 42 years old.'}

{'input': 'do you remember my name?', 'chat_history': [HumanMessage(content='hi! my name is bob what is my age?', additional_kwargs={}, response_metadata={}), AIMessage(content='Bob, you are 42 years old.', additional_kwargs={}, response_metadata={}), HumanMessage(content='do you remember my name?', additional_kwargs={}, response_metadata={}), AIMessage(content='Yes, your name is Bob.', additional_kwargs={}, response_metadata={})], 'output': 'Yes, your name is Bob.'}

LangGraph

You can follow the standard LangChain tutorial for building an agent for an in-depth explanation of how this works.

This example is shown here explicitly to make it easier for users to compare the legacy implementation with the corresponding LangGraph implementation.

This example shows how to add memory to the pre-built ReAct agent in LangGraph.

For more details, please see the how to add memory to the prebuilt ReAct agent guide in LangGraph.

Details
import uuid

from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent


@tool
def get_user_age(name: str) -> str:
    """Use this tool to find the user's age."""
    # This is a placeholder for the actual implementation
    if "bob" in name.lower():
        return "42 years old"
    return "41 years old"


memory = MemorySaver()
model = ChatOpenAI()
app = create_react_agent(
    model,
    tools=[get_user_age],
    checkpointer=memory,
)

# The thread id is a unique key that identifies
# this particular conversation.
# We'll just generate a random uuid here.
# This enables a single application to manage conversations among multiple users.
thread_id = uuid.uuid4()
config = {"configurable": {"thread_id": thread_id}}

# Tell the AI that our name is Bob, and ask it to use a tool to confirm
# that it's capable of working like an agent.
input_message = HumanMessage(content="hi! I'm bob. What is my age?")

for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()

# Confirm that the chat bot has access to previous conversation
# and can respond to the user saying that the user's name is Bob.
input_message = HumanMessage(content="do you remember my name?")

for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================

hi! I'm bob. What is my age?
================================== Ai Message ==================================
Tool Calls:
  get_user_age (call_oEDwEbIDNdokwqhAV6Azn47c)
 Call ID: call_oEDwEbIDNdokwqhAV6Azn47c
  Args:
    name: bob
================================= Tool Message =================================
Name: get_user_age

42 years old
================================== Ai Message ==================================

Bob, you are 42 years old! If you need any more assistance or information, feel free to ask.
================================ Human Message =================================

do you remember my name?
================================== Ai Message ==================================

Yes, your name is Bob. If you have any other questions or need assistance, feel free to ask!

If we use a different thread ID, it'll start a new conversation and the bot will not know our name!

config = {"configurable": {"thread_id": "123456789"}}

input_message = HumanMessage(content="hi! do you remember my name?")

for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================

hi! do you remember my name?
================================== Ai Message ==================================

Hello! Yes, I remember your name. It's great to see you again! How can I assist you today?

Next steps

  1. Explore persistence with LangGraph.
  2. Add persistence with simple LCEL (favor LangGraph for more complex use cases).
  3. Work with message history.
