ConversationEntityMemory allows the conversation system to retain facts about specific entities mentioned during the dialogue.
It extracts information about entities from the conversation and accumulates knowledge about those entities over time, using an LLM for both steps.
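Conceptually, the entity store behaves like a dictionary keyed by entity name. The toy sketch below illustrates the extract-then-accumulate flow; it is not LangChain internals, and the stand-in functions (naive capitalized-word extraction, plain string concatenation) replace the LLM calls that ConversationEntityMemory actually makes.

```python
# Toy illustration of the entity-memory idea (not LangChain internals).
# In ConversationEntityMemory, both steps below are performed by an LLM;
# here, simple stand-ins show the extract-then-accumulate flow.

def extract_entities(text):
    """Stand-in for LLM entity extraction: take capitalized words."""
    return {word.strip(".,") for word in text.split() if word[:1].isupper()}

class ToyEntityStore:
    def __init__(self):
        self.store = {}

    def update(self, text):
        """Stand-in for LLM summarization: append each new fact."""
        for entity in extract_entities(text):
            previous = self.store.get(entity)
            self.store[entity] = f"{previous} {text}" if previous else text

memory = ToyEntityStore()
memory.update("Amelia is a landscape photographer.")
memory.update("Amelia collaborates with David.")
print(memory.store["Amelia"])
# → Amelia is a landscape photographer. Amelia collaborates with David.
```

The real implementation replaces both stand-ins with LLM calls, so summaries are merged intelligently rather than merely concatenated.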
Alternatively, you can set OPENAI_API_KEY in a .env file and load it.
[Note] This is not necessary if you've already set OPENAI_API_KEY in previous steps.
from dotenv import load_dotenv

load_dotenv(override=True)
True
Entity Memory Conversation Example
This example demonstrates how to use ConversationEntityMemory to store and manage information about entities mentioned during a conversation. The conversation accumulates ongoing knowledge about these entities while maintaining a natural flow.
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory.entity import ConversationEntityMemory
from langchain.prompts import PromptTemplate

entity_memory_conversation_template = PromptTemplate(
    input_variables=["entities", "history", "input"],
    template="""
You are an assistant to a human, powered by a large language model trained by OpenAI.

You assist with various tasks, from answering simple questions to providing detailed discussions on a wide range of topics. You can generate human-like text, allowing natural conversations and coherent, relevant responses.

You constantly learn and improve, processing large amounts of text to provide accurate and informative responses. You can use personalized information provided in the context below, along with your own generated knowledge.

Context:
{entities}

Current conversation:
{history}
Last line:
Human: {input}
You:
""",
)

print(entity_memory_conversation_template)
input_variables=['entities', 'history', 'input'] input_types={} partial_variables={} template='\nYou are an assistant to a human, powered by a large language model trained by OpenAI.\n\nYou assist with various tasks, from answering simple questions to providing detailed discussions on a wide range of topics. You can generate human-like text, allowing natural conversations and coherent, relevant responses.\n\nYou constantly learn and improve, processing large amounts of text to provide accurate and informative responses. You can use personalized information provided in the context below, along with your own generated knowledge.\n\nContext:\n{entities}\n\nCurrent conversation:\n{history}\nLast line:\nHuman: {input}\nYou:\n'
# Create a ConversationChain that uses entity memory
# (the memory needs its own LLM to extract and summarize entities)
llm = ChatOpenAI(temperature=0)

conversation = ConversationChain(
    llm=llm,
    prompt=entity_memory_conversation_template,
    memory=ConversationEntityMemory(llm=llm),
)

# Input conversation
response = conversation.predict(
    input=(
        "Amelia is an award-winning landscape photographer who has traveled around the globe capturing natural wonders. "
        "David is a wildlife conservationist dedicated to protecting endangered species. "
        "They are planning to open a nature-inspired photography gallery and learning center that raises funds for conservation projects."
    )
)

# Print the assistant's response
print(response)
That sounds like a fantastic initiative! Combining Amelia's stunning landscape photography with David's passion for wildlife conservation could create a powerful platform for raising awareness and funds. What kind of exhibits or programs are they considering for the gallery and learning center?
Retrieving Entity Memory
Let's examine the entity information stored in memory by inspecting the memory.entity_store.store attribute to verify memory retention.
# Print the entity memory
conversation.memory.entity_store.store
{'Amelia': 'Amelia is an award-winning landscape photographer who has traveled around the globe capturing natural wonders and is planning to open a nature-inspired photography gallery and learning center with David, a wildlife conservationist, to raise funds for conservation projects.',
'David': 'David is a wildlife conservationist dedicated to protecting endangered species, and he is planning to open a nature-inspired photography gallery and learning center with Amelia that raises funds for conservation projects.'}