Unlike ConversationEntityMemory, which manages information about individual entities in a key-value format, ConversationKGMemory (Conversation Knowledge Graph Memory) is a module that manages relationships between entities in a graph format.
It extracts knowledge triplets (subject-relationship-object) to identify and store complex relationships between entities, and it lets you explore how entities are connected through the graph structure. For example, the sentence "Shelly Kim lives in Pangyo" yields the triplet (Shelly Kim, lives in, Pangyo).
This helps the model understand relationships between different entities and respond better to queries that depend on complex networks and historical context.
# Set environment variables
from langchain_opentutorial import set_env

set_env(
    {
        "OPENAI_API_KEY": "",
        "LANGCHAIN_API_KEY": "",
        "LANGCHAIN_TRACING_V2": "true",
        "LANGCHAIN_ENDPOINT": "https://api.smith.langchain.com",
        "LANGCHAIN_PROJECT": "05-ConversationKGMemory",  # Set this to match the title
    }
)
Environment variables have been set successfully.
You can alternatively set API keys such as OPENAI_API_KEY in a .env file and load them.
[Note] This is not necessary if you've already set the required API keys in previous steps.
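For reference, a minimal .env file could look like the following (placeholder values; substitute your own keys):
# .env (placeholder values)
OPENAI_API_KEY=<your-openai-api-key>
LANGCHAIN_API_KEY=<your-langchain-api-key>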
# Load API keys from .env file
from dotenv import load_dotenv
load_dotenv(override=True)
True
Conversation Knowledge Graph Memory
ConversationKGMemory is a memory module that stores and manages information extracted from conversations in a graph structure. This example demonstrates the following key features:
Storing conversation context (save_context)
(Reference) Getting a list of entity names in the graph, sorted by causal dependence (get_topological_sort)
Extracting entities from the current conversation (get_current_entities)
The following example shows the process of extracting entities and relationships from a conversation about a new designer, Shelly Kim, and storing them in a graph format. A short inspection of the resulting graph follows the code.
from langchain_openai import ChatOpenAI
from langchain_community.memory.kg import ConversationKGMemory

llm = ChatOpenAI(model_name="gpt-4o", temperature=0)

memory = ConversationKGMemory(llm=llm, return_messages=True)
memory.save_context(
    {"input": "This is Shelly Kim who lives in Pangyo."},
    {"output": "Hello Shelly, nice to meet you! What kind of work do you do?"},
)
memory.save_context(
    {"input": "Shelly Kim is our company's new designer."},
    {
        "output": "That's great! Welcome to our team. I hope you'll enjoy working with us."
    },
)
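As a quick check, you can inspect what was extracted. The snippet below is a sketch using three methods that ConversationKGMemory and its underlying graph expose: get_current_entities, get_knowledge_triplets, and kg.get_topological_sort (the feature list above). The exact results depend on what the LLM extracts, so the commented outputs are only illustrative.
# Inspect what the memory has extracted (outputs depend on the LLM's extraction)
memory.get_current_entities("Who is Shelly Kim?")
# e.g., ['Shelly Kim']

memory.get_knowledge_triplets("Shelly Kim lives in Pangyo.")
# e.g., [KnowledgeTriple(subject='Shelly Kim', predicate='lives in', object_='Pangyo')]

# The underlying NetworkxEntityGraph can list entity names sorted by causal dependence
memory.kg.get_topological_sort()
# e.g., ['Shelly Kim', 'Pangyo']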
The load_memory_variables method operates through the following steps:
1. Entity Extraction
Extracts entities (e.g., "Shelly Kim") from the input "Who is Shelly Kim?"
Internally uses the get_current_entities method.
2. Knowledge Retrieval
Searches for all knowledge triplets related to the extracted entities.
Queries the graph for information previously stored via save_context.
3. Information Formatting
Converts found triplets into system messages.
Returns a list of message objects due to the return_messages=True setting.
This method retrieves relevant information from the stored knowledge graph and returns it in a structured format, which can then be used as context for subsequent conversations with the language model.
memory.load_memory_variables({"input": "Who is Shelly Kim?"})
{'history': [SystemMessage(content="On Shelly Kim: Shelly Kim lives in Pangyo. Shelly Kim is our company's new designer.", additional_kwargs={}, response_metadata={})]}
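For illustration, the same three steps can be reproduced by hand. This is only a sketch of the idea, not the method's exact internals; it relies on get_current_entities and on get_entity_knowledge, a method of the underlying NetworkxEntityGraph that returns the stored knowledge strings for an entity.
# Reproduce load_memory_variables step by step (a sketch)
entities = memory.get_current_entities("Who is Shelly Kim?")  # 1. entity extraction

for entity in entities:
    # 2. knowledge retrieval: all stored knowledge strings for this entity
    knowledge = memory.kg.get_entity_knowledge(entity)
    # 3. information formatting: the real method wraps such a summary in a SystemMessage
    print(f"On {entity}: {' '.join(knowledge)}")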
Applying KG Memory to Chain
This section demonstrates how to use ConversationKGMemory with ConversationChain.
(The ConversationChain class was deprecated in LangChain 0.2.7 and will be removed in 1.0. If you prefer, you can skip ahead to Applying KG Memory with LCEL.)
from langchain_community.memory.kg import ConversationKGMemory
from langchain_core.prompts.prompt import PromptTemplate
from langchain.chains import ConversationChain
llm = ChatOpenAI(model_name="gpt-4o", temperature=0)
template = """The following is a friendly conversation between a human and an AI.
The AI is talkative and provides lots of specific details from its context.
If the AI does not know the answer to a question, it truthfully says it does not know.
The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.
Relevant Information:
{history}
Conversation:
Human: {input}
AI:"""
prompt = PromptTemplate(input_variables=["history", "input"], template=template)
conversation_with_kg = ConversationChain(
    llm=llm, prompt=prompt, memory=ConversationKGMemory(llm=llm)
)
C:\Users\Caelu\AppData\Local\Temp\ipykernel_5648\1729312250.py:21: LangChainDeprecationWarning: The class `ConversationChain` was deprecated in LangChain 0.2.7 and will be removed in 1.0. Use :meth:`~RunnableWithMessageHistory: https://python.langchain.com/v0.2/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html` instead.
conversation_with_kg = ConversationChain(
Let's initialize the conversation with some basic information.
conversation_with_kg.predict(
    input="My name is Teddy. Shelly is a coworker of mine, and she's a new designer at our company."
)
"Hi Teddy! It's great to meet you. It sounds like you and Shelly are working together in a creative environment. Being a new designer, Shelly must be bringing fresh ideas and perspectives to your team. How has it been working with her so far?"
Let's query the memory for information about Shelly.
conversation_with_kg.memory.load_memory_variables({"input": "who is Shelly?"})
{'history': 'On Shelly: Shelly is a coworker of Teddy. Shelly is a new designer. Shelly works at our company.'}
You can also reset the memory by calling memory.clear().
conversation_with_kg.memory.clear()
conversation_with_kg.memory.load_memory_variables({"input": "who is Shelly?"})
{'history': ''}
Applying KG Memory with LCEL
Let's build a custom ConversationChain that uses ConversationKGMemory with LCEL, then examine the memory after having a conversation.
from operator import itemgetter
from langchain_community.memory.kg import ConversationKGMemory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-4o", temperature=0)

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """The following is a friendly conversation between a human and an AI.
The AI is talkative and provides lots of specific details from its context.
If the AI does not know the answer to a question, it truthfully says it does not know.
The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:
{history}""",
        ),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
    ]
)

memory = ConversationKGMemory(llm=llm, return_messages=True, memory_key="history")


class ConversationChain:
    def __init__(self, prompt, llm, memory):
        self.memory = memory
        self.chain = (
            RunnablePassthrough()
            | RunnablePassthrough.assign(
                history=RunnableLambda(memory.load_memory_variables)
                | itemgetter("history")
            )
            | prompt
            | llm
        )

    def invoke(self, input_dict):
        response = self.chain.invoke(input_dict)
        self.memory.save_context(input_dict, {"output": response.content})
        return response


conversation_with_kg = ConversationChain(prompt, llm, memory)
Let's initialize the conversation with some basic information.
response = conversation_with_kg.invoke(
    {
        "input": "My name is Teddy. Shelly is a coworker of mine, and she's a new designer at our company."
    }
)
response.content
"Hi Teddy! It's nice to meet you. It sounds like you and Shelly are working together at your company. How's everything going with the new designer on board?"
Let's query the memory for information about Shelly.
conversation_with_kg.memory.load_memory_variables({"input": "who is Shelly?"})
{'history': [SystemMessage(content='On Shelly: Shelly is a coworker of Teddy. Shelly is a new designer. Shelly works at our company.', additional_kwargs={}, response_metadata={})]}
You can also reset the memory by calling memory.clear().
conversation_with_kg.memory.clear()
conversation_with_kg.memory.load_memory_variables({"input": "who is Shelly?"})
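As before, the memory is now empty. Since this chain was built with return_messages=True, the cleared history comes back as an empty list ({'history': []}) rather than the empty string seen earlier.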