ConversationKGMemory

  • Author: Secludor

  • Design: Secludor

  • Peer Review: ulysyszh, Jinu Cho

  • Proofread: Juni Lee

  • This is a part of LangChain Open Tutorial

Overview

Unlike ConversationEntityMemory, which manages information about individual entities in a key-value format, ConversationKGMemory (Conversation Knowledge Graph Memory) is a module that manages relationships between entities in a graph format.

It extracts and structures knowledge triplets (subject-relationship-object) to identify and store complex relationships between entities, and it allows exploration of entity connectivity through the graph structure.

This helps the model understand relationships between different entities and respond better to queries that depend on complex networks and historical context.
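
For orientation, here is a minimal sketch of the triplet structure itself, using the KnowledgeTriple type that the examples below return (a sketch assuming the langchain_community import path):

from langchain_community.graphs.networkx_graph import KnowledgeTriple

# A triplet captures one directed relationship between two entities.
triple = KnowledgeTriple(subject="Shelly Kim", predicate="lives in", object_="Pangyo")
print(f"{triple.subject} --[{triple.predicate}]--> {triple.object_}")
# Shelly Kim --[lives in]--> Pangyo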

Table of Contents

  • Overview
  • References
  • Environment Setup
  • Conversation Knowledge Graph Memory
  • Applying KG Memory to Chain
  • Applying KG Memory with LCEL

References

  • LangChain Python API Reference > langchain-community 0.3.13 > memory > ConversationKGMemory
  • LangChain Python API Reference > langchain-community 0.2.16 > NetworkxEntityGraph

Environment Setup

Set up the environment. You may refer to Environment Setup for more details.

[Note]

  • langchain-opentutorial is a package that provides a set of easy-to-use environment setup, useful functions, and utilities for tutorials. You can check out the langchain-opentutorial package for more details.

%%capture --no-stderr
%pip install langchain-opentutorial
# Install required packages
from langchain_opentutorial import package

package.install(
    [
        "langsmith",
        "langchain",
        "langchain_core",
        "langchain_community",
        "langchain_openai",
    ],
    verbose=False,
    upgrade=False,
)
# Set environment variables
from langchain_opentutorial import set_env

set_env(
    {
        "OPENAI_API_KEY": "",
        "LANGCHAIN_API_KEY": "",
        "LANGCHAIN_TRACING_V2": "true",
        "LANGCHAIN_ENDPOINT": "https://api.smith.langchain.com",
        "LANGCHAIN_PROJECT": "05-ConversationKGMemory",  # title 과 동일하게 설정해 주세요
    }
)
Environment variables have been set successfully.

You can alternatively set API keys such as OPENAI_API_KEY in a .env file and load them.

[Note] This is not necessary if you've already set the required API keys in previous steps.

# Load API keys from .env file
from dotenv import load_dotenv

load_dotenv(override=True)
True

Conversation Knowledge Graph Memory

ConversationKGMemory is a memory module that stores and manages information extracted from conversations in a graph structure.

This example demonstrates the following key features:

  • Storing conversation context (save_context)

  • (Reference) Getting a list of entity names in the graph, sorted by causal dependence (get_topological_sort)

  • Extracting entities from current conversation (get_current_entities)

  • Extracting knowledge triplets (get_knowledge_triplets)

  • Retrieving stored memory (load_memory_variables)

The following example shows the process of extracting entities and relationships from a conversation about a new designer, Shelly Kim, and storing them in a graph format.

from langchain_openai import ChatOpenAI
from langchain_community.memory.kg import ConversationKGMemory
llm = ChatOpenAI(model_name="gpt-4o", temperature=0)

memory = ConversationKGMemory(llm=llm, return_messages=True)
memory.save_context(
    {"input": "This is Shelly Kim who lives in Pangyo."},
    {"output": "Hello Shelly, nice to meet you! What kind of work do you do?"},
)
memory.save_context(
    {"input": "Shelly Kim is our company's new designer."},
    {
        "output": "That's great! Welcome to our team. I hope you'll enjoy working with us."
    },
)
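
As a quick sanity check, you can inspect the raw triples now held by the underlying graph. This is a sketch assuming the NetworkxEntityGraph API exposed through memory.kg; the exact triples depend on what the LLM extracted.

# Each tuple is (subject, object, relation) as stored on the graph's edges.
memory.kg.get_triples()
# e.g. [('Shelly Kim', 'Pangyo', 'lives in'),
#       ('Shelly Kim', "our company's new designer", 'is')]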

(Reference) get_topological_sort() → List[str]

You can use the get_topological_sort method to view all entities stored in the knowledge graph in topological order:

This method:

  • Uses the NetworkX library to analyze the knowledge graph structure.

  • Performs topological sorting based on directed edges.

  • Returns a list of entities in dependency order.

The order reflects the relationships between entities in the conversation, showing how they are connected in the knowledge graph.

memory.kg.get_topological_sort()
['Shelly Kim', 'Pangyo', "our company's new designer"]
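
Under the hood this is plain NetworkX. Here is a minimal sketch of the same computation, rebuilt from the conversation above (the order of nodes that don't depend on each other may vary):

import networkx as nx

# Rebuild the directed graph produced by the two save_context calls:
# edges point from subject to object, with the relation kept as edge data.
g = nx.DiGraph()
g.add_edge("Shelly Kim", "Pangyo", relation="lives in")
g.add_edge("Shelly Kim", "our company's new designer", relation="is")

print(list(nx.topological_sort(g)))
# ['Shelly Kim', 'Pangyo', "our company's new designer"]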

get_current_entities(input_string: str) → List[str]

Here's how the get_current_entities method works:

1. Entity Extraction Chain Creation

  • Creates an LLMChain using the entity_extraction_prompt template.

  • This prompt is designed to extract proper nouns from the last line of the conversation.

2. Context Processing

  • Retrieves the last k*2 messages from the buffer (default: k=2).

  • Generates conversation history string using human_prefix and ai_prefix.

3. Entity Extraction

  • Extracts proper nouns from the input string "Who is Shelly Kim?"

  • Primarily recognizes words starting with capital letters as proper nouns.

  • In this case, "Shelly Kim" is extracted as an entity.

This method extracts entities only from the question itself, while the previous conversation context is used only for reference.

memory.get_current_entities({"input": "Who is Shelly Kim?"})
['Shelly Kim']
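
If you want to see the pieces this method relies on, you can inspect them directly. A sketch, assuming the default ConversationKGMemory attribute names (k and entity_extraction_prompt):

# The context window size: the last k*2 messages are used (k defaults to 2).
print(memory.k)

# The prompt template that instructs the LLM to extract proper nouns
# from the last line of the conversation.
print(memory.entity_extraction_prompt.template[:300])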

get_knowledge_triplets(input_string: str) → List[KnowledgeTriple]

The get_knowledge_triplets method operates as follows:

1. Knowledge Triple Extraction Chain

  • Creates an LLMChain using the knowledge_extraction_prompt template (KNOWLEDGE_TRIPLE_EXTRACTION_PROMPT).

  • Designed to extract triples in (subject-relation-object) format from the given text.

2. Memory Search

  • Searches for information related to "Shelly" from previously stored conversations.

  • Stored context:

    • "This is Shelly Kim who lives in Pangyo."

    • "Shelly Kim is our company's new designer."

3. Triple Extraction

  • Generates the following triples from the retrieved information:

    • (Shelly Kim, lives in, Pangyo)

    • (Shelly Kim, is, designer)

    • (Shelly Kim, works at, our company)

This method extracts relationship information in triple format from all stored conversation content related to a specific entity.

(
    memory.get_knowledge_triplets({"input": "Shelly"}),
    "\n",
    memory.get_knowledge_triplets({"input": "Pangyo"}),
    "\n",
    memory.get_knowledge_triplets({"input": "designer"}),
    "\n",
    memory.get_knowledge_triplets({"input": "Langchain"}),
)
([KnowledgeTriple(subject='Shelly Kim', predicate='lives in', object_='Pangyo'),
      KnowledgeTriple(subject='Shelly Kim', predicate='is', object_="company's new designer")],
     '\n',
     [KnowledgeTriple(subject='Shelly Kim', predicate='lives in', object_='Pangyo')],
     '\n',
     [KnowledgeTriple(subject='Shelly Kim', predicate='is a', object_='designer')],
     '\n',
     [])

load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any]

The load_memory_variables method operates through the following steps:

1. Entity Extraction

  • Extracts entities (e.g., "Shelly Kim") from the input "Who is Shelly Kim?"

  • Internally uses the get_current_entities method.

2. Knowledge Retrieval

  • Searches for all knowledge triplets related to the extracted entities.

  • Queries the graph for information previously stored via the save_context method.

3. Information Formatting

  • Converts found triplets into system messages.

  • Returns a list of message objects due to the return_messages=True setting.

This method retrieves relevant information from the stored knowledge graph and returns it in a structured format, which can then be used as context for subsequent conversations with the language model.

memory.load_memory_variables({"input": "Who is Shelly Kim?"})
{'history': [SystemMessage(content="On Shelly Kim: Shelly Kim lives in Pangyo. Shelly Kim is our company's new designer.", additional_kwargs={}, response_metadata={})]}

Applying KG Memory to Chain

This section demonstrates how to use ConversationKGMemory with ConversationChain.

(The ConversationChain class was deprecated in LangChain 0.2.7 and will be removed in 1.0. If you prefer, you can skip ahead to Applying KG Memory with LCEL.)

from langchain_community.memory.kg import ConversationKGMemory
from langchain_core.prompts.prompt import PromptTemplate
from langchain.chains import ConversationChain

llm = ChatOpenAI(model_name="gpt-4o", temperature=0)

template = """The following is a friendly conversation between a human and an AI. 
The AI is talkative and provides lots of specific details from its context. 
If the AI does not know the answer to a question, it truthfully says it does not know. 
The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:

{history}

Conversation:
Human: {input}
AI:"""
prompt = PromptTemplate(input_variables=["history", "input"], template=template)

conversation_with_kg = ConversationChain(
    llm=llm, prompt=prompt, memory=ConversationKGMemory(llm=llm)
)
C:\Users\Caelu\AppData\Local\Temp\ipykernel_5648\1729312250.py:21: LangChainDeprecationWarning: The class `ConversationChain` was deprecated in LangChain 0.2.7 and will be removed in 1.0. Use :meth:`~RunnableWithMessageHistory: https://python.langchain.com/v0.2/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html` instead.
      conversation_with_kg = ConversationChain(

Let's initialize the conversation with some basic information.

conversation_with_kg.predict(
    input="My name is Teddy. Shelly is a coworker of mine, and she's a new designer at our company."
)
"Hi Teddy! It's great to meet you. It sounds like you and Shelly are working together in a creative environment. Being a new designer, Shelly must be bringing fresh ideas and perspectives to your team. How has it been working with her so far?"

Let's query the memory for information about Shelly.

conversation_with_kg.memory.load_memory_variables({"input": "who is Shelly?"})
{'history': 'On Shelly: Shelly is a coworker of Teddy. Shelly is a new designer. Shelly works at our company.'}

You can also reset the memory with memory.clear().

conversation_with_kg.memory.clear()
conversation_with_kg.memory.load_memory_variables({"input": "who is Shelly?"})
{'history': ''}

Applying KG Memory with LCEL

Let's build a custom ConversationChain with ConversationKGMemory using LCEL, have a conversation, and then examine the memory.

from operator import itemgetter
from langchain_community.memory.kg import ConversationKGMemory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-4o", temperature=0)

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            """The following is a friendly conversation between a human and an AI. 
The AI is talkative and provides lots of specific details from its context. 
If the AI does not know the answer to a question, it truthfully says it does not know. 
The AI ONLY uses information contained in the "Relevant Information" section and does not hallucinate.

Relevant Information:
{history}""",
        ),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
    ]
)

memory = ConversationKGMemory(llm=llm, return_messages=True, memory_key="history")


class ConversationChain:
    def __init__(self, prompt, llm, memory):
        self.memory = memory
        self.chain = (
            RunnablePassthrough()
            | RunnablePassthrough.assign(
                history=RunnableLambda(memory.load_memory_variables)
                | itemgetter("history")
            )
            | prompt
            | llm
        )

    def invoke(self, input_dict):
        response = self.chain.invoke(input_dict)
        self.memory.save_context(input_dict, {"output": response.content})
        return response


conversation_with_kg = ConversationChain(prompt, llm, memory)

Let's initialize the conversation with some basic information.

response = conversation_with_kg.invoke(
    {
        "input": "My name is Teddy. Shelly is a coworker of mine, and she's a new designer at our company."
    }
)
response.content
"Hi Teddy! It's nice to meet you. It sounds like you and Shelly are working together at your company. How's everything going with the new designer on board?"

Let's query the memory for information about Shelly.

conversation_with_kg.memory.load_memory_variables({"input": "who is Shelly?"})
{'history': [SystemMessage(content='On Shelly: Shelly is a coworker of Teddy. Shelly is a new designer. Shelly works at our company.', additional_kwargs={}, response_metadata={})]}

You can also reset the memory with memory.clear().

conversation_with_kg.memory.clear()
conversation_with_kg.memory.load_memory_variables({"input": "who is Shelly?"})
{'history': []}
