ConversationKGMemory


Overview

Unlike ConversationEntityMemory, which manages information about individual entities in a key-value format, ConversationKGMemory (Conversation Knowledge Graph Memory) is a module that manages relationships between entities in a graph format.

It extracts and structures knowledge triplets (subject-relationship-object) to identify and store complex relationships between entities, and allows exploration of entity connectivity through the graph structure.

This helps the model understand relationships between different entities and better respond to queries based on complex networks and historical context.


Environment Setup

Set up the environment. You may refer to Environment Setup for more details.

[Note]

  • langchain-opentutorial is a package that provides easy-to-use environment setup, useful functions, and utilities for tutorials.

  • You can check out langchain-opentutorial for more details.

You can alternatively set API keys such as OPENAI_API_KEY in a .env file and load them.

[Note] This is not necessary if you've already set the required API keys in previous steps.

Conversation Knowledge Graph Memory

ConversationKGMemory is a memory module that stores and manages information extracted from conversations in a graph structure.

This example demonstrates the following key features:

  • Storing conversation context (save_context)

  • (Reference) Getting a list of entity names in the graph in topological (dependency) order (get_topological_sort)

  • Extracting entities from current conversation (get_current_entities)

  • Extracting knowledge triplets (get_knowledge_triplets)

  • Retrieving stored memory (load_memory_variables)

The following example shows the process of extracting entities and relationships from a conversation about a new designer, Shelly Kim, and storing them in a graph format.

(Reference) get_topological_sort() → List[str]

You can use the get_topological_sort method to view all entities stored in the knowledge graph in topological order:

This method:

  • Uses NetworkX library to analyze the knowledge graph structure.

  • Performs topological sorting based on directed edges.

  • Returns a list of entities in dependency order.

The order reflects the relationships between entities in the conversation, showing how they are connected in the knowledge graph.

get_current_entities(input_string: str) → List[str]

Here's how the get_current_entities method works:

1. Entity Extraction Chain Creation

  • Creates an LLMChain using the entity_extraction_prompt template.

  • This prompt is designed to extract proper nouns from the last line of the conversation.

2. Context Processing

  • Retrieves the last k*2 messages from the buffer (default: k=2).

  • Generates conversation history string using human_prefix and ai_prefix.

3. Entity Extraction

  • Extracts proper nouns from the input string "Who is Shelly Kim?"

  • Primarily recognizes words starting with capital letters as proper nouns.

  • In this case, "Shelly Kim" is extracted as an entity.

This method extracts entities only from the question itself; the previous conversation context is used solely for reference.

get_knowledge_triplets(input_string: str) → List[KnowledgeTriple]

The get_knowledge_triplets method operates as follows:

1. Knowledge Triple Extraction Chain

  • Creates an LLMChain using the knowledge_triplet_extraction_prompt template.

  • Designed to extract triples in (subject-relation-object) format from the given text.

2. Memory Search

  • Searches for information related to "Shelly" from previously stored conversations.

  • Stored context:

    • "This is Shelly Kim who lives in Pangyo."

    • "Shelly Kim is our company's new designer."

3. Triple Extraction

  • Generates the following triples from the retrieved information:

    • (Shelly Kim, lives in, Pangyo)

    • (Shelly Kim, is, designer)

    • (Shelly Kim, works at, our company)

This method extracts relationship information in triple format from all stored conversation content related to a specific entity.

load_memory_variables(inputs: Dict[str, Any]) → Dict[str, Any]

The load_memory_variables method operates through the following steps:

1. Entity Extraction

  • Extracts entities (e.g., "Shelly Kim") from the input "Who is Shelly Kim?"

  • Internally uses the get_current_entities method.

2. Knowledge Retrieval

  • Searches for all knowledge triplets related to the extracted entities.

  • Queries the graph for information previously stored via save_context method.

3. Information Formatting

  • Converts found triplets into system messages.

  • Returns a list of message objects due to the return_messages=True setting.

This method retrieves relevant information from the stored knowledge graph and returns it in a structured format, which can then be used as context for subsequent conversations with the language model.

Applying KG Memory to Chain

This section demonstrates how to use ConversationKGMemory with ConversationChain .

(The class ConversationChain was deprecated in LangChain 0.2.7 and will be removed in 1.0. If you prefer, you can skip ahead to Applying KG Memory with LCEL.)

Let's initialize the conversation with some basic information.

Let's query the memory for information about Shelly.

You can also reset the memory by calling memory.clear().

Applying KG Memory with LCEL

Let's examine the memory after having a conversation with a custom chain that combines ConversationKGMemory and LCEL.

Let's initialize the conversation with some basic information.

Let's query the memory for information about Shelly.

You can also reset the memory by calling memory.clear().
