LCEL (Remembering Conversation History): Adding Memory

Overview

This tutorial demonstrates how to add memory to arbitrary chains using LCEL.

The LangChain Expression Language (LCEL) takes a declarative approach to building new Runnables from existing Runnables. For more details about LCEL, please refer to the References below.

Table of Contents

  • Overview
  • Environment Setup
  • Initializing Model and Prompt
  • Creating Memory
  • Adding Memory to Chain
  • Example Implementation of a Custom ConversationChain
  • References


Environment Setup

Set up the environment. You may refer to Environment Setup for more details.

[Note]

  • langchain-opentutorial is a package that provides easy-to-use environment setup, along with useful functions and utilities for the tutorials.

  • You can check out langchain-opentutorial for more details.
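
For reference, a minimal installation sketch is shown below; the package list is an assumption based on what this tutorial uses, so adjust it to your environment.

```python
# Install the packages used in this tutorial.
# The exact package list is an assumption; adjust it to your environment.
%pip install -qU langchain langchain-core langchain-openai langchain-opentutorial
```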

Alternatively, environment variables can also be set using a .env file.

[Note]

  • This is not necessary if you've already set the environment variables in the previous step.
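
A minimal sketch, assuming your keys live in a .env file and python-dotenv is installed:

```python
from dotenv import load_dotenv

# Reads the .env file in the current directory and sets its entries
# (e.g., OPENAI_API_KEY) as environment variables for this process.
load_dotenv(override=True)
```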

Initializing Model and Prompt

Now, let's initialize the model and the prompt we'll use.
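
Here is a minimal sketch. The model name, system message, and the variable names chat_history and input are illustrative assumptions; the key piece is the MessagesPlaceholder, which marks where the conversation history will be injected.

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

# LLM used for the conversation (the model name is an assumption; any chat model works).
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Prompt with a placeholder where the conversation history will be injected.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful chatbot."),
        MessagesPlaceholder(variable_name="chat_history"),
        ("human", "{input}"),
    ]
)
```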

Creating Memory

Create a ConversationBufferMemory to store conversation history.

  • return_messages: When set to True, the stored history is returned as HumanMessage and AIMessage objects rather than a single string.

  • memory_key: The key under which the history will be substituted into the chain's prompt later. This can be modified as needed.
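
A minimal sketch, using the two options described above:

```python
from langchain.memory import ConversationBufferMemory

# return_messages=True  -> history is returned as message objects,
#                          which is what MessagesPlaceholder expects.
# memory_key="chat_history" -> must match the variable name used in the prompt.
memory = ConversationBufferMemory(return_messages=True, memory_key="chat_history")
```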

Check the saved conversation history.

Since nothing has been saved yet, the conversation history is empty.
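
Calling load_memory_variables shows what is currently stored:

```python
# Nothing has been saved yet, so the history is an empty list.
memory.load_memory_variables({})
# -> {'chat_history': []}
```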

Use RunnablePassthrough.assign to assign the result of memory.load_memory_variables to the chat_history variable, then extract the value stored under the chat_history key from that result.

Hold on a second! What is...

RunnablePassthrough? RunnableLambda?

To put it simply, RunnablePassthrough provides the functionality to pass through data as is, while RunnableLambda provides the functionality to execute user-defined functions.

When you call RunnablePassthrough alone, it simply passes the input as received. However, when you use RunnablePassthrough.assign, it delivers the input combined with additional arguments provided to the function.

Let's look at the code for more details.
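
Here is a sketch of that step, assuming the memory object created above:

```python
from operator import itemgetter

from langchain_core.runnables import RunnableLambda, RunnablePassthrough

runnable = RunnablePassthrough.assign(
    # Load the stored history, then pull out the list under the "chat_history" key.
    chat_history=RunnableLambda(memory.load_memory_variables)
    | itemgetter("chat_history")
)

runnable.invoke({"input": "hi!"})
# -> {'input': 'hi!', 'chat_history': []}
```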

Since RunnablePassthrough.assign is used, the returned value is a combination of the input and the additional arguments provided to the function.

In this case, the key of the additional argument is chat_history. Its value is the result of memory.load_memory_variables (executed through RunnableLambda), from which itemgetter extracts the entry stored under the chat_history key.

Adding Memory to Chain

Let's add memory to the chain using LCEL.
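
Assuming the runnable, prompt, and llm defined above, the memory-aware chain can be composed like this:

```python
# The runnable injects chat_history into the input dict,
# the prompt formats the messages, and the model generates a reply.
chain = runnable | prompt | llm
```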

Proceed with the first conversation.
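
For example (the name "Teddy" is just a sample input):

```python
# First turn: introduce yourself so there is something worth remembering.
user_message = "Nice to meet you. My name is Teddy."
response = chain.invoke({"input": user_message})
print(response.content)
```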

Using the memory.save_context function, the user's query (input) and the AI's response content (response.content) are saved to memory.

This stored memory keeps track of the conversation state, so the user's requests and the system's responses can be carried over into later turns.
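
A sketch of this step, reusing the user_message and response from the previous turn (the "human"/"ai" keys here are one possible choice):

```python
# Save the user's query and the AI's answer as one conversation turn.
memory.save_context({"human": user_message}, {"ai": response.content})

# The history now contains one HumanMessage / AIMessage pair.
memory.load_memory_variables({})
```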

Shall we find out if the model correctly remembers your name through memory?
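
For example:

```python
# Ask a follow-up question that can only be answered from memory.
response = chain.invoke({"input": "Do you remember my name?"})
print(response.content)
```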

It remembers the name well! This means that the memory connected using LCEL is working correctly!

Example Implementation of a Custom ConversationChain

Let's create our own custom ConversationChain!
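
Below is one possible sketch, not the library's ConversationChain: a small Runnable subclass that wires together the same pieces as before and saves each turn to memory automatically.

```python
from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import Runnable, RunnableLambda, RunnablePassthrough


class MyConversationChain(Runnable):
    """A minimal conversation chain that loads the history before each call
    and saves the new turn afterwards."""

    def __init__(self, llm, prompt, memory, input_key="input"):
        self.memory = memory
        self.input_key = input_key
        self.chain = (
            RunnablePassthrough.assign(
                chat_history=RunnableLambda(self.memory.load_memory_variables)
                | itemgetter(memory.memory_key)
            )
            | prompt
            | llm
            | StrOutputParser()
        )

    def invoke(self, query, config=None, **kwargs):
        # Run the chain, then persist this turn to memory.
        answer = self.chain.invoke({self.input_key: query})
        self.memory.save_context({"human": query}, {"ai": answer})
        return answer
```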

Let's do something interesting using our custom ConversationChain!
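
As one example, assuming the class above and the llm defined earlier, you could give the chain a pirate persona and keep chatting; the prompt wording and questions are just illustrative, and you can continue for as many turns as you like.

```python
from langchain.memory import ConversationBufferMemory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

pirate_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a pirate. Answer every question in pirate speak."),
        MessagesPlaceholder(variable_name="chat_history"),
        ("human", "{input}"),
    ]
)

conversation_chain = MyConversationChain(
    llm,
    pirate_prompt,
    ConversationBufferMemory(return_messages=True, memory_key="chat_history"),
)

print(conversation_chain.invoke("Nice to meet you. My name is Teddy."))
print(conversation_chain.invoke("Do you remember my name?"))
```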

Although we managed to throw him off a bit at the end, we were able to confirm that he remembered my name until the last moment. He is indeed a remarkable pirate!🏴‍☠️⚓

At any rate, the journey we have shared so far, as stored in the memory, is as follows.
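
Assuming the conversation_chain from the sketch above, you can read the accumulated turns back from its memory:

```python
# Inspect everything the chain has stored so far.
conversation_chain.memory.load_memory_variables({})["chat_history"]
```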

Now, create your own journey using the custom ConversationChain with LCEL!

Thank you for your hard work!🎉🎉🎉
