ConversationTokenBufferMemory


Overview

ConversationTokenBufferMemory stores recent conversation history in a buffer and decides when to flush older content based on token count rather than the number of message turns.
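The flushing idea can be illustrated with a minimal pure-Python sketch. This is not the LangChain implementation (the real class delegates token counting to the attached LLM); the `count_tokens` helper below is a hypothetical stand-in for a tokenizer.

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for an LLM tokenizer: one "token" per whitespace word.
    return len(text.split())

class TokenBufferMemory:
    """Illustrative token-limited buffer (not the LangChain class)."""

    def __init__(self, max_token_limit: int):
        self.max_token_limit = max_token_limit
        self.messages: list[str] = []

    def save_context(self, message: str) -> None:
        self.messages.append(message)
        # Flush the oldest messages until the buffer fits the token budget.
        while sum(count_tokens(m) for m in self.messages) > self.max_token_limit:
            self.messages.pop(0)

memory = TokenBufferMemory(max_token_limit=5)
memory.save_context("hello there")        # 2 tokens, buffer fits
memory.save_context("how are you today")  # total 6 tokens, oldest is flushed
print(memory.messages)                    # → ['how are you today']
```

The real class works the same way conceptually, but counts tokens with the model's tokenizer and stores paired human/AI messages.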

Key parameters:

  • max_token_limit: Sets the maximum token length for storing conversation content

  • return_messages: When True, returns the messages in chat format. When False, returns a string

  • human_prefix: Prefix to add before human messages (default: "Human")

  • ai_prefix: Prefix to add before AI messages (default: "AI")
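The parameters above map directly onto the constructor. A minimal sketch, assuming `langchain`, `langchain-openai`, and a valid `OPENAI_API_KEY` are available (the model name is an example):

```python
from langchain.memory import ConversationTokenBufferMemory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # example model; any chat model works

memory = ConversationTokenBufferMemory(
    llm=llm,               # used to count tokens in the buffer
    max_token_limit=150,   # flush the oldest turns beyond this many tokens
    return_messages=False, # False returns a single string
    human_prefix="Human",  # prefixes apply to the string format only
    ai_prefix="AI",
)
```

With `return_messages=True`, `load_memory_variables` returns a list of message objects instead of one prefixed string, which is the format chat models expect.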

Environment Setup

Set up the environment. You may refer to Environment Setup for more details.

[Note]

  • langchain-opentutorial is a package that provides easy-to-use environment setup, useful functions, and utilities for these tutorials.

  • You can check out langchain-opentutorial for more details.

Alternatively, you can set OPENAI_API_KEY in a .env file and load it.

[Note] This is not necessary if you've already set OPENAI_API_KEY in previous steps.
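One common way to load the key from a `.env` file is with the `python-dotenv` package (assumed to be installed):

```python
from dotenv import load_dotenv

# Reads variables such as OPENAI_API_KEY from a .env file
# in the working directory into os.environ.
load_dotenv()
```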

Limiting Maximum Token Length to 50

This section demonstrates how to limit the conversation memory buffer to 50 tokens.
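A sketch of what this looks like, assuming `langchain`, `langchain-openai`, and an `OPENAI_API_KEY` are set up (the conversation text is example data):

```python
from langchain.memory import ConversationTokenBufferMemory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # example model
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=50)

# Save several turns; once the buffer exceeds 50 tokens,
# the oldest turns are flushed.
memory.save_context(
    {"input": "Hi, I ordered a laptop but it arrived damaged."},
    {"output": "I'm sorry to hear that. Could you share your order number?"},
)
memory.save_context(
    {"input": "Sure, it's in the confirmation email."},
    {"output": "Thank you. We'll arrange a replacement right away."},
)

# Only the most recent content that fits within 50 tokens remains.
print(memory.load_memory_variables({})["history"])
```

With such a small limit, you should see that the earliest turn has already been dropped from the printed history.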

Setting Maximum Token Length to 150

Let's check how the conversation is stored when we set the maximum token length to 150.
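A sketch with the larger limit, this time returning chat messages instead of a string (same assumptions as above about installed packages and the API key):

```python
from langchain.memory import ConversationTokenBufferMemory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # example model
memory = ConversationTokenBufferMemory(
    llm=llm,
    max_token_limit=150,
    return_messages=True,  # return HumanMessage/AIMessage objects
)

memory.save_context(
    {"input": "What payment methods do you accept?"},
    {"output": "We accept credit cards, bank transfer, and PayPal."},
)
memory.save_context(
    {"input": "Is there a fee for bank transfer?"},
    {"output": "No, bank transfers are free of charge."},
)

# With the 150-token budget, both turns typically still fit,
# so the history contains all four messages.
for message in memory.load_memory_variables({})["history"]:
    print(type(message).__name__, ":", message.content)
```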
