Conversation-With-History


Overview

This tutorial provides a comprehensive guide to implementing conversational AI systems with memory using LangChain, covering two main approaches.

1. Creating a chain to record conversations

  • Creates a simple question-answering chatbot using ChatOpenAI.

  • Implements a system to store and retrieve conversation history based on session IDs.

  • Uses RunnableWithMessageHistory to incorporate chat history into the chain.

2. Creating a RAG chain that retrieves information from documents and records conversations

  • Builds a more complex system that combines document retrieval with conversational AI.

  • Processes a PDF document, creates embeddings, and sets up a vector store for efficient retrieval.

  • Implements a RAG chain that can answer questions based on the document content and previous conversation history.

Table of Contents

  • Overview

  • Environment Setup

  • Creating a Chain that remembers previous conversations

  • Implementing RAG with Conversation History Management


Environment Setup

Set up the environment. You may refer to Environment Setup for more details.

[Note]

  • langchain-opentutorial is a package that provides easy-to-use environment setup, useful functions, and utilities for tutorials.

  • You can check out langchain-opentutorial for more details.

You can alternatively set API keys such as OPENAI_API_KEY in a .env file and load them.

[Note] This is not necessary if you've already set the required API keys in previous steps.

Creating a Chain that remembers previous conversations

Background knowledge needed to understand this content: RunnableWithMessageHistory

1. Adding Chat History to the Core Chain

  • Implement MessagesPlaceholder to incorporate conversation history

  • Define a prompt template that handles user input queries

  • Initialize a ChatOpenAI instance configured to use the ChatGPT model

  • Construct a chain by connecting the prompt template, language model, and output parser

  • Implement StrOutputParser to format the model's response as a string

Creating a Chain with Conversation History (chain_with_history)

  • Initialize a dictionary to store conversation session records

  • Create the function get_session_history that retrieves chat history by session ID and creates a new ChatMessageHistory instance if none exists

  • Instantiate a RunnableWithMessageHistory object to handle persistent conversation history

Process the initial input.

Handle the subsequent query.

2. Implementing RAG with Conversation History Management

Build a PDF-based Question Answering system that incorporates conversational context.

Create a standard RAG chain, making sure to include {chat_history} in the prompt template at step 6.

  • (step 1) Load PDF documents using PDFPlumberLoader

  • (step 2) Segment documents into manageable chunks with RecursiveCharacterTextSplitter

  • (step 3) Create vector embeddings of text chunks using OpenAIEmbeddings

  • (step 4) Index and store embeddings in a FAISS vector database

  • (step 5) Implement a retriever to query relevant information from the vector database

  • (step 6) Design a QA prompt template that incorporates conversation history, user queries, and retrieved context with response instructions

  • (step 7) Initialize a ChatOpenAI instance configured to use the GPT-4o model

  • (step 8) Build the complete chain by connecting the retriever, prompt template, and language model

The system retrieves relevant document context for user queries and generates contextually informed responses.

Implementing Conversation History Management

  • Initialize the store dictionary to maintain conversation histories indexed by session IDs, and create the get_session_history function to retrieve or create session records

  • Create a RunnableWithMessageHistory instance to enhance the RAG chain with conversation tracking capabilities, handling both user queries and historical context

Process the first user input.

Execute the subsequent question.
