Long Context Reorder
Author: Minji
Peer Review:
Proofread: jishin86
This is a part of LangChain OpenTutorial
Overview
Regardless of the model's architecture, performance degrades significantly once the context includes roughly ten or more retrieved documents.
Simply put, when the model must access relevant information located in the middle of a long context, it tends to overlook the provided documents.
For more details, please refer to the following paper:
https://arxiv.org/abs/2307.03172
To mitigate this degradation, you can reorder documents after retrieval so that the most relevant ones appear at the beginning and end of the context.
Create a retriever that stores and searches text data using the Chroma vector store, then use the retriever's invoke method to fetch the documents most relevant to a given query.
Table of Contents
Environment Setup
Set up the environment. You may refer to Environment Setup for more details.
[Note]
langchain-opentutorial is a package that provides easy-to-use environment setup, along with useful functions and utilities for the tutorials. You can check out langchain-opentutorial for more details.
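A typical setup installs the helper package together with the libraries used below; the exact package list is an assumption based on what this tutorial uses.

```shell
# Install the tutorial helper package and the libraries used in this tutorial.
# Versions are not pinned here; adjust as needed.
pip install -U langchain-opentutorial langchain-community langchain-chroma langchain-openai
```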
Create an instance of the LongContextReorder class named reordering.
Enter a query for the retriever to search.
Call reordering.transform_documents(docs) to reorder the document list: less relevant documents are moved toward the middle of the list, while more relevant documents are placed at the beginning and end.
Creating Question-Answering Chain with Context Reordering
This chain enhances QA (question answering) performance by reordering retrieved documents with LongContextReorder, which optimizes the arrangement of context for better comprehension and response accuracy.
Print the reordered documents.
Enter the query in question and the language for the response.
Check the search results built from the reordered documents.
Print the response.