{VectorStore Name}
Last updated
Author:
Design:
Peer Review:
This is part of the LangChain OpenTutorial.
This tutorial covers how to use {Vector Store Name} with LangChain.
{A short introduction to vectordb}
This tutorial walks you through CRUD operations with {VectorDB}: storing, updating, and deleting documents, and performing similarity-based retrieval.
[Note]
langchain-opentutorial
is a package that provides easy-to-use environment setup, along with useful functions and utilities for these tutorials.
You can alternatively set API keys such as OPENAI_API_KEY in a .env file and load them.
[Note] This is not necessary if you've already set the required API keys in previous steps.
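As an illustration of what the .env loading step does, here is a minimal, dependency-free sketch. In practice you would typically use python-dotenv's load_dotenv(); the parsing rules here are a simplifying assumption:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Minimal .env loader: copy KEY=VALUE lines into os.environ.
    Skips blanks, comments, and malformed lines; does not overwrite
    variables that are already set."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Only attempt to load if the file exists:
if os.path.exists(".env"):
    load_env_file()
```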
Please write down what you need to set up the Vectorstore here.
This part walks you through the data preparation process.
This section includes the following components:
Introduce Data
Preprocessing Data
In this tutorial, we will use the fairy tale The Little Prince in PDF format as our data.
This material complies with the Apache 2.0 license.
The data is used in a text (.txt) format converted from the original PDF.
You can view the data at the link below.
In this tutorial section, we will preprocess the text data from The Little Prince and convert it into a list of LangChain Document objects with metadata.
Each document chunk will include a title field in the metadata, extracted from the first line of each section.
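The preprocessing step described above can be sketched as follows. Assumptions: the input is the .txt version of the book with sections separated by blank lines, and a plain dict stands in for a LangChain Document so the sketch stays dependency-free:

```python
# Sketch of the preprocessing step: split the text into sections and
# attach a "title" metadata field taken from each section's first line.

def preprocess(text: str) -> list[dict]:
    documents = []
    for section in text.split("\n\n"):
        section = section.strip()
        if not section:
            continue
        # The first line of each section becomes the "title" metadata field.
        title = section.splitlines()[0].strip()
        documents.append({"page_content": section,
                          "metadata": {"title": title}})
    return documents

sample = "Chapter 1\nOnce when I was six years old...\n\nChapter 2\nSo I lived my life alone..."
docs = preprocess(sample)
# docs[0]["metadata"]["title"] == "Chapter 1"
```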
This part walks you through the initial setup of {vectordb}.
This section includes the following components:
Load Embedding Model
Load {vectordb} Client
In the Load Embedding Model section, you'll learn how to load an embedding model.
This tutorial uses an OpenAI embedding model, which requires an OpenAI API key.
💡 If you prefer to use another embedding model, see the instructions below.
In the Load {vectordb} Client section, we cover how to load the database client object using the Python SDK for {vectordb}.
To support the LangChain OpenTutorial, we implemented a custom set of CRUD functionalities for VectorDBs.
The following operations are included:
upsert : Update existing documents or insert if they don't exist
upsert_parallel : Perform upserts in parallel for large-scale data
similarity_search : Search for similar documents based on embeddings
delete : Remove documents based on filter conditions
Each of these features is implemented as class methods specific to each VectorDB.
In this tutorial, you can easily utilize these methods to interact with your VectorDB.
We plan to continuously expand the functionality by adding more common operations in the future.
First, we create an instance of the {vectordb} helper class to use its CRUD functionalities.
This class is initialized with the {vectordb} Python SDK client instance and the embedding model instance , both of which were defined in the previous section.
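Since the helper class itself differs per vector store and is not shown here, the sketch below is a hypothetical in-memory stand-in that mirrors the same interface: it is initialized with a client and an embedding model, and exposes upsert, search, and delete. The class name and the dict-based storage are invented for illustration; a real manager would call the vendor SDK through the client.

```python
import math
from typing import Callable, Dict, Iterable, List, Optional

class InMemoryCRUDManager:
    """Hypothetical stand-in for a {vectordb} CRUD helper.
    Stores vectors in a plain dict; a real manager would call the
    vendor SDK through `client` instead."""

    def __init__(self, client, embedding: Callable[[str], List[float]]):
        self.client = client          # unused in this toy version
        self.embedding = embedding    # text -> vector
        self._store: Dict[str, dict] = {}  # id -> {"text", "vector", "metadata"}

    def upsert(self, texts: Iterable[str], metadatas: Optional[List[Dict]] = None,
               ids: Optional[List[str]] = None, **kwargs) -> None:
        texts = list(texts)
        ids = ids or [str(i) for i in range(len(texts))]
        metadatas = metadatas or [{} for _ in texts]
        for doc_id, text, meta in zip(ids, texts, metadatas):
            # Overwrites an existing ID, inserts a new one otherwise.
            self._store[doc_id] = {"text": text,
                                   "vector": self.embedding(text),
                                   "metadata": meta}

    def search(self, query: str, k: int = 10, **kwargs) -> List[dict]:
        q = self.embedding(query)

        def cosine(v: List[float]) -> float:
            dot = sum(a * b for a, b in zip(q, v))
            norm = (math.sqrt(sum(a * a for a in q))
                    * math.sqrt(sum(b * b for b in v)))
            return dot / norm if norm else 0.0

        ranked = sorted(self._store.values(),
                        key=lambda d: cosine(d["vector"]), reverse=True)
        return ranked[:k]

    def delete(self, ids: Optional[List[str]] = None,
               filters: Optional[Dict] = None, **kwargs) -> None:
        if ids is not None:
            for doc_id in ids:
                self._store.pop(doc_id, None)
        elif filters:
            matched = [i for i, d in self._store.items()
                       if all(d["metadata"].get(key) == val
                              for key, val in filters.items())]
            for i in matched:
                del self._store[i]
```

Initialization then looks like `crud_manager = InMemoryCRUDManager(client=sdk_client, embedding=embedding_fn)`, matching the description above.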
Now you can use the following CRUD operations with the crud_manager
instance.
This instance allows you to easily manage documents in your {vectordb}.
Update existing documents or insert if they don't exist
✅ Args

- texts : Iterable[str] - List of text contents to be inserted/updated.
- metadatas : Optional[List[Dict]] - List of metadata dictionaries for each text (optional).
- ids : Optional[List[str]] - Custom IDs for the documents. If not provided, IDs will be auto-generated.
- **kwargs : Extra arguments for the underlying vector store.

🔄 Return

- None
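The update-or-insert behavior itself is independent of any particular vector store; it can be illustrated with a plain dict keyed by document ID:

```python
# Update-or-insert semantics sketched with a plain dict keyed by document ID.
store = {"doc-1": "old text"}

def upsert(store: dict, ids: list[str], texts: list[str]) -> None:
    for doc_id, text in zip(ids, texts):
        store[doc_id] = text  # overwrites if the ID exists, inserts otherwise

upsert(store, ["doc-1", "doc-2"], ["new text", "another text"])
# store == {"doc-1": "new text", "doc-2": "another text"}
```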
Perform upserts in parallel for large-scale data
✅ Args

- texts : Iterable[str] - List of text contents to be inserted/updated.
- metadatas : Optional[List[Dict]] - List of metadata dictionaries for each text (optional).
- ids : Optional[List[str]] - Custom IDs for the documents. If not provided, IDs will be auto-generated.
- batch_size : int - Number of documents per batch (default: 32).
- workers : int - Number of parallel workers (default: 10).
- **kwargs : Extra arguments for the underlying vector store.

🔄 Return

- None
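The batching and parallelism behind this method can be sketched with the standard library; `upsert_fn` below is a hypothetical stand-in for a single-batch upsert against the vector store:

```python
from concurrent.futures import ThreadPoolExecutor

def upsert_parallel(texts: list, upsert_fn, batch_size: int = 32,
                    workers: int = 10) -> None:
    """Split `texts` into batches of `batch_size` and run `upsert_fn`
    on each batch concurrently with up to `workers` threads."""
    batches = [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Consume the iterator so all batches finish and errors surface.
        list(pool.map(upsert_fn, batches))
```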
Search for similar documents based on embeddings.
This method uses cosine similarity.

✅ Args

- query : str - The text query for similarity search.
- k : int - Number of top results to return (default: 10).
- **kwargs : Additional search options (e.g., filters).

🔄 Return

- results : List[Document] - A list of LangChain Document objects ranked by similarity.
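The cosine-similarity ranking behind this method can be sketched in plain Python; the vectors here are toy stand-ins for real embedding outputs:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is zero-length)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec: list[float], doc_vecs: list[list[float]], k: int = 10) -> list[int]:
    """Return the indices of the k document vectors most similar to the query."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]
```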
The as_retriever() method creates a LangChain-compatible retriever wrapper.
This function allows a DocumentManager class to return a retriever object by wrapping the internal search() method, while staying lightweight and independent of the full LangChain VectorStore dependencies.
The retriever obtained through this function can be used in the same way as any standard LangChain retriever and is compatible with LangChain pipelines (e.g., RetrievalQA, ConversationalRetrievalChain, Tool, ...).
✅ Args

- search_fn : Callable - The function used to retrieve relevant documents. Typically this is self.search from a DocumentManager instance.
- search_kwargs : Optional[Dict] - A dictionary of keyword arguments passed to search_fn, such as k for top-k results or metadata filters.

🔄 Return

- LightCustomRetriever : BaseRetriever - A lightweight LangChain-compatible retriever that internally uses the given search_fn and search_kwargs.
Remove documents based on filter conditions
✅ Args

- ids : Optional[List[str]] - List of document IDs to delete. If None, deletion is based on filters.
- filters : Optional[Dict] - Dictionary specifying filter conditions (e.g., metadata match).
- **kwargs : Any additional parameters.

🔄 Return

- None
Set up the environment. You may refer to for more details.