PGVector


Overview

This tutorial covers how to use PGVector with LangChain.

PGVector is an open-source extension for PostgreSQL that allows you to store and search vector data alongside your regular database information.

This tutorial walks you through CRUD operations with PGVector: storing, updating, and deleting documents, and performing similarity-based retrieval.

Environment Setup

Set up the environment. You may refer to Environment Setup for more details.

[Note]

  • langchain-opentutorial is a package that provides easy-to-use environment setup, along with useful functions and utilities for tutorials.

  • You can check out langchain-opentutorial for more details.

You can alternatively set API keys such as OPENAI_API_KEY in a .env file and load them.

[Note] This is not necessary if you've already set the required API keys in previous steps.
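
For example, with the python-dotenv package installed, a .env file can be loaded like this (a minimal sketch; the file location and key names are up to you):

```python
# Minimal sketch: load API keys from a .env file into environment variables.
# Assumes a .env file in the working directory containing a line such as
# OPENAI_API_KEY=sk-...
from dotenv import load_dotenv

load_dotenv(override=True)
```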

Set up PGVector

If you are using Windows and have installed PostgreSQL for Windows, you also need to install the pgvector extension for PostgreSQL. The following guide may help: Install pgvector on Windows.

In this tutorial, however, we will use a Docker container. If you are using Mac or Windows, check out Docker Desktop for Mac or Docker Desktop for Windows.

If you are using Docker Desktop, you can easily set up PGVector by running the following command, which spins up a Docker container:
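
A typical command looks like the one below, using the official pgvector Docker image (the user, password, database name, and host port are example values; adjust them to your environment):

```bash
docker run --name pgvector-container \
  -e POSTGRES_USER=langchain \
  -e POSTGRES_PASSWORD=langchain \
  -e POSTGRES_DB=langchain \
  -p 6024:5432 \
  -d pgvector/pgvector:pg16
```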

For more detailed instructions, please refer to the official documentation.

[ NOTE ]

  • If you want to keep the stored data even after the container is deleted, you must mount a volume as shown below:
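
For example, the same container can be started with a named volume mounted at PostgreSQL's data directory (the volume name pgvector_data is an arbitrary example):

```bash
docker run --name pgvector-container \
  -e POSTGRES_USER=langchain \
  -e POSTGRES_PASSWORD=langchain \
  -e POSTGRES_DB=langchain \
  -v pgvector_data:/var/lib/postgresql/data \
  -p 6024:5432 \
  -d pgvector/pgvector:pg16
```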

What is PGVector?

PGVector is a PostgreSQL extension that enables vector similarity search directly within your PostgreSQL database, making it ideal for AI applications, semantic search, and recommendation systems.

This is particularly valuable for teams that already use PostgreSQL and want to add vector search capabilities without managing separate infrastructure or learning a new query language.

Features:

  1. Native PostgreSQL integration with standard SQL queries

  2. Multiple similarity search methods, including L2 distance, inner product, and cosine distance

  3. Several indexing options including HNSW and IVFFlat

  4. Support for up to 2,000 dimensions per vector

  5. ACID compliance inherited from PostgreSQL

Advantages:

  1. Free and open-source

  2. Easy integration with existing PostgreSQL databases

  3. Full SQL functionality and transactional support

  4. No additional infrastructure needed

  5. Supports hybrid searches combining vector and traditional SQL queries

Disadvantages:

  1. Performance limitations with very large datasets (billions of vectors)

  2. Limited to single-node deployment

  3. Memory-intensive for high-dimensional vectors

  4. Requires manual optimization for best performance

  5. Fewer specialized features compared to dedicated vector databases

Prepare Data

This section guides you through the data preparation process.

This section includes the following components:

  • Data Introduction

  • Preprocess Data

Data Introduction

In this tutorial, we will use the fairy tale 📗 The Little Prince in PDF format as our data.

This material complies with the Apache 2.0 license.

The data is used in a text (.txt) format converted from the original PDF.

You can view the data at the link below.

Preprocess Data

In this tutorial section, we will preprocess the text data from The Little Prince and convert it into a list of LangChain Document objects with metadata.

Each document chunk will include a title field in the metadata, extracted from the first line of each section.
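
The exact splitting logic depends on the source file, but a minimal sketch could look like the following (the file path and the blank-line splitting rule are assumptions for illustration):

```python
from langchain_core.documents import Document

# Minimal sketch: split the text into sections and wrap each one in a
# LangChain Document, using the first line of each section as its title.
with open("./data/the_little_prince.txt", encoding="utf-8") as f:
    raw_text = f.read()

docs = []
for section in raw_text.split("\n\n"):
    section = section.strip()
    if not section:
        continue
    title = section.splitlines()[0].strip()
    docs.append(Document(page_content=section, metadata={"title": title}))

print(f"Created {len(docs)} documents")
```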

Setting up PGVector

This part walks you through the initial setup of PGVector.

This section includes the following components:

  • Load Embedding Model

  • Load PGVector Client

Load Embedding Model

In this section, you'll learn how to load an embedding model.

This tutorial uses an OpenAI embedding model, which requires an OpenAI API key.
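
A minimal sketch using the langchain-openai package (assumes OPENAI_API_KEY is set in the environment):

```python
# Minimal sketch: load OpenAI's text-embedding-3-large model via langchain-openai.
# Requires OPENAI_API_KEY to be set in the environment.
from langchain_openai import OpenAIEmbeddings

embedding = OpenAIEmbeddings(model="text-embedding-3-large")
```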

💡 If you prefer to use another embedding model, see the instructions below.

Load PGVector Client

In this section, we'll show you how to load the database client object using the Python SDK for PGVector.

If you are successfully running the PGVector container and have obtained a client object, you can use PGVectorIndexManager from pgvector.py in the utils directory to handle collections.

You can also initialize PGVectorIndexManager by passing the full connection string, or each parameter separately, instead of passing a client.
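
The sketch below assumes the constructor accepts a psycopg connection as client, or alternatively a connection string; the exact parameter names are defined in pgvector.py and may differ (the connection values match the example Docker container above):

```python
import psycopg
from utils.pgvector import PGVectorIndexManager  # helper class from this tutorial's utils directory

# Connect to the PostgreSQL instance running in the Docker container (example values).
client = psycopg.connect(
    host="localhost",
    port=6024,
    user="langchain",
    password="langchain",
    dbname="langchain",
)

# Option 1 (assumed parameter name): pass the client object directly.
index_manager = PGVectorIndexManager(client=client)

# Option 2 (assumed): pass a full connection string instead of a client.
# index_manager = PGVectorIndexManager(
#     connection="postgresql://langchain:langchain@localhost:6024/langchain"
# )
```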

When you initialize PGVectorIndexManager, the procedure will automatically create two tables, langchain_pg_collection and langchain_pg_embedding.

  • langchain_pg_collection

    • Stores the names of the collections.

    • Distinguishes collections by uuid and name.

  • langchain_pg_embedding

    • Stores actual data.

So, when you create a new collection and insert data into the collection, the data will be stored in the langchain_pg_embedding table.

As you can see below, the uuid column in the langchain_pg_collection table matches the collection_id column in the langchain_pg_embedding table.

(Figures: pgVector entity relation, pgVector collection table, pgVector data table)

Create collection

Now we can create a collection with the index_manager.

To create a collection, you need to pass an embedding model and a collection_name when calling the create_index method.

In this tutorial, we will use OpenAI's text-embedding-3-large model.

If creation is successful, the method will return a PGVectorDocumentManager instance that can handle the actual data.

In this tutorial, we will create a collection named langchain_opentutorial.
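
Under the assumptions above, creating the collection looks roughly like this (the create_index parameters follow the description in this section; exact names may differ in pgvector.py):

```python
# Sketch: create the "langchain_opentutorial" collection and receive a
# PGVectorDocumentManager for it. Parameter names are assumptions.
doc_manager = index_manager.create_index(
    embedding=embedding,
    collection_name="langchain_opentutorial",
)
```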

Document Manager

For the LangChain-OpenTutorial, we have implemented a custom set of CRUD functionalities for VectorDBs.

The following operations are included:

  • upsert : Update existing documents or insert if they don’t exist

  • upsert_parallel : Perform upserts in parallel for large-scale data

  • similarity_search : Search for similar documents based on embeddings

  • delete : Remove documents based on filter conditions

Each of these features is implemented as class methods specific to each VectorDB.

In this tutorial, you'll learn how to use these methods to interact with your VectorDB.

We plan to continuously expand the functionality by adding more common operations in the future.

Filtering

PGVector supports the following filtering operators:

  • $eq – Equality (==)

  • $ne – Inequality (!=)

  • $lt – Less than (<)

  • $lte – Less than or equal (<=)

  • $gt – Greater than (>)

  • $gte – Greater than or equal (>=)

  • $in – Special cased (in)

  • $nin – Special cased (not in)

  • $between – Special cased (between)

  • $like – Text (like)

  • $ilike – Text (case-insensitive like)

  • $and – Logical (and)

  • $or – Logical (or)

Filters can be used with the delete and search methods.

To apply a filter, create a dictionary and pass it to the filter parameter, as in the example below.
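
For example, the following filters match documents whose title metadata equals "Chapter 1", or combine several conditions (the title field comes from the preprocessing step; the values are illustrative):

```python
# Sketch: filter dictionaries built from the operators listed above.
filter_eq = {"title": {"$eq": "Chapter 1"}}

filter_combined = {
    "$and": [
        {"title": {"$like": "%Chapter%"}},
        {"title": {"$nin": ["Chapter 2", "Chapter 3"]}},
    ]
}
```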

Create Instance

First, we create an instance of the PGVector helper class to use its CRUD functionalities.

This class is initialized with the PGVector Python SDK client instance and the embedding model instance, both of which were defined in the previous section.

Now you can use the following CRUD operations with the crud_manager instance.

This instance allows you to easily manage documents in your PGVector collection.
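
A sketch of the instantiation, assuming the constructor accepts the client and embedding objects created earlier (the exact parameter names are defined in pgvector.py and may differ):

```python
from utils.pgvector import PGVectorDocumentManager

# Sketch: build the CRUD helper from the existing client and embedding model.
crud_manager = PGVectorDocumentManager(
    client=client,
    embedding=embedding,
    collection_name="langchain_opentutorial",  # assumed parameter
)
```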

Upsert Document

Update existing documents or insert if they don’t exist

✅ Args

  • texts : Iterable[str] – List of text contents to be inserted/updated.

  • metadatas : Optional[List[Dict]] – List of metadata dictionaries for each text (optional).

  • ids : Optional[List[str]] – Custom IDs for the documents. If not provided, IDs will be auto-generated.

  • **kwargs : Extra arguments for the underlying vector store.

🔄 Return

  • ids : IDs of the upserted documents.
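
For example, upserting the first two preprocessed documents with explicit IDs (the docs list comes from the preprocessing sketch above):

```python
from uuid import uuid4

# Sketch: upsert two documents with metadata and explicit IDs.
sample_docs = docs[:2]
ids = [str(uuid4()) for _ in sample_docs]

upserted_ids = crud_manager.upsert(
    texts=[d.page_content for d in sample_docs],
    metadatas=[d.metadata for d in sample_docs],
    ids=ids,
)
```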

Upsert Parallel

Perform upserts in parallel for large-scale data

✅ Args

  • texts : Iterable[str] – List of text contents to be inserted/updated.

  • metadatas : Optional[List[Dict]] – List of metadata dictionaries for each text (optional).

  • ids : Optional[List[str]] – Custom IDs for the documents. If not provided, IDs will be auto-generated.

  • batch_size : int – Number of documents per batch (default: 32).

  • workers : int – Number of parallel workers (default: 10).

  • **kwargs : Extra arguments for the underlying vector store.

🔄 Return

  • ids : IDs of the upserted documents.
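
For example, upserting the full document list in parallel batches:

```python
# Sketch: parallel upsert of all preprocessed documents.
# batch_size and workers mirror the defaults listed above.
upserted_ids = crud_manager.upsert_parallel(
    texts=[d.page_content for d in docs],
    metadatas=[d.metadata for d in docs],
    batch_size=32,
    workers=10,
)
```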

Similarity Search

Search for similar documents based on embeddings.

This method uses cosine similarity.

✅ Args

  • query : str – The text query for similarity search.

  • k : int – Number of top results to return (default: 10).

  • **kwargs : Additional search options (e.g., filters).

🔄 Return

  • results : List[Document] – A list of LangChain Document objects ranked by similarity.
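
For example, retrieving the three most similar documents for a query (the method name follows the similarity_search operation listed earlier; check pgvector.py for the exact signature, and pass a filter from the Filtering section via kwargs if needed):

```python
# Sketch: cosine-similarity search returning the top 3 documents.
results = crud_manager.similarity_search(
    query="What does the fox teach the little prince?",
    k=3,
)

for doc in results:
    print(doc.metadata.get("title"), "-", doc.page_content[:80])
```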

Delete Document

Delete documents based on filter conditions

✅ Args

  • ids : Optional[List[str]] – List of document IDs to delete. If None, deletion is based on filter.

  • filters : Optional[Dict] – Dictionary specifying filter conditions (e.g., metadata match).

  • **kwargs : Any additional parameters.

🔄 Return

  • None
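
For example, deleting by explicit IDs or by a metadata filter (the filter field is illustrative):

```python
# Sketch: delete specific documents by ID ...
crud_manager.delete(ids=upserted_ids[:1])

# ... or delete every document matching a filter condition.
crud_manager.delete(filters={"title": {"$eq": "Chapter 1"}})
```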
