Ollama Embeddings With LangChain
Author: Gwangwon Jung
Peer Review: Teddy Lee, ro__o_jun, BokyungisaGod, Youngjun cho
Proofread : Youngjun cho
This is a part of the LangChain Open Tutorial
Overview
This tutorial covers how to perform text embedding using Ollama and LangChain.
Ollama is an open-source project that allows you to easily serve models locally.
In this tutorial, we will create a simple example to measure the similarity between Documents and an input Query using Ollama and LangChain.

Table of Contents

- Overview
- Environment Setup
- Ollama Install and Model Serving
- Identify Supported Embedding Models and Serve a Model
- Model Load and Embedding
- Similarity Calculation Results
Environment Setup
Set up the environment. You may refer to Environment Setup for more details.
[Note]
langchain-opentutorial is a package that provides easy-to-use environment setup helpers, useful functions, and utilities for these tutorials. You can check out langchain-opentutorial for more details.
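If you prefer to set things up directly, the sketch below installs the packages used later in this tutorial; the exact package list is an assumption based on the imports that appear in the following sections.

```python
# Minimal dependency install (assumed package set for this tutorial).
# Run in a notebook cell; in a plain shell, drop the leading "%".
%pip install -qU langchain-ollama scikit-learn
```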
Ollama Install and Model Serving
Ollama is an open-source project that makes it easy to run large language models (LLMs) in a local environment. It lets users download and run various LLMs with simple commands, so developers can experiment with and use AI models directly on their own machines. With its user-friendly interface and fast performance, Ollama makes AI development and experimentation more accessible and efficient.
Ollama User Guide
1. Install Ollama: download the installer for your OS from the official Ollama website (https://ollama.com/download), or use the install script on Linux.

2. Verify Ollama Installation: confirm that the ollama CLI is available by printing its version.

Example commands for both steps are shown below.
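A minimal sketch of the terminal commands, assuming a Linux environment; the install script is the officially documented one, while on macOS and Windows you would use the installer from the website instead.

```bash
# 1. Install Ollama on Linux using the official install script
curl -fsSL https://ollama.com/install.sh | sh

# 2. Verify the installation by printing the CLI version
ollama --version
```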
Identify Supported Embedding Models and Serve a Model
👇 You can find the supported models through the hyperlink below.
Ollama Model Pull Guide
1. Search Models: browse the Ollama model library and choose an embedding model.

2. Pull a Model: download the chosen model to your machine with ollama pull.

3. Verify the Model: list the locally available models and confirm the download.

Example commands are shown below.
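A sketch of the corresponding terminal commands; nomic-embed-text is used purely as an assumed example, so substitute whichever embedding model you chose.

```bash
# 2. Pull an embedding model (nomic-embed-text is just one example)
ollama pull nomic-embed-text

# 3. Verify that the model is now available locally
ollama list
```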
Model Load and Embedding
Now that the model is downloaded, let's load it and proceed with the embedding.
First, define the Query and the Documents.
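The original texts are not reproduced here, so the following is an illustrative sketch; any short query plus a handful of candidate documents will work.

```python
# Illustrative query and documents (example texts, not from the original tutorial)
query = "What is LangChain?"

documents = [
    "LangChain is a framework for developing applications powered by language models.",
    "Ollama lets you run large language models locally with a simple CLI.",
    "Cosine similarity measures how closely two vectors point in the same direction.",
    "Seoul is the capital of South Korea.",
]
```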
Next, let's load the embedding model downloaded with Ollama using LangChain.
The OllamaEmbeddings class in langchain_community/embeddings.py is deprecated and will be removed in langchain-community version 1.0.0.
So, in this tutorial, we use the OllamaEmbeddings class from the langchain-ollama package instead.
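A minimal sketch, assuming the nomic-embed-text model pulled earlier and a local Ollama server running on its default endpoint:

```python
# Load the locally served embedding model via the langchain-ollama integration
from langchain_ollama import OllamaEmbeddings

ollama_embeddings = OllamaEmbeddings(
    model="nomic-embed-text",  # assumed: the embedding model pulled earlier
)
```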
Let's use the loaded model to embed the Query and Documents.
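A sketch of the embedding step; embed_query and embed_documents are the standard methods of LangChain's Embeddings interface, and the variable names follow the sketches above.

```python
# Embed the query (a single string) and the documents (a list of strings)
embedded_query = ollama_embeddings.embed_query(query)
embedded_documents = ollama_embeddings.embed_documents(documents)

print(f"Embedding dimension: {len(embedded_query)}")
```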
Similarity Calculation Results
Let's use the vector values of the query and documents obtained earlier to calculate the similarity.
In this tutorial, we will use cosine similarity to calculate the similarity between the Query and the Documents.
Using the scikit-learn library in Python, you can easily calculate cosine similarity.
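A sketch using scikit-learn's cosine_similarity, which expects 2D arrays, so the query vector is wrapped in a list; ranking the documents by score is an illustrative addition.

```python
from sklearn.metrics.pairwise import cosine_similarity

# cosine_similarity expects 2D inputs of shape (n_samples, n_features)
similarity = cosine_similarity([embedded_query], embedded_documents)[0]

# Print documents from most to least similar to the query
for idx in sorted(range(len(documents)), key=lambda i: similarity[i], reverse=True):
    print(f"[similarity {similarity[idx]:.4f}] {documents[idx]}")
```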