Langfuse Selfhosting


  • Author: JeongGi Park

  • Peer Review:

  • Proofread : frimer

  • This is a part of the LangChain Open Tutorial

Overview

In this tutorial, you’ll learn how to run Langfuse locally using Docker Compose and integrate it with your LLM-based applications (e.g., those built with LangChain). Langfuse provides comprehensive tracking and observability for:

  • Token usage

  • Execution time and performance metrics

  • Error rates and unexpected behaviors

  • Agent and chain interactions

By the end of this tutorial, you’ll have a local Langfuse instance running on http://localhost:3000 and be ready to send tracking events from your application.

Table of Contents

  • Overview

  • Environment Setup

  • Prerequisites

  • Clone the Langfuse Repository

  • Start Langfuse with Docker Compose

  • Wait for Ready State

  • Initial Setup & Usage

  • Sending Traces from Your Application

  • Managing & Monitoring Your Local Instance

  • Upgrading & Maintenance

  • Set KEY and run LANGFUSE locally

References

  • Langfuse DOC


Environment Setup

Set up the environment. You may refer to Environment Setup for more details.

[Note]

  • langchain-opentutorial is a package that provides a set of easy-to-use environment setup methods, useful functions, and utilities for tutorials.

Environment variables are set in the .env file.

Copy the contents of .env_sample into .env and fill in the keys you set.
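
For reference, a .env for this tutorial might look like the sketch below. The variable names match those used later in this notebook; the values are placeholders you must replace with your own keys (the Langfuse keys come from your project settings once the local instance is running):

# .env (placeholder values)
OPENAI_API_KEY=sk-...
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_HOST=http://localhost:3000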

from dotenv import load_dotenv

load_dotenv(override=True)
True
%%capture --no-stderr
!pip install langchain-opentutorial
# Install required packages
from langchain_opentutorial import package

package.install(
    [
        "langsmith",
        "langchain-openai",
        "langchain",
        "python-dotenv",
        "langchain-core",
        "langfuse",
        "openai"
    ],
    verbose=False,
    upgrade=False,
)

Prerequisites

  1. Docker & Docker Compose

  • Docker Desktop (Mac/Windows) or

  • Docker Engine & Docker Compose plugin (Linux)

  2. Git

  • Make sure you can run git clone from your terminal or command prompt.

  3. Sufficient System Resources

  • Running Langfuse locally requires a few GB of memory available for Docker containers.

Clone the Langfuse Repository

Open your terminal and run the following commands:

git clone https://github.com/langfuse/langfuse.git
cd langfuse

This downloads the Langfuse repository to your local machine and moves you into its directory.

Start Langfuse with Docker Compose

Inside the langfuse directory, simply run:

docker compose up

What's happening?

  • Docker Compose will pull and start multiple containers for the Langfuse stack:

    • Database (Postgres)

    • Langfuse backend services

    • Langfuse web/UI

  • This is the simplest way to run Langfuse locally and explore its features.
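
If you prefer to keep your terminal free, you can start the stack in the background instead and inspect it with standard Docker Compose commands (the web service name below matches the default compose file at the time of writing; confirm it with docker compose ps):

docker compose up -d                  # start all containers in the background
docker compose ps                     # verify the containers are up
docker compose logs -f langfuse-web   # follow the web container's logs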

Wait for Ready State

After 2-3 minutes, you should see logs indicating that the 'langfuse-web-1' container is Ready. At that point, open your browser to:

http://localhost:3000

You will be greeted by the Langfuse login/setup screen or homepage.
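
If you'd rather check readiness from the command line, recent Langfuse versions expose a public health endpoint you can poll (treat the exact path as an assumption and verify it against your version's docs):

curl http://localhost:3000/api/public/health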

Initial Setup & Usage

Follow the on-screen prompts to configure your local Langfuse instance (e.g., create an initial admin user). Once the setup completes, you can log in to the Langfuse UI.

From here, you will be able to:

  • Create projects to track events from your LLM apps

  • Invite team members (if applicable)

  • Adjust various settings like API keys, encryption or networking

Sending Traces from Your Application

To start collecting trace data in your local Langfuse instance, you’ll need to send events from your application code:

  1. Obtain your Project Key / API Key from the Langfuse UI.

  2. Install the appropriate Langfuse SDK in your application environment (e.g., pip install langfuse for Python, or the langfuse package on npm for Node.js).

  3. Initialize the Langfuse client and pass in the host (pointing to your local instance) and the API key.

  4. Record events (e.g., chain steps, LLM requests, agent actions) in your code to send structured data to Langfuse.

(Refer to Langfuse’s official docs for language-specific integration examples.)
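
To make steps 3-4 concrete, here is a minimal sketch using the low-level client of the Langfuse Python SDK (the v2-style API, matching the decorator imports used later in this tutorial; verify the calls against the SDK version you install):

import os
from langfuse import Langfuse

# Initialize the client against the local instance.
# The keys come from your project settings in the Langfuse UI.
langfuse = Langfuse(
    public_key=os.environ["LANGFUSE_PUBLIC_KEY"],
    secret_key=os.environ["LANGFUSE_SECRET_KEY"],
    host="http://localhost:3000",
)

# Record a trace with a single (mocked) LLM generation attached to it.
trace = langfuse.trace(name="quickstart-trace")
trace.generation(
    name="dummy-llm-call",
    model="gpt-3.5-turbo",
    input="Hello?",
    output="Hi there!",
)

# Events are buffered and sent asynchronously; flush before exiting.
langfuse.flush()

After running this, the trace should appear in the project you created at http://localhost:3000.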

Managing & Monitoring Your Local Instance

Once you’re up and running:

  • UI: Go to http://localhost:3000 for the Langfuse dashboard.

  • Logs: Monitor the Docker logs in your terminal for any errors or system messages.

  • Configuration: Check the configuration guide for advanced setups:

    • Authentication & SSO

    • Encryption

    • Headless Initialization

    • Networking

    • Organization Creators

    • UI Customization
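
As one example from that list, headless initialization lets you pre-provision an organization, project, API keys, and an admin user through environment variables on the langfuse-web container, so no on-screen setup is needed. The variable names below follow the pattern in Langfuse's headless-initialization guide; treat them as assumptions and verify them against your version:

# Environment for the langfuse-web service (sketch; placeholder values)
LANGFUSE_INIT_ORG_ID=my-org
LANGFUSE_INIT_PROJECT_ID=my-project
LANGFUSE_INIT_PROJECT_PUBLIC_KEY=pk-lf-...
LANGFUSE_INIT_PROJECT_SECRET_KEY=sk-lf-...
LANGFUSE_INIT_USER_EMAIL=admin@example.com
LANGFUSE_INIT_USER_PASSWORD=change-me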

Stopping/Restarting

To stop the containers, press Ctrl + C in the terminal window running Docker Compose. If you started it in detached mode (-d), run:

docker compose down

Add the -v flag to remove volumes as well (which also deletes stored data).

Upgrading & Maintenance

To upgrade to the latest Langfuse version, simply:

docker compose down
docker compose up --pull always

This will pull the newest images and restart Langfuse. Refer to the upgrade guide for more detailed steps.

Set KEY and run LANGFUSE locally

You can try out Langfuse by running the code below.

import os
 
# Get keys for your project from the Langfuse UI
# (http://localhost:3000 for this local instance, or https://cloud.langfuse.com for Langfuse Cloud)
# os.environ["LANGFUSE_PUBLIC_KEY"] = "YOUR_LANGFUSE_PUBLIC_KEY"
# os.environ["LANGFUSE_SECRET_KEY"] = "YOUR_LANGFUSE_SECRET_KEY"

 
# Your host, defaults to https://cloud.langfuse.com
# For US data region, set to "https://us.cloud.langfuse.com"
os.environ["LANGFUSE_HOST"] = "http://localhost:3000"
from langfuse.decorators import observe
from langfuse.openai import openai # OpenAI integration
 
@observe()
def story():
    return openai.chat.completions.create(
        model="gpt-3.5-turbo",
        max_tokens=100,
        messages=[
          {"role": "system", "content": "You are a great storyteller."},
          {"role": "user", "content": "Once upon a time in a galaxy far, far away..."}
        ],
    ).choices[0].message.content
 
@observe()
def main():
    return story()
 
main()
"there existed a peaceful planet called Zoraxia, home to a diverse array of alien species living in harmony. The inhabitants of Zoraxia were known for their love of music and had created a beautiful symphony that could be heard throughout the entire galaxy.\n\nHowever, one fateful day, a dark shadow fell upon Zoraxia as an evil sorcerer named Malakar arrived with his army of shadowy creatures. Malakar sought to harness the power of the planet's music"
