Agent with Memory


  • Author: Hye-yoon Jeong

  • Peer Review:

  • Proofread: Chaeyoon Kim

  • This is a part of LangChain Open Tutorial

Overview

This tutorial covers how to add an in-memory checkpoint saver to an agent.

An in-memory checkpoint saver enables an agent to store previous interactions, allowing the agent to engage in multi-turn conversations in a coherent manner.

This tutorial also uses the prebuilt ToolNode and tools_condition from LangGraph instead of a custom tool node.
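
For orientation, tools_condition inspects the most recent message in the state and routes to the "tools" node when the model requested a tool call, or to END otherwise. A simplified sketch of that routing logic (the actual implementation lives in langgraph.prebuilt):

from langgraph.graph import END

def tools_condition_sketch(state) -> str:
    # Route to the "tools" node if the last AI message requested a tool call
    last_message = state["messages"][-1]
    if getattr(last_message, "tool_calls", None):
        return "tools"
    # Otherwise the model produced a final answer, so finish the run
    return END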

Table of Contents

  • Overview
  • Environment Setup
  • Memory Saver
  • Configuration with RunnableConfig
  • Inspecting State Snapshots

References

  • LangGraph: MemorySaver
  • LangGraph: ToolNode
  • LangGraph: tools_condition

Environment Setup

Setting up your environment is the first step. See the Environment Setup guide for more details.

[Note] langchain-opentutorial is a package of easy-to-use environment setup guidance, useful functions and utilities for tutorials. Check out langchain-opentutorial for more details.

%%capture --no-stderr
%pip install langchain-opentutorial
# Install required packages
from langchain_opentutorial import package

package.install(
    [
        "langsmith",
        "langchain",
        "langchain_core",
        "langchain_community",
        "langchain_openai",
        "langgraph",
    ],
    verbose=False,
    upgrade=False,
)
from dotenv import load_dotenv

load_dotenv(override=True)
True

You can set API keys in a .env file or set them manually.

[Note] If you’re not using the .env file, no worries! Just enter the keys directly in the cell below, and you’re good to go.
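
For reference, a .env file is just KEY=VALUE lines in the project root; a minimal example with placeholder values (the key names below are the ones this tutorial reads):

OPENAI_API_KEY=sk-...
LANGCHAIN_API_KEY=...
TAVILY_API_KEY=...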

# Set environment variables
from langchain_opentutorial import set_env

set_env(
    {
        # "OPENAI_API_KEY": "",
        # "LANGCHAIN_API_KEY": "",
        # "TAVILY_API_KEY": "",
        "LANGCHAIN_TRACING_V2": "true",
        "LANGCHAIN_ENDPOINT": "https://api.smith.langchain.com",
        "LANGCHAIN_PROJECT": "Agent-with-Memory",
    }
)
Environment variables have been set successfully.

Memory Saver

Without memory of the conversation context, an agent cannot engage in coherent multi-turn interactions.

In this tutorial, we use LangGraph's MemorySaver, which stores checkpoints in memory.

By providing a checkpointer at graph compilation time and a thread_id when calling the graph, the state is automatically saved after each step. The thread_id acts as the key under which a conversation's checkpoints are stored, so reusing it continues the same session.
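
In outline, the whole pattern fits in a few lines (a minimal sketch; the full step-by-step version follows below):

from langgraph.checkpoint.memory import MemorySaver

# Compile with a checkpointer, then key every call with a thread_id.
# "builder" is a placeholder for any StateGraph you have defined.
graph = builder.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "1"}}
graph.invoke({"messages": [("user", "Hello!")]}, config=config)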

Steps to use MemorySaver:

  1. Create a MemorySaver checkpointer.

from langgraph.checkpoint.memory import MemorySaver

# Create a MemorySaver
memory = MemorySaver()
  2. Define a graph.

from typing import Annotated
from typing_extensions import TypedDict
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


########## 1. Define state ##########
# Define states to save after each step
class State(TypedDict):
    messages: Annotated[list, add_messages]


########## 2. Define and bind tools ##########
# Define tools (TavilySearchResults takes max_results rather than k)
tool = TavilySearchResults(max_results=3)
tools = [tool]

# Define LLM
llm = ChatOpenAI(model="gpt-4o-mini")

# Bind LLM with tools
llm_with_tools = llm.bind_tools(tools)


########## 3. Define nodes ##########
# Create a state graph
graph_builder = StateGraph(State)


# Define and add a chatbot node
def chatbot(state: State):
    # Invoke and return messages
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

# Define and add a tool node
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

# Add a conditional edge
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)

########## 4. Add edges ##########
# tools > chatbot
graph_builder.add_edge("tools", "chatbot")

# START > chatbot
graph_builder.add_edge(START, "chatbot")

# chatbot > END (tools_condition also routes here when no tool is called)
graph_builder.add_edge("chatbot", END)
  3. Compile the graph with the checkpointer.

# Compile a graph builder
graph = graph_builder.compile(checkpointer=memory)
  4. Visualize the graph.

from IPython.display import Image, display
from langgraph.graph.state import CompiledStateGraph
from dataclasses import dataclass


# Define node styles to use for visualizing the graph
@dataclass
class NodeStyles:
    default: str = (
        "fill:#45C4B0, fill-opacity:0.3, color:#23260F, stroke:#45C4B0, stroke-width:1px, font-weight:bold, line-height:1.2"
    )
    first: str = (
        "fill:#45C4B0, fill-opacity:0.1, color:#23260F, stroke:#45C4B0, stroke-width:1px, font-weight:normal, font-style:italic, stroke-dasharray:2,2"
    )
    last: str = (
        "fill:#45C4B0, fill-opacity:1, color:#000000, stroke:#45C4B0, stroke-width:1px, font-weight:normal, font-style:italic, stroke-dasharray:2,2"
    )


# Define a function to visualize the graph
def visualize_graph(graph, xray=False):
    """
    Displays a visualization of the CompiledStateGraph object.

    This function converts the given graph object,
    if it is an instance of CompiledStateGraph, into a Mermaid-formatted PNG image and displays it.

    Args:
        graph: The graph object to be visualized. Must be an instance of CompiledStateGraph.

    Returns:
        None

    Raises:
        Exception: Raised if an error occurs during the graph visualization process.
    """
    try:
        # Visualize the graph
        if isinstance(graph, CompiledStateGraph):
            display(
                Image(
                    graph.get_graph(xray=xray).draw_mermaid_png(
                        background_color="white",
                        node_colors=NodeStyles(),
                    )
                )
            )
    except Exception as e:
        print(f"[ERROR] Visualize Graph Error: {e}")
visualize_graph(graph)

Configuration with RunnableConfig

Define a RunnableConfig object and set the recursion_limit and thread_id properties.

  • recursion_limit: The maximum number of steps the graph may execute in a single run. Exceeding this limit raises a GraphRecursionError.

  • thread_id: An ID that distinguishes sessions; each thread keeps its own checkpoint history.

from langchain_core.runnables import RunnableConfig

config = RunnableConfig(
    recursion_limit=10,
    configurable={"thread_id": "1"},
)
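
If a run needs more steps than recursion_limit allows (for example, a long tool-calling loop), LangGraph raises GraphRecursionError, which you can guard for. A sketch, using a scratch thread_id so it does not touch the sessions below; note it only triggers when the run actually exceeds the limit:

from langgraph.errors import GraphRecursionError

try:
    graph.invoke(
        {"messages": [("user", "Hello!")]},
        config=RunnableConfig(recursion_limit=1, configurable={"thread_id": "99"}),
    )
except GraphRecursionError:
    print("The graph exceeded the recursion limit before reaching END.")
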
# First question for {"thread_id": "1"}
question = "Hi! My name is Hannah. I run a YouTube channel. Nice to meet you!"

for event in graph.stream({"messages": [("user", question)]}, config=config):
    for value in event.values():
        value["messages"][-1].pretty_print()
================================== Ai Message ==================================
    
    Hi Hannah! Nice to meet you too! That's awesome that you run a YouTube channel. What kind of content do you create?
# Second question for {"thread_id": "1"}
question = "Do you remember my name?"

for event in graph.stream({"messages": [("user", question)]}, config=config):
    for value in event.values():
        value["messages"][-1].pretty_print()
================================== Ai Message ==================================
    
    Yes, your name is Hannah! How can I assist you today?

Modify the thread_id in the RunnableConfig to see if the agent remembers the previous conversation.

config = RunnableConfig(
    recursion_limit=10,
    configurable={"thread_id": "2"},  # Change the thread_id
)
question = "Do you remember my name?"

for event in graph.stream({"messages": [("user", question)]}, config=config):
    for value in event.values():
        value["messages"][-1].pretty_print()
================================== Ai Message ==================================
    
    I don't have the ability to remember personal information or previous interactions. Each session is treated independently. How can I assist you today?

Inspecting State Snapshots

A checkpoint (snapshot) stores:

  • the current state values

  • the corresponding config

  • the next node to process (empty once the graph has finished)

Call get_state(config) to see the graph's latest state for a given config.

config = RunnableConfig(
    configurable={"thread_id": "1"},
)
# Create a snapshot of the graph state
snapshot = graph.get_state(config)
snapshot
StateSnapshot(values={'messages': [HumanMessage(content='Hi! My name is Hannah. I run a YouTube channel. Nice to meet you!', additional_kwargs={}, response_metadata={}, id='f97ccf7d-a15d-4b02-9974-3908b008e181'), AIMessage(content="Hi Hannah! Nice to meet you too! That's awesome that you run a YouTube channel. What kind of content do you create?", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 29, 'prompt_tokens': 98, 'total_tokens': 127, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_72ed7ab54c', 'finish_reason': 'stop', 'logprobs': None}, id='run-4a48ff7d-4c42-4406-a65c-047e55d782bb-0', usage_metadata={'input_tokens': 98, 'output_tokens': 29, 'total_tokens': 127, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}), HumanMessage(content='Do you remember my name?', additional_kwargs={}, response_metadata={}, id='05bcbd52-3ce8-408e-9ba4-2087d46d8f9a'), AIMessage(content='Yes, your name is Hannah! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 16, 'prompt_tokens': 139, 'total_tokens': 155, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_72ed7ab54c', 'finish_reason': 'stop', 'logprobs': None}, id='run-13f47663-d83d-4d9d-9304-acbabb32e5fe-0', usage_metadata={'input_tokens': 139, 'output_tokens': 16, 'total_tokens': 155, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}, next=(), config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1efd6f6a-bfa9-6ae9-8004-08c865005c63'}}, metadata={'source': 'loop', 'writes': {'chatbot': {'messages': [AIMessage(content='Yes, your name is Hannah! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 16, 'prompt_tokens': 139, 'total_tokens': 155, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_72ed7ab54c', 'finish_reason': 'stop', 'logprobs': None}, id='run-13f47663-d83d-4d9d-9304-acbabb32e5fe-0', usage_metadata={'input_tokens': 139, 'output_tokens': 16, 'total_tokens': 155, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}}, 'thread_id': '1', 'step': 4, 'parents': {}}, created_at='2025-01-20T06:20:40.740119+00:00', parent_config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1efd6f6a-b6e5-62c6-8003-cf6983af2d22'}}, tasks=())

You can access the configuration with snapshot.config.

# Configuration of the snapshot
snapshot.config
{'configurable': {'thread_id': '1',
      'checkpoint_ns': '',
      'checkpoint_id': '1efd6f6a-bfa9-6ae9-8004-08c865005c63'}}
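
The checkpoint_id pins this exact checkpoint within the thread. Passing such a config back to get_state retrieves that specific snapshot rather than the thread's latest one (a brief sketch):

# Fetch the snapshot for this exact checkpoint, not just the thread's latest
pinned = graph.get_state(snapshot.config)
print(pinned.config["configurable"]["checkpoint_id"])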

You can access the saved states with snapshot.values.

# Values saved in the snapshot
snapshot.values
{'messages': [HumanMessage(content='Hi! My name is Hannah. I run a YouTube channel. Nice to meet you!', additional_kwargs={}, response_metadata={}, id='f97ccf7d-a15d-4b02-9974-3908b008e181'),
      AIMessage(content="Hi Hannah! Nice to meet you too! That's awesome that you run a YouTube channel. What kind of content do you create?", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 29, 'prompt_tokens': 98, 'total_tokens': 127, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_72ed7ab54c', 'finish_reason': 'stop', 'logprobs': None}, id='run-4a48ff7d-4c42-4406-a65c-047e55d782bb-0', usage_metadata={'input_tokens': 98, 'output_tokens': 29, 'total_tokens': 127, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),
      HumanMessage(content='Do you remember my name?', additional_kwargs={}, response_metadata={}, id='05bcbd52-3ce8-408e-9ba4-2087d46d8f9a'),
      AIMessage(content='Yes, your name is Hannah! How can I assist you today?', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 16, 'prompt_tokens': 139, 'total_tokens': 155, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_72ed7ab54c', 'finish_reason': 'stop', 'logprobs': None}, id='run-13f47663-d83d-4d9d-9304-acbabb32e5fe-0', usage_metadata={'input_tokens': 139, 'output_tokens': 16, 'total_tokens': 155, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}

You can use snapshot.next to see which node will be processed next. In this case, snapshot.next is empty because the graph has reached the END node.

# Next node
snapshot.next
()
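
MemorySaver also keeps every intermediate checkpoint for a thread, not just the latest one. To walk through them, graph.get_state_history(config) yields snapshots from newest to oldest; a quick sketch:

# Iterate over every checkpoint saved for thread "1" (newest first)
for state in graph.get_state_history(config):
    print(f"messages: {len(state.values['messages'])}, next: {state.next}")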

Define custom functions to view the complete contents of the snapshot and its metadata (snapshot.metadata).

import json


# Custom function to serialize non-JSON serializable objects
def custom_serializer(obj):
    if hasattr(obj, "__dict__"):
        return obj.__dict__
    elif isinstance(obj, (list, tuple)):
        return [custom_serializer(item) for item in obj]
    return str(obj)


# Custom function to print the snapshot
def pretty_print_snapshot(snapshot):
    try:
        snapshot_json = json.dumps(
            snapshot, indent=4, ensure_ascii=False, default=custom_serializer
        )
        print(snapshot_json)
    except Exception as e:
        print(f"Error formatting snapshot: {e}")
pretty_print_snapshot(snapshot)
[
        {
            "messages": [
                {
                    "content": "Hi! My name is Hannah. I run a YouTube channel. Nice to meet you!",
                    "additional_kwargs": {},
                    "response_metadata": {},
                    "type": "human",
                    "name": null,
                    "id": "f97ccf7d-a15d-4b02-9974-3908b008e181",
                    "example": false
                },
                {
                    "content": "Hi Hannah! Nice to meet you too! That's awesome that you run a YouTube channel. What kind of content do you create?",
                    "additional_kwargs": {
                        "refusal": null
                    },
                    "response_metadata": {
                        "token_usage": {
                            "completion_tokens": 29,
                            "prompt_tokens": 98,
                            "total_tokens": 127,
                            "completion_tokens_details": {
                                "accepted_prediction_tokens": 0,
                                "audio_tokens": 0,
                                "reasoning_tokens": 0,
                                "rejected_prediction_tokens": 0
                            },
                            "prompt_tokens_details": {
                                "audio_tokens": 0,
                                "cached_tokens": 0
                            }
                        },
                        "model_name": "gpt-4o-mini-2024-07-18",
                        "system_fingerprint": "fp_72ed7ab54c",
                        "finish_reason": "stop",
                        "logprobs": null
                    },
                    "type": "ai",
                    "name": null,
                    "id": "run-4a48ff7d-4c42-4406-a65c-047e55d782bb-0",
                    "example": false,
                    "tool_calls": [],
                    "invalid_tool_calls": [],
                    "usage_metadata": {
                        "input_tokens": 98,
                        "output_tokens": 29,
                        "total_tokens": 127,
                        "input_token_details": {
                            "audio": 0,
                            "cache_read": 0
                        },
                        "output_token_details": {
                            "audio": 0,
                            "reasoning": 0
                        }
                    }
                },
                {
                    "content": "Do you remember my name?",
                    "additional_kwargs": {},
                    "response_metadata": {},
                    "type": "human",
                    "name": null,
                    "id": "05bcbd52-3ce8-408e-9ba4-2087d46d8f9a",
                    "example": false
                },
                {
                    "content": "Yes, your name is Hannah! How can I assist you today?",
                    "additional_kwargs": {
                        "refusal": null
                    },
                    "response_metadata": {
                        "token_usage": {
                            "completion_tokens": 16,
                            "prompt_tokens": 139,
                            "total_tokens": 155,
                            "completion_tokens_details": {
                                "accepted_prediction_tokens": 0,
                                "audio_tokens": 0,
                                "reasoning_tokens": 0,
                                "rejected_prediction_tokens": 0
                            },
                            "prompt_tokens_details": {
                                "audio_tokens": 0,
                                "cached_tokens": 0
                            }
                        },
                        "model_name": "gpt-4o-mini-2024-07-18",
                        "system_fingerprint": "fp_72ed7ab54c",
                        "finish_reason": "stop",
                        "logprobs": null
                    },
                    "type": "ai",
                    "name": null,
                    "id": "run-13f47663-d83d-4d9d-9304-acbabb32e5fe-0",
                    "example": false,
                    "tool_calls": [],
                    "invalid_tool_calls": [],
                    "usage_metadata": {
                        "input_tokens": 139,
                        "output_tokens": 16,
                        "total_tokens": 155,
                        "input_token_details": {
                            "audio": 0,
                            "cache_read": 0
                        },
                        "output_token_details": {
                            "audio": 0,
                            "reasoning": 0
                        }
                    }
                }
            ]
        },
        [],
        {
            "configurable": {
                "thread_id": "1",
                "checkpoint_ns": "",
                "checkpoint_id": "1efd6f6a-bfa9-6ae9-8004-08c865005c63"
            }
        },
        {
            "source": "loop",
            "writes": {
                "chatbot": {
                    "messages": [
                        {
                            "content": "Yes, your name is Hannah! How can I assist you today?",
                            "additional_kwargs": {
                                "refusal": null
                            },
                            "response_metadata": {
                                "token_usage": {
                                    "completion_tokens": 16,
                                    "prompt_tokens": 139,
                                    "total_tokens": 155,
                                    "completion_tokens_details": {
                                        "accepted_prediction_tokens": 0,
                                        "audio_tokens": 0,
                                        "reasoning_tokens": 0,
                                        "rejected_prediction_tokens": 0
                                    },
                                    "prompt_tokens_details": {
                                        "audio_tokens": 0,
                                        "cached_tokens": 0
                                    }
                                },
                                "model_name": "gpt-4o-mini-2024-07-18",
                                "system_fingerprint": "fp_72ed7ab54c",
                                "finish_reason": "stop",
                                "logprobs": null
                            },
                            "type": "ai",
                            "name": null,
                            "id": "run-13f47663-d83d-4d9d-9304-acbabb32e5fe-0",
                            "example": false,
                            "tool_calls": [],
                            "invalid_tool_calls": [],
                            "usage_metadata": {
                                "input_tokens": 139,
                                "output_tokens": 16,
                                "total_tokens": 155,
                                "input_token_details": {
                                    "audio": 0,
                                    "cache_read": 0
                                },
                                "output_token_details": {
                                    "audio": 0,
                                    "reasoning": 0
                                }
                            }
                        }
                    ]
                }
            },
            "thread_id": "1",
            "step": 4,
            "parents": {}
        },
        "2025-01-20T06:20:40.740119+00:00",
        {
            "configurable": {
                "thread_id": "1",
                "checkpoint_ns": "",
                "checkpoint_id": "1efd6f6a-b6e5-62c6-8003-cf6983af2d22"
            }
        },
        []
    ]
pretty_print_snapshot(snapshot.metadata)
{
        "source": "loop",
        "writes": {
            "chatbot": {
                "messages": [
                    {
                        "content": "Yes, your name is Hannah! How can I assist you today?",
                        "additional_kwargs": {
                            "refusal": null
                        },
                        "response_metadata": {
                            "token_usage": {
                                "completion_tokens": 16,
                                "prompt_tokens": 139,
                                "total_tokens": 155,
                                "completion_tokens_details": {
                                    "accepted_prediction_tokens": 0,
                                    "audio_tokens": 0,
                                    "reasoning_tokens": 0,
                                    "rejected_prediction_tokens": 0
                                },
                                "prompt_tokens_details": {
                                    "audio_tokens": 0,
                                    "cached_tokens": 0
                                }
                            },
                            "model_name": "gpt-4o-mini-2024-07-18",
                            "system_fingerprint": "fp_72ed7ab54c",
                            "finish_reason": "stop",
                            "logprobs": null
                        },
                        "type": "ai",
                        "name": null,
                        "id": "run-13f47663-d83d-4d9d-9304-acbabb32e5fe-0",
                        "example": false,
                        "tool_calls": [],
                        "invalid_tool_calls": [],
                        "usage_metadata": {
                            "input_tokens": 139,
                            "output_tokens": 16,
                            "total_tokens": 155,
                            "input_token_details": {
                                "audio": 0,
                                "cache_read": 0
                            },
                            "output_token_details": {
                                "audio": 0,
                                "reasoning": 0
                            }
                        }
                    }
                ]
            }
        },
        "thread_id": "1",
        "step": 4,
        "parents": {}
    }
