In this tutorial, we show how to inspect and visualize the components of a Runnable chain, including its graph structure. Understanding the underlying graph can help you diagnose and optimize complex chain flows.
Table of Contents
Environment Setup
Introduction to Inspecting Runnables
Graph Inspection
Graph Output
Prompt Retrieval
Environment Setup
Set up the environment as described in this section before running the examples.
[Note] langchain-opentutorial is a package that provides easy-to-use environment setup, along with helpful functions and utilities for these tutorials.
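If the package and the dependencies used below are not installed yet, a typical notebook installation step looks like the following. This is a sketch: the exact package list is an assumption based on the imports in this tutorial, so adjust it as needed.

# Install the tutorial helper package plus the libraries used below
# (package names assumed from this tutorial's imports)
%pip install -qU langchain-opentutorial langchain-openai langchain-community faiss-cpu python-dotenv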
Alternatively, you can set OPENAI_API_KEY in a .env file and load it.
[Note] This is not necessary if you've already set OPENAI_API_KEY in previous steps.
# Load variables from a .env file, overriding any values already set in the environment
from dotenv import load_dotenv

load_dotenv(override=True)
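If you prefer not to use a .env file, you can set the key directly in the process environment. The value below is a placeholder, not a real key:

import os

# Set the key directly if it is not already present (replace the placeholder with your own key)
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = "sk-..."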
Introduction to Inspecting Runnables
LangChain Runnable objects can be composed into pipelines, commonly referred to as chains or flows. After setting up a runnable, you might want to inspect its structure to see what's happening under the hood.
By inspecting a chain, you can:
Understand the sequence of transformations and data flows.
Visualize the graph for debugging.
Retrieve or modify prompts or sub-chains as needed.
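As a minimal sketch of what this looks like, any composed Runnable exposes its structure through get_graph(). The two-step chain below is illustrative only:

from langchain_core.runnables import RunnableLambda

# Two trivial steps composed with the | operator form a RunnableSequence
tiny_chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)

# Every composed Runnable exposes a graph of its steps
print(tiny_chain.get_graph().nodes)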
Graph Inspection
We'll create a runnable chain that includes a retriever from FAISS, a prompt template, and a ChatOpenAI model. Then we’ll inspect the chain’s graph to understand how data flows between these components.
from langchain_core.prompts import ChatPromptTemplate
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
# Create a FAISS vector store from simple text data
vectorstore = FAISS.from_texts(
    ["Teddy is an AI engineer who loves programming!"], embedding=OpenAIEmbeddings()
)
# Create a retriever based on the vector store
retriever = vectorstore.as_retriever()
template = """Answer the question based only on the following context:\n{context}\n\nQuestion: {question}"""
# Create a prompt template
prompt = ChatPromptTemplate.from_template(template)
# Initialize ChatOpenAI model
model = ChatOpenAI(model="gpt-4o-mini")
# Construct the chain: (dictionary format) => prompt => model => output parser
chain = (
    {
        "context": retriever,
        "question": RunnablePassthrough(),
    }  # Retrieve documents for "context" and pass the question through unchanged
    | prompt
    | model
    | StrOutputParser()
)
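Before inspecting the graph, it is worth a quick sanity check that the chain runs end to end. The question below is just an example against the single text we indexed:

# Invoke the chain once; the input string is routed to both the retriever and the question slot
answer = chain.invoke("What does Teddy love?")
print(answer)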
Graph Output
We can inspect the chain’s internal graph of nodes (steps) and edges (data flows).
# Get nodes from the chain's graph
chain.get_graph().nodes
# Get edges from the chain's graph
chain.get_graph().edges
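To make the raw output easier to read, you can iterate over the graph yourself. The attribute names below (name, source, target) match recent langchain_core versions but may vary slightly across releases:

graph = chain.get_graph()

# Each node represents one step in the chain
for node in graph.nodes.values():
    print("node:", node.name)

# Each edge represents data flowing between two steps (source/target are node ids)
for edge in graph.edges:
    print("edge:", edge.source, "->", edge.target)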
We can also print the graph as an ASCII diagram to visualize the chain flow. Note that print_ascii() requires the optional grandalf package (pip install grandalf).
chain.get_graph().print_ascii()
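If you'd rather avoid the extra dependency, the same graph can be rendered as Mermaid syntax, which any Mermaid-compatible viewer can display:

# Emit Mermaid diagram syntax for the chain's graph
print(chain.get_graph().draw_mermaid())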
Prompt Retrieval
Finally, we can retrieve the actual prompts used in this chain. This is helpful for seeing exactly which instructions are sent to the LLM.
chain.get_prompts()
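get_prompts() returns a list of the prompt templates found anywhere in the chain, so you can inspect each one individually, for example:

# Inspect each prompt's expected input variables and a readable rendering of the template
for p in chain.get_prompts():
    print(p.input_variables)  # e.g. ['context', 'question']
    p.pretty_print()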