In this tutorial, we introduce how to inspect and visualize various components (including the graph structure) of a Runnable chain. Understanding the underlying graph structure can help diagnose and optimize complex chain flows.
You can alternatively set OPENAI_API_KEY in a .env file and load it.
[Note] This is not necessary if you've already set OPENAI_API_KEY in previous steps.
from dotenv import load_dotenv

load_dotenv(override=True)
Introduction to Inspecting Runnables
LangChain Runnable objects can be composed into pipelines, commonly referred to as chains or flows. After setting up a runnable, you might want to inspect its structure to see what's happening under the hood.
By inspecting these, you can:
Understand the sequence of transformations and data flows.
Visualize the graph for debugging.
Retrieve or modify prompts or sub-chains as needed.
Graph Inspection
We'll create a runnable chain that includes a retriever from FAISS, a prompt template, and a ChatOpenAI model. Then we’ll inspect the chain’s graph to understand how data flows between these components.
from langchain_core.prompts import ChatPromptTemplate
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Create a FAISS vector store from simple text data
vectorstore = FAISS.from_texts(
    ["Teddy is an AI engineer who loves programming!"], embedding=OpenAIEmbeddings()
)

# Create a retriever based on the vector store
retriever = vectorstore.as_retriever()

template = """Answer the question based only on the following context:
{context}

Question: {question}"""

# Create a prompt template
prompt = ChatPromptTemplate.from_template(template)

# Initialize the ChatOpenAI model
model = ChatOpenAI(model="gpt-4o-mini")

# Construct the chain: (dictionary format) => prompt => model => output parser
chain = (
    {
        "context": retriever,  # Search context
        "question": RunnablePassthrough(),  # Pass the question through
    }
    | prompt
    | model
    | StrOutputParser()
)
Graph Output
We can inspect the chain’s internal graph of nodes (steps) and edges (data flows).
# Get nodes from the chain's graph
chain.get_graph().nodes
# Get edges from the chain's graph
chain.get_graph().edges
We can also print the graph in an ASCII-based diagram to visualize the chain flow.
chain.get_graph().print_ascii()
Prompt Retrieval
Finally, we can retrieve the actual prompts used in this chain. This is helpful for seeing exactly which instructions are being sent to the LLM.