LangGraph ToolNode
Author: JoonHo Kim
Peer Review :
Proofread : Chaeyoon Kim
This is a part of LangChain Open Tutorial
Overview
In this tutorial, we will cover how to use LangGraph's pre-built ToolNode for tool invocation. ToolNode is a LangChain Runnable that takes a graph state containing a list of messages as input and updates the state with the results of the tool invocations.
It is designed to work seamlessly with LangGraph's pre-built agents and can operate with any StateGraph, provided the state includes a messages key with an appropriate reducer.
Now, let’s explore how to maximize productivity using LangGraph ToolNode. 🚀
Table of Contents
References
Environment Setup
Set up the environment. You may refer to Environment Setup for more details.
[Note]
langchain-opentutorial is a package that provides easy-to-use environment setup, useful functions, and utilities for these tutorials. You can check out langchain-opentutorial for more details.
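As a minimal sketch of the setup (not the tutorial's exact helper code), you can export the API key the models below expect via a plain environment variable; the value shown is a placeholder:

```python
import os

# Placeholder value: substitute your real OpenAI API key, or load it
# from a .env file before running the rest of the tutorial.
os.environ.setdefault("OPENAI_API_KEY", "your-openai-api-key")

print("OPENAI_API_KEY configured:", bool(os.environ.get("OPENAI_API_KEY")))
```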
Creating tools
Before creating tools, we will build some functions that collect and fetch news from Google News for a keyword the user inputs.
You can refer to Creating Tools section in Tool Calling Agent with More LLM Models for more details.
Now let's create tools that search the news for a query and execute Python code.
Next, we will explore how to use ToolNode to invoke tools.
ToolNode is initialized with a list of tools and the following arguments.
Args:
- tools: A sequence of tools that can be invoked by the ToolNode.
- name: The name of the ToolNode in the graph. Defaults to "tools".
- tags: Optional tags to associate with the node. Defaults to None.
- handle_tool_errors: How to handle tool errors raised by tools inside the node. Defaults to True.
Calling ToolNode manually
ToolNode operates on graph state with a list of messages.
AIMessage is used to represent a message with the role assistant. This is the response from the model, which can include text or a request to invoke tools. It could also include other media types like images, audio, or video though this is still uncommon at the moment.
Args:
- content: The content of the message. Usually a string, but can be a list of content blocks.
- tool_calls: A list of tool calls associated with the message. Each tool call specifies:
  - name: The name of the tool to invoke.
  - args: The arguments to pass to the tool.
- id: An optional unique identifier for the message, ideally provided by the provider/model that created the message.
- type: The type of the message.
It expects the last message in the list to be an AIMessage with its tool_calls parameter set.
Let’s see how to manually invoke ToolNode.
Generally, there is no need to manually create an AIMessage, as it is automatically generated by all LangChain chat models that support tool invocation.
Additionally, by passing multiple tool invocations to the tool_calls parameter of an AIMessage, you can perform parallel tool invocations using ToolNode.
Using with LLMs
To use chat models with tool calling, we need to first ensure that the model is aware of the available tools.
LangChain provides various chat models from different providers, such as OpenAI GPT, Anthropic Claude, Google Gemini, and more.
You can visit LangChain ChatModels for more details.
In this tutorial, we do this by calling the .bind_tools method on the ChatOpenAI model.
As you can see, the AI message generated by the chat model already has tool_calls populated, so we can just pass it directly to ToolNode.
Using with Agent
Next, let's explore how to use ToolNode within a LangGraph graph.
We will set up a graph implementation of a ReAct agent. This agent takes a query as input and repeatedly invokes tools until it has gathered enough information to resolve the query.
Before we start, let's define a function to visualize the graph.
The ToolNode and OpenAI model will be used together with the tools we just defined.
Let's build a graph with the following steps.
1. Use an LLM to process the messages and generate a response, including any tool calls.
2. Initialize the workflow graph based on the message state.
3. Define the two nodes we will cycle between: agent and tools.
4. Connect the workflow's starting point to the agent node.
5. Set up conditional branching from the agent node, routing either to the tool node or to the endpoint.
6. Set up a circular edge from the tool node back to the agent node.
7. Connect the agent node to the endpoint.
8. Compile the defined workflow graph into an executable application.

Let's try to run the graph with different queries.
ToolNode can also handle errors that occur during tool execution.
You can enable/disable this feature by setting handle_tool_errors=True (enabled by default).