Asking Humans for Help: Customizing State in LangGraph


Overview

This tutorial demonstrates how to extend a chatbot using LangGraph by adding a "human" node, allowing the system to optionally ask humans for help. It introduces state customization with an "ask_human" flag and shows how to handle interruptions and manual state updates. The tutorial also covers graph visualization, conditional logic, and integrating tools like web search and human assistance.


Environment Setup

Set up the environment. You may refer to Environment Setup for more details.

[Note]

  • langchain-opentutorial is a package that provides easy-to-use environment setup, useful functions, and utilities for tutorials.

  • You can check out langchain-opentutorial for more details.

Alternatively, you can set OPENAI_API_KEY in a .env file and load it.

[Note] This is not necessary if you've already set OPENAI_API_KEY in previous steps.
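As a sketch of what loading the key from a .env file involves (python-dotenv's load_dotenv is the usual tool; this stdlib-only loader is an illustrative stand-in):

```python
import os


def load_env(path: str = ".env") -> None:
    """Minimal .env loader: put KEY=VALUE pairs into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't overwrite variables that are already set
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```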

Setting Up the Node to Ask Humans for Help

We extend the State class to include an ask_human flag and define the HumanRequest tool schema using TypedDict and BaseModel. This allows the chatbot to formally request human assistance when needed, adding flexibility to its decision-making process.

This time, we will add a flag (ask_human) to the state to determine whether the chatbot should ask a human for help during the conversation.

We define the schema for the human request.

Next, we define the chatbot node. The key modification here is that the chatbot will toggle the ask_human flag if it calls the HumanRequest tool.

Next, we create the graph builder and add the chatbot and tools nodes to the graph, as before.

Setting Up the Human Node

Next, we create the human node.

This node primarily serves as a placeholder to trigger an interrupt in the graph. If the user does not manually update the state during the interrupt, the node inserts a tool message so the LLM knows that the human was asked for help but did not respond.

This node also resets the ask_human flag to ensure the graph does not revisit the node unless another request is made.


Next, we define the conditional logic.

The select_next_node function routes the path to the human node if the flag is set. Otherwise, it uses the prebuilt tools_condition function to select the next node.

The tools_condition function simply checks if the chatbot used tool_calls in the response message.

If so, it routes to the tools node. Otherwise, it ends the graph.

Finally, we connect the edges and compile the graph.

Let's visualize the graph.


The chatbot node behaves as follows:

  • The chatbot can ask a human for help (chatbot->select->human)

  • It can call a search engine tool (chatbot->select->tools)

  • Or it can respond directly (chatbot->select->end).

Once an action or request is made, the graph switches back to the chatbot node to continue the task.

Notice: The LLM has called the provided "HumanRequest" tool, and an interrupt has been set. Let's check the graph state.

The graph state is actually interrupted before the 'human' node. In this scenario, you can act as the "expert" and manually update the state by adding a new ToolMessage with your input.

To respond to the chatbot's request, follow these steps:

  1. Create a ToolMessage containing your response. This will be passed back to the chatbot.

  2. Call update_state to manually update the graph state.

You can check the state to confirm that the response has been added.

Next, we resume the graph by passing None as the input.

Finally, let's check the final result.
