LangChain OpenTutorial
  • 🦜️🔗 The LangChain Open Tutorial for Everyone
  • 01-Basic
    • Getting Started on Windows
    • 02-Getting-Started-Mac
    • OpenAI API Key Generation and Testing Guide
    • LangSmith Tracking Setup
    • Using the OpenAI API (GPT-4o Multimodal)
    • Basic Example: Prompt+Model+OutputParser
    • LCEL Interface
    • Runnable
  • 02-Prompt
    • Prompt Template
    • Few-Shot Templates
    • LangChain Hub
    • Personal Prompts for LangChain
    • Prompt Caching
  • 03-OutputParser
    • PydanticOutputParser
    • PydanticOutputParser
    • CommaSeparatedListOutputParser
    • Structured Output Parser
    • JsonOutputParser
    • PandasDataFrameOutputParser
    • DatetimeOutputParser
    • EnumOutputParser
    • Output Fixing Parser
  • 04-Model
    • Using Various LLM Models
    • Chat Models
    • Caching
    • Caching VLLM
    • Model Serialization
    • Check Token Usage
    • Google Generative AI
    • Huggingface Endpoints
    • HuggingFace Local
    • HuggingFace Pipeline
    • ChatOllama
    • GPT4ALL
    • Video Q&A LLM (Gemini)
  • 05-Memory
    • ConversationBufferMemory
    • ConversationBufferWindowMemory
    • ConversationTokenBufferMemory
    • ConversationEntityMemory
    • ConversationKGMemory
    • ConversationSummaryMemory
    • VectorStoreRetrieverMemory
    • LCEL (Remembering Conversation History): Adding Memory
    • Memory Using SQLite
    • Conversation With History
  • 06-DocumentLoader
    • Document & Document Loader
    • PDF Loader
    • WebBaseLoader
    • CSV Loader
    • Excel File Loading in LangChain
    • Microsoft Word(doc, docx) With Langchain
    • Microsoft PowerPoint
    • TXT Loader
    • JSON
    • Arxiv Loader
    • UpstageDocumentParseLoader
    • LlamaParse
    • HWP (Hangeul) Loader
  • 07-TextSplitter
    • Character Text Splitter
    • 02. RecursiveCharacterTextSplitter
    • Text Splitting Methods in NLP
    • TokenTextSplitter
    • SemanticChunker
    • Split code with Langchain
    • MarkdownHeaderTextSplitter
    • HTMLHeaderTextSplitter
    • RecursiveJsonSplitter
  • 08-Embedding
    • OpenAI Embeddings
    • CacheBackedEmbeddings
    • HuggingFace Embeddings
    • Upstage
    • Ollama Embeddings With Langchain
    • LlamaCpp Embeddings With Langchain
    • GPT4ALL
    • Multimodal Embeddings With Langchain
  • 09-VectorStore
    • Vector Stores
    • Chroma
    • Faiss
    • Pinecone
    • Qdrant
    • Elasticsearch
    • MongoDB Atlas
    • PGVector
    • Neo4j
    • Weaviate
    • Faiss
    • {VectorStore Name}
  • 10-Retriever
    • VectorStore-backed Retriever
    • Contextual Compression Retriever
    • Ensemble Retriever
    • Long Context Reorder
    • Parent Document Retriever
    • MultiQueryRetriever
    • MultiVectorRetriever
    • Self-querying
    • TimeWeightedVectorStoreRetriever
    • TimeWeightedVectorStoreRetriever
    • Kiwi BM25 Retriever
    • Ensemble Retriever with Convex Combination (CC)
  • 11-Reranker
    • Cross Encoder Reranker
    • JinaReranker
    • FlashRank Reranker
  • 12-RAG
    • Understanding the basic structure of RAG
    • RAG Basic WebBaseLoader
    • Exploring RAG in LangChain
    • RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
    • Conversation-With-History
    • Translation
    • Multi Modal RAG
  • 13-LangChain-Expression-Language
    • RunnablePassthrough
    • Inspect Runnables
    • RunnableLambda
    • Routing
    • Runnable Parallel
    • Configure-Runtime-Chain-Components
    • Creating Runnable objects with chain decorator
    • RunnableWithMessageHistory
    • Generator
    • Binding
    • Fallbacks
    • RunnableRetry
    • WithListeners
    • How to stream runnables
  • 14-Chains
    • Summarization
    • SQL
    • Structured Output Chain
    • StructuredDataChat
  • 15-Agent
    • Tools
    • Bind Tools
    • Tool Calling Agent
    • Tool Calling Agent with More LLM Models
    • Iteration-human-in-the-loop
    • Agentic RAG
    • CSV/Excel Analysis Agent
    • Agent-with-Toolkits-File-Management
    • Make Report Using RAG, Web searching, Image generation Agent
    • TwoAgentDebateWithTools
    • React Agent
  • 16-Evaluations
    • Generate synthetic test dataset (with RAGAS)
    • Evaluation using RAGAS
    • HF-Upload
    • LangSmith-Dataset
    • LLM-as-Judge
    • Embedding-based Evaluator(embedding_distance)
    • LangSmith Custom LLM Evaluation
    • Heuristic Evaluation
    • Compare experiment evaluations
    • Summary Evaluators
    • Groundedness Evaluation
    • Pairwise Evaluation
    • LangSmith Repeat Evaluation
    • LangSmith Online Evaluation
    • LangFuse Online Evaluation
  • 17-LangGraph
    • 01-Core-Features
      • Understanding Common Python Syntax Used in LangGraph
      • Title
      • Building a Basic Chatbot with LangGraph
      • Building an Agent with LangGraph
      • Agent with Memory
      • LangGraph Streaming Outputs
      • Human-in-the-loop
      • LangGraph Manual State Update
      • Asking Humans for Help: Customizing State in LangGraph
      • DeleteMessages
      • DeleteMessages
      • LangGraph ToolNode
      • LangGraph ToolNode
      • Branch Creation for Parallel Node Execution
      • Conversation Summaries with LangGraph
      • Conversation Summaries with LangGraph
      • LangGrpah Subgraph
      • How to transform the input and output of a subgraph
      • LangGraph Streaming Mode
      • Errors
      • A Long-Term Memory Agent
    • 02-Structures
      • LangGraph-Building-Graphs
      • Naive RAG
      • Add Groundedness Check
      • Adding a Web Search Module
      • LangGraph-Add-Query-Rewrite
      • Agentic RAG
      • Adaptive RAG
      • Multi-Agent Structures (1)
      • Multi Agent Structures (2)
    • 03-Use-Cases
      • LangGraph Agent Simulation
      • Meta Prompt Generator based on User Requirements
      • CRAG: Corrective RAG
      • Plan-and-Execute
      • Multi Agent Collaboration Network
      • Multi Agent Collaboration Network
      • Multi-Agent Supervisor
      • 08-LangGraph-Hierarchical-Multi-Agent-Teams
      • 08-LangGraph-Hierarchical-Multi-Agent-Teams
      • SQL-Agent
      • 10-LangGraph-Research-Assistant
      • LangGraph Code Assistant
      • Deploy on LangGraph Cloud
      • Tree of Thoughts (ToT)
      • Ollama Deep Researcher (Deepseek-R1)
      • Functional API
      • Reflection in LangGraph
  • 19-Cookbook
    • 01-SQL
      • TextToSQL
      • SpeechToSQL
    • 02-RecommendationSystem
      • ResumeRecommendationReview
    • 03-GraphDB
      • Movie QA System with Graph Database
      • 05-TitanicQASystem
      • Real-Time GraphRAG QA
    • 04-GraphRAG
      • Academic Search System
      • Academic QA System with GraphRAG
    • 05-AIMemoryManagementSystem
      • ConversationMemoryManagementSystem
    • 06-Multimodal
      • Multimodal RAG
      • Shopping QnA
    • 07-Agent
      • 14-MoARAG
      • CoT Based Smart Web Search
      • 16-MultiAgentShoppingMallSystem
      • Agent-Based Dynamic Slot Filling
      • Code Debugging System
      • New Employee Onboarding Chatbot
      • 20-LangGraphStudio-MultiAgent
      • Multi-Agent Scheduler System
    • 08-Serving
      • FastAPI Serving
      • Sending Requests to Remote Graph Server
      • Building a Agent API with LangServe: Integrating Currency Exchange and Trip Planning
    • 08-SyntheticDataset
      • Synthetic Dataset Generation using RAG
    • 09-Monitoring
      • Langfuse Selfhosting
Powered by GitBook
On this page
  • Overview
  • Table of Contents
  • References
  • Environment Setup
  • Setting State
  • Creating Agents
  • Creating Tools
  • Creating Utility for Creating Agents
  • Creating Agent Supervisor
  • Constructing the Graph
  • Calling the Team
  1. 17-LangGraph
  2. 03-Use-Cases

Multi-Agent Supervisor

PreviousMulti Agent Collaboration NetworkNext08-LangGraph-Hierarchical-Multi-Agent-Teams

Last updated 28 days ago

  • Author:

  • Peer Review:

  • Proofread :

  • This is a part of

Overview

In the previous tutorial, we showed how to automatically route messages based on the output of the initial Researcher agent. However, when there are multiple agents that need to be coordinated, simple branching logic has limitations. Here, we introduce how to manage agents through and coordinate the entire team based on the results of each agent node.

In this tutorial, we'll explore how to build a multi-agent system using LangGraph , efficiently coordinate tasks between agents, and manage them through a Supervisor . We'll cover handling multiple agents simultaneously, managing each agent to perform their role, and properly handling task completion.

Key Points :

  • The Supervisor brings together various expert agents and operates them as a single team.

  • The Supervisor agent monitors the team's progress and executes logic such as calling appropriate agents for each step or terminating tasks.

What We'll Cover in This Tutorial

  • Setup : How to install required packages and set up API keys

  • Tool Creation : Defining tools for agents to use, such as web search and plot generation

  • Helper Utilities : Defining utility functions needed for creating agent nodes

  • Creating the Supervisor : Creating a Supervisor that contains logic for selecting Worker nodes and handling task completion

  • Constructing the Graph : Constructing the complete graph by defining State and Worker nodes

  • Calling the Team : Calling the graph to see how the multi-agent system actually works

This use of "advanced agents" is meant to demonstrate specific design patterns in LangGraph, and can be combined with other basic patterns as needed to achieve optimal results.

Table of Contents

References


Environment Setup

[Note]

  • langchain-opentutorial is a package that provides a set of easy-to-use environment setup, useful functions and utilities for tutorials.

%%capture --no-stderr
%pip install langchain-opentutorial
# Install required packages
from langchain_opentutorial import package

package.install(
    [
        "python-dotenv",
        "langchain_core",
        "langchain_community",
        "langchain_openai",
        "langchain_experimental",
        "langgraph",
        "matplotlib",
    ],
    verbose=False,
    upgrade=False,
)
# Set environment variables
from langchain_opentutorial import set_env

set_env(
    {
        "OPENAI_API_KEY": "",
        "LANGCHAIN_API_KEY": "",
        "TAVILY_API_KEY": "",
        "LANGCHAIN_TRACING_V2": "true",
        "LANGCHAIN_ENDPOINT": "https://api.smith.langchain.com",
        "LANGCHAIN_PROJECT": "Multi-Agent-Supervisor",
    }
)
Environment variables have been set successfully.

You can alternatively set API keys such as OPENAI_API_KEY in a .env file and load them.

[Note] This is not necessary if you've already set the required API keys in previous steps.

# Load API keys from .env file
from dotenv import load_dotenv

load_dotenv(override=True)
True

Setting State

Define state to be used in the multi-agent system.

import operator
from typing import Annotated, Sequence

from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict


class AgentState(TypedDict):
    """Define state to be used in the multi-agent system."""
    messages: Annotated[Sequence[BaseMessage], operator.add]  # messages
    next: str  # next agent to route to

Creating Agents

Creating Tools

In this example, we'll create agents that use a search engine to perform web research and generate plots.

Define the tools to be used below.

  • Coder : Use PythonREPLTool tool to run code.

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_experimental.tools import PythonREPLTool


# Initialize Tavily search tool that returns up to 5 search results
tavily_tool = TavilySearchResults(max_results=5)

# Initialize Python REPL tool that runs code locally (may not be safe)
python_repl_tool = PythonREPLTool()

Creating Utility for Creating Agents

When building a multi-agent system using LangGraph, helper functions play a crucial role in creating and managing agent nodes. These functions enhance code reusability and simplify interactions between agents.

  • Creating Agent Nodes : Define functions to create nodes for each agent's role

  • Managing Workflow : Provide utilities to manage the workflow between agents

  • Error Handling : Include mechanisms to handle errors that may occur during agent execution

The following is an example of defining a function called agent_node .

This function creates an agent node using the given state and agent. We will call this function later using functools.partial .

from langchain_core.messages import HumanMessage


# Create an agent node using the specified agent and name
def agent_node(state, agent, name):
    # Call the agent
    agent_response = agent.invoke(state)
    # Convert the last message of the agent to a HumanMessage and return it
    return {
        "messages": [
            HumanMessage(content=agent_response["messages"][-1].content, name=name)
        ]
    }

Below is an example of creating a research_node using functools.partial .

import functools

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


# Create a Research Agent
research_agent = create_react_agent(ChatOpenAI(model="gpt-4o"), tools=[tavily_tool])

# Create a Research Node
research_node = functools.partial(agent_node, agent=research_agent, name="Researcher")

Note

Role of functools.partial

functools.partial is used to create a new function by fixing some arguments or keyword arguments of an existing function. In other words, it helps simplify commonly used function call patterns.

Roles

  1. Create new function with predefined values : Returns a new function with some arguments of the existing function pre-specified.

  2. Code simplification : Reduces code duplication by simplifying commonly used function call patterns.

  3. Improved readability : Customizes function behavior for specific tasks to make it more intuitive to use.

Example code

research_node = functools.partial(agent_node, agent=research_agent, names="Researcher")
  1. Assume there is an existing function called agent_node .

    • This function can accept multiple arguments and keyword arguments.

  2. functools.partial fixes the values agent=research_agent and names="Researcher" for this function.

    • This means that research_node no longer needs to specify the agent and names values when calling agent_node .

    • For example:

    agent_node(state, agent=research_agent, names="Researcher")

    Instead, you can use:

    research_node(state)

Let's run the code and check the results.

research_node(
    {
        "messages": [
            HumanMessage(content="Code hello world and print it to the terminal")
        ]
    }
)
{'messages': [HumanMessage(content='To code a "Hello, World!" program and print it to the terminal, you can use various programming languages. Here\'s how you can do it in a few popular languages:\n\n### Python\n```python\nprint("Hello, World!")\n```\n\n### JavaScript (Node.js)\n```javascript\nconsole.log("Hello, World!");\n```\n\n### Java\n```java\npublic class HelloWorld {\n    public static void main(String[] args) {\n        System.out.println("Hello, World!");\n    }\n}\n```\n\n### C\n```c\n#include \n\nint main() {\n    printf("Hello, World!\\n");\n    return 0;\n}\n```\n\n### C++\n```cpp\n#include \n\nint main() {\n    std::cout << "Hello, World!" << std::endl;\n    return 0;\n}\n```\n\n### Ruby\n```ruby\nputs "Hello, World!"\n```\n\n### Bash\n```bash\necho "Hello, World!"\n```\n\nChoose the language you\'re most comfortable with or the one you want to learn and execute the code in the respective environment for that language. For example, use a Python interpreter for Python, Node.js for JavaScript, etc.', additional_kwargs={}, response_metadata={}, name='Researcher')]}

Creating Agent Supervisor

Create an agent that manages and supervises agents.

from typing import Literal

from pydantic import BaseModel


# Define the list of member agents
members = ["Researcher", "Coder"]

# Define the list of options for selecting the next worker
options_for_next = ["FINISH"] + members

# Define the response model for selecting the next worker: indicates selecting the next worker or completing the task
class RouteResponse(BaseModel):
    next: Literal[*options_for_next]
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI


# Define the system prompt: a supervisor tasked with managing a conversation between workers
system_prompt = (
    "You are a supervisor tasked with managing a conversation between the"
    " following workers:  {members}. Given the following user request,"
    " respond with the worker to act next. Each worker will perform a"
    " task and respond with their results and status. When finished,"
    " respond with FINISH."
)

# Create ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        (
            "system",
            "Given the conversation above, who should act next? "
            "Or should we FINISH? Select one of: {options}",
        ),
    ]
).partial(options=str(options_for_next), members=", ".join(members))

# Initialize LLM
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Create Supervisor Agent
def supervisor_agent(state):
    # Combine prompt and LLM to create a chain
    supervisor_chain = prompt | llm.with_structured_output(RouteResponse)
    # Call the agent
    return supervisor_chain.invoke(state)

Constructing the Graph

Now, we're ready to build the graph. Below, we'll use the functions we just defined to define state and worker nodes.

import functools

from langgraph.prebuilt import create_react_agent


# Create Research Agent
research_agent = create_react_agent(llm, tools=[tavily_tool])
research_node = functools.partial(agent_node, agent=research_agent, name="Researcher")

code_system_prompt = """
Be sure to use the following font in your code for visualization.

##### Font Settings #####
import matplotlib.pyplot as plt

# Set universal font settings
plt.rcParams["font.family"] = "DejaVu Sans"
plt.rcParams["axes.unicode_minus"] = False  # Prevent minus sign from breaking

# Set English locale
import locale
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
"""

# Create Coder Agent
coder_agent = create_react_agent(
    llm,
    tools=[python_repl_tool],
    state_modifier=code_system_prompt,
)
coder_node = functools.partial(agent_node, agent=coder_agent, name="Coder")
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, START, StateGraph


# Create graph
workflow = StateGraph(AgentState)

# Add nodes to the graph
workflow.add_node("Researcher", research_node)
workflow.add_node("Coder", coder_node)
workflow.add_node("Supervisor", supervisor_agent)

# Add edges from member nodes to the Supervisor node
for member in members:
    workflow.add_edge(member, "Supervisor")

# Add conditional edges
conditional_map = {k: k for k in members}
conditional_map["FINISH"] = END

def get_next(state):
    return state["next"]

# Add conditional edges from the Supervisor node
workflow.add_conditional_edges("Supervisor", get_next, conditional_map)

# Add starting point
workflow.add_edge(START, "Supervisor")

# Compile the graph
graph = workflow.compile(checkpointer=MemorySaver())

Visualize the graph.

from langchain_opentutorial.graphs import visualize_graph

visualize_graph(graph)

Calling the Team

Now, we can check the performance by calling the graph.

import uuid

from langchain_core.runnables import RunnableConfig
from langchain_opentutorial.messages import invoke_graph

# Set config (recursion limit, thread_id)
config = RunnableConfig(recursion_limit=10, configurable={"thread_id": str(uuid.uuid4())})

# Set input (question)
inputs = {
    "messages": [
        HumanMessage(
            content="Visualize the GDP per capita of South Korea from 2010 to 2024."
        )
    ],
}

# Run the graph
invoke_graph(graph, inputs, config)
    ==================================================
    🔄 Node: Supervisor 🔄
    - - - - - - - - - - - - - - - - - - - - - - - - - 
    next:
    Researcher
    ==================================================
    
    ==================================================
    🔄 Node: agent in [Researcher] 🔄
    - - - - - - - - - - - - - - - - - - - - - - - - - 
    ================================== Ai Message ==================================
    Tool Calls:
      tavily_search_results_json (call_Agr9v67vqXFatamF7EXxEUjP)
     Call ID: call_Agr9v67vqXFatamF7EXxEUjP
      Args:
        query: South Korea GDP per capita 2010 to 2024
    ==================================================
    
    ==================================================
    🔄 Node: tools in [Researcher] 🔄
    - - - - - - - - - - - - - - - - - - - - - - - - - 
    ================================= Tool Message =================================
    Name: tavily_search_results_json
    
    [{"url": "https://statisticstimes.com/economy/country/south-korea-gdp-per-capita.php", "content": "GDP per capita of South Korea According to the IMF World Economic Outlook (October 2024), South Korea's nominal GDP per capita in 2024 is projected to be around $36,132 at current prices. Based on PPP, South Korea's GDP per capita in 2024 is forecast at 62,960 billion international dollars. South Korea ranks 33rd in the world by GDP (nominal) per capita and 29th by GDP (PPP) per capita on the 194 economies list. South Korea is ranked 8th in nominal and 10th in the PPP list among 49 European economies. GDP (Nominal) per capita of South Korea GDP (PPP) per capita of South Korea Year    GDP per capita ($/Int. Year    GDP (Nominal) per capita ($)    GDP (PPP) per capita (Int."}, {"url": "https://www.statista.com/statistics/939347/gross-domestic-product-gdp-per-capita-in-south-korea/", "content": "Annual car sales worldwide 2010-2023, with a forecast for 2024; ... (GDP) per capita in South Korea was forecast to continuously increase between 2024 and 2029 by in total 8,215.6 U.S. dollars"}, {"url": "https://www.macrotrends.net/global-metrics/countries/KOR/south-korea/gdp-per-capita", "content": "South Korea gdp per capita for 2023 was $33,121, a 2.24% increase from 2022. South Korea gdp per capita for 2022 was $32,395, a 7.77% decline from 2021. South Korea gdp per capita for 2021 was $35,126, a 10.73% increase from 2020. South Korea gdp per capita for 2020 was $31,721, a 0.57% decline from 2019."}, {"url": "https://tradingeconomics.com/south-korea/gdp-per-capita-constant-2000-us-dollar-wb-data.html", "content": "South Korea - GDP Per Capita (constant 2000 US$) - 2024 Data 2025 Forecast 1960-2023 Historical Interest Rate South Korea - GDP Per Capita (constant 2000 US$)2024 Data 2025 Forecast 1960-2023 Historical GDP per capita (constant 2015 US$) in South Korea was reported at 34121 USD in 2023, according to the World Bank collection of development indicators, compiled from officially recognized sources. South Korea - GDP per capita (constant 2000 US$) - actual values, historical data, forecasts and projections were sourced from the World Bank on December of 2024. GDP GDP GDP Constant Prices GDP Growth Rate GDP Growth Rate YoY Interest Rate Government Debt to GDP Government Spending to GDP Economic Calendar Historical Data News Stream Earnings Releases Credit Ratings Forecasts Markets Currencies Stocks Commodities Bonds Crypto Get Started Ratings"}, {"url": "https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(PPP)_per_capita", "content": "A country's gross domestic product (GDP) at purchasing power parity (PPP) per capita is the PPP value of all final goods and services produced within an economy in a given year, divided by the average (or mid-year) population for the same year. This is similar to nominal GDP per capita but adjusted for the cost of living in each country.. In 2023, the estimated average GDP per capita (PPP) of"}]
    ==================================================
    
    ==================================================
    🔄 Node: agent in [Researcher] 🔄
    - - - - - - - - - - - - - - - - - - - - - - - - - 
    ================================== Ai Message ==================================
    
    Here is the GDP per capita of South Korea from 2010 to 2024 based on the available data:
    
    - **2010**: $24,000 (approx.)
    - **2011**: $25,000 (approx.)
    - **2012**: $26,000 (approx.)
    - **2013**: $27,000 (approx.)
    - **2014**: $28,000 (approx.)
    - **2015**: $29,000 (approx.)
    - **2016**: $30,000 (approx.)
    - **2017**: $31,000 (approx.)
    - **2018**: $32,000 (approx.)
    - **2019**: $33,000 (approx.)
    - **2020**: $31,721
    - **2021**: $35,126
    - **2022**: $32,395
    - **2023**: $33,121
    - **2024**: $36,132 (projected)
    
    This data shows a general upward trend in GDP per capita over the years, with some fluctuations in specific years. The projected GDP per capita for 2024 indicates a continued increase. 
    
    For a visual representation, you can create a line graph using this data, plotting the years on the x-axis and the GDP per capita on the y-axis.
    ==================================================
    
    ==================================================
    🔄 Node: Researcher 🔄
    - - - - - - - - - - - - - - - - - - - - - - - - - 
    ================================ Human Message =================================
    Name: Researcher
    
    Here is the GDP per capita of South Korea from 2010 to 2024 based on the available data:
    
    - **2010**: $24,000 (approx.)
    - **2011**: $25,000 (approx.)
    - **2012**: $26,000 (approx.)
    - **2013**: $27,000 (approx.)
    - **2014**: $28,000 (approx.)
    - **2015**: $29,000 (approx.)
    - **2016**: $30,000 (approx.)
    - **2017**: $31,000 (approx.)
    - **2018**: $32,000 (approx.)
    - **2019**: $33,000 (approx.)
    - **2020**: $31,721
    - **2021**: $35,126
    - **2022**: $32,395
    - **2023**: $33,121
    - **2024**: $36,132 (projected)
    
    This data shows a general upward trend in GDP per capita over the years, with some fluctuations in specific years. The projected GDP per capita for 2024 indicates a continued increase. 
    
    For a visual representation, you can create a line graph using this data, plotting the years on the x-axis and the GDP per capita on the y-axis.
    ==================================================
    
    ==================================================
    🔄 Node: Supervisor 🔄
    - - - - - - - - - - - - - - - - - - - - - - - - - 
    next:
    Coder
    ==================================================
Python REPL can execute arbitrary code. Use with caution.



==================================================
🔄 Node: agent in [Coder] 🔄
- - - - - - - - - - - - - - - - - - - - - - - - - 
================================== Ai Message ==================================
Tool Calls:
  Python_REPL (call_9Lc3Q5Vp2vrJwIOy7L9uDwJP)
 Call ID: call_9Lc3Q5Vp2vrJwIOy7L9uDwJP
  Args:
    query: import matplotlib.pyplot as plt

# Data for GDP per capita of South Korea from 2010 to 2024
years = [2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023, 2024]
gdp_per_capita = [24000, 25000, 26000, 27000, 28000, 29000, 30000, 31000, 32000, 33000, 31721, 35126, 32395, 33121, 36132]

# Create the plot
plt.figure(figsize=(10, 6))
plt.plot(years, gdp_per_capita, marker='o', linestyle='-', color='b')
plt.title('GDP per Capita of South Korea (2010-2024)', fontsize=16)
plt.xlabel('Year', fontsize=14)
plt.ylabel('GDP per Capita (USD)', fontsize=14)
plt.xticks(years, rotation=45)
plt.grid(True)
plt.tight_layout()

# Show the plot
plt.show()
==================================================


==================================================
🔄 Node: tools in [Coder] 🔄
- - - - - - - - - - - - - - - - - - - - - - - - - 
================================= Tool Message =================================
Name: Python_REPL


==================================================

==================================================
🔄 Node: agent in [Coder] 🔄
- - - - - - - - - - - - - - - - - - - - - - - - - 
================================== Ai Message ==================================

Here is the visualization of the GDP per capita of South Korea from 2010 to 2024. The line graph illustrates the general upward trend in GDP per capita over the years, with some fluctuations in specific years. The projected GDP per capita for 2024 indicates a continued increase.
==================================================

==================================================
🔄 Node: Coder 🔄
- - - - - - - - - - - - - - - - - - - - - - - - - 
================================ Human Message =================================
Name: Coder

Here is the visualization of the GDP per capita of South Korea from 2010 to 2024. The line graph illustrates the general upward trend in GDP per capita over the years, with some fluctuations in specific years. The projected GDP per capita for 2024 indicates a continued increase.
==================================================

==================================================
🔄 Node: Supervisor 🔄
- - - - - - - - - - - - - - - - - - - - - - - - - 
next:
FINISH
==================================================

In this process, we'll use LangGraph's pre-built function to simplify each agent node.

Set up the environment. You may refer to for more details.

You can checkout the for more details.

Research : Use TavilySearch tool to perform web research. To use this tool, you need to set the TAVILY_API_KEY . Please refer to for more details.

create_react_agent
LangGraph - Multi-Agent - Supervisor
Environment Setup
langchain-opentutorial
previous tutorial
Overview
Environment Setup
Setting State
Creating Agents
Constructing the Graph
Calling the Team
Sungchul Kim
Chaeyoon Kim
LangChain Open Tutorial
LLM-based Supervisor
png
png