Functional API


  • Author: Yejin Park

  • Peer Review: fastjw

  • Proofread:

  • This is a part of LangChain Open Tutorial

Overview

This tutorial covers LangGraph's Functional API, focusing on workflow automation with @entrypoint and @task decorators.

Key features include state management, parallel processing, and human-in-the-loop capabilities.

Table of Contents

  • Overview
  • Environment Setup
  • Functional API
  • Use Cases

References

  • LangGraph: Functional API Document
  • LangGraph: Functional API Tutorial

Environment Setup

Setting up your environment is the first step. See the Environment Setup guide for more details.

[Note] The langchain-opentutorial is a package of easy-to-use environment setup guidance, useful functions, and utilities for tutorials. Check out the langchain-opentutorial package for more details.

%%capture --no-stderr
%pip install langchain-opentutorial
# Install required packages
from langchain_opentutorial import package

package.install(
    [
        "langchain_core",
        "langgraph",
        "langchain-openai",
    ],
    verbose=False,
    upgrade=True,
)

You can set API keys in a .env file or set them manually.

[Note] If you’re not using the .env file, no worries! Just enter the keys directly in the cell below, and you’re good to go.

from dotenv import load_dotenv
from langchain_opentutorial import set_env

# Attempt to load environment variables from a .env file; if unsuccessful, set them manually.
if not load_dotenv():
    set_env(
        {
            "OPENAI_API_KEY": "",
            "LANGCHAIN_API_KEY": "",
            "LANGCHAIN_TRACING_V2": "true",
            "LANGCHAIN_ENDPOINT": "https://api.smith.langchain.com",
            "LANGCHAIN_PROJECT": "15-LangGraph-Functional-API",
        }
    )

Functional API

The Functional API is a programming interface provided by LangGraph that extends existing Python functions with advanced features such as state management, parallel processing, and memory management, all while requiring minimal code modifications.

Core Components

The Functional API uses two primitives to define workflows:

  1. @entrypoint Decorator

  • Defines the entry point of a workflow

  • Automates state management and checkpointing

  • Manages streaming and interruption points

from uuid import uuid4
from langgraph.checkpoint.memory import MemorySaver
from langgraph.func import entrypoint


@entrypoint(checkpointer=MemorySaver())
def calculate_sum(numbers: list[int]) -> int:
    """A simple workflow that sums numbers"""
    return sum(numbers)

config = {
    "configurable": {
        "thread_id": str(uuid4())
    }
}

calculate_sum.invoke([1, 2, 3, 4, 5], config)
15
  2. @task Decorator

  • Defines units of work that can be executed asynchronously

  • Handles retry policies and error handling (a retry sketch follows the example below)

  • Supports parallel processing

from uuid import uuid4
from langgraph.checkpoint.memory import MemorySaver
from langgraph.func import task

@task()
def multiply_number(num: int) -> int:
    """Simple task that multiplies a number by 2"""
    return num * 2

@entrypoint(checkpointer=MemorySaver())
def calculate_multiply(num: int) -> int:
    """A simple workflow that multiplies two numbers"""
    future = multiply_number(num)
    return future.result()

config = {
    "configurable": {
        "thread_id": str(uuid4())
    }
}
calculate_multiply.invoke(3, config)
6
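
To make the retry bullet above concrete, here is a minimal sketch. It assumes a langgraph release in which @task accepts a RetryPolicy (the keyword argument has been named retry in some releases and retry_policy in others, so check your installed version); flaky_api_call is a hypothetical function used purely for illustration.

import random
from langgraph.func import entrypoint, task
from langgraph.types import RetryPolicy

@task(retry=RetryPolicy(max_attempts=3))
def flaky_api_call(n: int) -> int:
    """Hypothetical transiently failing call, retried up to 3 times."""
    if random.random() < 0.5:
        raise ConnectionError("transient failure")
    return n * 2

@entrypoint()
def resilient_workflow(n: int) -> int:
    return flaky_api_call(n).result()

resilient_workflow.invoke(3)  # usually 6; raises only if all 3 attempts fail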

Use Cases

Asynchronous and Parallel Processing

Long-running tasks can significantly impact application performance.

The Functional API allows you to execute tasks asynchronously and in parallel, improving efficiency, especially for I/O-bound operations such as LLM API calls.

The @task decorator makes it easy to convert regular functions into asynchronous tasks; an async/await variant is sketched after the example below.

from langgraph.func import task
import time

@task()
def process_number(n: int) -> int:
    """Simulates processing by waiting for 1 second"""
    time.sleep(1)
    return n * 2

@entrypoint()
def parallel_processing(numbers: list[int]) -> list[int]:
    """Processes multiple numbers in parallel"""
    # Start all tasks
    futures = [process_number(n) for n in numbers]
    return [f.result() for f in futures]

parallel_processing.invoke([1, 2, 3, 4, 5])
[2, 4, 6, 8, 10]
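
The same pattern carries over to native coroutines. A minimal sketch, assuming your installed langgraph version supports async tasks and entrypoints (run via ainvoke); the sleep stands in for an I/O-bound call such as an LLM request.

import asyncio
from langgraph.func import entrypoint, task

@task()
async def process_number_async(n: int) -> int:
    await asyncio.sleep(1)  # stand-in for an I/O-bound call
    return n * 2

@entrypoint()
async def parallel_processing_async(numbers: list[int]) -> list[int]:
    # Calling an async task schedules it immediately; awaiting collects the results
    futures = [process_number_async(n) for n in numbers]
    return [await f for f in futures]

# asyncio.run(parallel_processing_async.ainvoke([1, 2, 3, 4, 5]))  # [2, 4, 6, 8, 10]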

Interrupts and Human Intervention

Some workflows require human oversight or intervention at critical points.

The Functional API provides built-in support for human-in-the-loop processes through its interrupt mechanism.

This allows you to pause execution, get human input, and continue processing based on that input.

from uuid import uuid4
from langgraph.func import entrypoint, task
from langgraph.types import Command, interrupt
from langgraph.checkpoint.memory import MemorySaver


@task()
def step_1(input_query):
    """Append bar."""
    return f"{input_query} bar"


@task()
def human_feedback(input_query):
    """Append user input. (Defined here for illustration; the entrypoint below calls interrupt directly instead of using this task.)"""
    feedback = interrupt(f"Please provide feedback: {input_query}")
    return f"{input_query} {feedback}"


@task()
def step_3(input_query):
    """Append qux."""
    return f"{input_query} qux"

checkpointer = MemorySaver()

@entrypoint(checkpointer=checkpointer)
def graph(input_query):
    result_1 = step_1(input_query).result()
    feedback = interrupt(f"Please provide feedback: {result_1}")

    result_2 = f"{input_query} {feedback}"  # builds on the original input, not result_1 (see output below)
    result_3 = step_3(result_2).result()

    return result_3

config = {"configurable": {"thread_id": str(uuid4())}}
for event in graph.stream("foo", config):
    print(event)
    print("\n")
{'step_1': 'foo bar'}

{'__interrupt__': (Interrupt(value='Please provide feedback: foo bar', resumable=True, ns=['graph:f550c5f8-67e0-6c57-9206-c10c7affc896'], when='during'),)}
# Continue execution
for event in graph.stream(Command(resume="baz"), config):
    print(event)
    print("\n")
{'step_3': 'foo baz qux'}

{'graph': 'foo baz qux'}
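
Between the two stream calls above, you can also inspect the paused thread to see what is pending. A minimal sketch, assuming the StateSnapshot API returned by get_state (the same method used in the state-management section below):

# While the graph is interrupted (before resuming with Command):
snapshot = graph.get_state(config)
print(snapshot.next)  # e.g. ('graph',) -- the entrypoint is still pending
for pending_task in snapshot.tasks:
    print(pending_task.interrupts)  # the Interrupt payload shown above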

Automated State Management

The Functional API automatically handles state persistence and restoration between function calls.

This is particularly useful in conversational applications where maintaining context is crucial.

You can focus on your business logic while LangGraph handles the complexities of state management.
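
Before the full conversational example, here is a minimal sketch of the underlying mechanism: whatever an entrypoint returns is checkpointed, and on the next invocation with the same thread_id it is injected back through the reserved previous parameter (this requires a checkpointer to be configured).

from langgraph.checkpoint.memory import MemorySaver
from langgraph.func import entrypoint

@entrypoint(checkpointer=MemorySaver())
def accumulate(n: int, *, previous: int = None) -> int:
    # The returned value is saved to the checkpoint and passed back
    # as `previous` on the next call with the same thread_id.
    return (previous or 0) + n

config = {"configurable": {"thread_id": "demo-thread"}}
accumulate.invoke(2, config)  # 2
accumulate.invoke(3, config)  # 5 (the 2 is carried over from the previous call)

The conversational agent below uses the same mechanism, with a message list instead of an integer.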

from uuid import uuid4
from langgraph.checkpoint.memory import MemorySaver
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage, BaseMessage
from langgraph.func import entrypoint
from langgraph.graph import add_messages


llm = ChatOpenAI(
    model="gpt-4o-mini",
    temperature=0
)

checkpointer = MemorySaver()

# Set a checkpointer to enable persistence.
@entrypoint(checkpointer=checkpointer)
def conversational_agent(messages: list[BaseMessage], *, previous: list[BaseMessage] = None):
    # Add previous messages from short-term memory to the current messages
    if previous is not None:
        messages = add_messages(previous, messages)

    # Get agent's response based on conversation history.
    llm_response = llm.invoke(
         [
            SystemMessage(
                content="You are a helpful assistant tasked with performing arithmetic on a set of inputs."
            )
        ]
        + messages
    )

    # Add agent's messages to conversation history
    messages = add_messages(messages, llm_response)

    return messages
# Config
config = {
    "configurable": {
        "thread_id": str(uuid4())
    }
}

# Run with checkpointer to persist state in memory
messages = conversational_agent.invoke([HumanMessage(content="Hi. I'm currently creating a tutorial, named LangChain OpenTutorial.")], config)
for m in messages:
    m.pretty_print()
================================ Human Message =================================

Hi. I'm currently creating a tutorial, named LangChain OpenTutorial.

================================== Ai Message ==================================

That sounds like a great project! How can I assist you with your LangChain OpenTutorial? Are you looking for help with content, examples, or something else?
# Checkpoint state
agent_state = conversational_agent.get_state(config)
for m in agent_state.values:
    m.pretty_print()
================================ Human Message =================================

Hi. I'm currently creating a tutorial, named LangChain OpenTutorial.

================================== Ai Message ==================================

That sounds like a great project! How can I assist you with your LangChain OpenTutorial? Are you looking for help with content, examples, or something else?
# Continue with the same thread
messages = conversational_agent.invoke([HumanMessage(content="Do you remember the name of my tutorial that I'm now working on?")], config)
for m in messages:
    m.pretty_print()
================================ Human Message =================================

Hi. I'm currently creating a tutorial, named LangChain OpenTutorial.

================================== Ai Message ==================================

That sounds like a great project! How can I assist you with your LangChain OpenTutorial? Are you looking for help with content, examples, or something else?

================================ Human Message =================================

Do you remember the name of my tutorial that I'm now working on?

================================== Ai Message ==================================

Yes, you mentioned that you are creating a tutorial named "LangChain OpenTutorial." How can I assist you further with it?
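
If the value you want to return to the caller differs from the state you want to carry forward, the two can be decoupled. A minimal sketch, assuming a langgraph version that exposes entrypoint.final:

from langgraph.checkpoint.memory import MemorySaver
from langgraph.func import entrypoint

@entrypoint(checkpointer=MemorySaver())
def running_total(n: int, *, previous: int = None):
    total = (previous or 0) + n
    # Return `n` to the caller, but checkpoint `total` as the next call's `previous`.
    return entrypoint.final(value=n, save=total)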
