
RunnableLambda


  • Author: Kenny Jung

  • Peer Review: Junseong Kim, Haseom Shin

  • Proofread: Chaeyoon Kim

  • This is a part of LangChain Open Tutorial

Overview

RunnableLambda provides a way to integrate custom functions into your LangChain pipeline.

This allows you to define your own functions and execute them with RunnableLambda as part of your workflow.

For example, you can wrap your own logic to handle tasks such as the following (a short sketch appears after the list):

  • Data preprocessing,

  • Calculations,

  • Interactions with external APIs, and

  • Any other custom logic you need in your chain.
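
As a minimal sketch, any single-argument callable can be wrapped in RunnableLambda and invoked directly (the to_upper function here is illustrative, not part of the tutorial):

from langchain_core.runnables import RunnableLambda


# Wrap a plain single-argument function as a Runnable
def to_upper(text: str) -> str:
    return text.upper()


runnable = RunnableLambda(to_upper)
print(runnable.invoke("hello"))  # -> HELLO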

Table of Contents

  • Overview
  • References
  • Environment Setup
  • How to Execute Custom Functions
  • Using RunnableConfig as Parameters

References

  • LangChain Python API Reference > langchain: 0.3.29 > runnables > RunnableLambda
  • LangChain Python API Reference > langchain: 0.3.29 > runnables > RunnableConfig
  • LangChain Python API Reference > docs > concepts > runnables > Configurable Runnables

Environment Setup

Setting up your environment is the first step. See the Environment Setup guide for more details.

[Note]

  • The langchain-opentutorial is a package of easy-to-use environment setup guidance, useful functions, and utilities for tutorials. Check out the langchain-opentutorial package for more details.

%%capture --no-stderr
%pip install langchain-opentutorial
# Install required packages
from langchain_opentutorial import package

package.install(
    [
        "langsmith",
        "langchain",
        "langchain_core",
        "langchain_openai",
    ],
    verbose=False,
    upgrade=False,
)

Next, set the environment variables required for this tutorial.

# Set environment variables
from langchain_opentutorial import set_env

set_env(
    {
        "OPENAI_API_KEY": "",
        "LANGCHAIN_API_KEY": "",
        "LANGCHAIN_TRACING_V2": "true",
        "LANGCHAIN_ENDPOINT": "https://api.smith.langchain.com",
        "LANGCHAIN_PROJECT": "ConversationBufferWindowMemory",
    }
)
Environment variables have been set successfully.

Alternatively, you can set OPENAI_API_KEY in a .env file and load it.

[Note] This is only necessary if you haven't already set OPENAI_API_KEY in previous steps.

from dotenv import load_dotenv

load_dotenv(override=True)
True

How to Execute Custom Functions

Important Note

You can wrap custom functions with RunnableLambda to use them in your pipeline. One limitation, however, is that a custom function can only accept a single argument.

If you have a function that requires multiple parameters, create a wrapper function that:

  1. Accepts a single input (typically a dictionary).

  2. Unpacks this input into multiple arguments inside the wrapper.

  3. Passes these arguments to your original function.

For example:

# Won't work with RunnableLambda
def original_function(arg1, arg2, arg3):
    pass


# Will work with RunnableLambda
def wrapper_function(input_dict):
    return original_function(
        input_dict["arg1"],
        input_dict["arg2"],
        input_dict["arg3"],
    )
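
Wrapped in RunnableLambda, the wrapper is then invoked with a single dictionary; a brief sketch reusing the stub functions above:

from langchain_core.runnables import RunnableLambda

# The single dict input is unpacked inside wrapper_function
runnable = RunnableLambda(wrapper_function)
runnable.invoke({"arg1": 1, "arg2": 2, "arg3": 3})  # returns None, since original_function is a stub

The full example below applies the same pattern inside a chain.
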
from operator import itemgetter

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI


# Function for returning the length of the text
def length_function(text):
    return len(text)


# Function for multiplying the length of two texts
def _multiple_length_function(text1, text2):
    return len(text1) * len(text2)


# Wrapper that unpacks a single dict into the two-argument function above
def multiple_length_function(_dict):
    return _multiple_length_function(_dict["text1"], _dict["text2"])


# Create a prompt template
prompt = ChatPromptTemplate.from_template("what is {a} + {b}?")
# Initialize the ChatOpenAI model
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# A simple prompt-model chain (shown for comparison; not used below)
chain1 = prompt | model

# Chain configuration
chain = (
    {
        "a": itemgetter("input_1") | RunnableLambda(length_function),
        "b": {"text1": itemgetter("input_1"), "text2": itemgetter("input_2")}
        | RunnableLambda(multiple_length_function),
    }
    | prompt
    | model
    | StrOutputParser()
)

Execute the chain and verify the result.

# Execute the chain with the given arguments.
chain.invoke({"input_1": "bar", "input_2": "gah"})
'3 + 9 equals 12.'
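
Like any Runnable, the chain also supports batch execution; a brief sketch with illustrative inputs:

# Process several inputs in one call
chain.batch(
    [
        {"input_1": "bar", "input_2": "gah"},
        {"input_1": "hello", "input_2": "hi"},
    ]
)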

Using RunnableConfig as Parameters

RunnableLambda can optionally accept a RunnableConfig object. This allows you to pass various configuration options to nested executions, including:

  • Callbacks: For tracking and monitoring function execution.

  • Tags: For labeling and organizing different runs.

  • Other configuration: Additional settings to control function behavior.

For example, you can use RunnableConfig to:

  • Track function performance.

  • Add logging capabilities.

  • Group related operations using tags.

  • Configure error handling and retry logic.

  • Set timeouts and other execution parameters.

This makes RunnableLambda highly configurable for complex workflows with fine-grained control over execution.
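
Before the fuller example below, here is a minimal sketch (show_config is an illustrative name, not part of the tutorial): when the wrapped function's signature includes a config parameter, LangChain injects the active RunnableConfig automatically.

from langchain_core.runnables import RunnableConfig, RunnableLambda


def show_config(text: str, config: RunnableConfig):
    # The active config (tags, metadata, callbacks, ...) is injected at runtime
    return {"input": text, "tags": config.get("tags", [])}


RunnableLambda(show_config).invoke("hi", config={"tags": ["demo"]})
# -> {'input': 'hi', 'tags': ['demo']}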

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableConfig
import json


def parse_or_fix(text: str, config: RunnableConfig):
    # Chain that asks the model to repair text that failed to parse
    fixing_chain = (
        ChatPromptTemplate.from_template(
            "Fix the following text:\n\n```text\n{input}\n```\nError: {error}"
            " Don't narrate, just respond with the fixed data."
        )
        | ChatOpenAI(model="gpt-4o-mini", temperature=0)
        | StrOutputParser()
    )
    # Try up to 3 times
    for _ in range(3):
        try:
            # Parse the text as JSON
            return json.loads(text)
        except Exception as e:
            # If parsing fails, call the fixing chain to repair the text
            text = fixing_chain.invoke({"input": text, "error": e}, config)
            print(f"config: {config}")
    # If every attempt fails, return a failure message
    return "Failed to parse"
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    # Call the parse_or_fix function using RunnableLambda
    output = RunnableLambda(parse_or_fix).invoke(
        input="{foo:: bar}",
        config={"tags": ["my-tag"], "callbacks": [cb]},  # Pass the config
    )
    # Print the modified result
    print(f"\n\nModified result:\n{output}")
config: {'tags': ['my-tag'], 'metadata': {}, 'callbacks': ..., 'recursion_limit': 25, 'configurable': {}}


Modified result:
{'foo': 'bar'}
# Check the output
print(output)
{'foo': 'bar'}
# Check the callback
print(cb)
Tokens Used: 59
    Prompt Tokens: 52
        Prompt Tokens Cached: 0
    Completion Tokens: 7
        Reasoning Tokens: 0
Successful Requests: 1
Total Cost (USD): $1.1999999999999997e-05
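
You can also bind a default config to the runnable itself with with_config, so the same tags apply to every invocation; a brief sketch (the tag name is illustrative):

# Attach default tags to the runnable itself
fixer = RunnableLambda(parse_or_fix).with_config({"tags": ["json-fixer"]})
fixer.invoke("{foo:: bar}")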
