# Load environment variables
# Reload any variables that need to be overwritten from the previous cell
from dotenv import load_dotenv

load_dotenv(override=True)
True
What is RunnableBranch
RunnableBranch dynamically routes logic based on input. It allows developers to define different processing paths depending on the characteristics of the input data.
RunnableBranch makes complex decision trees simpler and more intuitive to implement. This improves code readability and maintainability while promoting modularization and reuse of logic.
Additionally, RunnableBranch dynamically evaluates branching conditions at runtime. This enables it to select the appropriate processing routine, which enhances the system's adaptability and scalability.
Thanks to these features, RunnableBranch is applicable across various domains and is particularly useful for developing applications that handle highly variable and volatile input data.
By effectively utilizing RunnableBranch, developers can reduce code complexity while improving both system flexibility and performance.
Dynamic Logic Routing Based on Input
This section covers how to perform routing within LangChain Expression Language (LCEL).
Routing enables the creation of non-deterministic chains, where the output of a previous step determines the next step. This brings core structure and consistency to interactions with LLMs.
There are two primary methods available for implementing routing:
Returning a conditionally executable object from RunnableLambda (Recommended).
Using RunnableBranch.
Both of these methods can be explained using a two-step sequence: first, classifying the input question into a category (math, science, or other), and second, routing the question to the corresponding prompt chain based on the category.
Simple Example
Firstly, we will create a chain that classifies incoming questions into one of three categories: math, science, or other.
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    """Classify the given user question into one of `math`, `science`, or `other`. Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
)

# Create the chain.
chain = (
    prompt
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()  # Use a string output parser.
)
After creating the chain, use it to classify a test question and verify the result.
# Invoke the chain with a question.chain.invoke({"question": "What is 2+2?"})
'math'
# Invoke the chain with a question.chain.invoke({"question": "What is the law of action and reaction?"})
'science'
# Invoke the chain with a question.chain.invoke({"question": "What is LangChain?"})
'other'
RunnableLambda
RunnableLambda is a type of runnable designed to simplify the execution of a single transformation or operation using a lambda (anonymous) function.
It is primarily used for lightweight, stateless operations where defining an entire custom Runnable class would be overkill.
Unlike RunnableBranch, which focuses on conditional branching logic, RunnableLambda excels in straightforward data transformations or function applications.
Syntax
RunnableLambda is initialized with a single lambda function or callable object.
When invoked, the input value is passed directly to the lambda function.
The lambda function processes the input and returns the result.
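As a minimal sketch of this syntax (separate from the routing example that follows, and using purely illustrative lambdas and values), a RunnableLambda wraps an ordinary callable and can then be invoked or composed like any other runnable:

from langchain_core.runnables import RunnableLambda

# Wrap a plain lambda in a RunnableLambda.
add_one = RunnableLambda(lambda x: x + 1)

# The value passed to invoke() is handed directly to the wrapped function.
add_one.invoke(5)  # -> 6

# Like other runnables, it composes with the | operator.
double_then_add_one = RunnableLambda(lambda x: x * 2) | add_one
double_then_add_one.invoke(5)  # -> 11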
Now, let's create three sub-chains.
math_chain = (
    PromptTemplate.from_template(
        """You are an expert in math. \
Always answer questions starting with "Pythagoras once said...". \
Respond to the following question:

Question: {question}

Answer:"""
    )
    | ChatOpenAI(model="gpt-4o-mini")
)

science_chain = (
    PromptTemplate.from_template(
        """You are an expert in science. \
Always answer questions starting with "Isaac Newton once said...". \
Respond to the following question:

Question: {question}

Answer:"""
    )
    | ChatOpenAI(model="gpt-4o-mini")
)

general_chain = (
    PromptTemplate.from_template(
        """Respond to the following question concisely:

Question: {question}

Answer:"""
    )
    | ChatOpenAI(model="gpt-4o-mini")
)
Using Custom Functions
This is the recommended approach in the official LangChain documentation. You can wrap custom functions with RunnableLambda to handle routing between different outputs.
# Return each chain based on the contents included in the topic.
def route(info):
    if "math" in info["topic"].lower():
        return math_chain
    elif "science" in info["topic"].lower():
        return science_chain
    else:
        return general_chain
from operator import itemgetter
from langchain_core.runnables import RunnableLambda

full_chain = (
    {"topic": chain, "question": itemgetter("question")}
    | RunnableLambda(
        # Pass the routing function as an argument.
        route
    )
    | StrOutputParser()
)
# Invoke the chain with a math-related question.
full_chain.invoke({"question": "Please explain the concept of calculus."})
'Pythagoras once said, "The study of mathematics is the study of the universe." Calculus, much like the harmony found in geometric shapes, is a branch of mathematics that focuses on change and motion. It is fundamentally divided into two main concepts: differentiation and integration.\n\nDifferentiation deals with the idea of rates of change, allowing us to understand how a function behaves as its input changes. It helps us determine slopes of curves at given points, providing insight into how quantities vary.\n\nIntegration, on the other hand, is concerned with the accumulation of quantities, such as areas under curves. It allows us to sum up infinitely small pieces to find total quantities, providing a way to calculate things like distances traveled over time.\n\nTogether, these concepts enable us to analyze complex systems in fields ranging from physics to economics, illustrating how the world evolves and changes. In essence, calculus is a powerful tool that helps us grasp the continuous nature of change in our universe.'
# Invoke the chain with a science-related question.
full_chain.invoke({"question": "How is gravitational acceleration calculated?"})
'Isaac Newton once said, "What goes up must come down," highlighting the fundamental principle of gravity. Gravitational acceleration is calculated using the formula \\( g = \\frac{F}{m} \\), where \\( F \\) is the force of gravity acting on an object and \\( m \\) is the mass of that object. In a more specific context, near the surface of the Earth, gravitational acceleration can also be approximated using the formula \\( g = \\frac{G \\cdot M}{r^2} \\), where \\( G \\) is the gravitational constant, \\( M \\) is the mass of the Earth, and \\( r \\) is the distance from the center of the Earth to the object. This results in a standard gravitational acceleration of approximately \\( 9.81 \\, \\text{m/s}^2 \\).'
# Invoke the chain with a general question.
full_chain.invoke({"question": "What is RAG (Retrieval Augmented Generation)?"})
'Retrieval Augmented Generation (RAG) is a machine learning approach that combines retrieval-based methods with generative models. It retrieves relevant information from a knowledge base or document corpus to enhance the context for generating responses, enabling the model to produce more accurate and informative outputs by leveraging external data.'
RunnableBranch
RunnableBranch is a specialized Runnable that pairs conditions with corresponding Runnable objects and selects which one to execute based on the input value.
However, it does not provide any functionality that cannot be achieved with a custom function, so using a custom function is often preferred.
Syntax
RunnableBranch is initialized with a list of (condition, Runnable) pairs and a default Runnable.
When RunnableBranch is invoked, the input value is sequentially passed to each condition.
The first condition that evaluates to True determines which Runnable is executed with the input.
If none of the conditions evaluate to True, the default Runnable is executed.
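Before applying this to the question-routing chains below, here is a minimal, self-contained sketch of the syntax above; the lambdas and values are purely illustrative:

from langchain_core.runnables import RunnableBranch, RunnableLambda

# Each branch is a (condition, Runnable) pair; the final argument is the default Runnable.
simple_branch = RunnableBranch(
    (lambda x: x < 0, RunnableLambda(lambda x: "negative")),
    (lambda x: x == 0, RunnableLambda(lambda x: "zero")),
    RunnableLambda(lambda x: "positive"),  # Executed when no condition matches.
)

# Conditions are checked in order; the first one that evaluates to True selects the Runnable.
simple_branch.invoke(-3)  # -> 'negative'
simple_branch.invoke(7)   # -> 'positive'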
from operator import itemgetter
from langchain_core.runnables import RunnableBranch

branch = RunnableBranch(
    # Check if the topic contains "math" and execute math_chain if true.
    (lambda x: "math" in x["topic"].lower(), math_chain),
    # Check if the topic contains "science" and execute science_chain if true.
    (lambda x: "science" in x["topic"].lower(), science_chain),
    # If none of the above conditions match, execute general_chain.
    general_chain,
)

# Define the full chain that takes a topic and question, routes it, and parses the output.
full_chain = (
    {"topic": chain, "question": itemgetter("question")}
    | branch
    | StrOutputParser()
)
Let's execute the full chain with each question.
full_chain.invoke({"question": "Please explain the concept of calculus."})
'Pythagoras once said, "To understand the world, we must first understand the relationships between its parts." Calculus is a branch of mathematics that focuses on change and motion, allowing us to analyze how quantities vary. It is fundamentally divided into two main areas: differential calculus, which deals with the concept of the derivative and how functions change at any given point, and integral calculus, which concerns the accumulation of quantities and the area under curves.\n\nThrough the tools of limits, derivatives, and integrals, calculus provides powerful methods for solving problems in physics, engineering, economics, and many other fields. It helps us understand everything from the motion of planets to the growth of populations, emphasizing the continuous nature of change in our universe.'
full_chain.invoke({"question": "How is gravitational acceleration calculated?"})
'Isaac Newton once said, "What goes up must come down," reflecting his profound understanding of gravity. Gravitational acceleration, often denoted as \\( g \\), is calculated using the formula:\n\n\\[\ng = \\frac{G \\cdot M}{r^2}\n\\]\n\nwhere \\( G \\) is the gravitational constant (approximately \\( 6.674 \\times 10^{-11} \\, \\text{m}^3 \\text{kg}^{-1} \\text{s}^{-2} \\)), \\( M \\) is the mass of the object exerting the gravitational force (like the Earth), and \\( r \\) is the distance from the center of that mass to the point where the gravitational acceleration is being calculated. Near the Earth\'s surface, this value is approximately \\( 9.81 \\, \\text{m/s}^2 \\).'
full_chain.invoke({"question": "What is RAG (Retrieval Augmented Generation)?"})
'Retrieval Augmented Generation (RAG) is a framework that combines retrieval-based and generation-based approaches in natural language processing. It retrieves relevant documents or information from a knowledge base and uses that information to enhance the generation of responses or text, improving the accuracy and relevance of the output. RAG is particularly useful in tasks like question answering and conversational agents.'
Comparison of RunnableBranch and RunnableLambda
| Criteria | RunnableLambda | RunnableBranch |
|---|---|---|
| Condition Definition | All conditions are defined within a single function (route). | Each condition is defined as a (condition, Runnable) pair. |
| Readability | Very clear for simple logic. | Becomes clearer as the number of conditions increases. |
| Maintainability | Can become complex to maintain if the function grows large. | Provides a clear separation between conditions and their corresponding Runnables. |
| Flexibility | Allows more flexibility in how conditions are written. | Requires adherence to the (condition, Runnable) pattern. |
| Scalability | Scaling involves modifying the existing function. | Scaling requires adding new (condition, Runnable) pairs. |
| Recommended Use Case | When conditions are relatively simple or primarily function-based transformations. | When dealing with many conditions or when maintainability is a primary concern. |