Routing

Overview

This tutorial introduces three key LangChain components for building efficient and powerful AI applications: RunnableSequence, RunnableBranch, and RunnableLambda.

RunnableSequence is a fundamental component that enables sequential processing pipelines, allowing structured and efficient handling of AI-related tasks. It provides automatic data flow management, error handling, and seamless integration with other LangChain components.

RunnableBranch enables structured decision-making by routing input through predefined conditions, simplifying complex branching scenarios.

RunnableLambda offers a flexible, function-based approach, ideal for lightweight transformations and inline processing.

Key Features of these components:

  • RunnableSequence:

    • Sequential processing pipeline creation

    • Automatic data flow management

    • Error handling and monitoring

    • Support for async operations

  • RunnableBranch:

    • Dynamic routing based on conditions

    • Structured decision trees

    • Complex branching logic

  • RunnableLambda:

    • Lightweight transformations

    • Function-based processing

    • Inline data manipulation

Environment Setup

Set up the environment. You may refer to Environment Setup for more details.

[Note]

  • langchain-opentutorial is a package that provides easy-to-use environment setup, useful functions, and utilities for tutorials.

  • You can check out the langchain-opentutorial package for more details.

You can alternatively set OPENAI_API_KEY in a .env file and load it.

[Note] This is not necessary if you've already set OPENAI_API_KEY in previous steps.
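
If you go the .env route, a minimal sketch using the python-dotenv package (assuming your .env file contains an OPENAI_API_KEY entry) looks like this:

```python
# Load environment variables, including OPENAI_API_KEY, from a local .env file.
from dotenv import load_dotenv

load_dotenv(override=True)
```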

What is RunnableSequence?

RunnableSequence is a fundamental component in LangChain that enables the creation of sequential processing pipelines. It allows developers to chain multiple operations together where the output of one step becomes the input of the next step.

Key Concepts

  1. Sequential Processing

    • Ordered execution of operations

    • Automatic data flow between steps

    • Clear pipeline structure

  2. Data Transformation

    • Input preprocessing

    • State management

    • Output formatting

  3. Error Handling

    • Pipeline-level error management

    • Step-specific error recovery

    • Fallback mechanisms

Let's explore these concepts with practical examples.

Basic Pipeline Creation

In this section, we'll explore how to create fundamental pipelines using RunnableSequence. We'll start with a simple text generation pipeline and gradually build more complex functionality.

Understanding Basic Pipeline Structure

  • Sequential Processing: How data flows through the pipeline

  • Component Integration: Combining different LangChain components

  • Data Transformation: Managing input/output between steps
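
As a rough sketch of such a basic pipeline (the prompt wording and the gpt-4o-mini model name are illustrative assumptions, not requirements of this tutorial), note that composing runnables with the | operator produces a RunnableSequence under the hood:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# A simple text generation pipeline: prompt -> model -> string output.
prompt = ChatPromptTemplate.from_template(
    "Write a short, clear explanation of the following topic:\n\n{topic}"
)
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
parser = StrOutputParser()

# Piping runnables together returns a RunnableSequence.
basic_pipeline = prompt | model | parser

print(basic_pipeline.invoke({"topic": "How does a binary search work?"}))
```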

Advanced Analysis Pipeline

Building upon our basic pipeline, we'll now create a more sophisticated analysis system that processes and evaluates the generated content.

Key Features

  • State Management: Maintaining context throughout the pipeline

  • Structured Analysis: Organizing output in a clear format

  • Error Handling: Basic error management implementation
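
One way to sketch this kind of analysis stage (the prompts and dictionary keys here are our own illustrative choices) is to use RunnablePassthrough.assign so the original topic stays in the pipeline state alongside the generated explanation:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

generate_prompt = ChatPromptTemplate.from_template(
    "Write a short explanation of {topic}."
)
analyze_prompt = ChatPromptTemplate.from_template(
    "Analyze the following explanation of {topic}.\n"
    "List its strengths and weaknesses as bullet points.\n\n{explanation}"
)

# Step 1 keeps the original topic and adds the generated explanation to the state.
# Step 2 consumes both keys and produces a structured analysis.
analysis_pipeline = (
    RunnablePassthrough.assign(
        explanation=generate_prompt | model | StrOutputParser()
    )
    | analyze_prompt
    | model
    | StrOutputParser()
)

print(analysis_pipeline.invoke({"topic": "gradient descent"}))
```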

Structured Evaluation Pipeline

In this section, we'll add structured evaluation capabilities to our pipeline, including proper error handling and validation.

Features

  • Structured Output: Using schema-based parsing

  • Validation: Input and output validation

  • Error Management: Comprehensive error handling
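
A sketch of schema-based evaluation (the EvaluationResult schema and the surrounding try/except are illustrative assumptions) could rely on with_structured_output to validate the model's answer against a Pydantic model:

```python
from pydantic import BaseModel, Field
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

class EvaluationResult(BaseModel):
    """Schema that the evaluation step must conform to."""
    score: int = Field(description="Quality score from 1 to 10")
    strengths: list[str] = Field(description="Main strengths of the text")
    weaknesses: list[str] = Field(description="Main weaknesses of the text")

evaluation_prompt = ChatPromptTemplate.from_template(
    "Evaluate the quality of the following text:\n\n{text}"
)
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# with_structured_output parses and validates the response against the schema.
evaluation_pipeline = evaluation_prompt | model.with_structured_output(EvaluationResult)

try:
    result = evaluation_pipeline.invoke(
        {"text": "LangChain makes it easy to compose LLM calls."}
    )
    print(result.score, result.strengths)
except Exception as err:  # basic error management around the pipeline
    print(f"Evaluation failed: {err}")
```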

What is RunnableBranch?

RunnableBranch is a powerful tool that allows dynamic routing of logic based on input. It enables developers to flexibly define different processing paths depending on the characteristics of the input data.

RunnableBranch helps implement complex decision trees in a simple and intuitive way. This greatly improves code readability and maintainability while promoting logic modularization and reusability.

Additionally, RunnableBranch can dynamically evaluate branching conditions at runtime and select the appropriate processing routine, enhancing the system's adaptability and scalability.

Due to these features, RunnableBranch can be applied across various domains and is particularly useful for developing applications that must handle highly variable input data.

By effectively utilizing RunnableBranch, developers can reduce code complexity and improve system flexibility and performance.

Dynamic Logic Routing Based on Input

This section covers how to perform routing in the LangChain Expression Language (LCEL).

Routing allows you to create non-deterministic chains where the output of a previous step defines the next step. This helps bring structure and consistency to interactions with LLMs.

There are two primary methods for performing routing:

  1. Returning a Conditionally Executable Object from RunnableLambda (Recommended)

  2. Using RunnableBranch

Both methods can be explained using a two-step sequence, where the first step classifies the input question as related to math, science, or other, and the second step routes it to the corresponding prompt chain.

Simple Example

First, we will create a Chain that classifies incoming questions into one of three categories: math, science, or other.
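
A minimal sketch of such a classification chain (the prompt wording and model name are assumptions) might look like this:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

# Classify the incoming question as "math", "science", or "other".
prompt = PromptTemplate.from_template(
    """Classify the given user question as `math`, `science`, or `other`.
Respond with only one word.

<question>
{question}
</question>

Classification:"""
)

chain = prompt | ChatOpenAI(model="gpt-4o-mini", temperature=0) | StrOutputParser()
```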

Use the created chain to classify the question.
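
For example (the questions and the returned labels are illustrative; actual output depends on the model):

```python
# The chain returns a single classification label for each question.
print(chain.invoke({"question": "What is 2 + 2?"}))                   # e.g. "math"
print(chain.invoke({"question": "How does photosynthesis work?"}))    # e.g. "science"
print(chain.invoke({"question": "What should I eat for lunch?"}))     # e.g. "other"
```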

RunnableLambda

RunnableLambda is a type of Runnable designed to simplify the execution of a single transformation or operation using a lambda (anonymous) function.

It is primarily used for lightweight, stateless operations where defining an entire custom Runnable class would be overkill.

Unlike RunnableBranch, which focuses on conditional branching logic, RunnableLambda excels in straightforward data transformations or function applications.

Syntax

  • RunnableLambda is initialized with a single lambda function or callable object.

  • When invoked, the input value is passed directly to the lambda function.

  • The lambda function processes the input and returns the result.
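
For instance, a trivial RunnableLambda wrapping an anonymous function:

```python
from langchain_core.runnables import RunnableLambda

# Wrap a lambda so it can participate in a chain like any other Runnable.
shout = RunnableLambda(lambda text: text.upper() + "!")

print(shout.invoke("hello"))  # -> "HELLO!"
```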

Now, let's create three sub-chains.
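
Here is one possible sketch of the three sub-chains (the prompt wording is an assumption; each chain answers in the voice of a different expert):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

math_chain = (
    PromptTemplate.from_template(
        "You are a mathematics expert. Answer the question:\n{question}"
    )
    | model
    | StrOutputParser()
)

science_chain = (
    PromptTemplate.from_template(
        "You are a science expert. Answer the question:\n{question}"
    )
    | model
    | StrOutputParser()
)

general_chain = (
    PromptTemplate.from_template("Answer the question concisely:\n{question}")
    | model
    | StrOutputParser()
)
```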

Using Custom Functions

This is the recommended approach in the official LangChain documentation. You can wrap custom functions with RunnableLambda to handle routing between different outputs.
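
Building on the classification chain and sub-chains sketched above, a routing function wrapped in RunnableLambda might look like this (the route function and its key names are our own):

```python
from operator import itemgetter

from langchain_core.runnables import RunnableLambda

def route(info: dict):
    """Pick a sub-chain based on the classification from the previous step."""
    topic = info["topic"].lower()
    if "math" in topic:
        return math_chain
    if "science" in topic:
        return science_chain
    return general_chain

# Classify the question, keep the original question text, then route.
full_chain = (
    {"topic": chain, "question": itemgetter("question")}
    | RunnableLambda(route)
)

print(full_chain.invoke({"question": "What is the derivative of x^2?"}))
```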

RunnableBranch

RunnableBranch is a special type of Runnable that allows you to define conditions and corresponding Runnable objects based on input values.

However, it does not provide functionality that cannot be achieved with custom functions, so using custom functions is generally recommended.

Syntax

  • RunnableBranch is initialized with a list of (condition, Runnable) pairs and a default Runnable.

  • When invoked, the input value is passed to each condition sequentially.

  • The first condition that evaluates to True is selected, and the corresponding Runnable is executed with the input value.

  • If no condition matches, the default Runnable is executed.
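
Following this syntax, here is a sketch that reuses the classification chain and sub-chains from above (the conditions mirror the earlier route function):

```python
from operator import itemgetter

from langchain_core.runnables import RunnableBranch

branch = RunnableBranch(
    # Each (condition, runnable) pair is checked in order.
    (lambda x: "math" in x["topic"].lower(), math_chain),
    (lambda x: "science" in x["topic"].lower(), science_chain),
    # Default runnable when no condition matches.
    general_chain,
)

full_chain = {"topic": chain, "question": itemgetter("question")} | branch
```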

Execute the full chain with each question.
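
For example (the questions are illustrative):

```python
print(full_chain.invoke({"question": "Please explain the concept of calculus."}))
print(full_chain.invoke({"question": "How is gravitational acceleration calculated?"}))
print(full_chain.invoke({"question": "What is RAG (Retrieval Augmented Generation)?"}))
```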

Building an AI Learning Assistant

Let's apply what we've learned about Runnable components to build a practical AI Learning Assistant. This system will help students by providing tailored responses based on their questions.

First, let's set up our core components:
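
A possible sketch of the core components (the model choice, category names, and prompt wording are assumptions for this example):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
parser = StrOutputParser()

# Classify the student's question into a subject area.
classifier_chain = (
    ChatPromptTemplate.from_template(
        "Classify the student's question as `math`, `science`, or `general`.\n"
        "Respond with only one word.\n\nQuestion: {question}"
    )
    | model
    | parser
)
```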

Next, let's create our response generation strategy:
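
Continuing the sketch, the response strategy can be a set of subject-specific tutor chains plus a routing function (all names below are our own):

```python
# One specialised prompt per subject area (reusing `model` and `parser` from above).
math_tutor = (
    ChatPromptTemplate.from_template(
        "You are a patient math tutor. Explain step by step:\n{question}"
    )
    | model
    | parser
)

science_tutor = (
    ChatPromptTemplate.from_template(
        "You are a science tutor. Explain with a real-world example:\n{question}"
    )
    | model
    | parser
)

general_tutor = (
    ChatPromptTemplate.from_template(
        "You are a friendly learning assistant. Answer clearly:\n{question}"
    )
    | model
    | parser
)

def route_to_tutor(info: dict):
    """Select the tutor chain that matches the classified subject."""
    subject = info["subject"].lower()
    if "math" in subject:
        return math_tutor
    if "science" in subject:
        return science_tutor
    return general_tutor
```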

Now, let's create our main pipeline:
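
The main pipeline then classifies the question, keeps the original text, and routes to the matching tutor:

```python
from operator import itemgetter

from langchain_core.runnables import RunnableLambda

learning_assistant = (
    {"subject": classifier_chain, "question": itemgetter("question")}
    | RunnableLambda(route_to_tutor)
)
```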

Let's try out our assistant:
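
For example (sample questions are illustrative):

```python
for question in [
    "How do I solve a quadratic equation?",
    "Why is the sky blue?",
    "Can you help me plan my study schedule?",
]:
    print(learning_assistant.invoke({"question": question}), "\n")
```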

Comparison of RunnableSequence, RunnableBranch, and RunnableLambda

| Criteria | RunnableSequence | RunnableBranch | RunnableLambda |
| --- | --- | --- | --- |
| Primary Purpose | Sequential pipeline processing | Conditional routing and branching | Simple transformations and functions |
| Condition Definition | No conditions; sequential flow | Each condition defined as a (condition, runnable) pair | All conditions within a single function (route) |
| Structure | Linear chain of operations | Tree-like branching structure | Function-based transformation |
| Readability | Very clear for sequential processes | Becomes clearer as conditions increase | Very clear for simple logic |
| Maintainability | Easy to maintain step-by-step flow | Clear separation between conditions and runnables | Can become complex if the function grows large |
| Flexibility | Flexible for linear processes | Must follow the (condition, runnable) pattern | Allows flexible condition writing |
| Scalability | Add or modify pipeline steps | Requires adding new conditions and runnables | Expandable by modifying the function |
| Error Handling | Pipeline-level error management | Branch-specific error handling | Basic error handling |
| State Management | Maintains state throughout the pipeline | State managed per branch | Typically stateless |
| Recommended Use Case | When you need ordered processing steps | When there are many conditions or maintainability is a priority | When conditions are simple or function-based |
| Complexity Level | Medium to High | Medium | Low |
| Async Support | Full async support | Limited async support | Basic async support |
