RunnablePassthrough


Overview

RunnablePassthrough is a utility that passes data through a pipeline unchanged: its invoke() method returns the input exactly as it was received.

This makes it useful for forwarding data between pipeline stages without transformation.

It frequently works in tandem with RunnableParallel for concurrent task execution, where it enables adding new key-value pairs to the data stream.

Common use cases for RunnablePassthrough include:

  • Direct data forwarding without transformation

  • Pipeline stage bypassing

  • Pipeline flow validation during debugging

Table of Contents

  • Overview

  • Environment Setup

  • Passing Data with RunnablePassthrough and RunnableParallel

  • Example of Using RunnableParallel and RunnablePassthrough

  • Search Engine Integration


Environment Setup

Set up the environment. You may refer to Environment Setup for more details.

[Note]

  • langchain-opentutorial is a package that provides easy-to-use environment setup, along with useful functions and utilities for tutorials.

  • You can check out the langchain-opentutorial repository for more details.
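
A minimal sketch of installing the package in a notebook cell, assuming it is published under the same name on PyPI:

```python
# Install the tutorial helper package (run once per environment)
%pip install langchain-opentutorial
```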

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the code below:
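
A sketch of that commented-out cell, assuming the standard LangSmith environment variables:

```python
# Uncomment to enable LangSmith tracing; the key is a placeholder
# import os
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-api-key"
```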

You can alternatively set API keys such as OPENAI_API_KEY in a .env file and load them.
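
For example, a minimal sketch using python-dotenv:

```python
from dotenv import load_dotenv

# Reads keys such as OPENAI_API_KEY from a local .env file
load_dotenv(override=True)
```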

[Note] This is not necessary if you've already set the required API keys in previous steps.

Passing Data with RunnablePassthrough and RunnableParallel

RunnablePassthrough is a utility that passes data through unchanged or adds minimal information before forwarding.

It commonly integrates with RunnableParallel to map data under new keys.

  • Standalone Usage

    When used independently, RunnablePassthrough() returns the input data unmodified.

  • Usage with assign

    When implemented with assign as RunnablePassthrough.assign(...), it augments the input data with additional fields before forwarding.

By leveraging RunnablePassthrough, you can maintain data integrity through pipeline stages while selectively adding required information.
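
A minimal sketch of both patterns; the doubled key here is an illustrative addition, not part of any fixed API:

```python
from langchain_core.runnables import RunnablePassthrough

# Standalone: the input comes back unmodified
RunnablePassthrough().invoke({"num": 1})
# -> {'num': 1}

# With assign: the original input is kept and a new key is added
RunnablePassthrough.assign(doubled=lambda x: x["num"] * 2).invoke({"num": 1})
# -> {'num': 1, 'doubled': 2}
```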


Example of Using RunnableParallel and RunnablePassthrough

While RunnablePassthrough is effective independently, it becomes more powerful when combined with RunnableParallel.

This section demonstrates how to configure and run parallel tasks using the RunnableParallel class. The following steps provide a beginner-friendly implementation guide.

  1. Initialize RunnableParallel

    Create a RunnableParallel instance to manage concurrent task execution.

  2. Configure passed Task

    • Define a passed task utilizing RunnablePassthrough

    • This task preserves input data without modification

  3. Set Up extra Task

    • Implement an extra task using RunnablePassthrough.assign()

    • This task computes triple the "num" value and stores it with key mult

  4. Implement modified Task

    • Create a modified task using a basic function

    • This function increments the "num" value by 1

  5. Task Execution

    • Invoke all tasks using runnable.invoke()

    • Example: Input {"num": 1} triggers concurrent execution of all defined tasks
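
Putting these steps together, a minimal sketch of the configuration described above:

```python
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

runnable = RunnableParallel(
    # passed: forwards the input unchanged
    passed=RunnablePassthrough(),
    # extra: keeps the input and adds "mult" = 3 * "num"
    extra=RunnablePassthrough.assign(mult=lambda x: x["num"] * 3),
    # modified: increments "num" by 1
    modified=lambda x: {"num": x["num"] + 1},
)

# All three tasks run concurrently on the same input
runnable.invoke({"num": 1})
# -> {'passed': {'num': 1}, 'extra': {'num': 1, 'mult': 3}, 'modified': {'num': 2}}
```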

Summary of Results

When provided with input {"num": 1}, each task produces the following output:

  1. passed: Returns unmodified input data

    • Output: {"num": 1}

  2. extra: Augments input with "mult" key containing triple the "num" value

    • Output: {"num": 1, "mult": 3}

  3. modified: Increments the "num" value by 1

    • Output: {"num": 2}

Search Engine Integration

The following example illustrates an implementation of RunnablePassthrough.

Using RunnablePassthrough in a FAISS-Based RAG Pipeline

This code uses RunnablePassthrough in a FAISS-based RAG pipeline to forward the user's question into a chat prompt while the retriever supplies the matching context, with OpenAI embeddings powering retrieval and an OpenAI chat model generating the response.
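
A minimal sketch of such a chain; the sample document, question, and gpt-4o-mini model choice are illustrative assumptions:

```python
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Build a tiny FAISS index over a single illustrative document
vectorstore = FAISS.from_texts(
    ["Teddy is an AI engineer who loves programming!"],
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context:\n"
    "{context}\n\nQuestion: {question}"
)
model = ChatOpenAI(model="gpt-4o-mini")

# The retriever fills {context}; RunnablePassthrough forwards the
# raw question string into {question} unchanged
retrieval_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

retrieval_chain.invoke("What is Teddy's occupation?")
```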

Using Ollama

  • Download the application from the Ollama official website

  • For comprehensive Ollama documentation, visit the GitHub tutorial

  • Implementation utilizes the llama3.2 1b model for response generation and mxbai-embed-large for embedding operations
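
Once the models are pulled (see the installation guide below), a minimal sketch of initializing them through the langchain-ollama integration:

```python
from langchain_ollama import ChatOllama, OllamaEmbeddings

# Chat model for response generation
llm = ChatOllama(model="llama3.2:1b")

# Embedding model for retrieval
embeddings = OllamaEmbeddings(model="mxbai-embed-large")
```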

Ollama Installation Guide on Colab

Google Colab requires the colab-xterm extension for terminal functionality. Follow these steps to install Ollama:

  1. Install and Initialize colab-xterm
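
    A sketch of the corresponding notebook cells:

    ```python
    !pip install colab-xterm   # install the terminal extension
    %load_ext colabxterm       # load it into the notebook
    ```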

  2. Launch Terminal
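
    Open a terminal inside the notebook with the xterm magic:

    ```python
    %xterm
    ```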

  3. Install Ollama

    Execute the following command in the terminal:
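
    ```bash
    # Official Ollama install script
    curl -fsSL https://ollama.com/install.sh | sh
    ```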

  4. Installation Verification

    Verify installation by running:
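
    ```bash
    ollama
    ```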

    Successful installation displays the "Available Commands" menu.

  5. Download and Prepare the Embedding Model for Ollama
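
    In the terminal, pull the embedding model named above:

    ```bash
    ollama pull mxbai-embed-large
    ```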

  6. Download and Prepare the Model for Answer Generation
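
    Likewise, pull the llama3.2 1b generation model:

    ```bash
    ollama pull llama3.2:1b
    ```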
