RunnablePassthrough
Author: Suhyun Lee
Peer Review:
Proofread: Yun Eun
This is a part of LangChain Open Tutorial
Overview
RunnablePassthrough is a utility that facilitates unmodified data flow through a pipeline. Its invoke() method returns input data in its original form, without alteration.
This functionality allows seamless data transmission between pipeline stages. It frequently works in tandem with RunnableParallel for concurrent task execution, enabling the addition of new key-value pairs to the data stream.
Common use cases for RunnablePassthrough include:
Direct data forwarding without transformation
Pipeline stage bypassing
Pipeline flow validation during debugging
Environment Setup
Set up the environment. You may refer to Environment Setup for more details.
[Note] langchain-opentutorial is a package that provides easy-to-use environment setup, useful functions, and utilities for tutorials. You can check out langchain-opentutorial for more details.
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the code below:
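A sketch of what that cell typically looks like (the environment variable names follow common LangSmith conventions; the key value is a placeholder):

```python
import os

# Uncomment and fill in to enable LangSmith tracing (placeholder key)
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
```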
You can alternatively set API keys such as OPENAI_API_KEY in a .env file and load them.
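For example, using the python-dotenv package (assuming a .env file in the working directory):

```python
from dotenv import load_dotenv

# Loads variables such as OPENAI_API_KEY from the .env file into the environment
load_dotenv(override=True)
```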
[Note] This is not necessary if you've already set the required API keys in previous steps.
Passing Data with RunnablePassthrough and RunnableParallel
RunnablePassthrough is a utility that passes data through unchanged or adds minimal information before forwarding. It commonly integrates with RunnableParallel to map data under new keys.
Standalone Usage
When used independently, RunnablePassthrough() returns the input data unmodified.
Usage with assign
When used as RunnablePassthrough.assign(...), it augments the input data with additional fields before forwarding.
By leveraging RunnablePassthrough, you can maintain data integrity across pipeline stages while selectively adding required information. Both patterns are sketched below.
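A minimal sketch of both patterns:

```python
from langchain_core.runnables import RunnablePassthrough

# Standalone: the input passes through untouched
RunnablePassthrough().invoke({"num": 1})
# -> {"num": 1}

# With assign: the original keys are kept and a new key is added
RunnablePassthrough.assign(mult=lambda x: x["num"] * 3).invoke({"num": 1})
# -> {"num": 1, "mult": 3}
```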
Example of Using RunnableParallel and RunnablePassthrough
While RunnablePassthrough is effective on its own, it becomes more powerful when combined with RunnableParallel.
This section demonstrates how to configure and run parallel tasks using the RunnableParallel class. The following steps provide a beginner-friendly implementation guide.
Initialize RunnableParallel
Create a RunnableParallel instance to manage concurrent task execution.
Configure the passed Task
Define a passed task using RunnablePassthrough. This task preserves the input data without modification.
Set Up the extra Task
Implement an extra task using RunnablePassthrough.assign(). This task computes triple the "num" value and stores it under the key mult.
Implement the modified Task
Create a modified task using a basic function. This function increments the "num" value by 1.
Task Execution
Invoke all tasks using runnable.invoke(). For example, the input {"num": 1} triggers concurrent execution of all defined tasks, as shown in the sketch below.
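A sketch that follows the steps above (the lambda bodies are one possible implementation matching the outputs described in the summary below):

```python
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

runnable = RunnableParallel(
    # Passes the input through unchanged
    passed=RunnablePassthrough(),
    # Keeps the input and adds "mult" with triple the "num" value
    extra=RunnablePassthrough.assign(mult=lambda x: x["num"] * 3),
    # Basic function: increments the "num" value by 1
    modified=lambda x: {"num": x["num"] + 1},
)

runnable.invoke({"num": 1})
# -> {"passed": {"num": 1},
#     "extra": {"num": 1, "mult": 3},
#     "modified": {"num": 2}}
```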
Summary of Results
When provided with the input {"num": 1}, each task produces the following output:
passed: Returns the unmodified input data. Output: {"num": 1}
extra: Augments the input with a "mult" key containing triple the "num" value. Output: {"num": 1, "mult": 3}
modified: Increments the "num" value by 1. Output: {"num": 2}
Search Engine Integration
The following example illustrates an implementation of RunnablePassthrough.
Using RunnablePassthrough in a FAISS-Based RAG Pipeline
This code uses RunnablePassthrough in a FAISS-based RAG pipeline to pass the retrieved context into a chat prompt. It enables seamless integration of OpenAI embeddings for efficient retrieval and response generation.
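A minimal sketch of such a pipeline (the sample text, question, and model name are placeholders, not taken from the original tutorial):

```python
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Build a small FAISS index from in-memory texts (placeholder data)
vectorstore = FAISS.from_texts(
    ["LangChain supports composable pipelines built from runnables."],
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context:\n"
    "{context}\n\nQuestion: {question}"
)
model = ChatOpenAI(model="gpt-4o-mini")  # assumed model name

# RunnablePassthrough forwards the raw question unchanged,
# while the retriever fills in "context" in parallel
retrieval_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

retrieval_chain.invoke("What does LangChain support?")
```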
Using Ollama
Download the application from the Ollama official website
For comprehensive Ollama documentation, visit the GitHub tutorial
This implementation uses the llama3.2 1b model for response generation and mxbai-embed-large for embedding operations.
Ollama Installation Guide on Colab
Google Colab requires the colab-xterm extension for terminal functionality. Follow these steps to install Ollama:
Install and Initialize colab-xterm
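In a notebook cell, install the extension and load it into the Colab runtime:

```python
!pip install colab-xterm   # install the terminal extension
%load_ext colabxterm       # load it into the current runtime
```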
Launch Terminal
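Run the magic command in a notebook cell to open a terminal pane:

```python
%xterm
```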
Install Ollama
Execute the following command in the terminal:
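The official Ollama install script for Linux environments such as Colab is:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```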
Installation Verification
Verify installation by running:
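Running the CLI with no arguments prints its help menu:

```bash
# Prints the CLI usage, including the "Available Commands" section
ollama
```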
Successful installation displays the "Available Commands" menu.
Download and Prepare the Embedding Model for Ollama
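For example, pulling the embedding model referenced above:

```bash
ollama pull mxbai-embed-large
```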
Download and Prepare the Model for Answer Generation
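And the generation model (assuming the 1b variant's tag is llama3.2:1b):

```bash
ollama pull llama3.2:1b
```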