RunnablePassthrough is a utility that passes data through a pipeline unchanged; its invoke() method returns the input exactly as it was received.
This makes it easy to forward data between pipeline stages without modification.
It is frequently combined with RunnableParallel, which runs tasks concurrently and makes it possible to add new key-value pairs to the data as it flows through.
You can alternatively set API keys such as OPENAI_API_KEY in a .env file and load them.
[Note] This is not necessary if you've already set the required API keys in previous steps.
# Load API keys from .env file
from dotenv import load_dotenv
load_dotenv(override=True)
True
Passing Data with RunnablePassthrough and RunnableParallel
RunnablePassthrough is a utility that passes data through unchanged or adds minimal information before forwarding.
It commonly integrates with RunnableParallel to map data under new keys.
Standalone Usage
When used independently, RunnablePassthrough() returns the input data unmodified.
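For example, calling it on its own (a minimal sketch, not part of the original notebook) simply echoes whatever it receives:
from langchain_core.runnables import RunnablePassthrough

# Returns the input dictionary unchanged.
RunnablePassthrough().invoke({"num": 10})
{'num': 10}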
Usage with assign
When implemented with assign as RunnablePassthrough.assign(...), it augments the input data with additional fields before forwarding.
By leveraging RunnablePassthrough, you can maintain data integrity through pipeline stages while selectively adding required information.
Example of Using RunnableParallel and RunnablePassthrough
While RunnablePassthrough is effective independently, it becomes more powerful when combined with RunnableParallel.
This section demonstrates how to configure and run parallel tasks using the RunnableParallel class. The following steps provide a beginner-friendly implementation guide.
Initialize RunnableParallel
Create a RunnableParallel instance to manage concurrent task execution.
Configure passed Task
Define a passed task utilizing RunnablePassthrough
This task preserves input data without modification
Set Up extra Task
Implement an extra task using RunnablePassthrough.assign()
This task computes triple the "num" value and stores it with key "mult"
Implement modified Task
Create a modified task using a basic function
This function increments the "num" value by 1
Task Execution
Invoke all tasks using runnable.invoke()
Example: Input {"num": 1} triggers concurrent execution of all defined tasks
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
runnable = RunnableParallel(
# Sets up a Runnable that returns the input as-is.
passed=RunnablePassthrough(),
# Sets up a Runnable that keeps the input and adds a "mult" key holding the "num" value multiplied by 3.
extra=RunnablePassthrough.assign(mult=lambda x: x["num"] * 3),
# Sets up a Runnable that adds 1 to the "num" value in the input and returns the result.
modified=lambda x: {"num": x["num"] + 1},
)
# Execute the Runnable with {"num": 1} as input.
result = runnable.invoke({"num": 1})
# Print the result.
print(result)
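With the input {"num": 1}, this cell should print the combined result of all three tasks, matching the summary below:
{'passed': {'num': 1}, 'extra': {'num': 1, 'mult': 3}, 'modified': {'num': 2}}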
r = RunnablePassthrough.assign(mult=lambda x: x["num"] * 3)
r.invoke({"num": 1})
{'num': 1, 'mult': 3}
Summary of Results
When provided with input {"num": 1}, each task produces the following output:
passed: Returns unmodified input data
Output: {"num": 1}
extra: Augments input with "mult" key containing triple the "num" value
Output: {"num": 1, "mult": 3}
modified: Increments the "num" value by 1
Output: {"num": 2}
Search Engine Integration
The following example shows RunnablePassthrough in a retrieval chain, where it forwards the user's question to the prompt unchanged while the retriever supplies the context.
Using GPT
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
# Create a FAISS vector store from text data.
vectorstore = FAISS.from_texts(
[
"Cats are geniuses at claiming boxes as their own.",
"Dogs have successfully trained humans to take them for walks.",
"Cats aren't fond of water, but the water in a human's cup is an exception.",
"Dogs follow cats around, eager to befriend them.",
"Cats consider laser pointers their arch-nemesis.",
],
embedding=OpenAIEmbeddings(),
)
# Use the vector store as a retriever.
retriever = vectorstore.as_retriever()
# Define a template.
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
# Create a chat prompt from the template.
prompt = ChatPromptTemplate.from_template(template)
# Initialize the ChatOpenAI model.
model = ChatOpenAI(model_name="gpt-4o-mini")
# Function to format retrieved documents.
def format_docs(docs):
return "\n".join([doc.page_content for doc in docs])
# Construct the retrieval chain.
retrieval_chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
| model
| StrOutputParser()
)
# Query retrieval chain
retrieval_chain.invoke("What kind of objects do cats like?")
For comprehensive Ollama documentation, visit the GitHub tutorial.
This implementation uses the llama3.2:1b model for response generation and mxbai-embed-large for embeddings.
Ollama Installation Guide on Colab
Google Colab requires the colab-xterm extension for terminal functionality. Follow these steps to install Ollama:
Install and Initialize colab-xterm
!pip install colab-xterm
%load_ext colabxterm
Launch Terminal
%xterm
Install Ollama
Execute the following command in the terminal:
curl -fsSL https://ollama.com/install.sh | sh
Installation Verification
Verify installation by running:
ollama
Successful installation displays the "Available Commands" menu.
Download and Prepare the Embedding Model for Ollama
!ollama pull mxbai-embed-large
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_ollama import OllamaEmbeddings
# Configure embeddings
ollama_embeddings = OllamaEmbeddings(model="mxbai-embed-large")
# Initialize FAISS vector store with text data
vectorstore = FAISS.from_texts(
[
"Cats are geniuses at claiming boxes as their own.",
"Dogs have successfully trained humans to take them for walks.",
"Cats aren't fond of water, but the water in a human's cup is an exception.",
"Dogs follow cats around, eager to befriend them.",
"Cats consider laser pointers their arch-nemesis.",
],
    embedding=ollama_embeddings,  # Pass the embeddings instance created above (it is not callable).
)
# Convert vector store to retriever
retriever = vectorstore.as_retriever()
# Define prompt template
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
# Initialize chat prompt from template
prompt = ChatPromptTemplate.from_template(template)
Download and Prepare the Model for Answer Generation
!ollama pull llama3.2:1b
from langchain_ollama import ChatOllama
# Initialize Ollama chat model
ollama_model = ChatOllama(model="llama3.2:1b")
# Format retrieved documents
def format_docs(docs):
return "\n".join([doc.page_content for doc in docs])
# Build retrieval chain
retrieval_chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
| ollama_model # Use Ollama model for inference
| StrOutputParser()
)
# Query retrieval chain
retrieval_chain.invoke("What kind of objects do cats like?")
'Based on this context, it seems that cats tend to enjoy and claim boxes as their own.'
# Query retrieval chain
retrieval_chain.invoke("What do dogs like?")
'Based on the context, it seems that dogs enjoy being around cats and having them follow them. Additionally, dogs have successfully trained humans to take them for walks.'