This tutorial covers the implementation and usage of with_listeners() in Runnable.
with_listeners() binds lifecycle listeners to a Runnable and returns a new Runnable. Listeners are functions that are called when specific events occur during execution: on_start, on_end, and on_error. This lets you hook into the data flow for tracking, analysis, and debugging.
This function is useful whenever you need to observe each stage of a chain's execution. For example, attaching start and end listeners to each step of a two-step chain (and to the chain itself) produces output like the following:
Chain Start: {'input': 'Hello, World!'}
Start: {'input': 'Hello, World!'}
End: {'output': 'Step 1 completed with message Hello, World!'}
Start: {'input': 'Step 1 completed with message Hello, World!'}
End: {'output': 'Step 2 completed with message Step 1 completed with message Hello, World!'}
Chain End: {'output': 'Step 2 completed with message Step 1 completed with message Hello, World!'}
'Step 2 completed with message Step 1 completed with message Hello, World!'
with_alisteners
Bind asynchronous lifecycle listeners to a Runnable, returning a new Runnable.
on_start: Asynchronously called before the Runnable starts running.
on_end: Asynchronously called after the Runnable finishes running.
on_error: Asynchronously called if the Runnable throws an error.
The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.
runnable2: 25
runnable3: 25
runnable2: 28
runnable3: 28
Runnable[2s]: starts at 28
Runnable[3s]: starts at 28
Runnable[2s]: ends at 30
runnable2: 30
Runnable[3s]: ends at 31
runnable3: 31
runnable2: 32
runnable3: 33
RootListenersTracer
You can bind RootListenersTracer directly to a Runnable through RunnableBinding to register event listeners; this mirrors what with_listeners() does internally.
RootListenersTracer calls the listeners on run start, end, and error.
from langchain_core.tracers.root_listeners import RootListenersTracer
from langchain_core.runnables.base import RunnableBinding
from langchain_openai import ChatOpenAI


# Define listener functions
def fnStart(runObj):
    print(f"Start: {runObj.inputs}")


def fnEnd(runObj):
    print(f"End: {runObj.outputs}")


def fnError(runObj):
    print(f"Error: {runObj.error}")


# LLM and chain setup
model = ChatOpenAI(model="gpt-4o-mini", temperature=0.0)

model_with_listeners = RunnableBinding(
    bound=model,
    config_factories=[
        lambda config: {
            "callbacks": [
                RootListenersTracer(
                    config=config,
                    on_start=fnStart,
                    on_end=fnEnd,
                    on_error=fnError,
                )
            ],
        }
    ],
)
model_with_listeners.invoke("Tell me the founding year of Google")
You can also handle events by calling on_llm_start() and on_llm_end() of RootListenersTracer directly.
from langchain_core.tracers.schemas import Run
from langchain_core.messages import HumanMessage
import uuid
from datetime import datetime, timezone


# User-defined listener functions
def onStart(run: Run):
    print(
        f"[START] Run ID: {run.id}, Start time: {run.start_time}\nInput: {run.inputs}"
    )


def onEnd(run: Run):
    print(f"[END] Run ID: {run.id}, End time: {run.end_time}\nOutput: {run.outputs}")


def onError(run: Run):
    print(f"[ERROR] Run ID: {run.id}, Error message: {run.error}")


# Create RootListenersTracer
tracer = RootListenersTracer(
    config={}, on_start=onStart, on_end=onEnd, on_error=onError
)

# Set up the LLM
llm = ChatOpenAI()

# Input text
input_text = "What is the founding year of Google?"

try:
    # Generate a run id and record the start time
    run_id = uuid.uuid4()
    start_time = datetime.now(timezone.utc)

    # Create a Run object of our own for bookkeeping; note that the tracer
    # also creates and tracks its own Run internally
    run = Run(
        id=run_id,
        name="llm_run",  # a required field; this name is illustrative
        start_time=start_time,
        serialized={},
        inputs={"input": input_text},
        run_type="llm",
    )

    # Call the tracer at the start of execution
    tracer.on_llm_start(serialized={}, prompts=[input_text], run_id=run_id)

    # Execute the actual Runnable (generate() expects a list of message lists)
    result = llm.generate([[HumanMessage(content=input_text)]])

    # Update our Run object
    run.end_time = datetime.now(timezone.utc)
    run.outputs = {"result": result}

    # Call the tracer at the end of execution
    tracer.on_llm_end(response=result, run_id=run_id)
except Exception as e:
    run.error = str(e)
    run.end_time = datetime.now(timezone.utc)
    tracer.on_llm_error(error=e, run_id=run_id)
    print(f"Error occurred: {str(e)}")
[START] Run ID: a76a54b6-8173-4173-b063-ebe107e52dd3, Start time: 2025-01-12 05:32:32.311749+00:00
Input: {'prompts': ['What is the founding year of Google?']}
[END] Run ID: a76a54b6-8173-4173-b063-ebe107e52dd3, End time: 2025-01-12 05:32:32.898851+00:00
Output: {'generations': [[{'text': 'Google was founded on September 4, 1998.', 'generation_info': {'finish_reason': 'stop', 'logprobs': None}, 'type': 'ChatGeneration', 'message': {'lc': 1, 'type': 'constructor', 'id': ['langchain', 'schema', 'messages', 'AIMessage'], 'kwargs': {'content': 'Google was founded on September 4, 1998.', 'additional_kwargs': {'refusal': None}, 'response_metadata': {'token_usage': {'completion_tokens': 13, 'prompt_tokens': 15, 'total_tokens': 28, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, 'type': 'ai', 'id': 'run-d0c3617b-05c1-4e34-8fa5-eba2ed0f2748-0', 'usage_metadata': {'input_tokens': 15, 'output_tokens': 13, 'total_tokens': 28, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}, 'tool_calls': [], 'invalid_tool_calls': []}}}]], 'llm_output': {'token_usage': {'completion_tokens': 13, 'prompt_tokens': 15, 'total_tokens': 28, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-3.5-turbo'}, 'run': [{'run_id': UUID('d0c3617b-05c1-4e34-8fa5-eba2ed0f2748')}], 'type': 'LLMResult'}