Configure-Runtime-Chain-Components


Overview

In this tutorial, we will explore how to dynamically configure various options when calling a chain.

There are two ways to implement dynamic configuration:

  • First, the configurable_fields method allows you to configure specific fields of a Runnable object.

    • Dynamically modify specific field values at runtime

    • Example: Adjust individual LLM parameters such as temperature or model_name

  • Second, the configurable_alternatives method lets you specify alternatives for a particular Runnable object that can be swapped in at runtime.

    • Replace entire components with alternatives at runtime

    • Example: Switch between different LLM models or prompt templates

[Note] The term Configurable fields refers to settings or parameters within a system that can be adjusted or modified by the user or administrator at runtime.

  • Applying configuration

    • with_config method: A unified interface for applying all configuration settings

    • Ability to apply single or multiple settings simultaneously

    • Used consistently across special components like HubRunnable

In the following sections, we'll cover detailed usage of each method and practical applications. We'll explore real-world examples, including prompt management with HubRunnable, setting various prompt alternatives, switching between LLM models, and more.



Environment Setup

Setting up your environment is the first step. See the Environment Setup guide for more details.

[Note]

  • The langchain-opentutorial is a package that provides easy-to-use environment setup guidance, useful functions, and utilities for these tutorials.

  • Check out the langchain-opentutorial package for more details.

Alternatively, you can set and load OPENAI_API_KEY from a .env file.

[Note] This is only necessary if you haven't already set OPENAI_API_KEY in previous steps.

Configurable Fields

Configurable fields provide a way to dynamically modify specific parameters of a Runnable object at runtime. This feature is essential when you need to fine-tune the behavior of your chains or models without changing their core implementation.

  • They allow you to specify which parameters can be modified during execution

  • Each configurable field can include a description that explains its purpose

  • You can configure multiple fields simultaneously

  • The original chain structure remains unchanged, even when you modify configurations for different runs.

The configurable_fields method is used to specify which parameters should be treated as configurable, making your LangChain applications more flexible and adaptable to different use cases.

Dynamic Property Configuration

Let's illustrate this with ChatOpenAI. When using ChatOpenAI, we can set various properties.

The model_name property specifies which GPT model to use. For example, you can select different models by setting it to gpt-4o, gpt-4o-mini, and so on.

To dynamically specify the model instead of using a fixed model_name, you can leverage the ConfigurableField and assign it to a dynamically configurable property value as follows:

When calling model.invoke(), you can dynamically specify parameters using the format config={"configurable": {"key": "value"}}.

Now let's try using the gpt-4o-mini model. Check the output to see the changed model.

Alternatively, you can set configurable parameters using the with_config() method of the model object to achieve the same result.

Or you can also use this function as part of a chain.

Configurable Alternatives with HubRunnables

Using HubRunnable simplifies dynamic prompt selection, allowing easy switching between prompts registered in the Hub.

Configuring LangChain Hub Settings

HubRunnable provides an option to configure which prompt template to pull from the LangChain Hub. This enables you to dynamically select different prompts based on the hub path specification.

If you call the prompt.invoke() method without specifying a with_config, the Runnable automatically pulls and uses the prompt initially registered at the "rlm/rag-prompt" hub path.

Switching between Runnables

Configurable alternatives provide a way to select between different Runnable objects that can be set at runtime.

For example, the configurable language model of ChatAnthropic provides a high degree of flexibility that can be applied to various tasks and contexts.

To enable dynamic switching, we can define the model's parameters as ConfigurableField objects.

  • model: Specifies the base language model to be used.

  • temperature: Controls the randomness of the model's sampling (with values between 0 and 1). Lower values result in more deterministic and repetitive outputs, while higher values lead to more diverse and creative responses.

Setting Alternatives for LLM Objects

Let's explore how to implement configurable alternatives using a Large Language Model (LLM).

[Note]

  • To use the ChatAnthropic model, you need to obtain an API key from the Anthropic console: https://console.anthropic.com/dashboard.

  • You can uncomment and directly set the API key (as shown below) or store it in your .env file.

Set the ANTHROPIC_API_KEY environment variable in your code.
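For example (the key string is a placeholder, not a real key):

```python
import os

# Either set the key directly in code (avoid committing real keys) ...
# os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."

# ... or keep whatever was already loaded from your .env / shell environment:
os.environ.setdefault("ANTHROPIC_API_KEY", "sk-ant-...")  # placeholder
```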

Here's how you can invoke a chain using the default ChatAnthropic model using chain.invoke().

You can specify a different model for the llm by using chain.with_config(configurable={"llm": "model"}).

Now, change the chain's configuration to use gpt4o as the language model.

This time, change the chain's configuration to use anthropic.

Setting Prompt Alternatives

Prompts can be configured in a pattern similar to the LLM alternatives we set up previously.

If no configuration changes are made, the default prompt will be used.

To use a different prompt, use with_config.

Now let's use the kor prompt to request a translation, passing the text to translate via the input variable.

Configuring Prompts and LLMs

You can configure multiple aspects using prompts and LLMs simultaneously.

Here's an example that demonstrates how to use both prompts and LLMs to accomplish this:

Saving Configurations

You can easily save configured chains as reusable objects. For example, after configuring a chain for a specific task, you can save it for later use in similar tasks.
