EnumOutputParser
Author: ranian963
Peer Review: JaeHo Kim
Proofread: Two-Jay

This is part of the LangChain Open Tutorial.
Overview
In this tutorial, we introduce how to use EnumOutputParser to extract valid Enum values from the output of an LLM.

EnumOutputParser is a tool that parses the output of a language model into one of the predefined enumeration (Enum) values, offering the following features:

- Enumeration Parsing: Converts the string output into a predefined Enum value.
- Type Safety: Ensures that the parsed result is always one of the defined Enum values.
- Flexibility: Automatically handles spaces and line breaks.
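To make these features concrete, the core behavior can be sketched in plain Python. Note this is a hypothetical stand-in, not LangChain's actual implementation: it trims surrounding whitespace and line breaks, then matches the cleaned string against the Enum's values.

```python
from enum import Enum


class Colors(Enum):
    RED = "Red"
    GREEN = "Green"
    BLUE = "Blue"


def parse_enum(text: str, enum_cls: type) -> Enum:
    """Hypothetical stand-in for EnumOutputParser.parse:
    trims whitespace/newlines, then maps the string to an Enum member."""
    cleaned = text.strip()
    try:
        return enum_cls(cleaned)  # Enum lookup by member *value*
    except ValueError:
        valid = ", ".join(m.value for m in enum_cls)
        raise ValueError(f"{cleaned!r} is not one of: {valid}")


print(parse_enum("  Blue\n", Colors))  # Colors.BLUE
```

Because the lookup goes through the Enum's values, the parsed result is guaranteed to be one of the defined members, which is the type-safety property described above.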
Environment Setup
Set up the environment. You may refer to Environment Setup for more details.

[Note]
langchain-opentutorial is a package that provides a set of easy-to-use environment setup methods, useful functions, and utilities for tutorials. You can check out langchain-opentutorial for more details.
%%capture --no-stderr
%pip install langchain-opentutorial
# Install required libraries
from langchain_opentutorial import package

package.install(
    [
        "langsmith",
        "langchain",
        "langchain_openai",
    ],
    verbose=False,
    upgrade=False,
)
# Set environment variables
from langchain_opentutorial import set_env

set_env(
    {
        "OPENAI_API_KEY": "",
        "LANGCHAIN_API_KEY": "",
        "LANGCHAIN_TRACING_V2": "true",
        "LANGCHAIN_ENDPOINT": "https://api.smith.langchain.com",
        "LANGCHAIN_PROJECT": "07-EnumOutputParser",
    }
)
Alternatively, you can set OPENAI_API_KEY in a .env file and load it.

[Note] This is not necessary if you've already set OPENAI_API_KEY in previous steps.
from dotenv import load_dotenv
load_dotenv(override=True)
Introduction to EnumOutputParser

EnumOutputParser is a tool that strictly parses an LLM's output into a defined enumeration (Enum), ensuring that the model output is always one of the enumerated values.

Use cases

- When you want exactly one valid choice from a set of possibilities.
- When you want to avoid typos and variations by using a clear Enum value.

In the following example, we define a Colors Enum and make the LLM return one of red/green/blue by parsing the output.
Example: Colors Enum Parser

The code below shows how to define the Colors(Enum) class, wrap it with EnumOutputParser, and integrate it into a prompt chain. Once the chain is executed, the LLM response is strictly parsed into one of the values in Colors.
# Import EnumOutputParser
from langchain.output_parsers.enum import EnumOutputParser
Define the Colors enumeration using the Enum class from Python's built-in enum module.
from enum import Enum


class Colors(Enum):
    RED = "Red"
    GREEN = "Green"
    BLUE = "Blue"
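As a quick sanity check in plain Python (independent of LangChain), an Enum member can be looked up by its value, which is exactly the mechanism the parser relies on. The Colors class is redefined here so the snippet runs standalone.

```python
from enum import Enum


class Colors(Enum):
    RED = "Red"
    GREEN = "Green"
    BLUE = "Blue"


# Lookup by value returns the corresponding member.
print(Colors("Red"))              # Colors.RED
# Members can be iterated, e.g. to list the allowed values.
print([c.value for c in Colors])  # ['Red', 'Green', 'Blue']
```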
Now we create an EnumOutputParser object for parsing strings into the Colors enumeration.
# Instantiate EnumOutputParser
parser = EnumOutputParser(enum=Colors)
# You can view the format instructions that the parser expects.
print(parser.get_format_instructions())
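The exact wording of the printed instructions comes from LangChain, but conceptually they are derived from the Enum's values. As a hedged sketch (the real parser's phrasing may differ), such an instruction string could be built like this:

```python
from enum import Enum


class Colors(Enum):
    RED = "Red"
    GREEN = "Green"
    BLUE = "Blue"


# Hypothetical reconstruction of format instructions from the Enum values;
# LangChain's actual wording may differ.
options = ", ".join(c.value for c in Colors)
instructions = f"Select one of the following options: {options}"
print(instructions)
```

Appending instructions like these to the prompt is what nudges the LLM to answer with one of the allowed values in the first place.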
Below is an example that constructs a simple chain using PromptTemplate and ChatOpenAI. When the LLM answers which color the object "sky" is, the parser converts that string into a Colors value.
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
# Prompt template: the parser's format instructions are added at the end.
prompt = PromptTemplate.from_template(
    """Which color is this object?
Object: {object}
Instructions: {instructions}"""
).partial(instructions=parser.get_format_instructions())
# Entire chain: (prompt) -> (LLM) -> (Enum Parser)
chain = prompt | ChatOpenAI(temperature=0) | parser
Now let's run the chain.
response = chain.invoke({"object": "sky"})
print("Parsed Enum:", response)
print("Raw Enum Value:", response.value)