This tutorial covers how to create and use prompt templates in LangChain.
Prompt templates are essential for generating dynamic and flexible prompts that cater to various use cases, such as conversation history, structured outputs, and specialized queries.
In this tutorial, we will explore methods for creating PromptTemplate objects, applying partial variables, managing templates through YAML files, and leveraging advanced tools like ChatPromptTemplate and MessagesPlaceholder for enhanced functionality.
from langchain_openai import ChatOpenAI

# Load the model
llm = ChatOpenAI(model_name="gpt-4o")
Creating a PromptTemplate Object
There are two ways to create a PromptTemplate object.
1. Using the from_template() method.
2. Creating a PromptTemplate object and generating a prompt simultaneously.
Method 1. Using the from_template() method
Define the template with variables written as {variable}.
from langchain_core.prompts import PromptTemplate

# Define template. In this case, {country} is a variable
template = "What is the capital of {country}?"

# Create a `PromptTemplate` object using the `from_template` method
prompt = PromptTemplate.from_template(template)
prompt
PromptTemplate(input_variables=['country'], input_types={}, partial_variables={}, template='What is the capital of {country}?')
You can complete the prompt by assigning a value to the country variable.
# Create prompt. Assign a value to the variable using the `format` method
prompt = prompt.format(country="United States of America")
prompt
'What is the capital of United States of America?'
# Define template
template = "What is the capital of {country}?"

# Create a `PromptTemplate` object using the `from_template` method
prompt = PromptTemplate.from_template(template)

# Create chain
chain = prompt | llm
# Replace the country variable with a value of your choice
chain.invoke("United States of America").content
'The capital of the United States of America is Washington, D.C.'
Method 2. Creating a PromptTemplate object and a prompt all at once.
Explicitly specify input_variables for additional validation.
Otherwise, a mismatch between the declared input_variables and the variables in the template string can raise an exception at instantiation.
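As a quick sketch of that validation behavior (not part of the original tutorial; it assumes the optional validate_template flag and uses a deliberately mismatched variable name):

from langchain_core.prompts import PromptTemplate

# Sketch: `validate_template=True` enables template validation;
# "city" deliberately does not match the {country} variable below
try:
    PromptTemplate(
        template="What is the capital of {country}?",
        input_variables=["city"],
        validate_template=True,
    )
except Exception as e:
    print(f"Validation error: {e}")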
# Define template
template = "What is the capital of {country}?"

# Create a prompt template with the `PromptTemplate` object
prompt = PromptTemplate(
    template=template,
    input_variables=["country"],
)
prompt
PromptTemplate(input_variables=['country'], input_types={}, partial_variables={}, template='What is the capital of {country}?')
# Create prompt
prompt.format(country="United States of America")
'What is the capital of United States of America?'
# Define template
template = "What are the capitals of {country1} and {country2}, respectively?"

# Create a prompt template with the `PromptTemplate` object
prompt = PromptTemplate(
    template=template,
    input_variables=["country1"],
    partial_variables={
        "country2": "United States of America"  # Pass `partial_variables` in dictionary form
    },
)
prompt
PromptTemplate(input_variables=['country1'], input_types={}, partial_variables={'country2': 'United States of America'}, template='What are the capitals of {country1} and {country2}, respectively?')
prompt.format(country1="South Korea")
'What are the capitals of South Korea and United States of America, respectively?'
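You can also fill in a variable after the template has been created using the partial() method, which returns a new template with the given value pre-applied. The original cell is not shown here, but this is presumably how prompt_partial below was produced:

# Fill `country2` with a new value using the `partial` method
prompt_partial = prompt.partial(country2="India")
prompt_partial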
PromptTemplate(input_variables=['country1'], input_types={}, partial_variables={'country2': 'India'}, template='What are the capitals of {country1} and {country2}, respectively?')
prompt_partial.format(country1="South Korea")
'What are the capitals of South Korea and India, respectively?'
chain = prompt_partial | llm
chain.invoke("United States of America").content
'The capital of the United States of America is Washington, D.C., and the capital of India is New Delhi.'
chain.invoke({"country1": "United States of America", "country2": "India"}).content
'The capital of the United States of America is Washington, D.C., and the capital of India is New Delhi.'
Using partial_variables
Using partial_variables, you can partially apply functions as well as fixed values. This is particularly useful when a variable is shared across many prompts.
Common examples are dates and times.
Suppose you want to include the current date in your prompt. Hardcoding the date, or passing it in alongside the other input variables on every call, is impractical. In this case, it is much more convenient to partially apply a function that returns the current date.
from datetime import datetime

# Print the current date
datetime.now().strftime("%B %d")
'January 14'
# Define a function that returns the current date
def get_today():
    return datetime.now().strftime("%B %d")
prompt = PromptTemplate(
    template="Today's date is {today}. Please list {n} celebrities whose birthday is today. Please specify their date of birth.",
    input_variables=["n"],
    partial_variables={
        # Pass `partial_variables` in dictionary form; a callable is
        # invoked at format time, so the date is always current
        "today": get_today
    },
)
# Create prompt
prompt.format(n=3)
"Today's date is January 14. Please list 3 celebrities whose birthday is today. Please specify their date of birth."
# Create chain
chain = prompt | llm
# Invoke chain and check the result
print(chain.invoke(3).content)
Here are three celebrities born on January 14:
1. **Dave Grohl** - Born on January 14, 1969.
2. **LL Cool J** - Born on January 14, 1968.
3. **Jason Bateman** - Born on January 14, 1969.
# Invoke chain and check the result, overriding the partial `today` value
print(chain.invoke({"today": "Jan 02", "n": 3}).content)
Here are three celebrities born on January 2:
1. **Cuba Gooding Jr.** - Born on January 2, 1968.
2. **Taye Diggs** - Born on January 2, 1971.
3. **Kate Bosworth** - Born on January 2, 1983.
Load prompt template from YAML file
You can manage prompt templates in separate YAML files and load them using load_prompt.
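The YAML files themselves are not shown in this tutorial; as a reference, a minimal prompts/fruit_color.yaml consistent with the template loaded below might look like this (the _type: prompt key tells load_prompt which template class to build):

_type: prompt
template: "What is the color of {fruit}?"
input_variables:
  - "fruit"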
from langchain_core.prompts import load_prompt

prompt = load_prompt("prompts/fruit_color.yaml", encoding="utf-8")
prompt
PromptTemplate(input_variables=['fruit'], input_types={}, partial_variables={}, template='What is the color of {fruit}?')
prompt.format(fruit="an apple")
'What is the color of an apple?'
prompt2 = load_prompt("prompts/capital.yaml")
print(prompt2.format(country="United States of America"))
Please provide information about the capital city of United States of America.
Summarize the characteristics of the capital in the following format, within 300 words.
----
[Format]
1. Area
2. Population
3. Historical Sites
4. Regional Products
#Answer:
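Similarly, a prompts/capital.yaml that would produce the output above might use a YAML block scalar for the multi-line template (a reconstruction from the formatted output, not the original file):

_type: prompt
template: |
  Please provide information about the capital city of {country}.
  Summarize the characteristics of the capital in the following format, within 300 words.
  ----
  [Format]
  1. Area
  2. Population
  3. Historical Sites
  4. Regional Products

  #Answer:
input_variables:
  - "country"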
ChatPromptTemplate
ChatPromptTemplate can be used to build a prompt that includes a conversation history.
Messages are structured as (role, message) tuples and passed as a list.
role
"system": a system setup message, typically used for global settings-related prompts.
"human": a user input message.
"ai": an AI response message.
from langchain_core.prompts import ChatPromptTemplate

chat_prompt = ChatPromptTemplate.from_template("What is the capital of {country}?")
chat_prompt
ChatPromptTemplate(input_variables=['country'], input_types={}, partial_variables={}, messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['country'], input_types={}, partial_variables={}, template='What is the capital of {country}?'), additional_kwargs={})])
chat_prompt.format(country="United States of America")
'Human: What is the capital of United States of America?'
from langchain_core.prompts import ChatPromptTemplate

chat_template = ChatPromptTemplate.from_messages(
    [
        # role, message
        ("system", "You are a friendly AI assistant. Your name is {name}."),
        ("human", "Nice to meet you!"),
        ("ai", "Hello! How can I assist you?"),
        ("human", "{user_input}"),
    ]
)

# Create chat messages
messages = chat_template.format_messages(name="Teddy", user_input="What is your name?")
messages
[SystemMessage(content='You are a friendly AI assistant. Your name is Teddy.', additional_kwargs={}, response_metadata={}),
HumanMessage(content='Nice to meet you!', additional_kwargs={}, response_metadata={}),
AIMessage(content='Hello! How can I assist you?', additional_kwargs={}, response_metadata={}),
HumanMessage(content='What is your name?', additional_kwargs={}, response_metadata={})]
You can directly invoke the LLM using the messages created above.
llm.invoke(messages).content
'My name is Teddy. How can I help you today?'
You can also create a chain to execute.
chain = chat_template | llm
chain.invoke({"name": "Teddy", "user_input": "What is your name?"}).content
'My name is Teddy. How can I help you today?'
MessagesPlaceholder
LangChain also provides MessagesPlaceholder, which gives you complete control over how messages are rendered during formatting.
This can be useful if you’re unsure which roles to use in a message prompt template or if you want to insert a list of messages during formatting.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

chat_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a summarization specialist AI assistant. Your mission is to summarize conversations using key points.",
        ),
        MessagesPlaceholder(variable_name="conversation"),
        ("human", "Summarize the conversation so far in {word_count} words."),
    ]
)
chat_prompt
ChatPromptTemplate(input_variables=['conversation', 'word_count'], input_types={'conversation': list[typing.Annotated[typing.Union[typing.Annotated[langchain_core.messages.ai.AIMessage, Tag(tag='ai')], typing.Annotated[langchain_core.messages.human.HumanMessage, Tag(tag='human')], typing.Annotated[langchain_core.messages.chat.ChatMessage, Tag(tag='chat')], typing.Annotated[langchain_core.messages.system.SystemMessage, Tag(tag='system')], typing.Annotated[langchain_core.messages.function.FunctionMessage, Tag(tag='function')], typing.Annotated[langchain_core.messages.tool.ToolMessage, Tag(tag='tool')], typing.Annotated[langchain_core.messages.ai.AIMessageChunk, Tag(tag='AIMessageChunk')], typing.Annotated[langchain_core.messages.human.HumanMessageChunk, Tag(tag='HumanMessageChunk')], typing.Annotated[langchain_core.messages.chat.ChatMessageChunk, Tag(tag='ChatMessageChunk')], typing.Annotated[langchain_core.messages.system.SystemMessageChunk, Tag(tag='SystemMessageChunk')], typing.Annotated[langchain_core.messages.function.FunctionMessageChunk, Tag(tag='FunctionMessageChunk')], typing.Annotated[langchain_core.messages.tool.ToolMessageChunk, Tag(tag='ToolMessageChunk')]], FieldInfo(annotation=NoneType, required=True, discriminator=Discriminator(discriminator=, custom_error_type=None, custom_error_message=None, custom_error_context=None))]]}, partial_variables={}, messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], input_types={}, partial_variables={}, template='You are a summarization specialist AI assistant. Your mission is to summarize conversations using key points.'), additional_kwargs={}), MessagesPlaceholder(variable_name='conversation'), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['word_count'], input_types={}, partial_variables={}, template='Summarize the conversation so far in {word_count} words.'), additional_kwargs={})])
You can use MessagesPlaceholder to insert a list of conversation messages.
formatted_chat_prompt = chat_prompt.format(
    word_count=5,
    conversation=[
        ("human", "Hello! I’m Teddy. Nice to meet you."),
        ("ai", "Nice to meet you! I look forward to working with you."),
    ],
)
print(formatted_chat_prompt)
System: You are a summarization specialist AI assistant. Your mission is to summarize conversations using key points.
Human: Hello! I’m Teddy. Nice to meet you.
AI: Nice to meet you! I look forward to working with you.
Human: Summarize the conversation so far in 5 words.
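The chain used below is not defined in the source; presumably it pipes chat_prompt into the model, with the StrOutputParser imported earlier attached so that the chain returns a plain string:

# Create chain (a reconstruction: composes the prompt, the model, and the
# previously imported StrOutputParser so the result is a plain string)
chain = chat_prompt | llm | StrOutputParser()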
# Invoke chain and check the result
chain.invoke(
    {
        "word_count": 5,
        "conversation": [
            ("human", "Hello! I’m Teddy. Nice to meet you."),
            ("ai", "Nice to meet you! I look forward to working with you."),
        ],
    }
)