Meta Prompt Generator based on User Requirements
Author: Kenny Jung
Peer Review:
Proofread: Chaeyoon Kim
This is a part of the LangChain Open Tutorial.
Overview
This tutorial explains how to create a chatbot that helps users generate prompts. The chatbot first collects requirements from users, then generates prompts based on these requirements and modifies them according to user input. This process is divided into two separate states, with the LLM determining when to transition between states.
A graphical representation of this system can be found below.
Key Topics Covered
Gather information: Defining graphs for collecting user requirements
Generate Prompt: Setting up states for prompt generation
Define the state logic: Defining the chatbot's state logic
Create the graph: Creating graphs and storing conversation history
Use the graph: How to use the generated chatbot
Table of Contents
Overview
Environment Setup
Collecting user requirements
Prompt Generation
Define state logic
Create the graph
Run the graph
Environment Setup
Set up the environment. You may refer to Environment Setup for more details.
[Note]
langchain-opentutorial is a package that provides a set of easy-to-use environment setup, useful functions, and utilities for tutorials. You can check out langchain-opentutorial for more details.
You can alternatively set OPENAI_API_KEY in .env file and load it.
[Note] This is not necessary if you've already set OPENAI_API_KEY in previous steps.
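As a minimal sketch of this step (assuming you set the key directly rather than via a `.env` file; `"sk-placeholder"` is a stand-in, not a real key), the assignment can be guarded so it only runs when the key is missing:

```python
import os

# Only set the key if it was not configured in a previous step.
# With python-dotenv you would call load_dotenv() instead of
# assigning the value by hand.
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = "sk-placeholder"

print("OPENAI_API_KEY is set:", bool(os.environ.get("OPENAI_API_KEY")))
```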
Collecting user requirements
First, we define a node that collects user requirements.
In this process, the chatbot can ask the user for specific information, and it keeps asking until all the required details have been gathered.
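A sketch of this node's building blocks, using plain dicts in place of LangChain message objects. The `PromptInstructions` fields (`objective`, `variables`, `constraints`, `requirements`) and the gathering template are illustrative assumptions; in the real graph, `PromptInstructions` would be a Pydantic model bound to the LLM as a tool.

```python
from dataclasses import dataclass, field

# Schema the LLM fills in once it has gathered everything it needs.
@dataclass
class PromptInstructions:
    objective: str
    variables: list = field(default_factory=list)
    constraints: list = field(default_factory=list)
    requirements: list = field(default_factory=list)

GATHER_TEMPLATE = (
    "Your job is to get information from a user about what type of "
    "prompt template they want to create.\n\n"
    "You should get the following information from them:\n"
    "- What the objective of the prompt is\n"
    "- What variables will be passed into the prompt template\n"
    "- Any constraints for what the output should NOT do\n"
    "- Any requirements that the output MUST adhere to\n\n"
    "If you are not able to discern this info, ask them to clarify. "
    "After you are able to discern all the information, call the "
    "relevant tool."
)

def get_messages_info(messages):
    """Prepend the info-gathering system prompt to the conversation."""
    return [{"role": "system", "content": GATHER_TEMPLATE}] + messages

msgs = get_messages_info(
    [{"role": "user", "content": "I need a prompt for summarizing papers."}]
)
print(msgs[0]["role"])  # system
```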
Prompt Generation
Now, we set the state to generate prompts.
To do this, we need a separate system message and a function to filter all messages before the tool call.
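A minimal sketch of that filter, again with plain dicts standing in for LangChain messages. The `PROMPT_SYSTEM` template and the `tool_calls` shape are assumptions for illustration: the message that triggered prompt generation is assumed to carry the gathered requirements in a `tool_calls` entry, and only messages after that call are kept.

```python
PROMPT_SYSTEM = (
    "Based on the following requirements, write a good prompt "
    "template:\n\n{reqs}"
)

def get_prompt_messages(messages):
    """Keep only the requirements and the messages after the tool call."""
    tool_call = None
    other_msgs = []
    for m in messages:
        if m.get("tool_calls"):            # requirements were gathered here
            tool_call = m["tool_calls"][0]["args"]
        elif m.get("role") == "tool":      # drop tool confirmations
            continue
        elif tool_call is not None:        # keep messages after the call
            other_msgs.append(m)
    return [
        {"role": "system", "content": PROMPT_SYSTEM.format(reqs=tool_call)}
    ] + other_msgs

history = [
    {"role": "user", "content": "Summarize papers."},
    {"role": "assistant", "tool_calls": [{"args": {"objective": "summarize"}}]},
    {"role": "tool", "content": "ok"},
    {"role": "user", "content": "Make it short."},
]
out = get_prompt_messages(history)
print(len(out))  # system message + the one user message after the tool call
```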
The definition of the meta prompt we use here is as follows.
Meta Prompt Definition
A meta prompt refers to methods or strategies for optimizing prompt design and creation, used to make AI language models more effective and efficient. It goes beyond simply inputting text: it includes structured and creative approaches that guide model responses and improve the quality of results.
Key Features
Goal-oriented structure: A meta prompt includes a clear definition of the information you want to obtain and a step-by-step design process for getting it.
Adaptive design: Consider the model's response characteristics, limitations, and strengths to modify or iteratively optimize the prompt.
Prompt engineering techniques: Include conditional statements, guidelines, role instructions, etc. to finely adjust the model's response.
Multilayer approach: Don't stop at a single question; incrementally refine the answer through sub-questions.
Reference: OpenAI Meta Prompt Engineering Guide
Define state logic
We describe the logic for determining the state of the chatbot.
If the last message is a
tool call, the chatbot is in the "prompt creator"(prompt) state.If the last message is not a
HumanMessage, the user needs to respond next, so the chatbot is in theENDstate.If the last message is a
HumanMessage, the chatbot is in thepromptstate if there was atool callbefore.Otherwise, the chatbot is in the
infostate.
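The rules above can be sketched as a routing function. This is a stdlib-only stand-in: dicts replace LangChain message classes (a `"user"` role plays the part of `HumanMessage`), and the `END` string stands in for `langgraph.graph.END`.

```python
END = "__end__"  # sentinel used by LangGraph for the end state

def get_state(messages):
    """Decide the chatbot's next state from the conversation history."""
    last = messages[-1]
    if last.get("tool_calls"):          # last message is a tool call
        return "prompt"
    if last.get("role") != "user":      # not a HumanMessage -> wait for user
        return END
    # Last message is a HumanMessage: go to "prompt" if a tool call
    # happened earlier, otherwise keep gathering info.
    if any(m.get("tool_calls") for m in messages[:-1]):
        return "prompt"
    return "info"

print(get_state([{"role": "user", "content": "hi"}]))  # info
```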
Create the graph
Now, we can create a graph. We will use MemorySaver to save the conversation history.
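To show what checkpointing buys us, here is an illustrative stand-in for the LangGraph wiring: a tiny dispatch loop where an in-memory dict keyed by `thread_id` plays the role of MemorySaver, and `echo_node` is a placeholder node. In the real tutorial you would build a `StateGraph`, add the info and prompt nodes, and compile with `checkpointer=MemorySaver()`.

```python
# In-memory stand-in for MemorySaver: one saved history per thread_id.
checkpoints = {}

def run_turn(thread_id, user_text, node_fn):
    """Restore the thread's history, run one node, and persist the result."""
    history = checkpoints.setdefault(thread_id, [])
    history.append({"role": "user", "content": user_text})
    reply = node_fn(history)
    history.append(reply)
    return reply

def echo_node(history):
    # Placeholder node: a real node would invoke the LLM here.
    return {"role": "assistant", "content": f"Got {len(history)} message(s)."}

run_turn("thread-1", "I want a summarization prompt", echo_node)
run_turn("thread-1", "It should be short", echo_node)
print(len(checkpoints["thread-1"]))  # 4 messages persisted across turns
```

Because the history is looked up by `thread_id`, a second call on the same thread continues the earlier conversation, which is exactly the behavior MemorySaver provides for the compiled graph.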
Visualize the graph. For details on graph visualization, see the LangGraph how-to guide:
https://langchain-ai.github.io/langgraph/how-tos/visualization/?h=visuali#setup

Run the graph
Now, we can run the graph to generate prompts.