Meta Prompt Generator based on User Requirements


Overview

This tutorial explains how to create a chatbot that helps users generate prompts. The chatbot first collects requirements from users, then generates prompts based on these requirements and modifies them according to user input. This process is divided into two separate states, with the LLM determining when to transition between states.

A graphical representation of this system can be found below.

Key Topics Covered

  • Gather Information: defining the graph for collecting user requirements

  • Generate Prompt: setting up the state for prompt generation

  • Define the State Logic: defining the chatbot's state logic

  • Create the Graph: creating the graph and storing conversation history

  • Use the Graph: how to use the generated chatbot


Environment Setup

Set up the environment. You may refer to Environment Setup for more details.

[Note]

  • langchain-opentutorial is a package that provides easy-to-use environment setup, along with useful functions and utilities for tutorials.

  • See the langchain-opentutorial package for more details.

Alternatively, you can set OPENAI_API_KEY in a .env file and load it.

[Note] This is not necessary if you've already set OPENAI_API_KEY in previous steps.
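A minimal, stdlib-only sketch of this pattern (the helper name set_env is ours, not part of the tutorial): prompt for the key only if it is not already present in the environment, e.g. because it was loaded from a .env file earlier.

```python
import os
import getpass

def set_env(key: str) -> None:
    """Prompt for a secret only when it is not already set
    (e.g. already exported or loaded from a .env file)."""
    if not os.environ.get(key):
        os.environ[key] = getpass.getpass(f"Enter {key}: ")
```

Calling set_env("OPENAI_API_KEY") is then a no-op when the key is already configured.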

Collecting user requirements

First, we define a node that collects user requirements.

In this state, the chatbot asks the user for specific pieces of information and keeps asking until everything it needs has been provided.
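The "keep asking until complete" loop can be sketched without any framework. The field names below (objective, variables, constraints, requirements) follow a common prompt-requirements pattern and are an illustrative assumption, not necessarily the tutorial's exact schema:

```python
from dataclasses import dataclass, fields
from typing import List, Optional

@dataclass
class PromptRequirements:
    """Information the info state must gather before prompt generation.
    Field names are illustrative assumptions."""
    objective: Optional[str] = None           # what the prompt should accomplish
    variables: Optional[List[str]] = None     # variables templated into the prompt
    constraints: Optional[List[str]] = None   # what the output must not do
    requirements: Optional[List[str]] = None  # what the output must do

def missing_fields(req: PromptRequirements) -> List[str]:
    """Fields still unanswered; the chatbot keeps requesting
    information from the user until this list is empty."""
    return [f.name for f in fields(req) if getattr(req, f.name) is None]
```

Once missing_fields returns an empty list, the conversation can transition to the prompt-generation state.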

Prompt Generation

Now, we set the state to generate prompts.

To do this, we need a separate system message, along with a function that filters out all messages that precede the tool call.
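A simplified sketch of that filter, using plain dict messages rather than LangChain message objects (the function name and the system-message wording are our assumptions): it converts the tool call's arguments, i.e. the gathered requirements, into a system message and keeps only what comes after the tool call.

```python
def filter_prompt_messages(messages):
    """Build the message list for the prompt-generation state:
    turn the tool call's arguments (the gathered requirements) into
    a system message, drop the tool confirmation, and keep only the
    messages that follow the tool call."""
    tool_args = None
    after_tool_call = []
    for m in messages:
        if m.get("tool_calls"):        # assistant message that called the tool
            tool_args = m["tool_calls"][0]["args"]
        elif m.get("role") == "tool":  # drop the tool confirmation itself
            continue
        elif tool_args is not None:    # everything after the tool call survives
            after_tool_call.append(m)
    system = {"role": "system",
              "content": f"Write a prompt that satisfies: {tool_args}"}
    return [system] + after_tool_call
```

The prompt-generation model then sees only the requirements plus the user's follow-up edits, not the whole information-gathering dialogue.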

The definition of the meta prompt we use here is as follows.

Meta Prompt Definition

A meta prompt refers to methods and strategies for optimizing prompt design and creation, used to make AI language models more effective and efficient. It goes beyond simply entering text, encompassing structured and creative approaches to guiding model responses and improving the quality of results.

Key Features

  1. Goal-oriented structure: a meta prompt includes a clear definition of the information you want to obtain, along with a step-by-step design for getting it.

  2. Adaptive design: the prompt is modified and iteratively optimized based on the model's response characteristics, limitations, and strengths.

  3. Practical prompt engineering: conditional statements, guidelines, role instructions, and similar techniques are included to fine-tune the model's response.

  4. Multilayered approach: rather than stopping at a single question, the answer is incrementally refined through sub-questions.

Define state logic

We describe the logic for determining the state of the chatbot.

  • If the last message is a tool call, the chatbot is in the "prompt creator" (prompt) state.

  • If the last message is not a HumanMessage, the user needs to respond next, so the chatbot is in the END state.

  • If the last message is a HumanMessage and a tool call occurred earlier in the conversation, the chatbot is in the prompt state.

  • Otherwise, the chatbot is in the info state.
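The routing rules above can be sketched as a single function. This is a framework-free simplification: messages are plain dicts with a "role" and optional "tool_calls", whereas the tutorial uses LangChain message classes and LangGraph's END constant.

```python
END = "__end__"  # stand-in for LangGraph's terminal node name

def get_state(messages):
    """Decide the next state from the conversation so far."""
    last = messages[-1]
    if last.get("tool_calls"):
        return "prompt"   # the model just called the requirements tool
    if last.get("role") != "human":
        return END        # the model spoke last; wait for the user's reply
    if any(m.get("tool_calls") for m in messages):
        return "prompt"   # requirements were already gathered earlier
    return "info"         # still collecting requirements
```

Note the ordering matters: the tool-call check must come first, since a tool-calling AI message is also "not a HumanMessage".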

Create the graph

Now, we can create a graph. We will use MemorySaver to save the conversation history.
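MemorySaver checkpoints the conversation per thread, so separate conversations identified by different thread_ids do not see each other's history. A toy stand-in illustrating that idea (this is not the actual LangGraph MemorySaver API):

```python
class ToyMemorySaver:
    """Toy illustration of per-thread conversation storage:
    each thread_id maps to its own message history."""
    def __init__(self):
        self._threads = {}

    def get(self, thread_id):
        """Return (creating if needed) the history for this thread."""
        return self._threads.setdefault(thread_id, [])

    def append(self, thread_id, message):
        """Record a new message in this thread's history."""
        self.get(thread_id).append(message)
```

In the real graph, you pass the checkpointer at compile time and select the thread via the config's thread_id when invoking the graph.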

Visualize the graph. For details, see the LangGraph visualization guide: https://langchain-ai.github.io/langgraph/how-tos/visualization/?h=visuali#setup

Run the graph

Now, we can run the graph to generate prompts.
