Reflection in LangGraph


Overview

Reflection in the context of LLM-based agents refers to the process of prompting an LLM to observe its past steps and evaluate the quality of its decisions. This is particularly useful in scenarios like iterative problem-solving, search refinement, and agent evaluation.

In this tutorial, we will explore how to implement a simple Reflection mechanism using LangGraph, specifically to analyze and improve AI-generated essays.

What is Reflection?

Reflection involves prompting an LLM to analyze its own previous responses and adjust its future decisions accordingly.

This can be useful in:

  • Re-planning: Improving the next steps based on past performance.

  • Search Optimization: Refining retrieval strategies.

  • Evaluation: Measuring the effectiveness of a solution and iterating.

We will implement a Reflection-based agent in LangGraph that reviews its own responses and refines them dynamically.

Table of Contents

  • Overview

  • Environment Setup

  • Defining the Reflection-Based Essay Generator

  • Defining the Reflection Graph

  • References

Environment Setup

Set up the environment. You may refer to Environment Setup for more details.

[Note]

  • langchain-opentutorial is a package that provides easy-to-use environment setup, along with useful functions and utilities for these tutorials.

  • You can check out langchain-opentutorial for more details.

You can set API keys in a .env file or set them manually.

[Note] If you’re not using a .env file, no worries! Just enter the keys directly in the cell below, and you’re good to go.
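As a minimal sketch, you can set keys directly from Python. The key name below is one common example; use whichever keys your model provider requires.

```python
import os

# If you are not using a .env file, set your keys directly. Replace the
# placeholder with your actual key; OPENAI_API_KEY is just an example.
os.environ.setdefault("OPENAI_API_KEY", "<your-openai-api-key>")

# Alternatively, with python-dotenv installed, load them from a .env file:
# from dotenv import load_dotenv
# load_dotenv()
```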

Defining the Reflection-Based Essay Generator

1. Generating the Essay

We will create a 5-paragraph essay generator that produces structured responses.
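A sketch of this step is shown below. The `call_llm` helper is a placeholder standing in for a real chat-model call (for example, `ChatOpenAI` via `langchain-openai`), and the prompt wording is illustrative, not prescribed by the tutorial.

```python
# Sketch of the essay-generation step, with the model call stubbed out.
ESSAY_PROMPT = (
    "You are an essay assistant tasked with writing excellent 5-paragraph "
    "essays. Generate the best essay possible for the user's request. "
    "If the user provides critique, respond with a revised version."
)

def call_llm(system: str, messages: list) -> str:
    # Placeholder: a real implementation would invoke a chat model here.
    return "(model-generated essay)"

def generate_essay(topic: str) -> str:
    """Produce a structured 5-paragraph essay for the given topic."""
    return call_llm(ESSAY_PROMPT, [{"role": "user", "content": topic}])
```

In a real run, `call_llm` would return a full essay rather than the stub string.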

2. Reflection

Now, we define the reflection prompt, where an AI evaluates the generated essay and suggests improvements.
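The reflection step can be sketched the same way, again with `call_llm` as a placeholder for a real chat-model call; the critic-prompt wording is an illustrative assumption.

```python
# Sketch of the reflection step: the model acts as a critic that grades
# the essay and suggests concrete improvements.
REFLECTION_PROMPT = (
    "You are a teacher grading an essay submission. Generate critique and "
    "recommendations for the user's essay, including requests for length, "
    "depth, and style."
)

def call_llm(system: str, messages: list) -> str:
    # Placeholder: a real implementation would invoke a chat model here.
    return "(model-generated critique)"

def reflect_on_essay(essay: str) -> str:
    """Return a critique of the essay with suggested improvements."""
    return call_llm(REFLECTION_PROMPT, [{"role": "user", "content": essay}])
```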

3. Iterative Improvement

We can repeat the process, incorporating feedback into new essay versions.
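The iteration itself can be sketched in plain Python. The `generate` and `reflect` functions below are stubs (the real versions would call the model), but the loop structure is the point: each round feeds the critique back into the next draft.

```python
from typing import Optional

def generate(topic: str, feedback: Optional[str] = None) -> str:
    # Stub: a real implementation would call the essay model, passing any
    # critique back in so the next draft is revised.
    draft = f"essay on {topic}"
    return f"{draft} (revised per: {feedback})" if feedback else draft

def reflect(essay: str) -> str:
    # Stub: a real implementation would ask a critic model for feedback.
    return f"critique of: {essay}"

def improve(topic: str, rounds: int = 3) -> str:
    """Run a fixed number of generate -> reflect -> revise rounds."""
    essay = generate(topic)
    for _ in range(rounds):
        essay = generate(topic, reflect(essay))
    return essay
```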

Defining the Reflection Graph

Now that we've implemented each step, we wire everything into a LangGraph workflow.

1. Define State

We define the graph state, a structured way to store and track messages. This ensures that each interaction, including generated essays and reflections, is retained for iterative improvement.
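A sketch of such a state is shown below. LangGraph provides an `add_messages` reducer in `langgraph.graph.message` for this purpose; here a simple list-concatenation reducer stands in for it so the mechanics are visible without the library.

```python
from typing import Annotated, TypedDict

def append_messages(existing: list, new: list) -> list:
    # Stand-in for langgraph.graph.message.add_messages: keep the full
    # history and append any newly produced messages.
    return existing + new

class State(TypedDict):
    # Every node's {"messages": [...]} update is merged via the reducer,
    # so essays and critiques accumulate across iterations.
    messages: Annotated[list, append_messages]
```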

2. Create Nodes

Two key nodes are defined: one for generating essays and another for evaluating them. The generation_node creates a structured essay, while the reflection_node critiques it and suggests improvements.
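Stubbed versions of the two nodes might look like this. Each node receives the current state and returns a partial update (`{"messages": [...]}`), as LangGraph expects. Returning the critique as a human turn is a common pattern in reflection examples, so the generator reads it as user feedback; the message contents here are placeholders for real model calls.

```python
def generation_node(state: dict) -> dict:
    # Stub: a real node would call the essay-writing model on the
    # conversation so far and return its draft (or revision).
    return {"messages": [("ai", "(draft or revised essay)")]}

def reflection_node(state: dict) -> dict:
    # Stub: a real node would call the critic model on the latest essay.
    # The critique is returned as a "human" turn so the generator treats
    # it as feedback to revise against.
    return {"messages": [("human", "(critique of the draft)")]}
```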

3. Set Conditional Loops

The workflow continues refining until a stopping criterion is met. In this case, after a few iterations, the graph stops the refinement process to prevent infinite loops.
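One simple stopping rule is to end once the conversation grows past a fixed number of messages. The function name `should_continue` and the threshold of 6 messages (roughly three generate/reflect rounds) are illustrative choices, not LangGraph requirements.

```python
def should_continue(state: dict) -> str:
    # End after the message history exceeds 6 entries; otherwise route
    # back to the reflection node for another round.
    if len(state["messages"]) > 6:
        return "end"
    return "reflect"
```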

4. Compile the Graph

The LangGraph structure is compiled into a runnable workflow. Once invoked, the process begins with the generate node and loops through the reflect node until the stopping condition is met.

5. Execute the Graph

The graph is now executed in an asynchronous streaming process, where an essay is generated, reflected upon, and improved iteratively.

6. Retrieve Final State

After execution, we retrieve the final state of messages, including the refined essay and reflections.
