Deploy on LangGraph Cloud
Author: JoonHo Kim
This is a part of LangChain Open Tutorial
LangGraph Cloud is a cloud-based framework designed to simplify the development, deployment, and management of graph-based workflows for AI applications. It extends the functionality of LangGraph by providing a scalable, distributed, and user-friendly environment to build complex AI agents, workflows, and pipelines.
With LangGraph Cloud, you can:
Handle large workloads with horizontally-scaling servers, task queues, and built-in persistence.
Debug agent failure modes and quickly iterate in a visual playground-like studio.
Deploy in one click and get integrated tracing & monitoring in LangSmith.
This tutorial will guide you through the key features and components of LangGraph Cloud, including:
Setting up LangGraph Cloud: How to create an account, configure your workspace, and deploy your first workflow.
Deploying workflows: Deploying workflows using LangGraph Cloud.
Using LangGraph Studio: How to connect to LangGraph Studio, the web UI, and test the assistant.
Testing the API: How to send messages to the assistant using the Python SDK and verify the message data using the REST API.
By the end of this tutorial, you will be equipped with the knowledge to effectively utilize LangGraph Cloud for building and managing AI workflows in a scalable and efficient manner.
Now, let's dive in and explore how to boost performance with LangGraph Cloud! 🚀
Set up the environment. You may refer to Environment Setup for more details.
[Note]
langchain-opentutorial is a package that provides easy-to-use environment setup, useful functions, and utilities for tutorials.
You can check out langchain-opentutorial for more details.
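If you want to follow along in a notebook, a minimal local setup sketch is shown below. It assumes you simply export the required keys with os.environ (the langchain-opentutorial package also provides helper utilities for this); the key names match those used later in this tutorial, and the placeholder values are illustrative.

```python
# In a notebook you would typically install the tutorial utilities first:
#   %pip install -U langchain-opentutorial
import os

# Placeholder values -- replace with your own keys.
# ANTHROPIC_API_KEY and TAVILY_API_KEY are required by the ReAct Agent template;
# the LangSmith key is used later by the LangGraph SDK and REST API.
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-api-key"
os.environ["TAVILY_API_KEY"] = "your-tavily-api-key"
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-api-key"
```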
Before we start, ensure you have the following:
To deploy a LangGraph application on LangGraph Cloud, your application's code must be stored in a GitHub repository. You can deploy any LangGraph applications to LangGraph Cloud with ease.
For this guide, we'll use the pre-built Python ReAct Agent template. You can go to GitHub and fork the repository.
This ReAct Agent application requires API keys from Anthropic and Tavily.
After logging in to LangSmith, click the LangGraph Platform menu at the bottom of the left sidebar.
Click the + New Deployment button at the bottom of the page, then follow the steps below to create a new deployment.
Select the GitHub react-agent repository from the drop-down menu.
Write the deployment name in the Name field.
Select the Git branch. main is the default.
The LangGraph config file is langgraph.json by default. You can also select another file (an example configuration is shown after these steps).
Select the deployment type (e.g., Development).
Write the environment variables. In this tutorial, we will use ANTHROPIC_API_KEY and TAVILY_API_KEY.
Click the Submit button in the upper right corner. It takes a few minutes to build the application.
Now you can see the deployment status in the Overview section.
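For reference, the langgraph.json file tells LangGraph Cloud which graphs to serve. The sketch below is illustrative only; the module path and graph name are assumptions, so check the actual file in your fork of the react-agent template.

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/react_agent/graph.py:graph"
  },
  "env": ".env"
}
```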
Once your application is deployed, you can test it in LangGraph Studio on the web.
You can find the LangGraph Studio link and the Endpoint URL at the bottom of the page. Click the LangGraph Studio text to copy the link to the clipboard.
Open the link in your browser to test your LangGraph application in LangGraph Studio.
Now we will send messages to the assistant using the Python SDK. You can also use the JavaScript SDK or the REST API.
Prior to this, we need to install the langgraph-sdk package.
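You can install it with pip install -U langgraph-sdk. Below is a minimal sketch of sending a message with the Python SDK; the endpoint URL, API key, and the "agent" graph name are assumptions, so replace them with your deployment's Endpoint URL, your LangSmith API key, and the graph name from your langgraph.json.

```python
import asyncio
from langgraph_sdk import get_client

# Assumed placeholders -- replace with your own values.
ENDPOINT_URL = "https://your-deployment.us.langgraph.app"
LANGSMITH_API_KEY = "your-langsmith-api-key"

async def main():
    client = get_client(url=ENDPOINT_URL, api_key=LANGSMITH_API_KEY)

    # Create a new thread to hold the conversation state.
    thread = await client.threads.create()

    # Stream a run against the "agent" graph defined in langgraph.json
    # (the graph name may differ in your fork).
    async for chunk in client.runs.stream(
        thread["thread_id"],
        "agent",
        input={"messages": [{"role": "user", "content": "What is LangGraph Cloud?"}]},
        stream_mode="updates",
    ):
        print(chunk.event, chunk.data)

# In a notebook, you can simply `await main()` instead.
asyncio.run(main())
```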
Now, you can verify the message data.
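Continuing inside the same async function as the previous sketch, you might read the accumulated messages back from the thread state. Note that the values["messages"] key assumes the ReAct agent's default state schema.

```python
# Fetch the latest state of the thread and print the accumulated messages.
state = await client.threads.get_state(thread["thread_id"])
for message in state["values"]["messages"]:
    print(message["type"], ":", message["content"])
```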
If you append /docs to the end of the Endpoint URL and enter it in a web browser, you can see the web API documentation. You can refer to this documentation and use API testing tools like Postman or Scalar to run tests.
ex) GET https://{{endpoint_url}}/threads/{{thread_id}}/history
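As a rough illustration, the same history endpoint could be called from Python with requests. The URL, thread ID, and the X-Api-Key authentication header shown here are assumptions, so confirm them against the /docs page of your deployment.

```python
import requests

ENDPOINT_URL = "https://your-deployment.us.langgraph.app"  # assumed placeholder
THREAD_ID = "your-thread-id"                               # assumed placeholder
LANGSMITH_API_KEY = "your-langsmith-api-key"               # assumed placeholder

# GET {endpoint_url}/threads/{thread_id}/history
response = requests.get(
    f"{ENDPOINT_URL}/threads/{THREAD_ID}/history",
    headers={"X-Api-Key": LANGSMITH_API_KEY},
)
response.raise_for_status()

# The response is expected to be a list of thread states.
for state in response.json():
    print(state["values"])
```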