Check Token Usage
Author: Haseom Shin
Proofread: Two-Jay
This is a part of LangChain Open Tutorial
Overview
This tutorial covers how to track and monitor token usage with LangChain and OpenAI API.
Token usage tracking is crucial for managing API costs and optimizing resource utilization.
In this tutorial, we will create a simple example to measure and monitor token consumption during OpenAI API calls using LangChain's CallbackHandler.

Table of Contents
References
Environment Setup
Set up the environment. You may refer to Environment Setup for more details.
[Note]
langchain-opentutorial is a package that provides easy-to-use environment setup, useful functions, and utilities for tutorials. You can check out langchain-opentutorial for more details.
You can alternatively set OPENAI_API_KEY in .env file and load it.
[Note] This is not necessary if you've already set OPENAI_API_KEY in previous steps.
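To illustrate what loading a .env file involves, here is a minimal stdlib sketch. The tutorial itself would typically use python-dotenv's load_dotenv(); this stand-in only shows the idea and skips quoting and escaping rules.

```python
# Minimal stdlib sketch: read simple KEY=VALUE lines from a .env file
# and export them into os.environ. python-dotenv's load_dotenv() does
# the same job more robustly; this version is illustrative only.
import os

def load_env_file(path: str = ".env") -> None:
    """Parse simple KEY=VALUE lines and put them into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # skip blanks, comments, and lines without an '=' separator
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

After calling load_env_file(), OPENAI_API_KEY (if present in the file) is available via os.environ, which is where the OpenAI client looks for it.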
Let's set up ChatOpenAI with the gpt-4o model.
Implementing Check Token Usage
If you want to check token usage, you can use the get_openai_callback function.
Monitoring Token Usage
Token usage monitoring is crucial for managing costs and resources when using the OpenAI API. LangChain provides an easy way to track this through get_openai_callback().
In this section, we'll explore token usage monitoring through three key scenarios:
Single Query Monitoring:
Track token usage for individual API calls
Distinguish between prompt and completion tokens
Calculate costs
Multiple Queries Monitoring:
Track cumulative token usage across multiple API calls
Analyze total costs
Note: Token usage monitoring via get_openai_callback() is currently only supported for the OpenAI API.