Check Token Usage


Overview

This tutorial covers how to track and monitor token usage with LangChain and the OpenAI API.

Token usage tracking is crucial for managing API costs and optimizing resource utilization.

In this tutorial, we will create a simple example to measure and monitor token consumption during OpenAI API calls using LangChain's CallbackHandler.


Table of Contents

  • Overview

  • Environment Setup

  • Implementing Check Token Usage

  • Monitoring Token Usage


Environment Setup

Set up the environment. You may refer to Environment Setup for more details.

[Note]

  • langchain-opentutorial is a package that provides easy-to-use environment setup, along with useful functions and utilities for these tutorials.

  • You can check out langchain-opentutorial for more details.

You can alternatively set OPENAI_API_KEY in a .env file and load it.

[Note] This is not necessary if you've already set OPENAI_API_KEY in previous steps.

Let's set up ChatOpenAI with the gpt-4o model.

Implementing Check Token Usage

If you want to check token usage, you can use the get_openai_callback function.

Monitoring Token Usage

Token usage monitoring is crucial for managing costs and resources when using the OpenAI API. LangChain provides an easy way to track this through get_openai_callback().

In this section, we'll explore token usage monitoring through three key scenarios:

  1. Single Query Monitoring:

    • Track token usage for individual API calls

    • Distinguish between prompt and completion tokens

    • Calculate costs

  2. Multiple Queries Monitoring:

    • Track cumulative token usage across multiple API calls

    • Analyze total costs

Note: Token usage monitoring with get_openai_callback() is currently supported only for the OpenAI API.
