How to Reduce LLM Token Usage with TOON
Large Language Models (LLMs) have changed the way we interact with technology, but that power comes at a cost: token usage. Every request to an LLM consumes tokens, and the more tokens you send, the more you pay. Fortunately, there is a simple way to cut token usage without sacrificing output quality: TOON (Token-Oriented Object Notation), a compact serialization format designed for LLM prompts.
The Problem with JSON
JSON is a verbose data format. It spends many characters on pure structure: curly braces, square brackets, quotation marks, colons, and commas, and every object in an array repeats its keys in full. That overhead is useful for parsers, but an LLM tokenizer counts all of it, so the cost adds up quickly, especially when you're sending large or repetitive datasets.
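To get a feel for how much of a JSON payload is structure rather than data, here is a quick sketch; the records are made up for illustration:

```python
import json

# A small, hypothetical payload: a list of uniform records.
records = [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
    {"id": 3, "name": "Carol", "role": "user"},
]

payload = json.dumps(records)
# Structural characters that carry no data: braces, brackets,
# quotes, colons, and commas.
overhead = sum(payload.count(c) for c in '{}[]":,')
print(f"{len(payload)} chars total, {overhead} structural")
```

Roughly a third of the serialized payload here is punctuation and quoting, before even counting the keys that repeat in every record.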
How TOON Solves the Problem
TOON is a more concise data format designed to spend as few tokens as possible on structure. It combines several techniques to reduce token usage:
- Unquoted Keys and Strings: Keys and most string values are written without quotation marks, saving tokens on every field of every object.
- Tabular Arrays: For an array of uniform objects, TOON declares the field names once in a header (along with the row count), then emits each record as a single comma-separated row. The keys are never repeated per object, which is where most of the savings come from.
- Indentation-Based Nesting: Like YAML, TOON uses indentation instead of curly braces and square brackets to express structure, so closing delimiters disappear entirely.
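As a rough illustration of the tabular idea, here is a hypothetical mini-encoder, a sketch of the concept rather than the official TOON library, that turns a list of uniform dicts into a TOON-style table:

```python
def to_toon_table(key, rows):
    """Emit a TOON-style tabular block for a list of uniform dicts.

    Minimal sketch for illustration: field names go in the header
    once, each record becomes one comma-separated, indented row.
    """
    fields = list(rows[0].keys())
    header = f"{key}[{len(rows)}]{{{','.join(fields)}}}:"
    body = ["  " + ",".join(str(r[f]) for f in fields) for r in rows]
    return "\n".join([header] + body)

users = [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
]
print(to_toon_table("users", users))
# users[2]{id,name,role}:
#   1,Alice,admin
#   2,Bob,user
```

Compare that output with the equivalent JSON: the keys `id`, `name`, and `role` appear once instead of once per record, and there are no braces or quotes at all.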
Conclusion
By using TOON instead of JSON for the structured data in your prompts, you can significantly reduce token usage, lowering costs and leaving more of the context window for the data that matters. The savings are largest for arrays of uniform records, which is exactly the shape of most datasets fed to LLMs. If you're serious about getting the most out of your LLM, give TOON a try.