Kaizen SDK Docs
Use Cases
Token budgeting
Reduce token usage before LLM calls
Use `client.optimize_request` and `client.prompts_encode` to ensure your prompts stay within token limits.
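The pattern is: encode the prompt to count its tokens, and only call the optimizer when the count exceeds your budget. Below is a minimal sketch of that flow. The real Kaizen client's method signatures are not shown on this page, so a stand-in client with assumed behavior is used here to keep the example self-contained; substitute the actual `Kaizen` client in practice.

```python
class FakeKaizenClient:
    """Stand-in for the Kaizen client; real signatures may differ."""

    def prompts_encode(self, prompt: str) -> list:
        # Pretend tokenization: one token per whitespace-separated word.
        return list(range(len(prompt.split())))

    def optimize_request(self, prompt: str, max_tokens: int) -> str:
        # Pretend optimization: keep only the first max_tokens words.
        return " ".join(prompt.split()[:max_tokens])


def fit_to_budget(client, prompt: str, budget: int) -> str:
    """Return a prompt that encodes within `budget` tokens."""
    if len(client.prompts_encode(prompt)) <= budget:
        return prompt  # already within budget; skip optimization
    return client.optimize_request(prompt, max_tokens=budget)


client = FakeKaizenClient()
long_prompt = "summarize " * 50          # 50 words -> 50 fake tokens
short = fit_to_budget(client, long_prompt, budget=10)
print(len(client.prompts_encode(short)))  # → 10
```

Checking the count before optimizing avoids an extra API round-trip for prompts that already fit.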