## optimize
Run compression and token estimation in a single call. Ideal when you need byte savings plus model-specific token counts.
### Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| `data` | `Any` | ✅ | JSON payload to compress. |
| `options` | `EncodeOptions` | ❌ | Same knobs as `compress` (`indent`, `delimiter`, `length_marker`). |
| `token_models` | `list[str]` | ❌ | Model IDs (e.g., `["gpt-4o-mini", "claude-3-5-sonnet-20240620"]`) for token stats. |
### Code example
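A minimal sketch in Python using `requests`. The base URL, endpoint path, auth header, and option values are assumptions for illustration, not confirmed parts of the Kaizen API; substitute the values for your deployment.

```python
import requests

# Hypothetical endpoint and auth scheme; adjust to your deployment.
URL = "https://api.kaizen.example/v1/optimize"
HEADERS = {"Authorization": "Bearer <API_KEY>"}

payload = {
    # Required: the JSON payload to compress.
    "data": {"users": [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]},
    # Optional: same knobs as compress (values here are illustrative).
    "options": {"indent": 0, "delimiter": ",", "length_marker": True},
    # Optional: model IDs to get per-model token counts for.
    "token_models": ["gpt-4o-mini", "claude-3-5-sonnet-20240620"],
}

resp = requests.post(URL, json=payload, headers=HEADERS, timeout=30)
resp.raise_for_status()
result = resp.json()
```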
### Response example
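A plausible response shape, assuming fields for the compressed output, byte savings, and per-model token counts; the exact field names and values are illustrative, as this section does not spell them out.

```json
{
  "compressed": "<compressed payload>",
  "original_bytes": 94,
  "compressed_bytes": 41,
  "savings_pct": 56.4,
  "token_counts": {
    "gpt-4o-mini": 14,
    "claude-3-5-sonnet-20240620": 15
  }
}
```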
### Errors

| Status | Meaning |
|---|---|
| `400` | `token_models` must be a list of strings; the payload must include `data`. |
| `429` | Token estimation rate limit exceeded (per model). Retry after the interval in `Retry-After`. |
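Because `429` responses carry a `Retry-After` header, callers can back off for the suggested interval before retrying. A small sketch, reusing the hypothetical request shape from the code example above:

```python
import time
import requests

def optimize_with_retry(url, payload, headers, max_attempts=3):
    """POST to the optimize endpoint, honoring Retry-After on 429."""
    for attempt in range(max_attempts):
        resp = requests.post(url, json=payload, headers=headers, timeout=30)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Assumes Retry-After is a seconds value; default to 1s if absent.
        time.sleep(float(resp.headers.get("Retry-After", 1)))
    raise RuntimeError(f"rate limited after {max_attempts} attempts")
```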
### Notes
- Token counts use Kaizen’s calibrated model tables; treat them as accurate estimates, not billing-grade numbers.
- When you only care about prompt compression (no token stats), prefer `compress` to avoid the extra processing overhead.