optimize_request
Compress an outbound provider payload (messages, tools, multimodal content) and collect stats before you hit the vendor API.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| `prompt` | Any | ✅ | Provider-ready payload (messages, input, tools, etc.). |
| `options` | EncodeOptions | ❌ | Same knobs as `compress`. |
| `auto_detect_json` | bool | ❌ (default `True`) | Automatically find JSON snippets nested inside strings. |
| `metadata` | dict[str, Any] | ❌ | User-defined context echoed in responses. |
| `token_models` | list[str] | ❌ | Request token stats for specific models. |
Code example
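The original example did not survive extraction. The sketch below shows a plausible call shape; the inline `optimize_request` stub is a stand-in so the snippet runs on its own — the real import path, compression behavior, and response envelope are assumptions. Only the `result` field and the metadata echo are documented on this page.

```python
import json

# Stand-in stub mirroring the documented signature; the real SDK function
# performs actual compression. Response fields other than `result` and the
# echoed `metadata` are illustrative assumptions.
def optimize_request(prompt, options=None, auto_detect_json=True,
                     metadata=None, token_models=None):
    compressed = json.dumps(prompt, separators=(",", ":"))  # placeholder "compression"
    return {
        "result": compressed,
        "metadata": metadata or {},
        "token_stats": {m: {"input_tokens": None} for m in (token_models or [])},
    }

# A provider-ready payload, exactly as you would send it to the vendor API.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Summarize the attached report."},
    ],
}

resp = optimize_request(
    payload,
    metadata={"request_id": "req-123"},  # echoed back in the response
    token_models=["gpt-4o"],             # request per-model token stats
)
print(resp["metadata"]["request_id"])
```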
Response example
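The original response body was lost in extraction. The JSON below is a hedged reconstruction: `result` and the echoed `metadata` are documented above; the `token_stats` field name, its shape, and the numbers are illustrative assumptions.

```json
{
  "result": { "...": "compressed, provider-ready payload" },
  "metadata": { "request_id": "req-123" },
  "token_stats": {
    "gpt-4o": { "input_tokens": 812 }
  }
}
```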
Errors
- `400` → `prompt` missing/empty, or contains unsupported field types.
- `422` → Auto-detection failed to parse embedded JSON; disable `auto_detect_json` if your payload already uses strict types.
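The 422 case is easiest to see with a concrete string. The snippet below mimics what an embedded-JSON auto-detector might attempt (it is not the library's actual detection code): a message field that looks like JSON but is malformed fails to parse, which the API surfaces as a 422.

```python
import json

# A string field that appears to embed JSON but is invalid (trailing comma).
embedded = '{"user_id": 42, "tags": ["a", "b",]}'

try:
    json.loads(embedded)
except json.JSONDecodeError as exc:
    # In the API this parse failure surfaces as a 422. Passing
    # auto_detect_json=False sends such strings through untouched.
    print(f"auto-detect would fail here: {exc}")
```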
Notes
- Feed `result` directly into your provider client (OpenAI Responses, Anthropic Messages, Gemini `generate_content`, etc.).
- For the matching decode step after the provider responds, use `optimize_response`.