prompts_encode
Purpose-built encoder for chat-style prompts. It automatically detects JSON fragments nested inside message strings and can use supplied schemas and metadata to guide encoding.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| `prompt` | `Any` | ✅ | Chat payload (messages, tools, metadata, etc.). |
| `options` | `EncodeOptions` | ❌ | Same tuning knobs as `compress`. |
| `auto_detect_json` | `bool` | ❌ (default `True`) | Detect JSON fragments embedded inside strings. |
| `schemas` | `dict[str, Any]` | ❌ | Supply JSON Schemas to help Kaizen segment structured fields. |
| `metadata` | `dict[str, Any]` | ❌ | Arbitrary observability data echoed back in responses. |
| `token_models` | `list[str]` | ❌ | Request model-specific token stats. |
Code example
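A minimal sketch, not confirmed client code: the import path, constructor, and method spelling (`client.prompts_encode`) are assumptions; the keyword arguments follow the parameter table above.

```python
from kaizen import Kaizen  # hypothetical import path

client = Kaizen(api_key="YOUR_API_KEY")  # hypothetical constructor

result = client.prompts_encode(
    prompt={
        "messages": [
            {"role": "system", "content": "You are a support assistant."},
            {"role": "user", "content": '{"order_id": 4821, "issue": "late delivery"}'},
        ]
    },
    auto_detect_json=True,  # default; parses the JSON fragment in the user message
    schemas={
        "order": {
            "type": "object",
            "properties": {"order_id": {"type": "integer"}, "issue": {"type": "string"}},
        }
    },
    metadata={"customer_id": "cus_123", "workflow": "support-triage"},
    token_models=["gpt-4o"],
)
```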
Response example
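An illustrative shape for what the call above might return. The field names and values below are placeholder assumptions, except that `metadata` is echoed back and token stats are keyed by the models requested in `token_models`.

```python
# Illustrative only: field names are assumptions, not the documented schema.
{
    "result": "<encoded prompt payload to send upstream>",
    "metadata": {"customer_id": "cus_123", "workflow": "support-triage"},  # echoed back
    "token_stats": {
        "gpt-4o": {"input_tokens": "<model-specific count>"},  # one entry per token_models item
    },
}
```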
Errors
- `400` → `prompt` missing or contains unsupported types.
- `422` → Auto-detection failed to parse a JSON fragment (disable `auto_detect_json` as a fallback; see the sketch below).
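A minimal sketch of that 422 fallback, assuming the client raises an exception that exposes the HTTP status; `KaizenAPIError` and its `status_code` attribute are assumed names, not confirmed API.

```python
from kaizen import Kaizen, KaizenAPIError  # hypothetical import path and exception class

client = Kaizen(api_key="YOUR_API_KEY")  # hypothetical constructor
payload = {"messages": [{"role": "user", "content": '{"order_id": 4821'}]}  # malformed fragment

try:
    result = client.prompts_encode(prompt=payload)
except KaizenAPIError as err:
    if err.status_code == 422:
        # Auto-detection failed on the embedded JSON fragment; retry without it.
        result = client.prompts_encode(prompt=payload, auto_detect_json=False)
    else:
        raise
```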
Notes
- Use this method for every provider wrapper; the returned `result` is what you send upstream (see the sketch after this list).
- Store `metadata` (e.g., customer IDs, workflow names) to correlate savings or replay prompts for auditing.
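A sketch of a provider wrapper along these lines, assuming the response exposes the encoded payload under a `result` key (as in the illustrative response above); the Kaizen import and constructor are assumptions, while the upstream call uses the standard OpenAI Python client.

```python
from kaizen import Kaizen  # hypothetical import path
from openai import OpenAI

kaizen = Kaizen(api_key="KAIZEN_KEY")  # hypothetical constructor
openai_client = OpenAI()

def chat(messages, customer_id):
    # Encode before every upstream call; tag it with metadata for later correlation.
    encoded = kaizen.prompts_encode(
        prompt={"messages": messages},
        metadata={"customer_id": customer_id, "workflow": "support-triage"},
    )
    # The returned result is what goes upstream (key name assumed, see the response example).
    return openai_client.chat.completions.create(
        model="gpt-4o",
        messages=encoded["result"],
    )
```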