Reduce token usage before LLM calls

Use client.prompts_encode to count a prompt's tokens before sending it, and client.optimize_request to shrink prompts that would exceed the model's token limit.
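A minimal sketch of the check-then-shrink flow. The real `client` object and the exact signatures of `prompts_encode` and `optimize_request` depend on your SDK; the stub client below (whitespace tokenizer, prefix truncation) is a hypothetical stand-in used only to make the example runnable.

```python
TOKEN_LIMIT = 8  # small limit for demonstration; real models allow far more

class StubClient:
    """Hypothetical stand-in for the SDK client described above."""

    def prompts_encode(self, prompt: str) -> list[str]:
        # Stand-in tokenizer: real SDKs use a model-specific encoding,
        # not whitespace splitting.
        return prompt.split()

    def optimize_request(self, prompt: str, max_tokens: int) -> str:
        # Naive reduction strategy: keep only the first `max_tokens` tokens.
        # A real implementation would compress or summarize instead.
        tokens = self.prompts_encode(prompt)
        return " ".join(tokens[:max_tokens])

client = StubClient()
prompt = "Summarize the following meeting notes into three short bullet points please"

# Check the token count first; only rewrite the prompt if it is too long.
if len(client.prompts_encode(prompt)) > TOKEN_LIMIT:
    prompt = client.optimize_request(prompt, max_tokens=TOKEN_LIMIT)

print(len(client.prompts_encode(prompt)))  # → 8
```

Checking the count before optimizing avoids rewriting prompts that already fit, so short requests pass through unchanged.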