API Reference

Evaluate Prompt

Operations about evaluate_prompts

POST /evaluate_prompt/predict (Evaluate Prompt Predict)

Evaluate an AI Prompt.
Request Body (application/json, required)

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| messages[role] | array<string> | Yes | Role of each message. |
| messages[content] | array<string> | Yes | Content of each message. |
| max_tokens | integer (int32) | No | Maximum number of output tokens, up to 400. Default: 300. |
| temperature | number (float) | No | How creative the response should be, between 0 and 2; the lower, the less creative. |
| system | string | No | For Anthropic, the system prompt to use. |
| model_kind | string | No | Which model provider should be used. Value in: "openai" \| "anthropic". Default: "openai". |