The content of the message. Content is required for all messages, except assistant messages containing function calls, where it can be null. You can use the following parameters depending on the modality.
Text Content
Image Content
Video Content
Option 1:
You can use a string type to represent the text content of the message.
Option 2:
Use an array of content parts, object[]. Detailed fields are as follows:
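The two content forms above might look like this in a request body. This is a sketch assuming the common OpenAI-compatible message shape; the `text` and `image_url` part types are assumptions based on that convention.

```python
# Option 1: content as a plain string.
string_message = {"role": "user", "content": "Describe this picture."}

# Option 2: content as an array of content parts (part type names assumed
# from the common OpenAI-compatible "text" / "image_url" shapes).
parts_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this picture."},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
    ],
}

# An assistant message carrying function calls may have null content.
tool_call_message = {"role": "assistant", "content": None, "tool_calls": []}
```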
The maximum number of tokens to generate in the completion. If the number of tokens in your prompt (previous messages) plus max_tokens exceeds the model’s context length, the behavior depends on context_length_exceeded_behavior. By default, max_tokens will be reduced to fit the context window rather than returning an error.
Whether to stream partial progress. If set, tokens will be sent as data-only server-sent events (SSE) as they become available, and the stream will be terminated with a data: [DONE] message.
If set, an additional chunk will be streamed before the data: [DONE] message. The usage field in this chunk shows the token usage statistics for the entire request, while the choices field is always an empty array. All other chunks will also include a usage field, but with a null value.
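A minimal sketch of consuming such a stream: each event arrives as a `data:` line, and the stream terminates with `data: [DONE]`. The chunk contents below are illustrative, not real API output.

```python
import json

def parse_sse_stream(lines):
    """Yield decoded chunks from data-only SSE lines, stopping at [DONE]."""
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return
        yield json.loads(payload)

# Illustrative stream: usage is null on content chunks and populated only on
# the extra final chunk sent before [DONE] when usage reporting is enabled.
raw = [
    'data: {"choices": [{"delta": {"content": "Hel"}}], "usage": null}',
    'data: {"choices": [{"delta": {"content": "lo"}}], "usage": null}',
    'data: {"choices": [], "usage": {"prompt_tokens": 5, "completion_tokens": 2}}',
    "data: [DONE]",
]

chunks = list(parse_sse_stream(raw))
text = "".join(c["delta"]["content"] for ch in chunks for c in ch["choices"])
```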
The number of completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure you have reasonable settings for max_tokens and stop. Required range: 1 <= x <= 128
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
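Taken together, max_tokens, n, and seed compose in a request body like this. The model name is a placeholder, and the arithmetic illustrates why n multiplies token spend.

```python
request = {
    "model": "example-model",  # placeholder, not a real model name
    "messages": [{"role": "user", "content": "Write a haiku."}],
    "max_tokens": 64,  # reduced automatically if it would overflow the context
    "n": 3,            # three completions per prompt; consumes ~3x the tokens
    "seed": 42,        # best-effort determinism across repeated requests
}

# Worst-case completion token spend for this single request:
max_completion_tokens = request["n"] * request["max_tokens"]
```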
Positive values penalize new tokens based on their existing frequency in the text, decreasing the model’s likelihood of repeating the same line verbatim. If the goal is only to slightly reduce repetitive samples, reasonable values are between 0.1 and 1. If the goal is to strongly suppress repetition, the coefficient can be increased to 2, but this may noticeably degrade sample quality. Negative values can be used to increase the likelihood of repetition. See also presence_penalty, which penalizes tokens that have appeared at least once at a fixed rate. Required range: -2 <= x <= 2
Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood of talking about new topics. If the goal is only to slightly reduce repetitive samples, reasonable values are between 0.1 and 1. If the goal is to strongly suppress repetition, the coefficient can be increased to 2, but this may noticeably degrade sample quality. Negative values can be used to increase the likelihood of repetition. See also frequency_penalty, which penalizes tokens at an increasing rate based on how often they appear. Required range: -2 <= x <= 2
Applies a penalty to repeated tokens to discourage or encourage repetition. A value of 1.0 means no penalty, allowing free repetition. Values above 1.0 penalize repetition, reducing the likelihood of repeated tokens. Values between 0.0 and 1.0 reward repetition, increasing the chance of repeated tokens. A value of 1.2 is generally recommended for a good balance. Note that the penalty applies to both the generated output and the prompt in decoder-only models. Required range: 0 < x < 2
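The difference between the two penalties above can be sketched numerically. This follows the standard formulation (frequency penalty scales with occurrence count, presence penalty is a flat one-time penalty); the exact effect varies per model.

```python
def penalized_logit(logit, count, frequency_penalty=0.0, presence_penalty=0.0):
    """Adjust one token's logit before sampling.

    frequency_penalty is applied once per prior occurrence of the token;
    presence_penalty is applied once if the token has appeared at all.
    (Sketch of the standard formulation; actual behavior varies per model.)
    """
    return (logit
            - count * frequency_penalty
            - (1.0 if count > 0 else 0.0) * presence_penalty)

# A token already seen 3 times, with both penalties at 0.5:
# 2.0 - 3 * 0.5 - 0.5 = 0.0
adjusted = penalized_logit(2.0, count=3, frequency_penalty=0.5, presence_penalty=0.5)

# An unseen token is untouched by either penalty:
unseen = penalized_logit(1.0, count=0, frequency_penalty=0.5, presence_penalty=0.5)
```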
The sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p, but not both. Required range: 0 <= x <= 2
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both. Required range: 0 < x <= 1
Top-k sampling is another sampling method, in which sampling is restricted to the k most likely next tokens and the probability mass is redistributed among only those k tokens. The value of k controls the number of candidates for the next token at each step during text generation. Required range: 1 <= x <= 128
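Top-p and top-k filtering over a toy next-token distribution can be sketched as follows; the token probabilities are made up for illustration.

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability reaches
    top_p, then renormalize. probs maps token -> probability."""
    kept, cumulative = {}, 0.0
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(kept.values())
    return {token: p / total for token, p in kept.items()}

def top_k_filter(probs, k):
    """Keep only the k most likely tokens and renormalize."""
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    return {token: p / total for token, p in top}

probs = {"the": 0.5, "a": 0.3, "an": 0.15, "zebra": 0.05}
nucleus = top_p_filter(probs, top_p=0.75)  # "the" + "a" reach 0.8 >= 0.75
top2 = top_k_filter(probs, k=2)            # also keeps "the" and "a"
```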
Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens to an associated bias value from -100 to 100.
Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model. For example, setting "logit_bias":{"1024": 6} will increase the likelihood of the token with ID 1024.
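The "added to the logits prior to sampling" mechanics can be sketched numerically. The two-token vocabulary and logit values here are made up for illustration.

```python
import math

def softmax(logits):
    """Convert a token -> logit map into a token -> probability map."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

# Hypothetical logits keyed by token ID, as the model might produce them.
logits = {"1024": 1.0, "2048": 1.0}

# logit_bias as it appears in the request body; the bias is simply added
# to the matching token's logit before sampling.
logit_bias = {"1024": 6}
biased = {t: v + logit_bias.get(t, 0) for t, v in logits.items()}

before = softmax(logits)["1024"]  # 0.5: the two tokens start out tied
after = softmax(biased)["1024"]   # near 1 after the +6 bias
```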
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.Required range: 0 <= x <= 20
A list of tools the model may call. Currently, only functions are supported as tools. Use this to provide a list of functions the model may generate JSON inputs for. Learn more about function calling in the Function Calling Guide.
Whether to enable strict schema adherence when generating function calls. If set to true, the model will follow the exact schema defined in the parameters field.
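A tools array with one function might look like this, assuming the common OpenAI-compatible tool schema; the function name and its parameters are purely illustrative.

```python
# Sketch of a tools array with a single (hypothetical) function.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical function name
            "description": "Get the current weather for a city.",
            "strict": True,  # follow the parameters schema exactly
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                },
                "required": ["city"],
            },
        },
    }
]
```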
Allows forcing the model to produce a specific output format. Set to { "type": "json_schema", "json_schema": {...} } to enable Structured Outputs, which ensures the model will match your supplied JSON schema. Set to { "type": "json_object" } to enable the legacy JSON mode, which ensures the model generates messages that are valid JSON. For models that support it, json_schema is recommended.
JSON Schema response format, used to generate structured JSON responses. Required when type is set to json_schema, and supported only for that type. Learn more in the Structured Outputs Guide.
The schema of the response format, described as a JSON Schema object. Learn how to build JSON schemas here.Supported types: string, number, integer, boolean, array, object, enum, anyOf.
Whether to enable strict schema adherence when generating output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. If you enable Structured Outputs by providing strict: true and call the API with an unsupported JSON Schema, you will receive an error.
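The two response_format modes described above can be sketched side by side; the person schema itself is illustrative.

```python
# Structured Outputs: the model's reply must match the supplied schema.
structured = {
    "type": "json_schema",
    "json_schema": {
        "name": "person",  # illustrative schema name
        "strict": True,    # enforce the exact schema (supported subset only)
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "age": {"type": "integer"},
            },
            "required": ["name", "age"],
        },
    },
}

# Legacy JSON mode: the reply is valid JSON, but no fixed schema is enforced.
legacy = {"type": "json_object"}
```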
The reason the model stopped generating tokens. This will be “stop” if the model hit a natural stop point or a provided stop sequence, or “length” if the maximum number of tokens specified in the request was reached. Options: stop, length
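A common client-side use of this field is detecting truncation; the choice objects below are illustrative, not real API output.

```python
def was_truncated(choice):
    """True when generation stopped because max_tokens was reached,
    rather than at a natural stop point or stop sequence."""
    return choice["finish_reason"] == "length"

# Illustrative choice objects:
natural = {"finish_reason": "stop"}
cut_off = {"finish_reason": "length"}
```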