POST /openai/v1/chat/completions
Create Chat Completion
curl --request POST \
  --url https://api.myrouter.ai/openai/v1/chat/completions \
  --header 'Authorization: Bearer <api-key>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "<string>",
  "messages": [
    {
      "content": {
        "type": "<string>",
        "text": "<string>",
        "image_url": "<string>",
        "video_url": "<string>"
      },
      "role": "<string>",
      "name": "<string>"
    }
  ],
  "max_tokens": 123,
  "stream": false,
  "stream_options": {
    "include_usage": true
  },
  "n": 1,
  "seed": 123,
  "frequency_penalty": 0,
  "presence_penalty": 0,
  "repetition_penalty": 1,
  "stop": "<string>",
  "temperature": 1,
  "top_p": 1,
  "top_k": 40,
  "min_p": 0,
  "logit_bias": {},
  "logprobs": false,
  "top_logprobs": 0,
  "tools": [
    {
      "type": "<string>",
      "function": {
        "name": "<string>",
        "description": "<string>",
        "parameters": {},
        "strict": true
      }
    }
  ],
  "response_format": {
    "type": "<string>",
    "json_schema": {
      "name": "<string>",
      "description": "<string>",
      "schema": {},
      "strict": true
    }
  },
  "separate_reasoning": false,
  "enable_thinking": true
}
'
{
  "choices": [
    {
      "finish_reason": "<string>",
      "index": 123,
      "message": {
        "role": "<string>",
        "content": "<string>",
        "reasoning_content": "<string>"
      }
    }
  ],
  "created": 123,
  "id": "<string>",
  "model": "<string>",
  "object": "<string>",
  "usage": {
    "completion_tokens": 123,
    "prompt_tokens": 123,
    "total_tokens": 123
  }
}
Generate a model response based on the specified chat conversation.

Request Headers

Content-Type
string
required
Enum: application/json
Authorization
string
required
Bearer authentication format: Bearer {{API Key}}.
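The two required headers can be sketched as a small helper; the key value here is a placeholder, not a real credential:

```python
def build_headers(api_key: str) -> dict:
    """Return the Content-Type and Authorization headers this endpoint requires."""
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }

# "sk-example" is a placeholder; substitute your actual API key.
headers = build_headers("sk-example")
```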

Request Body

model
string
required
The name of the model to use.
messages
object[]
required
A list of messages comprising the current conversation.
max_tokens
integer
required
The maximum number of tokens to generate in the completion. If the number of tokens in your prompt (previous messages) plus max_tokens exceeds the model’s context length, the behavior depends on context_length_exceeded_behavior. By default, max_tokens will be reduced to fit the context window rather than returning an error.
stream
boolean | null
default:false
Whether to stream partial progress. If set, tokens will be sent as data-only server-sent events (SSE) as they become available, and the stream will be terminated with a data: [DONE] message.
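The data-only SSE framing described above can be sketched as a small parser. This assumes each event arrives as a `data: <json>` line terminated by `data: [DONE]`, as documented; the `delta` field names in the sample chunks follow the common OpenAI-compatible streaming convention and are an assumption, not taken from this page:

```python
import json

def parse_sse_lines(lines):
    """Yield decoded JSON chunks from data-only SSE lines,
    stopping at the terminal 'data: [DONE]' sentinel."""
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return
        yield json.loads(payload)

chunks = list(parse_sse_lines([
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]))
```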
stream_options
object | null
Options for streaming responses. Only set this when stream is set to true.
n
integer | null
default:1
The number of completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure you have reasonable settings for max_tokens and stop. Required range: 1 <= x <= 128
seed
integer | null
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
frequency_penalty
number | null
default:0
Positive values penalize new tokens based on their existing frequency in the text, decreasing the model’s likelihood of repeating the same line verbatim. If the goal is only to slightly reduce repetitive samples, reasonable values are between 0.1 and 1. If the goal is to strongly suppress repetition, the coefficient can be increased to 2, but this may noticeably degrade sample quality. Negative values can be used to increase the likelihood of repetition. See also presence_penalty, which penalizes tokens that have appeared at least once at a fixed rate. Required range: -2 < x < 2
presence_penalty
number | null
default:0
Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood of talking about new topics. If the goal is only to slightly reduce repetitive samples, reasonable values are between 0.1 and 1. If the goal is to strongly suppress repetition, the coefficient can be increased to 2, but this may noticeably degrade sample quality. Negative values can be used to increase the likelihood of repetition. See also frequency_penalty, which penalizes tokens at an increasing rate based on how often they appear. Required range: -2 < x < 2
repetition_penalty
number | null
Applies a penalty to repeated tokens to discourage or encourage repetition. A value of 1.0 means no penalty, allowing free repetition. Values above 1.0 penalize repetition, reducing the likelihood of repeated tokens. Values between 0.0 and 1.0 reward repetition, increasing the chance of repeated tokens. A value of 1.2 is generally recommended for a good balance. Note that the penalty applies to both the generated output and the prompt in decoder-only models. Required range: 0 < x < 2
stop
string | null
Up to 4 sequences where the API will stop generating further tokens. The returned text will include the stop sequence.
temperature
number | null
default:1
The sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p, but not both. Required range: 0 < x < 2
top_p
number | null
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both. Required range: 0 < x <= 1
top_k
integer | null
Top-k sampling is another sampling method where the k most likely next tokens are filtered and the probability mass is redistributed among only those k next tokens. The value of k controls the number of candidates for the next token at each step during text generation. Required range: 1 < x < 128
min_p
number | null
Represents the minimum probability for a token to be considered, relative to the probability of the most likely token. Required range: 0 <= x <= 1
logit_bias
map[string, integer] | null
Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps token IDs to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model. For example, setting "logit_bias": {"1024": 6} will increase the likelihood of the token with ID 1024.
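A minimal sketch of building and validating such a map before sending it, assuming token IDs are given as the keys (the IDs used here are arbitrary examples, not tied to any tokenizer):

```python
def make_logit_bias(biases: dict) -> dict:
    """Build a logit_bias map: string token-ID keys, integer bias
    values clamped to the documented [-100, 100] range (reject out-of-range)."""
    out = {}
    for token_id, bias in biases.items():
        if not -100 <= bias <= 100:
            raise ValueError(f"bias {bias} for token {token_id} out of range")
        out[str(token_id)] = int(bias)
    return out

# Boost token 1024 slightly; effectively ban token 50256.
bias = make_logit_bias({1024: 6, 50256: -100})
```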
logprobs
boolean | null
default:false
Whether to return log probabilities of the output tokens. If true, returns the log probabilities of each output token in the message content.
top_logprobs
integer | null
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used. Required range: 0 <= x <= 20
tools
object[] | null
A list of tools the model may call. Currently, only functions are supported as tools. Use this to provide a list of functions the model may generate JSON inputs for. Learn more about function calling in the Function Calling Guide.
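One entry in the tools array, following the fields shown in the request template (type, function.name, description, parameters, strict). The `get_weather` function and its parameter schema are hypothetical, for illustration only:

```python
# A sketch of a single function tool definition.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {  # JSON Schema for the function's arguments
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
        "strict": True,
    },
}

tools = [get_weather_tool]  # the request field takes a list of such objects
```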
response_format
object | null
Allows forcing the model to produce a specific output format. Set to { "type": "json_schema", "json_schema": {...} } to enable Structured Outputs, which ensures the model will match your supplied JSON schema. Set to { "type": "json_object" } to enable the legacy JSON mode, which ensures the model generates messages that are valid JSON. For models that support it, json_schema is recommended.
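Both variants can be sketched as request fragments; the `city_info` schema below is a hypothetical example, not part of the API:

```python
# Structured Outputs: constrain the reply to a supplied JSON schema.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "city_info",
        "description": "A city and its population.",
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "population": {"type": "integer"},
            },
            "required": ["city", "population"],
        },
        "strict": True,
    },
}

# Legacy JSON mode: only guarantees the output is valid JSON.
legacy_format = {"type": "json_object"}
```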
separate_reasoning
boolean | null
default:false
Whether to separate reasoning from “content” into the “reasoning_content” field. Supported models:
  • deepseek/deepseek-r1-turbo
enable_thinking
boolean | null
default:true
Controls switching between thinking and non-thinking modes. Supported models:
  • zai-org/glm-4.5

Response

choices
object[]
required
A list of chat completion choices.
created
integer
required
The Unix timestamp (in seconds) of when the response was generated.
id
string
required
A unique identifier for the response.
model
string
required
The model used for the chat completion.
object
string
required
The object type, always chat.completion.
usage
object
Usage statistics. For streaming responses, the usage field is included in the last response chunk returned.
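When streaming with stream_options.include_usage enabled, the usage block arrives only in the final chunk, so a client has to carry it out of the loop. A minimal sketch over already-decoded chunks (the chunk shapes here follow the response schema above):

```python
def final_usage(chunks):
    """Return the usage dict from the last streamed chunk that carries one,
    or None if usage reporting was not enabled."""
    usage = None
    for chunk in chunks:
        if chunk.get("usage"):
            usage = chunk["usage"]
    return usage

usage = final_usage([
    {"choices": [{"delta": {"content": "Hi"}}]},
    {"choices": [],
     "usage": {"prompt_tokens": 5, "completion_tokens": 2, "total_tokens": 7}},
])
```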