
Overview

Reasoning models are advanced language models optimized for complex problem-solving. They improve answer accuracy by producing detailed intermediate reasoning steps (a chain of thought) before the final answer.

Typical Use Cases

  • Complex Problem Solving: Suitable for scenarios requiring step-by-step derivation and clear logical steps, such as mathematics and scientific reasoning.
  • Decision Support Systems: Provides detailed reasoning processes to support decision analysis, helping understand the logic behind decisions.
  • Education and Training: Helps users learn and understand complex knowledge by providing detailed derivation processes.

Installation & Preparation

Before using reasoning models, make sure you have the latest version of the OpenAI SDK installed:
pip install -U openai

API Usage

Use reasoning models by calling the /chat/completions endpoint.

Request Parameters

  • max_tokens: Sets the maximum number of output tokens for the model.
  • temperature: Recommended range is 0.5 to 0.7, with 0.6 as a good default, to balance creativity and logical coherence.
  • top_p: Recommended to set to 0.95.
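The recommended settings above can be collected into a small reusable helper. This is a sketch of our own (neither `RECOMMENDED_PARAMS` nor `recommended_params` is part of the OpenAI SDK):

```python
# Recommended sampling settings from this guide; tweak per use case.
RECOMMENDED_PARAMS = {
    "temperature": 0.6,   # midpoint of the recommended 0.5-0.7 range
    "top_p": 0.95,
    "max_tokens": 4096,   # cap on output tokens; raise for long derivations
}

def recommended_params(**overrides):
    """Return the recommended settings, with optional per-call overrides."""
    params = dict(RECOMMENDED_PARAMS)
    params.update(overrides)
    return params
```

These can then be spread into a request, e.g. `client.chat.completions.create(model=..., messages=..., **recommended_params())`.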

Example Request Code

Streaming Request

from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.myrouter.ai/openai")
messages = [
    {"role": "user", "content": "Explain Newton's second law."}
]

response = client.chat.completions.create(
    model="deepseek/deepseek-r1",
    messages=messages,
    stream=True,
    max_tokens=4096
)

content = ""
reasoning_content = ""
for chunk in response:
    # Some stream chunks may carry no choices (e.g. usage-only chunks).
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.content:
        content += delta.content
    # reasoning_content is an extension field; guard with getattr so the
    # loop also works on deltas that do not carry it.
    if getattr(delta, "reasoning_content", None):
        reasoning_content += delta.reasoning_content

print("Final answer:", content)
print("Reasoning process:", reasoning_content)

Non-Streaming Request

response = client.chat.completions.create(
    model="deepseek/deepseek-r1",
    messages=[
        {"role": "user", "content": "What is the greenhouse effect? How can it be mitigated?"}
    ],
    stream=False,
    max_tokens=4096
)

message = response.choices[0].message
content = message.content
# reasoning_content is an extension field; guard with getattr so the code
# also works on messages that do not carry it.
reasoning_content = getattr(message, "reasoning_content", None)

print("Final answer:", content)
print("Reasoning process:", reasoning_content)

Context Management

The reasoning content returned by the model is not automatically appended to the next round of conversation. Manage the conversation history yourself, appending only the final answer (content), not the reasoning content:
messages.append({"role": "assistant", "content": content})
messages.append({"role": "user", "content": "Please continue explaining the solution."})
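As a sketch, the bookkeeping for a follow-up turn might look like this (the `add_turn` helper is our own illustration, not an SDK function; note that only the final answer, never the reasoning content, goes back into the history):

```python
def add_turn(messages, assistant_content, next_user_prompt):
    """Append the assistant's final answer (not its reasoning) and the next user message."""
    messages.append({"role": "assistant", "content": assistant_content})
    messages.append({"role": "user", "content": next_user_prompt})
    return messages

history = [{"role": "user", "content": "Explain Newton's second law."}]
add_turn(history, "F = ma: force equals mass times acceleration.",
         "Please continue explaining the solution.")
# history now holds three messages and can be sent as the next request's `messages`.
```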

Supported Models

Pricing

  • Billing is based on the number of input and output tokens.
  • For specific pricing and conversion rules, please check the model detail page.

Notes & Best Practices

  • Do not add reasoning instructions in the system message; instead, specify instructions directly in the user message.
  • In math problems, clearly state your requirements, for example: “Please reason step by step and clearly state the final answer.”
  • To prevent the model from skipping the reasoning step, it is recommended to force the model to begin its output with a newline.
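Following the practices above, a prompt for a math problem can be assembled like this (the `build_math_prompt` helper is our own sketch; the instruction goes in the user message, and no system message is used):

```python
def build_math_prompt(problem):
    """Wrap a math problem with an explicit reasoning instruction in the user message."""
    instruction = "Please reason step by step and clearly state the final answer."
    return [{"role": "user", "content": f"{problem}\n{instruction}"}]

msgs = build_math_prompt("Solve 2x + 3 = 11.")
```

The resulting list can be passed directly as the `messages` argument of `client.chat.completions.create`.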