Which models should I use?
There is no single right answer! The following is a curated list based on Myrouter internal testing, community feedback, and external benchmarks. We recommend using it as a starting point and will update it regularly as new models become available.

- Model sizes are labeled as Small, Medium, or Large
- For best latency, use Small or Medium models. For best quality, use Large models or fine-tune Medium or Small models.
- You can explore all models in the Myrouter model library
| Use Case | Recommended Models |
|---|---|
| Code Generation & Reasoning | Claude series (Large/Medium/Small)<br>Deepseek-r1-0528 (Large)<br>Deepseek-v3-0324 (Large)<br>Qwen3-Coder-480B-A35B-Instruct (Large)<br>Qwen3-235B-A22B-Instruct-2507 (Large)<br>Kimi-K2-Instruct (Medium)<br>GLM-4.5 (Medium) |
| General Reasoning & Planning | Deepseek-r1-0528 (Large)<br>Deepseek-v3-0324 (Large)<br>Qwen-2.5-72b-instruct (Medium)<br>Llama-3.3-70b-instruct (Medium) |
| Function Calling & Tool Use | Qwen3-235b-a22b-fp8 (Large)<br>Qwen 3 series (Large/Medium/Small) |
| Long Context & Summarization | Llama-4-maverick-17b-128e-instruct-fp8 (Large)<br>Llama-4-scout-17b-16e-instruct (Medium) |
| Vision & Document Understanding | Llama-4-maverick-17b-128e-instruct-fp8 (Large)<br>Qwen2.5-vl-72b-instruct (Medium)<br>Llama-4-scout-17b-16e-instruct (Medium) |
| Low-Latency NLU & Extraction | Llama-3.1-8b-instruct (Small)<br>Llama-3.2-3b-instruct (Small) |
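
If you route requests programmatically, the table above can be encoded as a simple lookup that applies the latency rule of thumb (prefer Small/Medium when latency matters). This is an illustrative sketch only: the model names come from the table, but the `pick_model` helper and the `use_case` keys are hypothetical, not part of the Myrouter API.

```python
# Illustrative sketch: encode a subset of the recommendation table and apply
# the "Small/Medium for latency, Large for quality" rule. Not a Myrouter API.

MODELS = {
    "code_generation": [
        ("Deepseek-r1-0528", "Large"),
        ("Qwen3-Coder-480B-A35B-Instruct", "Large"),
        ("Kimi-K2-Instruct", "Medium"),
        ("GLM-4.5", "Medium"),
    ],
    "function_calling": [
        ("Qwen3-235b-a22b-fp8", "Large"),
    ],
    "low_latency_nlu": [
        ("Llama-3.1-8b-instruct", "Small"),
        ("Llama-3.2-3b-instruct", "Small"),
    ],
}

def pick_model(use_case: str, prefer_latency: bool = False) -> str:
    """Return a recommended model name for the use case.

    When prefer_latency is True, fall through to the first Small or
    Medium candidate; otherwise return the first (quality-ranked) entry.
    """
    candidates = MODELS[use_case]
    if prefer_latency:
        for name, size in candidates:
            if size in ("Small", "Medium"):
                return name
    return candidates[0][0]

print(pick_model("code_generation"))                       # Deepseek-r1-0528
print(pick_model("code_generation", prefer_latency=True))  # Kimi-K2-Instruct
```

You could extend the lookup with every row of the table, or swap in fine-tuned Medium/Small models for specific use cases as suggested above.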