Magistral Small 1.2 vs Magistral Medium 1.2
Comparing 2 AI models · 5 benchmarks · Mistral
Benchmark Winners
Magistral Small 1.2
No clear wins
Magistral Medium 1.2
- GPQA
- MMLU Pro
- HLE
- LiveCodeBench
- AIME 2025
| Metric | Magistral Small 1.2 | Magistral Medium 1.2 |
|---|---|---|
| Pricing (per 1M tokens) | | |
| Input Cost | $0.50/1M | $2.00/1M |
| Output Cost | $1.50/1M | $5.00/1M |
| Blended Cost (3:1 input/output ratio) | $0.75/1M | $2.75/1M |
| Specifications | | |
| Organization | Mistral | Mistral |
| Release Date | Sep 17, 2025 | Sep 18, 2025 |
| Performance & Speed | | |
| Throughput (output speed) | 194.0 tok/s | 95.0 tok/s |
| Time to First Token (TTFT) | 352 ms | 456 ms |
| Latency (time to first answer token) | 10,664 ms | 21,506 ms |
| Composite Indices | | |
| Intelligence Index (overall reasoning) | 43.0 | 52.0 |
| Coding Index (programming ability) | 37.2 | 42.3 |
| Math Index (mathematical reasoning) | 80.3 | 82.0 |
| Standard Benchmarks | | |
| GPQA (graduate-level reasoning) | 66.3% | 73.9% |
| MMLU Pro (advanced knowledge) | 76.8% | 81.5% |
| HLE (Humanity's Last Exam) | 6.1% | 9.6% |
| LiveCodeBench (real-world coding tasks) | 72.3% | 75.0% |
| MATH 500 (mathematical problems) | — | — |
| AIME 2025 (advanced math competition) | 80.3% | 82.0% |
| AIME (original olympiad problems) | — | — |
| SciCode (scientific code generation) | 35.2% | 39.2% |
| LCR (long-context reasoning) | 16.3% | 51.3% |
| IFBench (instruction following) | 44.4% | 43.0% |
| TAU-bench v2 (tool use & agentic tasks) | 27.8% | 52.0% |
| TerminalBench (hard CLI command generation) | 4.3% | 12.8% |
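The blended-cost and speed rows above follow directly from the per-token prices and throughput figures. A minimal sketch of that arithmetic (the function names and the 2,000-token example are illustrative, not part of the original comparison):

```python
def blended_cost(input_per_m: float, output_per_m: float,
                 input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted-average price per 1M tokens at the given input:output ratio."""
    total = input_ratio + output_ratio
    return (input_per_m * input_ratio + output_per_m * output_ratio) / total

# Reproduce the "Blended Cost (3:1)" row from the table's prices
small_blended = blended_cost(0.50, 1.50)   # -> 0.75 ($/1M)
medium_blended = blended_cost(2.00, 5.00)  # -> 2.75 ($/1M)

def generation_time_s(n_output_tokens: int, latency_ms: float,
                      throughput_tok_s: float) -> float:
    """Rough wall-clock estimate: latency to the first answer token,
    plus streaming time at the quoted throughput."""
    return latency_ms / 1000 + n_output_tokens / throughput_tok_s

# Example: 2,000 output tokens, using the table's latency/throughput values
t_small = generation_time_s(2000, 10664, 194.0)   # ~21.0 s
t_medium = generation_time_s(2000, 21506, 95.0)   # ~42.6 s
```

This also shows why the blended gap (about 3.7x) is smaller than the output-price gap (3.3x input vs. output mix matters), and why Small's higher throughput compounds with its lower latency for interactive use.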
Key Takeaways
Magistral Small 1.2 offers the best value at $0.50/1M input tokens ($0.75/1M blended), making it ideal for high-volume applications and cost-conscious projects.
Magistral Medium 1.2 leads in reasoning capabilities with a 73.9% GPQA score, excelling at complex analytical tasks and problem-solving.
Magistral Medium 1.2 achieves a 42.3 coding index, making it the top choice for software development and code generation tasks.
Both models support long context windows, suitable for processing lengthy documents and maintaining extended conversations.
When to Choose Each Model
Magistral Small 1.2
- Cost-sensitive applications
- High-volume processing
Magistral Medium 1.2
- Complex reasoning tasks
- Research & analysis
- Code generation
- Software development