MiniMax-M2 vs DeepSeek R1 0528 (May '25)
Comparing two AI models from MiniMax and DeepSeek across six benchmarks
- Most Affordable: MiniMax-M2, at $0.30 per 1M input tokens
- Highest Intelligence: DeepSeek R1 0528 (May '25), with 81.3% on GPQA
- Best for Coding: MiniMax-M2, with a Coding Index of 47.6
- Price Difference: 4.5x on input cost (derived in the sketch below)
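The 4.5x figure is simply the ratio of the two listed input prices. A minimal sketch of that arithmetic, with the per-1M-token prices hard-coded from the cards above (variable names are illustrative, not from any vendor API):

```python
# Input prices per 1M tokens, as listed in the cards above (USD).
minimax_m2_input = 0.30
deepseek_r1_input = 1.35

# "Price Difference" is the ratio of the more expensive input price to the cheaper one.
price_ratio = deepseek_r1_input / minimax_m2_input
print(f"Input price difference: {price_ratio:.1f}x")  # -> 4.5x
```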
The comparison covers composite indices (Intelligence, Coding, Math) and standard academic and industry benchmarks.
Benchmark Winners (6 tests)

MiniMax-M2: 2 wins
- LiveCodeBench
- AIME 2025

DeepSeek R1 0528 (May '25): 4 wins
- GPQA
- MMLU Pro
- HLE
- MATH 500
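The win counts above follow from the per-benchmark scores in the comparison table below: whichever model has the higher score takes the benchmark, and a missing score counts as a loss. A small illustrative sketch of that tally, with the scores transcribed from the table (not an official scoring script):

```python
# Benchmark scores transcribed from the comparison table below (higher is better).
# None marks a score that is not published for that model.
scores = {
    "GPQA":          {"MiniMax-M2": 77.7, "DeepSeek R1 0528": 81.3},
    "MMLU Pro":      {"MiniMax-M2": 82.0, "DeepSeek R1 0528": 84.9},
    "HLE":           {"MiniMax-M2": 12.5, "DeepSeek R1 0528": 14.9},
    "LiveCodeBench": {"MiniMax-M2": 82.6, "DeepSeek R1 0528": 77.0},
    "MATH 500":      {"MiniMax-M2": None, "DeepSeek R1 0528": 98.3},
    "AIME 2025":     {"MiniMax-M2": 78.3, "DeepSeek R1 0528": 76.0},
}

wins: dict[str, list[str]] = {"MiniMax-M2": [], "DeepSeek R1 0528": []}
for benchmark, by_model in scores.items():
    # A missing score is treated as minus infinity, i.e. an automatic loss.
    winner = max(by_model, key=lambda m: by_model[m] if by_model[m] is not None else float("-inf"))
    wins[winner].append(benchmark)

for model, benchmarks in wins.items():
    print(f"{model}: {len(benchmarks)} wins ({', '.join(benchmarks)})")
# MiniMax-M2: 2 wins (LiveCodeBench, AIME 2025)
# DeepSeek R1 0528: 4 wins (GPQA, MMLU Pro, HLE, MATH 500)
```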
| Metric | MiniMax-M2 | DeepSeek R1 0528 (May '25) |
|---|---|---|
| Pricing (per 1M tokens) | | |
| Input Cost | $0.30/1M | $1.35/1M |
| Output Cost | $1.20/1M | $4.00/1M |
| Blended Cost (3:1 input:output ratio; see sketch below) | $0.52/1M | $2.01/1M |
| Specifications | | |
| Organization (model creator) | MiniMax | DeepSeek |
| Release Date | Oct 26, 2025 | May 28, 2025 |
| Performance & Speed | | |
| Throughput (output speed) | 96.6 tok/s | — |
| Time to First Token (TTFT) | 1,543 ms | — |
| Latency (time to first answer token) | 22,253 ms | — |
| Composite Indices | | |
| Intelligence Index (overall reasoning) | 61.4 | 52.0 |
| Coding Index (programming ability) | 47.6 | 44.1 |
| Math Index (mathematical reasoning) | 78.3 | 76.0 |
| Standard Benchmarks | | |
| GPQA (graduate-level reasoning) | 77.7% | 81.3% |
| MMLU Pro (advanced knowledge) | 82.0% | 84.9% |
| HLE (Humanity's Last Exam) | 12.5% | 14.9% |
| LiveCodeBench (real-world coding tasks) | 82.6% | 77.0% |
| MATH 500 (mathematical problems) | — | 98.3% |
| AIME 2025 (advanced math competition) | 78.3% | 76.0% |
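The blended cost rows assume a 3:1 mix of input to output tokens, which amounts to a weighted average of the two per-token prices. A minimal sketch, assuming that simple weighted-average formula (the exact rounding used on this page may differ):

```python
def blended_cost(input_price: float, output_price: float,
                 input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted-average price per 1M tokens for a given input:output token mix."""
    return (input_price * input_weight + output_price * output_weight) / (input_weight + output_weight)

# Per-1M-token prices from the pricing rows above.
print(f"MiniMax-M2:       ${blended_cost(0.30, 1.20):.4f}")  # 0.5250 -> listed as $0.52/1M
print(f"DeepSeek R1 0528: ${blended_cost(1.35, 4.00):.4f}")  # 2.0125 -> listed as $2.01/1M
```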