o4-mini (high) vs GPT-5 Codex (high)
Comparing 2 AI models · 6 benchmarks · OpenAI
- Most Affordable: o4-mini (high) at $1.10/1M input
- Highest Intelligence: GPT-5 Codex (high) with 83.7% on GPQA
- Best for Coding: GPT-5 Codex (high) with a 53.5 Coding Index
- Price Difference: 1.1x on input cost (see the per-request cost sketch below)
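
To make the listed prices concrete, the sketch below applies the per-1M-token rates from the table to a hypothetical request of 1,000 input tokens and 500 output tokens; that workload size is an assumption for illustration only, not a figure from this comparison.

```python
# Per-request cost at the listed per-1M-token prices.
# The 1,000-input / 500-output token workload is a hypothetical example.

PRICES_PER_1M = {
    "o4-mini (high)":     {"input": 1.10, "output": 4.40},
    "GPT-5 Codex (high)": {"input": 1.25, "output": 10.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request, given prices quoted per 1M tokens."""
    p = PRICES_PER_1M[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

for model in PRICES_PER_1M:
    print(f"{model}: ${request_cost(model, 1_000, 500):.4f}")
# o4-mini (high): ~$0.0033
# GPT-5 Codex (high): ~$0.0063
```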
The comparison covers Composite Indices (Intelligence, Coding, Math) and Standard Benchmarks (academic and industry tests).
Benchmark Winners (6 tests; tally reproduced in the sketch below):

- o4-mini (high): 2 wins (LiveCodeBench, MATH 500)
- GPT-5 Codex (high): 4 wins (GPQA, MMLU Pro, HLE, AIME 2025)
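
The win counts above can be reproduced from the benchmark scores in the table below. Treating MATH 500, where only one score is reported, as a win for the model that reported it is my reading of how the page counts it, not something the page states.

```python
# Tally benchmark wins from the scores in the comparison table.
SCORES = {  # benchmark: (o4-mini (high), GPT-5 Codex (high)); None = not reported
    "GPQA":          (78.4, 83.7),
    "MMLU Pro":      (83.2, 86.5),
    "HLE":           (17.5, 25.6),
    "LiveCodeBench": (85.9, 84.0),
    "MATH 500":      (98.9, None),
    "AIME 2025":     (90.7, 98.7),
}

wins = {"o4-mini (high)": [], "GPT-5 Codex (high)": []}
for bench, (a, b) in SCORES.items():
    # A benchmark with only one reported score counts as a win for that model.
    if b is None or (a is not None and a > b):
        wins["o4-mini (high)"].append(bench)
    else:
        wins["GPT-5 Codex (high)"].append(bench)

for model, benches in wins.items():
    print(f"{model}: {len(benches)} wins ({', '.join(benches)})")
# o4-mini (high): 2 wins (LiveCodeBench, MATH 500)
# GPT-5 Codex (high): 4 wins (GPQA, MMLU Pro, HLE, AIME 2025)
```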
| Metric | o4-mini (high) | GPT-5 Codex (high) |
|---|---|---|
| **Pricing** (per 1M tokens) | | |
| Input Cost | $1.10/1M | $1.25/1M |
| Output Cost | $4.40/1M | $10.00/1M |
| Blended Cost (3:1 input:output ratio; see the sketch after the table) | $1.93/1M | $3.44/1M |
| **Specifications** | | |
| Organization (model creator) | OpenAI | OpenAI |
| Release Date | Apr 16, 2025 | Sep 23, 2025 |
| **Performance & Speed** | | |
| Throughput (output speed) | 100.2 tok/s | 230.7 tok/s |
| Time to First Token (TTFT) | 21,658 ms | 16,550 ms |
| **Composite Indices** | | |
| Intelligence Index (overall reasoning capability) | 59.6 | 68.5 |
| Coding Index (programming ability) | 48.9 | 53.5 |
| Math Index (mathematical reasoning) | 90.7 | 98.7 |
| **Standard Benchmarks** | | |
| GPQA (graduate-level reasoning) | 78.4% | 83.7% |
| MMLU Pro (advanced knowledge) | 83.2% | 86.5% |
| HLE (Humanity's Last Exam) | 17.5% | 25.6% |
| LiveCodeBench (real-world coding tasks) | 85.9% | 84.0% |
| MATH 500 (mathematical problems) | 98.9% | — |
| AIME 2025 (advanced math competition) | 90.7% | 98.7% |
| AIME (original; math olympiad problems) | 94.0% | — |
| SciCode (scientific code generation) | 46.5% | 40.9% |
| LCR (long-context reasoning) | 55.0% | 69.0% |
| IFBench (instruction following) | — | — |
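
The Blended Cost row appears to be a 3:1 weighted average of the input and output prices, going by its own label; the sketch below reproduces the listed figures under that reading. The function name is mine.

```python
# Blended per-1M-token cost as a 3:1 input:output weighted average,
# the ratio stated in the table's Blended Cost row.

def blended_cost(input_price: float, output_price: float) -> float:
    """3:1 input:output blend of per-1M-token prices (USD)."""
    return (3 * input_price + 1 * output_price) / 4

print(blended_cost(1.10, 4.40))   # ~1.925  -> listed as $1.93/1M for o4-mini (high)
print(blended_cost(1.25, 10.00))  # 3.4375  -> listed as $3.44/1M for GPT-5 Codex (high)
```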