GPT-4.1 vs o3
Comparing 2 AI models · 6 benchmarks · OpenAI
- **Most Affordable:** GPT-4.1 ($2.00/1M input)
- **Highest Intelligence:** o3 (82.7% GPQA)
- **Best for Coding:** o3 (52.2 Coding Index)
- **Price Difference:** 1.0x (input cost range)

Benchmarks fall into two groups: **Composite Indices** (Intelligence, Coding, Math) and **Standard Benchmarks** (academic and industry tests).
Benchmark Winners (6 tests):

- **GPT-4.1:** 0 (no clear wins)
- **o3:** 6 (GPQA, MMLU Pro, HLE, LiveCodeBench, MATH 500, AIME 2025)
| Metric | GPT-4.1 | o3 |
|---|---|---|
| **Pricing (per 1M tokens)** | | |
| Input Cost | $2.00 | $2.00 |
| Output Cost | $8.00 | $8.00 |
| Blended Cost (3:1 input/output ratio) | $3.50 | $3.50 |
| **Specifications** | | |
| Organization (model creator) | OpenAI | OpenAI |
| Release Date | Apr 14, 2025 | Apr 16, 2025 |
| **Performance & Speed** | | |
| Throughput (output speed) | 85.4 tok/s | 244.5 tok/s |
| Time to First Token (TTFT) | 561 ms | 13,759 ms |
| Latency (time to first answer token) | 561 ms | 13,759 ms |
| **Composite Indices** | | |
| Intelligence Index (overall reasoning) | 43.4 | 65.5 |
| Coding Index (programming ability) | 32.2 | 52.2 |
| Math Index (mathematical reasoning) | 34.7 | 88.3 |
| **Standard Benchmarks** | | |
| GPQA (graduate-level reasoning) | 66.6% | 82.7% |
| MMLU Pro (advanced knowledge) | 80.6% | 85.3% |
| HLE (Humanity's Last Exam) | 4.6% | 20.0% |
| LiveCodeBench (real-world coding tasks) | 45.7% | 80.8% |
| MATH 500 (mathematical problems) | 91.3% | 99.2% |
| AIME 2025 (advanced math competition) | 34.7% | 88.3% |
| AIME (Original) (math olympiad problems) | 43.7% | |
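The blended cost in the table is a weighted average at a 3:1 input-to-output token ratio. A minimal sketch of that arithmetic (the function name and prices are illustrative; prices are taken from the table above):

```python
def blended_cost(input_per_m: float, output_per_m: float,
                 input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted-average price per 1M tokens for a given input:output ratio."""
    total = input_ratio + output_ratio
    return (input_per_m * input_ratio + output_per_m * output_ratio) / total

# Both models list $2.00/1M input and $8.00/1M output:
print(blended_cost(2.00, 8.00))  # (3 * 2.00 + 1 * 8.00) / 4 = 3.50
```

Since both models share the same input and output prices here, the blended figure is identical; the ratio only matters when comparing models with different pricing.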