o3-pro vs o1-preview
Comparing 2 AI models · 2 benchmarks · OpenAI
- Most Affordable: o1-preview ($16.50/1M input)
- Highest Intelligence: o3-pro (84.5% GPQA)
- Best for Coding: o1-preview (34.0 Coding Index)
- Price Difference: 1.2x (input cost)
The comparison covers composite indices (Intelligence, Coding, Math) and standard academic and industry benchmarks.

Benchmark winners (2 scored tests):
- o3-pro: 1 (GPQA)
- o1-preview: 1 (MATH 500)
| Metric | o3-pro | o1-preview |
|---|---|---|
| **Pricing (per 1M tokens)** | | |
| Input Cost | $20.00/1M | $16.50/1M |
| Output Cost | $80.00/1M | $66.00/1M |
| Blended Cost (3:1 input/output ratio) | $35.00/1M | $28.88/1M |
| **Specifications** | | |
| Organization (model creator) | OpenAI | OpenAI |
| Release Date | Jun 10, 2025 | Sep 12, 2024 |
| **Performance & Speed** | | |
| Throughput (output speed) | 56.6 tok/s | — |
| Time to First Token (TTFT) | 35,168 ms | — |
| Latency (time to first answer token) | 35,168 ms | — |
| **Composite Indices** | | |
| Intelligence Index (overall reasoning capability) | 65.3 | 44.9 |
| Coding Index (programming ability) | — | 34.0 |
| Math Index (mathematical reasoning) | — | — |
| **Standard Benchmarks** | | |
| GPQA (graduate-level reasoning) | 84.5% | — |
| MMLU Pro (advanced knowledge) | — | — |
| HLE (Humanity's Last Exam) | — | — |
| LiveCodeBench (real-world coding tasks) | — | — |
| MATH 500 (mathematical problems) | — | 92.4% |
| AIME 2025 (advanced math competition) | — | — |
| AIME (original, math olympiad problems) | — | — |
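The blended-cost and price-difference figures above follow from the listed input and output prices. A minimal sketch of the arithmetic, assuming the stated 3:1 input-to-output token ratio (function name and structure are illustrative, not from the source):

```python
def blended_cost(input_per_m: float, output_per_m: float, ratio: float = 3.0) -> float:
    """Weighted average of per-1M-token prices at a given input:output ratio."""
    return (ratio * input_per_m + output_per_m) / (ratio + 1)

# o3-pro: $20.00 input, $80.00 output per 1M tokens
print(round(blended_cost(20.00, 80.00), 2))   # 35.0

# o1-preview: $16.50 input, $66.00 output per 1M tokens
print(round(blended_cost(16.50, 66.00), 2))   # 28.88

# "Price Difference: 1.2x" is the ratio of input costs
print(round(20.00 / 16.50, 1))                # 1.2
```

With a 3:1 weighting, (3 × $20.00 + $80.00) / 4 = $35.00 and (3 × $16.50 + $66.00) / 4 = $28.88, matching the table.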