Claude 4 Opus (Reasoning) vs o3-pro
Comparing 2 AI models · 6 benchmarks · Anthropic, OpenAI
- Most Affordable: Claude 4 Opus (Reasoning), $15.00/1M input
- Highest Intelligence: o3-pro, 84.5% GPQA
- Best for Coding: Claude 4 Opus (Reasoning), 44.2 Coding Index
- Price Difference: 1.3x (input cost range; see the sketch below)
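As a quick sanity check on the headline figures, here is a minimal Python sketch (illustrative only; prices copied from the table below) that reproduces the 1.3x input-cost gap and the blended prices, assuming "blended" means a weighted average at a 3:1 input:output token ratio:

```python
# Per-1M-token prices from the comparison table (USD).
PRICES = {
    "Claude 4 Opus (Reasoning)": {"input": 15.00, "output": 75.00},
    "o3-pro": {"input": 20.00, "output": 80.00},
}

def blended_price(input_price: float, output_price: float, ratio: float = 3.0) -> float:
    """Weighted average price assuming `ratio` input tokens per output token."""
    return (ratio * input_price + output_price) / (ratio + 1)

for name, p in PRICES.items():
    print(f"{name}: blended ${blended_price(p['input'], p['output']):.2f}/1M")
    # Claude 4 Opus (Reasoning): blended $30.00/1M
    # o3-pro: blended $35.00/1M

# "Price Difference: 1.3x" compares the two input prices.
ratio = PRICES["o3-pro"]["input"] / PRICES["Claude 4 Opus (Reasoning)"]["input"]
print(f"Input cost ratio: {ratio:.2f}x")  # 1.33x, rounded to 1.3x in the summary card
```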
The metrics below cover composite indices (Intelligence, Coding, Math) and standard academic and industry benchmarks.
Benchmark Winners (6 tests)

Claude 4 Opus (Reasoning): 5 wins
- MMLU Pro
- HLE
- LiveCodeBench
- MATH 500
- AIME 2025

o3-pro: 1 win
- GPQA
| Metric | Claude 4 Opus (Reasoning) | o3-pro |
|---|---|---|
| Pricing (per 1M tokens) | | |
| Input Cost | $15.00/1M | $20.00/1M |
| Output Cost | $75.00/1M | $80.00/1M |
| Blended Cost (3:1 input/output ratio) | $30.00/1M | $35.00/1M |
| Specifications | | |
| Organization (model creator) | Anthropic | OpenAI |
| Release Date | May 22, 2025 | Jun 10, 2025 |
| Performance & Speed (see the estimate below the table) | | |
| Throughput (output speed) | 40.5 tok/s | 55.3 tok/s |
| Time to First Token (TTFT, initial response delay) | 1,229 ms | 36,035 ms |
| Latency (time to first answer token) | 50,630 ms | 36,035 ms |
| Composite Indices | | |
| Intelligence Index (overall reasoning capability) | 54.2 | 65.3 |
| Coding Index (programming ability) | 44.2 | — |
| Math Index (mathematical reasoning) | 73.3 | — |
| Standard Benchmarks | | |
| GPQA (graduate-level reasoning) | 79.6% | 84.5% |
| MMLU Pro (advanced knowledge) | 87.3% | — |
| HLE (Humanity's Last Exam) | 11.7% | — |
| LiveCodeBench (real-world coding tasks) | 63.6% | — |
| MATH 500 (mathematical problems) | 98.2% | — |
| AIME 2025 (advanced math competition) | 73.3% | — |
| AIME Original (math olympiad problems) | 75.7% | — |
| SciCode (scientific code generation) | 39.8% | — |
| LCR (long-context reasoning) | 33.7% | — |
| IFBench (instruction-following) | 53.7% | — |
| TAU-bench v2 (tool use & agentic tasks) | 70.5% | — |
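The speed rows can be turned into a rough end-to-end estimate. This is an illustrative model only, not from the source: total answer time is approximated as latency to the first answer token plus output tokens divided by throughput.

```python
# Illustrative end-to-end estimate from the table's speed metrics.
# Assumed model (not from the source): total ≈ latency + output_tokens / throughput.
SPEED = {
    "Claude 4 Opus (Reasoning)": {"latency_s": 50.630, "tok_per_s": 40.5},
    "o3-pro": {"latency_s": 36.035, "tok_per_s": 55.3},
}

def answer_time(latency_s: float, tok_per_s: float, output_tokens: int) -> float:
    """Seconds until the full answer arrives, ignoring network and queuing."""
    return latency_s + output_tokens / tok_per_s

for name, s in SPEED.items():
    t = answer_time(s["latency_s"], s["tok_per_s"], output_tokens=500)
    print(f"{name}: ~{t:.0f} s for a 500-token answer")
# Claude 4 Opus (Reasoning): ~63 s
# o3-pro: ~45 s
```

On this rough model, o3-pro's higher throughput and lower latency let it deliver the full answer sooner, even though Claude 4 Opus streams its first token much earlier.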