Claude 4.1 Opus (Reasoning) vs Claude 4 Opus (Reasoning)
Comparing 2 AI models · 6 benchmarks · Anthropic
- Most Affordable: Claude 4.1 Opus (Reasoning), $15.00/1M input
- Highest Intelligence: Claude 4.1 Opus (Reasoning), 80.9% on GPQA
- Best for Coding: Claude 4.1 Opus (Reasoning), 46.1 Coding Index
- Price Difference: 1.0x (ratio of input costs)
The comparison covers composite indices (Intelligence, Coding, Math) alongside standard academic and industry benchmarks.
Benchmark Winners (6 tests)
- Claude 4.1 Opus (Reasoning): 5 wins (GPQA, MMLU Pro, HLE, LiveCodeBench, AIME 2025)
- Claude 4 Opus (Reasoning): 1 win (MATH 500)
| Metric | Claude 4.1 Opus (Reasoning) | Claude 4 Opus (Reasoning) |
|---|---|---|
| **Pricing (per 1M tokens)** | | |
| Input Cost | $15.00/1M | $15.00/1M |
| Output Cost | $75.00/1M | $75.00/1M |
| Blended Cost (3:1 input/output ratio; see note below) | $30.00/1M | $30.00/1M |
| **Specifications** | | |
| Organization (model creator) | Anthropic | Anthropic |
| Release Date | Aug 5, 2025 | May 22, 2025 |
| **Performance & Speed** | | |
| Throughput (output speed) | 42.4 tok/s | 40.5 tok/s |
| Time to First Token (TTFT) | 1,449 ms | 1,229 ms |
| Latency (time to first answer token) | 48,641 ms | 50,630 ms |
| **Composite Indices** | | |
| Intelligence Index (overall reasoning capability) | 59.3 | 54.2 |
| Coding Index (programming ability) | 46.1 | 44.2 |
| Math Index (mathematical reasoning) | | |
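
Note: the blended cost row assumes a 3:1 mix of input to output tokens, i.e. a token-weighted average of the listed per-1M prices. A minimal sketch of that arithmetic in Python (the function name and defaults are illustrative, not part of any vendor SDK):

```python
def blended_cost_per_1m(input_price: float, output_price: float,
                        input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted average price per 1M tokens for a given input:output token mix."""
    total_parts = input_ratio + output_ratio
    return (input_ratio * input_price + output_ratio * output_price) / total_parts

# Both Opus variants: $15.00/1M input, $75.00/1M output
print(blended_cost_per_1m(15.00, 75.00))  # (3*15 + 1*75) / 4 = 30.0
```

With both models priced at $15.00/1M input and $75.00/1M output, the weighted average works out to $30.00/1M for either model, matching the table.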