GPT-5.1 Codex (high) vs Gemini 2.5 Pro
Comparing 2 AI models · 6 benchmarks · OpenAI, Google
Most Affordable: GPT-5.1 Codex (high), $1.25/1M input
Highest Intelligence: GPT-5.1 Codex (high), 86.0% GPQA
Best for Coding: GPT-5.1 Codex (high), 35.1 Coding Index
Price Difference: 1.0x (input cost range)
Composite Indices: Intelligence, Coding, Math
Standard Benchmarks: academic and industry benchmarks
Benchmark Winners (6 tests)

GPT-5.1 Codex (high): 4 wins
- GPQA
- HLE
- LiveCodeBench
- AIME 2025

Gemini 2.5 Pro: 2 wins
- MMLU Pro
- MATH 500
| Metric | GPT-5.1 Codex (high) | Gemini 2.5 Pro |
|---|---|---|
| Pricing (per 1M tokens) | | |
| Input Cost | $1.25/1M | $1.25/1M |
| Output Cost | $10.00/1M | $10.00/1M |
| Blended Cost (3:1 input/output ratio) | $3.44/1M | $3.44/1M |
| Specifications | | |
| Organization (model creator) | OpenAI | Google |
| Release Date | Nov 13, 2025 | Jun 5, 2025 |
| Performance & Speed | | |
| Throughput (output speed) | 190.0 tok/s | 154.7 tok/s |
| Time to First Token (TTFT) | 15,165 ms | 35,671 ms |
| Composite Indices | | |
| Intelligence Index (overall reasoning) | 41.5 | 34.1 |
| Coding Index (programming ability) | 35.1 | 30.8 |
| Math Index (mathematical reasoning) | 95.7 | 87.7 |
| Standard Benchmarks | | |
| GPQA (graduate-level reasoning) | 86.0% | 84.4% |
| MMLU Pro (advanced knowledge) | 86.0% | 86.2% |
| HLE (Humanity's Last Exam) | 23.4% | 21.1% |
| LiveCodeBench (real-world coding tasks) | 84.9% | |
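The blended cost row follows from the listed input/output prices. A minimal sketch, assuming the 3:1 ratio means a weighted average of three input tokens per output token, i.e. blended = (3 × input + output) / 4 per 1M tokens (the function name is illustrative, not from the source):

```python
def blended_cost(input_per_m: float, output_per_m: float, ratio: float = 3.0) -> float:
    """Blended $/1M tokens, assuming `ratio` input tokens per output token."""
    return (ratio * input_per_m + output_per_m) / (ratio + 1)

# Both models list $1.25/1M input and $10.00/1M output:
print(f"${blended_cost(1.25, 10.00):.2f}/1M")  # $3.44/1M
```

With identical per-token prices on both models, the blended figures match and the price difference is 1.0x, consistent with the summary above.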