# gpt-oss-20B (high) vs Gemini 3 Pro Preview (high)

Comparing 2 AI models across 5 benchmarks · OpenAI vs Google
- **Most Affordable:** gpt-oss-20B (high), OpenAI, at $0.07/1M input tokens
- **Highest Intelligence:** Gemini 3 Pro Preview (high), Google, with 90.8% on GPQA
- **Best for Coding:** Gemini 3 Pro Preview (high), Google, with a 62.3 Coding Index
- **Price Difference:** 28.6x range in input cost
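The 28.6x price difference is simply the ratio of the two models' input costs; a minimal sketch of the arithmetic, using the per-1M-token prices listed in the table below:

```python
# Input cost per 1M tokens, taken from the comparison table.
GPT_OSS_INPUT = 0.07   # gpt-oss-20B (high), $/1M tokens
GEMINI_INPUT = 2.00    # Gemini 3 Pro Preview (high), $/1M tokens

ratio = GEMINI_INPUT / GPT_OSS_INPUT
print(f"{ratio:.1f}x")  # → 28.6x
```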
## Benchmark Winners (5 tests)

- **gpt-oss-20B (high):** 0 wins
- **Gemini 3 Pro Preview (high):** 5 wins: GPQA, MMLU Pro, HLE, LiveCodeBench, AIME 2025
| Metric | gpt-oss-20B (high) | Gemini 3 Pro Preview (high) |
|---|---|---|
| **Pricing (per 1M tokens)** | | |
| Input Cost | $0.07/1M | $2.00/1M |
| Output Cost | $0.20/1M | $12.00/1M |
| Blended Cost (3:1 input:output ratio) | $0.10/1M | $4.50/1M |
| **Specifications** | | |
| Organization | OpenAI | Google |
| Release Date | Aug 5, 2025 | Nov 18, 2025 |
| **Performance & Speed** | | |
| Throughput (output speed) | 236.2 tok/s | 138.9 tok/s |
| Time to First Token (TTFT) | 586 ms | 26,684 ms |
| Latency (time to first answer token) | 9,054 ms | 26,684 ms |
| **Composite Indices** | | |
| Intelligence Index (overall reasoning) | 52.4 | 72.8 |
| Coding Index (programming ability) | 40.7 | 62.3 |
| Math Index (mathematical reasoning) | 89.3 | 95.7 |
| **Standard Benchmarks** | | |
| GPQA (graduate-level reasoning) | 68.8% | 90.8% |
| MMLU Pro (advanced knowledge) | 74.8% | 89.8% |
| HLE (Humanity's Last Exam) | 9.8% | 37.2% |
| LiveCodeBench (real-world coding tasks) | 77.7% | 91.7% |
| MATH 500 (mathematical problems) | — | — |
| AIME 2025 (math competition) | 89.3% | 95.7% |
| AIME (original; math olympiad problems) | — | — |
| SciCode (scientific code generation) | 34.4% | 56.1% |
| LCR (long-context reasoning) | 34.3% | — |
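The blended costs in the table assume a 3:1 input:output token mix; a small sketch showing how those figures follow from the per-token prices (the helper name `blended_cost` is illustrative, not from the source):

```python
def blended_cost(input_price: float, output_price: float,
                 input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted-average price per 1M tokens for a given input:output token mix."""
    total = input_ratio + output_ratio
    return (input_ratio * input_price + output_ratio * output_price) / total

# Prices per 1M tokens from the table above.
print(f"${blended_cost(0.07, 0.20):.2f}/1M")   # gpt-oss-20B (high) → $0.10/1M
print(f"${blended_cost(2.00, 12.00):.2f}/1M")  # Gemini 3 Pro Preview → $4.50/1M
```

At this ratio the weighted average leans toward input price, which is why gpt-oss-20B's blended figure ($0.10) sits much closer to its input cost than to its output cost.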