GLM-4.5 (Reasoning) vs GPT-5.2 (xhigh)
Comparing 2 AI models · 6 benchmarks · Z AI, OpenAI
- **Most Affordable:** GLM-4.5 (Reasoning) — $0.60/1M input tokens
- **Highest Intelligence:** GPT-5.2 (xhigh) — 90.3% on GPQA
- **Best for Coding:** GPT-5.2 (xhigh) — 46.7 Coding Index
- **Price Difference:** 2.9x (input cost)
Benchmark Winners (6 tests)

- GLM-4.5 (Reasoning): 1 win — MATH 500
- GPT-5.2 (xhigh): 5 wins — GPQA, MMLU Pro, HLE, LiveCodeBench, AIME 2025
| Metric | GLM-4.5 (Reasoning) | GPT-5.2 (xhigh) |
|---|---|---|
| **Pricing (per 1M tokens)** | | |
| Input cost | $0.60 | $1.75 |
| Output cost | $2.20 | $14.00 |
| Blended cost (3:1 input/output ratio) | $1.00 | $4.81 |
| **Specifications** | | |
| Organization | Z AI | OpenAI |
| Release date | Jul 28, 2025 | Dec 11, 2025 |
| **Performance & Speed** | | |
| Throughput (output speed) | 60.1 tok/s | 112.9 tok/s |
| Time to first token (TTFT) | 536 ms | 42,184 ms |
| Latency (time to first answer token) | 33,800 ms | 42,184 ms |
| **Composite Indices** | | |
| Intelligence Index (overall reasoning) | 26.5 | 50.5 |
| Coding Index (programming ability) | 25.8 | 46.7 |
| Math Index (mathematical reasoning) | 73.7 | 99.0 |
| **Standard Benchmarks** | | |
| GPQA (graduate-level reasoning) | 78.2% | 90.3% |
| MMLU Pro (advanced knowledge) | 83.5% | 87.4% |
| HLE (Humanity's Last Exam) | 12.2% | 35.4% |
| LiveCodeBench (real-world coding) | 73.8% | |
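The blended figures in the pricing rows appear to be a weighted average at the stated 3:1 input/output ratio. A minimal sketch of that arithmetic (the function name is illustrative, not from any published API):

```python
def blended_cost(input_per_m: float, output_per_m: float, ratio: int = 3) -> float:
    """Weighted average cost per 1M tokens, weighting input `ratio`:1 over output."""
    return (ratio * input_per_m + output_per_m) / (ratio + 1)

# Values from the table above:
glm = blended_cost(0.60, 2.20)    # GLM-4.5 (Reasoning)
gpt = blended_cost(1.75, 14.00)   # GPT-5.2 (xhigh)

print(f"GLM-4.5 blended: ${glm:.2f}/1M")   # $1.00/1M
print(f"GPT-5.2 blended: ${gpt:.2f}/1M")   # $4.81/1M
```

The same ratio of input costs ($1.75 / $0.60 ≈ 2.9) reproduces the "2.9x price difference" headline figure.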