Claude 4.5 Sonnet (Reasoning) vs Grok 4
Comparing 2 AI models · 3 composite indices & 12 standard benchmarks · Anthropic, xAI
Composite Indices (Intelligence, Coding, Math) · Standard Benchmarks (academic and industry benchmarks)
Benchmark Winners
Claude 4.5 Sonnet (Reasoning)
- MMLU Pro
Grok 4
- GPQA
- HLE
- LiveCodeBench
- MATH 500
- AIME 2025
| Metric | Anthropic Claude 4.5 Sonnet (Reasoning) | xAI Grok 4 |
|---|---|---|
| **Pricing (per 1M tokens)** | | |
| Input Cost | $3.00/1M | $3.00/1M |
| Output Cost | $15.00/1M | $15.00/1M |
| Blended Cost (3:1 input:output ratio; see the worked example after the table) | $6.00/1M | $6.00/1M |
| **Specifications** | | |
| Organization (model creator) | Anthropic | xAI |
| Release Date | Sep 29, 2025 | Jul 10, 2025 |
| **Performance & Speed** | | |
| Throughput (output speed) | 72.5 tok/s | 37.2 tok/s |
| Time to First Token (TTFT) | 2,014 ms | 9,172 ms |
| Latency (time to first answer token; see the estimate after the table) | 29,615 ms | 9,172 ms |
| **Composite Indices** | | |
| Intelligence Index (overall reasoning capability) | 62.7 | 65.3 |
| Coding Index (programming ability) | 49.8 | 55.1 |
| Math Index (mathematical reasoning) | 88.0 | 92.7 |
| **Standard Benchmarks** | | |
| GPQA (graduate-level reasoning) | 83.4% | 87.7% |
| MMLU Pro (advanced knowledge) | 87.5% | 86.6% |
| HLE (Humanity's Last Exam) | 17.3% | 23.9% |
| LiveCodeBench (real-world coding tasks) | 71.4% | 81.9% |
| MATH 500 (mathematical problems) | — | 99.0% |
| AIME 2025 (advanced math competition) | 88.0% | 92.7% |
| AIME, original (math olympiad problems) | — | 94.3% |
| SciCode (scientific code generation) | 44.7% | 45.7% |
| LCR (long-context reasoning) | 65.7% | 68.0% |
| IFBench (instruction following) | 57.3% | 53.7% |
| TAU-bench v2 (tool use & agentic tasks) | 78.1% | 74.9% |
| Terminal-Bench Hard (agentic CLI tasks) | 33.3% | 37.6% |
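The blended cost row follows directly from the per-token prices. Below is a minimal Python sketch of that weighted-average calculation, assuming the stated 3:1 input:output token mix; the function name and defaults are illustrative, not part of any vendor SDK.

```python
def blended_cost(input_price: float, output_price: float,
                 input_parts: int = 3, output_parts: int = 1) -> float:
    """Weighted-average price per 1M tokens for a given input:output mix."""
    return (input_price * input_parts + output_price * output_parts) / (input_parts + output_parts)

# Both models: $3.00/1M input, $15.00/1M output at a 3:1 mix
print(blended_cost(3.00, 15.00))  # 6.0 -> matches the $6.00/1M blended figure
```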
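The throughput and latency rows can be combined into a rough end-to-end response-time estimate: time to the first answer token plus generation time for the output. The sketch below uses the table's figures with an assumed 1,000-token answer; it treats throughput as constant and ignores network variance, so the results are illustrative only.

```python
def estimated_response_seconds(latency_ms: float, throughput_tok_s: float,
                               output_tokens: int) -> float:
    """Rough wall-clock time: latency to first answer token + steady-state generation."""
    return latency_ms / 1000 + output_tokens / throughput_tok_s

# Illustrative 1,000-token answer, using the latency and throughput rows above
claude = estimated_response_seconds(29_615, 72.5, 1_000)  # ~43.4 s
grok = estimated_response_seconds(9_172, 37.2, 1_000)     # ~36.1 s
print(f"Claude 4.5 Sonnet (Reasoning): ~{claude:.0f}s | Grok 4: ~{grok:.0f}s")
```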
Key Takeaways
Claude 4.5 Sonnet (Reasoning) and Grok 4 are priced identically at $3.00/1M input and $15.00/1M output ($6.00/1M blended), so price alone does not separate them; Claude's higher throughput (72.5 vs 37.2 tok/s) and much faster time to first token make it the better fit for high-volume, latency-sensitive workloads.
Grok 4 leads in reasoning capability with an 87.7% GPQA score and a higher Intelligence Index (65.3 vs 62.7), excelling at complex analytical tasks and problem-solving.
Grok 4 posts the higher Coding Index (55.1 vs 49.8) and LiveCodeBench score (81.9% vs 71.4%), making it the stronger pick for raw code generation, although Claude leads on instruction following (IFBench) and agentic tool use (TAU-bench v2).
Both models support large context windows, suitable for processing lengthy documents and maintaining extended conversations.
When to Choose Each Model
Claude 4.5 Sonnet (Reasoning)
- Latency-sensitive, high-throughput applications
- High-volume processing
- Instruction following & agentic tool use (IFBench, TAU-bench v2)
Grok 4
- Complex reasoning tasks
- Research & analysis
- Code generation
- Software development