Ling-1T vs Ring-1T
Comparing 2 AI models · 5 benchmarks · InclusionAI
Most Affordable
Ling-1T · $0.57/1M input
Highest Intelligence
Ling-1T · 71.9% GPQA
Best for Coding
Ling-1T · 37.6 Coding Index
Price Difference
1.0x · identical input pricing
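Both models carry identical list prices, which is why the difference card reads 1.0x. The blended figure in the table below applies the stated 3:1 input:output weighting; a minimal sketch of the arithmetic in Python, using the prices from this page:

```python
# Blended price per 1M tokens at a 3:1 input:output token ratio,
# using the list prices shown on this page (identical for both models).
input_cost = 0.57   # $ per 1M input tokens
output_cost = 2.28  # $ per 1M output tokens

# Weighted average: three parts input to one part output.
blended = (3 * input_cost + 1 * output_cost) / 4
print(f"Blended: ${blended:.4f}/1M")  # $0.9975, displayed rounded as $1.00/1M

# Input-price ratio between the two models.
print(f"Price difference: {input_cost / input_cost:.1f}x")  # 1.0x
```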
Scores fall into two groups: Composite Indices (Intelligence, Coding, Math) and Standard Benchmarks (academic and industry tests).
Benchmark Winners · 5 tests

Ling-1T: 3 wins
- GPQA
- MMLU Pro
- LiveCodeBench

Ring-1T: 2 wins
- HLE
- AIME 2025
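To make the tally concrete, here is a small Python sketch that reproduces the 3-to-2 split for the five headline tests; the score literals are copied from the table below, and higher is better on each:

```python
# Per-benchmark winners across the five headline tests,
# using the scores from the comparison table (higher is better).
scores = {
    # benchmark: (Ling-1T, Ring-1T)
    "GPQA":          (71.9, 59.5),
    "MMLU Pro":      (82.2, 80.6),
    "HLE":           (7.2, 10.2),
    "LiveCodeBench": (67.7, 64.3),
    "AIME 2025":     (71.3, 89.3),
}

wins = {"Ling-1T": [], "Ring-1T": []}
for bench, (ling, ring) in scores.items():
    winner = "Ling-1T" if ling > ring else "Ring-1T"
    wins[winner].append(bench)

for model, benches in wins.items():
    print(f"{model}: {len(benches)} wins ({', '.join(benches)})")
# Ling-1T: 3 wins (GPQA, MMLU Pro, LiveCodeBench)
# Ring-1T: 2 wins (HLE, AIME 2025)
```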
| Metric | Ling-1T | Ring-1T |
|---|---|---|
| **Pricing (per 1M tokens)** | | |
| Input Cost | $0.57/1M | $0.57/1M |
| Output Cost | $2.28/1M | $2.28/1M |
| Blended Cost (3:1 input:output) | $1.00/1M | $1.00/1M |
| **Specifications** | | |
| Organization (model creator) | InclusionAI | InclusionAI |
| Release Date | Oct 8, 2025 | Oct 13, 2025 |
| **Performance & Speed** | | |
| Throughput (output speed) | — | — |
| Time to First Token (TTFT) | — | — |
| Latency (time to first answer token) | — | — |
| **Composite Indices** | | |
| Intelligence Index (overall reasoning) | 44.8 | 41.8 |
| Coding Index (programming ability) | 37.6 | 35.8 |
| Math Index (mathematical reasoning) | 71.3 | 89.3 |
| **Standard Benchmarks** | | |
| GPQA (graduate-level reasoning) | 71.9% | 59.5% |
| MMLU Pro (advanced knowledge) | 82.2% | 80.6% |
| HLE (Humanity's Last Exam) | 7.2% | 10.2% |
| LiveCodeBench (real-world coding tasks) | 67.7% | 64.3% |
| MATH 500 (mathematical problems) | — | — |
| AIME 2025 (advanced math competition) | 71.3% | 89.3% |
| AIME Original (math olympiad problems) | — | — |
| SciCode (scientific code generation) | 35.2% | 36.7% |
| LCR (long-context reasoning) | 34.7% | 0.0% |
| IFBench (instruction following) | 34.8% | 44.6% |
| TAU-bench v2 (tool use and agentic tasks) | 32.7% | 26.3% |
| TerminalBench (hard CLI command generation) | 9.9% | 6.4% |
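This page does not state how the composite indices are computed. One suggestive detail: each model's Math Index equals its AIME 2025 score, which is what you would expect if the index averaged only the math benchmarks with reported scores (MATH 500 is unreported here). A minimal sketch under that assumption; the plain-mean formula and the component grouping are guesses, not a published methodology:

```python
# Hypothetical composite index: unweighted mean over whichever
# component benchmarks have reported scores. The grouping and the
# plain-mean formula are assumptions for illustration only.
def composite_index(components: dict[str, float | None]) -> float:
    reported = [score for score in components.values() if score is not None]
    return sum(reported) / len(reported)

# Math Index: with MATH 500 unreported (None), the mean collapses
# to the single AIME 2025 score, matching the table's Math Index row.
print(composite_index({"MATH 500": None, "AIME 2025": 89.3}))  # 89.3 (Ring-1T)
print(composite_index({"MATH 500": None, "AIME 2025": 71.3}))  # 71.3 (Ling-1T)
```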