AI Model Ranking (LLM Leaderboard)
Most Intelligent AI Models
Language models ranked by overall intelligence across reasoning benchmarks
Column legend:
- Model: AI model name and provider organization
- Input/1M: cost per 1 million input tokens (the text you send to the model)
- Output/1M: cost per 1 million output tokens (the text the model generates for you)
- MMLU-Pro: Massive Multitask Language Understanding (Professional) - tests broad knowledge across 14 subjects including STEM, humanities, and social sciences
- GPQA: Graduate-level Google-Proof Q&A benchmark - tests PhD-level reasoning and advanced intelligence
- AIME 2025: American Invitational Mathematics Examination 2025 - tests advanced mathematical problem-solving ability
- Release: when the model was released - newer models may have more capabilities

| Model | Input/1M | Output/1M | MMLU-Pro | GPQA | AIME 2025 | Release | Compare |
|---|---|---|---|---|---|---|---|
| #1 Gemini 3 Pro Preview (high) by Google | $2.00 | $12.00 | 89.8% | 90.8% | 95.7% | Nov 18, 2025 | |
| #2 Claude Opus 4.5 (Reasoning) by Anthropic | $5.00 | $25.00 | 89.5% | 86.6% | 91.3% | Nov 24, 2025 | |
| #3 GPT-5.1 (high) by OpenAI | $1.25 | $10.00 | 87.0% | 87.3% | 94.0% | Nov 13, 2025 | |
| #4 GPT-5 (high) by OpenAI | $1.25 | $10.00 | 87.1% | 85.4% | 94.3% | Aug 7, 2025 | |
| #5 GPT-5 Codex (high) by OpenAI | $1.25 | $10.00 | 86.5% | 83.7% | 98.7% | Sep 23, 2025 | |
| #6 Kimi K2 Thinking by Moonshot AI | $0.60 | $2.50 | 84.8% | 83.8% | 94.7% | Nov 6, 2025 | |
| #7 GPT-5 (medium) by OpenAI | $1.25 | $10.00 | 86.7% | 84.2% | 91.7% | Aug 7, 2025 | |
| #8 o3 by OpenAI | $2.00 | $8.00 | 85.3% | 82.7% | 88.3% | Apr 16, 2025 | |
| #9 Grok 4 by xAI | $3.00 | $15.00 | 86.6% | 87.7% | 92.7% | Jul 10, 2025 | |
| #10 o3-pro by OpenAI | $20.00 | $80.00 | - | 84.5% | - | Jun 10, 2025 | |
| #11 GPT-5 mini (high) by OpenAI | $0.25 | $2.00 | 83.7% | 82.8% | 90.7% | Aug 7, 2025 | |
| #12 Grok 4.1 Fast (Reasoning) by xAI | $0.20 | $0.50 | 85.4% | 85.3% | 89.3% | Nov 19, 2025 | |
| #13 Claude 4.5 Sonnet (Reasoning) by Anthropic | $3.00 | $15.00 | 87.5% | 83.4% | 88.0% | Sep 29, 2025 | |
| #14 GPT-5 (low) by OpenAI | $1.25 | $10.00 | 86.0% | 80.8% | 83.0% | Aug 7, 2025 | |
| #15 MiniMax-M2 by MiniMax | $0.30 | $1.20 | 82.0% | 77.7% | 78.3% | Oct 26, 2025 | |
| #16 GPT-5 mini (medium) by OpenAI | $0.25 | $2.00 | 82.8% | 80.3% | 85.0% | Aug 7, 2025 | |
| #17 gpt-oss-120B (high) by OpenAI | $0.15 | $0.60 | 80.8% | 78.2% | 93.4% | Aug 5, 2025 | |
| #18 Grok 4 Fast (Reasoning) by xAI | $0.20 | $0.50 | 85.0% | 84.7% | 89.7% | Sep 19, 2025 | |
| #19 Claude Opus 4.5 (Non-reasoning) by Anthropic | $5.00 | $25.00 | 88.9% | 81.0% | 62.7% | Nov 24, 2025 | |
| #20 Gemini 2.5 Pro by Google | $1.25 | $10.00 | 86.2% | 84.4% | 87.7% | Jun 5, 2025 | |
| #21 o4-mini (high) by OpenAI | $1.10 | $4.40 | 83.2% | 78.4% | 90.7% | Apr 16, 2025 | |
| #22 Claude 4.1 Opus (Reasoning) by Anthropic | $15.00 | $75.00 | 88.0% | 80.9% | 80.3% | Aug 5, 2025 | |
| #23 DeepSeek V3.1 Terminus (Reasoning) by DeepSeek | $0.40 | $2.00 | 85.1% | 79.2% | 89.7% | Sep 22, 2025 | |
| #24 Qwen3 235B A22B 2507 (Reasoning) by Alibaba | $0.70 | $8.40 | 84.3% | 79.0% | 91.0% | Jul 25, 2025 | |
| #25 Grok 3 mini Reasoning (high) by xAI | $0.30 | $0.50 | 82.8% | 79.1% | 84.7% | Feb 19, 2025 | |
| #26 Doubao Seed Code by ByteDance Seed | $0.17 | $1.12 | 85.4% | 76.4% | 79.3% | Nov 11, 2025 | |
| #27 DeepSeek V3.2 Exp (Reasoning) by DeepSeek | $0.28 | $0.42 | 85.0% | 79.7% | 87.7% | Sep 29, 2025 | |
| #28 Claude 4 Sonnet (Reasoning) by Anthropic | $3.00 | $15.00 | 84.2% | 77.7% | 74.3% | May 22, 2025 | |
| #29 GLM-4.6 (Reasoning) by Z AI | $0.60 | $2.20 | 82.9% | 78.0% | 86.0% | Sep 30, 2025 | |
| #30 Qwen3 Max Thinking by Alibaba | $1.20 | $6.00 | 82.4% | 77.6% | 82.3% | Nov 3, 2025 | |
| #31 Qwen3 Max by Alibaba | $1.20 | $6.00 | 84.1% | 76.4% | 80.7% | Sep 23, 2025 | |
| #32 Claude 4.5 Haiku (Reasoning) by Anthropic | $1.00 | $5.00 | 76.0% | 67.2% | 83.7% | Oct 15, 2025 | |
| #33 Gemini 2.5 Flash Preview (Sep '25) (Reasoning) by Google | $0.30 | $2.50 | 84.2% | 79.3% | 78.3% | Sep 25, 2025 | |
| #34 Qwen3 VL 235B A22B (Reasoning) by Alibaba | $0.70 | $8.40 | 83.6% | 77.2% | 88.3% | Sep 23, 2025 | |
| #35 Qwen3 Next 80B A3B (Reasoning) by Alibaba | $0.50 | $6.00 | 82.4% | 75.9% | 84.3% | Sep 11, 2025 | |
| #36 Claude 4 Opus (Reasoning) by Anthropic | $15.00 | $75.00 | 87.3% | 79.6% | 73.3% | May 22, 2025 | |
| #37 Gemini 2.5 Pro Preview (Mar '25) by Google | $1.25 | $10.00 | 85.8% | 83.6% | - | Mar 25, 2025 | |
| #38 DeepSeek V3.1 (Reasoning) by DeepSeek | $0.42 | $1.34 | 85.1% | 77.9% | 89.7% | Aug 21, 2025 | |
| #39 Gemini 2.5 Pro Preview (May '25) by Google | $1.25 | $10.00 | 83.7% | 82.2% | - | May 6, 2025 | |
| #40 gpt-oss-20B (high) by OpenAI | $0.07 | $0.20 | 74.8% | 68.8% | 89.3% | Aug 5, 2025 | |
| #41 Magistral Medium 1.2 by Mistral | $2.00 | $5.00 | 81.5% | 73.9% | 82.0% | Sep 18, 2025 | |
| #42 DeepSeek R1 0528 (May '25) by DeepSeek | $1.35 | $4.00 | 84.9% | 81.3% | 76.0% | May 28, 2025 | |
| #43 Qwen3 VL 32B (Reasoning) by Alibaba | $0.70 | $8.40 | 81.8% | 73.3% | 84.7% | Oct 21, 2025 | |
| #44 Apriel-v1.5-15B-Thinker by ServiceNow | N/A | N/A | 77.3% | 71.3% | 87.5% | Sep 30, 2025 | |
| #45 Seed-OSS-36B-Instruct by ByteDance Seed | $0.21 | $0.57 | 81.5% | 72.6% | 84.7% | Aug 20, 2025 | |
| #46 GLM-4.5 (Reasoning) by Z AI | $0.57 | $2.19 | 83.5% | 78.2% | 73.7% | Jul 28, 2025 | |
| #47 Gemini 2.5 Flash (Reasoning) by Google | $0.30 | $2.50 | 83.2% | 79.0% | 73.3% | May 20, 2025 | |
| #48 GPT-5 nano (high) by OpenAI | $0.05 | $0.40 | 78.0% | 67.6% | 83.7% | Aug 7, 2025 | |
| #49 o3-mini (high) by OpenAI | $1.10 | $4.40 | 80.2% | 77.3% | - | Jan 31, 2025 | |
| #50 Kimi K2 0905 by Moonshot AI | $0.99 | $2.50 | 81.9% | 76.7% | 57.3% | Sep 5, 2025 | |
| #51 Claude 3.7 Sonnet (Reasoning) by Anthropic | $3.00 | $15.00 | 83.7% | 77.2% | 56.3% | Feb 24, 2025 | |
| #52 Claude 4.5 Sonnet (Non-reasoning) by Anthropic | $3.00 | $15.00 | 86.0% | 72.7% | 37.0% | Sep 29, 2025 | |
| #53 GPT-5 nano (medium) by OpenAI | $0.05 | $0.40 | 77.2% | 67.0% | 78.3% | Aug 7, 2025 | |
| #54 GLM-4.5-Air by Z AI | $0.20 | $1.10 | 81.5% | 73.3% | 80.7% | Jul 28, 2025 | |
| #55 Grok Code Fast 1 by xAI | $0.20 | $1.50 | 79.3% | 72.7% | 43.3% | Aug 28, 2025 | |
| #56 Qwen3 Max (Preview) by Alibaba | $1.20 | $6.00 | 83.8% | 76.4% | 75.0% | Sep 5, 2025 | |
| #57 o3-mini by OpenAI | $1.10 | $4.40 | 79.1% | 74.8% | - | Jan 31, 2025 | |
| #58 Kimi K2 by Moonshot AI | $0.60 | $2.50 | 82.4% | 76.6% | 57.0% | Jul 11, 2025 | |
| #59 o1-pro by OpenAI | $150.00 | $600.00 | - | - | - | Mar 19, 2025 | |
| #60 Gemini 2.5 Flash-Lite Preview (Sep '25) (Reasoning) by Google | $0.10 | $0.40 | 80.8% | 70.9% | 68.7% | Sep 8, 2025 | |
| #61 gpt-oss-120B (low) by OpenAI | $0.15 | $0.59 | 77.5% | 67.2% | 66.7% | Aug 5, 2025 | |
| #62 o1 by OpenAI | $15.00 | $60.00 | 84.1% | 74.7% | - | Dec 5, 2024 | |
| #63 Gemini 2.5 Flash Preview (Sep '25) (Non-reasoning) by Google | $0.30 | $2.50 | 83.6% | 76.6% | 56.7% | Sep 25, 2025 | |
| #64 Qwen3 30B A3B 2507 (Reasoning) by Alibaba | $0.20 | $2.40 | 80.5% | 70.7% | 56.3% | Jul 30, 2025 | |
| #65 DeepSeek V3.2 Exp (Non-reasoning) by DeepSeek | $0.28 | $0.42 | 83.6% | 73.8% | 57.7% | Sep 29, 2025 | |
| #66 Sonar Reasoning Pro by Perplexity | N/A | N/A | - | - | - | Jan 28, 2025 | |
| #67 MiniMax M1 80k by MiniMax | $0.40 | $2.10 | 81.6% | 69.7% | 61.0% | Jun 17, 2025 | |
| #68 Gemini 2.5 Flash Preview (Reasoning) by Google | N/A | N/A | 80.0% | 69.8% | - | Apr 17, 2025 | |
| #69 DeepSeek V3.1 Terminus (Non-reasoning) by DeepSeek | $0.40 | $1.68 | 83.6% | 75.1% | 53.7% | Sep 22, 2025 | |
| #70 Qwen3 VL 30B A3B (Reasoning) by Alibaba | $0.20 | $2.40 | 80.7% | 72.0% | 82.3% | Oct 3, 2025 | |
| #71 Qwen3 235B A22B 2507 Instruct by Alibaba | $0.70 | $2.80 | 82.8% | 75.3% | 71.7% | Jul 21, 2025 | |
| #72 Grok 3 by xAI | $3.00 | $15.00 | 79.9% | 69.3% | 58.0% | Feb 19, 2025 | |
| #73 Llama Nemotron Super 49B v1.5 (Reasoning) by NVIDIA | $0.10 | $0.40 | 81.4% | 74.8% | 76.7% | Jul 25, 2025 | |
| #74 o1-preview by OpenAI | $16.50 | $66.00 | - | - | - | Sep 12, 2024 | |
| #75 Qwen3 Next 80B A3B Instruct by Alibaba | $0.50 | $2.00 | 81.9% | 73.8% | 66.3% | Sep 11, 2025 | |
| #76 Ling-1T by InclusionAI | $0.57 | $2.28 | 82.2% | 71.9% | 71.3% | Oct 8, 2025 | |
| #77 DeepSeek V3.1 (Non-reasoning) by DeepSeek | $0.56 | $1.66 | 83.3% | 73.5% | 49.7% | Aug 21, 2025 | |
| #78 GLM-4.6 (Non-reasoning) by Z AI | $0.60 | $2.20 | 78.4% | 63.2% | 44.3% | Sep 30, 2025 | |
| #79 Claude 4.1 Opus (Non-reasoning) by Anthropic | $15.00 | $75.00 | - | - | - | Aug 5, 2025 | |
| #80 Claude 4 Sonnet (Non-reasoning) by Anthropic | $3.00 | $15.00 | 83.7% | 68.3% | 38.0% | May 22, 2025 | |
| #81 gpt-oss-20B (low) by OpenAI | $0.07 | $0.20 | 71.8% | 61.1% | 62.3% | Aug 5, 2025 | |
| #82 Qwen3 VL 235B A22B Instruct by Alibaba | $0.70 | $2.80 | 82.3% | 71.2% | 70.7% | Sep 23, 2025 | |
| #83 DeepSeek R1 (Jan '25) by DeepSeek | $1.35 | $4.00 | 84.4% | 70.8% | 68.0% | Jan 20, 2025 | |
| #84 GPT-5 (minimal) by OpenAI | $1.25 | $10.00 | 80.6% | 67.3% | 31.7% | Aug 7, 2025 | |
| #85 Qwen3 4B 2507 (Reasoning) by Alibaba | N/A | N/A | 74.3% | 66.7% | 82.7% | Aug 6, 2025 | |
| #86 GPT-4.1 by OpenAI | $2.00 | $8.00 | 80.6% | 66.6% | 34.7% | Apr 14, 2025 | |
| #87 Magistral Small 1.2 by Mistral | $0.50 | $1.50 | 76.8% | 66.3% | 80.3% | Sep 17, 2025 | |
| #88 GPT-5.1 (Non-reasoning) by OpenAI | $1.25 | $10.00 | 80.1% | 64.3% | 38.0% | Nov 13, 2025 | |
| #89 EXAONE 4.0 32B (Reasoning) by LG AI Research | $0.60 | $1.00 | 81.8% | 73.9% | 80.0% | Jul 15, 2025 | |
| #90 GPT-4.1 mini by OpenAI | $0.40 | $1.60 | 78.1% | 66.4% | 46.3% | Apr 14, 2025 | |
| #91 Qwen3 Coder 480B A35B Instruct by Alibaba | $1.50 | $7.50 | 78.8% | 61.8% | 39.3% | Jul 22, 2025 | |
| #92 Claude 4 Opus (Non-reasoning) by Anthropic | $15.00 | $75.00 | 86.0% | 70.1% | 36.3% | May 22, 2025 | |
| #93 GPT-5 (ChatGPT) by OpenAI | $1.25 | $10.00 | 82.0% | 68.6% | 48.3% | Aug 7, 2025 | |
| #94 Ring-1T by InclusionAI | $0.57 | $2.28 | 80.6% | 59.5% | 89.3% | Oct 13, 2025 | |
| #95 Claude 4.5 Haiku (Non-reasoning) by Anthropic | $1.00 | $5.00 | 80.0% | 64.6% | 39.0% | Oct 15, 2025 | |
| #96 Qwen3 235B A22B (Reasoning) by Alibaba | $0.70 | $8.40 | 82.8% | 70.0% | 82.0% | Apr 28, 2025 | |
| #97 GPT-5 mini (minimal) by OpenAI | $0.25 | $2.00 | 77.5% | 68.7% | 46.7% | Aug 7, 2025 | |
| #98 Gemini 2.5 Flash-Lite Preview (Sep '25) (Non-reasoning) by Google | $0.10 | $0.40 | 79.6% | 65.1% | 46.7% | Sep 25, 2025 | |
| #99 Hermes 4 - Llama-3.1 405B (Reasoning) by Nous Research | $1.00 | $3.00 | 82.9% | 72.7% | 69.7% | Aug 27, 2025 | |
| #100 Grok 3 Reasoning Beta by xAI | N/A | N/A | - | - | - | Feb 19, 2025 | |
Understanding the AI Model Leaderboard
This comprehensive AI model leaderboard helps you compare and choose the best large language models (LLMs) for your needs. We track standardized AI benchmarks, token pricing, inference speed, and model capabilities across all major AI providers, including OpenAI, Anthropic, Google, Meta, and DeepSeek.
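For programmatic comparison, each table row maps naturally onto a small record. The Python sketch below is illustrative only - the ModelRow dataclass and its field names are our own naming, not an API of this site - and the three sample rows copy values from the table above.

```python
from dataclasses import dataclass

@dataclass
class ModelRow:
    name: str
    provider: str
    input_per_m: float   # USD per 1M input tokens
    output_per_m: float  # USD per 1M output tokens
    mmlu_pro: float      # percent
    gpqa: float          # percent
    aime_2025: float     # percent

# Sample rows copied from the leaderboard table above.
rows = [
    ModelRow("Gemini 3 Pro Preview (high)", "Google", 2.00, 12.00, 89.8, 90.8, 95.7),
    ModelRow("Claude Opus 4.5 (Reasoning)", "Anthropic", 5.00, 25.00, 89.5, 86.6, 91.3),
    ModelRow("GPT-5.1 (high)", "OpenAI", 1.25, 10.00, 87.0, 87.3, 94.0),
]

# Example: re-sort by GPQA score, highest first.
for row in sorted(rows, key=lambda r: r.gpqa, reverse=True):
    print(f"{row.name:30s} GPQA {row.gpqa:.1f}%")
```

Swapping the sort key (price, MMLU-Pro, release date) reproduces the other leaderboard views.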
Core AI Benchmarks Explained
- MMLU-Pro: Tests broad knowledge across 14 academic subjects including STEM, humanities, and social sciences - the foundational intelligence benchmark
- GPQA: Graduate-level Google-Proof Q&A benchmark - measures PhD-level reasoning and advanced problem-solving capabilities
- AIME 2025: American Invitational Mathematics Examination - evaluates elite mathematical reasoning and competition-level problem solving
- Coding Index: Composite score of LiveCodeBench, SciCode, and other coding benchmarks - measures programming ability
- Math Index: Composite score of AIME, MATH-500, and other mathematical reasoning tests
Key Metrics to Consider
- Token Pricing: Compare input vs output token costs per million - crucial for estimating API expenses and optimizing usage patterns (see the cost sketch after this list)
- Inference Speed: Measured in tokens/second - determines response time for chatbots, streaming, and real-time applications
- Release Date: Newer models often incorporate latest training techniques and updated knowledge cutoffs
- Benchmark Scores: Percentage scores (0-100%) make it easy to compare model capabilities at a glance
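As a worked example of the token-pricing arithmetic above, here is a minimal sketch; request_cost is a hypothetical helper and the token counts are invented, but the prices match the GPT-5 (high) row in the table.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """Cost in USD for one request, given per-1M-token prices."""
    return ((input_tokens / 1_000_000) * input_per_m
            + (output_tokens / 1_000_000) * output_per_m)

# GPT-5 (high): $1.25 input / $10.00 output per 1M tokens (from the table).
# The token counts are invented for illustration.
cost = request_cost(input_tokens=3_000, output_tokens=1_000,
                    input_per_m=1.25, output_per_m=10.00)
print(f"${cost:.5f} per request")  # $0.01375
# At 100,000 such requests per month, that is roughly $1,375/month.
```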
How to Choose the Right AI Model for Your Use Case
For Research & Analysis
Prioritize models with high MMLU-Pro (70%+) and GPQA (60%+) scores for complex reasoning tasks, academic research, and technical documentation
For Cost Optimization
Sort by input/output pricing - for simple tasks, smaller models often deliver roughly 80% of flagship performance at about 10% of the cost
For Math & STEM
Filter by Math Index or AIME 2025 scores (50%+) for quantitative analysis, engineering calculations, and scientific applications
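Putting the three sets of cutoffs above together (MMLU-Pro 70%+, GPQA 60%+, AIME 2025 50%+), a shortlist is a simple filter-and-sort. The sketch below is illustrative; the tuples copy (name, input $/1M, MMLU-Pro, GPQA, AIME 2025) values from the table.

```python
# Shortlist models that clear all three benchmark cutoffs, then sort
# by input price so the cheapest qualifying options come first.
models = [
    ("GPT-5 mini (high)",      0.25, 83.7, 82.8, 90.7),
    ("Kimi K2 Thinking",       0.60, 84.8, 83.8, 94.7),
    ("gpt-oss-120B (high)",    0.15, 80.8, 78.2, 93.4),
    ("GPT-5 nano (high)",      0.05, 78.0, 67.6, 83.7),
    ("Claude 4.5 Sonnet (NR)", 3.00, 86.0, 72.7, 37.0),  # fails the AIME cutoff
]

shortlist = [m for m in models if m[2] >= 70 and m[3] >= 60 and m[4] >= 50]
for name, price, *_ in sorted(shortlist, key=lambda m: m[1]):
    print(f"{name:25s} ${price:.2f}/1M input")
```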
All benchmark scores and pricing data are updated daily from Artificial Analysis to reflect the latest model versions and capabilities. Use the sort filters above to find AI models by intelligence, cost, coding ability, math performance, speed, or release date.
Frequently Asked Questions
What is MMLU-Pro and why is it the standard AI intelligence benchmark?
MMLU-Pro (Massive Multitask Language Understanding - Professional) is the most comprehensive AI benchmark, testing models across 14 academic subjects including mathematics, science, history, law, and ethics. Scores range from around 46% for weaker models (basic competency) to nearly 90% for the current leaders (near-expert level). Models scoring above 75% demonstrate strong general intelligence suitable for professional applications, while scores below 60% indicate limitations in complex reasoning tasks.
What does GPQA measure and which models score highest?
GPQA (Graduate-level Google-Proof Q&A) tests PhD-level reasoning with questions designed to be "Google-proof" - requiring deep understanding rather than simple fact retrieval. Top models like Gemini 3 Pro Preview (90.8%), Grok 4 (87.7%), and GPT-5.1 (87.3%) excel at GPQA, making them ideal for research, technical analysis, and complex problem-solving. Models below 50% GPQA struggle with advanced reasoning and may provide superficial answers to complex questions.
What is AIME 2025 and how does it evaluate AI mathematical ability?
AIME 2025 (American Invitational Mathematics Examination) is an elite math competition benchmark that tests advanced problem-solving, algebra, geometry, and number theory. Scores above 80% (like GPT-5 Codex at 98.7% or GPT-5.1 at 94%) indicate exceptional mathematical reasoning suitable for engineering, scientific computing, and quantitative analysis. Models scoring below 50% may struggle with multi-step mathematical problems or require explicit problem breakdown.
How is AI model pricing calculated and what's considered cost-effective?
AI model pricing is measured per 1 million tokens (approximately 750,000 words). Input pricing covers text you send, while output pricing covers generated responses. Budget models like GPT-5 nano cost $0.05/$0.40 per million tokens, mid-priced open models like Llama 3.3 70B cost $0.54/$0.71, while premium models like GPT-5 cost $1.25/$10.00. For typical applications with a 3:1 input-to-output ratio, budget models can be 10-20x cheaper than flagship models while maintaining 70-80% performance.
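To make that 3:1 ratio concrete: a blended per-million price is 0.75 × the input price plus 0.25 × the output price. The sketch below applies this to three rows from the table; blended_per_m is a hypothetical helper name.

```python
# Blended $/1M tokens at a 3:1 input-to-output ratio:
# three input tokens for every output token.
def blended_per_m(input_per_m: float, output_per_m: float) -> float:
    return 0.75 * input_per_m + 0.25 * output_per_m

# Prices copied from the leaderboard table above.
for name, inp, outp in [("GPT-5 nano (high)", 0.05, 0.40),
                        ("GPT-5 (high)",      1.25, 10.00),
                        ("o3-pro",            20.00, 80.00)]:
    print(f"{name:18s} ${blended_per_m(inp, outp):.4f} per 1M blended tokens")
# GPT-5 nano: $0.1375; GPT-5: $3.4375; o3-pro: $35.0000.
```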
Which AI models are best for coding and programming tasks?
Sort by Coding Index to see top programming models. Our Coding Index combines LiveCodeBench, SciCode, and other coding benchmarks. Top performers include GPT-5.1 (57.5 index), GPT-5 Codex (53.5), and GPT-5 mini (51.4). These models excel at code generation, debugging, refactoring, and explaining complex algorithms. For budget-conscious developers, models with 40+ coding index scores offer excellent value for routine programming tasks.
How often are AI model benchmarks and rankings updated?
Our leaderboard syncs daily with the Artificial Analysis API to ensure benchmark scores (MMLU-Pro, GPQA, AIME 2025), pricing, and inference speed data reflect the latest model versions. New model releases appear immediately under the "Newest" sort option. Benchmark scores can change when providers release updated versions - for example, GPT-5.1, released in November 2025, achieved an intelligence index of 69.7, compared to 68.5 for the GPT-5 release from August 2025.
What inference speed (tokens/second) do I need for my application?
Inference speed determines how fast models generate responses. For real-time chatbots and interactive applications, target 100+ tokens/second (models like gpt-oss-120B at 340 tok/s). For background processing and batch jobs, 50-100 tok/s is sufficient. Premium reasoning models like GPT-5 (103 tok/s) balance speed and capability. Note that higher inference speed doesn't always mean better quality - slower models often deliver more thoughtful, detailed responses.
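As a rough back-of-envelope, generation time is output length divided by throughput. The sketch below uses the speeds quoted above and deliberately ignores network overhead and time-to-first-token, so real-world latency will be somewhat higher.

```python
# Estimate how long a model needs to generate a response of a given
# length, from its measured throughput in tokens/second.
def generation_seconds(output_tokens: int, tokens_per_second: float) -> float:
    return output_tokens / tokens_per_second

# A 500-token answer at the speeds quoted above:
for name, tps in [("gpt-oss-120B", 340), ("GPT-5 (high)", 103)]:
    print(f"{name}: {generation_seconds(500, tps):.1f}s for 500 tokens")
# gpt-oss-120B: ~1.5s; GPT-5 (high): ~4.9s.
```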
Can I test these AI models for free before committing?
Yes! Try our free AI chat interface to test different models instantly without creating an account. Many providers also offer free tiers: OpenAI (ChatGPT with daily limits), Anthropic (Claude with usage caps), Google (Gemini free tier), and open-source models like Llama 3.3. Compare performance on your specific use case before upgrading to paid plans.