AI Model Ranking (LLM Leaderboard)
Most Intelligent AI Models
Language models ranked by Artificial Analysis Index
Column key: Intelligence is the Artificial Analysis Intelligence Index, a composite reasoning and capability score across the benchmark suite. Speed is inference throughput in tokens per second - how fast the model generates responses. Context is the maximum context window size - how much text, code, or conversation the model can process at once. Price is the cost per 1 million tokens, shown as input (text you send) / output (text the model generates). Release is when the model was released - newer models may have more capabilities.

| Model | Intelligence | Speed | Context | Price (in / out) | Release | Compare |
|---|---|---|---|---|---|---|
| #1 GPT-5.5 (xhigh) by OpenAI | 60.2 | 62 tok/s | 1.1M | $5.00 / $30.00 | Apr 23, 2026 | |
| #2 GPT-5.5 (high) by OpenAI | 58.9 | 58 tok/s | 1.1M | $5.00 / $30.00 | Apr 23, 2026 | |
| #3 Claude Opus 4.7 (Adaptive Reasoning, Max Effort) by Anthropic | 57.3 | 51 tok/s | 1.0M | $5.00 / $25.00 | Apr 16, 2026 | |
| #4 Gemini 3.1 Pro Preview by Google | 57.2 | 127 tok/s | 1.0M | $2.00 / $12.00 | Feb 19, 2026 | |
| #5 GPT-5.4 (xhigh) by OpenAI | 56.8 | 93 tok/s | 1.1M | $2.50 / $15.00 | Mar 5, 2026 | |
| #6 GPT-5.5 (medium) by OpenAI | 56.7 | 57 tok/s | 1.1M | $5.00 / $30.00 | Apr 23, 2026 | |
| #7 Kimi K2.6 by MoonshotAI | 53.9 | 25 tok/s | 262K | $0.95 / $4.00 | Apr 20, 2026 | |
| #8 MiMo-V2.5-Pro by Xiaomi | 53.8 | 59 tok/s | 1.0M | $1.00 / $3.00 | Apr 22, 2026 | |
| #9 GPT-5.3 Codex (xhigh) by OpenAI | 53.6 | 86 tok/s | 400K | $1.75 / $14.00 | Feb 5, 2026 | |
| #10 Claude Opus 4.6 (Adaptive Reasoning, Max Effort) by Anthropic | 53.0 | 49 tok/s | N/A | $5.00 / $25.00 | Feb 5, 2026 | |
| #11 Muse Spark by Meta | 52.1 | N/A | N/A | N/A / N/A | Apr 8, 2026 | |
| #12 Claude Opus 4.7 (Non-reasoning, High Effort) by Anthropic | 51.8 | 43 tok/s | 1.0M | $5.00 / $25.00 | Apr 16, 2026 | |
| #13 Qwen3.6 Max Preview by Alibaba | 51.8 | 33 tok/s | 262K | $1.30 / $7.80 | Apr 20, 2026 | |
| #14 Claude Sonnet 4.6 (Adaptive Reasoning, Max Effort) by Anthropic | 51.7 | 67 tok/s | 1.0M | $3.00 / $15.00 | Feb 17, 2026 | |
| #15 V4 Pro (Reasoning, Max Effort) by DeepSeek | 51.5 | 34 tok/s | 1.0M | $1.74 / $3.48 | Apr 24, 2026 | |
| #16 GLM-5.1 (Reasoning) by Z AI | 51.4 | 45 tok/s | 203K | $1.40 / $4.40 | Apr 7, 2026 | |
| #17 GPT-5.2 (xhigh) by OpenAI | 51.3 | 71 tok/s | 400K | $1.75 / $14.00 | Dec 11, 2025 | |
| #18 GPT-5.5 (low) by OpenAI | 50.8 | 55 tok/s | 1.1M | $5.00 / $30.00 | Apr 23, 2026 | |
| #19 Qwen3.6 Plus by Alibaba | 50.0 | 53 tok/s | 1.0M | $0.50 / $3.00 | Apr 2, 2026 | |
| #20 V4 Pro (Reasoning, High Effort) by DeepSeek | 49.8 | 33 tok/s | 1.0M | $1.74 / $3.48 | Apr 24, 2026 | |
| #21 GLM-5 (Reasoning) by Z AI | 49.8 | 65 tok/s | 203K | $1.00 / $3.20 | Feb 11, 2026 | |
| #22 Claude Opus 4.5 (Reasoning) by Anthropic | 49.7 | 57 tok/s | 200K | $5.00 / $25.00 | Nov 24, 2025 | |
| #23 M2.7 by MiniMax | 49.6 | 45 tok/s | 197K | $0.30 / $1.20 | Mar 18, 2026 | |
| #24 Grok 4.20 0309 v2 (Reasoning) by xAI | 49.3 | 88 tok/s | N/A | $2.00 / $6.00 | Apr 7, 2026 | |
| #25 MiMo-V2-Pro by Xiaomi | 49.2 | N/A | 1.0M | N/A / N/A | Mar 18, 2026 | |
| #26 MiMo-V2.5 by Xiaomi | 49.0 | N/A | 1.0M | N/A / N/A | N/A | |
| #27 GPT-5.2 Codex (xhigh) by OpenAI | 49.0 | 88 tok/s | 400K | $1.75 / $14.00 | Dec 11, 2025 | |
| #28 GPT-5.4 mini (xhigh) by OpenAI | 48.9 | 164 tok/s | 400K | $0.75 / $4.50 | Mar 17, 2026 | |
| #29 Grok 4.20 0309 (Reasoning) by xAI | 48.5 | 85 tok/s | N/A | $2.00 / $6.00 | Mar 10, 2026 | |
| #30 Gemini 3 Pro Preview (high) by Google | 48.4 | 123 tok/s | N/A | $2.00 / $12.00 | Nov 18, 2025 | |
| #31 GPT-5.4 (low) by OpenAI | 47.9 | 59 tok/s | 1.1M | $2.50 / $15.00 | Mar 5, 2026 | |
| #32 GPT-5.1 (high) by OpenAI | 47.7 | 131 tok/s | 400K | $1.25 / $10.00 | Nov 13, 2025 | |
| #33 GLM-5-Turbo by Z AI | 46.8 | N/A | 203K | N/A / N/A | Mar 15, 2026 | |
| #34 Kimi K2.5 (Reasoning) by MoonshotAI | 46.8 | 31 tok/s | 262K | $0.60 / $3.00 | Jan 27, 2026 | |
| #35 GPT-5.2 (medium) by OpenAI | 46.6 | N/A | 400K | $1.75 / $14.00 | Dec 11, 2025 | |
| #36 V4 Flash (Reasoning, Max Effort) by DeepSeek | 46.5 | 79 tok/s | 1.0M | $0.14 / $0.28 | Apr 24, 2026 | |
| #37 Claude Opus 4.6 (Non-reasoning, High Effort) by Anthropic | 46.5 | 41 tok/s | 1.0M | $5.00 / $25.00 | Feb 5, 2026 | |
| #38 Gemini 3 Flash Preview (Reasoning) by Google | 46.4 | 184 tok/s | 1.0M | $0.50 / $3.00 | Dec 17, 2025 | |
| #39 Qwen3.6 27B (Reasoning) by Alibaba | 45.8 | 63 tok/s | 256K | $0.60 / $3.60 | Apr 22, 2026 | |
| #40 Qwen3.5 397B A17B (Reasoning) by Alibaba | 45.0 | 50 tok/s | 262K | $0.60 / $3.60 | Feb 16, 2026 | |
| #41 V4 Flash (Reasoning, High Effort) by DeepSeek | 44.9 | N/A | 1.0M | $0.14 / $0.28 | Apr 24, 2026 | |
| #42 MiMo-V2-Omni-0327 by Xiaomi | 44.9 | N/A | N/A | N/A / N/A | Mar 27, 2026 | |
| #43 GPT-5 (high) by OpenAI | 44.6 | 82 tok/s | 400K | $1.25 / $10.00 | Aug 7, 2025 | |
| #44 GPT-5 Codex (high) by OpenAI | 44.6 | 165 tok/s | 400K | $1.25 / $10.00 | Sep 23, 2025 | |
| #45 Claude Sonnet 4.6 (Non-reasoning, High Effort) by Anthropic | 44.4 | 47 tok/s | 1.0M | $3.00 / $15.00 | Feb 17, 2026 | |
| #46 GPT-5.4 nano (xhigh) by OpenAI | 44.0 | 160 tok/s | 400K | $0.20 / $1.25 | Mar 17, 2026 | |
| #47 KAT Coder Pro V2 by KwaiKAT | 43.8 | 111 tok/s | 256K | $0.30 / $1.20 | Mar 27, 2026 | |
| #48 GLM-5.1 (Non-reasoning) by Z AI | 43.8 | 41 tok/s | 203K | $1.40 / $4.40 | Apr 7, 2026 | |
| #49 Qwen3.6 35B A3B (Reasoning) by Alibaba | 43.5 | 188 tok/s | 262K | $0.25 / $1.49 | Apr 16, 2026 | |
| #50 MiMo-V2-Omni by Xiaomi | 43.4 | N/A | 262K | N/A / N/A | Mar 19, 2026 | |
| #51 GPT-5.1 Codex (high) by OpenAI | 43.1 | 171 tok/s | 400K | $1.25 / $10.00 | Nov 13, 2025 | |
| #52 Claude Opus 4.5 (Non-reasoning) by Anthropic | 43.1 | 50 tok/s | 200K | $5.00 / $25.00 | Nov 24, 2025 | |
| #53 Kimi K2.6 (Non-reasoning) by MoonshotAI | 43.0 | N/A | 262K | N/A / N/A | Apr 20, 2026 | |
| #54 Claude 4.5 Sonnet (Reasoning) by Anthropic | 43.0 | 44 tok/s | N/A | $3.00 / $15.00 | Sep 29, 2025 | |
| #55 GLM 5V Turbo (Reasoning) by Z AI | 42.9 | N/A | 203K | N/A / N/A | Apr 1, 2026 | |
| #56 Claude Sonnet 4.6 (Non-reasoning, Low Effort) by Anthropic | 42.6 | 49 tok/s | 1.0M | $3.00 / $15.00 | Feb 17, 2026 | |
| #57 GLM-4.7 (Reasoning) by Z AI | 42.1 | 103 tok/s | 203K | $0.60 / $2.20 | Dec 22, 2025 | |
| #58 Qwen3.5 27B (Reasoning) by Alibaba | 42.1 | 87 tok/s | 262K | $0.30 / $2.40 | Feb 24, 2026 | |
| #59 GPT-5 (medium) by OpenAI | 42.0 | 83 tok/s | 400K | $1.25 / $10.00 | Aug 7, 2025 | |
| #60 Claude 4.1 Opus (Reasoning) by Anthropic | 42.0 | 36 tok/s | N/A | $15.00 / $75.00 | Aug 5, 2025 | |
| #61 Hy3-preview (Reasoning) by Tencent | 41.9 | 84 tok/s | N/A | N/A / N/A | Apr 23, 2026 | |
| #62 M2.5 by MiniMax | 41.9 | 81 tok/s | 197K | $0.30 / $1.20 | Feb 12, 2026 | |
| #63 V3.2 (Reasoning) by DeepSeek | 41.7 | N/A | 131K | $0.28 / $0.42 | Dec 1, 2025 | |
| #64 Qwen3.5 122B A10B (Reasoning) by Alibaba | 41.6 | 135 tok/s | 262K | $0.40 / $3.20 | Feb 24, 2026 | |
| #65 MiMo-V2-Flash (Feb 2026) by Xiaomi | 41.5 | 120 tok/s | 262K | $0.10 / $0.30 | Dec 16, 2025 | |
| #66 Grok 4 by xAI | 41.5 | 48 tok/s | N/A | $3.00 / $15.00 | Jul 10, 2025 | |
| #67 Gemini 3 Pro Preview (low) by Google | 41.3 | N/A | N/A | $2.00 / $12.00 | Nov 18, 2025 | |
| #68 GPT-5 mini (high) by OpenAI | 41.2 | 78 tok/s | 400K | $0.25 / $2.00 | Aug 7, 2025 | |
| #69 GPT-5.5 (Non-reasoning) by OpenAI | 40.9 | 51 tok/s | 1.1M | $5.00 / $30.00 | Apr 23, 2026 | |
| #70 Kimi K2 Thinking by MoonshotAI | 40.9 | 101 tok/s | 262K | $0.60 / $2.50 | Nov 6, 2025 | |
| #71 o3-pro by OpenAI | 40.7 | 17 tok/s | 200K | $20.00 / $80.00 | Jun 10, 2025 | |
| #72 GLM-5 (Non-reasoning) by Z AI | 40.6 | 59 tok/s | 203K | $1.00 / $3.20 | Feb 11, 2026 | |
| #73 Qwen3.5 397B A17B (Non-reasoning) by Alibaba | 40.1 | 52 tok/s | 262K | $0.60 / $3.60 | Feb 16, 2026 | |
| #74 Qwen3 Max Thinking by Alibaba | 39.9 | 34 tok/s | 262K | $1.20 / $6.00 | Jan 26, 2026 | |
| #75 M2.1 by MiniMax | 39.4 | 83 tok/s | 197K | $0.30 / $1.20 | Dec 23, 2025 | |
| #76 V4 Pro (Non-reasoning) by DeepSeek | 39.3 | N/A | 1.0M | N/A / N/A | Apr 24, 2026 | |
| #77 Gemma 4 31B (Reasoning) by Google | 39.2 | 35 tok/s | 262K | N/A / N/A | Apr 2, 2026 | |
| #78 GPT-5 (low) by OpenAI | 39.2 | 64 tok/s | 400K | $1.25 / $10.00 | Aug 7, 2025 | |
| #79 MiMo-V2-Flash (Reasoning) by Xiaomi | 39.2 | 119 tok/s | 262K | $0.10 / $0.30 | Dec 16, 2025 | |
| #80 Claude 4 Opus (Reasoning) by Anthropic | 39.0 | 36 tok/s | N/A | $15.00 / $75.00 | May 22, 2025 | |
| #81 GPT-5 mini (medium) by OpenAI | 38.9 | 74 tok/s | 400K | $0.25 / $2.00 | Aug 7, 2025 | |
| #82 Claude 4 Sonnet (Reasoning) by Anthropic | 38.7 | 48 tok/s | N/A | $3.00 / $15.00 | May 22, 2025 | |
| #83 Grok 4.1 Fast (Reasoning) by xAI | 38.6 | 140 tok/s | N/A | $0.20 / $0.50 | Nov 19, 2025 | |
| #84 Qwen3.5 Omni Plus by Alibaba | 38.6 | 56 tok/s | N/A | $0.40 / $4.80 | Mar 30, 2026 | |
| #85 GPT-5.1 Codex mini (high) by OpenAI | 38.6 | 206 tok/s | 400K | $0.25 / $2.00 | Nov 13, 2025 | |
| #86 Step 3.5 Flash 2603 by StepFun | 38.5 | 134 tok/s | 262K | N/A / N/A | Apr 2, 2026 | |
| #87 o3 by OpenAI | 38.4 | 74 tok/s | 200K | $2.00 / $8.00 | Apr 16, 2025 | |
| #88 GPT-5.4 nano (medium) by OpenAI | 38.1 | 160 tok/s | 400K | $0.20 / $1.25 | Mar 17, 2026 | |
| #89 Step 3.5 Flash by StepFun | 37.8 | 125 tok/s | 262K | $0.10 / $0.30 | Feb 2, 2026 | |
| #90 GPT-5.4 mini (medium) by OpenAI | 37.7 | 161 tok/s | 400K | $0.75 / $4.50 | Mar 17, 2026 | |
| #91 Kimi K2.5 (Non-reasoning) by MoonshotAI | 37.3 | 31 tok/s | 262K | $0.60 / $3.00 | Jan 27, 2026 | |
| #92 Qwen3.5 27B (Non-reasoning) by Alibaba | 37.2 | 91 tok/s | 262K | $0.30 / $2.40 | Feb 24, 2026 | |
| #93 Claude 4.5 Haiku (Reasoning) by Anthropic | 37.1 | 104 tok/s | N/A | $1.00 / $5.00 | Oct 15, 2025 | |
| #94 Qwen3.6 27B (Non-reasoning) by Alibaba | 37.1 | 60 tok/s | 256K | $0.60 / $3.60 | Apr 22, 2026 | |
| #95 Claude 4.5 Sonnet (Non-reasoning) by Anthropic | 37.1 | 41 tok/s | N/A | $3.00 / $15.00 | Sep 29, 2025 | |
| #96 Qwen3.5 35B A3B (Reasoning) by Alibaba | 37.1 | 128 tok/s | 262K | $0.25 / $2.00 | Feb 24, 2026 | |
| #97 V4 Flash (Non-reasoning) by DeepSeek | 36.5 | N/A | 1.0M | N/A / N/A | Apr 24, 2026 | |
| #98 M2 by MiniMax | 36.1 | 83 tok/s | 197K | $0.30 / $1.20 | Oct 26, 2025 | |
| #99 Nemotron 3 Super 120B A12B (Reasoning) by NVIDIA | 36.0 | 162 tok/s | 262K | $0.30 / $0.75 | Mar 11, 2026 | |
| #100 KAT-Coder-Pro V1 by KwaiKAT | 36.0 | 114 tok/s | N/A | $0.30 / $1.20 | Nov 11, 2025 | |
Showing 100 of 507 models
Understanding the AI Model Leaderboard
This comprehensive AI model leaderboard helps you compare and choose the best large language models (LLMs) for your needs. We track standardized AI benchmarks, token pricing, inference speed, and model capabilities from all major AI providers, including OpenAI, Anthropic, Google, Meta, and DeepSeek.
How to Choose the Right AI Model for Your Use Case
For Research & Analysis
Prioritize models with high MMLU-Pro (70%+) and GPQA (60%+) scores for complex reasoning tasks, academic research, and technical documentation
For Cost Optimization
Sort by input/output pricing - for simple tasks, smaller models often deliver roughly 80% of flagship performance at about 10% of the cost
For Math & STEM
Filter by Math Index or AIME 2025 scores (50%+) for quantitative analysis, engineering calculations, and scientific applications
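One rough way to act on the cost-optimization advice is to compute a blended price per intelligence point. The sketch below uses model names and figures taken from the table above; the 3:1 input-to-output token ratio is an assumption (the same heuristic the pricing FAQ uses), not a property of any particular workload.

```python
# Blended cost per Intelligence Index point, using rows from the table above.
models = {
    # name: (intelligence index, $ per 1M input tokens, $ per 1M output tokens)
    "GPT-5.5 (xhigh)": (60.2, 5.00, 30.00),
    "Gemini 3.1 Pro Preview": (57.2, 2.00, 12.00),
    "M2.7": (49.6, 0.30, 1.20),
    "V4 Flash (Reasoning, Max Effort)": (46.5, 0.14, 0.28),
}

def blended_price(price_in, price_out, input_share=0.75):
    """Blended $ per 1M tokens, assuming 3 input tokens per output token."""
    return input_share * price_in + (1 - input_share) * price_out

for name, (index, price_in, price_out) in models.items():
    price = blended_price(price_in, price_out)
    print(f"{name}: ${price:.2f}/1M tokens blended, "
          f"${price / index:.4f} per index point")
```

Ranked this way, a budget model like V4 Flash costs orders of magnitude less per index point than a flagship, which is the trade-off the guidance above describes.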
All benchmark scores and pricing data are updated daily from Artificial Analysis to reflect the latest model versions and capabilities. Use the sort filters above to find AI models by intelligence, cost, coding ability, math performance, speed, or release date.
Frequently Asked Questions
What is MMLU-Pro and why is it the standard AI intelligence benchmark?
MMLU-Pro (Massive Multitask Language Understanding - Professional) is the most comprehensive AI benchmark, testing models across 14 academic subjects including mathematics, science, history, law, and ethics. Scores range from 46% (basic competency) to 87% (near-expert level). Models scoring above 75% demonstrate strong general intelligence suitable for professional applications, while scores below 60% indicate limitations in complex reasoning tasks.
What does GPQA measure and which models score highest?
GPQA (Graduate-level Google-Proof Q&A) tests PhD-level reasoning with questions designed to be "Google-proof" - requiring deep understanding rather than simple fact retrieval. Top models like GPT-5.1 (87.3%), GPT-5 mini (82.8%), and o3 (82.7%) excel at GPQA, making them ideal for research, technical analysis, and complex problem-solving. Models below 50% GPQA struggle with advanced reasoning and may provide superficial answers to complex questions.
What is AIME 2025 and how does it evaluate AI mathematical ability?
AIME 2025 (American Invitational Mathematics Examination) is an elite math competition benchmark that tests advanced problem-solving, algebra, geometry, and number theory. Scores above 80% (like GPT-5 Codex at 98.7% or GPT-5.1 at 94%) indicate exceptional mathematical reasoning suitable for engineering, scientific computing, and quantitative analysis. Models scoring below 50% may struggle with multi-step mathematical problems or require explicit problem breakdown.
How is AI model pricing calculated and what's considered cost-effective?
AI model pricing is measured per 1 million tokens (approximately 750,000 words). Input pricing covers text you send, while output pricing covers generated responses. Budget models like GPT-5 nano cost $0.05/$0.40 per million tokens, mid-tier models like Llama 3.3 70B cost $0.54/$0.71, while premium models like GPT-5 cost $1.25/$10. For typical applications with a 3:1 input-to-output ratio, budget models can be 10-20x cheaper than flagship models while maintaining 70-80% performance.
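The per-token arithmetic is straightforward to check yourself. This minimal sketch prices a single request; the token counts and the GPT-5 rates ($1.25 input / $10.00 output per million tokens, from the table above) are illustrative:

```python
def request_cost(input_tokens, output_tokens, price_in, price_out):
    """Cost in dollars for one request; prices are $ per 1 million tokens."""
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# A 2,000-token prompt with a 1,000-token reply on GPT-5 ($1.25 / $10.00):
cost = request_cost(2_000, 1_000, 1.25, 10.00)
print(f"${cost:.4f}")  # → $0.0125
```

Note that even with a larger input, the pricier output tokens dominate the bill here, which is why output price matters most for long-form generation.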
Which AI models are best for coding and programming tasks?
Sort by Coding Index to see top programming models. Our Coding Index combines LiveCodeBench, SciCode, and other coding benchmarks. Top performers include GPT-5.1 (57.5 index), GPT-5 Codex (53.5), and GPT-5 mini (51.4). These models excel at code generation, debugging, refactoring, and explaining complex algorithms. For budget-conscious developers, models with 40+ coding index scores offer excellent value for routine programming tasks.
How often are AI model benchmarks and rankings updated?
Our leaderboard syncs daily with Artificial Analysis API to ensure benchmark scores (MMLU-Pro, GPQA, AIME 2025), pricing, and inference speed data reflect the latest model versions. New model releases appear immediately under the "Newest" sort option. Benchmark scores can change when providers release updated versions - for example, GPT-5.1 released in November 2025 achieved 69.7 intelligence compared to GPT-5's 68.5 from August 2025.
What inference speed (tokens/second) do I need for my application?
Inference speed determines how fast models generate responses. For real-time chatbots and interactive applications, target 100+ tokens/second (models like gpt-oss-120B at 340 tok/s). For background processing and batch jobs, 50-100 tok/s is sufficient. Premium reasoning models like GPT-5 (103 tok/s) balance speed and capability. Note that higher inference speed doesn't always mean better quality - slower models often deliver more thoughtful, detailed responses.
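To translate tokens-per-second figures into user-facing latency, divide the expected response length by the throughput. This sketch ignores time-to-first-token (which adds a fixed delay on top); the speeds are taken from the table above:

```python
def generation_time(output_tokens, tokens_per_second):
    """Seconds to stream a full response, ignoring time-to-first-token."""
    return output_tokens / tokens_per_second

# A 500-token answer at two throughputs from the leaderboard:
for name, tps in [("Gemini 3 Flash Preview", 184),
                  ("Claude Opus 4.7 (Adaptive Reasoning)", 51)]:
    print(f"{name}: {generation_time(500, tps):.1f}s")  # ~2.7s vs ~9.8s
```

For an interactive chatbot, the difference between ~3 seconds and ~10 seconds per answer is very noticeable, which is why the 100+ tok/s guideline above applies to real-time use.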
Can I test these AI models for free before committing?
Yes! Try our free AI chat interface to test different models instantly without creating an account. Many providers also offer free tiers: OpenAI (ChatGPT with daily limits), Anthropic (Claude with usage caps), Google (Gemini free tier), and open-source models like Llama 3.3. Compare performance on your specific use case before upgrading to paid plans.