AI Model Ranking (LLM Leaderboard)
Best AI Math Models
Top language models for mathematics, ranked by the Artificial Analysis Math Index
Column guide: Input/1M and Output/1M are USD costs per 1 million tokens you send to, or receive from, the model. MMLU-Pro tests broad knowledge across 14 subjects spanning STEM, the humanities, and the social sciences. GPQA is a graduate-level, "Google-proof" Q&A benchmark of PhD-level reasoning. AIME 2025 tests advanced competition mathematics. Math is the Artificial Analysis Math Index, a composite of AIME, MATH-500, and other mathematical reasoning benchmarks. Release is the model's release date; newer models may have more capabilities.

| Model | Input/1M | Output/1M | MMLU-Pro | GPQA | AIME 2025 | Math | Release | Compare |
|---|---|---|---|---|---|---|---|---|
| #1 GPT-5 Codex (high) by OpenAI | $1.25 | $10.00 | 86.5% | 83.7% | 98.7% | 98.7 | Sep 23, 2025 | |
| #2 Gemini 3 Pro Preview (high) by Google | $2.00 | $12.00 | 89.8% | 90.8% | 95.7% | 95.7 | Nov 18, 2025 | |
| #3 Kimi K2 Thinking by Moonshot AI | $0.60 | $2.50 | 84.8% | 83.8% | 94.7% | 94.7 | Nov 6, 2025 | |
| #4 GPT-5 (high) by OpenAI | $1.25 | $10.00 | 87.1% | 85.4% | 94.3% | 94.3 | Aug 7, 2025 | |
| #5 GPT-5.1 (high) by OpenAI | $1.25 | $10.00 | 87.0% | 87.3% | 94.0% | 94.0 | Nov 13, 2025 | |
| #6 gpt-oss-120B (high) by OpenAI | $0.15 | $0.60 | 80.8% | 78.2% | 93.4% | 93.4 | Aug 5, 2025 | |
| #7 Grok 4 by xAI | $3.00 | $15.00 | 86.6% | 87.7% | 92.7% | 92.7 | Jul 10, 2025 | |
| #8 GPT-5 (medium) by OpenAI | $1.25 | $10.00 | 86.7% | 84.2% | 91.7% | 91.7 | Aug 7, 2025 | |
| #9 Claude Opus 4.5 (Reasoning) by Anthropic | $5.00 | $25.00 | 89.5% | 86.6% | 91.3% | 91.3 | Nov 24, 2025 | |
| #10 Qwen3 235B A22B 2507 (Reasoning) by Alibaba | $0.70 | $8.40 | 84.3% | 79.0% | 91.0% | 91.0 | Jul 25, 2025 | |
| #11 GPT-5 mini (high) by OpenAI | $0.25 | $2.00 | 83.7% | 82.8% | 90.7% | 90.7 | Aug 7, 2025 | |
| #12 o4-mini (high) by OpenAI | $1.10 | $4.40 | 83.2% | 78.4% | 90.7% | 90.7 | Apr 16, 2025 | |
| #13 DeepSeek V3.1 Terminus (Reasoning) by DeepSeek | $0.40 | $2.00 | 85.1% | 79.2% | 89.7% | 89.7 | Sep 22, 2025 | |
| #14 Grok 4 Fast (Reasoning) by xAI | $0.20 | $0.50 | 85.0% | 84.7% | 89.7% | 89.7 | Sep 19, 2025 | |
| #15 DeepSeek V3.1 (Reasoning) by DeepSeek | $0.42 | $1.34 | 85.1% | 77.9% | 89.7% | 89.7 | Aug 21, 2025 | |
| #16 gpt-oss-20B (high) by OpenAI | $0.07 | $0.20 | 74.8% | 68.8% | 89.3% | 89.3 | Aug 5, 2025 | |
| #17 Grok 4.1 Fast (Reasoning) by xAI | $0.20 | $0.50 | 85.4% | 85.3% | 89.3% | 89.3 | Nov 19, 2025 | |
| #18 Ring-1T by InclusionAI | $0.57 | $2.28 | 80.6% | 59.5% | 89.3% | 89.3 | Oct 13, 2025 | |
| #19 o3 by OpenAI | $2.00 | $8.00 | 85.3% | 82.7% | 88.3% | 88.3 | Apr 16, 2025 | |
| #20 Qwen3 VL 235B A22B (Reasoning) by Alibaba | $0.70 | $8.40 | 83.6% | 77.2% | 88.3% | 88.3 | Sep 23, 2025 | |
| #21 Claude 4.5 Sonnet (Reasoning) by Anthropic | $3.00 | $15.00 | 87.5% | 83.4% | 88.0% | 88.0 | Sep 29, 2025 | |
| #22 Gemini 2.5 Pro by Google | $1.25 | $10.00 | 86.2% | 84.4% | 87.7% | 87.7 | Jun 5, 2025 | |
| #23 DeepSeek V3.2 Exp (Reasoning) by DeepSeek | $0.28 | $0.42 | 85.0% | 79.7% | 87.7% | 87.7 | Sep 29, 2025 | |
| #24 Apriel-v1.5-15B-Thinker by ServiceNow | N/A | N/A | 77.3% | 71.3% | 87.5% | 87.5 | Sep 30, 2025 | |
| #25 GLM-4.6 (Reasoning) by Z AI | $0.60 | $2.20 | 82.9% | 78.0% | 86.0% | 86.0 | Sep 30, 2025 | |
| #26 GPT-5 mini (medium) by OpenAI | $0.25 | $2.00 | 82.8% | 80.3% | 85.0% | 85.0 | Aug 7, 2025 | |
| #27 Grok 3 mini Reasoning (high) by xAI | $0.30 | $0.50 | 82.8% | 79.1% | 84.7% | 84.7 | Feb 19, 2025 | |
| #28 Qwen3 VL 32B (Reasoning) by Alibaba | $0.70 | $8.40 | 81.8% | 73.3% | 84.7% | 84.7 | Oct 21, 2025 | |
| #29 Seed-OSS-36B-Instruct by ByteDance Seed | $0.21 | $0.57 | 81.5% | 72.6% | 84.7% | 84.7 | Aug 20, 2025 | |
| #30 Qwen3 Next 80B A3B (Reasoning) by Alibaba | $0.50 | $6.00 | 82.4% | 75.9% | 84.3% | 84.3 | Sep 11, 2025 | |
| #31 GPT-5 nano (high) by OpenAI | $0.05 | $0.40 | 78.0% | 67.6% | 83.7% | 83.7 | Aug 7, 2025 | |
| #32 Claude 4.5 Haiku (Reasoning) by Anthropic | $1.00 | $5.00 | 76.0% | 67.2% | 83.7% | 83.7 | Oct 15, 2025 | |
| #33 Ring-flash-2.0 by InclusionAI | $0.14 | $0.57 | 79.3% | 72.5% | 83.7% | 83.7 | Sep 19, 2025 | |
| #34 GPT-5 (low) by OpenAI | $1.25 | $10.00 | 86.0% | 80.8% | 83.0% | 83.0 | Aug 7, 2025 | |
| #35 Qwen3 4B 2507 (Reasoning) by Alibaba | N/A | N/A | 74.3% | 66.7% | 82.7% | 82.7 | Aug 6, 2025 | |
| #36 Qwen3 VL 30B A3B (Reasoning) by Alibaba | $0.20 | $2.40 | 80.7% | 72.0% | 82.3% | 82.3 | Oct 3, 2025 | |
| #37 Qwen3 Max Thinking by Alibaba | $1.20 | $6.00 | 82.4% | 77.6% | 82.3% | 82.3 | Nov 3, 2025 | |
| #38 Magistral Medium 1.2 by Mistral | $2.00 | $5.00 | 81.5% | 73.9% | 82.0% | 82.0 | Sep 18, 2025 | |
| #39 Qwen3 235B A22B (Reasoning) by Alibaba | $0.70 | $8.40 | 82.8% | 70.0% | 82.0% | 82.0 | Apr 28, 2025 | |
| #40 GLM-4.5-Air by Z AI | $0.20 | $1.10 | 81.5% | 73.3% | 80.7% | 80.7 | Jul 28, 2025 | |
| #41 Qwen3 Max by Alibaba | $1.20 | $6.00 | 84.1% | 76.4% | 80.7% | 80.7 | Sep 23, 2025 | |
| #42 Claude 4.1 Opus (Reasoning) by Anthropic | $15.00 | $75.00 | 88.0% | 80.9% | 80.3% | 80.3 | Aug 5, 2025 | |
| #43 Magistral Small 1.2 by Mistral | $0.50 | $1.50 | 76.8% | 66.3% | 80.3% | 80.3 | Sep 17, 2025 | |
| #44 EXAONE 4.0 32B (Reasoning) by LG AI Research | $0.60 | $1.00 | 81.8% | 73.9% | 80.0% | 80.0 | Jul 15, 2025 | |
| #45 Doubao Seed Code by ByteDance Seed | $0.17 | $1.12 | 85.4% | 76.4% | 79.3% | 79.3 | Nov 11, 2025 | |
| #46 GPT-5 nano (medium) by OpenAI | $0.05 | $0.40 | 77.2% | 67.0% | 78.3% | 78.3 | Aug 7, 2025 | |
| #47 Gemini 2.5 Flash Preview (Sep '25) (Reasoning) by Google | $0.30 | $2.50 | 84.2% | 79.3% | 78.3% | 78.3 | Sep 25, 2025 | |
| #48 MiniMax-M2 by MiniMax | $0.30 | $1.20 | 82.0% | 77.7% | 78.3% | 78.3 | Oct 26, 2025 | |
| #49 Llama Nemotron Super 49B v1.5 (Reasoning) by NVIDIA | $0.10 | $0.40 | 81.4% | 74.8% | 76.7% | 76.7 | Jul 25, 2025 | |
| #50 DeepSeek R1 0528 (May '25) by DeepSeek | $1.35 | $4.00 | 84.9% | 81.3% | 76.0% | 76.0 | May 28, 2025 | |
| #51 Qwen3 Max (Preview) by Alibaba | $1.20 | $6.00 | 83.8% | 76.4% | 75.0% | 75.0 | Sep 5, 2025 | |
| #52 Claude 4 Sonnet (Reasoning) by Anthropic | $3.00 | $15.00 | 84.2% | 77.7% | 74.3% | 74.3 | May 22, 2025 | |
| #53 Qwen3 Omni 30B A3B (Reasoning) by Alibaba | $0.25 | $0.97 | 79.2% | 72.6% | 74.0% | 74.0 | Sep 22, 2025 | |
| #54 GLM-4.5 (Reasoning) by Z AI | $0.57 | $2.19 | 83.5% | 78.2% | 73.7% | 73.7 | Jul 28, 2025 | |
| #55 Gemini 2.5 Flash (Reasoning) by Google | $0.30 | $2.50 | 83.2% | 79.0% | 73.3% | 73.3 | May 20, 2025 | |
| #56 Claude 4 Opus (Reasoning) by Anthropic | $15.00 | $75.00 | 87.3% | 79.6% | 73.3% | 73.3 | May 22, 2025 | |
| #57 GLM-4.5V (Reasoning) by Z AI | $0.55 | $1.75 | 78.8% | 68.4% | 73.0% | 73.0 | Aug 11, 2025 | |
| #58 Qwen3 32B (Reasoning) by Alibaba | $0.70 | $8.40 | 79.8% | 66.8% | 73.0% | 73.0 | Apr 28, 2025 | |
| #59 Cogito v2.1 (Reasoning) by Deep Cogito | $1.25 | $1.25 | 84.9% | 76.8% | 72.7% | 72.7 | Nov 18, 2025 | |
| #60 Qwen3 VL 30B A3B Instruct by Alibaba | $0.20 | $0.80 | 76.4% | 69.5% | 72.3% | 72.3 | Oct 3, 2025 | |
| #61 Qwen3 30B A3B (Reasoning) by Alibaba | $0.20 | $2.40 | 77.7% | 61.6% | 72.3% | 72.3 | Apr 28, 2025 | |
| #62 Qwen3 235B A22B 2507 Instruct by Alibaba | $0.70 | $2.80 | 82.8% | 75.3% | 71.7% | 71.7 | Jul 21, 2025 | |
| #63 Ling-1T by InclusionAI | $0.57 | $2.28 | 82.2% | 71.9% | 71.3% | 71.3 | Oct 8, 2025 | |
| #64 Qwen3 VL 235B A22B Instruct by Alibaba | $0.70 | $2.80 | 82.3% | 71.2% | 70.7% | 70.7 | Sep 23, 2025 | |
| #65 NVIDIA Nemotron Nano 9B V2 (Reasoning) by NVIDIA | $0.04 | $0.16 | 74.2% | 57.0% | 69.7% | 69.7 | Aug 18, 2025 | |
| #66 Hermes 4 - Llama-3.1 405B (Reasoning) by Nous Research | $1.00 | $3.00 | 82.9% | 72.7% | 69.7% | 69.7 | Aug 27, 2025 | |
| #67 Gemini 2.5 Flash-Lite Preview (Sep '25) (Reasoning) by Google | $0.10 | $0.40 | 80.8% | 70.9% | 68.7% | 68.7 | Sep 8, 2025 | |
| #68 Hermes 4 - Llama-3.1 70B (Reasoning) by Nous Research | $0.13 | $0.40 | 81.1% | 69.9% | 68.7% | 68.7 | Aug 27, 2025 | |
| #69 Qwen3 VL 32B Instruct by Alibaba | $0.70 | $2.80 | 79.1% | 67.1% | 68.3% | 68.3 | Oct 21, 2025 | |
| #70 DeepSeek R1 (Jan '25) by DeepSeek | $1.35 | $4.00 | 84.4% | 70.8% | 68.0% | 68.0 | Jan 20, 2025 | |
| #71 gpt-oss-120B (low) by OpenAI | $0.15 | $0.59 | 77.5% | 67.2% | 66.7% | 66.7 | Aug 5, 2025 | |
| #72 Qwen3 30B A3B 2507 Instruct by Alibaba | $0.20 | $0.80 | 77.7% | 65.9% | 66.3% | 66.3 | Jul 29, 2025 | |
| #73 Qwen3 Next 80B A3B Instruct by Alibaba | $0.50 | $2.00 | 81.9% | 73.8% | 66.3% | 66.3 | Sep 11, 2025 | |
| #74 Ling-flash-2.0 by InclusionAI | $0.14 | $0.57 | 77.7% | 65.7% | 65.3% | 65.3 | Sep 17, 2025 | |
| #75 DeepSeek R1 0528 Qwen3 8B by DeepSeek | $0.06 | $0.09 | 73.9% | 61.2% | 63.7% | 63.7 | May 29, 2025 | |
| #76 Llama 3.1 Nemotron Ultra 253B v1 (Reasoning) by NVIDIA | $0.60 | $1.80 | 82.5% | 72.8% | 63.7% | 63.7 | Apr 7, 2025 | |
| #77 DeepSeek R1 Distill Qwen 32B by DeepSeek | $0.28 | $0.28 | 73.9% | 61.5% | 63.0% | 63.0 | Jan 20, 2025 | |
| #78 Claude Opus 4.5 (Non-reasoning) by Anthropic | $5.00 | $25.00 | 88.9% | 81.0% | 62.7% | 62.7 | Nov 24, 2025 | |
| #79 gpt-oss-20B (low) by OpenAI | $0.07 | $0.20 | 71.8% | 61.1% | 62.3% | 62.3 | Aug 5, 2025 | |
| #80 NVIDIA Nemotron Nano 9B V2 (Non-reasoning) by NVIDIA | $0.04 | $0.16 | 73.9% | 55.7% | 62.3% | 62.3 | Aug 18, 2025 | |
| #81 Solar Pro 2 (Reasoning) by Upstage | $0.50 | $0.50 | 80.5% | 68.7% | 61.3% | 61.3 | Jul 9, 2025 | |
| #82 MiniMax M1 80k by MiniMax | $0.40 | $2.10 | 81.6% | 69.7% | 61.0% | 61.0 | Jun 17, 2025 | |
| #83 Gemini 2.5 Flash (Non-reasoning) by Google | $0.30 | $2.50 | 80.9% | 68.3% | 60.3% | 60.3 | May 20, 2025 | |
| #84 Grok 3 by xAI | $3.00 | $15.00 | 79.9% | 69.3% | 58.0% | 58.0 | Feb 19, 2025 | |
| #85 Qwen3 14B (Non-reasoning) by Alibaba | $0.35 | $1.40 | 67.5% | 47.0% | 58.0% | 58.0 | Apr 28, 2025 | |
| #86 DeepSeek V3.2 Exp (Non-reasoning) by DeepSeek | $0.28 | $0.42 | 83.6% | 73.8% | 57.7% | 57.7 | Sep 29, 2025 | |
| #87 Kimi K2 0905 by Moonshot AI | $0.99 | $2.50 | 81.9% | 76.7% | 57.3% | 57.3 | Sep 5, 2025 | |
| #88 Kimi K2 by Moonshot AI | $0.60 | $2.50 | 82.4% | 76.6% | 57.0% | 57.0 | Jul 11, 2025 | |
| #89 Gemini 2.5 Flash Preview (Sep '25) (Non-reasoning) by Google | $0.30 | $2.50 | 83.6% | 76.6% | 56.7% | 56.7 | Sep 25, 2025 | |
| #90 Qwen3 30B A3B 2507 (Reasoning) by Alibaba | $0.20 | $2.40 | 80.5% | 70.7% | 56.3% | 56.3 | Jul 30, 2025 | |
| #91 Claude 3.7 Sonnet (Reasoning) by Anthropic | $3.00 | $15.00 | 83.7% | 77.2% | 56.3% | 56.3 | Feb 24, 2025 | |
| #92 DeepSeek R1 Distill Qwen 14B by DeepSeek | $0.15 | $0.15 | 74.0% | 48.4% | 55.7% | 55.7 | Jan 20, 2025 | |
| #93 Qwen3 14B (Reasoning) by Alibaba | $0.35 | $4.20 | 77.4% | 60.4% | 55.7% | 55.7 | Apr 28, 2025 | |
| #94 Llama 3.3 Nemotron Super 49B v1 (Reasoning) by NVIDIA | N/A | N/A | 78.5% | 64.3% | 54.7% | 54.7 | Mar 18, 2025 | |
| #95 DeepSeek R1 Distill Llama 70B by DeepSeek | $0.80 | $1.05 | 79.5% | 40.2% | 53.7% | 53.7 | Jan 20, 2025 | |
| #96 DeepSeek V3.1 Terminus (Non-reasoning) by DeepSeek | $0.40 | $1.68 | 83.6% | 75.1% | 53.7% | 53.7 | Sep 22, 2025 | |
| #97 Gemini 2.5 Flash-Lite (Reasoning) by Google | $0.10 | $0.40 | 75.9% | 62.5% | 53.3% | 53.3 | Jun 17, 2025 | |
| #98 Qwen3 4B 2507 Instruct by Alibaba | N/A | N/A | 67.2% | 51.7% | 52.3% | 52.3 | Aug 6, 2025 | |
| #99 Qwen3 Omni 30B A3B Instruct by Alibaba | $0.25 | $0.97 | 72.5% | 62.0% | 52.3% | 52.3 | Sep 22, 2025 | |
| #100 EXAONE 4.0 1.2B (Reasoning) by LG AI Research | N/A | N/A | 58.8% | 51.5% | 50.3% | 50.3 | Jul 15, 2025 | |
Understanding the AI Model Leaderboard
This comprehensive AI model leaderboard helps you compare and choose the best large language models (LLMs) for your needs. We track standardized AI benchmarks, token pricing, inference speed, and model capabilities across all major AI providers, including OpenAI, Anthropic, Google, Meta, and DeepSeek.
Core AI Benchmarks Explained
- MMLU-Pro: Tests broad knowledge across 14 academic subjects including STEM, humanities, and social sciences - the foundational intelligence benchmark
- GPQA: Graduate-level Google-Proof Q&A benchmark - measures PhD-level reasoning and advanced problem-solving capabilities
- AIME 2025: American Invitational Mathematics Examination - evaluates elite mathematical reasoning and competition-level problem solving
- Coding Index: Composite score of LiveCodeBench, SciCode, and related coding benchmarks - measures programming ability
- Math Index: Composite score of AIME, MATH-500, and other mathematical reasoning tests - the basis of this page's ranking
Key Metrics to Consider
- Token Pricing: Compare input vs output token costs per million - crucial for estimating API expenses and optimizing usage patterns (see the cost sketch after this list)
- Inference Speed: Measured in tokens/second - determines response time for chatbots, streaming, and real-time applications
- Release Date: Newer models often incorporate latest training techniques and updated knowledge cutoffs
- Benchmark Scores: Percentage scores (0-100%) make it easy to compare model capabilities at a glance
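As a back-of-the-envelope illustration of the Token Pricing point above, here is a minimal Python sketch. The prices come from the table; the monthly token volumes are assumptions you would replace with your own.

```python
# Back-of-the-envelope API cost estimator. Prices are USD per 1M tokens,
# charged separately for input (prompt) and output (completion) tokens.

PRICES = {
    # model: (input $/1M, output $/1M) -- values from the table above
    "GPT-5 (high)": (1.25, 10.00),
    "Kimi K2 Thinking": (0.60, 2.50),
    "gpt-oss-120B (high)": (0.15, 0.60),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for the given token volumes."""
    in_price, out_price = PRICES[model]
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Assumed workload: 50M input and 10M output tokens per month.
for name in PRICES:
    print(f"{name:22s} ${estimate_cost(name, 50_000_000, 10_000_000):>9,.2f}/month")
```

Under these assumptions the spread is large: roughly $162.50/month for GPT-5 (high) versus $13.50/month for gpt-oss-120B (high).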
How to Choose the Right AI Model for Your Use Case
For Research & Analysis
Prioritize models with high MMLU-Pro (70%+) and GPQA (60%+) scores for complex reasoning tasks, academic research, and technical documentation
For Cost Optimization
Sort by input/output pricing - for simple tasks, smaller models often deliver around 80% of flagship performance at roughly 10% of the cost
For Math & STEM
Filter by Math Index or AIME 2025 scores (50%+) for quantitative analysis, engineering calculations, and scientific applications
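As a minimal sketch of the filtering described above, this snippet applies the 50% Math Index floor and ranks the survivors by input price. The rows are copied from the table; note that every model in this top-100 list already clears the floor, so the price sort does most of the work here.

```python
# Shortlist models for math-heavy work: apply the 50%+ Math Index floor
# from the guidance above, then rank the survivors by input price.

ROWS = [
    # (model, input $/1M or None if unlisted, Math Index) -- from the table
    ("GPT-5 Codex (high)", 1.25, 98.7),
    ("Kimi K2 Thinking", 0.60, 94.7),
    ("gpt-oss-120B (high)", 0.15, 93.4),
    ("Grok 3", 3.00, 58.0),
    ("EXAONE 4.0 1.2B (Reasoning)", None, 50.3),
]

shortlist = [r for r in ROWS if r[2] >= 50.0]
shortlist.sort(key=lambda r: r[1] if r[1] is not None else float("inf"))

for model, price, math_index in shortlist:
    price_str = f"${price:.2f}" if price is not None else "N/A"
    print(f"{model:30s} {price_str:>7s}  Math {math_index}")
```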
All benchmark scores and pricing data are updated daily from Artificial Analysis to reflect the latest model versions and capabilities. Use the sort filters above to find AI models by intelligence, cost, coding ability, math performance, speed, or release date.
Frequently Asked Questions
What is MMLU-Pro and why is it the standard AI intelligence benchmark?
MMLU-Pro (Massive Multitask Language Understanding - Professional) is the most comprehensive AI benchmark, testing models across 14 academic subjects including mathematics, science, history, law, and ethics. On the current leaderboard, scores range from about 59% for small models to nearly 90% for frontier models. Models scoring above 75% demonstrate strong general intelligence suitable for professional applications, while scores below 60% indicate limitations in complex reasoning tasks.
What does GPQA measure and which models score highest?
GPQA (Graduate-level Google-Proof Q&A) tests PhD-level reasoning with questions designed to be "Google-proof" - requiring deep understanding rather than simple fact retrieval. Top models like Gemini 3 Pro Preview (90.8%), Grok 4 (87.7%), and GPT-5.1 (87.3%) excel at GPQA, making them ideal for research, technical analysis, and complex problem-solving. Models below 50% GPQA struggle with advanced reasoning and may provide superficial answers to complex questions.
What is AIME 2025 and how does it evaluate AI mathematical ability?
AIME 2025 (American Invitational Mathematics Examination) is an elite math competition benchmark testing advanced problem-solving in algebra, geometry, and number theory. Scores above 80% (like GPT-5 Codex at 98.7% or GPT-5.1 at 94.0%) indicate exceptional mathematical reasoning suitable for engineering, scientific computing, and quantitative analysis. Models scoring below 50% may struggle with multi-step mathematical problems or require explicit problem breakdown.
How is AI model pricing calculated and what's considered cost-effective?
AI model pricing is measured per 1 million tokens (roughly 750,000 words). Input pricing covers text you send, while output pricing covers generated responses. Budget models like GPT-5 nano cost $0.05/$0.40 per million tokens, mid-tier models like Llama 3.3 70B cost $0.54/$0.71, while premium models like GPT-5 cost $1.25/$10.00. For typical applications with a 3:1 input-to-output ratio, budget models can be 10-20x cheaper than flagship models while maintaining 70-80% of their performance.
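To make the 3:1 ratio concrete, a blended per-million-token price is just a weighted average of the input and output prices. A small sketch, using prices from the table above:

```python
# Blended $/1M tokens for a workload that is 75% input, 25% output
# (the 3:1 input-to-output ratio mentioned above).

def blended_price(input_price: float, output_price: float,
                  input_share: float = 0.75) -> float:
    """Weighted-average USD cost per 1M tokens processed."""
    return input_share * input_price + (1.0 - input_share) * output_price

print(f"GPT-5 (high): ${blended_price(1.25, 10.00):.2f}/1M blended tokens")
print(f"GPT-5 nano:   ${blended_price(0.05, 0.40):.2f}/1M blended tokens")
```

Under this assumed split, GPT-5 nano works out to roughly $0.14 per million blended tokens versus about $3.44 for GPT-5 - around a 25x difference.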
Which AI models are best for coding and programming tasks?
Sort by Coding Index to see top programming models. Our Coding Index combines LiveCodeBench, SciCode, and related coding benchmarks. Top performers include GPT-5.1 (57.5 index), GPT-5 Codex (53.5), and GPT-5 mini (51.4). These models excel at code generation, debugging, refactoring, and explaining complex algorithms. For budget-conscious developers, models with 40+ Coding Index scores offer excellent value for routine programming tasks.
How often are AI model benchmarks and rankings updated?
Our leaderboard syncs daily with the Artificial Analysis API to ensure benchmark scores (MMLU-Pro, GPQA, AIME 2025), pricing, and inference speed data reflect the latest model versions. New model releases appear immediately under the "Newest" sort option. Benchmark scores can change when providers release updated versions - for example, GPT-5.1 (released November 2025) scores 69.7 on the intelligence index versus 68.5 for GPT-5 from August 2025.
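A minimal sketch of what such a daily sync job might look like. The endpoint URL, auth header, and response shape below are placeholder assumptions, not the documented Artificial Analysis API contract - consult their API docs for the real details.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical daily sync job. The URL, auth header, and response shape
# are placeholders, NOT the actual Artificial Analysis API contract.
API_URL = "https://example.com/v1/models"  # placeholder endpoint

def fetch_leaderboard(api_key: str) -> list[dict]:
    """Fetch per-model benchmark records (assumed to arrive as JSON)."""
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```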
What inference speed (tokens/second) do I need for my application?
Inference speed determines how fast models generate responses. For real-time chatbots and interactive applications, target 100+ tokens/second (models like gpt-oss-120B at 340 tok/s). For background processing and batch jobs, 50-100 tok/s is sufficient. Premium reasoning models like GPT-5 (103 tok/s) balance speed and capability. Note that higher inference speed doesn't always mean better quality - slower models often deliver more thoughtful, detailed responses.
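Back-of-the-envelope latency follows directly from the tokens-per-second figure. A short sketch using the speeds quoted above (the 500-token reply length is an assumption):

```python
# Streaming time for a response of a given length. Ignores network
# overhead and time-to-first-token, which add real-world latency on top.

def generation_seconds(output_tokens: int, tokens_per_second: float) -> float:
    return output_tokens / tokens_per_second

# Speeds quoted in the answer above; 500 tokens is an assumed reply length.
for name, speed in [("gpt-oss-120B", 340.0), ("GPT-5", 103.0)]:
    print(f"{name}: {generation_seconds(500, speed):.1f}s for a 500-token reply")
```

At those speeds, a 500-token reply streams in about 1.5 seconds on gpt-oss-120B versus about 4.9 seconds on GPT-5.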
Can I test these AI models for free before committing?
Yes! Try our free AI chat interface to test different models instantly without creating an account. Many providers also offer free tiers: OpenAI (ChatGPT with daily limits), Anthropic (Claude with usage caps), Google (Gemini free tier), and open-source models like Llama 3.3. Compare performance on your specific use case before upgrading to paid plans.