
Llama 4 Maverick

by Meta

Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass (400B total). It accepts multilingual text and image input and produces text and code output in 12 supported languages. Optimized for vision-language tasks, Maverick is instruction-tuned for assistant-style behavior, image reasoning, and general-purpose multimodal interaction. It uses early fusion for native multimodality and offers a 1-million-token context window. The model was trained on a curated mixture of public, licensed, and Meta-platform data covering roughly 22 trillion tokens, with a knowledge cutoff of August 2024. Released on April 5, 2025 under the Llama 4 Community License, Maverick is suited to research and commercial applications that require advanced multimodal understanding and high throughput.
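In a mixture-of-experts layer, a gating network routes each token to only a few experts, which is why just 17B of the 400B parameters are active per forward pass. A minimal sketch of top-k gate routing in Python, assuming a softmax gate and two active experts per token; this illustrates the general technique, not Meta's actual routing implementation:

```python
import math
import random

def softmax(xs):
    # Numerically stable softmax over a list of gate logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route_token(gate_logits, k=2):
    """Pick the top-k experts for one token and renormalize their gate weights."""
    probs = softmax(gate_logits)
    topk = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in topk)
    return [(i, probs[i] / norm) for i in topk]

# Toy gate over 8 experts (a real Maverick layer has 128, per the spec above).
random.seed(0)
logits = [random.gauss(0, 1) for _ in range(8)]
assignment = route_token(logits, k=2)
print(assignment)  # list of (expert_index, weight) pairs, weights summing to 1
```

Only the selected experts' feed-forward blocks run for that token; their outputs are combined using the renormalized gate weights.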


Capabilities

Vision

Pricing

Input Tokens: Free (per 1M tokens)
Output Tokens: Free (per 1M tokens)
Image Processing: $668.40 per 1M tokens
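Under these rates, only image input is billed. A quick sketch of the cost arithmetic in Python; the token counts in the example are hypothetical, not measured values:

```python
# Per-1M-token rates from the pricing section above.
RATE_INPUT = 0.0        # text input is free
RATE_OUTPUT = 0.0       # text output is free
RATE_IMAGE = 668.40     # USD per 1M image tokens

def request_cost(text_in, text_out, image_tokens):
    """Total USD cost for one request, given token counts."""
    def per_million(tokens, rate):
        return tokens / 1_000_000 * rate
    return (per_million(text_in, RATE_INPUT)
            + per_million(text_out, RATE_OUTPUT)
            + per_million(image_tokens, RATE_IMAGE))

# Hypothetical request: 2k text tokens in, 1k out, 50k image tokens.
cost = request_cost(2_000, 1_000, 50_000)
print(f"${cost:.2f}")  # 50,000 / 1,000,000 * 668.40 = $33.42
```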

Supported Modalities

Input: text, image
Output: text

Performance Benchmarks

Intelligence Index (overall intelligence score): 35.8
Coding Index (programming capability): 26.4
Math Index (mathematical reasoning): 19.3
GPQA (graduate-level questions): 67.1%
MMLU Pro (multitask language understanding): 80.9%
HLE (Humanity's Last Exam): 4.8%
LiveCodeBench (real-world coding tasks): 39.7%
AIME 2025 (advanced mathematics): 19.3%
MATH 500 (mathematical problem solving): 88.9%

Specifications

Context Length: 1.0M tokens
Provider: Meta
Throughput: 130.1 tokens/s
Released: Apr 5, 2025
Model ID: meta-llama/llama-4-maverick
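The model ID above is what an OpenAI-compatible chat API would typically expect in its `model` field. A minimal sketch of a multimodal request payload, assuming such an endpoint; the message text and image URL are placeholders, and the exact API surface depends on the provider hosting the model:

```python
import json

# Hypothetical chat-completion payload for an OpenAI-compatible endpoint,
# combining a text prompt with an image input in one user message.
payload = {
    "model": "meta-llama/llama-4-maverick",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    "max_tokens": 512,
}
print(json.dumps(payload, indent=2))
```

POSTing this body to the provider's chat-completions endpoint (with the provider's auth header) would return the model's text response.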

Ready to try it?

Start chatting with Llama 4 Maverick right now. No credit card required.

