
R1 Distill Qwen 32B

32B

by DeepSeek

DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), fine-tuned on outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Other benchmark results include:

- AIME 2024 pass@1: 72.6
- MATH-500 pass@1: 94.3
- CodeForces Rating: 1691

By distilling DeepSeek R1's reasoning traces into a smaller dense model, it achieves performance comparable to larger frontier models.
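The distillation recipe described above, supervised fine-tuning of a smaller student model on outputs generated by DeepSeek R1, can be illustrated with a minimal data-preparation sketch. The function name and chat-style record format below are illustrative assumptions, not DeepSeek's actual pipeline.

```python
# Minimal sketch of preparing a distillation SFT dataset:
# each prompt is paired with the teacher model's (DeepSeek R1) output,
# so the student (Qwen 2.5 32B) learns to imitate the reasoning traces.
# The record format here is an illustrative assumption.

def build_distillation_examples(prompts, teacher_outputs):
    """Zip prompts with teacher completions into chat-style SFT records."""
    if len(prompts) != len(teacher_outputs):
        raise ValueError("each prompt needs exactly one teacher output")
    return [
        {"messages": [
            {"role": "user", "content": p},
            {"role": "assistant", "content": t},  # teacher trace is the target
        ]}
        for p, t in zip(prompts, teacher_outputs)
    ]

examples = build_distillation_examples(
    ["Solve: 2x + 3 = 11"],
    ["<think>2x = 8, so x = 4</think> x = 4"],
)
print(len(examples))  # one SFT record per prompt
```

In practice these records would feed a standard supervised fine-tuning loop; the key idea is simply that the teacher's full reasoning trace, not just the final answer, becomes the training target.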


Pricing

Input Tokens
Per 1M tokens
Free
Output Tokens
Per 1M tokens
Free
Image Processing
Per 1M tokens
Free

Supported Modalities

Input

text

Output

text

Performance Benchmarks

Intelligence Index
Overall intelligence score
17.2
Math Index
Mathematical reasoning
63.0
GPQA
Graduate-level questions
61.5%
MMLU Pro
Multitask language understanding
73.9%
HLE
Humanity's Last Exam
5.5%
LiveCodeBench
Real-world coding tasks
27.0%
AIME 2025
Advanced mathematics
63.0%
MATH 500
Mathematical problem solving
94.1%

Specifications

Context Length
131K tokens
Provider
DeepSeek
Throughput
38.8 tokens/s
Released
Jan 29, 2025
Model ID
deepseek/deepseek-r1-distill-qwen-32b
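Since the card lists a model ID in the `provider/model` style, the model is presumably served through an OpenAI-compatible chat completions endpoint. The sketch below assumes such an endpoint; `BASE_URL` and `API_KEY` are placeholder assumptions, not details from this page — only the model ID comes from the card.

```python
import json
import os
import urllib.request

# Hedged sketch: assumes an OpenAI-compatible /chat/completions endpoint.
# BASE_URL and API_KEY are placeholders, not details from this model card.
BASE_URL = os.environ.get("BASE_URL", "https://example.com/api/v1")
API_KEY = os.environ.get("API_KEY")

payload = {
    "model": "deepseek/deepseek-r1-distill-qwen-32b",  # Model ID from the card
    "messages": [{"role": "user", "content": "What is 7 * 8?"}],
}

if API_KEY:  # only send the request when credentials are configured
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

With the 131K-token context window listed above, long prompts fit in a single request, though the exact request shape depends on the provider actually serving the model.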

Ready to try it?

Start chatting with R1 Distill Qwen 32B right now. No credit card required.

Start Chatting

More from DeepSeek

View all models