
Qwen3 235B A22B Instruct 2507

235B parameters, by Qwen

Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model built on the Qwen3-235B architecture, activating 22B parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model natively supports a 262K context length and does not implement "thinking mode" (it produces no <think> blocks). Compared to the base Qwen3-235B-A22B variant, this release delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment with user preferences on open-ended and subjective tasks. It is particularly strong on multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations such as Arena-Hard and WritingBench.

Pricing

Input Tokens: Free (per 1M tokens)
Output Tokens: Free (per 1M tokens)
Image Processing: $0.00 per 1M tokens

Supported Modalities

Input: text
Output: text

Specifications

Context Length: 131K tokens (as listed here; the model natively supports 262K)
Provider: Qwen
Released: Jul 21, 2025
Model ID: qwen/qwen3-235b-a22b-2507
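
For reference, below is a minimal sketch of calling this model through an OpenAI-compatible chat-completions endpoint using the Model ID above. The base URL and the API-key environment variable are placeholders for whichever provider hosts the model, and the openai Python client (v1+) is assumed; this listing does not prescribe a specific SDK.

```python
# Minimal sketch: calling qwen/qwen3-235b-a22b-2507 through an
# OpenAI-compatible chat-completions API. The base_url and the
# PROVIDER_API_KEY name are placeholders, not documented values.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder endpoint
    api_key=os.environ["PROVIDER_API_KEY"],          # placeholder key variable
)

response = client.chat.completions.create(
    model="qwen/qwen3-235b-a22b-2507",  # Model ID from the specifications above
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the key ideas behind mixture-of-experts models."},
    ],
    max_tokens=512,
)

# The instruct model answers directly, with no <think> blocks to strip.
print(response.choices[0].message.content)
```

Because this is the non-thinking instruct variant, the reply in response.choices[0].message.content can be used as-is, with no reasoning-block parsing step.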

Ready to try it?

Start chatting with Qwen3 235B A22B Instruct 2507 right now. No credit card required.

