GLM-4.6 (Reasoning) vs Gemini 3 Flash Preview (Reasoning)

Comparing 2 AI models · 5 benchmarks · Z AI, Google

Most Affordable: Gemini 3 Flash Preview (Reasoning), $0.50/1M input
Highest Intelligence: Gemini 3 Flash Preview (Reasoning), 89.8% GPQA
Best for Coding: Gemini 3 Flash Preview (Reasoning), 41.0 Coding Index
Price Difference: 1.1x (input cost)

Composite Indices: Intelligence, Coding, Math
Standard Benchmarks: academic and industry benchmarks

Benchmark Winners (5 tests)

GLM-4.6 (Reasoning): 0 wins (no clear wins)

Gemini 3 Flash Preview (Reasoning): 5 wins
  • GPQA
  • MMLU Pro
  • HLE
  • LiveCodeBench
  • AIME 2025
Pricing (per 1M tokens)

| Metric | GLM-4.6 (Reasoning) · Z AI | Gemini 3 Flash Preview (Reasoning) · Google |
|---|---|---|
| Input Cost | $0.55/1M | $0.50/1M |
| Output Cost | $2.20/1M | $3.00/1M |
| Blended Cost (3:1 input/output ratio) | $0.96/1M | $1.13/1M |
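The blended figures follow from the stated 3:1 input/output token weighting; a minimal sketch of that arithmetic (assuming a simple weighted average, which reproduces the table's numbers after rounding):

```python
def blended_cost(input_per_m: float, output_per_m: float) -> float:
    """Blended $/1M tokens, assuming a 3:1 input:output token ratio."""
    return (3 * input_per_m + output_per_m) / 4

# GLM-4.6 (Reasoning):        (3 * 0.55 + 2.20) / 4 = 0.9625 -> ~$0.96/1M
# Gemini 3 Flash Preview:     (3 * 0.50 + 3.00) / 4 = 1.125  -> ~$1.13/1M
glm = blended_cost(0.55, 2.20)
gemini = blended_cost(0.50, 3.00)

# The "1.1x" price difference above is the input-cost ratio: 0.55 / 0.50
ratio = 0.55 / 0.50
```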
Specifications

| Metric | GLM-4.6 (Reasoning) | Gemini 3 Flash Preview (Reasoning) |
|---|---|---|
| Organization (model creator) | Z AI | Google |
| Release Date | Sep 30, 2025 | Dec 17, 2025 |
Performance & Speed

| Metric | GLM-4.6 (Reasoning) | Gemini 3 Flash Preview (Reasoning) |
|---|---|---|
| Throughput (output speed) | 133.6 tok/s | 217.7 tok/s |
| Time to First Token (initial response delay) | 502ms | 11638ms |
| Latency (time to first answer token) | 15477ms | 11638ms |
Composite Indices

| Metric | GLM-4.6 (Reasoning) | Gemini 3 Flash Preview (Reasoning) |
|---|---|---|
| Intelligence Index (overall reasoning capability) | 32.2 | 45.9 |
| Coding Index (programming ability) | 28.4 | 41.0 |
| Math Index (mathematical reasoning) | 86.0 | 97.0 |
Standard Benchmarks

| Benchmark | GLM-4.6 (Reasoning) | Gemini 3 Flash Preview (Reasoning) |
|---|---|---|
| GPQA (graduate-level reasoning) | 78.0% | 89.8% |
| MMLU Pro (advanced knowledge) | 82.9% | 89.0% |
| HLE (Humanity's Last Exam) | 13.3% | 34.7% |
| LiveCodeBench (real-world coding tasks) | 69.5% | 90.8% |
| MATH 500 (mathematical problems) | — | — |
| AIME 2025 (advanced math competition) | 86.0% | 97.0% |
| AIME (Original) (math olympiad problems) | — | — |
| SciCode (scientific code generation) | 38.4% | 50.6% |
| LCR (long-context reasoning) | 54.3% | 66.3% |
| IFBench (instruction following) | 43.4% | 78.0% |
| TAU-bench v2 (tool use & agentic tasks) | 70.5% | 80.4% |
| TerminalBench (hard CLI tasks) | — | — |