
Inception: Mercury Coder: Pricing, Context Window & Benchmarks

by Inception

Mercury Coder is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like Claude 3.5 Haiku and GPT-4o Mini while matching their performance. Mercury Coder's speed lets developers stay in the flow while coding, with rapid chat-based iteration and responsive code-completion suggestions. On Copilot Arena, Mercury Coder ranks 1st in speed and ties for 2nd in quality. Read more in the [blog post here](https://www.inceptionlabs.ai/blog/introducing-mercury).

- **Input Price:** $0.25/1M tokens
- **Output Price:** $1.00/1M tokens
- **Context Window:** 128,000 tokens
- **Modalities:** text
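The per-token rates above translate directly into a per-request cost. A minimal sketch of that arithmetic (the helper name is illustrative, not part of any official SDK; rates are the listed $0.25/1M input and $1.00/1M output):

```python
# Estimate a Mercury Coder request cost from the listed OpenRouter rates.
INPUT_PRICE_PER_M = 0.25   # USD per 1M input tokens (listed rate)
OUTPUT_PRICE_PER_M = 1.00  # USD per 1M output tokens (listed rate)

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion
print(round(estimate_cost_usd(2_000, 500), 6))  # → 0.001
```

At these rates, a million such requests would cost about $1,000, which is the kind of back-of-envelope check worth doing before batch workloads.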

What you can do with Inception: Mercury Coder

Everyday Q&A and clear explanations

Writing help (emails, posts, summaries)

Idea generation and brainstorming

Learning support with step-by-step guidance

Benchmarks not available

This model isn't listed on Artificial Analysis yet. Showing OpenRouter specs below.

| Metric | Value |
| --- | --- |
| Provider | Inception |
| Context Window | 128,000 tokens |
| Input Price | $0.25/1M tokens |
| Output Price | $1.00/1M tokens |
| Release Date | Apr 30, 2025 |
| Modalities | text |
| Capabilities | Function Calling, Structured Outputs, JSON Mode |
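The JSON Mode capability is exposed through OpenRouter's OpenAI-compatible chat completions API. A sketch of a request payload enabling it — the model slug `inception/mercury-coder` is an assumption here and may differ from the exact id on OpenRouter's model page:

```python
import json

# Sketch of an OpenAI-compatible chat request enabling JSON mode.
# Assumption: the OpenRouter model slug is "inception/mercury-coder";
# check the model page for the exact id before use.
payload = {
    "model": "inception/mercury-coder",
    "messages": [
        {"role": "system", "content": "Reply with a JSON object."},
        {"role": "user", "content": "List three Python web frameworks."},
    ],
    # JSON mode: constrains the completion to be valid JSON.
    "response_format": {"type": "json_object"},
}

# This payload would be POSTed to
# https://openrouter.ai/api/v1/chat/completions
# with an "Authorization: Bearer <OPENROUTER_API_KEY>" header.
print(json.dumps(payload, indent=2))
```

Function calling follows the same OpenAI-compatible shape, with a `tools` array in place of (or alongside) `response_format`.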

Compare Inception: Mercury Coder to other models

See how it stacks up on price, quality, and overall performance.

Frequently asked questions

What is Inception: Mercury Coder good for?

Use Inception: Mercury Coder for code completion and chat-based coding help, as well as everyday tasks like writing, summarizing, brainstorming, and getting clear explanations.

How much does Inception: Mercury Coder cost?

Pricing is based on usage. Current rates are $0.25/1M tokens for input and $1.00/1M tokens for output.

Can I try Inception: Mercury Coder for free?

Yes. You can start a chat instantly and test the model before deciding on a plan.

Does Inception: Mercury Coder support images or audio?

No. Inception: Mercury Coder is text-only; it does not accept image or audio input.

Pricing, context, and capability data are sourced from OpenRouter.