
Mercury Coder: Pricing, Context Window & Benchmarks

by Inception

Mercury is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like GPT-4.1 Nano and Claude 3.5 Haiku while matching their performance. Mercury's speed lets developers build responsive user experiences, including voice agents, search interfaces, and chatbots. Read more in the [blog post](https://www.inceptionlabs.ai/blog/introducing-mercury).
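Since the specs and pricing on this page are sourced from OpenRouter, one way to try the model programmatically is through OpenRouter's OpenAI-compatible chat completions endpoint. The sketch below only builds the request payload; the model slug `inception/mercury-coder` is an assumption, so verify it against the actual OpenRouter listing before use.

```python
import json

# Hypothetical model slug on OpenRouter; verify against the live listing.
MODEL = "inception/mercury-coder"
ENDPOINT = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat completions payload for Mercury Coder."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Explain diffusion LLMs in two sentences.")
print(json.dumps(payload, indent=2))
# POSTing this to ENDPOINT requires an "Authorization: Bearer <API key>" header.
```

Because the schema is OpenAI-compatible, the same payload works with any OpenAI-style client pointed at OpenRouter's base URL.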

| Spec | Value |
| --- | --- |
| Input Price | $0.25/1M tokens |
| Output Price | $1.00/1M tokens |
| Context Window | 128,000 tokens |
| Modalities | text |

What you can do with Mercury Coder

Everyday Q&A and clear explanations

Writing help (emails, posts, summaries)

Idea generation and brainstorming

Learning support with step-by-step guidance

Benchmarks not available

This model isn't listed on Artificial Analysis yet. Showing OpenRouter specs below.

| Metric | Value |
| --- | --- |
| Provider | Inception |
| Context Window | 128,000 tokens |
| Input Price | $0.25/1M tokens |
| Output Price | $1.00/1M tokens |
| Release Date | Apr 30, 2025 |
| Modalities | text |
| Capabilities | N/A |

Compare Mercury Coder to other models

See how it stacks up on price, quality, and overall performance.

Frequently asked questions

What is Mercury Coder good for?

Use Mercury Coder for everyday tasks like writing, summarizing, brainstorming, and getting clear explanations.

How much does Mercury Coder cost?

Pricing is based on usage. Current rates are $0.25/1M tokens for input and $1.00/1M tokens for output.
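As a quick sanity check on these rates, the cost of a single request can be estimated from its token counts. A minimal sketch (the token counts in the example are made up for illustration):

```python
# Mercury Coder rates from this page, expressed per token.
INPUT_PRICE = 0.25 / 1_000_000   # $ per input token
OUTPUT_PRICE = 1.00 / 1_000_000  # $ per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in dollars for one request at Mercury Coder's rates."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# e.g. a 2,000-token prompt with a 500-token reply:
cost = request_cost(2_000, 500)
print(f"${cost:.6f}")  # 2000 * $0.25/1M + 500 * $1.00/1M = $0.001
```

Note that output tokens cost 4x more than input tokens at these rates, so long completions dominate the bill.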

Can I try Mercury Coder for free?

Yes. You can start a chat instantly and test the model before deciding on a plan.

Does Mercury Coder support images or audio?

No. Mercury Coder supports text only; it does not accept image or audio inputs.

Pricing, context, and capability data are sourced from OpenRouter.