LFM2-24B-A2B: Pricing, Context Window & Benchmarks
24B · by LiquidAI
LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.
What you can do with LFM2-24B-A2B
Everyday Q&A and clear explanations
Writing help (emails, posts, summaries)
Idea generation and brainstorming
Learning support with step-by-step guidance
Benchmarks not available
This model isn't listed on Artificial Analysis yet; the OpenRouter specs are shown below instead.
| Metric | Value |
|---|---|
| Provider | LiquidAI |
| Context Window | 32,768 tokens |
| Input Price | $0.03/1M tokens |
| Output Price | $0.12/1M tokens |
| Release Date | Feb 25, 2026 |
| Modalities | text |
| Capabilities | N/A |
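Since the specs above come from OpenRouter, the model can be called through OpenRouter's OpenAI-compatible chat completions endpoint. The sketch below builds a request payload; the model slug `liquidai/lfm2-24b-a2b` is an assumption here, so check OpenRouter's model list for the exact ID before use.

```python
import json

# Assumed model slug -- verify against OpenRouter's model list.
MODEL = "liquidai/lfm2-24b-a2b"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat completion payload for OpenRouter."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = json.dumps(build_chat_request("Summarize this email in two sentences."))
# POST this payload to https://openrouter.ai/api/v1/chat/completions
# with an "Authorization: Bearer <OPENROUTER_API_KEY>" header.
```

The payload shape follows the OpenAI chat format that OpenRouter accepts; only the authorization header and base URL differ from a direct OpenAI call.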
Frequently asked questions
What is LFM2-24B-A2B good for?
Use LFM2-24B-A2B for everyday tasks like writing, summarizing, brainstorming, and getting clear explanations.
How much does LFM2-24B-A2B cost?
Pricing is based on usage. Current rates are $0.03/1M tokens for input and $0.12/1M tokens for output.
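At these per-million-token rates, per-request cost is simple arithmetic. A minimal sketch, using the listed prices as defaults:

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price_per_m: float = 0.03,
                      output_price_per_m: float = 0.12) -> float:
    """Estimate the cost of one request from token counts,
    given per-million-token prices in USD."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# A 2,000-token prompt with a 500-token reply costs about $0.00012.
cost = estimate_cost_usd(2_000, 500)
```

Token counts vary by tokenizer, so treat the result as an estimate; the API response's usage field reports the exact counts billed.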
Can I try LFM2-24B-A2B for free?
Yes. You can start a chat instantly and test the model before deciding on a plan.
Does LFM2-24B-A2B support images or audio?
No. LFM2-24B-A2B is a text-only model; it does not accept image or audio input.
Pricing, context, and capability data are sourced from OpenRouter.