LFM2-8B-A1B: Pricing, Context Window & Benchmarks
8B parameters · by Liquid AI
LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI’s LFM2 family, built for fast, high-quality inference on edge hardware. It uses 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low. This makes it well suited to phones, tablets, and laptops.
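The "8.3B total, ~1.5B active" split comes from sparse expert routing: for each token, a small router picks only a few expert sub-networks to run, so most parameters sit idle on any given step. The sketch below illustrates generic top-k MoE routing in plain Python; the expert count, `k`, and router shown here are illustrative assumptions, not LFM2's actual configuration.

```python
import math

def top_k_route(logits, k=2):
    """Select the k experts with the highest router logits and
    softmax-normalize their weights over just those k."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = [math.exp(logits[i]) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

def moe_layer(token, experts, router, k=2):
    """Run only the k selected experts and mix their outputs.
    Per-token compute scales with k active experts, not the full
    expert count -- the mechanism behind "1.5B active of 8.3B total".
    """
    routes = top_k_route(router(token), k)
    return sum(weight * experts[i](token) for i, weight in routes)
```

With toy scalar "experts" (e.g. `lambda x: 4 * x`) and a router that strongly favors one expert, `moe_layer` returns a weighted mix dominated by that expert while never evaluating the others that weren't selected.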
What you can do with LFM2-8B-A1B
- Everyday Q&A and clear explanations
- Writing help (emails, posts, summaries)
- Idea generation and brainstorming
- Learning support with step-by-step guidance
Key Specifications
| Metric | Value |
|---|---|
| Provider | LiquidAI |
| Context Window | 32,768 tokens |
| Input Price | $0.00/1M tokens |
| Output Price | $0.00/1M tokens |
| Release Date | Oct 7, 2025 |
| Modalities | text |
| Capabilities | N/A |
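The 32,768-token context window bounds the prompt plus the generated output combined. A quick pre-flight check is to estimate prompt length and reserve an output budget before sending a request. The snippet below is a minimal sketch; the ~4-characters-per-token ratio is a rough English-text heuristic, not LFM2's actual tokenizer.

```python
import math

CONTEXT_WINDOW = 32_768  # LFM2-8B-A1B context window, in tokens

def fits_in_context(prompt: str, max_output_tokens: int,
                    chars_per_token: float = 4.0) -> bool:
    """Rough check that a prompt plus a reserved output budget fits in
    the context window. chars_per_token is a heuristic estimate only;
    use the model's real tokenizer for an exact count."""
    est_prompt_tokens = math.ceil(len(prompt) / chars_per_token)
    return est_prompt_tokens + max_output_tokens <= CONTEXT_WINDOW
```

For exact counts, tokenize the prompt with the model's own tokenizer instead of estimating from character length.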
Compare LFM2-8B-A1B to other models
See how it stacks up on price, quality, and overall performance.
Frequently asked questions
What is LFM2-8B-A1B good for?
Use LFM2-8B-A1B for everyday tasks like writing, summarizing, brainstorming, and getting clear explanations.
How much does LFM2-8B-A1B cost?
Pricing is usage-based, but current rates are $0.00 per 1M tokens for both input and output, so usage is effectively free at listed rates.
Can I try LFM2-8B-A1B for free?
Yes. You can start a chat instantly and test the model before deciding on a plan.
Does LFM2-8B-A1B support images or audio?
No. LFM2-8B-A1B is a text-only model; it does not accept image or audio input.
Benchmarks and pricing are sourced from Artificial Analysis where available. OpenRouter specs are used as a fallback.