Mistral 7B Instruct v0.2
7B by Mistral
A high-performing, industry-standard 7.3B-parameter model, optimized for speed and context length. An improved version of [Mistral 7B Instruct](/models/mistralai/mistral-7b-instruct-v0.1), with the following changes:
- 32k context window (vs. 8k in v0.1)
- Rope-theta = 1e6
- No sliding-window attention
Pricing
Input Tokens
Per 1M tokens
Free
Output Tokens
Per 1M tokens
Free
Supported Modalities
Input
text
Output
text
Performance Benchmarks
Intelligence Index
Overall intelligence score
1.0
GPQA
Graduate-level questions
17.7%
MMLU Pro
Multitask language understanding
24.5%
HLE
Humanity's Last Exam
4.3%
LiveCodeBench
Real-world coding tasks
4.6%
MATH 500
Mathematical problem solving
12.1%
Specifications
- Context Length
- 33K tokens
- Provider
- Mistral
- Throughput
- ~120 tokens/s
- Released
- Dec 28, 2023
- Model ID
- mistralai/mistral-7b-instruct-v0.2
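When running this model locally rather than through a hosted chat, prompts must follow the `[INST]` instruction template that Mistral 7B Instruct models are fine-tuned on. The sketch below is a minimal, hand-rolled formatter for that template; the function name and turn structure are illustrative, and in practice a tokenizer's built-in chat template would handle this.

```python
def build_mistral_prompt(turns):
    """Format a conversation into the [INST] template used by
    Mistral 7B Instruct models.

    `turns` is a list of (user, assistant) pairs; the assistant
    entry is None for the final, unanswered user message.
    """
    prompt = "<s>"  # BOS token opens the sequence
    for user, assistant in turns:
        # Each user message is wrapped in [INST] ... [/INST]
        prompt += f"[INST] {user} [/INST]"
        if assistant is not None:
            # Completed assistant replies are closed with </s>
            prompt += f" {assistant}</s>"
    return prompt

# Single-turn prompt awaiting a completion:
print(build_mistral_prompt([("What is the capital of France?", None)]))
# Multi-turn conversation with one completed exchange:
print(build_mistral_prompt([
    ("Hi", "Hello! How can I help?"),
    ("Summarize RoPE in one line.", None),
]))
```

With a 32k context window, fairly long multi-turn histories fit in a single prompt, but the template above must still be applied to the whole conversation on every request.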
Ready to try it?
Start chatting with Mistral 7B Instruct v0.2 right now. No credit card required.