A high-performing, industry-standard 7.3B-parameter model, optimized for speed and context length. An improved version of [Mistral 7B Instruct](/models/mistralai/mistral-7b-instruct-v0.1), with the following changes:
- 32k context window (vs 8k context in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention
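The rope-theta change above is what stretches the usable context: in rotary position embeddings (RoPE), each pair of head dimensions rotates at a rate set by theta, and raising theta from the v0.1 default of 1e4 to 1e6 lengthens the positional "wavelengths" so distant tokens remain distinguishable. A minimal sketch of that effect, assuming a head dimension of 128 (not stated in this card):

```python
import math

def rope_inv_freq(theta: float, head_dim: int = 128):
    """Inverse rotation frequencies for rotary position embeddings (RoPE).

    Dimension pair i rotates at rate theta**(-2*i/head_dim); a larger theta
    slows the low-frequency pairs, stretching positional wavelengths.
    head_dim=128 is an assumption for illustration, not from this card.
    """
    return [theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]

old = rope_inv_freq(1e4)  # v0.1 default theta
new = rope_inv_freq(1e6)  # v0.2 theta

# Wavelength (in tokens) of the slowest-rotating dimension pair: how far
# apart two positions can be before their lowest-frequency phases wrap.
old_span = 2 * math.pi / old[-1]
new_span = 2 * math.pi / new[-1]
print(f"slowest wavelength: {old_span:,.0f} -> {new_span:,.0f} tokens")
```

With the larger theta, the slowest wavelength grows by roughly two orders of magnitude, which is what lets the model drop sliding-window attention and still attend across the full 32k window.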
Capabilities: Text Generation, 32K Context