Mistral 7B

Mistral AI

The model that proved small models can compete. Mistral 7B uses sliding window attention to handle long contexts efficiently and grouped-query attention to speed up inference, delivering strong quality for its size.
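To make the sliding-window idea concrete, here is a minimal sketch of the attention mask it implies: each token attends only to the previous `window` tokens instead of the full prefix (Mistral 7B uses a window of 4096; the tiny sizes below are for illustration only).

```python
def sliding_window_mask(seq_len: int, window: int) -> list[list[bool]]:
    # True where query position i may attend to key position j:
    # causal (j <= i) and within the last `window` positions.
    return [[max(0, i - window + 1) <= j <= i for j in range(seq_len)]
            for i in range(seq_len)]

mask = sliding_window_mask(8, window=4)
assert mask[7][4] and mask[7][7]   # recent tokens are visible
assert not mask[7][3]              # older tokens fall outside the window
assert mask[0][0]                  # first token attends to itself
```

Because each row has at most `window` True entries, attention cost grows linearly with sequence length rather than quadratically.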

Parameters 7B
Min VRAM 6 GB
Recommended VRAM 8 GB
Context Length 32K
License Apache 2.0

🚀 Get Started

Run Mistral 7B locally with one command:

ollama run mistral

Requires Ollama installed.
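Once the Ollama daemon is running, you can also call the model programmatically through its local REST API (default port 11434). A minimal sketch using only the Python standard library; the prompt text is just an example:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(prompt: str, model: str = "mistral") -> bytes:
    # stream=False asks for a single JSON object instead of streamed chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt: str, model: str = "mistral") -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running Ollama daemon with the model pulled:
# print(generate("Summarize sliding window attention in one sentence."))
```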

📊 Benchmarks

| Benchmark | Score |
|-----------|-------|
| MMLU      | 62.5  |
| GSM8K     | 52.2  |
| HumanEval | 30.5  |

💻 Hardware Recommendations

🟢 Minimum

6 GB VRAM GPU or 12+ GB RAM (CPU mode)

Expect slower generation in CPU mode

🔵 Recommended

8 GB VRAM GPU

Fast generation with room for context
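The VRAM figures above follow from a back-of-envelope rule: weight memory is roughly parameter count times bytes per parameter, and quantized builds (e.g. 4-bit) shrink it accordingly. A rough sketch of the arithmetic; real usage adds KV cache and runtime overhead on top:

```python
def weight_memory_gb(params_billion: float, bits_per_param: float) -> float:
    # Memory for the weights alone; KV cache and activations add more on top.
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# Mistral 7B at common precisions (weights only, rough figures):
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_memory_gb(7, bits):.1f} GB")
```

At 4-bit quantization the weights come to roughly 3.5 GB, which is why a 6 GB card clears the minimum with room left for context.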

Best For

chat, coding, instruction-following
