Llama 3.2 3B
Meta
Small but mighty model that punches above its weight. Runs on almost any hardware and handles conversational AI, summarization, and simple coding tasks well.
Parameters 3B
Min VRAM 3 GB
Recommended VRAM 4 GB
Context Length 128K
License Llama 3.2 Community
🚀 Get Started
Run Llama 3.2 3B locally with one command:
```
ollama run llama3.2:3b
```
Requires Ollama to be installed.
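Besides the CLI, Ollama also serves a local REST API (by default on port 11434). A minimal Python sketch of building a request for its `/api/generate` endpoint, assuming an Ollama server is already running locally; the helper name is illustrative:

```python
import json
import urllib.request

def build_generate_request(prompt, model="llama3.2:3b"):
    """Build the JSON payload for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of a
    stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

# To actually send it (requires Ollama running on localhost:11434):
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps(build_generate_request("Why is the sky blue?")).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```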
📊 Benchmarks
| Benchmark | Score |
| --- | --- |
| MMLU | 45.8 |
| GSM8K | 48.3 |
| HumanEval | 36.5 |
💻 Hardware Recommendations
🟢 Minimum
3 GB VRAM GPU or 6+ GB RAM (CPU mode)
Expect slower generation in CPU mode
🔵 Recommended
4 GB VRAM GPU
Fast generation with room for context
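As a rough back-of-envelope check on these VRAM figures, weight memory scales with parameter count times bits per weight. A sketch under stated assumptions: the 4.5 bits/weight figure approximates a common ~4-bit quantization (including metadata overhead) and is not from the source, and KV cache plus runtime overhead come on top of the weights:

```python
def estimate_weight_gb(params_billion, bits_per_weight):
    """Weight-only memory in decimal GB: params * (bits / 8) bytes each."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# FP16 weights for a 3B model: too big for the 3-4 GB targets above.
print(round(estimate_weight_gb(3.0, 16), 2))   # → 6.0

# ~4-bit quantization (assumed ~4.5 bits/weight incl. metadata):
# leaves headroom for KV cache and runtime overhead within 3 GB VRAM.
print(round(estimate_weight_gb(3.0, 4.5), 2))  # → 1.69
```

This is why the card can list a 3 GB minimum: quantized weights fit comfortably, with the remainder available for context.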
Related Comparisons
Gemma 2 vs Llama 3.2
Google vs Meta in the small model arena. Gemma 2 offers research-grade quality w...
Mistral 7B vs Llama 3.2 3B
Two popular small models compared: Mistral's efficient 7B vs Meta's tiny-but-cap...
Phi-4 vs Llama 3.2
Microsoft's STEM-focused 14B vs Meta's lightweight 3B. Two different philosophie...