Llama 3.2 1B

Meta

Ultra-lightweight model suited to edge devices, mobile phones, and IoT hardware. Surprisingly capable for its size, it handles simple chat and classification tasks well.

Parameters 1B
Min VRAM 2 GB
Recommended VRAM 4 GB
Context Length 128K
License Llama 3.2 Community

🚀 Get Started

Run Llama 3.2 1B locally with one command:

ollama run llama3.2:1b

Requires Ollama installed.
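Beyond the interactive CLI, a running Ollama instance also exposes a local HTTP API (by default at http://localhost:11434). A minimal sketch of building a request payload for its /api/generate endpoint — the classification prompt here is a hypothetical example:

```python
import json

# Payload for Ollama's /api/generate endpoint (hypothetical example prompt).
# "stream": False asks for a single JSON response instead of a token stream.
payload = {
    "model": "llama3.2:1b",
    "prompt": "Classify the sentiment of: 'Great battery life!' Answer with one word.",
    "stream": False,
}

body = json.dumps(payload)
# Send it with, e.g.:
#   curl http://localhost:11434/api/generate -d @- <<< "$body"
print(body)
```

The Ollama server must already be running (it starts automatically with the desktop app, or via `ollama serve`).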

📊 Benchmarks

Benchmark Score
MMLU 32.2
GSM8K 33.1
HumanEval 28.0

💻 Hardware Recommendations

🟢 Minimum

2 GB VRAM GPU or 4+ GB RAM (CPU mode)

Expect slower generation in CPU mode

🔵 Recommended

4 GB VRAM GPU

Fast generation with room for context
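The VRAM figures above can be sanity-checked with back-of-the-envelope arithmetic: weight memory is roughly parameter count times bytes per parameter, and quantization shrinks it. This is a rough sketch (it ignores KV cache and runtime overhead, which is why the real minimum is higher than the weights alone):

```python
# Approximate weight memory for a 1B-parameter model at common precisions.
# Bytes-per-parameter values are typical figures, not exact format sizes.
PARAMS = 1_000_000_000

bytes_per_param = {
    "fp16 (full precision)": 2.0,
    "q8 (8-bit quant)": 1.0,
    "q4 (4-bit quant)": 0.5,
}

for name, b in bytes_per_param.items():
    gib = PARAMS * b / 1024**3
    print(f"{name}: ~{gib:.2f} GiB for weights alone")
```

At 4-bit quantization the weights fit in well under 1 GiB, which is how the model squeezes into a 2 GB card once context and overhead are added on top.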

Best For

chat, classification, edge
