Gemma 2 2B
Google's ultra-lightweight model that runs anywhere. Ideal for on-device AI, edge computing, and rapid prototyping with surprisingly good quality.
Parameters 2B
Min VRAM 2 GB
Recommended VRAM 4 GB
Context Length 8K
License Gemma
🚀 Get Started
Run Gemma 2 2B locally with one command:
ollama run gemma2:2b
Requires Ollama installed.
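Beyond the interactive CLI, a running Ollama server exposes a local REST API (default port 11434). The sketch below builds a request for Ollama's /api/generate endpoint and sends it from Python's standard library; the `generate` helper is illustrative, not part of Ollama itself, and assumes the model has already been pulled with the command above.

```python
import json
import urllib.request

# Local Ollama server endpoint (default port); assumes Ollama is running
# and `ollama run gemma2:2b` has already pulled the model.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "gemma2:2b") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a token stream
    }

def generate(prompt: str) -> str:
    """Send the prompt to a running Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, `generate("Why is the sky blue?")` returns the model's answer as a single string; set `"stream": True` instead to receive tokens incrementally.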
📊 Benchmarks
Benchmark Score
MMLU 51.3
GSM8K 26.7
HumanEval 17.7
💻 Hardware Recommendations
🟢 Minimum
2 GB VRAM GPU or 4+ GB RAM (CPU mode)
Expect noticeably slower generation in CPU mode
🔵 Recommended
4 GB VRAM GPU
Fast generation with room for context
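The VRAM figures above can be sanity-checked with back-of-the-envelope arithmetic: weight memory is roughly parameter count times bits per weight, divided by 8. The sketch below uses the model card's 2B parameter figure; it counts weights only, so real usage (KV cache, activations, runtime overhead) is higher, which is why the recommended tier leaves headroom.

```python
def approx_weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Rough weight-only memory estimate in GB: params * bits / 8.

    Ignores KV cache and runtime overhead, so actual usage is higher.
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# Illustrative figures for a 2B-parameter model:
# 4-bit quantized  -> ~1 GB of weights (fits the 2 GB minimum with headroom)
# 16-bit (fp16)    -> ~4 GB of weights (needs the recommended tier or more)
```

This is why quantized builds (such as Ollama's default 4-bit download) are what make the 2 GB minimum workable.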