Gemma 2 27B

Google

Google's largest Gemma model, delivering near-Llama-70B quality at well under half the size. Excellent efficiency-to-performance ratio for local deployment.

Parameters 27B
Min VRAM 18 GB
Recommended VRAM 24 GB
Context Length 8K
License Gemma

🚀 Get Started

Run Gemma 2 27B locally with one command:

ollama run gemma2:27b

Requires Ollama installed.
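If you'd rather call the model programmatically, Ollama also exposes a local REST server (by default at http://localhost:11434) with a /api/generate endpoint. A minimal sketch using only the Python standard library; it assumes Ollama is already running and `gemma2:27b` has been pulled, and the prompt text is purely illustrative:

```python
import json
import urllib.request

def build_generate_request(prompt: str) -> dict:
    # Payload for Ollama's /api/generate endpoint.
    # "stream": False requests one complete JSON response instead of chunks.
    return {"model": "gemma2:27b", "prompt": prompt, "stream": False}

def generate(prompt: str, host: str = "http://localhost:11434") -> str:
    # Requires a running `ollama serve` with gemma2:27b pulled.
    payload = json.dumps(build_generate_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (needs a live Ollama server):
# print(generate("Explain attention in one sentence."))
```

The same payload shape works with any HTTP client; `stream=True` (the server default) instead yields newline-delimited JSON chunks as tokens arrive.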

📊 Benchmarks

| Benchmark | Score |
|-----------|-------|
| MMLU      | 75.2  |
| GSM8K     | 80.8  |
| HumanEval | 68.4  |

💻 Hardware Recommendations

🟢 Minimum

18 GB VRAM GPU or 36+ GB RAM (CPU mode)

Expect slower generation in CPU mode

🔵 Recommended

24 GB VRAM GPU

Fast generation with room for context
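The VRAM figures above roughly follow from parameter count times bytes per weight, plus headroom for activations, KV cache, and runtime buffers. A back-of-the-envelope sketch; the ~30% overhead factor is an assumption for illustration, not a measured value:

```python
def estimate_vram_gb(params_b: float, bits_per_weight: int,
                     overhead: float = 0.3) -> float:
    """Rough VRAM estimate: quantized weights plus a fractional
    overhead for activations, KV cache, and buffers (assumed)."""
    weights_gb = params_b * bits_per_weight / 8  # billions of params -> GB
    return round(weights_gb * (1 + overhead), 1)

# 27B at 4-bit quantization (a common Ollama default):
# weights alone are 27 * 4 / 8 = 13.5 GB; with overhead this lands
# near the 18 GB minimum quoted above.
print(estimate_vram_gb(27, 4))
```

At 8-bit the same arithmetic exceeds 24 GB, which is why 4-bit quantization is the practical choice for a single consumer GPU.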

Best For

chat, reasoning, coding, general
