Code Llama 34B

Meta

Meta's specialized coding model built on Llama 2. Excels at code generation, completion, and debugging across multiple programming languages.

Parameters: 34B
Min VRAM: 20 GB
Recommended VRAM: 24 GB
Context Length: 16K tokens
License: Llama 2 Community License

🚀 Get Started

Run Code Llama 34B locally with one command:

ollama run codellama:34b

Requires Ollama installed.
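Once Ollama is running, the same model can also be queried over Ollama's local HTTP API (port 11434 by default). A minimal Python sketch, assuming `codellama:34b` has already been pulled; the prompt string is just an illustrative example:

```python
import json
import urllib.request

# Request payload for Ollama's /api/generate endpoint.
# "stream": False returns the full completion in a single JSON response.
payload = {
    "model": "codellama:34b",
    "prompt": "Write a Python function that reverses a string.",
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    # Requires a running Ollama server with the model pulled.
    with urllib.request.urlopen(req, timeout=30) as resp:
        print(json.loads(resp.read())["response"])
except OSError:
    print("Ollama server not reachable on localhost:11434")
```

Setting `"stream": True` instead yields one JSON object per generated token, which is better suited to interactive use.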

📊 Benchmarks

Benchmark | Score (pass@1)
HumanEval 48.8
MBPP 55.0
MultiPL-E 45.1

💻 Hardware Recommendations

🟢 Minimum

20 GB VRAM GPU or 40+ GB RAM (CPU mode)

Expect slower generation in CPU mode

🔵 Recommended

24 GB VRAM GPU

Fast generation with room for context
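The VRAM figures above follow from simple arithmetic: each of the 34 billion parameters takes a fixed number of bits depending on quantization, plus overhead for the KV cache and activations (which is why the minimum is above the raw 4-bit weight size). A rough sketch, counting weights only and taking 1 GB = 10^9 bytes:

```python
def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory for the model weights alone, in GB (10^9 bytes)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# 34B parameters at common precisions (weights only, before KV cache/overhead):
for label, bits in [("fp16", 16), ("int8", 8), ("4-bit", 4)]:
    print(f"{label}: ~{weight_memory_gb(34, bits):.0f} GB")
# fp16: ~68 GB, int8: ~34 GB, 4-bit: ~17 GB
```

At 4-bit quantization (the precision Ollama's default tags typically ship) the weights alone come to roughly 17 GB, consistent with the 20 GB minimum once runtime overhead is added; fp16 weights would need about 68 GB and are impractical on a single consumer GPU.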

Best For

coding, code completion, debugging
