
Ollama


Run large language models locally

Tool · Open Source (MIT)
LLMs & SLMs · Infrastructure · #self-hosted #local #inference #gpu

Ollama is an easy way to run large language models locally. It bundles model weights, configuration, and data into a single package defined by a Modelfile. It supports hundreds of models, including Llama, Qwen, Mistral, Gemma, and Phi, provides an OpenAI-compatible API with GPU acceleration, and runs on macOS, Linux, and Windows.
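As a sketch of the Modelfile format mentioned above: a Modelfile declares a base model plus optional parameters and a system prompt. The model name, parameter value, and prompt below are illustrative assumptions, not taken from this listing.

```
# Hypothetical Modelfile — model name, parameter, and prompt are examples only
FROM llama3.2                      # base model to build on
PARAMETER temperature 0.7          # sampling temperature
SYSTEM "You are a concise assistant."
```

A custom model built from such a file is typically created and run with `ollama create my-assistant -f Modelfile` followed by `ollama run my-assistant`.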

Added 3/14/2026
