AI Agent Directory

The home for AI agents, frameworks, and tools. Discover what's next.

© 2026 AI Agent Directory

llama.cpp

C/C++ engine for running LLMs on consumer hardware

Tool · Free
Infrastructure · #open-source #local #inference #quantization

llama.cpp is the foundational C/C++ project that made it practical to run large language models on consumer hardware. It implements efficient quantized inference using the GGUF model format, running LLMs on CPUs, Apple Silicon, and consumer GPUs. The project spawned an entire ecosystem of local AI tools and remains the performance baseline for edge LLM deployment.
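As a rough sketch of the workflow the listing describes, a typical local run builds the project and points the CLI at a quantized GGUF file (binary and flag names reflect llama.cpp's current CMake build and `llama-cli` tool; the model path is a placeholder, not a file shipped with the repo):

```shell
# Clone and build llama.cpp (CMake is the supported build system)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Run inference with a quantized GGUF model on CPU / Apple Silicon / GPU.
# "model-Q4_K_M.gguf" is a placeholder -- supply any downloaded GGUF file.
./build/bin/llama-cli -m ./models/model-Q4_K_M.gguf -p "Hello" -n 64
```

Quantized variants such as Q4_K_M are what make CPU and edge inference feasible: a 4-bit model is roughly a quarter the size of its 16-bit original, trading a small accuracy loss for much lower memory use.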

Visit Website → · GitHub
0 views · 0 clicks · Added 3/14/2026

Reviews

No reviews yet. Be the first!

