# llama-cpp

Here are 116 public repositories matching this topic...

A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!

  • Updated Apr 23, 2024
  • TypeScript

A C#/.NET library to run LLM (🦙LLaMA/LLaVA) on your local device efficiently.

  • Updated Apr 29, 2025
  • C#
maid

Maid is a cross-platform Flutter app for interfacing with GGUF / llama.cpp models locally, and with Ollama and OpenAI models remotely.

  • Updated Apr 29, 2025
  • Dart
node-llama-cpp

Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.

  • Updated Mar 28, 2025
  • TypeScript

llama.go is like llama.cpp in pure Golang!

  • Updated Sep 20, 2024
  • Go

prima.cpp: Speeding up 70B-scale LLM inference on low-resource everyday home clusters

  • Updated Apr 28, 2025
  • C++
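Running a 70B-scale model on a cluster of mismatched home machines means deciding how many transformer layers each device hosts. As a rough illustration of that partitioning problem (this is not prima.cpp's actual scheduler, and the device sizes are made up), the simplest approach assigns layers in proportion to each device's free memory:

```python
# Toy proportional layer split across heterogeneous devices.
# NOT prima.cpp's algorithm - just the simplest assignment for illustration.

def split_layers(n_layers: int, free_mem_gib: list[float]) -> list[int]:
    total = sum(free_mem_gib)
    # Ideal fractional share per device, then round down...
    shares = [n_layers * m / total for m in free_mem_gib]
    counts = [int(s) for s in shares]
    # ...and hand leftover layers to the largest remainders so the sum is exact.
    by_remainder = sorted(range(len(shares)),
                          key=lambda i: shares[i] - counts[i], reverse=True)
    for i in by_remainder[: n_layers - sum(counts)]:
        counts[i] += 1
    return counts

# e.g. 80 layers (70B-class) over a 24 GiB desktop, 16 GiB laptop, 8 GiB mini PC
print(split_layers(80, [24.0, 16.0, 8.0]))  # prints [40, 27, 13]
```

A real scheduler also has to weigh compute speed and link bandwidth, not just memory, which is where projects like this one earn their speedups.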

Self-evaluating interview for AI coders

  • Updated Apr 24, 2025
  • Python

React Native binding of llama.cpp

  • Updated Mar 24, 2025
  • C++

llama.cpp Rust bindings

  • Updated Jun 27, 2024
  • Rust

This repo showcases how you can run a model locally and offline, free of OpenAI dependencies.

  • Updated Jul 12, 2024
  • Python

Run LLMs locally. A clojure wrapper for llama.cpp.

  • Updated Mar 29, 2025
  • Clojure

Booster - an open accelerator for LLMs. Better inference and debugging for AI hackers.

  • Updated Aug 15, 2024
  • C++

Review/check GGUF files and estimate their memory usage and maximum tokens per second.

  • Updated Apr 29, 2025
  • Go
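The memory estimate such a GGUF checker produces is, at its core, two terms: quantized weight size plus KV-cache size. A back-of-envelope Python sketch of that arithmetic follows - the formulas are standard, but the concrete numbers (an 8B-class Llama-style shape: 32 layers, 8 KV heads, head dim 128) are illustrative assumptions, not values read from a real GGUF file, and this is not the repo's actual code.

```python
# Back-of-envelope LLM memory estimate: weights + KV cache.
# Shapes below are assumed (Llama-3-8B-like), purely for illustration.

def weight_bytes(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory for a given quantization (e.g. Q4 ~ 4.5 bpw)."""
    return n_params * bits_per_weight / 8

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   n_ctx: int, bytes_per_elem: int = 2) -> int:
    """One K and one V tensor per layer, fp16 elements by default."""
    return 2 * n_layers * n_kv_heads * head_dim * n_ctx * bytes_per_elem

GIB = 1024 ** 3
weights = weight_bytes(8.0e9, 4.5)      # 8B params at ~4.5 bits/weight
kv = kv_cache_bytes(32, 8, 128, 8192)   # 8K context, GQA with 8 KV heads
print(f"weights ~ {weights / GIB:.1f} GiB, KV cache ~ {kv / GIB:.1f} GiB")
```

Note how grouped-query attention (8 KV heads rather than 32) keeps the 8K-context cache at exactly 1 GiB here; a real checker reads all of these dimensions from the GGUF metadata instead of assuming them.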
shady.ai

LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI.

  • Updated Jun 10, 2023
  • Python

Your customized AI assistant - personal assistants on any hardware! With llama.cpp, whisper.cpp, ggml, and LLaMA-v2.

  • Updated Dec 5, 2023
  • C++

A pure-Rust LLM inference engine (including LLM-based MLLMs such as Spark-TTS), powered by the Candle framework.

  • Updated Mar 26, 2025
  • Rust


