```mermaid
graph TD;
  ggml --> whisper.cpp
  ggml --> llama.cpp
  llama.cpp --> coding
  subgraph coding[Coding]
    llama.vim
    llama.vscode
    llama.qtcreator
  end
  ggml[<a href="https://github.com/ggml-org/ggml">ggml</a><br><span>Machine learning library</span>];
  whisper.cpp[<a href="https://github.com/ggml-org/whisper.cpp">whisper.cpp</a><br><span>speech-to-text</span>];
  llama.cpp[<a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a><br><span>LLM inference</span>];
  llama.vim[<a href="https://github.com/ggml-org/llama.vim">llama.vim</a><br><span>Vim/Neovim plugin</span>];
  llama.vscode[<a href="https://github.com/ggml-org/llama.vscode">llama.vscode</a><br><span>VS Code plugin</span>];
  llama.qtcreator[<a href="https://github.com/ggml-org/llama.qtcreator">llama.qtcreator</a><br><span>Qt Creator plugin</span>];
```

Recent news and coverage:

- [2025 Oct 28] ggml-org/llama.cpp featured in GitHub's Octoverse 2025 report as Top OSS by contributors
- [2025 Oct 21] NVIDIA RTX 5090 outperforms AMD and Apple running local OpenAI language models
- [2025 Sep 18] Latest Open-Source AMD Improvements Allowing For Better Llama.cpp AI Performance Against Windows 11
- [2025 Sep 09] Llama.cpp Meets Instinct: A New Era of Open-Source AI Acceleration
- [2025 Aug 19] Firefox 142 Allows Browser Extensions/Add-Ons To Use AI LLMs
- [2025 Aug 13] FFmpeg 8.0 Merges OpenAI Whisper Filter For Automatic Speech Recognition
- [2025 Jul 30] MLCommons Releases MLPerf Client v1.0: A New Standard for AI PC and Client LLM Benchmarking
- [2025 Jul 26] Shotcut 25.07 Video Editor Introduces Speech to Text Model Downloader
- [2025 Jul 10] Introducing LFM2: The Fastest On-Device Foundation Models on the Market
- [2025 Jun 26] Introducing Gemma 3n: The developer guide
- [2025 Jun 23] Running and optimizing small language models on-premises and at the edge
- [2025 Jun 10] Docker Model Runner adds Qualcomm support
- [2025 Jun 05] Run small language models cost-efficiently with AWS Graviton and Amazon SageMaker AI
- [2025 Jun 03] Try out Link Previews in Firefox Labs 138
- [2025 May 29] Llama.cpp and GGML are optimized for NVIDIA RTX GPUs and the fifth-generation Tensor Cores
- [2025 May 08] LM Studio Accelerates LLM Performance With NVIDIA GeForce RTX GPUs and CUDA 12.8
- [2025 Apr 18] Gemma 3 QAT Models: Bringing state-of-the-art AI to consumer GPUs
- [2025 Apr 16] Llama 4 Runs on Arm
- [2025 Apr 04] Run LLMs Locally with Docker
- [2025 Mar 25] Deploy a Large Language Model (LLM) chatbot with llama.cpp using KleidiAI on Arm servers
- [2025 Feb 11] OLMoE, meet iOS
- [2024 Oct 02] Accelerating LLMs with llama.cpp on NVIDIA RTX Systems
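The tools downstream of llama.cpp in the diagram above generally talk to a locally running inference server over HTTP. As a minimal sketch of that flow, the snippet below posts a chat request to `llama-server` (part of llama.cpp) through its OpenAI-compatible route; the host, port, and a server started with something like `llama-server -m model.gguf --port 8080` are assumptions about a local setup, not a documented client.

```python
# Minimal sketch: query a locally running llama-server (llama.cpp) through its
# OpenAI-compatible chat endpoint. The host/port and the model being served
# are assumptions about a local setup (e.g. llama-server -m model.gguf --port 8080).
import json
import urllib.request

def local_chat(prompt: str,
               url: str = "http://127.0.0.1:8080/v1/chat/completions") -> str:
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,   # low temperature for a fairly stable answer
        "max_tokens": 128,    # keep the reply short for this demo
    }
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style response shape: choices[0].message.content
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(local_chat("Summarize what ggml is in one sentence."))
```

A server started this way is also what the editor plugins in the Coding subgraph typically connect to, usually through a fill-in-the-middle route rather than chat; a second sketch of that appears at the end of this page.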
Pinned repositories:

- [llama.vscode](https://github.com/ggml-org/llama.vscode): VS Code extension for LLM-assisted code/text completion
- [llama.qtcreator](https://github.com/ggml-org/llama.qtcreator): local LLM-assisted text completion for Qt Creator (C++, forked from [cristianadam/llama.qtcreator](https://github.com/cristianadam/llama.qtcreator))
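Both plugins work by sending the text before and after the cursor to a local llama-server and splicing the returned continuation back into the editor buffer. The sketch below imitates that round trip; the `/infill` route and the `input_prefix`/`input_suffix`/`n_predict` field names are assumptions based on recent llama.cpp server builds and may differ between versions, so treat this as illustrative rather than as the plugins' actual protocol.

```python
# Illustrative fill-in-the-middle (FIM) request, mimicking what editor plugins
# such as llama.vim / llama.vscode do against a local llama-server.
# The /infill route and its field names are assumptions about the server API
# and may vary between llama.cpp versions; a FIM-capable model is required.
import json
import urllib.request

SERVER = "http://127.0.0.1:8080"  # assumed local llama-server address

def fim_complete(prefix: str, suffix: str, n_predict: int = 64) -> str:
    payload = {
        "input_prefix": prefix,   # text before the cursor
        "input_suffix": suffix,   # text after the cursor
        "n_predict": n_predict,   # cap on generated tokens
    }
    req = urllib.request.Request(
        f"{SERVER}/infill",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("content", "")

if __name__ == "__main__":
    before = "def fibonacci(n):\n    "
    after = "\n\nprint(fibonacci(10))\n"
    print(before + fim_complete(before, after) + after)
```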