antimatter15/alpaca.cpp

Locally run an Instruction-Tuned Chat-Style LLM
Run a fast ChatGPT-like model locally on your device. The screencast below is not sped up and is running on an M2 MacBook Air with 4GB of weights.
This combines the LLaMA foundation model with an open reproduction of Stanford Alpaca, a fine-tuning of the base model to obey instructions (akin to the RLHF used to train ChatGPT), and a set of modifications to llama.cpp to add a chat interface.
The changes from alpaca.cpp have since been upstreamed into llama.cpp.
Download the zip file corresponding to your operating system from the latest release. On Windows, download alpaca-win.zip; on Mac (both Intel and ARM), download alpaca-mac.zip; and on Linux (x64), download alpaca-linux.zip.
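For example, on an Apple Silicon Mac the download might look like the following sketch, which assumes GitHub's standard latest-release asset URL (swap the file name for your platform):

```
# Assumes GitHub's releases/latest/download/ URL scheme; use alpaca-win.zip
# or alpaca-linux.zip on other platforms.
curl -LO https://github.com/antimatter15/alpaca.cpp/releases/latest/download/alpaca-mac.zip
unzip alpaca-mac.zip
```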
Download ggml-alpaca-7b-q4.bin and place it in the same folder as the chat executable from the zip file. There are several download options.
Once you've downloaded the model weights and placed them into the same directory as the chat or chat.exe executable, run:

```
./chat
```

The weights are based on the published fine-tunes from alpaca-lora, converted back into a PyTorch checkpoint with a modified script and then quantized with llama.cpp the regular way.
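For the curious, that quantization step looked roughly like the following with the llama.cpp tooling of the time; the file names here are hypothetical, and the trailing 2 selected the 4-bit q4_0 format in early llama.cpp builds:

```
# Hypothetical file names; "2" was the q4_0 (4-bit) type id in early llama.cpp.
./quantize ggml-alpaca-7b-f16.bin ggml-alpaca-7b-q4.bin 2
```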
To build from source on MacOS or Linux instead:

```
git clone https://github.com/antimatter15/alpaca.cpp
cd alpaca.cpp
make chat
./chat
```

To build from source on Windows:

- Download and install CMake: https://cmake.org/download/
- Download and install git. If you've never used git before, consider a GUI client like https://desktop.github.com/
- Clone this repo using your git client of choice (for GitHub Desktop, go to File -> Clone repository -> From URL and paste https://github.com/antimatter15/alpaca.cpp in as the URL)
- Open a Windows Terminal inside the folder you cloned the repository to
- Run the following commands one by one:

```
cmake .
cmake --build . --config Release
```

- Download the weights via any of the links in "Get started" above, and save the file as ggml-alpaca-7b-q4.bin in the main Alpaca directory.
- In the terminal window, run this command:

```
.\Release\chat.exe
```

- (You can add other launch options like --n 8 as preferred onto the same line.)
- You can now type to the AI in the terminal and it will reply. Enjoy!
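As a sketch of those launch options, assuming chat keeps llama.cpp's shared option parser (an assumption; run the binary with -h to see the authoritative list for your build):

```
# Assumed flags inherited from llama.cpp's option parser:
#   -m FNAME  path to the model weights
#   -t N      number of CPU threads to use
.\Release\chat.exe -m ggml-alpaca-7b-q4.bin -t 8
```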
This combines Facebook's LLaMA, Stanford Alpaca, alpaca-lora and corresponding weights by Eric Wang (which uses Jason Phang's implementation of LLaMA on top of Hugging Face Transformers), and llama.cpp by Georgi Gerganov. The chat implementation is based on Matvey Soloviev's Interactive Mode for llama.cpp. Inspired by Simon Willison's getting started guide for LLaMA. See also Andy Matuschak's thread on adapting this to 13B, using fine-tuning weights by Sam Witteveen.
Note that the model weights are only to be used for research purposes, as they are derivative of LLaMA and use the published instruction data from the Stanford Alpaca project, which was generated by OpenAI, whose terms disallow using its outputs to train competing models.