# llama_cpp.rb
llama_cpp.rb provides Ruby bindings for llama.cpp.

## Installation

Install llama.cpp. If you use Homebrew, install it by executing:

```sh
$ brew install llama.cpp
```
Install the gem and add it to the application's Gemfile by executing:

```sh
$ bundle config --local build.llama_cpp "--with-opt-dir=/opt/homebrew/"
$ bundle add llama_cpp
```
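For reference, `bundle add llama_cpp` records an entry in the application's Gemfile roughly like the following (the exact version constraint it pins will vary):

```ruby
# Gemfile
source 'https://rubygems.org'

gem 'llama_cpp'
```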
If bundler is not being used to manage dependencies, install the gem by executing:

```sh
$ gem install llama_cpp -- --with-opt-dir=/opt/homebrew
```
## Usage

Prepare the quantized model by referring to the usage section of the llama.cpp README. For example, you could prepare the quantized model based on open_llama_7b, or, more usefully in the context of Ruby, a smaller model such as tiny_llama_1b:

```sh
$ cd ~/
$ brew install git-lfs
$ git lfs install
$ git clone https://github.com/ggerganov/llama.cpp.git
$ cd llama.cpp
$ python3 -m pip install -r requirements.txt
$ cd models
$ git clone https://huggingface.co/openlm-research/open_llama_7b
$ cd ../
$ python3 convert-hf-to-gguf.py models/open_llama_7b
$ make
$ ./llama-quantize ./models/open_llama_7b/ggml-model-f16.gguf ./models/open_llama_7b/ggml-model-q4_0.bin q4_0
```
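Conversion and quantization can silently produce unusable output if a step fails partway. A quick way to sanity-check the result before loading it from Ruby is to verify the file's magic bytes: GGUF files begin with the four ASCII bytes `GGUF`. A minimal sketch (the helper name `gguf_file?` is our own, not part of the gem):

```ruby
# Returns true if the file at +path+ looks like a GGUF model file.
# GGUF files start with the four ASCII magic bytes "GGUF".
def gguf_file?(path)
  return false unless File.file?(path)

  File.open(path, 'rb') { |f| f.read(4) } == 'GGUF'
end
```

This only checks the header, not the integrity of the whole file, but it catches the common case of an aborted conversion or a wrong path.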
An example of Ruby code that generates sentences with the quantized model is as follows:

```ruby
require 'llama_cpp'

LlamaCpp.ggml_backend_load_all

model_params = LlamaCpp::LlamaModelParams.new
model = LlamaCpp.llama_model_load_from_file('/home/user/llama.cpp/models/open_llama_7b/ggml-model-q4_0.bin', model_params)

context_params = LlamaCpp::LlamaContextParams.new
context = LlamaCpp.llama_init_from_model(model, context_params)

puts LlamaCpp.generate(context, 'Hello, World.')

LlamaCpp.llama_free(context)
LlamaCpp.llama_model_free(model)
```
## Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/yoshoku/llama_cpp.rb. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the code of conduct.
## License

The gem is available as open source under the terms of the MIT License.
## Code of Conduct

Everyone interacting in the LlamaCpp project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.