yoshoku/llama_cpp.rb

llama_cpp.rb provides Ruby bindings for llama.cpp.

Installation

Install llama.cpp. If you use Homebrew, you can install it by executing:

$ brew install llama.cpp

Install the gem and add it to the application's Gemfile by executing:

$ bundle config --local build.llama_cpp "--with-opt-dir=/opt/homebrew/"
$ bundle add llama_cpp
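For reference, bundle add simply records the dependency in the Gemfile; the resulting entry is essentially the line below (any version constraint Bundler adds is omitted here for illustration):

gem 'llama_cpp'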

If bundler is not being used to manage dependencies, install the gem by executing:

$ gem install llama_cpp -- --with-opt-dir=/opt/homebrew
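As a quick sanity check after installation, a minimal script can confirm that the gem and its native extension load. The sketch below only assumes the gem name from the steps above; the file name verify_install.rb is illustrative:

# verify_install.rb: confirm that requiring the gem (and its native extension) succeeds
require 'llama_cpp'
puts 'llama_cpp loaded successfully'

If this raises a LoadError, the native extension most likely could not find the llama.cpp installation; re-check the --with-opt-dir path used during installation.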

Usage

Prepare a quantized model by referring to the usage section of the llama.cpp README. For example, you could prepare a quantized model based on open_llama_7b; in the context of Ruby, a smaller model such as tiny_llama_1b may be more useful:

$ cd ~/
$ brew install git-lfs
$ git lfs install
$ git clone https://github.com/ggerganov/llama.cpp.git
$ cd llama.cpp
$ python3 -m pip install -r requirements.txt
$ cd models
$ git clone https://huggingface.co/openlm-research/open_llama_7b
$ cd ../
$ python3 convert-hf-to-gguf.py models/open_llama_7b
$ make
$ ./llama-quantize ./models/open_llama_7b/ggml-model-f16.gguf ./models/open_llama_7b/ggml-model-q4_0.bin q4_0

An example of Ruby code that generates sentences with the quantized model is as follows:

require 'llama_cpp'

LlamaCpp.ggml_backend_load_all

model_params = LlamaCpp::LlamaModelParams.new
model = LlamaCpp::llama_model_load_from_file('/home/user/llama.cpp/models/open_llama_7b/ggml-model-q4_0.bin', model_params)
context_params = LlamaCpp::LlamaContextParams.new
context = LlamaCpp.llama_init_from_model(model, context_params)

puts LLaMACpp.generate(context, 'Hello, World.')

LlamaCpp.llama_free(context)
LlamaCpp.llama_model_free(model)
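The load/generate/free sequence above lends itself to a small helper that releases the context and model even if generation raises. The sketch below is not part of the gem; it reuses only the calls shown in the example above, and the helper name, model path, and prompt are placeholders:

require 'llama_cpp'

# Generates text for a prompt and always frees the context and model afterwards.
# The ensure clause is defensive; it runs whether or not generation raises.
def generate_with_cleanup(model_path, prompt)
  LlamaCpp.ggml_backend_load_all

  model_params = LlamaCpp::LlamaModelParams.new
  model = LlamaCpp::llama_model_load_from_file(model_path, model_params)
  context_params = LlamaCpp::LlamaContextParams.new
  context = LlamaCpp.llama_init_from_model(model, context_params)

  LLaMACpp.generate(context, prompt)
ensure
  LlamaCpp.llama_free(context) if context
  LlamaCpp.llama_model_free(model) if model
end

puts generate_with_cleanup('/home/user/llama.cpp/models/open_llama_7b/ggml-model-q4_0.bin', 'Hello, World.')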

Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/yoshoku/llama_cpp.rb. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the code of conduct.

License

The gem is available as open source under the terms of the MIT License.

Code of Conduct

Everyone interacting in the LlamaCpp project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.
