# rust-llama.cpp
LLama.cpp Rust bindings.

The Rust bindings are mostly based on https://github.com/go-skynet/go-llama.cpp/

Note: This repository uses git submodules to keep track of LLama.cpp.
Clone the repository locally (including the LLama.cpp submodule) and build:

```sh
git clone --recurse-submodules https://github.com/mdrokz/rust-llama.cpp
cargo build
```
Add the crate to your `Cargo.toml`:

```toml
[dependencies]
llama_cpp_rs = "0.3.0"
```
Usage:

```rust
use llama_cpp_rs::{
    options::{ModelOptions, PredictOptions},
    LLama,
};

fn main() {
    let model_options = ModelOptions::default();

    let llama = LLama::new(
        "../wizard-vicuna-13B.ggmlv3.q4_0.bin".into(),
        &model_options,
    )
    .unwrap();

    let predict_options = PredictOptions {
        token_callback: Some(Box::new(|token| {
            println!("token1: {}", token);
            true
        })),
        ..Default::default()
    };

    llama
        .predict(
            "what are the national animals of india".into(),
            predict_options,
        )
        .unwrap();
}
```
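The `token_callback` above receives each generated token as it is produced and returns `true` to keep generating or `false` to stop early. A minimal, self-contained sketch of that contract, with a hypothetical `drive_prediction` loop standing in for the model (no model file or crate required; only the callback's shape mirrors the binding):

```rust
// Hypothetical stand-in for the model's generation loop: feeds each "token"
// to the callback and stops as soon as the callback returns false.
fn drive_prediction<F: FnMut(String) -> bool>(tokens: &[&str], mut callback: F) {
    for t in tokens {
        if !callback(t.to_string()) {
            break;
        }
    }
}

fn main() {
    let mut collected: Vec<String> = Vec::new();
    drive_prediction(&["the", "tiger", "and", "peacock"], |token| {
        collected.push(token);
        collected.len() < 3 // request early stop after three tokens
    });
    // Generation halted after the third token.
    println!("{}", collected.join(" ")); // prints "the tiger and"
}
```

This is useful for streaming tokens to a UI as they arrive, or cancelling a long generation once you have what you need.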
The examples include Dockerfiles to run them; see the `examples` directory.
## TODO

- Implement support for cuBLAS, OpenBLAS & OpenCL #7
- Implement support for GPU (Metal)
- Add some test cases
- Support for fetching models through HTTP & S3
- Sync with latest master & support GGUF
- Add some proper examples #7
## License

MIT