localLLM: Running Local LLMs with 'llama.cpp' Backend

The 'localLLM' package provides R bindings to the 'llama.cpp' library for running large language models. It uses a lightweight architecture in which the C++ backend library is downloaded at runtime rather than bundled with the package. Features include text generation, reproducible (seeded) generation, and parallel inference.
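A minimal sketch of the workflow the description implies: install the backend once, load a GGUF model, and generate text with a fixed seed for reproducibility. The function names below (install_localLLM, model_load, generate) and their arguments are assumptions for illustration, not verified against the package; consult the reference manual for the actual API.

    # Hypothetical usage sketch; names and arguments are assumptions.
    library(localLLM)

    # One-time setup: download the C++ backend library at runtime
    # (this is the package's lightweight-architecture design).
    install_localLLM()  # assumed setup function

    # Load a local GGUF model file (assumed loader).
    model <- model_load("path/to/model.gguf")

    # Generate text; a fixed seed illustrates reproducible generation
    # (assumed generator and parameter names).
    out <- generate(model, "Explain overfitting in one sentence.",
                    seed = 42, max_tokens = 128)
    cat(out)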

Version: 1.0.1
Depends: R (≥ 3.6.0)
Imports: Rcpp (≥ 1.0.14), tools, utils
Suggests: testthat (≥ 3.0.0), covr
Published: 2025-10-15
DOI: 10.32614/CRAN.package.localLLM
Author: Eddie Yang [aut], Yaosheng Xu [aut, cre]
Maintainer: Yaosheng Xu <xu2009 at purdue.edu>
BugReports: https://github.com/EddieYang211/localLLM/issues
License: MIT + file LICENSE
URL: https://github.com/EddieYang211/localLLM
NeedsCompilation: yes
SystemRequirements: C++17, libcurl (optional, for model downloading)
CRAN checks: localLLM results

Documentation:

Reference manual: localLLM.html, localLLM.pdf

Downloads:

Package source: localLLM_1.0.1.tar.gz
Windows binaries: r-devel: localLLM_1.0.1.zip, r-release: localLLM_1.0.1.zip, r-oldrel: localLLM_1.0.1.zip
macOS binaries: r-release (arm64): localLLM_1.0.1.tgz, r-oldrel (arm64): localLLM_1.0.1.tgz, r-release (x86_64): localLLM_1.0.1.tgz, r-oldrel (x86_64): localLLM_1.0.1.tgz

Linking:

Please use the canonical form https://CRAN.R-project.org/package=localLLM to link to this page.
