
chatLLM

Overview

chatLLM is an R package that provides a single, consistent interface to multiple “OpenAI‑compatible” chat APIs: OpenAI, Groq, Anthropic, DeepSeek, Alibaba DashScope, Gemini, Grok, and GitHub Models.

Key features:

  • 🔄 Uniform API across providers
  • 🗣 Multi‑message context (system/user/assistant roles)
  • 🔁 Retries and backoff with clear timeout handling
  • 🔈 Verbose control (verbose = TRUE/FALSE)
  • ⚙️ Discover models via list_models()
  • 🏗 Factory interface for repeated calls
  • 🌐 Custom endpoint override and advanced tuning
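The uniform API means switching providers is a one-argument change. A minimal sketch (assumes the corresponding API keys are already set; see Setup below):

```r
# Same call shape for any provider — only `provider` changes
ans_openai <- call_llm(prompt = "Summarise R's lazy evaluation.", provider = "openai")
ans_groq   <- call_llm(prompt = "Summarise R's lazy evaluation.", provider = "groq")
```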

Installation

From CRAN:

install.packages("chatLLM")

Development version:

# install.packages("remotes")  # if needed
remotes::install_github("knowusuboaky/chatLLM")

Setup

Set your API keys or tokens once per session:

Sys.setenv(
  OPENAI_API_KEY    = "your-openai-key",
  GROQ_API_KEY      = "your-groq-key",
  ANTHROPIC_API_KEY = "your-anthropic-key",
  DEEPSEEK_API_KEY  = "your-deepseek-key",
  DASHSCOPE_API_KEY = "your-dashscope-key",
  GH_MODELS_TOKEN   = "your-github-models-token",
  GEMINI_API_KEY    = "your-gemini-key",
  XAI_API_KEY       = "your-grok-key"
)

Usage

1. Simple Prompt

response <- call_llm(
  prompt     = "Who is Messi?",
  provider   = "openai",
  max_tokens = 300
)
cat(response)

2. Multi‑Message Conversation

conv <- list(
  list(role = "system", content = "You are a helpful assistant."),
  list(role = "user",   content = "Explain recursion in R.")
)
response <- call_llm(
  messages          = conv,
  provider          = "openai",
  max_tokens        = 200,
  presence_penalty  = 0.2,
  frequency_penalty = 0.1,
  top_p             = 0.95
)
cat(response)

3. Verbose Off

Suppress informational messages:

res <- call_llm(
  prompt   = "Tell me a joke",
  provider = "openai",
  verbose  = FALSE
)
cat(res)

4. Factory Interface

Create a reusable LLM function:

# Build a “GitHub Models” engine with defaults baked in
GitHubLLM <- call_llm(
  provider   = "github",
  max_tokens = 60,
  verbose    = FALSE
)

# Invoke it like a function:
story <- GitHubLLM("Tell me a short story about libraries.")
cat(story)

5. Discover Available Models

# All providers at once
all_models <- list_models("all")
names(all_models)

# Only OpenAI models
openai_models <- list_models("openai")
head(openai_models)

6. Call a Specific Model

Pick a model from the list and pass it to call_llm():

anthro_models <- list_models("anthropic")
cat(call_llm(
  prompt     = "Write a haiku about autumn.",
  provider   = "anthropic",
  model      = anthro_models[1],
  max_tokens = 60
))

Troubleshooting

  • Timeouts: increase n_tries / backoff, or supply a custom .post_func with a higher timeout().
  • Model Not Found: use list_models("<provider>") or consult the provider docs.
  • Auth Errors: verify your API key/token and environment variables.
  • Network Issues: check VPN/proxy settings, firewalls, or SSL certificates.
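For stubborn timeouts, the first bullet's options can be combined. A hedged sketch, assuming .post_func receives the same arguments call_llm() would otherwise pass to httr::POST() — check ?call_llm for the exact contract:

```r
library(httr)

# Wrapper around httr::POST with a longer timeout
# (120 seconds is an arbitrary choice for illustration)
patient_post <- function(...) httr::POST(..., httr::timeout(120))

res <- call_llm(
  prompt     = "Summarise the tidyverse in one sentence.",
  provider   = "openai",
  n_tries    = 5,            # retry up to 5 times
  backoff    = 2,            # seconds to wait between tries
  .post_func = patient_post  # use the slower, more patient POST
)
```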

Contributing & Support

Issues and PRs are welcome at https://github.com/knowusuboaky/chatLLM.


License

MIT © Kwadwo Daddy Nyame Owusu - Boakye


Acknowledgements

Inspired by RAGFlowChainR, powered by httr and the R community. Enjoy!

Developers

  • Kwadwo Daddy Nyame Owusu Boakye
    Author, maintainer

