## Overview
chatLLM is an R package providing a single, consistent interface to multiple “OpenAI‑compatible” chat APIs (OpenAI, Groq, Anthropic, DeepSeek, Alibaba DashScope, Gemini, Grok and GitHub Models).
Key features:
- 🔄 Uniform API across providers
- 🗣 Multi-message context (system/user/assistant roles; see the sketch after this list)
- 🔁 Retries & backoff with clear timeout handling
- 🔈 Verbose control (`verbose = TRUE/FALSE`)
- ⚙️ Discover models via `list_models()`
- 🏗 Factory interface for repeated calls
- 🌐 Custom endpoint override and advanced tuning
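For example, the multi-message and verbose features can be combined in a single call. This is a minimal sketch: `call_llm()` and `verbose` come from the feature list above, but the exact shape of the `messages` argument (a list of `role`/`content` pairs) is an assumption here; check `?call_llm` for the authoritative signature.

```r
library(chatLLM)

# Assumed shape: a list of role/content pairs (system/user/assistant),
# mirroring the multi-message feature listed above.
conversation <- list(
  list(role = "system",    content = "You are a concise assistant."),
  list(role = "user",      content = "Summarise what chatLLM does."),
  list(role = "assistant", content = "It wraps several chat APIs behind one R function."),
  list(role = "user",      content = "Name two supported providers.")
)

reply <- call_llm(
  messages = conversation,  # assumed argument name for multi-message input
  provider = "openai",      # any provider from the supported list
  verbose  = FALSE          # suppress progress/retry messages
)
cat(reply)
```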
## Installation
From CRAN:
```r
install.packages("chatLLM")
```

Development version:
```r
# install.packages("remotes")  # if needed
remotes::install_github("knowusuboaky/chatLLM")
```

## Setup
Set your API keys or tokens once per session:
```r
Sys.setenv(
  OPENAI_API_KEY    = "your-openai-key",
  GROQ_API_KEY      = "your-groq-key",
  ANTHROPIC_API_KEY = "your-anthropic-key",
  DEEPSEEK_API_KEY  = "your-deepseek-key",
  DASHSCOPE_API_KEY = "your-dashscope-key",
  GH_MODELS_TOKEN   = "your-github-models-token",
  GEMINI_API_KEY    = "your-gemini-key",
  XAI_API_KEY       = "your-grok-key"
)
```
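To avoid re-entering keys every session, the same variables can also live in your user-level `~/.Renviron` file, which R reads at startup. A quick base-R check (plain R, not part of chatLLM) confirms which keys are visible:

```r
# Keys set in ~/.Renviron are loaded automatically when R starts.
# Verify that the ones you need are non-empty before calling a provider:
keys <- c("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY")
Sys.getenv(keys) != ""   # named logical vector: TRUE where a key is set
```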
## Usage

### 5. Discover Available Models
```r
# All providers at once
all_models <- list_models("all")
names(all_models)

# Only OpenAI models
openai_models <- list_models("openai")
head(openai_models)
```
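If the combined result is a named list keyed by provider, as the `names()` call above suggests, one provider's entries can be pulled out directly (the `"groq"` key here is illustrative):

```r
# Pull a single provider's models from the combined list
# (assumes list_models("all") returns a list named by provider).
groq_models <- all_models[["groq"]]
head(groq_models)
```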
### 6. Call a Specific Model

Pick a model from the list and pass it to `call_llm()`:
```r
anthro_models <- list_models("anthropic")

cat(call_llm(
  prompt     = "Write a haiku about autumn.",
  provider   = "anthropic",
  model      = anthro_models[1],
  max_tokens = 60
))
```
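Assuming a failed request is signalled as an ordinary R error once the retry/backoff logic listed under features is exhausted, long-running scripts may want a defensive wrapper. A minimal sketch using base `tryCatch()` (plain R, not a chatLLM feature):

```r
response <- tryCatch(
  call_llm(
    prompt     = "Write a haiku about autumn.",
    provider   = "anthropic",
    model      = anthro_models[1],
    max_tokens = 60
  ),
  error = function(e) {
    # Log the failure and return NA so downstream code can continue
    message("Request failed after retries: ", conditionMessage(e))
    NA_character_
  }
)
```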