| Title: | Chat with Large Language Models |
| Version: | 0.4.0 |
| Description: | Chat with large language models from a range of providers including 'Claude' <https://claude.ai>, 'OpenAI' <https://chatgpt.com>, and more. Supports streaming, asynchronous calls, tool calling, and structured data extraction. |
| License: | MIT + file LICENSE |
| URL: | https://ellmer.tidyverse.org, https://github.com/tidyverse/ellmer |
| BugReports: | https://github.com/tidyverse/ellmer/issues |
| Depends: | R (≥ 4.1) |
| Imports: | cli, coro (≥ 1.1.0), glue, httr2 (≥ 1.2.1), jsonlite, later (≥ 1.4.0), lifecycle, promises (≥ 1.3.1), R6, rlang (≥ 1.1.0), S7 (≥ 0.2.0), tibble, vctrs |
| Suggests: | connectcreds, curl (≥ 6.0.1), gargle, gitcreds, jose, knitr, magick, openssl, paws.common, png, rmarkdown, shiny, shinychat (≥ 0.2.0), testthat (≥ 3.0.0), vcr (≥ 2.0.0), withr |
| VignetteBuilder: | knitr |
| Config/Needs/website: | tidyverse/tidytemplate, rmarkdown |
| Config/testthat/edition: | 3 |
| Config/testthat/parallel: | true |
| Config/testthat/start-first: | chat, provider* |
| Encoding: | UTF-8 |
| RoxygenNote: | 7.3.3 |
| Collate: | 'utils-S7.R' 'types.R' 'ellmer-package.R' 'tools-def.R' 'content.R' 'provider.R' 'as-json.R' 'batch-chat.R' 'chat-structured.R' 'chat-tools-content.R' 'turns.R' 'chat-tools.R' 'chat-utils.R' 'utils-coro.R' 'chat.R' 'content-image.R' 'content-pdf.R' 'content-replay.R' 'httr2.R' 'import-standalone-obj-type.R' 'import-standalone-purrr.R' 'import-standalone-types-check.R' 'interpolate.R' 'live.R' 'parallel-chat.R' 'params.R' 'provider-any.R' 'provider-aws.R' 'provider-openai-compatible.R' 'provider-azure.R' 'provider-claude-files.R' 'provider-claude-tools.R' 'provider-claude.R' 'provider-google.R' 'provider-cloudflare.R' 'provider-databricks.R' 'provider-deepseek.R' 'provider-github.R' 'provider-google-tools.R' 'provider-google-upload.R' 'provider-groq.R' 'provider-huggingface.R' 'provider-mistral.R' 'provider-ollama.R' 'provider-openai-tools.R' 'provider-openai.R' 'provider-openrouter.R' 'provider-perplexity.R' 'provider-portkey.R' 'provider-snowflake.R' 'provider-vllm.R' 'schema.R' 'tokens.R' 'tools-built-in.R' 'tools-def-auto.R' 'utils-auth.R' 'utils-callbacks.R' 'utils-cat.R' 'utils-merge.R' 'utils-prettytime.R' 'utils.R' 'zzz.R' |
| NeedsCompilation: | no |
| Packaged: | 2025-11-14 20:31:09 UTC; hadleywickham |
| Author: | Hadley Wickham |
| Maintainer: | Hadley Wickham <hadley@posit.co> |
| Repository: | CRAN |
| Date/Publication: | 2025-11-15 12:00:16 UTC |
ellmer: Chat with Large Language Models
Description

Chat with large language models from a range of providers including 'Claude' <https://claude.ai>, 'OpenAI' <https://chatgpt.com>, and more. Supports streaming, asynchronous calls, tool calling, and structured data extraction.
Author(s)
Maintainer: Hadley Wickham <hadley@posit.co> (ORCID)
Authors:
Joe Cheng
Aaron Jacobs
Garrick Aden-Buie <garrick@posit.co> (ORCID)
Barret Schloerke <barret@posit.co> (ORCID)
Other contributors:
Posit Software, PBC (ROR) [copyright holder, funder]
See Also
Useful links:
https://ellmer.tidyverse.org
https://github.com/tidyverse/ellmer
Report bugs at https://github.com/tidyverse/ellmer/issues
The Chat object
Description
A Chat is a sequence of user and assistant Turns sent to a specific Provider. A Chat is a mutable R6 object that takes care of managing the state associated with the chat; i.e. it records the messages that you send to the server and the messages that you receive back. If you register a tool (i.e. an R function that the assistant can call on your behalf), it also takes care of the tool loop.
You should generally not create this object yourself; instead call chat_openai() or friends.
Value
A Chat object
Methods
Public methods
Method new()
Usage
Chat$new(provider, system_prompt = NULL, echo = "none")
Arguments
provider: A provider object.
system_prompt: System prompt to start the conversation with.
echo: One of the following options:
- "none": don't emit any output (default when running in a function).
- "output": echo text and tool-calling output as it streams in (default when running at the console).
- "all": echo all input and output.
Note this only affects the chat() method. You can override the default by setting the ellmer_echo option.
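For example, a minimal sketch of overriding the default echo for all chats via the option described above:

options(ellmer_echo = "output")
chat <- chat_openai()
chat$chat("Tell me a joke") # streams output as it arrives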
Method get_turns()
Retrieve the turns that have been sent and received so far (optionally starting with the system prompt, if any).
Usage
Chat$get_turns(include_system_prompt = FALSE)
Arguments
include_system_prompt: Whether to include the system prompt in the turns (if any exists).
Method set_turns()
Replace existing turns with a new list.
Usage
Chat$set_turns(value)
Arguments
value: A list of Turns.
Method add_turn()
Add a pair of turns to the chat.
Usage
Chat$add_turn(user, assistant, log_tokens = TRUE)
Arguments
user: A user Turn.
assistant: An assistant Turn.
log_tokens: If TRUE, record the token usage reported with the assistant turn.
Method get_system_prompt()
If set, the system prompt; if not, NULL.
Usage
Chat$get_system_prompt()
Method get_model()
Retrieve the model name.
Usage
Chat$get_model()
Method set_system_prompt()
Update the system prompt.
Usage
Chat$set_system_prompt(value)
Arguments
value: A character vector giving the new system prompt.
Method get_tokens()
A data frame with token usage and cost data. There are four columns: input, output, cached_input, and cost. There is one row for each assistant turn, because token counts and costs are only available when the API returns the assistant's response.
Usage
Chat$get_tokens(include_system_prompt = deprecated())
Arguments
include_system_prompt: Deprecated; ignored.
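For example, a sketch of inspecting token usage (exact values depend on the provider and model):

chat <- chat_openai()
chat$chat("Tell me a joke")
chat$get_tokens() # one row per assistant turn: input, output, cached_input, cost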
Method get_cost()
The cost of this chat.
Usage
Chat$get_cost(include = c("all", "last"))
Arguments
include: The default, "all", gives the total cumulative cost of this chat. Alternatively, use "last" to get the cost of just the most recent turn.
Method last_turn()
The last turn returned by the assistant.
Usage
Chat$last_turn(role = c("assistant", "user", "system"))
Arguments
role: Optionally, specify a role to find the last turn with that role.
Returns
Either a Turn or NULL, if no turns with the specified role have occurred.
Method chat()
Submit input to the chatbot, and return the response as a simple string (probably Markdown).
Usage
Chat$chat(..., echo = NULL)
Arguments
...: The input to send to the chatbot. Can be strings or images (see content_image_file() and content_image_url()).
echo: Whether to emit the response to stdout as it is received. If NULL, then the value of echo set when the chat object was created will be used.
Method chat_structured()
Extract structured data.
Usage
Chat$chat_structured(..., type, echo = "none", convert = TRUE)
Arguments
...: The input to send to the chatbot. This is typically the text you want to extract data from, but it can be omitted if the data is obvious from the existing conversation.
type: A type specification for the extracted data. Should be created with a type_() function.
echo: Whether to emit the response to stdout as it is received. Set to "text" to stream JSON data as it's generated (not supported by all providers).
convert: Automatically convert from JSON lists to R data types using the schema. For example, this will turn arrays of objects into data frames and arrays of strings into a character vector.
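For example, a minimal sketch extracting a name and age (prompt and type are illustrative):

chat <- chat_openai()
chat$chat_structured(
  "My name is Susan and I'm 13 years old",
  type = type_object(name = type_string(), age = type_number())
)
# With convert = TRUE this returns an R list, e.g. list(name = "Susan", age = 13)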
Method chat_structured_async()
Extract structured data, asynchronously. Returns a promise that resolves to an object matching the type specification.
Usage
Chat$chat_structured_async(..., type, echo = "none", convert = TRUE)
Arguments
...: The input to send to the chatbot. Will typically include the phrase "extract structured data".
type: A type specification for the extracted data. Should be created with a type_() function.
echo: Whether to emit the response to stdout as it is received. Set to "text" to stream JSON data as it's generated (not supported by all providers).
convert: Automatically convert from JSON lists to R data types using the schema. For example, this will turn arrays of objects into data frames and arrays of strings into a character vector.
Method chat_async()
Submit input to the chatbot, and receive a promise that resolves with the response all at once. Returns a promise that resolves to a string (probably Markdown).
Usage
Chat$chat_async(..., tool_mode = c("concurrent", "sequential"))
Arguments
...: The input to send to the chatbot. Can be strings or images.
tool_mode: Whether tools should be invoked one at a time ("sequential") or concurrently ("concurrent"). Sequential mode is best for interactive applications, especially when a tool may involve an interactive user interface. Concurrent mode is the default and is best suited for automated scripts or non-interactive applications.
Method stream()
Submit input to the chatbot, returning streaming results. Returns a coro generator that yields strings. While iterating, the generator will block while waiting for more content from the chatbot.
Usage
Chat$stream(..., stream = c("text", "content"))
Arguments
...: The input to send to the chatbot. Can be strings or images.
stream: Whether the stream should yield only "text" or ellmer's rich content types. When stream = "content", stream() yields Content objects.
Method stream_async()
Submit input to the chatbot, returning asynchronously streaming results. Returns a coro async generator that yields string promises.
Usage
Chat$stream_async(
  ...,
  tool_mode = c("concurrent", "sequential"),
  stream = c("text", "content")
)
Arguments
...: The input to send to the chatbot. Can be strings or images.
tool_mode: Whether tools should be invoked one at a time ("sequential") or concurrently ("concurrent"). Sequential mode is best for interactive applications, especially when a tool may involve an interactive user interface. Concurrent mode is the default and is best suited for automated scripts or non-interactive applications.
stream: Whether the stream should yield only "text" or ellmer's rich content types. When stream = "content", stream_async() yields Content objects.
Method register_tool()
Register a tool (an R function) that the chatbot can use. Learn more in vignette("tool-calling").
Usage
Chat$register_tool(tool)
Arguments
tool: A tool definition created by tool().
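For example, a minimal sketch (assuming tool()'s fun, name, and description parameters; see vignette("tool-calling") for the full interface):

chat <- chat_openai()
chat$register_tool(tool(
  function() format(Sys.time()),
  name = "get_current_time",
  description = "Returns the current time."
))
chat$chat("What time is it?")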
Method register_tools()
Register a list of tools. Learn more in vignette("tool-calling").
Usage
Chat$register_tools(tools)
Arguments
tools: A list of tool definitions created by tool().
Method get_provider()
Get the underlying provider object. For expert use only.
Usage
Chat$get_provider()
Method get_tools()
Retrieve the list of registered tools.
Usage
Chat$get_tools()
Method set_tools()
Sets the available tools. For expert use only; most users should use register_tool().
Usage
Chat$set_tools(tools)
Arguments
tools: A list of tool definitions created with tool().
Method on_tool_request()
Register a callback for a tool request event.
Usage
Chat$on_tool_request(callback)
Arguments
callback: A function to be called when a tool request event occurs, which must have request as its only argument.
Returns
A function that can be called to remove the callback.
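For example, a sketch of logging tool requests (request is a ContentToolRequest, so its name is available as an S7 property):

cancel <- chat$on_tool_request(function(request) {
  cat("Tool requested:", request@name, "\n")
})
# Later, remove the callback:
cancel()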
Method on_tool_result()
Register a callback for a tool result event.
Usage
Chat$on_tool_result(callback)
Arguments
callback: A function to be called when a tool result event occurs, which must have result as its only argument.
Returns
A function that can be called to remove the callback.
Method clone()
The objects of this class are cloneable with this method.
Usage
Chat$clone(deep = FALSE)
Arguments
deepWhether to make a deep clone.
Examples
chat <- chat_openai()
chat$chat("Tell me a funny joke")
Content types received from and sent to a chatbot
Description
Use these functions if you're writing a package that extends ellmer and need to customise methods for various types of content. For normal use, see content_image_url() and friends.
ellmer abstracts away differences in the way that different Providers represent various types of content, allowing you to more easily write code that works with any chatbot. This set of classes represents types of content that can be either sent to or received from a provider:
- ContentText: simple text (often in markdown format). This is the only type of content that can be streamed live as it's received.
- ContentImageRemote and ContentImageInline: images, either as a pointer to a remote URL or included inline in the object. See content_image_file() and friends for convenient ways to construct these objects.
- ContentToolRequest: a request to perform a tool call (sent by the assistant).
- ContentToolResult: the result of calling the tool (sent by the user). This object is automatically created from the value returned by calling the tool() function. Alternatively, expert users can return a ContentToolResult from a tool() function to include additional data or to customize the display of the result.
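For example, a hypothetical tool function that returns a ContentToolResult to attach extra data alongside its value:

get_weather <- function(city) {
  ContentToolResult(
    value = "Sunny, 22C", # illustrative value
    extra = list(source = "demo-data") # hypothetical extra payload
  )
}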
Usage
Content()

ContentText(text = stop("Required"))

ContentImage()

ContentImageRemote(url = stop("Required"), detail = "")

ContentImageInline(type = stop("Required"), data = NULL)

ContentToolRequest(
  id = stop("Required"),
  name = stop("Required"),
  arguments = list(),
  tool = NULL
)

ContentToolResult(value = NULL, error = NULL, extra = list(), request = NULL)

ContentThinking(thinking = stop("Required"), extra = list())

ContentPDF(
  type = stop("Required"),
  data = stop("Required"),
  filename = stop("Required")
)
Arguments
text | A single string. |
url | URL to a remote image. |
detail | Not currently used. |
type | MIME type of the image. |
data | Base64 encoded image data. |
id | Tool call id (used to associate a request and a result). Automatically managed by ellmer. |
name | Function name. |
arguments | Named list of arguments to call the function with. |
tool | ellmer automatically matches a tool request to the tools defined for the chatbot. If no matching tool is found, this is NULL. |
value | The results of calling the tool function, if it succeeded. |
error | The error message, as a string, or the error condition thrown as a result of a failure when calling the tool function. Must be NULL when the tool call succeeds. |
extra | Additional data. |
request | The ContentToolRequest associated with the tool result, automatically added by ellmer when evaluating the tool call. |
thinking | The text of the thinking output. |
filename | File name, used to identify the PDF. |
Value
S7 objects that all inherit from Content.
Examples
Content()ContentText("Tell me a joke")ContentImageRemote("https://www.r-project.org/Rlogo.png")ContentToolRequest(id = "abc", name = "mean", arguments = list(x = 1:5))A chatbot provider
Description
A Provider captures the details of one chatbot service/API. This captures how the API works, not the details of the underlying large language model. Different providers might offer the same (open source) model behind a different API.
Usage
Provider(
  name = stop("Required"),
  model = stop("Required"),
  base_url = stop("Required"),
  params = list(),
  extra_args = list(),
  extra_headers = character(0),
  credentials = function() NULL
)
Arguments
name | Name of the provider. |
model | Name of the model. |
base_url | The base URL for the API. |
params | A list of standard parameters created by params(). |
extra_args | Arbitrary extra arguments to be included in the request body. |
extra_headers | Arbitrary extra headers to be added to the request. |
credentials | A zero-argument function that returns the credentials to usefor authentication. Can either return a string, representing an API key,or a named list of headers. |
Details
To add support for a new backend, you will need to subclass Provider (adding any additional fields that your provider needs) and then implement the various generics that control the behavior of each provider.
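For example, a minimal sketch of the subclassing step with S7 (the generic implementations are omitted; all names here are illustrative):

ProviderEcho <- S7::new_class(
  "ProviderEcho",
  parent = Provider,
  properties = list(extra_field = S7::class_character)
)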
Value
An S7 Provider object.
Examples
Provider(
  name = "CoolModels",
  model = "my_model",
  base_url = "https://cool-models.com"
)
A user, assistant, or system turn
Description
Every conversation with a chatbot consists of pairs of user and assistant turns, corresponding to an HTTP request and response. These turns are represented by the Turn object, which contains a list of Contents representing the individual messages within the turn. These might be text, images, tool requests (assistant only), or tool responses (user only).
UserTurn, AssistantTurn, and SystemTurn are specialized subclasses of Turn for different types of conversation turns. AssistantTurn includes additional metadata about the API response.
Note that a call to $chat() and related functions may result in multiple user-assistant turn cycles. For example, if you have registered tools, ellmer will automatically handle the tool calling loop, which may result in any number of additional cycles. Learn more about tool calling in vignette("tool-calling").
Usage
Turn(role = NULL, contents = list(), tokens = NULL)

UserTurn(contents = list())

SystemTurn(contents = list())

AssistantTurn(
  contents = list(),
  json = list(),
  tokens = c(NA_real_, NA_real_, NA_real_),
  cost = NA_real_,
  duration = NA_real_
)
Arguments
role | Either "user", "assistant", or "system". |
contents | A list ofContent objects. |
tokens | A numeric vector of length 3 representing the number of input tokens (uncached), output tokens, and input tokens (cached) used in this turn. |
json | The serialized JSON corresponding to the underlying data of the turns. This is useful if there's information returned by the provider that ellmer doesn't otherwise expose. |
cost | The cost of the turn in dollars. |
duration | The duration of the request in seconds. |
Value
An S7 Turn object.
For AssistantTurn(), an S7 AssistantTurn object.
Examples
UserTurn(list(ContentText("Hello, world!")))
Type definitions for function calling and structured data extraction.
Description
These S7 classes are provided for use by package developers who are extending ellmer. In everyday use, use type_boolean() and friends.
Usage
TypeBasic(description = NULL, required = TRUE, type = stop("Required"))

TypeEnum(description = NULL, required = TRUE, values = character(0))

TypeArray(description = NULL, required = TRUE, items = Type())

TypeJsonSchema(description = NULL, required = TRUE, json = list())

TypeIgnore(description = NULL, required = TRUE)

TypeObject(
  description = NULL,
  required = TRUE,
  properties = list(),
  additional_properties = FALSE
)
Arguments
description | The purpose of the component. This is used by the LLM to determine what values to pass to the tool or what values to extract in the structured data, so the more detail that you can provide here, the better. |
required | Is the component or argument required? In type descriptions for structured data, if required = FALSE, the value may be missing or NULL in the returned data. In tool definitions, required = FALSE indicates an optional argument. |
type | Basic type name. Must be one of "boolean", "integer", "number", or "string". |
values | Character vector of permitted values. |
items | The type of the array items. Can be created by any of the type_() functions. |
json | A JSON schema object as a list. |
properties | Named list of properties stored inside the object. Each element should be an S7 Type object. |
additional_properties | Can the object have arbitrary additionalproperties that are not explicitly listed? Only supported by Claude. |
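For example, an object type with one optional property, sketched with the constructors above:

TypeObject(
  properties = list(
    name = TypeBasic(type = "string"),
    age = TypeBasic(type = "number", required = FALSE)
  )
)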
Value
S7 objects inheriting from Type.
Examples
TypeBasic(type = "boolean")TypeArray(items = TypeBasic(type = "boolean"))Submit multiple chats in one batch
Description
batch_chat() and batch_chat_structured() currently only work with chat_openai() and chat_anthropic(). They use the OpenAI and Anthropic batch APIs, which allow you to submit multiple requests simultaneously. The results can take up to 24 hours to complete, but in return you pay 50% less than usual (note that ellmer doesn't include this discount in its pricing metadata). If you want to get results back more quickly, or you're working with a different provider, you may want to use parallel_chat() instead.
Since batched requests can take a long time to complete, batch_chat() requires a file path that is used to store information about the batch so you never lose any work. You can either set wait = FALSE or simply interrupt the waiting process; later, either call batch_chat() to resume where you left off or call batch_chat_completed() to see if the results are ready to retrieve. batch_chat() will store the chat responses in this file, so you can either keep it around to cache the results, or delete it to free up disk space.
This API is marked as experimental since I don't yet know how to handle errors in the most helpful way. Fortunately they don't seem to be common, but if you have ideas, please let me know!
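For example, a sketch of submitting without waiting and polling later (prompts and path are illustrative):

## Not run:
chats <- batch_chat(chat, prompts, path = "potluck.json", wait = FALSE)
# Later:
if (batch_chat_completed(chat, prompts, path = "potluck.json")) {
  chats <- batch_chat(chat, prompts, path = "potluck.json")
}
## End(Not run)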
Usage
batch_chat(chat, prompts, path, wait = TRUE, ignore_hash = FALSE)

batch_chat_text(chat, prompts, path, wait = TRUE, ignore_hash = FALSE)

batch_chat_structured(
  chat,
  prompts,
  path,
  type,
  wait = TRUE,
  ignore_hash = FALSE,
  convert = TRUE,
  include_tokens = FALSE,
  include_cost = FALSE
)

batch_chat_completed(chat, prompts, path)
Arguments
chat | A chat object created by chat_openai() or chat_anthropic(). |
prompts | A vector created by interpolate() or a list of character vectors. |
path | Path to a file (with extension .json) used to store state. The file records a hash of the provider, the prompts, and the existing chat turns. If you attempt to reuse the same file with any of these being different, you'll get an error. |
wait | If TRUE (the default), wait for the batch to complete before returning. |
ignore_hash | If TRUE, ignore the hash recorded in path, allowing you to reuse an existing file even if the provider, prompts, or turns have changed. |
type | A type specification for the extracted data. Should be created with a type_() function. |
convert | If TRUE (the default), automatically convert from JSON lists to R data types using the schema. |
include_tokens | If TRUE, include columns with token usage in the result. |
include_cost | If TRUE, include a column with the cost in the result. |
Value
For batch_chat(), a list of Chat objects, one for each prompt. For batch_chat_text(), a character vector of text responses. For batch_chat_structured(), a single structured data object with one element for each prompt. Typically, when type is an object, this will be a data frame with one row for each prompt and one column for each property.
Any of the above will return NULL if wait = FALSE and the job is not complete.
Examples
chat <- chat_openai(model = "gpt-4.1-nano")

# Chat ----------------------------------------------------------------------
prompts <- interpolate("What do people from {{state.name}} bring to a potluck dinner?")
## Not run:
chats <- batch_chat(chat, prompts, path = "potluck.json")
chats
## End(Not run)

# Structured data -----------------------------------------------------------
prompts <- list(
  "I go by Alex. 42 years on this planet and counting.",
  "Pleased to meet you! I'm Jamal, age 27.",
  "They call me Li Wei. Nineteen years young.",
  "Fatima here. Just celebrated my 35th birthday last week.",
  "The name's Robert - 51 years old and proud of it.",
  "Kwame here - just hit the big 5-0 this year."
)
type_person <- type_object(name = type_string(), age = type_number())
## Not run:
data <- batch_chat_structured(
  chat = chat,
  prompts = prompts,
  path = "people-data.json",
  type = type_person
)
data
## End(Not run)
Chat with any provider
Description
This is a generic interface to all the other chat_ functions that allows you to pick the provider and the model with a simple string.
Usage
chat(
  name,
  ...,
  system_prompt = NULL,
  params = NULL,
  echo = c("none", "output", "all")
)
Arguments
name | Provider (and optionally model) name in the form "provider" or "provider/model". |
... | Arguments passed to the provider function. |
system_prompt | A system prompt to set the behavior of the assistant. |
params | Common model parameters, usually created by params(). |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
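For example, a sketch of both name forms:

## Not run:
chat1 <- chat("openai")
chat2 <- chat("openai/gpt-4.1-nano")
## End(Not run)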
Chat with an Anthropic Claude model
Description
Anthropic provides a number of chat-based models under the Claude moniker. Note that a Claude Pro membership does not give you the ability to call models via the API; instead, you will need to sign up (and pay for) a developer account.
Usage
chat_anthropic(
  system_prompt = NULL,
  params = NULL,
  model = NULL,
  cache = c("5m", "1h", "none"),
  api_args = list(),
  base_url = "https://api.anthropic.com/v1",
  beta_headers = character(),
  api_key = NULL,
  credentials = NULL,
  api_headers = character(),
  echo = NULL
)

chat_claude(
  system_prompt = NULL,
  params = NULL,
  model = NULL,
  cache = c("5m", "1h", "none"),
  api_args = list(),
  base_url = "https://api.anthropic.com/v1",
  beta_headers = character(),
  api_key = NULL,
  credentials = NULL,
  api_headers = character(),
  echo = NULL
)

models_claude(
  base_url = "https://api.anthropic.com/v1",
  api_key = anthropic_key()
)

models_anthropic(
  base_url = "https://api.anthropic.com/v1",
  api_key = anthropic_key()
)
Arguments
system_prompt | A system prompt to set the behavior of the assistant. |
params | Common model parameters, usually created by params(). |
model | The model to use for the chat (defaults to "claude-sonnet-4-5-20250929"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. Use models_anthropic() to see all options. |
cache | How long to cache inputs? Defaults to "5m" (five minutes). Set to "none" to disable caching or "1h" to cache for one hour. See details below. |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). |
base_url | The base URL to the endpoint; the default is Claude's public API. |
beta_headers | Optionally, a character vector of beta headers to opt in to Claude features that are still in beta. |
api_key | The API key to use for authentication. You generally should not supply this directly, but instead set the ANTHROPIC_API_KEY environment variable. |
credentials | Override the default credentials. You generally should not need this argument; instead set the ANTHROPIC_API_KEY environment variable. If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request). |
api_headers | Named character vector of arbitrary extra headers appended to every chat API call. |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
Value
A Chat object.
Caching
Caching with Claude is a bit more complicated than with other providers, but we believe that on average it will save you both money and time, so we have enabled it by default. With other providers, like OpenAI and Google, you only pay for cache reads, which cost 10% of the normal price. With Claude, you also pay for cache writes, which cost 125% of the normal price for 5 minute caching and 200% of the normal price for 1 hour caching.
How does this affect the total cost of a conversation? Imagine the first turn sends 1000 input tokens and receives 200 output tokens. The second turn must first send both the input and output from the previous turn (1200 tokens). It then sends a further 1000 tokens and receives 200 tokens back.
To compare the prices of these two approaches we can ignore the cost of output tokens, because they are the same for both. How much will the input tokens cost? If we don't use caching, we send 1000 tokens in the first turn and 2200 (1000 + 200 + 1000) tokens in the second turn for a total of 3200 tokens. If we use caching, we'll send (the equivalent of) 1000 * 1.25 = 1250 tokens in the first turn. In the second turn, 1000 of the input tokens will be cached so the total cost is 1000 * 0.1 + (200 + 1000) * 1.25 = 1600 tokens. That makes a total of 2850 tokens, i.e. 11% fewer tokens, decreasing the overall cost.
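As a sketch, the arithmetic above in R:

no_cache <- 1000 + (1000 + 200 + 1000) # 3200 token-equivalents
with_cache <- 1000 * 1.25 + (1000 * 0.1 + (200 + 1000) * 1.25) # 2850
1 - with_cache / no_cache # ~0.11, i.e. about 11% fewer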
Obviously, the details will vary from conversation to conversation, but if you have a large system prompt that you re-use many times you should expect to see larger savings. You can see exactly how many input and cache input tokens each turn uses, along with the total cost, with chat$get_tokens(). If you don't see savings for your use case, you can suppress caching with cache = "none".
I know this is already quite complicated, but there's one final wrinkle: Claude will only cache longer prompts, with caching requiring at least 1024-4096 tokens, depending on the model. So don't be surprised if you don't see any differences with caching if you have a short prompt.
See all the details at https://docs.claude.com/en/docs/build-with-claude/prompt-caching.
See Also
Other chatbots: chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
chat <- chat_anthropic()
chat$chat("Tell me three jokes about statisticians")
Chat with an AWS Bedrock model
Description
AWS Bedrock provides a number of language models, including those from Anthropic's Claude, using the Bedrock Converse API.
Authentication
Authentication is handled through {paws.common}, so if authentication does not work for you automatically, you'll need to follow the advice at https://www.paws-r-sdk.com/#credentials. In particular, if your org uses AWS SSO, you'll need to run aws sso login at the terminal.
Usage
chat_aws_bedrock(
  system_prompt = NULL,
  base_url = NULL,
  model = NULL,
  profile = NULL,
  params = NULL,
  api_args = list(),
  api_headers = character(),
  echo = NULL
)

models_aws_bedrock(profile = NULL, base_url = NULL)
Arguments
system_prompt | A system prompt to set the behavior of the assistant. |
base_url | The base URL to the endpoint; the default uses the standard AWS Bedrock endpoint for your region. |
model | The model to use for the chat (defaults to "anthropic.claude-sonnet-4-5-20250929-v1:0"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. Use models_aws_bedrock() to see all options. While ellmer provides a default model, there's no guarantee that you'll have access to it, so you'll need to specify a model that you can access. If you're using cross-region inference, you'll need to use the inference profile ID, e.g. "us.anthropic.claude-sonnet-4-5-20250929-v1:0". |
profile | AWS profile to use. |
params | Common model parameters, usually created by params(). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Some useful arguments include:
api_args = list(
  inferenceConfig = list(
    maxTokens = 100,
    temperature = 0.7,
    topP = 0.9,
    topK = 20
  )
) |
api_headers | Named character vector of arbitrary extra headers appended to every chat API call. |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
## Not run:
# Basic usage
chat <- chat_aws_bedrock()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)
Chat with a model hosted on Azure OpenAI
Description
The Azure OpenAI server hosts a number of open source models as well as proprietary models from OpenAI.
Built on top of chat_openai_compatible().
Authentication
chat_azure_openai() supports API keys and the credentials parameter, but it also makes use of:
- Azure service principals (when the AZURE_TENANT_ID, AZURE_CLIENT_ID, and AZURE_CLIENT_SECRET environment variables are set).
- Interactive Entra ID authentication, like the Azure CLI.
- Viewer-based credentials on Posit Connect. Requires the connectcreds package.
Usage
chat_azure_openai(
  endpoint = azure_endpoint(),
  model,
  params = NULL,
  api_version = NULL,
  system_prompt = NULL,
  api_key = NULL,
  credentials = NULL,
  api_args = list(),
  echo = c("none", "output", "all"),
  api_headers = character(),
  deployment_id = deprecated()
)
Arguments
endpoint | Azure OpenAI endpoint url with protocol and hostname, i.e. https://{your-resource-name}.openai.azure.com. Defaults to the AZURE_OPENAI_ENDPOINT environment variable. |
model | The deployment id for the model you want to use. |
params | Common model parameters, usually created by params(). |
api_version | The API version to use. |
system_prompt | A system prompt to set the behavior of the assistant. |
api_key | The API key to use for authentication. You generally should not supply this directly, but instead set the AZURE_OPENAI_API_KEY environment variable. |
credentials | Override the default credentials. You generally should not need this argument; instead set the AZURE_OPENAI_API_KEY environment variable. If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
api_headers | Named character vector of arbitrary extra headers appended to every chat API call. |
deployment_id | [Deprecated] Use model instead. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
## Not run:
chat <- chat_azure_openai(model = "gpt-4o-mini")
chat$chat("Tell me three jokes about statisticians")
## End(Not run)
Chat with a model hosted on Cloudflare
Description
Cloudflare Workers AI hosts a variety of open-source AI models. To use the Cloudflare API, you must have an Account ID and an Access Token, which you can obtain by following these instructions.
Built on top of chat_openai_compatible().
Known limitations
Tool calling does not appear to work.
Images don't appear to work.
Usage
chat_cloudflare(
  account = cloudflare_account(),
  system_prompt = NULL,
  params = NULL,
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)
Arguments
account | The Cloudflare account ID. Taken from the CLOUDFLARE_ACCOUNT_ID environment variable, if defined. |
system_prompt | A system prompt to set the behavior of the assistant. |
params | Common model parameters, usually created by params(). |
api_key | The API key to use for authentication. You generally should not supply this directly, but instead set the CLOUDFLARE_API_KEY environment variable. |
credentials | Override the default credentials. You generally should not need this argument; instead set the CLOUDFLARE_API_KEY environment variable. If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request). |
model | The model to use for the chat (defaults to "meta-llama/Llama-3.3-70b-instruct-fp8-fast"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
api_headers | Named character vector of arbitrary extra headers appended to every chat API call. |
api_headers | Named character vector of arbitrary extra headers appendedto every chat API call. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
## Not run:
chat <- chat_cloudflare()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)
Chat with a model hosted on Databricks
Description
Databricks provides out-of-the-box access to a number of foundation models and can also serve as a gateway for external models hosted by a third party.
Built on top of chat_openai_compatible().
Authentication
chat_databricks() picks up on ambient Databricks credentials for a subset of the Databricks client unified authentication model. Specifically, it supports:
Personal access tokens
Service principals via OAuth (OAuth M2M)
User account via OAuth (OAuth U2M)
Authentication via the Databricks CLI
Posit Workbench-managed credentials
Viewer-based credentials on Posit Connect. Requires theconnectcredspackage.
Usage
chat_databricks(
  workspace = databricks_workspace(),
  system_prompt = NULL,
  model = NULL,
  token = NULL,
  params = NULL,
  api_args = list(),
  echo = c("none", "output", "all"),
  api_headers = character()
)
Arguments
workspace | The URL of a Databricks workspace, e.g. "https://example.cloud.databricks.com". Defaults to the DATABRICKS_HOST environment variable, if set. |
system_prompt | A system prompt to set the behavior of the assistant. |
model | The model to use for the chat (defaults to "databricks-claude-3-7-sonnet"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. See the Databricks documentation for the foundation models available in your workspace. |
token | An authentication token for the Databricks workspace, or NULL to use ambient credentials. |
params | Common model parameters, usually created by params(). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
api_headers | Named character vector of arbitrary extra headers appendedto every chat API call. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
## Not run:
chat <- chat_databricks()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)
Chat with a model hosted on DeepSeek
Description
Sign up at https://platform.deepseek.com.
Built on top of chat_openai_compatible().
Known limitations
Structured data extraction is not supported.
Images are not supported.
Usage
chat_deepseek(
  system_prompt = NULL,
  base_url = "https://api.deepseek.com",
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)
Arguments
system_prompt | A system prompt to set the behavior of the assistant. |
base_url | The base URL to the endpoint; the default uses DeepSeek. |
api_key | The API key to use for authentication. You generally should not supply this directly, but instead set the DEEPSEEK_API_KEY environment variable. |
credentials | Override the default credentials. You generally should not need this argument; instead set the DEEPSEEK_API_KEY environment variable. If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request). |
model | The model to use for the chat (defaults to "deepseek-chat"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. |
params | Common model parameters, usually created by params(). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
api_headers | Named character vector of arbitrary extra headers appendedto every chat API call. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
## Not run:
chat <- chat_deepseek()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)
Chat with a model hosted on the GitHub model marketplace
Description
GitHub Models hosts a number of open source and OpenAI models. To access the GitHub model marketplace, you will need to apply for and be accepted into the beta access program. See https://github.com/marketplace/models for details.
This function is a lightweight wrapper around chat_openai() with the defaults tweaked for the GitHub Models marketplace.
GitHub also supports the Azure AI Inference SDK, which you can use by setting base_url to "https://models.inference.ai.azure.com/". This endpoint was used in ellmer v0.3.0 and earlier.
Usage
chat_github(
  system_prompt = NULL,
  base_url = "https://models.github.ai/inference/",
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)

models_github(
  base_url = "https://models.github.ai/",
  api_key = NULL,
  credentials = NULL
)
Arguments
system_prompt | A system prompt to set the behavior of the assistant. |
base_url | The base URL to the endpoint; the default is the GitHub Models endpoint. |
api_key | The API key to use for authentication. You generally should not supply this directly, but instead set the GITHUB_PAT environment variable. |
credentials | Override the default credentials. You generally should not need this argument; instead set the GITHUB_PAT environment variable. If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request). |
model | The model to use for the chat (defaults to "gpt-4o"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. |
params | Common model parameters, usually created by params(). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
api_headers | Named character vector of arbitrary extra headers appendedto every chat API call. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
## Not run:
chat <- chat_github()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)
Chat with a Google Gemini or Vertex AI model
Description
Google's AI offering is broken up into two parts: Gemini and Vertex AI. Most enterprises are likely to use Vertex AI, and individuals are likely to use Gemini.
Use google_upload() to upload files (PDFs, images, video, audio, etc.).
Authentication
These functions try a number of authentication strategies, in this order:
- An API key set in the GOOGLE_API_KEY env var, or, for chat_google_gemini() only, GEMINI_API_KEY.
- Google's default application credentials, if the gargle package is installed.
- Viewer-based credentials on Posit Connect, if the connectcreds package is installed.
- A browser-based OAuth flow, if you're in an interactive session. This currently uses an unverified OAuth app (so you will get a scary warning); we plan to verify in the near future.
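For example, a sketch of uploading a local file with google_upload() and chatting about it (the file path is hypothetical):

## Not run:
chat <- chat_google_gemini()
file <- google_upload("report.pdf")
chat$chat(file, "Summarise this document")
## End(Not run)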
Usage
chat_google_gemini(
  system_prompt = NULL,
  base_url = "https://generativelanguage.googleapis.com/v1beta/",
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  api_headers = character(),
  echo = NULL
)

chat_google_vertex(
  location,
  project_id,
  system_prompt = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  api_headers = character(),
  echo = NULL
)

models_google_gemini(
  base_url = "https://generativelanguage.googleapis.com/v1beta/",
  api_key = NULL,
  credentials = NULL
)

models_google_vertex(location, project_id, credentials = NULL)
Arguments
system_prompt | A system prompt to set the behavior of the assistant. |
base_url | The base URL to the endpoint; the default is the Gemini API's public endpoint. |
api_key | The API key to use for authentication. You generally should not supply this directly, but instead set the GOOGLE_API_KEY (or, for chat_google_gemini() only, GEMINI_API_KEY) environment variable. |
credentials | A function that returns a list of authentication headers, or NULL to use the default authentication strategies described above. |
model | The model to use for the chat (defaults to "gemini-2.5-flash"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. Use models_google_gemini() to see all options. |
params | Common model parameters, usually created by params(). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). |
api_headers | Named character vector of arbitrary extra headers appended to every chat API call. |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
location | Location, e.g. "us-east1". |
project_id | Project ID. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
## Not run:
chat <- chat_google_gemini()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)
Chat with a model hosted on Groq
Description
Sign up at https://groq.com.
Built on top of chat_openai_compatible().
Known limitations
Groq does not currently support structured data extraction.
Usage
chat_groq(
  system_prompt = NULL,
  base_url = "https://api.groq.com/openai/v1",
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)
Arguments
system_prompt | A system prompt to set the behavior of the assistant. |
base_url | The base URL to the endpoint; the default is Groq's public API. |
api_key | The API key to use for authentication. You generally should not supply this directly, but instead set the GROQ_API_KEY environment variable. |
credentials | Override the default credentials. You generally should not need this argument; instead set the GROQ_API_KEY environment variable. If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request). |
model | The model to use for the chat (defaults to "llama-3.1-8b-instant"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. |
params | Common model parameters, usually created by params(). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
api_headers | Named character vector of arbitrary extra headers appendedto every chat API call. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
## Not run:
chat <- chat_groq()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)
Chat with a model hosted on Hugging Face Serverless Inference API
Description
Hugging Face hosts a variety of open-source and proprietary AI models available via their Inference API. To use the Hugging Face API, you must have an Access Token, which you can obtain from your Hugging Face account (ensure that at least "Make calls to Inference Providers" and "Make calls to your Inference Endpoints" are checked).
Built on top of chat_openai_compatible().
Known limitations
Some models do not support the chat interface or parts of it; for example, google/gemma-2-2b-it does not support a system prompt. You will need to choose the model carefully.
Usage
chat_huggingface(
  system_prompt = NULL,
  params = NULL,
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)
Arguments
system_prompt | A system prompt to set the behavior of the assistant. |
params | Common model parameters, usually created by params(). |
api_key | The API key to use for authentication. You generally should not supply this directly, but instead set the HUGGINGFACE_API_KEY environment variable. |
credentials | Override the default credentials. You generally should not need this argument; instead set the HUGGINGFACE_API_KEY environment variable. If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request). |
model | The model to use for the chat (defaults to "meta-llama/Llama-3.1-8B-Instruct"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
api_headers | Named character vector of arbitrary extra headers appended to every chat API call. |
api_headers | Named character vector of arbitrary extra headers appendedto every chat API call. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
## Not run:
chat <- chat_huggingface()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)
Chat with a model hosted on Mistral's La Plateforme
Description
Get your API key from https://console.mistral.ai/api-keys.
Built on top of chat_openai_compatible().
Known limitations
Tool calling is unstable.
Images require a model that supports images.
Usage
chat_mistral(
  system_prompt = NULL,
  params = NULL,
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)

models_mistral(api_key = mistral_key())
Arguments
system_prompt | A system prompt to set the behavior of the assistant. |
params | Common model parameters, usually created by params(). |
api_key | The API key to use for authentication. You generally should not supply this directly, but instead set the MISTRAL_API_KEY environment variable. |
credentials | Override the default credentials. You generally should not need this argument; instead set the MISTRAL_API_KEY environment variable. If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request). |
model | The model to use for the chat (defaults to "mistral-large-latest"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
api_headers | Named character vector of arbitrary extra headers appendedto every chat API call. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
## Not run:
chat <- chat_mistral()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)
Chat with a local Ollama model
Description
To use chat_ollama(), first download and install Ollama. Then install some models either from the command line (e.g. with ollama pull llama3.1) or within R using ollamar (e.g. ollamar::pull("llama3.1")).
Built on top of chat_openai_compatible().
Known limitations
- Tool calling is not supported with streaming (i.e. when echo is "text" or "all").
- Models can only use 2048 input tokens, and there's no way to get them to use more, except by creating a custom model with a different default.
- Tool calling generally seems quite weak, at least with the models I have tried it with.
Usage
chat_ollama(
  system_prompt = NULL,
  base_url = Sys.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
  model,
  params = NULL,
  api_args = list(),
  echo = NULL,
  api_key = NULL,
  credentials = NULL,
  api_headers = character()
)

models_ollama(base_url = "http://localhost:11434", credentials = NULL)
Arguments
system_prompt | A system prompt to set the behavior of the assistant. |
base_url | The base URL to the endpoint; the default is a local Ollama server at http://localhost:11434 (override with the OLLAMA_BASE_URL environment variable). |
model | The model to use for the chat. Use models_ollama() to see all installed models. |
params | Common model parameters, usually created by params(). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
api_key | An optional API key, only needed when the Ollama server requires bearer-token authentication (see credentials). |
credentials | Ollama doesn't require credentials for local usage, and in most cases you do not need to provide them. However, if you're accessing an Ollama instance hosted behind a reverse proxy or secured endpoint that enforces bearer-token authentication, you can set the api_key argument. |
api_headers | Named character vector of arbitrary extra headers appendedto every chat API call. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
## Not run:
chat <- chat_ollama(model = "llama3.2")
chat$chat("Tell me three jokes about statisticians")
## End(Not run)
Chat with an OpenAI model
Description
This is the main interface to OpenAI's models, using the responses API. You can use this to access OpenAI's latest models and features like image generation and web search. If you need to use an OpenAI-compatible API from another provider, or the chat completions API with OpenAI, use chat_openai_compatible() instead.
Note that a ChatGPT Plus membership does not grant access to the API. You will need to sign up for a developer account (and pay for it) at the developer platform.
Usage
chat_openai(
  system_prompt = NULL,
  base_url = "https://api.openai.com/v1",
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  api_headers = character(),
  service_tier = c("auto", "default", "flex", "priority"),
  echo = c("none", "output", "all")
)

models_openai(
  base_url = "https://api.openai.com/v1",
  api_key = NULL,
  credentials = NULL
)
Arguments
system_prompt | A system prompt to set the behavior of the assistant. |
base_url | The base URL to the endpoint; the default is OpenAI'spublic API. |
api_key | The API key to use for authentication. You generally should not supply this directly, but instead set the OPENAI_API_KEY environment variable. |
credentials | Override the default credentials. You generally should not need this argument; instead set the OPENAI_API_KEY environment variable. If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request). |
model | The model to use for the chat (defaults to "gpt-4.1"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. Use models_openai() to see all options. |
params | Common model parameters, usually created by params(). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). |
api_headers | Named character vector of arbitrary extra headers appended to every chat API call. |
service_tier | Request a specific service tier. There are four options: "auto" (the default), "default", "flex", and "priority". See OpenAI's service tier documentation for how each affects latency and pricing. |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
chat <- chat_openai()chat$chat(" What is the difference between a tibble and a data frame? Answer with a bulleted list")chat$chat("Tell me three funny jokes about statisticians")Chat with an OpenAI-compatible model
Description
This function is for use with OpenAI-compatible APIs, also known as the chat completions API. If you want to use OpenAI itself, we recommend chat_openai(), which uses the newer responses API.
Many providers offer OpenAI-compatible APIs, including:
Ollama for local models
vLLM for self-hosted models
Various cloud providers with OpenAI-compatible endpoints
Usage
chat_openai_compatible(
  base_url,
  name = "OpenAI-compatible",
  system_prompt = NULL,
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  api_headers = character(),
  echo = c("none", "output", "all")
)
Arguments
base_url | The base URL to the endpoint. This parameter isrequiredsince there is no default for OpenAI-compatible APIs. |
name | The name of the provider; this is shown when the Chat object is printed. |
system_prompt | A system prompt to set the behavior of the assistant. |
api_key | The API key to use for authentication, if your provider requires one. |
credentials | Credentials to use for authentication. If not provided, will attempt to use the api_key argument. |
model | The model to use for chat. No default; depends on your provider. |
params | Common model parameters, usually created by params(). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). |
api_headers | Named character vector of arbitrary extra headers appended to every chat API call. |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openrouter(), chat_perplexity(), chat_portkey()
Examples
## Not run:
# Example with Ollama (requires Ollama running locally)
chat <- chat_openai_compatible(
  base_url = "http://localhost:11434/v1",
  model = "llama2"
)
chat$chat("What is the difference between a tibble and a data frame?")
## End(Not run)
Chat with one of the many models hosted on OpenRouter
Description
Sign up at https://openrouter.ai.
Support for features depends on the underlying model that you use; see https://openrouter.ai/models for details.
Usage
chat_openrouter(
  system_prompt = NULL,
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  echo = c("none", "output", "all"),
  api_headers = character()
)
Arguments
system_prompt | A system prompt to set the behavior of the assistant. |
api_key | The API key to use for authentication. You generally should not supply this directly, but instead set the OPENROUTER_API_KEY environment variable. |
credentials | Override the default credentials. You generally should not need this argument; instead set the OPENROUTER_API_KEY environment variable. If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request). |
model | The model to use for the chat (defaults to "gpt-4o"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. |
params | Common model parameters, usually created by params(). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList(). |
echo | One of the following options: "none" (don't emit any output; default when running in a function), "output" (echo text and tool-calling output as it streams in; default when running at the console), or "all" (echo all input and output). Note this only affects the chat() method. |
api_headers | Named character vector of arbitrary extra headers appendedto every chat API call. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_perplexity(), chat_portkey()
Examples
## Not run: 
chat <- chat_openrouter()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a model hosted on perplexity.ai
Description
Sign up at https://www.perplexity.ai.
Perplexity AI is a platform for running LLMs that are capable of searching the web in real time to help them answer questions with information that may not have been available when the model was trained.
This function uses the OpenAI-compatible API via chat_openai_compatible(), with defaults tweaked for Perplexity AI.
Usage
chat_perplexity(
  system_prompt = NULL,
  base_url = "https://api.perplexity.ai/",
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)

Arguments
system_prompt | A system prompt to set the behavior of the assistant. |
base_url | The base URL to the endpoint; the default is Perplexity AI's public API. |
api_key | |
credentials | Override the default credentials. You generally should not need this argument; instead set the If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request). |
model | The model to use for the chat (defaults to "llama-3.1-sonar-small-128k-online"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. |
params | Common model parameters, usually created by params(). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with |
echo | One of the following options:
Note this only affects the |
api_headers | Named character vector of arbitrary extra headers appended to every chat API call. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_portkey()
Examples
## Not run: 
chat <- chat_perplexity()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a model hosted on PortkeyAI
Description
PortkeyAI provides an interface (AI Gateway) to connect through its Universal API to a variety of LLM providers via a single endpoint.
Usage
chat_portkey(
  model,
  system_prompt = NULL,
  base_url = "https://api.portkey.ai/v1",
  api_key = NULL,
  credentials = NULL,
  virtual_key = deprecated(),
  params = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)

models_portkey(
  base_url = "https://api.portkey.ai/v1",
  api_key = portkey_key()
)

Arguments
model | The model name, e.g. |
system_prompt | A system prompt to set the behavior of the assistant. |
base_url | The base URL to the endpoint; the default is Portkey's public API. |
api_key | |
credentials | Override the default credentials. You generally should not need this argument; instead set the If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request). |
virtual_key |
For backward compatibility, the |
params | Common model parameters, usually created by params(). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with |
echo | One of the following options:
Note this only affects the |
api_headers | Named character vector of arbitrary extra headers appended to every chat API call. |
Value
A Chat object.
See Also
Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity()
Examples
## Not run: 
# model is required; use a model available through your Portkey gateway
chat <- chat_portkey(model = "gpt-4o")
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a model hosted on Snowflake
Description
The Snowflake provider allows you to interact with LLM models available through the Cortex LLM REST API.
Authentication
chat_snowflake() picks up the following ambient Snowflake credentials:
A static OAuth token defined via the SNOWFLAKE_TOKEN environment variable.
Key-pair authentication credentials defined via the SNOWFLAKE_USER and SNOWFLAKE_PRIVATE_KEY (which can be a PEM-encoded private key or a path to one) environment variables.
Posit Workbench-managed Snowflake credentials for the corresponding account.
Viewer-based credentials on Posit Connect. Requires the connectcreds package.
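For example, here's a minimal sketch of key-pair authentication via environment variables (the user, key path, and account identifier below are placeholders, not defaults):

Sys.setenv(
  SNOWFLAKE_USER = "me@example.com",
  SNOWFLAKE_PRIVATE_KEY = "path/to/rsa_key.p8"
)
chat <- chat_snowflake(account = "myorg-myaccount")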
Known limitations
Note that Snowflake-hosted models do not support images.
Usage
chat_snowflake(
  system_prompt = NULL,
  account = snowflake_account(),
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  echo = c("none", "output", "all"),
  api_headers = character()
)

Arguments
system_prompt | A system prompt to set the behavior of the assistant. |
account | A Snowflake account identifier, e.g. |
credentials | A list of authentication headers to pass into |
model | The model to use for the chat (defaults to "claude-3-7-sonnet"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. |
params | Common model parameters, usually created by params(). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with |
echo | One of the following options:
Note this only affects the |
api_headers | Named character vector of arbitrary extra headers appended to every chat API call. |
Value
A Chat object.
Examples
chat <- chat_snowflake()
chat$chat("Tell me a joke in the form of a SQL query.")

Chat with a model hosted by vLLM
Description
vLLM is an open source library that provides an efficient and convenient LLM model server. You can use chat_vllm() to connect to endpoints powered by vLLM.
Uses the OpenAI-compatible API via chat_openai_compatible().
Usage
chat_vllm(
  base_url,
  system_prompt = NULL,
  model,
  params = NULL,
  api_args = list(),
  api_key = NULL,
  credentials = NULL,
  echo = NULL,
  api_headers = character()
)

models_vllm(base_url, api_key = NULL, credentials = NULL)

Arguments
base_url | The base URL to the endpoint. This parameter is required since there is no default for vLLM servers. |
system_prompt | A system prompt to set the behavior of the assistant. |
model | The model to use for the chat. Use |
params | Common model parameters, usually created by params(). |
api_args | Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with |
api_key | |
credentials | Override the default credentials. You generally should not need this argument; instead set the If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request). |
echo | One of the following options:
Note this only affects the |
api_headers | Named character vector of arbitrary extra headers appended to every chat API call. |
Value
A Chat object.
Examples
## Not run: 
chat <- chat_vllm("http://my-vllm.com")
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Upload, download, and manage files for Claude
Description
Use the beta Files API to upload files to and manage files in Claude. This is currently experimental because the API is in beta and may change. Note that you need beta_headers = "files-api-2025-04-14" to use the API.
Claude offers 100GB of file storage per organization, with each file having a maximum size of 500MB. For more details see https://docs.claude.com/en/docs/build-with-claude/files
claude_file_upload() uploads a file and returns an object that you can use in chat.
claude_file_list() lists all uploaded files.
claude_file_get() returns an object for a previously uploaded file.
claude_file_download() downloads the file with the given ID. Note that you can only download files created by skills or the code execution tool.
claude_file_delete() deletes the file with the given ID.
Usage
claude_file_upload(
  path,
  base_url = "https://api.anthropic.com/v1/",
  beta_headers = "files-api-2025-04-14",
  credentials = NULL
)

claude_file_list(
  base_url = "https://api.anthropic.com/v1/",
  credentials = NULL,
  beta_headers = "files-api-2025-04-14"
)

claude_file_get(
  file_id,
  base_url = "https://api.anthropic.com/v1/",
  credentials = NULL,
  beta_headers = "files-api-2025-04-14"
)

claude_file_download(
  file_id,
  path,
  base_url = "https://api.anthropic.com/v1/",
  credentials = NULL,
  beta_headers = "files-api-2025-04-14"
)

claude_file_delete(
  file_id,
  base_url = "https://api.anthropic.com/v1/",
  credentials = NULL,
  beta_headers = "files-api-2025-04-14"
)

Arguments
path | Path to download the file to. |
base_url | The base URL to the endpoint; the default is Claude's public API. |
beta_headers | Beta headers to use for the request. Defaults to |
credentials | Override the default credentials. You generally should not need this argument; instead set the If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request). |
file_id | ID of the file to get information about, download, or delete. |
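For instance, a sketch of the full upload/list/get/delete lifecycle (the file path and the "file_abc123" ID are placeholders; a live Anthropic API key is assumed):

## Not run: 
file <- claude_file_upload("path/to/report.pdf")
claude_file_list()
claude_file_get(file_id = "file_abc123")
claude_file_delete(file_id = "file_abc123")
## End(Not run)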
Examples
## Not run: 
file <- claude_file_upload("path/to/file.pdf")
chat <- chat_anthropic(beta_headers = "files-api-2025-04-14")
chat$chat("Please summarize the document.", file)
## End(Not run)

Claude web fetch tool
Description
Enables Claude to fetch and analyze content from web URLs. Claude can only fetch URLs that appear in the conversation context (user messages or previous tool results). For security reasons, Claude cannot dynamically construct URLs to fetch.
Requires the web-fetch-2025-09-10 beta header. Learn more at https://docs.claude.com/en/docs/agents-and-tools/tool-use/web-fetch-tool.
Usage
claude_tool_web_fetch(
  max_uses = NULL,
  allowed_domains = NULL,
  blocked_domains = NULL,
  citations = FALSE,
  max_content_tokens = NULL
)

Arguments
max_uses | Integer. Maximum number of fetches allowed per request. |
allowed_domains | Character vector. Restrict fetches to specific domains. Cannot be used with |
blocked_domains | Character vector. Exclude specific domains from fetches. Cannot be used with |
citations | Logical. Whether to include citations in the response. Default is |
max_content_tokens | Integer. Maximum number of tokens to fetch from each URL. |
See Also
Other built-in tools: claude_tool_web_search(), google_tool_web_fetch(), google_tool_web_search(), openai_tool_web_search()
Examples
## Not run: 
chat <- chat_claude(beta_headers = "web-fetch-2025-09-10")
chat$register_tool(claude_tool_web_fetch())
chat$chat("What are the latest package releases on https://tidyverse.org/blog")
## End(Not run)

Claude web search tool
Description
Enables Claude to search the web for up-to-date information. Your organization administrator must enable web search in the Anthropic Console before using this tool, as it costs extra ($10 per 1,000 searches at the time of writing).
Learn more at https://docs.claude.com/en/docs/agents-and-tools/tool-use/web-search-tool.
Usage
claude_tool_web_search(
  max_uses = NULL,
  allowed_domains = NULL,
  blocked_domains = NULL,
  user_location = NULL
)

Arguments
max_uses | Integer. Maximum number of searches allowed per request. |
allowed_domains | Character vector. Restrict searches to specific domains (e.g., |
blocked_domains | Character vector. Exclude specific domains from searches. Cannot be used with |
user_location | List with optional elements: |
See Also
Other built-in tools: claude_tool_web_fetch(), google_tool_web_fetch(), google_tool_web_search(), openai_tool_web_search()
Examples
## Not run: 
chat <- chat_claude()
chat$register_tool(claude_tool_web_search())
chat$chat("What was in the news today?")
chat$chat("What's the biggest news in the economy?")
## End(Not run)

Encode images for chat input
Description
These functions are used to prepare image URLs and files for input to the chatbot. The content_image_url() function is used to provide a URL to an image, while content_image_file() is used to provide the image data itself.
Usage
content_image_url(url, detail = c("auto", "low", "high"))

content_image_file(path, content_type = "auto", resize = "low")

content_image_plot(width = 768, height = 768)

Arguments
url | The URL of the image to include in the chat input. Can be a |
detail | The detail setting for this image. Can be |
path | The path to the image file to include in the chat input. Valid file extensions are |
content_type | The content type of the image (e.g. |
resize | If You can also pass a custom string to resize the image to a specific size, e.g. All values other than |
width,height | Width and height in pixels. |
Value
An input object suitable for including in the ... parameter of the chat(), stream(), chat_async(), or stream_async() methods.
Examples
## Not run: 
chat <- chat_openai()
chat$chat(
  "What do you see in these images?",
  content_image_url("https://www.r-project.org/Rlogo.png"),
  content_image_file(system.file("httr2.png", package = "ellmer"))
)

plot(waiting ~ eruptions, data = faithful)
chat <- chat_openai()
chat$chat(
  "Describe this plot in one paragraph, as suitable for inclusion in
   alt-text. You should briefly describe the plot type, the axes, and
   2-5 major visual patterns.",
  content_image_plot()
)
## End(Not run)

Encode PDF content for chat input
Description
These functions are used to prepare PDFs as input to the chatbot. The content_pdf_url() function is used to provide a URL to a PDF file, while content_pdf_file() is used for local PDF files.
Not all providers support PDF input, so check the documentation for the provider you are using.
Usage
content_pdf_file(path)

content_pdf_url(url)

Arguments
path,url | Path or URL to a PDF file. |
Value
A ContentPDF object.
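For example, a minimal sketch assuming a provider that supports PDF input (the path is a placeholder):

## Not run: 
chat <- chat_anthropic()
chat$chat(
  "Summarise this document in one paragraph.",
  content_pdf_file("path/to/report.pdf")
)
## End(Not run)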
Record and replay content
Description
These generic functions can be used to convert Turn/Content objects into easily serializable representations (i.e. lists and atomic vectors).
contents_record() accepts a Turn or Content and returns a simple list.
contents_replay() takes the output of contents_record() and returns a Turn or Content object.
Usage
contents_record(x)

contents_replay(x, tools = list(), .envir = parent.frame())

Arguments
x | A Turn or Content object to serialize; or a serialized object to replay. |
tools | A named list of tools |
.envir | The environment in which to look for class definitions. Used when the recorded objects include classes that extend Turn or Content but are not from the ellmer package itself. |
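For example, a minimal round trip (a sketch; UserTurn() and ContentText() are used as in the examples elsewhere on this page):

turn <- UserTurn(list(ContentText("Hello!")))
recorded <- contents_record(turn) # a plain list, safe to serialize
contents_replay(recorded) # back to a Turn object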
Format contents into a textual representation
Description
These generic functions can be used to convert Turn contents or Content objects into textual representations.
contents_text() is the most minimal and only includes ContentText objects in the output.
contents_markdown() returns the text content (which it assumes to be markdown and does not convert) plus markdown representations of images and other content types.
contents_html() returns the text content, converted from markdown to HTML with commonmark::markdown_html(), plus HTML representations of images and other content types.
These content types will continue to grow and change as ellmer evolves to support more providers and as providers add more content types.
Usage
contents_text(content, ...)

contents_html(content, ...)

contents_markdown(content, ...)

Arguments
content | The Turn or Content object to be converted into text. |
... | Additional arguments passed to methods. |
Value
A string of text, markdown or HTML.
Examples
turns <- list(
  UserTurn(list(
    ContentText("What's this image?"),
    content_image_url("https://placehold.co/200x200")
  )),
  AssistantTurn("It's a placeholder image.")
)
lapply(turns, contents_text)
lapply(turns, contents_markdown)

if (rlang::is_installed("commonmark")) {
  contents_html(turns[[1]])
}

Create metadata for a tool
Description
In order to use a function as a tool in a chat, you need to craft the right call to tool(). This function helps you do that for documented functions by extracting the function's R documentation and using an LLM to generate the tool() call. It's meant to be used interactively while writing your code, not as part of your final code.
If the function has package documentation, that will be used. Otherwise, if the source code of the function can be automatically detected, then the comments immediately preceding the function are used (especially helpful if those are roxygen2 comments). If neither is available, then just the function signature is used.
Note that this function is inherently imperfect. It can't handle all possible R functions, because not all parameters are suitable for use in a tool call (for example, because they're not serializable to simple JSON objects). The documentation might not specify the expected shape of arguments to the level of detail that would allow an exact JSON schema to be generated. Please be sure to review the generated code before using it!
Usage
create_tool_def(topic, chat = NULL, echo = interactive(), verbose = FALSE)

Arguments
topic | A symbol or string literal naming the function to create metadata for. Can also be an expression of the form |
chat | A |
echo | Emit the registration code to the console. Defaults to |
verbose | If |
Value
A register_tool call that you can copy and paste into your code. Returned invisibly if echo is TRUE.
Examples
## Not run: 
# These are all equivalent
create_tool_def(rnorm)
create_tool_def(stats::rnorm)
create_tool_def("rnorm")
create_tool_def("rnorm", chat = chat_azure_openai())
## End(Not run)

Describe the schema of a data frame, suitable for sending to an LLM
Description
df_schema() gives a column-by-column description of a data frame. For each column, it gives the name, type, label (if present), and number of missing values. For numeric and date/time columns, it also gives the range. For character and factor columns, it also gives the number of unique values, and if there's only a few (<= 10), their values.
The goal is to give the LLM a sense of the structure of the data, so that it can generate useful code, and the output attempts to balance between conciseness and accuracy.
Usage
df_schema(df, max_cols = 50)

Arguments
df | A data frame to describe. |
max_cols | Maximum number of columns to include. Defaults to 50 to avoid accidentally generating very large prompts. |
Examples
df_schema(mtcars)
df_schema(iris)

Google URL fetch tool
Description
When this tool is enabled, you can include URLs directly in your prompts and Gemini will fetch and analyze the content.
Learn more at https://ai.google.dev/gemini-api/docs/url-context.
Usage
google_tool_web_fetch()

See Also
Other built-in tools: claude_tool_web_fetch(), claude_tool_web_search(), google_tool_web_search(), openai_tool_web_search()
Examples
## Not run: 
chat <- chat_google_gemini()
chat$register_tool(google_tool_web_fetch())
chat$chat("What are the latest package releases on https://tidyverse.org/blog?")
## End(Not run)

Google web search (grounding) tool
Description
Enables Gemini models to search the web for up-to-date information and ground responses with citations to sources. The model automatically decides when (and how) to search the web based on your prompt. Search results are incorporated into the response with grounding metadata including source URLs and titles.
Learn more at https://ai.google.dev/gemini-api/docs/google-search.
Usage
google_tool_web_search()

See Also
Other built-in tools: claude_tool_web_fetch(), claude_tool_web_search(), google_tool_web_fetch(), openai_tool_web_search()
Examples
## Not run: 
chat <- chat_google_gemini()
chat$register_tool(google_tool_web_search())
chat$chat("What was in the news today?")
chat$chat("What's the biggest news in the economy?")
## End(Not run)

Upload a file to Gemini
Description
This function uploads a file then waits for Gemini to finish processing it so that you can immediately use it in a prompt. It's experimental because it's currently Gemini-specific, and we expect other providers to evolve similar features in the future.
Uploaded files are automatically deleted after 2 days. Each file must be less than 2 GB and you can upload a total of 20 GB. ellmer doesn't currently provide a way to delete files early; please file an issue if this would be useful for you.
Usage
google_upload(
  path,
  base_url = "https://generativelanguage.googleapis.com/",
  api_key = NULL,
  credentials = NULL,
  mime_type = NULL
)

Arguments
path | Path to a file to upload. |
base_url | The base URL to the endpoint; the default is Gemini's public API. |
api_key | |
credentials | A function that returns a list of authentication headersor |
mime_type | Optionally, specify the mime type of the file. If not specified, it will be guessed from the file extension. |
Value
A <ContentUploaded> object that can be passed to $chat().
Examples
## Not run: 
file <- google_upload("path/to/file.pdf")
chat <- chat_google_gemini()
chat$chat(file, "Give me a three paragraph summary of this PDF")
## End(Not run)

Are credentials available?
Description
Used for examples/testing.
Usage
has_credentials(provider)

Arguments
provider | Provider name. |
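For example, a minimal sketch guarding an example that needs a live provider (the "openai" provider name string is an assumption):

if (has_credentials("openai")) {
  chat <- chat_openai()
  chat$chat("Hello!")
}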
Helpers for interpolating data into prompts
Description
These functions are lightweight wrappers around glue that make it easier to interpolate dynamic data into a static prompt:
interpolate() works with a string.
interpolate_file() works with a file.
interpolate_package() works with a file in the inst/prompts directory of a package.
Compared to glue, dynamic values should be wrapped in {{ }}, making it easier to include R code and JSON in your prompt.
Usage
interpolate(prompt, ..., .envir = parent.frame())

interpolate_file(path, ..., .envir = parent.frame())

interpolate_package(package, path, ..., .envir = parent.frame())

Arguments
prompt | A prompt string. You should not generally expose this to the end user, since glue interpolation makes it easy to run arbitrary code. |
... | Define additional temporary variables for substitution. |
.envir | Environment to evaluate |
path | A path to a prompt file (often a |
package | Package name. |
Value
A {glue} string.
Examples
joke <- "You're a cool dude who loves to make jokes. Tell me a joke about {{topic}}."

# You can supply values directly:
interpolate(joke, topic = "bananas")

# Or allow interpolate to find them in the current environment:
topic <- "apples"
interpolate(joke)

Open a live chat application
Description
live_console() lets you chat interactively in the console.
live_browser() lets you chat interactively in a browser.
Note that these functions will mutate the input chat object as you chat because your turns will be appended to the history.
Usage
live_console(chat, quiet = FALSE)

live_browser(chat, quiet = FALSE)

Arguments
chat | A chat object created by |
quiet | If |
Value
(Invisibly) The input chat.
Examples
## Not run: 
chat <- chat_anthropic()
live_console(chat)
live_browser(chat)
## End(Not run)

OpenAI web search tool
Description
Enables OpenAI models to search the web for up-to-date information. The search behavior varies by model: non-reasoning models perform simple searches, while reasoning models can perform agentic, iterative searches.
Learn more at https://platform.openai.com/docs/guides/tools-web-search
Usage
openai_tool_web_search(
  allowed_domains = NULL,
  user_location = NULL,
  external_web_access = TRUE
)

Arguments
allowed_domains | Character vector. Restrict searches to specific domains (e.g., |
user_location | List with optional elements: |
external_web_access | Logical. Whether to allow live internet access( |
See Also
Other built-in tools: claude_tool_web_fetch(), claude_tool_web_search(), google_tool_web_fetch(), google_tool_web_search()
Examples
## Not run: 
chat <- chat_openai()
chat$register_tool(openai_tool_web_search())
chat$chat("Very briefly summarise the top 3 news stories of the day")
chat$chat("Of those stories, which one do you think was the most interesting?")
## End(Not run)

Submit multiple chats in parallel
Description
If you have multiple prompts, you can submit them in parallel. This is typically considerably faster than submitting them in sequence, especially with Gemini and OpenAI.
If you're using chat_openai() or chat_anthropic() and you're willing to wait longer, you might want to use batch_chat() instead, as it comes with a 50% discount in return for taking up to 24 hours.
Usage
parallel_chat(
  chat,
  prompts,
  max_active = 10,
  rpm = 500,
  on_error = c("return", "continue", "stop")
)

parallel_chat_text(
  chat,
  prompts,
  max_active = 10,
  rpm = 500,
  on_error = c("return", "continue", "stop")
)

parallel_chat_structured(
  chat,
  prompts,
  type,
  convert = TRUE,
  include_tokens = FALSE,
  include_cost = FALSE,
  max_active = 10,
  rpm = 500,
  on_error = c("return", "continue", "stop")
)

Arguments
chat | A chat object created by a |
prompts | A vector created by |
max_active | The maximum number of simultaneous requests to send. For |
rpm | Maximum number of requests per minute. |
on_error | What to do when a request fails. One of:
|
type | A type specification for the extracted data. Should be created with a |
convert | If |
include_tokens | If |
include_cost | If |
Value
For parallel_chat(), a list with one element for each prompt. Each element is either a Chat object (if successful), NULL (if the request wasn't performed), or an error object (if it failed).
For parallel_chat_text(), a character vector with one element for each prompt. Requests that weren't successful get an NA.
For parallel_chat_structured(), a single structured data object with one element for each prompt. Typically, when type is an object, this will be a tibble with one row for each prompt, and one column for each property. If the output is a data frame, and some requests error, an .error column will be added with the error objects.
Examples
chat <- chat_openai()

# Chat ----------------------------------------------------------------------
country <- c("Canada", "New Zealand", "Jamaica", "United States")
prompts <- interpolate("What's the capital of {{country}}?")
parallel_chat(chat, prompts)

# Structured data -----------------------------------------------------------
prompts <- list(
  "I go by Alex. 42 years on this planet and counting.",
  "Pleased to meet you! I'm Jamal, age 27.",
  "They call me Li Wei. Nineteen years young.",
  "Fatima here. Just celebrated my 35th birthday last week.",
  "The name's Robert - 51 years old and proud of it.",
  "Kwame here - just hit the big 5-0 this year."
)
type_person <- type_object(name = type_string(), age = type_number())
parallel_chat_structured(chat, prompts, type_person)

Standard model parameters
Description
This helper function makes it easier to create a list of parameters used across many models. The parameter names are automatically standardised and included in the correct place in the API call.
Note that parameters that are not supported by a given provider will generate a warning, not an error. This allows you to use the same set of parameters across multiple providers.
Usage
params(
  temperature = NULL,
  top_p = NULL,
  top_k = NULL,
  frequency_penalty = NULL,
  presence_penalty = NULL,
  seed = NULL,
  max_tokens = NULL,
  log_probs = NULL,
  stop_sequences = NULL,
  reasoning_effort = NULL,
  reasoning_tokens = NULL,
  ...
)

Arguments
temperature | Temperature of the sampling distribution. |
top_p | The cumulative probability for token selection. |
top_k | The number of highest probability vocabulary tokens to keep. |
frequency_penalty | Frequency penalty for generated tokens. |
presence_penalty | Presence penalty for generated tokens. |
seed | Seed for random number generator. |
max_tokens | Maximum number of tokens to generate. |
log_probs | Include the log probabilities in the output? |
stop_sequences | A character vector of tokens to stop generation on. |
reasoning_effort,reasoning_tokens | How much effort to spend thinking? |
... | Additional named parameters to send to the provider. |
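For example, the same params() object can be reused across providers; parameters a provider doesn't support only generate a warning:

p <- params(temperature = 0.2, seed = 123, max_tokens = 500)
chat1 <- chat_openai(params = p)
chat2 <- chat_anthropic(params = p) # warns about any unsupported parameters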
Report on token usage in the current session
Description
Call this function to find out the cumulative number of tokens that you have sent and received in the current session. The price will be shown if known.
Usage
token_usage()

Value
A data frame
Examples
token_usage()

Define a tool
Description
Annotate a function for use in tool calls, by providing a name, description, and type definition for the arguments.
Learn more in vignette("tool-calling").
Usage
tool(
  fun,
  description,
  ...,
  arguments = list(),
  name = NULL,
  convert = TRUE,
  annotations = list(),
  .name = deprecated(),
  .description = deprecated(),
  .convert = deprecated(),
  .annotations = deprecated()
)

Arguments
fun | The function to be invoked when the tool is called. The return value of the function is sent back to the chatbot. Expert users can customize the tool result by returning a ContentToolResult object. |
description | A detailed description of what the function does.Generally, the more information that you can provide here, the better. |
... | |
arguments | A named list that defines the arguments accepted by thefunction. Each element should be created by a |
name | The name of the function. This can be omitted if |
convert | Should JSON inputs be automatically converted to their R data type equivalents? Defaults to |
annotations | Additional properties that describe the tool and itsbehavior. Usually created by |
.name,.description,.convert,.annotations |
Value
An S7 ToolDef object.
ellmer 0.3.0
In ellmer 0.3.0, the definition of the tool() function changed quite a bit. To make it easier to update old versions, you can use an LLM with the following system prompt:
Help the user convert an ellmer 0.2.0 and earlier tool definition into an
ellmer 0.3.0 tool definition. Here's what changed:

* All arguments, apart from the first, should be named, and the argument
  names no longer use `.` prefixes. The argument order should be function,
  name (as a string), description, then arguments, then anything else.
* Previously `arguments` was passed as `...`, so all type specifications
  should now be moved into a named list and passed to the `arguments`
  argument. It can be omitted if the function has no arguments.

```R
# old
tool(
  add,
  "Add two numbers together",
  x = type_number(),
  y = type_number()
)

# new
tool(
  add,
  name = "add",
  description = "Add two numbers together",
  arguments = list(
    x = type_number(),
    y = type_number()
  )
)
```

Don't respond; just let the user provide function calls to convert.
See Also
Other tool calling helpers: tool_annotations(), tool_reject()
Examples
# First define the metadata that the model uses to figure out when to
# call the tool
tool_rnorm <- tool(
  rnorm,
  description = "Draw numbers from a random normal distribution",
  arguments = list(
    n = type_integer("The number of observations. Must be a positive integer."),
    mean = type_number("The mean value of the distribution."),
    sd = type_number("The standard deviation of the distribution. Must be a non-negative number.")
  )
)
tool_rnorm(n = 5, mean = 0, sd = 1)

chat <- chat_openai()

# Then register it
chat$register_tool(tool_rnorm)

# Then ask a question that needs it.
chat$chat("Give me five numbers from a random normal distribution.")

# Look at the chat history to see how tool calling works:
chat
# Assistant sends a tool request which is evaluated locally and
# results are sent back in a tool result.

Tool annotations
Description
Tool annotations are additional properties that, when passed to the annotations argument of tool(), provide additional information about the tool and its behavior. This information can be used for display to users, for example in a Shiny app or another user interface.
The annotations in tool_annotations() are drawn from the Model Context Protocol and are considered hints. Tool authors should use these annotations to communicate tool properties, but users should note that these annotations are not guaranteed.
Usage
tool_annotations(
  title = NULL,
  read_only_hint = NULL,
  open_world_hint = NULL,
  idempotent_hint = NULL,
  destructive_hint = NULL,
  ...
)

Arguments
title | A human-readable title for the tool. |
read_only_hint | If |
open_world_hint | If |
idempotent_hint | If |
destructive_hint | If |
... | Additional named parameters to include in the tool annotations. |
Value
A list of tool annotations.
See Also
Other tool calling helpers: tool(), tool_reject()
Examples
# See ?tool() for a full example using this function.

# We're creating a tool around R's `rnorm()` function to allow the chatbot to
# generate random numbers from a normal distribution.
tool_rnorm <- tool(
  rnorm,
  # Describe the tool function to the LLM
  description = "Draw numbers from a random normal distribution",
  # Describe the parameters used by the tool function
  arguments = list(
    n = type_integer("The number of observations. Must be a positive integer."),
    mean = type_number("The mean value of the distribution."),
    sd = type_number("The standard deviation of the distribution. Must be a non-negative number.")
  ),
  # Tool annotations optionally provide additional context to the LLM
  annotations = tool_annotations(
    title = "Draw Random Normal Numbers",
    read_only_hint = TRUE, # the tool does not modify any state
    open_world_hint = FALSE # the tool does not interact with the outside world
  )
)

Reject a tool call
Description
Throws an error to reject a tool call. tool_reject() can be used within the tool function to indicate that the tool call should not be processed. tool_reject() can also be called in a Chat$on_tool_request() callback. When used in the callback, the tool call is rejected before the tool function is invoked.
Here's an example where utils::askYesNo() is used to ask the user for permission before accessing their current working directory. This happens directly in the tool function and is appropriate when you write the tool definition and know exactly how it will be called.
chat <- chat_openai(model = "gpt-4.1-nano")

list_files <- function() {
  allow_read <- utils::askYesNo(
    "Would you like to allow access to your current directory?"
  )
  if (isTRUE(allow_read)) {
    dir(pattern = "[.](r|R|csv)$")
  } else {
    tool_reject()
  }
}

chat$register_tool(tool(
  list_files,
  "List files in the user's current directory"
))

chat$chat("What files are available in my current directory?")
#> [tool call] list_files()
#> Would you like to allow access to your current directory? (Yes/no/cancel) no
#>
#> Error: Tool call rejected. The user has chosen to disallow the tool call.
#> It seems I am unable to access the files in your current directory right now.
#> If you can tell me what specific files you're looking for or if you can
#> provide the list, I can assist you further.

chat$chat("Try again.")
#> [tool call] list_files()
#> Would you like to allow access to your current directory? (Yes/no/cancel) yes
#>
#> app.R
#> data.csv
#> The files available in your current directory are "app.R" and "data.csv".

You can achieve a similar experience with tools written by others by using a tool_request callback. In the next example, imagine the tool is provided by a third-party package. This example implements a simple menu to ask the user for consent before running any tool.
packaged_list_files_tool <- tool(
  function() dir(pattern = "[.](r|R|csv)$"),
  "List files in the user's current directory"
)

chat <- chat_openai(model = "gpt-4.1-nano")
chat$register_tool(packaged_list_files_tool)

always_allowed <- c()

# request is a ContentToolRequest
chat$on_tool_request(function(request) {
  if (request@name %in% always_allowed) return()

  answer <- utils::menu(
    title = sprintf("Allow tool `%s()` to run?", request@name),
    choices = c("Always", "Once", "No"),
    graphics = FALSE
  )

  if (answer == 1) {
    always_allowed <<- append(always_allowed, request@name)
  } else if (answer %in% c(0, 3)) {
    tool_reject()
  }
})

# Try choosing different answers to the menu each time
chat$chat("What files are available in my current directory?")
chat$chat("How about now?")
chat$chat("And again now?")

Usage
tool_reject(reason = "The user has chosen to disallow the tool call.")

Arguments
reason | A character string describing the reason for rejecting thetool call. |
Value
Throws an error of class ellmer_tool_reject with the provided reason.
See Also
Other tool calling helpers: tool(), tool_annotations()
Type specifications
Description
These functions specify object types in a way that chatbots understand and are used for tool calling and structured data extraction. Their names are based on the JSON schema, which is what the APIs expect behind the scenes. The translation from R concepts to these types is fairly straightforward.
type_boolean(), type_integer(), type_number(), and type_string() each represent scalars. These are equivalent to length-1 logical, integer, double, and character vectors (respectively).
type_enum() is equivalent to a length-1 factor; it is a string that can only take the specified values.
type_array() is equivalent to a vector in R. You can use it to represent an atomic vector: e.g. type_array(type_boolean()) is equivalent to a logical vector and type_array(type_string()) is equivalent to a character vector. You can also use it to represent a list of more complicated types where every element is the same type (R has no base equivalent to this), e.g. type_array(type_array(type_string())) represents a list of character vectors.
type_object() is equivalent to a named list in R, but where every element must have the specified type. For example, type_object(a = type_string(), b = type_array(type_integer())) is equivalent to a list with an element called a that is a string and an element called b that is an integer vector.
type_ignore() is used in tool calling to indicate that an argument should not be provided by the LLM. This is useful when the R function has a default value for the argument and you don't want the LLM to supply it.
type_from_schema() allows you to specify the full schema that you want to get back from the LLM as a JSON schema. This is useful if you have a pre-defined schema that you want to use directly without manually creating the type using the type_*() functions. You can point to a file with the path argument or provide a JSON string with text. The schema must be a valid JSON schema object.
Usage
type_boolean(description = NULL, required = TRUE)

type_integer(description = NULL, required = TRUE)

type_number(description = NULL, required = TRUE)

type_string(description = NULL, required = TRUE)

type_enum(values, description = NULL, required = TRUE)

type_array(items, description = NULL, required = TRUE)

type_object(
  .description = NULL,
  ...,
  .required = TRUE,
  .additional_properties = FALSE
)

type_from_schema(text, path)

type_ignore()

Arguments
description,.description | The purpose of the component. This is used by the LLM to determine what values to pass to the tool or what values to extract in the structured data, so the more detail that you can provide here, the better. |
required,.required | Is the component or argument required? In type descriptions for structured data, if In tool definitions, |
values | Character vector of permitted values. |
items | The type of the array items. Can be created by any of the |
... | < |
.additional_properties | Can the object have arbitrary additional properties that are not explicitly listed? Only supported by Claude. |
text | A JSON string. |
path | A file path to a JSON file. |
Examples
# An integer vector
type_array(type_integer())

# The closest equivalent to a data frame is an array of objects
type_array(type_object(
  x = type_boolean(),
  y = type_string(),
  z = type_number()
))

# There's no specific type for dates, but you can use a string with the
# requested format in the description (it's not guaranteed that you'll
# get this format back, but you should most of the time)
type_string("The creation date, in YYYY-MM-DD format.")
type_string("The update date, in dd/mm/yyyy format.")
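As a final sketch, these types plug straight into structured data extraction (assuming an OpenAI account is configured):

## Not run: 
chat <- chat_openai()
type_person <- type_object(
  name = type_string("The person's name."),
  age = type_number("The person's age, in years.")
)
chat$chat_structured("Alex is 42 years old.", type = type_person)
## End(Not run)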