Title: Chat with Large Language Models
Version: 0.4.0
Description: Chat with large language models from a range of providers including 'Claude' <https://claude.ai>, 'OpenAI' <https://chatgpt.com>, and more. Supports streaming, asynchronous calls, tool calling, and structured data extraction.
License: MIT + file LICENSE
URL: https://ellmer.tidyverse.org, https://github.com/tidyverse/ellmer
BugReports: https://github.com/tidyverse/ellmer/issues
Depends: R (≥ 4.1)
Imports: cli, coro (≥ 1.1.0), glue, httr2 (≥ 1.2.1), jsonlite, later (≥ 1.4.0), lifecycle, promises (≥ 1.3.1), R6, rlang (≥ 1.1.0), S7 (≥ 0.2.0), tibble, vctrs
Suggests: connectcreds, curl (≥ 6.0.1), gargle, gitcreds, jose, knitr, magick, openssl, paws.common, png, rmarkdown, shiny, shinychat (≥ 0.2.0), testthat (≥ 3.0.0), vcr (≥ 2.0.0), withr
VignetteBuilder: knitr
Config/Needs/website: tidyverse/tidytemplate, rmarkdown
Config/testthat/edition: 3
Config/testthat/parallel: true
Config/testthat/start-first: chat, provider*
Encoding: UTF-8
RoxygenNote: 7.3.3
Collate: 'utils-S7.R' 'types.R' 'ellmer-package.R' 'tools-def.R' 'content.R' 'provider.R' 'as-json.R' 'batch-chat.R' 'chat-structured.R' 'chat-tools-content.R' 'turns.R' 'chat-tools.R' 'chat-utils.R' 'utils-coro.R' 'chat.R' 'content-image.R' 'content-pdf.R' 'content-replay.R' 'httr2.R' 'import-standalone-obj-type.R' 'import-standalone-purrr.R' 'import-standalone-types-check.R' 'interpolate.R' 'live.R' 'parallel-chat.R' 'params.R' 'provider-any.R' 'provider-aws.R' 'provider-openai-compatible.R' 'provider-azure.R' 'provider-claude-files.R' 'provider-claude-tools.R' 'provider-claude.R' 'provider-google.R' 'provider-cloudflare.R' 'provider-databricks.R' 'provider-deepseek.R' 'provider-github.R' 'provider-google-tools.R' 'provider-google-upload.R' 'provider-groq.R' 'provider-huggingface.R' 'provider-mistral.R' 'provider-ollama.R' 'provider-openai-tools.R' 'provider-openai.R' 'provider-openrouter.R' 'provider-perplexity.R' 'provider-portkey.R' 'provider-snowflake.R' 'provider-vllm.R' 'schema.R' 'tokens.R' 'tools-built-in.R' 'tools-def-auto.R' 'utils-auth.R' 'utils-callbacks.R' 'utils-cat.R' 'utils-merge.R' 'utils-prettytime.R' 'utils.R' 'zzz.R'
NeedsCompilation: no
Packaged: 2025-11-14 20:31:09 UTC; hadleywickham
Author: Hadley Wickham [aut, cre] (ORCID iD), Joe Cheng [aut], Aaron Jacobs [aut], Garrick Aden-Buie [aut] (ORCID iD), Barret Schloerke [aut] (ORCID iD), Posit Software, PBC [cph, fnd] (ROR ID)
Maintainer: Hadley Wickham <hadley@posit.co>
Repository: CRAN
Date/Publication: 2025-11-15 12:00:16 UTC

ellmer: Chat with Large Language Models

Description


Chat with large language models from a range of providers including 'Claude' <https://claude.ai>, 'OpenAI' <https://chatgpt.com>, and more. Supports streaming, asynchronous calls, tool calling, and structured data extraction.

Author(s)

Maintainer: Hadley Wickham hadley@posit.co (ORCID)

Authors:

  • Joe Cheng

  • Aaron Jacobs

  • Garrick Aden-Buie (ORCID)

  • Barret Schloerke (ORCID)

Other contributors:

  • Posit Software, PBC (ROR) [copyright holder, funder]

See Also

Useful links:

  • https://ellmer.tidyverse.org

  • https://github.com/tidyverse/ellmer

  • Report bugs at https://github.com/tidyverse/ellmer/issues


The Chat object

Description

A Chat is a sequence of user and assistant Turns sent to a specific Provider. A Chat is a mutable R6 object that takes care of managing the state associated with the chat; i.e., it records the messages that you send to the server and the messages that you receive back. If you register a tool (i.e., an R function that the assistant can call on your behalf), it also takes care of the tool loop.

You should generally not create this object yourself; instead, call chat_openai() or friends.

Value

A Chat object.

Methods

Public methods


Method new()

Usage
Chat$new(provider, system_prompt = NULL, echo = "none")
Arguments
provider

A provider object.

system_prompt

System prompt to start the conversation with.

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method. You can override the default by setting the ellmer_echo option.


Method get_turns()

Retrieve the turns that have been sent and received so far (optionally starting with the system prompt, if any).

Usage
Chat$get_turns(include_system_prompt = FALSE)
Arguments
include_system_prompt

Whether to include the system prompt in the turns (if any exists).
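A minimal sketch of how get_turns() fits into a session (assumes an OpenAI API key is configured; the prompt is illustrative):

```r
library(ellmer)

chat <- chat_openai()
chat$chat("Tell me a joke")

# One user turn and one assistant turn (more if tools were called)
turns <- chat$get_turns()
length(turns)

# Include the system prompt, if one was set
chat$get_turns(include_system_prompt = TRUE)
```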


Method set_turns()

Replace existing turns with a new list.

Usage
Chat$set_turns(value)
Arguments
value

A list of Turns.


Method add_turn()

Add a pair of turns to the chat.

Usage
Chat$add_turn(user, assistant, log_tokens = TRUE)
Arguments
user

The user Turn.

assistant

The assistant Turn.

log_tokens

Should tokens used in the turn be logged to the session counter?


Method get_system_prompt()

If set, the system prompt; if not, NULL.

Usage
Chat$get_system_prompt()

Method get_model()

Retrieve the model name.

Usage
Chat$get_model()

Method set_system_prompt()

Update the system prompt.

Usage
Chat$set_system_prompt(value)
Arguments
value

A character vector giving the new system prompt.


Method get_tokens()

A data frame with token usage and cost data. There are four columns: input, output, cached_input, and cost. There is one row for each assistant turn, because token counts and costs are only available when the API returns the assistant's response.

Usage
Chat$get_tokens(include_system_prompt = deprecated())
Arguments
include_system_prompt

[Deprecated]


Method get_cost()

The cost of this chat.

Usage
Chat$get_cost(include = c("all", "last"))
Arguments
include

The default, "all", gives the total cumulative cost of this chat. Alternatively, use "last" to get the cost of just the most recent turn.
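As a sketch of how token and cost accounting accumulate (assumes a configured OpenAI key; prompts are illustrative):

```r
library(ellmer)

chat <- chat_openai()
chat$chat("Tell me a joke")
chat$chat("Explain why it's funny")

chat$get_tokens()                # one row per assistant turn
chat$get_cost()                  # cumulative cost of the whole chat
chat$get_cost(include = "last")  # cost of the most recent turn only
```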


Method last_turn()

The last turn returned by the assistant.

Usage
Chat$last_turn(role = c("assistant", "user", "system"))
Arguments
role

Optionally, specify a role to find the last turn with that role.

Returns

Either a Turn or NULL, if no turns with the specified role have occurred.


Method chat()

Submit input to the chatbot, and return the response as a simple string (probably Markdown).

Usage
Chat$chat(..., echo = NULL)
Arguments
...

The input to send to the chatbot. Can be strings or images (see content_image_file() and content_image_url()).

echo

Whether to emit the response to stdout as it is received. If NULL, then the value of echo set when the chat object was created will be used.


Method chat_structured()

Extract structured data.

Usage
Chat$chat_structured(..., type, echo = "none", convert = TRUE)
Arguments
...

The input to send to the chatbot. This is typically the text you want to extract data from, but it can be omitted if the data is obvious from the existing conversation.

type

A type specification for the extracted data. Should be created with a type_() function.

echo

Whether to emit the response to stdout as it is received. Set to "text" to stream JSON data as it's generated (not supported by all providers).

convert

Automatically convert from JSON lists to R data types using the schema. For example, this will turn arrays of objects into data frames and arrays of strings into a character vector.
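A short sketch of structured extraction (the prompt is illustrative; assumes an OpenAI key):

```r
library(ellmer)

chat <- chat_openai()
type_person <- type_object(
  name = type_string(),
  age = type_number()
)

# With convert = TRUE (the default), the result is converted to R types
chat$chat_structured("Jamal is 27 years old.", type = type_person)
```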


Method chat_structured_async()

Extract structured data, asynchronously. Returns a promise that resolves to an object matching the type specification.

Usage
Chat$chat_structured_async(..., type, echo = "none", convert = TRUE)
Arguments
...

The input to send to the chatbot. Will typically include the phrase "extract structured data".

type

A type specification for the extracted data. Should be created with a type_() function.

echo

Whether to emit the response to stdout as it is received. Set to "text" to stream JSON data as it's generated (not supported by all providers).

convert

Automatically convert from JSON lists to R data types using the schema. For example, this will turn arrays of objects into data frames and arrays of strings into a character vector.


Method chat_async()

Submit input to the chatbot, and receive a promise that resolves with the response all at once. Returns a promise that resolves to a string (probably Markdown).

Usage
Chat$chat_async(..., tool_mode = c("concurrent", "sequential"))
Arguments
...

The input to send to the chatbot. Can be strings or images.

tool_mode

Whether tools should be invoked one-at-a-time ("sequential") or concurrently ("concurrent"). Sequential mode is best for interactive applications, especially when a tool may involve an interactive user interface. Concurrent mode is the default and is best suited for automated scripts or non-interactive applications.
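A sketch of asynchronous use with the promises package (assumes an OpenAI key and a running later event loop, e.g. inside Shiny):

```r
library(ellmer)
library(promises)

chat <- chat_openai()

# The promise resolves with the full response once it arrives
chat$chat_async("Tell me a joke") %...>%
  print()
```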


Method stream()

Submit input to the chatbot, returning streaming results. Returns a coro generator that yields strings. While iterating, the generator will block while waiting for more content from the chatbot.

Usage
Chat$stream(..., stream = c("text", "content"))
Arguments
...

The input to send to the chatbot. Can be strings or images.

stream

Whether the stream should yield only "text" or ellmer's rich content types. When stream = "content", stream() yields Content objects.
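The generator can be consumed with coro::loop(); a sketch (assumes an OpenAI key):

```r
library(ellmer)

chat <- chat_openai()

# Print each chunk of text as it streams in
coro::loop(for (chunk in chat$stream("Tell me a short story")) {
  cat(chunk)
})
```

With stream = "content" the same loop yields Content objects instead of strings.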


Method stream_async()

Submit input to the chatbot, returning asynchronously streaming results. Returns a coro async generator that yields string promises.

Usage
Chat$stream_async(
  ...,
  tool_mode = c("concurrent", "sequential"),
  stream = c("text", "content")
)
Arguments
...

The input to send to the chatbot. Can be strings or images.

tool_mode

Whether tools should be invoked one-at-a-time ("sequential") or concurrently ("concurrent"). Sequential mode is best for interactive applications, especially when a tool may involve an interactive user interface. Concurrent mode is the default and is best suited for automated scripts or non-interactive applications.

stream

Whether the stream should yield only "text" or ellmer's rich content types. When stream = "content", stream() yields Content objects.


Method register_tool()

Register a tool (an R function) that the chatbot can use. Learn more in vignette("tool-calling").

Usage
Chat$register_tool(tool)
Arguments
tool

A tool definition created by tool().
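A minimal sketch of defining and registering a tool (see ?tool and vignette("tool-calling") for the full signature; assumes an OpenAI key):

```r
library(ellmer)

# A tool the model can call to get the current time
tool_current_time <- tool(
  function() format(Sys.time(), tz = "UTC"),
  name = "current_time",
  description = "Gets the current time in UTC"
)

chat <- chat_openai()
chat$register_tool(tool_current_time)
chat$chat("What time is it, in UTC?")
```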


Method register_tools()

Register a list of tools. Learn more in vignette("tool-calling").

Usage
Chat$register_tools(tools)
Arguments
tools

A list of tool definitions created by tool().


Method get_provider()

Get the underlying provider object. For expert use only.

Usage
Chat$get_provider()

Method get_tools()

Retrieve the list of registered tools.

Usage
Chat$get_tools()

Method set_tools()

Sets the available tools. For expert use only; most users should use register_tool().

Usage
Chat$set_tools(tools)
Arguments
tools

A list of tool definitions created with tool().


Method on_tool_request()

Register a callback for a tool request event.

Usage
Chat$on_tool_request(callback)
Arguments
callback

A function to be called when a tool request event occurs, which must have request as its only argument.

Returns

A function that can be called to remove the callback.
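A sketch of observing tool requests; accessing request@name assumes the S7 ContentToolRequest class documented below:

```r
library(ellmer)

chat <- chat_openai()
deregister <- chat$on_tool_request(function(request) {
  cat("Tool requested:", request@name, "\n")
})

# ...chat with registered tools...

# The returned function removes the callback again
deregister()
```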


Method on_tool_result()

Register a callback for a tool result event.

Usage
Chat$on_tool_result(callback)
Arguments
callback

A function to be called when a tool result event occurs, which must have result as its only argument.

Returns

A function that can be called to remove the callback.


Method clone()

The objects of this class are cloneable with this method.

Usage
Chat$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

Examples

chat <- chat_openai()
chat$chat("Tell me a funny joke")

Content types received from and sent to a chatbot

Description

Use these functions if you're writing a package that extends ellmer and need to customise methods for various types of content. For normal use, see content_image_url() and friends.

ellmer abstracts away differences in the way that different Providers represent various types of content, allowing you to more easily write code that works with any chatbot. This set of classes represents types of content that can be either sent to or received from a provider:

Usage

Content()

ContentText(text = stop("Required"))

ContentImage()

ContentImageRemote(url = stop("Required"), detail = "")

ContentImageInline(type = stop("Required"), data = NULL)

ContentToolRequest(
  id = stop("Required"),
  name = stop("Required"),
  arguments = list(),
  tool = NULL
)

ContentToolResult(value = NULL, error = NULL, extra = list(), request = NULL)

ContentThinking(thinking = stop("Required"), extra = list())

ContentPDF(
  type = stop("Required"),
  data = stop("Required"),
  filename = stop("Required")
)

Arguments

text

A single string.

url

URL to a remote image.

detail

Not currently used.

type

MIME type of the image.

data

Base64 encoded image data.

id

Tool call id (used to associate a request and a result). Automatically managed by ellmer.

name

Function name.

arguments

Named list of arguments to call the function with.

tool

ellmer automatically matches a tool request to the tools defined for the chatbot. If NULL, the request did not match a defined tool.

value

The results of calling the tool function, if it succeeded.

error

The error message, as a string, or the error condition thrown as a result of a failure when calling the tool function. Must be NULL when the tool call is successful.

extra

Additional data.

request

The ContentToolRequest associated with the tool result, automatically added by ellmer when evaluating the tool call.

thinking

The text of the thinking output.

filename

File name, used to identify the PDF.

Value

S7 objects that all inherit from Content.

Examples

Content()
ContentText("Tell me a joke")
ContentImageRemote("https://www.r-project.org/Rlogo.png")
ContentToolRequest(id = "abc", name = "mean", arguments = list(x = 1:5))

A chatbot provider

Description

A Provider captures the details of one chatbot service/API. This captures how the API works, not the details of the underlying large language model. Different providers might offer the same (open source) model behind a different API.

Usage

Provider(
  name = stop("Required"),
  model = stop("Required"),
  base_url = stop("Required"),
  params = list(),
  extra_args = list(),
  extra_headers = character(0),
  credentials = function() NULL
)

Arguments

name

Name of the provider.

model

Name of the model.

base_url

The base URL for the API.

params

A list of standard parameters created by params().

extra_args

Arbitrary extra arguments to be included in the request body.

extra_headers

Arbitrary extra headers to be added to the request.

credentials

A zero-argument function that returns the credentials to usefor authentication. Can either return a string, representing an API key,or a named list of headers.

Details

To add support for a new backend, you will need to subclass Provider (adding any additional fields that your provider needs) and then implement the various generics that control the behavior of each provider.

Value

An S7 Provider object.

Examples

Provider(
  name = "CoolModels",
  model = "my_model",
  base_url = "https://cool-models.com"
)

A user, assistant, or system turn

Description

Every conversation with a chatbot consists of pairs of user and assistant turns, corresponding to an HTTP request and response. These turns are represented by the Turn object, which contains a list of Contents representing the individual messages within the turn. These might be text, images, tool requests (assistant only), or tool responses (user only).

UserTurn, AssistantTurn, and SystemTurn are specialized subclasses of Turn for different types of conversation turns. AssistantTurn includes additional metadata about the API response.

Note that a call to $chat() and related functions may result in multiple user-assistant turn cycles. For example, if you have registered tools, ellmer will automatically handle the tool calling loop, which may result in any number of additional cycles. Learn more about tool calling in vignette("tool-calling").

Usage

Turn(role = NULL, contents = list(), tokens = NULL)

UserTurn(contents = list())

SystemTurn(contents = list())

AssistantTurn(
  contents = list(),
  json = list(),
  tokens = c(NA_real_, NA_real_, NA_real_),
  cost = NA_real_,
  duration = NA_real_
)

Arguments

role

[Deprecated] For system, user, and assistant turns, use SystemTurn(), UserTurn(), and AssistantTurn(), respectively.

contents

A list of Content objects.

tokens

A numeric vector of length 3 representing the number of input tokens (uncached), output tokens, and input tokens (cached) used in this turn.

json

The serialized JSON corresponding to the underlying data of the turns. This is useful if there's information returned by the provider that ellmer doesn't otherwise expose.

cost

The cost of the turn in dollars.

duration

The duration of the request in seconds.

Value

An S7 Turn object.

An S7 AssistantTurn object.

Examples

UserTurn(list(ContentText("Hello, world!")))

Type definitions for function calling and structured data extraction.

Description

These S7 classes are provided for use by package developers who are extending ellmer. In everyday use, use type_boolean() and friends.

Usage

TypeBasic(description = NULL, required = TRUE, type = stop("Required"))

TypeEnum(description = NULL, required = TRUE, values = character(0))

TypeArray(description = NULL, required = TRUE, items = Type())

TypeJsonSchema(description = NULL, required = TRUE, json = list())

TypeIgnore(description = NULL, required = TRUE)

TypeObject(
  description = NULL,
  required = TRUE,
  properties = list(),
  additional_properties = FALSE
)

Arguments

description

The purpose of the component. This is used by the LLM to determine what values to pass to the tool or what values to extract in the structured data, so the more detail that you can provide here, the better.

required

Is the component or argument required?

In type descriptions for structured data, if required = FALSE and the component does not exist in the data, the LLM may hallucinate a value. Only applies when the element is nested inside of a type_object().

In tool definitions, required = TRUE signals that the LLM should always provide a value. Arguments with required = FALSE should have a default value in the tool function's definition. If the LLM does not provide a value, the default value will be used.
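The two cases might look like this (a sketch; field names and descriptions are illustrative):

```r
library(ellmer)

# Structured data: `nickname` may be absent, so mark it optional
type_person <- type_object(
  name = type_string("The person's name"),
  nickname = type_string("The person's nickname, if any", required = FALSE)
)

# Tool definition: an optional argument gets a default in the function
tool_round <- tool(
  function(x, digits = 2) round(x, digits),
  name = "round_number",
  description = "Rounds a number",
  arguments = list(
    x = type_number("The number to round"),
    digits = type_integer("Decimal places", required = FALSE)
  )
)
```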

type

Basic type name. Must be one of boolean, integer, number, or string.

values

Character vector of permitted values.

items

The type of the array items. Can be created by any of the type_ functions.

json

A JSON schema object as a list.

properties

Named list of properties stored inside the object. Each element should be an S7 Type object.

additional_properties

Can the object have arbitrary additional properties that are not explicitly listed? Only supported by Claude.

Value

S7 objects inheriting from Type.

Examples

TypeBasic(type = "boolean")
TypeArray(items = TypeBasic(type = "boolean"))

Submit multiple chats in one batch

Description

batch_chat() and batch_chat_structured() currently only work with chat_openai() and chat_anthropic(). They use the OpenAI and Anthropic batch APIs, which allow you to submit multiple requests simultaneously. The results can take up to 24 hours to complete, but in return you pay 50% less than usual (but note that ellmer doesn't include this discount in its pricing metadata). If you want to get results back more quickly, or you're working with a different provider, you may want to use parallel_chat() instead.

Since batched requests can take a long time to complete, batch_chat() requires a file path that is used to store information about the batch so you never lose any work. You can either set wait = FALSE or simply interrupt the waiting process; then, later, either call batch_chat() to resume where you left off or call batch_chat_completed() to see if the results are ready to retrieve. batch_chat() will store the chat responses in this file, so you can either keep it around to cache the results, or delete it to free up disk space.

This API is marked as experimental since I don't yet know how to handle errors in the most helpful way. Fortunately they don't seem to be common, but if you have ideas, please let me know!

Usage

batch_chat(chat, prompts, path, wait = TRUE, ignore_hash = FALSE)

batch_chat_text(chat, prompts, path, wait = TRUE, ignore_hash = FALSE)

batch_chat_structured(
  chat,
  prompts,
  path,
  type,
  wait = TRUE,
  ignore_hash = FALSE,
  convert = TRUE,
  include_tokens = FALSE,
  include_cost = FALSE
)

batch_chat_completed(chat, prompts, path)

Arguments

chat

A chat object created by a chat_ function, or a string passed to chat().

prompts

A vector created by interpolate() or a list of character vectors.

path

Path to file (with .json extension) to store state.

The file records a hash of the provider, the prompts, and the existing chat turns. If you attempt to reuse the same file with any of these being different, you'll get an error.

wait

If TRUE, will wait for the batch to complete. If FALSE, it will return NULL if the batch is not complete, and you can retrieve the results later by re-running batch_chat() when batch_chat_completed() is TRUE.

ignore_hash

If TRUE, will only warn rather than error when the hash doesn't match. You can use this if ellmer has changed the hash structure and you're confident that you're reusing the same inputs.

type

A type specification for the extracted data. Should be created with a type_() function.

convert

If TRUE, automatically convert from JSON lists to R data types using the schema. This typically works best when type is type_object(), as this will give you a data frame with one column for each property. If FALSE, returns a list.

include_tokens

If TRUE, and the result is a data frame, will add input_tokens and output_tokens columns giving the total input and output tokens for each prompt.

include_cost

If TRUE, and the result is a data frame, will add a cost column giving the cost of each prompt.

Value

For batch_chat(), a list of Chat objects, one for each prompt. For batch_chat_text(), a character vector of text responses. For batch_chat_structured(), a single structured data object with one element for each prompt. Typically, when type is an object, this will be a data frame with one row for each prompt, and one column for each property.

For any of the above, returns NULL if wait = FALSE and the job is not complete.

Examples

chat <- chat_openai(model = "gpt-4.1-nano")

# Chat ----------------------------------------------------------------------
prompts <- interpolate("What do people from {{state.name}} bring to a potluck dinner?")
## Not run: 
chats <- batch_chat(chat, prompts, path = "potluck.json")
chats
## End(Not run)

# Structured data -----------------------------------------------------------
prompts <- list(
  "I go by Alex. 42 years on this planet and counting.",
  "Pleased to meet you! I'm Jamal, age 27.",
  "They call me Li Wei. Nineteen years young.",
  "Fatima here. Just celebrated my 35th birthday last week.",
  "The name's Robert - 51 years old and proud of it.",
  "Kwame here - just hit the big 5-0 this year."
)
type_person <- type_object(name = type_string(), age = type_number())
## Not run: 
data <- batch_chat_structured(
  chat = chat,
  prompts = prompts,
  path = "people-data.json",
  type = type_person
)
data
## End(Not run)

Chat with any provider

Description

This is a generic interface to all the other chat_ functions that allows you to pick the provider and the model with a simple string.

Usage

chat(
  name,
  ...,
  system_prompt = NULL,
  params = NULL,
  echo = c("none", "output", "all")
)

Arguments

name

Provider (and optionally model) name in the form "provider/model" or "provider" (which will use the default model for that provider).

...

Arguments passed to the provider function.

system_prompt

A system prompt to set the behavior of the assistant.

params

Common model parameters, usually created by params().

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.
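For example (assumes the relevant API keys are set; model names are illustrative):

```r
library(ellmer)

# Provider and model in one string
chat1 <- chat("openai/gpt-4.1-nano")

# Provider only: uses that provider's default model
chat2 <- chat("anthropic")

chat1$chat("Tell me a joke")
```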


Chat with an Anthropic Claude model

Description

Anthropic provides a number of chat-based models under the Claude moniker. Note that a Claude Pro membership does not give you the ability to call models via the API; instead, you will need to sign up (and pay for) a developer account.

Usage

chat_anthropic(
  system_prompt = NULL,
  params = NULL,
  model = NULL,
  cache = c("5m", "1h", "none"),
  api_args = list(),
  base_url = "https://api.anthropic.com/v1",
  beta_headers = character(),
  api_key = NULL,
  credentials = NULL,
  api_headers = character(),
  echo = NULL
)

chat_claude(
  system_prompt = NULL,
  params = NULL,
  model = NULL,
  cache = c("5m", "1h", "none"),
  api_args = list(),
  base_url = "https://api.anthropic.com/v1",
  beta_headers = character(),
  api_key = NULL,
  credentials = NULL,
  api_headers = character(),
  echo = NULL
)

models_claude(
  base_url = "https://api.anthropic.com/v1",
  api_key = anthropic_key()
)

models_anthropic(
  base_url = "https://api.anthropic.com/v1",
  api_key = anthropic_key()
)

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

params

Common model parameters, usually created byparams().

model

The model to use for the chat (defaults to "claude-sonnet-4-5-20250929"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. Use models_anthropic() to see all options.

cache

How long to cache inputs? Defaults to "5m" (five minutes).Set to "none" to disable caching or "1h" to cache for one hour.

See details below.

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

base_url

The base URL to the endpoint; the default is Claude's public API.

beta_headers

Optionally, a character vector of beta headers to opt in to Claude features that are still in beta.

api_key

[Deprecated] Use credentials instead.

credentials

Override the default credentials. You generally should not need this argument; instead set the ANTHROPIC_API_KEY environment variable. The best place to set this is in .Renviron, which you can easily edit by calling usethis::edit_r_environ().

If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request).
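If you do need the hook, a sketch (keyring is just one illustrative way to fetch a secret; the key name is hypothetical):

```r
library(ellmer)

# Fetch the API key lazily from the system credential store
chat <- chat_anthropic(
  credentials = function() keyring::key_get("anthropic-api-key")
)
```

Returning a named list instead of a string lets you inject arbitrary authentication headers.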

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

Value

A Chat object.

Caching

Caching with Claude is a bit more complicated than other providers, but we believe that on average it will save you both money and time, so we have enabled it by default. With other providers, like OpenAI and Google, you only pay for cache reads, which cost 10% of the normal price. With Claude, you also pay for cache writes, which cost 125% of the normal price for 5-minute caching and 200% of the normal price for 1-hour caching.

How does this affect the total cost of a conversation? Imagine the first turn sends 1000 input tokens and receives 200 output tokens. The second turn must first send both the input and output from the previous turn (1200 tokens). It then sends a further 1000 tokens and receives 200 tokens back.

To compare the prices of these two approaches we can ignore the cost of output tokens, because they are the same for both. How much will the input tokens cost? If we don't use caching, we send 1000 tokens in the first turn and 2200 (1000 + 200 + 1000) tokens in the second turn, for a total of 3200 tokens. If we use caching, we'll send (the equivalent of) 1000 * 1.25 = 1250 tokens in the first turn. In the second turn, 1000 of the input tokens will be cached, so the cost is 1000 * 0.1 + (200 + 1000) * 1.25 = 1600 tokens. That makes a total of 2850 tokens, i.e. 11% fewer tokens, decreasing the overall cost.

Obviously, the details will vary from conversation to conversation, but if you have a large system prompt that you re-use many times you should expect to see larger savings. You can see exactly how many input and cache input tokens each turn uses, along with the total cost, with chat$get_tokens(). If you don't see savings for your use case, you can suppress caching with cache = "none".
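The worked example above can be checked with a few lines of arithmetic:

```r
input <- 1000
output <- 200

# Without caching: all input tokens at the normal rate
no_cache <- input + (input + output + input)  # 1000 + 2200 = 3200

# With 5-minute caching: writes cost 125%, reads cost 10%
with_cache <-
  input * 1.25 +             # turn 1: cache write
  input * 0.10 +             # turn 2: read the cached 1000 tokens
  (output + input) * 1.25    # turn 2: write the 1200 new tokens
with_cache                   # 2850 token-equivalents

1 - with_cache / no_cache    # ~0.11, i.e. about 11% cheaper
```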

I know this is already quite complicated, but there's one final wrinkle: Claude will only cache longer prompts, with caching requiring at least 1024-4096 tokens, depending on the model. So don't be surprised if you don't see any differences with caching if you have a short prompt.

See all the details at https://docs.claude.com/en/docs/build-with-claude/prompt-caching.

See Also

Other chatbots: chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()

Examples

chat <- chat_anthropic()
chat$chat("Tell me three jokes about statisticians")

Chat with an AWS bedrock model

Description

AWS Bedrock provides a number of language models, including those from Anthropic's Claude, using the Bedrock Converse API.

Authentication

Authentication is handled through {paws.common}, so if authentication does not work for you automatically, you'll need to follow the advice at https://www.paws-r-sdk.com/#credentials. In particular, if your org uses AWS SSO, you'll need to run aws sso login at the terminal.

Usage

chat_aws_bedrock(
  system_prompt = NULL,
  base_url = NULL,
  model = NULL,
  profile = NULL,
  params = NULL,
  api_args = list(),
  api_headers = character(),
  echo = NULL
)

models_aws_bedrock(profile = NULL, base_url = NULL)

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

base_url

The base URL to the endpoint; if NULL, the default AWS Bedrock endpoint is used.

model

The model to use for the chat (defaults to "anthropic.claude-sonnet-4-5-20250929-v1:0"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. Use models_aws_bedrock() to see all options.

While ellmer provides a default model, there's no guarantee that you'll have access to it, so you'll need to specify a model that you can access. If you're using cross-region inference, you'll need to use the inference profile ID, e.g. model = "us.anthropic.claude-sonnet-4-5-20250929-v1:0".

profile

AWS profile to use.

params

Common model parameters, usually created byparams().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Some useful arguments include:

api_args = list(
  inferenceConfig = list(
    maxTokens = 100,
    temperature = 0.7,
    topP = 0.9,
    topK = 20
  )
)

api_headers

Named character vector of arbitrary extra headers appendedto every chat API call.

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()

Examples

## Not run: 
# Basic usage
chat <- chat_aws_bedrock()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a model hosted on Azure OpenAI

Description

The Azure OpenAI server hosts a number of open source models as well as proprietary models from OpenAI.

Built on top of chat_openai_compatible().

Authentication

chat_azure_openai() supports API keys and the credentials parameter, but it also makes use of:

Usage

chat_azure_openai(
  endpoint = azure_endpoint(),
  model,
  params = NULL,
  api_version = NULL,
  system_prompt = NULL,
  api_key = NULL,
  credentials = NULL,
  api_args = list(),
  echo = c("none", "output", "all"),
  api_headers = character(),
  deployment_id = deprecated()
)

Arguments

endpoint

Azure OpenAI endpoint url with protocol and hostname, i.e. https://{your-resource-name}.openai.azure.com. Defaults to using the value of the AZURE_OPENAI_ENDPOINT environment variable.

model

The deployment id for the model you want to use.

params

Common model parameters, usually created by params().

api_version

The API version to use.

system_prompt

A system prompt to set the behavior of the assistant.

api_key

[Deprecated] Use credentials instead.

credentials

Override the default credentials. You generally should not need this argument; instead set the AZURE_OPENAI_API_KEY environment variable. The best place to set this is in .Renviron, which you can easily edit by calling usethis::edit_r_environ().

If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request).
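As a hedged sketch, such a callback might look like this (the deployment id and environment variable below are placeholders):

```r
# Sketch: a zero-argument credentials callback. Returning a named
# list adds these values as headers on every request; returning a
# string would supply the API key instead.
my_credentials <- function() {
  # MY_AZURE_TOKEN is a hypothetical environment variable
  list(Authorization = paste("Bearer", Sys.getenv("MY_AZURE_TOKEN")))
}
chat <- chat_azure_openai(
  model = "my-deployment",  # hypothetical deployment id
  credentials = my_credentials
)
```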

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

deployment_id

[Deprecated] Use model instead.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()

Examples

## Not run:
chat <- chat_azure_openai(model = "gpt-4o-mini")
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a model hosted on Cloudflare

Description

Cloudflare Workers AI hosts a variety of open-source AI models. To use the Cloudflare API, you must have an Account ID and an Access Token, which you can obtain by following these instructions.

Built on top of chat_openai_compatible().

Known limitations

Usage

chat_cloudflare(
  account = cloudflare_account(),
  system_prompt = NULL,
  params = NULL,
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)

Arguments

account

The Cloudflare account ID. Taken from the CLOUDFLARE_ACCOUNT_ID env var, if defined.

system_prompt

A system prompt to set the behavior of the assistant.

params

Common model parameters, usually created by params().

api_key

[Deprecated] Use credentials instead.

credentials

Override the default credentials. You generally should not need this argument; instead set the CLOUDFLARE_API_KEY environment variable. The best place to set this is in .Renviron, which you can easily edit by calling usethis::edit_r_environ().

If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request).

model

The model to use for the chat (defaults to "meta-llama/Llama-3.3-70b-instruct-fp8-fast"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use.

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()

Examples

## Not run:
chat <- chat_cloudflare()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a model hosted on Databricks

Description

Databricks provides out-of-the-box access to a number of foundation models and can also serve as a gateway for external models hosted by a third party.

Built on top of chat_openai_compatible().

Authentication

chat_databricks() picks up on ambient Databricks credentials for a subset of the Databricks client unified authentication model. Specifically, it supports:

Usage

chat_databricks(
  workspace = databricks_workspace(),
  system_prompt = NULL,
  model = NULL,
  token = NULL,
  params = NULL,
  api_args = list(),
  echo = c("none", "output", "all"),
  api_headers = character()
)

Arguments

workspace

The URL of a Databricks workspace, e.g. "https://example.cloud.databricks.com". Will use the value of the environment variable DATABRICKS_HOST, if set.

system_prompt

A system prompt to set the behavior of the assistant.

model

The model to use for the chat (defaults to "databricks-claude-3-7-sonnet"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use.

Available foundational models include:

  • databricks-claude-3-7-sonnet (the default)

  • ⁠databricks-mixtral-8x7b-instruct⁠

  • ⁠databricks-meta-llama-3-1-70b-instruct⁠

  • ⁠databricks-meta-llama-3-1-405b-instruct⁠

token

An authentication token for the Databricks workspace, or NULL to use ambient credentials.

params

Common model parameters, usually created by params().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()

Examples

## Not run:
chat <- chat_databricks()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a model hosted on DeepSeek

Description

Sign up at https://platform.deepseek.com.

Built on top of chat_openai_compatible().

Known limitations

Usage

chat_deepseek(
  system_prompt = NULL,
  base_url = "https://api.deepseek.com",
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

base_url

The base URL to the endpoint; the default uses DeepSeek.

api_key

[Deprecated] Use credentials instead.

credentials

Override the default credentials. You generally should not need this argument; instead set the DEEPSEEK_API_KEY environment variable. The best place to set this is in .Renviron, which you can easily edit by calling usethis::edit_r_environ().

If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request).

model

The model to use for the chat (defaults to "deepseek-chat"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use.

params

Common model parameters, usually created by params().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()

Examples

## Not run:
chat <- chat_deepseek()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a model hosted on the GitHub model marketplace

Description

GitHub Models hosts a number of open source and OpenAI models. To access the GitHub model marketplace, you will need to apply for and be accepted into the beta access program. See https://github.com/marketplace/models for details.

This function is a lightweight wrapper around chat_openai() with the defaults tweaked for the GitHub Models marketplace.

GitHub also supports the Azure AI Inference SDK, which you can use by setting base_url to "https://models.inference.ai.azure.com/". This endpoint was used in ellmer v0.3.0 and earlier.

Usage

chat_github(
  system_prompt = NULL,
  base_url = "https://models.github.ai/inference/",
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)

models_github(
  base_url = "https://models.github.ai/",
  api_key = NULL,
  credentials = NULL
)

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

base_url

The base URL to the endpoint; the default is OpenAI's public API.

api_key

[Deprecated] Use credentials instead.

credentials

Override the default credentials. You generally should not need this argument; instead set the GITHUB_PAT environment variable. The best place to set this is in .Renviron, which you can easily edit by calling usethis::edit_r_environ().

If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request).

model

The model to use for the chat (defaults to "gpt-4o"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use.

params

Common model parameters, usually created by params().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()

Examples

## Not run:
chat <- chat_github()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a Google Gemini or Vertex AI model

Description

Google's AI offering is broken up into two parts: Gemini and Vertex AI.Most enterprises are likely to use Vertex AI, and individuals are likelyto use Gemini.

Use google_upload() to upload files (PDFs, images, video, audio, etc.)

Authentication

These functions try a number of authentication strategies, in this order:

Usage

chat_google_gemini(
  system_prompt = NULL,
  base_url = "https://generativelanguage.googleapis.com/v1beta/",
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  api_headers = character(),
  echo = NULL
)

chat_google_vertex(
  location,
  project_id,
  system_prompt = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  api_headers = character(),
  echo = NULL
)

models_google_gemini(
  base_url = "https://generativelanguage.googleapis.com/v1beta/",
  api_key = NULL,
  credentials = NULL
)

models_google_vertex(location, project_id, credentials = NULL)

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

base_url

The base URL to the endpoint; the default is OpenAI's public API.

api_key

[Deprecated] Use credentials instead.

credentials

A function that returns a list of authentication headers or NULL, the default, to use ambient credentials. See above for details.

model

The model to use for the chat (defaults to "gemini-2.5-flash"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. Use models_google_gemini() to see all options.

params

Common model parameters, usually created by params().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

location

Location, e.g. us-east1, me-central1, africa-south1 or global.

project_id

Project ID.
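For example, a minimal Vertex AI call might look like this (the location, project ID, and model below are placeholders):

```r
# Sketch: Vertex AI needs an explicit location and project ID.
chat <- chat_google_vertex(
  location = "us-east1",
  project_id = "my-gcp-project",
  model = "gemini-2.5-flash"
)
```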

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()

Examples

## Not run:
chat <- chat_google_gemini()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a model hosted on Groq

Description

Sign up at https://groq.com.

Built on top of chat_openai_compatible().

Known limitations

Groq does not currently support structured data extraction.

Usage

chat_groq(
  system_prompt = NULL,
  base_url = "https://api.groq.com/openai/v1",
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

base_url

The base URL to the endpoint; the default is OpenAI's public API.

api_key

[Deprecated] Use credentials instead.

credentials

Override the default credentials. You generally should not need this argument; instead set the GROQ_API_KEY environment variable. The best place to set this is in .Renviron, which you can easily edit by calling usethis::edit_r_environ().

If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request).

model

The model to use for the chat (defaults to "llama-3.1-8b-instant"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use.

params

Common model parameters, usually created by params().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()

Examples

## Not run:
chat <- chat_groq()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a model hosted on Hugging Face Serverless Inference API

Description

Hugging Face hosts a variety of open-source and proprietary AI models available via their Inference API. To use the Hugging Face API, you must have an Access Token, which you can obtain from your Hugging Face account (ensure that at least "Make calls to Inference Providers" and "Make calls to your Inference Endpoints" are checked).

Built on top of chat_openai_compatible().

Known limitations

Usage

chat_huggingface(
  system_prompt = NULL,
  params = NULL,
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

params

Common model parameters, usually created by params().

api_key

[Deprecated] Use credentials instead.

credentials

Override the default credentials. You generally should not need this argument; instead set the HUGGINGFACE_API_KEY environment variable. The best place to set this is in .Renviron, which you can easily edit by calling usethis::edit_r_environ().

If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request).

model

The model to use for the chat (defaults to "meta-llama/Llama-3.1-8B-Instruct"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use.

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()

Examples

## Not run:
chat <- chat_huggingface()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a model hosted on Mistral's La Plateforme

Description

Get your API key from https://console.mistral.ai/api-keys.

Built on top of chat_openai_compatible().

Known limitations

Usage

chat_mistral(
  system_prompt = NULL,
  params = NULL,
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)

models_mistral(api_key = mistral_key())

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

params

Common model parameters, usually created by params().

api_key

[Deprecated] Use credentials instead.

credentials

Override the default credentials. You generally should not need this argument; instead set the MISTRAL_API_KEY environment variable. The best place to set this is in .Renviron, which you can easily edit by calling usethis::edit_r_environ().

If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request).

model

The model to use for the chat (defaults to "mistral-large-latest"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use.

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()

Examples

## Not run:
chat <- chat_mistral()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a local Ollama model

Description

To use chat_ollama() first download and install Ollama. Then install some models either from the command line (e.g. with ollama pull llama3.1) or within R using ollamar (e.g. ollamar::pull("llama3.1")).

Built on top of chat_openai_compatible().

Known limitations

Usage

chat_ollama(
  system_prompt = NULL,
  base_url = Sys.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
  model,
  params = NULL,
  api_args = list(),
  echo = NULL,
  api_key = NULL,
  credentials = NULL,
  api_headers = character()
)

models_ollama(base_url = "http://localhost:11434", credentials = NULL)

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

base_url

The base URL to the endpoint; the default is OpenAI's public API.

model

The model to use for the chat. Use models_ollama() to see all options.

params

Common model parameters, usually created by params().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_key

[Deprecated] Use credentials instead.

credentials

Ollama doesn't require credentials for local usage and in most cases you do not need to provide credentials.

However, if you're accessing an Ollama instance hosted behind a reverse proxy or secured endpoint that enforces bearer-token authentication, you can set the OLLAMA_API_KEY environment variable or provide a callback function to credentials.
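For example (the proxy URL and model below are placeholders), such a setup might look like:

```r
# Sketch: reach an Ollama instance behind an authenticated proxy.
# Returning a string from the callback supplies it as the API key.
chat <- chat_ollama(
  base_url = "https://ollama.example.com",
  model = "llama3.1",
  credentials = function() Sys.getenv("OLLAMA_API_KEY")
)
```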

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()

Examples

## Not run:
chat <- chat_ollama(model = "llama3.2")
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with an OpenAI model

Description

This is the main interface to OpenAI's models, using the responses API. You can use this to access OpenAI's latest models and features like image generation and web search. If you need to use an OpenAI-compatible API from another provider, or the chat completions API with OpenAI, use chat_openai_compatible() instead.

Note that a ChatGPT Plus membership does not grant access to the API. You will need to sign up for a developer account (and pay for it) at the developer platform.

Usage

chat_openai(
  system_prompt = NULL,
  base_url = "https://api.openai.com/v1",
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  api_headers = character(),
  service_tier = c("auto", "default", "flex", "priority"),
  echo = c("none", "output", "all")
)

models_openai(
  base_url = "https://api.openai.com/v1",
  api_key = NULL,
  credentials = NULL
)

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

base_url

The base URL to the endpoint; the default is OpenAI's public API.

api_key

[Deprecated] Use credentials instead.

credentials

Override the default credentials. You generally should not need this argument; instead set the OPENAI_API_KEY environment variable. The best place to set this is in .Renviron, which you can easily edit by calling usethis::edit_r_environ().

If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request).

model

The model to use for the chat (defaults to "gpt-4.1"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use. Use models_openai() to see all options.

params

Common model parameters, usually created by params().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

service_tier

Request a specific service tier. There are four options:

  • "auto" (default): uses the service tier configured in Project settings.

  • "default": standard pricing and performance.

  • "flex": slower and cheaper.

  • "priority": faster and more expensive.

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai_compatible(), chat_openrouter(), chat_perplexity(), chat_portkey()

Examples

chat <- chat_openai()
chat$chat("
  What is the difference between a tibble and a data frame?
  Answer with a bulleted list
")
chat$chat("Tell me three funny jokes about statisticians")

Chat with an OpenAI-compatible model

Description

This function is for use with OpenAI-compatible APIs, also known as the chat completions API. If you want to use OpenAI itself, we recommend chat_openai(), which uses the newer responses API.

Many providers offer OpenAI-compatible APIs, including:

Usage

chat_openai_compatible(
  base_url,
  name = "OpenAI-compatible",
  system_prompt = NULL,
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  api_headers = character(),
  echo = c("none", "output", "all")
)

Arguments

base_url

The base URL to the endpoint. This parameter is required since there is no default for OpenAI-compatible APIs.

name

The name of the provider; this is shown in token_usage() and is used to compute costs.

system_prompt

A system prompt to set the behavior of the assistant.

api_key

[Deprecated] Use credentials instead.

credentials

Credentials to use for authentication. If not provided, will attempt to use the OPENAI_API_KEY environment variable.

model

The model to use for chat. No default; depends on your provider.

params

Common model parameters, usually created by params().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openrouter(), chat_perplexity(), chat_portkey()

Examples

## Not run:
# Example with Ollama (requires Ollama running locally)
chat <- chat_openai_compatible(
  base_url = "http://localhost:11434/v1",
  model = "llama2"
)
chat$chat("What is the difference between a tibble and a data frame?")
## End(Not run)

Chat with one of the many models hosted on OpenRouter

Description

Sign up at https://openrouter.ai.

Support for features depends on the underlying model that you use; see https://openrouter.ai/models for details.

Usage

chat_openrouter(
  system_prompt = NULL,
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  echo = c("none", "output", "all"),
  api_headers = character()
)

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

api_key

[Deprecated] Use credentials instead.

credentials

Override the default credentials. You generally should not need this argument; instead set the OPENROUTER_API_KEY environment variable. The best place to set this is in .Renviron, which you can easily edit by calling usethis::edit_r_environ().

If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request).

model

The model to use for the chat (defaults to "gpt-4o"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use.

params

Common model parameters, usually created by params().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_perplexity(), chat_portkey()

Examples

## Not run:
chat <- chat_openrouter()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a model hosted on perplexity.ai

Description

Sign up at https://www.perplexity.ai.

Perplexity AI is a platform for running LLMs that are capable of searching the web in real-time to help them answer questions with information that may not have been available when the model was trained.

This function is a Uses OpenAI compatible API viachat_openai_compatible() withthe defaults tweaked for Perplexity AI.

Usage

chat_perplexity(
  system_prompt = NULL,
  base_url = "https://api.perplexity.ai/",
  api_key = NULL,
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

base_url

The base URL to the endpoint; the default is Perplexity's public API.

api_key

[Deprecated] Use credentials instead.

credentials

Override the default credentials. You generally should not need this argument; instead set the PERPLEXITY_API_KEY environment variable. The best place to set this is in .Renviron, which you can easily edit by calling usethis::edit_r_environ().

If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request).

model

The model to use for the chat (defaults to "llama-3.1-sonar-small-128k-online"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use.

params

Common model parameters, usually created by params().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_portkey()

Examples

## Not run:
chat <- chat_perplexity()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a model hosted on PortkeyAI

Description

PortkeyAI provides an interface (AI Gateway) to connect through its Universal API to a variety of LLM providers via a single endpoint.

Usage

chat_portkey(
  model,
  system_prompt = NULL,
  base_url = "https://api.portkey.ai/v1",
  api_key = NULL,
  credentials = NULL,
  virtual_key = deprecated(),
  params = NULL,
  api_args = list(),
  echo = NULL,
  api_headers = character()
)

models_portkey(base_url = "https://api.portkey.ai/v1", api_key = portkey_key())

Arguments

model

The model name, e.g. @my-provider/my-model.

system_prompt

A system prompt to set the behavior of the assistant.

base_url

The base URL to the endpoint; the default is Portkey's public API.

api_key

[Deprecated] Use credentials instead.

credentials

Override the default credentials. You generally should not need this argument; instead set the PORTKEY_API_KEY environment variable. The best place to set this is in .Renviron, which you can easily edit by calling usethis::edit_r_environ().

If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request).

virtual_key

[Deprecated] Portkey now recommends supplying the model provider (formerly known as the virtual_key) in the model name, e.g. @my-provider/my-model. See https://portkey.ai/docs/support/upgrade-to-model-catalog for details.

For backward compatibility, the PORTKEY_VIRTUAL_KEY env var is still used if the model doesn't include a provider.

params

Common model parameters, usually created by params().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

Value

A Chat object.

See Also

Other chatbots: chat_anthropic(), chat_aws_bedrock(), chat_azure_openai(), chat_cloudflare(), chat_databricks(), chat_deepseek(), chat_github(), chat_google_gemini(), chat_groq(), chat_huggingface(), chat_mistral(), chat_ollama(), chat_openai(), chat_openai_compatible(), chat_openrouter(), chat_perplexity()

Examples

## Not run:
chat <- chat_portkey()
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Chat with a model hosted on Snowflake

Description

The Snowflake provider allows you to interact with models available through the Cortex LLM REST API.

Authentication

chat_snowflake() picks up the following ambient Snowflake credentials:

Known limitations

Note that Snowflake-hosted models do not support images.

Usage

chat_snowflake(
  system_prompt = NULL,
  account = snowflake_account(),
  credentials = NULL,
  model = NULL,
  params = NULL,
  api_args = list(),
  echo = c("none", "output", "all"),
  api_headers = character()
)

Arguments

system_prompt

A system prompt to set the behavior of the assistant.

account

A Snowflake account identifier, e.g. "testorg-test_account". Defaults to the value of the SNOWFLAKE_ACCOUNT environment variable.

credentials

A list of authentication headers to pass into httr2::req_headers(), a function that returns them when called, or NULL, the default, to use ambient credentials.

model

The model to use for the chat (defaults to "claude-3-7-sonnet"). We regularly update the default, so we strongly recommend explicitly specifying a model for anything other than casual use.

params

Common model parameters, usually created by params().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

Value

A Chat object.

Examples

chat <- chat_snowflake()
chat$chat("Tell me a joke in the form of a SQL query.")

Chat with a model hosted by vLLM

Description

vLLM is an open source library that provides an efficient and convenient LLM model server. You can use chat_vllm() to connect to endpoints powered by vLLM.

Uses the OpenAI-compatible API via chat_openai_compatible().

Usage

chat_vllm(
  base_url,
  system_prompt = NULL,
  model,
  params = NULL,
  api_args = list(),
  api_key = NULL,
  credentials = NULL,
  echo = NULL,
  api_headers = character()
)

models_vllm(base_url, api_key = NULL, credentials = NULL)

Arguments

base_url

The base URL to the endpoint.

system_prompt

A system prompt to set the behavior of the assistant.

model

The model to use for the chat. Use models_vllm() to see all options.

params

Common model parameters, usually created by params().

api_args

Named list of arbitrary extra arguments appended to the body of every chat API call. Combined with the body object generated by ellmer with modifyList().

api_key

[Deprecated] Use credentials instead.

credentials

Override the default credentials. You generally should not need this argument; instead set the VLLM_API_KEY environment variable. The best place to set this is in .Renviron, which you can easily edit by calling usethis::edit_r_environ().

If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request).

echo

One of the following options:

  • none: don't emit any output (default when running in a function).

  • output: echo text and tool-calling output as it streams in (default when running at the console).

  • all: echo all input and output.

Note this only affects the chat() method.

api_headers

Named character vector of arbitrary extra headers appended to every chat API call.

Value

A Chat object.

Examples

## Not run:
chat <- chat_vllm("http://my-vllm.com")
chat$chat("Tell me three jokes about statisticians")
## End(Not run)

Upload, download, and manage files for Claude

Description

[Experimental] Use the beta Files API to upload and manage files in Claude. This is currently experimental because the API is in beta and may change. Note that you need beta_headers = "files-api-2025-04-14" to use the API.

Claude offers 100GB of file storage per organization, with each file having a maximum size of 500MB. For more details see https://docs.claude.com/en/docs/build-with-claude/files

Usage

claude_file_upload(
  path,
  base_url = "https://api.anthropic.com/v1/",
  beta_headers = "files-api-2025-04-14",
  credentials = NULL
)

claude_file_list(
  base_url = "https://api.anthropic.com/v1/",
  credentials = NULL,
  beta_headers = "files-api-2025-04-14"
)

claude_file_get(
  file_id,
  base_url = "https://api.anthropic.com/v1/",
  credentials = NULL,
  beta_headers = "files-api-2025-04-14"
)

claude_file_download(
  file_id,
  path,
  base_url = "https://api.anthropic.com/v1/",
  credentials = NULL,
  beta_headers = "files-api-2025-04-14"
)

claude_file_delete(
  file_id,
  base_url = "https://api.anthropic.com/v1/",
  credentials = NULL,
  beta_headers = "files-api-2025-04-14"
)

Arguments

path

Path to download the file to.

base_url

The base URL to the endpoint; the default is Claude's public API.

beta_headers

Beta headers to use for the request. Defaults to files-api-2025-04-14.

credentials

Override the default credentials. You generally should not need this argument; instead set the ANTHROPIC_API_KEY environment variable. The best place to set this is in .Renviron, which you can easily edit by calling usethis::edit_r_environ().

If you do need additional control, this argument takes a zero-argument function that returns either a string (the API key), or a named list (added as additional headers to every request).

file_id

ID of the file to get information about, download, or delete.

Examples

## Not run:
file <- claude_file_upload("path/to/file.pdf")

chat <- chat_anthropic(beta_headers = "files-api-2025-04-14")
chat$chat("Please summarize the document.", file)
## End(Not run)

Claude web fetch tool

Description

Enables Claude to fetch and analyze content from web URLs. Claude can onlyfetch URLs that appear in the conversation context (user messages orprevious tool results). For security reasons, Claude cannot dynamicallyconstruct URLs to fetch.

Requires the web-fetch-2025-09-10 beta header. Learn more at https://docs.claude.com/en/docs/agents-and-tools/tool-use/web-fetch-tool.

Usage

claude_tool_web_fetch(
  max_uses = NULL,
  allowed_domains = NULL,
  blocked_domains = NULL,
  citations = FALSE,
  max_content_tokens = NULL
)

Arguments

max_uses

Integer. Maximum number of fetches allowed per request.

allowed_domains

Character vector. Restrict fetches to specific domains. Cannot be used with blocked_domains.

blocked_domains

Character vector. Exclude specific domains from fetches. Cannot be used with allowed_domains.

citations

Logical. Whether to include citations in the response. Default is FALSE.

max_content_tokens

Integer. Maximum number of tokens to fetch from each URL.

See Also

Other built-in tools: claude_tool_web_search(), google_tool_web_fetch(), google_tool_web_search(), openai_tool_web_search()

Examples

## Not run:
chat <- chat_claude(beta_headers = "web-fetch-2025-09-10")
chat$register_tool(claude_tool_web_fetch())
chat$chat("What are the latest package releases on https://tidyverse.org/blog")
## End(Not run)

Claude web search tool

Description

Enables Claude to search the web for up-to-date information. Your organization administrator must enable web search in the Anthropic Console before using this tool, as it costs extra ($10 per 1,000 searches at time of writing).

Learn more at https://docs.claude.com/en/docs/agents-and-tools/tool-use/web-search-tool.

Usage

claude_tool_web_search(
  max_uses = NULL,
  allowed_domains = NULL,
  blocked_domains = NULL,
  user_location = NULL
)

Arguments

max_uses

Integer. Maximum number of searches allowed per request.

allowed_domains

Character vector. Restrict searches to specific domains (e.g., c("nytimes.com", "bbc.com")). Cannot be used with blocked_domains.

blocked_domains

Character vector. Exclude specific domains from searches. Cannot be used with allowed_domains.

user_location

List with optional elements: country (2-letter code), city, region, and timezone (IANA timezone) to localize search results.
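For instance, a sketch of localizing search results (the location values are illustrative):

```R
claude_tool_web_search(
  max_uses = 5,
  user_location = list(
    country = "GB",
    city = "London",
    timezone = "Europe/London"
  )
)
```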

See Also

Other built-in tools: claude_tool_web_fetch(), google_tool_web_fetch(), google_tool_web_search(), openai_tool_web_search()

Examples

## Not run:
chat <- chat_claude()
chat$register_tool(claude_tool_web_search())
chat$chat("What was in the news today?")
chat$chat("What's the biggest news in the economy?")
## End(Not run)

Encode images for chat input

Description

These functions are used to prepare image URLs and files for input to the chatbot. The content_image_url() function is used to provide a URL to an image, while content_image_file() is used to provide the image data itself.

Usage

content_image_url(url, detail = c("auto", "low", "high"))

content_image_file(path, content_type = "auto", resize = "low")

content_image_plot(width = 768, height = 768)

Arguments

url

The URL of the image to include in the chat input. Can be a data: URL or a regular URL. Valid image types are PNG, JPEG, WebP, and non-animated GIF.

detail

The detail setting for this image. Can be "auto", "low", or "high".

path

The path to the image file to include in the chat input. Valid file extensions are .png, .jpeg, .jpg, .webp, and (non-animated) .gif.

content_type

The content type of the image (e.g. image/png). If "auto", the content type is inferred from the file extension.

resize

If "low", resize images to fit within 512x512. If "high", resize to fit within 2000x768 or 768x2000. (See the OpenAI docs for more on why these specific sizes are used.) If "none", do not resize.

You can also pass a custom string to resize the image to a specific size, e.g. "200x200" to resize to 200x200 pixels while preserving aspect ratio. Append > to resize only if the image is larger than the specified size, and ! to ignore aspect ratio (e.g. "300x200>!").

All values other than none require the magick package.
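A sketch of the custom geometry strings (the file path is a placeholder; all but the last call require the magick package):

```R
# Resize to fit within 200x200, preserving aspect ratio
content_image_file("photo.png", resize = "200x200")

# Only shrink if larger than 300x200, and ignore the aspect ratio
content_image_file("photo.png", resize = "300x200>!")

# Send the file as-is
content_image_file("photo.png", resize = "none")
```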

width,height

Width and height in pixels.

Value

An input object suitable for including in the ... parameter of the chat(), stream(), chat_async(), or stream_async() methods.

Examples

## Not run:
chat <- chat_openai()
chat$chat(
  "What do you see in these images?",
  content_image_url("https://www.r-project.org/Rlogo.png"),
  content_image_file(system.file("httr2.png", package = "ellmer"))
)

plot(waiting ~ eruptions, data = faithful)
chat <- chat_openai()
chat$chat(
  "Describe this plot in one paragraph, as suitable for inclusion in
   alt-text. You should briefly describe the plot type, the axes, and
   2-5 major visual patterns.",
  content_image_plot()
)
## End(Not run)

Encode PDF content for chat input

Description

These functions are used to prepare PDFs as input to the chatbot. The content_pdf_url() function is used to provide a URL to a PDF file, while content_pdf_file() is used for local PDF files.

Not all providers support PDF input, so check the documentation for theprovider you are using.

Usage

content_pdf_file(path)

content_pdf_url(url)

Arguments

path,url

Path or URL to a PDF file.

Value

A ContentPDF object.


Record and replay content

Description

These generic functions can be used to convert Turn/Content objects into easily serializable representations (i.e. lists and atomic vectors).

Usage

contents_record(x)

contents_replay(x, tools = list(), .envir = parent.frame())

Arguments

x

A Turn or Content object to serialize; or a serialized object to replay.

tools

A named list of tools.

.envir

The environment in which to look for class definitions. Used when the recorded objects include classes that extend Turn or Content but are not from the ellmer package itself.


Format contents into a textual representation

Description

[Experimental]

These generic functions can be used to convert Turn contents or Content objects into textual representations.

These content types will continue to grow and change as ellmer evolves to support more providers and as providers add more content types.

Usage

contents_text(content, ...)

contents_html(content, ...)

contents_markdown(content, ...)

Arguments

content

The Turn or Content object to be converted into text. contents_markdown() also accepts Chat instances to turn the entire conversation history into markdown text.

...

Additional arguments passed to methods.

Value

A string of text, markdown or HTML.

Examples

turns <- list(
  UserTurn(list(
    ContentText("What's this image?"),
    content_image_url("https://placehold.co/200x200")
  )),
  AssistantTurn("It's a placeholder image.")
)
lapply(turns, contents_text)
lapply(turns, contents_markdown)

if (rlang::is_installed("commonmark")) {
  contents_html(turns[[1]])
}

Create metadata for a tool

Description

In order to use a function as a tool in a chat, you need to craft the right call to tool(). This function helps you do that for documented functions by extracting the function's R documentation and using an LLM to generate the tool() call. It's meant to be used interactively while writing your code, not as part of your final code.

If the function has package documentation, that will be used. Otherwise, if the source code of the function can be automatically detected, the comments immediately preceding the function are used (especially helpful if those are roxygen2 comments). If neither is available, then just the function signature is used.

Note that this function is inherently imperfect. It can't handle all possible R functions, because not all parameters are suitable for use in a tool call (for example, because they're not serializable to simple JSON objects). The documentation might not specify the expected shape of arguments to the level of detail that would allow an exact JSON schema to be generated. Please be sure to review the generated code before using it!

Usage

create_tool_def(topic, chat = NULL, echo = interactive(), verbose = FALSE)

Arguments

topic

A symbol or string literal naming the function to create metadata for. Can also be an expression of the form pkg::fun.

chat

A Chat object used to generate the output. If NULL (the default), uses chat_openai().

echo

Emit the registration code to the console. Defaults to TRUE in interactive sessions.

verbose

If TRUE, print the input we send to the LLM, which may be useful for debugging unexpectedly poor results.

Value

A register_tool call that you can copy and paste into your code. Returned invisibly if echo is TRUE.

Examples

## Not run:
# These are all equivalent
create_tool_def(rnorm)
create_tool_def(stats::rnorm)
create_tool_def("rnorm")

create_tool_def("rnorm", chat = chat_azure_openai())
## End(Not run)

Describe the schema of a data frame, suitable for sending to an LLM

Description

df_schema() gives a column-by-column description of a data frame. For each column, it gives the name, type, label (if present), and number of missing values. For numeric and date/time columns, it also gives the range. For character and factor columns, it also gives the number of unique values, and if there are only a few (<= 10), their values.

The goal is to give the LLM a sense of the structure of the data, so that it can generate useful code, and the output attempts to balance between conciseness and accuracy.

Usage

df_schema(df, max_cols = 50)

Arguments

df

A data frame to describe.

max_cols

Maximum number of columns to include. Defaults to 50 to avoid accidentally generating very large prompts.

Examples

df_schema(mtcars)

df_schema(iris)

Google URL fetch tool

Description

When this tool is enabled, you can include URLs directly in your prompts and Gemini will fetch and analyze the content.

Learn more at https://ai.google.dev/gemini-api/docs/url-context.

Usage

google_tool_web_fetch()

See Also

Other built-in tools: claude_tool_web_fetch(), claude_tool_web_search(), google_tool_web_search(), openai_tool_web_search()

Examples

## Not run:
chat <- chat_google_gemini()
chat$register_tool(google_tool_web_fetch())
chat$chat("What are the latest package releases on https://tidyverse.org/blog?")
## End(Not run)

Google web search tool

Description

Enables Gemini models to search the web for up-to-date information and ground responses with citations to sources. The model automatically decides when (and how) to search the web based on your prompt. Search results are incorporated into the response with grounding metadata including source URLs and titles.

Learn more at https://ai.google.dev/gemini-api/docs/google-search.

Usage

google_tool_web_search()

See Also

Other built-in tools: claude_tool_web_fetch(), claude_tool_web_search(), google_tool_web_fetch(), openai_tool_web_search()

Examples

## Not run:
chat <- chat_google_gemini()
chat$register_tool(google_tool_web_search())
chat$chat("What was in the news today?")
chat$chat("What's the biggest news in the economy?")
## End(Not run)

Upload a file to Gemini

Description

[Experimental]

This function uploads a file then waits for Gemini to finish processing it so that you can immediately use it in a prompt. It's experimental because it's currently Gemini specific, and we expect other providers to evolve similar features in the future.

Uploaded files are automatically deleted after 2 days. Each file must be less than 2 GB and you can upload a total of 20 GB. ellmer doesn't currently provide a way to delete files early; please file an issue if this would be useful for you.

Usage

google_upload(
  path,
  base_url = "https://generativelanguage.googleapis.com/",
  api_key = NULL,
  credentials = NULL,
  mime_type = NULL
)

Arguments

path

Path to a file to upload.

base_url

The base URL to the endpoint; the default is Gemini's public API.

api_key

[Deprecated] Usecredentials instead.

credentials

A function that returns a list of authentication headers or NULL, the default, to use ambient credentials. See above for details.

mime_type

Optionally, specify the mime type of the file. If not specified, it will be guessed from the file extension.

Value

A <ContentUploaded> object that can be passed to $chat().

Examples

## Not run:
file <- google_upload("path/to/file.pdf")

chat <- chat_google_gemini()
chat$chat(file, "Give me a three paragraph summary of this PDF")
## End(Not run)

Are credentials available?

Description

Used for examples/testing.

Usage

has_credentials(provider)

Arguments

provider

Provider name.


Helpers for interpolating data into prompts

Description

These functions are lightweight wrappers around glue that make it easier to interpolate dynamic data into a static prompt.

Compared to glue, dynamic values should be wrapped in {{ }}, making it easier to include R code and JSON in your prompt.

Usage

interpolate(prompt, ..., .envir = parent.frame())

interpolate_file(path, ..., .envir = parent.frame())

interpolate_package(package, path, ..., .envir = parent.frame())

Arguments

prompt

A prompt string. You should not generally expose this to the end user, since glue interpolation makes it easy to run arbitrary code.

...

Define additional temporary variables for substitution.

.envir

Environment to evaluate ... expressions in. Used when wrapping in another function. See vignette("wrappers", package = "glue") for more details.

path

A path to a prompt file (often a .md). In interpolate_package(), this path is relative to inst/prompts.

package

Package name.

Value

A {glue} string.

Examples

joke <- "You're a cool dude who loves to make jokes. Tell me a joke about {{topic}}."

# You can supply values directly:
interpolate(joke, topic = "bananas")

# Or allow interpolate to find them in the current environment:
topic <- "apples"
interpolate(joke)

Open a live chat application

Description

Note that these functions will mutate the input chat object as you chat because your turns will be appended to the history.

Usage

live_console(chat, quiet = FALSE)

live_browser(chat, quiet = FALSE)

Arguments

chat

A chat object created by chat_openai() or friends.

quiet

If TRUE, suppresses the initial message that explains how to use the console.

Value

(Invisibly) The input chat.

Examples

## Not run:
chat <- chat_anthropic()

live_console(chat)
live_browser(chat)
## End(Not run)

OpenAI web search tool

Description

Enables OpenAI models to search the web for up-to-date information. The search behavior varies by model: non-reasoning models perform simple searches, while reasoning models can perform agentic, iterative searches.

Learn more at https://platform.openai.com/docs/guides/tools-web-search

Usage

openai_tool_web_search(
  allowed_domains = NULL,
  user_location = NULL,
  external_web_access = TRUE
)

Arguments

allowed_domains

Character vector. Restrict searches to specific domains (e.g., c("nytimes.com", "bbc.com")). Maximum 20 domains. URLs will be automatically cleaned (http/https prefixes removed).

user_location

List with optional elements: country (2-letter ISO code), city, region, and timezone (IANA timezone) to localize search results.

external_web_access

Logical. Whether to allow live internet access (TRUE, default) or use only cached/indexed results (FALSE).

See Also

Other built-in tools: claude_tool_web_fetch(), claude_tool_web_search(), google_tool_web_fetch(), google_tool_web_search()

Examples

## Not run:
chat <- chat_openai()
chat$register_tool(openai_tool_web_search())
chat$chat("Very briefly summarise the top 3 news stories of the day")
chat$chat("Of those stories, which one do you think was the most interesting?")
## End(Not run)

Submit multiple chats in parallel

Description

If you have multiple prompts, you can submit them in parallel. This is typically considerably faster than submitting them in sequence, especially with Gemini and OpenAI.

If you're using chat_openai() or chat_anthropic() and you're willing to wait longer, you might want to use batch_chat() instead, as it comes with a 50% discount in return for taking up to 24 hours.

Usage

parallel_chat(
  chat,
  prompts,
  max_active = 10,
  rpm = 500,
  on_error = c("return", "continue", "stop")
)

parallel_chat_text(
  chat,
  prompts,
  max_active = 10,
  rpm = 500,
  on_error = c("return", "continue", "stop")
)

parallel_chat_structured(
  chat,
  prompts,
  type,
  convert = TRUE,
  include_tokens = FALSE,
  include_cost = FALSE,
  max_active = 10,
  rpm = 500,
  on_error = c("return", "continue", "stop")
)

Arguments

chat

A chat object created by a chat_ function, or a string passed to chat().

prompts

A vector created by interpolate() or a list of character vectors.

max_active

The maximum number of simultaneous requests to send.

For chat_anthropic(), note that the number of active connections is limited primarily by the output tokens per minute limit (OTPM), which is estimated from the max_tokens parameter, which defaults to 4096. That means if your usage tier limits you to 16,000 OTPM, you should either set max_active = 4 (16,000 / 4096) to decrease the number of active connections or use params() in chat_anthropic() to decrease max_tokens.

rpm

Maximum number of requests per minute.

on_error

What to do when a request fails. One of:

  • "return" (the default): stop processing new requests, wait for in-flight requests to finish, then return.

  • "continue": keep going, performing every request.

  • "stop": stop processing and throw an error.

type

A type specification for the extracted data. Should be created with a type_() function.

convert

If TRUE, automatically convert from JSON lists to R data types using the schema. This typically works best when type is type_object() as this will give you a data frame with one column for each property. If FALSE, returns a list.

include_tokens

If TRUE, and the result is a data frame, will add input_tokens and output_tokens columns giving the total input and output tokens for each prompt.

include_cost

If TRUE, and the result is a data frame, will add a cost column giving the cost of each prompt.

Value

For parallel_chat(), a list with one element for each prompt. Each element is either a Chat object (if successful), a NULL (if the request wasn't performed), or an error object (if it failed).

For parallel_chat_text(), a character vector with one element for each prompt. Requests that weren't successful get an NA.

For parallel_chat_structured(), a single structured data object with one element for each prompt. Typically, when type is an object, this will be a tibble with one row for each prompt, and one column for each property. If the output is a data frame, and some requests error, an .error column will be added with the error objects.

Examples

chat <- chat_openai()

# Chat ----------------------------------------------------------------------
country <- c("Canada", "New Zealand", "Jamaica", "United States")
prompts <- interpolate("What's the capital of {{country}}?")
parallel_chat(chat, prompts)

# Structured data -----------------------------------------------------------
prompts <- list(
  "I go by Alex. 42 years on this planet and counting.",
  "Pleased to meet you! I'm Jamal, age 27.",
  "They call me Li Wei. Nineteen years young.",
  "Fatima here. Just celebrated my 35th birthday last week.",
  "The name's Robert - 51 years old and proud of it.",
  "Kwame here - just hit the big 5-0 this year."
)
type_person <- type_object(name = type_string(), age = type_number())
parallel_chat_structured(chat, prompts, type_person)

Standard model parameters

Description

This helper function makes it easier to create a list of parameters used across many models. The parameter names are automatically standardised and included in the correct place in the API call.

Note that parameters that are not supported by a given provider will generate a warning, not an error. This allows you to use the same set of parameters across multiple providers.

Usage

params(
  temperature = NULL,
  top_p = NULL,
  top_k = NULL,
  frequency_penalty = NULL,
  presence_penalty = NULL,
  seed = NULL,
  max_tokens = NULL,
  log_probs = NULL,
  stop_sequences = NULL,
  reasoning_effort = NULL,
  reasoning_tokens = NULL,
  ...
)

Arguments

temperature

Temperature of the sampling distribution.

top_p

The cumulative probability for token selection.

top_k

The number of highest probability vocabulary tokens to keep.

frequency_penalty

Frequency penalty for generated tokens.

presence_penalty

Presence penalty for generated tokens.

seed

Seed for random number generator.

max_tokens

Maximum number of tokens to generate.

log_probs

Include the log probabilities in the output?

stop_sequences

A character vector of tokens to stop generation on.

reasoning_effort,reasoning_tokens

How much effort to spend thinking? reasoning_effort is a string, like "low", "medium", or "high". reasoning_tokens is an integer, giving a maximum token budget. Each provider only takes one of these two parameters.

...

Additional named parameters to send to the provider.
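Since each provider takes only one of the two reasoning parameters, you would pick whichever one your provider expects. A sketch:

```R
# For providers that take a string effort level:
params(reasoning_effort = "low")

# For providers that take an integer token budget:
params(reasoning_tokens = 1024L)
```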


Report on token usage in the current session

Description

Call this function to find out the cumulative number of tokens that you have sent and received in the current session. The price will be shown if known.

Usage

token_usage()

Value

A data frame

Examples

token_usage()

Define a tool

Description

Annotate a function for use in tool calls, by providing a name, description,and type definition for the arguments.

Learn more in vignette("tool-calling").

Usage

tool(
  fun,
  description,
  ...,
  arguments = list(),
  name = NULL,
  convert = TRUE,
  annotations = list(),
  .name = deprecated(),
  .description = deprecated(),
  .convert = deprecated(),
  .annotations = deprecated()
)

Arguments

fun

The function to be invoked when the tool is called. The return value of the function is sent back to the chatbot.

Expert users can customize the tool result by returning a ContentToolResult object.

description

A detailed description of what the function does. Generally, the more information that you can provide here, the better.

...

[Deprecated] Use arguments instead.

arguments

A named list that defines the arguments accepted by the function. Each element should be created by a type_*() function. Use type_ignore() if you don't want the LLM to provide that argument (e.g., because the R function has a suitable default value).

name

The name of the function. This can be omitted if fun is an existing function (i.e. not defined inline).

convert

Should JSON inputs be automatically converted to their R data type equivalents? Defaults to TRUE.

annotations

Additional properties that describe the tool and its behavior. Usually created by tool_annotations(), where you can find a description of the annotation properties recommended by the Model Context Protocol.

.name, .description, .convert, .annotations

[Deprecated] Please switch to the non-prefixed equivalents.

Value

An S7 ToolDef object.

ellmer 0.3.0

In ellmer 0.3.0, the definition of the tool() function changed quite a bit. To make it easier to update old versions, you can use an LLM with the following system prompt:

Help the user convert an ellmer 0.2.0 and earlier tool definition into an
ellmer 0.3.0 tool definition. Here's what changed:

* All arguments, apart from the first, should be named, and the argument
  names no longer use `.` prefixes. The argument order should be function,
  name (as a string), description, then arguments, then anything else.
* Previously `arguments` was passed as `...`, so all type specifications
  should now be moved into a named list and passed to the `arguments`
  argument. It can be omitted if the function has no arguments.

```R
# old
tool(
  add,
  "Add two numbers together",
  x = type_number(),
  y = type_number()
)

# new
tool(
  add,
  name = "add",
  description = "Add two numbers together",
  arguments = list(
    x = type_number(),
    y = type_number()
  )
)
```

Don't respond; just let the user provide function calls to convert.

See Also

Other tool calling helpers: tool_annotations(), tool_reject()

Examples

# First define the metadata that the model uses to figure out when to
# call the tool
tool_rnorm <- tool(
  rnorm,
  description = "Draw numbers from a random normal distribution",
  arguments = list(
    n = type_integer("The number of observations. Must be a positive integer."),
    mean = type_number("The mean value of the distribution."),
    sd = type_number("The standard deviation of the distribution. Must be a non-negative number.")
  )
)
tool_rnorm(n = 5, mean = 0, sd = 1)

chat <- chat_openai()

# Then register it
chat$register_tool(tool_rnorm)

# Then ask a question that needs it.
chat$chat("Give me five numbers from a random normal distribution.")

# Look at the chat history to see how tool calling works:
chat
# Assistant sends a tool request which is evaluated locally and
# results are sent back in a tool result.
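As a further sketch, type_ignore() lets an argument fall back to its R default instead of being supplied by the model. The save_note() function and its file path below are invented for illustration:

```r
library(ellmer)

# A hypothetical helper with a sensible default the LLM shouldn't override
save_note <- function(text, path = "notes.txt") {
  cat(text, "\n", file = path, append = TRUE)
  invisible(path)
}

tool_save_note <- tool(
  save_note,
  description = "Append a note to the user's notes file",
  arguments = list(
    text = type_string("The note to save."),
    # The LLM never sees or sets `path`; the R default is used instead
    path = type_ignore()
  )
)

chat <- chat_openai()
chat$register_tool(tool_save_note)
```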

Tool annotations

Description

Tool annotations are additional properties that, when passed to the annotations argument of tool(), provide additional information about the tool and its behavior. This information can be used for display to users, for example in a Shiny app or another user interface.

The annotations in tool_annotations() are drawn from the Model Context Protocol and are considered hints. Tool authors should use these annotations to communicate tool properties, but users should note that these annotations are not guaranteed.

Usage

tool_annotations(
  title = NULL,
  read_only_hint = NULL,
  open_world_hint = NULL,
  idempotent_hint = NULL,
  destructive_hint = NULL,
  ...
)

Arguments

title

A human-readable title for the tool.

read_only_hint

If TRUE, the tool does not modify its environment.

open_world_hint

If TRUE, the tool may interact with an "open world" of external entities. If FALSE, the tool's domain of interaction is closed. For example, the world of a web search tool is open, but the world of a memory tool is not.

idempotent_hint

If TRUE, calling the tool repeatedly with the same arguments will have no additional effect on its environment. (Only meaningful when read_only_hint is FALSE.)

destructive_hint

If TRUE, the tool may perform destructive updates to its environment, otherwise it only performs additive updates. (Only meaningful when read_only_hint is FALSE.)

...

Additional named parameters to include in the tool annotations.

Value

A list of tool annotations.

See Also

Other tool calling helpers: tool(), tool_reject()

Examples

# See ?tool() for a full example using this function.
# We're creating a tool around R's `rnorm()` function to allow the chatbot to
# generate random numbers from a normal distribution.
tool_rnorm <- tool(
  rnorm,
  # Describe the tool function to the LLM
  description = "Draw numbers from a random normal distribution",
  # Describe the parameters used by the tool function
  arguments = list(
    n = type_integer("The number of observations. Must be a positive integer."),
    mean = type_number("The mean value of the distribution."),
    sd = type_number("The standard deviation of the distribution. Must be a non-negative number.")
  ),
  # Tool annotations optionally provide additional context to the LLM
  annotations = tool_annotations(
    title = "Draw Random Normal Numbers",
    read_only_hint = TRUE, # the tool does not modify any state
    open_world_hint = FALSE # the tool does not interact with the outside world
  )
)

Reject a tool call

Description

Throws an error to reject a tool call. tool_reject() can be used within the tool function to indicate that the tool call should not be processed. tool_reject() can also be called in a Chat$on_tool_request() callback. When used in the callback, the tool call is rejected before the tool function is invoked.

Here's an example where utils::askYesNo() is used to ask the user for permission before accessing their current working directory. This happens directly in the tool function and is appropriate when you write the tool definition and know exactly how it will be called.

chat <- chat_openai(model = "gpt-4.1-nano")

list_files <- function() {
  allow_read <- utils::askYesNo(
    "Would you like to allow access to your current directory?"
  )
  if (isTRUE(allow_read)) {
    dir(pattern = "[.](r|R|csv)$")
  } else {
    tool_reject()
  }
}

chat$register_tool(tool(
  list_files,
  "List files in the user's current directory"
))

chat$chat("What files are available in my current directory?")
#> [tool call] list_files()
#> Would you like to allow access to your current directory? (Yes/no/cancel) no
#>
#> Error: Tool call rejected. The user has chosen to disallow the tool call.
#> It seems I am unable to access the files in your current directory right now.
#> If you can tell me what specific files you're looking for or if you can provide
#> the list, I can assist you further.

chat$chat("Try again.")
#> [tool call] list_files()
#> Would you like to allow access to your current directory? (Yes/no/cancel) yes
#>
#> app.R
#> data.csv
#> The files available in your current directory are "app.R" and "data.csv".

You can achieve a similar experience with tools written by others by using a tool_request callback. In the next example, imagine the tool is provided by a third-party package. This example implements a simple menu to ask the user for consent before running any tool.

packaged_list_files_tool <- tool(
  function() dir(pattern = "[.](r|R|csv)$"),
  "List files in the user's current directory"
)

chat <- chat_openai(model = "gpt-4.1-nano")
chat$register_tool(packaged_list_files_tool)

always_allowed <- c()

# ContentToolRequest
chat$on_tool_request(function(request) {
  if (request@name %in% always_allowed) return()

  answer <- utils::menu(
    title = sprintf("Allow tool `%s()` to run?", request@name),
    choices = c("Always", "Once", "No"),
    graphics = FALSE
  )

  if (answer == 1) {
    always_allowed <<- append(always_allowed, request@name)
  } else if (answer %in% c(0, 3)) {
    tool_reject()
  }
})

# Try choosing different answers to the menu each time
chat$chat("What files are available in my current directory?")
chat$chat("How about now?")
chat$chat("And again now?")

Usage

tool_reject(reason = "The user has chosen to disallow the tool call.")

Arguments

reason

A character string describing the reason for rejecting the tool call.

Value

Throws an error of class ellmer_tool_reject with the provided reason.

See Also

Other tool calling helpers: tool(), tool_annotations()


Type specifications

Description

These functions specify object types in a way that chatbots understand and are used for tool calling and structured data extraction. Their names are based on the JSON schema, which is what the APIs expect behind the scenes. The translation from R concepts to these types is fairly straightforward.

Usage

type_boolean(description = NULL, required = TRUE)

type_integer(description = NULL, required = TRUE)

type_number(description = NULL, required = TRUE)

type_string(description = NULL, required = TRUE)

type_enum(values, description = NULL, required = TRUE)

type_array(items, description = NULL, required = TRUE)

type_object(
  .description = NULL,
  ...,
  .required = TRUE,
  .additional_properties = FALSE
)

type_from_schema(text, path)

type_ignore()

Arguments

description, .description

The purpose of the component. This is used by the LLM to determine what values to pass to the tool or what values to extract in the structured data, so the more detail that you can provide here, the better.

required, .required

Is the component or argument required?

In type descriptions for structured data, if required = FALSE and the component does not exist in the data, the LLM may hallucinate a value. Only applies when the element is nested inside of a type_object().

In tool definitions, required = TRUE signals that the LLM should always provide a value. Arguments with required = FALSE should have a default value in the tool function's definition. If the LLM does not provide a value, the default value will be used.

values

Character vector of permitted values.

items

The type of the array items. Can be created by any of the type_*() functions.

...

<dynamic-dots> Name-type pairs defining the components that the object must possess.

.additional_properties

Can the object have arbitrary additional properties that are not explicitly listed? Only supported by Claude.

text

A JSON string.

path

A file path to a JSON file.

Examples

# An integer vector
type_array(type_integer())

# The closest equivalent to a data frame is an array of objects
type_array(type_object(
  x = type_boolean(),
  y = type_string(),
  z = type_number()
))

# There's no specific type for dates, but you can use a string with the
# requested format in the description (it's not guaranteed that you'll
# get this format back, but you should most of the time)
type_string("The creation date, in YYYY-MM-DD format.")
type_string("The update date, in dd/mm/yyyy format.")
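If you already have a JSON schema, type_from_schema() can reuse it instead of rebuilding the type with type_*() calls. A minimal sketch (the schema string below is invented for illustration):

```r
library(ellmer)

# A hypothetical JSON schema for a person record
person_schema <- '{
  "type": "object",
  "properties": {
    "name": {"type": "string", "description": "Full name"},
    "age": {"type": "integer", "description": "Age in years"}
  },
  "required": ["name", "age"]
}'

type_person <- type_from_schema(text = person_schema)
```

You could equally pass path = to read the same schema from a .json file.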
