| Title: | Unit Testing for R |
|---|---|
| Description: | Software testing is important, but, in part because it is frustrating and boring, many of us avoid it. 'testthat' is a testing framework for R that is easy to learn and use, and integrates with your existing 'workflow'. |
| Authors: | Hadley Wickham [aut, cre], Posit Software, PBC [cph, fnd], R Core team [ctb] (Implementation of utils::recover()) |
| Maintainer: | Hadley Wickham <[email protected]> |
| License: | MIT + file LICENSE |
| Version: | 3.3.1 |
| Built: | 2025-11-26 05:48:56 UTC |
| Source: | https://github.com/r-lib/testthat |
R CMD check displays only the last 13 lines of the result, so this report is designed to ensure that you see something useful there.
Other reporters: DebugReporter, FailReporter, JunitReporter, ListReporter, LocationReporter, MinimalReporter, MultiReporter, ProgressReporter, RStudioReporter, Reporter, SilentReporter, SlowReporter, StopReporter, SummaryReporter, TapReporter, TeamcityReporter
These functions compare values of comparable data types, such as numbers, dates, and times.
```r
expect_lt(object, expected, label = NULL, expected.label = NULL)
expect_lte(object, expected, label = NULL, expected.label = NULL)
expect_gt(object, expected, label = NULL, expected.label = NULL)
expect_gte(object, expected, label = NULL, expected.label = NULL)
```
object,expected | A value to compare and its expected bound. |
label,expected.label | Used to customise failure messages. For expert use only. |
Other expectations: equality-expectations, expect_error(), expect_length(), expect_match(), expect_named(), expect_null(), expect_output(), expect_reference(), expect_silent(), inheritance-expectations, logical-expectations
```r
a <- 9
expect_lt(a, 10)
## Not run: expect_lt(11, 10)

a <- 11
expect_gt(a, 10)
## Not run: expect_gt(9, 10)
```
This reporter will call a modified version of recover() on all broken expectations.
Other reporters: CheckReporter, FailReporter, JunitReporter, ListReporter, LocationReporter, MinimalReporter, MultiReporter, ProgressReporter, RStudioReporter, Reporter, SilentReporter, SlowReporter, StopReporter, SummaryReporter, TapReporter, TeamcityReporter
These functions provide two levels of strictness when comparing a computation to a reference value. expect_identical() is the baseline; expect_equal() relaxes the test to ignore small numeric differences.
In the 2nd edition, expect_identical() uses identical() and expect_equal() uses all.equal(). In the 3rd edition, both functions use waldo. They differ only in that expect_equal() sets tolerance = testthat_tolerance() so that small floating point differences are ignored; this also implies that (e.g.) 1 and 1L are treated as equal.
```r
expect_equal(
  object,
  expected,
  ...,
  tolerance = if (edition_get() >= 3) testthat_tolerance(),
  info = NULL,
  label = NULL,
  expected.label = NULL
)

expect_identical(
  object,
  expected,
  info = NULL,
  label = NULL,
  expected.label = NULL,
  ...
)
```
object,expected | Computation and value to compare it to. Both arguments support limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
... | 3e: passed on to waldo::compare(). 2e: passed on to all.equal() (expect_equal()) or identical() (expect_identical()). |
tolerance | 3e: passed on to waldo::compare(). 2e: passed on to all.equal(). |
info | Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label,expected.label | Used to customise failure messages. For expert use only. |
expect_setequal()/expect_mapequal() to test for set equality.
expect_reference() to test if two names point to the same memory address.
Other expectations: comparison-expectations, expect_error(), expect_length(), expect_match(), expect_named(), expect_null(), expect_output(), expect_reference(), expect_silent(), inheritance-expectations, logical-expectations
```r
a <- 10
expect_equal(a, 10)

# Use expect_equal() when testing for numeric equality
## Not run: expect_identical(sqrt(2) ^ 2, 2)
expect_equal(sqrt(2) ^ 2, 2)
```
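The tolerance distinction above can also be seen with a double/integer pair; this is a minimal sketch assuming the 3rd edition is active:

```r
library(testthat)
local_edition(3)

# expect_equal() applies testthat_tolerance(), so the double 1 and the
# integer 1L compare as equal.
expect_equal(1, 1L)

# expect_identical() uses no tolerance, so the type difference
# (double vs integer) is reported as a failure.
show_failure(expect_identical(1, 1L))
```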
These expectations are similar to expect_true(all(x == "x")), expect_true(all(x)) and expect_true(all(!x)) but give more informative failure messages if the expectations are not met.
```r
expect_all_equal(object, expected)
expect_all_true(object)
expect_all_false(object)
```
object,expected | Computation and value to compare it to. Both arguments support limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
```r
x1 <- c(1, 1, 1, 1, 1, 1)
expect_all_equal(x1, 1)

x2 <- c(1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2)
show_failure(expect_all_equal(x2, 1))

# expect_all_true() and expect_all_false() are helpers for common cases
set.seed(1016)
show_failure(expect_all_true(rpois(100, 10) < 20))
show_failure(expect_all_false(rpois(100, 10) > 20))
```
expect_error(), expect_warning(), expect_message(), and expect_condition() check that code throws an error, warning, message, or condition with a message that matches regexp, or a class that inherits from class. See below for more details.
In the 3rd edition, these functions match (at most) a single condition. All additional and non-matching (if regexp or class are used) conditions will bubble up outside the expectation. If these additional conditions are important you'll need to catch them with additional expect_message()/expect_warning() calls; if they're unimportant you can ignore them with suppressMessages()/suppressWarnings().
It can be tricky to test for a combination of different conditions, such as a message followed by an error. expect_snapshot() is often an easier alternative for these more complex cases.
```r
expect_error(
  object,
  regexp = NULL,
  class = NULL,
  ...,
  inherit = TRUE,
  info = NULL,
  label = NULL
)

expect_warning(
  object,
  regexp = NULL,
  class = NULL,
  ...,
  inherit = TRUE,
  all = FALSE,
  info = NULL,
  label = NULL
)

expect_message(
  object,
  regexp = NULL,
  class = NULL,
  ...,
  inherit = TRUE,
  all = FALSE,
  info = NULL,
  label = NULL
)

expect_condition(
  object,
  regexp = NULL,
  class = NULL,
  ...,
  inherit = TRUE,
  info = NULL,
  label = NULL
)
```
object | Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
regexp | Regular expression to test against. If NULL, the default, asserts that there is an error/warning/message/condition, but doesn't test for a specific message. If NA, asserts that there is no error/warning/message/condition. Note that you should only use regexp with simple, static messages; for anything more complex, prefer class or expect_snapshot(). |
class | Instead of supplying a regular expression, you can also supply a class name. This is useful for "classed" conditions. |
... | Additional arguments passed on to grepl() to control the matching (e.g. ignore.case, fixed). |
inherit | Whether to match conditions inherited from a parent condition. If FALSE, matches only the condition itself. |
info | Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label | Used to customise failure messages. For expert use only. |
all | DEPRECATED. If you need to test multiple warnings/messages you now need to use multiple calls to expect_warning()/expect_message(). |
If regexp = NA, the value of the first argument; otherwise the captured condition.
message vs class: When checking that code generates an error, it's important to check that the error is the one you expect. There are two ways to do this. The first way is the simplest: you just provide a regexp that matches some fragment of the error message. This is easy, but fragile, because the test will fail if the error message changes (even if it's the same error).
A more robust way is to test for the class of the error, if it has one. You can learn more about custom conditions at https://adv-r.hadley.nz/conditions.html#custom-conditions, but in short, errors are S3 classes and you can generate a custom class and check for it using class instead of regexp.
If you are using expect_error() to check that an error message is formatted in such a way that it makes sense to a human, we recommend using expect_snapshot() instead.
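As a sketch of the class-based approach (the validator and the condition class name here are hypothetical):

```r
library(testthat)

# Hypothetical validator that signals a classed error via rlang::abort().
check_positive <- function(x) {
  if (x <= 0) {
    rlang::abort("`x` must be positive.", class = "mypkg_error_not_positive")
  }
  x
}

# Fragile: tied to the exact wording of the message.
expect_error(check_positive(-1), "must be positive")

# Robust: tied to the condition class, which survives rewording.
expect_error(check_positive(-1), class = "mypkg_error_not_positive")
```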
expect_no_error(), expect_no_warning(), expect_no_message(), and expect_no_condition() to assert that code runs without errors/warnings/messages/conditions.
Other expectations: comparison-expectations, equality-expectations, expect_length(), expect_match(), expect_named(), expect_null(), expect_output(), expect_reference(), expect_silent(), inheritance-expectations, logical-expectations
```r
# Errors ------------------------------------------------------------------
f <- function() stop("My error!")
expect_error(f())
expect_error(f(), "My error!")

# You can use the arguments of grepl to control the matching
expect_error(f(), "my error!", ignore.case = TRUE)

# Note that `expect_error()` returns the error object so you can test
# its components if needed
err <- expect_error(rlang::abort("a", n = 10))
expect_equal(err$n, 10)

# Warnings ----------------------------------------------------------------
f <- function(x) {
  if (x < 0) {
    warning("*x* is already negative")
    return(x)
  }
  -x
}
expect_warning(f(-1))
expect_warning(f(-1), "already negative")
expect_warning(f(1), NA)

# To test message and output, store results to a variable
expect_warning(out <- f(-1), "already negative")
expect_equal(out, -1)

# Messages ----------------------------------------------------------------
f <- function(x) {
  if (x < 0) {
    message("*x* is already negative")
    return(x)
  }
  -x
}
expect_message(f(-1))
expect_message(f(-1), "already negative")
expect_message(f(1), NA)
```
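A sketch of the 3rd-edition behaviour where non-matching conditions bubble up and must be caught with nested calls (f() here is a made-up function that signals two warnings):

```r
library(testthat)

# Made-up function that signals two warnings.
f <- function() {
  warning("first problem")
  warning("second problem")
  invisible(TRUE)
}

# Each expect_warning() matches at most one warning, so the outer call
# catches the warning that bubbles out of the inner one.
expect_warning(expect_warning(f(), "first"), "second")

# If the extra warnings are unimportant, silence them instead.
suppressWarnings(f())
```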
Use this to test whether a function returns a visible or invisible output. Typically you'll use this to check that functions called primarily for their side-effects return their data argument invisibly.
```r
expect_invisible(call, label = NULL)
expect_visible(call, label = NULL)
```
call | A function call. |
label | Used to customise failure messages. For expert use only. |
The evaluated call, invisibly.
```r
expect_invisible(x <- 10)
expect_visible(x)

# Typically you'll assign the result of the expectation so you can
# also check that the value is as you expect.
greet <- function(name) {
  message("Hi ", name)
  invisible(name)
}
out <- expect_invisible(greet("Hadley"))
expect_equal(out, "Hadley")
```
expect_length() inspects the length() of an object; expect_shape() inspects the "shape" (i.e. nrow(), ncol(), or dim()) of higher-dimensional objects like data.frames, matrices, and arrays.
```r
expect_length(object, n)
expect_shape(object, ..., nrow, ncol, dim)
```
object | Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
n | Expected length. |
... | Not used; used to force naming of other arguments. |
nrow,ncol | Expected number of rows/columns. |
dim | Expected dim(). |
expect_vector() to make assertions about the "size" of a vector.
Other expectations: comparison-expectations, equality-expectations, expect_error(), expect_match(), expect_named(), expect_null(), expect_output(), expect_reference(), expect_silent(), inheritance-expectations, logical-expectations
```r
expect_length(1, 1)
expect_length(1:10, 10)
show_failure(expect_length(1:10, 1))

x <- matrix(1:9, nrow = 3)
expect_shape(x, nrow = 3)
show_failure(expect_shape(x, nrow = 4))
expect_shape(x, ncol = 3)
show_failure(expect_shape(x, ncol = 4))
expect_shape(x, dim = c(3, 3))
show_failure(expect_shape(x, dim = c(3, 4, 5)))
```
Do you expect a string to match this pattern?
```r
expect_match(
  object,
  regexp,
  perl = FALSE,
  fixed = FALSE,
  ...,
  all = TRUE,
  info = NULL,
  label = NULL
)

expect_no_match(
  object,
  regexp,
  perl = FALSE,
  fixed = FALSE,
  ...,
  all = TRUE,
  info = NULL,
  label = NULL
)
```
object | Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
regexp | Regular expression to test against. |
perl | logical. Should Perl-compatible regexps be used? |
fixed | If TRUE, regexp is matched as a fixed string, not as a regular expression. |
... | Arguments passed on to grepl() to control the details of matching. |
all | Should all elements of the actual value match regexp (TRUE), or does only one need to match (FALSE)? |
info | Extra information to be included in the message. This argumentis soft-deprecated and should not be used in new code. Instead seealternatives inquasi_label. |
label | Used to customise failure messages. For expert use only. |
expect_match() checks if a character vector matches a regular expression, powered by grepl().
expect_no_match() provides the complementary case, checking that a character vector does not match a regular expression.
expect_no_match(): Check that a string doesn't match a regular expression.
Other expectations: comparison-expectations, equality-expectations, expect_error(), expect_length(), expect_named(), expect_null(), expect_output(), expect_reference(), expect_silent(), inheritance-expectations, logical-expectations
```r
expect_match("Testing is fun", "fun")
expect_match("Testing is fun", "f.n")
expect_no_match("Testing is fun", "horrible")

show_failure(expect_match("Testing is fun", "horrible"))
show_failure(expect_match("Testing is fun", "horrible", fixed = TRUE))

# Zero-length inputs always fail
show_failure(expect_match(character(), "."))
```
You can either check for the presence of names (leaving expected blank), specific names (by supplying a vector of names), or absence of names (with NULL).
```r
expect_named(
  object,
  expected,
  ignore.order = FALSE,
  ignore.case = FALSE,
  info = NULL,
  label = NULL
)
```
object | Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
expected | Character vector of expected names. Leave missing to match any names. Use NULL to check for the absence of names. |
ignore.order | If TRUE, sorts names before comparing so that the order of names is ignored. |
ignore.case | If TRUE, lowercases all names before comparing so that case differences are ignored. |
info | Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label | Used to customise failure messages. For expert use only. |
Other expectations: comparison-expectations, equality-expectations, expect_error(), expect_length(), expect_match(), expect_null(), expect_output(), expect_reference(), expect_silent(), inheritance-expectations, logical-expectations
```r
x <- c(a = 1, b = 2, c = 3)
expect_named(x)
expect_named(x, c("a", "b", "c"))

# Use options to control sensitivity
expect_named(x, c("B", "C", "A"), ignore.order = TRUE, ignore.case = TRUE)

# Can also check for the absence of names with NULL
z <- 1:4
expect_named(z, NULL)
```
These expectations are the opposite of expect_error(), expect_warning(), expect_message(), and expect_condition(). They assert the absence of an error, warning, message, or condition.
```r
expect_no_error(object, ..., message = NULL, class = NULL)
expect_no_warning(object, ..., message = NULL, class = NULL)
expect_no_message(object, ..., message = NULL, class = NULL)
expect_no_condition(object, ..., message = NULL, class = NULL)
```
object | Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
... | These dots are for future extensions and must be empty. |
message,class | The default, NULL, fails if any condition of the given type is signalled, regardless of its message or class. In many cases, particularly when testing warnings and messages, you will want to be more specific about the condition you are hoping not to see, i.e. the condition that motivated you to write the test. Similar to regexp and class in expect_error() and friends, you can use message to match the condition's message and class to match its class. Note that you should only use message with simple, static messages. |
```r
expect_no_warning(1 + 1)

foo <- function(x) {
  warning("This is a problem!")
}

# warning doesn't match so bubbles up:
expect_no_warning(foo(), message = "bananas")

# warning does match so causes a failure:
try(expect_no_warning(foo(), message = "problem"))
```
NULL is a special case: because it is a singleton, it's possible to check for it either with expect_equal(x, NULL) or expect_type(x, "NULL").
```r
expect_null(object, info = NULL, label = NULL)
```
object | Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
info | Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label | Used to customise failure messages. For expert use only. |
Other expectations: comparison-expectations, equality-expectations, expect_error(), expect_length(), expect_match(), expect_named(), expect_output(), expect_reference(), expect_silent(), inheritance-expectations, logical-expectations
```r
x <- NULL
y <- 10

expect_null(x)
show_failure(expect_null(y))
```
Test for output produced by print() or cat(). This is best used for very simple output; for more complex cases use expect_snapshot().
```r
expect_output(
  object,
  regexp = NULL,
  ...,
  info = NULL,
  label = NULL,
  width = 80
)
```
object | Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
regexp | Regular expression to test against. If NULL, the default, asserts that there is some output, but doesn't check its value. If NA, asserts that there should be no output. |
... | Arguments passed on to grepl() to control the details of matching. |
info | Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label | Used to customise failure messages. For expert use only. |
width | Number of characters per line of output. This does not inherit from getOption("width"), so tests produce the same output regardless of the console width. |
The first argument, invisibly.
Other expectations: comparison-expectations, equality-expectations, expect_error(), expect_length(), expect_match(), expect_named(), expect_null(), expect_reference(), expect_silent(), inheritance-expectations, logical-expectations
```r
str(mtcars)
expect_output(str(mtcars), "32 obs")
expect_output(str(mtcars), "11 variables")

# You can use the arguments of grepl to control the matching
expect_output(str(mtcars), "11 VARIABLES", ignore.case = TRUE)
expect_output(str(mtcars), "$ mpg", fixed = TRUE)
```
expect_setequal(x, y) tests that every element of x occurs in y, and that every element of y occurs in x.
expect_contains(x, y) tests that x contains every element of y (i.e. y is a subset of x).
expect_in(x, y) tests that every element of x is in y (i.e. x is a subset of y).
expect_disjoint(x, y) tests that no element of x is in y (i.e. x is disjoint from y).
expect_mapequal(x, y) treats lists as if they are mappings between names and values. Concretely, it checks that x and y have the same names, then checks that x[names(y)] equals y.
```r
expect_setequal(object, expected)
expect_mapequal(object, expected)
expect_contains(object, expected)
expect_in(object, expected)
expect_disjoint(object, expected)
```
object,expected | Computation and value to compare it to. Both arguments support limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
Note that expect_setequal() ignores names, and you will be warned if both object and expected have them.
```r
expect_setequal(letters, rev(letters))
show_failure(expect_setequal(letters[-1], rev(letters)))

x <- list(b = 2, a = 1)
expect_mapequal(x, list(a = 1, b = 2))
show_failure(expect_mapequal(x, list(a = 1)))
show_failure(expect_mapequal(x, list(a = 1, b = "x")))
show_failure(expect_mapequal(x, list(a = 1, b = 2, c = 3)))
```
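The containment variants follow the same pattern; a minimal sketch:

```r
library(testthat)

x <- c("a", "b", "c")

# y is a subset of x
expect_contains(x, c("a", "c"))

# x is a subset of y
expect_in(x, letters)

# x and y share no elements
expect_disjoint(x, c("d", "e"))

# "d" is missing from x, so this fails
show_failure(expect_contains(x, c("a", "d")))
```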
Checks that the code produces no output, messages, or warnings.
```r
expect_silent(object)
```
object | Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
The first argument, invisibly.
Other expectations: comparison-expectations, equality-expectations, expect_error(), expect_length(), expect_match(), expect_named(), expect_null(), expect_output(), expect_reference(), inheritance-expectations, logical-expectations
```r
expect_silent("123")

f <- function() {
  message("Hi!")
  warning("Hey!!")
  print("OY!!!")
}
## Not run: expect_silent(f())
```
Snapshot tests (aka golden tests) are similar to unit tests except that the expected result is stored in a separate file that is managed by testthat. Snapshot tests are useful when the expected value is large, or when the intent of the code is something that can only be verified by a human (e.g. this is a useful error message). Learn more in vignette("snapshotting").
expect_snapshot() runs code as if you had executed it at the console, and records the results, including output, messages, warnings, and errors. If you just want to compare the result, try expect_snapshot_value().
```r
expect_snapshot(
  x,
  cran = FALSE,
  error = FALSE,
  transform = NULL,
  variant = NULL,
  cnd_class = FALSE
)
```
x | Code to evaluate. |
cran | Should these expectations be verified on CRAN? By default, they are not, because snapshot tests tend to be fragile, often relying on minor details of dependencies. |
error | Do you expect the code to throw an error? The expectation will fail (even on CRAN) if an unexpected error is thrown or the expected error is not thrown. |
transform | Optionally, a function to scrub sensitive or stochastic text from the output. Should take a character vector of lines as input and return a modified character vector as output. |
variant | If non-NULL, results are saved in _snaps/{variant}/{test}.md, so variant must be a single string suitable for use as a directory name.
You can use variants to deal with cases where the snapshot output varies and you want to capture and test the variations. Common use cases include variations for operating system, R version, or version of a key dependency. Variants are an advanced feature. When you use them, you'll need to carefully think about your testing strategy to ensure that all important variants are covered by automated tests, and ensure that you have a way to get snapshot changes out of your CI system and back into the repo.
Note that there's no way to declare all possible variants up front, which means that as soon as you start using variants, you are responsible for deleting snapshot variants that are no longer used. (testthat will still delete all variants if you delete the test.) |
cnd_class | Whether to include the class of messages, warnings, and errors in the snapshot. Only the most specific class is included, i.e. the first element of class(cnd). |
The first time that you run a snapshot expectation it will run x, capture the results, and record them in tests/testthat/_snaps/{test}.md. Each test file gets its own snapshot file, e.g. test-foo.R will get _snaps/foo.md.
It's important to review the Markdown files and commit them to git. They are designed to be human readable, and you should always review new additions to ensure that the salient information has been captured. They should also be carefully reviewed in pull requests, to make sure that snapshots have updated in the expected way.
On subsequent runs, the result of x will be compared to the value stored on disk. If it's different, the expectation will fail, and a new file _snaps/{test}.new.md will be created. If the change was deliberate, you can approve the change with snapshot_accept() and then the tests will pass the next time you run them.
Note that snapshotting can only work when executing a complete test file (with test_file(), test_dir(), or friends) because there's otherwise no way to figure out the snapshot path. If you run snapshot tests interactively, they'll just display the current value.
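A sketch of what such a test might look like in tests/testthat/test-greet.R (greet() is a hypothetical function; run interactively, the snapshot is only displayed, not recorded):

```r
library(testthat)

# Hypothetical function whose message we want to snapshot.
greet <- function(name) {
  message("Hi ", name, "!")
  invisible(name)
}

test_that("greet() produces a friendly message", {
  # The first run via test_file()/test_dir() records the message in
  # _snaps/greet.md; later runs fail if the message changes.
  expect_snapshot(greet("Hadley"))
})
```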
Whole file snapshot testing is designed for testing objects that don't have a convenient textual representation, with initial support for images (.png, .jpg, .svg), data frames (.csv), and text files (.R, .txt, .json, ...).
The first time expect_snapshot_file() is run, it will create _snaps/{test}/{name}.{ext} containing reference output. Future runs will be compared to this reference: if different, the test will fail and the new results will be saved in _snaps/{test}/{name}.new.{ext}. To review failures, call snapshot_review().
We generally expect this function to be used via a wrapper that takes care of ensuring that output is as reproducible as possible, e.g. automatically skipping tests where it's known that images can't be reproduced exactly.
```r
expect_snapshot_file(
  path,
  name = basename(path),
  binary = deprecated(),
  cran = FALSE,
  compare = NULL,
  transform = NULL,
  variant = NULL
)

announce_snapshot_file(path, name = basename(path))

compare_file_binary(old, new)
compare_file_text(old, new)
```
path | Path to file to snapshot. Optional for announce_snapshot_file(). |
name | Snapshot name, taken from path by default. |
binary | Deprecated: please use the compare argument instead. |
cran | Should these expectations be verified on CRAN? By default, they are not, because snapshot tests tend to be fragile, often relying on minor details of dependencies. |
compare | A function used to compare the snapshot files. It should take two inputs, the paths to the old and new snapshot, and return TRUE (identical) or FALSE (not identical). When NULL, the default, a comparison function is chosen based on the file extension: compare_file_text() for text files and compare_file_binary() otherwise. |
transform | Optionally, a function to scrub sensitive or stochastic text from the output. Should take a character vector of lines as input and return a modified character vector as output. |
variant | If non-NULL, results are saved in _snaps/{variant}/{test}/, so variant must be a single string suitable for use as a directory name.
Note that there's no way to declare all possible variants up front, which means that as soon as you start using variants, you are responsible for deleting snapshot variants that are no longer used. (testthat will still delete all variants if you delete the test.) |
old,new | Paths to old and new snapshot files. |
testthat automatically detects dangling snapshots that have been written to the _snaps directory but which no longer have corresponding R code to generate them. These dangling files are automatically deleted so they don't clutter the snapshot directory.
This can cause problems if your test is conditionally executed, either because of an if statement or a skip(). To avoid files being deleted in this case, you can call announce_snapshot_file() before the conditional code.
```r
test_that("can save a file", {
  if (!can_save()) {
    announce_snapshot_file(name = "data.txt")
    skip("Can't save file")
  }
  path <- withr::local_tempfile()
  expect_snapshot_file(save_file(path, mydata()), "data.txt")
})

# To use expect_snapshot_file() you'll typically need to start by writing
# a helper function that creates a file from your code, returning a path
save_png <- function(code, width = 400, height = 400) {
  path <- tempfile(fileext = ".png")
  png(path, width = width, height = height)
  on.exit(dev.off())
  code
  path
}
path <- save_png(plot(1:5))
path
## Not run: expect_snapshot_file(save_png(hist(mtcars$mpg)), "plot.png")

# You'd then also provide a helper that skips tests where you can't
# be sure of producing exactly the same output.
expect_snapshot_plot <- function(name, code) {
  # Announce the file before touching skips or running `code`. This way,
  # if the skips are active, testthat will not auto-delete the corresponding
  # snapshot file.
  name <- paste0(name, ".png")
  announce_snapshot_file(name = name)

  # Other packages might affect results
  skip_if_not_installed("ggplot2", "2.0.0")
  # Or maybe the output is different on some operating systems
  skip_on_os("windows")
  # You'll need to carefully think about and experiment with these skips

  path <- save_png(code)
  expect_snapshot_file(path, name)
}
```
Captures the result of a function, flexibly serializing it into a text representation that's stored in a snapshot file. See expect_snapshot() for more details on snapshot testing.
expect_snapshot_value(
  x,
  style = c("json", "json2", "deparse", "serialize"),
  cran = FALSE,
  tolerance = testthat_tolerance(),
  ...,
  variant = NULL
)
x | Code to evaluate. |
style | Serialization style to use:
|
cran | Should these expectations be verified on CRAN? By default, they are not, because snapshot tests tend to be fragile, often relying on minor details of dependencies. |
tolerance | Numerical tolerance: any differences smaller than this value will be ignored. The default tolerance is testthat_tolerance(). |
... | Passed on to |
variant | If non-NULL, snapshots are saved to a variant-specific directory. You can use variants to deal with cases where the snapshot output varies and you want to capture and test the variations. Common use cases include variations for operating system, R version, or version of a key dependency. Variants are an advanced feature. When you use them, you'll need to carefully think about your testing strategy to ensure that all important variants are covered by automated tests, and ensure that you have a way to get snapshot changes out of your CI system and back into the repo. Note that there's no way to declare all possible variants up front, which means that as soon as you start using variants, you are responsible for deleting snapshot variants that are no longer used. (testthat will still delete all variants if you delete the test.) |
expect_success() checks that there's exactly one success and no failures; expect_failure() checks that there's exactly one failure and no successes. expect_snapshot_failure() records the failure message so that you can manually check that it is informative.

Use show_failure() in examples to print the failure message without throwing an error.
expect_success(expr)
expect_failure(expr, message = NULL, ...)
expect_snapshot_failure(expr)
show_failure(expr)
expr | Code to evaluate |
message | Check that the failure message matches this regexp. |
... | Other arguments passed on to |
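A small sketch of how these fit together; expect_true() is used here purely as a convenient expectation to test against:

```r
# expect_success() passes because the inner expectation succeeds
expect_success(expect_true(TRUE))

# expect_failure() passes because the inner expectation fails;
# you can optionally check the failure message with a regexp
expect_failure(expect_true(FALSE))

# show_failure() prints the failure message without throwing an error,
# which makes it useful in examples
show_failure(expect_true(FALSE))
```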
expect_vector() is a thin wrapper around vctrs::vec_assert(), converting the results of that function into the expectations used by testthat. This means that it uses the vctrs notions of ptype (prototype) and size. See details in https://vctrs.r-lib.org/articles/type-size.html
expect_vector(object, ptype = NULL, size = NULL)
object | Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
ptype | (Optional) Vector prototype to test against. Should be asize-0 (empty) generalised vector. |
size | (Optional) Size to check for. |
expect_vector(1:10, ptype = integer(), size = 10)
show_failure(expect_vector(1:10, ptype = integer(), size = 5))
show_failure(expect_vector(1:10, ptype = character(), size = 5))
extract_test() creates a minimal reprex for a failed expectation. It extracts all non-test code before the failed expectation as well as all code inside the test up to and including the failed expectation.

This is particularly useful when you're debugging test failures in someone else's package.
extract_test(location, path = stdout(), package = Sys.getenv("TESTTHAT_PKG"))
location | A string giving the location in the form |
path | Path to write the reprex to. Defaults to |
package | If supplied, will be used to construct a test environmentfor the extracted code. |
This function is called for its side effect of rendering a reprex to path. This function will never error: if extraction fails, the error message will be written to path.
# If you see a test failure like this:
# -- Failure (test-extract.R:46:3): errors if can't find test -------------
# Expected FALSE to be TRUE.
# Differences:
# `actual`: FALSE
# `expected`: TRUE
# You can run this:
## Not run: extract_test("test-extract.R:46:3")
# to see just the code needed to reproduce the failure
These are the primitives that you can use to implement your own expectations. Every path through an expectation should either call pass(), fail(), or throw an error (e.g. if the arguments are invalid). Expectations should always return invisible(act$val).

Learn more about creating your own expectations in vignette("custom-expectation").
fail(
  message = "Failure has been forced",
  info = NULL,
  srcref = NULL,
  trace_env = caller_env(),
  trace = NULL
)
pass()
message | A character vector describing the failure. The first element should describe the expected value, and the second (and optionally subsequent) elements should describe what was actually seen. |
info | Character vector containing additional information. Included for backward compatibility only; new expectations should not use it. |
srcref | Location of the failure. Should only need to be explicitly supplied when you need to forward a srcref captured elsewhere. |
trace_env | If |
trace | An optional backtrace created by |
expect_length <- function(object, n) {
  act <- quasi_label(rlang::enquo(object), arg = "object")

  act_n <- length(act$val)
  if (act_n != n) {
    fail(sprintf("%s has length %i, not length %i.", act$lab, act_n, n))
  } else {
    pass()
  }

  invisible(act$val)
}
This reporter will simply throw an error if any of the tests failed. It is best combined with another reporter, such as the SummaryReporter.
Other reporters: CheckReporter, DebugReporter, JunitReporter, ListReporter, LocationReporter, MinimalReporter, MultiReporter, ProgressReporter, RStudioReporter, Reporter, SilentReporter, SlowReporter, StopReporter, SummaryReporter, TapReporter, TeamcityReporter
See https://adv-r.hadley.nz/oo.html for an overview of R's OO systems, and the vocabulary used here.
expect_type(x, type) checks that typeof(x) is type.

expect_s3_class(x, class) checks that x is an S3 object that inherits() from class.

expect_s3_class(x, NA) checks that x isn't an S3 object.

expect_s4_class(x, class) checks that x is an S4 object that is() class.

expect_s4_class(x, NA) checks that x isn't an S4 object.

expect_r6_class(x, class) checks that x is an R6 object that inherits from class.

expect_s7_class(x, Class) checks that x is an S7 object that S7::S7_inherits() from Class.

See expect_vector() for testing properties of objects created by vctrs.
expect_type(object, type)
expect_s3_class(object, class, exact = FALSE)
expect_s4_class(object, class)
expect_r6_class(object, class)
expect_s7_class(object, class)
object | Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
type | String giving base type (as returned by |
class | The required type varies depending on the function:
For historical reasons, |
exact | If |
Other expectations: comparison-expectations, equality-expectations, expect_error(), expect_length(), expect_match(), expect_named(), expect_null(), expect_output(), expect_reference(), expect_silent(), logical-expectations
x <- data.frame(x = 1:10, y = "x", stringsAsFactors = TRUE)
# A data frame is an S3 object with class data.frame
expect_s3_class(x, "data.frame")
show_failure(expect_s4_class(x, "data.frame"))
# A data frame is built from a list:
expect_type(x, "list")

f <- factor(c("a", "b", "c"))
o <- ordered(f)
# Using multiple class names tests if the object inherits from any of them
expect_s3_class(f, c("ordered", "factor"))
# Use exact = TRUE to test for exact match
show_failure(expect_s3_class(f, c("ordered", "factor"), exact = TRUE))
expect_s3_class(o, c("ordered", "factor"), exact = TRUE)

# An integer vector is an atomic vector of type "integer"
expect_type(x$x, "integer")
# It is not an S3 object
show_failure(expect_s3_class(x$x, "integer"))

# Above, we requested data.frame() converts strings to factors:
show_failure(expect_type(x$y, "character"))
expect_s3_class(x$y, "factor")
expect_type(x$y, "integer")
These functions help you determine if your code is running in a particular testing context:
is_testing() is TRUE inside a test.

is_snapshot() is TRUE inside a snapshot test.

is_checking() is TRUE inside of R CMD check (i.e. by test_check()).

is_parallel() is TRUE if the tests are run in parallel.

testing_package() gives the name of the package being tested.
A common use of these functions is to compute a default value for a quiet argument with is_testing() && !is_snapshot(). In this case, you'll want to avoid a run-time dependency on testthat, in which case you should just copy the implementation of these functions into a utils.R or similar.
is_testing()
is_parallel()
is_checking()
is_snapshot()
testing_package()
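If you do copy the implementations into your package, a minimal sketch might look like the following. The TESTTHAT environment variable is documented under local_test_context(); the snapshot-detection variable TESTTHAT_IS_SNAPSHOT is an assumption about testthat's current internals, so verify it against the testthat version you target. my_function() is hypothetical, included only to show the quiet-default pattern:

```r
# Copied helpers, assuming the environment variables current testthat sets
is_testing <- function() {
  identical(Sys.getenv("TESTTHAT"), "true")
}

is_snapshot <- function() {
  identical(Sys.getenv("TESTTHAT_IS_SNAPSHOT"), "true")
}

# Hypothetical function: quiet by default under testthat,
# but noisy inside snapshot tests so output can be captured
my_function <- function(quiet = is_testing() && !is_snapshot()) {
  if (!quiet) message("Working on it...")
  invisible(TRUE)
}
```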
This reporter includes detailed results about each test and summaries, written to a file (or stdout) in jUnit XML format. This can be read by the Jenkins Continuous Integration System to report on a dashboard etc. Requires the xml2 package.

To fit into the jUnit structure, context() becomes the <testsuite> name as well as the base of the <testcase> classname. The test_that() name becomes the rest of the <testcase> classname. The deparsed expect_that() call becomes the <testcase> name. On failure, the message goes into the <failure> node message argument (first line only) and into its text content (full message). Execution time and some other details are also recorded.

References for the jUnit XML format: https://github.com/testmoapp/junitxml
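For example, on a CI machine you might run a package's test suite with this reporter and write the XML to a file that Jenkins picks up. This is a sketch: the test directory and output file name are arbitrary choices, not requirements:

```r
library(testthat)

# Run all tests, writing jUnit XML results to a file
# (requires the xml2 package to be installed)
test_dir(
  "tests/testthat",
  reporter = JunitReporter$new(file = "junit-results.xml")
)
```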
Other reporters: CheckReporter, DebugReporter, FailReporter, ListReporter, LocationReporter, MinimalReporter, MultiReporter, ProgressReporter, RStudioReporter, Reporter, SilentReporter, SlowReporter, StopReporter, SummaryReporter, TapReporter, TeamcityReporter
This reporter gathers all results, adding additional information such as test elapsed time and test filename if available. Very useful for reporting.
Other reporters: CheckReporter, DebugReporter, FailReporter, JunitReporter, LocationReporter, MinimalReporter, MultiReporter, ProgressReporter, RStudioReporter, Reporter, SilentReporter, SlowReporter, StopReporter, SummaryReporter, TapReporter, TeamcityReporter
local_edition() allows you to temporarily (within a single test or a single test file) change the active edition of testthat. edition_get() allows you to retrieve the currently active edition.
local_edition(x, .env = parent.frame())
edition_get()
x | Edition. Should be a single integer. |
.env | Environment that controls scope of changes. For expert use only. |
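For example, a test that relies on second-edition behaviour can opt in locally, with the change reverted automatically at the end of the test:

```r
test_that("this test uses 2nd edition behaviour", {
  local_edition(2)
  # Within this test, the active edition is 2
  expect_equal(edition_get(), 2)
})
```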
with_mocked_bindings() and local_mocked_bindings() provide tools for "mocking": temporarily redefining a function so that it behaves differently during tests. This is helpful for testing functions that depend on external state (i.e. reading a value from a file or a website, or pretending a package is or isn't installed).

Learn more in vignette("mocking").
local_mocked_bindings(..., .package = NULL, .env = caller_env())
with_mocked_bindings(code, ..., .package = NULL)
... | Name-value pairs providing new values (typically functions) totemporarily replace the named bindings. |
.package | The name of the package where mocked functions should be inserted. Generally, you should not supply this as it will be automatically detected when whole package tests are run or when there's one package under active development (i.e. loaded with |
.env | Environment that defines effect scope. For expert use only. |
code | Code to execute with specified bindings. |
There are four places that the function you are trying to mock might come from:

Internal to your package.

Imported from an external package via the NAMESPACE.

The base environment.

Called from an external package with ::.

They are described in turn below.

(To mock S3 & S4 methods and R6 classes see local_mocked_s3_method(), local_mocked_s4_method(), and local_mocked_r6_class().)
You mock internal and imported functions the same way. For example, take this code:

some_function <- function() {
  another_function()
}

It doesn't matter whether another_function() is defined by your package or you've imported it from a dependency with @import or @importFrom; you mock it the same way:

local_mocked_bindings(
  another_function = function(...) "new_value"
)
To mock a function in the base package, you need to make sure that you have a binding for this function in your package. It's easiest to do this by binding the value to NULL. For example, if you wanted to mock interactive() in your package, you'd need to include this code somewhere in your package:
interactive <- NULL
Why is this necessary? with_mocked_bindings() and local_mocked_bindings() work by temporarily modifying the bindings within your package's namespace. When these tests are running inside of R CMD check, the namespace is locked, which means it's not possible to create new bindings, so you need to make sure that the binding exists already.
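Putting the two pieces together, here is a minimal sketch; ask_user() is a hypothetical function used only for illustration, and the test assumes it runs inside your package's test suite so that the mocked binding is auto-detected:

```r
# In your package: the NULL binding for the base function you want to
# mock, plus a hypothetical function that depends on it
interactive <- NULL

ask_user <- function() {
  if (interactive()) readline("Continue? ") else "yes"
}

# In your tests: mock interactive() to force the non-interactive path
test_that("ask_user() assumes yes when non-interactive", {
  local_mocked_bindings(interactive = function(...) FALSE)
  expect_equal(ask_user(), "yes")
})
```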
It's trickier to mock functions in other packages that you call with ::. For example, take this minor variation:

some_function <- function() {
  anotherpackage::another_function()
}

To mock this function, you'd need to modify another_function() inside the anotherpackage package. You can do this by supplying the .package argument to local_mocked_bindings(), but we don't recommend it because it will affect all calls to anotherpackage::another_function(), not just the calls originating in your package. Instead, it's safer to either import the function into your package, or make a wrapper that you can mock:

some_function <- function() {
  my_wrapper()
}
my_wrapper <- function(...) {
  anotherpackage::another_function(...)
}

local_mocked_bindings(
  my_wrapper = function(...) "new_value"
)

To mock a function that returns different values in sequence, for instance an API call whose status would be 502 then 200, or user input to readline(), you can use mock_output_sequence():

local_mocked_bindings(
  readline = mock_output_sequence("3", "This is a note", "n")
)

Other mocking: mock_output_sequence()
This function allows you to temporarily override an R6 class definition. It works by creating a subclass, then using local_mocked_bindings() to temporarily replace the original definition. This means that it will not affect subclasses of the original class; please file an issue if you need this.

Learn more about mocking in vignette("mocking").
local_mocked_r6_class(
  class,
  public = list(),
  private = list(),
  frame = caller_env()
)
class | An R6 class definition. |
public,private | A named list of public and private methods/data. |
frame | Calling frame which determines the scope of the mock.Only needed when wrapping in another local helper. |
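As a sketch of how this might be used inside a package's test suite: ApiClient and summarise_remote() are hypothetical names invented for this example, and the R6 package is assumed to be available. The mock stubs out a method that would otherwise need a network connection:

```r
# Hypothetical class defined in your package
ApiClient <- R6::R6Class("ApiClient", public = list(
  fetch = function(url) stop("needs a network connection")
))

# Hypothetical function under test that uses the class
summarise_remote <- function(client = ApiClient$new()) {
  length(client$fetch("https://example.com"))
}

test_that("summarise_remote() works without a network", {
  # Temporarily replace ApiClient with a subclass whose fetch()
  # returns canned data
  local_mocked_r6_class(
    ApiClient,
    public = list(fetch = function(url) list(a = 1, b = 2))
  )
  expect_equal(summarise_remote(), 2)
})
```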
These functions allow you to temporarily override S3 and S4 methods that already exist. They work by using registerS3method()/setMethod() to temporarily replace the original definition.

Learn more about mocking in vignette("mocking").
local_mocked_s3_method(generic, signature, definition, frame = caller_env())
local_mocked_s4_method(generic, signature, definition, frame = caller_env())
generic | A string giving the name of the generic. |
signature | A character vector giving the signature of the method. |
definition | A function providing the method definition. |
frame | Calling frame which determines the scope of the mock.Only needed when wrapping in another local helper. |
x <- as.POSIXlt(Sys.time())
local({
  local_mocked_s3_method("length", "POSIXlt", function(x) 42)
  length(x)
})
length(x)
local_test_context() is run automatically by test_that(), but you may want to run it yourself if you want to replicate test results interactively. If run inside a function, the effects are automatically reversed when the function exits; if running in the global environment, use withr::deferred_run() to undo.

local_reproducible_output() is run automatically by test_that() in the 3rd edition. You might want to call it to override the default settings inside a test, if you want to test Unicode, coloured output, or a non-standard width.
local_test_context(.env = parent.frame())

local_reproducible_output(
  width = 80,
  crayon = FALSE,
  unicode = FALSE,
  rstudio = FALSE,
  hyperlinks = FALSE,
  lang = "C",
  .env = parent.frame()
)
.env | Environment to use for scoping; expert use only. |
width | Value of the |
crayon | Determines whether or not crayon (now cli) colourshould be applied. |
unicode | Value of the |
rstudio | Should we pretend that we're inside of RStudio? |
hyperlinks | Should we use ANSI hyperlinks? |
lang | Optionally, supply a BCP47 language code to set the language used for translating error messages. This is a lower case two letter ISO 639 language code, optionally followed by "_" or "-" and an upper case two letter ISO 3166 region code. |
local_test_context() sets TESTTHAT = "true", which ensures that is_testing() returns TRUE and allows code to tell if it is run by testthat.

In the third edition, local_test_context() also calls local_reproducible_output(), which temporarily sets the following options:
cli.dynamic = FALSE so that tests assume that they are not run in a dynamic console (i.e. one where you can move the cursor around).

cli.unicode (default: FALSE) so that the cli package never generates unicode output (normally cli uses unicode on Linux/Mac but not Windows). Windows can't easily save unicode output to disk, so it must be set to false for consistency.

cli.condition_width = Inf so that new lines introduced while width-wrapping condition messages don't interfere with message matching.

crayon.enabled (default: FALSE) suppresses ANSI colours generated by the cli and crayon packages (normally colours are used if cli detects that you're in a terminal that supports colour).

cli.num_colors (default: 1L). Same as the crayon option.

lifecycle_verbosity = "warning" so that every lifecycle problem always generates a warning (otherwise deprecated functions don't generate a warning every time).

max.print = 99999 so the same number of values are printed.

OutDec = "." so numbers always use "." as the decimal point (European users sometimes set OutDec = ",").

rlang_interactive = FALSE so that rlang::is_interactive() returns FALSE, and code that uses it pretends you're in a non-interactive environment.

useFancyQuotes = FALSE so base R functions always use regular (straight) quotes (otherwise the default is locale dependent; see sQuote() for details).

width (default: 80) to control the width of printed output (usually this varies with the size of your console).
And modifies the following env vars:
Unsets RSTUDIO, which ensures that RStudio is never detected as running.

Sets LANGUAGE = "en", which ensures that no message translation occurs.

Finally, it sets the collation locale to "C", which ensures that character sorting is the same regardless of system locale.
local({
  local_test_context()
  cat(cli::col_blue("Text will not be colored"))
  cat(cli::symbol$ellipsis)
  cat("\n")
})

test_that("test ellipsis", {
  local_reproducible_output(unicode = FALSE)
  expect_equal(cli::symbol$ellipsis, "...")

  local_reproducible_output(unicode = TRUE)
  expect_equal(cli::symbol$ellipsis, "\u2026")
})
This reporter simply prints the location of every expectation and error. This is useful if you're trying to figure out the source of a segfault, or you want to figure out which code triggers a C/C++ breakpoint.
Other reporters: CheckReporter, DebugReporter, FailReporter, JunitReporter, ListReporter, MinimalReporter, MultiReporter, ProgressReporter, RStudioReporter, Reporter, SilentReporter, SlowReporter, StopReporter, SummaryReporter, TapReporter, TeamcityReporter
TRUE or FALSE? These are fall-back expectations that you can use when none of the other, more specific expectations apply. The disadvantage is that you may get a less informative error message.
Attributes are ignored.
expect_true(object, info = NULL, label = NULL)
expect_false(object, info = NULL, label = NULL)
object | Object to test. Supports limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details. |
info | Extra information to be included in the message. This argument is soft-deprecated and should not be used in new code. Instead see alternatives in quasi_label. |
label | Used to customise failure messages. For expert use only. |
Other expectations: comparison-expectations, equality-expectations, expect_error(), expect_length(), expect_match(), expect_named(), expect_null(), expect_output(), expect_reference(), expect_silent(), inheritance-expectations
expect_true(2 == 2)

# Failed expectations will throw an error
show_failure(expect_true(2 != 2))

# where possible, use more specific expectations, to get more informative
# error messages
a <- 1:4
show_failure(expect_true(length(a) == 3))
show_failure(expect_equal(length(a), 3))

x <- c(TRUE, TRUE, FALSE, TRUE)
show_failure(expect_true(all(x)))
show_failure(expect_all_true(x))
The minimal test reporter provides the absolute minimum amount of information: whether each expectation has succeeded, failed, or errored. If you want to find out what the failures and errors actually were, you'll need to run a more informative test reporter.
Other reporters: CheckReporter, DebugReporter, FailReporter, JunitReporter, ListReporter, LocationReporter, MultiReporter, ProgressReporter, RStudioReporter, Reporter, SilentReporter, SlowReporter, StopReporter, SummaryReporter, TapReporter, TeamcityReporter
Specify multiple return values for mocking
mock_output_sequence(..., recycle = FALSE)mock_output_sequence(..., recycle=FALSE)
... | < |
recycle | whether to recycle. If |
A function that you can use within local_mocked_bindings() and with_mocked_bindings().

Other mocking: local_mocked_bindings()
# inside local_mocked_bindings()
## Not run: 
local_mocked_bindings(readline = mock_output_sequence("3", "This is a note", "n"))
## End(Not run)

# for understanding
mocked_sequence <- mock_output_sequence("3", "This is a note", "n")
mocked_sequence()
mocked_sequence()
mocked_sequence()
try(mocked_sequence())

recycled_mocked_sequence <- mock_output_sequence(
  "3", "This is a note", "n",
  recycle = TRUE
)
recycled_mocked_sequence()
recycled_mocked_sequence()
recycled_mocked_sequence()
recycled_mocked_sequence()
This reporter is useful when you want to use several reporters at the same time, e.g. adding a custom reporter without removing the current one.
Other reporters: CheckReporter, DebugReporter, FailReporter, JunitReporter, ListReporter, LocationReporter, MinimalReporter, ProgressReporter, RStudioReporter, Reporter, SilentReporter, SlowReporter, StopReporter, SummaryReporter, TapReporter, TeamcityReporter
ProgressReporter is designed for interactive use. Its goal is to give you actionable insights to help you understand the status of your code. This reporter also praises you from time to time if all your tests pass. It's the default reporter for test_dir().

ParallelProgressReporter is very similar to ProgressReporter, but works better for packages that want parallel tests.

CompactProgressReporter is a minimal version of ProgressReporter designed for use with single files. It's the default reporter for test_file().
Other reporters: CheckReporter, DebugReporter, FailReporter, JunitReporter, ListReporter, LocationReporter, MinimalReporter, MultiReporter, RStudioReporter, Reporter, SilentReporter, SlowReporter, StopReporter, SummaryReporter, TapReporter, TeamcityReporter
This reporter is designed for output to RStudio. It produces results in an easily parsed form.
Other reporters: CheckReporter, DebugReporter, FailReporter, JunitReporter, ListReporter, LocationReporter, MinimalReporter, MultiReporter, ProgressReporter, Reporter, SilentReporter, SlowReporter, StopReporter, SummaryReporter, TapReporter, TeamcityReporter
One of the most pernicious challenges to debug is when a test runs fine in your test suite, but fails when you run it interactively (or similarly, it fails randomly when running your tests in parallel). One of the most common causes of this problem is accidentally changing global state in a previous test (e.g. changing an option, an environment variable, or the working directory). This is hard to debug, because it's very hard to figure out which test made the change.

Luckily testthat provides a tool to figure out if tests are changing global state. You can register a state inspector with set_state_inspector() and testthat will run it before and after each test, store the results, then report if there are any differences. For example, if you wanted to see if any of your tests were changing options or environment variables, you could put this code in tests/testthat/helper-state.R:
set_state_inspector(function() {
  list(
    options = options(),
    envvars = Sys.getenv()
  )
})

(You might discover other packages outside your control are changing the global state, in which case you might want to modify this function to ignore those values.)
Other problems that can be troublesome to resolve are CRAN check notes that report things like connections being left open. You can easily debug that problem with:

set_state_inspector(function() {
  getAllConnections()
})

set_state_inspector(callback, tolerance = testthat_tolerance())
callback | Either a zero-argument function that returns an object capturing global state that you're interested in, or |
tolerance | If non- It uses the same algorithm as |
This reporter quietly runs all tests, simply gathering all expectations. This is helpful for programmatically inspecting errors after a test run. You can retrieve the results with $expectations().
Other reporters: CheckReporter, DebugReporter, FailReporter, JunitReporter, ListReporter, LocationReporter, MinimalReporter, MultiReporter, ProgressReporter, RStudioReporter, Reporter, SlowReporter, StopReporter, SummaryReporter, TapReporter, TeamcityReporter
skip_if() and skip_if_not() allow you to skip tests, immediately concluding a test_that() block without executing any further expectations. This allows you to skip a test without failure, if for some reason it can't be run (e.g. it depends on the feature of a specific operating system, or it requires a specific version of a package).

See vignette("skipping") for more details.
skip(message = "Skipping")
skip_if_not(condition, message = NULL)
skip_if(condition, message = NULL)
skip_if_not_installed(pkg, minimum_version = NULL)
skip_unless_r(spec)
skip_if_offline(host = "captive.apple.com")
skip_on_cran()
local_on_cran(on_cran = TRUE, frame = caller_env())
skip_on_os(os, arch = NULL)
skip_on_ci()
skip_on_covr()
skip_on_bioc()
skip_if_translated(msgid = "'%s' not found")
message | A message describing why the test was skipped. |
condition | Boolean condition to check. |
pkg | Name of package to check for |
minimum_version | Minimum required version for the package |
spec | A version specification like '>= 4.1.0' denoting that this testshould only be run on R versions 4.1.0 and later. |
host | A string with a hostname to lookup |
on_cran | Pretend we're on CRAN ( |
frame | Calling frame to tie change to; expert use only. |
os | Character vector of one or more operating systems to skip on.Supported values are |
arch | Character vector of one or more architectures to skip on.Common values include |
msgid | R message identifier used to check for translation: the defaultuses a message included in most translation packs. See the complete list in |
skip_if_not_installed("pkg") skips tests if package "pkg" is notinstalled or cannot be loaded (usingrequireNamespace()). Generally,you can assume that suggested packages are installed, and you do notneed to check for them specifically, unless they are particularlydifficult to install.
skip_if_offline() skips if an internet connection is not available(usingcurl::nslookup()) or if the test is run on CRAN. Requires{curl} to be installed and included in the dependencies of your package.
skip_if_translated("msg") skips tests if the "msg" is translated.
skip_on_bioc() skips on Bioconductor (using the IS_BIOC_BUILD_MACHINE env var).
skip_on_cran() skips on CRAN (using the NOT_CRAN env var set by devtools and friends). local_on_cran() gives you the ability to easily simulate what will happen on CRAN.
skip_on_covr() skips when covr is running (using the R_COVR env var).
skip_on_ci() skips on continuous integration systems like GitHub Actions, Travis, and AppVeyor (using the CI env var).
skip_on_os() skips on the specified operating system(s) ("windows", "mac", "linux", or "solaris").
if (FALSE) skip("Some Important Requirement is not available")

test_that("skip example", {
  expect_equal(1, 1L)  # this expectation runs
  skip("skip")
  expect_equal(1, 2)   # this one is skipped
  expect_equal(1, 3)   # this one is also skipped
})
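As a sketch of how several skip helpers compose (the package name and version below are hypothetical), conditions are usually stacked at the top of a test so the first one that applies short-circuits the rest:

```r
library(testthat)

test_that("fetches remote data", {
  skip_on_cran()                # never touch the network on CRAN
  skip_if_offline()             # needs {curl}; skips without a connection
  skip_if_not_installed("jsonlite", minimum_version = "1.8.0")

  # network-dependent expectations would follow here
  expect_true(TRUE)
})
```

Once any skip fires, the remaining expectations in that test are not evaluated, and the reporter records the test as skipped rather than failed.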
SlowReporter is designed to identify slow tests. It reports the execution time for each test, ignoring tests that run faster than a configurable threshold, and is useful for finding tests that may benefit from optimization or parallelization.
The easiest way to run it over your package is with devtools::test(reporter = "slow").
Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,StopReporter,SummaryReporter,TapReporter,TeamcityReporter
snapshot_accept() accepts all modified snapshots.
snapshot_reject() rejects all modified snapshots by deleting the .new variants.
snapshot_review() opens a Shiny app that shows a visual diff of each modified snapshot. This is particularly useful for whole file snapshots created by expect_snapshot_file().
snapshot_accept(files = NULL, path = "tests/testthat")
snapshot_reject(files = NULL, path = "tests/testthat")
snapshot_review(files = NULL, path = "tests/testthat", ...)
files | Optionally, filter effects to snapshots from specified files.This can be a snapshot name (e.g. |
path | Path to tests. |
... | Additional arguments passed on to |
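A typical review-then-accept workflow might look like this (the snapshot name "mytest" is hypothetical):

```r
library(testthat)

# After a test run reports modified snapshots:
snapshot_review("mytest")   # inspect each diff in the Shiny app
snapshot_accept("mytest")   # keep the new output for that file
# or discard all pending changes at once:
snapshot_reject()
```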
If your snapshots fail on GitHub, it can be a pain to figure out exactly why, or to incorporate them into your local package. This function makes it easy, only requiring you to interactively select which job you want to take the artifacts from.
Note that you should not generally need to use this function manually;instead copy and paste from the hint emitted on GitHub.
snapshot_download_gh(repository, run_id, dest_dir = ".")
repository | Repository owner/name, e.g. |
run_id | Run ID, e.g. |
dest_dir | Directory to download to. Defaults to the current directory. |
The default reporter used when expect_that() is run interactively. It responds by displaying a summary of the number of successes and failures, and stop()ping if there are any failures.
Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,SummaryReporter,TapReporter,TeamcityReporter
This is designed for interactive usage: it lets you know which tests have run successfully, as well as fully reporting information about failures and errors.
You can use the max_reports field to control the maximum number of detailed reports produced by this reporter.
As an additional benefit, this reporter will praise you from time to time if all your tests pass.
Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,TapReporter,TeamcityReporter
This reporter will output results in the Test Anything Protocol (TAP), a simple text-based interface between testing modules in a test harness. For more information about TAP, see http://testanything.org
Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TeamcityReporter
This reporter will output results in the Teamcity message format. For more information about Teamcity messages, see http://confluence.jetbrains.com/display/TCD7/Build+Script+Interaction+with+TeamCity
Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TapReporter
This environment has no purpose other than as a handle for withr::defer(): use it when you want to run code after all tests have been run. Typically, you'll use withr::defer(cleanup(), teardown_env()) immediately after you've made a mess in a setup-*.R file.
teardown_env()
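A minimal sketch, assuming a setup file that creates a scratch directory (the file name, path, and cleanup code are illustrative):

```r
# in tests/testthat/setup-scratch.R
scratch <- file.path(tempdir(), "test-scratch")
dir.create(scratch, showWarnings = FALSE)

# schedule cleanup to run once, after *all* tests have finished
withr::defer(unlink(scratch, recursive = TRUE), testthat::teardown_env())
```

Because the deferred expression is tied to teardown_env() rather than the setup file's own environment, it runs at the very end of the test run instead of as soon as the setup file finishes sourcing.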
This function is the low-level workhorse that powers test_local() and test_package(). Generally, you should not call this function directly. In particular, you are responsible for ensuring that the functions to test are available in the test env (e.g. via load_package).
See vignette("special-files") to learn more about the conventions for test, helper, and setup files that testthat uses, and what you might use each for.
test_dir(
  path,
  filter = NULL,
  reporter = NULL,
  env = NULL,
  ...,
  load_helpers = TRUE,
  stop_on_failure = TRUE,
  stop_on_warning = FALSE,
  package = NULL,
  load_package = c("none", "installed", "source"),
  shuffle = FALSE
)
path | Path to directory containing tests. |
filter | If not |
reporter | Reporter to use to summarise output. Can be supplied as a string (e.g. "summary") or as an R6 object. See Reporter for more details and a list of built-in reporters. |
env | Environment in which to execute the tests. Expert use only. |
... | Additional arguments passed to |
load_helpers | Source helper files before running the tests? |
stop_on_failure | If |
stop_on_warning | If |
package | If these tests belong to a package, the name of the package. |
load_package | Strategy to use for loading package code:
|
shuffle | If |
A list (invisibly) containing data about the test results.
Each test is run in a clean environment to keep tests as isolated as possible. For package tests, that environment inherits from the package's namespace environment, so that tests can access internal functions and objects.
Helper, setup, and teardown files located in the same directory as the test will also be run. See vignette("special-files") for details.
test_file(
  path,
  reporter = default_compact_reporter(),
  desc = NULL,
  package = NULL,
  shuffle = FALSE,
  ...
)
path | Path to file. |
reporter | Reporter to use to summarise output. Can be supplied as a string (e.g. "summary") or as an R6 object. See Reporter for more details and a list of built-in reporters. |
desc | Optionally, supply a string here to run only a single test ( |
package | If these tests belong to a package, the name of the package. |
shuffle | If |
... | Additional parameters passed on to |
A list (invisibly) containing data about the test results.
Each test is run in a clean environment to keep tests as isolated as possible. For package tests, that environment inherits from the package's namespace environment, so that tests can access internal functions and objects.
path <- testthat_example("success")
test_file(path)
test_file(path, desc = "some tests have warnings")
test_file(path, reporter = "minimal")
test_local() tests a local source package.
test_package() tests an installed package.
test_check() checks a package during R CMD check.
See vignette("special-files") to learn about the various files that testthat works with.
test_package(package, reporter = check_reporter(), ...)
test_check(package, reporter = check_reporter(), ...)
test_local(
  path = ".",
  reporter = NULL,
  ...,
  load_package = "source",
  shuffle = FALSE
)
package | If these tests belong to a package, the name of the package. |
reporter | Reporter to use to summarise output. Can be supplied as a string (e.g. "summary") or as an R6 object. See Reporter for more details and a list of built-in reporters. |
... | Additional arguments passed to |
path | Path to directory containing tests. |
load_package | Strategy to use for loading package code:
|
shuffle | If |
A list (invisibly) containing data about the test results.
R CMD check
To run testthat automatically from R CMD check, make sure you have a tests/testthat.R that contains:
library(testthat)
library(yourpackage)
test_check("yourpackage")

Each test is run in a clean environment to keep tests as isolated as possible. For package tests, that environment inherits from the package's namespace environment, so that tests can access internal functions and objects.
Many tests require some external file (e.g. a .csv if you're testing a data import function) but the working directory varies depending on the way that you're running the test (e.g. interactively, with devtools::test(), or with R CMD check). test_path() understands these variations and automatically generates a path relative to tests/testthat, regardless of where that directory might reside relative to the current working directory.
test_path(...)
... | Character vectors giving path components. |
A character vector giving the path.
## Not run: 
test_path("foo.csv")
test_path("data", "foo.csv")
## End(Not run)
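Inside a test this typically looks like the following (the data file is hypothetical):

```r
library(testthat)

test_that("imports the sample csv", {
  # resolves to tests/testthat/data/sample.csv however the tests are run
  df <- read.csv(test_path("data", "sample.csv"))
  expect_s3_class(df, "data.frame")
})
```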
A test encapsulates a series of expectations about a small, self-contained unit of functionality. Each test contains one or more expectations, such as expect_equal() or expect_error(), and lives in a tests/testthat/test* file, often together with other tests that relate to the same function or set of functions.
Each test has its own execution environment, so an object created in a test also dies with the test. Note that this cleanup does not happen automatically for other aspects of global state, such as session options or filesystem changes. Avoid changing global state when possible, and reverse any changes that you do make.
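One common pattern for keeping global state tidy, sketched here with withr (an assumption; testthat does not require it), is to scope the change to the test so it is reverted automatically:

```r
library(testthat)

test_that("formatting respects the digits option", {
  withr::local_options(digits = 3)  # restored when the test finishes
  expect_equal(format(pi), "3.14")
})
```

After this test runs, the session's digits option is back to whatever it was before, so later tests are unaffected.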
test_that(desc, code)
desc | Test name. Names should be brief, but evocative. It's common towrite the description so that it reads like a natural sentence, e.g. |
code | Test code containing expectations. Braces ( |
When run interactively, returns invisible(TRUE) if all tests pass, otherwise throws an error.
test_that("trigonometric functions match identities", {
  expect_equal(sin(pi / 4), 1 / sqrt(2))
  expect_equal(cos(pi / 4), 1 / sqrt(2))
  expect_equal(tan(pi / 4), 1)
})

## Not run: 
test_that("trigonometric functions match identities", {
  expect_equal(sin(pi / 4), 1)
})
## End(Not run)
If you have a flaky test, you can use try_again() to run it a few times until it succeeds. In most cases, you are better off fixing the underlying cause of the flakiness, but sometimes that's not possible.
try_again(times, code)
times | Number of times to retry. |
code | Code to evaluate. |
usually_return_1 <- function() {
  if (runif(1) < 0.1) 0 else 1
}

## Not run: 
# 10% chance of failure:
expect_equal(usually_return_1(), 1)
# 1% chance of failure:
try_again(1, expect_equal(usually_return_1(), 1))
# 0.1% chance of failure:
try_again(2, expect_equal(usually_return_1(), 1))
## End(Not run)
Add the necessary infrastructure to enable C++ unit testing in R packages with Catch and testthat.
use_catch(dir = getwd())
dir | The directory containing an R package. |
Calling use_catch() will:
Create a file src/test-runner.cpp, which ensures that the testthat package will understand how to run your package's unit tests,
Create an example test file src/test-example.cpp, which showcases how you might use Catch to write a unit test,
Add a test file tests/testthat/test-cpp.R, which ensures that testthat will run your compiled tests during invocations of devtools::test() or R CMD check, and
Create a file R/catch-routine-registration.R, which ensures that R will automatically register this routine when tools::package_native_routine_registration_skeleton() is invoked.
You will also need to:
Add xml2 to Suggests, with e.g. usethis::use_package("xml2", "Suggests")
Add testthat to LinkingTo, with e.g. usethis::use_package("testthat", "LinkingTo")
C++ unit tests can be added to C++ source files within the src directory of your package, with a format similar to R code tested with testthat. Here's a simple example of a unit test written with testthat + Catch:
context("C++ Unit Test") {
  test_that("two plus two is four") {
    int result = 2 + 2;
    expect_true(result == 4);
  }
}

When your package is compiled, unit tests alongside a harness for running these tests will be compiled into your R package, with the C entry point run_testthat_tests(). testthat will use that entry point to run your unit tests when detected.
All of the functions provided by Catch are available with the CATCH_ prefix; see the Catch documentation for a full list. testthat provides the following wrappers, to conform with testthat's R interface:
| Function | Catch | Description |
context | CATCH_TEST_CASE | The context of a set of tests. |
test_that | CATCH_SECTION | A test section. |
expect_true | CATCH_CHECK | Test that an expression evaluates to TRUE. |
expect_false | CATCH_CHECK_FALSE | Test that an expression evaluates to FALSE. |
expect_error | CATCH_CHECK_THROWS | Test that evaluation of an expression throws an exception. |
expect_error_as | CATCH_CHECK_THROWS_AS | Test that evaluation of an expression throws an exception of a specific class. |
In general, you should prefer using the testthat wrappers, as testthat also does some work to ensure that any unit tests within will not be compiled or run when using the Solaris Studio compilers (as these are currently unsupported by Catch). This should make it easier to submit packages to CRAN that use Catch.
If you've opted to disable dynamic symbol lookup in your package, then you'll need to explicitly export a symbol in your package that testthat can use to run your unit tests. testthat will look for a routine with one of the names:
C_run_testthat_tests c_run_testthat_tests run_testthat_tests
Assuming you have useDynLib(<pkg>, .registration = TRUE) in your package's NAMESPACE file, this implies having routine registration code of the form:
// The definition for this function comes from the file 'src/test-runner.cpp',
// which is generated via `testthat::use_catch()`.
extern SEXP run_testthat_tests();

static const R_CallMethodDef callMethods[] = {
  // other .Call method definitions,
  {"run_testthat_tests", (DL_FUNC) &run_testthat_tests, 0},
  {NULL, NULL, 0}
};

void R_init_<pkg>(DllInfo* dllInfo) {
  R_registerRoutines(dllInfo, NULL, callMethods, NULL, NULL);
  R_useDynamicSymbols(dllInfo, FALSE);
}

replacing <pkg> above with the name of your package, as appropriate.
See Controlling Visibility and Registering Symbols in the Writing R Extensions manual for more information.
If you'd like to write your own Catch test runner, you can instead use the testthat::catchSession() object in a file with the form:
#define TESTTHAT_TEST_RUNNER
#include <testthat.h>

void run() {
  Catch::Session& session = testthat::catchSession();
  // interact with the session object as desired
}

This can be useful if you'd like to run your unit tests with custom arguments passed to the Catch session.
If you'd like to use the C++ unit testing facilities provided by Catch, but would prefer not to use the regular testthat R testing infrastructure, you can manually run the unit tests by inserting a call to:
.Call("run_testthat_tests", PACKAGE = <pkgName>)

as necessary within your unit test suite.
Catch, the library used to enable C++ unit testing.