Title:Unit Testing for R
Version:3.3.1
Description:Software testing is important, but, in part because it is frustrating and boring, many of us avoid it. 'testthat' is a testing framework for R that is easy to learn and use, and integrates with your existing 'workflow'.
License:MIT + file LICENSE
URL:https://testthat.r-lib.org,https://github.com/r-lib/testthat
BugReports:https://github.com/r-lib/testthat/issues
Depends:R (≥ 4.1.0)
Imports:brio (≥ 1.1.5), callr (≥ 3.7.6), cli (≥ 3.6.5), desc (≥1.4.3), evaluate (≥ 1.0.4), jsonlite (≥ 2.0.0), lifecycle (≥1.0.4), magrittr (≥ 2.0.3), methods, pkgload (≥ 1.4.0),praise (≥ 1.0.0), processx (≥ 3.8.6), ps (≥ 1.9.1), R6 (≥2.6.1), rlang (≥ 1.1.6), utils, waldo (≥ 0.6.2), withr (≥3.0.2)
Suggests:covr, curl (≥ 0.9.5), diffviewer (≥ 0.1.0), digest (≥0.6.33), gh, knitr, rmarkdown, rstudioapi, S7, shiny, usethis,vctrs (≥ 0.1.0), xml2
VignetteBuilder:knitr
Config/Needs/website:tidyverse/tidytemplate
Config/testthat/edition:3
Config/testthat/parallel:true
Config/testthat/start-first:watcher, parallel*
Encoding:UTF-8
RoxygenNote:7.3.3
NeedsCompilation:yes
Packaged:2025-11-24 10:28:48 UTC; hadleywickham
Author:Hadley Wickham [aut, cre], Posit Software, PBC [cph, fnd], R Core team [ctb] (Implementation of utils::recover())
Maintainer:Hadley Wickham <hadley@posit.co>
Repository:CRAN
Date/Publication:2025-11-25 15:30:02 UTC

An R package to make testing fun!

Description

Try the example below. Have a look at the references and learn more from function documentation such as test_that().

Options

Author(s)

Maintainer: Hadley Wickham <hadley@posit.co>

Other contributors:

See Also

Useful links:


Report results for R CMD check

Description

R CMD check displays only the last 13 lines of the result, so this report is designed to ensure that you see something useful there.

See Also

Other reporters:DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TapReporter,TeamcityReporter


Interactively debug failing tests

Description

This reporter will call a modified version of recover() on all broken expectations.

See Also

Other reporters:CheckReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TapReporter,TeamcityReporter


Fail if any tests fail

Description

This reporter will simply throw an error if any of the tests failed. It is best combined with another reporter, such as the SummaryReporter.

See Also

Other reporters:CheckReporter,DebugReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TapReporter,TeamcityReporter


Report results in jUnit XML format

Description

This reporter includes detailed results about each test and summaries, written to a file (or stdout) in jUnit XML format. This can be read by the Jenkins Continuous Integration System to report on a dashboard etc. Requires the xml2 package.

To fit into the jUnit structure, context() becomes the <testsuite> name as well as the base of the <testcase> classname. The test_that() name becomes the rest of the <testcase> classname. The deparsed expect_that() call becomes the <testcase> name. On failure, the message goes into the <failure> node message argument (first line only) and into its text content (full message). Execution time and some other details are also recorded.

References for the jUnit XML format: https://github.com/testmoapp/junitxml
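A minimal sketch of writing the jUnit report to a file a CI system can pick up. It assumes the xml2 package is installed and that the reporter accepts the standard Reporter file argument; the file name is illustrative only.

library(testthat)
# Run a test directory and write jUnit-formatted results to a file
# (file name is hypothetical).
test_dir("tests/testthat", reporter = JunitReporter$new(file = "junit-results.xml"))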

See Also

Other reporters:CheckReporter,DebugReporter,FailReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TapReporter,TeamcityReporter


Capture test results and metadata

Description

This reporter gathers all results, adding additional information such as test elapsed time, and test filename if available. Very useful for reporting.

See Also

Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TapReporter,TeamcityReporter


Test reporter: location

Description

This reporter simply prints the location of every expectation and error. This is useful if you're trying to figure out the source of a segfault, or you want to figure out which code triggers a C/C++ breakpoint.

See Also

Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TapReporter,TeamcityReporter


Report minimal results as compactly as possible

Description

The minimal test reporter provides the absolute minimum amount of information: whether each expectation has succeeded, failed or experienced an error. If you want to find out what the failures and errors actually were, you'll need to run a more informative test reporter.

See Also

Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TapReporter,TeamcityReporter


Run multiple reporters at the same time

Description

This reporter is useful to use several reporters at the same time, e.g. adding a custom reporter without removing the current one.
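A minimal sketch of combining reporters; it assumes that MultiReporter$new() accepts a reporters list of reporter objects, which is not shown on this page.

library(testthat)
# Keep the interactive progress display while also collecting results
# programmatically with a ListReporter (constructor arguments assumed).
both <- MultiReporter$new(reporters = list(ProgressReporter$new(), ListReporter$new()))
test_dir("tests/testthat", reporter = both)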

See Also

Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TapReporter,TeamcityReporter


Report progress interactively

Description

ProgressReporter is designed for interactive use. Its goal is to give you actionable insights to help you understand the status of your code. This reporter also praises you from time-to-time if all your tests pass. It's the default reporter for test_dir().

ParallelProgressReporter is very similar to ProgressReporter, but works better for packages that want parallel tests.

CompactProgressReporter is a minimal version of ProgressReporter designed for use with single files. It's the default reporter for test_file().

See Also

Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TapReporter,TeamcityReporter


Report results to RStudio

Description

This reporter is designed for output to RStudio. It produces results in an easily parsed form.

See Also

Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,Reporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TapReporter,TeamcityReporter


Manage test reporting

Description

The job of a reporter is to aggregate the results from files, tests, and expectations and display them in an informative way. Every testthat function that runs multiple tests provides a reporter argument which you can use to override the default (which is selected by default_reporter()).

Details

You only need to use this Reporter object directly if you are creating a new reporter. Currently, creating new Reporters is undocumented, so if you want to create your own, you'll need to make sure that you're familiar with R6 and then read the source code for a few.

See Also

Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TapReporter,TeamcityReporter

Examples

path <- testthat_example("success")
test_file(path)
# Override the default by supplying the name of a reporter
test_file(path, reporter = "minimal")

Silently collect all expectations

Description

This reporter quietly runs all tests, simply gathering all expectations. This is helpful for programmatically inspecting errors after a test run. You can retrieve the results with $expectations().
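A minimal sketch of programmatic inspection, based on the $expectations() method described above; it reuses the example file shipped with testthat.

library(testthat)
# Run a file quietly, then inspect the collected expectation objects.
reporter <- SilentReporter$new()
test_file(testthat_example("success"), reporter = reporter)
results <- reporter$expectations()
length(results)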

See Also

Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SlowReporter,StopReporter,SummaryReporter,TapReporter,TeamcityReporter


Find slow tests

Description

SlowReporter is designed to identify slow tests. It reports the execution time for each test and can optionally filter out tests that run faster than a specified threshold (default: 1 second). This reporter is useful for performance optimization and identifying tests that may benefit from optimization or parallelization.

The easiest way to run it over your package is with devtools::test(reporter = "slow").

See Also

Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,StopReporter,SummaryReporter,TapReporter,TeamcityReporter



Error if any test fails

Description

The default reporter used when expect_that() is run interactively. It responds by displaying a summary of the number of successes and failures and stop()ping if there are any failures.

See Also

Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,SummaryReporter,TapReporter,TeamcityReporter


Report a summary of failures

Description

This is designed for interactive usage: it lets you know which tests have run successfully, as well as fully reporting information about failures and errors.

You can use the max_reports field to control the maximum number of detailed reports produced by this reporter.

As an additional benefit, this reporter will praise you from time-to-time if all your tests pass.
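A minimal sketch, assuming that max_reports can be set when constructing the reporter (the constructor argument is an assumption; only the field is documented above):

library(testthat)
# Limit the summary to at most three detailed failure reports
test_dir("tests/testthat", reporter = SummaryReporter$new(max_reports = 3))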

See Also

Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,TapReporter,TeamcityReporter


Report results in TAP format

Description

This reporter will output results in the Test Anything Protocol (TAP), a simple text-based interface between testing modules in a test harness. For more information about TAP, see http://testanything.org.

See Also

Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TeamcityReporter


Report results in Teamcity format

Description

This reporter will output results in the Teamcity message format. For more information about Teamcity messages, see http://confluence.jetbrains.com/display/TCD7/Build+Script+Interaction+with+TeamCity

See Also

Other reporters:CheckReporter,DebugReporter,FailReporter,JunitReporter,ListReporter,LocationReporter,MinimalReporter,MultiReporter,ProgressReporter,RStudioReporter,Reporter,SilentReporter,SlowReporter,StopReporter,SummaryReporter,TapReporter


Watches code and tests for changes, rerunning tests as appropriate.

Description

[Superseded]

The idea behind auto_test() is that you just leave it running while you develop your code. Every time you save a file it will be automatically tested and you can easily see if your changes have caused any test failures.

The current strategy for rerunning tests is as follows:

Usage

auto_test(
  code_path,
  test_path,
  reporter = default_reporter(),
  env = test_env(),
  hash = TRUE
)

auto_test_package(pkg = ".", reporter = default_reporter(), hash = TRUE)

Arguments

code_path

path to directory containing code

test_path

path to directory containing tests

reporter

test reporter to use

env

environment in which to execute test suite.

hash

Passed on to watch(). When FALSE, uses less accurate modification time stamps, but those are faster for large files.

pkg

path to package

See Also

auto_test_package()
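A minimal usage sketch based on the signature above; the directory layout shown is the conventional one for a package, but is an assumption here.

## Not run: 
# Watch the R/ sources and testthat tests of the current package and rerun
# affected tests whenever a file is saved.
auto_test("R", "tests/testthat")

# Or watch a whole package directory:
auto_test_package(".")
## End(Not run)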


Capture conditions, including messages, warnings, expectations, and errors.

Description

[Superseded]

These functions allow you to capture the side-effects of a function call including printed output, messages and warnings. We no longer recommend that you use these functions, instead relying on expect_message() and friends to bubble up unmatched conditions. If you just want to silence unimportant warnings, use suppressWarnings().

Usage

capture_condition(code, entrace = FALSE)
capture_error(code, entrace = FALSE)
capture_expectation(code, entrace = FALSE)
capture_message(code, entrace = FALSE)
capture_warning(code, entrace = FALSE)
capture_messages(code)
capture_warnings(code, ignore_deprecation = FALSE)

Arguments

code

Code to evaluate

entrace

Whether to add abacktrace tothe captured condition.

Value

Singular functions (capture_condition, capture_expectation etc) return a condition object. capture_messages() and capture_warnings return a character vector of message text.

Examples

f <- function() {
  message("First")
  warning("Second")
  message("Third")
}
capture_message(f())
capture_messages(f())

capture_warning(f())
capture_warnings(f())

# Condition will capture anything
capture_condition(f())

Capture output to console

Description

Evaluates code in a special context in which all output is captured, similar to capture.output().

Usage

capture_output(code, print = FALSE, width = 80)
capture_output_lines(code, print = FALSE, width = 80)
testthat_print(x)

Arguments

code

Code to evaluate.

print

IfTRUE and the result of evaluatingcode isvisible, print the result usingtestthat_print().

width

Number of characters per line of output. This does notinherit fromgetOption("width") so that tests always use the sameoutput width, minimising spurious differences.

Details

Results are printed using the testthat_print() generic, which defaults to print(), giving you the ability to customise the printing of your object in tests, if needed.
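A small sketch of that customisation, using a hypothetical "money" class; the method is registered through normal S3 dispatch on the testthat_print() generic described above.

# Hypothetical S3 class with a custom test-time print method
testthat_print.money <- function(x) {
  cat(format(unclass(x), big.mark = ","), "USD\n")
}
m <- structure(1234567, class = "money")
# capture_output() now captures the custom representation rather than
# the default print() output
capture_output(m, print = TRUE)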

Value

capture_output() returns a single string. capture_output_lines() returns a character vector with one entry for each line.

Examples

capture_output({
  cat("Hi!\n")
  cat("Bye\n")
})

capture_output_lines({
  cat("Hi!\n")
  cat("Bye\n")
})

capture_output("Hi")
capture_output("Hi", print = TRUE)

Provide human-readable comparison of two objects

Description

[Superseded]

compare is similar to base::all.equal(), but somewhat buggy in its use of tolerance. Please use waldo instead.

Usage

compare(x, y, ...)

## Default S3 method:
compare(x, y, ..., max_diffs = 9)

## S3 method for class 'character'
compare(
  x,
  y,
  check.attributes = TRUE,
  ...,
  max_diffs = 5,
  max_lines = 5,
  width = cli::console_width()
)

## S3 method for class 'numeric'
compare(
  x,
  y,
  tolerance = testthat_tolerance(),
  check.attributes = TRUE,
  ...,
  max_diffs = 9
)

## S3 method for class 'POSIXt'
compare(x, y, tolerance = 0.001, ..., max_diffs = 9)

Arguments

x,y

Objects to compare

...

Additional arguments used to control specifics of comparison

max_diffs

Maximum number of differences to show

check.attributes

IfTRUE, also checks values of attributes.

max_lines

Maximum number of lines to show from each difference

width

Width of output device

tolerance

Numerical tolerance: any differences (in the sense ofbase::all.equal()) smaller than this value will be ignored.

The default tolerance issqrt(.Machine$double.eps), unless long doublesare not available, in which case the test is skipped.

Examples

# Character -----------------------------------------------------------------
x <- c("abc", "def", "jih")
compare(x, x)

y <- paste0(x, "y")
compare(x, y)

compare(letters, paste0(letters, "-"))

x <- "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis cursus tincidunt auctor. Vestibulum ac metus bibendum, facilisis nisi non, pulvinar dolor. Donec pretium iaculis nulla, ut interdum sapien ultricies a. "
y <- "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Duis cursus tincidunt auctor. Vestibulum ac metus1 bibendum, facilisis nisi non, pulvinar dolor. Donec pretium iaculis nulla, ut interdum sapien ultricies a. "
compare(x, y)
compare(c(x, x), c(y, y))

# Numeric -------------------------------------------------------------------
x <- y <- runif(100)
y[sample(100, 10)] <- 5
compare(x, y)

x <- y <- 1:10
x[5] <- NA
x[6] <- 6.5
compare(x, y)

# Compare ignores minor numeric differences in the same way
# as all.equal.
compare(x, x + 1e-9)

Compare two directory states.

Description

Compare two directory states.

Usage

compare_state(old, new)

Arguments

old

previous state

new

current state

Value

list containing number of changes and files which have been added, deleted and modified


Do you expect a value bigger or smaller than this?

Description

These functions compare values of comparable data types, such as numbers, dates, and times.

Usage

expect_lt(object, expected, label = NULL, expected.label = NULL)
expect_lte(object, expected, label = NULL, expected.label = NULL)
expect_gt(object, expected, label = NULL, expected.label = NULL)
expect_gte(object, expected, label = NULL, expected.label = NULL)

Arguments

object,expected

A value to compare and its expected bound.

label,expected.label

Used to customise failure messages. For expertuse only.

See Also

Other expectations:equality-expectations,expect_error(),expect_length(),expect_match(),expect_named(),expect_null(),expect_output(),expect_reference(),expect_silent(),inheritance-expectations,logical-expectations

Examples

a <- 9
expect_lt(a, 10)
## Not run: expect_lt(11, 10)
## End(Not run)

a <- 11
expect_gt(a, 10)
## Not run: expect_gt(9, 10)
## End(Not run)

Describe the context of a set of tests.

Description

[Superseded]

Use of context() is no longer recommended. Instead omit it, and messages will use the name of the file instead. This ensures that the context and test file name are always in sync.

A context defines a set of tests that test related functionality. Usually you will have one context per file, but you may have multiple contexts in a single file if you so choose.

Usage

context(desc)

Arguments

desc

description of context. Should start with a capital letter.

3rd edition

[Deprecated]

context() is deprecated in the third edition, and the equivalent information is instead recorded by the test file name.

Examples

context("String processing")context("Remote procedure calls")

Start test context from a file name

Description

For use in external reporters

Usage

context_start_file(name)

Arguments

name

file name


Retrieve the default reporter

Description

The defaults are:

Usage

default_reporter()
default_parallel_reporter()
default_compact_reporter()
check_reporter()

describe: a BDD testing language

Description

A simple behavior-driven development (BDD) domain-specific language for writing tests. The language is similar to RSpec for Ruby or Mocha for JavaScript. BDD tests read like sentences and it should thus be easier to understand what the specification of a function/component is.

Usage

describe(description, code)
it(description, code = NULL)

Arguments

description

description of the feature

code

test code containing the specs

Details

Tests using the describe syntax not only verify the tested code, but also document its intended behaviour. Each describe block specifies a larger component or function and contains a set of specifications. A specification is defined by an it block. Each it block functions as a test and is evaluated in its own environment. You can also have nested describe blocks.

This test syntax helps to test the intended behaviour of your code. For example: you want to write a new function for your package. Try to describe the specification first using describe, before you write any code. After that, you start to implement the tests for each specification (i.e. the it block).

Use describe to verify that you implement the right things and use test_that() to ensure you do the things right.

Examples

describe("matrix()", {  it("can be multiplied by a scalar", {    m1 <- matrix(1:4, 2, 2)    m2 <- m1 * 2    expect_equal(matrix(1:4 * 2, 2, 2), m2)  })  it("can have not yet tested specs")})# Nested specs:## codeaddition <- function(a, b) a + bdivision <- function(a, b) a / b## specsdescribe("math library", {  describe("addition()", {    it("can add two numbers", {      expect_equal(1 + 1, addition(1, 1))    })  })  describe("division()", {    it("can divide two numbers", {      expect_equal(10 / 2, division(10, 2))    })    it("can handle division by 0") #not yet implemented  })})

Capture the state of a directory.

Description

Capture the state of a directory.

Usage

dir_state(path, pattern = NULL, hash = TRUE)

Arguments

path

path to directory

pattern

regular expression with which to filter files

hash

use hash (slow but accurate) or time stamp (fast but lessaccurate)


Do you expect this value?

Description

These functions provide two levels of strictness when comparing a computation to a reference value. expect_identical() is the baseline; expect_equal() relaxes the test to ignore small numeric differences.

In the 2nd edition, expect_identical() uses identical() and expect_equal uses all.equal(). In the 3rd edition, both functions use waldo. They differ only in that expect_equal() sets tolerance = testthat_tolerance() so that small floating point differences are ignored; this also implies that (e.g.) 1 and 1L are treated as equal.

Usage

expect_equal(
  object,
  expected,
  ...,
  tolerance = if (edition_get() >= 3) testthat_tolerance(),
  info = NULL,
  label = NULL,
  expected.label = NULL
)

expect_identical(
  object,
  expected,
  info = NULL,
  label = NULL,
  expected.label = NULL,
  ...
)

Arguments

object,expected

Computation and value to compare it to.

Both arguments support limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details.

...

3e: passed on towaldo::compare(). See its docs to see otherways to control comparison.

2e: passed on tocompare()/identical().

tolerance

3e: passed on to waldo::compare(). If non-NULL, will ignore small floating point differences. It uses the same algorithm as all.equal(), so the tolerance is usually relative (i.e. mean(abs(x - y)) / mean(abs(y)) < tolerance), except when the differences are very small, when it becomes absolute (i.e. mean(abs(x - y)) < tolerance). See the waldo documentation for more details.

2e: passed on tocompare(), if set. It's hard toreason about exactly what tolerance means because depending on the precisecode path it could be either an absolute or relative tolerance.

info

Extra information to be included in the message. This argumentis soft-deprecated and should not be used in new code. Instead seealternatives inquasi_label.

label,expected.label

Used to customise failure messages. For expertuse only.

See Also

Other expectations:comparison-expectations,expect_error(),expect_length(),expect_match(),expect_named(),expect_null(),expect_output(),expect_reference(),expect_silent(),inheritance-expectations,logical-expectations

Examples

a <- 10
expect_equal(a, 10)

# Use expect_equal() when testing for numeric equality
## Not run: expect_identical(sqrt(2) ^ 2, 2)
## End(Not run)
expect_equal(sqrt(2) ^ 2, 2)

Evaluate a promise, capturing all types of output.

Description

Evaluate a promise, capturing all types of output.

Usage

evaluate_promise(code, print = FALSE)

Arguments

code

Code to evaluate.

Value

A list containing

result

The result of the function

output

A string containing all the output from the function

warnings

A character vector containing the text from each warning

messages

A character vector containing the text from each message

Examples

evaluate_promise({
  print("1")
  message("2")
  warning("3")
  4
})

The previous building block of all expect_ functions

Description

Previously, we recommended using expect() when writing your own expectations. Now we instead recommend pass() and fail(). See vignette("custom-expectation") for details.

Usage

expect(
  ok,
  failure_message,
  info = NULL,
  srcref = NULL,
  trace = NULL,
  trace_env = caller_env()
)

Arguments

ok

TRUE orFALSE indicating if the expectation was successful.

failure_message

A character vector describing the failure. The first element should describe the expected value, and the second (and optionally subsequent) elements should describe what was actually seen.

info

Character vector containing additional information. Included for backward compatibility only; new expectations should not use it.

srcref

Location of the failure. Should only need to be explicitly supplied when you need to forward a srcref captured elsewhere.

trace

An optional backtrace created byrlang::trace_back().When supplied, the expectation is displayed with the backtrace.Expert use only.

trace_env

Iftrace is not specified, this is used to generate aninformative traceback for failures. You should only need to set this ifyou're callingfail() from a helper function; seevignette("custom-expectation") for details.

Value

An expectation object from either succeed() or fail(), with a muffle_expectation restart.

See Also

exp_signal()


Do you expect every value in a vector to have this value?

Description

These expectations are similar to expect_true(all(x == "x")), expect_true(all(x)) and expect_true(all(!x)) but give more informative failure messages if the expectations are not met.

Usage

expect_all_equal(object, expected)
expect_all_true(object)
expect_all_false(object)

Arguments

object,expected

Computation and value to compare it to.

Both arguments support limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details.

Examples

x1 <- c(1, 1, 1, 1, 1, 1)
expect_all_equal(x1, 1)

x2 <- c(1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2)
show_failure(expect_all_equal(x2, 1))

# expect_all_true() and expect_all_false() are helpers for common cases
set.seed(1016)
show_failure(expect_all_true(rpois(100, 10) < 20))
show_failure(expect_all_false(rpois(100, 10) > 20))

Do C++ tests pass?

Description

Test compiled code in the package package. A call to this function will automatically be generated for you in tests/testthat/test-cpp.R after calling use_catch(); you should not need to manually call this expectation yourself.

Usage

expect_cpp_tests_pass(package)
run_cpp_tests(package)

Arguments

package

The name of the package to test.
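For illustration only: the generated tests/testthat/test-cpp.R typically contains a single call along these lines. The package name "yourpackage" is a placeholder, and whether the file uses run_cpp_tests() or the older expect_cpp_tests_pass() depends on your testthat version.

# Contents of tests/testthat/test-cpp.R created by use_catch() (sketch;
# you should not need to write this by hand)
run_cpp_tests("yourpackage")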


Is an object equal to the expected value, ignoring attributes?

Description

Compares object and expected using all.equal() and check.attributes = FALSE.

Usage

expect_equivalent(
  object,
  expected,
  ...,
  info = NULL,
  label = NULL,
  expected.label = NULL
)

Arguments

object,expected

Computation and value to compare it to.

Both arguments support limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details.

...

Passed on tocompare().

info

Extra information to be included in the message. This argumentis soft-deprecated and should not be used in new code. Instead seealternatives inquasi_label.

label,expected.label

Used to customise failure messages. For expertuse only.

3rd edition

[Deprecated]

expect_equivalent() is deprecated in the 3rd edition. Instead use expect_equal(ignore_attr = TRUE).

Examples

# expect_equivalent() ignores attributes
a <- b <- 1:3
names(b) <- letters[1:3]
## Not run: expect_equal(a, b)
## End(Not run)
expect_equivalent(a, b)

Do you expect an error, warning, message, or other condition?

Description

expect_error(), expect_warning(), expect_message(), and expect_condition() check that code throws an error, warning, message, or condition with a message that matches regexp, or a class that inherits from class. See below for more details.

In the 3rd edition, these functions match (at most) a single condition. All additional and non-matching (if regexp or class are used) conditions will bubble up outside the expectation. If these additional conditions are important you'll need to catch them with additional expect_message()/expect_warning() calls; if they're unimportant you can ignore with suppressMessages()/suppressWarnings().

It can be tricky to test for a combination of different conditions, such as a message followed by an error. expect_snapshot() is often an easier alternative for these more complex cases.

Usage

expect_error(
  object,
  regexp = NULL,
  class = NULL,
  ...,
  inherit = TRUE,
  info = NULL,
  label = NULL
)

expect_warning(
  object,
  regexp = NULL,
  class = NULL,
  ...,
  inherit = TRUE,
  all = FALSE,
  info = NULL,
  label = NULL
)

expect_message(
  object,
  regexp = NULL,
  class = NULL,
  ...,
  inherit = TRUE,
  all = FALSE,
  info = NULL,
  label = NULL
)

expect_condition(
  object,
  regexp = NULL,
  class = NULL,
  ...,
  inherit = TRUE,
  info = NULL,
  label = NULL
)

Arguments

object

Object to test.

Supports limited unquoting to make it easier to generate readable failureswithin a function or for loop. Seequasi_label for more details.

regexp

Regular expression to test against.

  • A character vector giving a regular expression that must match the error message.

  • If NULL, the default, asserts that there should be an error, but doesn't test for a specific value.

  • If NA, asserts that there should be no errors, but we now recommend using expect_no_error() and friends instead.

Note that you should only use message with errors/warnings/messages that you generate. Avoid tests that rely on the specific text generated by another package since this can easily change. If you do need to test text generated by another package, either protect the test with skip_on_cran() or use expect_snapshot().

class

Instead of supplying a regular expression, you can also supply a class name. This is useful for "classed" conditions.

...

Arguments passed on toexpect_match

fixed

If TRUE, treats regexp as a string to be matched exactly (not a regular expression). Overrides perl.

perl

logical. Should Perl-compatible regexps be used?

inherit

Whether to matchregexp andclass across theancestry of chained errors.

info

Extra information to be included in the message. This argumentis soft-deprecated and should not be used in new code. Instead seealternatives inquasi_label.

label

Used to customise failure messages. For expert use only.

all

DEPRECATED. If you need to test multiple warnings/messages you now need to use multiple calls to expect_message()/expect_warning().

Value

If regexp = NA, the value of the first argument; otherwise the captured condition.

Testingmessage vsclass

When checking that code generates an error, it's important to check that the error is the one you expect. There are two ways to do this. The first way is the simplest: you just provide a regexp that matches some fragment of the error message. This is easy, but fragile, because the test will fail if the error message changes (even if it's the same error).

A more robust way is to test for the class of the error, if it has one. You can learn more about custom conditions at https://adv-r.hadley.nz/conditions.html#custom-conditions, but in short, errors are S3 classes and you can generate a custom class and check for it using class instead of regexp.

If you are using expect_error() to check that an error message is formatted in such a way that it makes sense to a human, we recommend using expect_snapshot() instead.
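A short sketch contrasting the two approaches; the my_error class is hypothetical and the error is signalled with rlang::abort(), as in the Examples below.

f <- function() rlang::abort("Not enough bananas!", class = "my_error")

# Fragile: breaks as soon as the wording of the message changes
expect_error(f(), "bananas")

# Robust: only the condition class is checked
expect_error(f(), class = "my_error")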

See Also

expect_no_error(),expect_no_warning(),expect_no_message(), andexpect_no_condition() to assertthat code runs without errors/warnings/messages/conditions.

Other expectations:comparison-expectations,equality-expectations,expect_length(),expect_match(),expect_named(),expect_null(),expect_output(),expect_reference(),expect_silent(),inheritance-expectations,logical-expectations

Examples

# Errors ------------------------------------------------------------------
f <- function() stop("My error!")
expect_error(f())
expect_error(f(), "My error!")

# You can use the arguments of grepl to control the matching
expect_error(f(), "my error!", ignore.case = TRUE)

# Note that `expect_error()` returns the error object so you can test
# its components if needed
err <- expect_error(rlang::abort("a", n = 10))
expect_equal(err$n, 10)

# Warnings ------------------------------------------------------------------
f <- function(x) {
  if (x < 0) {
    warning("*x* is already negative")
    return(x)
  }
  -x
}
expect_warning(f(-1))
expect_warning(f(-1), "already negative")
expect_warning(f(1), NA)

# To test message and output, store results to a variable
expect_warning(out <- f(-1), "already negative")
expect_equal(out, -1)

# Messages ------------------------------------------------------------------
f <- function(x) {
  if (x < 0) {
    message("*x* is already negative")
    return(x)
  }
  -x
}
expect_message(f(-1))
expect_message(f(-1), "already negative")
expect_message(f(1), NA)

Do you expect the result to be (in)visible?

Description

Use this to test whether a function returns a visible or invisible output. Typically you'll use this to check that functions called primarily for their side-effects return their data argument invisibly.

Usage

expect_invisible(call, label = NULL)expect_visible(call, label = NULL)

Arguments

call

A function call.

label

Used to customise failure messages. For expert use only.

Value

The evaluated call, invisibly.

Examples

expect_invisible(x <- 10)
expect_visible(x)

# Typically you'll assign the result of the expectation so you can
# also check that the value is as you expect.
greet <- function(name) {
  message("Hi ", name)
  invisible(name)
}
out <- expect_invisible(greet("Hadley"))
expect_equal(out, "Hadley")

Do you expect to inherit from this class?

Description

[Superseded]

expect_is() is an older form that uses inherits() without checking whether x is S3, S4, or neither. Instead, I'd recommend using expect_type(), expect_s3_class(), or expect_s4_class() to more clearly convey your intent.

Usage

expect_is(object, class, info = NULL, label = NULL)

Arguments

object

Object to test.

Supports limited unquoting to make it easier to generate readable failureswithin a function or for loop. Seequasi_label for more details.

class

Class name passed toinherits().

3rd edition

[Deprecated]

expect_is() is formally deprecated in the 3rd edition.


Do you expect the results/output to equal a known value?

Description

For complex printed output and objects, it is often challenging to describe exactly what you expect to see. expect_known_value() and expect_known_output() provide a slightly weaker guarantee, simply asserting that the values have not changed since the last time that you ran them.

Usage

expect_known_output(
  object,
  file,
  update = TRUE,
  ...,
  info = NULL,
  label = NULL,
  print = FALSE,
  width = 80
)

expect_known_value(
  object,
  file,
  update = TRUE,
  ...,
  info = NULL,
  label = NULL,
  version = 2
)

expect_known_hash(object, hash = NULL)

Arguments

file

File path where known value/output will be stored.

update

Should the file be updated? Defaults toTRUE, withthe expectation that you'll notice changes because of the first failure,and then see the modified files in git.

...

Passed on towaldo::compare().

info

Extra information to be included in the message. This argumentis soft-deprecated and should not be used in new code. Instead seealternatives inquasi_label.

print

IfTRUE and the result of evaluatingcode isvisible, print the result usingtestthat_print().

width

Number of characters per line of output. This does notinherit fromgetOption("width") so that tests always use the sameoutput width, minimising spurious differences.

version

The serialization format version to use. The default, 2, wasthe default format from R 1.4.0 to 3.5.3. Version 3 became the default fromR 3.6.0 and can only be read by R versions 3.5.0 and higher.

hash

Known hash value. Leave empty and you'll be informed whatto use in the test output.

Details

These expectations should be used in conjunction with git, as otherwise there is no way to revert to previous values. Git is particularly useful in conjunction with expect_known_output() as the diffs will show you exactly what has changed.

Note that known values will only be updated when running tests interactively. R CMD check clones the package source so any changes to the reference files will occur in a temporary directory, and will not be synchronised back to the source package.

3rd edition

[Deprecated]

expect_known_output() and friends are deprecated in the 3rd edition; please use expect_snapshot_output() and friends instead.

Examples

tmp <- tempfile()

# The first run always succeeds
expect_known_output(mtcars[1:10, ], tmp, print = TRUE)

# Subsequent runs will succeed only if the file is unchanged
# This will succeed:
expect_known_output(mtcars[1:10, ], tmp, print = TRUE)

## Not run: 
# This will fail
expect_known_output(mtcars[1:9, ], tmp, print = TRUE)
## End(Not run)

Do you expect an object with this length or shape?

Description

expect_length() inspects the length() of an object; expect_shape() inspects the "shape" (i.e. nrow(), ncol(), or dim()) of higher-dimensional objects like data.frames, matrices, and arrays.

Usage

expect_length(object, n)
expect_shape(object, ..., nrow, ncol, dim)

Arguments

object

Object to test.

Supports limited unquoting to make it easier to generate readable failureswithin a function or for loop. Seequasi_label for more details.

n

Expected length.

...

Not used; used to force naming of other arguments.

nrow,ncol

Expectednrow()/ncol() ofobject.

dim

Expecteddim() ofobject.

See Also

expect_vector() to make assertions about the "size" of a vector.

Other expectations:comparison-expectations,equality-expectations,expect_error(),expect_match(),expect_named(),expect_null(),expect_output(),expect_reference(),expect_silent(),inheritance-expectations,logical-expectations

Examples

expect_length(1, 1)
expect_length(1:10, 10)
show_failure(expect_length(1:10, 1))

x <- matrix(1:9, nrow = 3)
expect_shape(x, nrow = 3)
show_failure(expect_shape(x, nrow = 4))
expect_shape(x, ncol = 3)
show_failure(expect_shape(x, ncol = 4))
expect_shape(x, dim = c(3, 3))
show_failure(expect_shape(x, dim = c(3, 4, 5)))

Deprecated numeric comparison functions

Description

These functions have been deprecated in favour of the more concise expect_gt() and expect_lt().

Usage

expect_less_than(...)
expect_more_than(...)

Arguments

...

All arguments passed on toexpect_lt()/expect_gt().


Do you expect a string to match this pattern?

Description

Do you expect a string to match this pattern?

Usage

expect_match(
  object,
  regexp,
  perl = FALSE,
  fixed = FALSE,
  ...,
  all = TRUE,
  info = NULL,
  label = NULL
)

expect_no_match(
  object,
  regexp,
  perl = FALSE,
  fixed = FALSE,
  ...,
  all = TRUE,
  info = NULL,
  label = NULL
)

Arguments

object

Object to test.

Supports limited unquoting to make it easier to generate readable failureswithin a function or for loop. Seequasi_label for more details.

regexp

Regular expression to test against.

perl

logical. Should Perl-compatible regexps be used?

fixed

If TRUE, treats regexp as a string to be matched exactly (not a regular expression). Overrides perl.

...

Arguments passed on tobase::grepl

ignore.case

logical. ifFALSE, the pattern matching iscasesensitive and ifTRUE, case is ignored during matching.

useBytes

logical. IfTRUE the matching is donebyte-by-byte rather than character-by-character. See‘Details’.

all

Should all elements of actual value matchregexp (TRUE),or does only one need to match (FALSE).

info

Extra information to be included in the message. This argumentis soft-deprecated and should not be used in new code. Instead seealternatives inquasi_label.

label

Used to customise failure messages. For expert use only.

Details

expect_match() checks if a character vector matches a regular expression, powered by grepl().

expect_no_match() provides the complementary case, checking that a character vector does not match a regular expression.

Functions

See Also

Other expectations:comparison-expectations,equality-expectations,expect_error(),expect_length(),expect_named(),expect_null(),expect_output(),expect_reference(),expect_silent(),inheritance-expectations,logical-expectations

Examples

expect_match("Testing is fun", "fun")expect_match("Testing is fun", "f.n")expect_no_match("Testing is fun", "horrible")show_failure(expect_match("Testing is fun", "horrible"))show_failure(expect_match("Testing is fun", "horrible", fixed = TRUE))# Zero-length inputs always failshow_failure(expect_match(character(), "."))

Do you expect a vector with (these) names?

Description

You can either check for the presence of names (leaving expected blank), specific names (by supplying a vector of names), or absence of names (with NULL).

Usage

expect_named(
  object,
  expected,
  ignore.order = FALSE,
  ignore.case = FALSE,
  info = NULL,
  label = NULL
)

Arguments

object

Object to test.

Supports limited unquoting to make it easier to generate readable failureswithin a function or for loop. Seequasi_label for more details.

expected

Character vector of expected names. Leave missing tomatch any names. UseNULL to check for absence of names.

ignore.order

IfTRUE, sorts names before comparing toignore the effect of order.

ignore.case

IfTRUE, lowercases all names to ignore theeffect of case.

info

Extra information to be included in the message. This argumentis soft-deprecated and should not be used in new code. Instead seealternatives inquasi_label.

label

Used to customise failure messages. For expert use only.

See Also

Other expectations:comparison-expectations,equality-expectations,expect_error(),expect_length(),expect_match(),expect_null(),expect_output(),expect_reference(),expect_silent(),inheritance-expectations,logical-expectations

Examples

x <- c(a = 1, b = 2, c = 3)
expect_named(x)
expect_named(x, c("a", "b", "c"))

# Use options to control sensitivity
expect_named(x, c("B", "C", "A"), ignore.order = TRUE, ignore.case = TRUE)

# Can also check for the absence of names with NULL
z <- 1:4
expect_named(z, NULL)

Do you expect the absence of errors, warnings, messages, or other conditions?

Description

These expectations are the opposite of expect_error(), expect_warning(), expect_message(), and expect_condition(). They assert the absence of an error, warning, message, or condition, respectively.

Usage

expect_no_error(object, ..., message = NULL, class = NULL)
expect_no_warning(object, ..., message = NULL, class = NULL)
expect_no_message(object, ..., message = NULL, class = NULL)
expect_no_condition(object, ..., message = NULL, class = NULL)

Arguments

object

Object to test.

Supports limited unquoting to make it easier to generate readable failureswithin a function or for loop. Seequasi_label for more details.

...

These dots are for future extensions and must be empty.

message,class

The default,⁠message = NULL, class = NULL⁠,will fail if there is any error/warning/message/condition.

In many cases, particularly when testing warnings and messages, you will want to be more specific about the condition you are hoping not to see, i.e. the condition that motivated you to write the test. Similar to expect_error() and friends, you can specify the message (a regular expression that the message of the condition must match) and/or the class (a class the condition must inherit from). This ensures that the messages/warnings you don't want never recur, while allowing new messages/warnings to bubble up for you to deal with.

Note that you should only use message with errors/warnings/messages that you generate, or that base R generates (which tend to be stable). Avoid tests that rely on the specific text generated by another package since this can easily change. If you do need to test text generated by another package, either protect the test with skip_on_cran() or use expect_snapshot().

Examples

expect_no_warning(1 + 1)

foo <- function(x) {
  warning("This is a problem!")
}

# warning doesn't match so bubbles up:
expect_no_warning(foo(), message = "bananas")

# warning does match so causes a failure:
try(expect_no_warning(foo(), message = "problem"))

Test for absence of success or failure

Description

[Deprecated]

These functions are deprecated because expect_success() and expect_failure() now test for exactly one success or no failures, and exactly one failure and no successes.

Usage

expect_no_success(expr)
expect_no_failure(expr)

Do you expectNULL?

Description

This is a special case because NULL is a singleton, so it's possible to check for it either with expect_equal(x, NULL) or expect_type(x, "NULL").

Usage

expect_null(object, info = NULL, label = NULL)

Arguments

object

Object to test.

Supports limited unquoting to make it easier to generate readable failureswithin a function or for loop. Seequasi_label for more details.

info

Extra information to be included in the message. This argumentis soft-deprecated and should not be used in new code. Instead seealternatives inquasi_label.

label

Used to customise failure messages. For expert use only.

See Also

Other expectations:comparison-expectations,equality-expectations,expect_error(),expect_length(),expect_match(),expect_named(),expect_output(),expect_reference(),expect_silent(),inheritance-expectations,logical-expectations

Examples

x <- NULL
y <- 10

expect_null(x)
show_failure(expect_null(y))

Do you expect printed output to match this pattern?

Description

Test for output produced by print() or cat(). This is best used for very simple output; for more complex cases use expect_snapshot().

Usage

expect_output(
  object,
  regexp = NULL,
  ...,
  info = NULL,
  label = NULL,
  width = 80
)

Arguments

object

Object to test.

Supports limited unquoting to make it easier to generate readable failureswithin a function or for loop. Seequasi_label for more details.

regexp

Regular expression to test against.

  • A character vector giving a regular expression that must match the output.

  • If NULL, the default, asserts that there should be output, but doesn't check for a specific value.

  • IfNA, asserts that there should be no output.

...

Arguments passed on toexpect_match

all

Should all elements of actual value matchregexp (TRUE),or does only one need to match (FALSE).

fixed

If TRUE, treats regexp as a string to be matched exactly (not a regular expression). Overrides perl.

perl

logical. Should Perl-compatible regexps be used?

info

Extra information to be included in the message. This argumentis soft-deprecated and should not be used in new code. Instead seealternatives inquasi_label.

label

Used to customise failure messages. For expert use only.

width

Number of characters per line of output. This does notinherit fromgetOption("width") so that tests always use the sameoutput width, minimising spurious differences.

Value

The first argument, invisibly.

See Also

Other expectations:comparison-expectations,equality-expectations,expect_error(),expect_length(),expect_match(),expect_named(),expect_null(),expect_reference(),expect_silent(),inheritance-expectations,logical-expectations

Examples

str(mtcars)
expect_output(str(mtcars), "32 obs")
expect_output(str(mtcars), "11 variables")

# You can use the arguments of grepl to control the matching
expect_output(str(mtcars), "11 VARIABLES", ignore.case = TRUE)
expect_output(str(mtcars), "$ mpg", fixed = TRUE)

Do you expect the output/result to equal a known good value?

Description

expect_output_file() behaves identically to expect_known_output().

Usage

expect_output_file(
  object,
  file,
  update = TRUE,
  ...,
  info = NULL,
  label = NULL,
  print = FALSE,
  width = 80
)

3rd edition

[Deprecated]

expect_output_file() is deprecated in the 3rd edition; please use expect_snapshot_output() and friends instead.


Do you expect a reference to this object?

Description

expect_reference() compares the underlying memory addresses of two symbols. It is for expert use only.

Usage

expect_reference(
  object,
  expected,
  info = NULL,
  label = NULL,
  expected.label = NULL
)

Arguments

object,expected

Computation and value to compare it to.

Both arguments support limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details.

info

Extra information to be included in the message. This argumentis soft-deprecated and should not be used in new code. Instead seealternatives inquasi_label.

label,expected.label

Used to customise failure messages. For expertuse only.

3rd edition

[Deprecated]

expect_reference() is deprecated in the third edition. If you know what you're doing, and you really need this behaviour, just use is_reference() directly: expect_true(rlang::is_reference(x, y)).

See Also

Other expectations:comparison-expectations,equality-expectations,expect_error(),expect_length(),expect_match(),expect_named(),expect_null(),expect_output(),expect_silent(),inheritance-expectations,logical-expectations


Do you expect a vector containing these values?

Description

Usage

expect_setequal(object, expected)
expect_mapequal(object, expected)
expect_contains(object, expected)
expect_in(object, expected)
expect_disjoint(object, expected)

Arguments

object,expected

Computation and value to compare it to.

Both arguments support limited unquoting to make it easier to generate readable failures within a function or for loop. See quasi_label for more details.

Details

Note that expect_setequal() ignores names, and you will be warned if both object and expected have them.

Examples

expect_setequal(letters, rev(letters))
show_failure(expect_setequal(letters[-1], rev(letters)))

x <- list(b = 2, a = 1)
expect_mapequal(x, list(a = 1, b = 2))
show_failure(expect_mapequal(x, list(a = 1)))
show_failure(expect_mapequal(x, list(a = 1, b = "x")))
show_failure(expect_mapequal(x, list(a = 1, b = 2, c = 3)))

Do you expect code to execute silently?

Description

Checks that the code produces no output, messages, or warnings.

Usage

expect_silent(object)

Arguments

object

Object to test.

Supports limited unquoting to make it easier to generate readable failureswithin a function or for loop. Seequasi_label for more details.

Value

The first argument, invisibly.

See Also

Other expectations:comparison-expectations,equality-expectations,expect_error(),expect_length(),expect_match(),expect_named(),expect_null(),expect_output(),expect_reference(),inheritance-expectations,logical-expectations

Examples

expect_silent("123")

f <- function() {
  message("Hi!")
  warning("Hey!!")
  print("OY!!!")
}
## Not run: expect_silent(f())
## End(Not run)

Do you expect this code to run the same way as last time?

Description

Snapshot tests (aka golden tests) are similar to unit tests except that the expected result is stored in a separate file that is managed by testthat. Snapshot tests are useful for when the expected value is large, or when the intent of the code is something that can only be verified by a human (e.g. this is a useful error message). Learn more in vignette("snapshotting").

expect_snapshot() runs code as if you had executed it at the console, and records the results, including output, messages, warnings, and errors. If you just want to compare the result, try expect_snapshot_value().

Usage

expect_snapshot(
  x,
  cran = FALSE,
  error = FALSE,
  transform = NULL,
  variant = NULL,
  cnd_class = FALSE
)

Arguments

x

Code to evaluate.

cran

Should these expectations be verified on CRAN? By default,they are not, because snapshot tests tend to be fragile because theyoften rely on minor details of dependencies.

error

Do you expect the code to throw an error? The expectationwill fail (even on CRAN) if an unexpected error is thrown or theexpected error is not thrown.

transform

Optionally, a function to scrub sensitive or stochastictext from the output. Should take a character vector of lines as inputand return a modified character vector as output.

variant

If non-NULL, results will be saved in⁠_snaps/{variant}/{test.md}⁠, sovariant must be a single stringsuitable for use as a directory name.

You can use variants to deal with cases where the snapshot output variesand you want to capture and test the variations. Common use cases includevariations for operating system, R version, or version of key dependency.Variants are an advanced feature. When you use them, you'll need tocarefully think about your testing strategy to ensure that all importantvariants are covered by automated tests, and ensure that you have a wayto get snapshot changes out of your CI system and back into the repo.

Note that there's no way to declare all possible variants up front whichmeans that as soon as you start using variants, you are responsible fordeleting snapshot variants that are no longer used. (testthat will stilldelete all variants if you delete the test.)

cnd_class

Whether to include the class of messages,warnings, and errors in the snapshot. Only the most specificclass is included, i.e. the first element ofclass(cnd).

Workflow

The first time that you run a snapshot expectation it will run x, capture the results, and record them in tests/testthat/_snaps/{test}.md. Each test file gets its own snapshot file, e.g. test-foo.R will get _snaps/foo.md.

It's important to review the Markdown files and commit them to git. They are designed to be human readable, and you should always review new additions to ensure that the salient information has been captured. They should also be carefully reviewed in pull requests, to make sure that snapshots have updated in the expected way.

On subsequent runs, the result of x will be compared to the value stored on disk. If it's different, the expectation will fail, and a new file _snaps/{test}.new.md will be created. If the change was deliberate, you can approve the change with snapshot_accept() and then the tests will pass the next time you run them.

Note that snapshotting can only work when executing a complete test file (with test_file(), test_dir(), or friends) because there's otherwise no way to figure out the snapshot path. If you run snapshot tests interactively, they'll just display the current value.
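A minimal sketch of this workflow, using a hypothetical test file tests/testthat/test-display.R:

# tests/testthat/test-display.R (hypothetical)
test_that("mpg summary is stable", {
  expect_snapshot(summary(mtcars$mpg))
})
# First full run: records the output in tests/testthat/_snaps/display.md.
# Later runs: compare against that file; accept deliberate changes with
# snapshot_accept().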


Do you expect this code to create the same file as last time?

Description

Whole file snapshot testing is designed for testing objects that don't have a convenient textual representation, with initial support for images (.png, .jpg, .svg), data frames (.csv), and text files (.R, .txt, .json, ...).

The first time expect_snapshot_file() is run, it will create _snaps/{test}/{name}.{ext} containing reference output. Future runs will be compared to this reference: if different, the test will fail and the new results will be saved in _snaps/{test}/{name}.new.{ext}. To review failures, call snapshot_review().

We generally expect this function to be used via a wrapper that takes care of ensuring that output is as reproducible as possible, e.g. automatically skipping tests where it's known that images can't be reproduced exactly.

Usage

expect_snapshot_file(
  path,
  name = basename(path),
  binary = deprecated(),
  cran = FALSE,
  compare = NULL,
  transform = NULL,
  variant = NULL
)

announce_snapshot_file(path, name = basename(path))

compare_file_binary(old, new)
compare_file_text(old, new)

Arguments

path

Path to file to snapshot. Optional forannounce_snapshot_file() ifname is supplied.

name

Snapshot name, taken frompath by default.

binary

[Deprecated] Please use thecompare argument instead.

cran

Should these expectations be verified on CRAN? By default,they are not, because snapshot tests tend to be fragile because theyoften rely on minor details of dependencies.

compare

A function used to compare the snapshot files. It should take two inputs, the paths to the old and new snapshot, and return either TRUE or FALSE (see the sketch after the argument list). This defaults to compare_file_text if name has extension .r, .R, .Rmd, .md, or .txt, and otherwise uses compare_file_binary.

compare_file_binary() compares byte-by-byte and compare_file_text() compares lines-by-line, ignoring the difference between Windows and Mac/Linux line endings.

transform

Optionally, a function to scrub sensitive or stochastictext from the output. Should take a character vector of lines as inputand return a modified character vector as output.

variant

If not-NULL, results will be saved in⁠_snaps/{variant}/{test}/{name}⁠. This allows you to createdifferent snapshots for different scenarios, like different operatingsystems or different R versions.

Note that there's no way to declare all possible variants up front whichmeans that as soon as you start using variants, you are responsible fordeleting snapshot variants that are no longer used. (testthat will stilldelete all variants if you delete the test.)

old,new

Paths to old and new snapshot files.

Announcing snapshots

testthat automatically detects dangling snapshots that have beenwritten to the⁠_snaps⁠ directory but which no longer havecorresponding R code to generate them. These dangling files areautomatically deleted so they don't clutter the snapshotdirectory.

This can cause problems if your test is conditionally executed, eitherbecause of anif statement or askip(). To avoid files being deleted inthis case, you can callannounce_snapshot_file() before the conditionalcode.

test_that("can save a file", {  if (!can_save()) {    announce_snapshot_file(name = "data.txt")    skip("Can't save file")  }  path <- withr::local_tempfile()  expect_snapshot_file(save_file(path, mydata()), "data.txt")})

Examples

# To use expect_snapshot_file() you'll typically need to start by writing
# a helper function that creates a file from your code, returning a path
save_png <- function(code, width = 400, height = 400) {
  path <- tempfile(fileext = ".png")
  png(path, width = width, height = height)
  on.exit(dev.off())
  code
  path
}

path <- save_png(plot(1:5))
path

## Not run: expect_snapshot_file(save_png(hist(mtcars$mpg)), "plot.png")
## End(Not run)

# You'd then also provide a helper that skips tests where you can't
# be sure of producing exactly the same output.
expect_snapshot_plot <- function(name, code) {
  # Announce the file before touching skips or running `code`. This way,
  # if the skips are active, testthat will not auto-delete the corresponding
  # snapshot file.
  name <- paste0(name, ".png")
  announce_snapshot_file(name = name)

  # Other packages might affect results
  skip_if_not_installed("ggplot2", "2.0.0")
  # Or maybe the output is different on some operating systems
  skip_on_os("windows")
  # You'll need to carefully think about and experiment with these skips

  path <- save_png(code)
  expect_snapshot_file(path, name)
}

Snapshot helpers

Description

[Questioning]

These snapshotting functions are questioning because they were developedbeforeexpect_snapshot() and we're not sure that they still have arole to play.

Usage

expect_snapshot_output(x, cran = FALSE, variant = NULL)

expect_snapshot_error(x, class = "error", cran = FALSE, variant = NULL)

expect_snapshot_warning(x, class = "warning", cran = FALSE, variant = NULL)

Arguments

x

Code to evaluate.

cran

Should these expectations be verified on CRAN? By default,they are not, because snapshot tests tend to be fragile because theyoften rely on minor details of dependencies.

variant

If non-NULL, results will be saved in⁠_snaps/{variant}/{test.md}⁠, sovariant must be a single stringsuitable for use as a directory name.

You can use variants to deal with cases where the snapshot output variesand you want to capture and test the variations. Common use cases includevariations for operating system, R version, or version of key dependency.Variants are an advanced feature. When you use them, you'll need tocarefully think about your testing strategy to ensure that all importantvariants are covered by automated tests, and ensure that you have a wayto get snapshot changes out of your CI system and back into the repo.

Note that there's no way to declare all possible variants up front whichmeans that as soon as you start using variants, you are responsible fordeleting snapshot variants that are no longer used. (testthat will stilldelete all variants if you delete the test.)

class

Class of expected error or warning. The expectation willalways fail (even on CRAN) if an error of this class isn't seenwhen executingx.
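A minimal sketch of how these helpers might be used (f() and g() are hypothetical functions that print output and throw a classed error, respectively):

test_that("output and errors are stable", {
  expect_snapshot_output(f(1:10))
  expect_snapshot_error(g(-1), class = "error")
})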


Do you expect this code to return the same value as last time?

Description

Captures the result of function, flexibly serializing it into a textrepresentation that's stored in a snapshot file. Seeexpect_snapshot()for more details on snapshot testing.

Usage

expect_snapshot_value(
  x,
  style = c("json", "json2", "deparse", "serialize"),
  cran = FALSE,
  tolerance = testthat_tolerance(),
  ...,
  variant = NULL
)

Arguments

x

Code to evaluate.

style

Serialization style to use:

cran

Should these expectations be verified on CRAN? By default,they are not, because snapshot tests tend to be fragile because theyoften rely on minor details of dependencies.

tolerance

Numerical tolerance: any differences (in the sense ofbase::all.equal()) smaller than this value will be ignored.

The default tolerance issqrt(.Machine$double.eps), unless long doublesare not available, in which case the test is skipped.

...

Passed on towaldo::compare() so you can control the details ofthe comparison.

variant

If non-NULL, results will be saved in⁠_snaps/{variant}/{test.md}⁠, sovariant must be a single stringsuitable for use as a directory name.

You can use variants to deal with cases where the snapshot output variesand you want to capture and test the variations. Common use cases includevariations for operating system, R version, or version of key dependency.Variants are an advanced feature. When you use them, you'll need tocarefully think about your testing strategy to ensure that all importantvariants are covered by automated tests, and ensure that you have a wayto get snapshot changes out of your CI system and back into the repo.

Note that there's no way to declare all possible variants up front whichmeans that as soon as you start using variants, you are responsible fordeleting snapshot variants that are no longer used. (testthat will stilldelete all variants if you delete the test.)
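For example, a minimal sketch that snapshots a small summary list with the default "json" style:

test_that("summary statistics are stable", {
  x <- c(1.5, 2.5, 3.5)
  expect_snapshot_value(
    list(mean = mean(x), sd = sd(x)),
    style = "json",
    tolerance = 1e-6
  )
})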


Test your custom expectations

Description

expect_success() checks that there's exactly one success and no failures;expect_failure() checks that there's exactly one failure and no successes.expect_snapshot_failure() records the failure message so that you canmanually check that it is informative.

Useshow_failure() in examples to print the failure message withoutthrowing an error.

Usage

expect_success(expr)

expect_failure(expr, message = NULL, ...)

expect_snapshot_failure(expr)

show_failure(expr)

Arguments

expr

Code to evaluate

message

Check that the failure message matches this regexp.

...

Other arguments passed on toexpect_match().
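For example, a minimal sketch testing a hypothetical custom expectation, expect_positive(), built on pass() and fail():

expect_positive <- function(x) {
  if (all(x > 0)) pass() else fail("Not all values are positive.")
  invisible(x)
}

test_that("expect_positive() works", {
  expect_success(expect_positive(1:3))
  expect_failure(expect_positive(c(-1, 2)), "Not all values")
})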


Expect that a condition holds.

Description

[Superseded]

An old style of testing that's no longer encouraged.

Usage

expect_that(object, condition, info = NULL, label = NULL)

Arguments

object

Object to test.

Supports limited unquoting to make it easier to generate readable failureswithin a function or for loop. Seequasi_label for more details.

condition

a function that returns whether or not the conditionis met, and if not, an error message to display.

info

Extra information to be included in the message. This argumentis soft-deprecated and should not be used in new code. Instead seealternatives inquasi_label.

label

Used to customise failure messages. For expert use only.

Value

the (internal) expectation result as an invisible list

3rd edition

[Deprecated]

This style of testing is formally deprecated as of the 3rd edition.Use a more specificexpect_ function instead.

See Also

fail() for an expectation that always fails.

Examples

expect_that(5 * 2, equals(10))
expect_that(sqrt(2) ^ 2, equals(2))
## Not run: expect_that(sqrt(2) ^ 2, is_identical_to(2))
## End(Not run)

Do you expect a vector with this size and/or prototype?

Description

expect_vector() is a thin wrapper around vctrs::vec_assert(), converting the results of that function into the expectations used by testthat. This means that it uses the vctrs notions of ptype (prototype) and size. See the details in https://vctrs.r-lib.org/articles/type-size.html

Usage

expect_vector(object, ptype = NULL, size = NULL)

Arguments

object

Object to test.

Supports limited unquoting to make it easier to generate readable failureswithin a function or for loop. Seequasi_label for more details.

ptype

(Optional) Vector prototype to test against. Should be asize-0 (empty) generalised vector.

size

(Optional) Size to check for.

Examples

expect_vector(1:10, ptype = integer(), size = 10)
show_failure(expect_vector(1:10, ptype = integer(), size = 5))
show_failure(expect_vector(1:10, ptype = character(), size = 5))

Expectation conditions

Description

new_expectation() creates an expectation condition object andexp_signal() signals it.expectation() does both.is.expectation()tests if a captured condition is a testthat expectation.

These functions are primarily for internal use. If you are creating your own expectation, you do not need these functions; instead, you should use pass() or fail(). See vignette("custom-expectation") for more details.

Usage

expectation(type, message, ..., srcref = NULL, trace = NULL)

new_expectation(
  type,
  message,
  ...,
  srcref = NULL,
  trace = NULL,
  .subclass = NULL
)

exp_signal(exp)

is.expectation(x)

Arguments

type

Expectation type. Must be one of "success", "failure", "error","skip", "warning".

message

Message describing test failure

...

Additional attributes for the expectation object.

srcref

Optionalsrcref giving location of test.

trace

An optional backtrace created byrlang::trace_back().When supplied, the expectation is displayed with the backtrace.Expert use only.

.subclass

An optional subclass for the expectation object.

exp

An expectation object, as created bynew_expectation().

x

object to test for class membership


Extract a reprex from a failed expectation

Description

extract_test() creates a minimal reprex for a failed expectation.It extracts all non-test code before the failed expectation as well asall code inside the test up to and including the failed expectation.

This is particularly useful when you're debugging test failures insomeone else's package.

Usage

extract_test(location, path = stdout(), package = Sys.getenv("TESTTHAT_PKG"))

Arguments

location

A string giving the location in the form⁠FILE:LINE[:COLUMN]⁠.

path

Path to write the reprex to. Defaults tostdout().

package

If supplied, will be used to construct a test environmentfor the extracted code.

Value

This function is called for its side effect of rendering areprex topath. This function will never error: if extractionfails, the error message will be written topath.

Examples

# If you see a test failure like this:
# -- Failure (test-extract.R:46:3): errors if can't find test -------------
# Expected FALSE to be TRUE.
# Differences:
# `actual`:   FALSE
# `expected`: TRUE
# You can run this:
## Not run: extract_test("test-extract.R:46:3")
# to see just the code needed to reproduce the failure

Declare that an expectation either passes or fails

Description

These are the primitives that you can use to implement your own expectations.Every path through an expectation should either callpass(),fail(),or throw an error (e.g. if the arguments are invalid). Expectations shouldalways returninvisible(act$val).

Learn more about creating your own expectations invignette("custom-expectation").

Usage

fail(
  message = "Failure has been forced",
  info = NULL,
  srcref = NULL,
  trace_env = caller_env(),
  trace = NULL
)

pass()

Arguments

message

A character vector describing the failure. The first element should describe the expected value, and the second (and optionally subsequent) elements should describe what was actually seen.

info

Character vector containing additional information. Included for backward compatibility only; new expectations should not use it.

srcref

Location of the failure. Should only need to be explicitly supplied when you need to forward a srcref captured elsewhere.

trace_env

Iftrace is not specified, this is used to generate aninformative traceback for failures. You should only need to set this ifyou're callingfail() from a helper function; seevignette("custom-expectation") for details.

trace

An optional backtrace created byrlang::trace_back().When supplied, the expectation is displayed with the backtrace.Expert use only.

Examples

expect_length <- function(object, n) {
  act <- quasi_label(rlang::enquo(object), arg = "object")

  act_n <- length(act$val)
  if (act_n != n) {
    fail(sprintf("%s has length %i, not length %i.", act$lab, act_n, n))
  } else {
    pass()
  }

  invisible(act$val)
}

Find reporter object given name or object.

Description

If the reporter is not found, an informative error is returned. Pass a character vector to create a MultiReporter composed of individual reporters. Returns NULL if given NULL.

Usage

find_reporter(reporter)

Arguments

reporter

name of reporter(s), or reporter object(s)
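For example (a minimal sketch; reporter names correspond to the built-in reporter classes):

# A single reporter, by name
find_reporter("summary")

# A character vector creates a MultiReporter
find_reporter(c("summary", "fail"))

# NULL is returned unchanged
find_reporter(NULL)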


Find test files

Description

Find test files

Usage

find_test_scripts(
  path,
  filter = NULL,
  invert = FALSE,
  ...,
  full.names = TRUE,
  start_first = NULL
)

Arguments

path

path to tests

filter

If notNULL, only tests with file names matching thisregular expression will be executed. Matching is performed on the filename after it's stripped of"test-" and".R".

invert

IfTRUE return files whichdon't match.

...

Additional arguments passed togrepl() to control filtering.

start_first

A character vector of file patterns (globs, seeutils::glob2rx()). The patterns are for the file names (base names),not for the whole paths. testthat starts the files matching thefirst pattern first, then the ones matching the second, etc. and thenthe rest of the files, alphabetically. Parallel tests tend to finishquicker if you start the slowest files first.NULL means alphabeticalorder.

Value

A character vector of paths
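For example, a minimal sketch run from a package root:

# All test files in the usual test directory
find_test_scripts("tests/testthat")

# Only files whose name (minus "test-" and ".R") matches "snapshot"
find_test_scripts("tests/testthat", filter = "snapshot")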


Do you expect an S3/S4/R6/S7 object that inherits from this class?

Description

Seehttps://adv-r.hadley.nz/oo.html for an overview of R's OO systems, andthe vocabulary used here.

Seeexpect_vector() for testing properties of objects created by vctrs.

Usage

expect_type(object, type)

expect_s3_class(object, class, exact = FALSE)

expect_s4_class(object, class)

expect_r6_class(object, class)

expect_s7_class(object, class)

Arguments

object

Object to test.

Supports limited unquoting to make it easier to generate readable failureswithin a function or for loop. Seequasi_label for more details.

type

String giving base type (as returned bytypeof()).

class

The required type varies depending on the function:

  • expect_type(): a string.

  • expect_s3_class(): a string or character vector. The behaviour ofmultiple values (i.e. a character vector) is controlled by theexact argument.

  • expect_s4_class(): a string.

  • expect_r6_class(): a string.

  • expect_s7_class(): anS7::S7_class() object.

For historical reasons,expect_s3_class() andexpect_s4_class() alsotakeNA to assert that theobject is not an S3 or S4 object.

exact

IfFALSE, the default, checks thatobject inheritsfrom any element ofclass. IfTRUE, checks that object has a classthat exactly matchesclass.

See Also

Other expectations:comparison-expectations,equality-expectations,expect_error(),expect_length(),expect_match(),expect_named(),expect_null(),expect_output(),expect_reference(),expect_silent(),logical-expectations

Examples

x <- data.frame(x = 1:10, y = "x", stringsAsFactors = TRUE)
# A data frame is an S3 object with class data.frame
expect_s3_class(x, "data.frame")
show_failure(expect_s4_class(x, "data.frame"))
# A data frame is built from a list:
expect_type(x, "list")

f <- factor(c("a", "b", "c"))
o <- ordered(f)
# Using multiple class names tests if the object inherits from any of them
expect_s3_class(f, c("ordered", "factor"))
# Use exact = TRUE to test for exact match
show_failure(expect_s3_class(f, c("ordered", "factor"), exact = TRUE))
expect_s3_class(o, c("ordered", "factor"), exact = TRUE)

# An integer vector is an atomic vector of type "integer"
expect_type(x$x, "integer")
# It is not an S3 object
show_failure(expect_s3_class(x$x, "integer"))

# Above, we requested data.frame() converts strings to factors:
show_failure(expect_type(x$y, "character"))
expect_s3_class(x$y, "factor")
expect_type(x$y, "integer")

Is an error informative?

Description

[Deprecated]

is_informative_error() is a generic predicate that indicateswhether testthat users should explicitly test for an errorclass. Since we no longer recommend you do that, this generichas been deprecated.

Usage

is_informative_error(x, ...)

Arguments

x

An error object.

...

These dots are for future extensions and must be empty.

Details

A few classes are hard-coded as uninformative:


Determine testing status

Description

These functions help you determine if your code is running in a particular testing context:

A common use of these functions is to compute a default value for a quiet argument with is_testing() && !is_snapshot(). In this case, you'll want to avoid a run-time dependency on testthat, in which case you should just copy the implementation of these functions into a utils.R or similar.

Usage

is_testing()

is_parallel()

is_checking()

is_snapshot()

testing_package()
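A minimal sketch of the copy-and-paste approach described above. The one-line definition mirrors the TESTTHAT environment variable that testthat sets (verify against the testthat source if you copy it), and my_verbose_function() is hypothetical:

# In R/utils.R
is_testing <- function() {
  identical(Sys.getenv("TESTTHAT"), "true")
}

# Use it to pick a quiet default without a run-time dependency on testthat
my_verbose_function <- function(x, quiet = is_testing()) {
  if (!quiet) message("Working on ", x)
  invisible(x)
}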

Temporarily change the active testthat edition

Description

local_edition() allows you to temporarily (within a single test ora single test file) change the active edition of testthat.edition_get() allows you to retrieve the currently active edition.

Usage

local_edition(x, .env = parent.frame())

edition_get()

Arguments

x

Edition. Should be a single integer.

.env

Environment that controls scope of changes. For expert use only.
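For example, a minimal sketch that runs a single test under the 2nd edition:

test_that("legacy behaviour still works", {
  local_edition(2)
  expect_equal(edition_get(), 2)
})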


Temporarily redefine function definitions

Description

with_mocked_bindings() andlocal_mocked_bindings() provide tools for"mocking", temporarily redefining a function so that it behaves differentlyduring tests. This is helpful for testing functions that depend on externalstate (i.e. reading a value from a file or a website, or pretending a packageis or isn't installed).

Learn more invignette("mocking").

Usage

local_mocked_bindings(..., .package = NULL, .env = caller_env())

with_mocked_bindings(code, ..., .package = NULL)

Arguments

...

Name-value pairs providing new values (typically functions) totemporarily replace the named bindings.

.package

The name of the package where mocked functions should beinserted. Generally, you should not supply this as it will be automaticallydetected when whole package tests are run or when there's one packageunder active development (i.e. loaded withpkgload::load_all()).We don't recommend using this to mock functions in other packages,as you should not modify namespaces that you don't own.

.env

Environment that defines effect scope. For expert use only.

code

Code to execute with specified bindings.

Use

There are four places that the function you are trying to mock mightcome from:

They are described in turn below.

(To mock S3 & S4 methods and R6 classes seelocal_mocked_s3_method(),local_mocked_s4_method(), andlocal_mocked_r6_class().)

Internal & imported functions

You mock internal and imported functions the same way. For example, takethis code:

some_function <- function() {
  another_function()
}

It doesn't matter whetheranother_function() is defined by your packageor you've imported it from a dependency with⁠@import⁠ or⁠@importFrom⁠,you mock it the same way:

local_mocked_bindings(
  another_function = function(...) "new_value"
)

Base functions

To mock a function in the base package, you need to make sure that youhave a binding for this function in your package. It's easiest to do thisby binding the value toNULL. For example, if you wanted to mockinteractive() in your package, you'd need to include this code somewherein your package:

interactive <- NULL

Why is this necessary?with_mocked_bindings() andlocal_mocked_bindings()work by temporarily modifying the bindings within your package's namespace.When these tests are running inside of⁠R CMD check⁠ the namespace is lockedwhich means it's not possible to create new bindings so you need to make surethat the binding exists already.

Namespaced calls

It's trickier to mock functions in other packages that you call with::.For example, take this minor variation:

some_function <- function() {
  anotherpackage::another_function()
}

To mock this function, you'd need to modifyanother_function() inside theanotherpackage package. Youcan do this by supplying the.packageargument tolocal_mocked_bindings() but we don't recommend it becauseit will affect all calls toanotherpackage::another_function(), not justthe calls originating in your package. Instead, it's safer to either importthe function into your package, or make a wrapper that you can mock:

some_function <- function() {
  my_wrapper()
}
my_wrapper <- function(...) {
  anotherpackage::another_function(...)
}

local_mocked_bindings(
  my_wrapper = function(...) "new_value"
)

Multiple return values / sequence of outputs

To mock a function that returns different values in sequence, for instance an API call whose status would be 502 then 200, or a user input to readline(), you can use mock_output_sequence().

local_mocked_bindings(readline = mock_output_sequence("3", "This is a note", "n"))

See Also

Other mocking:mock_output_sequence()


Mock an R6 class

Description

This function allows you to temporarily override an R6 class definition.It works by creating a subclass then usinglocal_mocked_bindings() totemporarily replace the original definition. This means that it will notaffect subclasses of the original class; please file an issue if you needthis.

Learn more about mocking invignette("mocking").

Usage

local_mocked_r6_class(
  class,
  public = list(),
  private = list(),
  frame = caller_env()
)

Arguments

class

An R6 class definition.

public,private

A named list of public and private methods/data.

frame

Calling frame which determines the scope of the mock.Only needed when wrapping in another local helper.


Mock S3 and S4 methods

Description

These functions allow you to temporarily override S3 and S4 methods thatalready exist. It works by usingregisterS3method()/setMethod() totemporarily replace the original definition.

Learn more about mocking invignette("mocking").

Usage

local_mocked_s3_method(generic, signature, definition, frame = caller_env())

local_mocked_s4_method(generic, signature, definition, frame = caller_env())

Arguments

generic

A string giving the name of the generic.

signature

A character vector giving the signature of the method.

definition

A function providing the method definition.

frame

Calling frame which determines the scope of the mock.Only needed when wrapping in another local helper.

Examples

x <- as.POSIXlt(Sys.time())

local({
  local_mocked_s3_method("length", "POSIXlt", function(x) 42)
  length(x)
})

length(x)

Instantiate local snapshotting context

Description

Needed if you want to run snapshot tests outside of the usual testthat framework. For expert use only.

Usage

local_snapshotter(
  reporter = SnapshotReporter,
  snap_dir = "_snaps",
  cleanup = FALSE,
  desc = NULL,
  fail_on_new = NULL,
  frame = caller_env()
)

Temporarily set options for maximum reproducibility

Description

local_test_context() is run automatically bytest_that() but you maywant to run it yourself if you want to replicate test results interactively.If run inside a function, the effects are automatically reversed when thefunction exits; if running in the global environment, usewithr::deferred_run() to undo.

local_reproducible_output() is run automatically by test_that() in the 3rd edition. You might want to call it to override the default settings inside a test, if you want to test Unicode, coloured output, or a non-standard width.

Usage

local_test_context(.env = parent.frame())

local_reproducible_output(
  width = 80,
  crayon = FALSE,
  unicode = FALSE,
  rstudio = FALSE,
  hyperlinks = FALSE,
  lang = "C",
  .env = parent.frame()
)

Arguments

.env

Environment to use for scoping; expert use only.

width

Value of the"width" option.

crayon

Determines whether or not crayon (now cli) colourshould be applied.

unicode

Value of the"cli.unicode" option.The test is skipped ifl10n_info()$`UTF-8` isFALSE.

rstudio

Should we pretend that we're inside of RStudio?

hyperlinks

Should we use ANSI hyperlinks.

lang

Optionally, supply a BCP47 language code to set the languageused for translating error messages. This is a lower case two letterISO 639 country code,optionally followed by "_" or "-" and an upper case two letterISO 3166 region code.

Details

local_test_context() setsTESTTHAT = "true", which ensures thatis_testing() returnsTRUE and allows code to tell if it is run bytestthat.

In the third edition, local_test_context() also calls local_reproducible_output(), which temporarily sets the following options:

And modifies the following env vars:

Finally, it sets the collation locale to "C", which ensures that character sorting is the same regardless of the system locale.

Examples

local({
  local_test_context()
  cat(cli::col_blue("Text will not be colored"))
  cat(cli::symbol$ellipsis)
  cat("\n")
})

test_that("test ellipsis", {
  local_reproducible_output(unicode = FALSE)
  expect_equal(cli::symbol$ellipsis, "...")

  local_reproducible_output(unicode = TRUE)
  expect_equal(cli::symbol$ellipsis, "\u2026")
})

Locally set test directory options

Description

For expert use only.

Usage

local_test_directory(path, package = NULL, .env = parent.frame())

Arguments

path

Path to directory of files

package

Optional package name, if known.


Do you expectTRUE orFALSE?

Description

These are fall-back expectations that you can use when none of the othermore specific expectations apply. The disadvantage is that you may geta less informative error message.

Attributes are ignored.

Usage

expect_true(object, info = NULL, label = NULL)

expect_false(object, info = NULL, label = NULL)

Arguments

object

Object to test.

Supports limited unquoting to make it easier to generate readable failureswithin a function or for loop. Seequasi_label for more details.

info

Extra information to be included in the message. This argumentis soft-deprecated and should not be used in new code. Instead seealternatives inquasi_label.

label

Used to customise failure messages. For expert use only.

See Also

Other expectations:comparison-expectations,equality-expectations,expect_error(),expect_length(),expect_match(),expect_named(),expect_null(),expect_output(),expect_reference(),expect_silent(),inheritance-expectations

Examples

expect_true(2 == 2)

# Failed expectations will throw an error
show_failure(expect_true(2 != 2))

# where possible, use more specific expectations, to get more informative
# error messages
a <- 1:4
show_failure(expect_true(length(a) == 3))
show_failure(expect_equal(length(a), 3))

x <- c(TRUE, TRUE, FALSE, TRUE)
show_failure(expect_true(all(x)))
show_failure(expect_all_true(x))

Make an equality test.

Description

This is a convenience function to make an expectation that checks that input stays the same.

Usage

make_expectation(x, expectation = "equals")

Arguments

x

a vector of values

expectation

the type of equality you want to test for("equals","is_equivalent_to","is_identical_to")

Examples

x <- 1:10
make_expectation(x)

make_expectation(mtcars$mpg)

df <- data.frame(x = 2)
make_expectation(df)

Mock a sequence of output from a function

Description

Specify multiple return values for mocking

Usage

mock_output_sequence(..., recycle = FALSE)

Arguments

...

<dynamic-dots> Values to return in sequence.

recycle

whether to recycle. IfTRUE, once all values have been returned,they will be returned again in sequence.

Value

A function that you can use withinlocal_mocked_bindings() andwith_mocked_bindings()

See Also

Other mocking:local_mocked_bindings()

Examples

# inside local_mocked_bindings()
## Not run: local_mocked_bindings(readline = mock_output_sequence("3", "This is a note", "n"))
## End(Not run)

# for understanding
mocked_sequence <- mock_output_sequence("3", "This is a note", "n")
mocked_sequence()
mocked_sequence()
mocked_sequence()
try(mocked_sequence())

recycled_mocked_sequence <- mock_output_sequence(
  "3", "This is a note", "n",
  recycle = TRUE
)
recycled_mocked_sequence()
recycled_mocked_sequence()
recycled_mocked_sequence()
recycled_mocked_sequence()

Negate an expectation

Description

This negates an expectation, making it possible to express that youwant the opposite of a standard expectation. This function is deprecatedand will be removed in a future version.

Usage

not(f)

Arguments

f

an existing expectation function


Old-style expectations.

Description

[Superseded]

Initial testthat used a style of testing that looked like⁠expect_that(a, equals(b)))⁠ this allowed expectations to read likeEnglish sentences, but was verbose and a bit too cutesy. This stylewill continue to work but has been soft-deprecated - it is no longerdocumented, and new expectations will only use the new styleexpect_equal(a, b).

Usage

is_a(class)

has_names(expected, ignore.order = FALSE, ignore.case = FALSE)

is_less_than(expected, label = NULL, ...)

is_more_than(expected, label = NULL, ...)

equals(expected, label = NULL, ...)

is_equivalent_to(expected, label = NULL)

is_identical_to(expected, label = NULL)

equals_reference(file, label = NULL, ...)

shows_message(regexp = NULL, all = FALSE, ...)

gives_warning(regexp = NULL, all = FALSE, ...)

prints_text(regexp = NULL, ...)

throws_error(regexp = NULL, ...)

Quasi-labelling

Description

The first argument to everyexpect_ function can use unquoting toconstruct better labels. This makes it easy to create informative labels whenexpectations are used inside a function or a for loop.quasi_label() wrapsup the details, returning the expression and label.

Usage

quasi_label(quo, label = NULL, arg = NULL)

Arguments

quo

A quosure created byrlang::enquo().

label

An optional label to override the default. This isonly provided for internal usage. Modern expectations should notinclude alabel parameter.

arg

Argument name shown in error message ifquo is missing.

Value

A list containing two elements:

val

The evaluated value of quo

lab

The quasiquoted label generated fromquo

Limitations

Because all expect_ functions use unquoting to generate more informative labels, you cannot use unquoting for other purposes. Instead, you'll need to perform all other unquoting outside of the expectation and only test the results.

Examples

f <- function(i) if (i > 3) i * 9 else i * 10
i <- 10

# This sort of expression commonly occurs inside a for loop or function
# And the failure isn't helpful because you can't see the value of i
# that caused the problem:
show_failure(expect_equal(f(i), i * 10))

# To overcome this issue, testthat allows you to unquote expressions using
# !!. This causes the failure message to show the value rather than the
# variable name
show_failure(expect_equal(f(!!i), !!(i * 10)))

Objects exported from other packages

Description

These objects are imported from other packages. Follow the linksbelow to see their documentation.

magrittr

%>%


Get and set active reporter.

Description

get_reporter() andset_reporter() access and modify the current "active"reporter. Generally, these functions should not be called directly; insteadusewith_reporter() to temporarily change, then reset, the active reporter.

Usage

set_reporter(reporter)

get_reporter()

with_reporter(reporter, code, start_end_reporter = TRUE)

Arguments

reporter

Reporter to use to summarise output. Can be suppliedas a string (e.g. "summary") or as an R6 object(e.g.SummaryReporter$new()).

SeeReporter for more details and a list of built-in reporters.

code

Code to execute.

start_end_reporter

Should the reporter's start_reporter() and end_reporter() methods be called? For expert use only.

Value

with_reporter() invisibly returns the reporter that was active when code was evaluated.


Set maximum number of test failures allowed before aborting the run

Description

This sets theTESTTHAT_MAX_FAILS env var which will affect both thecurrent R process and any processes launched from it.

Usage

set_max_fails(n)

Arguments

n

Maximum number of failures allowed.
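For example, a minimal sketch:

# Abort the run after 10 failures, here and in any child processes
set_max_fails(10)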


Check for global state changes

Description

One of the most pernicious challenges to debug is when a test runs finein your test suite, but fails when you run it interactively (or similarly,it fails randomly when running your tests in parallel). One of the mostcommon causes of this problem is accidentally changing global state in aprevious test (e.g. changing an option, an environment variable, or theworking directory). This is hard to debug, because it's very hard to figureout which test made the change.

Luckily testthat provides a tool to figure out if tests are changing globalstate. You can register a state inspector withset_state_inspector() andtestthat will run it before and after each test, store the results, thenreport if there are any differences. For example, if you wanted to see ifany of your tests were changing options or environment variables, you couldput this code intests/testthat/helper-state.R:

set_state_inspector(function() {
  list(
    options = options(),
    envvars = Sys.getenv()
  )
})

(You might discover other packages outside your control are changingthe global state, in which case you might want to modify this functionto ignore those values.)

Other problems that can be troublesome to resolve are CRAN check notes thatreport things like connections being left open. You can easily debugthat problem with:

set_state_inspector(function() {
  getAllConnections()
})

Usage

set_state_inspector(callback, tolerance = testthat_tolerance())

Arguments

callback

Either a zero-argument function that returns an objectcapturing global state that you're interested in, orNULL.

tolerance

If non-NULL, used as threshold for ignoring smallfloating point difference when comparing numeric vectors. Using anynon-NULL value will cause integer and double vectors to be comparedbased on their values, not their types, and will ignore the differencebetweenNaN andNA_real_.

It uses the same algorithm as all.equal(): first we generate x_diff and y_diff by subsetting x and y to look only at the locations with differences. Then we check that mean(abs(x_diff - y_diff)) / mean(abs(y_diff)) (or just mean(abs(x_diff - y_diff)) if y_diff is small) is less than tolerance.


Simulate a test environment

Description

This function is designed to allow you to simulate testthat's testing environment in an interactive session. To undo its effect, you will need to restart your R session.

Usage

simulate_test_env(package, path)

Arguments

package

Name of installed package.

path

Path totests/testthat.


Skip a test for various reasons

Description

skip_if() andskip_if_not() allow you to skip tests, immediatelyconcluding atest_that() block without executing any further expectations.This allows you to skip a test without failure, if for some reason itcan't be run (e.g. it depends on the feature of a specific operating system,or it requires a specific version of a package).

Seevignette("skipping") for more details.

Usage

skip(message = "Skipping")skip_if_not(condition, message = NULL)skip_if(condition, message = NULL)skip_if_not_installed(pkg, minimum_version = NULL)skip_unless_r(spec)skip_if_offline(host = "captive.apple.com")skip_on_cran()local_on_cran(on_cran = TRUE, frame = caller_env())skip_on_os(os, arch = NULL)skip_on_ci()skip_on_covr()skip_on_bioc()skip_if_translated(msgid = "'%s' not found")

Arguments

message

A message describing why the test was skipped.

condition

Boolean condition to check.skip_if_not() will skip ifFALSE,skip_if() will skip ifTRUE.

pkg

Name of package to check for

minimum_version

Minimum required version for the package

spec

A version specification like '>= 4.1.0' denoting that this testshould only be run on R versions 4.1.0 and later.

host

A string with a hostname to lookup

on_cran

Pretend we're on CRAN (TRUE) or not (FALSE).

frame

Calling frame to tie the change to; expert use only.

os

Character vector of one or more operating systems to skip on.Supported values are"windows","mac","linux","solaris",and"emscripten".

arch

Character vector of one or more architectures to skip on.Common values include"i386" (32 bit),"x86_64" (64 bit), and"aarch64" (M1 mac). Supplyingarch makes the test stricter; i.e. bothos andarch must match in order for the test to be skipped.

msgid

R message identifier used to check for translation: the defaultuses a message included in most translation packs. See the complete list inR-base.pot.

Helpers

Examples

if (FALSE) skip("Some Important Requirement is not available")

test_that("skip example", {
  expect_equal(1, 1L)    # this expectation runs
  skip('skip')
  expect_equal(1, 2)     # this one skipped
  expect_equal(1, 3)     # this one is also skipped
})

Superseded skip functions

Description

[Superseded]

Usage

skip_on_travis()

skip_on_appveyor()

Accept or reject modified snapshots

Description

Usage

snapshot_accept(files = NULL, path = "tests/testthat")

snapshot_reject(files = NULL, path = "tests/testthat")

snapshot_review(files = NULL, path = "tests/testthat", ...)

Arguments

files

Optionally, filter effects to snapshots from specified files.This can be a snapshot name (e.g.foo orfoo.md), a snapshot file name(e.g.testfile/foo.txt), or a snapshot file directory (e.g.⁠testfile/⁠).

path

Path to tests.

...

Additional arguments passed on toshiny::runApp().
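For example, a minimal sketch run from the package root:

## Not run: 
# Accept all changed snapshots belonging to test-foo.R
snapshot_accept("foo")

# Or open the interactive diff viewer to review them one by one
snapshot_review("foo")
## End(Not run)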


Download snapshots from GitHub

Description

If your snapshots fail on GitHub, it can be a pain to figure out exactlywhy, or to incorporate them into your local package. This function makes iteasy, only requiring you to interactively select which job you want totake the artifacts from.

Note that you should not generally need to use this function manually;instead copy and paste from the hint emitted on GitHub.

Usage

snapshot_download_gh(repository, run_id, dest_dir = ".")

Arguments

repository

Repository owner/name, e.g."r-lib/testthat".

run_id

Run ID, e.g."47905180716". You can find this in the action url.

dest_dir

Directory to download to. Defaults to the current directory.


Source a file, directory of files, or various important subsets

Description

These are used bytest_dir() and friends

Usage

source_file(
  path,
  env = test_env(),
  chdir = TRUE,
  desc = NULL,
  wrap = TRUE,
  shuffle = FALSE,
  error_call = caller_env()
)

source_dir(
  path,
  pattern = "\\.[rR]$",
  env = test_env(),
  chdir = TRUE,
  wrap = TRUE,
  shuffle = FALSE
)

source_test_helpers(path = "tests/testthat", env = test_env())

source_test_setup(path = "tests/testthat", env = test_env())

source_test_teardown(path = "tests/testthat", env = test_env())

Arguments

path

Path to files.

env

Environment in which to evaluate code.

chdir

Change working directory todirname(path)?

desc

A character vector used to filter tests. This is used to(recursively) filter the content of the file, so that only the non-testcode up to and including the matching test is run.

wrap

Automatically wrap all code withintest_that()? This ensuresthat all expectations are reported, even if outside a test block.

shuffle

IfTRUE, randomly reorder the top-level expressionsin the file.

pattern

Regular expression used to filter files.


Mark a test as successful

Description

This is an older version ofpass() that exists for backwards compatibility.You should now usepass() instead.

Usage

succeed(message = "Success has been forced", info = NULL)

Arguments

message

A character vector describing the failure. The first element should describe the expected value, and the second (and optionally subsequent) elements should describe what was actually seen.

info

Character vector containing additional information. Included for backward compatibility only; new expectations should not use it.


Does code take less than the expected amount of time to run?

Description

This is useful for performance regression testing.

Usage

takes_less_than(amount)

Arguments

amount

maximum duration in seconds


Run code before/after tests

Description

[Superseded]

We no longer recommend usingsetup() andteardown(); insteadwe think it's better practice to use atest fixture as described invignette("test-fixtures").

Code in asetup() block is run immediately in a clean environment.Code in ateardown() block is run upon completion of a test file,even if it exits with an error. Multiple calls toteardown() will beexecuted in the order they were created.

Usage

teardown(code, env = parent.frame())

setup(code, env = parent.frame())

Arguments

code

Code to evaluate

env

Environment in which code will be evaluated. For expertuse only.

Examples

## Not run: 
# Old approach
tmp <- tempfile()
setup(writeLines("some test data", tmp))
teardown(unlink(tmp))
## End(Not run)

# Now recommended:
local_test_data <- function(env = parent.frame()) {
  tmp <- tempfile()
  writeLines("some test data", tmp)
  withr::defer(unlink(tmp), env)
  tmp
}
# Then call local_test_data() in your tests

Run code after all test files

Description

This environment has no purpose other than as a handle forwithr::defer():use it when you want to run code after all tests have been run.Typically, you'll usewithr::defer(cleanup(), teardown_env())immediately after you've made a mess in a⁠setup-*.R⁠ file.

Usage

teardown_env()
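For example, a minimal sketch of a setup file (the file name and contents are hypothetical):

# tests/testthat/setup-tempfile.R
tmp <- tempfile()
writeLines("scratch data", tmp)
withr::defer(unlink(tmp), teardown_env())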

Run all tests in a directory

Description

This function is the low-level workhorse that powers test_local() and test_package(). Generally, you should not call this function directly. In particular, you are responsible for ensuring that the functions to test are available in the test env (e.g. via load_package).

Seevignette("special-files") to learn more about the conventions for test,helper, and setup files that testthat uses, and what you might use each for.

Usage

test_dir(
  path,
  filter = NULL,
  reporter = NULL,
  env = NULL,
  ...,
  load_helpers = TRUE,
  stop_on_failure = TRUE,
  stop_on_warning = FALSE,
  package = NULL,
  load_package = c("none", "installed", "source"),
  shuffle = FALSE
)

Arguments

path

Path to directory containing tests.

filter

If notNULL, only tests with file names matching thisregular expression will be executed. Matching is performed on the filename after it's stripped of"test-" and".R".

reporter

Reporter to use to summarise output. Can be suppliedas a string (e.g. "summary") or as an R6 object(e.g.SummaryReporter$new()).

SeeReporter for more details and a list of built-in reporters.

env

Environment in which to execute the tests. Expert use only.

...

Additional arguments passed togrepl() to control filtering.

load_helpers

Source helper files before running the tests?

stop_on_failure

IfTRUE, throw an error if any tests fail.

stop_on_warning

IfTRUE, throw an error if any tests generatewarnings.

package

If these tests belong to a package, the name of the package.

load_package

Strategy to use for load package code:

  • "none", the default, doesn't load the package.

  • "installed", useslibrary() to load an installed package.

  • "source", usespkgload::load_all() to a source package.To configure the arguments passed toload_all(), add thisfield in your DESCRIPTION file:

    Config/testthat/load-all: list(export_all = FALSE, helpers = FALSE)
shuffle

IfTRUE, randomly reorder the top-level expressionsin the file.

Value

A list (invisibly) containing data about the test results.

Environments

Each test is run in a clean environment to keep tests as isolated aspossible. For package tests, that environment inherits from the package'snamespace environment, so that tests can access internal functionsand objects.


Generate default testing environment.

Description

We use a new environment which inherits fromglobalenv() or a packagenamespace. In an ideal world, we'd avoid putting the global environment onthe search path for tests, but it's not currently possible without losingthe ability to load packages in tests.

Usage

test_env(package = NULL)

Test package examples

Description

These helper functions make it easier to test the examples in a package.Each example counts as one test, and it succeeds if the code runs withoutan error. Generally, this is redundant with R CMD check, and is notrecommended in routine practice.

Usage

test_examples(path = "../..")

test_rd(rd, title = attr(rd, "Rdfile"))

test_example(path, title = path)

Arguments

path

Fortest_examples(), path to directory containing Rd files.Fortest_example(), path to a single Rd file. Remember the workingdirectory for tests istests/testthat.

rd

A parsed Rd object, obtained fromtools::Rd_db() or otherwise.

title

Test title to use


Run tests in a single file

Description

Helper, setup, and teardown files located in the same directory as thetest will also be run. Seevignette("special-files") for details.

Usage

test_file(
  path,
  reporter = default_compact_reporter(),
  desc = NULL,
  package = NULL,
  shuffle = FALSE,
  ...
)

Arguments

path

Path to file.

reporter

Reporter to use to summarise output. Can be suppliedas a string (e.g. "summary") or as an R6 object(e.g.SummaryReporter$new()).

SeeReporter for more details and a list of built-in reporters.

desc

Optionally, supply a string here to run only a singletest (test_that() ordescribe()) with thisdescription.

package

If these tests belong to a package, the name of the package.

shuffle

IfTRUE, randomly reorder the top-level expressionsin the file.

...

Additional parameters passed on totest_dir()

Value

A list (invisibly) containing data about the test results.

Environments

Each test is run in a clean environment to keep tests as isolated aspossible. For package tests, that environment inherits from the package'snamespace environment, so that tests can access internal functionsand objects.

Examples

path <- testthat_example("success")
test_file(path)
test_file(path, desc = "some tests have warnings")
test_file(path, reporter = "minimal")

Run all tests in a package

Description

Seevignette("special-files") to learn about the various files thattestthat works with.

Usage

test_package(package, reporter = check_reporter(), ...)

test_check(package, reporter = check_reporter(), ...)

test_local(
  path = ".",
  reporter = NULL,
  ...,
  load_package = "source",
  shuffle = FALSE
)

Arguments

package

If these tests belong to a package, the name of the package.

reporter

Reporter to use to summarise output. Can be suppliedas a string (e.g. "summary") or as an R6 object(e.g.SummaryReporter$new()).

SeeReporter for more details and a list of built-in reporters.

...

Additional arguments passed totest_dir()

path

Path to directory containing tests.

load_package

Strategy to use for load package code:

  • "none", the default, doesn't load the package.

  • "installed", useslibrary() to load an installed package.

  • "source", usespkgload::load_all() to a source package.To configure the arguments passed toload_all(), add thisfield in your DESCRIPTION file:

    Config/testthat/load-all: list(export_all = FALSE, helpers = FALSE)
shuffle

IfTRUE, randomly reorder the top-level expressionsin the file.

Value

A list (invisibly) containing data about the test results.

⁠R CMD check⁠

To run testthat automatically from⁠R CMD check⁠, make sure you haveatests/testthat.R that contains:

library(testthat)
library(yourpackage)

test_check("yourpackage")

Environments

Each test is run in a clean environment to keep tests as isolated aspossible. For package tests, that environment inherits from the package'snamespace environment, so that tests can access internal functionsand objects.


Locate a file in the testing directory

Description

Many tests require some external file (e.g. a.csv if you're testing adata import function) but the working directory varies depending on the waythat you're running the test (e.g. interactively, withdevtools::test(),or with⁠R CMD check⁠).test_path() understands these variations andautomatically generates a path relative totests/testthat, regardless ofwhere that directory might reside relative to the current working directory.

Usage

test_path(...)

Arguments

...

Character vectors giving path components.

Value

A character vector giving the path.

Examples

## Not run: test_path("foo.csv")
test_path("data", "foo.csv")
## End(Not run)

Run a test

Description

A test encapsulates a series of expectations about a small, self-contained unit of functionality. Each test contains one or more expectations, such as expect_equal() or expect_error(), and lives in a tests/testthat/test* file, often together with other tests that relate to the same function or set of functions.

Each test has its own execution environment, so an object created in a testalso dies with the test. Note that this cleanup does not happen automaticallyfor other aspects of global state, such as session options or filesystemchanges. Avoid changing global state, when possible, and reverse any changesthat you do make.

Usage

test_that(desc, code)

Arguments

desc

Test name. Names should be brief, but evocative. It's common towrite the description so that it reads like a natural sentence, e.g.test_that("multiplication works", { ... }).

code

Test code containing expectations. Braces ({}) should alwaysbe used in order to get accurate location data for test failures.

Value

When run interactively, returnsinvisible(TRUE) if all testspass, otherwise throws an error.

Examples

test_that("trigonometric functions match identities", {  expect_equal(sin(pi / 4), 1 / sqrt(2))  expect_equal(cos(pi / 4), 1 / sqrt(2))  expect_equal(tan(pi / 4), 1)})## Not run: test_that("trigonometric functions match identities", {  expect_equal(sin(pi / 4), 1)})## End(Not run)

Retrieve paths to built-in example test files

Description

testthat_examples() retrieves path to directory of test files,testthat_example() retrieves path to a single test file.

Usage

testthat_examples()

testthat_example(filename)

Arguments

filename

Name of test file

Examples

dir(testthat_examples())
testthat_example("success")

Create atestthat_results object from the test resultsas stored in the ListReporter results field.

Description

Create atestthat_results object from the test resultsas stored in the ListReporter results field.

Usage

testthat_results(results)

Arguments

results

a list as stored in ListReporter

Value

its list argument as atestthat_results object

See Also

ListReporter


Default numeric tolerance

Description

testthat's default numeric tolerance is sqrt(.Machine$double.eps), approximately 1.4901161 × 10^-8.

Usage

testthat_tolerance()
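For example, the returned value matches the documented default of sqrt(.Machine$double.eps):

testthat_tolerance()
#> [1] 1.490116e-08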

Evaluate an expectation multiple times until it succeeds

Description

If you have a flaky test, you can use try_again() to run it a few times until it succeeds. In most cases, you are better off fixing the underlying cause of the flakiness, but sometimes that's not possible.

Usage

try_again(times, code)

Arguments

times

Number of times to retry.

code

Code to evaluate.

Examples

usually_return_1 <- function(i) {
  if (runif(1) < 0.1) 0 else 1
}

## Not run: # 10% chance of failure:
expect_equal(usually_return_1(), 1)
# 1% chance of failure:
try_again(1, expect_equal(usually_return_1(), 1))
# 0.1% chance of failure:
try_again(2, expect_equal(usually_return_1(), 1))
## End(Not run)

Use Catch for C++ unit testing

Description

Add the necessary infrastructure to enable C++ unit testinginR packages withCatch andtestthat.

Usage

use_catch(dir = getwd())

Arguments

dir

The directory containing anR package.

Details

Callinguse_catch() will:

  1. Create a filesrc/test-runner.cpp, which ensures that thetestthat package will understand how to run your package'sunit tests,

  2. Create an example test filesrc/test-example.cpp, whichshowcases how you might use Catch to write a unit test,

  3. Add a test filetests/testthat/test-cpp.R, which ensures thattestthat will run your compiled tests during invocations ofdevtools::test() or⁠R CMD check⁠, and

  4. Create a fileR/catch-routine-registration.R, which ensures thatR will automatically register this routine whentools::package_native_routine_registration_skeleton() is invoked.

You will also need to:

C++ unit tests can be added to C++ source files within thesrc directory of your package, with a format similartoR code tested withtestthat. Here's a simple exampleof a unit test written withtestthat + Catch:

context("C++ Unit Test") {  test_that("two plus two is four") {    int result = 2 + 2;    expect_true(result == 4);  }}

When your package is compiled, unit tests alongside a harnessfor running these tests will be compiled into yourR package,with the C entry pointrun_testthat_tests().testthatwill use that entry point to run your unit tests when detected.

Functions

All of the functions provided by Catch are available with the CATCH_ prefix; see the Catch documentation for a full list. testthat provides the following wrappers, to conform with testthat's R interface:

Function         Catch                  Description
context          CATCH_TEST_CASE        The context of a set of tests.
test_that        CATCH_SECTION          A test section.
expect_true      CATCH_CHECK            Test that an expression evaluates to TRUE.
expect_false     CATCH_CHECK_FALSE      Test that an expression evaluates to FALSE.
expect_error     CATCH_CHECK_THROWS     Test that evaluation of an expression throws an exception.
expect_error_as  CATCH_CHECK_THROWS_AS  Test that evaluation of an expression throws an exception of a specific class.

In general, you should prefer using thetestthatwrappers, astestthat also does some work toensure that any unit tests within will not be compiled orrun when using the Solaris Studio compilers (as these arecurrently unsupported by Catch). This should make iteasier to submit packages to CRAN that use Catch.

Symbol Registration

If you've opted to disable dynamic symbol lookup in yourpackage, then you'll need to explicitly export a symbolin your package thattestthat can use to run your unittests.testthat will look for a routine with one of the names:

    C_run_testthat_tests
    c_run_testthat_tests
    run_testthat_tests

Assuming you have⁠useDynLib(<pkg>, .registration = TRUE)⁠ in your package'sNAMESPACE file, this implies having routine registration code of the form:

// The definition for this function comes from the file 'src/test-runner.cpp',
// which is generated via `testthat::use_catch()`.
extern SEXP run_testthat_tests();

static const R_CallMethodDef callMethods[] = {
  // other .Call method definitions,
  {"run_testthat_tests", (DL_FUNC) &run_testthat_tests, 0},
  {NULL, NULL, 0}
};

void R_init_<pkg>(DllInfo* dllInfo) {
  R_registerRoutines(dllInfo, NULL, callMethods, NULL, NULL);
  R_useDynamicSymbols(dllInfo, FALSE);
}

replacing⁠<pkg>⁠ above with the name of your package, as appropriate.

SeeControlling VisibilityandRegistering Symbolsin theWriting R Extensions manual for more information.

Advanced Usage

If you'd like to write your own Catch test runner, you caninstead use thetestthat::catchSession() object in a filewith the form:

#define TESTTHAT_TEST_RUNNER
#include <testthat.h>

void run()
{
    Catch::Session& session = testthat::catchSession();
    // interact with the session object as desired
}

This can be useful if you'd like to run your unit testswith custom arguments passed to the Catch session.

Standalone Usage

If you'd like to use the C++ unit testing facilities providedby Catch, but would prefer not to use the regulartestthatR testing infrastructure, you can manually run the unit testsby inserting a call to:

.Call("run_testthat_tests", PACKAGE = <pkgName>)

as necessary within your unit test suite.

See Also

Catch,the library used to enable C++ unit testing.


Verify output

Description

[Superseded]

This function is superseded in favour ofexpect_snapshot() and friends.

This is a regression test that records interwoven code and output into afile, in a similar way to knitting an.Rmd file (but see caveats below).

verify_output() is designed particularly for testing print methods and errormessages, where the primary goal is to ensure that the output is helpful toa human. Obviously, you can't test that with code, so the best you can do ismake the results explicit by saving them to a text file. This makes the outputeasy to verify in code reviews, and ensures that you don't change the outputby accident.

verify_output() is designed to be used with git: to see what has changedfrom the previous run, you'll need to use⁠git diff⁠ or similar.

Usage

verify_output(
  path,
  code,
  width = 80,
  crayon = FALSE,
  unicode = FALSE,
  env = caller_env()
)

Arguments

path

Path to record results.

This should usually be a call totest_path() in order to ensure thatthe same path is used when run interactively (when the working directoryis typically the project root), and when run as an automated test (whenthe working directory will betests/testthat).

code

Code to execute. This will usually be a multiline expressioncontained within{} (similarly totest_that() calls).

width

Width of console output

crayon

Enable cli/crayon package colouring?

unicode

Enable cli package UTF-8 symbols? If you set this toTRUE, callskip_if(!cli::is_utf8_output()) to disable thetest on your CI platforms that don't support UTF-8 (e.g. Windows).

env

The environment to evaluatecode in.

Syntax

verify_output() can only capture the abstract syntax tree, losing allwhitespace and comments. To mildly offset this limitation:

CRAN

On CRAN,verify_output() will never fail, even if the output changes.This avoids false positives because tests of print methods and errormessages are often fragile due to implicit dependencies on other packages,and failure does not imply incorrect computation, just a change inpresentation.


Watch a directory for changes (additions, deletions & modifications).

Description

This is used to power theauto_test() andauto_test_package() functions which are used to rerun testswhenever source code changes.

Usage

watch(path, callback, pattern = NULL, hash = TRUE)

Arguments

path

character vector of paths to watch. Omit trailing backslash.

callback

function called every time a change occurs. It shouldhave three parameters: added, deleted, modified, and should returnTRUE to keep watching, orFALSE to stop.

pattern

file pattern passed todir()

hash

hashes are more accurate at detecting changes, but are slowerfor large files. WhenFALSE, uses modification time stamps

Details

Use Ctrl + Break (Windows), Esc (Mac GUI), or Ctrl + C (command line) to stop the watcher.
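A minimal sketch of a callback that reports what changed and keeps watching (the "R" path is an assumption for illustration):

report_changes <- function(added, deleted, modified) {
  message(
    "added: ", paste(added, collapse = ", "),
    " | deleted: ", paste(deleted, collapse = ", "),
    " | modified: ", paste(modified, collapse = ", ")
  )
  TRUE  # return FALSE to stop watching
}

## Not run: watch("R", report_changes)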


Mock functions in a package.

Description

[Defunct]

with_mock() and local_mock() are now defunct, and can be replaced by with_mocked_bindings() and local_mocked_bindings(). These functions only worked by abusing R's internals.

Usage

with_mock(..., .env = topenv())

local_mock(..., .env = topenv(), .local_envir = parent.frame())

Arguments

...

named parameters redefine mocked functions, unnamed parameterswill be evaluated after mocking the functions

.env

the environment in which to patch the functions,defaults to the top-level environment. A character is interpreted aspackage name.

.local_envir

Environment in which to add exit handler.For expert use only.

