matplotlib.testing#

Helper functions for testing.

matplotlib.testing.ipython_in_subprocess(requested_backend_or_gui_framework, all_expected_backends)[source]#
matplotlib.testing.is_ci_environment()[source]#
matplotlib.testing.set_font_settings_for_testing()[source]#
matplotlib.testing.set_reproducibility_for_testing()[source]#
matplotlib.testing.setup()[source]#
matplotlib.testing.subprocess_run_for_testing(command, env=None, timeout=60, stdout=None, stderr=None, check=False, text=True, capture_output=False)[source]#

Create and run a subprocess.

Thin wrapper around subprocess.run, intended for testing. Will mark fork() failures on Cygwin as expected failures: not a success, but not indicating a problem with the code either.

Parameters:
args : list of str
env : dict[str, str]
timeout : float
stdout, stderr
check : bool
text : bool

Also called universal_newlines in subprocess. I chose this name since the main effect is returning bytes (False) vs. str (True), though it also tries to normalize newlines across platforms.

capture_output : bool

Set stdout and stderr to subprocess.PIPE.

Returns:
proc : subprocess.Popen
Raises:
pytest.xfail

If platform is Cygwin and subprocess reports a fork() failure.
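A minimal usage sketch: it runs a short Python snippet in a fresh interpreter and captures its output. The MPLBACKEND override and the snippet being run are illustrative assumptions only, not requirements of the helper:

import os
import sys

from matplotlib.testing import subprocess_run_for_testing

# Run a short Python snippet in a subprocess and capture its output.
proc = subprocess_run_for_testing(
    [sys.executable, "-c", "import matplotlib; print(matplotlib.__version__)"],
    env={**os.environ, "MPLBACKEND": "Agg"},  # illustrative environment override
    timeout=60,
    check=True,
    capture_output=True,
)
print(proc.stdout)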

matplotlib.testing.subprocess_run_helper(func, *args, timeout, extra_env=None)[source]#

Run a function in a sub-process.

Parameters:
func : function

The function to be run. It must be in a module that is importable.

*args : str

Any additional command line arguments to be passed in the first argument to subprocess.run.

extra_env : dict[str, str]

Any additional environment variables to be set for the subprocess.
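A sketch of how this might be used in a test file, assuming the target function is defined at module level so that it is importable; the backend check itself is only an illustration:

from matplotlib.testing import subprocess_run_helper


def _print_backend():
    # Executed in the child process.
    import matplotlib
    print(matplotlib.get_backend())


def test_backend_in_subprocess():
    proc = subprocess_run_helper(
        _print_backend,
        timeout=60,
        extra_env={"MPLBACKEND": "Agg"},  # illustrative environment override
    )
    assert "agg" in proc.stdout.lower()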

matplotlib.testing.compare#

Utilities for comparing image results.

matplotlib.testing.compare.calculate_rms(expected_image, actual_image)[source]#

Calculate the per-pixel errors, then compute the root mean square error.
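Conceptually this is equivalent to the following NumPy sketch (a simplified illustration assuming two same-shaped pixel arrays, not the exact implementation):

import numpy as np


def rms_sketch(expected_image, actual_image):
    # Per-pixel differences, then the root mean square over all entries.
    diff = expected_image.astype(float) - actual_image.astype(float)
    return np.sqrt(np.mean(diff ** 2))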

matplotlib.testing.compare.comparable_formats()[source]#

Return the list of file formats that compare_images can compare on this system.

Returns:
list of str

E.g. ['png', 'pdf', 'svg', 'eps'].

matplotlib.testing.compare.compare_images(expected, actual, tol, in_decorator=False)[source]#

Compare two "image" files checking differences within a tolerance.

The two given filenames may point to files which are convertible to PNG via the converter dictionary. The underlying RMS is calculated with the calculate_rms function.

Parameters:
expected : str

The filename of the expected image.

actual : str

The filename of the actual image.

tol : float

The tolerance (a color value difference, where 255 is the maximal difference). The test fails if the average pixel difference is greater than this value.

in_decorator : bool

Determines the output format. If called from image_comparison decorator, this should be True. (default=False)

Returns:
None or dict or str

Return None if the images are equal within the given tolerance.

If the images differ, the return value depends on in_decorator. If in_decorator is true, a dict with the following entries is returned:

  • rms: The RMS of the image difference.

  • expected: The filename of the expected image.

  • actual: The filename of the actual image.

  • diff_image: The filename of the difference image.

  • tol: The comparison tolerance.

Otherwise, a human-readable multi-line string representation of this information is returned.

Examples

img1 = "./baseline/plot.png"
img2 = "./output/plot.png"
compare_images(img1, img2, 0.001)
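Continuing that example, the return value could be checked as follows (a sketch; with in_decorator=True the dict form described above is returned):

result = compare_images(img1, img2, 0.001, in_decorator=True)
if result is not None:
    # result holds the 'rms', 'expected', 'actual', 'diff_image' and 'tol' entries.
    raise AssertionError(
        f"Images differ: RMS {result['rms']} exceeds tolerance {result['tol']}")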

matplotlib.testing.decorators#

matplotlib.testing.decorators.check_figures_equal(*, extensions=('png',), tol=0)[source]#

Decorator for test cases that generate and compare two figures.

The decorated function must take two keyword arguments, fig_test and fig_ref, and draw the test and reference images on them. After the function returns, the figures are saved and compared.

This decorator should be preferred over image_comparison when possible in order to keep the size of the test suite from ballooning.

Parameters:
extensions : list, default: ["png"]

The extensions to test. Supported extensions are "png", "pdf", "svg".

Testing with the one default extension is sufficient if the output is not format dependent, e.g. if you test that a bar() plot yields the same result as some manually placed Rectangles. You should use all extensions if a renderer property is involved, e.g. correct alpha blending.

tol : float

The RMS threshold above which the test is considered failed.

Raises:
RuntimeError

If any new figures are created (and not subsequently closed) inside the test function.

Examples

Check that calling Axes.plot with a single argument plots it against [0, 1, 2, ...]:

@check_figures_equal()
def test_plot(fig_test, fig_ref):
    fig_test.subplots().plot([1, 3, 5])
    fig_ref.subplots().plot([0, 1, 2], [1, 3, 5])

matplotlib.testing.decorators.image_comparison(baseline_images, extensions=None, tol=0, freetype_version=None, remove_text=False, savefig_kwarg=None, style=('classic', '_classic_test_patch'))[source]#

Compare images generated by the test with those specified in baseline_images, which must correspond, else an ImageComparisonFailure exception will be raised.

Parameters:
baseline_images : list or None

A list of strings specifying the names of the images generated by calls to Figure.savefig.

If None, the test function must use the baseline_images fixture, either as a parameter or with pytest.mark.usefixtures. This value is only allowed when using pytest.

extensions : None or list of str

The list of extensions to test, e.g. ['png', 'pdf'].

If None, defaults to: png, pdf, and svg.

When testing a single extension, it can be directly included in the names passed to baseline_images. In that case, extensions must not be set.

In order to keep the size of the test suite from ballooning, we only include the svg or pdf outputs if the test is explicitly exercising a feature dependent on that backend (see also the check_figures_equal decorator for that purpose).

tol : float, default: 0

The RMS threshold above which the test is considered failed.

Due to expected small differences in floating-point calculations, on 32-bit systems an additional 0.06 is added to this threshold.

freetype_version : str or tuple

The expected freetype version or range of versions for this test to pass.

remove_text : bool

Remove the title and tick text from the figure before comparison. This is useful to make the baseline images independent of variations in text rendering between different versions of FreeType.

This does not remove other, more deliberate, text, such as legends and annotations.

savefig_kwarg : dict

Optional arguments that are passed to the savefig method.

style : str, dict, or list

The optional style(s) to apply to the image test. The test itself can also apply additional styles if desired. Defaults to ["classic", "_classic_test_patch"].
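A sketch of typical usage, assuming a baseline image named sine_wave.png has previously been generated and stored with the test suite's baseline images:

import matplotlib.pyplot as plt
import numpy as np

from matplotlib.testing.decorators import image_comparison


@image_comparison(baseline_images=["sine_wave"], extensions=["png"],
                  remove_text=True)
def test_sine_wave():
    # The figure saved by the decorator is compared against "sine_wave.png".
    fig, ax = plt.subplots()
    x = np.linspace(0, 2 * np.pi, 100)
    ax.plot(x, np.sin(x))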

matplotlib.testing.decorators.remove_ticks_and_titles(figure)[source]#

matplotlib.testing.exceptions#

exception matplotlib.testing.exceptions.ImageComparisonFailure[source]#

Bases: AssertionError

Raise this exception to mark a test as a comparison between two images.
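For instance, a custom comparison helper might raise it like this (a hypothetical helper shown only to illustrate the intended use):

from matplotlib.testing.compare import compare_images
from matplotlib.testing.exceptions import ImageComparisonFailure


def assert_images_match(expected, actual, tol=0.001):
    # Hypothetical helper: fail with an image-comparison error on mismatch.
    err = compare_images(expected, actual, tol)
    if err is not None:
        raise ImageComparisonFailure(err)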

Testing with optional dependencies#

For more information on fixtures, see pytest fixtures.

matplotlib.testing.conftest.pd()#

Fixture to import and configure pandas. Using this fixture, the test is skipped when pandas is not installed. Use this fixture instead of importing pandas in test files.

Examples

Request the pandas fixture by passing in pd as an argument to the test

def test_matshow_pandas(pd):
    df = pd.DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]})
    im = plt.figure().subplots().matshow(df)
    np.testing.assert_array_equal(im.get_array(), df)
matplotlib.testing.conftest.xr()#

Fixture to import xarray so that the test is skipped when xarray is not installed. Use this fixture instead of importing xarray in test files.

Examples

Request the xarray fixture by passing in xr as an argument to the test

def test_imshow_xarray(xr):
    ds = xr.DataArray(np.random.randn(2, 3))
    im = plt.figure().subplots().imshow(ds)
    np.testing.assert_array_equal(im.get_array(), ds)