Fix Numpy 2.0 related test failures #27657


Merged
dstansby merged 1 commit into matplotlib:main from ksunden:np_20_tests
Jan 17, 2024

Conversation

ksunden
Member

PR summary

Closes #27645 (technically a combination of this and #27624, but the comments I've left there have all been about this part).

NumPy made tweaks to dtype promotions that affected some computations, but only at the limits of floating point precision (a minimal illustration follows the list below).

This PR counteracts these changes:

  • pie image tests have a tolerance introduced
    • could perhaps tweak some of the exact values down a bit, but none were more than 0.01 RMS
    • All differences were imperceptible to a human, and some tolerance would be required regardless, so I did not replace the images
    • Technically the old results could be recovered by setting the env var NPY_PROMOTION_STATE=legacy, but that opts out of intended NumPy 2.0 behavior
  • pylab updated to ensure builtins are used for two more functions now included in numpy's namespace that occlude builtins (a sketch follows the list)
    • pow and bool
  • Assertions in test_scalarmappable_to_rgba changed to np.testing.assert_almost_equal (a short example follows the list)
    • Since some values are close to 0, assert_almost_equal_nulp and similar options proved insufficient.
    • Could use assert_allclose instead, but would have to specify tolerances.
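
As a minimal illustration of the promotion change (not code from this PR; the array values are made up): under NumPy 1.x value-based promotion a NumPy scalar could be demoted to match an array's dtype, while under NEP 50 the scalar's dtype always participates, so float32 results can silently become float64.

```python
import numpy as np

arr = np.ones(3, dtype=np.float32)

# Python scalars stay "weak" under NEP 50, so this is float32 on both 1.x and 2.0.
print((arr + 2.0).dtype)              # float32

# NumPy scalars are "strong" under NEP 50: NumPy 1.x demoted the scalar by
# value and kept float32, NumPy 2.0 promotes to float64.
print((arr + np.float64(2.0)).dtype)  # float32 on 1.x, float64 on 2.0
print((arr - np.int64(0)).dtype)      # likewise float32 -> float64
```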
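For the pylab change, a hedged sketch of the kind of rebinding involved (the exact surrounding code in lib/matplotlib/pylab.py may differ; pow and bool are the names this PR adds):

```python
# pylab star-imports numpy, so names that NumPy 2.0 now also exports would
# otherwise shadow the Python builtins.  Rebinding them afterwards keeps
# `bool` and `pow` pointing at the builtins.
from numpy import *  # noqa: F401, F403

bool = __import__("builtins").bool
pow = __import__("builtins").pow
```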
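And a small illustrative example (values made up, not from the test) of why ULP-based assertions are awkward when expected values sit near zero:

```python
import numpy as np

# A ~5e-10 absolute difference is negligible for assert_almost_equal
# (default decimal=7) but corresponds to a huge number of ULPs at 1e-9.
a, b = np.float64(1e-9), np.float64(1.5e-9)
np.testing.assert_almost_equal(a, b)                 # passes
try:
    np.testing.assert_array_almost_equal_nulp(a, b)  # default nulp=1
except AssertionError:
    print("nulp-based check fails despite near-identical values")
```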

PR checklist

@tacaswell
Member

The second xref is to the pytest 8 PR; I do not understand how it is related.

@tacaswell
Member

oh, I understand, this + #27624 closes #27645

@tacaswell added this to the v3.8.3 milestone Jan 16, 2024
@dstansby merged commit 101ffe8 into matplotlib:main Jan 17, 2024
meeseeksmachine pushed a commit to meeseeksmachine/matplotlib that referenced this pull request Jan 17, 2024
ksunden added a commit that referenced this pull request Jan 17, 2024
…657-on-v3.8.x: Backport PR #27657 on branch v3.8.x (Fix Numpy 2.0 related test failures)
Member

@QuLogic left a comment


Running with NumPy 1.24.1 and NPY_PROMOTION_STATE=weak_and_warn, I think we may be introducing an unintentional copy here?

```
__________ test_norm_update_figs[png] __________

ext = 'png', request = <FixtureRequest for <Function test_norm_update_figs[png]>>, args = (), kwargs = {}
file_name = 'test_norm_update_figs[png]'
fig_test = <Figure size 640x480 with 1 Axes>, fig_ref = <Figure size 640x480 with 1 Axes>, figs = []

    @pytest.mark.parametrize("ext", extensions)
    def wrapper(*args, ext, request, **kwargs):
        if 'ext' in old_sig.parameters:
            kwargs['ext'] = ext
        if 'request' in old_sig.parameters:
            kwargs['request'] = request
        file_name = "".join(c for c in request.node.name
                            if c in ALLOWED_CHARS)
        try:
            fig_test = plt.figure("test")
            fig_ref = plt.figure("reference")
            with _collect_new_figures() as figs:
>               func(*args, fig_test=fig_test, fig_ref=fig_ref, **kwargs)

lib/matplotlib/testing/decorators.py:411:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
lib/matplotlib/tests/test_colors.py:1661: in test_norm_update_figs
    fig_test.canvas.draw()
lib/matplotlib/backends/backend_agg.py:387: in draw
    self.figure.draw(self.renderer)
lib/matplotlib/artist.py:95: in draw_wrapper
    result = draw(artist, renderer, *args, **kwargs)
lib/matplotlib/artist.py:72: in draw_wrapper
    return draw(artist, renderer)
lib/matplotlib/figure.py:3117: in draw
    mimage._draw_list_compositing_images(
lib/matplotlib/image.py:132: in _draw_list_compositing_images
    a.draw(renderer)
lib/matplotlib/artist.py:72: in draw_wrapper
    return draw(artist, renderer)
lib/matplotlib/axes/_base.py:3095: in draw
    mimage._draw_list_compositing_images(
lib/matplotlib/image.py:132: in _draw_list_compositing_images
    a.draw(renderer)
lib/matplotlib/artist.py:72: in draw_wrapper
    return draw(artist, renderer)
lib/matplotlib/image.py:653: in draw
    im, l, b, trans = self.make_image(
lib/matplotlib/image.py:945: in make_image
    return self._make_image(self._A, bbox, transformed_bbox, clip,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <matplotlib.image.AxesImage object at 0x7f02f6900c40>
A = masked_array(
  data=[[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
        [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
        ...
        [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]],
  mask=False,
  fill_value=999999)
in_bbox = Bbox([[-0.5, 9.5], [9.5, -0.5]]), out_bbox = <matplotlib.transforms.TransformedBbox object at 0x7f02f68f6370>
clip_bbox = <matplotlib.transforms.TransformedBbox object at 0x7f02f6900e80>, magnification = 1.0
unsampled = False, round_to_pixel_border = True

    def _make_image(self, A, in_bbox, out_bbox, clip_bbox, magnification=1.0,
                    unsampled=False, round_to_pixel_border=True):
        [... full source context of ImageBase._make_image elided ...]
                # Always copy, and don't allow array subtypes.
                A_scaled = np.array(A, dtype=scaled_dtype)
        [...]
                vmin, vmax = self.norm.vmin, self.norm.vmax
                if vmin is np.ma.masked:
                    vmin, vmax = a_min, a_max
                vrange = np.array([vmin, vmax], dtype=scaled_dtype)
>               A_scaled -= a_min
E               UserWarning: result dtype changed due to the removal of value-based promotion from NumPy. Changed from float32 to float64.

lib/matplotlib/image.py:492: UserWarning
```
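
A minimal sketch (not the matplotlib code) of what triggers that warning, assuming the dtypes shown in the traceback: `A_scaled` is the float32 working copy and `a_min` comes from an integer-dtype image, so NEP 50 promotes the subtraction to float64 where value-based promotion kept float32.

```python
import numpy as np  # NumPy 1.24-1.26 with NPY_PROMOTION_STATE=weak_and_warn

A_scaled = np.arange(100, dtype=np.float32).reshape(10, 10)  # float32 working copy
a_min = np.int64(0)                                          # A.min() of the integer image
A_scaled -= a_min   # warns: "result dtype changed ... from float32 to float64"
```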

Comment on lines +5746 to +5749
```python
# Note: The `pie` image tests were affected by Numpy 2.0 changing promotions
# (NEP 50). While the changes were only marginal, tolerances were introduced.
# These tolerances could likely go away when numpy 2.0 is the minimum supported
# numpy and the images are regenerated.
```
Member

@QuLogic commented Jan 17, 2024 (edited)


It looks like this is caused by:

```python
# The use of float32 is "historical", but can't be changed without
# regenerating the test baselines.
x = np.asarray(x, np.float32)
```

(which ironically is to avoid changing test images), so it could be fixed by explicitly upcasting again:

```diff
diff --git a/lib/matplotlib/axes/_axes.py b/lib/matplotlib/axes/_axes.py
index b1343b5c65..d035e9b042 100644
--- a/lib/matplotlib/axes/_axes.py
+++ b/lib/matplotlib/axes/_axes.py
@@ -3284,7 +3284,7 @@ class Axes(_AxesBase):
         slices = []
         autotexts = []
-        for frac, label, expl in zip(x, labels, explode):
+        for frac, label, expl in zip(x.astype(np.float64), labels, explode):
             x, y = center
             theta2 = (theta1 + frac) if counterclock else (theta1 - frac)
             thetam = 2 * np.pi * 0.5 * (theta1 + theta2)
```
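
For scale, a rough illustration (made-up values, not from the test suite) of how small the float32-vs-float64 difference in the normalized wedge fractions is; this is the size of discrepancy the new pie image tolerances (at most 0.01 RMS) absorb:

```python
import numpy as np

x = np.asarray([1, 2, 3, 7], np.float32)   # same "historical" float32 cast
frac32 = x / x.sum()                       # fractions computed in float32
x64 = x.astype(np.float64)
frac64 = x64 / x64.sum()                   # fractions computed in float64
print(np.abs(frac32 - frac64).max())       # a few 1e-8 at most: float32 precision
```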

Reviewers

@QuLogic left review comments

@tacaswell approved these changes

@dstansby approved these changes

Assignees
No one assigned
Labels
None yet
Projects
None yet
Milestone
v3.8.3
Development

Successfully merging this pull request may close these issues.

[TST] Upcoming dependency test failures
4 participants
@ksunden, @tacaswell, @QuLogic, @dstansby
