Possible extra copy in image resampling with NumPy 2 #27666

Open
@QuLogic

Description


Running with NumPy 1.24.1 and `NPY_PROMOTION_STATE=weak_and_warn`, I think we may be introducing an unintentional copy here:

```
__________ test_norm_update_figs[png] __________

ext = 'png', request = <FixtureRequest for <Function test_norm_update_figs[png]>>, args = (), kwargs = {}, file_name = 'test_norm_update_figs[png]'
fig_test = <Figure size 640x480 with 1 Axes>, fig_ref = <Figure size 640x480 with 1 Axes>, figs = []

lib/matplotlib/testing/decorators.py:411:
lib/matplotlib/tests/test_colors.py:1661: in test_norm_update_figs
lib/matplotlib/backends/backend_agg.py:387: in draw
lib/matplotlib/artist.py:95: in draw_wrapper
lib/matplotlib/artist.py:72: in draw_wrapper
lib/matplotlib/figure.py:3117: in draw
lib/matplotlib/image.py:132: in _draw_list_compositing_images
lib/matplotlib/artist.py:72: in draw_wrapper
lib/matplotlib/axes/_base.py:3095: in draw
lib/matplotlib/image.py:132: in _draw_list_compositing_images
lib/matplotlib/artist.py:72: in draw_wrapper
lib/matplotlib/image.py:653: in draw
lib/matplotlib/image.py:945: in make_image
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <matplotlib.image.AxesImage object at 0x7f02f6900c40>
A = masked_array(
  data=[[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
        [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
    ...
        [80, 81, 82, 83, 84, 85, 86, 87, 88, 89],
        [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]],
  mask=False,
  fill_value=999999)
in_bbox = Bbox([[-0.5, 9.5], [9.5, -0.5]])
out_bbox = <matplotlib.transforms.TransformedBbox object at 0x7f02f68f6370>
clip_bbox = <matplotlib.transforms.TransformedBbox object at 0x7f02f6900e80>
magnification = 1.0, unsampled = False, round_to_pixel_border = True

    def _make_image(self, A, in_bbox, out_bbox, clip_bbox, magnification=1.0,
                    unsampled=False, round_to_pixel_border=True):
        """
        Normalize, rescale, and colormap the image *A* from the given *in_bbox*
        (in data space), to the given *out_bbox* (in pixel space) clipped to
        the given *clip_bbox* (also in pixel space), and magnified by the
        *magnification* factor.

        *A* may be a greyscale image (M, N) with a dtype of `~numpy.float32`,
        `~numpy.float64`, `~numpy.float128`, `~numpy.uint16` or `~numpy.uint8`,
        or an (M, N, 4) RGBA image with a dtype of `~numpy.float32`,
        `~numpy.float64`, `~numpy.float128`, or `~numpy.uint8`.

        If *unsampled* is True, the image will not be scaled, but an
        appropriate affine transformation will be returned instead.

        If *round_to_pixel_border* is True, the output image size will be
        rounded to the nearest pixel boundary.  This makes the images align
        correctly with the Axes.  It should not be used if exact scaling is
        needed, such as for `FigureImage`.

        Returns
        -------
        image : (M, N, 4) `numpy.uint8` array
            The RGBA image, resampled unless *unsampled* is True.
        x, y : float
            The upper left corner where the image should be drawn, in pixel
            space.
        trans : `~matplotlib.transforms.Affine2D`
            The affine transformation from image to pixel space.
        """
        if A is None:
            raise RuntimeError('You must first set the image '
                               'array or the image attribute')
        if A.size == 0:
            raise RuntimeError("_make_image must get a non-empty image. "
                               "Your Artist's draw method must filter before "
                               "this method is called.")

        clipped_bbox = Bbox.intersection(out_bbox, clip_bbox)

        if clipped_bbox is None:
            return None, 0, 0, None

        out_width_base = clipped_bbox.width * magnification
        out_height_base = clipped_bbox.height * magnification

        if out_width_base == 0 or out_height_base == 0:
            return None, 0, 0, None

        if self.origin == 'upper':
            # Flip the input image using a transform.  This avoids the
            # problem with flipping the array, which results in a copy
            # when it is converted to contiguous in the C wrapper
            t0 = Affine2D().translate(0, -A.shape[0]).scale(1, -1)
        else:
            t0 = IdentityTransform()

        t0 += (
            Affine2D()
            .scale(
                in_bbox.width / A.shape[1],
                in_bbox.height / A.shape[0])
            .translate(in_bbox.x0, in_bbox.y0)
            + self.get_transform())

        t = (t0
             + (Affine2D()
                .translate(-clipped_bbox.x0, -clipped_bbox.y0)
                .scale(magnification)))

        # So that the image is aligned with the edge of the Axes, we want to
        # round up the output width to the next integer.  This also means
        # scaling the transform slightly to account for the extra subpixel.
        if ((not unsampled) and t.is_affine and round_to_pixel_border and
                (out_width_base % 1.0 != 0.0 or out_height_base % 1.0 != 0.0)):
            out_width = math.ceil(out_width_base)
            out_height = math.ceil(out_height_base)
            extra_width = (out_width - out_width_base) / out_width_base
            extra_height = (out_height - out_height_base) / out_height_base
            t += Affine2D().scale(1.0 + extra_width, 1.0 + extra_height)
        else:
            out_width = int(out_width_base)
            out_height = int(out_height_base)
        out_shape = (out_height, out_width)

        if not unsampled:
            if not (A.ndim == 2 or A.ndim == 3 and A.shape[-1] in (3, 4)):
                raise ValueError(f"Invalid shape {A.shape} for image data")

            if A.ndim == 2 and self._interpolation_stage != 'rgba':
                # if we are a 2D array, then we are running through the
                # norm + colormap transformation.  However, in general the
                # input data is not going to match the size on the screen so we
                # have to resample to the correct number of pixels

                # TODO slice input array first
                a_min = A.min()
                a_max = A.max()
                if a_min is np.ma.masked:  # All masked; values don't matter.
                    a_min, a_max = np.int32(0), np.int32(1)
                if A.dtype.kind == 'f':  # Float dtype: scale to same dtype.
                    scaled_dtype = np.dtype(
                        np.float64 if A.dtype.itemsize > 4 else np.float32)
                    if scaled_dtype.itemsize < A.dtype.itemsize:
                        _api.warn_external(f"Casting input data from {A.dtype}"
                                           f" to {scaled_dtype} for imshow.")
                else:  # Int dtype, likely.
                    # Scale to appropriately sized float: use float32 if the
                    # dynamic range is small, to limit the memory footprint.
                    da = a_max.astype(np.float64) - a_min.astype(np.float64)
                    scaled_dtype = np.float64 if da > 1e8 else np.float32

                # Scale the input data to [.1, .9].  The Agg interpolators clip
                # to [0, 1] internally, and we use a smaller input scale to
                # identify the interpolated points that need to be flagged as
                # over/under.  This may introduce numeric instabilities in very
                # broadly scaled data.

                # Always copy, and don't allow array subtypes.
                A_scaled = np.array(A, dtype=scaled_dtype)

                # Clip scaled data around norm if necessary.  This is necessary
                # for big numbers at the edge of float64's ability to represent
                # changes.  Applying a norm first would be good, but ruins the
                # interpolation of over numbers.
                self.norm.autoscale_None(A)
                dv = np.float64(self.norm.vmax) - np.float64(self.norm.vmin)
                vmid = np.float64(self.norm.vmin) + dv / 2
                fact = 1e7 if scaled_dtype == np.float64 else 1e4
                newmin = vmid - dv * fact
                if newmin < a_min:
                    newmin = None
                else:
                    a_min = np.float64(newmin)
                newmax = vmid + dv * fact
                if newmax > a_max:
                    newmax = None
                else:
                    a_max = np.float64(newmax)
                if newmax is not None or newmin is not None:
                    np.clip(A_scaled, newmin, newmax, out=A_scaled)

                # Rescale the raw data to [offset, 1-offset] so that the
                # resampling code will run cleanly.  Using dyadic numbers here
                # could reduce the error, but would not fully eliminate it and
                # breaks a number of tests (due to the slightly different
                # error bouncing some pixels across a boundary in the (very
                # quantized) colormapping step).
                offset = .1
                frac = .8
                # Run vmin/vmax through the same rescaling as the raw data;
                # otherwise, data values close or equal to the boundaries can
                # end up on the wrong side due to floating point error.
                vmin, vmax = self.norm.vmin, self.norm.vmax
                if vmin is np.ma.masked:
                    vmin, vmax = a_min, a_max
                vrange = np.array([vmin, vmax], dtype=scaled_dtype)

>               A_scaled -= a_min
E               UserWarning: result dtype changed due to the removal of value-based promotion from NumPy. Changed from float32 to float64.

lib/matplotlib/image.py:492: UserWarning
```

However, I'm not sure what this warning means for in-place modification. It may be all right, but I have not tested the exact behaviour.
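To illustrate what changed (a minimal standalone sketch, not matplotlib code): under NumPy 2's NEP 50 rules, a `np.float64` scalar is no longer value-inspected, so mixing it with a `float32` array promotes the operation's result dtype to `float64`, which is what `weak_and_warn` flags. An in-place operation still writes back into the `float32` array, but the computation itself may go through a wider intermediate.

```python
import numpy as np

a = np.ones(4, dtype=np.float32)

# A plain Python float is a "weak" scalar: the array's float32 dtype
# wins under both the old value-based rules and NEP 50.
assert (a - 1.0).dtype == np.float32

# A np.float64 scalar is NOT weak under NEP 50, so `a - np.float64(1.0)`
# yields float64 on NumPy 2, whereas NumPy 1.x's value-based promotion
# kept it float32 -- hence the "result dtype changed" warning.
b = a - np.float64(1.0)

# In-place subtraction keeps the array's float32 dtype either way;
# the open question in this issue is whether the computation now
# routes through a float64 temporary (an extra copy).
a -= np.float64(1.0)
assert a.dtype == np.float32
```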

Originally posted by @QuLogic in #27657 (review)
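One possible way to sidestep the promotion (my sketch only, not a tested fix) would be to cast the scalar bound back to the scaled dtype before the in-place arithmetic, since `a_min` may have been rebound to `np.float64(newmin)` a few lines earlier. The names below (`scaled_dtype`, `A_scaled`, `a_min`) mirror the traceback above but the data is made up:

```python
import numpy as np

# Stand-in values mirroring the _make_image rescaling step.
scaled_dtype = np.dtype(np.float32)
A_scaled = np.arange(100, dtype=scaled_dtype).reshape(10, 10)
a_min = np.float64(0.0)  # a_min was promoted to float64 earlier

# Casting the scalar to the array's dtype up front keeps the in-place
# subtraction homogeneous in float32, so no float64 result dtype is
# involved and weak_and_warn has nothing to warn about.
A_scaled -= scaled_dtype.type(a_min)
assert A_scaled.dtype == scaled_dtype
```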
