TST: Add failing test #26522

Closed

larsoner wants to merge 2 commits into matplotlib:main from larsoner:bug

Conversation

larsoner
Contributor

This is a bug-report-as-PR -- MNE-Python main just started failing with:

```
mne/viz/topo.py:496: in _imshow_tfr_unified
    ax.imshow(...
/opt/hostedtoolcache/Python/3.11.4/x64/lib/python3.11/site-packages/matplotlib/artist.py:766: in set_clip_box
    if clipbox != self.clipbox:
E   ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```

So passing a bbox-like object used to work but now doesn't (i.e., the test added here should fail). cc @oscargus as this seems to have been introduced by #26326.

Not sure if the correct solution here is to cast to BboxBase if not an instance of one or something... but at least in MNE-Python we can work around it for now by passing tuple(bbox) or so, so no rush from our end. 🤷
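For reference, here is a minimal sketch of the failure mode (the array values and image data are illustrative placeholders, not taken from MNE-Python):

```python
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
pos = np.array([0.065, 0.628, 0.034, 0.028])  # bbox-like ndarray, placeholder values
data = np.random.rand(4, 4)

try:
    # On current main this reaches Artist.set_clip_box, where
    # `clipbox != self.clipbox` yields an element-wise boolean array and
    # the `if` raises the ValueError shown in the traceback above.
    ax.imshow(data, clip_box=pos)
except ValueError as err:
    print("ndarray clip_box failed:", err)

# Workaround we can use in MNE-Python for now: pass a plain tuple instead.
ax.imshow(data, clip_box=tuple(pos))
```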

oscargus added this to the v3.8.0 milestone on Aug 14, 2023
oscargus added the "Release critical" label (For bugs that make the library unusable (segfaults, incorrect plots, etc) and major regressions) on Aug 14, 2023
@oscargus
Member

I guess the easiest is just to revert that change for set_clipbox. Probably not worthwhile to add logic to check it in a more complicated way.

Will add a PR tomorrow.

@oscargus
Member

Reverting that change leads to:

```
    def make_image(self, renderer, magnification=1.0, unsampled=False):
        # docstring inherited
        trans = self.get_transform()
        # image is created in the canvas coordinate.
        x1, x2, y1, y2 = self.get_extent()
        bbox = Bbox(np.array([[x1, y1], [x2, y2]]))
        transformed_bbox = TransformedBbox(bbox, trans)
>       clip = ((self.get_clip_box() or self.axes.bbox) if self.get_clip_on()
                else self.figure.bbox)
E       ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```

The line with the new error goes back to 3.3.

Based on the doc-string, only Bbox and None are supported. Support for only Bbox goes back about ten years.

When did this work most recently? Is it possible that the added test is not similar enough to your use case?

@larsoner
Contributor (Author)

> When did this work most recently? Is it possible that the added test is not similar enough to your use case?

It "works" on the latest release version, the error only popped up in ourpip-pre run that usesscientific-python-nightly-wheels:

https://github.com/mne-tools/mne-python/actions/runs/5858016804/job/15881180313#step:17:5090

I put "works" in scare quotes because if I revert to matplotlib 3.7 locally and check the type it'snp.ndarray, and it does not raise an error. But if I then force MNE-Python to draw during image creation with aax.figure.canvas.draw_idle() and at the MPL end:

1. Hack into matplotlib/image.py and add some print statements to __init__:

   ```python
   print('\ninitializing')
   print(type(self))
   print(kwargs.get('clip_box', None))
   print(self.get_clip_box())
   ```
2. Add a @property for clipbox with a @clipbox.setter so I can see who sets it:

   ```python
   @property
   def clipbox(self):  # getter shown here for completeness of the property pair
       return self._clipbox

   @clipbox.setter
   def clipbox(self, clip_box):
       import inspect
       print('\nsetting')
       print(inspect.stack()[1])
       print(clip_box)
       self._clipbox = clip_box
   ```
3. Add some print statements to make_image (plus a RuntimeError to make it quit after the conditional in make_image):

   ```python
   print('making')
   print(self.get_clip_box())
   print(self.axes.bbox)
   ```

It looks like in 3.7 the clip_box we pass never actually gets used, because it gets overwritten by set_clip_path:

Print statements:

```
$ pytest mne/viz/tests/test_topo.py -xsk test_plot_tfr_topo
...
mne/viz/tests/test_topo.py 
setting
FrameInfo(frame=<frame at 0x7f652914aae0, file '/home/larsoner/python/virtualenvs/base/lib/python3.11/site-packages/matplotlib/artist.py', line 191, code __init__>, filename='/home/larsoner/python/virtualenvs/base/lib/python3.11/site-packages/matplotlib/artist.py', lineno=191, function='__init__', code_context=['        self.clipbox = None\n'], index=0, positions=Positions(lineno=191, end_lineno=191, col_offset=8, end_col_offset=20))
None

setting
FrameInfo(frame=<frame at 0x7f652914aae0, file '/home/larsoner/python/virtualenvs/base/lib/python3.11/site-packages/matplotlib/artist.py', line 774, code set_clip_box>, filename='/home/larsoner/python/virtualenvs/base/lib/python3.11/site-packages/matplotlib/artist.py', lineno=774, function='set_clip_box', code_context=['        self.clipbox = clipbox\n'], index=0, positions=Positions(lineno=774, end_lineno=774, col_offset=8, end_col_offset=20))
[0.06488496 0.62804889 0.03376805 0.02814004]

initializing
<class 'matplotlib.image.AxesImage'>
[0.06488496 0.62804889 0.03376805 0.02814004]
[0.06488496 0.62804889 0.03376805 0.02814004]

setting
FrameInfo(frame=<frame at 0x7f65297c6a40, file '/home/larsoner/python/virtualenvs/base/lib/python3.11/site-packages/matplotlib/artist.py', line 808, code set_clip_path>, filename='/home/larsoner/python/virtualenvs/base/lib/python3.11/site-packages/matplotlib/artist.py', lineno=808, function='set_clip_path', code_context=['                self.clipbox = TransformedBbox(Bbox.unit(),\n'], index=0, positions=Positions(lineno=808, end_lineno=808, col_offset=16, end_col_offset=28))
TransformedBbox(
    Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0),
    CompositeGenericTransform(
        CompositeGenericTransform(
            BboxTransformTo(
                Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0)),
            Affine2D().scale(1.0)),
        BboxTransformTo(
            TransformedBbox(
                Bbox(x0=0.015, y0=0.025, x1=0.8879999999999999, y1=0.975),
                BboxTransformTo(
                    TransformedBbox(
                        Bbox(x0=0.0, y0=0.0, x1=6.4, y1=4.8),
                        Affine2D().scale(100.0)))))))
making
TransformedBbox(
    Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0),
    CompositeGenericTransform(
        CompositeGenericTransform(
            BboxTransformTo(
                Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0)),
            Affine2D().scale(1.0)),
        BboxTransformTo(
            TransformedBbox(
                Bbox(x0=0.015, y0=0.025, x1=0.8879999999999999, y1=0.975),
                BboxTransformTo(
                    TransformedBbox(
                        Bbox(x0=0.0, y0=0.0, x1=6.4, y1=4.8),
                        Affine2D().scale(100.0)))))))
TransformedBbox(
    Bbox(x0=0.015, y0=0.025, x1=0.8879999999999999, y1=0.975),
    BboxTransformTo(
        TransformedBbox(
            Bbox(x0=0.0, y0=0.0, x1=6.4, y1=4.8),
            Affine2D().scale(100.0))))
F
...
mne/viz/topo.py:508: in _imshow_tfr_unified
    ax.figure.canvas.draw_idle()
../virtualenvs/base/lib/python3.11/site-packages/matplotlib/backend_bases.py:2082: in draw_idle
    self.draw(*args, **kwargs)
...
../virtualenvs/base/lib/python3.11/site-packages/matplotlib/image.py:641: in draw
    im, l, b, trans = self.make_image(
../virtualenvs/base/lib/python3.11/site-packages/matplotlib/image.py:968: in make_image
    raise RuntimeError
E   RuntimeError
```

So in other words, the ndarray was never actually used / checked, which makes me think this could be closed...

But a follow-up question arises: passing tuple(my_ndarray_clip_box) also does not error, even on main. It seems like it should; maybe somewhere in there it casts the tuple, but at least looking at the 3.7 code it doesn't look like it. Making the same changes on main and passing clip_box=tuple(my_ndarray), I get:

Using tuple on main:

```
mne/viz/tests/test_topo.py 
setting
FrameInfo(frame=<frame at 0x7fc02cf24f60, file '/home/larsoner/python/matplotlib/lib/matplotlib/artist.py', line 191, code __init__>, filename='/home/larsoner/python/matplotlib/lib/matplotlib/artist.py', lineno=191, function='__init__', code_context=['        self.clipbox = None\n'], index=0, positions=Positions(lineno=191, end_lineno=191, col_offset=8, end_col_offset=20))
None

setting
FrameInfo(frame=<frame at 0x7fc02cf24f60, file '/home/larsoner/python/matplotlib/lib/matplotlib/artist.py', line 767, code set_clip_box>, filename='/home/larsoner/python/matplotlib/lib/matplotlib/artist.py', lineno=767, function='set_clip_box', code_context=['            self.clipbox = clipbox\n'], index=0, positions=Positions(lineno=767, end_lineno=767, col_offset=12, end_col_offset=24))
(np.float64(0.06488496453324114), np.float64(0.6280488934398878), np.float64(0.03376805371721399), np.float64(0.028140044764344993))

initializing
<class 'matplotlib.image.AxesImage'>
(np.float64(0.06488496453324114), np.float64(0.6280488934398878), np.float64(0.03376805371721399), np.float64(0.028140044764344993))
(np.float64(0.06488496453324114), np.float64(0.6280488934398878), np.float64(0.03376805371721399), np.float64(0.028140044764344993))

setting
FrameInfo(frame=<frame at 0x7fc02d5d7440, file '/home/larsoner/python/matplotlib/lib/matplotlib/artist.py', line 801, code set_clip_path>, filename='/home/larsoner/python/matplotlib/lib/matplotlib/artist.py', lineno=801, function='set_clip_path', code_context=['                self.clipbox = TransformedBbox(Bbox.unit(),\n'], index=0, positions=Positions(lineno=801, end_lineno=801, col_offset=16, end_col_offset=28))
TransformedBbox(
    Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0),
    CompositeGenericTransform(
        CompositeGenericTransform(
            BboxTransformTo(
                Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0)),
            Affine2D().scale(1.0)),
        BboxTransformTo(
            TransformedBbox(
                Bbox(x0=0.015, y0=0.025, x1=0.8879999999999999, y1=0.975),
                BboxTransformTo(
                    TransformedBbox(
                        Bbox(x0=0.0, y0=0.0, x1=6.4, y1=4.8),
                        Affine2D().scale(100.0)))))))
making
TransformedBbox(
    Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0),
    CompositeGenericTransform(
        CompositeGenericTransform(
            BboxTransformTo(
                Bbox(x0=0.0, y0=0.0, x1=1.0, y1=1.0)),
            Affine2D().scale(1.0)),
        BboxTransformTo(
            TransformedBbox(
                Bbox(x0=0.015, y0=0.025, x1=0.8879999999999999, y1=0.975),
                BboxTransformTo(
                    TransformedBbox(
                        Bbox(x0=0.0, y0=0.0, x1=6.4, y1=4.8),
                        Affine2D().scale(100.0)))))))
TransformedBbox(
    Bbox(x0=0.015, y0=0.025, x1=0.8879999999999999, y1=0.975),
    BboxTransformTo(
        TransformedBbox(
            Bbox(x0=0.0, y0=0.0, x1=6.4, y1=4.8),
            Affine2D().scale(100.0))))
```

No idea if this is doing the right thing or not (my head is spinning trying to interpret the flow)... But it seems like you should get a TypeError that the clip_box is not the correct type, at the very least in the ndarray case and maybe also in the tuple case. So maybe that's the right approach here rather than reverting #26326?
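For illustration, the kind of early check I have in mind would be something like the following. This is purely a hypothetical sketch (the helper name is made up and this is not existing Matplotlib code):

```python
from matplotlib.transforms import BboxBase


def _check_clip_box(clipbox):
    """Hypothetical validator: accept only BboxBase instances or None."""
    if clipbox is not None and not isinstance(clipbox, BboxBase):
        raise TypeError(
            "clip_box must be a matplotlib.transforms.BboxBase or None, "
            f"got {type(clipbox).__name__}")
    return clipbox
```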

@oscargus
Member

> But it seems like you should get a TypeError that the clip_box is not the correct type

Yes, the type checking is typically quite limited in most set_*-methods all over the code base.

Will try to understand the rest of the very nice testing you provided and see if there is something else hiding here. The overwriting from set_clip_path and the fact that the original clip_box wasn't used seems, at least at first glance, like something that requires more investigation. Thanks!

@ksunden
Member

ndarray errors because numpy disallows if ndarray(..., dtype=bool): as ambiguous (and gives an array, not a bool, for != comparisons).

The tuple has no such protections, and thus evaluates as False, which is not an error condition, but it is still not the correct type to be used as a Bbox.
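To illustrate the difference with a standalone snippet (the values are arbitrary, the comparison mirrors the one in set_clip_box where self.clipbox starts out as None):

```python
import numpy as np

arr = np.array([0.06, 0.63, 0.03, 0.03])
print(arr != None)  # element-wise comparison -> array([ True,  True,  True,  True])
try:
    # Mirrors `if clipbox != self.clipbox:` when clipbox is an ndarray.
    if arr != None:
        pass
except ValueError as err:
    print(err)  # "The truth value of an array with more than one element is ambiguous..."

tup = tuple(arr)
if tup != None:  # comparison returns a plain bool, so no exception is raised
    pass
```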

Now, bboxes do implement __array__, which returns [[x1, y1], [x2, y2]], the bounds as defined by the points, and so code (perhaps most particularly C code) may work in some edge cases (though even then it is a 2D array, so not quite the same as the tuple form provided here, though C code may ignore the given shape and thus work to some extent). However, it is documented as being expected to be a Bbox, not a generic sequence, and so there are quite likely parts of the code that do rely on that fact.
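A quick standalone check of that __array__ behavior (output shown as a comment):

```python
import numpy as np
from matplotlib.transforms import Bbox

bb = Bbox.from_extents(0.1, 0.2, 0.4, 0.5)  # left, bottom, right, top
print(np.asarray(bb))
# [[0.1 0.2]
#  [0.4 0.5]]   <- a (2, 2) array of corner points, not a flat 4-tuple
```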

Unfortunately, the relationships of these arguments, which stem from set_* methods on Artist, are rather difficult to document effectively... They are not super common to use, and there are many of them, which would make our already long and hard-to-navigate docstrings of methods like imshow even longer and harder to figure out what is relevant to you. But equally, when it is relevant, it would have been nice to see the expected type there. (Not to mention that these are added based on the presence of a set_ method, which adds a layer of complexity if we were to add them to the docstring.)

Ultimately I will admit to a level of skepticism about the idea which appears to be at the center of wanting the clip box in the first place, which seems from my cursory glance at the code to be centered around using one Matplotlib Axes object to serve as multiple logical axes by manually applying scaling factors and clip boxes. This is, in fact, the primary goal of Matplotlib Axes and the Transform stack. There are quite possibly some performance improvements to Matplotlib which alleviate (if not eliminate) the problems which caused that to be written in the first place. (Mind you, I have not run benchmarks, so I do not have anything empirical to back up the feeling that this is likely not as big of a gain as you would hope/intend, or potentially it once was.) And further skepticism that it will continue to behave as expected for any interactive usage or for resizing of the canvas (if all you do are static plots, that is certainly not a dealbreaker; I just have a sense that it would likely behave strangely).

If the performance gains are still relevant, what you probably want instead of tuple(pos) is TransformedBbox(Bbox.from_extents(*pos), ax.transAxes). (Please note, I have not tested this, and please ensure that the order of the position bounds matches your expectation; it is left, bottom, right, top for from_extents.)
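An untested sketch of that suggestion (the pos values are placeholders; double-check the extent order for your data):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.transforms import Bbox, TransformedBbox

fig, ax = plt.subplots()
pos = (0.065, 0.628, 0.10, 0.66)  # placeholder (left, bottom, right, top) in axes fractions
clip = TransformedBbox(Bbox.from_extents(*pos), ax.transAxes)
ax.imshow(np.random.rand(4, 4), clip_box=clip)  # a proper BboxBase, so no type issues
```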

@larsoner
Contributor (Author)

> (Mind you, I have not run benchmarks, so I do not have anything empirical to back up the feeling that that is likely not as big of a gain as you would hope/intend, or potentially it once was) And further skepticism that it will continue to behave as expected for any interactive usage, or for resizing of the canvas (if all you do are static plots, that is certainly not a dealbreaker, just have a sense that it likely would behave strangely).

At the time we wrote the core of the code, multiple Axes were indeed way slower, but that was also a long time ago. Certainly we could/should revisit better ways of doing what we want. And the clip_box I don't think ever worked... someday hopefully we'll do a nicer job :)

@ksunden
Member

As I think we've determined that this is okay to error, just that the error message could perhaps be better/sooner, I'm going to go ahead and close this as not a release blocker.

ksunden closed this on Sep 4, 2023
ksunden removed the "Release critical" label on Sep 4, 2023
oscargus mentioned this pull request on Sep 5, 2023
