FIX: Allow deepcopy on norms and scales #18126
Conversation
This seems like too heavy-handed a fix? We are trying to fix deep-copying of norms, but are enabling deep copying of all of the scales. The scales participate in the transform stack getting from data -> screen, so I'm not sure that making just some of the things in the tree deep-copy-able is the right choice.
Well, the problem is not deep copying norms per se, it's deep copying norms that are derived from scales. The point is that the scales are at the *end* of the transform tree, so locking their transforms out of deep copying is what blocks deep-copying those norms. The hack-ier way to fix this is to write a custom `__deepcopy__` for just those norms.
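(For illustration only: a toy sketch of that per-class `__deepcopy__` idea, using made-up names and attributes rather than matplotlib's actual norm classes. The point is that the norm can rebuild itself from its constructor arguments instead of trying to deep-copy the transform it holds.)

```python
import copy

class ScaleBackedNorm:
    """Toy stand-in for a norm built from a scale (hypothetical example)."""

    def __init__(self, forward, inverse, vmin=None, vmax=None):
        self._funcs = (forward, inverse)   # keep the constructor arguments
        self.vmin, self.vmax = vmin, vmax
        self._transform = object()         # imagine a non-copyable transform here

    def __deepcopy__(self, memo):
        # Rebuild from the stored constructor arguments instead of trying to
        # deep-copy the (non-copyable) transform this instance holds.
        new = type(self)(*self._funcs, vmin=self.vmin, vmax=self.vmax)
        memo[id(self)] = new
        return new

norm = ScaleBackedNorm(lambda x: x**0.5, lambda x: x**2, vmin=0, vmax=1)
norm_copy = copy.deepcopy(norm)            # succeeds despite the "transform"
```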
Haven't had a chance to look at this yet, but I think this should be targeted to 3.4 (which #16457 is targeted for) and not 3.3.
brianpenghe commented Aug 6, 2020
I don't know whether this is related, but I had this issue.
@tacaswell can you weigh in again on this if you are still concerned?
Looking at things again (it's been a while), I am actually slightly confused why copying and deepcopying of transforms is forbidden in general (perhaps I completely missed the point, in which case I apologize in advance). In #18119 (comment) @tacaswell raised a concern about breaking the invalidation machinery,
but actually AFAICT it is the modified node that is in charge of notifying its dependents ("parents") that it has been modified and that they (the dependents) need to be invalidated. IOW, copying a node should be fine(*).

(*) that's the normal semantics of copying: if you have an object and deep-copy it, mutating the copy does not affect the original (or anything that depends on the original).
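(A minimal toy sketch of that argument, with a made-up `Node` class rather than matplotlib's actual `TransformNode`: the modified node notifies its own dependents, and a deep copy carries its own copied dependent list, so mutating the copy cannot invalidate the original's dependents.)

```python
import copy

class Node:
    """Toy dependency node (hypothetical; not matplotlib's TransformNode)."""

    def __init__(self):
        self._dependents = []   # nodes to invalidate when *this* node changes
        self.stale = False

    def add_dependent(self, other):
        self._dependents.append(other)

    def modify(self):
        for dep in self._dependents:
            dep.stale = True    # the modified node notifies its dependents

parent, child = Node(), Node()
child.add_dependent(parent)

child_copy = copy.deepcopy(child)  # the copy gets a deep-copied dependent list
child_copy.modify()
print(parent.stale)                # False: the original parent is untouched
```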
lib/matplotlib/scale.py Outdated

```diff
@@ -79,6 +79,18 @@ def limit_range_for_scale(self, vmin, vmax, minpos):
         return vmin, vmax
 
+class _TransformScaleCopyable():
```
Suggested change:

```diff
-class _TransformScaleCopyable():
+class _CopyableTransfromMixin():
+    """Mixin to support deep copy on a transfrom."""
```
I don't think we need "scale" in here. It's just the copy feature that we provide and doesn't have to say anything about who may use it.
While historically we did not call mixins `*Mixin`, I think it's good style to do so, and there is precedent with `_BlendedMixin`.
sure, plus or minus spelling ;-)
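(For reference, such a mixin could look roughly like the sketch below. This is a hypothetical outline, not this PR's actual diff; in particular, resetting a `_parents` attribute is an assumption about how the dependent bookkeeping would be handled.)

```python
import copy

class _CopyableTransformMixin:
    """Hypothetical sketch: opt a transform subclass back into deep copying,
    which the transform base class otherwise forbids."""

    def __deepcopy__(self, memo):
        cls = type(self)
        new = cls.__new__(cls)        # bypass __init__; state is copied below
        memo[id(self)] = new          # guard against reference cycles
        for name, value in vars(self).items():
            if name == "_parents":
                # Assumption: dependents of the original should not also be
                # registered on the copy, so the copy starts with none.
                new.__dict__[name] = type(value)()
            else:
                new.__dict__[name] = copy.deepcopy(value, memo)
        return new

    def __copy__(self):
        return self.__deepcopy__({})
```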
That unfortunately broke docs since the mixin became undocumented.
I guess there is no fundamental reason for it not to be public?
brunobeltran left a comment (edited)
Flagging this because it doesn't match my understanding of what we agreed was best on the call today. Worst case I'm wrong, but better safe than sorry.
That said, it's a pretty simple fix.
lib/matplotlib/scale.py Outdated
```python
class LinearTransform(CopyableTransformMixin, IdentityTransform):
    pass
```
Sorry to nitpick about this, but I think it makes a lot more sense to instead have `class IdentityTransform(CopyableTransformMixin, Transform)` in transform.py. Then you can just return `IdentityTransform` below and not have to create the `LinearTransform` class at all. That was my takeaway from the call today, but @timhoffm approved already so I may be missing something?
What is the difference between this and doing it in scale.py where it is needed? We really don't particularly want people using a special version of `transform.Identity`, do we?
Ah, I emphasized the wrong thing. The point is to *not* make a special Transform class at all, but instead to add copy and deepcopy ability to the *real* `IdentityTransform` (using the mix-in). Then you don't need the extra class at all, I think.
Sure, I understand. But my point is that all the other scales have a named transform associated with them; I think it's fine for the linear scale to have a (trivial) `LinearTransform` that is associated with it. The advantage is that the goofy mixin is confined to scale.py, and we don't pollute transform.py.
The discussion about `IdentityTransform` on the call was pointing out that it definitely didn't need to have a stricture against copying, but to my mind that just goes to @anntzer's overall point that copying all the transforms could be reconsidered. But that is a 3.5 sort of thing, not a release-critical 3.4 fix.
I agree with @brunobeltran (@brunobeltran thanks for catching this. I was too concerned with naming the mixin 😄).
We have scales and the transforms that are associated with them. For nonlinear scales we need special transforms, and it's natural that a LogScale is connected to a LogTransform. However, for a linear scale we don't need a transform (or only the identity). Making a `LinearTransform`, which is effectively an `IdentityTransform`, only for the sake of matching the name is more confusing than helpful (that's also what got me confused in the first place in yesterday's discussion).
Note also the docstring of `LinearScale.get_transform()`:
    Return the transform for linear scaling, which is just the `~matplotlib.transforms.IdentityTransform`.
Given that the mixin is private again, we're not polluting the namespace of transform.py if we put it there.
Sure, but it is strange to have a mixin that we use once in the transform module and a bunch of times in the scale module, particularly when the mixin is really only needed for the scales to work. I think it just makes things unnecessarily complicated.
brunobeltran Jan 7, 2021 (edited)
I think which approach is more complicated is in the eye of the beholder. I personally find the purpose of the mix-in to be clear from the name. On the other hand, an equivalent solution (that it sounds like you like better) would be to just have:
(in transforms.py):

```python
class _CopyableTransformMixin():
    ...

class IdentityTransform(Affine2DBase, _CopyableTransformMixin):
    ...

class _CopyableTransform(Transform, _CopyableTransformMixin):
    pass
```
then in scale.py, we simply do

```python
from matplotlib.transforms import ..., _CopyableTransform

class FuncTransform(_CopyableTransform):
    ...
```
Then there's not a mixin scattered over two files, the semantics of what's going on are confined to just transforms.py, and what we get is that `IdentityTransform` now copies correctly, and no users are confused about what this "LinearTransform" object is that quacks like `IdentityTransform` but has a different name.
(Bonus) whoever ends up making all Transforms copy-able in the future has a nice framework to start working within.
I like the current solution better, where nothing changes in transform.py and all the changes are in scale.py.
I don't think this is the correct framework for any fundamental changes to `Transform` that would allow it to be copyable in general. That will require substantially more state checking etc. that is beyond a simple mixin.
I'd like to keep this with no additional public API for now (other than supporting the deepcopy on norms and scales). In particular, I don't want to introduce a funky `LinearTransform` that is basically an `IdentityTransform`.
That implies we have to make `IdentityTransform` itself copyable, i.e. there has to be a change to transform.py, though it can all be private. I have no strong preference concerning a mixin on all scale transforms vs. a `_CopyableTransform` base class (partly also because it can all be kept private and so we can change that should the need arise).
I made `IdentityTransform` copyable by adding a `__deepcopy__` (and `__copy__`) to it, and removed `LinearTransform` from transform.py.
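(A quick, hedged check of what that buys us, assuming a matplotlib version in which `IdentityTransform` supports copying: the transform returned by a linear scale can now be deep-copied directly.)

```python
import copy
from matplotlib.scale import LinearScale
from matplotlib.transforms import IdentityTransform

trf = LinearScale(axis=None).get_transform()
assert isinstance(trf, IdentityTransform)
copy.deepcopy(trf)  # no longer raises once IdentityTransform is copyable
```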
I don't think the test failures are due to this PR. I can rerun after review.
Modulo the style comment.
@brunobeltran you are still blocking this. I believe your concerns are addressed.
Sorry for being late to the party, but perhaps #19281 would be more general?
Put on hold until #19281 gets decided.
@anntzer does it make sense to get this in for 3.4 and then leave your more comprehensive change for a future release? I can only imagine testing it is going to be a bit of a challenge.
I'll try to join the call tomorrow and we can discuss then?
Superseded by #19281.
PR Summary
Closes #18119
I guess it's an API change to change the return type of `scale.linear.get_transform`? This needs tests, but the following now works instead of raising an error.
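(The snippet from the original summary is not reproduced here; the following is a plausible illustration of the kind of code meant, assuming a matplotlib version that includes this fix: deep-copying a scale-backed norm and a scale no longer raises.)

```python
import copy
from matplotlib.colors import LogNorm
from matplotlib.scale import LogScale

copy.deepcopy(LogNorm(vmin=1, vmax=100))  # a norm backed by a scale's transform
copy.deepcopy(LogScale(axis=None))        # a scale object itself
```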
PR Checklist