API,MAINT: Reorganize array-wrap calling and introduce return_scalar #25409


Merged
seberg merged 18 commits into numpy:main from seberg:clean-wrapping
Jan 20, 2024

Conversation

@seberg (Member)

This reorganizes how array-wrap is called. I think it might very mildly change the semantics for reductions (and for negative priorities).

Overall, it now passes a new return_scalar=False/True when calling __array_wrap__ and deprecates any array-wrap implementation which does not accept arr, context, return_scalar.

I have not integrated it yet, but half the reason for the reorganization is to integrate/reuse it in the array_converter helper PR (gh-24214), which stumbled over trying to make the scalar handling sane.

Forcing downstream to add return_scalar=False to the signature is a bit annoying, but e.g. our memory maps currently try to guess at it, which seems bad. I am hoping this can be part of making the scalar vs. array return more sane.

But, maybe mainly, I hope it consolidates things (together with gh-24124 mainly; if ufuncs were the only complex place we used this, it wouldn't matter much).
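To illustrate the signature change described above, a downstream ndarray subclass would look roughly like this (a hedged sketch with a hypothetical class name, not the exact NumPy 2.0 requirements):

```python
import numpy as np

class Wrapped(np.ndarray):
    # Hypothetical subclass sketching the convention this PR introduces:
    # __array_wrap__ now also receives return_scalar (positionally).
    def __array_wrap__(self, arr, context=None, return_scalar=False):
        if return_scalar and arr.ndim == 0:
            # NumPy decided the result should be a scalar (e.g. a full
            # reduction), so unpack the 0-d array with empty-tuple indexing.
            return arr[()]
        return arr.view(type(self))
```

The key point is that the subclass no longer has to guess whether a 0-d result should become a scalar; NumPy tells it via return_scalar.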

    Py_DECREF(f);
    Py_DECREF(out);
-   if (ret_int) {
+   if (ret_int && ret != NULL) {
seberg (Member, Author)

Sorry, unrelated and a bit annoying to trigger. But the deprecations triggered the bug...

@mhvk (Contributor)

Hmm, seeing this I'm really not sure it is worth breaking every package that implemented __array_wrap__ for something so relatively small... In the end, most classes could check whether ndim == 0 and then do something based on that.

Another idea, that does not break the signature (but might well break implementations), is to pass on to __array_wrap__ what numpy would return, i.e., an array or a scalar.

But there also remains something to be said for just more generally using array scalars, and let that be the API break. It might just make code simpler rather than more complex...

p.s. Should stress that I haven't thought enough about this! I should really look more at your try-0d-preservation branch...

@seberg (Member, Author)

seberg commented Dec 18, 2023 (edited)

It would be nice not to require this, but...

In the end, most classes could check whether ndim==0 and then do something based on that.

No, because 0-D doesn't imply that NumPy should be returning a scalar; after all, that is what annoys us all the time! I would like to be able to decide that np.add(1, 1) should maybe be a scalar, but np.add(np.array(1), 1) should be an array. Or arr1d.sum() vs. arr1d.sum(-1) maybe/probably.

It doesn't matter much which cases are which, so long as there is no absolutely uniform "0-d is always array/scalar" rule...

is to pass on to __array_wrap__ what numpy would return, i.e., an array or a scalar.

Might work out, mostly until dtype=object hits and all is lost, since you need at least obj.view(my_subclass) to work, and it cannot. And it isn't even really reasonable to check for isinstance(obj, ndarray)...

Which was why I thought I would just call result[()] instead, but that also doesn't work because we don't know that the result defines such indexing semantics.

So, I would love a different idea, but the only thing I have left is piggy-backing on context or try/except indefinitely. But I really dislike try/except indefinitely, so that would leave adding one more try/except branch, as the current one breaks subclasses that rely on context (although I hope beyond masked arrays few exist, and everyone who might want to use it moves to __array_ufunc__).
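For context on the result[()] idea mentioned above: empty-tuple indexing turns a 0-d base-class array into a NumPy scalar while leaving higher-dimensional arrays as arrays, which is exactly the conversion a wrap replacement would need — but only if the result type actually supports that indexing, which is the objection here:

```python
import numpy as np

# Empty-tuple indexing on a 0-d ndarray yields a NumPy scalar...
a0 = np.array(3.0)
assert np.isscalar(a0[()])
# ...while on an n-d array it is a view, still an ndarray.
a1 = np.arange(3)
assert isinstance(a1[()], np.ndarray)
```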

This probably doesn't fix the 32bit issue unfortunately, only the windows ones...
@seberg (Member, Author)

FWIW, I have updated the try-0d-preservation branch with these (and the array-converter) changes. There are some tests still failing, but the last commit shows that it is not impossible anymore to return scalars for scalar inputs and arrays otherwise.
(With the admitted caveat that in the strange case of e.g. dtype="i,i", a tuple is considered a scalar, when it normally is not.)

@mhvk (Contributor)

Which was, why I thought I would just call result[()] instead, but that also doesn't work because we don't know that the result defines such indexing semantics.

Maybe we should still go with this? I think it may cause less breakage than a new argument...

Though perhaps I'm overly worried; not sure how many projects in fact have an __array_wrap__ to start with...

@seberg (Member, Author)

Let's say this, we currently have two possible paths:

  1. Functions which always return an array
  2. Functions which return a scalar if the result is 0-D.

And __array_wrap__ cannot possibly match that, because even if it relies on context being set for the second, it will miss reductions.

So to some degree, I really want to add the ability to pick the two behaviors consciously for NumPy. Now, you could just not tell__array_wrap__ about it as it rarely needs to know in practice (most implementations are already happy to just ignore it).

I think it may cause less breakage than a new argument...

Well, in the sense of requiring less code change? Yes, for sure. In the sense of breaking things disregarding the deprecation warning, not sure? Since I think some things like pandas Series will run into result[()] not working.

Though perhaps I'm overly worried; not sure how many projects in fact have an __array_wrap__ to start with...

Right, and if I add another step which tries to pass arr, context when arr, context, return_scalar fails, then this is a DeprecationWarning and will not fail straight away (albeit add a bunch of overhead).

I don't have a strong opinion; it seems tedious, but OTOH, it seems bound to be annoying for someone to not be able to match NumPy arrays (e.g. mmap is such a case).

@seberg (Member, Author)

@mhvk sorry, need to come back to this, I think it is pretty important (if just to push the follow-up of array_converter(), which is important to fix up NEP 50 niche-case bugs in the Python API).

We had discussed this in a meeting a while ago, and I don't think anyone had serious concerns with adding it. But I do value your opinion more than most when it comes to __array_function__. What I will say is that this is just a deprecation, so yes, everyone must update, but end-users wouldn't see it immediately (although it's slow). It could be a PendingDeprecationWarning, but that just hides it even more with no gain, I think. I will add a try for __array_wrap__(arr, context), so that it is truly just a deprecation (even if it could go through 2 tries before succeeding).

To me, the new kwarg still just seems easier than any other polymorphism to try to guess this without it, and it is only some libraries that need to adapt.

@mhvk (Contributor) left a comment (edited)

@seberg - sorry to have forgotten about this! Looking with a fresh eye, I think your solution is for the best -- everything I can think of has the risk of subtle breakage.

So, now just a proper review of the code -- which looks great! One genuine comment about whether forcing wrapping for reductions is correct (it looks like the old code did not). Though if forced wrapping for reductions is OK, then all the calls to npy_apply_wrap in fact used forced wrapping, so the argument could be removed. Indeed, that suggests that if we do not want to force wrapping in reductions, we could just move the if statement to the reduction instead (or are you using it in the follow-up PR?).

    # Return scalar instead of 0d memmap, e.g. for np.sum with
    # axis=None
-   if arr.shape == ():
+   if arr.shape == () and return_scalar:
mhvk (Contributor)

This can just be if return_scalar:, right? Or does the calling code not ensure this is set only for 0-d arrays?

seberg (Member, Author)

You are right, thanks. When I started I hadn't ensured that yet, but it does now (and I think it is much cleaner/better).

* @param inputs Original input objects
* @param out_wrap Set to the python callable or None (on success).
* @param out_wrap_type If not NULL, set to the type belonging to the wrapper
* or set to NULL.
mhvk (Contributor)

From the code, this seems no longer true: *out_wrap_type is always set. This means the rest of the comment can go, which is probably for the better!

* Wrapping is always done for ufuncs because the values stored will have been
* mutated (guarantees calling array-wrap if any mutation may have occurred;
* this is not true for Python calls, though).
* UFuncs always call array-wrap with base-class arrays in any case.
mhvk (Contributor)

I think removing the comment up to this point is OK.

Or was this comment meant for npy_apply_wrap?

seberg (Member, Author)

Removed; this is a leftover and is sufficiently covered by force_wrap now...

        arr = (PyArrayObject *)obj;
    }
    else {
        /* TODO: This branch should ideally be NEVER taken! */
mhvk (Contributor)

Tests fail if you remove this? Might be worth an issue enumerating those (if not too many...).

seberg (Member, Author)

I didn't really try, but the point is that as long as we convert 0-D arrays to scalars randomly, it would seem rather typical to call wrap(a + b), and the result of that can be a scalar.

I will expand the comment of why rather than investigating for now (just seems more useful to me).

        return NULL;
    for (int out_i = 0; out_i < ufunc->nout; out_i++) {
        context.out_i = out_i;
        PyObject *original_out = NULL;
mhvk (Contributor)

I'd make this a one-liner, but up to you!

PyObject *original_out = (full_args.out) ?  PyTuple_GET_ITEM(full_args.out, out_i) : NULL;

    def __array_wrap__(self, arr, context=None):
        return 'pass'

class Test2:
mhvk (Contributor)
Can inherit from Test1 to allow removing the __array__ method.

@seberg (Member, Author) left a comment

Thanks for the review, sorry, I have not gotten back earlier. I think I addressed the issues (the largest one maybe being to try just passing context).

Forcing wrapping in the reduction path seemed right to me, but that said, I don't care about just keeping things as they were. It doesn't really belong here, except that I wanted to slowly try to standardize behavior more.


    PyObject *wrapped_result = npy_apply_wrap(
            (PyObject *)ret, out_obj, wrap, wrap_type, NULL,
            PyArray_NDIM(ret) == 0, NPY_TRUE);
seberg (Member, Author)

I would say it is correct, but you are right, it is a subtle change. I find it odd to not do it for reductions but do it for ufuncs; that said, ufuncs are special also in passing context, so I would be happy to undo it.

(I think besides this, looking up might be unnecessary sometimes, but nothing is changed.)

@seberg (Member, Author)

Nevermind, I switched to not doing force-wrap, which I think leaves the reduction result unchanged. While I would like to align them, I need to get this in for the other PR, so I will delay thinking about it.

Doing this due to numpy gh-25635, since it is super confusing with the bad retrying...
@seberg (Member, Author)

gh-25635 was a good reason to chain exceptions, because this was always terrible (really, it got no worse, except now you have a chance of getting to the real error). It shows nicely how that try/except can go wrong if it isn't specific enough.

In [4]: np.sqrt(np.ma.masked)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
File ~/forks/numpy/build-install/usr/lib/python3.10/site-packages/numpy/ma/core.py:6686, in MaskedConstant.__array_wrap__(self, obj, context, return_scalar)
   6685 def __array_wrap__(self, obj, context=None, return_scalar=False):
-> 6686     return self.view(MaskedArray).__array_wrap__(obj, context)

File ~/forks/numpy/build-install/usr/lib/python3.10/site-packages/numpy/ma/core.py:3131, in MaskedArray.__array_wrap__(self, obj, context, return_scalar)
   3129     fill_value = self.fill_value
-> 3131 np.copyto(result, fill_value, where=d)
   3133 # Update the mask

TypeError: Cannot cast scalar from dtype('float64') to dtype('bool') according to the rule 'safe'

The above exception was the direct cause of the following exception:

DeprecationWarning                        Traceback (most recent call last)
Cell In[4], line 1
----> 1 np.sqrt(np.ma.masked)

DeprecationWarning: __array_wrap__ must accept context and return_scalar arguments (positionally) in the future. (Deprecated NumPy 2.0)
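The retry-then-chain pattern under discussion can be sketched in Python roughly as follows (hypothetical helper name; the real implementation lives in C). Chaining means that when the legacy two-argument retry also fails, the original error is preserved as the cause, which is exactly what surfaces the real TypeError in the traceback above:

```python
import warnings

def call_array_wrap(wrap, arr, context=None, return_scalar=False):
    # Hypothetical sketch of the deprecation fallback discussed in this
    # thread: try the new three-argument signature first, then retry the
    # legacy (arr, context) signature under a DeprecationWarning.
    # Note the pitfall mentioned above: a TypeError raised *inside* a
    # modern implementation would also trigger the retry.
    try:
        return wrap(arr, context, return_scalar)
    except TypeError as exc:
        warnings.warn(
            "__array_wrap__ must accept context and return_scalar "
            "arguments (positionally) in the future.",
            DeprecationWarning, stacklevel=2)
        try:
            return wrap(arr, context)
        except Exception as exc2:
            # Chain so the *real* failure stays visible in the traceback.
            raise exc2 from exc
```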

@mhvk (Contributor) left a comment

Mostly small comments about the logic in npy_find_array_wrap (and one ref-counting issue).

    for (int i = 0; i < nin; i++) {
        PyObject *obj = inputs[i];
        if (PyArray_CheckExact(obj)) {
            if (wrap == NULL || priority < 0) {
mhvk (Contributor)

Priority cannot be < 0 here, right? Should the initialization above be at -inf? A benefit of that is that I think the test for wrap == NULL would not be needed any more.

mhvk (Contributor)

I'd also write priority < NPY_PRIORITY and use that below for priority = NPY_PRIORITY -- somewhat easier to read than having to know the priority of arrays is 0.

seberg (Member, Author)

Priority can be < 0; this makes sense for example for memmap and is used there.

I had initially thought about it, and decided it seemed dubious. Maybe all it changes is that-inf really means that__array_wrap__ shouldnever be called, which would make sense (and probably nobody uses that anyway).

There is the oddity that the old code just skipped ndarrays, which is worse, because negative wraps take precedence over ndarray! (Yes, this is a change here.)

seberg (Member, Author)

(Will use NPY_PRIORITY to not hardcode the 0.)

mhvk (Contributor)

But if you initialize priority = 0, doesn't that mean that np.memmap.__array_wrap__ never gets selected? Is that OK?

seberg (Member, Author)

The code here always uses the first one (if there is more than one). That is what the NULL check does.

seberg (Member, Author) commented Jan 19, 2024 (edited)

Ah, that is nicer though, of course! I should write i == 0, that is much clearer...

EDIT: The only reason I even initialized priority at all is because gcc incorrectly thinks it can be used uninitialized.

seberg (Member, Author) commented Jan 19, 2024 (edited)

Wait no, arg... that doesn't work. How about just adding a comment for what the NULL check does early on?

EDIT: Done this, I do think we can tweak this, but as of now... this is the smallest variation from what we had before (which was very strange and didn't agree with the python side)
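As a rough Python rendering of the selection idea being debated here (a deliberately simplified sketch with hypothetical names, not the actual C code in npy_find_array_wrap, which also handles base-class arrays and scalars specially): among the inputs that define __array_wrap__, pick the one with the highest __array_priority__, with the first input winning ties.

```python
import numpy as np

def find_wrap(inputs):
    # Simplified sketch: plain ndarrays need no wrapping; otherwise keep
    # the __array_wrap__ of the highest-__array_priority__ input, and let
    # earlier inputs win ties (strict "<" comparison below).
    wrap, priority = None, None
    for obj in inputs:
        if type(obj) is np.ndarray:
            continue  # base-class arrays need no wrapping
        obj_wrap = getattr(obj, "__array_wrap__", None)
        if obj_wrap is None:
            continue
        obj_priority = getattr(obj, "__array_priority__", 0.0)
        if wrap is None or priority < obj_priority:
            wrap, priority = obj_wrap, obj_priority
    return wrap
```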

        }
    }
    else if (PyArray_IsAnyScalar(obj)) {
        if (wrap == NULL || priority < NPY_SCALAR_PRIORITY) {
mhvk (Contributor)

Same here.

@charris changed the title Jan 20, 2024
    for (int i = 0; i < nin; i++) {
        PyObject *obj = inputs[i];
        if (PyArray_CheckExact(obj)) {
            if (wrap == NULL || priority < NPY_PRIORITY) {
mhvk (Contributor)

I'm still confused: if you initialize priority = -inf (or some other very large negative number), will this line not just always be true if wrap is undefined? So, why is the wrap == NULL needed?

seberg (Member, Author)

Not if the object defines priority = -inf. Maybe that would be preferable, but 🤷.

seberg (Member, Author)

Ah, I only need it on the last branch, yes. Would that seem clearer (then the last branch is still special, this way priority is only initialized to silence warnings)?

mhvk (Contributor)

I think coding against something that defines priority = -inf yet defines __array_wrap__ seems unnecessary! But if you really prefer what you have, that is fine too -- my own sense is that the logic becomes clearer without the extra conditions that are needed to grab the first argument unconditionally.

seberg (Member, Author)

I don't disagree, I think I just wanted to match one of the implementations inside NumPy.
Maybe it is laziness speaking: this way no user can run into this (not even hypothesis), and it's already there...?

@mhvk (Contributor) left a comment

OK, I think we've reached (more than) good enough!

You probably want to squash & merge, or reduce the number of commits a bit.

@seberg (Member, Author)

Thanks Marten, let me squash-merge this then, since I want to rebase the other PR on this work...

After all, while I think it fixes a bunch of issues (at least mid-term) and opens avenues, it was really mostly a split-out from the other PR.

seberg merged commit 6bd3abf into numpy:main on Jan 20, 2024
@mhvk (Contributor)

Nice, on to the helper! (ping me when rebased - that one we went over a lot already so should be easy to do a final round of review).

@pllim (Contributor)

I thought numpy 2 would be out by now. Wasn't expecting breaking changes even now...

@seberg (Member, Author)

I thought numpy 2 would be out by now. Wasn't expecting breaking changes even now...

I wish :(, although I hope it should be mostly good. You will certainly also notice gh-25168 (more than this).


effigies added a commit to effigies/nitime that referenced this pull request Nov 6, 2025

Reviewers

@mhvk approved these changes

3 participants: @seberg, @mhvk, @pllim