BUG: Fix casting from longdouble to long #9971
Conversation
1821d0c to 922fcb6 Compare
a1830b5 to f10d9d3 Compare
charris commented Nov 6, 2017
#9967 is in and needs rebase.
Why not npy_uint64 with chunk_size 64?
digits in the original code was unsigned
Hadn't noticed that it was unsigned. Yes, could probably change to sizeof(npy_long) * CHAR_BIT and npy_long
Changed to use PyLong_FromUnsignedLongLong, npy_ulonglong, and NPY_BITSOF_ULONGLONG
OK, maybe the reason for the 32-bit chunk is because npy_long can be 32-bit, and chunk needs to be npy_long.
There is a funny character here. Windows newline? OK, it says "No newline at end of file", so no problem.
Newline added
7d8004c to ec36b2b Compare
Alternate name proposal: npy_longdouble_to_PyLong.
Either seems fine. Maybe the npy_ alternative is safer in case Python 3 decides to define this too.
This seemed to be just for the long/int choice below, which is far easier (although a little slower) to handle by just producing a long, and letting Python decide whether to downcast to an int.
I think we can now remove the @realtype@ repeat var. @unsigntyp@ too.
(I don't exactly understand why these lines were needed before, since a C cast truncates the fractional part of the double anyway... but the new code looks correct so I won't worry too much.)
Oh, I think it was probably so the x < LONG_MAX and x < LONG_MIN checks aren't affected by the fractional value, in the case of exact equality of the integer part of x to those values.
Indeed, that was the purpose - but I don't think the decrease in clarity is worth it for a marginal performance improvement only on Python 2.
I'll remove the repeat vars - good catch
This would probably be generally handy throughout the codebase.
A job for another PR: would there be a better place to put it, and do we want to call it Py_SETREF and #ifndef Py_SETREF it?
If we decide to use it, I agree that adding an #ifndef seems reasonable.
For anyone else reading: it looks like this Python macro was created in Python 3.4, and there are a lot of discussions of it on the Python mailing list and on the Python issue tracker. It was still going through some changes as of April 2016; I think they decided to split it into two macros, Py_SETREF and Py_XSETREF.
One motivation for it is to avoid a certain kind of bug, according to the comment in CPython. On the other hand, it's not part of the "stable API", so there are some risks in using it.
ahaldane commented Nov 7, 2017
Besides the small cleanup mentioned above, LGTM.
Implements a custom PyLong_FromLongDouble (with a different name) derived from PyLong_FromDouble. Fixes numpy gh-9964
eric-wieser commented Nov 8, 2017
Cleanup made, used the non-clashing name
ahaldane commented Nov 8, 2017
LGTM, merging. Thanks Eric!
charris commented Nov 10, 2017 • edited
Seems to fail the wheels build on the Mac, https://travis-ci.org/MacPython/numpy-wheels. EDIT: It's not obvious what the problem is; the Mac does support extended precision long doubles. Maybe a library function (clang vs gcc) or some such ...
charris commented Nov 10, 2017
I think this indicates that we need to add Mac testing.
{
    PyObject *v;
    PyObject *l_chunk_size;
    // number of bits to extract at a time. CPython uses 30, but that's because
Hmm, should be C style comment.
Careless, good spot. Is this the reason for the Mac failure?
No. At the moment I'm suspecting the power function.
The problem is this line: huge_ld = 1234 * np.longdouble(2) ** exp. We need someone running on a Mac to check this out. Maybe a fallback double function is being used somewhere.
Here it is, powl is blacklisted on the Mac:

/* powl gives zero division warning on OS X, see gh-8307 */
#if defined(HAVE_POWL) && defined(NPY_OS_DARWIN)
#undef HAVE_POWL
#endif
Although I don't see why that should be a problem; the double value of the fallback function should be OK with the result cast to long double, so the multiplication should work. Maybe inlining is failing somewhere ...
Fired up the old Mac. The Mac double pow function errors with exp == 1024; Linux works with that. I'm simply going to use exp = 1023 as a fix in this case.
eric-wieser Nov 10, 2017 • edited
exp=1023 doesn't work, because then the result fits in double - the goal here is to produce a longdouble that exceeds the limits for a double.
I suppose using exp=1023 and multiplying by 2 would work
Yes, that is what I did. See #10000. Oh, looky there, issue 10000, it's like watching the odometer turn over ...
Guess we will eventually find out if this all works for IBM double double.
assert_raises(OverflowError, x.__int__)
assert_equal(len(sup.log), 1)
@dec.skipif(np.finfo(np.double) == np.finfo(np.longdouble))
This test fails on ppc64.
I think I'm going to add @dec.skipif(platform.machine().startswith("ppc64")) here.
Implements a custom PyLong_FromLongDouble derived from PyLong_FromDouble.
Fixes gh-9964
Depends on #9967 being merged, but submitting now to allow tests to run on a platform with longdouble support, something mine does not have. I'll rebase once that's merged.