gh-111545: Add PyHash_Double() function #112095


Closed
vstinner wants to merge 1 commit into python:main from vstinner:hash_double2

Conversation

@vstinner (Member) commented Nov 15, 2023 (edited by github-actions bot)

  • Clean up the PyHash_Double() implementation based on _Py_HashDouble():

    • Move variable declarations to their first assignment.
    • Add braces (PEP 7).
    • Cast the result to signed Py_hash_t before the final "== -1" test, to reduce the number of casts (see the sketch after this list).
    • Add an assertion on Py_IS_NAN(v) in the only code path which can return -1.
  • Add tests: Modules/_testcapi/hash.c and Lib/test/test_capi/test_hash.py.
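A minimal sketch of that cast cleanup (compute_hash() is a hypothetical stand-in for the real hash computation, not an actual CPython helper):

```c
/* Compute in unsigned Py_uhash_t, cast once to the signed Py_hash_t,
   then run the final "== -1" test on the signed value. */
Py_uhash_t x = compute_hash(v);   /* hypothetical helper */
Py_hash_t result = (Py_hash_t)x;
if (result == -1) {
    result = -2;   /* -1 is reserved for error returns */
}
return result;
```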


📚 Documentation preview 📚: https://cpython-previews--112095.org.readthedocs.build/

@vstinner (Member, Author)

numpy has a Npy_HashDouble() compatibility layer to handle the _Py_HashDouble() of Python 3.9 and older, which only takes a single C double argument. numpy calls Npy_HashDouble(obj, value) in scalartypes.c.src.

The proposed API has a single argument and cannot be used as a drop-in replacement for _Py_HashDouble(). I would prefer to let C extensions decide how to handle the not-a-number special case. Example:

```c
Py_hash_t hash = PyHash_Double(value);
if (hash == -1) {
    return _Py_HashPointer(obj);
}
return hash;
```

Problem: _Py_HashPointer() is also private. Once this PR is merged, I will prepare a follow-up PR to add a public PyHash_Pointer() function.

Since numpy already has a Npy_HashDouble() compatibility layer, it can reimplement such "PyHash_Double() or _Py_HashPointer()" logic in a single place (npy_pycompat.h).


The _Py_HashDouble() function had 1 argument (Py_hash_t _Py_HashDouble(double v)) but was changed in Python 3.10a7 (or b1) to take a second argument: Py_hash_t _Py_HashDouble(PyObject *inst, double v).

Python now uses the identity for the hash when the value is a NaN, see gh-87641. In Python 3.9, hash(float("nan")) returned 0 (#define _PyHASH_NAN 0).

By the way, in Python 3.13, sys.hash_info.nan still exists and is equal to 0, even if hash(float("nan")) no longer returns 0! See the https://docs.python.org/dev/library/sys.html#sys.hash_info documentation:

hash_info.nan: (This attribute is no longer used)
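A quick illustration of the behavior change (Python 3.10 and later; on 3.9 and older, both hashes below were 0):

```python
# NaN hashing since Python 3.10: the hash is derived from the object's
# identity, so it is stable per object but differs between objects.
x = float("nan")
y = float("nan")
assert hash(x) == hash(x)  # consistent for the same object
assert hash(x) != hash(y)  # distinct NaN objects hash differently
```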

@vstinner (Member, Author)

Another slice of Python history. In Python 3.2, the PyObject_Hash() return type changed from long to Py_hash_t to reduce hash collisions on 64-bit Windows, where long is only 32-bit (whereas Py_hash_t is 64-bit). The Py_hash_t and Py_uhash_t types were added in Python 3.2. See commit 8f67d08 and commit 8035bc5 of issue gh-53987 (bpo-9778).

@vstinner (Member, Author)

I'm not a fan of a signed hash type. For example, I prefer to avoid it when using the modulo operator (x % y). I would prefer to use the unsigned Py_uhash_t type.

The signed Py_hash_t type is used as the return type of the PyTypeObject.tp_hash function.
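A small sketch of why the signed type is annoying with the modulo operator: in C, the result of % takes the sign of the dividend, so a bucket index computed from a signed hash can come out negative (illustrative code only):

```c
/* Mapping a hash value to a bucket index. */
Py_hash_t hash = -7;
size_t nbuckets = 8;

Py_hash_t bad = hash % (Py_hash_t)nbuckets;   /* -7: can be negative */
size_t good = (Py_uhash_t)hash % nbuckets;    /* always in [0, nbuckets) */
```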

@vstinner (Member, Author) commented Nov 15, 2023 (edited)

Problem: _Py_HashPointer() is also private. Once this PR is merged, I will prepare a follow-up PR to add a public PyHash_Pointer() function.

Draft PR gh-112096.

I prefer to start with only PyHash_Double(), to discuss the PyHash API first:

  • Do we want to continue using the annoying signed Py_hash_t, or should we move to the unsigned Py_uhash_t?
  • I propose moving from the _Py_HashXXX() naming convention to PyHash_XXX(), to put hash functions in a PyHash namespace (share a common prefix). If we expose constants such as _PyHASH_MODULUS or _PyHASH_INF, I propose to expose them as PyHash_MODULUS or PyHash_INF. It would be consistent with the existing PyHash_GetFuncDef() function name.
  • Add Doc/c-api/hash.rst documentation.
  • Add Modules/_testcapi/hash.c and Lib/test/test_capi/test_hash.py to test the PyHash C API.


@vstinner (Member, Author)

I merged a first change to make this PR smaller and so easier to review. PR #112098 added documentation and tests for the PyHash_GetFuncDef() function, which was added by PEP 456.

@vstinner (Member, Author)

The PR adds Py_hash_t PyHash_Double(double value).

The proposed API has a single argument and cannot be used as a drop-in replacement for _Py_HashDouble(). I would prefer to let C extensions decide how to handle the not-a-number special case.

If needed, a second function can be added:

```c
Py_hash_t PyHash_DoubleOrPointer(double value, const void *ptr)
```

=> Compute the value hash, or compute the ptr hash if value is not-a-number (NaN).
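A hedged sketch of how such a helper could be implemented, assuming PyHash_Double() returns -1 exactly when value is NaN (as in this PR) and that a pointer-hashing function is available:

```c
/* Sketch only, not the actual implementation. */
Py_hash_t
PyHash_DoubleOrPointer(double value, const void *ptr)
{
    Py_hash_t hash = PyHash_Double(value);
    if (hash == -1) {
        /* value is NaN: fall back to hashing the pointer */
        hash = _Py_HashPointer(ptr);
    }
    return hash;
}
```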

For me, it's surprising that when passing a Python object to _Py_HashDouble(), the function doesn't call PyObject_Hash(obj) but _Py_HashPointer(obj).


Or do you prefer to just expose Py_hash_t _Py_HashDouble(PyObject *inst, double v) as it is?

@skirpichev (Contributor)

By the way, in Python 3.13, sys.hash_info.nan still exists and is equal to 0

I think this attribute should be deprecated (or just removed?).


Hash a C double number.

Return ``-1`` if *value* is not-a-number (NaN).
@skirpichev (Contributor)

Maybe you should document the return value for inf too? This is exposed in sys.hash_info.

@vstinner (Member, Author) commented Nov 15, 2023 (edited)

I prefer to wait until the _PyHASH_INF constant is added to the API. That's the C API documentation, not the Python documentation.

Functions
^^^^^^^^^

.. c:function:: Py_hash_t PyHash_Double(double value)
@skirpichev (Contributor)

I would prefer to expose this as unstable API. Hashing of numeric types is a relatively low-level implementation detail, which has changed in the past in minor (3.x.0) releases (NaN hashing was the latest). Why not keep this freedom in the future?

@vstinner (Member, Author)

I don't expect the PyHash_Double() API to change in the future. The result of the function can change in Python 3.x.0 releases, but I don't consider that this qualifies the function for the PyUnstable API.

The PyUnstable API is more for when there is a risk that the function can be removed, that its API can change, or that a major change can happen in a Python 3.x.0 release.

In Python 3.2 (2010), _Py_HashDouble() was written in commit dc787d2 of issue gh-52435.

```
commit dc787d2055a7b562b64ca91b8f1af6d49fa39f1c
Author: Mark Dickinson <dickinsm@gmail.com>
Date:   Sun May 23 13:33:13 2010 +0000

    Issue #8188: Introduce a new scheme for computing hashes of numbers
    (instances of int, float, complex, decimal.Decimal and
    fractions.Fraction) that makes it easy to maintain the invariant that
    hash(x) == hash(y) whenever x and y have equal value.
```

As written above, the latest major change was in Python 3.10 to treat NaN differently.

@skirpichev (Contributor)

The result of the function can change in Python 3.x.0 releases

Previously, the function signature for _Py_HashDouble() was changed too.

@vstinner (Member, Author)

I think this attribute should be deprecated (or just removed?).

If you consider that something should be changed, you can open a new issue about sys.hash_info.nan.

@serhiy-storchaka (Member) left a comment

This PR contains many cosmetic changes, but also some changes that can affect performance in theory (like adding the "else" branch or adding an additional check for -1). Please run precise benchmarks for this. Also consult the previous authors of this code.

To make PyHash_Double a replacement for _Py_HashDouble, you need Py_HashPointer. Maybe add it first?

BTW, should it be PyHash_Double or Py_HashDouble?

@serhiy-storchaka requested review from mdickinson and removed a request for a team, November 15, 2023 08:45
@encukou (Member)

In terms of the proposed C API guidelines: this violates the one where negative results are reserved for errors. Actually, hashes might be a good argument for only reserving -1 for errors, letting the other negative numbers mean success.
A bigger problem is that the -1 is returned without raising an exception, leaving the function with no way to signal unexpected failure -- see capi-workgroup/api-evolution#5.

@vstinner (Member, Author)

@serhiy-storchaka:

To make PyHash_Double a replacement for _Py_HashDouble, you need Py_HashPointer. Maybe add it first?

Ok. I updated PR #112096 so it can be merged first: my PR #112096 adding Py_HashPointer() is now ready for review.

@vstinner force-pushed the hash_double2 branch 3 times, most recently from 24c3f6d to a58dcd1, November 15, 2023 12:12
* Add again the _PyHASH_NAN constant.
* _Py_HashDouble(NULL, value) now returns _PyHASH_NAN.
* Add tests: Modules/_testcapi/hash.c and Lib/test/test_capi/test_hash.py.
@vstinner (Member, Author) commented Nov 15, 2023 (edited)

I updated the PR to address @serhiy-storchaka's and @encukou's comments.

@serhiy-storchaka and @encukou: Please review the updated PR.

The API changed to:

Py_hash_t PyHash_Double(double value, PyObject *obj)

The interesting case is that obj can be NULL! In that case, the function returns sys.hash_info.nan, which comes back from the dead!
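A short sketch of the two call styles under this updated API:

```c
/* The second argument selects the NaN fallback. */
Py_hash_t h1 = PyHash_Double(value, obj);   /* NaN: hash of the obj pointer */
Py_hash_t h2 = PyHash_Double(value, NULL);  /* NaN: _PyHASH_NAN (0) */
```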

Changes:

  • Add again the _PyHASH_NAN constant.
  • Revert the _Py_HashDouble() cleanup to focus on the proposed API. (I plan to propose a follow-up PR just for that.)
  • Just use hash(x) in tests, rather than reimplementing hash(int) in pure Python.
  • Add tests on positive and negative zeros.
  • The function cannot fail: it cannot return -1. It means that if there is a need for that later, it will be possible to raise an exception and return -1 on error.
  • The function respects the latest C API guidelines.

* If *obj* is not ``NULL``, return the hash of the *obj* pointer.
* Otherwise, return :data:`sys.hash_info.nan <sys.hash_info>` (``0``).

The function cannot fail: it cannot return ``-1``.
@encukou (Member)

But we do want users to check the result, so that the function can start failing in some cases in the future.

Suggested change:

```diff
-The function cannot fail: it cannot return ``-1``.
+On failure, the function returns ``-1`` and sets an exception.
+(``-1`` is not a valid hash value; it is only returned on failure.)
```

@vstinner (Member, Author)

I don't see the point of asking developers to make their code slower for a case which cannot happen. It would make C extensions slower for no reason, no?

PyObject_Hash(obj) can call an arbitrary __hash__() method in Python and so can fail. But PyHash_Double() is simple and cannot fail. It's just that it has the same API as PyObject_Hash() and PyTypeObject.tp_hash, for convenience.

For me, it's the same as PyType_CheckExact(obj): the function cannot fail. Do you want to suggest that users start checking for -1 because the API is that it may set an exception and return -1? IMO practicality beats purity here.

@encukou (Member)

I am strongly for allowing deprecation via runtime warnings, and for keeping new API consistent in that respect.

If the speed is an issue (which I doubt, with branch prediction around), let's solve that in a way that still allows the API to report errors.

@vstinner (Member, Author)

I am strongly for allowing deprecation via runtime warnings, and for keeping new API consistent in that respect.

I created capi-workgroup/api-evolution#43 to discuss functions which cannot fail: when the caller is not expected to check for errors.

If the speed is an issue (which I doubt, with branch prediction around), let's solve that in a way that still allows the API to report errors.

Would you mind elaborating on how you plan to solve this issue?

My concern is more about usability of the API than performance here.

But yeah, performance matters as well. Such a function can be used in a hash table (when floats are used as keys), and making it as fast as possible matters.

@encukou (Member)

Would you mind elaborating on how you plan to solve this issue?

It's possible for specific compilers: add a static inline wrapper with if (result == -1) __builtin_unreachable(); or __assume(result != -1).
That way the compiler can optimize the error checking away, until a later Python version decides to allow failures.
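A minimal sketch of that idea against the two-argument API proposed in this PR (the wrapper name is hypothetical):

```c
/* Tell the compiler the error branch is dead, so callers can keep
   checking for -1 at zero cost, until failures become possible. */
static inline Py_hash_t
_PyHash_Double_NoFail(double value, PyObject *obj)
{
    Py_hash_t result = PyHash_Double(value, obj);
#if defined(__GNUC__) || defined(__clang__)
    if (result == -1) {
        __builtin_unreachable();
    }
#elif defined(_MSC_VER)
    __assume(result != -1);
#endif
    return result;
}
```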

@vstinner (Member, Author)

numpy issue: numpy/numpy#25035

@encukou (Member)

Stepping back, I think I figured out why this API seems awkward to me.
It looks like this should be used to get the hash of a Python float object, given only the “unboxed” double value. But it can't actually be used for that: to be able to hash a NaN, you'd need to pass in the Python float object. (And if you already have that, you could call its tp_hash instead.)
You could pass in NULL, but then the result for NaN is indistinguishable from hash(0.0).
Also, it seems that to know what obj “should” be, you need to know an internal detail that isn't quite apparent from the documentation: namely, that NaNs are hashed using PyHash_Pointer.

The actual use case the proposed function allows is implementing a custom object that needs to hash the same as a Python double, where NaN should be hashable (like Python doubles are). That use case is what Python needs, and I assume it's what NumPy needs, but IMO it's unnecessarily limited -- which makes the function well suited for an internal (or unstable) function, but worse as public API.

With a signature like:

```c
int PyHash_Double(double value, Py_hash_t *result)
// -> 1 for non-NaN (*result is set to the hash)
// -> 0 for NaN (*result is set to 0)
```

the function would cover the more general use case as well.

The docs can note that when implementing hash for a custom object, if you get a 0 result you can:

  • fail, or
  • use a fallback that ensures hash(obj) == hash(obj) -- for example, use PyHash_Pointer(obj).

(I'll leave the fallibility argument to the WG repos, just noting that this API would allow adding -1 as a possible result.)
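As a concrete illustration, a custom tp_hash built on this proposed signature might look as follows (MyObject and its value field are hypothetical; PyHash_Pointer is the function proposed in gh-112096):

```c
/* Sketch of a custom tp_hash using the two-result API proposed above. */
static Py_hash_t
myobject_hash(PyObject *self)
{
    double value = ((MyObject *)self)->value;  /* hypothetical layout */
    Py_hash_t hash;
    if (PyHash_Double(value, &hash) == 0) {
        /* NaN: fall back so that hash(obj) == hash(obj) still holds */
        hash = PyHash_Pointer(self);
    }
    return hash;
}
```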

@mdickinson removed their request for review, November 26, 2023 11:54
@mdickinson (Member)

FWIW, I also find the proposed API (Py_hash_t PyHash_Double(double value, PyObject *obj)) awkward. The interesting part of the hash computation, and the part that I presume would be useful to NumPy and others, is the part that converts a non-NaN double to its integer hash. I'd prefer an API that just exposed that part, so that third parties can use it in whatever way they want to.

The signature proposed by @encukou looks reasonable to me. I don't want to get involved in the discussion about error returns, but if we really wanted to, we could have a documented sentinel hash value that's only ever returned for NaNs (e.g., 2**62).

@vstinner (Member, Author)

The signature proposed by @encukou looks reasonable to me. I don't want to get involved in the discussion about error returns, but if we really wanted to, we could have a documented sentinel hash value that's only ever returned for NaNs (e.g., 2**62).

There is sys.hash_info.nan, which is equal to 0, but it's no longer used, as I explained before.

The problem with 2**62 is that it doesn't fit into Py_hash_t on 32-bit platforms. The Py_hash_t type is defined as Py_ssize_t, which is 32-bit on a platform with 32-bit pointers.

@vstinner (Member, Author)

The first PR version used the API Py_hash_t PyHash_Double(double value), returning _PyHASH_NAN if value is NaN.

The second PR version changed the API to Py_hash_t PyHash_Double(double value, PyObject *obj) to micro-optimize the code, avoid having to check the PyHash_Double() result for NaN, and provide a drop-in replacement for numpy.

Well, nobody (including me, to be honest) likes the Py_hash_t PyHash_Double(double value, PyObject *obj) API.

So I wrote PR #112449 which implements the API proposed by @encukou: int PyHash_Double(double value, Py_hash_t *result), returning 0 if value is NaN, or 1 otherwise (if value is finite or is infinity).

@serhiy-storchaka (Member)

I like the simpler API proposed in this PR more; my only concern is about performance. Can the difference be observed in microbenchmarks, or is it insignificant?

Instead of checking the result of the function, the user code can check the argument before calling the function:

```c
if (Py_IS_NAN(value)) {
    return _Py_HashPointer(obj);
}
return PyHash_Double(value);
```

As for names, "Hash" is a verb, and "Double" is an object. The _Py_HashPointer(), _Py_HashBytes() and _Py_HashDouble() names all say what to do ("to hash") and with what object ("double", "bytes" or "pointer"). PyHash_GetFuncDef() also has a verb ("Get"). If you want to use the PyHash_ prefix, you need to repeat Hash twice: PyHash_HashDouble(), PyHash_HashBytes(), PyHash_HashPointer(). I think it is better to use the simple prefix Py_.

@encukou (Member)

I don't see this used in tight loops, so I'd go for the prettier API even if it's a few instructions slower. (Note that NumPy uses it for scalars -- degenerate arrays of size 1 -- to ensure compatibility with Python doubles.)
If this does get performance-critical, IMO the solution (in version-specific builds) is to make it a static inline function, and let the compiler optimize the shape of the API away.

+1 on the naming note; Py_HashDouble (or PyHash_HashDouble) is a bit better.

@mdickinson (Member)

There is sys.hash_info.nan, which is equal to 0, but it's no longer used, as I explained before.

Yes, I'm rather familiar with the history here. :-) I was simply suggesting that if we picked and documented a value that's not used for the hash of any non-NaN, then the hash value itself could be a way of detecting NaNs after the fact. 0 isn't viable for that because it's already the hash of 0.0. And yes, of course a different value would be needed for 32-bit builds; the actual value could be stored in sys.hash_info, as before.

In any case, I was just mentioning this as a possibility. I'd prefer a more direct method of NaN detection, like what you have currently implemented (or not having the API do NaN detection at all, but leave that to the user to do separately if necessary).


@vstinner (Member, Author) commented Nov 27, 2023 (edited)

I ran microbenchmarks on 3 different APIs. Measured performance is between 13.6 ns and 14.7 ns. The maximum difference is 1.1 ns: 1.08x slower. It seems like the current _Py_HashDouble() API is the fastest.

In the 3 APIs, the C double input value is passed through the xmm0 register at the ABI level.

I expected Py_hash_t PyHash_Double(double value) to be the fastest API. It's not the case. I always get bad surprises in my benchmarks. You should try to reproduce them and play with the code to double-check that I didn't mess up my measurements.


I wrote benchmarks on the 3 APIs.

Results using CPU isolation, Python built with gcc -O3 (without PGO, without LTO, just ./configure):

  • A: 13.6 ns +- 0.0 ns
  • B: 14.0 ns +- 0.0 ns
  • C: 14.7 ns +- 0.1 ns
```
+-----------+---------+-----------------------+-----------------------+
| Benchmark | A       | B                     | C                     |
+===========+=========+=======================+=======================+
| bench     | 13.6 ns | 14.0 ns: 1.03x slower | 14.7 ns: 1.08x slower |
+-----------+---------+-----------------------+-----------------------+
```

I added benchmark code to the _testinternalcapi extension, which is built as a shared library, so calls to _Py_HashDouble() and PyHash_Double() go through the PLT (procedure linkage table) indirection.

I added an artificial test on the hash value to use it in the benchmark, so the compiler doesn't remove the whole function call, and the code is a little bit more realistic.

(A) assembly code:

```c
Py_hash_t hash = _Py_HashDouble(obj, d);
if (hash == -1) {
    ...
}
```

```
mov    rax,QWORD PTR [rip+0x698e]        # 0x7ffff7c09690
mov    rdi,rbp
movq   xmm0,rax
call   0x7ffff7c022b0 <_Py_HashDouble@plt>
cmp    rax,0xffffffffffffffff
jne    ...
```

(B) assembly code:

```c
Py_hash_t hash;
if (PyHash_Double(d, &hash) == 0) {
    return NULL;
}
```

```
mov    rax,QWORD PTR [rip+0x6a1f]        # 0x7ffff7c09690
mov    rdi,rbp
movq   xmm0,rax
call   0x7ffff7c02990 <PyHash_Double@plt>
test   eax,eax
jne    ...
```

(C) assembly code:

```c
Py_hash_t hash = PyHash_Double(d);
if (hash == -1) {
    ...
}
```

```
mov    rax,QWORD PTR [rip+0x6a26]        # 0x7ffff7c09690
movq   xmm0,rax
call   0x7ffff7c02990 <PyHash_Double@plt>
cmp    rax,0xffffffffffffffff
jne    ...
```

Change used to benchmark (A) and (B). Measuring (C) requires minor changes.

```diff
diff --git a/Modules/_testinternalcapi.c b/Modules/_testinternalcapi.c
index 4607a3faf17..9b671acf916 100644
--- a/Modules/_testinternalcapi.c
+++ b/Modules/_testinternalcapi.c
@@ -1625,6 +1625,51 @@ get_type_module_name(PyObject *self, PyObject *type)
 }
 
+static PyObject *
+test_bench_private_hash_double(PyObject *Py_UNUSED(module), PyObject *args)
+{
+    Py_ssize_t loops;
+    if (!PyArg_ParseTuple(args, "n", &loops)) {
+        return NULL;
+    }
+    PyObject *obj = Py_None;
+    double d = 1.0;
+
+    _PyTime_t t1 = _PyTime_GetPerfCounter();
+    for (Py_ssize_t i=0; i < loops; i++) {
+        Py_hash_t hash = _Py_HashDouble(obj, d);
+        if (hash == -1) {
+            return NULL;
+        }
+    }
+    _PyTime_t t2 = _PyTime_GetPerfCounter();
+
+    return PyFloat_FromDouble(_PyTime_AsSecondsDouble(t2 - t1));
+}
+
+
+static PyObject *
+test_bench_public_hash_double(PyObject *Py_UNUSED(module), PyObject *args)
+{
+    Py_ssize_t loops;
+    if (!PyArg_ParseTuple(args, "n", &loops)) {
+        return NULL;
+    }
+    double d = 1.0;
+
+    _PyTime_t t1 = _PyTime_GetPerfCounter();
+    for (Py_ssize_t i=0; i < loops; i++) {
+        Py_hash_t hash;
+        if (PyHash_Double(d, &hash) == 0) {
+            return NULL;
+        }
+    }
+    _PyTime_t t2 = _PyTime_GetPerfCounter();
+
+    return PyFloat_FromDouble(_PyTime_AsSecondsDouble(t2 - t1));
+}
+
+
 static PyMethodDef module_functions[] = {
     {"get_configs", get_configs, METH_NOARGS},
     {"get_recursion_depth", get_recursion_depth, METH_NOARGS},
@@ -1688,6 +1733,8 @@ static PyMethodDef module_functions[] = {
     {"restore_crossinterp_data", restore_crossinterp_data,       METH_VARARGS},
     _TESTINTERNALCAPI_TEST_LONG_NUMBITS_METHODDEF
     {"get_type_module_name",    get_type_module_name,            METH_O},
+    {"bench_private_hash_double", test_bench_private_hash_double, METH_VARARGS},
+    {"bench_public_hash_double", test_bench_public_hash_double, METH_VARARGS},
     {NULL, NULL} /* sentinel */
 };
```

Bench script for (A), private API:

```python
import pyperf
import _testinternalcapi

runner = pyperf.Runner()
runner.bench_time_func('bench', _testinternalcapi.bench_private_hash_double)
```

Bench script for (B) and (C), public API:

```python
import pyperf
import _testinternalcapi

runner = pyperf.Runner()
runner.bench_time_func('bench', _testinternalcapi.bench_public_hash_double)
```

@vstinner (Member, Author)

Other benchmark results using PGO:

  • (A) 12.7 ns +- 0.0 ns
  • (B) 15.2 ns +- 0.1 ns: 1.19x slower than (A)
  • (C) 15.8 ns +- 0.0 ns: 1.24x slower than (A)

These numbers are surprising.

@vstinner (Member, Author) commented Nov 27, 2023 (edited)

These numbers are surprising.

I wrote PR #112476 to run this benchmark differently. The results look better (less surprising, more reliable than my previous benchmark).

Results with CPU isolation,gcc -O3 and PGO, but without LTO:

  • (A) 12.3 ns +- 0.0 ns: Py_hash_t hash_api_A(PyObject *inst, double v)
  • (B) 13.2 ns +- 0.0 ns: int hash_api_B(double v, Py_hash_t *result)
  • (C) 12.3 ns +- 0.0 ns: Py_hash_t hash_api_C(double v)

API (A) and (C) have the same performance.

API (B) is 0.9 ns slower than (A) and (C): it is 1.07x slower than (A) and (C).

I ran the benchmark withv = 1.0.

@vstinner (Member, Author)

API (B) is 0.9 ns slower than (A) and (C): it is 1.07x slower than (A) and (C).

If we only care about performance, an alternative is API (D): Py_hash_t hash_api_D(double v, int *is_nan), where is_nan can be NULL.

  • (A) 12.3 ns +- 0.1 ns
  • (B) 13.2 ns +- 0.0 ns: 0.9 ns slower / 1.07x slower than API (A) and (C)
  • (C) 12.3 ns +- 0.0 ns
  • (D) 12.7 ns +- 0.0 ns: 0.4 ns slower / 1.03x slower than API (A) and (C)

Note: Passing a non-NULL is_nan or a NULL is_nan has no significant impact on API (D) performance.

It may be interesting if you know that the number cannot be NaN.
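A quick sketch of how the hypothetical API (D) would be called (hash_api_D is the benchmark-only name used above):

```c
/* Caller that needs the NaN flag: */
int is_nan;
Py_hash_t hash = hash_api_D(value, &is_nan);
if (is_nan) {
    hash = _Py_HashPointer(obj);   /* pick some fallback */
}

/* Caller that knows value cannot be NaN: */
Py_hash_t fast_hash = hash_api_D(value, NULL);
```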

@vstinner (Member, Author)

+1 on the naming note, Py_HashDouble (or PyHash_HashDouble) is a bit better.

It would be interesting to design the C API namespace in a similar way to Python packages and package sub-modules: import pyhash; h = pyhash.hash_double(1.0) would become h = PyHash_HashDouble(1.0) in C. So yeah, repeat "Hash" here: PyHash is the namespace, HashDouble() is the function in the namespace.

But the Python C API is far from respecting such a design :-) The "Py_" namespace is a giant bag full of "anything".

@vstinner (Member, Author)

By the way, see also PR #112096, which adds the PyHash_Pointer() function.


@serhiy-storchaka (Member)

How much difference does it make if we remove the check for NaN from the function, but add it before calling the function, like in #112095 (comment)?

@encukou (Member)

As far as I know, microbenchmarks at this level are susceptible to “random” variations due to code layout. An individual function should be benchmarked in a variety of calling situations to get a meaningful result.

(But to reiterate: I don't think this is the place for micro-optimizations.)


@vstinner (Member, Author)

How much difference does it make if we remove the check for NaN from the function, but add it before calling the function, like in #112095 (comment)?

It would be bad to return the same hash value for +inf, -inf and NaN values. Current code:

```c
if (!Py_IS_FINITE(v)) {
    if (Py_IS_INFINITY(v))
        return v > 0 ? _PyHASH_INF : -_PyHASH_INF;
    else
        return _Py_HashPointer(inst);  // v is NaN
}
```

@vstinner (Member, Author)

As far as I know, microbenchmarks at this level are susceptible to “random” variations due to code layout. An individual function should be benchmarked in a variety of calling situations to get a meaningful result.

We can take that into account in the API design. Now we know that the API int hash_api_B(double v, Py_hash_t *result) is around 1.07x slower than the other discussed APIs, which means less than a nanosecond (0.9 ns) in absolute timing.

I don't think that 0.9 ns is a significant difference. If a workload is impacted by 0.9 ns per function call, maybe it should copy the PyHash_Double() code and design a very specialized flavor for that workload (e.g., remove the code for infinity and NaN, and inline everything).

In terms of performance, I think that any proposed API is fine.

@vstinner (Member, Author)

@serhiy-storchaka @encukou: Do you prefer the PyHash_Double(), PyHash_HashDouble() or Py_HashDouble() name?

Now that I have read the previous discussions, I prefer the Py_HashDouble() name. It fits better in the current C API naming scheme.

@encukou (Member)

Yeah, Py_HashDouble sounds best to me.

@vstinner (Member, Author)

First, I proposed the Py_hash_t PyHash_Double(double v) API in this PR. Then I modified it to use the int PyHash_Double(double v, Py_hash_t *result) API.

I'm not fully comfortable with this API either, so I'm closing this PR.

Let's continue the discussion in PR #112449, which implements the API proposed by @encukou.
