
Implicit float conversion in function calls #1908


Merged
filmor merged 2 commits into pythonnet:release from filmor:implicit-float-int on Sep 20, 2022

Conversation

@filmor (Member)

What does this implement/fix? Explain your changes.

Uses __float__ when available if a Python object is passed to a function with a Double or Single parameter.
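In CPython terms, this conversion path amounts to calling float() on the argument. A minimal pure-Python sketch of the protocol involved (the Reading class is purely illustrative, not part of the PR):

```python
class Reading:
    """A float-like object, similar in spirit to numpy scalar types."""

    def __init__(self, value):
        self._value = value

    def __float__(self):
        # PyNumber_Float, and hence a conversion to Double/Single as
        # described above, ends up calling this method.
        return float(self._value)


# float() succeeds for any object implementing __float__:
print(float(Reading(21.5)))  # 21.5
```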

Does this close any currently open issues?

#1833

Any other comments?

This is not yet supported for integer types, which would use __int__ or __index__ (for Python >= 3.10), as the respective logic is for some reason only implemented for AsLongLong and AsLong, not for the unsigned counterparts and also not for the AsSize_t variants that we are using.
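For reference, the integer protocol mentioned above can be exercised from pure Python via operator.index, which mirrors what PyNumber_Index does at the C level (the Count class is illustrative):

```python
import operator


class Count:
    """An int-like object exposing only __index__."""

    def __init__(self, n):
        self._n = n

    def __index__(self):
        # PyNumber_Index (and, from Python 3.10 on, the PyLong_As*
        # converters mentioned above) resolve through this method.
        return self._n


print(operator.index(Count(3)))  # 3
```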

Checklist

Check all those that are applicable and complete.

  • Make sure to include one or more tests for your change
  • If an enhancement PR, please create docs and at best an example
  • Ensure you have signed the .NET Foundation CLA
  • Add yourself to AUTHORS
  • Update the CHANGELOG

@lostmsu (Member)

Implicit casts to float were disabled intentionally. For instance, float(torch.tensor(42)) is a float. However, if implicit casts are enabled, Python.NET cannot choose an overload between float and Tensor if the C# side has a Tensor wrapper via a custom codec.
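The ambiguity can be reproduced without torch: any wrapper implementing __float__ converts cleanly, so with implicit casts enabled both a float overload and a codec-backed Tensor overload would accept the same object (FakeTensor is a stand-in for torch.Tensor):

```python
class FakeTensor:
    """Stand-in for torch.Tensor: not a float, but float-convertible."""

    def __float__(self):
        return 42.0


t = FakeTensor()

# With implicit conversion, t would satisfy a float/double parameter...
assert float(t) == 42.0
# ...even though the object itself is not a float, so a codec mapping
# FakeTensor to a C# Tensor wrapper would match the other overload.
assert not isinstance(t, float)
```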

@filmor (Member, Author)

The (expected) order here is:

  1. Check if a type matches exactly
  2. Check if a custom codec matches
  3. Check if the parameter type is Double or Single; if that is the case and the Python parameter is "floaty" (i.e. PyNumber_Float returns something != NULL), use the converted value
  4. Check if the parameter type is an integer type and do the same with PyNumber_Long or PyNumber_Index

This should lead to a sensible overload resolution.
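The priority order above can be sketched as a small resolver in pure Python (all names and the CODECS registry are illustrative, not the actual Python.NET internals):

```python
# Illustrative codec registry: (python_type, target_type) -> converter.
CODECS = {}


def convert(obj, target):
    """Sketch of the conversion priority described above."""
    if type(obj) is target:                              # 1. exact type match
        return obj
    codec = CODECS.get((type(obj), target))              # 2. custom codec
    if codec is not None:
        return codec(obj)
    if target is float and hasattr(obj, "__float__"):    # 3. "floaty" objects
        return float(obj)                                #    (PyNumber_Float != NULL)
    if target is int and hasattr(obj, "__index__"):      # 4. integer protocol
        return int(obj)                                  #    (PyNumber_Index)
    raise TypeError(f"cannot convert {obj!r} to {target.__name__}")
```

A codec registered for a type wins in step 2 before the generic __float__ fallback in step 3, which is the ordering argued here to yield sensible overload resolution.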

@lostmsu (Member) commented Aug 16, 2022 (edited)

> Check if a type matches exactly

This can never happen with a custom Python type.

Also, we do not really perform overload resolution prioritization like that. E.g. the __call__ impl just passes the object to Convert and checks if conversion succeeded.

@filmor (Member, Author)

Can you please provide a test case that you expect to fail with this PR? I'm fine with working on the overload resolution, but I think making CsharpFunc(float_like) fail complicates things on the user side in a very error-prone manner.

@lostmsu (Member) commented Sep 14, 2022 (edited)

I described the test case above.

  1. Create an empty class Tensor in C#, and a codec along the lines of return isinstance(o, torch.tensor) ? new Tensor() : null;
  2. Get a C# function with two overloads: Foo(float) (or maybe Foo(double)) and Foo(Tensor)
  3. Call it from Python with Foo(torch.tensor(42))

See it pick the overload based on the weather. This is because we actually do not perform any overload resolution, and this behavior prevents users from forcing an exact type match with codecs when the type on the Python side has __float__.

If somebody wants this behavior, they can register a codec for types implementing __float__ or just explicitly cast their floaty types to float.

What we might want is for System.Double(np_float64_val) to produce a Double value (which I am not sure it does currently).

filmor force-pushed the implicit-float-int branch 3 times, most recently from 882ccb6 to c4325dd on September 19, 2022
@lostmsu (Member)

@filmor I think we still should not just enable __float__ for all types that have it, because there are bogus ones that do that and should never be converted without an explicit cast. Implicit conversion should be reserved only for types that explicitly represent identically sized numbers, e.g. numpy.float64, ctypes.int32, etc.

@filmor (Member, Author)

That effectively boils down to just using the "real" Runtime.PyFloat_Check instead of our botched version. I'd be completely fine with that.

filmor marked this pull request as ready for review on September 19, 2022
@filmor (Member, Author) commented Sep 19, 2022 (edited)

Just to recap: before attempting a conversion, this will now check whether the object identifies as a float using PyFloat_Check, which has been adjusted in the way @rzindler proposed in #1936. All other changes are just cosmetic (plus one minor bug-fix adjustment for the Int64 conversion) and test cases.
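In CPython, PyFloat_Check(op) is essentially isinstance(op, float): it accepts float and its subclasses (numpy.float64 subclasses float, for example) while rejecting objects that merely implement __float__. A pure-Python illustration (both classes are made up for the example):

```python
class ExactFloat(float):
    """A float subclass, like numpy.float64: passes PyFloat_Check."""


class FloatLike:
    """Implements only __float__: fails PyFloat_Check."""

    def __float__(self):
        return 1.0


assert isinstance(ExactFloat(1.0), float)   # eligible for implicit conversion
assert not isinstance(FloatLike(), float)   # needs an explicit float(...) cast
```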

@filmor (Member, Author)

@lostmsu If this is mergeable for you, I think it would solve the reported issue sufficiently that we can continue with the releases without having to touch method resolution right now (which I also really don't want to). PyTorch's tensor does not derive from float64, so this particular case should not be a problem.

@lostmsu (Member)

My problem with this approach is that it does not allow us to do:

string Foo(float x) => "float32";
string Foo(double x) => "float64";

Foo(numpy.float32(42.0))

@filmor (Member, Author)

> My problem with this approach is that it does not allow us to do:
>
> string Foo(float x) => "float32";
> string Foo(double x) => "float64";
>
> Foo(numpy.float32(42.0))

It does, you just need to be explicit. But I get your point; I just think that the reasonable default is more important than optimising this overload. We can document this particular issue.

@rzindler

Thanks guys!

@lostmsu (Member)

I think we can do better than document this, and actually special-case ctypes, NumPy, PyTorch, and TensorFlow types that have exact precision. Preferably also leave room for users to special-case other precise types.

It is not that hard, and if we don't do it now we'd have to wait until 4.0 to make another breaking change.

@filmor (Member, Author)

I'm strongly against special-casing particular library types. If you want to allow a proper float32, that's an entirely new issue, as is allowing primitive type conversion to be modified.

@lostmsu (Member)

That's exactly what I want to do in a week or two.

@filmor (Member, Author)

> That's exactly what I want to do in a week or two.

I'll merge this in the meantime. Feel free to revert or adjust the behaviour when you are attempting your refactoring. In this state, I'd be fine to publish it.

filmor merged commit 52cf03c into pythonnet:release on Sep 20, 2022
filmor deleted the implicit-float-int branch on September 20, 2022
Reviewers

@lostmsu left review comments

3 participants
@filmor, @lostmsu, @rzindler
