[MPS][BE] Extend torch.special. to integer dtypes #155002


Closed
malfet wants to merge 3 commits into gh/malfet/373/base from gh/malfet/373/head

Conversation

@malfet (Contributor) commented Jun 3, 2025 (edited)

Stack from ghstack (oldest at bottom):

By changing the functor to look as follows:

```metal
struct xlog1py_functor {
  template <typename T, enable_if_t<is_floating_point_v<T>, bool> = true>
  inline T operator()(const T a, const T b) {
    return static_cast<T>(c10::metal::xlog1py(a, b));
  }
  template <typename T, enable_if_t<is_integral_v<T>, bool> = true>
  inline float operator()(const T a, const T b) {
    return c10::metal::xlog1py(float(a), float(b));
  }
};
```

Repeat the same for `zeta`, `chebyshev_polynomial_[tuvw]_functor` and `hermite_polynomial_h[e]_functor`.
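
For illustration, a minimal Python-side sketch of the user-visible effect (an assumption based on the description above, not part of this PR's diff; it presumes an MPS-enabled build that includes this change):

```python
import torch

# Sketch only: with this change, integer inputs to the extended torch.special
# ops (xlog1py, zeta, chebyshev_polynomial_*, hermite_polynomial_*) are
# expected to work on "mps" and return a floating-point result, mirroring
# the usual integer-to-float promotion on CPU.
if torch.backends.mps.is_available():
    a = torch.arange(1, 6, dtype=torch.int32, device="mps")
    b = torch.arange(1, 6, dtype=torch.int32, device="mps")
    out = torch.special.xlog1py(a, b)               # float output from int inputs
    ref = torch.special.xlog1py(a.cpu(), b.cpu())   # CPU reference
    torch.testing.assert_close(out.cpu(), ref)
```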

[ghstack-poisoned]
@malfet requested a review from kulinseth as a code owner June 3, 2025 14:19
@pytorch-bot (bot) commented Jun 3, 2025 (edited)

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/155002

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 20 Pending

As of commit 7c32ff8 with merge base 0fab322:
💚 Looks good so far! There are no failures yet. 💚

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot (bot) added the ciflow/mps (Run MPS tests (subset of trunk)) and release notes: mps (Release notes category) labels Jun 3, 2025
@malfet requested review from Skylion007 and dcci June 3, 2025 14:21
@malfet added the topic: improvements (topic category) label Jun 3, 2025
@Skylion007 changed the title [MPS][BE] Extend torch.special.xlog1py to intergers → [MPS][BE] Extend torch.special.xlog1py to integers Jun 3, 2025
@Skylion007 added the better-engineering (Relatively self-contained tasks for better engineering contributors) label Jun 3, 2025
malfet added a commit that referenced this pull request Jun 3, 2025

And maybe other special ops as well, depending on how many workarounds for compiler crashes one has to do to make it work on MacOS-13. ghstack-source-id: 93599b5. Pull Request resolved: #155002
@malfet changed the title [MPS][BE] Extend torch.special.xlog1py to integers → [MPS][BE] Extend torch.special. to integer dtypes Jun 3, 2025
@malfet (Contributor, Author) commented:

@pytorchbot merge -f "Lint + MPS are green"


@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as a last resort and instead consider -i/--ignore-current to continue the merge ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.
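
For reference, the gentler invocation suggested above would look like this (a sketch using only the flags named in this message):

```
@pytorchbot merge -i
```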

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

pytorchmergebot pushed a commit to Eliasj42/pytorch that referenced this pull request Jun 3, 2025

Pull Request resolved: pytorch#155002. Approved by: https://github.com/Skylion007, https://github.com/dcci. ghstack dependencies: pytorch#154936
pytorchmergebot pushed a commit that referenced this pull request Jun 4, 2025

That creates a _kernel_mps function that takes an iterator and calls the stub for it. Pull Request resolved: #155081. Approved by: https://github.com/dcci. ghstack dependencies: #154936, #155002
pytorchmergebot pushed a commit that referenced this pull request Jun 4, 2025
iupaikov-amd pushed a commit to ROCm/pytorch that referenced this pull request Jun 4, 2025

Pull Request resolved: pytorch#155002. Approved by: https://github.com/Skylion007, https://github.com/dcci. ghstack dependencies: pytorch#154936
iupaikov-amd pushed a commit to ROCm/pytorch that referenced this pull request Jun 4, 2025

That creates a _kernel_mps function that takes an iterator and calls the stub for it. Pull Request resolved: pytorch#155081. Approved by: https://github.com/dcci. ghstack dependencies: pytorch#154936, pytorch#155002
angelayi pushed a commit to angelayi/pytorch that referenced this pull request Jun 5, 2025

Pull Request resolved: pytorch#155002. Approved by: https://github.com/Skylion007, https://github.com/dcci. ghstack dependencies: pytorch#154936
angelayi pushed a commit to angelayi/pytorch that referenced this pull request Jun 5, 2025

That creates a _kernel_mps function that takes an iterator and calls the stub for it. Pull Request resolved: pytorch#155081. Approved by: https://github.com/dcci. ghstack dependencies: pytorch#154936, pytorch#155002
angelayi pushed a commit to angelayi/pytorch that referenced this pull request Jun 5, 2025
@github-actions (bot) deleted the gh/malfet/373/head branch July 4, 2025 02:22

Reviewers

@Skylion007 approved these changes

@dcci approved these changes

@kulinseth: awaiting requested review

Assignees

No one assigned

Labels

better-engineering (Relatively self-contained tasks for better engineering contributors), ciflow/mps (Run MPS tests (subset of trunk)), Merged, release notes: mps (Release notes category), topic: improvements (topic category)

Projects

None yet

Milestone

No milestone

Development

Successfully merging this pull request may close these issues.

5 participants

@malfet, @pytorchmergebot, @Skylion007, @dcci
