Add toggle functionality for XPU profiler #155135
Conversation
pytorch-bot commented Jun 4, 2025 (edited)
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/155135
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (3 Unrelated Failures)
As of commit 58b667d with merge base d99cac2:
BROKEN TRUNK - The following jobs failed but were present on the merge base.
👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
test/profiler/test_profiler.py Outdated
    TEST_WITH_ROCM,
    TestCase,
)
from torch.testing._internal.triton_utils import requires_gpu
This API is used in Inductor, especially for triton, so I think we should avoid it here.
test/profiler/test_profiler.py Outdated
if torch.cuda.is_available():
    gpu_activity = ProfilerActivity.CUDA
    device = "cuda"
elif torch.xpu.is_available():
    gpu_activity = ProfilerActivity.XPU
    device = "xpu"
Suggested change:
- if torch.cuda.is_available():
-     gpu_activity = ProfilerActivity.CUDA
-     device = "cuda"
- elif torch.xpu.is_available():
-     gpu_activity = ProfilerActivity.XPU
-     device = "xpu"
+ acc = torch.accelerator.current_accelerator()
+ self.assertIsNotNone(acc)
+ device = acc.type
+ gpu_activity = getattr(ProfilerActivity, device.upper(), None)
+ self.assertIsNotNone(gpu_activity)
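The reviewer's suggestion maps the current accelerator's device-type string onto a ProfilerActivity member via getattr, instead of branching per backend. A minimal sketch of that lookup pattern, using a stub enum in place of torch.profiler.ProfilerActivity (the stub and the helper name are illustrative, not PyTorch API):

```python
from enum import Enum


# Stub standing in for torch.profiler.ProfilerActivity (illustrative only).
class ProfilerActivity(Enum):
    CPU = "cpu"
    CUDA = "cuda"
    XPU = "xpu"


def activity_for(device_type: str) -> ProfilerActivity:
    # Mirror of the suggested pattern: derive the activity from the
    # accelerator's device-type string rather than an if/elif chain.
    activity = getattr(ProfilerActivity, device_type.upper(), None)
    if activity is None:
        raise ValueError(f"no profiler activity for device {device_type!r}")
    return activity


print(activity_for("xpu").name)  # XPU
```

The appeal of this shape is that a new accelerator backend needs no extra branch, only a matching enum member.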
test/profiler/test_profiler.py Outdated
| self.assertTrue(any("aten"ine.nameforeinp.events())) | ||
| self.assertTrue(any("cuda"ine.nameforeinp.events())) | ||
| self.assertTrue(any(str(device)ine.nameforeinp.events())) |
device should already be str type.
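The assertions above scan profiler event names for a substring. The same pattern, shown with stub event objects so it is self-contained (the event names here are made up, not real profiler output):

```python
from types import SimpleNamespace

# Stub events mimicking the .name attribute of profiler events (illustrative).
events = [SimpleNamespace(name="aten::mm"), SimpleNamespace(name="xpu::memcpy")]
device = "xpu"

has_aten = any("aten" in e.name for e in events)
has_device = any(str(device) in e.name for e in events)
print(has_aten, has_device)  # True True
```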
guangyey left a comment
LGTM.
    TEST_WITH_ROCM,
    TestCase,
    TEST_CUDA,
    TEST_XPU,
Please fix the lint error.
frost-intel commented Jun 20, 2025
@sraikund16 Any chance to get this reviewed/merged before landing deadline today?
sraikund16 commented Jun 20, 2025
@pytorchbot merge
pytorchmergebot commented Jun 20, 2025
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Fixes #154898 by adding the ability to toggle the XPU profiler on and off (the corresponding support has already been added in pytorch/kineto#1088).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
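The toggle the PR describes lets a caller pause and resume XPU event collection mid-run. A generic sketch of that behavior, using a stand-in class rather than PyTorch's actual profiler implementation (all names here are illustrative):

```python
class ToggleableProfiler:
    """Generic sketch of on/off event collection (not PyTorch's real class)."""

    def __init__(self) -> None:
        self.enabled = True
        self.events: list[str] = []

    def toggle(self, enable: bool) -> None:
        # While disabled, recorded events are silently dropped.
        self.enabled = enable

    def record(self, name: str) -> None:
        if self.enabled:
            self.events.append(name)


p = ToggleableProfiler()
p.record("xpu_kernel_a")
p.toggle(False)          # collection off: this window is not captured
p.record("xpu_kernel_b")
p.toggle(True)           # collection back on
p.record("xpu_kernel_c")
print(p.events)  # ['xpu_kernel_a', 'xpu_kernel_c']
```

The value of such a toggle is profiling only the region of interest (e.g. steady-state iterations) without restarting the profiler session.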