[ONNX] Add binary_cross_entropy_with_logits op to ONNX opset version 12 #49675


Merged
BowenBao merged 232 commits into pytorch:onnx_ms_1 from hwangdeyu:deyu/bce_with_logits_sy12 on Jan 20, 2021

Conversation

@hwangdeyu
Collaborator

Fixes #47997
Exporting the operator binary_cross_entropy_with_logits to ONNX opset version 12.
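
For reference, a minimal export sketch of what this PR enables (the module, shapes, and file name are illustrative and not taken from the PR; assumes a PyTorch build that contains this change):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BCEWithLogits(nn.Module):
    def forward(self, logits, target):
        # The op this PR adds an ONNX symbolic for in opset 12.
        return F.binary_cross_entropy_with_logits(logits, target)

logits = torch.randn(4, 3)
target = torch.empty(4, 3).random_(2)

# opset_version=12 selects the opset version this PR targets.
torch.onnx.export(BCEWithLogits(), (logits, target),
                  "bce_with_logits.onnx", opset_version=12)
```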

@facebook-github-bot
Contributor

facebook-github-bot commented on Dec 21, 2020 (edited)

💊 CI failures summary and remediations

As of commit 0e09ee9 (more details on the Dr. CI page):



5 failures not recognized by patterns:

Job | Step | Action
CircleCI pytorch_linux_xenial_py3_clang5_asan_test2 | Run tests | 🔁 rerun
CircleCI pytorch_linux_bionic_py3_8_gcc9_coverage_test2 | Run tests | 🔁 rerun
CircleCI pytorch_xla_linux_bionic_py3_6_clang9_test | Run tests | 🔁 rerun
CircleCI pytorch_linux_bionic_py3_8_gcc9_coverage_test1 | Run tests | 🔁 rerun
CircleCI pytorch_linux_xenial_py3_clang5_asan_test1 | Run tests | 🔁 rerun

❄️ 3 failures tentatively classified as flaky, but reruns have not yet been triggered to confirm:

See CircleCI build pytorch_linux_xenial_cuda10_2_cudnn7_py3_jit_legacy_test (1/3)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun) ❄️

Waiting for a VM assignment: ...
Build-agent version 1.0.50617-fbf8220b (2021-01-14T08:47:18+0000)
Creating a dedicated VM with ubuntu-1604:202007-01 image
We timed out preparing a VM for this build, potentially due to our infrastructure or cloud provider. Please retry the build in a few minutes.
Unexpected capacity error: error caused by capacity

See CircleCI build pytorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_test1 (2/3)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun) ❄️

Waiting for a VM assignment: ...
Build-agent version 1.0.50617-fbf8220b (2021-01-14T08:47:18+0000)
Creating a dedicated VM with ubuntu-1604:202007-01 image
We timed out preparing a VM for this build, potentially due to our infrastructure or cloud provider. Please retry the build in a few minutes.
Unexpected capacity error: error caused by capacity

See CircleCI build pytorch_linux_xenial_cuda10_2_cudnn7_py3_gcc7_test2 (3/3)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun) ❄️

Waiting for a VM assignment: ...
Build-agent version 1.0.50617-fbf8220b (2021-01-14T08:47:18+0000)
Creating a dedicated VM with ubuntu-1604:202007-01 image
We timed out preparing a VM for this build, potentially due to our infrastructure or cloud provider. Please retry the build in a few minutes.
Unexpected capacity error: error caused by capacity

ci.pytorch.org: 1 failed



BowenBao changed the title from "Add binary_cross_entropy_with_logits op to ONNX opset version 12" to "[ONNX] Add binary_cross_entropy_with_logits op to ONNX opset version 12" on Dec 22, 2020
@BowenBao
Collaborator

Please rebase with onnx_ms_1 to resolve CI issues.

bertmaher and others added 17 commits on December 23, 2020 15:17
Summary:
Pull Request resolved: pytorch#49396
Pull Request resolved: pytorch#49271
Two things:
1. These throw exceptions in their constructor, which causes a segfault (*), so move the exceptions to ::make.
2. They technically support FP types but the rules are complicated, so let's not bother.
(*) The reason for the segfault: all Exprs, including these, inherit from KernelScopedObject, whose constructor adds the object to a list for destruction at the end of the containing KernelArena's lifetime. But if the derived-class constructor throws, the object is deleted even though it's still in the KernelArena's list. So when the KernelArena is itself deleted, it double-frees the pointer and dies. I've also fixed And, Or, and Xor in this diff.
ghstack-source-id: 118594998
Test Plan: `buck test //caffe2/test:jit`
Reviewed By: bwasti
Differential Revision: D25512052
fbshipit-source-id: 42670b3be0cc1600dc5cda6811f7f270a2c88bba
Summary:
Pull Request resolved: pytorch#49340
This refines the fusion group to include only certain types of operations. We cannot safely handle "canRunNatively" types, and the memonger pass causes regressions on some internal models, so it was disabled (to be revisited with proper memory optimization once Tensor pools are implemented).
Test Plan:
`buck test mode/no-gpu caffe2/test:static_runtime`
`buck test //caffe2/benchmarks/static_runtime:static_runtime_cpptest`
Reviewed By: ZolotukhinM
Differential Revision: D25520105
fbshipit-source-id: add61d103e4f8b4615f5402e760893ef759a60a9
Summary: Pull Request resolved: pytorch#48992
Differential Revision: D25388100
Test Plan: Imported from OSS
Reviewed By: heitorschueroff
Pulled By: ZolotukhinM
fbshipit-source-id: d95713af2220cf4f99ac92f59f8e5b902f2f3822
Summary:
BC-breaking note: This PR changes the behavior of the any and all functions to always return a bool tensor. Previously these functions were only defined on bool and uint8 tensors, and when called on uint8 tensors they would also return a uint8 tensor. (When called on a bool tensor they would return a bool tensor.)
PR summary: pytorch#44790 (comment). Fixes 2 and 3. Also fixes pytorch#48352.
Changes:
* Output dtype is always `bool` (consistent with numpy). BC breaking (previously used to match the input dtype).
* Uses vectorized version for all dtypes on CPU.
* Enables test for complex.
* Updates docs for `torch.all` and `torch.any`.
TODO:
* [x] Update docs
* [x] Benchmark
* [x] Raise issue on XLA
Pull Request resolved: pytorch#47878
Reviewed By: H-Huang
Differential Revision: D25421263
Pulled By: mruberry
fbshipit-source-id: c6c681ef94004d2bcc787be61a72aa059b333e69
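
As a quick illustration of the BC-breaking change described in the commit message above (a minimal sketch; values are illustrative and assume a build containing this change):

```python
import torch

u8 = torch.tensor([0, 2, 0], dtype=torch.uint8)

# Previously torch.any/torch.all on a uint8 tensor returned uint8;
# after this change the result dtype is always bool, matching NumPy.
print(torch.any(u8).dtype)  # torch.bool
print(torch.all(u8).dtype)  # torch.bool
```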
…L_LAUNCH_CHECK() (pytorch#49424)
Summary: Pull Request resolved: pytorch#49424
As per conversation in this comment (https://www.internalfb.com/intern/diff/D25541113/?dest_fbid=393026838623691&transaction_id=3818008671564312) on D25541113 (pytorch@e2510a0), although THError does more than just log any errors associated with cuda kernel launches, we're going to go ahead and replace it with C10_CUDA_KERNEL_LAUNCH_CHECK, so as to be consistent throughout the code base. Standardization FTW.
This commit is purposefully sent in as a single-file change so it can be easily reverted if it introduces a regression.
Test Plan: Checked that the code still builds with `buck build //caffe2/aten:ATen-cu`. Also ran basic aten tests: `buck test //caffe2/aten:atest`.
Reviewed By: r-barnes
Differential Revision: D25567863
fbshipit-source-id: 1093bfe2b6ca6b9a3bfb79dcdc5d713f6025eb77
Summary:
Signed-off-by: caozhong <zhong.z.cao@intel.com>
Pull Request resolved: pytorch#48827
Reviewed By: agolynski
Differential Revision: D25375988
Pulled By: ailzhang
fbshipit-source-id: a8d5ab4572d991d6d96dfe758011517651ff0a6b
…ings.warn (pytorch#49313)
Summary: Adds a flag torch_jit_disable_warning_prints to optimize interpreter performance by suppressing a potentially large number of warnings.warn calls.
This works around TorchScript's warning behavior mismatch with Python. Python by default triggers a warning once per location, but TorchScript doesn't support that, so the same warning triggers and prints once per inference run, hurting performance.
Pull Request resolved: pytorch#49313
Reviewed By: SplitInfinity
Differential Revision: D25534274
Pulled By: gmagogsfm
fbshipit-source-id: eaeb57a335c3e6c7eb259671645db05d781e80a2
…s in async execution (pytorch#49322)
Summary: Pull Request resolved: pytorch#49322
In some cases async execution might lose dependencies (alias-like ops) or produce suboptimal scheduling when there is a choice of which parts to schedule first. An example of the latter behavior can happen in ModelParallel training, where a copy can get lower priority compared to the rest of the execution on the given GPU, which will cause other GPUs to starve.
This operator allows us to address these issues by introducing extra explicit dependencies between ops.
Test Plan: Unit-test/E2E testing in the future diffs.
Reviewed By: xianjiec
Differential Revision: D24933471
fbshipit-source-id: 1668994c7856d73926cde022378a99e1e8db3567
Summary: Pull Request resolved: pytorch#49415
Test Plan: Imported from OSS
Reviewed By: zdevito
Differential Revision: D25565341
Pulled By: jamesr66a
fbshipit-source-id: 2290ab62572632788809ba16319578bf0c0260ee
…reapply) (pytorch#49408)
Summary: Pull Request resolved: pytorch#49408
Nearly every non-test callsite doesn't need to capture any variables anyway, and this saves 48 bytes per callback.
ghstack-source-id: 118665808
Test Plan: Wait for GitHub CI, since we had C++14-specific issues with this one in the previous PR pytorch#48629.
Reviewed By: malfet
Differential Revision: D25563207
fbshipit-source-id: 6a2831205917d465f8248ca37429ba2428d5626d
Summary: Since NCCL is an optional CUDA dependency, remove nccl.cpp from the core filelist.
Pull Request resolved: pytorch#49429
Reviewed By: nikithamalgifb
Differential Revision: D25569883
Pulled By: malfet
fbshipit-source-id: 61371a4c6b0438e4e0a7f094975b9a9f9ffa4032
Summary: Fixes pytorch#47462, but not completely.
Update breathe to the latest version to get fixes for the "Unable to resolve..." issues. There are still some build errors, but much fewer than before.
Pull Request resolved: pytorch#49407
Reviewed By: izdeby
Differential Revision: D25562163
Pulled By: glaringlee
fbshipit-source-id: 91bfd9e9ac70723816309f489022d72853f5fdc5
Summary: Pull Request resolved: pytorch#49447
Adding an out variant for `permute`. It's better than fixing the copy inside contiguous because 1) we can leverage the c2 math library, and 2) contiguous creates a tensor inside the function which isn't managed by the MemoryPlanner in StaticRuntime.
Test Plan: Benchmark:
After:
I1214 12:35:32.218775 991920 PyTorchPredictorBenchLib.cpp:209] PyTorch run finished. Milliseconds per iter: 0.0902339. Iters per second: 11082.3
Before:
I1214 12:35:43.368770 992620 PyTorchPredictorBenchLib.cpp:209] PyTorch run finished. Milliseconds per iter: 0.0961521. Iters per second: 10400.2
Reviewed By: yinghai
Differential Revision: D25541666
fbshipit-source-id: 013ed0d4080cd01de4d3e1b031ab51e5032e6651
Summary: Pull Request resolved: pytorch#49388
Test Plan: Imported from OSS
Reviewed By: zou3519
Differential Revision: D25553672
Pulled By: glaringlee
fbshipit-source-id: e9f2233bd678a90768844af2d8d5e2994d59e304
…ets (pytorch#49113)
Summary: Pull Request resolved: pytorch#49113
Reviewed By: ajyu
Differential Revision: D25388512
fbshipit-source-id: 3daa5b9387a3a10b6c220688df06540c4d844aea
pytorch#49346)
Summary: Pull Request resolved: pytorch#49346
This is a less ambitious redo of pytorch#49129. We make the `xq_slice = xq[:, [0], :, :]` indexing syntax work if `xq` is a quantized Tensor. For now, we are making the code not crash, with an inefficient `dq -> index -> q` implementation. A future PR can optimize performance by removing the unnecessary memory copies (which will require some non-trivial changes to TensorIterator).
Test Plan: `python test/test_quantization.py TestQuantizedOps.test_advanced_indexing`
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D25539365
fbshipit-source-id: 98485875aaaf5743e1a940e170258057691be4fa
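
A minimal sketch of the indexing pattern this commit enables (scale, zero_point, and shapes are illustrative, not from the commit):

```python
import torch

x = torch.randn(2, 3, 4, 4)
# Arbitrary quantization parameters, chosen only for illustration.
xq = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)

# Advanced indexing on the quantized tensor; per the commit message this is
# currently implemented as dequantize -> index -> quantize.
xq_slice = xq[:, [0], :, :]
print(xq_slice.shape)  # torch.Size([2, 1, 4, 4])
```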
Summary: Pull Request resolved: pytorch#49373
Unescaping the string in the RPC error message to provide a better error msg.
Test Plan: CI
Reviewed By: xush6528
Differential Revision: D25511730
fbshipit-source-id: 054f46d5ffbcb1350012362a023fafb1fe57fca1
@BowenBao (Collaborator) left a comment


LGTM, thanks! Minor comments on improving the helper function.

BowenBao merged commit 566406c into pytorch:onnx_ms_1 on Jan 20, 2021
BowenBao added a commit that referenced this pull request on Jan 21, 2021
…12 (#49675)
Fixes #47997. Exporting the operator binary_cross_entropy_with_logits to ONNX opset version 12.
[ghstack-poisoned]

BowenBao added a commit that referenced this pull request on Jan 21, 2021
…et version 12 (#49675)"
Fixes #47997. Exporting the operator binary_cross_entropy_with_logits to ONNX opset version 12.
[ghstack-poisoned]

BowenBao added a commit that referenced this pull request on Jan 22, 2021
…et version 12 (#49675)"
Fixes #47997. Exporting the operator binary_cross_entropy_with_logits to ONNX opset version 12.
[ghstack-poisoned]

BowenBao added a commit that referenced this pull request on Jan 22, 2021
…et version 12 (#49675)"
Fixes #47997. Exporting the operator binary_cross_entropy_with_logits to ONNX opset version 12.
Differential Revision: [D26023936](https://our.internmc.facebook.com/intern/diff/D26023936)
[ghstack-poisoned]

BowenBao added a commit that referenced this pull request on Jan 25, 2021
…et version 12 (#49675)"
Fixes #47997. Exporting the operator binary_cross_entropy_with_logits to ONNX opset version 12.
[ghstack-poisoned]

BowenBao added a commit that referenced this pull request on Jan 25, 2021
…et version 12 (#49675)"
Fixes #47997. Exporting the operator binary_cross_entropy_with_logits to ONNX opset version 12.
Differential Revision: [D26050885](https://our.internmc.facebook.com/intern/diff/D26050885)
[ghstack-poisoned]

BowenBao added a commit that referenced this pull request on Jan 26, 2021
…et version 12 (#49675)"
Fixes #47997. Exporting the operator binary_cross_entropy_with_logits to ONNX opset version 12.
Differential Revision: [D26050885](https://our.internmc.facebook.com/intern/diff/D26050885)
[ghstack-poisoned]

facebook-github-bot pushed a commit that referenced this pull request on Jan 28, 2021
…12 (#49675) (#50908)
Summary: Pull Request resolved: #50908
Fixes #47997. Exporting the operator binary_cross_entropy_with_logits to ONNX opset version 12.
Test Plan: Imported from OSS
Reviewed By: pbelevich
Differential Revision: D26050885
Pulled By: SplitInfinity
fbshipit-source-id: e4167895eed804739aa50481679500a4d564b360

BowenBao added a commit to BowenBao/pytorch that referenced this pull request on Jan 28, 2021
…12 (pytorch#49675)
Fixes pytorch#47997. Exporting the operator binary_cross_entropy_with_logits to ONNX opset version 12.
ghstack-source-id: 4d3467d
Pull Request resolved: pytorch#50908
hwangdeyu deleted the deyu/bce_with_logits_sy12 branch on August 30, 2021 03:17

Reviewers

BowenBao approved these changes

Awaiting requested review from albanD

Assignees

No one assigned

Projects

None yet

Milestone

No milestone

Development

Successfully merging this pull request may close these issues.

20 participants

@hwangdeyu @facebook-github-bot @BowenBao @pytorchbot @jiafatom @bertmaher @bwasti @kshitij12345 @CaoZhongZ @gmagogsfm @kennyhorror @swolchok @malfet @mattip @vkuzo @rohan-varma @smessmer @ivannz @zasdfgbnm @iseeyuan
