Added int32, int64 support and type inference to dnn #24411
Conversation
    inps[i] = *ld.inputBlobs[i];
}
layerPtr->finalize(inps, ld.outputBlobs);
layerPtr->preferableTarget = preferableTarget;
Moved it to line 590, because this line runs after type inference and line 590 runs before type inference.
 * order is the same as in layersIds
 */
void getLayerShapes(const MatShape& netInputShape,
                    const MatType& netInputType,
Why is the type required to get only shapes?
Since shape inference and type inference share the same pipeline, I added type inference into the shape inference pipeline. So this function will probably be renamed to getLayersShapesAndTypes.
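The merged pipeline can be sketched in a simplified, self-contained form (LayerSpec, inferShapesAndTypes, and the type codes are illustrative stand-ins, not the actual OpenCV API): one pass over the layers propagates both the shape and the element type of each blob, which is why the shape-inference entry point also needs the network input type.

```cpp
#include <cassert>
#include <functional>
#include <tuple>
#include <utility>
#include <vector>

// Simplified stand-ins for cv::dnn's MatShape / MatType (illustrative only;
// the numeric type codes are placeholders, not real OpenCV constants).
using MatShape = std::vector<int>;
using MatType  = int;
const MatType kF32 = 0, kS64 = 1;

struct LayerSpec {
    // Maps the input shape/type to the output shape/type.
    std::function<std::pair<MatShape, MatType>(const MatShape&, MatType)> infer;
};

// A single pass over the layers propagates shapes AND types together.
std::pair<MatShape, MatType>
inferShapesAndTypes(const std::vector<LayerSpec>& layers,
                    MatShape shape, MatType type)
{
    for (const LayerSpec& l : layers)
        std::tie(shape, type) = l.infer(shape, type);
    return {shape, type};
}
```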
0862333 to 2466922
asmorkalov commented Feb 6, 2024
See build errors:
asmorkalov commented Feb 14, 2024
@alexlyulkov Please rebase and fix conflicts.
867ca3a to 63f1309
const int requiredOutputs,
const int requiredInternals,
std::vector<MatType>& outputs,
std::vector<MatType>& internals) const;
Do we really need this kind of functionality in the public API?
Which user scenarios need all types of all layers at once?
It makes sense to support this scenario instead:
- user sets network inputs
- user specifies the list of requested outputs
- network compiles (internally calls the layer's .finalize())
- user asks for a layer by its name
- user asks for the types and shapes of the layer's outputs without passing any parameters like "inputs/outputs/internals"; only the index of the layer's output is passed.
Note:
- support multiple outputs per layer
- user should not know anything about "internals"
- getMemoryShapes/getLayersShapes() should be deprecated as the primary API too
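The proposed flow could be sketched like this (a hypothetical, self-contained sketch; CompiledNet, OutputInfo, and the method names are illustrative, not the real OpenCV API): after the net is compiled, each layer output is queried by layer name plus output index, with no "inputs/outputs/internals" parameters exposed to the user.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

using MatShape = std::vector<int>;
using MatType  = int;

// Hypothetical per-output query result.
struct OutputInfo { MatShape shape; MatType type; };

// Hypothetical compiled network: after inputs are set and the net is
// compiled (finalize() called internally), the shape/type of every layer
// output is known and can be queried by layer name + output index.
class CompiledNet {
public:
    void addLayerOutput(const std::string& name, const OutputInfo& info) {
        outputs_[name].push_back(info);  // supports multiple outputs per layer
    }
    OutputInfo layerOutput(const std::string& name, int index) const {
        return outputs_.at(name).at(index);
    }
private:
    std::map<std::string, std::vector<OutputInfo>> outputs_;
};
```

The key design point is that "internals" stay hidden: the user only ever names a layer and an output index.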
I think this is out of scope of this pull request. I added the new type inference pipeline similar to the existing shape inference pipeline because that way requires fewer architecture changes. I think it would be better to make a separate pull request with the public API modifications.
e2d76a6 to e659940
fengyuentau commented Feb 21, 2024
Something is broken with RAFT
dfe691c to 5964d64
}
TEST_P(Test_ONNX_nets, RAFT)
TEST_P(Test_ONNX_nets, DISABLED_RAFT)
Please create a ticket about the issue and add a reference to it in the comments.
Reminder.
if (inputs_arr.depth() == CV_16F)
{
    forward_fallback(inputs_arr, outputs_arr, internals_arr);
    return;
}
Why?
forward_fallback converts the data from float16 to float32 and runs the layer. Now max_unpooling_layer supports CV_16F, so it doesn't need forward_fallback.
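The fallback pattern being removed works roughly like this (a self-contained sketch, not the actual cv::dnn implementation; the Half stand-in and the toy op are illustrative): inputs are up-converted from float16 to float32, the float32 path runs, and the result is converted back.

```cpp
#include <cassert>
#include <vector>

// Minimal stand-in for a float16 value: stored as float but tagged,
// so the conversion steps are explicit (illustrative only).
struct Half { float v; };

static std::vector<float> toFloat32(const std::vector<Half>& in) {
    std::vector<float> out;
    for (Half h : in) out.push_back(h.v);
    return out;
}

static std::vector<Half> toFloat16(const std::vector<float>& in) {
    std::vector<Half> out;
    for (float f : in) out.push_back(Half{f});
    return out;
}

// The layer's real computation, implemented only for float32.
static std::vector<float> forwardF32(const std::vector<float>& in) {
    std::vector<float> out;
    for (float f : in) out.push_back(f * 2.0f);  // toy op
    return out;
}

// forward_fallback pattern: up-convert, run the float32 path, down-convert.
static std::vector<Half> forwardFallback(const std::vector<Half>& in) {
    return toFloat16(forwardF32(toFloat32(in)));
}
```

Once a layer implements the float16 path natively, both conversions disappear, which is the point of this change.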
if (hasDynamicShapes)
{
    updateLayersShapes();
}
Why?
As I understand it, this function runs shape inference a second time for models with dynamic shapes. I've found that this is redundant, because this line runs before the main shape inference.
@dkurt could you take a look?
}
TEST_P(Test_ONNX_nets, RAFT)
TEST_P(Test_ONNX_nets, DISABLED_RAFT)
Reminder.
… support int64 constants
2da0e42 to 083e14f
asmorkalov left a comment
👍 Looks good to me! Thanks a lot for the contribution!
asmorkalov commented Mar 1, 2024
opencv-alalek commented Mar 4, 2024 (edited)
Where is the first line of the PR description with the link to the related opencv_contrib PR? CI is totally RED.
Related PRs:
Added type inference to dnn, similar to the existing shape inference, and added int32 and int64 support.
Added int32 and int64 support for CUDA:
Passed all accuracy tests on CPU, OCL, OCL_FP16, CUDA, and CUDA_FP16 (except the RAFT model).
CURRENT PROBLEMS:
CURRENT WORKAROUNDS:
DISABLED TESTS:
REMOVED TESTS:
TODO IN NEXT PULL REQUESTS: