Split up layer dispatch of Tensorflow importer into map of functions#20190
Conversation
23a039a to c0c9dec Compare

```cpp
@@ -34,6 +34,8 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN
#if HAVE_PROTOBUF
#define CALL_MEMBER_FN(object,ptrToMember) ((object).*(ptrToMember))
```
This macro is used only once. Let's remove it and use the code directly.

```cpp
static DispatchMap dispatch;
static DispatchMap initDispatch();
void convolution(tensorflow::GraphDef& net, tensorflow::NodeDef& layer, LayerParams& layerParams);
```
`tensorflow::GraphDef& net`

It makes sense to remove this parameter and use a field of the class directly (through `this`) instead.
In the ONNX importer there is a diagnostic run, and when it is enabled another net is used instead, so I left the parameter here to be able to do the same.
What is the problem with changing the field value for that case?
opencv/modules/dnn/src/onnx/onnx_importer.cpp, lines 115 to 117 in 3e513ee:

```cpp
protected:
    Net& dstNet;
    Net utilNet;
```
The `Net` field is itself a reference, so we could make the struct self-referential, but that reference would become dangling when the object is moved.
```cpp
    DispatchMap;
static DispatchMap dispatch;
static DispatchMap initDispatch();
```
Avoid global fields which require initialization. Allocate/initialize on demand, once:

```cpp
static DispatchMap& getDispatchMap();
```

with

```cpp
DispatchMap& TFImporter::getDispatchMap()
{
    static DispatchMap g_dispatchMap;
    if (g_dispatchMap.empty())
    {
        AutoLock ...;
        if (g_dispatchMap.empty())
        {
            ...
        }
    }
    return g_dispatchMap;
}
```

```cpp
std::string name = layer.name();
std::string type = layer.op();
int num_inputs = layer.input_size();
```
BTW, many handlers have such "prologue" code.
```cpp
void TFImporter::parseNode(const tensorflow::NodeDef& layer_)
{
    tensorflow::NodeDef layer = layer_;
```
Probably this "copy" is not needed anymore.
vpisarev commented Jun 8, 2021

@rogday, thank you for the patch. One general question and one general request.

The question: could you please list the reasons why this refactoring is done? We discussed it briefly during the core team meeting and found some pros and cons of such an approach.

Pros:

Cons:

Maybe I missed something. If so, can you please share your thoughts?

The request: if we decide to keep this structure, I think the methods need to be renamed to include some common prefix, e.g. "parse". That is, instead of "concat", the method should be named "parseConcat". It will help avoid possible name conflicts if the different parsers' API later needs to be extended with functionality whose name would clash with a layer name.
rogday commented Jun 8, 2021

@vpisarev, thank you for the review. This refactoring is done because I plan to implement TF importer diagnostics similar to the ONNX ones. In the latter, each layer is duplicated both in a registered set (used for diagnostic purposes) and in an if-elseif cascade, which makes it easy to forget to add a layer in one of the places. It's also more structured this way and can be navigated more easily. The prologue can be inlined, since most of the time it just introduces constant references, and it is pretty short. If layers share common parts, those parts can be turned into functions themselves, or it can be done just like in the convolution function, which branches based on the type of the layer.
vpisarev commented Jun 9, 2021 • edited

@rogday, thank you. That's a good enough explanation for me.
vpisarev left a comment
Please rename the methods: add "parse" or some other prefix.
b1a1d40 to 0106a9c Compare
vpisarev left a comment
👍
```cpp
ReadTFNetParamsFromTextFileOrDie(config, &netTxt);
cv::AutoLock lock(getInitializationMutex());
if (instance == NULL)
    instance = new Mutex();
```
Borrowed from dnn.cpp. I'm not sure this is 100% safe — see https://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf, page 8. `Mutex` has an int refcounter which is set in the constructor, and those writes might get reordered.
```cpp
cv::AutoLock lock(getDispatchMapMutex());
if (dispatch == NULL)
{
    dispatch = &getDispatchMapImpl();
```
Why do we need both `getDispatchMap`/`getDispatchMapImpl`?

```cpp
static bool initialized = false;
static DispatchMap dispatch;
if (!initialized)
{
    AutoLock lock(getInitializationMutex());
    if (initialized)
        return dispatch;
    dispatch["Conv2D"] = ...
    ...
    initialized = true;
}
return dispatch;
```
I might be wrong, but I think the compiler is free to reorder `dispatch[layerName] = func` with `initialized = true`, hence the Impl function. Also, `initialized` should be atomic, and the default constructor of `dispatch` can be run by two threads at once. I think my solution is not perfect either, since the dispatch pointer isn't atomic.
> the default constructor of dispatch can be run by two threads at once

C++11 doesn't have that problem. But unfortunately this patch is targeted for 3.4 with C++98, so yes, this is a problem.

We can use this (I will clean up the C++98 code during the merge into the master branch):

```cpp
{
    static bool initialized = false;
#ifdef CV_CXX11
    static DispatchMap dispatch;
#else
    static DispatchMap* pDispatch = NULL;
#endif
    if (!initialized)
    {
        AutoLock lock(getInitializationMutex());
#ifndef CV_CXX11
        static DispatchMap dispatch;
        pDispatch = &dispatch;
#endif
        if (initialized)
            return dispatch;
        dispatch["Conv2D"] = ...
        ...
        initialized = true;
    }
#ifdef CV_CXX11
    return dispatch;
#else
    return *pDispatch;
#endif
}
```
There is no guarantee `pDispatch` is not `NULL` when `initialized` is `true`, because memory may be partially out of sync across different threads. I'd like to force `pDispatch` to update when `initialized` is read, but that is only possible via the release-acquire semantics of atomic loads and stores. It means `initialized` has to be the equivalent of `atomic<bool>`, which we don't have. Another way is to lock the mutex prior to reading `initialized`.
OK, what about simplifying the C++11 code path?

```diff
 const TFImporter::DispatchMap& TFImporter::getDispatchMap()
 {
+#ifdef CV_CXX11
+    static DispatchMap& dispatch = getDispatchMapImpl();
+    return dispatch;
+#else
     static const TFImporter::DispatchMap* volatile dispatch = NULL;
     if (dispatch == NULL)
     {
         cv::AutoLock lock(getDispatchMapMutex());
         if (dispatch == NULL)
         {
             dispatch = &getDispatchMapImpl();
         }
     }
     return *dispatch;
+#endif
 }
```

```cpp
@@ -510,2051 +510,2301 @@ class TFImporter
private:
    void addPermuteLayer(const int* order, const std::string& permName, Pin& inpId);
    typedef std::map<std::string, void (TFImporter::*)(tensorflow::GraphDef&, const tensorflow::NodeDef&, LayerParams&)>
        DispatchMap;
```
```cpp
typedef void (TFImporter::*TFImporterNodeParser)(tensorflow::GraphDef&, const tensorflow::NodeDef&, LayerParams&);
typedef std::map<std::string, TFImporterNodeParser> DispatchMap;
```

```cpp
};
TFImporter::TFImporter(Net& net, const char *model, const char *config)
    : dstNet(net)
Mutex& getDispatchMapMutex()
```
Please reuse `Mutex& getInitializationMutex();` instead (defined in dnn_common.hpp, already included).
vpisarev commented Jun 11, 2021

@alalek, together with @rogday we briefly discussed this PR during the meeting. How about creating the map of layer names and parsing functions dynamically, each time a model is loaded? The overhead is tiny, both in consumed memory (probably just a few KB) and in time (I guess far below 1 ms). Such a solution would completely eliminate the need to use a mutex and to worry about proper deallocation of the map at the end.
alalek commented Jun 11, 2021

This makes sense. It would also help to configure possible custom handlers in a flexible way.
Pull Request Readiness Checklist
See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request
Patch to opencv_extra has the same branch name.