CN115689966A - Medical object recognition method, apparatus, computer device and storage medium - Google Patents

Medical object recognition method, apparatus, computer device and storage medium

Info

Publication number
CN115689966A
CN115689966A
Authority
CN
China
Prior art keywords
medical object
initial
target
region
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110830107.9A
Other languages
Chinese (zh)
Inventor
陈俊强
杨溪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Weiwei Medical Technology Co ltd
Original Assignee
Shanghai Weiwei Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Weiwei Medical Technology Co ltd
Priority to CN202110830107.9A
Publication of CN115689966A
Legal status: Pending

Abstract

The application relates to a medical object identification method, a medical object identification device, a computer device and a storage medium. The method comprises the following steps: acquiring a medical object image to be processed; performing preliminary segmentation on the medical object image to be processed to obtain an initial medical object region; obtaining a type of an initial medical object in the initial medical object region; determining a corresponding target medical object recognition model according to the type of the initial medical object; and segmenting the initial medical object region through the target medical object recognition model to obtain a target medical object. By adopting the method, the segmentation precision and accuracy can be improved.

Description

Medical object recognition method, apparatus, computer device and storage medium
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a medical object recognition method, apparatus, computer device, and storage medium.
Background
With the development of computer technology, computed tomography has emerged. It can be used to perform imaging scans of the bones of human bodies, animal bodies, and the like, after which bone segmentation is performed on the scanned bone images.
In the conventional technology, bones can be segmented either manually by experienced medical experts or by threshold-based segmentation. However, manual segmentation results vary greatly between operators and consume considerable time and effort, while threshold segmentation can only separate bones roughly and cannot distinguish fine details. Segmentation based on model matching has therefore emerged.
However, current segmentation based on model matching is not accurate, because matching against a plurality of models is required and the bones of different subjects may differ.
Disclosure of Invention
In view of the above, it is necessary to provide a medical object recognition method, apparatus, computer device and storage medium capable of improving the segmentation accuracy and precision.
A medical object identification method, the method comprising:
acquiring a medical object image to be processed;
performing primary segmentation on the medical object image to be processed to obtain an initial medical object region;
obtaining a type of an initial medical object in the initial medical object region;
determining a corresponding target medical object recognition model according to the type of the initial medical object;
and segmenting the initial medical object region through the target medical object recognition model to obtain a target medical object.
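The claimed steps can be sketched, purely as an illustration, in the following minimal pipeline. Every function and model name here is a hypothetical stand-in, and the type-classification heuristic is invented for the sketch, not taken from the patent:

```python
import numpy as np

def classify_object_type(region_mask):
    # Hypothetical classifier: here the type is decided from the region's
    # voxel count; the patent suggests using outer contour, shape, and
    # relative position instead.
    return "pelvis" if region_mask.sum() > 10 else "skull"

def recognize_medical_object(image, initial_model, target_models):
    # Preliminary segmentation -> initial medical object region (binary mask)
    initial_region = initial_model(image)
    # Determine the type of the initial medical object
    object_type = classify_object_type(initial_region)
    # Select the corresponding target recognition model for that type
    target_model = target_models[object_type]
    # Fine segmentation of the initial region yields the target medical object
    return object_type, target_model(image * initial_region)
```

Here `initial_model` and the entries of `target_models` stand in for the pre-trained networks the patent describes.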
In one embodiment, the segmenting the initial medical object region by the target medical object recognition model to obtain the target medical object includes:
segmenting the initial medical object region according to a multi-target medical object recognition model obtained through pre-training to obtain different target medical objects; or
sequentially inputting the initial medical object region into different medical object identification submodels to respectively obtain different target medical objects.
In one embodiment, the segmenting the initial medical object region by the target medical object recognition model to obtain the target medical object includes:
inputting the initial medical object region into a first medical object identification submodel to identify and obtain a current target medical object;
processing the initial medical object region according to the current target medical object to obtain a current residual medical object region, and acquiring a next cascade medical object identification submodel as a current medical object identification submodel;
inputting the current remaining medical object area into the current medical object identification submodel to identify and obtain a next current target medical object, and taking the next current target medical object as a current target medical object;
updating the current remaining medical object area according to the current target medical object, and continuing to acquire the next cascaded medical object identification submodel as the current medical object identification submodel, until all cascaded medical object identification submodels have identified and processed the current remaining medical object area;
all identified current target medical objects are acquired as resulting different target medical objects.
In one embodiment, the number of medical object identification submodels is related to the kind of the target medical object in the initial medical object region.
In one embodiment, the preliminary segmentation of the medical object image to be processed to obtain an initial medical object region includes:
performing preliminary segmentation on the medical object image to be processed through an initial medical object recognition model obtained through pre-training to obtain an initial medical object region; the network structure of the initial medical object recognition model is the same as that of the target medical object recognition model, and the parameters are different; or the network structure of the initial medical object recognition model is the same as that of the medical object recognition sub-model, and the parameters are different.
In one embodiment, the segmenting the initial medical object region by the target medical object recognition model to obtain the target medical object includes:
cutting the medical object image to be processed according to the initial medical object area;
and segmenting the region obtained by cutting through the target medical object identification model to obtain the target medical object.
In one embodiment, the medical object image to be processed is a pelvic image, the initial medical object region is a pelvic region, and the target medical object is a sacrum and/or an ilium; after the target medical object is obtained by segmenting the initial medical object region through the target medical object recognition model, the method includes:
acquiring a symmetry axis of the pelvic region, and performing mirror image operation on the sacrum and/or the ilium according to the symmetry axis;
performing pelvic repair according to the sacrum and/or ilium after the mirroring operation.
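A minimal sketch of the mirroring step, assuming the symmetry axis coincides with the mid-plane of the volume along one array axis (the patent does not specify how the axis is obtained; `repair_defect` and its defect heuristic are illustrative, not the patent's method):

```python
import numpy as np

def mirror_across_axis(mask, axis_index):
    # Flip the segmented bone mask across the symmetry plane of the pelvic
    # region; the plane is assumed to be the array mid-plane along axis_index.
    return np.flip(mask, axis=axis_index)

def repair_defect(pelvis_mask, healthy_side_mask, axis_index):
    # Voxels present in the mirrored healthy side but missing from the
    # pelvis are candidates for the missing (to-be-repaired) portion.
    mirrored = mirror_across_axis(healthy_side_mask, axis_index)
    return mirrored & ~pelvis_mask
```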
In one embodiment, the medical object image to be processed is a pelvic image, the initial medical object region is a pelvic region, and the target medical object is a sacrum and/or an ilium; after the target medical object is obtained by segmenting the initial medical object region through the target medical object recognition model, the method includes:
acquiring a symmetry axis of the pelvis region, and determining a pelvis loss part according to the symmetry axis;
determining a pelvic bone repair material according to the pelvic bone deletion part;
generating pelvic modeling parameters from the pelvic repair material, the sacrum, and/or ilium.
A medical object recognition apparatus, the apparatus comprising:
the image acquisition module is used for acquiring an image of a medical object to be processed;
the preliminary segmentation module is used for carrying out preliminary segmentation on the medical object image to be processed to obtain an initial medical object region;
the model determination module is used for obtaining a type of an initial medical object in the initial medical object region, and determining a corresponding target medical object recognition model according to the type of the initial medical object;
and the target segmentation module is used for segmenting the initial medical object region through the target medical object recognition model to obtain a target medical object.
A computer device comprising a memory storing a computer program and a processor implementing the steps of the method in any of the above embodiments when executing the computer program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any of the above embodiments.
According to the medical object identification method, apparatus, computer device and storage medium above, the image of the medical object to be processed is first subjected to initial segmentation to obtain an initial medical object region, and a target medical object recognition model for further segmentation is then determined according to the type of the initial medical object in that region. The target medical object can thus be determined by further segmenting the initial medical object region with the target medical object recognition model. This multi-step cascaded segmentation improves segmentation precision and accuracy, and because the initial segmentation can yield multiple types of initial medical object regions, the model has a wider range of application.
Drawings
FIG. 1 is a diagram of an application environment of a medical object recognition method in an embodiment;
FIG. 2 is a flow diagram illustrating a method for medical object recognition in accordance with one embodiment;
FIG. 3 is a model cascade diagram of a medical object recognition method in an embodiment;
FIG. 4 is a model cascade diagram of a medical object recognition method in another embodiment;
FIG. 5 is a schematic structural diagram of an initial medical object recognition model based on depth full convolution in one embodiment;
FIG. 6 is a flow diagram of a training process for an initial medical object recognition model based on depth full convolution in one embodiment;
FIG. 7 is a schematic illustration of a pelvic bone image in an embodiment;
FIG. 8 is a schematic illustration of a target medical object in one embodiment;
FIG. 9 is a block diagram of the structure of a medical object recognition apparatus in one embodiment;
fig. 10 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The medical object identification method provided by the application can be applied to the application environment shown in fig. 1, where the terminal 102 communicates with the medical imaging device 104 over a network. The terminal 102 may receive the image of the medical object to be processed scanned by the medical imaging device 104 and perform preliminary segmentation on it to obtain an initial medical object region; acquire the type of the initial medical object in the initial medical object region and determine a corresponding target medical object identification model according to that type; and segment the initial medical object region through the target medical object identification model to obtain the target medical object. The initial segmentation thus yields an initial medical object region, after which a target medical object identification model for further segmentation is determined according to the type of the initial medical object in that region, so that the target medical object can be determined by further segmentation. This multi-step cascaded segmentation improves segmentation precision and accuracy, and because the initial segmentation can yield multiple types of initial medical object regions, the model has a wider range of application.
The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices. The medical imaging device 104 includes, but is not limited to, various imaging devices, such as a CT (Computed Tomography) device, which performs cross-sectional scanning around a part of the human body with a precisely collimated X-ray beam and a highly sensitive detector and reconstructs, through CT scanning, a precise three-dimensional position image of a tumor or the like; a magnetic resonance device, i.e. a tomography device that acquires electromagnetic signals from the human body using the magnetic resonance phenomenon and reconstructs an image of human body information; a Positron Emission Computed Tomography (PET) device; a positron emission magnetic resonance imaging system (PET/MR); and the like.
In one embodiment, as shown in fig. 2, a medical object recognition method is provided, which is described by taking the application of the method to the terminal in fig. 1 as an example, and includes the following steps:
s202: acquiring an image of a medical object to be processed.
Specifically, the medical object image to be processed may be a CTA (computed tomography angiography) volume data image (for example, three-dimensional data of a human body image), and its size may, for example, be 512 × 130, although in practice the size may be chosen according to the specific image.
Optionally, when the terminal acquires the medical object image to be processed, the terminal performs filtering processing on the medical object image to be processed to obtain a processed image. Specifically, the terminal may filter noise information in the image by using a three-dimensional gaussian filter to obtain a preprocessed image.
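The three-dimensional Gaussian filtering mentioned above can be sketched with SciPy; the `sigma` value is an illustrative assumption, not specified by the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_volume(volume, sigma=1.0):
    # Smooth the CT/CTA volume with a three-dimensional Gaussian filter to
    # suppress noise before segmentation (sigma is an illustrative choice).
    return gaussian_filter(volume.astype(np.float32), sigma=sigma)
```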
S204: and carrying out preliminary segmentation on the medical object image to be processed to obtain an initial medical object region.
Specifically, the preliminary segmentation refers to acquiring an initial medical object region from the image of the medical object to be processed. The initial medical object region is an image region containing a relatively complete medical object of a certain type, for example at least one of a pelvic region, a skull region, a spine region, and the like. Specifically, the terminal may input the image of the medical object to be processed into an initial medical object recognition model to obtain the corresponding initial medical object region.
It should be noted that if the captured image of the medical object to be processed contains a single type of medical object, the initial medical object recognition model may be one that recognizes only the region where that type of medical object is located. If the captured image contains multiple types of medical objects, i.e. multiple medical objects are captured simultaneously so as to reduce the number of scans, the initial medical object recognition model may be one that recognizes multiple types of initial medical object regions, for example the regions where at least two types of medical objects are located.
S206: the type of an initial medical object in an initial medical object region is obtained, and a corresponding target medical object recognition model is determined according to the type of the initial medical object.
In particular, the type of initial medical object may include, but is not limited to, pelvic bone, skull bone, sternum, vertebrae, ribs, upper and lower limb bones, and the like.
The terminal can determine the type of the initial medical object according to the characteristics of the initial medical object in the initial medical object region, and therefore a target medical object recognition model for accurately segmenting the initial medical object region is selected according to the type of the initial medical object. For example, the terminal may determine the type of the initial medical object from at least one of an outer contour, a shape of the initial medical object in the initial medical object region, and a relative position of the initial medical object region.
Optionally, the database of the terminal may store the corresponding relationship between the types of the various initial medical objects and the target medical object recognition models in advance, so that after the terminal obtains the initial medical object region according to the medical object image to be processed, the terminal may determine the corresponding target medical object recognition model according to the characteristics of the initial medical object in the initial medical object region to perform more accurate segmentation. Optionally, when there are a plurality of identified initial medical object regions, and the types of the initial medical objects in the plurality of initial medical object regions are different, the initial medical object regions may be segmented more accurately in parallel through the corresponding target medical object identification models, respectively.
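The stored correspondence between initial-object types and target recognition models, and the parallel refinement of several regions of different types, might look like the following sketch (the registry contents, stand-in models, and thread-pool choice are all assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical correspondence table between initial-object types and the
# target recognition models used for finer segmentation (stand-ins here).
MODEL_REGISTRY = {
    "pelvis": lambda region: ("ilium", "sacrum"),
    "skull":  lambda region: ("cranium",),
}

def segment_regions_in_parallel(typed_regions):
    # When several initial regions of different types are identified, each
    # is refined in parallel by its corresponding target model.
    with ThreadPoolExecutor() as pool:
        futures = {t: pool.submit(MODEL_REGISTRY[t], r)
                   for t, r in typed_regions.items()}
        return {t: f.result() for t, f in futures.items()}
```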
Therefore, more medical objects can be identified through initial segmentation, and the corresponding target medical object identification models are selected according to the medical objects for further segmentation, so that the segmentation accuracy of the medical objects is improved, and the application range is wider.
S208: and segmenting the initial medical object region through the target medical object identification model to obtain the target medical object.
In particular, the target medical object recognition model is a model for further segmentation of the initial medical object region, which is used for accurate classification of the initial medical object, e.g. more accurate classification of the pelvic bone resulting in the ilium and sacrum.
Preferably, in order to improve the accuracy of the target medical object recognition model, the terminal may first perform cutting from the medical object image to be processed according to the initial medical object region to obtain a three-dimensional initial medical object image, and then input the initial medical object image into the target medical object recognition model to perform three-dimensional object segmentation to obtain the target medical object.
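The cutting step can be sketched as a bounding-box crop around the initial region's mask (the margin and function name are illustrative):

```python
import numpy as np

def crop_to_region(volume, region_mask, margin=2):
    # Cut a 3-D sub-volume around the initial medical object region so the
    # target model only sees the relevant anatomy (margin is illustrative).
    coords = np.argwhere(region_mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, volume.shape)
    slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    return volume[slices], slices
```

Returning the slice tuple as well lets the fine segmentation result be pasted back into the original volume's coordinates.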
In one embodiment, the terminal may distinguish the identified target medical objects by different identifiers; for example, different colors may be used to display different target medical objects. Where the ilium and the sacrum are obtained by segmenting the pelvis, the ilium may be marked in red and the sacrum in green, which is convenient for the user to view.
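Displaying different target medical objects in different colors can be sketched as a label-to-RGB mapping (the label values and colors here are illustrative, following the red-ilium/green-sacrum example):

```python
import numpy as np

# Illustrative label-to-color map: ilium (label 1) in red, sacrum (2) in green.
LABEL_COLORS = {1: (255, 0, 0), 2: (0, 255, 0)}

def colorize_labels(label_map):
    # Render a 2-D label map as an RGB image so that different target
    # medical objects are displayed in different colors.
    rgb = np.zeros(label_map.shape + (3,), dtype=np.uint8)
    for label, color in LABEL_COLORS.items():
        rgb[label_map == label] = color
    return rgb
```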
According to the medical object identification method, the image of the medical object to be processed is initially segmented to obtain an initial medical object region, and then a target medical object identification model used for further segmentation is determined according to the type of the initial medical object in the initial medical object region, so that the target medical object can be determined by further segmenting the initial medical object region according to the target medical object identification model, multi-step cascade segmentation is performed, segmentation precision and accuracy are improved, and multiple types of initial medical object regions can be obtained through initial segmentation, so that the application range of the model is wider.
In one embodiment, segmenting the initial medical object region by the target medical object recognition model to obtain the target medical object includes: segmenting an initial medical object region according to a multi-target medical object recognition model obtained through pre-training to obtain different target medical objects; or the initial medical object region is sequentially input into different medical object identification submodels so as to respectively obtain different target medical objects.
Specifically, this embodiment gives two implementations of the target medical object model. As shown in fig. 3, one implementation segments all the target medical objects at once: the medical image to be processed is first input into the initial medical object recognition model to obtain an initial medical object region, and the initial medical object region is then input into one target medical object recognition model to obtain each target medical object. Referring to fig. 4, the other implementation identifies the target medical objects one by one through cascaded medical object identification submodels: the medical image to be processed is first input into the initial medical object recognition model to obtain an initial medical object region; the initial medical object region is then input into a medical object identification submodel to obtain an identification result; and the identification result continues to be input into the next medical object identification submodel until all medical objects are identified. Fig. 4 illustrates this with a two-stage cascade of medical object identification submodels as an example. When identification is performed through cascaded medical object identification submodels, the number of cascaded submodels may be set according to the types of target medical objects to be identified; that is, the number of medical object identification submodels is related to the types of target medical objects in the different types of initial medical object regions.
The network structure and training of the multi-target medical object recognition model and of the medical object identification submodels are described below.
In one of the embodiments, the target medical object recognition model comprises at least two cascaded medical object identification submodels, and sequentially inputting the initial medical object region into different medical object identification submodels to respectively obtain different target medical objects includes: inputting the initial medical object region into a first medical object identification submodel to identify and obtain a current target medical object; processing the initial medical object region according to the current target medical object to obtain a current remaining medical object region, and acquiring the next cascaded medical object identification submodel as the current medical object identification submodel; inputting the current remaining medical object region into the current medical object identification submodel to identify and obtain a next current target medical object, and taking the next current target medical object as the current target medical object; updating the current remaining medical object region according to the current target medical object, and continuing to acquire the next cascaded medical object identification submodel as the current medical object identification submodel until all cascaded medical object identification submodels have identified and processed the current remaining medical object region; and acquiring all identified current target medical objects as the resulting different target medical objects.
Specifically, this embodiment mainly introduces identifying the target medical objects through cascaded medical object identification submodels. The terminal first inputs the initial medical object region into the first medical object identification submodel to obtain a first target medical object. It then processes the initial medical object region according to the first target medical object, for example by setting the position of the first target medical object in the initial medical object region to background, or by cutting that position out of the initial medical object region, to obtain the current remaining medical object region. The next cascaded medical object identification submodel is then acquired, and the current remaining medical object region is input into it so as to identify the next target medical object. The current remaining medical object region continues to be processed in this way until all medical object identification submodels have been processed, yielding the target medical object output by each cascaded medical object identification submodel.
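The cascade loop described here, where each identified object is set to background before the next submodel runs, can be sketched as follows (the submodels are stand-ins that return binary masks):

```python
import numpy as np

def cascade_segment(region, submodels):
    # Each submodel returns a binary mask for one target medical object.
    # Its voxels are then set to background before the remaining region is
    # passed to the next cascaded submodel (a sketch of the loop above).
    remaining = region.copy()
    targets = []
    for submodel in submodels:
        mask = submodel(remaining)
        targets.append(mask)
        remaining = np.where(mask, 0, remaining)  # remove identified object
    return targets
```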
The cascade order of the cascaded medical object identification submodels may be preset by a user, for example by cascading the submodels corresponding to the target medical objects in descending order; or the cascade order may be determined from the models themselves, for example by evaluating the final identification accuracy and efficiency achieved under different cascade orders and selecting, as needed, a cascade order whose identification accuracy and/or efficiency meets the requirements.
In one embodiment, the preliminary segmentation of the image of the medical object to be processed to obtain the initial medical object region includes: performing preliminary segmentation on a medical object image to be processed through an initial medical object recognition model obtained through pre-training to obtain an initial medical object region; the network structure of the initial medical object recognition model is the same as that of the target medical object recognition model, and the parameters are different, or the network structure of the initial medical object recognition model is the same as that of the medical object recognition sub-model, and the parameters are different.
Specifically, as shown in fig. 5, fig. 5 is a schematic diagram of a network structure of an initial medical object recognition model or a target medical object recognition model or a medical object recognition submodel in an embodiment. Taking the initial medical object recognition model as an example, the structures and processing modes of the target medical object recognition model or the medical object recognition submodels are similar, and the difference is that the target medical object recognition model extracts morphological information of a plurality of target medical objects, and the medical object recognition submodel extracts morphological information of one target medical object, which is not described herein again.
The method for acquiring the initial medical object region by inputting the medical object image to be processed into the initial medical object recognition model based on the depth full convolution and obtained through pre-training by the terminal comprises the following steps: performing feature extraction on a medical object image to be processed by using an initial medical object recognition model based on depth full convolution obtained through pre-training to obtain morphological information corresponding to the initial medical object; and according to the morphological information corresponding to the initial medical object, performing initial medical object position positioning on the medical object image to be processed by utilizing an initial medical object recognition model based on depth full convolution obtained through pre-training to obtain an initial medical object area.
Wherein the morphological information comprises one or more of textural features, geometric features and positional features of the initial medical object. The initial medical object recognition model based on the depth full convolution carries out feature extraction on the image of the medical object to be processed to obtain morphological information corresponding to the initial medical object, and therefore the initial medical object region can be obtained by carrying out initial medical object position positioning on the image of the medical object to be processed according to the morphological information. The texture feature may refer to features such as surface gray values of the initial medical object and other initial medical objects, the geometric feature may refer to a shape feature of the initial medical object, and the position feature refers to a position of the initial medical object in the target object.
Wherein the initial medical object recognition model based on depth full convolution comprises an encoding network and a decoding network. The encoding network is used to learn morphological information of the initial medical object from the image of the medical object to be processed, and in particular to learn the positional features of the initial medical object so as to realize image segmentation. The decoding network is used to learn the texture features and/or geometric structural features of the initial medical object, enabling the encoding network to locate the regions where those features lie so as to realize image segmentation. The encoding network comprises convolutional layers and deconvolution layers; the decoding network comprises convolutional layers, residual connections, and a max-pooling layer. In addition, the initial medical object identification model based on depth full convolution also comprises a merging layer which connects the encoding network and the decoding network.
Specifically, the feature extraction is performed on the medical object image to be processed to obtain morphological information corresponding to the initial medical object, and the method includes: performing feature extraction on a medical object image to be processed by using a current convolutional layer of a coding network in an initial medical object recognition model based on depth full convolution obtained through pre-training to obtain morphological information; and performing image size transformation on the image after the morphological information is extracted by utilizing a corresponding deconvolution layer of the coding network in the initial medical object recognition model based on the depth full convolution obtained through pre-training, so that the size of the transformed image corresponds to the size of the image output by the convolution layer in the corresponding decoding network.
Specifically, locating the position of the initial medical object in the medical object image to be processed according to the morphological information, using the pre-trained initial medical object recognition model based on deep full convolution, to obtain the initial medical object region comprises the following steps: performing feature extraction on the medical object image to be processed with a convolutional layer of the decoding network to obtain morphological information; retaining the extracted morphological information and adjusting the size of the image output by the convolutional layer of the decoding network with a max-pooling layer of the decoding network; directly adding the input and the output of the convolutional layer of the decoding network through the residual connection of the decoding network, the sum serving as the input of the max-pooling layer; and combining, through the merging layer of the model, the image output by the convolutional layer in the corresponding decoding network with the image output by the deconvolution layer of the coding network, the combined result serving as the input of the next convolutional layer of the coding network.
With reference to fig. 5, the convolutional layer in the coding network learns and expresses the morphological information in the medical object image to be processed, where this learning can be carried out on labeled sample images. The deconvolution layer upsamples the image output by the convolutional layer so that its size matches the size of the image output by the convolutional layer in the decoding network; the merging layer can then combine these two images of matching size as the input of the next convolutional layer of the coding network.
The convolutional layer in the decoding network likewise learns and expresses the morphological information in the medical object image to be processed, again learned from labeled sample images. The max-pooling layer retains the extracted morphological information while adjusting (in this embodiment, reducing) the size of the image output by the convolutional layer of the decoding network. The residual connection directly sums the input and output information to facilitate subsequent optimization and learning; that is, the input and the output of the convolutional layer of the decoding network are added directly and used as the input of the max-pooling layer.
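The residual connection, max pooling, and merging layer described above can be sketched as follows, under assumed 2D feature-map shapes (channels, height, width); this is an illustration of the three connection types, not the patent's actual network.

```python
import numpy as np

def residual_add(x, conv_out):
    # Residual connection: the input and the output of the convolutional
    # layer are added element-wise before entering the max-pooling layer.
    return x + conv_out

def max_pool2d(x, k=2):
    # Max pooling over k x k windows; x has shape (channels, H, W) with
    # H and W divisible by k, so the spatial size is reduced by k.
    c, h, w = x.shape
    return x.reshape(c, h // k, k, w // k, k).max(axis=(2, 4))

def merge(decoder_feat, encoder_feat):
    # Merging layer: concatenate same-sized feature maps along the channel
    # axis; the deconvolution (upsampling) layer is what makes the spatial
    # sizes match before this step.
    return np.concatenate([decoder_feat, encoder_feat], axis=0)

rng = np.random.default_rng(0)
x = rng.random((8, 16, 16))            # input feature map
conv_out = rng.random((8, 16, 16))     # output of a convolutional layer
pooled = max_pool2d(residual_add(x, conv_out))            # (8, 8, 8)
merged = merge(rng.random((8, 16, 16)), rng.random((8, 16, 16)))  # (16, 16, 16)
```

The concatenation doubles the channel count, which is why the next convolutional layer of the coding network receives a wider input.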
In one embodiment, referring to fig. 6, fig. 6 is a flowchart of the training process of the initial medical object recognition model based on deep full convolution in one embodiment. In this embodiment the training mode is described taking the initial medical object recognition model and the target medical object recognition model as examples; the training of the medical object recognition submodels is similar and is not repeated. The network structure of the initial medical object recognition model is the same as that of the target medical object recognition model with different parameters, or the same as that of the medical object recognition submodel with different parameters.
The training process of the initial medical object recognition model based on the deep full convolution can comprise the following steps:
s602: and acquiring target sample data.
Specifically, the target sample data is the data finally used for training. A deep learning model is robust only to data resembling what it was trained on, so to increase robustness, and further to increase the generalization capability of the full convolution network model, a data amplification operation needs to be performed.
The terminal acquires initial sample data, performs random rigid transformation on it to obtain expanded sample data, and takes the initial sample data together with the expanded sample data as the target sample data. Specifically, the same random rigid transformation is applied to the original image and to its corresponding image label (which may represent a manually labeled initial medical object region or, when a target medical object recognition model or medical object recognition submodel is being trained, the labeled target medical object). The transformations may include, but are not limited to: rotation, scaling, translation, flipping, and grayscale transformation. For example, an original set of 20 images may be expanded to 2000 cases, of which 1600 are used as training samples and 400 as test samples, each case comprising an original image and an image label. The terminal usually performs such data enhancement on a limited amount of labeled data to expand it to a larger amount; in deep learning, 80% of the data is typically selected for training and 20% for testing. During image amplification, the terminal applies at least one of the following: rotation from -30 to 30 degrees, scaling by 0.8 to 1.2 times, translation from -10 to 10 pixels, flipping (horizontal and vertical), and grayscale transformation (image normalization).
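The amplification step can be sketched on 2D slices as follows (illustrative; the rotation and scaling listed above would additionally need an interpolation routine). The same random rigid transform is applied to the image and to its label so the pair stays aligned, while the grayscale normalization is applied to the image only.

```python
import numpy as np

rng = np.random.default_rng(0)

def shift2d(a, dy, dx):
    # Translate by (dy, dx) pixels, zero-filling at the borders.
    out = np.zeros_like(a)
    h, w = a.shape
    out[max(0, dy):min(h, h + dy), max(0, dx):min(w, w + dx)] = \
        a[max(0, -dy):min(h, h - dy), max(0, -dx):min(w, w - dx)]
    return out

def augment(image, label):
    if rng.random() < 0.5:                     # horizontal flip
        image, label = image[:, ::-1], label[:, ::-1]
    if rng.random() < 0.5:                     # vertical flip
        image, label = image[::-1, :], label[::-1, :]
    dy, dx = rng.integers(-10, 11, size=2)     # translation of -10..10 pixels
    image, label = shift2d(image, dy, dx), shift2d(label, dy, dx)
    image = (image - image.mean()) / (image.std() + 1e-8)  # normalization
    return image, label

image = np.arange(400.0).reshape(20, 20)
label = (image > 200.0).astype(np.float64)
aug_image, aug_label = augment(image, label)
```

Running `augment` repeatedly on the same case yields the expanded sample set, since each call draws a fresh random transform.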
S604: and acquiring a deep full convolution network, and setting hyper-parameters of the deep full convolution network.
Specifically, the acquisition of the deep full convolution network refers to the building of a deep full convolution network structure, and the specifically built deep full convolution network can be shown in fig. 3.
The parameters of the full convolution network model comprise characteristic parameters and hyper-parameters. The characteristic parameters are iteratively learned by the neural network and are used to learn the image features; the hyper-parameters are set manually, and a well-chosen set of hyper-parameters is needed to train the network well. As an example, the learning rate may be set to 0.001, the numbers of hidden-layer channels to 16, 32, 64, 128 and 256, the convolution kernel size to 3x3x3, the number of training iterations to 30000, and the batch size to 1 per iteration. The characteristic parameters may include the weight parameter W and the bias parameter b of the network. For example, for a cardiac image, the W and b parameters in the network represent the texture, geometric and position features of the initial medical object region; the W and b of shallow and deep convolutional layers express the simple and complex features of the local initial medical object region respectively, where the simple features may be edge and corner features, and the complex features are texture and geometric features composed of these simple features.
S606: a loss function for a deep fully convolutional network is defined.
In particular, the loss function is the objective function used to optimize the network; minimizing it makes the deep full convolution network learn better. The network can learn effective image features only if a suitable loss function is defined, since a poorly chosen loss function cannot yield good features. In this embodiment, the loss function can be defined as:
$$L(W, b) = 1 - \frac{2\sum_{i} f_{W,b}(x_i)\, y_i + K}{\sum_{i} f_{W,b}(x_i) + \sum_{i} y_i + K}$$
wherein $W$ and $b$ represent the weight parameters and bias parameters of the network, $x_i$ represents the i-th input target sample data, $f_{W,b}(x_i)$ represents the model prediction result for the i-th target sample data, $y_i$ represents the label of the i-th target sample data, and $K$ is a smoothing parameter that prevents the denominator from becoming zero and making the calculation impossible.
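A segmentation loss with a smoothing constant K that guards the denominator matches the Dice loss commonly used for this purpose; the following sketch assumes that reading.

```python
import numpy as np

def dice_loss(pred, label, K=1.0):
    # pred: model predictions f_{W,b}(x_i) in [0, 1]; label: ground truth y_i.
    # K is the smoothing parameter that keeps the denominator nonzero.
    pred, label = pred.ravel(), label.ravel()
    intersection = np.sum(pred * label)
    return 1.0 - (2.0 * intersection + K) / (np.sum(pred) + np.sum(label) + K)

y = np.array([0.0, 1.0, 1.0, 0.0])
loss_perfect = dice_loss(y, y)        # zero loss for a perfect prediction
loss_inverted = dice_loss(1.0 - y, y) # large loss when overlap is empty
```

Minimizing this quantity drives the predicted region toward full overlap with the labeled region, which is why it serves as the training objective.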
S608: training the deep full convolution network on the target sample data by the stochastic gradient descent method to adjust the characteristic parameters of the network until the value of the loss function meets the requirement, thereby obtaining the trained initial medical object recognition model based on deep full convolution.
Specifically, in this embodiment, the terminal trains the deep full convolution network by using a stochastic gradient descent method, and the main training process is to iteratively train and update the weight parameter and the bias parameter by using the stochastic gradient descent method.
Specifically, the terminal trains the deep full convolution network with the gradient descent method, then updates and optimizes the weight parameters and bias parameters in the network with the back propagation algorithm. Gradient descent takes the direction of steepest descent of the loss surface as the direction toward the optimum; back propagation applies the chain rule of differentiation to compute the partial derivatives used to update the weights, and through continuous iterative training the parameters are updated so that the network learns the image. The back propagation algorithm updates the weight parameter and the bias parameter as follows:
Forward propagation is carried out first: the activation values of all layers (convolutional layers and deconvolution layers) are computed, i.e., the image is passed through the convolution operations to obtain activation maps, and the parameters are then updated through continuous iterative training to learn the features of the image.
Then, for the output layer (the $n_l$-th layer), the sensitivity value is calculated:

$$\delta^{(n_l)} = -\left(y - a^{(n_l)}\right) \odot f'\left(z^{(n_l)}\right)$$

wherein $y$ is the true value of the sample, $a^{(n_l)}$ is the prediction value of the output layer, and $f'(z^{(n_l)})$ represents the partial derivative at the output layer.
Second, for each of the layers $l = n_l - 1, n_l - 2, \ldots$:

$$\delta^{(l)} = \left(\left(W^{(l)}\right)^{T} \delta^{(l+1)}\right) \odot f'\left(z^{(l)}\right)$$

wherein $W^{(l)}$ represents the parameters of the $l$-th layer, $\delta^{(l+1)}$ represents the sensitivity value of layer $l+1$, and $f'(z^{(l)})$ represents the partial derivative of the $l$-th layer;
and finally, the weight parameter and the bias parameter of each layer are updated:

$$W^{(l)} = W^{(l)} - \alpha\, \delta^{(l+1)} \left(a^{(l)}\right)^{T}$$

$$b^{(l)} = b^{(l)} - \alpha\, \delta^{(l+1)}$$

wherein $W^{(l)}$ and $b^{(l)}$ respectively represent the weight parameter and the bias parameter of the $l$-th layer, $\alpha$ is the learning rate, $a^{(l)}$ represents the output value of the $l$-th layer, and $\delta^{(l+1)}$ represents the sensitivity value of layer $l+1$.
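The sensitivity and update equations above can be sketched numerically for a single fully connected output layer with a sigmoid activation; the layer sizes and values here are illustrative, not the patent's convolutional layers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
a_l = rng.random((4, 1))                 # output a^(l) of the previous layer
W_l = rng.random((3, 4))                 # weight parameter W^(l)
b_l = rng.random((3, 1))                 # bias parameter b^(l)

z_next = W_l @ a_l + b_l                 # pre-activation of the output layer
a_next = sigmoid(z_next)                 # prediction of the output layer
y = np.array([[0.0], [1.0], [0.0]])      # true value of the sample

# Output-layer sensitivity: delta = -(y - a) * f'(z), where f'(z) = a * (1 - a)
# for the sigmoid activation.
delta_next = -(y - a_next) * a_next * (1.0 - a_next)

# Weight and bias updates with learning rate alpha.
alpha = 0.001
W_l = W_l - alpha * delta_next @ a_l.T
b_l = b_l - alpha * delta_next
```

Repeating this update over many iterations, with the sensitivities propagated backwards layer by layer, is the stochastic gradient descent training described above.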
The whole deep full convolution network is trained until the error converges to the requirement, i.e., the loss function value is minimized or no longer changes significantly, and the parameters of the network are saved, thereby obtaining the trained initial medical object recognition model based on deep full convolution for subsequent use.
In one embodiment, segmenting the initial medical object region by the target medical object recognition model to obtain the target medical object includes: cutting the medical object image to be processed according to the initial medical object area; and segmenting the region obtained by cutting through the target medical object identification model to obtain the target medical object.
Specifically, after an initial medical object region is obtained, the terminal cuts the medical object image to be processed according to the initial medical object region, and then the region obtained by cutting is segmented through the target medical object recognition model to obtain the target medical object. In other words, in order to improve the accuracy of the target medical object recognition model, the terminal may first perform clipping from the medical object image to be processed according to the initial medical object region to obtain a three-dimensional initial medical object image, and then input the initial medical object image into the target medical object recognition model to perform three-dimensional object segmentation to obtain the target medical object.
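The cropping step can be sketched as follows: take the bounding box of the initial medical object region (here a binary mask) and cut the image to that box before feeding it to the target medical object recognition model. A 2D case is shown for brevity; in practice a small margin might be added around the box.

```python
import numpy as np

def crop_to_region(image, region_mask):
    # Find the bounding box of the nonzero region of the mask and slice the
    # image to that box.
    coords = np.argwhere(region_mask)
    (y0, x0), (y1, x1) = coords.min(axis=0), coords.max(axis=0) + 1
    return image[y0:y1, x0:x1]

img = np.arange(100).reshape(10, 10)
mask = np.zeros((10, 10), dtype=bool)
mask[2:5, 3:8] = True                  # initial medical object region
cropped = crop_to_region(img, mask)    # 3 x 5 crop around the region
```

Restricting the second model to this crop is what lets it focus its capacity on the object rather than on background.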
In one embodiment, shown in combination with fig. 7 and fig. 8, where fig. 7 is a schematic diagram of a pelvic image in one embodiment and fig. 8 is a schematic diagram of a target medical object in one embodiment, the medical object image to be processed is a pelvic image, the initial medical object region is a pelvic region, and the target medical object is the sacrum and/or the ilium. After the target medical object is obtained by segmenting the initial medical object region through the target medical object recognition model, the method comprises: acquiring a symmetry axis of the pelvic region, and mirroring the sacrum and/or the ilium about the symmetry axis; and performing pelvic repair according to the mirrored sacrum and/or ilium.
Specifically, after recognizing the pelvic image the terminal first obtains the pelvic region, then further recognizes the pelvic region to obtain the sacrum and/or the ilium. A symmetry axis of the pelvic region can be calculated from the sacrum, and a mirroring operation about this axis completes the missing part of the corresponding side, thereby completing the pelvic repair.
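The mirroring operation can be sketched on a 2D binary mask as follows, assuming the symmetry axis is a known vertical column index (in practice derived from the sacrum): each column is reflected about the axis and the result is unioned with the original to fill the missing contralateral part.

```python
import numpy as np

def mirror_about_axis(mask, axis_col):
    # Reflect the mask about the vertical line x = axis_col and union it
    # with the original, so structures present on one side fill the other.
    mirrored = np.zeros_like(mask)
    h, w = mask.shape
    for x in range(w):
        mx = 2 * axis_col - x
        if 0 <= mx < w:
            mirrored[:, mx] = mask[:, x] | mirrored[:, mx]
    return mask | mirrored

pelvis = np.zeros((4, 9), dtype=bool)
pelvis[:, 1:3] = True                       # only the left side is present
repaired = mirror_about_axis(pelvis, axis_col=4)
```

After the call, columns 6 and 7 hold the mirrored copy of columns 2 and 1, modeling the completion of the missing side.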
In one embodiment, the medical object image to be processed is a pelvic image, the initial medical object region is a pelvic region, and the target medical object is the sacrum and/or the ilium; after the target medical object is obtained by segmenting the initial medical object region through the target medical object recognition model, the method comprises: acquiring a symmetry axis of the pelvic region, and determining the missing part of the pelvis according to the symmetry axis; determining a pelvic repair material according to the missing part of the pelvis; and generating pelvic modeling parameters from the pelvic repair material, the sacrum and/or the ilium.
Specifically, after recognizing the pelvic image the terminal first obtains the pelvic region, then further recognizes the pelvic region to obtain the sacrum and/or the ilium; a symmetry axis of the pelvic region can be calculated from the sacrum, and a mirroring operation about this axis yields the missing part of the pelvis. The terminal then determines a pelvic repair material according to the missing part, and generates pelvic modeling parameters from the pelvic repair material, the sacrum and/or the ilium to facilitate pelvic modeling; for example, a user creates a pelvic bone from the pelvic modeling parameters to replace the real pelvic bone. The terminal may determine the repair material according to the position of the missing part: if the missing part is in the middle of the sacrum, a material of corresponding hardness may be selected because the sacrum there is not connected to other bones; if the missing part lies at a boundary, a wear-resistant material may be selected because it is in contact with other bones.
It should be understood that although the steps in the flowcharts of fig. 2 and fig. 6 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 2 and fig. 6 may comprise multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed sequentially but may be performed in turn or in alternation with other steps or with sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided a medical object recognition apparatus comprising: an image acquisition module 701, a preliminary segmentation module 702, a model determination module 703 and a target segmentation module 704, wherein:

an image acquisition module 701, configured to acquire an image of a medical object to be processed;

a preliminary segmentation module 702, configured to perform preliminary segmentation on the medical object image to be processed to obtain an initial medical object region;

a model determination module 703, configured to obtain the type of the initial medical object in the initial medical object region, and determine a corresponding target medical object recognition model according to the type of the initial medical object;

and a target segmentation module 704, configured to segment the initial medical object region through the target medical object recognition model to obtain a target medical object.
In one embodiment, the target segmentation module 704 is configured to segment the initial medical object region according to a pre-trained multi-target medical object recognition model to obtain different target medical objects; or to sequentially input the initial medical object region into different medical object recognition submodels so as to respectively obtain different target medical objects.
In one embodiment, the target segmentation module 704 comprises:
the first current target medical object identification unit is used for inputting the initial medical object area into a first medical object identification sub-model to identify and obtain a current target medical object;
a medical object identification submodel obtaining unit, configured to process the initial medical object region according to the current target medical object to obtain a current remaining medical object region, and obtain a next cascade medical object identification submodel as a current medical object identification submodel;
the second current target medical object identification unit is used for inputting the current residual medical object area into the current medical object identification submodel to identify and obtain a next current target medical object, and taking the next current target medical object as the current target medical object;
the circulating unit is used for updating the current residual medical object region according to the current target medical object and continuously acquiring a next cascade medical object identification submodel as the current medical object identification submodel until the cascade medical object identification submodel identifies the current residual medical object region;
a target medical object acquisition unit for acquiring all identified current target medical objects as the obtained different target medical objects.
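The cascaded flow carried out by these units can be sketched as follows, with hypothetical stand-in sub-models operating on binary masks (the real sub-models are deep segmentation networks): each sub-model returns one target object, the identified object is removed from the remaining region, and the next cascaded sub-model runs on what is left.

```python
import numpy as np

def cascade_segment(region, submodels):
    # region: binary mask of the initial medical object region.
    # submodels: cascaded callables, each mapping a mask to one object mask.
    remaining = region.copy()
    targets = []
    for submodel in submodels:
        obj = submodel(remaining)          # current target medical object
        targets.append(obj)
        remaining = remaining & ~obj       # current remaining region
    return targets

region = np.ones((4, 4), dtype=bool)       # whole initial region
left = np.zeros_like(region)
left[:, :2] = True
submodels = [
    lambda r: r & left,                    # stand-in for a "sacrum" submodel
    lambda r: r,                           # stand-in: segments what remains
]
sacrum, ilium = cascade_segment(region, submodels)
```

Because each sub-model only sees the region left over by its predecessors, the identified objects are disjoint by construction.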
In one embodiment, the number of medical object identification submodels is related to the kind of target medical object in different types of initial medical object regions.
In one embodiment, the preliminary segmentation module 702 is configured to perform preliminary segmentation on the image of the medical object to be processed through a pre-trained initial medical object recognition model to obtain an initial medical object region; the network structure of the initial medical object recognition model is the same as that of the target medical object recognition model with different parameters, or the same as that of the medical object recognition submodel with different parameters.
In one embodiment, the target segmentation module 704 comprises:
the cutting unit is used for cutting the medical object image to be processed according to the initial medical object area;
and the segmentation unit is used for segmenting the region obtained by cutting through the target medical object identification model to obtain the target medical object.
In one embodiment, the medical object image to be processed is a pelvic image, the initial medical object region is a pelvic region, and the target medical object is the sacrum and/or the ilium; the apparatus further comprises:

a mirroring module, configured to acquire a symmetry axis of the pelvic region and mirror the sacrum and/or the ilium about the symmetry axis;

and a repair module, configured to perform pelvic repair according to the mirrored sacrum and/or ilium.
In one embodiment, the medical object image to be processed is a pelvic image, the initial medical object region is a pelvic region, and the target medical object is the sacrum and/or the ilium; the apparatus further comprises:

a missing part determination module, configured to acquire a symmetry axis of the pelvic region and determine the missing part of the pelvis according to the symmetry axis;

a repair material determination module, configured to determine a pelvic repair material according to the missing part of the pelvis;

and a modeling parameter generation module, configured to generate pelvic modeling parameters according to the pelvic repair material, the sacrum and/or the ilium.
For specific definitions of the medical object recognition apparatus, reference may be made to the above definitions of the medical object recognition method, which are not further described herein. The various modules in the medical object recognition apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 10. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a medical object recognition method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on a shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 10 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory having a computer program stored therein and a processor that when executing the computer program performs the steps of: acquiring a medical object image to be processed; performing primary segmentation on a medical object image to be processed to obtain an initial medical object region; acquiring the type of the initial medical object in the initial medical object region; determining a corresponding target medical object recognition model according to the type of the initial medical object; and segmenting the initial medical object region through the target medical object recognition model to obtain the target medical object.
In one embodiment, the segmentation of the initial medical object region by the target medical object recognition model into the target medical object, which is implemented by the processor when executing the computer program, comprises: segmenting an initial medical object region according to a multi-target medical object recognition model obtained through pre-training to obtain different target medical objects; or the initial medical object region is sequentially input into different medical object identification submodels so as to respectively obtain different target medical objects.
In an embodiment, the target medical object recognition model involved in the execution of the computer program by the processor comprises at least two cascaded medical object recognition submodels; sequentially inputting the initial medical object region into different medical object recognition submodels to respectively obtain different target medical objects, which is realized when the processor executes the computer program, and comprises the following steps: inputting the initial medical object region into a first medical object recognition sub-model to be recognized to obtain a current target medical object; processing the initial medical object region according to the current target medical object to obtain a current residual medical object region, and acquiring a next cascade medical object identification submodel as a current medical object identification submodel; inputting the current remaining medical object area into a current medical object identification sub-model to identify and obtain a next current target medical object, and taking the next current target medical object as a current target medical object; updating the current remaining medical object region according to the current target medical object, and continuously acquiring a next cascade medical object identification submodel as the current medical object identification submodel until the cascade medical object identification submodel identifies the current remaining medical object region; all identified current target medical objects are acquired as the resulting different target medical objects.
In an embodiment, the number of medical object identification submodels implemented by the processor when executing the computer program is related to the kind of target medical object in different types of initial medical object regions.
In one embodiment, the preliminary segmentation of the image of the medical object to be processed, which is performed by the processor when executing the computer program, into the initial medical object region, comprises: performing preliminary segmentation on a medical object image to be processed through an initial medical object recognition model obtained through pre-training to obtain an initial medical object region; the network structure of the initial medical object recognition model is the same as that of the target medical object recognition model, and the parameters are different, or the network structure of the initial medical object recognition model is the same as that of the medical object recognition sub-model, and the parameters are different.
In one embodiment, the segmentation of the initial medical object region by the target medical object recognition model into the target medical object, as performed by the processor when executing the computer program, comprises: cutting the medical object image to be processed according to the initial medical object area; and segmenting the region obtained by cutting through the target medical object identification model to obtain the target medical object.
In one embodiment, the image of the medical object to be processed involved in the execution of the computer program by the processor is an image of a pelvic bone, the initial medical object region is a pelvic bone region, and the target medical object is a sacrum and/or an ilium; after the processor, which is implemented when executing the computer program, segments the initial medical object region through the target medical object recognition model to obtain the target medical object, the method includes: acquiring a symmetry axis of a pelvic region, and performing mirror image operation according to the sacrum and/or the ilium of the symmetry axis; pelvic repair is performed from the sacrum and/or ilium after the mirroring procedure.
In one embodiment, the medical object image to be processed involved in the execution of the computer program by the processor is a pelvic image, the initial medical object region is a pelvic region, and the target medical object is a sacrum and/or an ilium; after the processor, which is implemented when executing the computer program, segments the initial medical object region through the target medical object recognition model to obtain the target medical object, the method includes: acquiring a symmetry axis of a pelvic region, and determining a pelvic missing part according to the symmetry axis; determining a pelvic bone repair material according to the pelvic bone deletion part; generating pelvic modeling parameters from the pelvic repair material, the sacrum, and/or the ilium.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: acquiring a medical object image to be processed; performing primary segmentation on a medical object image to be processed to obtain an initial medical object region; obtaining a type of an initial medical object in the initial medical object region; determining a corresponding target medical object recognition model according to the type of the initial medical object; and segmenting the initial medical object region through the target medical object identification model to obtain the target medical object.
In one embodiment, the segmentation of the initial medical object region by the target medical object recognition model into the target medical object, which is performed by the computer program when being executed by the processor, comprises: segmenting an initial medical object region according to a multi-target medical object recognition model obtained through pre-training to obtain different target medical objects; or the initial medical object region is sequentially input into different medical object identification submodels so as to respectively obtain different target medical objects.
In an embodiment, the target medical object recognition model involved in the computer program when executed by the processor comprises at least two cascaded medical object recognition submodels; the sequential input of the initial medical object region into different medical object recognition submodels to obtain different target medical objects respectively, which is realized when the computer program is executed by the processor, comprises: inputting the initial medical object region into a first medical object recognition sub-model to be recognized to obtain a current target medical object; processing the initial medical object area according to the current target medical object to obtain a current residual medical object area, and acquiring a next cascaded medical object identification submodel as a current medical object identification submodel; inputting the current remaining medical object area into a current medical object identification sub-model to identify and obtain a next current target medical object, and taking the next current target medical object as a current target medical object; updating the current remaining medical object area according to the current target medical object, and continuously acquiring a next cascade medical object identification submodel as a current medical object identification submodel until the cascade medical object identification submodel identifies and processes the current remaining medical object area; all identified current target medical objects are acquired as resulting different target medical objects.
In an embodiment, the number of medical object identification submodels used when the computer program is executed by the processor is related to the kinds of target medical objects in different types of initial medical object regions.
In one embodiment, performing preliminary segmentation on the medical object image to be processed to obtain an initial medical object region, implemented when the computer program is executed by the processor, comprises: performing preliminary segmentation on the medical object image to be processed through a pre-trained initial medical object recognition model to obtain the initial medical object region; the initial medical object recognition model has the same network structure as the target medical object recognition model but different parameters, or has the same network structure as the medical object identification submodel but different parameters.
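One way to realize "same network structure, different parameters" is to instantiate a single architecture twice and load a separate parameter set into each instance. A minimal sketch with a toy per-pixel "network" (the class, weights, and thresholds here are illustrative assumptions, not the patent's models):

```python
import numpy as np

class TinySegNet:
    """Toy stand-in for a segmentation network: one shared structure
    (a per-pixel linear score plus threshold) whose behaviour is set
    entirely by its parameters."""
    def __init__(self, weight, bias):
        self.weight = float(weight)
        self.bias = float(bias)

    def predict(self, image):
        # Pixels with a positive score are labelled as foreground.
        return (image * self.weight + self.bias) > 0

# Same structure, different parameters:
initial_model = TinySegNet(weight=1.0, bias=-0.5)  # coarse preliminary segmentation
target_model = TinySegNet(weight=1.0, bias=-0.9)   # stricter target segmentation

image = np.array([0.2, 0.6, 0.95])
coarse = initial_model.predict(image)  # [False, True, True]
fine = target_model.predict(image)     # [False, False, True]
```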
In one embodiment, segmenting the initial medical object region through the target medical object recognition model to obtain the target medical object, implemented when the computer program is executed by the processor, comprises: cropping the medical object image to be processed according to the initial medical object region; and segmenting the cropped region through the target medical object recognition model to obtain the target medical object.
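Cropping the image to the initial region before running the target model restricts the input to the area of interest; a sketch with NumPy, where the bounding-box cropping and the `margin` parameter are assumptions for illustration:

```python
import numpy as np

def crop_to_region(image, region_mask, margin=1):
    """Crop the image to the bounding box of the initial medical object
    region, padded by a small margin and clipped to the image bounds."""
    ys, xs = np.nonzero(region_mask)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, image.shape[1])
    # The offset lets results be mapped back into the full image.
    return image[y0:y1, x0:x1], (y0, x0)

image = np.arange(36).reshape(6, 6)
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 2:4] = True                 # initial region in the image centre
crop, offset = crop_to_region(image, mask)
# crop.shape == (4, 4); offset == (1, 1)
```

The cropped patch, rather than the full image, would then be passed to the target medical object recognition model.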
In an embodiment, the medical object image to be processed when the computer program is executed by the processor is a pelvic image, the initial medical object region is a pelvic region, and the target medical object is the sacrum and/or the ilium; after segmenting the initial medical object region through the target medical object recognition model to obtain the target medical object, the computer program, when executed by the processor, further implements: acquiring a symmetry axis of the pelvic region, and performing a mirror operation on the sacrum and/or the ilium according to the symmetry axis; and performing pelvic repair according to the sacrum and/or the ilium after the mirror operation.
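The mirror operation about the pelvic symmetry axis can be illustrated on a binary mask; the reflection helper and the final OR-based "repair" below are simplified assumptions, not the patent's reconstruction procedure:

```python
import numpy as np

def mirror_across_axis(mask, axis_col):
    """Reflect a binary mask left-right about a vertical symmetry axis
    located at column index axis_col (assumed already computed)."""
    mirrored = np.zeros_like(mask)
    for x in range(mask.shape[1]):
        mx = 2 * axis_col - x          # reflected column index
        if 0 <= mx < mask.shape[1]:
            mirrored[:, mx] |= mask[:, x]
    return mirrored

# Toy "ilium" mask lying entirely on one side of an axis at column 3.
mask = np.zeros((4, 7), dtype=bool)
mask[1:3, 0:2] = True
mirrored = mirror_across_axis(mask, axis_col=3)
# A simplistic repair keeps the intact side and fills in its reflection.
repaired = mask | mirrored
```

In practice the intact side would be reflected onto the defective side to estimate the missing anatomy, exploiting the approximate left-right symmetry of the pelvis.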
In one embodiment, the medical object image to be processed when the computer program is executed by the processor is a pelvic image, the initial medical object region is a pelvic region, and the target medical object is the sacrum and/or the ilium; after segmenting the initial medical object region through the target medical object recognition model to obtain the target medical object, the computer program, when executed by the processor, further implements: acquiring a symmetry axis of the pelvic region, and determining a missing pelvic bone portion according to the symmetry axis; determining a pelvic bone repair material according to the missing pelvic bone portion; and generating pelvic modeling parameters according to the pelvic bone repair material, the sacrum, and/or the ilium.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, or optical memory. Volatile memory may include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (11)

1. A medical object identification method, characterized in that the method comprises:
acquiring a medical object image to be processed;
performing preliminary segmentation on the medical object image to be processed to obtain an initial medical object region;
acquiring the type of the initial medical object in the initial medical object region;
determining a corresponding target medical object recognition model according to the type of the initial medical object;
and segmenting the initial medical object region through the target medical object recognition model to obtain a target medical object.
2. The method according to claim 1, wherein the segmenting the initial medical object region by the target medical object recognition model to obtain a target medical object comprises:
segmenting the initial medical object region according to a multi-target medical object recognition model obtained through pre-training to obtain different target medical objects; or
sequentially inputting the initial medical object region into different medical object identification submodels to respectively obtain different target medical objects.
3. The method according to claim 1, wherein the segmenting the initial medical object region by the target medical object recognition model to obtain a target medical object comprises:
inputting the initial medical object region into a first medical object identification submodel to identify and obtain a current target medical object;
processing the initial medical object region according to the current target medical object to obtain a current residual medical object region, and acquiring a next cascade medical object identification submodel as a current medical object identification submodel;
inputting the current remaining medical object area into the current medical object identification submodel to identify and obtain a next current target medical object, and taking the next current target medical object as a current target medical object;
updating the current remaining medical object region according to the current target medical object, and continuously acquiring a next cascade medical object identification submodel as the current medical object identification submodel until the cascade medical object identification submodel identifies the current remaining medical object region;
all identified current target medical objects are acquired as the resulting different target medical objects.
4. The method of claim 3, wherein the number of medical object identification submodels is related to the kind of the target medical object in the initial medical object region.
5. The method according to any one of claims 2 to 4, wherein the preliminary segmenting the medical object image to be processed into an initial medical object region comprises:
performing preliminary segmentation on the medical object image to be processed through an initial medical object recognition model obtained through pre-training to obtain an initial medical object region; the network structure of the initial medical object recognition model is the same as that of the target medical object recognition model, and the parameters are different; or the network structure of the initial medical object recognition model is the same as that of the medical object recognition submodel, and the parameters are different.
6. The method according to claim 1, wherein the segmenting the initial medical object region by the target medical object recognition model to obtain a target medical object comprises:
cutting the medical object image to be processed according to the initial medical object area;
and segmenting the region obtained by cutting through the target medical object identification model to obtain the target medical object.
7. The method according to claim 1, wherein the medical object image to be processed is a pelvic image, the initial medical object region is a pelvic region, and the target medical object is a sacrum and/or an ilium; after the target medical object is obtained by segmenting the initial medical object region through the target medical object recognition model, the method includes:
acquiring a symmetry axis of the pelvic region, and performing a mirror operation on the sacrum and/or the ilium according to the symmetry axis;
and performing pelvic repair according to the sacrum and/or the ilium after the mirror operation.
8. The method according to claim 1, wherein the medical object image to be processed is a pelvic image, the initial medical object region is a pelvic region, and the target medical object is a sacrum and/or an ilium; after the target medical object is obtained by segmenting the initial medical object region through the target medical object recognition model, the method includes:
acquiring a symmetry axis of the pelvic region, and determining a missing pelvic bone portion according to the symmetry axis;
determining a pelvic bone repair material according to the missing pelvic bone portion;
and generating pelvic modeling parameters according to the pelvic bone repair material, the sacrum, and/or the ilium.
9. A medical object recognition apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring an image of a medical object to be processed;
the preliminary segmentation module is used for performing preliminary segmentation on the medical object image to be processed to obtain an initial medical object region;
the model determination module is used for acquiring the type of the initial medical object in the initial medical object region and determining a corresponding target medical object recognition model according to the type of the initial medical object;
and the target segmentation module is used for segmenting the initial medical object region through the target medical object recognition model to obtain a target medical object.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202110830107.9A (filed 2021-07-22, priority 2021-07-22): Medical object recognition method, apparatus, computer device and storage medium; status: Pending; publication: CN115689966A (en)

Priority Applications (1)

Application Number: CN202110830107.9A; Priority Date: 2021-07-22; Filing Date: 2021-07-22; Title: Medical object recognition method, apparatus, computer device and storage medium


Publications (1)

Publication Number: CN115689966A; Publication Date: 2023-02-03

Family

ID=85044141

Family Applications (1)

Application Number: CN202110830107.9A; Title: CN115689966A (en); Status: Pending; Priority Date: 2021-07-22; Filing Date: 2021-07-22

Country Status (1)

Country: CN; Publication: CN115689966A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
CN118366621A (en)*: Medical image analysis method, device, terminal equipment and storage medium; priority date: 2024-04-19; publication date: 2024-07-19; assignee: Ping An Technology (Shenzhen) Co., Ltd.
CN118941579A (en)*: Bone image processing method, device, equipment and medium; priority date: 2024-07-22; publication date: 2024-11-12; assignee: Beijing Naton Medical Robot Technology Co., Ltd.


Similar Documents

CN113313234B (en): Neural network system and method for image segmentation
EP3449421B1 (en): Classification and 3D modelling of 3D dento-maxillofacial structures using deep learning methods
US10546014B2 (en): Systems and methods for segmenting medical images based on anatomical landmark-based features
CN110310287B (en): Automatic organ-at-risk delineation method, equipment and storage medium based on neural network
US8150119B2 (en): Method and system for left ventricle endocardium surface segmentation using constrained optimal mesh smoothing
US11334995B2 (en): Hierarchical systems and methods for image segmentation
JP2020516427A (en): RECIST assessment of tumor progression
US20170178307A1 (en): System and method for image registration in medical imaging system
CN112802036A (en): Method, system and device for segmenting target area of three-dimensional medical image
Bijar et al.: Atlas-based automatic generation of subject-specific finite element tongue meshes
CN111798424B (en): Medical image-based nodule detection method and device and electronic equipment
Sokooti et al.: Hierarchical prediction of registration misalignment using a convolutional LSTM: Application to chest CT scans
CN109410189B (en): Image segmentation method, and image similarity calculation method and device
CN114241017B (en): Image registration method, device, storage medium and computer equipment
CN115689966A (en): Medical object recognition method, apparatus, computer device and storage medium
CN113962957A (en): Medical image processing method, bone image processing method, device and equipment
Gsaxner et al.: PET-train: Automatic ground truth generation from PET acquisitions for urinary bladder segmentation in CT images using deep learning
CN114332563A (en): Image processing model training method, related device, equipment and storage medium
CN114240954A (en): Network model training method and device and image segmentation method and device
CN113284151A (en): Pancreas segmentation method and system based on deep convolutional neural network
US12211204B2 (en): AI driven longitudinal liver focal lesion analysis
CN115546089A (en): Medical image segmentation method, pathological image processing method, device and equipment
CN114693671A (en): Lung nodule semi-automatic segmentation method, device, equipment and medium based on deep learning
US12437401B2 (en): Systems and methods for determining anatomical deformations
Klemencic et al.: Non-rigid registration based active appearance models for 3D medical image segmentation

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
