CN114581418B - Method, device and storage medium for object analysis of medical images - Google Patents

Method, device and storage medium for object analysis of medical images

Info

Publication number
CN114581418B
CN114581418B (application CN202210224566.7A)
Authority
CN
China
Prior art keywords
sub
image
object analysis
medical image
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210224566.7A
Other languages
Chinese (zh)
Other versions
CN114581418A (en
Inventor
蓝重洲
袁绍锋
黄晓萌
李育威
曹坤琳
宋麒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Keya Medical Technology Corp
Original Assignee
Shenzhen Keya Medical Technology Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Keya Medical Technology Corp
Priority to CN202210224566.7A
Publication of CN114581418A
Application granted
Publication of CN114581418B
Legal status: Active
Anticipated expiration

Abstract

The present disclosure relates to a method, apparatus, and storage medium for object analysis of medical images, which may include the following steps. A 3D medical image containing the object may be acquired. A corresponding window width/level may be set for each kind of object, and each sub-image sequence may be windowed based on each window width/level to obtain sub-image sequences for the respective channels. Based on the sub-image sequences of the channels, analysis may be performed using the sub-object analysis model corresponding to each body part to obtain sub-object analysis results. According to the method and apparatus, multiple window width/level settings are chosen according to the lesion categories of the object, and the sub-image sequence is windowed under each setting to obtain a multi-channel image. Using the multi-channel image instead of a single-channel image as input to the sub-object analysis model improves the model's recognition rate for lesions of different categories and effectively addresses the differences in CT value across vascular lesion categories.

Description

Method, device and storage medium for object analysis of medical images
This application is a divisional of Chinese patent application No. 202111652073.5, filed December 31, 2021, and entitled "Method, device and storage medium for object analysis of medical images".
Technical Field
The present disclosure relates to the field of medical images, and more particularly, to a method, apparatus, and storage medium for object analysis of medical images.
Background
Vascular disease is a major threat to human health. A considerable proportion of vascular disease is caused by vascular stenosis resulting from plaque accumulation on the vessel wall, by aneurysms resulting from abnormal bulging of the vessel wall, and the like. However, prior-art detection and identification of vascular lesions suffer from certain shortcomings.
Taking head and neck arterial plaque as an example, head and neck arterial disease generally refers to arterial stenosis or blockage caused by the accumulation of atherosclerotic plaque in the arterial wall. In patients with intracranial arterial stenosis or blockage, cerebral blood supply is limited and ischemic stroke is highly likely. If a plaque ruptures, it can easily block and damage blood vessels, causing acute stroke. According to composition, atherosclerotic plaque can be further classified into calcified plaque, non-calcified plaque, and mixed plaque, where mixed plaque contains both calcified and non-calcified components. Non-calcified and mixed plaques are prone to rupture.
Computed tomography angiography (CTA) and magnetic resonance angiography (MRA) can image blood vessels and their lesions throughout the body and are common angiographic examination techniques. On such images, non-calcified plaques, mixed plaques, and aneurysms have low contrast with surrounding tissue; they are easily confused with surrounding tissue, leading to missed detections, and surrounding tissue is easily mistaken for them, leading to false detections.
Existing head and neck CTA vascular lesion detection generally relies on manual analysis or automatic analysis software. Manual plaque analysis depends heavily on the experience of radiologists and cardiovascular specialists. Lesions such as atherosclerotic plaques and aneurysms are discretely distributed along the structurally complex head and neck arterial wall, so analyzing vascular lesions in massive amounts of CTA data is undoubtedly time-consuming for physicians, and the uncertainty of non-calcified and mixed plaques further increases diagnostic difficulty.
Conventional vascular lesion analysis software can reduce physicians' daily diagnostic workload to some extent, but it also has shortcomings. For example, the semi-automatic analysis software of CT equipment manufacturers such as Siemens requires a great deal of manual interaction to complete vessel segmentation, diameter estimation, vessel wall morphology analysis, and the like, and such schemes generally target only local vessels.
Recently, deep learning techniques have gradually been applied to vascular disease detection with significant results. However, existing schemes generally use a single model to predict vascular lesion information in a CTA sequence, so prediction is time-consuming and produces many false positives; such schemes are generally suitable only for a single body part, and their predictions on medical images covering multiple parts are inaccurate.
Disclosure of Invention
The present disclosure is provided to solve the above-mentioned problems occurring in the prior art.
There is a need for a method, apparatus, and storage medium for object analysis of 3D medical images that, for different kinds of objects distributed across multiple body parts, can accurately identify the various objects throughout the entire 3D medical image within a reasonable time, while avoiding or significantly suppressing the effect of the CT value differences between the different kinds of objects.
According to a first aspect of the present disclosure, a method of object analysis of a medical image is provided. The method may include acquiring a 3D medical image containing an object. The method may also include dividing the 3D medical image by body part into sub-image sequences of the respective parts.
The method may also include setting a corresponding window width/level for each kind of object and windowing each sub-image sequence based on each window width/level to obtain sub-image sequences for the respective channels. The method may also include performing analysis, based on the sub-image sequences of the channels, using the sub-object analysis model corresponding to each part to obtain sub-object analysis results. The method may also include fusing the sub-object analysis results to obtain the object analysis result of the 3D medical image.
According to a second aspect of the present disclosure, an apparatus for object analysis of a medical image is provided, which may include an interface and a processor. The interface may be configured to acquire a 3D medical image containing the object. The processor may be configured to divide the 3D medical image by body part into sub-image sequences of the respective parts. The processor may be further configured to set a corresponding window width/level for each kind of object and to window each sub-image sequence based on each window width/level to obtain sub-image sequences for the respective channels. The processor may be further configured to perform analysis, based on the sub-image sequences of the channels, using the sub-object analysis model corresponding to each part to obtain sub-object analysis results, and further configured to fuse the sub-object analysis results to obtain the object analysis result of the 3D medical image.
According to a third aspect of the present disclosure, a computer storage medium is provided, having stored thereon executable instructions which, when executed by a processor, implement the steps of a method of object analysis of medical images. The method may include acquiring a 3D medical image containing the object; dividing the 3D medical image by body part into sub-image sequences of the respective parts; setting a corresponding window width/level for each kind of object and windowing each sub-image sequence based on each window width/level to obtain sub-image sequences for the respective channels; performing analysis, based on the sub-image sequences of the channels, using the sub-object analysis model corresponding to each part to obtain sub-object analysis results; and fusing the sub-object analysis results to obtain the object analysis result of the 3D medical image.
Methods, apparatuses, and storage media for object analysis of medical images according to various embodiments of the present disclosure, applied for example to plaque detection in blood vessels (as an example of an object), have the following advantages over existing approaches:
1. A slice classification model is used to obtain the sub-image sequence of each body part; multiple window width/level settings are chosen according to the lesion categories of the object; the sub-image sequence is windowed under each setting to obtain a multi-channel image; and the multi-channel image replaces the single-channel image as input to the sub-object analysis model. This improves the recognition rate of the sub-object analysis model for lesions of different categories, enables accurate identification of the various objects distributed across multiple body parts throughout the entire 3D medical image within a reasonable time, and avoids or significantly suppresses the effect of the CT value differences between the different kinds of objects.
2. The present disclosure does not depend on complex human interaction. Accurate and efficient detection of vascular lesions in an image series containing an elongated organ or tissue (e.g., a blood vessel), such as a head and neck CTA sequence containing head arteries, neck arteries, and the aortic arch, each artery having a large number of branches, can be accomplished by the object analysis methods of the present disclosure.
Drawings
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. The accompanying drawings illustrate various embodiments generally by way of example and not by way of limitation, and together with the description and claims serve to explain the disclosed embodiments. Wherever possible, the same reference numbers are used throughout the drawings to refer to the same or like parts. Such embodiments are illustrative and are not intended to be exhaustive or exclusive embodiments of the present apparatus or method.
Fig. 1 illustrates a method of object analysis of a medical image according to an embodiment of the present disclosure.
Fig. 2 illustrates a site partitioning process according to an embodiment of the present disclosure.
Fig. 3 illustrates an object analysis process based on a sequence of sub-images according to an embodiment of the present disclosure.
Fig. 4 illustrates a truncated sliding window block process according to an embodiment of the present disclosure.
Fig. 5 illustrates a detection result fusion process according to an embodiment of the present disclosure.
Fig. 6 illustrates an illustrative block diagram of an exemplary apparatus for object analysis of medical images in accordance with embodiments of the present disclosure.
Detailed Description
To better understand the technical solutions of the present disclosure, the following detailed description is provided with reference to the accompanying drawings and specific embodiments. Embodiments of the present disclosure are described in further detail below with reference to the drawings and specific embodiments, which do not limit the present disclosure. Where steps have no necessary dependency on each other, the order in which they are described herein should not be construed as limiting; those skilled in the art will understand that the order of such steps may be adjusted without breaking their mutual logic or preventing the overall process from being realized.
Fig. 1 illustrates a method of object analysis of a medical image according to an embodiment of the present disclosure. As shown in Fig. 1, the method begins at step S1 with acquiring a 3D medical image containing an object, where the object may be any elongated organ or tissue, such as, but not limited to, at least one of a blood vessel, the digestive tract, a breast duct, or the respiratory tract, or a lesion therein. The lesions are lesions or abnormalities such as atherosclerotic plaques, aneurysms, and stents in blood vessels. The 3D medical image may be a CTA image containing blood vessels, a CT image containing ribs, or a CT image containing lungs, and the vascular lesion may be at least one of calcified plaque, non-calcified plaque, mixed plaque, aneurysm, and stent image.
In this embodiment, a vascular lesion is taken as an example of the object, the 3D medical image is a head and neck CTA image containing blood vessels, and the embodiment is explained in terms of atherosclerotic plaque detection.
The 3D medical image needs to conform to the medical image format, i.e., the Digital Imaging and Communications in Medicine (DICOM) protocol, and also needs to meet basic CTA image requirements, such as adequate contrast agent filling and no obvious motion artifacts.
At step S2, the 3D medical image may be divided by body part, using a processor, into sub-image sequences of the respective parts (as in step S21 of Fig. 2).
In this embodiment, step S2 may specifically include identifying, based on the 3D medical image, key slices serving as boundaries between adjacent parts using a slice classification model, and dividing the image into sub-images by part using the identified key slices. In some embodiments, the slice classification model is implemented using a two-dimensional learning network and trained using training samples with classification information for slices of the corresponding parts.
In this embodiment, the 3D medical image is a head and neck CTA image containing blood vessels; to distinguish the sub-image sequences of the 3 parts (head, neck, and chest), 2 key slices are required. The sub-image sequence determination process is shown in Fig. 2, where the slice classification model is trained on training samples with slice classification information for the regions of interest. In some embodiments, the slice classification model adopts a 2D ResNet network structure. For training, an experienced radiologist marks 2 key slices in the training sample images, and the slices of the head, neck, and chest are collected according to the marked key slice information, which serves as the gold-standard classification information for the training samples. A training sample is input into the slice classification model to obtain a slice classification result, and the loss between the classification result and the gold-standard classification information is computed and used to adjust the slice classification model. The loss function is not specifically limited here; a stochastic gradient descent (SGD) optimizer or another type of optimizer may be used to adjust the network parameters, which is likewise not specifically limited here.
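The division by predicted key slices can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name `split_by_key_slices`, the volume shape, and the key slice indices 120 and 210 are hypothetical, and the key slices are assumed to have already been predicted by the slice classification model.

```python
import numpy as np

def split_by_key_slices(volume, key_slices):
    """Split a 3D volume (z, y, x) into per-part sub-image sequences
    using key slice indices (boundaries between adjacent parts)."""
    bounds = [0] + sorted(key_slices) + [volume.shape[0]]
    return [volume[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]

# A head/neck/chest CTA with 300 axial slices and 2 predicted key slices:
vol = np.zeros((300, 64, 64), dtype=np.float32)
head, neck, chest = split_by_key_slices(vol, key_slices=[120, 210])
```

With 2 key slices the volume falls into exactly 3 contiguous sub-sequences, matching the head/neck/chest split described above.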
In step S3, a corresponding window width/level may be set for each kind of object, and each sub-image sequence may be windowed based on each window width/level (as in step S31 of Fig. 3) to obtain sub-image sequences for the respective channels. In this embodiment, taking atherosclerotic plaque as an example, its kinds may include calcified plaque, non-calcified plaque, and mixed plaque, so 3 window width/level settings are used, and each sub-image sequence is windowed under these 3 settings to obtain a 3-channel sub-image sequence. Calcified, non-calcified, and mixed plaques differ in CT value, and these differences are reflected in gray values; non-calcified plaques, mixed plaques, and aneurysms have low contrast with surrounding tissue on the image and are easily confused with it, causing missed detections. This embodiment therefore sets multiple window width/level settings according to the lesion categories of the object and windows the sub-image sequence under each setting to obtain a multi-channel image: because different lesions have different CT values, a gray-value window width and level is set, and windowing applied, separately for each kind of object. The per-channel sub-image sequences obtained after windowing highlight the gray-value information of the corresponding kind of object. Using the windowed multi-channel image instead of a single-channel image as input to the sub-object analysis model improves the model's recognition rate for lesions of different categories and effectively addresses the differences in CT value across vascular lesion categories.
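The windowing step can be sketched as follows. This is a minimal illustration assuming HU-valued input; the patent does not publish its window width/level values, so the three (width, level) pairs below are hypothetical placeholders for the calcified, non-calcified, and mixed plaque settings.

```python
import numpy as np

def apply_window(hu, width, level):
    """Window a CT/CTA volume: clip HU values to
    [level - width/2, level + width/2] and rescale to [0, 1]."""
    lo, hi = level - width / 2, level + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

def multi_channel_window(hu, settings):
    """Stack one windowed copy per (width, level) pair as channels."""
    return np.stack([apply_window(hu, w, l) for w, l in settings], axis=0)

# Hypothetical (width, level) settings for calcified / non-calcified /
# mixed plaque; the actual values are not given in the patent.
settings = [(700, 300), (400, 60), (900, 200)]
vol = np.random.randint(-1024, 2000, size=(40, 64, 64)).astype(np.float32)
channels = multi_channel_window(vol, settings)  # shape (3, 40, 64, 64)
```

The 3-channel result replaces the single-channel sub-image sequence as model input, one channel per lesion kind.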
In step S4, model parameters may be adjusted for each sub-object analysis model based on the prior information of each part and the skeletonized object segmentation result thereof. The a priori information of the site may include at least one of the size, shape and number of objects contained in the site, such as the diameters of blood vessels of different sites.
In this embodiment, step S4 may specifically include determining the size of the sliding-window block based on the prior information of each part, determining internal representative points of sliding-window blocks based on the skeletonized object segmentation result of each sub-image sequence, cropping sliding-window blocks of training samples containing lesion marking information at those representative points according to the block size, and training each sub-object analysis model using the sliding-window blocks as training samples.
The following description takes the center point of the sliding-window block as the internal representative point, but it should be understood that the internal representative point is not limited thereto; for example, the middle region of the sliding-window block may also be used. Determining the block size from the prior information of each part, determining the internal center points from the skeletonized object segmentation results, cropping training-sample blocks at those center points according to the block size, and training each sub-object analysis model on these blocks can improve the robustness of model prediction.
In this embodiment, determining the internal representative points of the sliding-window blocks based on the skeletonized object segmentation result of each sub-image sequence may specifically include: determining, by the processor, the corresponding object segmentation result using the segmentation model of each part based on that part's sub-image sequence (as in step S32 of Fig. 3); skeletonizing the object segmentation result of each sub-image sequence (as in step S33 of Fig. 3); and sparsely sampling the skeletonized segmentation result to obtain the internal representative points of the sliding-window blocks. By referring to the vessel segmentation result, sliding-window prediction based on the segmentation can improve model prediction speed and reduce false positives.
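The sparse sampling of representative points can be sketched as follows, assuming the skeletonized segmentation is already available as a boolean 3D mask; `sample_window_centers` is a hypothetical name, and the toy straight-line skeleton is for illustration only.

```python
import numpy as np

def sample_window_centers(skeleton_mask, interval=2):
    """Sparse-sample sliding-window center points from a skeletonized
    vessel segmentation (boolean 3D mask): keep every `interval`-th
    skeleton voxel as a block center."""
    points = np.argwhere(skeleton_mask)  # (N, 3) voxel coordinates
    return points[::interval]

# A toy 'skeleton': a straight centerline of 10 voxels.
mask = np.zeros((10, 5, 5), dtype=bool)
mask[:, 2, 2] = True
centers = sample_window_centers(mask, interval=2)  # 5 of the 10 voxels
```

Sampling every 2nd skeleton voxel matches the interval of 2 used for head data later in the description; denser sampling trades speed for coverage.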
In this embodiment, each vessel segmentation model is trained separately using training samples with vessel information for the corresponding part. Since the 3D medical image is a head and neck CTA image containing blood vessels, a head vessel segmentation model, a neck vessel segmentation model, and a chest vessel segmentation model are required; the training processes of the three models are similar, so the head vessel segmentation model is taken as an example. The head vessel segmentation model is trained on training samples with the vessel information of interest. In some embodiments, the vessel segmentation model adopts a 3D U-Net network structure. Training may include having an experienced radiologist mark the head vessels in the training sample images, which serve as the gold standard during training. A training sample image is then input into the head vessel segmentation model to obtain a head vessel segmentation result, and the loss between the segmentation result and the gold standard is computed. The network parameters of the head vessel segmentation model are adjusted according to the loss; when the loss is less than or equal to a preset threshold or convergence is reached, training of the head vessel segmentation model has converged. Alternatively, the loss may be computed using a Dice loss function, a cross-entropy loss function, or another type of loss function, which is not specifically limited here, and the network parameters may be adjusted using a stochastic gradient descent (SGD) optimizer or another type of optimizer, which is likewise not specifically limited here.
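As a reference for the loss computation mentioned above, a soft Dice loss can be sketched as follows. This is one common formulation; the patent does not fix a particular variant.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary
    gold-standard mask: 0 for perfect overlap, approaching 1 for none."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# A toy gold-standard mask and two predictions:
gt = np.zeros((8, 8)); gt[2:6, 2:6] = 1.0
perfect = dice_loss(gt, gt)        # ~0: prediction equals gold standard
disjoint = dice_loss(1 - gt, gt)   # ~1: no overlap at all
```

The `eps` term keeps the ratio well-defined on empty masks, a standard trick when a sliding-window block happens to contain no vessel.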
In this embodiment, step S4 may further include training each sub-object analysis model using the sliding-window blocks as training samples, and then retraining each sub-object analysis model using the false-positive samples obtained during training together with the training samples containing lesion marking information as new training samples, so as to improve the sensitivity and accuracy of the lesion detection model's predictions.
Taking the sub-image sequence of the head as an example, adjusting the model parameters of the sub-object analysis model based on the head's prior information and the skeletonized object segmentation result is explained below.
The diameter of the head arteries is smaller than that of the neck arteries and the aortic arch, so based on the head prior information and a sequence voxel spacing of 0.4 mm, the sliding-window block size is set to 32³ (the corresponding block size for the neck is 64³). This size matches the actual vessel shape, can improve the lesion detection effect, and reduces the inference time of the lesion detection model. The sliding window is shown in Fig. 4: after the head vessel segmentation result is skeletonized, sparse sampling yields the sliding-window block center points required for sub-object analysis model prediction; for head data, the sparse sampling interval is set to 2. The plaque detection model (sub-object analysis model) adopts a 3D U-Net network structure and is trained iteratively. Taking the training of the head vascular plaque detection model as an example, it may include:
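The cropping of a 32³ sliding-window block around a skeleton center point can be sketched as follows; `crop_block` is a hypothetical helper, and zero-padding at the volume border is an assumption the patent does not specify.

```python
import numpy as np

def crop_block(volume, center, size):
    """Crop a cubic sliding-window block (e.g. 32^3 for the head,
    64^3 for the neck) around a skeleton center point, zero-padding
    where the block extends past the volume border."""
    half = size // 2
    padded = np.pad(volume, half, mode="constant")
    z, y, x = (c + half for c in center)  # shift into padded coordinates
    return padded[z - half:z + half, y - half:y + half, x - half:x + half]

vol = np.random.rand(60, 60, 60).astype(np.float32)
block = crop_block(vol, center=(5, 30, 30), size=32)  # center near a border
```

Padding first keeps every block the same shape regardless of how close the skeleton point lies to the volume edge, which the batched 3D U-Net input requires.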
(1) An experienced radiologist marks the head vascular plaques in the training sample images; these serve as the gold standard during training.
(2) The training sample images are then input into the plaque detection model to obtain head vascular plaque detection results, and the loss between the detection results and the gold standard is computed.
(3) The network parameters of the head plaque detection model are adjusted by gradient descent according to the loss; when the loss is less than or equal to a preset threshold or convergence is reached, model training has converged, yielding the first plaque detection model. Alternatively, the loss is generally computed with a Dice loss function, a cross-entropy loss function, or another type of loss function, which is not specifically limited here, and a stochastic gradient descent (SGD) optimizer or another type of optimizer may be used to adjust the network parameters, which is likewise not specifically limited here.
(4) The first plaque detection model (the head plaque detection model after parameter adjustment) predicts head plaque detection results over the designated sliding-window blocks; false-positive samples are selected from part of the detection results and combined with the gold standard to form new training samples.
(5) Steps (2)-(3) are repeated to iteratively obtain the second plaque detection model.
(6) Steps (2)-(5) are repeated several times to iteratively obtain the final plaque detection model.
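The iterative false-positive mining of steps (1)-(6) can be sketched in outline as follows. The `train` and `mine_false_positives` functions are stubs standing in for the real 3D U-Net training and sliding-window prediction; only the loop structure reflects the scheme above, and all names are hypothetical.

```python
import random

def train(model, samples):
    """Stub for steps (2)-(3): one round of training to convergence."""
    return {"round": model["round"] + 1, "data": len(samples)}

def mine_false_positives(model, samples):
    """Stub for step (4): keep detections that miss the gold standard
    (here simulated with a 10% chance per sample)."""
    return [s for s in samples if random.random() < 0.1]

gold = [{"id": i, "label": "plaque"} for i in range(100)]  # step (1)
model = {"round": 0, "data": 0}
samples = list(gold)
for _ in range(3):                       # e.g. first / second / final model
    model = train(model, samples)        # steps (2)-(3)
    fps = mine_false_positives(model, gold)
    samples = gold + fps                 # step (4): gold + hard negatives
```

Each pass retrains on the gold standard plus the previous model's false positives, which is why later models suppress false positives better than the first.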
In step S5, based on the sub-image sequences of the channels, analysis may be performed using the sub-object analysis model corresponding to each part to obtain sub-object analysis results.
In this embodiment, step S5 may specifically include performing analysis, based on the sub-image sequences of the channels and with reference to the prior information of each part and its skeletonized object segmentation result, using the sub-object analysis model corresponding to each part (as in step S34 of Fig. 3) to obtain the sub-object analysis results.
In step S6, the processor may be used to fuse the sub-object analysis results to obtain the object analysis result of the 3D medical image. The sub-parts of the CTA medical image are predicted to obtain per-part plaque detection results, which are then fused into the vascular plaque detection result. Taking head and neck CTA as an example, there are generally 3 sub-image sequences (head, neck, and chest); the plaque detection results of the 3 sub-image sequences can be re-stacked according to the sub-sequence slice classification to obtain the detection result for the whole CTA medical image (step S51 in Fig. 5).
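The fusion step can be sketched as follows, assuming the per-part detection masks keep the original slice order, so simple re-stacking along the slice axis reconstructs the full-length result; shapes and names are hypothetical.

```python
import numpy as np

def fuse_results(sub_results):
    """Re-stack per-part detection masks (head, neck, chest) back into
    one full-length result, in the same slice order the sub-sequences
    were split in."""
    return np.concatenate(sub_results, axis=0)

# Toy per-part detection masks for a 300-slice series split 120/90/90:
head = np.zeros((120, 64, 64)); head[50] = 1   # a detection in the head
neck = np.zeros((90, 64, 64))
chest = np.zeros((90, 64, 64))
full = fuse_results([head, neck, chest])       # shape (300, 64, 64)
```

Because the split in step S2 is a partition along the slice axis, concatenating in the same order restores a mask aligned with the original CTA series.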
In this method, a slice classification model divides the 3D medical image by body part to obtain the sub-image sequence of each part; 3 window width/level settings are chosen according to the vascular plaque kinds; the sub-image sequence is windowed under the 3 settings to obtain a multi-channel image; and the multi-channel image replaces the single-channel image as input to the sub-object analysis model. Because different kinds of lesions have different CT values (derived from attenuation coefficients), and these differences are reflected in gray values, a gray-value window width and level is set, and windowing applied, separately for each kind of object. The per-channel sub-image sequences obtained after windowing highlight the gray-value information of the corresponding kind of object, so using the multi-channel sub-image sequences as input to the sub-object analysis model yields good, accurate analysis results for all kinds of objects, improves the model's recognition rate for lesions of different categories, and effectively addresses the differences in CT value across vascular lesion categories. Compared with manual analysis schemes, the present application can complete vascular lesion detection automatically, quickly, and accurately, improving diagnostic efficiency while greatly reducing physicians' workload and patients' waiting time.
As another embodiment, the vascular lesion is an aneurysm. The aneurysm analysis process differs from the object analysis of the present embodiment in that there is only one kind of aneurysm, so only 1 window width/level needs to be set. Each sub-image sequence is windowed under this single setting to obtain a single-channel sub-image sequence, which is input into the sub-object analysis model for analysis.
As a further alternative embodiment, the vascular lesion is a stent. The stent analysis process differs from the object analysis of the present embodiment in that there is only one kind of stent, so only 1 window width/level needs to be set. Each sub-image sequence is windowed under this single setting to obtain a single-channel sub-image sequence, which is input into the sub-object analysis model for analysis.
Fig. 6 illustrates an illustrative block diagram of an exemplary apparatus for object analysis of a medical image, as shown in fig. 6, an object analysis apparatus 600 may include an interface 607 and a processor 601, in accordance with embodiments of the present disclosure. The interface 607 may be configured to receive a 3D medical image containing an object. The processor 601 may be configured to perform a method of object analysis of medical images according to various embodiments of the present disclosure.
Through the interface 607, the apparatus for object analysis of medical images may be connected to a network (not shown), such as, but not limited to, a hospital local area network or the Internet. The communication modes implemented by the interface 607 are not limited to networks and may include NFC, Bluetooth, WiFi, etc., over wired or wireless connections. Taking a network as an example, the interface 607 may connect the apparatus with external devices such as an image acquisition device (not shown), a medical image database 608, and an image data storage device 609. The image acquisition device may use any type of imaging modality, such as, but not limited to, computed tomography (CT), digital subtraction angiography (DSA), magnetic resonance imaging (MRI), functional MRI, dynamic contrast-enhanced MRI, diffusion MRI, helical CT, cone-beam computed tomography (CBCT), positron emission tomography (PET), single-photon emission computed tomography (SPECT), X-ray imaging, optical tomography, fluoroscopic imaging, ultrasound imaging, or radiotherapy portal imaging.
In some embodiments, the object analysis apparatus 600 may be a dedicated smart device or a general-purpose smart device. For example, the apparatus 600 may be a computer customized for image data acquisition and image data processing tasks, or a server in the cloud. The apparatus 600 may also be integrated into the image acquisition device.
The object analysis apparatus 600 may include a processor 601 and a memory 604, and may additionally include at least one of an input/output 602 and an image display 603.
The processor 601 may be a processing device that includes one or more general-purpose processing devices, such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), and the like. More specifically, the processor 601 may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor running other instruction sets, or a processor running a combination of instruction sets. The processor 601 may also be one or more special-purpose processing devices, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), a system on a chip (SoC), or the like. As will be appreciated by those skilled in the art, in some embodiments the processor 601 may be a special-purpose processor rather than a general-purpose processor. The processor 601 may include one or more known processing devices, such as microprocessors from the Pentium™, Core™, Xeon™, or Itanium™ families manufactured by Intel™, the Turion™, Athlon™, Sempron™, Opteron™, FX™, or Phenom™ families manufactured by AMD™, or various processors manufactured by Sun Microsystems. The processor 601 may also include a graphics processing unit, such as those manufactured by Nvidia™, the GMA and Iris™ series manufactured by Intel™, or the Radeon™ series manufactured by AMD™. The processor 601 may also include an accelerated processing unit, such as the Desktop A-4 (6, 6) series manufactured by AMD™ or the Xeon Phi™ series manufactured by Intel™.
The disclosed embodiments are not limited to any type of processor or processor circuit otherwise configured to: acquire a 3D medical image containing an object; segment the 3D medical image to obtain a segmentation result of the object; acquire a set of image slices of the 3D medical image along the direction of extension; acquire internal representative points of the segmented object in each image slice of the set; acquire a set of image blocks in the 3D medical image based on the set of internal representative points of the object; perform object analysis based on the set of image blocks; or manipulate any other type of data consistent with the disclosed embodiments. In addition, the term "processor" or "image processor" may include more than one processor, for example, a multi-core design or a plurality of processors each having a multi-core design. The processor 601 may execute sequences of computer program instructions stored in the memory 604 to perform the various operations, processes, and methods disclosed herein.
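The segment → representative points → image blocks sequence listed above can be illustrated with plain NumPy. This is a minimal sketch under stated assumptions: the stride and block size are arbitrary, and the representative points here are subsampled from the raw mask rather than from a skeletonized segmentation (a real pipeline would first thin the mask, e.g. with `skimage.morphology.skeletonize`, as the disclosure describes):

```python
import numpy as np

def sparse_sample_points(mask, stride=8):
    """Sparsely sample foreground voxels as internal representative points.

    The patent samples along the skeletonized segmentation; for a
    self-contained sketch we subsample the raw foreground coordinates.
    """
    pts = np.argwhere(mask)
    return pts[::stride]

def crop_block(volume, center, size):
    """Crop a block of the given size centered on `center`, clamped to the volume bounds."""
    half = np.array(size) // 2
    start = np.clip(np.array(center) - half, 0, np.array(volume.shape) - np.array(size))
    z, y, x = start
    dz, dy, dx = size
    return volume[z:z + dz, y:y + dy, x:x + dx]

volume = np.random.rand(32, 32, 32).astype(np.float32)
mask = np.zeros_like(volume, dtype=bool)
mask[10:20, 14:18, 14:18] = True          # a toy tubular segmentation result
points = sparse_sample_points(mask, stride=16)
blocks = [crop_block(volume, p, (8, 8, 8)) for p in points]
print(len(blocks), blocks[0].shape)  # → 10 (8, 8, 8)
```

Each cropped block would then be fed to the sub-object analysis model of the corresponding part.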
The processor 601 may be communicatively coupled to the memory 604 and configured to execute computer-executable instructions stored therein. The memory 604 may include read-only memory (ROM), flash memory, random access memory (RAM), dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM, static memory (e.g., flash memory, static random access memory), etc., on which computer-executable instructions are stored in any format. In some embodiments, the memory 604 may store computer-executable instructions of one or more image processing programs 605. The computer program instructions may be accessed by the processor 601, read from the ROM or any other suitable memory location, and loaded into the RAM for execution by the processor 601. For example, the memory 604 may store one or more software applications. Software applications stored in the memory 604 may include, for example, an operating system for a general computer system (not shown) and an operating system for a soft control device.
Further, the memory 604 may store an entire software application or only a portion of a software application (e.g., the image processing program 605) that is executable by the processor 601. Further, the memory 604 may store a plurality of software modules for implementing the steps of a method of object analysis of a medical image or for training a sub-object analysis model, a slice classification model, a segmentation model, consistent with the present disclosure.
Furthermore, the memory 604 may store data generated/buffered when the computer program is executed, e.g., medical image data 606, which may include medical images transmitted from the image acquisition device, the medical image database 608, the image data storage device 609, etc. In some embodiments, the medical image data 606 may include the 3D medical images containing the objects to be analyzed, on which the image processing program 605 is to perform segmentation, image slice acquisition, internal representative point acquisition, image block cropping, and object analysis.
In some embodiments, the image data storage device 609 may be provided to exchange image data with the medical image database 608, and the memory 604 may communicate with the medical image database 608 to obtain medical images containing the parts on which vessel segmentation is to be performed. For example, the image data storage device 609 may reside in another medical image acquisition device (e.g., a CT scanner that scans the patient). Medical images of the patient may be transmitted and saved to the medical image database 608, and the object analysis apparatus 600 may retrieve the medical image of a specific patient from the medical image database 608 and perform object analysis on it.
In some embodiments, the memory 604 may be in communication with the medical image database 608 to transmit and save the object segmentation results along with the resulting object analysis results into the medical image database 608.
In addition, parameters of the trained sub-object analysis model and/or slice classification model and/or segmentation model may be stored in the medical image database 608 for access, acquisition, and use by other object analysis apparatuses as needed. In this manner, when handling a given patient, the processor 601 may obtain trained sub-object analysis, slice classification, and/or segmentation models for the corresponding population and perform vessel segmentation based on the obtained trained models.
In some embodiments, a sub-object analysis model, a slice classification model, and/or a segmentation model (particularly a learning network) may be stored in the memory 604. Alternatively, the learning network may be stored in a remote device, a separate database (such as medical image database 608), a distributed device, and may be used by the image processing program 605.
In addition to displaying medical images, the image display 603 may also display other information, such as segmentation results of objects, center point calculation results, and object analysis results. For example, the image display 603 may be an LCD, CRT, or LED display.
The input/output 602 may be configured to allow the object analysis device 600 to receive and/or transmit data. The input/output 602 may include one or more digital and/or analog communication devices that allow the device to communicate with a user or other machine and device. For example, input/output 602 may include a keyboard and a mouse that allow a user to provide input.
In some embodiments, the image display 603 may present a user interface so that, using the input/output 602 in conjunction with the user interface, the user may conveniently and intuitively modify (e.g., edit, move, or delete) the generated anatomical labels.
The interface 607 may include a network adapter, a cable connector, a serial connector, a USB connector, a parallel connector, a high-speed data transmission adapter such as fiber optic, USB 3.0, or Lightning, a wireless network adapter such as a Wi-Fi adapter, or a telecommunications (3G, 4G/LTE, etc.) adapter. The apparatus may connect to the network through the interface 607. The network may provide a local area network (LAN), a wireless network, a cloud computing environment (e.g., software as a service, platform as a service, infrastructure as a service, etc.), a client-server arrangement, a wide area network (WAN), etc.
Embodiments of the present disclosure also provide a computer storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement a method of object analysis of medical images according to various embodiments of the present disclosure. The storage medium may include read-only memory (ROM), flash memory, random access memory (RAM), dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM, static memory (e.g., flash memory, static random access memory), etc., on which computer-executable instructions may be stored in any format.
Furthermore, although exemplary embodiments have been described herein, the scope thereof may include any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of the various embodiments across schemes), adaptations or alterations based on the present disclosure. Elements in the claims are to be construed broadly based on language employed in the claims and not limited to examples described in the present specification or during the practice of the present disclosure, which examples are to be construed as non-exclusive. It is intended, therefore, that the specification and examples be considered as exemplary only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other, and those of ordinary skill in the art may devise other embodiments upon reading the above description. In addition, in the above detailed description, various features may be grouped together to streamline the disclosure. This is not to be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, the disclosed subject matter may include less than all of the features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with one another in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above embodiments are only exemplary embodiments of the present disclosure, and are not intended to limit the present invention, the scope of which is defined by the claims. Various modifications and equivalent arrangements of parts may be made by those skilled in the art, which modifications and equivalents are intended to be within the spirit and scope of the present disclosure.

Claims (15)

Translated from Chinese
1. A method for object analysis of a medical image, characterized by comprising:
acquiring a 3D medical image containing an object;
using a processor, dividing the 3D medical image by part into sub-image sequences of each part;
setting a corresponding window width and window level for each kind of the object, and windowing each sub-image sequence based on each window width and window level to obtain sub-image sequences of each channel;
determining the size of a sliding window block based on prior information of each part, determining internal representative points of the sliding window block based on the skeletonized object segmentation result of each sub-image sequence, cropping, based on the internal representative points and according to the size of the sliding window block, sliding window blocks of training samples containing lesion annotation information, and training each sub-object analysis model using the sliding window blocks as training samples so as to adjust model parameters for each sub-object analysis model;
based on the sub-image sequences of each channel and with reference to the prior information of each part, performing analysis using the sub-object analysis model corresponding to each part to obtain sub-object analysis results; and
using the processor, fusing the sub-object analysis results to obtain the object analysis result of the 3D medical image.

2. The method according to claim 1, characterized in that dividing the 3D medical image by part into sub-image sequences of each part specifically comprises: based on the 3D medical image, identifying, using a slice classification model, key slices in the 3D medical image that form the junctions of adjacent parts; and using the identified key slices to divide the sub-images by part.

3. The method according to claim 2, characterized in that the slice classification model is implemented with a two-dimensional learning network and is trained using training samples having classification information of slices of the corresponding parts.

4. The method according to claim 1, characterized in that the object is at least one of a blood vessel, a digestive tract, a mammary duct, and a respiratory tract, or a lesion therein.

5. The method according to claim 1, characterized in that the object is a vascular lesion.

6. The method according to claim 5, characterized in that the vascular lesion is at least one of a calcified plaque, a non-calcified plaque, a mixed plaque, an aneurysm, and a stent image.

7. The method according to any one of claims 1-6, characterized in that performing analysis using the sub-object analysis model corresponding to each part based on the sub-image sequences of each channel to obtain sub-object analysis results specifically comprises: based on the sub-image sequences of each channel and with reference to the prior information of each part and its skeletonized object segmentation result, performing analysis using the sub-object analysis model corresponding to each part to obtain the sub-object analysis results.

8. The method according to claim 1, characterized in that determining the internal representative points of the sliding window block based on the skeletonized object segmentation result of each sub-image sequence specifically comprises:
based on the sub-image sequence of each part, determining, using the processor, the corresponding object segmentation result with the segmentation model corresponding to each part;
performing a skeletonization operation on the object segmentation result of each sub-image sequence; and
sparsely sampling the skeletonized object segmentation result to obtain the internal representative points of the sliding window block.

9. The method according to claim 8, characterized in that each vessel segmentation model is trained separately using training samples having vessel information of the corresponding part.

10. The method according to claim 1, characterized in that adjusting the model parameters for each sub-object analysis model based on the prior information of each part and its skeletonized object segmentation result further specifically comprises:
training each sub-object analysis model using the sliding window blocks as training samples; and
using the false positive samples obtained from this training, together with the training samples containing lesion annotation information, as new training samples to train each sub-object analysis model.

11. The method according to any one of claims 8-10, characterized in that the prior information of a part comprises at least one of the size, shape, and number of the objects contained in the part.

12. The method according to any one of claims 8-10, characterized in that the internal representative points are the center points of the sliding window blocks.

13. The method according to any one of claims 1-6, characterized in that the 3D medical image is a CTA image containing blood vessels, a CT image containing ribs, or a CT image containing lungs.

14. An apparatus for object analysis of a medical image, characterized by comprising: an interface configured to acquire a 3D medical image containing an object; and a processor configured to perform the method for object analysis of a medical image according to any one of claims 1-13.

15. A non-transitory computer-readable medium having instructions stored thereon which, when executed by a processor, implement the method for object analysis of a medical image according to any one of claims 1-13.
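Claim 10 describes a two-pass training scheme in which false positives from a first pass are folded back in as hard negatives. This follows the standard hard-negative mining pattern; the sketch below is model-agnostic, and the threshold "model" and call-recording trainer are purely hypothetical stand-ins:

```python
import numpy as np

def mine_false_positives(predict, samples, labels):
    """Return blocks the current model calls lesion (1) but that are annotated background (0)."""
    return [s for s, y in zip(samples, labels) if predict(s) == 1 and y == 0]

def train_with_fp_mining(train, predict, lesion_blocks, candidate_blocks, candidate_labels):
    """Two-pass training: annotated lesions first, then lesions plus mined hard negatives."""
    train(lesion_blocks, [1] * len(lesion_blocks))                      # pass 1
    hard_negatives = mine_false_positives(predict, candidate_blocks, candidate_labels)
    train(lesion_blocks + hard_negatives,                               # pass 2
          [1] * len(lesion_blocks) + [0] * len(hard_negatives))
    return hard_negatives

# Hypothetical stand-ins: a fixed-threshold "model" and a trainer that only records calls.
predict = lambda block: 1 if block.mean() > 0.5 else 0
calls = []
train = lambda xs, ys: calls.append((len(xs), sum(ys)))

lesions = [np.full((8, 8, 8), 0.9)]                 # annotated lesion blocks
candidates = [np.full((8, 8, 8), 0.7),              # background the model will misfire on
              np.full((8, 8, 8), 0.2)]              # background correctly rejected
hard = train_with_fp_mining(train, predict, lesions, candidates, [0, 0])
print(len(hard), calls)  # → 1 [(1, 1), (2, 1)]
```

In practice the two `train` passes would update the sub-object analysis model of the corresponding part, and `predict` would be that partially trained model.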
CN202210224566.7A | 2021-12-31 | 2021-12-31 | Method, device and storage medium for object analysis of medical images | Active | CN114581418B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210224566.7A | CN114581418B (en) | 2021-12-31 | 2021-12-31 | Method, device and storage medium for object analysis of medical images

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
CN202111652073.5A | CN114004835B (en) | 2021-12-31 | 2021-12-31 | Method, device and storage medium for object analysis of medical images
CN202210224566.7A | CN114581418B (en) | 2021-12-31 | 2021-12-31 | Method, device and storage medium for object analysis of medical images

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111652073.5A (Division) | CN114004835B (en) | 2021-12-31 | 2021-12-31 | Method, device and storage medium for object analysis of medical images

Publications (2)

Publication Number | Publication Date
CN114581418A CN114581418A (en)2022-06-03
CN114581418Btrue CN114581418B (en)2025-05-23

Family

ID=79932327

Family Applications (2)

Application NumberTitlePriority DateFiling Date
CN202210224566.7AActiveCN114581418B (en)2021-12-312021-12-31Method, device and storage medium for object analysis of medical images
CN202111652073.5AActiveCN114004835B (en)2021-12-312021-12-31 Method, device and storage medium for object analysis of medical images

Family Applications After (1)

Application NumberTitlePriority DateFiling Date
CN202111652073.5AActiveCN114004835B (en)2021-12-312021-12-31 Method, device and storage medium for object analysis of medical images

Country Status (1)

Country | Link
CN (2)CN114581418B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114913174B (en)* | 2022-07-15 | 2022-11-01 | Shenzhen Keya Medical Technology Corp. | Method, apparatus and storage medium for vascular system variation detection
CN115496735A (en)* | 2022-09-28 | 2022-12-20 | Infervision Medical Technology Co., Ltd. | Lesion segmentation model training method and device, lesion segmentation method and device

Citations (1)

Publication number | Priority date | Publication date | Assignee | Title
CN110060263A (en)* | 2018-04-24 | 2019-07-26 | Shenzhen Keya Medical Technology Corp. | Medical image segmentation method, segmentation device, segmentation system and computer readable medium

Family Cites Families (13)

Publication number | Priority date | Publication date | Assignee | Title
JP5361439B2 (en)* | 2009-02-23 | 2013-12-04 | Toshiba Corporation | Medical image processing apparatus and medical image processing method
CN103208005A (en)* | 2012-01-13 | 2013-07-17 | Fujitsu Limited | Object recognition method and object recognition device
CN103222876B (en)* | 2012-01-30 | 2016-11-23 | Toshiba Medical Systems Corporation | Medical image-processing apparatus, image diagnosing system, computer system and medical image processing method
DE102016205718A1 (en)* | 2016-04-06 | 2017-10-12 | Siemens Healthcare Gmbh | Method for displaying medical image data
US11341631B2 (en)* | 2017-08-09 | 2022-05-24 | Shenzhen Keya Medical Technology Corporation | System and method for automatically detecting a physiological condition from a medical image of a patient
CN110009010B (en)* | 2019-03-20 | 2023-03-24 | Xidian University | Wide-width optical remote sensing target detection method based on interest area redetection
CN110751621B (en)* | 2019-09-05 | 2023-07-21 | Wuyi University | Breast cancer auxiliary diagnosis method and device based on deep convolutional neural network
CN111368827B (en)* | 2020-02-27 | 2023-08-29 | Infervision Medical Technology Co., Ltd. | Medical image processing method, medical image processing device, computer equipment and storage medium
CN111739026B (en)* | 2020-05-28 | 2021-02-09 | Shukun (Beijing) Network Technology Co., Ltd. | Blood vessel center line-based adhesion cutting method and device
CN111862033B (en)* | 2020-07-15 | 2024-02-20 | Shanghai United Imaging Healthcare Co., Ltd. | Medical image processing method, device, image processing equipment and storage medium
CN112669235B (en)* | 2020-12-30 | 2024-03-05 | Shanghai United Imaging Intelligence Co., Ltd. | Method, device, electronic equipment and storage medium for adjusting image gray scale
CN112686899B (en)* | 2021-03-22 | 2021-06-18 | Shenzhen Keya Medical Technology Corp. | Medical image analysis method and device, computer equipment and storage medium
CN113192031B (en)* | 2021-04-29 | 2023-05-30 | Shanghai United Imaging Healthcare Co., Ltd. | Blood vessel analysis method, device, computer equipment and storage medium

Patent Citations (1)

Publication number | Priority date | Publication date | Assignee | Title
CN110060263A (en)* | 2018-04-24 | 2019-07-26 | Shenzhen Keya Medical Technology Corp. | Medical image segmentation method, segmentation device, segmentation system and computer readable medium

Non-Patent Citations (1)

Title
Deep learning based intracranial hemorrhage detection and localization; Zhang Lei; Wanfang Data Knowledge Service Platform; 2021-12-16; full text*

Also Published As

Publication number | Publication date
CN114004835A (en) | 2022-02-01
CN114581418A (en) | 2022-06-03
CN114004835B (en) | 2022-03-18

Similar Documents

Publication | Publication Date | Title
CN113902741B (en)Method, device and medium for performing blood vessel segmentation on medical image
US11508460B2 (en)Method and system for anatomical tree structure analysis
US11495357B2 (en)Method and device for automatically predicting FFR based on images of vessel
US10580526B2 (en)System and method for calculating vessel flow parameters based on angiography
US11847547B2 (en)Method and system for generating a centerline for an object, and computer readable medium
CN109949300B (en)Method, system and computer readable medium for anatomical tree structure analysis
CN115035020B (en) Method, device and storage medium for object analysis of medical images
US12198348B2 (en)Computerised tomography image processing
CN111476791B (en)Image processing method, image processing apparatus, and non-transitory computer readable medium
CN114596311B (en)Blood vessel function evaluation method and blood vessel function evaluation device based on blood vessel image
CN114581418B (en)Method, device and storage medium for object analysis of medical images
WO2021011775A1 (en)Systems and methods for generating classifying and quantitative analysis reports of aneurysms from medical image data
CN114708390B (en)Image processing method and device for physiological tubular structure and storage medium
CN110070534B (en)Method for automatically acquiring feature sequence based on blood vessel image and device for predicting fractional flow reserve
CN114782443A (en)Device and storage medium for data-based enhanced aneurysm risk assessment
CN114862850B (en)Target detection method, device and medium for blood vessel medical image
CN114862879B (en)Method, system and medium for processing images containing physiological tubular structures
US20240062370A1 (en)Mechanics-informed quantitative flow analysis of medical images of a tubular organ
CN115511778A (en)Method and system for predicting physiological condition evaluation parameters from blood vessel images
KR20240057147A (en)Method and apparatus for analyzing blood vessels based on machine learning model
CN120259532A (en) A parameter image generation method and device for shortening dynamic PET scanning time
Beare et al.Segmentation of carotid arteries in CTA images

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
