CN119048694A - Multi-mode-based three-dimensional image post-processing method and system - Google Patents

Multi-mode-based three-dimensional image post-processing method and system
Download PDF

Info

Publication number
CN119048694A
Authority
CN
China
Prior art keywords
data
image data
features
lesion
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411538123.0A
Other languages
Chinese (zh)
Inventor
苏志康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Zhikangyun Medical Technology Co ltd
Original Assignee
Fujian Zhikangyun Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Zhikangyun Medical Technology Co ltd
Priority to CN202411538123.0A
Publication of CN119048694A
Legal status: Pending (Current)

Abstract

The invention provides a multi-mode-based three-dimensional image post-processing method and system, relating to the technical field of image processing. The method comprises the steps of acquiring CT image data, MRI image data and PET image data and preprocessing them to obtain preprocessed data; extracting features from the preprocessed data to obtain skeleton structure features, soft tissue structure features and metabolic activity features; carrying out rigid registration on the skeleton structure features and the soft tissue structure features to obtain preliminary registration data; carrying out elastic registration on the preliminary registration data and the metabolic activity features to obtain multi-mode image data; generating a three-dimensional reconstruction model according to the multi-mode image data; carrying out segmentation processing on the three-dimensional reconstruction model to obtain lesion analysis data; carrying out analysis according to the lesion analysis data to obtain lesion marking data; and carrying out dynamic weighted fusion processing on the lesion marking data and the tissue structure data to obtain three-dimensional comprehensive image data. The invention provides a more accurate image processing method.

Description

Multi-mode-based three-dimensional image post-processing method and system
Technical Field
The invention relates to the technical field of image processing, in particular to a multi-mode-based three-dimensional image post-processing method and system.
Background
Currently, medical imaging technology has become an indispensable tool in clinical diagnosis, and is widely applied to disease detection and treatment planning. Common medical image types include CT (computed tomography), MRI (magnetic resonance imaging), PET (positron emission tomography), and the like. These imaging techniques can provide structural and functional information of the internal tissues of the patient, but each has its limitations. For example, CT images can show bone and hard tissue well, but have limited resolution for soft tissue, while MRI images can show soft tissue details more clearly, but are less effective in bone imaging. To obtain more comprehensive diagnostic information, it is often necessary to comprehensively process and analyze a variety of image data.
For the post-processing of multi-mode images, common practice is to analyze each modality independently and then simply superimpose the analysis results. This approach is prone to information loss and inaccurate image alignment, especially in three-dimensional reconstruction and analysis.
Disclosure of Invention
The invention aims to provide a multi-mode-based three-dimensional image post-processing method and system, which aim to solve the problems in the background technology.
In order to solve the technical problems, the technical scheme of the invention is as follows:
in a first aspect, a method for three-dimensional post-processing of images based on multiple modalities, the method comprising:
acquiring CT image data, MRI image data and PET image data;
preprocessing CT image data, MRI image data and PET image data to obtain first CT image data, first MRI image data and first PET image data;
performing feature extraction on the first CT image data, the first MRI image data and the first PET image data to obtain skeleton structure features, soft tissue structure features and metabolic activity features;
Carrying out rigid spatial registration on the bone structure features and the soft tissue structure features to obtain preliminary registration data;
carrying out elastic spatial registration on the preliminary registration data and the metabolic activity features to obtain multi-mode image data;
According to the multi-mode image data, performing three-dimensional reconstruction processing to generate a three-dimensional reconstruction model fusing anatomical information, soft tissue information and metabolic information;
Dividing the three-dimensional reconstruction model to obtain lesion analysis data and tissue structure data;
Extracting and separating lesion characteristic data according to the lesion analysis data to obtain lesion marking data;
Carrying out dynamic weighting fusion processing on lesion marking data and tissue structure data to obtain three-dimensional comprehensive image data;
And dynamically displaying the three-dimensional comprehensive image data through a visual interface.
Preferably, preprocessing the CT image data, the MRI image data, and the PET image data to obtain first CT image data, first MRI image data, and first PET image data, including:
denoising the CT image data to obtain first CT image data;
Performing enhancement processing on the MRI image data to obtain first MRI image data;
and performing artifact removal processing on the PET image data to obtain first PET image data.
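The three preprocessing steps above can be sketched as follows. The patent names no specific algorithms, so the mean filter, min-max contrast stretch, and percentile clipping below are illustrative placeholders, not the claimed method:

```python
import numpy as np

def denoise_ct(ct: np.ndarray) -> np.ndarray:
    """3x3 mean-filter denoising of a CT slice (filter choice is an assumption)."""
    padded = np.pad(ct, 1, mode="edge")
    out = np.zeros_like(ct, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + ct.shape[0],
                          1 + dx:1 + dx + ct.shape[1]]
    return out / 9.0

def enhance_mri(mri: np.ndarray) -> np.ndarray:
    """Min-max contrast stretch to [0, 1] (enhancement method assumed)."""
    lo, hi = mri.min(), mri.max()
    return (mri - lo) / (hi - lo) if hi > lo else np.zeros_like(mri, dtype=float)

def remove_pet_artifacts(pet: np.ndarray, clip_pct: float = 99.0) -> np.ndarray:
    """Clip extreme outlier intensities as a crude artifact suppression."""
    return np.clip(pet, 0.0, np.percentile(pet, clip_pct))
```

Each function maps raw modality data to its "first" (preprocessed) counterpart used by the later steps.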
Preferably, the skeletal structure feature $F_{bone}$ is extracted as:

$$F_{bone} = \sqrt{\left(\frac{\partial (G_\sigma * I_{CT})}{\partial x}\right)^2 + \left(\frac{\partial (G_\sigma * I_{CT})}{\partial y}\right)^2},$$

where $I_{CT}$ is the first CT image data, the partial derivatives are the gradients of the CT image in the x-axis and y-axis directions, $G_\sigma$ is an adaptive Gaussian smoothing filter, and $\sigma$ is the standard deviation of the Gaussian filter.
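The bone-feature extraction described here (Gaussian smoothing followed by gradient-magnitude computation) can be sketched in pure NumPy. The kernel radius and separable-convolution details are assumptions, not specified by the patent:

```python
import numpy as np

def gaussian_kernel1d(sigma: float, radius: int) -> np.ndarray:
    """Normalised 1-D Gaussian kernel of the given radius."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def extract_bone_feature(ct: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Gradient magnitude of the Gaussian-smoothed CT slice:
    F_bone = sqrt(Gx^2 + Gy^2)."""
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel1d(sigma, radius)
    # separable smoothing: filter rows, then columns
    smoothed = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, ct)
    smoothed = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, smoothed)
    gy, gx = np.gradient(smoothed)
    return np.sqrt(gx**2 + gy**2)
```

On a synthetic slice with a sharp intensity edge, the response is largest at the edge, matching the edge-detection role the patent assigns to this feature.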
Preferably, the soft tissue structure feature $F_{soft}$ is extracted as:

$$F_{soft} = S(I_{MRI}) = \{(x, y) \mid T_{low} \le I_{MRI}(x, y) \le T_{high}\},$$

where $I_{MRI}$ is the first MRI image data, $S$ is a soft tissue segmentation function, $T_{low}$ and $T_{high}$ are the lower and upper thresholds respectively, pixel values within the threshold range are soft tissue features, and $F_{soft}$ represents the segmented soft tissue region.
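The threshold-based soft tissue segmentation reduces to a two-sided intensity mask; the threshold values in the usage example below are illustrative only:

```python
import numpy as np

def extract_soft_tissue(mri: np.ndarray, t_low: float, t_high: float) -> np.ndarray:
    """Binary soft-tissue mask: pixels whose intensity lies in [t_low, t_high]."""
    return (mri >= t_low) & (mri <= t_high)
```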
Preferably, the metabolic activity feature $F_{met}$ is extracted as:

$$R_{met} = M(I_{PET}) = \{(x, y) \mid I_{PET}(x, y) > T_{met}\},$$

$$F_{met} = w \sum_{(x, y) \in R_{met}} I_{PET}(x, y),$$

where $I_{PET}$ is the first PET image data, $M$ is a metabolic activity extraction function, $T_{met}$ is the threshold for metabolic activity extraction, $w$ is a weight, and the summation represents intensity accumulation over all pixels within the metabolically active region.
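A minimal sketch of the metabolic-activity extraction: threshold the PET data, then accumulate the weighted intensity over the active region:

```python
import numpy as np

def extract_metabolic_feature(pet: np.ndarray, t_met: float, w: float = 1.0):
    """Return the binary active-region mask (pixels above t_met) and the
    weighted intensity accumulation w * sum(I_PET over that region)."""
    region = pet > t_met
    f_met = w * pet[region].sum()
    return region, f_met
```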
Preferably, the rigid spatial registration of the bone structural features and the soft tissue structural features results in preliminary registration data, comprising:
acquiring space coordinates of skeleton structure features and soft tissue structure features;
by a rigid transformation matrix $T_{rigid}$ determined from the spatial coordinates, spatially aligning the bone structural features and the soft tissue structural features to obtain registered bone structural features $F'_{bone}$ and registered soft tissue structural features $F'_{soft}$;
The expression of the rigid transformation matrix is:

$$T_{rigid} = \begin{pmatrix} \cos\theta & -\sin\theta & t_x \\ \sin\theta & \cos\theta & t_y \\ 0 & 0 & 1 \end{pmatrix},$$

where $\theta$ is the rotation angle and $(t_x, t_y)$ is the translation vector;
The registered bone structural features $F'_{bone}$ and registered soft tissue structural features $F'_{soft}$ are calculated as:

$$F'_{bone} = T_{rigid} \cdot F_{bone}, \qquad F'_{soft} = T_{rigid} \cdot F_{soft}.$$
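The rigid transformation (rotation by θ plus translation, expressed as a 3x3 homogeneous matrix applied to feature coordinates) can be sketched as:

```python
import numpy as np

def rigid_matrix(theta: float, tx: float, ty: float) -> np.ndarray:
    """3x3 homogeneous rigid transform: rotation by theta, then translation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0., 0., 1.]])

def apply_rigid(points: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply T to an (N, 2) array of feature coordinates."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homo @ T.T)[:, :2]
```

Because the transform is rigid, distances between feature points are preserved, which is exactly the "geometry unchanged" property the registration step relies on.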
Preferably, elastic spatial registration is performed on the preliminary registration data set and the metabolic activity feature to obtain multi-modal image data, including:
According to the registered skeleton structure features and the registered soft tissue structure features, calculating similarity differences of the preset areas to obtain similarity difference data;
determining deformation data of a preset area according to the similarity difference data;
Generating deformation parameter fields for describing the displacement of each pixel point according to deformation data;
Adjusting the metabolic activity characteristics according to the deformation parameter field to obtain registered metabolic activity characteristics;
the registered metabolic activity feature being obtained as $F'_{met}(x, y) = F_{met}\big((x, y) + D(x, y)\big)$, where $D(x, y)$ is the deformation parameter field describing the displacement of each pixel point;
and generating multi-mode image data according to the registered skeleton structure features, the registered soft tissue structure features and the registered metabolic activity features.
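Adjusting the metabolic activity features by the deformation parameter field amounts to warping an image with a per-pixel displacement field. A minimal nearest-neighbour sketch (the interpolation scheme is an assumption; the patent does not specify one):

```python
import numpy as np

def warp_with_field(img: np.ndarray, dx: np.ndarray, dy: np.ndarray) -> np.ndarray:
    """Nearest-neighbour warp: output(y, x) = img(y + dy, x + dx),
    with source coordinates clamped to the image bounds."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(ys + dy).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + dx).astype(int), 0, w - 1)
    return img[src_y, src_x]
```

A uniform displacement field shifts the whole image; a spatially varying field realises the local elastic deformation described above.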
Preferably, according to the lesion analysis data, extracting and separating the lesion feature data to obtain lesion marking data, including:
Calculating the volume, surface area and compactness of the lesion according to the lesion analysis data, and extracting shape features;
The lesion volume is calculated as $V = \sum_{i=1}^{N} v_i$, where $N$ is the total number of voxels within the lesion region, $v_i$ is the volume of each voxel, and $i$ is an index;
The surface area is calculated as $A = \sum_{j=1}^{M} a_j$, where $M$ is the total number of triangle units in the boundary mesh of the lesion area, $a_j$ represents the area of each triangular cell, and $j$ is an index;
The compactness is calculated as $C = \dfrac{36\pi V^2}{A^3}$;
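A sketch of the shape features on a binary lesion mask. Two assumptions to note: surface area is approximated here by counting exposed voxel faces rather than summing triangle areas of a boundary mesh, and compactness uses the common $36\pi V^2/A^3$ definition (equal to 1 for a perfect sphere):

```python
import numpy as np

def lesion_shape_features(mask: np.ndarray, voxel_volume: float = 1.0):
    """Volume, approximate surface area, and compactness of a 3-D binary mask."""
    volume = mask.sum() * voxel_volume
    # count faces where an inside voxel borders an outside voxel, per axis
    area = 0.0
    padded = np.pad(mask.astype(int), 1)
    for axis in range(3):
        area += np.abs(np.diff(padded, axis=axis)).sum()
    compactness = 36.0 * np.pi * volume**2 / area**3 if area > 0 else 0.0
    return volume, area, compactness
```

For a single voxel this yields volume 1, area 6, and compactness $36\pi/216 = \pi/6$.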
according to the lesion analysis data, a gray level co-occurrence matrix is constructed, entropy and contrast are calculated through the gray level co-occurrence matrix, and texture features are extracted;
The entropy value is calculated as $E = -\sum_{i=1}^{L}\sum_{j=1}^{L} p(i, j)\,\log p(i, j)$, where $p(i, j)$ represents the co-occurrence probability between gray levels $i$ and $j$ in the gray level co-occurrence matrix, and $L$ is the total number of gray levels;
The contrast is calculated as $Con = \sum_{i=1}^{L}\sum_{j=1}^{L} (i - j)^2\, p(i, j)$, where $(i - j)$ represents the difference between gray levels $i$ and $j$;
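The gray level co-occurrence matrix and its entropy and contrast statistics can be sketched as follows; the horizontal single-pixel offset is an assumed choice, since the patent does not state which neighbour relation is used:

```python
import numpy as np

def glcm(img: np.ndarray, levels: int) -> np.ndarray:
    """Co-occurrence matrix for the horizontal (dx=1) neighbour,
    normalised to joint probabilities p(i, j)."""
    P = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        P[i, j] += 1
    return P / P.sum()

def glcm_entropy(P: np.ndarray) -> float:
    """E = -sum p log p over nonzero entries."""
    nz = P[P > 0]
    return float(-(nz * np.log(nz)).sum())

def glcm_contrast(P: np.ndarray) -> float:
    """Con = sum (i-j)^2 p(i, j)."""
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())
```

A uniform image has zero entropy and zero contrast; a checkerboard, where every horizontal pair of gray levels differs by 1, has contrast exactly 1.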
extracting metabolic activity features from lesion analysis data;
The metabolic activity feature $F_{met}$ is obtained by the formula $F_{met} = w \sum_{(x, y) \in R_{met}} I_{PET}(x, y)$, where $w$ is the weight;
fusing the shape characteristics, the texture characteristics and the metabolic activity characteristics to obtain lesion characteristic data;
marking and separating the lesion characteristic data to obtain lesion marking data.
Preferably, the dynamic weighted fusion processing is performed on the lesion marking data and the tissue structure data to obtain three-dimensional integrated image data, which comprises the following steps:
Assigning weights to the lesion marking data and the tissue structure data and performing dynamic fusion processing, the weighted formula for the three-dimensional integrated image data $I_{fusion}$ is:

$$I_{fusion} = \alpha \cdot D_{lesion} + \beta \cdot D_{tissue},$$

where $\alpha$ is the weight coefficient of the lesion marking data $D_{lesion}$, and $\beta$ is the weight coefficient of the tissue structure data $D_{tissue}$.
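The dynamic weighted fusion reduces to a per-voxel weighted combination of the two data volumes. Normalising the weights so they sum to one is an added convenience in this sketch, not something the patent states:

```python
import numpy as np

def fuse(lesion: np.ndarray, tissue: np.ndarray,
         alpha: float, beta: float) -> np.ndarray:
    """I_fusion = alpha*lesion + beta*tissue, with weights normalised
    so the output stays in the input intensity range."""
    s = alpha + beta
    return (alpha / s) * lesion + (beta / s) * tissue
```

Raising alpha emphasises the lesion marking; raising beta emphasises the surrounding tissue structure, matching the user-adjustable display described later.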
In a second aspect, a multi-modal image based three-dimensional post-processing system, the system comprising:
the data collection module is used for acquiring CT image data, MRI image data and PET image data;
the denoising processing module is used for denoising the CT image data to obtain first CT image data;
the enhancement processing module is used for enhancing the MRI image data to obtain first MRI image data;
the artifact removal module is used for performing artifact removal processing on the PET image data to obtain first PET image data;
The feature extraction module is used for carrying out feature extraction on the first CT image data, the first MRI image data and the first PET image data to obtain skeleton structure features, soft tissue structure features and metabolic activity features;
The rigid registration module is used for carrying out rigid spatial registration on the skeleton structure features and the soft tissue structure features to obtain preliminary registration data;
The elastic registration module is used for carrying out elastic spatial registration on the preliminary registration data set and the metabolic activity characteristics to obtain multi-mode image data;
The three-dimensional reconstruction module is used for carrying out three-dimensional reconstruction processing according to the multi-mode image data to generate a three-dimensional reconstruction model integrating the anatomical information, the soft tissue information and the metabolic information;
The segmentation processing module is used for carrying out segmentation processing on the three-dimensional reconstruction model to obtain lesion analysis data and tissue structure data;
The lesion extraction module is used for extracting and separating lesion characteristic data according to the lesion analysis data to obtain lesion marking data;
The weighted fusion module is used for carrying out dynamic weighted fusion processing on the lesion marking data and the tissue structure data to obtain three-dimensional comprehensive image data;
and the visualization module is used for dynamically displaying the three-dimensional comprehensive image data through a visualization interface.
The scheme of the invention at least comprises the following beneficial effects:
Firstly, the comprehensive processing of the multi-mode images can comprehensively reflect the information of bones, soft tissues and metabolic activities, overcoming the limitation that a single-mode image cannot cover all the information. Secondly, rigid and elastic registration are used in combination, greatly improving the spatial alignment precision of different-modality images and, in particular, solving the inconsistency caused by local area deformation. Thirdly, the automated lesion recognition and marking process greatly improves diagnostic efficiency and avoids errors that manual marking may introduce. Finally, the dynamic weighted fusion and interactive visual display functions make image display more flexible: doctors can adjust the display mode of the images according to actual needs, further improving the diagnostic value of the images.
Drawings
Fig. 1 is a flow chart of a multi-mode-based three-dimensional image post-processing method according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, an embodiment of the present invention proposes a multi-mode-based three-dimensional post-processing method for images, which includes the following steps:
acquiring CT image data, MRI image data and PET image data;
denoising the CT image data to obtain first CT image data;
Performing enhancement processing on the MRI image data to obtain first MRI image data;
performing artifact removal processing on the PET image data to obtain first PET image data;
performing feature extraction on the first CT image data, the first MRI image data and the first PET image data to obtain skeleton structure features, soft tissue structure features and metabolic activity features;
Carrying out rigid spatial registration on the bone structure features and the soft tissue structure features to obtain preliminary registration data;
carrying out elastic spatial registration on the preliminary registration data and the metabolic activity features to obtain multi-mode image data;
According to the multi-mode image data, performing three-dimensional reconstruction processing to generate a three-dimensional reconstruction model fusing anatomical information, soft tissue information and metabolic information;
Dividing the three-dimensional reconstruction model to obtain lesion analysis data and tissue structure data;
Extracting and separating lesion characteristic data according to the lesion analysis data to obtain lesion marking data;
Carrying out dynamic weighting fusion processing on lesion marking data and tissue structure data to obtain three-dimensional comprehensive image data;
And dynamically displaying the three-dimensional comprehensive image data through a visual interface.
In the embodiment of the invention, firstly, data is acquired from different image modes, specifically including a CT image, an MRI image and a PET image. CT images provide information about bone structure, MRI images enhance the detail of soft tissue, and PET images provide data of metabolic activity. Each modality image has different application advantages, CT images have advantages in displaying bone and hard tissue, MRI images focus on showing details of soft tissue, and PET images are mainly used to show metabolic activity areas. The integration of the data enables the subsequent analysis to be more comprehensive and accurate, and can provide multi-angle support for diagnosis.
After the image data are acquired, the data are preprocessed. The pretreatment of the CT image comprises denoising treatment, and the denoising filter is applied to remove noise, so that the high quality of the skeleton structure data is ensured. Preprocessing of MRI images adjusts contrast through image enhancement algorithms, highlighting details of soft tissue structures. And (3) removing the artifacts of the PET image, eliminating the interference of the artifacts, and ensuring the accuracy of metabolic activity data. After these preprocessing, the obtained image data are referred to as first CT image data, first MRI image data, and first PET image data, respectively.
Next, respective key features are extracted from the three image data. The CT image is mainly used for extracting skeleton structure characteristics, the MRI image is used for extracting soft tissue characteristics, and the PET image is used for extracting metabolic activity characteristics. Each feature data is used for subsequent registration and reconstruction operations, guaranteeing the integrity and consistency of the processed data in subsequent steps.
Registration of the multimodal images is an important step in achieving subsequent reconstruction and analysis. First, the extracted bone structural features and soft tissue structural features are rigidly spatially registered. The rigid registration keeps the geometry of the images unchanged, and only rotates and translates the images to ensure that the different modality images are spatially aligned. After rigid registration, the metabolic activity features are further processed by elastic registration. Elastic registration allows fine deformation adjustment of images in localized areas, ensures precise alignment of metabolic activity features with bone and soft tissue structures, and particularly for those localized differences due to patient position changes or physiological activity, can significantly improve the accuracy of image alignment.
And after registration is completed, three-dimensional reconstruction is performed by combining the multi-mode image data. By comprehensively analyzing the skeletal structure characteristics, the soft tissue structure characteristics and the metabolic activity characteristics, a three-dimensional reconstruction model capable of fusing anatomical, soft tissue and metabolic information is generated. The model provides more comprehensive image information and provides powerful support for subsequent lesion identification and clinical diagnosis.
After the three-dimensional reconstruction is completed, the lesion area is automatically identified and marked. The identification of the lesion area is based on comprehensive analysis of various features, and particularly, the position and the range of the lesion area are accurately identified through comprehensive extraction of shape features, texture features and metabolic features. The shape features analyze the volume, surface area and compactness of the lesion area, the texture features extract entropy and contrast of the lesion area, and the metabolic features analyze the metabolic active area in the PET image. The multi-angle analysis greatly improves the accuracy of lesion identification and ensures the complete marking of the lesion area.
And after the lesion marking is finished, fusing the lesion marking data and the tissue structure data through a dynamic weighted fusion algorithm. The weighted fusion algorithm allows the user to adjust the display ratio of the lesion area to the normal tissue according to the weight set by the user. In this way, the doctor can display specific areas in the image more flexibly according to actual needs, such as focusing on the metabolic active area or highlighting the soft tissue structure. The image data after the weighted fusion processing can realize dynamic fusion display between the lesion area and the normal tissue.
And finally, displaying the fused multi-mode image data through a visual interface. The user can interact with the image data through the three-dimensional visual interface to adjust the display mode and the transparency of the image. Allowing the user to view the lesion area from different angles ensures that the physician can obtain more comprehensive diagnostic information. The user can also dynamically adjust the display weights of the images of different modes, so that the fused image can meet specific diagnosis requirements. Through the flexible visual display, doctors can analyze the lesion areas from multiple dimensions, and the accuracy and the efficiency of diagnosis are improved.
Compared with the prior art, the method has the following advantages. Firstly, the comprehensive processing of the multi-mode images can comprehensively reflect the information of bones, soft tissues and metabolic activities, overcoming the limitation that a single-mode image cannot cover all the information. Secondly, rigid and elastic registration are used in combination, greatly improving the spatial alignment precision of different-modality images and, in particular, solving the inconsistency caused by local area deformation. Thirdly, the automated lesion recognition and marking process greatly improves diagnostic efficiency and avoids errors that manual marking may introduce. Finally, the dynamic weighted fusion and interactive visual display functions make image display more flexible: doctors can adjust the display mode of the images according to actual needs, further improving the diagnostic value of the images.
The main purpose of the three-dimensional reconstruction process is to fuse the features extracted from the different modality images (CT, MRI, PET) to generate a comprehensive three-dimensional model capable of reflecting anatomical structures, soft tissue features and metabolic information simultaneously. The image data of each modality provides different tissue type information, CT images provide the structure of hard tissues such as bones, MRI images provide details of soft tissues, and PET images show the distribution of metabolic activity. By fusion of the multimodal images, a three-dimensional model can be generated that comprehensively reflects the internal structure of the patient's body.
Specifically, the three-dimensional reconstruction process first aligns the bone structural features in CT images, the soft tissue structural features in MRI images, and the metabolic activity features in PET images spatially precisely by spatially registering them. Then, the characteristic data are integrated, and a three-dimensional model is constructed through a three-dimensional reconstruction algorithm. The three-dimensional model not only contains anatomical structure information of bones and soft tissues, but also reflects metabolic activity conditions of patients, in particular to pathological areas (such as tumors) with active metabolism.
In the prior art, only single-mode image data can be processed generally, and characteristics of different tissue types cannot be reflected at the same time. The invention realizes the comprehensive fusion of the anatomical information, the soft tissue information and the metabolic information through the three-dimensional reconstruction of the multi-mode image, and the generated three-dimensional reconstruction model can provide more comprehensive and accurate diagnosis basis. For example, in the treatment of cancer, PET images may show areas of active metabolism but not accurately reflect the anatomy, whereas through the three-dimensional reconstruction model of the present invention, a physician can not only see the metabolic condition of a tumor, but also understand the specific location of the tumor in the anatomy.
The segmentation process is a further analysis step of the image data after the three-dimensional reconstruction model is generated, and aims to automatically segment the lesion area and the tissue structure area from the reconstructed three-dimensional model. The segmentation process is to analyze the volume data in the three-dimensional model through an algorithm to identify a lesion area and a normal tissue area.
The lesion analysis data is obtained by preliminarily identifying the volume, shape, metabolic activity and other information of the lesion area. Shape analysis can detect geometric features (e.g., volume, surface area, compactness) of the lesion region, while texture analysis can extract complexity (e.g., entropy, contrast) of the lesion region, and metabolic analysis provides metabolic activity (reflected by PET images) of the lesion region. The data can help doctors to better know the characteristics of the lesion area, particularly in cancer diagnosis, the segmentation processing can accurately identify the boundary, volume and metabolic level of the tumor area, and the doctors are assisted to judge the type and the development condition of the tumor.
The tissue structure data then corresponds to normal anatomical and soft tissue regions. By differentiating bones, soft tissues and lesion areas through a segmentation algorithm, detailed information of tissue structures can be provided. The method has important application value for operation planning, selection of radiotherapy target areas and the like. For example, a doctor can determine important anatomical regions to be avoided in the surgical procedure by analyzing tissue structure data, ensuring the accuracy and safety of the surgery.
In a preferred embodiment of the present invention, feature extraction is performed on the first CT image data, the first MRI image data, and the first PET image data to obtain bone structural features, soft tissue structural features, and metabolic activity features, including:
extracting features of the first CT image data to obtain skeleton structure features;
Extracting features of the first MRI image data to obtain soft tissue structural features;
and extracting features of the first PET image data to obtain metabolic activity features.
In the embodiment of the invention, firstly, the bone structure characteristics are extracted through processing the first CT image data, the CT image has advantages in representing hard tissues such as bones, and the extracted bone characteristics are used for subsequent spatial registration. For the first MRI image data, soft tissue features are extracted through a segmentation algorithm, the MRI image has advantages in the aspect of displaying soft tissue details, and the extracted soft tissue features are used for subsequent processing. Finally, through processing the first PET image data, metabolic activity characteristics are extracted, the PET image can provide metabolic activity information, and the extracted metabolic characteristics are used for detecting metabolic activity of a lesion area.
Compared with the prior art, the method can realize more comprehensive analysis of the internal tissues of the patient by extracting the characteristics of the three different mode images, and has important roles in lesion recognition and treatment planning in particular. The single-mode image is easy to miss some important information, and the accuracy and the comprehensiveness of diagnosis can be greatly improved through the feature extraction of the multi-mode image.
In a preferred embodiment of the invention, the skeletal structure feature $F_{bone}$ is extracted as:

$$F_{bone} = \sqrt{\left(\frac{\partial (G_\sigma * I_{CT})}{\partial x}\right)^2 + \left(\frac{\partial (G_\sigma * I_{CT})}{\partial y}\right)^2},$$

where $I_{CT}$ is the first CT image data, the partial derivatives are the gradients of the CT image in the x-axis and y-axis directions, $G_\sigma$ is an adaptive Gaussian smoothing filter, and $\sigma$ is the standard deviation of the Gaussian filter.
In the embodiment of the invention, the edge features of the bone structure in the first CT image can be accurately extracted through the formula. Compared with the existing edge detection technology, the method and the device use adaptive Gaussian smoothing, can better process noise in the first CT image, ensure more accurate edge detection of the skeleton structure, and are beneficial to subsequent registration processing.
The formula extracts edge information in the image by computing gradients along the x-axis and y-axis of the first CT image. Introducing the Gaussian smoothing filter $G_\sigma$ reduces the influence of noise on the gradient calculation, so that the bone structural features are extracted more accurately.
This approach ensures accurate extraction of bone edges, facilitating subsequent registration and three-dimensional reconstruction.
In a preferred embodiment of the invention, the soft tissue structure feature $F_{soft}$ is extracted as:

$$F_{soft} = S(I_{MRI}) = \{(x, y) \mid T_{low} \le I_{MRI}(x, y) \le T_{high}\},$$

where $I_{MRI}$ is the first MRI image data, $S$ is a soft tissue segmentation function, and $T_{low}$ and $T_{high}$ are the lower and upper thresholds; pixel values within the threshold range are soft tissue features.
In the embodiment of the invention, by setting the proper threshold, the soft tissue region in the first MRI image can be accurately extracted, particularly under the condition that the soft tissue display is complex, the accuracy of soft tissue extraction can be effectively improved, and the problem that the soft tissue and the background are difficult to distinguish in the prior art is solved.
The formula extracts the soft tissue region in the first MRI image through threshold segmentation. When the gray value of a pixel lies between $T_{low}$ and $T_{high}$, the pixel is classified as a soft tissue feature; otherwise it is not.
This method ensures accurate extraction of soft tissue, especially in the first MRI image, which is critical for identification of the lesion area.
In a preferred embodiment of the invention, the metabolic activity feature $F_{met}$ is extracted as:

$$R_{met} = M(I_{PET}) = \{(x, y) \mid I_{PET}(x, y) > T_{met}\},$$

$$F_{met} = w \sum_{(x, y) \in R_{met}} I_{PET}(x, y),$$

where $I_{PET}$ is the first PET image data, $M$ is a metabolic activity extraction function, $T_{met}$ is the threshold for metabolic activity extraction, $w$ is a weight, and the summation represents intensity accumulation over all pixels within the metabolically active region.
In the embodiment of the invention, through the formula, the metabolic active region in the first PET image can be accurately extracted, and the metabolic active region is weightedImproving the visibility of metabolic activity. This method of extraction is particularly important in cancer detection, as the metabolically active region is often an indication of the presence of a tumor. Compared with the traditional metabolic extraction method, the algorithm can better adapt to the change of different thresholds and enhancement factors, and ensures the flexibility and accuracy of the extraction result.
The formula extracts metabolic characteristics through identifying metabolic active regions in the first PET image. First, pass the threshold valuePixels with metabolic activity intensities above the threshold are screened out and then passedThe intensity of metabolic activity in these regions is amplified for weight.
The accumulation process ensures that the metabolic activity of the whole lesion area can be comprehensively extracted for subsequent analysis of the metabolic level of the lesion area.
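The screen-then-weight-then-accumulate sequence can be sketched as follows; the names t_meta and omega stand in for the threshold and weight of the text, and the toy PET values are invented for illustration:

```python
import numpy as np

def metabolic_feature(pet, t_meta, omega):
    """Screen pixels above the metabolic threshold, amplify them by the
    weight omega, and accumulate their intensity over the active region."""
    active = pet > t_meta                     # threshold screening
    enhanced = np.where(active, omega * pet, 0.0)   # weight amplification
    total = enhanced.sum()                    # intensity accumulation
    return enhanced, active, total

pet = np.array([[0.5, 2.0],
                [3.0, 0.1]])
enhanced, active, total = metabolic_feature(pet, t_meta=1.0, omega=2.0)
```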
In a preferred embodiment of the present invention, the rigid spatial registration of the bone structural features and the soft tissue structural features to obtain preliminary registration data comprises:
acquiring space coordinates of skeleton structure features and soft tissue structure features;
by a rigid transformation matrix R, spatially aligning the bone structural features and the soft tissue structural features according to the spatial coordinates, to obtain registered bone structural features F'_bone and registered soft tissue structural features F'_soft;
The expression of the rigid transformation matrix is:
R = [ cos θ   −sin θ   t_x ;  sin θ   cos θ   t_y ;  0   0   1 ],
where θ is the rotation angle and (t_x, t_y) is the translation vector;
The registered bone structural features F'_bone and registered soft tissue structural features F'_soft are calculated as:
F'_bone = R · F_bone,   F'_soft = R · F_soft.
In the embodiment of the invention, the rigid transformation formula spatially aligns the CT image and the MRI image while keeping shapes unchanged. This process is particularly important for registration of multi-modality images, as it ensures consistency of the anatomical structures across modalities. Compared with the prior art, the rigid registration formula adapts to more complicated rotation and translation, avoiding image information loss or overlap caused by spatial misalignment.
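A minimal sketch of applying the homogeneous rigid transform to feature coordinates, assuming 2-D point sets and the rotation-plus-translation matrix form described above:

```python
import numpy as np

def rigid_transform(points, theta, tx, ty):
    """Apply a 2-D rigid transform (rotation theta, translation
    (tx, ty)) to an (N, 2) array of feature coordinates via the
    homogeneous 3x3 matrix."""
    R = np.array([[np.cos(theta), -np.sin(theta), tx],
                  [np.sin(theta),  np.cos(theta), ty],
                  [0.0,            0.0,           1.0]])
    homog = np.c_[points, np.ones(len(points))]   # (N, 3) homogeneous coords
    return (homog @ R.T)[:, :2]

pts = np.array([[1.0, 0.0],
                [0.0, 1.0]])
# rotate 90 degrees, then shift 5 units along x
moved = rigid_transform(pts, theta=np.pi / 2, tx=5.0, ty=0.0)
```

Because only rotation and translation are involved, distances between feature points are preserved, which is exactly the shape-invariance property the text relies on.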
In a preferred embodiment of the present invention, elastic spatial registration is performed on the preliminary registration data set and the metabolic activity feature to obtain multi-modal image data, including:
According to the registered skeleton structure features and the registered soft tissue structure features, calculating similarity differences of the preset areas to obtain similarity difference data;
determining deformation data of a preset area according to the similarity difference data;
Generating deformation parameter fields for describing the displacement of each pixel point according to deformation data;
Adjusting the metabolic activity characteristics according to the deformation parameter field to obtain registered metabolic activity characteristics;
The extraction formula of the registered metabolic activity features is: F'_meta(x, y) = F_meta((x, y) + D(x, y)), where D(x, y) is the deformation parameter field;
and generating multi-mode image data according to the registered skeleton structure features, the registered soft tissue structure features and the registered metabolic activity features.
In the embodiment of the invention, the elastic registration algorithm corrects slight differences in local anatomical structure between images of different modalities, so that metabolic activity is more closely combined with bone and soft tissue information. Compared with the prior art, elastic registration achieves higher accuracy and is particularly suitable for local distortion or deformation of images caused by patient motion, respiration, and the like.
The core of elastic registration is the generation of a deformation parameter field that describes the spatial displacement of each pixel.
According to the registered bone structure features and the registered soft tissue structure features, the similarity difference of a preset region is calculated to obtain similarity difference data. The mean square error can be adopted: the sum of squared errors between corresponding pixels of the two feature images is computed, and the smaller the mean square error, the more similar the images. From this, the required similarity difference data are obtained.
The deformation data of the preset region are determined from the similarity difference data and can be computed with a deformation model, such as a B-spline model or a finite element model. The deformation data describe the spatial displacement of each pixel point or voxel in the preset region.
The B spline model is used for describing the deformation of the preset area in a mode of interpolation of control points and splines. The displacement of each control point is adjusted by an optimization algorithm and the deformation field is generated by the movement of these control points. B-splines have the advantage of being able to generate smooth deformation fields suitable for complex biological deformation scenarios.
The finite element method is used to divide the image into a plurality of small finite element regions and to calculate the deformation field by minimizing the energy difference between these regions. The finite element method is suitable for processing areas with obvious structural differences in images, especially for deformation of soft tissues.
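Whatever model (B-spline or finite element) produces the deformation field, applying it amounts to resampling the image through per-pixel displacements. The nearest-neighbour warp below is a deliberately minimal stand-in for that resampling step; a real pipeline would interpolate (e.g. trilinearly):

```python
import numpy as np

def warp_nearest(image, dfield):
    """Resample an image through a dense deformation field of shape
    (H, W, 2), holding the (dy, dx) displacement of each pixel, using
    nearest-neighbour lookup with border clamping."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(yy + dfield[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xx + dfield[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

img = np.arange(16.0).reshape(4, 4)
# a uniform field: every pixel samples one pixel to its right
shift = np.zeros((4, 4, 2))
shift[..., 1] = 1.0
warped = warp_nearest(img, shift)
```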
In a preferred embodiment of the present invention, extracting lesion characteristic data and separating the same according to lesion analysis data to obtain lesion marking data, including:
Calculating the volume, surface area and compactness of the lesion according to the lesion analysis data, and extracting shape features;
The lesion volume is calculated as: V = Σ_{i=1}^{N} v_i, where N is the total number of voxels within the lesion region, v_i is the volume of each voxel, and i is the index;
The surface area is calculated as: S = Σ_{j=1}^{M} a_j, where M is the total number of triangle units in the boundary mesh of the lesion region, a_j is the area of each triangle unit, and j is the index;
The compactness is calculated as: C = V / S;
according to the lesion analysis data, a gray level co-occurrence matrix is constructed, entropy and contrast are calculated through the gray level co-occurrence matrix, and texture features are extracted;
The entropy is calculated as: E = −Σ_{i=1}^{L} Σ_{j=1}^{L} p(i, j) log p(i, j), where p(i, j) is the co-occurrence probability between gray levels i and j in the gray level co-occurrence matrix, and L is the total number of gray levels;
The contrast is calculated as: Con = Σ_{i=1}^{L} Σ_{j=1}^{L} (i − j)² p(i, j), where (i − j) is the difference between gray levels i and j;
extracting metabolic activity features from lesion analysis data;
The metabolic activity features F_meta are obtained by: F_meta = ω · Σ_{(x,y)∈M} I_PET(x, y), where ω is the weight;
fusing the shape characteristics, the texture characteristics and the metabolic activity characteristics to obtain lesion characteristic data;
marking and separating the lesion characteristic data to obtain lesion marking data.
In the embodiment of the invention, the lesion feature extraction is based on comprehensive analysis of shape features, texture features and metabolic activity features. Through the lesion recognition based on multiple characteristics, the method can effectively improve the accuracy of lesion detection, and especially can comprehensively analyze the shape, texture and metabolic information of a complex lesion region in a multi-mode image. Compared with the prior art, the method can more accurately identify the complex lesion area, especially under the condition of complex texture or abnormal metabolic activity.
The volume reflects the size of the lesion, the surface area reflects its outer form, and the compactness characterizes its regularity. Through comprehensive analysis of these features, the system obtains the overall shape characteristics of the lesion region and provides physicians with key information about the lesion's growth pattern and appearance.
The entropy reflects the texture complexity of the lesion, and the contrast reflects the gray-level difference between the lesion and surrounding tissue. By combining these two features, the overall texture characteristics of the lesion region can be extracted. Regions of high entropy and high contrast generally indicate that the lesion region is complex and differs markedly from surrounding tissue, which is of great importance when assessing the structural features of complex lesions such as tumors.
Weighting by ω increases the clarity of the metabolic activity features.
The lesion volume V is the volume occupied by the lesion region in three-dimensional space. It is typically obtained by cumulative calculation over the voxels within the lesion region. The specific steps are as follows:
As described above, the segmentation algorithm extracts the lesion analysis region and divides it into many small voxel units; a voxel is a unit volume (typically a cube) of fixed size in three-dimensional space. The volume v_i of each voxel is calculated, and all voxel volumes within the lesion region are summed to obtain the total lesion volume.
The formula is:
V = Σ_{i=1}^{N} v_i,
where N is the total number of voxels within the lesion region and v_i is the volume of each voxel (determined by the resolution of the CT or MRI image). This voxel accumulation method ensures the accuracy of the volume calculation.
The lesion surface area S is the boundary surface area of the lesion region. The procedure for calculating it is as follows:
The three-dimensional boundary of the lesion region is extracted, and the total boundary area is computed with a triangular mesh method or another geometric algorithm. The triangular mesh method approximates the lesion boundary by many triangle units; the area of each triangle is calculated and accumulated to form the surface area of the whole boundary. The formula is:
S = Σ_{j=1}^{M} a_j,
where M is the total number of triangle units in the boundary mesh and a_j is the area of each triangle unit. This formula enables accurate calculation of the surface area of an irregular lesion region.
The compactness C is a geometric measure describing the regularity of the lesion shape. It is typically calculated as the ratio of volume to surface area and is used to measure whether the lesion is close to a regular shape (e.g., a sphere).
The compactness formula is:
C = V / S,
where V and S are the volume and surface area of the lesion, respectively. The more regular the lesion shape, the higher its compactness value.
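The three shape features can be sketched directly on a binary lesion mask. Note two assumptions: the surface area here is a voxel-face-counting estimate rather than the triangular-mesh method of the text, and compactness is taken literally as V/S:

```python
import numpy as np

def shape_features(mask, voxel_volume):
    """Voxel-counting volume, a 6-connected face-counting surface-area
    estimate, and compactness V/S on a boolean lesion mask."""
    V = mask.sum() * voxel_volume
    # count exposed faces of the mask along each axis
    faces = 0
    padded = np.pad(mask.astype(int), 1)
    for ax in range(3):
        faces += np.abs(np.diff(padded, axis=ax)).sum()
    S = faces * voxel_volume ** (2 / 3)   # face area of a cubic voxel
    return V, S, V / S

# toy lesion: a 3x3x3 cube of unit voxels
mask = np.zeros((5, 5, 5), dtype=bool)
mask[1:4, 1:4, 1:4] = True
V, S, comp = shape_features(mask, voxel_volume=1.0)
```

For the cube, V = 27 and the 6 faces of 9 voxels each give S = 54, so comp = 0.5; a more sphere-like mask would score higher, matching the regularity interpretation in the text.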
The entropy E is a parameter measuring the complexity of the gray-level distribution of the lesion region. The higher the entropy, the more complex the gray distribution and the larger the information content. The entropy is typically computed from the gray level co-occurrence matrix, as follows:
First, the gray level co-occurrence matrix of the lesion region is constructed, recording the gray-level relations between adjacent pixels in the image. The size of the matrix is determined by the number of gray levels L in the image. The entropy is then calculated using:
E = −Σ_{i=1}^{L} Σ_{j=1}^{L} p(i, j) log p(i, j),
where p(i, j) is the co-occurrence probability between gray levels i and j, and L is the total number of gray levels. The formula derives the entropy from the probability distribution of gray-level co-occurrence and reflects the texture complexity of the lesion region.
The contrast measures the degree of gray-value difference within the lesion region and reflects how much the lesion differs from surrounding tissue. High contrast usually means the gray-level difference of the lesion region is significant, which helps the physician identify the lesion boundary. The contrast is computed from the gray level co-occurrence matrix, as follows:
The gray level co-occurrence matrix of the lesion region is constructed, recording the gray values of adjacent pixels. The contrast is then calculated using:
Con = Σ_{i=1}^{L} Σ_{j=1}^{L} (i − j)² p(i, j),
where (i − j) is the difference between gray levels i and j. With this formula, the gray contrast of the lesion region can be calculated precisely; this parameter helps identify the sharpness of the lesion boundary.
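The two GLCM texture measures can be sketched in NumPy for a single horizontal neighbour offset (the offset choice and the natural-log base are assumptions; the text does not fix them):

```python
import numpy as np

def glcm(img, levels):
    """Horizontal-neighbour grey-level co-occurrence matrix,
    normalised to a probability distribution p(i, j)."""
    P = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        P[a, b] += 1
    return P / P.sum()

def entropy_and_contrast(P):
    """Entropy E = -sum p log p and contrast Con = sum (i-j)^2 p,
    matching the two GLCM formulas in the text."""
    nz = P[P > 0]
    E = -(nz * np.log(nz)).sum()
    i, j = np.indices(P.shape)
    C = ((i - j) ** 2 * P).sum()
    return E, C

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3]], dtype=int)
P = glcm(img, levels=4)
E, C = entropy_and_contrast(P)
```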
In a preferred embodiment of the present invention, the method for performing dynamic weighted fusion processing on lesion marking data and tissue structure data to obtain three-dimensional integrated image data includes:
Assigning weights to the lesion marking data and the tissue structure data and performing dynamic fusion processing to obtain the three-dimensional comprehensive image data I_final, whose weighted formula is:
I_final = w_1 · D_lesion + w_2 · D_tissue,
where w_1 is the weight coefficient of the lesion marking data D_lesion and w_2 is the weight coefficient of the tissue structure data D_tissue.
In the embodiment of the invention, the dynamic weighted fusion processing is an important step for realizing the comprehensive display of lesion marking data and tissue structure data. By distributing weights for different types of data, the method and the device can realize the fusion processing of the multi-mode data. Through the weighted fusion method, the display weight between the lesion area and the tissue structure can be dynamically adjusted according to the user requirement. Compared with the prior art, the method not only improves the flexibility of image display, but also can adjust the display effect of the images through user interaction and provide clearer lesion information.
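The dynamic weighted fusion reduces to a per-voxel weighted sum once the two layers are aligned. A minimal sketch, assuming the two weights sum to 1 so a single slider value alpha can drive the blend:

```python
import numpy as np

def weighted_fusion(lesion, tissue, alpha):
    """Blend lesion markings with tissue structure:
    I = alpha * lesion + (1 - alpha) * tissue.
    alpha is the user-adjustable lesion weight; forcing the two
    coefficients to sum to 1 is an assumption for the demo."""
    return alpha * lesion + (1.0 - alpha) * tissue

lesion = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
tissue = np.array([[0.0, 1.0],
                   [1.0, 0.0]])
fused = weighted_fusion(lesion, tissue, alpha=0.7)
```

In an interactive viewer, alpha would be the value the user drags to emphasise either the lesion markings or the surrounding anatomy.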
The embodiment of the invention also provides a three-dimensional post-processing system based on the multi-mode image, which is applied to the method and comprises the following steps:
the data collection module is used for acquiring CT image data, MRI image data and PET image data;
the denoising processing module is used for denoising the CT image data to obtain first CT image data;
the enhancement processing module is used for enhancing the MRI image data to obtain first MRI image data;
the artifact removal module is used for performing artifact removal processing on the PET image data to obtain first PET image data;
The feature extraction module is used for carrying out feature extraction on the first CT image data, the first MRI image data and the first PET image data to obtain skeleton structure features, soft tissue structure features and metabolic activity features;
The rigid registration module is used for carrying out rigid spatial registration on the skeleton structure features and the soft tissue structure features to obtain preliminary registration data;
The elastic registration module is used for carrying out elastic spatial registration on the preliminary registration data set and the metabolic activity characteristics to obtain multi-mode image data;
The three-dimensional reconstruction module is used for carrying out three-dimensional reconstruction processing according to the multi-mode image data to generate a three-dimensional reconstruction model integrating the anatomical information, the soft tissue information and the metabolic information;
The segmentation processing module is used for carrying out segmentation processing on the three-dimensional reconstruction model to obtain lesion analysis data and tissue structure data;
The lesion extraction module is used for extracting and separating lesion characteristic data according to the lesion analysis data to obtain lesion marking data;
The weighted fusion module is used for carrying out dynamic weighted fusion processing on the lesion marking data and the tissue structure data to obtain three-dimensional comprehensive image data;
and the visualization module is used for dynamically displaying the three-dimensional comprehensive image data through a visualization interface.
The denoising processing of the image data is realized through a Gaussian filtering algorithm, and the Gaussian filtering algorithm is used for:
performing noise estimation on CT image data;
Adjusting filter parameters based on the noise estimation result;
and performing smoothing treatment on the image data to obtain denoised CT image data.
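The three denoising steps (estimate noise, adjust the filter, smooth) can be sketched as follows; the difference-based noise estimator and the gain factor are illustrative assumptions, not the patent's estimator:

```python
import numpy as np

def estimate_noise_sigma(img):
    """Crude noise estimate from horizontal pixel differences
    (a stand-in for whatever estimator the module actually uses)."""
    d = np.abs(np.diff(img, axis=1))
    return np.median(d) / 0.6745

def _smooth1d(a, k, axis):
    """Separable 1-D convolution with reflective borders."""
    r = (len(k) - 1) // 2
    pad = [(r, r) if ax == axis else (0, 0) for ax in range(a.ndim)]
    padded = np.pad(a, pad, mode="reflect")
    return np.apply_along_axis(lambda row: np.convolve(row, k, "valid"),
                               axis, padded)

def denoise_ct(img, gain=0.5):
    """Noise-adaptive Gaussian smoothing: the filter width follows the
    estimated noise level, then a separable Gaussian is applied."""
    sigma = max(gain * estimate_noise_sigma(img), 0.5)
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    return _smooth1d(_smooth1d(img, k, 0), k, 1)

rng = np.random.default_rng(0)
noisy = 100.0 + rng.normal(0.0, 5.0, size=(64, 64))
clean = denoise_ct(noisy)
```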
The enhancement processing of the image data is realized through a contrast stretching algorithm, and the contrast stretching algorithm is used for:
acquiring a gray level histogram of MRI image data;
Adjusting the contrast range of the image data according to the gray level histogram;
and carrying out contrast stretching treatment on the image data to obtain the enhanced MRI image data.
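A percentile-based contrast stretch is one common realisation of "adjust the contrast range from the gray-level histogram"; the percentile cutoffs here are assumptions:

```python
import numpy as np

def contrast_stretch(img, p_low=2, p_high=98):
    """Clip the histogram tails outside [p_low, p_high] percentiles and
    map the remaining intensity range linearly onto [0, 1]."""
    lo, hi = np.percentile(img, [p_low, p_high])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

# toy MRI intensity ramp
mri = np.linspace(0, 200, 101)
stretched = contrast_stretch(mri)
```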
It should be noted that, the system is a system corresponding to the above method, and all implementation manners in the above method embodiment are applicable to the embodiment, so that the same technical effects can be achieved.
Embodiments of the present invention also provide a computing device comprising a processor, a memory storing a computer program which, when executed by the processor, performs a method as described above. All the implementation manners in the method embodiment are applicable to the embodiment, and the same technical effect can be achieved.
Embodiments of the present invention also provide a computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform a method as described above. All the implementation manners in the method embodiment are applicable to the embodiment, and the same technical effect can be achieved.
While the invention has been described with reference to the preferred embodiments, it should be understood by those skilled in the art of image processing that various modifications and adaptations can be made without departing from the principles of the invention, and such modifications and adaptations are intended to be comprehended within the scope of the invention.

Claims (10)

1. A multi-modality-based three-dimensional image post-processing method, characterized in that the method comprises:
acquiring CT image data, MRI image data and PET image data;
preprocessing the CT image data, the MRI image data and the PET image data to obtain first CT image data, first MRI image data and first PET image data;
performing feature extraction on the first CT image data, the first MRI image data and the first PET image data to obtain bone structure features, soft tissue structure features and metabolic activity features;
performing rigid spatial registration on the bone structure features and the soft tissue structure features to obtain preliminary registration data;
performing elastic spatial registration on the preliminary registration data set and the metabolic activity features to obtain multi-modal image data;
performing three-dimensional reconstruction processing according to the multi-modal image data to generate a three-dimensional reconstruction model fusing anatomical information, soft tissue information and metabolic information;
performing segmentation processing on the three-dimensional reconstruction model to obtain lesion analysis data and tissue structure data;
extracting lesion feature data according to the lesion analysis data and separating it to obtain lesion marking data;
performing dynamic weighted fusion processing on the lesion marking data and the tissue structure data to obtain three-dimensional comprehensive image data;
dynamically displaying the three-dimensional comprehensive image data through a visualization interface.
2. The method according to claim 1, characterized in that preprocessing the CT image data, the MRI image data and the PET image data to obtain the first CT image data, the first MRI image data and the first PET image data comprises:
denoising the CT image data to obtain the first CT image data;
enhancing the MRI image data to obtain the first MRI image data;
removing artifacts from the PET image data to obtain the first PET image data.
3. The method according to claim 2, characterized in that the extraction formula of the bone structure features F_bone is:
F_bone = √(G_x² + G_y²),
where I_CT is the first CT image data, G_x and G_y are the gradients of the CT image along the x-axis and y-axis obtained with a gradient operator incorporating adaptive Gaussian smoothing, and σ is the standard deviation of the Gaussian filter.
4. The method according to claim 3, characterized in that the extraction formula of the soft tissue structure features F_soft is:
F_soft(x, y) = I_MRI(x, y), if T_low ≤ I_MRI(x, y) ≤ T_high; otherwise 0,
where I_MRI is the first MRI image data, S is the soft tissue segmentation function, and T_low and T_high are the lower and upper thresholds respectively; pixel values within the threshold range are soft tissue features, and F_soft denotes the segmented soft tissue region.
5. The method according to claim 4, characterized in that the extraction formula of the metabolic activity features F_meta is:
M(x, y) = I_PET(x, y), if I_PET(x, y) > T_meta; otherwise 0,
F_meta = ω · Σ_{(x,y)∈M} I_PET(x, y),
where I_PET is the first PET image data, M is the metabolic activity extraction function, T_meta is the threshold for metabolic activity extraction, ω is the weight, and the summation accumulates intensity over all pixels within the metabolically active region.
6. The method according to claim 5, characterized in that performing rigid spatial registration on the bone structure features and the soft tissue structure features to obtain preliminary registration data comprises:
acquiring spatial coordinates of the bone structure features and the soft tissue structure features;
according to the spatial coordinates, spatially aligning the bone structure features and the soft tissue structure features through a rigid transformation matrix R to obtain registered bone structure features F'_bone and registered soft tissue structure features F'_soft;
the expression of the rigid transformation matrix is:
R = [ cos θ   −sin θ   t_x ;  sin θ   cos θ   t_y ;  0   0   1 ],
where θ is the rotation angle and (t_x, t_y) is the translation vector;
the registered bone structure features F'_bone and registered soft tissue structure features F'_soft are calculated as:
F'_bone = R · F_bone,   F'_soft = R · F_soft.
7. The method according to claim 6, characterized in that performing elastic spatial registration on the preliminary registration data set and the metabolic activity features to obtain multi-modal image data comprises:
calculating similarity differences of a preset region according to the registered bone structure features and the registered soft tissue structure features to obtain similarity difference data;
determining deformation data of the preset region according to the similarity difference data;
generating a deformation parameter field D describing the displacement of each pixel point according to the deformation data;
adjusting the metabolic activity features according to the deformation parameter field to obtain registered metabolic activity features F'_meta;
the extraction formula of the registered metabolic activity features is:
F'_meta(x, y) = F_meta((x, y) + D(x, y));
generating the multi-modal image data according to the registered bone structure features, the registered soft tissue structure features and the registered metabolic activity features.
8. The method according to claim 7, characterized in that extracting lesion feature data according to the lesion analysis data and separating it to obtain lesion marking data comprises:
calculating lesion volume, surface area and compactness according to the lesion analysis data, and extracting shape features;
the lesion volume is calculated as: V = Σ_{i=1}^{N} v_i, where N is the total number of voxels within the lesion region, v_i is the volume of each voxel, and i is the index;
the surface area is calculated as: S = Σ_{j=1}^{M} a_j, where M is the total number of triangle units in the boundary mesh of the lesion region, a_j is the area of each triangle unit, and j is the index;
the compactness is calculated as: C = V / S;
constructing a gray level co-occurrence matrix according to the lesion analysis data, calculating entropy and contrast through the gray level co-occurrence matrix, and extracting texture features;
the entropy is calculated as: E = −Σ_{i=1}^{L} Σ_{j=1}^{L} p(i, j) log p(i, j), where p(i, j) is the co-occurrence probability between gray levels i and j in the gray level co-occurrence matrix, and L is the total number of gray levels;
the contrast is calculated as: Con = Σ_{i=1}^{L} Σ_{j=1}^{L} (i − j)² p(i, j), where (i − j) is the difference between gray levels i and j;
extracting metabolic activity features F_meta according to the lesion analysis data;
the metabolic activity features are obtained by: F_meta = ω · Σ_{(x,y)∈M} I_PET(x, y), where ω is the weight;
fusing the shape features, texture features and metabolic activity features to obtain lesion feature data;
marking and separating the lesion feature data to obtain lesion marking data.
9. The method according to claim 8, characterized in that performing dynamic weighted fusion processing on the lesion marking data and the tissue structure data to obtain three-dimensional comprehensive image data comprises:
assigning weights to the lesion marking data and the tissue structure data and performing dynamic fusion processing to obtain the three-dimensional comprehensive image data, whose weighted formula is:
I_final = w_1 · D_lesion + w_2 · D_tissue,
where w_1 is the weight coefficient of the lesion marking data D_lesion and w_2 is the weight coefficient of the tissue structure data D_tissue.
10. A three-dimensional post-processing system based on multi-modal images, characterized in that it is applied to the method according to any one of claims 1 to 9, the system comprising:
a data collection module for acquiring CT image data, MRI image data and PET image data;
a denoising module for denoising the CT image data to obtain first CT image data;
an enhancement module for enhancing the MRI image data to obtain first MRI image data;
an artifact removal module for removing artifacts from the PET image data to obtain first PET image data;
a feature extraction module for extracting features from the first CT image data, the first MRI image data and the first PET image data to obtain bone structure features, soft tissue structure features and metabolic activity features;
a rigid registration module for performing rigid spatial registration on the bone structure features and the soft tissue structure features to obtain preliminary registration data;
an elastic registration module for performing elastic spatial registration on the preliminary registration data set and the metabolic activity features to obtain multi-modal image data;
a three-dimensional reconstruction module for performing three-dimensional reconstruction according to the multi-modal image data to generate a three-dimensional reconstruction model fusing anatomical information, soft tissue information and metabolic information;
a segmentation module for segmenting the three-dimensional reconstruction model to obtain lesion analysis data and tissue structure data;
a lesion extraction module for extracting lesion feature data according to the lesion analysis data and separating it to obtain lesion marking data;
a weighted fusion module for performing dynamic weighted fusion on the lesion marking data and the tissue structure data to obtain three-dimensional comprehensive image data;
a visualization module for dynamically displaying the three-dimensional comprehensive image data through a visualization interface.
CN202411538123.0A, filed 2024-10-31: Multi-mode-based three-dimensional image post-processing method and system (status: Pending)

Priority Applications (1)

Application Number: CN202411538123.0A; Priority Date: 2024-10-31; Filing Date: 2024-10-31; Title: Multi-mode-based three-dimensional image post-processing method and system

Publications (1)

Publication Number | Publication Date
CN119048694A (en) | 2024-11-29

Family

ID=93574266

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date
CN202411538123.0A | Pending | CN119048694A (en) | 2024-10-31 | 2024-10-31

Country Status (1)

Country | Link
CN (1) | CN119048694A (en)

Cited By (7)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN119446437A (en)* | 2025-01-13 | 2025-02-14 | 四川大学华西医院 | An intelligent digital image processing system for urology
CN119540468A (en)* | 2025-01-21 | 2025-02-28 | 北京中研海康科技有限公司 | Multi-angle photography and three-dimensional reconstruction method and system for soft tissue
CN120236011A (en)* | 2025-03-19 | 2025-07-01 | 青峰宇 | A craniofacial dynamic reconstruction method and system based on multimodal data fusion
CN119850621A (en)* | 2025-03-20 | 2025-04-18 | 天津博思特医疗科技有限责任公司 | Tumor angiogenesis detection and analysis method based on multi-mode image fusion
CN119850621B (en)* | 2025-03-20 | 2025-07-18 | 天津博思特医疗科技有限责任公司 | Tumor angiogenesis detection and analysis method based on multi-mode image fusion
CN120298329A (en)* | 2025-03-24 | 2025-07-11 | 江苏泰科医疗科技有限公司 | A medical image analysis and processing method based on AI
CN119887773A (en)* | 2025-03-28 | 2025-04-25 | 上海万怡医学科技股份有限公司 | Medical image recognition processing system and method based on multi-mode image fusion

Citations (4)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
DE102007018630A1 (en)* | 2007-04-19 | 2008-10-23 | Siemens AG | Multimodal image acquisition, processing, archiving and visualization system for a patient, with a merging, registration and visualization tool for linking, registering and archiving image data of two-dimensional cross sections
CN101626727A (en)* | 2007-03-06 | 2010-01-13 | 皇家飞利浦电子股份有限公司 (Koninklijke Philips Electronics N.V.) | Automated diagnosis and alignment supplemented with PET/MR flow estimation
CN118262875A (en)* | 2024-04-11 | 2024-06-28 | 南昌大学第二附属医院 | Medical image diagnosis and contrast film reading method
CN118334006A (en)* | 2024-05-13 | 2024-07-12 | 北京汉博信息技术有限公司 | Processing method and device for three-dimensional focus positioning based on multi-mode image


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
(AU) Jiyuan Tu (屠基元), Kiao Inthavong, Goodarz Ahmadi: "Fundamentals of Computational Fluid Dynamics and Particle Dynamics of the Human Respiratory System" (人体呼吸系统的计算流体力学与粒子动力学基础), 31 August 2021, Harbin: Harbin Engineering University Press, pages 42-44 *
(US) Thomas Bortfeld et al.: "Image-Guided Intensity-Modulated Radiation Therapy" (影像引导调强放射治疗), 30 April 2012, Tianjin: Tianjin Science and Technology Translation Publishing Company, page 247 *
Ding Mingyue (丁明跃): "Internet of Things Identification Technology" (物联网识别技术), 31 July 2012, Beijing: China Railway Publishing House, pages 110-114 *
Qu Chunhui (屈春晖): "Clinical Diagnosis in Medical Imaging" (医学影像临床诊断), 30 September 2023, Shanghai Scientific and Technological Literature Press, page 97 *


Similar Documents

Publication | Title
CN119048694A (en) | Multi-mode-based three-dimensional image post-processing method and system
US8666128B2 (en) | Methods, systems, and computer readable media for mapping regions in a model of an object comprising an anatomical structure from one image data set to images used in a diagnostic or therapeutic intervention
US7935055B2 (en) | System and method of measuring disease severity of a patient before, during and after treatment
Alam et al. | Challenges and solutions in multimodal medical image subregion detection and registration
US20150023575A1 (en) | Anatomy Aware Articulated Registration for Image Segmentation
US8588498B2 (en) | System and method for segmenting bones on MR images
WO2012074039A1 (en) | Medical image processing device
CN115830016B (en) | Medical image registration model training method and equipment
WO2007044508A2 (en) | System and method for whole body landmark detection, segmentation and change quantification in digital images
Linguraru et al. | Liver and tumor segmentation and analysis from CT of diseased patients via a generic affine invariant shape parameterization and graph cuts
CN109498046A (en) | Myocardial infarction quantitative evaluation method based on fusion of nuclide imaging and CT coronary angiography
Alam et al. | Evaluation of medical image registration techniques based on nature and domain of the transformation
CN101005803B (en) | Method for flexible 3DRA-CT fusion
CN119963613A (en) | A medical image and three-dimensional space registration method based on 3D printing technology
Galdames et al. | Registration of renal SPECT and 2.5D US images
CN117427286B (en) | A method, system and device for identifying tumor radiotherapy target area based on spectral CT
Carminati et al. | Reconstruction of the descending thoracic aorta by multiview compounding of 3-D transesophageal echocardiographic aortic data sets for improved examination and quantification of atheroma burden
Garcia et al. | Multimodal breast parenchymal patterns correlation using a patient-specific biomechanical model
CN116645389A (en) | A personalized vascular thrombus three-dimensional structure modeling method and system
Li et al. | 3D intersubject warping and registration of pulmonary CT images for a human lung model
Frantz et al. | Development and validation of a multi-step approach to improved detection of 3D point landmarks in tomographic images
Hopp et al. | Automatic multimodal 2D/3D image fusion of ultrasound computer tomography and x-ray mammography for breast cancer diagnosis
CN115035166A (en) | CT and MRI 3D/3D image registration method based on human face feature points
Kawata et al. | Tracking interval changes of pulmonary nodules using a sequence of three-dimensional thoracic images
TWI548401B (en) | Method for reconstruction of blood vessels 3D structure

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
