Disclosure of Invention
The invention aims to provide a multi-modal three-dimensional image post-processing method and system, so as to solve the problems noted in the background art.
In order to solve the technical problems, the technical scheme of the invention is as follows:
In a first aspect, a multi-modal three-dimensional image post-processing method, the method comprising:
acquiring CT image data, MRI image data and PET image data;
preprocessing the CT image data, the MRI image data and the PET image data to obtain first CT image data, first MRI image data and first PET image data;
performing feature extraction on the first CT image data, the first MRI image data and the first PET image data to obtain bone structure features, soft tissue structure features and metabolic activity features;
performing rigid spatial registration on the bone structure features and the soft tissue structure features to obtain preliminary registration data;
performing elastic spatial registration on the preliminary registration data and the metabolic activity features to obtain multi-modal image data;
performing three-dimensional reconstruction processing according to the multi-modal image data to generate a three-dimensional reconstruction model fusing anatomical information, soft tissue information and metabolic information;
performing segmentation processing on the three-dimensional reconstruction model to obtain lesion analysis data and tissue structure data;
extracting and separating lesion feature data according to the lesion analysis data to obtain lesion marking data;
performing dynamic weighted fusion processing on the lesion marking data and the tissue structure data to obtain three-dimensional comprehensive image data;
and dynamically displaying the three-dimensional comprehensive image data through a visualization interface.
Preferably, preprocessing the CT image data, the MRI image data and the PET image data to obtain first CT image data, first MRI image data and first PET image data includes:
denoising the CT image data to obtain the first CT image data;
performing enhancement processing on the MRI image data to obtain the first MRI image data;
and performing artifact removal processing on the PET image data to obtain the first PET image data.
Preferably, the bone structure feature F_{bone} is extracted by the formula:

F_{bone} = \sqrt{\left(\frac{\partial (G_\sigma * I_{CT})}{\partial x}\right)^2 + \left(\frac{\partial (G_\sigma * I_{CT})}{\partial y}\right)^2},

where I_{CT} is the first CT image data, \partial/\partial x and \partial/\partial y are the gradients of the CT image in the x-axis and y-axis directions respectively, G_\sigma is the adaptive Gaussian smoothing filter, and \sigma is the standard deviation of the Gaussian filter.
Preferably, the soft tissue structure feature F_{soft} is extracted by the formula:

F_{soft} = S(I_{MRI}) = \{x : T_{low} \le I_{MRI}(x) \le T_{high}\},

where I_{MRI} is the first MRI image data, S is the soft tissue segmentation function, T_{low} and T_{high} are the lower and upper thresholds respectively, pixels whose values satisfy the threshold range are soft tissue features, and F_{soft} represents the segmented soft tissue region.
Preferably, the metabolic activity feature F_{met} is extracted by the formulas:

M(x) = 1 if I_{PET}(x) > T_{met}, otherwise M(x) = 0,

F_{met} = w \cdot \sum_{x} M(x)\, I_{PET}(x),

where I_{PET} is the first PET image data, M is the metabolic activity extraction function, T_{met} is the threshold for metabolic activity extraction, w is the weight, and the summation represents intensity accumulation over all pixels within the metabolically active region.
Preferably, performing rigid spatial registration on the bone structure features and the soft tissue structure features to obtain preliminary registration data includes:
acquiring the spatial coordinates of the bone structure features and the soft tissue structure features;
spatially aligning the bone structure features and the soft tissue structure features according to the spatial coordinates by a rigid transformation matrix T, to obtain registered bone structure features F'_{bone} and registered soft tissue structure features F'_{soft};
the expression of the rigid transformation matrix is:

T = \begin{pmatrix} \cos\theta & -\sin\theta & t_x \\ \sin\theta & \cos\theta & t_y \\ 0 & 0 & 1 \end{pmatrix},

where \theta is the rotation angle and (t_x, t_y) is the translation vector;
the registered bone structure features F'_{bone} and the registered soft tissue structure features F'_{soft} are calculated as:

F'_{bone} = T \cdot F_{bone},
F'_{soft} = T \cdot F_{soft}.
Preferably, performing elastic spatial registration on the preliminary registration data and the metabolic activity features to obtain multi-modal image data includes:
calculating similarity differences of preset regions according to the registered bone structure features and the registered soft tissue structure features to obtain similarity difference data;
determining deformation data of the preset regions according to the similarity difference data;
generating a deformation parameter field describing the displacement of each pixel point according to the deformation data;
adjusting the metabolic activity features according to the deformation parameter field to obtain registered metabolic activity features;
the registered metabolic activity features F'_{met} are given by:

F'_{met}(x) = F_{met}(x + D(x)),

where D(x) is the deformation parameter field at pixel x;
and generating multi-modal image data according to the registered bone structure features, the registered soft tissue structure features and the registered metabolic activity features.
Preferably, extracting and separating lesion feature data according to the lesion analysis data to obtain lesion marking data includes:
calculating the volume, surface area and compactness of the lesion according to the lesion analysis data, and extracting shape features;
the lesion volume is calculated as V = \sum_{i=1}^{N} v_i, where N is the total number of voxels within the lesion region, v_i is the volume of each voxel, and i is an index;
the surface area is calculated as A = \sum_{j=1}^{M} a_j, where M is the total number of triangle units in the boundary mesh of the lesion region, a_j represents the area of each triangle unit, and j is an index;
the compactness is calculated as C = \frac{36\pi V^2}{A^3};
constructing a gray level co-occurrence matrix according to the lesion analysis data, calculating entropy and contrast through the gray level co-occurrence matrix, and extracting texture features;
the entropy is calculated as E = -\sum_{i=1}^{L}\sum_{j=1}^{L} p(i,j) \log p(i,j), where p(i,j) represents the co-occurrence probability between gray levels i and j, and L is the total number of gray levels;
the contrast is calculated as Con = \sum_{i=1}^{L}\sum_{j=1}^{L} (i-j)^2 p(i,j), where (i-j) represents the difference between gray levels i and j;
extracting metabolic activity features from the lesion analysis data;
the metabolic activity feature F_{met} is obtained by F_{met} = w \cdot \sum_{x} I_{PET}(x), where w is the weight and the summation runs over the pixels of the lesion region;
fusing the shape features, the texture features and the metabolic activity features to obtain lesion feature data;
and marking and separating the lesion feature data to obtain lesion marking data L_{mark}.
Preferably, performing dynamic weighted fusion processing on the lesion marking data and the tissue structure data to obtain three-dimensional comprehensive image data includes:
assigning weights to the lesion marking data L_{mark} and the tissue structure data T_{struct} and performing dynamic fusion processing; the three-dimensional comprehensive image data I_{fused} is given by the weighted formula:

I_{fused} = \alpha \cdot L_{mark} + \beta \cdot T_{struct},

where \alpha is the weight coefficient of the lesion marking data L_{mark} and \beta is the weight coefficient of the tissue structure data T_{struct}.
In a second aspect, a multi-modal three-dimensional image post-processing system, the system comprising:
a data collection module for acquiring CT image data, MRI image data and PET image data;
a denoising module for denoising the CT image data to obtain first CT image data;
an enhancement module for enhancing the MRI image data to obtain first MRI image data;
an artifact removal module for performing artifact removal processing on the PET image data to obtain first PET image data;
a feature extraction module for performing feature extraction on the first CT image data, the first MRI image data and the first PET image data to obtain bone structure features, soft tissue structure features and metabolic activity features;
a rigid registration module for performing rigid spatial registration on the bone structure features and the soft tissue structure features to obtain preliminary registration data;
an elastic registration module for performing elastic spatial registration on the preliminary registration data and the metabolic activity features to obtain multi-modal image data;
a three-dimensional reconstruction module for performing three-dimensional reconstruction processing according to the multi-modal image data to generate a three-dimensional reconstruction model fusing anatomical information, soft tissue information and metabolic information;
a segmentation module for performing segmentation processing on the three-dimensional reconstruction model to obtain lesion analysis data and tissue structure data;
a lesion extraction module for extracting and separating lesion feature data according to the lesion analysis data to obtain lesion marking data;
a weighted fusion module for performing dynamic weighted fusion processing on the lesion marking data and the tissue structure data to obtain three-dimensional comprehensive image data;
and a visualization module for dynamically displaying the three-dimensional comprehensive image data through a visualization interface.
The scheme of the invention at least comprises the following beneficial effects:
First, the integrated processing of multi-modal images comprehensively reflects bone, soft tissue and metabolic activity information, overcoming the limitation that a single-modality image cannot cover all of this information. Second, the combined use of rigid and elastic registration greatly improves the spatial alignment accuracy of images from different modalities, in particular resolving inconsistencies caused by local-area deformation. Third, the automated lesion recognition and marking process greatly improves diagnostic efficiency and avoids errors that manual marking may introduce. Finally, the dynamic weighted fusion and interactive visualization functions make image display more flexible: doctors can adjust the display mode according to actual needs, further improving the diagnostic value of the images.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As shown in fig. 1, an embodiment of the present invention proposes a multi-modal three-dimensional image post-processing method, which includes the following steps:
acquiring CT image data, MRI image data and PET image data;
denoising the CT image data to obtain first CT image data;
performing enhancement processing on the MRI image data to obtain first MRI image data;
performing artifact removal processing on the PET image data to obtain first PET image data;
performing feature extraction on the first CT image data, the first MRI image data and the first PET image data to obtain bone structure features, soft tissue structure features and metabolic activity features;
performing rigid spatial registration on the bone structure features and the soft tissue structure features to obtain preliminary registration data;
performing elastic spatial registration on the preliminary registration data and the metabolic activity features to obtain multi-modal image data;
performing three-dimensional reconstruction processing according to the multi-modal image data to generate a three-dimensional reconstruction model fusing anatomical information, soft tissue information and metabolic information;
performing segmentation processing on the three-dimensional reconstruction model to obtain lesion analysis data and tissue structure data;
extracting and separating lesion feature data according to the lesion analysis data to obtain lesion marking data;
performing dynamic weighted fusion processing on the lesion marking data and the tissue structure data to obtain three-dimensional comprehensive image data;
and dynamically displaying the three-dimensional comprehensive image data through a visualization interface.
In the embodiment of the invention, data is first acquired from different imaging modalities, specifically a CT image, an MRI image and a PET image. The CT image provides information about bone structure, the MRI image captures the detail of soft tissue, and the PET image provides metabolic activity data. Each modality has different strengths: CT excels at displaying bone and hard tissue, MRI at showing soft tissue detail, and PET at revealing metabolically active areas. Integrating these data makes the subsequent analysis more comprehensive and accurate and provides multi-angle support for diagnosis.
After the image data are acquired, the data are preprocessed. Preprocessing of the CT image comprises denoising: a denoising filter is applied to remove noise and ensure the high quality of the bone structure data. Preprocessing of the MRI image adjusts contrast through an image enhancement algorithm, highlighting the details of soft tissue structures. The PET image undergoes artifact removal, eliminating artifact interference and ensuring the accuracy of the metabolic activity data. After preprocessing, the resulting images are referred to as the first CT image data, first MRI image data and first PET image data, respectively.
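The three preprocessing steps above can be sketched as follows. This is a minimal NumPy/SciPy illustration, not the patent's claimed implementation: the concrete filter choices (Gaussian denoising, linear contrast stretch, median artifact suppression) are assumptions standing in for whatever filters a real embodiment would select.

```python
import numpy as np
from scipy import ndimage

def denoise_ct(ct):
    # Gaussian denoising removes noise while keeping high-contrast bone edges usable
    return ndimage.gaussian_filter(ct, sigma=1.0)

def enhance_mri(mri):
    # Simple linear contrast stretch to [0, 1] to highlight soft-tissue detail
    lo, hi = mri.min(), mri.max()
    return (mri - lo) / (hi - lo + 1e-12)

def remove_pet_artifacts(pet):
    # Median filtering suppresses isolated speckle-like artifacts
    return ndimage.median_filter(pet, size=3)

ct = np.random.rand(32, 32)
mri = np.random.rand(32, 32) * 100.0
pet = np.random.rand(32, 32)

first_ct = denoise_ct(ct)        # "first CT image data"
first_mri = enhance_mri(mri)     # "first MRI image data"
first_pet = remove_pet_artifacts(pet)  # "first PET image data"
```

Each function maps raw modality data to the corresponding "first" image data used in the later steps.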
Next, respective key features are extracted from the three image data. The CT image is mainly used for extracting skeleton structure characteristics, the MRI image is used for extracting soft tissue characteristics, and the PET image is used for extracting metabolic activity characteristics. Each feature data is used for subsequent registration and reconstruction operations, guaranteeing the integrity and consistency of the processed data in subsequent steps.
Registration of the multimodal images is an important step in achieving subsequent reconstruction and analysis. First, the extracted bone structural features and soft tissue structural features are rigidly spatially registered. The rigid registration keeps the geometry of the images unchanged, and only rotates and translates the images to ensure that the different modality images are spatially aligned. After rigid registration, the metabolic activity features are further processed by elastic registration. Elastic registration allows fine deformation adjustment of images in localized areas, ensures precise alignment of metabolic activity features with bone and soft tissue structures, and particularly for those localized differences due to patient position changes or physiological activity, can significantly improve the accuracy of image alignment.
And after registration is completed, three-dimensional reconstruction is performed by combining the multi-mode image data. By comprehensively analyzing the skeletal structure characteristics, the soft tissue structure characteristics and the metabolic activity characteristics, a three-dimensional reconstruction model capable of fusing anatomical, soft tissue and metabolic information is generated. The model provides more comprehensive image information and provides powerful support for subsequent lesion identification and clinical diagnosis.
After the three-dimensional reconstruction is completed, the lesion area is automatically identified and marked. The identification of the lesion area is based on comprehensive analysis of various features, and particularly, the position and the range of the lesion area are accurately identified through comprehensive extraction of shape features, texture features and metabolic features. The shape features analyze the volume, surface area and compactness of the lesion area, the texture features extract entropy and contrast of the lesion area, and the metabolic features analyze the metabolic active area in the PET image. The multi-angle analysis greatly improves the accuracy of lesion identification and ensures the complete marking of the lesion area.
And after the lesion marking is finished, fusing the lesion marking data and the tissue structure data through a dynamic weighted fusion algorithm. The weighted fusion algorithm allows the user to adjust the display ratio of the lesion area to the normal tissue according to the weight set by the user. In this way, the doctor can display specific areas in the image more flexibly according to actual needs, such as focusing on the metabolic active area or highlighting the soft tissue structure. The image data after the weighted fusion processing can realize dynamic fusion display between the lesion area and the normal tissue.
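In its simplest form, the dynamic weighted fusion described above reduces to a per-voxel weighted sum of the lesion marking data and the tissue structure data. A minimal sketch, with hypothetical weights alpha and beta chosen by the user:

```python
import numpy as np

def weighted_fusion(lesion_mark, tissue, alpha, beta):
    # I_fused = alpha * lesion marking data + beta * tissue structure data
    return alpha * lesion_mark + beta * tissue

lesion = np.zeros((4, 4))
lesion[1:3, 1:3] = 1.0            # marked lesion region
tissue = np.ones((4, 4)) * 0.5    # normal tissue intensities

# Emphasize the lesion area (alpha > beta); the user can adjust these live
fused = weighted_fusion(lesion, tissue, alpha=0.7, beta=0.3)
```

Raising alpha highlights the lesion relative to normal tissue; raising beta does the opposite, which is exactly the display-ratio adjustment described above.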
And finally, displaying the fused multi-mode image data through a visual interface. The user can interact with the image data through the three-dimensional visual interface to adjust the display mode and the transparency of the image. Allowing the user to view the lesion area from different angles ensures that the physician can obtain more comprehensive diagnostic information. The user can also dynamically adjust the display weights of the images of different modes, so that the fused image can meet specific diagnosis requirements. Through the flexible visual display, doctors can analyze the lesion areas from multiple dimensions, and the accuracy and the efficiency of diagnosis are improved.
Compared with the prior art, the method has the following advantages. First, the integrated processing of multi-modal images comprehensively reflects bone, soft tissue and metabolic activity information, overcoming the limitation that a single-modality image cannot cover all of this information. Second, the combined use of rigid and elastic registration greatly improves the spatial alignment accuracy of images from different modalities, in particular resolving inconsistencies caused by local-area deformation. Third, the automated lesion recognition and marking process greatly improves diagnostic efficiency and avoids errors that manual marking may introduce. Finally, the dynamic weighted fusion and interactive visualization functions make image display more flexible: doctors can adjust the display mode according to actual needs, further improving the diagnostic value of the images.
The main purpose of the three-dimensional reconstruction process is to fuse the features extracted from the different modality images (CT, MRI, PET) to generate a comprehensive three-dimensional model capable of reflecting anatomical structures, soft tissue features and metabolic information simultaneously. The image data of each modality provides different tissue type information, CT images provide the structure of hard tissues such as bones, MRI images provide details of soft tissues, and PET images show the distribution of metabolic activity. By fusion of the multimodal images, a three-dimensional model can be generated that comprehensively reflects the internal structure of the patient's body.
Specifically, the three-dimensional reconstruction process first aligns the bone structural features in CT images, the soft tissue structural features in MRI images, and the metabolic activity features in PET images spatially precisely by spatially registering them. Then, the characteristic data are integrated, and a three-dimensional model is constructed through a three-dimensional reconstruction algorithm. The three-dimensional model not only contains anatomical structure information of bones and soft tissues, but also reflects metabolic activity conditions of patients, in particular to pathological areas (such as tumors) with active metabolism.
In the prior art, only single-mode image data can be processed generally, and characteristics of different tissue types cannot be reflected at the same time. The invention realizes the comprehensive fusion of the anatomical information, the soft tissue information and the metabolic information through the three-dimensional reconstruction of the multi-mode image, and the generated three-dimensional reconstruction model can provide more comprehensive and accurate diagnosis basis. For example, in the treatment of cancer, PET images may show areas of active metabolism but not accurately reflect the anatomy, whereas through the three-dimensional reconstruction model of the present invention, a physician can not only see the metabolic condition of a tumor, but also understand the specific location of the tumor in the anatomy.
The segmentation process is a further analysis step of the image data after the three-dimensional reconstruction model is generated, and aims to automatically segment the lesion area and the tissue structure area from the reconstructed three-dimensional model. The segmentation process is to analyze the volume data in the three-dimensional model through an algorithm to identify a lesion area and a normal tissue area.
The lesion analysis data is obtained by preliminarily identifying the volume, shape, metabolic activity and other information of the lesion area. Shape analysis can detect geometric features (e.g., volume, surface area, compactness) of the lesion region, while texture analysis can extract complexity (e.g., entropy, contrast) of the lesion region, and metabolic analysis provides metabolic activity (reflected by PET images) of the lesion region. The data can help doctors to better know the characteristics of the lesion area, particularly in cancer diagnosis, the segmentation processing can accurately identify the boundary, volume and metabolic level of the tumor area, and the doctors are assisted to judge the type and the development condition of the tumor.
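The texture measures mentioned here, entropy and contrast from a gray level co-occurrence matrix, can be illustrated with a small sketch. The single-offset GLCM below is an assumption for clarity; a practical implementation would typically aggregate several offsets and directions.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    # Co-occurrence counts p(i, j) for pixel pairs offset by (dy, dx), normalized
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def entropy(P):
    # E = -sum p(i,j) log p(i,j), skipping zero entries
    nz = P[P > 0]
    return -np.sum(nz * np.log(nz))

def contrast(P):
    # Con = sum (i - j)^2 p(i,j)
    i, j = np.indices(P.shape)
    return np.sum((i - j) ** 2 * P)

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
```

High entropy indicates complex, disordered texture; high contrast indicates large local gray-level differences, both of which tend to distinguish lesions from homogeneous normal tissue.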
The tissue structure data then corresponds to normal anatomical and soft tissue regions. By differentiating bones, soft tissues and lesion areas through a segmentation algorithm, detailed information of tissue structures can be provided. The method has important application value for operation planning, selection of radiotherapy target areas and the like. For example, a doctor can determine important anatomical regions to be avoided in the surgical procedure by analyzing tissue structure data, ensuring the accuracy and safety of the surgery.
In a preferred embodiment of the present invention, feature extraction is performed on the first CT image data, the first MRI image data, and the first PET image data to obtain bone structural features, soft tissue structural features, and metabolic activity features, including:
extracting features of the first CT image data to obtain skeleton structure features;
extracting features of the first MRI image data to obtain soft tissue structure features;
and extracting features of the first PET image data to obtain metabolic activity features.
In the embodiment of the invention, bone structure features are first extracted by processing the first CT image data; the CT image excels at representing hard tissues such as bone, and the extracted bone features are used for subsequent spatial registration. For the first MRI image data, soft tissue features are extracted through a segmentation algorithm; the MRI image excels at displaying soft tissue details, and the extracted soft tissue features are used for subsequent processing. Finally, by processing the first PET image data, metabolic activity features are extracted; the PET image provides metabolic activity information, and the extracted metabolic features are used to detect the metabolic activity of the lesion region.
Compared with the prior art, the method can realize more comprehensive analysis of the internal tissues of the patient by extracting the characteristics of the three different mode images, and has important roles in lesion recognition and treatment planning in particular. The single-mode image is easy to miss some important information, and the accuracy and the comprehensiveness of diagnosis can be greatly improved through the feature extraction of the multi-mode image.
In a preferred embodiment of the invention, the bone structure feature F_{bone} is extracted by the formula:

F_{bone} = \sqrt{\left(\frac{\partial (G_\sigma * I_{CT})}{\partial x}\right)^2 + \left(\frac{\partial (G_\sigma * I_{CT})}{\partial y}\right)^2},

where I_{CT} is the first CT image data, \partial/\partial x and \partial/\partial y are the gradients of the CT image in the x-axis and y-axis directions respectively, G_\sigma is the adaptive Gaussian smoothing filter, and \sigma is the standard deviation of the Gaussian filter.
In the embodiment of the invention, the edge features of the bone structure in the first CT image can be accurately extracted through the above formula. Compared with existing edge detection techniques, the use of adaptive Gaussian smoothing better handles noise in the first CT image, ensures more accurate edge detection of the bone structure, and facilitates subsequent registration.
The formula extracts edge information by computing gradients of the first CT image along the x-axis and y-axis. Introducing the Gaussian smoothing filter G_\sigma reduces the influence of noise on the gradient calculation, so that the bone structure features are extracted more accurately.
This approach ensures accurate extraction of bone edges, facilitating subsequent registration and three-dimensional reconstruction.
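A minimal sketch of this smoothed gradient-magnitude extraction, assuming SciPy's Gaussian filter as the G_\sigma stand-in; the fixed sigma below replaces whatever adaptive rule an embodiment would use.

```python
import numpy as np
from scipy import ndimage

def bone_edges(ct, sigma=1.0):
    # Smooth with G_sigma first, then take the gradient magnitude:
    # F_bone = sqrt((d/dx G_sigma*I)^2 + (d/dy G_sigma*I)^2)
    smoothed = ndimage.gaussian_filter(ct, sigma=sigma)
    gy, gx = np.gradient(smoothed)  # gradients along y-axis and x-axis
    return np.sqrt(gx ** 2 + gy ** 2)

# Synthetic slab of "bone" on a dark background
ct = np.zeros((32, 32))
ct[8:24, 8:24] = 1.0
edges = bone_edges(ct)
```

Edge strength peaks at the slab boundary and is near zero in the flat interior, which is the behavior the registration step relies on.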
In a preferred embodiment of the invention, the soft tissue structure feature F_{soft} is extracted by the formula:

F_{soft} = S(I_{MRI}) = \{x : T_{low} \le I_{MRI}(x) \le T_{high}\},

where I_{MRI} is the first MRI image data, S is the soft tissue segmentation function, and T_{low} and T_{high} are the lower and upper thresholds; pixels whose values satisfy the threshold range are soft tissue features.
In the embodiment of the invention, by setting the proper threshold, the soft tissue region in the first MRI image can be accurately extracted, particularly under the condition that the soft tissue display is complex, the accuracy of soft tissue extraction can be effectively improved, and the problem that the soft tissue and the background are difficult to distinguish in the prior art is solved.
The formula extracts the soft tissue region in the first MRI image through threshold segmentation. When the gray value of a pixel lies between T_{low} and T_{high}, the pixel is classified as a soft tissue feature; otherwise it is not.
This method ensures accurate extraction of soft tissue, especially in the first MRI image, which is critical for identification of the lesion area.
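The threshold segmentation S can be sketched in a few lines; the threshold values below are hypothetical, since suitable T_{low} and T_{high} depend on the MRI sequence and tissue of interest.

```python
import numpy as np

def soft_tissue_mask(mri, t_low, t_high):
    # S(I_MRI): pixels whose intensity lies within [t_low, t_high] are soft tissue
    return (mri >= t_low) & (mri <= t_high)

mri = np.array([[10.0, 60.0],
                [120.0, 200.0]])
mask = soft_tissue_mask(mri, t_low=50.0, t_high=150.0)
```

The boolean mask marks the segmented soft tissue region F_{soft} for later registration and reconstruction.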
In a preferred embodiment of the invention, the metabolic activity feature F_{met} is extracted by the formulas:

M(x) = 1 if I_{PET}(x) > T_{met}, otherwise M(x) = 0,

F_{met} = w \cdot \sum_{x} M(x)\, I_{PET}(x),

where I_{PET} is the first PET image data, M is the metabolic activity extraction function, T_{met} is the threshold for metabolic activity extraction, w is the weight, and the summation represents intensity accumulation over all pixels within the metabolically active region.
In the embodiment of the invention, the formula accurately extracts the metabolically active region in the first PET image and applies the weight w to improve the visibility of metabolic activity. This extraction method is particularly important in cancer detection, since a metabolically active region is often an indication of the presence of a tumor. Compared with traditional metabolic extraction methods, the algorithm adapts better to different thresholds and enhancement factors, ensuring the flexibility and accuracy of the extraction result.
The formula extracts metabolic features by identifying metabolically active regions in the first PET image. First, pixels whose metabolic activity intensity exceeds the threshold T_{met} are screened out; the intensities in these regions are then amplified by the weight w.
The accumulation process ensures that the metabolic activity of the whole lesion area can be comprehensively extracted for subsequent analysis of the metabolic level of the lesion area.
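The screen-then-accumulate step can be sketched directly from the formula; the threshold and weight below are illustrative placeholders, not values from the patent.

```python
import numpy as np

def metabolic_activity(pet, t_met, w):
    # Screen pixels above the metabolic threshold, then accumulate their
    # intensities with weight w: F_met = w * sum over {x : I_PET(x) > T_met}
    active = pet[pet > t_met]
    return w * active.sum()

pet = np.array([[0.2, 0.9],
                [0.8, 0.1]])
f_met = metabolic_activity(pet, t_met=0.5, w=2.0)
```

Only the two pixels above the threshold contribute, and the weight scales their summed intensity into the final metabolic score.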
In a preferred embodiment of the present invention, performing rigid spatial registration on the bone structure features and the soft tissue structure features to obtain preliminary registration data comprises:
acquiring the spatial coordinates of the bone structure features and the soft tissue structure features;
spatially aligning the bone structure features and the soft tissue structure features according to the spatial coordinates by a rigid transformation matrix T, to obtain registered bone structure features F'_{bone} and registered soft tissue structure features F'_{soft};
the expression of the rigid transformation matrix is:

T = \begin{pmatrix} \cos\theta & -\sin\theta & t_x \\ \sin\theta & \cos\theta & t_y \\ 0 & 0 & 1 \end{pmatrix},

where \theta is the rotation angle and (t_x, t_y) is the translation vector;
the registered bone structure features F'_{bone} and the registered soft tissue structure features F'_{soft} are calculated as:

F'_{bone} = T \cdot F_{bone},
F'_{soft} = T \cdot F_{soft}.
In the embodiment of the invention, the CT image and the MRI image can be spatially aligned while their shapes remain unchanged through the rigid transformation formula. This process is particularly important for registration of multi-modal images and ensures the consistency of different modality images over anatomical structures. Compared with the prior art, the rigid registration formula can accommodate more complex rotation and translation, avoiding information loss or overlap caused by spatial misalignment.
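The homogeneous rotation-plus-translation matrix can be applied to feature coordinates as below; a 2D sketch is shown for brevity, while a volumetric implementation would use the analogous 4x4 matrix.

```python
import numpy as np

def rigid_transform(points, theta, tx, ty):
    # Homogeneous 2D rigid transform: rotate by theta, then translate by (tx, ty)
    T = np.array([[np.cos(theta), -np.sin(theta), tx],
                  [np.sin(theta),  np.cos(theta), ty],
                  [0.0,            0.0,           1.0]])
    homo = np.column_stack([points, np.ones(len(points))])  # to homogeneous coords
    return (T @ homo.T).T[:, :2]                            # back to (x, y)

pts = np.array([[1.0, 0.0],
                [0.0, 1.0]])
# Rotate 90 degrees and shift one unit along x
moved = rigid_transform(pts, theta=np.pi / 2, tx=1.0, ty=0.0)
```

Because the transform is rigid, distances between the feature points are preserved; only their pose changes.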
In a preferred embodiment of the present invention, performing elastic spatial registration on the preliminary registration data and the metabolic activity features to obtain multi-modal image data includes:
calculating similarity differences of preset regions according to the registered bone structure features and the registered soft tissue structure features to obtain similarity difference data;
determining deformation data of the preset regions according to the similarity difference data;
generating a deformation parameter field describing the displacement of each pixel point according to the deformation data;
adjusting the metabolic activity features according to the deformation parameter field to obtain registered metabolic activity features;
the registered metabolic activity features F'_{met} are given by:

F'_{met}(x) = F_{met}(x + D(x)),

where D(x) is the deformation parameter field at pixel x;
and generating multi-modal image data according to the registered bone structure features, the registered soft tissue structure features and the registered metabolic activity features.
In the embodiment of the invention, the elastic registration algorithm corrects slight local anatomical differences between images of different modalities, so that metabolic activity is combined more closely with bone and soft tissue information. Compared with the prior art, elastic registration achieves higher accuracy and is particularly suitable for local distortion or deformation of images caused by patient movement, respiration and the like.
The core of elastic registration is the generation of a deformation parameter field that describes the spatial displacement of each pixel.
According to the registered skeleton structure features and the registered soft tissue structure features, the similarity difference of the preset area is calculated to obtain similarity difference data. The mean square error can be adopted for this calculation: the squared errors between corresponding pixels of the two feature images are summed and averaged. The smaller the mean square error, the more similar the images. In this way, difference data meeting the requirements can be obtained.
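The mean-square-error similarity measure described above can be sketched as follows; this Python/NumPy implementation is illustrative only and not part of the claimed disclosure:

```python
import numpy as np

def mse_similarity(feature_a: np.ndarray, feature_b: np.ndarray) -> float:
    """Mean squared error between two registered feature images.

    As described above, a lower value indicates more similar images.
    """
    if feature_a.shape != feature_b.shape:
        raise ValueError("feature images must have the same shape")
    diff = feature_a.astype(np.float64) - feature_b.astype(np.float64)
    return float(np.mean(diff ** 2))

# Example: identical images give zero error; a uniform offset of 2 gives 4
a = np.array([[10.0, 20.0], [30.0, 40.0]])
print(mse_similarity(a, a))        # 0.0
print(mse_similarity(a, a + 2.0))  # 4.0
```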
The deformation data of the preset area is determined according to the similarity difference data; the deformation data can be calculated based on a deformation model, such as a B-spline model or a finite element model. The deformation data describes the spatial displacement of each pixel point or voxel in the preset region.
The B spline model is used for describing the deformation of the preset area in a mode of interpolation of control points and splines. The displacement of each control point is adjusted by an optimization algorithm and the deformation field is generated by the movement of these control points. B-splines have the advantage of being able to generate smooth deformation fields suitable for complex biological deformation scenarios.
The finite element method is used to divide the image into a plurality of small finite element regions and to calculate the deformation field by minimizing the energy difference between these regions. The finite element method is suitable for processing areas with obvious structural differences in images, especially for deformation of soft tissues.
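Applying a deformation parameter field to a feature image, as in the adjustment of the metabolic activity features above, amounts to resampling each pixel at its displaced position. A minimal illustrative sketch with nearest-neighbour resampling (the interpolation choice is an assumption; practical systems typically use smoother interpolation):

```python
import numpy as np

def apply_deformation(image: np.ndarray, dy: np.ndarray, dx: np.ndarray) -> np.ndarray:
    """Resample `image` under a per-pixel displacement field (dy, dx).

    Each output pixel (y, x) takes its value from image[y + dy, x + dx],
    with nearest-neighbour rounding and clamping at the borders.
    """
    h, w = image.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(yy + dy).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xx + dx).astype(int), 0, w - 1)
    return image[src_y, src_x]

# Example: a uniform displacement of +1 pixel in x shifts content left
img = np.array([[1, 2, 3], [4, 5, 6]], dtype=float)
warped = apply_deformation(img, np.zeros_like(img), np.ones_like(img))
print(warped)  # [[2. 3. 3.]
               #  [5. 6. 6.]]
```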
In a preferred embodiment of the present invention, extracting lesion characteristic data and separating the same according to lesion analysis data to obtain lesion marking data, including:
Calculating the volume, surface area and compactness of the lesion according to the lesion analysis data, and extracting shape features;
The calculation formula of the lesion volume is as follows: V = Σ_{i=1}^{N} v_i, wherein N is the total number of voxels within the lesion region, v_i is the volume of each voxel, and i is the voxel index;
The calculation formula of the surface area is as follows: S = Σ_{j=1}^{M} s_j, wherein M is the total number of triangle units in the boundary mesh of the lesion area, s_j represents the area of each triangle unit, and j is the triangle index;
The calculation formula of the compactness is as follows: C = 36πV²/S³;
according to the lesion analysis data, a gray level co-occurrence matrix is constructed, entropy and contrast are calculated through the gray level co-occurrence matrix, and texture features are extracted;
The calculation formula of the entropy value is as follows: E = −Σ_{i=1}^{L} Σ_{j=1}^{L} p(i, j) · log p(i, j), wherein p(i, j) represents the co-occurrence probability between gray levels i and j in the gray level co-occurrence matrix, and L is the total number of gray levels;
the calculation formula of the contrast is as follows: Con = Σ_{i=1}^{L} Σ_{j=1}^{L} (i − j)² · p(i, j), wherein (i − j) represents the difference between gray levels i and j;
extracting metabolic activity features from lesion analysis data;
The metabolic activity feature M is obtained by weighting the metabolic intensity values extracted from the lesion region, wherein w is the weight;
fusing the shape characteristics, the texture characteristics and the metabolic activity characteristics to obtain lesion characteristic data;
marking and separating the lesion characteristic data to obtain lesion marking data.
In the embodiment of the invention, the lesion feature extraction is based on comprehensive analysis of shape features, texture features and metabolic activity features. Through the lesion recognition based on multiple characteristics, the method can effectively improve the accuracy of lesion detection, and especially can comprehensively analyze the shape, texture and metabolic information of a complex lesion region in a multi-mode image. Compared with the prior art, the method can more accurately identify the complex lesion area, especially under the condition of complex texture or abnormal metabolic activity.
The volume reflects the size of the lesion, the surface area reflects the appearance of the lesion, and the compactness characterizes the regularity of the lesion. Through the comprehensive analysis of these features, the system can obtain the overall shape characteristics of the lesion area and provide the physician with key information about the lesion's growth pattern and appearance.
The entropy reflects the texture complexity of the lesion, and the contrast reflects the gray scale difference between the lesion and surrounding tissue. By combining these two features, the overall texture characteristics of the lesion area can be extracted. Regions of high entropy and high contrast generally mean that the lesion region is complex and differs significantly from surrounding tissues, which is of great importance for assessing the structural features of complex lesions such as tumors.
By means of the weight w, the clarity of the metabolic activity features can be increased.
The lesion volume V refers to the volume occupied by the lesion area in three-dimensional space. It is typically obtained by cumulative calculation of the voxels within the lesion area. The specific steps are as follows:
As described above, the segmentation algorithm extracts a lesion analysis region, which is divided into a plurality of small voxel units; a voxel is a unit volume (typically a cube) of fixed size in three-dimensional space. The volume v_i of each voxel is calculated, and all voxel volumes in the lesion area are added to obtain the total volume of the lesion area.
The formula is:
V = Σ_{i=1}^{N} v_i,
wherein N is the total number of voxels within the lesion region and v_i is the volume of each voxel (the voxel volume is determined by the resolution of the CT or MRI image). This voxel accumulation method ensures the accuracy of the volume calculation.
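The voxel accumulation step can be sketched as follows; the binary-mask representation and the millimetre units are illustrative assumptions:

```python
import numpy as np

def lesion_volume(mask: np.ndarray, voxel_volume_mm3: float) -> float:
    """Total lesion volume V = N * v: the number of voxels in the
    binary lesion mask times the volume of a single voxel."""
    n_voxels = int(np.count_nonzero(mask))
    return n_voxels * voxel_volume_mm3

# Example: 5 lesion voxels of 0.5 mm^3 each
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1, 1, 1:4] = True   # 3 voxels
mask[2, 2, 1:3] = True   # 2 voxels
print(lesion_volume(mask, 0.5))  # 2.5
```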
The lesion surface area S refers to the boundary surface area of the lesion region. The procedure for calculating the lesion surface area is as follows:
The three-dimensional boundary of the lesion area is extracted, and the total area of the boundary is calculated using a triangle mesh method or another geometric algorithm. The triangle mesh method approximates the lesion area boundary with a plurality of triangle units; the area of each triangle is calculated and accumulated to give the surface area of the whole boundary. The formula is:
S = Σ_{j=1}^{M} s_j,
wherein M is the total number of triangle units in the boundary mesh and s_j represents the area of each triangle unit. This formula enables accurate calculation of the surface area of an irregular lesion region.
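The triangle accumulation can be sketched as follows, with each triangle area obtained as half the norm of the cross product of two edge vectors; the mesh input format is an illustrative assumption:

```python
import numpy as np

def mesh_surface_area(vertices: np.ndarray, triangles: np.ndarray) -> float:
    """Surface area S = sum of triangle areas s_j over a boundary mesh.

    vertices: (V, 3) array of 3-D points; triangles: (M, 3) vertex indices.
    """
    v0 = vertices[triangles[:, 0]]
    v1 = vertices[triangles[:, 1]]
    v2 = vertices[triangles[:, 2]]
    cross = np.cross(v1 - v0, v2 - v0)      # per-triangle normal vectors
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    return float(areas.sum())

# Example: one unit right triangle in the z = 0 plane has area 0.5
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tris = np.array([[0, 1, 2]])
print(mesh_surface_area(verts, tris))  # 0.5
```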
The compactness C is a geometric measure describing the regularity of the shape of a lesion. It is typically calculated from the ratio of volume to surface area and is used to measure whether the lesion is close to a regular shape (e.g., a sphere).
The formula of the compactness is:
C = 36πV²/S³,
wherein V and S are the volume and surface area of the lesion, respectively. The more regular the shape of the lesion, the higher its compactness value.
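A sketch of this measure follows. The normalization 36πV²/S³ used here is one common dimensionless form of volume-to-surface-area compactness (an assumption, since the original formula image is not reproduced); it equals 1 for a perfect sphere and decreases for irregular shapes:

```python
import math

def compactness(volume: float, surface_area: float) -> float:
    """Dimensionless compactness C = 36*pi*V^2 / S^3.

    Equals 1 for a perfect sphere; lower for irregular shapes.
    """
    return 36.0 * math.pi * volume ** 2 / surface_area ** 3

# A sphere of radius 1: V = 4/3*pi, S = 4*pi -> compactness = 1
v = 4.0 / 3.0 * math.pi
s = 4.0 * math.pi
print(round(compactness(v, s), 6))  # 1.0
```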
The entropy value E is a parameter for measuring the complexity of the gray level distribution of the lesion area. The higher the entropy value, the more complex the gray distribution of the lesion area and the larger the information content. The entropy value is generally calculated on the basis of a gray level co-occurrence matrix, as follows:
Firstly, a gray level co-occurrence matrix of the lesion area is constructed, recording the gray level relations between adjacent pixels in the image. The size of the gray level co-occurrence matrix is determined by the number of gray levels L of the image. The entropy value is calculated using the following formula:
E = −Σ_{i=1}^{L} Σ_{j=1}^{L} p(i, j) · log p(i, j),
wherein p(i, j) represents the co-occurrence probability between gray levels i and j in the gray level co-occurrence matrix, and L is the total number of gray levels. The formula obtains the entropy value from the probability distribution of pixel gray co-occurrence and reflects the texture complexity of the lesion area.
The contrast Con measures the gray value difference of the lesion area and reflects the degree of difference between the lesion area and surrounding tissues. High contrast typically means that the gray scale differences in the lesion area are significant, which helps the physician identify the lesion boundary. The contrast is calculated on the basis of the gray level co-occurrence matrix, as follows:
A gray level co-occurrence matrix of the lesion area is constructed, recording the gray values of adjacent pixels. The contrast is calculated using the following formula:
Con = Σ_{i=1}^{L} Σ_{j=1}^{L} (i − j)² · p(i, j),
wherein (i − j) represents the difference between gray levels i and j. With this formula, the gray contrast of the lesion area can be precisely calculated; this parameter helps to identify the sharpness of the boundary of the lesion area.
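Both texture measures derived from the gray level co-occurrence matrix can be sketched together; the small 2-level count matrix below is an illustrative example, and the natural logarithm is an assumption (log base is not fixed above):

```python
import numpy as np

def glcm_entropy_contrast(glcm: np.ndarray):
    """Entropy and contrast from a gray-level co-occurrence matrix of
    counts, following the formulas above:
      entropy  = -sum p(i, j) * log p(i, j)   (over entries with p > 0)
      contrast =  sum (i - j)^2 * p(i, j)
    """
    p = glcm / glcm.sum()                  # normalize counts to probabilities
    nz = p[p > 0]
    entropy = float(-np.sum(nz * np.log(nz)))
    i, j = np.indices(p.shape)
    contrast = float(np.sum((i - j) ** 2 * p))
    return entropy, contrast

# Example: a 2-level co-occurrence count matrix
glcm = np.array([[2.0, 1.0],
                 [1.0, 4.0]])
e, c = glcm_entropy_contrast(glcm)
print(round(e, 4), c)  # 1.213 0.25
```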
In a preferred embodiment of the present invention, dynamic weighted fusion processing is performed on the lesion marking data and the tissue structure data to obtain three-dimensional integrated image data, including:
Weights are assigned to the lesion marking data and the tissue structure data, and dynamic fusion processing is performed to obtain three-dimensional integrated image data I_fused. The weighted formula is:
I_fused = w₁ · M + w₂ · T,
wherein w₁ is the weight coefficient of the lesion marking data M, and w₂ is the weight coefficient of the tissue structure data T.
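The weighted fusion step can be sketched as follows; the specific weight values are illustrative (in the described system they would be adjusted interactively by the user):

```python
import numpy as np

def weighted_fusion(lesion: np.ndarray, tissue: np.ndarray,
                    w_lesion: float, w_tissue: float) -> np.ndarray:
    """Dynamic weighted fusion: I = w1 * M + w2 * T, where M is the
    lesion marking data and T is the tissue structure data."""
    return w_lesion * lesion + w_tissue * tissue

# Example: emphasize the lesion channel with weight 0.7
lesion = np.array([[1.0, 0.0], [0.0, 1.0]])
tissue = np.array([[0.2, 0.4], [0.6, 0.8]])
fused = weighted_fusion(lesion, tissue, 0.7, 0.3)
print(fused)  # [[0.76 0.12]
              #  [0.18 0.94]]
```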
In the embodiment of the invention, the dynamic weighted fusion processing is an important step for realizing the comprehensive display of lesion marking data and tissue structure data. By distributing weights for different types of data, the method and the device can realize the fusion processing of the multi-mode data. Through the weighted fusion method, the display weight between the lesion area and the tissue structure can be dynamically adjusted according to the user requirement. Compared with the prior art, the method not only improves the flexibility of image display, but also can adjust the display effect of the images through user interaction and provide clearer lesion information.
The embodiment of the invention also provides a three-dimensional post-processing system based on the multi-mode image, which is applied to the method and comprises the following steps:
the data collection module is used for acquiring CT image data, MRI image data and PET image data;
the denoising processing module is used for denoising the CT image data to obtain first CT image data;
the enhancement processing module is used for enhancing the MRI image data to obtain first MRI image data;
the artifact removal module is used for performing artifact removal processing on the PET image data to obtain first PET image data;
The feature extraction module is used for carrying out feature extraction on the first CT image data, the first MRI image data and the first PET image data to obtain skeleton structure features, soft tissue structure features and metabolic activity features;
The rigid registration module is used for carrying out rigid spatial registration on the skeleton structure features and the soft tissue structure features to obtain preliminary registration data;
The elastic registration module is used for carrying out elastic spatial registration on the preliminary registration data set and the metabolic activity characteristics to obtain multi-mode image data;
The three-dimensional reconstruction module is used for carrying out three-dimensional reconstruction processing according to the multi-mode image data to generate a three-dimensional reconstruction model integrating the anatomical information, the soft tissue information and the metabolic information;
The segmentation processing module is used for carrying out segmentation processing on the three-dimensional reconstruction model to obtain lesion analysis data and tissue structure data;
The lesion extraction module is used for extracting and separating lesion characteristic data according to the lesion analysis data to obtain lesion marking data;
The weighted fusion module is used for carrying out dynamic weighted fusion processing on the lesion marking data and the tissue structure data to obtain three-dimensional comprehensive image data;
and the visualization module is used for dynamically displaying the three-dimensional comprehensive image data through a visualization interface.
The denoising processing of the image data is realized through a Gaussian filtering algorithm, and the Gaussian filtering algorithm is used for:
performing noise estimation on CT image data;
Adjusting filter parameters based on the noise estimation result;
and performing smoothing treatment on the image data to obtain denoised CT image data.
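The three steps above (noise estimation, parameter adjustment, smoothing) can be sketched as follows. The noise estimator (standard deviation of a one-pixel difference) and the sigma mapping are illustrative assumptions, not the claimed method:

```python
import numpy as np

def gaussian_kernel1d(sigma: float, radius: int) -> np.ndarray:
    """Normalized 1-D Gaussian kernel of the given radius."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def denoise_ct(image: np.ndarray) -> np.ndarray:
    """Gaussian-filter pipeline sketch: estimate noise, derive sigma,
    then smooth rows and columns with a separable Gaussian kernel."""
    # 1) crude noise estimate from vertical one-pixel differences
    noise = np.std(np.diff(image, axis=0)) / np.sqrt(2.0)
    # 2) adjust the filter parameter from the estimate (assumed mapping)
    sigma = max(0.5, float(noise))
    radius = int(3 * sigma + 0.5)
    k = gaussian_kernel1d(sigma, radius)
    # 3) separable smoothing: filter rows, then columns
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out
```

On a noisy image, the smoothed output has the same shape and a reduced variance.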
The enhancement processing of the image data is realized through a contrast stretching algorithm, and the contrast stretching algorithm is used for:
acquiring a gray level histogram of MRI image data;
Adjusting the contrast range of the image data according to the gray level histogram;
and carrying out contrast stretching treatment on the image data to obtain the enhanced MRI image data.
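A minimal sketch of histogram-based contrast stretching follows; the percentile-based range selection is an illustrative assumption standing in for reading the range from the gray level histogram:

```python
import numpy as np

def contrast_stretch(image: np.ndarray, low_pct: float = 2.0,
                     high_pct: float = 98.0) -> np.ndarray:
    """Percentile-based contrast stretch: take the intensity range from
    the image's intensity distribution and map it linearly onto [0, 1]."""
    lo, hi = np.percentile(image, [low_pct, high_pct])
    stretched = (image.astype(float) - lo) / (hi - lo)
    return np.clip(stretched, 0.0, 1.0)

# Example: stretching the full range of a small image
img = np.array([[10.0, 20.0], [30.0, 40.0]])
out = contrast_stretch(img, 0.0, 100.0)
print(out)  # maps linearly to [[0, 1/3], [2/3, 1]]
```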
It should be noted that, the system is a system corresponding to the above method, and all implementation manners in the above method embodiment are applicable to the embodiment, so that the same technical effects can be achieved.
Embodiments of the present invention also provide a computing device comprising a processor, a memory storing a computer program which, when executed by the processor, performs a method as described above. All the implementation manners in the method embodiment are applicable to the embodiment, and the same technical effect can be achieved.
Embodiments of the present invention also provide a computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform a method as described above. All the implementation manners in the method embodiment are applicable to the embodiment, and the same technical effect can be achieved.
While the invention has been described with reference to the preferred embodiments, it should be understood by those skilled in the art of image processing that various modifications and adaptations can be made without departing from the principles of the invention, and such modifications and adaptations are intended to fall within the scope of the invention.