Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The image fusion method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein console 102 is coupled to terminal 104. The console 102 may be a means for deriving a registration matrix or may be a system for deriving a registration matrix. Specifically, the console 102 acquires a first transverse process image segmented from an ultrasound image of a target subject vertebral site, and a second transverse process image segmented from a medical image of the vertebral site. The console 102 registers the ultrasound image and the medical image according to three-dimensional coordinate information of each transverse process in the first transverse process image and the second transverse process image to obtain a registration matrix, so as to perform image fusion based on the registration matrix.
In one embodiment, as shown in fig. 2, an image fusion method is provided, and the method is applied to the console in fig. 1 for illustration, and includes the following steps:
S202, acquiring a first transverse process image segmented from an ultrasonic image of a vertebra part of a target object and a second transverse process image segmented from a medical image of the vertebra part.
The vertebra part can be the lumbar vertebra part of the target object or the thoracic vertebra part of the target object. The ultrasonic image is obtained by scanning the vertebrae of the target object with an ultrasonic probe. The method of dividing the first transverse process image from the ultrasound image includes, but is not limited to, a threshold dividing method, an edge detecting method, a region generating method, a model dividing method, etc., and the method of dividing the first transverse process image from the ultrasound image may be a combination of a plurality of methods among the threshold dividing method, the edge detecting method, and the model dividing method. The thresholding method is to divide pixels in an image into several parts by setting one or more thresholds to divide a first transverse process image and other tissues in an ultrasound image by pixel values. Edge detection is the use of an edge detection algorithm to identify the edge contours of the transverse processes in the image, to locate and segment the first transverse process image by the edge contours. The model-based method is to segment the first transverse process image from the ultrasound image using a trained segmentation model. A first transverse process image segmented from an ultrasound image of the lumbar region of the target subject is shown in fig. 3.
The medical image is static image data of the vertebra part of the target object captured at a certain time. Medical images include, but are not limited to, computed tomography images, magnetic resonance images, and positron emission tomography images. The method of segmenting the second transverse process image from the medical image includes, but is not limited to, a threshold segmentation method, an edge detection method, a region growing method, a model-based segmentation method, etc., and may also be a combination of several of the threshold segmentation method, the edge detection method and the model-based segmentation method. A second transverse process image segmented from the medical image of the lumbar region of the target subject is shown in fig. 4.
Optionally, after the ultrasound probe scans the vertebra part of the target object, the console acquires the ultrasound image scanned by the ultrasound probe, and segments a first transverse process image from the ultrasound image by one or more of a threshold segmentation method, an edge detection method and a model-based segmentation method. After obtaining a medical image of the vertebra part of the target subject by one or more of computed tomography, magnetic resonance imaging, and positron emission tomography, the console segments a second transverse process image from the medical image using one or more of thresholding, edge detection, and model-based segmentation.
S204, registering the ultrasonic image and the medical image according to the three-dimensional coordinate information of each transverse process in the first transverse process image and the second transverse process image to obtain a registration matrix, so as to perform image fusion based on the registration matrix.
In registering the ultrasound image and the medical image, it is first determined which transverse processes in the first transverse process image correspond to which transverse processes in the second transverse process image, that is, which transverse processes in the two images belong to the same vertebra, and the ultrasound image and the medical image are then registered according to the three-dimensional coordinate information of the transverse processes belonging to the same vertebra.
The registration matrix can be used for converting three-dimensional coordinate information in the ultrasonic image into three-dimensional coordinate information in the medical image, and the inverse matrix of the registration matrix can be used for converting three-dimensional coordinate information in the medical image into three-dimensional coordinate information in the ultrasonic image, so that fusion between the ultrasonic image and the medical image can be realized.
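For illustration only, if the registration matrix is represented as a 4x4 homogeneous transform, the forward and inverse conversions described above can be sketched as below; the names T_us_to_med and apply_registration are illustrative and not taken from the application:

```python
import numpy as np

def apply_registration(points_us: np.ndarray, T_us_to_med: np.ndarray) -> np.ndarray:
    """Map Nx3 ultrasound-space points into medical-image space with a 4x4 homogeneous matrix."""
    homog = np.hstack([points_us, np.ones((points_us.shape[0], 1))])   # Nx4 homogeneous points
    return (T_us_to_med @ homog.T).T[:, :3]

# The reverse mapping simply uses the inverse matrix:
# points_us = apply_registration(points_med, np.linalg.inv(T_us_to_med))
```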
Optionally, the console determines which transverse processes of the first transverse process image and the second transverse process image belong to the same vertebra, and registers the ultrasonic image and the medical image according to the three-dimensional coordinate information of the transverse processes belonging to the same vertebra to obtain a registration matrix, so that the three-dimensional coordinate information in the ultrasonic image is converted into the three-dimensional coordinate information in the medical image or the three-dimensional coordinate information in the medical image is converted into the three-dimensional coordinate information in the ultrasonic image through the registration matrix, and fusion between the ultrasonic image and the medical image is realized.
In the image fusion method, the first transverse process image segmented from the ultrasound image of the vertebra part of the target object and the second transverse process image segmented from the medical image of the vertebra part are acquired, and the ultrasound image and the medical image are registered according to the three-dimensional coordinate information of each transverse process in the first transverse process image and the second transverse process image. High-precision spatial alignment can thus be achieved based on the coordinate information of a plurality of transverse processes, registration errors are reduced, and the registration matrix is more accurate, so that the image fused on the basis of the registration matrix can correctly guide puncture in puncture scenarios and other scenarios.
In one embodiment, registering the ultrasound image and the medical image according to three-dimensional coordinate information of each transverse process in the first transverse process image and the second transverse process image to obtain a registration matrix comprises:
the vertebrae in which each transverse process is located in the first transverse process image and the vertebrae in which each transverse process is located in the second transverse process image are determined.
The transverse processes of the first transverse process image and the second transverse process image in the same vertebra are determined as transverse process pairs.
And registering the ultrasonic image and the medical image according to the three-dimensional coordinate information of each transverse process in the transverse process pair to obtain a registration matrix.
The vertebra where each transverse process is located refers to the vertebra of the target object to which that transverse process belongs. Specifically, the human body has 12 thoracic vertebrae in total, respectively called the first thoracic vertebra T1, the second thoracic vertebra T2, the third thoracic vertebra T3, the fourth thoracic vertebra T4, the fifth thoracic vertebra T5, the sixth thoracic vertebra T6, the seventh thoracic vertebra T7, the eighth thoracic vertebra T8, the ninth thoracic vertebra T9, the tenth thoracic vertebra T10, the eleventh thoracic vertebra T11, and the twelfth thoracic vertebra T12. The human body also has 5 lumbar vertebrae in total, respectively called the first lumbar vertebra, the second lumbar vertebra, the third lumbar vertebra, the fourth lumbar vertebra and the fifth lumbar vertebra, which are connected in sequence; as shown in fig. 5, L1 is the first lumbar vertebra, L2 is the second lumbar vertebra, L3 is the third lumbar vertebra, L4 is the fourth lumbar vertebra and L5 is the fifth lumbar vertebra.
The transverse processes of the first transverse process image and the second transverse process image in the same vertebra refer to transverse processes belonging to the same vertebra. For example, if transverse process A in the first transverse process image is a transverse process of the first lumbar vertebra and transverse process B in the second transverse process image is also a transverse process of the first lumbar vertebra, then transverse process A and transverse process B lie in the same lumbar vertebra and form a transverse process pair.
In some embodiments, since the second transverse process image may include each transverse process of the target object vertebral region, the transverse processes in the second transverse process image may be ranked according to three-dimensional coordinate information of the transverse processes in the second transverse process image, and vertebrae in which the transverse processes in the second transverse process image are located may be obtained according to the ranking result.
In some embodiments, the vertebra in which each transverse process in the first transverse process image/the second transverse process image is located may be determined based on morphological features and/or adjacent structures of the transverse processes. For example, when the vertebral site is the lumbar region, the transverse processes of the third lumbar vertebra are generally the longest among all lumbar vertebrae, so the longest transverse processes in the first transverse process image/the second transverse process image can be determined as the transverse processes on the third lumbar vertebra; the transverse processes of the fifth lumbar vertebra are generally thicker and extend to both sides to form a lumbosacral joint with the sacrum, so thicker, laterally extending transverse processes in the first transverse process image/the second transverse process image can be determined as the transverse processes on the fifth lumbar vertebra; and since the transverse processes on the fifth lumbar vertebra connect with the sacrum to form the lumbosacral angle, transverse processes connected with the sacrum can also be determined as the transverse processes on the fifth lumbar vertebra.
Optionally, the console sorts the transverse processes in the second transverse process image according to the three-dimensional coordinate information of the transverse processes in the second transverse process image, and obtains vertebrae where the transverse processes in the second transverse process image are located according to the sorting result. The console determines the vertebrae in which each transverse process in the first transverse process image is located based on morphological features and/or adjacent structures of each transverse process in the first transverse process image. The console determines transverse processes of the first transverse process image and the second transverse process image in the same vertebra as transverse process pairs, and registers the ultrasonic image and the medical image according to three-dimensional coordinate information of each transverse process in the transverse process pairs to obtain a registration matrix.
In this embodiment, the ultrasound image and the medical image are registered according to the three-dimensional coordinate information of each transverse process in the transverse process pairs, which ensures that registration is performed only between anatomical structures of the same vertebra and avoids cross-level mismatching; for example, in the lumbar region, aligning a transverse process of the third lumbar vertebra in the first transverse process image with a transverse process of the fourth lumbar vertebra in the second transverse process image is avoided, thereby improving the accuracy of the registration matrix.
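The application does not prescribe a particular registration solver. One common choice for paired three-dimensional points, shown here only as a hedged sketch, is a least-squares rigid fit based on singular value decomposition; the function name and the assumption of a purely rigid transform are illustrative:

```python
import numpy as np

def rigid_registration(pts_us: np.ndarray, pts_med: np.ndarray) -> np.ndarray:
    """Least-squares rigid transform (4x4) mapping paired ultrasound points onto
    medical-image points; rows of the two Nx3 arrays must correspond pairwise."""
    c_us, c_med = pts_us.mean(axis=0), pts_med.mean(axis=0)
    H = (pts_us - c_us).T @ (pts_med - c_med)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                           # avoid a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_med - R @ c_us
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```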
In one embodiment, determining the vertebrae in which each transverse process is located in the first transverse process image includes:
the center coordinates of the transverse processes in each vertebra of the target object are acquired.
And respectively determining a target center closest to each transverse process in the first transverse process image from the centers according to the center coordinates and the three-dimensional coordinate information of each transverse process in the first transverse process image.
The vertebra corresponding to the target center is determined as the vertebra where each transverse process in the first transverse process image is located.
Wherein the center coordinates of each transverse process refer to the coordinates of the center of that transverse process. The transverse process centers can be obtained from a target image that includes each vertebra of the target object. Specifically, the target image is binarized to obtain a binarized image, the edge contour of each transverse process in the binarized image is identified, and either the centroid of the edge contour is determined as the transverse process center, or a circumscribed rectangular frame containing the edge contour is determined and the center of that frame is taken as the transverse process center. Further, the vertebra corresponding to each center can be determined from the result of ordering the centers by coordinate. Specifically, the centers are ordered according to their coordinate values on the target axis, and the vertebra corresponding to each center is determined based on this ordering. The direction of the target axis is the foot-to-head direction of the target object.
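A minimal sketch of the binarization and contour-center step described above, assuming an 8-bit two-dimensional slice of the target image and OpenCV; the function name and parameter values are illustrative:

```python
import cv2
import numpy as np

def transverse_process_centers(target_slice: np.ndarray) -> list[tuple[float, float]]:
    """Centers of transverse-process contours in one 8-bit slice of the target image."""
    _, binary = cv2.threshold(target_slice, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:                                  # centroid of the edge contour
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        else:                                             # fall back to bounding-box center
            x, y, w, h = cv2.boundingRect(c)
            centers.append((x + w / 2, y + h / 2))
    return centers
```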
Each transverse process in the first transverse process image corresponds to one target center. For example, the first transverse process image contains transverse processes 1, 2 and 3; it is found by calculation that center 1 is closest to transverse process 1, center 2 is closest to transverse process 2, and center 3 is closest to transverse process 3, so center 1 is the target center of transverse process 1, center 2 is the target center of transverse process 2, and center 3 is the target center of transverse process 3. The vertebra corresponding to center 1 is then the vertebra where transverse process 1 is located, the vertebra corresponding to center 2 is the vertebra where transverse process 2 is located, and the vertebra corresponding to center 3 is the vertebra where transverse process 3 is located.
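The nearest-center assignment in the example above can be sketched as follows, assuming the transverse process coordinates and center coordinates are supplied as Nx3 and Mx3 arrays; the label list passed in is illustrative:

```python
import numpy as np

def assign_vertebrae(tp_coords: np.ndarray, center_coords: np.ndarray,
                     center_labels: list[str]) -> list[str]:
    """For each transverse process (Nx3), return the label of the nearest center (Mx3)."""
    # pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(tp_coords[:, None, :] - center_coords[None, :, :], axis=-1)
    return [center_labels[j] for j in d.argmin(axis=1)]

# e.g. assign_vertebrae(tp_xyz, center_xyz, ["L1", "L2", "L3", "L4", "L5"])
```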
In some embodiments, the center coordinates of each transverse process may also be the coordinates of the cluster center of that transverse process. Specifically, the transverse process centers of each vertebra of the target object are scanned from multiple directions to obtain first coordinates of each transverse process center in each direction, and principal component analysis and cluster analysis are performed on the first coordinates to obtain the cluster center and cluster center coordinates of each transverse process.
Optionally, the console performs binarization processing on the target image including each vertebra of the target object to obtain a binarized image, identifies the edge contour of each transverse process in the binarized image, and determines the centroid of the edge contour as the transverse process center, or determines a circumscribed rectangular frame containing the edge contour and takes the center of that frame as the transverse process center, and then acquires the coordinates of each transverse process center. The console determines the vertebra corresponding to each center according to the result of ordering the centers by coordinate. The console calculates the distance between each center and each transverse process according to the center coordinates and the three-dimensional coordinate information of each transverse process in the first transverse process image, and determines, from the centers, the target center closest to each transverse process in the first transverse process image according to the calculation result. The console then determines the vertebra corresponding to each target center as the vertebra where the corresponding transverse process in the first transverse process image is located.
In the present embodiment, by determining the target center closest to each transverse process in the first transverse process image from the centers, respectively, based on the center coordinates and the three-dimensional coordinate information of each transverse process in the first transverse process image, it is possible to accurately determine the vertebrae corresponding to each transverse process.
In one embodiment, determining the vertebrae in which each transverse process is located in the second transverse process image includes:
The transverse processes in the second transverse process image are ordered according to the coordinate size of each transverse process in the second transverse process image on the target axis.
And determining the vertebrae where each transverse process in the second transverse process image is located according to the ordering result of each transverse process in the second transverse process image.
Wherein the direction of the target axis is the foot-to-head direction of the target object. Since the vertebrae are arranged in sequence from the head to the foot of the target object, taking the foot-to-head direction of the target object as the direction of the target axis and ordering the transverse processes according to their coordinate values on that axis makes it possible to determine, with improved accuracy, the vertebra in which each transverse process in the second transverse process image is located.
Further, the ordering result of the transverse processes is obtained by sorting them from low to high according to their coordinate values on the target axis, where a transverse process with a smaller coordinate value lies lower, near the feet of the target object, and a transverse process with a larger coordinate value lies higher, near the head of the target object. Specifically, if the vertebral site is the lumbar region, the lumbar vertebrae where the transverse processes in the sorted result are located are, in order, the fifth lumbar vertebra, the fourth lumbar vertebra, the third lumbar vertebra, the second lumbar vertebra and the first lumbar vertebra; that is, the transverse process in the first position is located in the fifth lumbar vertebra, the transverse process in the second position in the fourth lumbar vertebra, the transverse process in the third position in the third lumbar vertebra, the transverse process in the fourth position in the second lumbar vertebra, and the transverse process in the fifth position in the first lumbar vertebra.
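A hedged sketch of this ordering for the lumbar case, assuming exactly one transverse process per lumbar vertebra in the second transverse process image and a coordinate axis pointing from the feet toward the head; the function and constant names are illustrative:

```python
import numpy as np

LUMBAR_LABELS = ["L5", "L4", "L3", "L2", "L1"]     # lowest coordinate first (foot to head)

def label_by_target_axis(tp_coords: np.ndarray, axis: int = 2) -> list[str]:
    """Sort transverse processes by their coordinate on the foot-to-head axis and
    map each sorting position to a lumbar vertebra label."""
    order = np.argsort(tp_coords[:, axis])          # ascending: nearest the feet first
    labels = [""] * len(order)
    for rank, idx in enumerate(order):
        labels[idx] = LUMBAR_LABELS[rank]
    return labels
```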
Optionally, the console sorts the transverse processes from low to high according to the coordinate value of each transverse process on the target axis, and a sorting result of each transverse process is obtained.
In this embodiment, by ordering the coordinates of the transverse processes on the target axis, the transverse processes in different vertebrae can be more clearly identified and distinguished, which helps to avoid confusion of the transverse processes of adjacent vertebrae, so that the puncture needle or the anesthetic needle can be accurately guided to puncture in the puncture scene.
In one embodiment, acquiring a first transverse process image segmented from an ultrasound image of a vertebral region of a target subject includes:
scanning the vertebra part of the target object from the sagittal position by using an ultrasonic probe to obtain an ultrasonic image sequence with two-dimensional coordinate information;
performing three-dimensional reconstruction based on the ultrasonic image sequence to obtain a first reconstruction model;
the first transverse process image is segmented from a first reconstruction model, and the first reconstruction model comprises three-dimensional coordinate information of each transverse process in the first transverse process image.
Here, the sagittal position is a sectional orientation of the human body and refers to a plane that cuts the target object from anterior to posterior or from posterior to anterior. Scanning the vertebra part of the target object from the sagittal position with ultrasound helps to observe the overall course of the transverse processes and their positions relative to the spine in the anterior-posterior direction.
The ultrasound image sequence is made up of a plurality of frames of ultrasound images. The multi-frame ultrasonic image is obtained by scanning the vertebrae of the target object from the sagittal position by the ultrasonic probe. The ultrasound image sequence is scanned by an ultrasound probe and sent to a console.
The three-dimensional coordinate information of each transverse process in the first transverse process image can be obtained by converting two-dimensional coordinates. Specifically, pose information of the ultrasound probe during ultrasound image acquisition is acquired, and a transformation matrix is determined based on the pose information; the transformation matrix represents the conversion from the two-dimensional image coordinate system to the global three-dimensional coordinate system, and the two-dimensional coordinate information of each transverse process in the first transverse process image is converted by this transformation matrix to obtain the three-dimensional coordinate information of each transverse process in the first transverse process image.
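Assuming the probe pose is available as a 4x4 matrix describing the image plane in the global coordinate system, the two-dimensional to three-dimensional conversion described above can be sketched as follows; the pixel-spacing arguments and names are illustrative:

```python
import numpy as np

def pixel_to_world(u: float, v: float, T_probe: np.ndarray,
                   sx: float, sy: float) -> np.ndarray:
    """Map an in-plane pixel (u, v) of one ultrasound frame to global 3D coordinates.
    T_probe is the 4x4 pose of the image plane (from probe tracking); sx and sy are
    pixel spacings in millimetres."""
    p_img = np.array([u * sx, v * sy, 0.0, 1.0])    # point in the image plane, homogeneous
    return (T_probe @ p_img)[:3]
```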
Optionally, the staff controls the ultrasonic probe to scan the vertebra part of the target object from the sagittal position, and sends the scanned multi-frame ultrasonic image to the console. And the console receives the ultrasonic image sequence with the two-dimensional coordinate information, and performs three-dimensional reconstruction on the ultrasonic image sequence to obtain a first reconstruction model. The console segments a first transverse process image from the first reconstruction model and acquires pose information of the ultrasonic probe in the ultrasonic image acquisition process. The console determines a transformation matrix based on the pose information, and converts the two-dimensional coordinate information of each transverse process in the first transverse process image by using the transformation matrix to obtain the three-dimensional coordinate information of each transverse process in the first transverse process image.
In this embodiment, the first reconstruction model is obtained by performing three-dimensional reconstruction based on the ultrasound image sequence, so that the structures of the vertebrae and the transverse processes thereof can be observed from multiple angles, which is helpful for more comprehensively understanding the structural relationship between the vertebrae and the transverse processes thereof.
In some embodiments, the image fusion method further comprises performing a data preprocessing operation on each frame of image in the acquired ultrasound image sequence. Thus, the image quality and the accuracy of the ultrasonic image sequence can be improved. Wherein the data preprocessing includes, but is not limited to, denoising and contrast enhancement. Denoising can be achieved by using a filtering algorithm, and contrast enhancement can be achieved by histogram equalization or contrast boosting.
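For illustration, denoising and contrast enhancement of a single frame might look as below, here using a median filter and CLAHE (a local histogram-equalization variant); the application does not mandate these particular operators, and the parameter values are assumptions:

```python
import cv2
import numpy as np

def preprocess_frame(frame: np.ndarray) -> np.ndarray:
    """Denoise one 8-bit ultrasound frame and boost its contrast."""
    denoised = cv2.medianBlur(frame, 5)                        # speckle/noise suppression
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)                               # local histogram equalization
```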
In some embodiments, before the three-dimensional reconstruction of the ultrasound image sequence, the ultrasound images of adjacent frames in the ultrasound image sequence may also be aligned by image feature matching to improve the accuracy of the three-dimensional reconstruction. The image feature matching refers to detecting and matching key feature points in different ultrasonic images, and determining corresponding relations among the key feature points so as to align the ultrasonic images of adjacent frames in an ultrasonic image sequence according to the corresponding relations among the key feature points.
In some embodiments, after the first reconstruction model is obtained, smoothing may be further performed on the first reconstruction model to further improve accuracy of the first reconstruction model.
In some embodiments, the acquiring of the three-dimensional coordinate information of the first transverse process image further comprises acquiring, by an electromagnetic positioning system, the three-dimensional coordinate information of the first transverse process image segmented from the ultrasound image.
In some embodiments, acquiring a first transverse process image segmented from an ultrasound image of a vertebral region of a target subject includes scanning the vertebral region of the target subject from a sagittal position using an ultrasound probe to obtain an ultrasound image sequence having two-dimensional coordinate information, segmenting a two-dimensional transverse process image from each frame of the ultrasound image sequence, and performing three-dimensional reconstruction on the two-dimensional transverse process image to obtain the first transverse process image.
In one embodiment, the image fusion method further comprises:
And carrying out three-dimensional reconstruction based on the image sequence in which the medical image is located to obtain a second reconstruction model.
And acquiring a first coordinate of an ultrasonic image scanned by the ultrasonic probe in real time. The first coordinates are three-dimensional coordinates.
Based on the registration matrix, the first coordinates are converted into first converted coordinates in the second reconstruction model, and based on the first converted coordinates, a cross-sectional image of the ultrasound image in the second reconstruction model is determined.
And fusing the cross-sectional image and the ultrasonic image to obtain a fused image so as to conduct puncture guiding based on the fused image, the cross-sectional image and the ultrasonic image.
The image sequence in which the medical image is located refers to an image sequence formed by multiple frames of medical images. Specifically, three-dimensional reconstruction is performed using the multiple frames of medical images to obtain the second reconstruction model. The first coordinates may be obtained by a magnetic positioning system.
In the process of scanning the ultrasonic probe, the control console can acquire the cross-sectional image in the second reconstruction model in real time according to the first coordinate of the scanned ultrasonic image, so that more comprehensive and accurate image information can be provided in time. In particular, ultrasound images can provide real-time soft tissue contrast, while computed tomography images can provide detailed bone structure and density information, which in combination can help a physician obtain more complete and accurate structural position information. The coordinates of the sectional image in the second reconstruction model are the first transformed coordinates. The cross-sectional image corresponding to the lumbar region is shown in fig. 6.
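A simplified sketch of looking up a cross-sectional image in the second reconstruction model from the first converted coordinates; for brevity it takes the nearest axis-aligned slice and assumes the volume axes are aligned with the patient coordinate axes with a zero origin, whereas a full implementation would resample an oblique plane matching the ultrasound scan plane. All names are illustrative:

```python
import numpy as np

def cross_section_at(volume: np.ndarray, first_converted_xyz: np.ndarray,
                     spacing: np.ndarray, axis: int = 2) -> np.ndarray:
    """Nearest axis-aligned slice of the reconstructed medical volume at the converted coordinate."""
    idx = int(round(first_converted_xyz[axis] / spacing[axis]))     # mm -> voxel index
    idx = int(np.clip(idx, 0, volume.shape[axis] - 1))
    return np.take(volume, idx, axis=axis)
```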
In some embodiments, the three-dimensional reconstruction process includes performing data preprocessing on the multiple frames of medical images in the image sequence to obtain a preprocessed image sequence, and performing three-dimensional reconstruction on the preprocessed image sequence by using a three-dimensional reconstruction algorithm to obtain the second reconstruction model. The data preprocessing includes at least one of denoising, image registration and normalization, where denoising refers to reducing noise in the medical images, normalization refers to adjusting the gray-scale range of the medical images, and image registration refers to registering the multiple frames of medical images in the same coordinate system; the three-dimensional reconstruction method includes, but is not limited to, a voxel interpolation method and a surface reconstruction method.
In some embodiments, segmentation of both the first and second transverse process images may be achieved by a segmentation model. For example, a first reconstructed model is segmented using a trained 3D U-Net (three-dimensional U-shaped network) model to obtain a first transverse process image, and a second reconstructed model is segmented using a trained 3D U-Net model to obtain a second transverse process image.
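The application names a 3D U-Net as one possible segmentation model. Purely as an illustrative inference sketch (not the application's implementation), a trained network exported as a TorchScript file could be applied as follows; the model path, tensor layout and class convention are assumptions:

```python
import torch

def segment_transverse_processes(volume: torch.Tensor, model_path: str) -> torch.Tensor:
    """Run a trained 3D segmentation network on a reconstructed volume shaped (D, H, W)
    and return per-voxel class labels (e.g. transverse process vs. background)."""
    model = torch.jit.load(model_path).eval()
    with torch.no_grad():
        x = volume[None, None].float()          # add batch and channel dimensions
        logits = model(x)                       # expected shape (1, C, D, H, W)
        return logits.argmax(dim=1)[0]          # label volume shaped (D, H, W)
```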
Optionally, the console performs data preprocessing on the multi-frame medical images to obtain a preprocessed image sequence, and performs three-dimensional reconstruction on the preprocessed image sequence by using a three-dimensional reconstruction algorithm to obtain a second reconstruction model. The console acquires first coordinates of the ultrasound image scanned by the ultrasound probe in real time through the magnetic positioning system, and converts the first coordinates into first converted coordinates in the second reconstruction model based on the registration matrix. The console searches the second reconstruction model for the cross-sectional image corresponding to the first converted coordinates. The console fuses the cross-sectional image and the ultrasound image to obtain a fused image, so as to conduct puncture guidance based on the fused image, the cross-sectional image and the ultrasound image.
In one embodiment, the console may also receive a contrast adjustment request to adjust the contrast of the fused image so that the cross-sectional image and the ultrasound image may be distinguished significantly.
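One simple way to realize the fusion and the contrast adjustment request is a weighted overlay, sketched below; the blending weight stands in for the adjustable contrast between the cross-sectional image and the ultrasound image, both assumed to be 8-bit grayscale, and the application does not prescribe this particular blending:

```python
import cv2
import numpy as np

def fuse(cross_section: np.ndarray, ultrasound: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Overlay the resliced medical cross-section on the ultrasound frame.
    alpha is the adjustable weight (contrast) between the two 8-bit images."""
    if cross_section.shape != ultrasound.shape:               # bring both to the same grid
        cross_section = cv2.resize(cross_section, ultrasound.shape[::-1])
    return cv2.addWeighted(cross_section, alpha, ultrasound, 1.0 - alpha, 0)
```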
In this embodiment, by fusing the ultrasound image and the cross-sectional image, the soft tissue contrast and the detailed bone structure of the vertebra part of the target object can be displayed at the same time, which helps to understand the vertebra part of the target object and its surroundings more comprehensively, thereby reducing puncture risk and improving the puncture success rate.
In one embodiment, the image fusion method further comprises:
and acquiring a puncture mark planned in a second reconstruction model after the three-dimensional reconstruction of the medical image, wherein the puncture mark comprises at least one of puncture route coordinate information and puncture point coordinate information.
When the ultrasonic probe scans the puncture mark, the puncture mark is displayed in the ultrasonic image so as to guide puncture.
The puncture mark can be determined by a worker according to the nerve tissue of the vertebra part of the target object, so that the puncture mark guides the worker in performing anesthesia puncture, providing more precise pain control and reducing risk. The puncture mark may also be determined from anatomical landmarks, such as the iliac crest or the spinous processes, to help locate the puncture point precisely. For example, in spinal anesthesia, the interspace between the third lumbar vertebra and the fourth lumbar vertebra, or between the fourth lumbar vertebra and the fifth lumbar vertebra, may be used as the puncture point.
The puncture point refers to the point where the anesthetic needle or the puncture needle finally reaches, and the puncture route refers to the travel route of the anesthetic needle or the puncture needle in the target object.
In some embodiments, the puncture mark can also be automatically planned by the console according to preset identification information. The preset identification information is a candidate puncture point and/or a candidate puncture route that is set in advance. For example, when spinal anesthesia is performed on the lumbar region, medical staff generally use the interspace between the third lumbar vertebra and the fourth lumbar vertebra, or between the fourth lumbar vertebra and the fifth lumbar vertebra, as the puncture point; this interspace can therefore be used as the preset identification information, so that the console can automatically plan it as the puncture mark.
When the ultrasound image is scanned, the position information of the ultrasound image is converted into position information in the medical image coordinate system through the registration matrix. Because the puncture mark is planned in the second reconstruction model, the puncture mark has position information in the medical image coordinate system; when the cross-sectional image converted from the ultrasound image intersects the position of the puncture mark in the medical image coordinate system, the puncture mark can be said to have been scanned into the ultrasound image. Determining whether the ultrasound probe has scanned the puncture mark is performed in real time.
Displaying the puncture mark in the ultrasound image can be achieved by coordinate transformation. Specifically, based on the inverse matrix of the registration matrix, the coordinate information of the puncture mark is converted into coordinate information in the ultrasound coordinate system, and the scanned puncture mark is displayed in the ultrasound image according to the converted coordinate information.
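A sketch tying together the real-time check and the display conversion described above, assuming the current ultrasound scan plane is known in the medical image coordinate system as an origin and a unit normal vector; all names and the tolerance value are illustrative:

```python
import numpy as np

def puncture_marks_in_view(marks_med: np.ndarray, T_us_to_med: np.ndarray,
                           plane_origin: np.ndarray, plane_normal: np.ndarray,
                           tol: float = 1.0) -> np.ndarray:
    """Return puncture-mark coordinates, expressed in the ultrasound coordinate system,
    that lie within `tol` mm of the current scan plane (plane_normal must be a unit vector)."""
    d = np.abs((marks_med - plane_origin) @ plane_normal)      # point-to-plane distances
    hits = marks_med[d < tol]                                  # marks intersected by the plane
    T_med_to_us = np.linalg.inv(T_us_to_med)                   # inverse of the registration matrix
    homog = np.hstack([hits, np.ones((hits.shape[0], 1))])
    return (T_med_to_us @ homog.T).T[:, :3]
```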
In some embodiments, the puncture mark may also be displayed in the fused image obtained after the ultrasound image and the cross-sectional image are fused.
Optionally, after the staff plans the puncture mark according to the neural tissue of the vertebra part of the target object, the control console automatically acquires the puncture mark planned in the second reconstruction model after the three-dimensional reconstruction of the medical image. Wherein the puncture mark comprises at least one of puncture route coordinate information and puncture point coordinate information. In the process of scanning by the ultrasonic probe, the console converts the position information of the ultrasonic image into the position information under the medical image coordinate in real time through the registration matrix, and when the section image converted by the ultrasonic image is intersected with the position of the puncture mark under the medical image coordinate system, the puncture mark is determined to be scanned in the ultrasonic image. The control console converts coordinate information of the puncture mark into coordinate information under an ultrasonic coordinate system based on an inverse matrix of the registration matrix, and displays the scanned puncture mark in an ultrasonic image according to the converted coordinate information so as to guide puncture.
In this embodiment, the puncture mark is displayed in the ultrasound image, so that a doctor can see the needle tip position and its position relative to the puncture mark more intuitively, conveniently adjust the puncture angle and depth in real time, and improve the controllability of the operation.
In one embodiment, displaying the puncture mark in the ultrasound image to guide the puncture when the ultrasound probe scans the puncture mark comprises:
and acquiring a second coordinate of the ultrasonic image scanned by the ultrasonic probe in real time. The second coordinates are three-dimensional coordinates.
The second coordinates are converted into second converted coordinates in the second reconstruction model based on the registration matrix.
And when the second conversion coordinates are matched with at least one of the puncture route coordinate information and the puncture point coordinate information, determining that the ultrasonic probe scans the puncture mark.
The puncture mark is displayed in the ultrasound image to guide the puncture.
Wherein the second coordinate is obtainable by a magnetic positioning system. The matching of the second conversion coordinate with at least one of the puncture route coordinate information and the puncture point coordinate information means that the second conversion coordinate intersects with at least one of the puncture route coordinate information and the puncture point coordinate information.
When the puncture mark is displayed in the ultrasonic image, only the puncture mark scanned by the ultrasonic probe in real time is displayed.
Optionally, the console acquires a second coordinate of the ultrasound image scanned by the ultrasound probe in real time through the magnetic positioning system. The console converts the second coordinates into second converted coordinates in the second reconstructed model based on the registration matrix. When the second conversion coordinates are matched with at least one of the puncture route coordinate information and the puncture point coordinate information, the control console determines that the ultrasonic probe scans the puncture mark, and displays the scanned puncture mark in the ultrasonic image so as to guide puncture.
In this embodiment, by acquiring the second coordinates of the ultrasound image scanned by the ultrasound probe in real time and converting them into the second converted coordinates in the second reconstruction model based on the predetermined registration matrix, accurate matching between image data of different modalities is achieved; this ensures that structures in the ultrasound image correspond accurately to the second reconstruction model obtained from the three-dimensional reconstruction of the medical image, improving positioning accuracy. When the second converted coordinates match the puncture route coordinate information or the puncture point coordinate information, it is determined that the ultrasound probe has scanned the puncture mark, and the scanned puncture mark is displayed on the ultrasound image; this real-time feedback mechanism greatly enhances visualization and intuitiveness and helps perform the puncture operation more accurately.
In some embodiments, the console includes a display screen that is operable to display an image interface that includes a first region 10, a second region 20, and a third region 30. Wherein the first region 10 may display the ultrasound image and the fusion image simultaneously, the display positions of the fusion image and the ultrasound image in the first region 10 allowing switching. The second region 20 displays a three-dimensional model for guiding a puncture and the third region 30 may be used to display acquired medical images and/or fused images. Schematic diagrams of the image interface are shown in fig. 7 and 8. Wherein the three-dimensional model for guiding the puncture may be the first reconstruction model or the second reconstruction model.
The application also provides an application scene, which applies the image fusion method. Specifically, the application of the image fusion method in the application scene is as follows:
The staff controls the ultrasonic probe to scan the vertebra part of the target object from the sagittal position, and sends the scanned multi-frame ultrasonic image to the console. And the console receives the ultrasonic image sequence with the two-dimensional coordinate information, and performs three-dimensional reconstruction on the ultrasonic image sequence to obtain a first reconstruction model. The console segments a first transverse process image from the first reconstruction model and acquires pose information of the ultrasonic probe in the ultrasonic image acquisition process. The console determines a transformation matrix based on the pose information, and converts the two-dimensional coordinate information of each transverse process in the first transverse process image by using the transformation matrix to obtain the three-dimensional coordinate information of each transverse process in the first transverse process image. After obtaining a medical image of the vertebrae of the target subject by one or more of computed tomography, magnetic resonance imaging, and positron emission tomography, the console segments a second transverse process image from the medical image using one or more of thresholding, edge detection, and model segmentation.
The console sorts the transverse processes from low to high according to the coordinate value of each transverse process on the target axis, and a sorting result of each transverse process is obtained. The console determines the vertebrae in which each transverse process in the first transverse process image is located based on morphological features and/or adjacent structures of each transverse process in the first transverse process image. The console determines transverse processes of the first transverse process image and the second transverse process image in the same vertebra as transverse process pairs, and registers the ultrasonic image and the medical image according to three-dimensional coordinate information of each transverse process in the transverse process pairs to obtain a registration matrix.
The console performs data preprocessing on the multi-frame medical images to obtain a preprocessed image sequence, and performs three-dimensional reconstruction on the preprocessed image sequence by using a three-dimensional reconstruction algorithm to obtain a second reconstruction model. The console acquires first coordinates of the ultrasound image scanned by the ultrasound probe in real time through the magnetic positioning system, and converts the first coordinates into first converted coordinates in the second reconstruction model based on the registration matrix. The console searches the second reconstruction model for the cross-sectional image corresponding to the first converted coordinates. The console fuses the cross-sectional image and the ultrasound image to obtain a fused image, so as to conduct puncture guidance based on the fused image, the cross-sectional image and the ultrasound image.
After the staff plans puncture marks according to the nerve tissue of the vertebra part of the target object, the console automatically acquires the puncture marks planned in the second reconstruction model obtained after the three-dimensional reconstruction of the medical image. The puncture mark comprises at least one of puncture route coordinate information and puncture point coordinate information. In the process of scanning by the ultrasound probe, the console converts the position information of the ultrasound image into position information in the medical image coordinate system in real time through the registration matrix, and when the cross-sectional image converted from the ultrasound image intersects the position of the puncture mark in the medical image coordinate system, it is determined that the puncture mark has been scanned into the ultrasound image. The console converts the coordinate information of the puncture mark into coordinate information in the ultrasound coordinate system based on the inverse matrix of the registration matrix, and displays the scanned puncture mark in the ultrasound image according to the converted coordinate information so as to guide puncture. A specific flowchart is shown in fig. 9.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides an image fusion device for realizing the above related image fusion method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in the embodiment of one or more image fusion devices provided below may be referred to the limitation of the image fusion method hereinabove, and will not be repeated here.
In one embodiment, as shown in fig. 10, there is provided an image fusion apparatus including:
An image acquisition module 1002 is configured to acquire a first transverse process image segmented from an ultrasound image of a target subject vertebral region, and a second transverse process image segmented from a medical image of the target subject vertebral region.
The registration fusion module 1004 is configured to register the ultrasound image and the medical image according to three-dimensional coordinate information of each transverse process in the first transverse process image and the second transverse process image, so as to obtain a registration matrix.
In one embodiment, the image fusion device is further used for determining the vertebra where each transverse process in the first transverse process image is located and the vertebra where each transverse process in the second transverse process image is located, determining the transverse processes of the first transverse process image and the second transverse process image in the same vertebra as a transverse process pair, and registering the ultrasonic image and the medical image according to three-dimensional coordinate information of each transverse process in the transverse process pair to obtain a registration matrix.
In one embodiment, the image fusion device is further used for acquiring center coordinates of each transverse process in each vertebra of the target object, respectively determining a target center closest to each transverse process in the first transverse process image from the centers according to the center coordinates and three-dimensional coordinate information of each transverse process in the first transverse process image, and determining the vertebra corresponding to the target center as the vertebra where each transverse process in the first transverse process image is located.
In one embodiment, the image fusion device is further configured to rank the transverse processes in the second transverse process image according to the coordinate size of the transverse processes in the second transverse process image on the target axis, wherein the target axis is the foot-to-head direction of the target object, and determine the vertebrae in which the transverse processes in the second transverse process image are located according to the ranking result of the transverse processes in the second transverse process image.
In one embodiment, the image fusion device is further used for scanning a vertebra part of the target object from a sagittal position by using an ultrasonic probe to obtain an ultrasonic image sequence with two-dimensional coordinate information, performing three-dimensional reconstruction based on the ultrasonic image sequence to obtain a first reconstruction model, segmenting a first transverse process image from the first reconstruction model, and the first reconstruction model comprises the three-dimensional coordinate information of each transverse process in the first transverse process image.
In one embodiment, the image fusion device is further used for carrying out three-dimensional reconstruction based on an image sequence where the medical image is located to obtain a second reconstruction model, acquiring a first coordinate of an ultrasonic image scanned by the ultrasonic probe in real time, wherein the first coordinate is a three-dimensional coordinate, converting the first coordinate into a first conversion coordinate in the second reconstruction model based on the registration matrix, determining a cross-sectional image of the ultrasonic image in the second reconstruction model based on the first conversion coordinate, and fusing the cross-sectional image and the ultrasonic image to obtain a fused image so as to carry out puncture guidance based on the fused image, the cross-sectional image and the ultrasonic image.
In one embodiment, the image fusion device is further used for acquiring a puncture mark planned in a second reconstruction model after three-dimensional reconstruction of the medical image, wherein the puncture mark comprises at least one of puncture route coordinate information and puncture point coordinate information, and when the ultrasonic probe scans the puncture mark, the puncture mark is displayed in the ultrasonic image so as to guide puncture.
In one embodiment, the image fusion device is further used for acquiring second coordinates of an ultrasonic image scanned by the ultrasonic probe in real time, the second coordinates are three-dimensional coordinates, converting the second coordinates into second converted coordinates in a second reconstruction model based on the registration matrix, determining that the ultrasonic probe scans a puncture mark when the second converted coordinates are matched with at least one of puncture route coordinate information and puncture point coordinate information, and displaying the puncture mark in the ultrasonic image to guide puncture.
The respective modules in the above image fusion apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 11. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing an ultrasonic image, a first transverse process image, a medical image, a second transverse process image, three-dimensional coordinate information of transverse processes, a registration matrix, transverse process pairs, a center coordinate, a target center, a sorting result of each transverse process, a first reconstruction model, a second reconstruction model, a first coordinate, a first conversion coordinate, a fusion image, a puncture mark, a second coordinate and a second conversion coordinate. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an image fusion method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 11 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
The user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration, and not limitation, RAM can take various forms such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application and are described in detail herein without thereby limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of the application should be assessed as that of the appended claims.