
Image fusion method, device and computer equipment

Info

Publication number
CN120563340A
Authority
CN
China
Prior art keywords
image
transverse process
puncture
transverse
ultrasonic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202511062353.9A
Other languages
Chinese (zh)
Inventor
吴梦麟
王杉杉
邓洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kaben Shenzhen Medical Equipment Co ltd
Original Assignee
Kaben Shenzhen Medical Equipment Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kaben Shenzhen Medical Equipment Co ltd
Priority to CN202511062353.9A
Publication of CN120563340A

Abstract


This application relates to the fields of medical devices and puncture technology, and more particularly to an image fusion method, apparatus, and computer equipment. The method comprises obtaining a first transverse process image segmented from an ultrasound image of a target subject's vertebrae and a second transverse process image segmented from a medical image of the vertebrae; and registering the ultrasound image and the medical image based on the three-dimensional coordinate information of each transverse process in the first and second transverse process images to obtain a registration matrix, thereby performing image fusion based on the registration matrix. This method enables accurate puncture guidance.

Description

Image fusion method, device and computer equipment
Technical Field
The present application relates to the field of medical apparatuses and instruments, and in particular, to an image fusion method, apparatus, and computer device.
Background
Anesthesia is an integral part of modern medicine: it greatly improves the safety and effectiveness of medical procedures while also improving patient comfort and the treatment experience. Ultrasound-guided lumbar plexus and sacral plexus nerve blocks are two effective regional anesthesia techniques used mainly for lower-limb surgery; compared with traditional general anesthesia, spinal anesthesia, or epidural anesthesia, they offer advantages such as fewer side effects, longer-lasting pain control, and faster recovery.
However, conventional anesthesia puncture guidance systems usually guide the puncture with ultrasound images alone; because of ultrasound's low resolution and its limited ability to visualize certain tissues, the guidance is often poor and the puncture risk is increased.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image fusion method, apparatus, and computer device that can accurately guide puncturing.
In a first aspect, the present application provides an image fusion method, the method comprising:
acquiring a first transverse process image segmented from an ultrasonic image of a vertebra part of a target object and a second transverse process image segmented from a medical image of the vertebra part;
And registering the ultrasonic image and the medical image according to the three-dimensional coordinate information of each transverse process in the first transverse process image and the second transverse process image to obtain a registration matrix, so as to perform image fusion based on the registration matrix.
In one embodiment, the registering the ultrasound image and the medical image according to the three-dimensional coordinate information of each transverse process in the first transverse process image and the second transverse process image to obtain a registration matrix includes:
Determining vertebrae in which each transverse process is located in the first transverse process image and vertebrae in which each transverse process is located in the second transverse process image;
Determining a transverse process of the first transverse process image and the second transverse process image in the same vertebra as a transverse process pair;
And registering the ultrasonic image and the medical image according to the three-dimensional coordinate information of each transverse process in the transverse process pair to obtain a registration matrix.
In one embodiment, the determining the vertebrae in which each transverse process in the first transverse process image is located includes:
acquiring the central coordinates of transverse processes in each vertebra of the target object;
According to the center coordinates and the three-dimensional coordinate information of each transverse process in the first transverse process image, respectively determining a target center closest to each transverse process in the first transverse process image from each center;
And determining the vertebrae corresponding to the target center as the vertebrae where each transverse process in the first transverse process image is located.
In one embodiment, determining the vertebrae in which each transverse process is located in the second transverse process image includes:
Ordering the transverse processes in the second transverse process image according to the coordinate size of each transverse process in the target axis in the second transverse process image, wherein the direction of the target axis is the foot-to-head direction of the target object;
And determining vertebrae where each transverse process in the second transverse process image is located according to the ordering result of each transverse process in the second transverse process image.
In one embodiment, the acquiring a first transverse process image segmented from an ultrasound image of a vertebral region of a target subject includes:
scanning the vertebra part of the target object from the sagittal position by using an ultrasonic probe to obtain an ultrasonic image sequence with two-dimensional coordinate information;
Performing three-dimensional reconstruction based on the ultrasonic image sequence to obtain a first reconstruction model;
And segmenting a first transverse process image from the first reconstruction model, wherein the first reconstruction model comprises three-dimensional coordinate information of each transverse process in the first transverse process image.
In one embodiment, the method further comprises:
performing three-dimensional reconstruction based on the image sequence in which the medical image is located, to obtain a second reconstruction model;
acquiring a first coordinate of the ultrasonic image scanned by an ultrasonic probe in real time, wherein the first coordinate is a three-dimensional coordinate;
Converting the first coordinates into first converted coordinates in the second reconstruction model based on the registration matrix, and determining a cross-sectional image of the ultrasound image in the second reconstruction model based on the first converted coordinates;
and fusing the section image and the ultrasonic image to obtain a fused image so as to conduct puncture guiding based on the fused image, the section image and the ultrasonic image.
In one embodiment, the method further comprises:
Acquiring a puncture mark planned in a second reconstruction model after the three-dimensional reconstruction of the medical image, wherein the puncture mark comprises at least one of puncture route coordinate information and puncture point coordinate information;
and displaying the puncture mark in the ultrasonic image when the ultrasonic probe scans the puncture mark so as to guide puncture.
In a second aspect, the present application provides an image fusion apparatus, the apparatus comprising:
An image acquisition module for acquiring a first transverse process image segmented from an ultrasound image of a target subject vertebral region and a second transverse process image segmented from a medical image of the target subject vertebral region;
And the registration fusion module is used for registering the ultrasonic image and the medical image according to the three-dimensional coordinate information of each transverse process in the first transverse process image and the second transverse process image to obtain a registration matrix.
In a third aspect, the present application provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when the processor executes the computer program.
In the image fusion method, apparatus, and computer device described above, a first transverse process image segmented from an ultrasound image of the target object's vertebral region and a second transverse process image segmented from a medical image of the same region are acquired, and the ultrasound image and the medical image are registered according to the three-dimensional coordinate information of each transverse process in the two images. Because spatial alignment relies on the coordinate information of multiple transverse processes, registration error is reduced and the resulting registration matrix is more accurate, so images fused on the basis of this matrix can correctly guide puncture in puncture scenarios and other scenarios.
Drawings
FIG. 1 is an application environment diagram of an image fusion method in one embodiment;
FIG. 2 is a flow chart of an image fusion method in one embodiment;
FIG. 3 is a schematic representation of a first transverse process image in one embodiment;
FIG. 4 is a schematic representation of a second transverse process image in one embodiment;
figure 5 is a schematic view of a lumbar spine arrangement in one embodiment;
FIG. 6 is a schematic illustration of a cross-sectional image in one embodiment;
FIG. 7 is a schematic diagram of an image interface in one embodiment;
FIG. 8 is a schematic diagram of an image interface in another embodiment;
FIG. 9 is a flowchart of an image fusion method according to another embodiment;
FIG. 10 is a block diagram of an image fusion apparatus in one embodiment;
FIG. 11 is an internal block diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The image fusion method provided by the embodiments of the application can be applied to the application environment shown in fig. 1, in which the console 102 is coupled to the terminal 104. The console 102 may be a device or a system for deriving a registration matrix. Specifically, the console 102 acquires a first transverse process image segmented from an ultrasound image of a target object's vertebral region, and a second transverse process image segmented from a medical image of the same region. The console 102 registers the ultrasound image and the medical image according to the three-dimensional coordinate information of each transverse process in the first transverse process image and the second transverse process image to obtain a registration matrix, so that image fusion can be performed based on the registration matrix.
In one embodiment, as shown in fig. 2, an image fusion method is provided, and the method is applied to the console in fig. 1 for illustration, and includes the following steps:
S202, acquiring a first transverse process image segmented from an ultrasonic image of a vertebra part of a target object and a second transverse process image segmented from a medical image of the vertebra part.
The vertebral region may be the lumbar region or the thoracic region of the target object. The ultrasound image is obtained by scanning the vertebral region of the target object with an ultrasound probe. Methods for segmenting the first transverse process image from the ultrasound image include, but are not limited to, threshold segmentation, edge detection, region growing, and model-based segmentation, and a combination of several of these methods may also be used. Threshold segmentation divides the pixels of an image into several parts by setting one or more thresholds, separating the first transverse process image from other tissues in the ultrasound image by pixel value. Edge detection uses an edge detection algorithm to identify the edge contours of the transverse processes in the image, and locates and segments the first transverse process image from those contours. Model-based segmentation uses a trained segmentation model to segment the first transverse process image from the ultrasound image. A first transverse process image segmented from an ultrasound image of the lumbar region of the target object is shown in fig. 3.
The medical image is static image data of the vertebral region of the target object captured at a given time. Medical images include, but are not limited to, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) images. Methods for segmenting the second transverse process image from the medical image likewise include, but are not limited to, threshold segmentation, edge detection, region growing, and model-based segmentation, and a combination of several of these methods may also be used. A second transverse process image segmented from a medical image of the lumbar region of the target object is shown in fig. 4.
Optionally, after the ultrasound probe scans the vertebral region of the target object, the console acquires the ultrasound image scanned by the ultrasound probe and segments a first transverse process image from it using one or more of threshold segmentation, edge detection, and model-based segmentation. After a medical image of the vertebral region is obtained by one or more of computed tomography, magnetic resonance imaging, and positron emission tomography, the console segments a second transverse process image from the medical image using one or more of threshold segmentation, edge detection, and model-based segmentation.
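As a rough illustration of the threshold-plus-connected-components idea described above, the following sketch extracts candidate transverse-process regions from a reconstructed ultrasound volume. The threshold value, minimum region size, and use of NumPy/SciPy are assumptions for illustration, not details taken from the patent.

```python
import numpy as np
from scipy import ndimage

def segment_transverse_processes(volume: np.ndarray, threshold: float = 0.6,
                                 min_voxels: int = 500) -> np.ndarray:
    """Return a label volume of candidate transverse-process regions."""
    # Normalize intensities to [0, 1] so a single relative threshold can be used.
    v = (volume - volume.min()) / (np.ptp(volume) + 1e-8)
    mask = v > threshold                          # bright, bone-like voxels
    labels, n = ndimage.label(mask)               # 3D connected components
    # Discard tiny components that are unlikely to be transverse processes.
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    return np.where(np.isin(labels, keep), labels, 0)
```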
S204, registering the ultrasonic image and the medical image according to the three-dimensional coordinate information of each transverse process in the first transverse process image and the second transverse process image to obtain a registration matrix, so as to perform image fusion based on the registration matrix.
When registering the ultrasound image and the medical image, it is first determined which transverse processes in the first transverse process image correspond to which transverse processes in the second transverse process image, that is, which transverse processes in the two images belong to the same vertebra; the ultrasound image and the medical image are then registered according to the three-dimensional coordinate information of the transverse processes that belong to the same vertebra.
The registration matrix can be used for converting three-dimensional coordinate information in the ultrasonic image into three-dimensional coordinate information in the medical image, and the inverse matrix of the registration matrix can be used for converting three-dimensional coordinate information in the medical image into three-dimensional coordinate information in the ultrasonic image, so that fusion between the ultrasonic image and the medical image can be realized.
Optionally, the console determines which transverse processes of the first transverse process image and the second transverse process image belong to the same vertebra, and registers the ultrasonic image and the medical image according to the three-dimensional coordinate information of the transverse processes belonging to the same vertebra to obtain a registration matrix, so that the three-dimensional coordinate information in the ultrasonic image is converted into the three-dimensional coordinate information in the medical image or the three-dimensional coordinate information in the medical image is converted into the three-dimensional coordinate information in the ultrasonic image through the registration matrix, and fusion between the ultrasonic image and the medical image is realized.
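The patent does not name a specific solver for the registration matrix; a common choice for paired 3D points is the SVD-based rigid least-squares fit (Kabsch/Umeyama), sketched below under the assumption that the transverse-process coordinates have already been paired by vertebra as described above.

```python
import numpy as np

def rigid_registration(us_points: np.ndarray, med_points: np.ndarray) -> np.ndarray:
    """us_points, med_points: (N, 3) corresponding points, ordered by vertebra.
    Returns a 4x4 homogeneous matrix mapping ultrasound coords to medical-image coords."""
    mu_us, mu_med = us_points.mean(axis=0), med_points.mean(axis=0)
    H = (us_points - mu_us).T @ (med_points - mu_med)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                             # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_med - R @ mu_us
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```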
In the image fusion method, a first transverse process image segmented from an ultrasound image of the vertebral region of the target object and a second transverse process image segmented from a medical image of the same region are acquired, and the ultrasound image and the medical image are registered according to the three-dimensional coordinate information of each transverse process in the two images. Because high-precision spatial alignment is achieved from the coordinate information of multiple transverse processes, registration error is reduced and the registration matrix is more accurate, so images fused on the basis of this matrix can correctly guide puncture in puncture scenarios and other scenarios.
In one embodiment, registering the ultrasound image and the medical image according to three-dimensional coordinate information of each transverse process in the first transverse process image and the second transverse process image to obtain a registration matrix comprises:
the vertebrae in which each transverse process is located in the first transverse process image and the vertebrae in which each transverse process is located in the second transverse process image are determined.
The transverse processes of the first transverse process image and the second transverse process image in the same vertebra are determined as transverse process pairs.
And registering the ultrasonic image and the medical image according to the three-dimensional coordinate information of each transverse process in the transverse process pair to obtain a registration matrix.
The vertebra in which each transverse process is located refers to the vertebra of the target object to which that transverse process belongs. Specifically, the human body has 12 thoracic vertebrae in total, referred to as the first thoracic vertebra T1, second thoracic vertebra T2, third thoracic vertebra T3, fourth thoracic vertebra T4, fifth thoracic vertebra T5, sixth thoracic vertebra T6, seventh thoracic vertebra T7, eighth thoracic vertebra T8, ninth thoracic vertebra T9, tenth thoracic vertebra T10, eleventh thoracic vertebra T11, and twelfth thoracic vertebra T12. The human body has 5 lumbar vertebrae in total, referred to as the first, second, third, fourth, and fifth lumbar vertebrae, which are connected in sequence; as shown in fig. 5, L1 is the first lumbar vertebra, L2 the second, L3 the third, L4 the fourth, and L5 the fifth.
Transverse processes of the first transverse process image and the second transverse process image in the same vertebra are transverse processes that belong to the same vertebra. For example, if transverse process A in the first transverse process image belongs to the first lumbar vertebra and transverse process B in the second transverse process image also belongs to the first lumbar vertebra, then transverse process A and transverse process B lie in the same lumbar vertebra and form a transverse process pair.
In some embodiments, since the second transverse process image may include each transverse process of the target object vertebral region, the transverse processes in the second transverse process image may be ranked according to three-dimensional coordinate information of the transverse processes in the second transverse process image, and vertebrae in which the transverse processes in the second transverse process image are located may be obtained according to the ranking result.
In some embodiments, the vertebra in which each transverse process in the first transverse process image or the second transverse process image is located may be determined based on morphological features and/or adjacent structures of the transverse processes. For example, if the vertebral region is the lumbar spine, the transverse processes of the third lumbar vertebra are generally the longest of all lumbar vertebrae, so the longest transverse process in the first/second transverse process image can be identified as a transverse process of the third lumbar vertebra; the transverse processes of the fifth lumbar vertebra are generally thicker and extend laterally to form the lumbosacral joint with the sacrum, so a thicker, laterally extending transverse process can be identified as a transverse process of the fifth lumbar vertebra; and because the transverse processes of the fifth lumbar vertebra connect with the sacrum to form the lumbosacral angle, a transverse process connected with the sacrum can also be identified as a transverse process of the fifth lumbar vertebra.
Optionally, the console sorts the transverse processes in the second transverse process image according to the three-dimensional coordinate information of the transverse processes in the second transverse process image, and obtains vertebrae where the transverse processes in the second transverse process image are located according to the sorting result. The console determines the vertebrae in which each transverse process in the first transverse process image is located based on morphological features and/or adjacent structures of each transverse process in the first transverse process image. The console determines transverse processes of the first transverse process image and the second transverse process image in the same vertebra as transverse process pairs, and registers the ultrasonic image and the medical image according to three-dimensional coordinate information of each transverse process in the transverse process pairs to obtain a registration matrix.
In this embodiment, registering the ultrasound image and the medical image according to the three-dimensional coordinate information of each transverse process in the transverse process pairs ensures that registration is performed only between anatomical structures of the same vertebra and avoids cross-level mismatching, for example aligning the transverse process of the third lumbar vertebra in the first transverse process image with the transverse process of the fourth lumbar vertebra in the second transverse process image, thereby improving the accuracy of the registration matrix.
In one embodiment, determining the vertebrae in which each transverse process is located in the first transverse process image includes:
the center coordinates of the transverse processes in each vertebra of the target object are acquired.
And respectively determining a target center closest to each transverse process in the first transverse process image from the centers according to the center coordinates and the three-dimensional coordinate information of each transverse process in the first transverse process image.
The vertebrae corresponding to the target centers are determined as the vertebrae in which the respective transverse processes in the first transverse process image are located.
Here, the center coordinates of each transverse process are the coordinates of the center of that transverse process. The transverse process centers can be obtained from a target image that includes each vertebra of the target object: the target image is binarized to obtain a binarized image, the edge contour of each transverse process in the binarized image is identified, and either the centroid of the edge contour is taken as the transverse process center, or a circumscribed rectangular frame containing the edge contour is determined and its center is taken as the transverse process center. Further, the vertebra corresponding to each center can be determined from the ordering of the centers' coordinates. Specifically, the centers are ordered by their coordinate values on the target axis, and the vertebra corresponding to each center is determined from that ordering, where the direction of the target axis is the foot-to-head direction of the target object.
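As a sketch of the two center definitions just described (centroid of the edge contour, or center of the circumscribed rectangular frame), the following assumes an 8-bit slice image and uses OpenCV; the Otsu threshold and function names are illustrative assumptions.

```python
import cv2
import numpy as np

def transverse_process_centers(slice_img: np.ndarray, use_bounding_box: bool = False):
    """slice_img: 8-bit grayscale image containing transverse processes."""
    _, binary = cv2.threshold(slice_img, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # binarization
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if use_bounding_box:
            x, y, w, h = cv2.boundingRect(c)          # circumscribed rectangular frame
            centers.append((x + w / 2.0, y + h / 2.0))
        else:
            m = cv2.moments(c)                        # centroid of the edge contour
            if m["m00"] > 0:
                centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers
```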
Each transverse process in the first transverse process image corresponds to a target center. For example, in the first transverse process image, the transverse processes 1, 2 and 3 are found by calculation that the distance between the center 1 and the transverse process 1 is closest, the distance between the center 2 and the transverse process 2 is closest, the distance between the center 3 and the transverse process 3 is closest, the center 1 is the target center of the transverse process 1, the center 2 is the target center of the transverse process 2, the center 3 is the target center of the transverse process 3, the vertebra corresponding to the center 1 is the vertebra where the transverse process 1 is located, the vertebra corresponding to the center 2 is the vertebra where the transverse process 2 is located, and the vertebra corresponding to the center 3 is the vertebra where the transverse process 3 is located.
In some embodiments, the center coordinates of each transverse process may also be the coordinates of the cluster center of the transverse process. The method comprises the steps of scanning transverse process centers of each vertebra of a target object from multiple directions to obtain first coordinates of each transverse process center in all directions, and carrying out principal component analysis and cluster analysis based on the first coordinates to obtain a cluster center and a cluster center coordinate of each transverse process.
Optionally, the console binarizes the target image containing each vertebra of the target object to obtain a binarized image, identifies the edge contour of each transverse process in the binarized image, and determines either the centroid of the edge contour or the center of a circumscribed rectangular frame containing the edge contour as the transverse process center, then acquires the coordinates of each transverse process center. The console determines the vertebra corresponding to each center according to the ordering of the centers' coordinates. The console then calculates the distance between each center and each transverse process according to the center coordinates and the three-dimensional coordinate information of each transverse process in the first transverse process image, and from these distances determines the target center closest to each transverse process in the first transverse process image. The console determines the vertebra corresponding to each target center as the vertebra in which the corresponding transverse process in the first transverse process image is located.
In the present embodiment, by determining the target center closest to each transverse process in the first transverse process image from the centers, respectively, based on the center coordinates and the three-dimensional coordinate information of each transverse process in the first transverse process image, it is possible to accurately determine the vertebrae corresponding to each transverse process.
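A minimal sketch of the nearest-center assignment described in this embodiment follows; the vertebra labels attached to the centers are assumed to have been determined beforehand (for example from the coordinate ordering of the centers).

```python
import numpy as np

def assign_vertebrae(process_coords: np.ndarray, center_coords: np.ndarray,
                     center_labels: list) -> list:
    """process_coords: (N, 3); center_coords: (M, 3); returns one label per process."""
    # Pairwise Euclidean distances between transverse processes and centers.
    d = np.linalg.norm(process_coords[:, None, :] - center_coords[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)                  # index of the target center per process
    return [center_labels[i] for i in nearest]
```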
In one embodiment, determining the vertebrae in which each transverse process is located in the second transverse process image includes:
The transverse processes in the second transverse process image are ordered according to their coordinate values along the target axis.
And determining the vertebrae where each transverse process in the second transverse process image is located according to the ordering result of each transverse process in the second transverse process image.
Here, the direction of the target axis is the foot-to-head direction of the target object. Since the vertebrae are arranged in sequence from the head to the feet of the target object, ordering the transverse processes by their coordinates along a target axis oriented in the foot-to-head direction allows the vertebra in which each transverse process in the second transverse process image is located to be determined, which improves accuracy.
Further, the ranking results of the transverse processes are obtained by ranking from low to high according to the coordinate value of each transverse process on the target axis. Wherein the transverse process with smaller coordinate values is located below and near the foot of the target object, and the transverse process with larger coordinate values is located above and near the head of the target object. Specifically, if the vertebrae are lumbar, the lumbar vertebrae where each transverse process is located in the sorting result are the fifth lumbar vertebra, the fourth lumbar vertebra, the third lumbar vertebra, the second lumbar vertebra and the first lumbar vertebra, that is, the transverse process arranged in the first position is located in the fifth lumbar vertebra, the transverse process arranged in the second position is located in the fourth lumbar vertebra, the transverse process arranged in the third position is located in the third lumbar vertebra, the transverse process arranged in the fourth position is located in the second lumbar vertebra, and the transverse process arranged in the fifth position is located in the first lumbar vertebra.
Optionally, the console sorts the transverse processes from low to high according to the coordinate value of each transverse process on the target axis, and a sorting result of each transverse process is obtained.
In this embodiment, by ordering the coordinates of the transverse processes on the target axis, the transverse processes in different vertebrae can be more clearly identified and distinguished, which helps to avoid confusion of the transverse processes of adjacent vertebrae, so that the puncture needle or the anesthetic needle can be accurately guided to puncture in the puncture scene.
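A minimal sketch of this ordering step follows; it assumes the lumbar case with one representative transverse process per vertebra and that the target (foot-to-head) axis is the third coordinate axis.

```python
import numpy as np

LUMBAR_LEVELS = ["L5", "L4", "L3", "L2", "L1"]   # lowest coordinate is nearest the feet

def label_by_axis(process_coords: np.ndarray, axis: int = 2) -> list:
    """process_coords: (N, 3) transverse-process coordinates in the medical-image frame."""
    order = np.argsort(process_coords[:, axis])  # ascending: feet first
    labels = [""] * len(process_coords)
    for rank, idx in enumerate(order):
        labels[idx] = LUMBAR_LEVELS[rank] if rank < len(LUMBAR_LEVELS) else "unknown"
    return labels
```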
In one embodiment, acquiring a first transverse process image segmented from an ultrasound image of a vertebral region of a target subject includes:
scanning the vertebra part of the target object from the sagittal position by using an ultrasonic probe to obtain an ultrasonic image sequence with two-dimensional coordinate information;
performing three-dimensional reconstruction based on the ultrasonic image sequence to obtain a first reconstruction model;
the first transverse process image is segmented from a first reconstruction model, and the first reconstruction model comprises three-dimensional coordinate information of each transverse process in the first transverse process image.
The sagittal position is a sectional plane of the human body, namely a plane that cuts through the target object in the anterior-to-posterior (or posterior-to-anterior) direction. Scanning the vertebral region of the target object with ultrasound from the sagittal position helps to observe the overall course of the transverse processes and their positions relative to the spine in the anterior-posterior direction.
The ultrasound image sequence is made up of a plurality of frames of ultrasound images. The multi-frame ultrasonic image is obtained by scanning the vertebrae of the target object from the sagittal position by the ultrasonic probe. The ultrasound image sequence is scanned by an ultrasound probe and sent to a console.
The three-dimensional coordinate information of each transverse process in the first transverse process image can be obtained through two-dimensional coordinate conversion. The method comprises the steps of obtaining pose information of an ultrasonic probe in an ultrasonic image acquisition process, determining a transformation matrix based on the pose information, wherein the transformation matrix is used for representing a conversion relation from a two-dimensional image coordinate system to a global three-dimensional coordinate system, and converting the two-dimensional coordinate information of each transverse process in a first transverse process image by using the transformation matrix to obtain three-dimensional coordinate information of each transverse process in the first transverse process image.
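A minimal sketch of the two-dimensional-to-three-dimensional conversion described above follows; the 4x4 pose-matrix layout and the pixel spacing are assumptions for illustration.

```python
import numpy as np

def pixel_to_world(u: float, v: float, frame_pose: np.ndarray,
                   spacing=(0.1, 0.1)) -> np.ndarray:
    """frame_pose: 4x4 matrix mapping the image plane (mm) to the global 3D frame,
    derived from the probe pose for this frame."""
    # Point in the image plane, in millimetres, with z = 0 in the probe/image frame.
    p_img = np.array([u * spacing[0], v * spacing[1], 0.0, 1.0])
    p_world = frame_pose @ p_img
    return p_world[:3]
```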
Optionally, the staff controls the ultrasonic probe to scan the vertebra part of the target object from the sagittal position, and sends the scanned multi-frame ultrasonic image to the console. And the console receives the ultrasonic image sequence with the two-dimensional coordinate information, and performs three-dimensional reconstruction on the ultrasonic image sequence to obtain a first reconstruction model. The console segments a first transverse process image from the first reconstruction model and acquires pose information of the ultrasonic probe in the ultrasonic image acquisition process. The console determines a transformation matrix based on the pose information, and converts the two-dimensional coordinate information of each transverse process in the first transverse process image by using the transformation matrix to obtain the three-dimensional coordinate information of each transverse process in the first transverse process image.
In this embodiment, the first reconstruction model is obtained by performing three-dimensional reconstruction based on the ultrasound image sequence, so that the structures of the vertebrae and the transverse processes thereof can be observed from multiple angles, which is helpful for more comprehensively understanding the structural relationship between the vertebrae and the transverse processes thereof.
In some embodiments, the image fusion method further comprises performing a data preprocessing operation on each frame of image in the acquired ultrasound image sequence. Thus, the image quality and the accuracy of the ultrasonic image sequence can be improved. Wherein the data preprocessing includes, but is not limited to, denoising and contrast enhancement. Denoising can be achieved by using a filtering algorithm, and contrast enhancement can be achieved by histogram equalization or contrast boosting.
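A minimal sketch of this per-frame preprocessing (filter-based denoising followed by histogram equalization) is shown below; the median filter and OpenCV usage are assumptions rather than choices stated in the patent.

```python
import cv2
import numpy as np

def preprocess_frame(frame: np.ndarray) -> np.ndarray:
    """frame: 8-bit grayscale ultrasound frame."""
    denoised = cv2.medianBlur(frame, 5)         # filtering-based denoising
    enhanced = cv2.equalizeHist(denoised)       # contrast enhancement
    return enhanced
```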
In some embodiments, before the three-dimensional reconstruction of the ultrasound image sequence, the ultrasound images of adjacent frames in the ultrasound image sequence may also be aligned by image feature matching to improve the accuracy of the three-dimensional reconstruction. The image feature matching refers to detecting and matching key feature points in different ultrasonic images, and determining corresponding relations among the key feature points so as to align the ultrasonic images of adjacent frames in an ultrasonic image sequence according to the corresponding relations among the key feature points.
In some embodiments, after the first reconstruction model is obtained, smoothing may be further performed on the first reconstruction model to further improve accuracy of the first reconstruction model.
In some embodiments, the acquiring of the three-dimensional coordinate information of the first transverse process image further comprises acquiring, by an electromagnetic positioning system, the three-dimensional coordinate information of the first transverse process image segmented from the ultrasound image.
In some embodiments, acquiring a first transverse process image segmented from an ultrasound image of a vertebral region of a target subject includes scanning the vertebral region of the target subject from a sagittal position using an ultrasound probe to obtain an ultrasound image sequence having two-dimensional coordinate information, segmenting a two-dimensional transverse process image from each frame of the ultrasound image sequence, and performing three-dimensional reconstruction on the two-dimensional transverse process image to obtain the first transverse process image.
In one embodiment, the image fusion method further comprises:
And carrying out three-dimensional reconstruction based on the image sequence in which the medical image is located, to obtain a second reconstruction model.
And acquiring a first coordinate of an ultrasonic image scanned by the ultrasonic probe in real time. The first coordinates are three-dimensional coordinates.
Based on the registration matrix, the first coordinates are converted into first converted coordinates in the second reconstruction model, and based on the first converted coordinates, a cross-sectional image of the ultrasound image in the second reconstruction model is determined.
And fusing the cross-sectional image and the ultrasonic image to obtain a fused image so as to conduct puncture guiding based on the fused image, the cross-sectional image and the ultrasonic image.
The image sequence in which the medical image is located is an image sequence formed by multiple frames of medical images. Specifically, three-dimensional reconstruction is performed using the multiple frames of medical images to obtain the second reconstruction model. The first coordinates may be obtained by a magnetic positioning system.
During scanning with the ultrasound probe, the console can acquire the cross-sectional image in the second reconstruction model in real time according to the first coordinates of the scanned ultrasound image, so that more comprehensive and accurate image information can be provided in time. In particular, ultrasound images provide real-time soft tissue contrast, while computed tomography images provide detailed bone structure and density information; combining them helps the physician obtain more complete and accurate structural position information. The coordinates of the cross-sectional image in the second reconstruction model are the first converted coordinates. The cross-sectional image corresponding to the lumbar region is shown in fig. 6.
In some embodiments, the three-dimensional reconstruction process includes performing data preprocessing on the multiple frames of medical images in the image sequence to obtain a preprocessed image sequence, and performing three-dimensional reconstruction on the preprocessed image sequence using a three-dimensional reconstruction algorithm to obtain the second reconstruction model. The data preprocessing includes at least one of denoising, image registration, and normalization, where denoising reduces noise in the medical images, normalization adjusts the gray-scale range of the medical images, and image registration registers the multiple frames of medical images in the same coordinate system; three-dimensional reconstruction methods include, but are not limited to, voxel interpolation and surface reconstruction.
In some embodiments, segmentation of both the first and second transverse process images may be achieved by a segmentation model. For example, a first reconstructed model is segmented using a trained 3D U-Net (three-dimensional U-shaped network) model to obtain a first transverse process image, and a second reconstructed model is segmented using a trained 3D U-Net model to obtain a second transverse process image.
Optionally, the console performs data preprocessing on the multiple frames of medical images to obtain a preprocessed image sequence, and performs three-dimensional reconstruction on the preprocessed image sequence using a three-dimensional reconstruction algorithm to obtain the second reconstruction model. The console acquires, in real time through the magnetic positioning system, the first coordinates of the ultrasound image scanned by the ultrasound probe, and converts the first coordinates into first converted coordinates in the second reconstruction model based on the registration matrix. The console looks up the cross-sectional image corresponding to the first converted coordinates in the second reconstruction model. The console then fuses the cross-sectional image and the ultrasound image to obtain a fused image, so that puncture guidance can be performed based on the fused image, the cross-sectional image, and the ultrasound image.
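A minimal sketch of converting the first coordinates with the registration matrix and sampling the corresponding cross-sectional image from the second reconstruction model follows; the plane-sampling grid, the assumption of 1 mm isotropic voxels, and the use of SciPy interpolation are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def cross_section(ct_volume: np.ndarray, registration: np.ndarray,
                  plane_points_us: np.ndarray) -> np.ndarray:
    """plane_points_us: (H, W, 3) grid of 3D points of the ultrasound frame, in US coords."""
    h, w, _ = plane_points_us.shape
    pts = np.concatenate([plane_points_us.reshape(-1, 3),
                          np.ones((h * w, 1))], axis=1)      # homogeneous coordinates
    pts_ct = (registration @ pts.T).T[:, :3]                 # first converted coordinates
    # Sample the reconstructed medical volume at the converted coordinates.
    samples = map_coordinates(ct_volume, pts_ct.T, order=1, mode="nearest")
    return samples.reshape(h, w)
```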
In one embodiment, the console may also receive a contrast adjustment request to adjust the contrast of the fused image so that the cross-sectional image and the ultrasound image may be distinguished significantly.
In this embodiment, fusing the ultrasound image and the cross-sectional image displays the soft tissue contrast and the detailed bone structure of the target object's vertebrae at the same time, which helps to understand the vertebrae and their surroundings more comprehensively, thereby reducing the puncture risk and improving the puncture success rate.
In one embodiment, the image fusion method further comprises:
and acquiring a puncture mark planned in a second reconstruction model after the three-dimensional reconstruction of the medical image, wherein the puncture mark comprises at least one of puncture route coordinate information and puncture point coordinate information.
When the ultrasonic probe scans the puncture mark, the puncture mark is displayed in the ultrasonic image so as to guide puncture.
The puncture mark can be determined by a staff member according to the nerve tissue of the target object's vertebral region, so that the puncture mark guides the anesthesia puncture, providing more precise pain control and reducing risk. The puncture mark may also be determined from anatomical landmarks, such as the iliac crests or spinous processes, to aid in precisely locating the puncture point. For example, in spinal anesthesia, the space between the third and fourth lumbar vertebrae or the space between the fourth and fifth lumbar vertebrae may be used as the puncture point.
The puncture point refers to the point where the anesthetic needle or the puncture needle finally reaches, and the puncture route refers to the travel route of the anesthetic needle or the puncture needle in the target object.
In some embodiments, the puncture mark can also be planned automatically by the console according to preset identification information, which specifies candidate puncture points and/or candidate puncture routes. For example, when spinal anesthesia is performed on the lumbar region, medical staff generally use the gap between the third and fourth lumbar vertebrae or the gap between the fourth and fifth lumbar vertebrae as the puncture point; these gaps can therefore be used as the preset identification information, so that the console can automatically plan the gap between the third and fourth lumbar vertebrae or the gap between the fourth and fifth lumbar vertebrae as the puncture mark.
When an ultrasound image is scanned, its position information is converted into position information in the medical image coordinate system through the registration matrix. Because the puncture mark is planned in the second reconstruction model, it has position information in the medical image coordinate system; when the cross-sectional image converted from the ultrasound image intersects the position of the puncture mark in that coordinate system, the puncture mark can be considered to have been scanned in the ultrasound image. Whether the ultrasound probe has scanned the puncture mark is determined in real time.
Displaying the puncture mark in the ultrasound image may be achieved by coordinate transformation. Specifically, based on the inverse of the registration matrix, the coordinate information of the puncture mark is converted into coordinate information in the ultrasound coordinate system, and the scanned puncture mark is displayed in the ultrasound image according to the converted coordinate information.
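A minimal sketch of the inverse transformation used here follows: a puncture-mark coordinate planned in the medical-image (second reconstruction model) frame is mapped into the ultrasound coordinate system with the inverse of the registration matrix. The function and variable names are illustrative.

```python
import numpy as np

def mark_to_ultrasound(mark_xyz: np.ndarray, registration: np.ndarray) -> np.ndarray:
    """mark_xyz: (3,) puncture-point or route-point coordinates in medical-image space."""
    inv = np.linalg.inv(registration)           # inverse of the registration matrix
    p = inv @ np.append(mark_xyz, 1.0)          # homogeneous transform
    return p[:3]                                # coordinates in the ultrasound frame
```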
In some embodiments, the puncture mark may also be displayed in the fused image obtained after the ultrasound image and the cross-sectional image are fused.
Optionally, after the staff plans the puncture mark according to the neural tissue of the vertebra part of the target object, the control console automatically acquires the puncture mark planned in the second reconstruction model after the three-dimensional reconstruction of the medical image. Wherein the puncture mark comprises at least one of puncture route coordinate information and puncture point coordinate information. In the process of scanning by the ultrasonic probe, the console converts the position information of the ultrasonic image into the position information under the medical image coordinate in real time through the registration matrix, and when the section image converted by the ultrasonic image is intersected with the position of the puncture mark under the medical image coordinate system, the puncture mark is determined to be scanned in the ultrasonic image. The control console converts coordinate information of the puncture mark into coordinate information under an ultrasonic coordinate system based on an inverse matrix of the registration matrix, and displays the scanned puncture mark in an ultrasonic image according to the converted coordinate information so as to guide puncture.
In the embodiment, the puncture mark is displayed in the ultrasonic image, so that a doctor can more intuitively see the position of the needle point and the position of the needle point relative to the puncture mark, the puncture angle and the puncture depth can be conveniently adjusted in real time, and the controllability of operation is improved.
In one embodiment, displaying the puncture mark in the ultrasound image to guide the puncture when the ultrasound probe scans the puncture mark comprises:
and acquiring a second coordinate of the ultrasonic image scanned by the ultrasonic probe in real time. The second coordinates are three-dimensional coordinates.
The second coordinates are converted into second converted coordinates in the second reconstruction model based on the registration matrix.
And when the second conversion coordinates are matched with at least one of the puncture route coordinate information and the puncture point coordinate information, determining that the ultrasonic probe scans the puncture mark.
The puncture mark is displayed in the ultrasound image to guide the puncture.
Wherein the second coordinate is obtainable by a magnetic positioning system. The matching of the second conversion coordinate with at least one of the puncture route coordinate information and the puncture point coordinate information means that the second conversion coordinate intersects with at least one of the puncture route coordinate information and the puncture point coordinate information.
When the puncture mark is displayed in the ultrasonic image, only the puncture mark scanned by the ultrasonic probe in real time is displayed.
Optionally, the console acquires a second coordinate of the ultrasound image scanned by the ultrasound probe in real time through the magnetic positioning system. The console converts the second coordinates into second converted coordinates in the second reconstructed model based on the registration matrix. When the second conversion coordinates are matched with at least one of the puncture route coordinate information and the puncture point coordinate information, the control console determines that the ultrasonic probe scans the puncture mark, and displays the scanned puncture mark in the ultrasonic image so as to guide puncture.
In this embodiment, by acquiring in real time the second coordinates of the ultrasound image scanned by the ultrasound probe and converting them, based on the predetermined registration matrix, into second converted coordinates in the second reconstruction model, accurate matching between image data of different modalities is achieved; this ensures that structures in the ultrasound image correspond accurately to the second reconstruction model obtained from the three-dimensional reconstruction of the medical image and improves positioning accuracy. By determining that the ultrasound probe has scanned the puncture mark when the second converted coordinates match the puncture route coordinate information or the puncture point coordinate information, and displaying the scanned puncture mark on the ultrasound image, this real-time feedback mechanism greatly enhances visualization and intuitiveness and helps perform the puncture operation more accurately.
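A minimal sketch of the matching test described in this embodiment follows; it treats the puncture route as a line segment and uses a small distance tolerance, which is an assumption since the patent only states that the coordinates must intersect.

```python
import numpy as np

def scanned_puncture_mark(coord: np.ndarray, point: np.ndarray,
                          route_start: np.ndarray, route_end: np.ndarray,
                          tol_mm: float = 2.0) -> bool:
    """coord: second converted coordinates; point: puncture point; route_*: puncture route."""
    if np.linalg.norm(coord - point) <= tol_mm:           # near the puncture point
        return True
    seg = route_end - route_start
    t = np.clip(np.dot(coord - route_start, seg) / (np.dot(seg, seg) + 1e-12), 0.0, 1.0)
    closest = route_start + t * seg                       # nearest point on the route
    return bool(np.linalg.norm(coord - closest) <= tol_mm)
```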
In some embodiments, the console includes a display screen that is operable to display an image interface that includes a first region 10, a second region 20, and a third region 30. Wherein the first region 10 may display the ultrasound image and the fusion image simultaneously, the display positions of the fusion image and the ultrasound image in the first region 10 allowing switching. The second region 20 displays a three-dimensional model for guiding a puncture and the third region 30 may be used to display acquired medical images and/or fused images. Schematic diagrams of the image interface are shown in fig. 7 and 8. Wherein the three-dimensional model for guiding the puncture may be the first reconstruction model or the second reconstruction model.
The application also provides an application scene, which applies the image fusion method. Specifically, the application of the image fusion method in the application scene is as follows:
The staff controls the ultrasonic probe to scan the vertebra part of the target object from the sagittal position, and sends the scanned multi-frame ultrasonic image to the console. And the console receives the ultrasonic image sequence with the two-dimensional coordinate information, and performs three-dimensional reconstruction on the ultrasonic image sequence to obtain a first reconstruction model. The console segments a first transverse process image from the first reconstruction model and acquires pose information of the ultrasonic probe in the ultrasonic image acquisition process. The console determines a transformation matrix based on the pose information, and converts the two-dimensional coordinate information of each transverse process in the first transverse process image by using the transformation matrix to obtain the three-dimensional coordinate information of each transverse process in the first transverse process image. After obtaining a medical image of the vertebrae of the target subject by one or more of computed tomography, magnetic resonance imaging, and positron emission tomography, the console segments a second transverse process image from the medical image using one or more of thresholding, edge detection, and model segmentation.
The console sorts the transverse processes from low to high according to the coordinate value of each transverse process on the target axis, and a sorting result of each transverse process is obtained. The console determines the vertebrae in which each transverse process in the first transverse process image is located based on morphological features and/or adjacent structures of each transverse process in the first transverse process image. The console determines transverse processes of the first transverse process image and the second transverse process image in the same vertebra as transverse process pairs, and registers the ultrasonic image and the medical image according to three-dimensional coordinate information of each transverse process in the transverse process pairs to obtain a registration matrix.
The console performs data preprocessing on the multiple frames of medical images to obtain a preprocessed image sequence, and performs three-dimensional reconstruction on the preprocessed image sequence using a three-dimensional reconstruction algorithm to obtain the second reconstruction model. The console acquires, in real time through the magnetic positioning system, the first coordinates of the ultrasound image scanned by the ultrasound probe, and converts the first coordinates into first converted coordinates in the second reconstruction model based on the registration matrix. The console looks up the cross-sectional image corresponding to the first converted coordinates in the second reconstruction model. The console fuses the cross-sectional image and the ultrasound image to obtain a fused image, so that puncture guidance can be performed based on the fused image, the cross-sectional image, and the ultrasound image.
After the staff plans the puncture mark according to the nerve tissue of the target object's vertebral region, the console automatically acquires the puncture mark planned in the second reconstruction model obtained from the three-dimensional reconstruction of the medical image. The puncture mark includes at least one of puncture route coordinate information and puncture point coordinate information. During scanning with the ultrasound probe, the console converts the position information of the ultrasound image into position information in the medical image coordinate system in real time through the registration matrix, and when the cross-sectional image converted from the ultrasound image intersects the position of the puncture mark in the medical image coordinate system, it is determined that the puncture mark has been scanned in the ultrasound image. The console converts the coordinate information of the puncture mark into coordinate information in the ultrasound coordinate system based on the inverse of the registration matrix, and displays the scanned puncture mark in the ultrasound image according to the converted coordinate information to guide the puncture. A specific flowchart is shown in fig. 9.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, an embodiment of the present application further provides an image fusion apparatus for implementing the image fusion method described above. The implementation of the solution provided by the apparatus is similar to the implementation described for the method above, so for the specific limitations in the one or more image fusion apparatus embodiments provided below, reference may be made to the limitations of the image fusion method above; details are not repeated here.
In one embodiment, as shown in fig. 10, there is provided an image fusion apparatus including:
An image acquisition module 1002 is configured to acquire a first transverse process image segmented from an ultrasound image of a vertebral region of a target subject, and a second transverse process image segmented from a medical image of the vertebral region of the target subject.
The registration fusion module 1004 is configured to register the ultrasound image and the medical image according to three-dimensional coordinate information of each transverse process in the first transverse process image and the second transverse process image, so as to obtain a registration matrix.
In one embodiment, the image fusion device is further used for determining the vertebra where each transverse process in the first transverse process image is located and the vertebra where each transverse process in the second transverse process image is located, determining the transverse processes of the first transverse process image and the second transverse process image in the same vertebra as a transverse process pair, and registering the ultrasonic image and the medical image according to three-dimensional coordinate information of each transverse process in the transverse process pair to obtain a registration matrix.
In one embodiment, the image fusion device is further configured to acquire the center coordinates of the transverse processes in each vertebra of the target object; to determine, from among these centers and according to the center coordinates and the three-dimensional coordinate information of each transverse process in the first transverse process image, the target center closest to each transverse process in the first transverse process image; and to determine the vertebra corresponding to that target center as the vertebra in which the transverse process of the first transverse process image is located.
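A brief sketch of the nearest-center assignment, assuming a representative center coordinate is available for each vertebra (for example, derived from the transverse processes already assigned in the medical image); function and variable names are illustrative.

```python
import numpy as np

def assign_vertebrae(process_coords, vertebra_centers):
    """For each transverse process (rows of process_coords, shape (N, 3)), return the
    index of the nearest center (rows of vertebra_centers, shape (M, 3)); the vertebra
    associated with that center is taken to be the one the process belongs to."""
    p = np.asarray(process_coords, dtype=float)[:, None, :]    # (N, 1, 3)
    c = np.asarray(vertebra_centers, dtype=float)[None, :, :]  # (1, M, 3)
    return np.argmin(np.linalg.norm(p - c, axis=2), axis=1)    # (N,)
```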
In one embodiment, the image fusion device is further configured to rank the transverse processes in the second transverse process image according to their coordinate values on the target axis, wherein the target axis is the foot-to-head direction of the target object, and to determine the vertebra in which each transverse process in the second transverse process image is located according to the ranking result.
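The ranking step can be sketched as follows, assuming the target axis is the third coordinate; consecutive ranks are then mapped to successive vertebrae starting from the most caudal one. This only illustrates the ordering and is not the patent's exact labelling rule.

```python
import numpy as np

def rank_along_axis(process_coords, axis=2):
    """Sort transverse processes by their coordinate along the target (foot-to-head)
    axis and return each process's rank; rank 0 is the most caudal process, and
    ranks are subsequently mapped to successive vertebrae."""
    coords = np.asarray(process_coords, dtype=float)
    order = np.argsort(coords[:, axis])
    ranks = np.empty(len(order), dtype=int)
    ranks[order] = np.arange(len(order))
    return ranks
```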
In one embodiment, the image fusion device is further configured to scan the vertebral region of the target object from a sagittal position with an ultrasonic probe to obtain an ultrasound image sequence carrying two-dimensional coordinate information, to perform three-dimensional reconstruction based on the ultrasound image sequence to obtain a first reconstruction model, and to segment the first transverse process image from the first reconstruction model, wherein the first reconstruction model comprises the three-dimensional coordinate information of each transverse process in the first transverse process image.
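A deliberately simplified sketch of building the first reconstruction model from the tracked sagittal sweep: each 2D frame is placed into an output volume at the slice index implied by its tracked position, with no interpolation or compounding. Real freehand 3D ultrasound reconstruction is considerably more involved; the names and the assumption that the sweep runs along the first axis are illustrative.

```python
import numpy as np

def reconstruct_volume(frames, frame_positions, shape, spacing_mm=1.0):
    """Drop each tracked 2D frame into an output volume at the slice index implied
    by its position along the sweep axis (nearest-neighbour, no interpolation).

    frames          : iterable of (H, W) ultrasound images from the sagittal sweep
    frame_positions : iterable of 3D probe positions, one per frame
    shape           : (D, H, W) of the output volume in voxels
    """
    volume = np.zeros(shape, dtype=np.float32)
    for img, pos in zip(frames, frame_positions):
        d = int(round(pos[0] / spacing_mm))        # sweep assumed along the first axis
        if 0 <= d < shape[0]:
            volume[d, :img.shape[0], :img.shape[1]] = img
    return volume
```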
In one embodiment, the image fusion device is further used for carrying out three-dimensional reconstruction based on an image sequence where the medical image is located to obtain a second reconstruction model, acquiring a first coordinate of an ultrasonic image scanned by the ultrasonic probe in real time, wherein the first coordinate is a three-dimensional coordinate, converting the first coordinate into a first conversion coordinate in the second reconstruction model based on the registration matrix, determining a cross-sectional image of the ultrasonic image in the second reconstruction model based on the first conversion coordinate, and fusing the cross-sectional image and the ultrasonic image to obtain a fused image so as to carry out puncture guidance based on the fused image, the cross-sectional image and the ultrasonic image.
In one embodiment, the image fusion device is further used for acquiring a puncture mark planned in a second reconstruction model after three-dimensional reconstruction of the medical image, wherein the puncture mark comprises at least one of puncture route coordinate information and puncture point coordinate information, and when the ultrasonic probe scans the puncture mark, the puncture mark is displayed in the ultrasonic image so as to guide puncture.
In one embodiment, the image fusion device is further used for acquiring second coordinates of an ultrasonic image scanned by the ultrasonic probe in real time, the second coordinates are three-dimensional coordinates, converting the second coordinates into second converted coordinates in a second reconstruction model based on the registration matrix, determining that the ultrasonic probe scans a puncture mark when the second converted coordinates are matched with at least one of puncture route coordinate information and puncture point coordinate information, and displaying the puncture mark in the ultrasonic image to guide puncture.
The modules in the above image fusion apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 11. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing an ultrasonic image, a first transverse process image, a medical image, a second transverse process image, three-dimensional coordinate information of transverse processes, a registration matrix, transverse process pairs, a center coordinate, a target center, a sorting result of each transverse process, a first reconstruction model, a second reconstruction model, a first coordinate, a first conversion coordinate, a fusion image, a puncture mark, a second coordinate and a second conversion coordinate. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an image fusion method.
It will be appreciated by those skilled in the art that the structure shown in fig. 11 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
The user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by way of a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The foregoing embodiments illustrate only a few implementations of the present application and are described in relative detail, but they are not to be construed as limiting the scope of the application. It should be noted that several variations and improvements can be made by those of ordinary skill in the art without departing from the concept of the application, and these all fall within the protection scope of the application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)


Priority Applications (1)

Application Number: CN202511062353.9A (publication CN120563340A); Priority Date: 2025-07-31; Filing Date: 2025-07-31; Title: Image fusion method, device and computer equipment; Status: Pending


Publications (1)

Publication Number: CN120563340A; Publication Date: 2025-08-29

Family ID: 96817149


Citations (5)

* Cited by examiner, † Cited by third party
CN109646089A* (priority 2019-01-15, published 2019-04-19, 浙江大学): A kind of spine and spinal cord body puncture based on multi-mode medical blending image enters waypoint intelligent positioning system and method
CN114418960A* (priority 2021-12-27, published 2022-04-29, 苏州微创畅行机器人有限公司): Image processing method, system, computer equipment and storage medium
CN115553883A* (priority 2022-09-29, published 2023-01-03, 浙江大学): Percutaneous spinal puncture positioning system based on robot ultrasonic scanning imaging
CN119112232A* (priority 2024-09-18, published 2024-12-13, 河南省人民医院): Ultrasonic imaging method and ultrasonic probe used in ultrasonic imaging equipment
CN119168849A* (priority 2024-08-27, published 2024-12-20, 深圳惟德精准医疗科技有限公司): Puncture method and related products based on image registration



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
