Disclosure of Invention
The invention aims to solve the problem that the smooth surface of a puncture needle produces strong specular reflection, so that ultrasonic waves cannot return effectively to the probe and the needle appears blurred in the image, which reduces puncture positioning efficiency at the critical moment of positioning. To this end, the invention provides a puncture positioning method and device based on ultrasonic imaging.
In a first aspect of the present invention, there is first provided a puncture positioning method based on ultrasonic imaging, the method comprising:
synchronously acquiring, through an ultrasonic probe and a photoacoustic probe, an ultrasonic image and a photoacoustic image while the puncture needle moves, to obtain an ultrasonic image set and a photoacoustic image set;
respectively performing feature extraction on an ultrasonic image and a photoacoustic image to obtain a first feature and a second feature, wherein the ultrasonic image is any ultrasonic image in the ultrasonic image set and the photoacoustic image is the photoacoustic image acquired at the same time as the ultrasonic image;
aligning and then fusing the first feature and the second feature to obtain a target feature, and marking the target feature on the ultrasonic image to obtain a target image;
and constructing a target three-dimensional space from all target images acquired while the puncture needle moves, and identifying the position of the puncture needle in the target three-dimensional space to obtain a puncture positioning position.
Optionally, performing feature extraction on the ultrasound image and the photoacoustic image to obtain the first feature and the second feature includes:
respectively carrying out image preprocessing on the ultrasonic image and the photoacoustic image to obtain a first ultrasonic image and a first photoacoustic image;
performing image segmentation on the first ultrasonic image and the first photoacoustic image through an edge detection algorithm to obtain a second ultrasonic image and a second photoacoustic image;
respectively performing geometric feature extraction on the second ultrasonic image and the second photoacoustic image to obtain a first geometric feature and a second geometric feature;
substituting the second ultrasonic image and the second photoacoustic image into a convolutional neural network model to obtain a first depth feature and a second depth feature;
and denoting the first geometric feature and the first depth feature as the first feature, and the second geometric feature and the second depth feature as the second feature.
Optionally, performing spatial fusion on the first feature and the second feature after alignment to obtain a target feature includes:
acquiring a first geometric feature and a second geometric feature in the first feature and the second feature, and respectively extracting the needle tip position of the puncture needle in the first geometric feature and the second geometric feature to obtain a first puncture position and a second puncture position;
calculating the Euclidean distance between the first puncture position and the second puncture position to obtain a target distance; if the target distance is greater than a preset distance, applying a similarity transformation to the first puncture position and the second puncture position until the target distance is less than or equal to the preset distance, and geometrically aligning the ultrasonic image and the photoacoustic image according to the adjustment to obtain a first aligned ultrasonic image and a first aligned photoacoustic image;
performing local binary pattern processing on the first aligned ultrasonic image to obtain a first texture feature, and extracting textures in the first aligned photoacoustic image through a gray level co-occurrence matrix to obtain a second texture feature;
mapping the first texture feature and the second texture feature to the same feature space through principal component analysis for spatial alignment to obtain a second aligned ultrasonic image and a second aligned photoacoustic image;
and performing weighted fusion of the first feature and the second feature corresponding to the second aligned ultrasonic image and the second aligned photoacoustic image to obtain an initial target feature.
Optionally, constructing the target three-dimensional space according to all target images when the puncture needle moves includes:
performing inter-frame alignment on all target images to obtain an aligned image set, and, for the aligned images in the aligned image set, sequentially mapping each aligned image to a three-dimensional space in time order to obtain an initial three-dimensional space;
and filling unmapped voxels in the initial three-dimensional space by cubic interpolation to obtain the target three-dimensional space.
Optionally, mapping each aligned image to the three-dimensional space in sequence according to the time sequence to obtain the initial three-dimensional space includes:
projecting each alignment image into a three-dimensional coordinate system through geometric information of ultrasonic equipment, and mapping pixel points of a two-dimensional image into voxels of a three-dimensional space;
the space positions of the puncture needle at different time points are gradually accumulated to obtain an initial three-dimensional space.
In a second aspect of the present invention, there is provided a puncture positioning device based on ultrasonic imaging, comprising:
The image acquisition module is used for synchronously acquiring an ultrasonic image and a photoacoustic image when the puncture needle moves through the ultrasonic probe and the photoacoustic probe to obtain an ultrasonic image set and a photoacoustic image set;
The feature extraction module is used for respectively performing feature extraction on an ultrasonic image and a photoacoustic image to obtain a first feature and a second feature, wherein the ultrasonic image is any ultrasonic image in the ultrasonic image set and the photoacoustic image is the photoacoustic image acquired at the same time as the ultrasonic image;
The feature fusion module is used for fusing the first features and the second features after being aligned to obtain target features, and labeling the target features on the ultrasonic image to obtain a target image;
The three-dimensional construction module is used for constructing a target three-dimensional space according to all target images when the puncture needle moves, and identifying the position of the puncture needle in the target three-dimensional space to obtain a puncture positioning position.
Optionally, the feature extraction module includes:
the image preprocessing module is used for respectively carrying out image preprocessing on the ultrasonic image and the photoacoustic image to obtain a first ultrasonic image and a first photoacoustic image;
The image segmentation module is used for respectively carrying out image segmentation on the first ultrasonic image and the first photoacoustic image through an edge detection algorithm to obtain a second ultrasonic image and a second photoacoustic image;
the geometrical feature extraction module is used for respectively carrying out geometrical feature extraction on the second ultrasonic image and the second photoacoustic image to obtain a first geometrical feature and a second geometrical feature;
The depth feature extraction module is used for substituting the second ultrasonic image and the second photoacoustic image into a convolutional neural network model to obtain a first depth feature and a second depth feature;
and the feature determining module is used for marking the first geometric feature and the first depth feature as first features and marking the second geometric feature and the second depth feature as second features.
Optionally, the feature fusion module includes:
the puncture position determining module is used for acquiring a first geometric feature and a second geometric feature in the first feature and the second feature, and extracting the needle point position of the puncture needle in the first geometric feature and the second geometric feature to obtain a first puncture position and a second puncture position;
The geometric alignment module is used for calculating the Euclidean distance between the first puncture position and the second puncture position to obtain a target distance; if the target distance is greater than a preset distance, applying a similarity transformation to the first puncture position and the second puncture position until the target distance is less than or equal to the preset distance, and geometrically aligning the ultrasonic image and the photoacoustic image according to the adjustment to obtain a first aligned ultrasonic image and a first aligned photoacoustic image;
the texture feature extraction module is used for carrying out local binary processing on the first aligned ultrasonic image to obtain a first texture feature, and extracting textures in the first aligned photoacoustic image through the gray level co-occurrence matrix to obtain a second texture feature;
the space alignment module is used for mapping the first texture features and the second texture features to the same feature space through principal component analysis to perform space alignment to obtain a second aligned ultrasonic image and a second aligned photoacoustic image;
And the weighted fusion module is used for performing weighted fusion of the first feature and the second feature corresponding to the second aligned ultrasonic image and the second aligned photoacoustic image to obtain an initial target feature.
Optionally, the three-dimensional building module includes:
the image mapping module is used for performing inter-frame alignment on all target images to obtain an aligned image set, and, for the aligned images in the aligned image set, mapping each aligned image to a three-dimensional space in time order to obtain an initial three-dimensional space;
And the voxel filling module is used for filling unmapped voxels in the initial three-dimensional space through cubic interpolation to obtain a target three-dimensional space.
Optionally, the image mapping module includes:
The image projection module is used for projecting each alignment image into a three-dimensional coordinate system through geometric information of the ultrasonic equipment, and mapping pixel points of the two-dimensional image into voxels of a three-dimensional space;
The position accumulation module is used for gradually accumulating the spatial positions of the puncture needle at different time points to obtain an initial three-dimensional space.
The invention has the beneficial effects that:
The invention provides a puncture positioning method based on ultrasonic imaging. The method synchronously acquires, through an ultrasonic probe and a photoacoustic probe, an ultrasonic image and a photoacoustic image while the puncture needle moves, obtaining an ultrasonic image set and a photoacoustic image set; performs feature extraction on an ultrasonic image and a photoacoustic image respectively to obtain a first feature and a second feature, wherein the ultrasonic image is any ultrasonic image in the ultrasonic image set and the photoacoustic image is the photoacoustic image acquired at the same time as the ultrasonic image; aligns and then fuses the first feature and the second feature to obtain a target feature, and marks the target feature on the ultrasonic image to obtain a target image; and constructs a target three-dimensional space from all target images acquired while the puncture needle moves, identifying the position of the puncture needle in the target three-dimensional space to obtain a puncture positioning position. Because the ultrasonic image and the photoacoustic image are acquired simultaneously, feature extraction and fusion of images captured at the same moment by the two imaging technologies can generate a target image containing more information, improving the accuracy and reliability of positioning. By analyzing all target images while the puncture needle moves, the needle is rendered clearly in the image; constructing a three-dimensional space displays the movement track and position of the needle tip more intuitively, improving puncture positioning efficiency.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. The term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist: for example, "A and/or B" may mean that A exists alone, A and B exist together, or B exists alone. Furthermore, descriptions such as "first" and "second" are for descriptive purposes only and are not to be construed as indicating or implying relative importance or an order among the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but only on the basis that they can be realized by those skilled in the art; when technical solutions are contradictory or cannot be realized, their combination should be considered absent and not within the scope of protection claimed in the present invention.
All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment of the invention provides a puncture positioning method based on ultrasonic imaging. Referring to fig. 1, fig. 1 is a flowchart of a puncture positioning method based on ultrasonic imaging according to an embodiment of the present invention. The method comprises the following steps:
S101, synchronously acquiring an ultrasonic image and a photoacoustic image when a puncture needle moves through an ultrasonic probe and a photoacoustic probe to obtain an ultrasonic image set and a photoacoustic image set;
s102, respectively carrying out feature extraction on an ultrasonic image and a photoacoustic image to obtain a first feature and a second feature;
S103, aligning the first feature and the second feature, then fusing to obtain a target feature, and marking the target feature on the ultrasonic image to obtain a target image;
S104, constructing a target three-dimensional space according to all target images when the puncture needle moves, and identifying the position of the puncture needle in the target three-dimensional space to obtain a puncture positioning position.
The ultrasonic image is any ultrasonic image in the ultrasonic image set, and the photoacoustic image is the photoacoustic image acquired at the same time as the ultrasonic image.
According to the puncture positioning method based on ultrasonic imaging, the ultrasonic image and the photoacoustic image are acquired simultaneously, and feature extraction and fusion of images captured at the same moment by the two imaging technologies can generate a target image containing more information, improving positioning accuracy and reliability. By analyzing all target images while the puncture needle moves, the needle is rendered clearly in the image; constructing a three-dimensional space displays the movement track and position of the needle tip more intuitively, improving puncture positioning efficiency.
In one implementation, the target features are marked on the ultrasonic image so that the target image clearly labels the puncture needle and the surrounding tissue structure. This improves the visibility of the puncture needle in the image and makes positioning more accurate; identifying the target features of the puncture needle in the three-dimensional space then yields specific position information.
In one implementation, the ultrasound probe may be a GE HEALTHCARE L, Philips L12-3, Siemens 14L5, or Mindray L-4s, and the photoacoustic probe may be an LZ250, MX550D, MSOT Acuity Echo, MX201, etc.
In one implementation, the ultrasound image provides information about tissue structure, but the puncture needle is difficult to display clearly because of specular reflection. The photoacoustic image enhances visualization of the needle, since the photoacoustic effect provides better contrast enhancement for the needle. Combining the two significantly improves the definition and visibility of the needle in imaging and helps doctors position the needle more accurately.
In one implementation, the motion of the puncture needle is monitored in real time and a three-dimensional space is constructed from all target images, so that the dynamic position of the needle in the body can be accurately tracked. The position and path of the needle can thus be mastered precisely even in a complex anatomical structure, and the operation strategy can be adjusted in real time, reducing errors and improving the success rate of the procedure.
In one implementation, the planning and adjustment of the puncture path may be optimized using the constructed target three-dimensional space. This not only allows the puncture path to be optimized before the operation, but also allows the path to be adjusted in real time during the operation, so that the needle reaches the target position accurately along the planned path.
In one embodiment, referring to fig. 2, step S102 specifically includes:
S1021, respectively carrying out image preprocessing on the ultrasonic image and the photoacoustic image to obtain a first ultrasonic image and a first photoacoustic image;
S1022, respectively carrying out image segmentation on the first ultrasonic image and the first photoacoustic image through an edge detection algorithm to obtain a second ultrasonic image and a second photoacoustic image;
s1023, respectively carrying out geometric feature extraction on the second ultrasonic image and the second photoacoustic image to obtain a first geometric feature and a second geometric feature;
S1024, substituting the second ultrasonic image and the second photoacoustic image into a convolutional neural network model to obtain a first depth feature and a second depth feature;
S1025, the first geometric feature and the first depth feature are marked as first features, and the second geometric feature and the second depth feature are marked as second features.
In one implementation, preprocessing of the ultrasonic image removes speckle noise by Gaussian filtering and then applies contrast enhancement to increase the contrast between the puncture needle and background tissue, making the needle more prominent in the image. Preprocessing of the photoacoustic image removes background noise with a Wiener filter and then enhances contrast by histogram equalization, ensuring that the puncture needle and the surrounding tissue are presented more clearly.
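As an illustrative sketch of this preprocessing step (the concrete filters, parameters, and function names here are assumptions for illustration, not values mandated by the invention), the Gaussian/Wiener denoising and contrast enhancement could look like:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import wiener

def preprocess_ultrasound(img):
    """Suppress speckle noise with Gaussian filtering, then stretch contrast."""
    den = gaussian_filter(img.astype(float), sigma=1.0)
    lo, hi = den.min(), den.max()
    return (den - lo) / (hi - lo + 1e-8)  # contrast stretched to [0, 1]

def preprocess_photoacoustic(img):
    """Remove background noise with a Wiener filter, then equalize the histogram."""
    den = wiener(img.astype(float), mysize=5)
    hist, bins = np.histogram(den.ravel(), bins=256)
    cdf = hist.cumsum() / hist.sum()          # cumulative distribution
    idx = np.clip(np.digitize(den, bins[:-1]) - 1, 0, 255)
    return cdf[idx]                           # equalized image in [0, 1]
```

Both functions return an image of the input shape with values in [0, 1], so the two modalities are on a comparable scale before feature extraction.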
In one implementation, important boundaries and contours in the image are identified through an edge detection algorithm, making the boundary between the puncture needle and the surrounding tissue more obvious. Geometric feature extraction helps capture structural information such as the shape, size, and boundaries of the target area, while the depth features extracted by a convolutional neural network (CNN) capture complex nonlinear patterns from local details and global structure; using geometric and depth features together strengthens the model's feature extraction for the puncture needle.
In one implementation, the second ultrasonic image and the second photoacoustic image are images in which the puncture needle region has been separated from the background so that only the needle region remains; geometric feature extraction yields the length, width, shape, position information, and the like of the puncture needle, and the convolutional neural network model is a CNN with 3×3 and 5×5 convolution kernels.
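The geometric attributes named above (length, width, and position of the segmented needle) can be sketched directly from a binary mask. The PCA-on-coordinates approach below is an assumed, minimal realization for illustration, not the invention's prescribed method:

```python
import numpy as np

def needle_geometry(mask):
    """Extract length, width, center, and tip position of the needle
    region from a binary segmentation mask (1 = needle pixel)."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    center = pts.mean(axis=0)
    # principal axis of the needle via eigen-decomposition of the covariance
    cov = np.cov((pts - center).T)
    w, v = np.linalg.eigh(cov)
    axis = v[:, np.argmax(w)]            # unit vector along the needle
    proj = (pts - center) @ axis          # coordinates along the axis
    perp = (pts - center) @ v[:, np.argmin(w)]
    tip = pts[np.argmax(proj)]            # furthest point along the axis
    return {
        "length": proj.max() - proj.min(),
        "width": perp.max() - perp.min(),
        "center": center,
        "tip": tip,
    }
```

For a thin, roughly straight needle the dominant eigenvector gives the shaft direction, and either extreme point along it is a tip candidate.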
In one embodiment, spatially fusing the first feature and the second feature after alignment to obtain the target feature includes:
acquiring a first geometric feature and a second geometric feature in the first feature and the second feature, and respectively extracting the needle tip position of the puncture needle in the first geometric feature and the second geometric feature to obtain a first puncture position and a second puncture position;
calculating the Euclidean distance between the first puncture position and the second puncture position to obtain a target distance; if the target distance is greater than a preset distance, applying a similarity transformation to the first puncture position and the second puncture position until the target distance is less than or equal to the preset distance, and geometrically aligning the ultrasonic image and the photoacoustic image according to the adjustment to obtain a first aligned ultrasonic image and a first aligned photoacoustic image;
performing local binary pattern processing on the first aligned ultrasonic image to obtain a first texture feature, and extracting textures in the first aligned photoacoustic image through a gray level co-occurrence matrix to obtain a second texture feature;
mapping the first texture feature and the second texture feature to the same feature space through principal component analysis for spatial alignment to obtain a second aligned ultrasonic image and a second aligned photoacoustic image;
and performing weighted fusion of the first feature and the second feature corresponding to the second aligned ultrasonic image and the second aligned photoacoustic image to obtain an initial target feature.
In one implementation, the preset distance is determined by a technician. Extracting the needle tip position separately in the ultrasonic image and the photoacoustic image and performing geometric alignment through a similarity transformation ensures that the two modal images are accurately matched in space: the Euclidean distance between the needle tip positions in the two modalities is compared against a preset alignment threshold, and the positions of the two images are adjusted by the translation, scaling, and rotation of the similarity transformation. This guarantees alignment precision, so the multi-modal images can be comparatively analyzed in the same space. Extracting texture features from the ultrasonic image and the photoacoustic image through local binary pattern processing and the gray level co-occurrence matrix, respectively, captures the rich detail in the images, improving the fusion result and the accuracy of the overall analysis.
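As a minimal sketch of the tip-distance check, assuming a translation-only special case of the similarity transformation (the full transform would also include rotation and scaling, and `align_tips` is an illustrative name):

```python
import numpy as np

def align_tips(tip_us, tip_pa, preset=2.0):
    """If the Euclidean distance between the two tip positions exceeds
    the preset threshold, translate the photoacoustic tip onto the
    ultrasound tip; returns the adjusted tip and the applied shift."""
    tip_us = np.asarray(tip_us, dtype=float)
    tip_pa = np.asarray(tip_pa, dtype=float)
    target_distance = np.linalg.norm(tip_us - tip_pa)
    if target_distance <= preset:
        return tip_pa, np.zeros_like(tip_pa)   # already within tolerance
    shift = tip_us - tip_pa                    # translation component
    return tip_pa + shift, shift
```

The returned shift would then be applied to the whole photoacoustic image so that both modalities share one coordinate frame.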
In one implementation, the texture features of the ultrasonic and photoacoustic images are mapped to the same feature space through principal component analysis, achieving unified and aligned feature dimensions; this effectively reduces the data dimensionality while retaining the main information, improving image processing speed and efficiency.
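One way to realize the shared-space PCA mapping is to fit the principal axes on the concatenation of both feature sets and project each set onto them. This is a sketch; the shared-fit strategy and the dimensionality `k` are assumptions:

```python
import numpy as np

def pca_project(feat_a, feat_b, k=2):
    """Map two feature sets (rows = samples) into one shared
    k-dimensional space via PCA fitted on their concatenation."""
    X = np.vstack([feat_a, feat_b]).astype(float)
    mean = X.mean(axis=0)
    # principal axes from the SVD of the centered data
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:k].T                      # top-k principal directions
    return (feat_a - mean) @ basis, (feat_b - mean) @ basis
```

Because both projections use the same mean and basis, the two modalities land in a common coordinate system where they can be compared and fused.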
In one implementation, the features of the aligned ultrasonic image and the aligned photoacoustic image are fused with weights, so that the information of the two modalities is used comprehensively. Combining the useful features of the different image sources yields more complete image information and improves the robustness and accuracy of the diagnostic model.
In one implementation, local binary pattern processing enhances the contrast of local areas of the image so that texture information is more prominent, while the gray level co-occurrence matrix computes texture statistics of the image and extracts subtle changes. Combining the two better captures the microstructure differences in the ultrasound and photoacoustic images.
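A numpy-only sketch of these two texture descriptors, an 8-neighbour local binary pattern histogram and a horizontal gray-level co-occurrence contrast, follows. These are simplified illustrative versions (uniform patterns, multiple GLCM offsets, and other refinements are omitted):

```python
import numpy as np

def lbp_hist(img):
    """Minimal 8-neighbour local binary pattern histogram."""
    c = img[1:-1, 1:-1]                    # interior pixels
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= (nb >= c).astype(int) << bit   # set bit if neighbour >= center
    return np.bincount(code.ravel(), minlength=256) / code.size

def glcm_contrast(img, levels=8):
    """Contrast statistic of a horizontal gray-level co-occurrence matrix
    (expects intensities in [0, 1])."""
    q = (img * (levels - 1)).astype(int)       # quantize to `levels` gray levels
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return ((i - j) ** 2 * glcm).sum()
```

The LBP histogram characterizes local micro-patterns, while the GLCM contrast summarizes how abruptly gray levels change between horizontal neighbours; a perfectly uniform image yields zero contrast.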
In one implementation, combining geometric alignment, feature alignment, and weighted fusion makes the resulting initial target feature more reliable and accurate. Such high-quality image fusion provides the physician with richer diagnostic information and supports better-informed medical decisions, particularly in interventional procedures, tumor localization, and tissue analysis.
In one embodiment, constructing the target three-dimensional space from all of the target images as the needle is moved comprises:
performing inter-frame alignment on all target images to obtain an aligned image set, and, for the aligned images in the aligned image set, sequentially mapping each aligned image to a three-dimensional space in time order to obtain an initial three-dimensional space;
and filling unmapped voxels in the initial three-dimensional space by cubic interpolation to obtain the target three-dimensional space.
In one implementation, inter-frame alignment ensures that every image in the time series is spatially consistent under the same reference frame, eliminating image misalignment caused by motion, posture changes, or other factors. Inter-frame alignment can be achieved by feature-point, optical-flow, global-transformation, or deep-learning methods; for example, feature points are detected in each frame, and the inter-frame displacement, rotation, or scaling transformation matrix is then computed by matching the positional relationships between the feature points.
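Feature-point matching is one option; an even lighter alternative sketch estimates the inter-frame translation by phase correlation. This particular method is an assumption chosen for illustration, not the invention's mandated alignment:

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the integer translation taking frame_b to frame_a
    by phase correlation of their 2-D FFTs."""
    fa, fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
    cross = fa * np.conj(fb)
    # normalized cross-power spectrum -> impulse at the shift
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(r), r.shape)
    # wrap shifts larger than half the frame to negative values
    shift = [p - s if p > s // 2 else p for p, s in zip(peak, r.shape)]
    return tuple(shift)
```

Applying the negated shift to each frame brings the sequence into a common reference frame before the voxel mapping step.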
In one implementation, by mapping the aligned images of the time series to a three-dimensional space, a three-dimensional structure containing complete spatial information can be generated, and compared with a single-frame image, the three-dimensional structure can display the spatial form more intuitively and comprehensively, thereby helping doctors to see the position of the puncture needle better.
In one implementation, cubic interpolation is a high-precision interpolation method. Interpolating the unmapped voxels in the initial three-dimensional space produces a smoother, more continuous three-dimensional model, avoiding spatial discontinuities and data loss caused by insufficient image frames or gaps, and improves the fineness and accuracy of the three-dimensional reconstruction. Incomplete voxel data is common when generating three-dimensional medical images; filling the blank regions by cubic interpolation guarantees the integrity and smoothness of the three-dimensional model, which is particularly suitable for fine analysis of tissue edges or complex structures.
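A sketch of this voxel-filling step using SciPy's cubic spline along the scan (z) axis; the per-axis interpolation strategy and the API choice are assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_voxels(slices, z_known, z_all):
    """Fill unmapped voxel slices of the initial volume by cubic
    interpolation along the scan (z) axis. `slices` is a (k, H, W)
    stack acquired at depths `z_known`; `z_all` lists every target depth."""
    spline = CubicSpline(z_known, slices, axis=0)
    return spline(np.asarray(z_all))   # (len(z_all), H, W) filled volume
```

Each (H, W) slice position missing from the acquisition is synthesized from the smooth spline fitted through the acquired slices, which is what removes the spatial discontinuities mentioned above.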
In one embodiment, mapping each of the aligned images into three-dimensional space in sequence in time series to obtain an initial three-dimensional space includes:
projecting each alignment image into a three-dimensional coordinate system through geometric information of ultrasonic equipment, and mapping pixel points of a two-dimensional image into voxels of a three-dimensional space;
the space positions of the puncture needle at different time points are gradually accumulated to obtain an initial three-dimensional space.
In one implementation, the geometric information of the ultrasound device includes the probe position, angle, and scan range. The origin of the three-dimensional coordinate system is the position of the ultrasonic probe, the z-axis is along the scan direction of the probe, and the x-axis and y-axis correspond to the horizontal and vertical directions of the image, respectively. For each pixel of the two-dimensional image, its position on the image plane is computed and then mapped to a voxel position in three-dimensional space. As a concrete example, suppose the image resolution is 0.5 mm/pixel, the physical size of the image is 100 mm × 100 mm, the image plane lies in the z = 0 plane, and the image is directly in front of the probe. For a pixel (i, j) with two-dimensional coordinates (x2D, y2D), its three-dimensional coordinates (x3D, y3D, z3D) are given by x3D = x2D × 0.5 mm and y3D = y2D × 0.5 mm, where z3D is the depth information at pixel (i, j).
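The example mapping in the preceding paragraph can be written out directly. Assuming (as an illustrative convention) that the row index i maps to the vertical coordinate and the column index j to the horizontal one:

```python
import numpy as np

def pixel_to_voxel(i, j, depth_map, res_mm=0.5):
    """Map pixel (i, j) of a 2-D frame to 3-D coordinates in mm,
    following the example in the text: x3D = x2D * 0.5 mm,
    y3D = y2D * 0.5 mm, z3D = depth information at (i, j)."""
    x3d = j * res_mm           # column index -> horizontal position
    y3d = i * res_mm           # row index -> vertical position
    z3d = depth_map[i, j]      # depth information at the pixel
    return np.array([x3d, y3d, z3d])
```

At 0.5 mm/pixel a 200 × 200 frame spans the 100 mm × 100 mm physical extent given in the example.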
In one implementation, the initial three-dimensional space is obtained by gradually accumulating the spatial positions of the puncture needle at different time points, specifically, the three-dimensional coordinate points of the puncture needle at all time points are gradually accumulated. The coordinate points of each time point are added into a whole data set to form a complete puncture needle movement track, and a three-dimensional space model is generated through point cloud data by using the accumulated puncture needle coordinate points.
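The accumulation of needle coordinates over time into a point cloud can be sketched minimally as follows (the class and method names are illustrative, not from the invention):

```python
import numpy as np

class NeedleTrack:
    """Accumulate the needle's 3-D coordinates at successive time
    points into one growing point cloud (the initial 3-D space)."""
    def __init__(self):
        self._points = []

    def add(self, t, xyz):
        """Record the needle position `xyz` observed at time `t`."""
        self._points.append((t, *xyz))

    def cloud(self):
        """Return the accumulated track sorted by time: columns t, x, y, z."""
        return np.array(sorted(self._points))
```

Each call to `add` appends one time-stamped coordinate; the sorted array then traces the complete movement track from which the point-cloud model is generated.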
The embodiment of the invention also provides a puncture positioning device based on ultrasonic imaging based on the same inventive concept. Referring to fig. 3, fig. 3 is a schematic structural diagram of a puncture positioning device based on ultrasonic imaging according to an embodiment of the present invention, including:
The image acquisition module is used for synchronously acquiring an ultrasonic image and a photoacoustic image when the puncture needle moves through the ultrasonic probe and the photoacoustic probe to obtain an ultrasonic image set and a photoacoustic image set;
The feature extraction module is used for respectively performing feature extraction on an ultrasonic image and a photoacoustic image to obtain a first feature and a second feature, wherein the ultrasonic image is any ultrasonic image in the ultrasonic image set and the photoacoustic image is the photoacoustic image acquired at the same time as the ultrasonic image;
the feature fusion module is used for fusing the first features and the second features after aligning to obtain target features, and labeling the target features on the ultrasonic image to obtain a target image;
The three-dimensional construction module is used for constructing a target three-dimensional space according to all target images when the puncture needle moves, and identifying the position of the puncture needle in the target three-dimensional space to obtain a puncture positioning position.
According to the puncture positioning device based on ultrasonic imaging provided by the embodiment of the invention, the ultrasonic image and the photoacoustic image are acquired simultaneously, and feature extraction and fusion of images captured at the same moment by the two imaging technologies can generate a target image containing more information, improving positioning accuracy and reliability. By analyzing all target images while the puncture needle moves, the needle is rendered clearly in the image; constructing a three-dimensional space displays the movement track and position of the needle tip more intuitively, improving puncture positioning efficiency.
In one embodiment, the feature extraction module comprises:
The image preprocessing module is used for respectively carrying out image preprocessing on the ultrasonic image and the photoacoustic image to obtain a first ultrasonic image and a first photoacoustic image;
The image segmentation module is used for respectively carrying out image segmentation on the first ultrasonic image and the first photoacoustic image through an edge detection algorithm to obtain a second ultrasonic image and a second photoacoustic image;
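The patent does not name a specific edge detection algorithm for the image segmentation module; as one minimal numpy sketch, a Sobel gradient magnitude followed by a threshold illustrates the idea (the `threshold` fraction is an illustrative assumption, not a value from the source):

```python
import numpy as np

def sobel_edge_mask(img, threshold=0.5):
    """Segment a 2-D image by edge detection: Sobel gradient magnitude
    followed by a relative threshold. `img` is a float array in [0, 1];
    returns a boolean mask that is True on edge pixels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)          # gradient magnitude
    if mag.max() == 0:
        return np.zeros((h, w), dtype=bool)
    return mag > threshold * mag.max()
```

A production system would more likely use a Canny-style detector with hysteresis, but the threshold-on-gradient idea is the same.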
The geometrical feature extraction module is used for respectively carrying out geometrical feature extraction on the second ultrasonic image and the second photoacoustic image to obtain a first geometrical feature and a second geometrical feature;
The depth feature extraction module is used for substituting the second ultrasonic image and the second photoacoustic image into the convolutional neural network model to obtain a first depth feature and a second depth feature;
The feature determining module is used for recording the first geometric feature and the first depth feature as first features and recording the second geometric feature and the second depth feature as second features.
In one embodiment, the feature fusion module comprises:
The puncture position determining module is used for acquiring a first geometric feature and a second geometric feature in the first feature and the second feature, and extracting the needle point position of the puncture needle in the first geometric feature and the second geometric feature to obtain a first puncture position and a second puncture position;
The geometric alignment module is used for calculating the Euclidean distance between the first puncture position and the second puncture position to obtain a target distance; if the target distance is larger than a preset distance, performing a similarity transformation on the first puncture position and the second puncture position until the target distance is smaller than or equal to the preset distance, and performing geometric alignment on the ultrasonic image and the photoacoustic image according to the adjusted positions to obtain a first aligned ultrasonic image and a first aligned photoacoustic image;
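The distance check above can be sketched directly; for the similarity transformation, the snippet below uses only its translation component (shifting one tip onto the other), which is the simplest special case — the full transform would also estimate rotation and scale, and the preset distance here is an assumed placeholder:

```python
import numpy as np

def target_distance(p1, p2):
    """Euclidean distance between the two detected needle-tip positions."""
    return float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)))

def align_tips(p1, p2, preset=2.0):
    """If the tips are farther apart than the preset distance, translate
    the second tip onto the first (translation being the simplest
    similarity transform); otherwise the positions already satisfy the
    alignment criterion and are returned unchanged."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    if target_distance(p1, p2) > preset:
        p2 = p1.copy()   # apply the translation p1 - p2 to the second tip
    return p1, p2
```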
The texture feature extraction module is used for carrying out local binary processing on the first aligned ultrasonic image to obtain a first texture feature, and extracting textures in the first aligned photoacoustic image through the gray level co-occurrence matrix to obtain a second texture feature;
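Both texture descriptors named above have standard minimal forms; the sketch below implements a basic 8-neighbour local binary pattern histogram and a horizontal-offset grey-level co-occurrence matrix summarised by contrast (the 8-level quantisation and the single (0, 1) offset are illustrative choices, not values from the source):

```python
import numpy as np

def lbp_histogram(img):
    """Local binary pattern: compare each interior pixel with its 8
    neighbours, pack the comparisons into an 8-bit code, and return the
    256-bin code histogram as the texture feature."""
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = np.zeros(256, dtype=int)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= img[i, j]:
                    code |= 1 << bit
            hist[code] += 1
    return hist

def glcm_contrast(img, levels=8):
    """Grey-level co-occurrence matrix for the horizontal (0, 1) offset
    on a `levels`-level quantised image, summarised by its contrast."""
    q = np.minimum((img * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    for i in range(q.shape[0]):
        for j in range(q.shape[1] - 1):
            glcm[q[i, j], q[i, j + 1]] += 1
    glcm /= max(glcm.sum(), 1.0)
    idx = np.arange(levels)
    return float(((idx[:, None] - idx[None, :]) ** 2 * glcm).sum())
```

A uniform region yields zero GLCM contrast, while a checkerboard yields a large one, which is the discriminative behaviour the module relies on.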
The space alignment module is used for mapping the first texture features and the second texture features to the same feature space through principal component analysis to perform space alignment to obtain a second aligned ultrasonic image and a second aligned photoacoustic image;
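One plausible reading of the "same feature space" step is to fit a single PCA on the concatenation of both texture-feature sets and project each set onto the shared top-k components; the sketch below does this with an SVD (the choice of fitting on the concatenation, and k itself, are assumptions):

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the top-k principal components of X."""
    Xc = X - X.mean(axis=0)
    # Columns of Vt.T are the principal directions of the centred data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def common_space(f1, f2, k=2):
    """Map both feature sets into one k-dimensional PCA space by fitting
    PCA on their concatenation, so both live in the same coordinates."""
    both = np.vstack([f1, f2])
    proj = pca_project(both, k)
    return proj[: len(f1)], proj[len(f1):]
```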
And the weighted fusion module is used for carrying out weighted fusion on the first characteristic and the second characteristic corresponding to the second aligned ultrasonic image and the second aligned photoacoustic image to obtain an initial target characteristic.
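The weighted fusion step itself is a convex combination of the two aligned feature vectors; the weights below are illustrative placeholders, as the source does not give values:

```python
import numpy as np

def weighted_fuse(f1, f2, w1=0.6, w2=0.4):
    """Weighted fusion of two aligned feature vectors into the initial
    target feature. The weights are illustrative, not from the patent."""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    return w1 * f1 + w2 * f2
```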
In one embodiment, the three-dimensional construction module comprises:
The image mapping module is used for carrying out inter-frame alignment on all the target images to obtain an aligned image set, and for aligned images in the aligned image set, mapping each aligned image to a three-dimensional space according to a time sequence to obtain an initial three-dimensional space;
And the voxel filling module is used for filling unmapped voxels in the initial three-dimensional space through cubic interpolation to obtain a target three-dimensional space.
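As one simple stand-in for the cubic interpolation step, unmapped voxels (marked NaN here) can be filled column by column along the slice axis with a cubic polynomial fitted to the known samples; a full implementation would typically use piecewise cubic splines in all three axes:

```python
import numpy as np

def fill_voxels_cubic(volume):
    """Fill unmapped voxels (NaN) of a (z, y, x) volume by fitting a
    cubic polynomial along z to the known samples of each (y, x) column.
    Columns with fewer than 4 known samples are left unchanged."""
    filled = volume.copy()
    z = np.arange(volume.shape[0])
    for y in range(volume.shape[1]):
        for x in range(volume.shape[2]):
            col = volume[:, y, x]
            known = ~np.isnan(col)
            if known.sum() >= 4 and not known.all():
                coeff = np.polyfit(z[known], col[known], 3)
                filled[~known, y, x] = np.polyval(coeff, z[~known])
    return filled
```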
In one embodiment, the image mapping module includes:
The image projection module is used for projecting each alignment image into a three-dimensional coordinate system through geometric information of the ultrasonic equipment, and mapping pixel points of the two-dimensional image into voxels of a three-dimensional space;
The position accumulation module is used for gradually accumulating the spatial positions of the puncture needle at different time points to obtain an initial three-dimensional space.
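The projection and accumulation steps above can be sketched under a simplifying assumption: the probe sweeps along the z axis, so each aligned frame occupies one constant-z plane, and the pixel spacing stands in for the ultrasonic equipment's geometric information (a real system would use the probe's full pose):

```python
import numpy as np

def pixel_to_world(u, v, frame_origin, spacing=(0.1, 0.1)):
    """Map a 2-D pixel (u, v) of one aligned frame into 3-D coordinates,
    assuming the frame lies in the constant-z plane at frame_origin =
    (x0, y0, z) with the given in-plane pixel spacing (mm/pixel)."""
    x0, y0, z = frame_origin
    return np.array([x0 + u * spacing[0], y0 + v * spacing[1], z])

def accumulate_positions(tips, z_step=1.0):
    """Accumulate the needle-tip pixel positions of successive frames
    into 3-D points, one frame per z level, giving the needle track."""
    return np.array([pixel_to_world(u, v, (0.0, 0.0, k * z_step))
                     for k, (u, v) in enumerate(tips)])
```

Stacking these per-frame points over time is what yields the initial three-dimensional space described above.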
The foregoing describes embodiments of the present invention in detail; however, the above is only a preferred embodiment of the present invention and should not be construed as limiting the scope of the invention. All equivalent changes and modifications made within the scope of the present invention shall fall within the protection scope of the present invention.