CN118948401B - Puncture positioning method and device based on ultrasonic imaging - Google Patents


Info

Publication number
CN118948401B
CN118948401B · Application CN202411444375.7A
Authority
CN
China
Prior art keywords
image
feature
target
ultrasonic
puncture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411444375.7A
Other languages
Chinese (zh)
Other versions
CN118948401A (en)
Inventor
冯波
邓东阳
曾国强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingfang Precision Medical Device Shenzhen Co ltd
Original Assignee
Jingfang Precision Medical Device Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingfang Precision Medical Device Shenzhen Co., Ltd.
Priority to CN202411444375.7A
Publication of CN118948401A
Application granted
Publication of CN118948401B
Legal status: Active (current)
Anticipated expiration

Abstract

The invention discloses a puncture positioning method and device based on ultrasonic imaging, relating to the technical field of ultrasonic imaging. The method comprises: extracting features from an ultrasonic image and a photoacoustic image synchronously acquired while the puncture needle moves, to obtain a first feature and a second feature; aligning and then fusing the first feature and the second feature to obtain a target feature; marking the target feature on the ultrasonic image to obtain a target image; constructing a target three-dimensional space from all target images acquired during needle movement; and identifying the position of the puncture needle in the target three-dimensional space to obtain the puncture positioning position. By acquiring the ultrasonic image and the photoacoustic image simultaneously and fusing their features, a target image containing more information can be generated, improving the accuracy and reliability of positioning; constructing a three-dimensional space displays the motion track and position of the puncture needle more intuitively and improves puncture positioning efficiency.

Description

Puncture positioning method and device based on ultrasonic imaging
Technical Field
The invention belongs to the technical field of ultrasonic imaging, and particularly relates to a puncture positioning method and device based on ultrasonic imaging.
Background
Ultrasound (US) is increasingly used in modern clinical medicine as a non-invasive imaging technique free of ionizing radiation, and plays a particularly important role in intraoperative procedures. Ultrasound imaging provides clear anatomical views for doctors, guides operations in real time, and greatly improves the accuracy and safety of clinical diagnosis and treatment. Various interventional procedures, including surgical ablation (such as tumor ablation), tissue biopsy, regional anesthesia, drug delivery and therapeutic injection, can be performed under ultrasound guidance. These procedures depend on the real-time imaging capability of ultrasound, which allows doctors to accurately locate target tissues or lesions with minimal invasiveness, reducing patient pain and postoperative complications.
Patent CN115530878A discloses a puncture-section positioning method and an ultrasonic imaging system based on ultrasonic imaging. A first transducer array is controlled to transmit a first ultrasonic wave to the prostate and receive its echo signals, generating multiple frames of first ultrasonic images. According to the similarity among these frames, at least three uniformly distributed target ultrasonic images are determined, and at least two of them are used as ultrasonic images to be matched. A real-time ultrasonic image of the prostate is matched against each ultrasonic image to be matched, a second ultrasonic image matching each is determined in the real-time ultrasonic images, and the longitudinal section corresponding to the second ultrasonic image is taken as the target puncture section, thereby guiding puncture of that section and ensuring uniform puncture of the prostate. However, during puncture positioning, the smooth surface of the puncture needle tip produces strong specular reflection, so that ultrasonic waves cannot effectively return to the probe and the needle tip becomes blurred in the image, especially at the critical moment of puncture positioning, which reduces puncture positioning efficiency.
Disclosure of Invention
The invention aims to solve the problem that the smooth surface of the puncture needle tip produces strong specular reflection, so that ultrasonic waves cannot effectively return to the probe and the needle tip becomes blurred in the image, reducing puncture positioning efficiency precisely at the critical moment of positioning, and provides a puncture positioning method and device based on ultrasonic imaging.
In a first aspect of the present invention, there is first provided a puncture positioning method based on ultrasonic imaging, the method comprising:
synchronously acquiring an ultrasonic image and a photoacoustic image when the puncture needle moves through the ultrasonic probe and the photoacoustic probe to obtain an ultrasonic image set and a photoacoustic image set;
Respectively extracting features of an ultrasonic image and a photoacoustic image to obtain a first feature and a second feature, wherein the ultrasonic image is any ultrasonic image in the ultrasonic image set, and the photoacoustic image is a photoacoustic image acquired at the same time of the ultrasonic image;
The first feature and the second feature are aligned and then fused to obtain a target feature, and the target feature is marked on the ultrasonic image to obtain a target image;
And constructing a target three-dimensional space according to all target images when the puncture needle moves, and identifying the position of the puncture needle in the target three-dimensional space to obtain a puncture positioning position.
Optionally, performing feature extraction on the ultrasound image and the photoacoustic image to obtain the first feature and the second feature includes:
respectively carrying out image preprocessing on the ultrasonic image and the photoacoustic image to obtain a first ultrasonic image and a first photoacoustic image;
Image segmentation is carried out on the first ultrasonic image and the first photoacoustic image through an edge detection algorithm, so that a second ultrasonic image and a second photoacoustic image are obtained;
Respectively extracting geometric features of the second ultrasonic image and the second photoacoustic image to obtain a first geometric feature and a second geometric feature;
substituting the second ultrasonic image and the second photoacoustic image into a convolutional neural network model to obtain a first depth feature and a second depth feature;
The first geometric feature and the first depth feature are noted as first features, and the second geometric feature and the second depth feature are noted as second features.
Optionally, performing spatial fusion on the first feature and the second feature after alignment to obtain a target feature includes:
acquiring a first geometric feature and a second geometric feature in the first feature and the second feature, and respectively extracting the needle tip position of the puncture needle in the first geometric feature and the second geometric feature to obtain a first puncture position and a second puncture position;
Calculating the Euclidean distance between the first puncture position and the second puncture position to obtain a target distance; if the target distance is larger than a preset distance, performing a similarity transformation on the first puncture position and the second puncture position until the target distance is smaller than or equal to the preset distance, and performing geometric alignment on the ultrasonic image and the photoacoustic image according to the adjusted distance to obtain a first aligned ultrasonic image and a first aligned photoacoustic image;
carrying out local binary processing on the first aligned ultrasonic image to obtain a first texture feature, and extracting textures in the first aligned photoacoustic image through a gray level co-occurrence matrix to obtain a second texture feature;
mapping the first texture features and the second texture features to the same feature space through principal component analysis to perform space alignment to obtain a second aligned ultrasonic image and a second aligned photoacoustic image;
And carrying out weighted fusion on the first characteristic and the second characteristic corresponding to the second aligned ultrasonic image and the second aligned photoacoustic image to obtain an initial target characteristic.
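The steps above end in a weighted fusion of the two aligned feature sets. As a minimal illustrative sketch (not taken from the patent — the 0.6/0.4 weights and the feature vectors are hypothetical), this final step might look like:

```python
import numpy as np

def fuse(feat_us, feat_pa, w_us=0.6, w_pa=0.4):
    """Weighted fusion of aligned ultrasound and photoacoustic feature vectors.
    The 0.6/0.4 weights are illustrative; the patent does not fix values."""
    assert abs(w_us + w_pa - 1.0) < 1e-9  # keep the weights normalized
    return w_us * np.asarray(feat_us, float) + w_pa * np.asarray(feat_pa, float)

# Hypothetical 3-dimensional aligned feature vectors from the two modalities.
target = fuse([1.0, 0.0, 2.0], [0.0, 1.0, 2.0])
```

Components on which the modalities agree (the last entry) pass through unchanged, while disagreements are blended according to the weights.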
Optionally, constructing the target three-dimensional space according to all target images when the puncture needle moves includes:
Performing inter-frame alignment on all target images to obtain an aligned image set, and sequentially mapping each aligned image to a three-dimensional space according to a time sequence aiming at the aligned images in the aligned image set to obtain an initial three-dimensional space;
And filling unmapped voxels in the initial three-dimensional space by cubic interpolation to obtain a target three-dimensional space.
Optionally, mapping each aligned image to the three-dimensional space in sequence according to the time sequence to obtain the initial three-dimensional space includes:
projecting each alignment image into a three-dimensional coordinate system through geometric information of ultrasonic equipment, and mapping pixel points of a two-dimensional image into voxels of a three-dimensional space;
the space positions of the puncture needle at different time points are gradually accumulated to obtain an initial three-dimensional space.
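A minimal sketch of this pixel-to-voxel mapping and position accumulation, assuming (purely for illustration) a fixed in-plane pixel spacing and a probe that advances one slice per frame; all spacings and coordinates are hypothetical:

```python
import numpy as np

def pixel_to_voxel(u, v, frame_idx, pixel_spacing=0.2, slice_spacing=0.5):
    """Map a 2-D pixel (u, v) in frame `frame_idx` to a 3-D point (mm), assuming
    the probe sweeps along z at a fixed slice spacing; spacings are hypothetical."""
    return np.array([u * pixel_spacing, v * pixel_spacing, frame_idx * slice_spacing])

# Accumulate the needle-tip position over successive frames into a 3-D trajectory.
tips_2d = [(50, 20), (52, 24), (54, 28)]  # hypothetical per-frame tip pixels
trajectory = np.array([pixel_to_voxel(u, v, i) for i, (u, v) in enumerate(tips_2d)])
```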
In a second aspect of the present invention, there is provided a puncture positioning device based on ultrasonic imaging, comprising:
The image acquisition module is used for synchronously acquiring an ultrasonic image and a photoacoustic image when the puncture needle moves through the ultrasonic probe and the photoacoustic probe to obtain an ultrasonic image set and a photoacoustic image set;
The feature extraction module is used for respectively extracting features from the ultrasonic image and the photoacoustic image to obtain a first feature and a second feature, wherein the ultrasonic image is any ultrasonic image in the ultrasonic image set and the photoacoustic image is the photoacoustic image acquired at the same moment as the ultrasonic image;
The feature fusion module is used for fusing the first features and the second features after being aligned to obtain target features, and labeling the target features on the ultrasonic image to obtain a target image;
The three-dimensional construction module is used for constructing a target three-dimensional space according to all target images when the puncture needle moves, and identifying the position of the puncture needle in the target three-dimensional space to obtain a puncture positioning position.
Optionally, the feature extraction module includes:
the image preprocessing module is used for respectively carrying out image preprocessing on the ultrasonic image and the photoacoustic image to obtain a first ultrasonic image and a first photoacoustic image;
The image segmentation module is used for respectively carrying out image segmentation on the first ultrasonic image and the first photoacoustic image through an edge detection algorithm to obtain a second ultrasonic image and a second photoacoustic image;
the geometrical feature extraction module is used for respectively carrying out geometrical feature extraction on the second ultrasonic image and the second photoacoustic image to obtain a first geometrical feature and a second geometrical feature;
The depth feature extraction module is used for substituting the second ultrasonic image and the second photoacoustic image into a convolutional neural network model to obtain a first depth feature and a second depth feature;
and the feature determining module is used for marking the first geometric feature and the first depth feature as first features and marking the second geometric feature and the second depth feature as second features.
Optionally, the feature fusion module includes:
the puncture position determining module is used for acquiring a first geometric feature and a second geometric feature in the first feature and the second feature, and extracting the needle point position of the puncture needle in the first geometric feature and the second geometric feature to obtain a first puncture position and a second puncture position;
The geometric alignment module is used for calculating the Euclidean distance between the first puncture position and the second puncture position to obtain a target distance; if the target distance is larger than a preset distance, performing a similarity transformation on the first puncture position and the second puncture position until the target distance is smaller than or equal to the preset distance, and performing geometric alignment on the ultrasonic image and the photoacoustic image according to the adjusted distance to obtain a first aligned ultrasonic image and a first aligned photoacoustic image;
the texture feature extraction module is used for carrying out local binary processing on the first aligned ultrasonic image to obtain a first texture feature, and extracting textures in the first aligned photoacoustic image through the gray level co-occurrence matrix to obtain a second texture feature;
the space alignment module is used for mapping the first texture features and the second texture features to the same feature space through principal component analysis to perform space alignment to obtain a second aligned ultrasonic image and a second aligned photoacoustic image;
And the weighted fusion module is used for carrying out weighted fusion on the first characteristic and the second characteristic corresponding to the second aligned ultrasonic image and the second aligned photoacoustic image to obtain an initial target characteristic.
Optionally, the three-dimensional building module includes:
the image mapping module is used for carrying out inter-frame alignment on all target images to obtain an aligned image set, and mapping each aligned image to a three-dimensional space according to a time sequence to obtain an initial three-dimensional space aiming at the aligned images in the aligned image set;
And the voxel filling module is used for filling unmapped voxels in the initial three-dimensional space through cubic interpolation to obtain a target three-dimensional space.
Optionally, the image mapping module includes:
The image projection module is used for projecting each alignment image into a three-dimensional coordinate system through geometric information of the ultrasonic equipment, and mapping pixel points of the two-dimensional image into voxels of a three-dimensional space;
The position accumulation module is used for gradually accumulating the spatial positions of the puncture needle at different time points to obtain an initial three-dimensional space.
The invention has the beneficial effects that:
The invention provides a puncture positioning method based on ultrasonic imaging, which comprises: synchronously acquiring an ultrasonic image and a photoacoustic image while the puncture needle moves, through an ultrasonic probe and a photoacoustic probe, to obtain an ultrasonic image set and a photoacoustic image set; respectively extracting features from the ultrasonic image and the photoacoustic image to obtain a first feature and a second feature, wherein the ultrasonic image is any ultrasonic image in the ultrasonic image set and the photoacoustic image is the photoacoustic image acquired at the same moment; aligning and then fusing the first feature and the second feature to obtain a target feature, and labeling the target feature on the ultrasonic image to obtain a target image; and constructing a target three-dimensional space from all target images acquired during needle movement, and identifying the position of the puncture needle in the target three-dimensional space to obtain the puncture positioning position. By acquiring the ultrasonic image and the photoacoustic image simultaneously and extracting and fusing features from images captured at the same moment by the two imaging technologies, a target image containing more information can be generated, improving positioning accuracy and reliability. By analyzing all target images during needle movement, the needle tip remains clear in the image; and by constructing a three-dimensional space, the motion track and position of the needle tip can be displayed more intuitively, improving puncture positioning efficiency.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a flow chart of a method for locating a puncture based on ultrasonic imaging according to an embodiment of the present invention;
FIG. 2 is a flow chart of another ultrasound imaging-based puncture positioning method according to an embodiment of the present invention;
Fig. 3 provides a schematic structural diagram of a puncture positioning device based on ultrasonic imaging.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings; the embodiments described are only some, not all, embodiments of the present invention. The term "and/or" merely describes an association between objects and indicates that three relations may exist: for example, "A and/or B" may mean that A exists alone, that A and B exist together, or that B exists alone. Furthermore, descriptions such as "first" and "second" are for descriptive purposes only and are not to be construed as indicating or implying relative importance or an order among the indicated technical features; a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but only on the basis that the combination can be realized by those skilled in the art; when the combined solutions are contradictory or cannot be realized, the combination should be considered absent and outside the scope of protection claimed in the present invention.
All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment of the invention provides a puncture positioning method based on ultrasonic imaging. Referring to fig. 1, fig. 1 is a flowchart of a puncture positioning method based on ultrasonic imaging according to an embodiment of the present invention. The method comprises the following steps:
S101, synchronously acquiring an ultrasonic image and a photoacoustic image when a puncture needle moves through an ultrasonic probe and a photoacoustic probe to obtain an ultrasonic image set and a photoacoustic image set;
s102, respectively carrying out feature extraction on an ultrasonic image and a photoacoustic image to obtain a first feature and a second feature;
S103, aligning the first feature and the second feature, then fusing to obtain a target feature, and marking the target feature on the ultrasonic image to obtain a target image;
S104, constructing a target three-dimensional space according to all target images when the puncture needle moves, and identifying the position of the puncture needle in the target three-dimensional space to obtain a puncture positioning position.
The ultrasonic image is any ultrasonic image in the ultrasonic image set, and the photoacoustic image is a photoacoustic image acquired at the same time of the ultrasonic image.
According to the puncture positioning method based on ultrasonic imaging, the ultrasonic image and the photoacoustic image are acquired simultaneously, and features are extracted from and fused across images captured at the same moment by the two imaging technologies, so that a target image containing more information can be generated and positioning accuracy and reliability are improved. By analyzing all target images during needle movement, the needle tip remains clear in the image; by constructing a three-dimensional space, the motion track and position of the needle tip can be displayed more intuitively, improving puncture positioning efficiency.
In one implementation, the target features are marked on the ultrasonic image, so that the structure for clearly marking the puncture needle and surrounding tissues is arranged on the target image, the visual effect of the puncture needle in the image can be improved, the positioning is more accurate, the target features of the puncture needle in the three-dimensional space are identified, and specific position information is obtained.
In one implementation, the ultrasound probe may be, for example, a GE Healthcare L-series, Philips L12-3, Siemens 14L5 or Mindray L-4s probe, and the photoacoustic probe may be an LZ250, MX550D, MSOT Acuity Echo, MX201, etc.
In one implementation, the ultrasound image provides information on tissue structure, but the puncture needle is hard to display clearly because of specular reflection; the photoacoustic image enhances visualization of the needle because the photoacoustic effect yields better needle contrast. Combining the two significantly improves the clarity and visibility of the needle in imaging and helps doctors position it more accurately.
In one implementation, the motion of the puncture needle is monitored in real time, a three-dimensional space is constructed according to all target images, the dynamic position of the needle in the body can be accurately tracked, the position and the path of the needle can be accurately mastered in a complex anatomical structure, the operation strategy is adjusted in real time, errors are reduced, and the operation success rate is improved.
In one implementation, the planning and adjustment of the penetration path may be optimized using a target three-dimensional space construct. The method not only can optimize the puncture path before operation, but also can adjust the path in real time in the operation process, so that the needle head can accurately reach the target position according to the preset path.
In one embodiment, referring to fig. 2, step S102 specifically includes:
S1021, respectively carrying out image preprocessing on the ultrasonic image and the photoacoustic image to obtain a first ultrasonic image and a first photoacoustic image;
S1022, respectively carrying out image segmentation on the first ultrasonic image and the first photoacoustic image through an edge detection algorithm to obtain a second ultrasonic image and a second photoacoustic image;
s1023, respectively carrying out geometric feature extraction on the second ultrasonic image and the second photoacoustic image to obtain a first geometric feature and a second geometric feature;
S1024, substituting the second ultrasonic image and the second photoacoustic image into a convolutional neural network model to obtain a first depth feature and a second depth feature;
S1025, the first geometric feature and the first depth feature are marked as first features, and the second geometric feature and the second depth feature are marked as second features.
In one implementation, the ultrasonic image is preprocessed by removing speckle noise with Gaussian filtering and then applying contrast enhancement to increase the contrast between the puncture needle and background tissue, making the needle more prominent in the image; the photoacoustic image is preprocessed by removing background noise with Wiener filtering and then enhancing contrast with histogram equalization, ensuring that the puncture needle and surrounding tissues are presented more clearly.
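A minimal sketch of this preprocessing, using a Gaussian filter plus contrast stretching for the ultrasound image, and a simple local-mean denoiser (standing in for the Wiener filter) plus histogram equalization for the photoacoustic image; the filter sizes and bin count are illustrative, not values from the patent:

```python
import numpy as np
from scipy import ndimage

def preprocess_ultrasound(img, sigma=1.0):
    """Suppress speckle noise with a Gaussian filter, then stretch contrast."""
    smoothed = ndimage.gaussian_filter(img.astype(np.float64), sigma=sigma)
    lo, hi = smoothed.min(), smoothed.max()
    return (smoothed - lo) / (hi - lo + 1e-12)  # contrast-stretched to [0, 1]

def preprocess_photoacoustic(img, bins=256):
    """Denoise with a local mean (simple stand-in for Wiener filtering),
    then equalize the intensity histogram."""
    denoised = ndimage.uniform_filter(img.astype(np.float64), size=3)
    hist, edges = np.histogram(denoised, bins=bins)
    cdf = hist.cumsum() / hist.sum()          # cumulative distribution
    idx = np.digitize(denoised, edges[1:-1])  # bin index per pixel
    return cdf[idx]                           # equalized intensities in [0, 1]

rng = np.random.default_rng(0)
us, pa = rng.random((64, 64)), rng.random((64, 64))
us_p, pa_p = preprocess_ultrasound(us), preprocess_photoacoustic(pa)
```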
In one implementation, important boundaries and contours in the image are identified by an edge detection algorithm, making the boundaries between the puncture needle and surrounding tissues more distinct. Geometric feature extraction helps capture structural information such as the shape, size and boundaries of the target region, while the depth features extracted by a convolutional neural network (CNN) capture complex nonlinear patterns from local details and global structure; using geometric and depth features together strengthens the model's feature extraction for the puncture needle.
In one implementation, the second ultrasonic image and the second photoacoustic image are images in which the puncture needle region has been separated from the background and only the needle region is retained; geometric feature extraction yields the length, width, shape, position and similar information of the puncture needle; and the convolutional neural network model is a CNN with 3×3 and 5×5 convolution kernels.
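As a toy stand-in for the CNN depth features described here — using one fixed 3×3 and one fixed 5×5 kernel rather than learned ones, so this is only a sketch of the convolve-rectify-pool mechanism, not the patent's model:

```python
import numpy as np
from scipy.signal import convolve2d

def depth_features(img, kernels):
    """Apply each kernel, rectify (ReLU), and global-average-pool to one value."""
    feats = []
    for k in kernels:
        resp = convolve2d(img, k, mode="valid")
        feats.append(np.maximum(resp, 0.0).mean())  # ReLU + global average pool
    return np.array(feats)

# One 3x3 and one 5x5 kernel, mirroring the two kernel sizes named in the text.
k3 = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)  # Laplacian edge kernel
k5 = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0      # Gaussian smoothing kernel

img = np.zeros((32, 32))
img[10:22, 14:18] = 1.0  # bright needle-like bar on a dark background
f = depth_features(img, [k3, k5])
```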
In one embodiment, spatially fusing the first feature and the second feature after alignment to obtain the target feature includes:
Acquiring a first geometric feature and a second geometric feature in the first feature and the second feature, and respectively extracting the needle point position of the puncture needle in the first geometric feature and the second geometric feature to obtain a first puncture position and a second puncture position;
calculating the Euclidean distance between the first puncture position and the second puncture position to obtain a target distance; if the target distance is larger than a preset distance, performing a similarity transformation on the first puncture position and the second puncture position until the target distance is smaller than or equal to the preset distance, and performing geometric alignment on the ultrasonic image and the photoacoustic image according to the adjusted distance to obtain a first aligned ultrasonic image and a first aligned photoacoustic image;
Carrying out local binary processing on the first aligned ultrasonic image to obtain a first texture feature, and extracting textures in the first aligned photoacoustic image through a gray level co-occurrence matrix to obtain a second texture feature;
Mapping the first texture features and the second texture features to the same feature space through principal component analysis to perform space alignment to obtain a second aligned ultrasonic image and a second aligned photoacoustic image;
And carrying out weighted fusion on the first characteristic and the second characteristic corresponding to the second aligned ultrasonic image and the second aligned photoacoustic image to obtain an initial target characteristic.
In one implementation, the preset distance is set by a technician. By extracting the needle-tip position of the puncture needle in the ultrasonic image and in the photoacoustic image respectively and performing geometric alignment through a similarity transformation, the two modal images can be matched accurately in space. A preset alignment threshold is applied to the Euclidean distance between the needle-tip positions in the two modal images, and the images are adjusted by similarity transformations (translation, scaling, rotation and the like) until the alignment precision is ensured, so that the multi-modal images can be compared and analyzed in the same space. Texture features are then extracted from the ultrasonic image and the photoacoustic image through local binary processing and the gray-level co-occurrence matrix respectively, capturing rich detail in the images and improving the fusion effect and the accuracy of the overall analysis.
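A minimal sketch of the needle-tip alignment step, reduced to the translation component of a similarity transform (rotation and scale omitted for brevity); the coordinates and threshold are hypothetical:

```python
import numpy as np

def align_tips(p_us, p_pa, max_dist=2.0):
    """If the ultrasound and photoacoustic tip positions disagree by more than
    max_dist, translate the photoacoustic tip onto the ultrasound tip.
    Returns the applied shift and the resulting Euclidean distance."""
    p_us, p_pa = np.asarray(p_us, float), np.asarray(p_pa, float)
    d = np.linalg.norm(p_us - p_pa)  # Euclidean target distance
    if d <= max_dist:
        return np.zeros(2), d        # already within the preset distance
    shift = p_us - p_pa              # translation component of the transform
    return shift, np.linalg.norm(p_us - (p_pa + shift))

shift, new_d = align_tips((40.0, 25.0), (43.0, 29.0), max_dist=2.0)
```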
In one implementation, the texture features of the ultrasonic and photoacoustic images are mapped to the same feature space through principal component analysis, achieving unified feature dimensions and alignment; this effectively reduces the data dimensionality while retaining the main information, improving image processing speed and efficiency.
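A minimal sketch of this PCA mapping, assuming (hypothetically) 8-dimensional texture descriptors from each modality; fitting one principal-component basis on the stacked features places both modalities in the same low-dimensional space:

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project rows of X onto the top principal components of X."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data; rows of Vt are the principal axes.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
tex_us = rng.random((20, 8))  # hypothetical LBP-derived descriptors
tex_pa = rng.random((20, 8))  # hypothetical GLCM-derived descriptors

# Fit one basis on the stacked features so both modalities share the space.
proj = pca_project(np.vstack([tex_us, tex_pa]), n_components=2)
us_2d, pa_2d = proj[:20], proj[20:]
```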
In one implementation, the characteristics of the aligned ultrasonic image and the aligned photoacoustic image are weighted and fused, so that information of two modes can be comprehensively utilized, useful characteristics of different image sources are combined, more comprehensive image information can be obtained, and the robustness and the accuracy of a diagnosis model can be improved.
In one implementation, local binary processing is suitable for enhancing contrast of local areas of the image so that texture information is more prominent, and the gray level co-occurrence matrix can calculate texture statistical features of the image and extract subtle changes in the image. The combination of the two can better capture the microstructure differences in the ultrasound and photoacoustic images.
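A minimal sketch of the two texture descriptors named here — an 8-neighbour local binary pattern and the contrast statistic of a gray-level co-occurrence matrix — implemented directly in NumPy on a random test image (quantization levels and offsets are illustrative):

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary pattern codes for the interior pixels."""
    c = img[1:-1, 1:-1]
    neigh = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
             img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, n in enumerate(neigh):
        code |= (n >= c).astype(np.int32) << bit  # one bit per neighbour
    return code

def glcm_contrast(img, levels=8):
    """Contrast of the grey-level co-occurrence matrix for offset (0, 1)."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    return float(((i - j) ** 2 * glcm).sum())

rng = np.random.default_rng(2)
img = rng.random((32, 32))
codes = lbp_codes(img)
contrast = glcm_contrast(img)
```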
In one implementation, the final initial target feature is more reliable and accurate by combining the geometric alignment and feature alignment and weighted fusion. This high quality image fusion can provide a physician with more rich diagnostic information, helping to make more informed medical decisions, particularly in interventional procedures, tumor localization and tissue analysis.
In one embodiment, constructing the target three-dimensional space from all of the target images as the needle is moved comprises:
Performing inter-frame alignment on all target images to obtain an aligned image set, and sequentially mapping each aligned image to a three-dimensional space according to a time sequence aiming at aligned images in the aligned image set to obtain an initial three-dimensional space;
Filling unmapped voxels in the initial three-dimensional space by cubic interpolation to obtain a target three-dimensional space.
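The interpolation step can be illustrated in one dimension with a Catmull-Rom cubic spline, a common choice for cubic interpolation; a full voxel fill would apply this along each axis in turn (tricubic), which is omitted here for brevity.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic (Catmull-Rom) interpolation between samples p1 and p2 at
    parameter t in [0, 1], using p0 and p3 as outer support points."""
    return 0.5 * (
        2 * p1
        + (-p0 + p2) * t
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
        + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3
    )
```

The spline passes exactly through p1 (at t = 0) and p2 (at t = 1), so filled voxels stay consistent with the mapped ones around them.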
In one implementation, inter-frame alignment ensures that each image in the time series occupies a consistent spatial position under the same reference frame, eliminating image misalignment caused by motion, posture changes or other factors. Inter-frame alignment can be achieved with feature-point-based, optical-flow-based, global-transformation-based or deep-learning-based methods: for example, feature points are detected in each frame, and the inter-frame translation, rotation or scaling transformation matrix is computed from the positional relationships between matched feature points.
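As a minimal sketch of alignment from matched feature points, a pure translation between frames can be estimated in closed form as the mean point-wise offset; estimating rotation and scale as well (e.g. via a Procrustes/Umeyama fit) is more involved and is not shown.

```python
def estimate_translation(pts_a, pts_b):
    """Least-squares translation aligning matched 2-D feature points
    pts_b onto pts_a: the mean of the point-wise differences."""
    if not pts_a or len(pts_a) != len(pts_b):
        raise ValueError("need equally many matched points")
    n = len(pts_a)
    dx = sum(a[0] - b[0] for a, b in zip(pts_a, pts_b)) / n
    dy = sum(a[1] - b[1] for a, b in zip(pts_a, pts_b)) / n
    return dx, dy
```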
In one implementation, mapping the aligned images of the time series into three-dimensional space generates a three-dimensional structure containing complete spatial information. Compared with a single-frame image, this structure displays the spatial form more intuitively and comprehensively, helping doctors see the position of the puncture needle more clearly.
In one implementation, cubic interpolation is a high-precision interpolation method. Interpolation filling of the unmapped voxels in the initial three-dimensional space produces a smoother, more continuous three-dimensional model, avoiding spatial discontinuities and data loss caused by insufficient image frames or gaps, and improving the fineness and accuracy of the three-dimensional reconstruction. Incomplete voxel data is frequently encountered when generating three-dimensional medical images; filling the blank areas by cubic interpolation ensures the integrity and smoothness of the three-dimensional model, and is particularly suitable for fine analysis of tissue edges or complex structures.
In one embodiment, mapping each of the aligned images into three-dimensional space in sequence in time series to obtain an initial three-dimensional space includes:
projecting each aligned image into a three-dimensional coordinate system through the geometric information of the ultrasonic equipment, and mapping the pixel points of the two-dimensional image into voxels of the three-dimensional space;
the space positions of the puncture needle at different time points are gradually accumulated to obtain an initial three-dimensional space.
In one implementation, the geometric information of the ultrasound device includes the probe position, angle and scan range. The origin of the three-dimensional coordinate system is the position of the ultrasound probe, the z-axis runs along the scan direction of the probe, and the x-axis and y-axis correspond to the horizontal and vertical directions of the image. For each pixel of the two-dimensional image, its position on the image plane is calculated and then mapped to a voxel position in three-dimensional space. As a concrete example, suppose the image resolution is 0.5 mm/pixel, the physical size of the image is 100 mm × 100 mm, the image plane lies in the z = 0 plane, and the image is directly in front of the probe. If the two-dimensional coordinates of pixel (i, j) are (x2D, y2D), its three-dimensional coordinates (x3D, y3D, z3D) are given by x3D = x2D × 0.5 mm and y3D = y2D × 0.5 mm, where z3D is the depth information at pixel (i, j).
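The pixel-to-voxel mapping in the example above (0.5 mm/pixel, image plane directly in front of the probe) can be written directly; the function name and argument layout are illustrative assumptions.

```python
def pixel_to_voxel(i, j, depth_mm, resolution_mm=0.5):
    """Map a 2-D pixel (i, j) to physical 3-D coordinates in millimetres.

    Assumes the image plane sits directly in front of the probe, with
    x/y along the image axes and z along the scan (depth) direction."""
    return (i * resolution_mm, j * resolution_mm, depth_mm)
```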
In one implementation, the initial three-dimensional space is obtained by gradually accumulating the spatial positions of the puncture needle at different time points; specifically, the three-dimensional coordinate points of the puncture needle at all time points are accumulated step by step. The coordinate points of each time point are added to one overall data set to form the complete movement trajectory of the puncture needle, and a three-dimensional space model is generated from the accumulated puncture-needle coordinate points as point-cloud data.
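The step-by-step accumulation of needle-tip coordinates into a point cloud can be sketched as below; the duplicate-dropping detail is an illustrative assumption, not stated in the source.

```python
def accumulate_trajectory(timed_tips):
    """Build a time-ordered needle-tip point cloud from (t, (x, y, z))
    samples, dropping consecutive duplicates so a stationary needle does
    not inflate the cloud."""
    cloud = []
    for _, tip in sorted(timed_tips, key=lambda sample: sample[0]):
        if not cloud or cloud[-1] != tip:
            cloud.append(tip)
    return cloud
```

The resulting point list would feed a point-cloud renderer or surface reconstruction to display the needle trajectory in the target three-dimensional space.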
Based on the same inventive concept, the embodiment of the invention further provides a puncture positioning device based on ultrasonic imaging. Referring to fig. 3, fig. 3 is a schematic structural diagram of a puncture positioning device based on ultrasonic imaging according to an embodiment of the present invention, which includes:
The image acquisition module is used for synchronously acquiring, through the ultrasonic probe and the photoacoustic probe, an ultrasonic image and a photoacoustic image while the puncture needle moves, to obtain an ultrasonic image set and a photoacoustic image set;
The feature extraction module is used for respectively performing feature extraction on the ultrasonic image and the photoacoustic image to obtain a first feature and a second feature, wherein the ultrasonic image is any ultrasonic image in the ultrasonic image set, and the photoacoustic image is the photoacoustic image acquired at the same time as the ultrasonic image;
the feature fusion module is used for fusing the first features and the second features after aligning to obtain target features, and labeling the target features on the ultrasonic image to obtain a target image;
The three-dimensional construction module is used for constructing a target three-dimensional space according to all target images when the puncture needle moves, and identifying the position of the puncture needle in the target three-dimensional space to obtain a puncture positioning position.
According to the puncture positioning device based on ultrasonic imaging provided by the embodiment of the invention, by simultaneously acquiring the ultrasonic image and the photoacoustic image and performing feature extraction and fusion on images captured at the same moment with the two imaging technologies, a target image containing more information can be generated, improving the accuracy and reliability of positioning. By analyzing all target images as the puncture needle moves, the needle remains clear in the image, and constructing a three-dimensional space displays the movement trajectory and position of the needle more intuitively, improving puncture positioning efficiency.
In one embodiment, the feature extraction module comprises:
The image preprocessing module is used for respectively carrying out image preprocessing on the ultrasonic image and the photoacoustic image to obtain a first ultrasonic image and a first photoacoustic image;
The image segmentation module is used for respectively carrying out image segmentation on the first ultrasonic image and the first photoacoustic image through an edge detection algorithm to obtain a second ultrasonic image and a second photoacoustic image;
the geometrical feature extraction module is used for respectively carrying out geometrical feature extraction on the second ultrasonic image and the second photoacoustic image to obtain a first geometrical feature and a second geometrical feature;
The depth feature extraction module is used for substituting the second ultrasonic image and the second photoacoustic image into the convolutional neural network model to obtain a first depth feature and a second depth feature;
The feature determining module is used for recording the first geometric feature and the first depth feature as first features and recording the second geometric feature and the second depth feature as second features.
In one embodiment, the feature fusion module comprises:
The puncture position determining module is used for acquiring a first geometric feature and a second geometric feature in the first feature and the second feature, and extracting the needle point position of the puncture needle in the first geometric feature and the second geometric feature to obtain a first puncture position and a second puncture position;
the geometric alignment module is used for calculating Euclidean distance between the first puncture position and the second puncture position to obtain a target distance, if the target distance is larger than a preset distance, performing similar transformation on the first puncture position and the second puncture position until the target distance is smaller than or equal to the preset distance, and performing geometric alignment on the ultrasonic image and the photoacoustic image according to the adjusted distance to obtain a first aligned ultrasonic image and a first aligned photoacoustic image;
The texture feature extraction module is used for carrying out local binary processing on the first aligned ultrasonic image to obtain a first texture feature, and extracting textures in the first aligned photoacoustic image through the gray level co-occurrence matrix to obtain a second texture feature;
the space alignment module is used for mapping the first texture features and the second texture features to the same feature space through principal component analysis to perform space alignment to obtain a second aligned ultrasonic image and a second aligned photoacoustic image;
And the weighted fusion module is used for carrying out weighted fusion on the first characteristic and the second characteristic corresponding to the second aligned ultrasonic image and the second aligned photoacoustic image to obtain an initial target characteristic.
In one embodiment, a three-dimensional build module includes:
the image mapping module is used for carrying out inter-frame alignment on all the target images to obtain an aligned image set, and for aligned images in the aligned image set, mapping each aligned image to a three-dimensional space according to a time sequence to obtain an initial three-dimensional space;
And the voxel filling module is used for filling unmapped voxels in the initial three-dimensional space through cubic interpolation to obtain a target three-dimensional space.
In one embodiment, the image mapping module includes:
The image projection module is used for projecting each aligned image into a three-dimensional coordinate system through the geometric information of the ultrasonic equipment, and mapping the pixel points of the two-dimensional image into voxels of the three-dimensional space;
The position accumulation module is used for gradually accumulating the spatial positions of the puncture needle at different time points to obtain an initial three-dimensional space.
The foregoing describes one embodiment of the present invention in detail, but the disclosure is only a preferred embodiment of the present invention and should not be construed as limiting the scope of the invention. All equivalent changes and modifications within the scope of the present invention are intended to be covered by the present invention.

Claims (3)

1. A puncture positioning device based on ultrasonic imaging, characterized in that the device comprises:
an image acquisition module, used to synchronously acquire, through an ultrasound probe and a photoacoustic probe, ultrasound images and photoacoustic images while the puncture needle moves, to obtain an ultrasound image set and a photoacoustic image set;
a feature extraction module, used to extract features from an ultrasound image and a photoacoustic image to obtain a first feature and a second feature respectively; the ultrasound image is any ultrasound image in the ultrasound image set; the photoacoustic image is the photoacoustic image acquired at the same time as the ultrasound image;
a feature fusion module, configured to align and fuse the first feature and the second feature to obtain a target feature, and mark the target feature on the ultrasound image to obtain a target image;
a three-dimensional construction module, used to construct a target three-dimensional space according to all target images while the puncture needle moves, and identify the position of the puncture needle in the target three-dimensional space to obtain a puncture positioning position;
wherein the feature extraction module comprises:
an image preprocessing module, used to perform image preprocessing on the ultrasound image and the photoacoustic image to obtain a first ultrasound image and a first photoacoustic image respectively;
an image segmentation module, configured to perform image segmentation on the first ultrasound image and the first photoacoustic image respectively by using an edge detection algorithm, to obtain a second ultrasound image and a second photoacoustic image;
a geometric feature extraction module, used to extract geometric features from the second ultrasound image and the second photoacoustic image to obtain a first geometric feature and a second geometric feature;
a depth feature extraction module, used to respectively substitute the second ultrasound image and the second photoacoustic image into a convolutional neural network model to obtain a first depth feature and a second depth feature;
a feature determination module, configured to record the first geometric feature and the first depth feature as the first feature, and record the second geometric feature and the second depth feature as the second feature;
and wherein the feature fusion module comprises:
a puncture position determination module, used to acquire the first geometric feature and the second geometric feature from the first feature and the second feature, and respectively extract the needle-tip position of the puncture needle from the first geometric feature and the second geometric feature to obtain a first puncture position and a second puncture position;
a geometric alignment module, configured to calculate the Euclidean distance between the first puncture position and the second puncture position to obtain a target distance, and, if the target distance is greater than a preset distance, perform a similarity transformation on the first puncture position and the second puncture position until the target distance is less than or equal to the preset distance, and perform geometric alignment on the ultrasound image and the photoacoustic image according to the adjusted distance to obtain a first aligned ultrasound image and a first aligned photoacoustic image;
a texture feature extraction module, configured to perform local binary processing on the first aligned ultrasound image to obtain a first texture feature, and extract the texture in the first aligned photoacoustic image through a gray-level co-occurrence matrix to obtain a second texture feature;
a spatial alignment module, used to map the first texture feature and the second texture feature to the same feature space through principal component analysis for spatial alignment, to obtain a second aligned ultrasound image and a second aligned photoacoustic image;
a weighted fusion module, used to perform weighted fusion on the first feature and the second feature corresponding to the second aligned ultrasound image and the second aligned photoacoustic image, to obtain an initial target feature.

2. The puncture positioning device based on ultrasonic imaging according to claim 1, characterized in that the three-dimensional construction module comprises:
an image mapping module, used to perform inter-frame alignment on all target images to obtain an aligned image set, and, for the aligned images in the aligned image set, sequentially map each aligned image into three-dimensional space in time order to obtain an initial three-dimensional space;
a voxel filling module, used to fill the unmapped voxels in the initial three-dimensional space by cubic interpolation to obtain the target three-dimensional space.

3. The puncture positioning device based on ultrasonic imaging according to claim 2, characterized in that the image mapping module comprises:
an image projection module, used to project each aligned image into a three-dimensional coordinate system by using the geometric information of the ultrasound device, and map the pixels of the two-dimensional image into voxels in three-dimensional space;
a position accumulation module, used to gradually accumulate the spatial positions of the puncture needle at different time points to obtain the initial three-dimensional space.
CN202411444375.7A, filed 2024-10-16: Puncture positioning method and device based on ultrasonic imaging (Active)

Publications (2)

CN118948401A, published 2024-11-15
CN118948401B, published 2025-01-03





Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
