Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, there is provided a method for determining the position of a sheath end point. It should be noted that the steps shown in the flowcharts of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that shown or described herein.
Fig. 1 is a flowchart of a method for determining the position of a distal point of a sheath according to an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step S102, acquiring, from at least two angles, initial two-dimensional images of the endoscope during the process of extending into the physiological channel.
In the technical solution provided in the step S102 of the present invention, the physiological channel may be a working channel of the endoscope. If the site to be examined by the endoscope is the bronchi of the lung, the physiological channel may be the airways of the lung, which may also be referred to as the bronchial channel. It should be noted that the above-mentioned physiological channel and endoscope are only illustrative and are not particularly limited herein, and may be determined according to the actual position of the lesion and the operation performed.
Alternatively, the initial two-dimensional image may be a two-dimensional C-arm fluoroscopic image, which may also be referred to as a C-arm view or C-arm perspective. For example, the images may be C-arm fluoroscopic images obtained after the data-preprocessing transformation.
Alternatively, the angle may be a Towards angle or a Counter Clockwise (abbreviated as Ccw) angle. It should be noted that the above angles are merely examples and are not particularly limited herein.
Alternatively, the endoscope may include a sheath and an endoscope main body. The sheath can protect the lens, light source, and other parts of the endoscope main body from the external environment, preventing damage or contamination; accurately guide the endoscope main body to a specific position to ensure the accuracy and safety of an operation or examination; and fix the position of the endoscope main body, preventing unnecessary shaking or displacement during the operation and ensuring the clarity and stability of imaging. The sheath may also be referred to as a consumable. The endoscope main body may include components such as a lens, a light source, and a sensor, through which the condition inside the physiological channel can be observed.
In this embodiment, an initial two-dimensional image of the endoscope as it is advanced into the physiological channel can be acquired from at least two angles.
Optionally, the endoscope is introduced into a physiological channel of the patient, such as a bronchus of the lung, for performing an examination or treatment procedure. The lens, light source, and other parts of the endoscope can capture the internal conditions of the physiological channel in real time and convert them into images displayed on a corresponding display. By imaging with a C-arm X-ray machine, C-arm fluoroscopic images of the endoscope can be obtained from different angles as it extends into the physiological channel. The C-arm fluoroscopic images may display the position of the endoscope, the structure of the physiological channel, and possible lesions or abnormalities, providing real-time visual guidance.
Optionally, after the C-arm fluoroscopic images are obtained, they can be data-preprocessed in order to ensure image quality, reduce noise, and thereby facilitate subsequent image analysis and processing, optimizing image quality and accuracy so that the position and condition of the endoscope within the physiological channel can be observed more clearly. The position and direction of the endoscope can then be determined by observing, in the data-preprocessed initial two-dimensional images from different angles, the position of the endoscope and the condition of the physiological channel.
For example, if the targeted physiological channel is the bronchi of the lung, the C-arm fluoroscopic image is used primarily for real-time guidance of a Bronchoscopic Transparenchymal Nodule Access (BTPNA) procedure, in which pulmonary nodules are reached through the lung parenchyma, confirming the position of the endoscope and whether the sheath has reached the focal point. After the preoperative CT determines the focal points and planned paths, the Digitally Reconstructed Radiograph (abbreviated as DRR) image generated from the intraoperative CT may be registered with the C-arm fluoroscopic image, mapping the confirmed focal points onto the C-arm view. The operator performs the procedure on the C-arm view based on this information and confirms, from C-arm fluoroscopic images at different angles, the position of the endoscope during the procedure and whether the sheath tip point has accurately reached the focal point.
It should be noted that the above scenario of obtaining C-arm fluoroscopic views at different angles is merely illustrative, is not limiting, and may be determined according to practical situations.
Step S104, a first image area containing the endoscope main body and a second image area containing the sheath are respectively segmented from the initial two-dimensional images.
In the solution provided in the above step S104 of the present invention, the first image area may be an image area including an endoscope main body. The second image region may be an image region containing a sheath. The first image region and the second image region may be an image of the endoscope body and an image of the sheath tube adaptively segmented according to an orientation and a width of the endoscope, respectively.
In this embodiment, after acquiring the initial two-dimensional images of the at least two angles of the endoscope as it extends into the physiological channel, the partial images with the endoscope body and sheath may be separated therefrom, respectively, resulting in a first image region and a second image region.
Optionally, thresholding is an important step after the initial two-dimensional image has been data pre-processed. The purpose of threshold segmentation is to separate the endoscope and the sheath from the initial two-dimensional image by setting different segmentation thresholds, so that the influence of other redundant features in the initial two-dimensional image is eliminated, and the endoscope fitting, the threshold segmentation and the contour detection are better performed. Compared with the steps performed on the whole initial two-dimensional image, the local segmentation can improve the acquisition efficiency and accuracy of the whole process.
Step S106, determining a first target position of an endoscope end point of the endoscope main body in three-dimensional space based on the first image area, and determining a second target position of a sheath end point of the sheath in three-dimensional space based on the second image area.
In the solution provided in the step S106 of the present invention, the endoscope main body may include an endoscope end point, which may also be referred to as the scope tip; for example, if the endoscope main body is a bronchoscope, the endoscope end point may be the bronchoscope end, which may also be referred to as the bronchoscope tip. The first target location may be the coordinates of the endoscope end point in three-dimensional space, and the second target location may be the coordinates of the sheath end point in three-dimensional space. The sheath end point, which may also be referred to as the sheath tip, may be denoted by a point P. The sheath end point may also be referred to as the consumable end.
In this embodiment, after the first image region including the endoscope main body and the second image region including the sheath are respectively segmented from the initial two-dimensional images, the position of the endoscope distal point in the two-dimensional space can be determined based on the first image region, and further the first target position of the endoscope distal point in the three-dimensional space can be determined. The position of the sheath end point in the two-dimensional space can be determined based on the second image area, and then the second target position of the sheath end point in the three-dimensional space can be determined.
Alternatively, image areas of the endoscope body and the sheath are segmented from the initial two-dimensional image, and the positions of the endoscope distal point and the sheath distal point in the two-dimensional space can be determined based on the two areas, respectively, and converted into target positions in the three-dimensional space. The above method process helps to improve the accuracy and safety of the endoscopic procedure. That is, by dividing the image areas of the endoscope body and the sheath, the positions of the endoscope distal end point and the sheath distal end point in the two-dimensional image can be accurately positioned. The two-dimensional position is converted into a target position in a three-dimensional space, so that the position relation between the tail end point of the endoscope and the tail end point of the sheath relative to a reference point (such as a focus point) can be more clearly known.
In the embodiment of the invention, the positions of the endoscope tail end point and the sheath tail end point in space can be more accurately determined through conversion from two dimensions to three dimensions, and the positioning accuracy is improved. The two-dimensional position is converted into the three-dimensional position, so that more three-dimensional information and angles can be provided, a doctor can know the relative positions of the endoscope and the sheath tube in space more clearly, and the surgical operation can be guided more accurately. By determining the positions of the endoscope distal point and the sheath distal point in three-dimensional space, the risk of miscut or damage to healthy tissue can be reduced, and the safety of the procedure can be improved.
Therefore, compared with the method in the related art that determines the movement of the sheath end point toward the focal point using only the two-dimensional C-arm fluoroscopic view, converting from two dimensions to three dimensions can provide more accurate positioning information and more spatial information, thereby helping to improve the accuracy and safety of the operation, providing better operation guidance and support, and helping to improve the accuracy and success rate of the endoscopic operation.
Step S108, determining the azimuth information between the sheath end point and the reference point based on the first target position and the second target position.
In the technical solution provided in the above step S108 of the present invention, the azimuth information may be used to describe the relative positional relationship between the sheath end point and the reference point, and may be used to represent the distance and the angle between the sheath end point and the reference point. The reference point may be used to indicate a location outside the physiological channel where an abnormal condition exists, and may also be referred to as a focal point or target point, which is used to indicate a location outside the physiological channel where a focus exists.
In this embodiment, after the first target position is determined based on the first image region and the second target position is determined based on the second image region, the positional information between the sheath end point and the reference point may be determined based on the first target position and the second target position.
Optionally, by determining the first target location and the second target location, positional information between the sheath tip point and a reference point (e.g., a focal point), including distance and angle, may be further determined. The above procedure is of great importance for positioning and guiding in endoscopic procedures.
Alternatively, the positions of the endoscope end point and the sheath end point in two-dimensional and three-dimensional space are determined based on the first image region and the second image region. The first target position and the second target position respectively represent the coordinates of the endoscope end point and the sheath end point in space, and provide a basis for the subsequent azimuth information. Using the first target position and the second target position, the distance and angle information between the sheath end point and the reference point can be calculated to describe the relative positional relationship between them.
In the embodiment of the invention, the sheath in C-arm views at any two angles is segmented and the sheath end is located therein, and the position of the sheath in three-dimensional space is then determined by converting from the two-dimensional space into the three-dimensional space. Whether the sheath has reached the focal point is determined by calculating the angle and distance between the sheath and the focal point. If the sheath has not reached the focal point, corresponding angle adjustments and distance prompts can be provided, helping the operator deliver the sheath to the focal point more easily and improving the accuracy and success rate of the operation. By this method, the sheath position is accurately located using three-dimensional spatial information; compared with the two-dimensional C-arm fluoroscopic view in the related art, it can provide more accurate position information and spatial relationships, giving the operator more intuitive and more instructive information. The method can reduce the number of adjustments during the operation, reduce operative risk, and improve operative efficiency and success rate, which is of great significance for interventional procedures such as BTPNA.
In the above steps S102 to S108 according to the embodiment of the present invention, the initial two-dimensional images of the endoscope as it extends into the physiological channel may be acquired from at least two angles. The first image region containing the endoscope body and the second image region containing the sheath can be segmented from the initial two-dimensional images. From the first image region, the coordinates of the endoscope end point in two-dimensional space can be determined and converted into three-dimensional space to obtain the first target position. From the second image region, the coordinates of the sheath end point in two-dimensional space can be determined and converted into three-dimensional space to obtain the second target position. The distance and angle between the sheath end point and the reference point can then be determined from the first target position and the second target position, so as to determine in real time whether the sheath end point has reached the focal point. In this embodiment, by converting the coordinates from two-dimensional space to three-dimensional space, the positions of the endoscope end point and the sheath end point relative to the reference point can be determined more accurately, the positioning accuracy is improved, and more stereoscopic information, including depth and angle, can be provided, so that the positional relationship between the endoscope and the reference point can be understood more clearly and operations such as diagnosis and treatment can be guided more accurately. Compared with relying on C-arm fluoroscopic views at different angles for confirmation, the method provides richer three-dimensional information, reduces the limitations of the two-dimensional space, and helps improve the accuracy and safety of diagnosis and treatment. By knowing the distance and angle between the sheath end point and the reference point more intuitively, diagnostic and therapeutic procedures can be guided more accurately, errors are reduced, and the success rate is improved, thereby realizing the technical effect of improving the accuracy of determining the position of the sheath end point and solving the technical problem of low accuracy in determining the position of the sheath end point.
Embodiments of the present invention will be described in detail with reference to the following steps.
As an optional embodiment, step S104 of segmenting, from the initial two-dimensional images, a first image area containing the endoscope main body and a second image area containing the sheath respectively comprises: performing threshold segmentation on any initial two-dimensional image to obtain the first image area, wherein the first image area contains the pixel points of that initial two-dimensional image whose pixel values are smaller than a preset first segmentation threshold; and performing threshold segmentation on any initial two-dimensional image to obtain the second image area, wherein the second image area contains the pixel points of that initial two-dimensional image whose pixel values are smaller than a preset second segmentation threshold.
In this embodiment, in the process of segmenting the first image region and the second image region from the initial two-dimensional image, respectively, the initial two-dimensional image may be segmented according to the first segmentation threshold, and the image region consisting of the pixel points whose pixel values are smaller than the first segmentation threshold may be regarded as the first image region. The initial two-dimensional image may likewise be segmented according to the second segmentation threshold, and the image region consisting of the pixel points whose pixel values are smaller than the second segmentation threshold may be taken as the second image region. The first image area comprises the pixel points whose pixel values in the initial two-dimensional image are smaller than the preset first segmentation threshold; the first segmentation threshold may be the segmentation threshold at which the endoscope main body is segmented out. The second image area comprises the pixel points whose pixel values in the initial two-dimensional image are smaller than the preset second segmentation threshold; the second segmentation threshold may be the segmentation threshold at which the sheath is segmented out.
In an embodiment of the present invention, data preprocessing may be performed before the initial two-dimensional image is segmented. Data preprocessing is an important step, and aims to improve image quality and reduce noise, so that subsequent image analysis and processing are facilitated. For example, in the data preprocessing process, steps of contrast improvement, brightness improvement, noise removal and the like may be performed, which are only illustrative and not restrictive.
Alternatively, the contrast may be improved by computing an adaptive equalization histogram. Adaptive histogram equalization is a method for enhancing image contrast. In this process, the histogram of the C-arm fluoroscopic view is analyzed and processed, so that adaptive contrast enhancement can be performed according to the brightness of local areas of the image, making the C-arm view clearer and highlighting its details.
Alternatively, the brightness may be enhanced by gamma transformation. By adjusting the brightness and contrast of the image, the visual effect of the image can be improved. In this process, the gamma transformation can effectively increase the brightness of the image, making the sheath tip, the endoscope main body, and the like more prominent, which facilitates subsequent detection and analysis.
Alternatively, noise may be removed by median filtering, which reduces noise in the image by sorting the pixel values around each pixel point in the C-arm fluoroscopic view and taking their median. In this process, the median filtering can effectively remove noise from the image, making the target area clearer and more accurate and facilitating subsequent target detection and positioning.
Through the data preprocessing step, the image quality can be improved and noise interference reduced, providing a more accurate and reliable image foundation for the detection and positioning of the sheath tip and thereby improving the accuracy and efficiency of detection. It should be noted that the above-mentioned methods for improving contrast, improving brightness, and removing noise are only examples and are not limiting; any process and method capable of improving the image quality of C-arm fluoroscopic views is within the scope of the embodiments of the present invention.
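By way of a non-limiting sketch, the three preprocessing operations described above can be chained as follows in Python with OpenCV; the clip limit, gamma value, and kernel size are assumed values rather than parameters taken from this disclosure:

```python
# Illustrative preprocessing sketch for an 8-bit grayscale C-arm image.
# All numeric parameters are assumptions and would be tuned per imaging setup.
import cv2
import numpy as np

def preprocess_carm_image(image: np.ndarray) -> np.ndarray:
    # Adaptive histogram equalization (CLAHE) enhances local contrast.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(image)

    # Gamma transformation raises overall brightness (gamma < 1 brightens).
    gamma = 0.8
    table = np.array([(i / 255.0) ** gamma * 255 for i in range(256)],
                     dtype=np.uint8)
    brightened = cv2.LUT(enhanced, table)

    # Median filtering suppresses impulse noise while preserving edges.
    return cv2.medianBlur(brightened, 5)
```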
In the embodiment of the invention, after data preprocessing, threshold segmentation is an important step. The purpose of the threshold segmentation is to segment out a local image region containing the endoscope body (the first image region) and a local image region containing the sheath (the second image region), and to eliminate the influence of other redundant pixel points in the image, so that the endoscope fitting, threshold segmentation, and contour detection can be better carried out and, finally, the coordinates of the designated centroid can be conveniently obtained. Compared with performing these steps on the whole fluoroscopic image, the local segmentation can improve the efficiency and accuracy of acquiring the sheath end point.
Alternatively, in the process of threshold segmentation, the first segmentation threshold and the second segmentation threshold can be determined according to the gray-distribution characteristics of the initial two-dimensional image by using an adaptive binarization method and the Otsu method, so as to realize effective segmentation of the initial two-dimensional image and separate the target (also called the foreground, such as the endoscope main body and the sheath) from the background (such as the physiological channel) area, providing a more accurate data basis for subsequent image processing and analysis. Adaptive binarization is a method of dynamically adjusting the binarization threshold according to the local characteristics of the image. In adaptive binarization, the algorithm determines a threshold for each pixel according to the gray values of the neighborhood around the pixel, thereby realizing adaptive binarization of different areas. This method can effectively cope with regions of the image having large brightness changes and improves the accuracy of segmentation. The Otsu method is a global threshold segmentation method that aims to find suitable segmentation thresholds (i.e., the first segmentation threshold and the second segmentation threshold) that maximize the inter-class variance between the target and the background. The method calculates the inter-class variance under each candidate threshold by traversing a number of possible thresholds, and takes the threshold with the maximum inter-class variance as the global threshold for segmentation. The Otsu method is suitable for images in which the gray-level distributions of the background and the foreground are clearly distinct, and can effectively realize binary segmentation of the image.
Optionally, in the process of performing threshold segmentation by the adaptive binarization and the Otsu method, the initial two-dimensional image after data preprocessing may be regarded as a matrix, where each element in the matrix corresponds to a pixel point in the initial two-dimensional image and the value of the element is its pixel value, which may be a gray value in the range of 0 to 255. The pixel points can be classified into foreground and background by comparing the segmentation threshold with the pixel values. The segmentation threshold is determined by calculating indexes such as the pixel-count proportions and average gray values of the foreground and background as well as the inter-class variance, so that the inter-class variance is maximized. Under this segmentation threshold, the initial two-dimensional image can be divided into a foreground part and a background part by binarization.
In the embodiment of the invention, the endoscope main body part can be extracted more accurately through the combination of the Otsu method and adaptive binarization, so that effective segmentation of the foreground and background is realized, facilitating subsequent image analysis and processing. The method can automatically determine a suitable segmentation threshold, avoids the subjectivity of manually adjusting the threshold, and improves the accuracy and stability of image processing. In summary, by performing threshold segmentation through adaptive binarization and the Otsu method, the initial two-dimensional image can be effectively segmented and the required first image area and second image area can be extracted, providing powerful support for further image processing and analysis. The method can also improve the degree of automation and the accuracy of image processing, and helps extract the first image area and the second image area for subsequent processing.
For example, if the initial two-dimensional image is a matrix of size M×N, the segmentation threshold separating the foreground and the background is denoted T. The proportion of pixels belonging to the foreground in the whole initial two-dimensional image is denoted ω0, and the average gray value of the foreground pixels is denoted μ0. The proportion of pixels belonging to the background in the whole initial two-dimensional image is denoted ω1, and the average gray value of the background pixels is denoted μ1. The total average gray value of the initial two-dimensional image is denoted μ, and the inter-class variance is denoted σ². If the number of pixels in the image whose gray value is smaller than the segmentation threshold T is denoted A, and the number of pixels whose gray value is greater than or equal to the segmentation threshold T is denoted B, then:
A + B = M×N
ω0 + ω1 = 1
μ = ω0μ0 + ω1μ1
σ² = ω0(μ0 − μ)² + ω1(μ1 − μ)²
By combining the above formulas, one can get:
σ² = ω0ω1(μ0 − μ1)²
The segmentation threshold T is obtained by sequentially traversing each gray value in the interval (0, 255) as a candidate threshold and selecting the one that maximizes the inter-class variance σ². The initial two-dimensional image can then be thresholded using the segmentation threshold T.
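As a hedged illustration of this traversal, the following Python sketch evaluates σ² = ω0ω1(μ0 − μ1)² for every candidate threshold of an 8-bit image and keeps the maximizer; the variable names are illustrative:

```python
# Illustrative Otsu-style traversal mirroring the formulas above.
import numpy as np

def otsu_threshold(image: np.ndarray) -> int:
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total                # foreground proportion ω0
        w1 = 1.0 - w0                              # background proportion ω1
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (hist[:t] * np.arange(t)).sum() / (w0 * total)       # mean gray μ0
        mu1 = (hist[t:] * np.arange(t, 256)).sum() / (w1 * total)  # mean gray μ1
        var = w0 * w1 * (mu0 - mu1) ** 2           # inter-class variance σ²
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```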
For example, if the first image region is to be segmented, the foreground may include the portion of the endoscope body that enters the physiological channel up to the location of the puncture point, as well as other pixel points whose pixel values are smaller than the first segmentation threshold.
It should be noted that the above-mentioned process and method for thresholding the initial two-dimensional image are only examples and are not particularly limited herein. The segmentation threshold T may be determined, depending on whether the foreground to be segmented is the endoscope main body or the sheath, from the corresponding parameters (e.g., A, B) and the parameters of the initial two-dimensional image (e.g., μ0, μ1). Any process and method of determining the end points of the endoscope and the sheath from the partial image areas is within the scope of the embodiments of the present invention, as long as threshold segmentation of the entire two-dimensional C-arm fluoroscopic view is enabled.
As an optional embodiment, step S106, determining the first target position of the endoscope end point of the endoscope body in the three-dimensional space based on the first image area includes acquiring an updated first image area, determining orientation information of the endoscope body based on the updated first image area, and determining the first target position of the endoscope end point in the three-dimensional space by combining the orientation information and the updated first image area.
In this embodiment, in determining the first target position of the endoscope end point in three-dimensional space based on the first image region, the orientation information of the endoscope main body may be determined from the updated first image region. By combining the orientation information and the updated first image area, the position of the endoscope end point in two-dimensional space can be determined, and the first target position of the endoscope end point in three-dimensional space can be further determined. The updated first image region may be a first image region in which the mask of the endoscope is continuous. The orientation information may be used to indicate whether the endoscope end point is oriented left/right or up/down, which is for purposes of illustration only and not limitation. The orientation information may also indicate the orientation of the sheath.
Alternatively, contour detection may be performed on the updated first image region, and the shape with the largest area may be regarded as the endoscope body. That is, after the contours are detected, the area of each contour can be calculated and the shape with the largest area found. In this image, the shape with the largest area is generally considered to be the main area of the endoscope body, because the shape of the endoscope is generally regular and has a large area. By recognizing the shape with the largest area as the endoscope main body, the position and shape of the endoscope can be accurately located and extracted, which facilitates determining the orientation information. The method can help automatically identify and position the endoscope, providing important information for subsequent surgical navigation or diagnosis. Through mask continuous processing and contour detection, combined with identification of the largest shape, the position of the endoscope in the image can be effectively found, realizing automatic detection and positioning of the endoscope. The method can improve processing efficiency and accuracy and provide powerful support for medical analysis and diagnosis.
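A minimal sketch of this largest-contour selection, assuming an OpenCV-style binary mask of the updated first image region as input:

```python
# Illustrative sketch: the contour with the largest area is taken as the
# endoscope body.
import cv2

def largest_contour(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```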
Alternatively, the endoscope body from which the contour is detected in the updated first image region may be analyzed for orientation information by the endoscope body. And determining the position of the endoscope tail end point in the two-dimensional space according to the orientation information, and converting the position into the three-dimensional space to obtain a first target position.
As an alternative embodiment, acquiring the updated first image area comprises: if the pixels corresponding to the endoscope main body contained in the first image area are in a discrete state, performing mask continuous processing on the first image area to obtain the updated first image area; and if the pixels corresponding to the endoscope main body contained in the first image area are in a continuous state, taking the first image area as the updated first image area.
In this embodiment, if the updated first image region needs to be acquired, the first image region may be detected to determine whether the pixels in the first image region that constitute the endoscope main body are in a discrete state or a continuous state. If the pixels of the endoscope main body are in a discrete state, the mask continuous processing can be performed on the first image area, so that an updated first image area is obtained. If the pixels of the endoscope main body are in a continuous state, the first image area is directly used as the updated first image area without performing mask continuous processing. The pixels constituting the endoscope body may be referred to as a mask (may also be referred to as a mask region) of the endoscope body, for example, a mask of a bronchoscope. Mask continuity generally refers to the continuity and integrity of the target area in the image. If the mask is continuous, it means that the pixels of the target area are interconnected in the image without discontinuities or breaks, forming a continuous whole. The continuity of the mask is important for subsequent image analysis and processing because it ensures the integrity and accuracy of the target.
Alternatively, by using a combination of erosion and dilation operations, the masked region of the endoscope body may be made continuous and more complete to ensure accuracy in determining the orientation information of the endoscope body from the first image region.
Alternatively, the mask of the endoscope body in the first image region may first be subjected to an erosion operation to remove small pixel points and noise at the edges, and then to a dilation operation to re-expand and connect the mask regions, thereby obtaining a more accurate and continuous mask of the endoscope body. The processed mask can be better used for subsequent image analysis and processing, improving the accuracy of the orientation information and the like of the endoscope main body.
Alternatively, erosion is an image morphological processing operation that may be performed by sliding a structuring element over the image; a pixel is preserved when the structuring element completely covers the target area, and otherwise it is set to background (i.e., black). The erosion operation can shrink the target area, remove small pixels around the target, smooth the target edges, and remove noise and pinholes. Dilation is another morphological processing operation that may be performed by sliding a structuring element over the image; a pixel is set to target (i.e., white) when the structuring element overlaps the target area, and otherwise the original pixel value is preserved. The dilation operation can enlarge the target area, fill cavities in the target, connect broken target parts, and increase the area and connectivity of the target.
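As a hedged sketch of the erosion-then-dilation processing described above, where the kernel shape, kernel size, and iteration counts are assumed values:

```python
# Illustrative mask continuous processing: erode to remove edge noise, then
# dilate to re-expand and reconnect the endoscope-body mask.
import cv2
import numpy as np

def make_mask_continuous(mask: np.ndarray) -> np.ndarray:
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    eroded = cv2.erode(mask, kernel, iterations=1)    # remove small pixels/noise
    return cv2.dilate(eroded, kernel, iterations=2)   # reconnect broken parts
```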
In an embodiment of the invention, if the endoscope body in the obtained first image area is intermittent, it means that the mask of the endoscope body may not be continuous. This situation may affect the identification, positioning, and analysis of the endoscope body, as an intermittent mask may result in an incomplete shape and profile of the endoscope body and incomplete information, and may cause errors or inaccurate results. Therefore, in order to better extract and process information such as the orientation information of the endoscope main body, it is necessary to make the mask of the endoscope main body continuous by image processing techniques such as erosion and dilation. This means that in the image the area of the endoscope body should be continuous, without discontinuities or breaks, to ensure accurate identification and analysis of the endoscope body. By making the mask of the endoscope body continuous, the accuracy and reliability of extracting and processing the endoscope body can be improved.
It should be noted that the above-mentioned process and method for performing mask continuous processing on the endoscope main body in the first image area are only illustrative, and are not limited in particular, and any process and method capable of performing mask continuous processing to ensure accuracy of analyzing information such as orientation information of the endoscope main body and width of the endoscope main body are within the scope of the embodiments of the present invention.
As an optional embodiment, determining the orientation information of the endoscope main body based on the updated first image area comprises fitting pixel points belonging to the endoscope main body in the updated first image area to obtain a central line fitting curve of the endoscope main body, and determining the orientation information based on coordinate information of any two pixel points on a target coordinate axis on the central line fitting curve.
In this embodiment, in determining the orientation information of the endoscope main body based on the updated first image region, fitting processing may be performed on the pixel points belonging to the endoscope main body in the updated first image region, to obtain a centerline fitting curve of the endoscope main body. The orientation information may be determined based on coordinate information of any two pixel points on the centerline fitting curve on the target coordinate axis. Wherein the centerline fitting curve may also be referred to as a fitting curve.
In the embodiment of the invention, a fitting curve can be obtained by performing polynomial fitting on the obtained pixel points of the endoscope main body in the updated first image area, that is, the shape of the endoscope main body is approximated by a polynomial function. Then, two pixel points are taken on the fitting curve for comparison, and the sizes of the coordinate information of the two points on the target coordinate axis are mainly compared, so that the left-right direction or the up-down direction of the endoscope main body is judged. For example, the pixels of the endoscope main body may be sorted according to the distance between the pixels and the initial position where the endoscope initially extends into the physiological channel, the left and right directions of the endoscope main body may be determined according to the serial numbers corresponding to the two pixels and the coordinate information corresponding to the x coordinate axis, or the up and down directions of the endoscope main body may be determined according to the serial numbers corresponding to the two pixels and the coordinate information corresponding to the y coordinate axis.
It should be noted that the above-mentioned method and process for analyzing the target coordinate axis of the orientation information and determining the orientation information of the endoscope main body are only illustrative, and are not particularly limited herein, and may be set according to actual situations.
Optionally, a suitable fitting curve is found in the first image region by a polynomial fitting algorithm, so that the curve better approximates the shape of the endoscope body. By fitting a curve, the overall shape and profile of the endoscope body can be better described and characterized. Two pixel points are selected on the fitting curve and respectively represent two specific positions of the endoscope main body. Two pixels are typically selected to be located at two end points or at two specific feature points of the endoscope body. The size of the coordinate information of the two points on the target coordinate axis can be used to determine the left and right orientation of the endoscope body.
Alternatively, by comparing the size of the coordinate information of the two selected pixel points with the distance between the two pixel points and the initial position where the endoscope initially extends into the physiological channel, it can be determined which point has larger coordinate information, and the left-right orientation of the endoscope main body is determined accordingly. For example, if the first point is located to the left of the second point, the distance between the first point and the initial position is greater than the distance between the second point and the initial position, indicating that the endoscope body is oriented to the left, and if the second point is located to the left, the distance between the second point and the initial position is greater than the distance between the first point and the initial position, indicating that the endoscope body is oriented to the right. This is by way of illustration only and is not intended to be limiting.
In the embodiment of the invention, by the above method of image processing and curve fitting, the first image area can be analyzed and judged, so that the left and right orientation of the endoscope main body can be determined. The position and orientation of the endoscope body can thus be better understood, providing important reference information for subsequent surgical navigation and manipulation. Through image processing and analysis, the orientation of the endoscope main body can be accurately judged, and the accuracy and success rate of the operation can be improved.
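As a hedged illustration of the fitting and orientation judgment described above, the following sketch fits a polynomial centerline and compares the body pixel farthest from the entry position against the entry x-coordinate; the polynomial degree and the entry-based comparison rule are assumptions:

```python
# Illustrative centerline fitting and left/right orientation judgment.
import numpy as np

def fit_centerline(xs: np.ndarray, ys: np.ndarray, degree: int = 3) -> np.poly1d:
    """Fit y = f(x) through the endoscope-body pixel coordinates."""
    return np.poly1d(np.polyfit(xs, ys, degree))

def orientation_left_right(xs: np.ndarray, entry_x: float) -> str:
    """The body pixel farthest from the entry position lies on the tip side."""
    far_x = xs[np.argmax(np.abs(xs - entry_x))]
    return "left" if far_x < entry_x else "right"
```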
As an optional embodiment, determining the first target position of the endoscope end point in the three-dimensional space by combining the orientation information and the updated first image area comprises: obtaining two-dimensional coordinate information of the endoscope end point in the first image area based on the orientation information, the obtained centerline fitting curve, and the pixel points belonging to the endoscope main body in the first image area; and performing three-dimensional back-projection processing on the two-dimensional coordinate information of the endoscope end point obtained from the first image areas corresponding to the different initial two-dimensional images, to obtain the first target position of the endoscope end point in the three-dimensional space.
In this embodiment, in determining the first target position of the endoscope tip point in the three-dimensional space in combination with the orientation information and the updated first image region, the two-dimensional coordinate information of the endoscope tip point in the two-dimensional space may be acquired based on the orientation information, the center line fitting curve, and the pixel points of the endoscope main body in the first image region. The two-dimensional coordinate information may be processed in three-dimensional back projection to obtain the first target position.
In this embodiment, non-zero point detection may be performed after the endoscope body has been detected through contour detection in the threshold-segmented first image region.
Alternatively, during non-zero detection, the endoscope end point, the parameters of the tangent to the fitted curve at the endoscope end point (e.g., slope k, offset b), and another point O on the tangent can be obtained. That is, non-zero detection, i.e., traversing the pixel points, is performed on the first image region after threshold segmentation and contour detection, acquiring the pixel points that constitute the endoscope body in the first image region, which may represent the contour or edge of the endoscope body.
Optionally, if the orientation information of the endoscope body is rightward, the rightmost point lying on the centerline fitting curve may be selected from the pixel points of the endoscope body as the endoscope end point, so that the two-dimensional coordinate information of the endoscope end point can be determined. Conversely, if the orientation information is leftward, the leftmost point lying on the centerline fitting curve may be identified as the endoscope end point from among the pixel points of the endoscope body. It should be noted that the above process of screening for the endoscope end point is only illustrative and is not limiting.
The above procedure and method for determining the endoscope end point from the points on the centerline fitting curve are only examples for the endoscope body facing right or left; if the endoscope body is oriented in another direction, the endoscope end point may be selected according to the actual orientation.
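A minimal sketch of the end-point screening for the left/right case, assuming the candidate points already lie on the centerline fitting curve:

```python
# Illustrative sketch: pick the endoscope end point according to orientation.
import numpy as np

def pick_end_point(points: np.ndarray, orientation: str) -> np.ndarray:
    """points: (N, 2) array of (x, y) pixels on the centerline fitting curve."""
    idx = points[:, 0].argmax() if orientation == "right" else points[:, 0].argmin()
    return points[idx]
```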
Alternatively, after the two-dimensional coordinate information of the endoscope tip point is determined, the first target position in the three-dimensional space may be obtained by back projection calculation.
For example, after the two-dimensional coordinate information of the endoscope end point is determined at the Towards and Ccw angles, its spatial coordinates can be obtained by back projection. The two-dimensional coordinate information of the endoscope end point at the Towards angle is known as (u1, v1), and the two-dimensional coordinate information of the endoscope end point at the Ccw angle is known as (u2, v2). Assuming the coordinates of the corresponding first target position in three-dimensional space are (x, y, z), (u1, v1) and (u2, v2) may first be de-distorted by inverting the camera distortion model, resulting in the de-distorted coordinates (x'1, y'1) and (x'2, y'2).
In this model, xcenter may be used to represent the abscissa of the optical center point of the imaging plane of the first image region, ycenter the ordinate of the optical center point, xfocal the focal length in the horizontal direction of the camera acquiring the C-arm fluoroscopic view, and yfocal the focal length in the vertical direction; aij and bij may be fourth-order distortion parameters representing the lateral and longitudinal distortion coefficients, respectively. All of these parameters were already determined at the C-arm calibration stage.
For another example, the patient registration stage has already solved, at the Towards and Ccw angles respectively, a rotation matrix and a translation matrix between the CT coordinate system and the C-arm coordinate system, where R_Towards and T_Towards may be used to represent the rotation matrix and the translation matrix at the Towards angle, and R_Ccw and T_Ccw may be used to represent the rotation matrix and the translation matrix at the Ccw angle.
The transformation can be represented by the formula P = R·A + T, wherein P may be used to represent the coordinates of a point in the coordinate system of the CT image, and A may be used to represent the coordinates of the same point in the coordinate system of the C-arm image, yielding one set of coordinate equations for each of the two angles.
The two sets of equations can be combined into a system of linear equations, and by solving this linear system the coordinates of the endoscope end point in three-dimensional space, i.e., the first target position (xq, yq, zq), may be obtained, which may also be referred to as the start point of the physiological channel.
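The combined linear system is not reproduced explicitly above. As a hedged sketch only, the following assumes each angle can be expressed as a 3×4 projection matrix assembled from its registration rotation/translation and the calibration intrinsics (an assumption, not the document's stated formulation), and solves the standard linear triangulation problem for the first target position:

```python
# Hedged sketch: linear (DLT-style) triangulation from the two views.
# P1 and P2 are assumed 3x4 projection matrices for the Towards and Ccw
# angles; their assembly from R, T, and the intrinsics is an assumption.
import numpy as np

def triangulate(pt1, P1, pt2, P2):
    """pt1, pt2: de-distorted 2D points; returns (xq, yq, zq)."""
    (x1, y1), (x2, y2) = pt1, pt2
    A = np.vstack([
        x1 * P1[2] - P1[0],   # each row is one projection constraint
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)          # least-squares null vector of A
    X = vt[-1]
    return X[:3] / X[3]                  # homogeneous -> Euclidean
```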
As an optional embodiment, step S106 of determining the second target position of the sheath end point of the sheath in the three-dimensional space based on the second image area includes: performing contour detection on the second image area to obtain a centroid data set corresponding to the sheath, wherein the centroid data set includes two-dimensional coordinate information of each centroid corresponding to the sheath in the second image area; screening the centroids contained in the centroid data set according to the centerline fitting curve and the endoscope end point, and taking the screened designated centroid as the sheath end point; obtaining the two-dimensional coordinate information of the sheath end point in the second image area; and performing three-dimensional back-projection processing on the two-dimensional coordinate information of the sheath end point obtained from the second image areas corresponding to the different initial two-dimensional images, to obtain the second target position of the sheath end point in the three-dimensional space.
In this embodiment, in determining the second target position of the sheath distal point in the three-dimensional space based on the second image region, contour detection may be performed on the second image region to obtain a centroid data set corresponding to the sheath. The individual centroids contained in the centroid data set may be screened based on a centerline fitting curve, an endoscope end point. The selected designated centroid is taken as the sheath end point. Two-dimensional coordinate information of the sheath tip point in the second image region may be acquired. And the two-dimensional coordinate information can be processed into a three-dimensional space through three-dimensional back projection to obtain a second target position of the sheath tube end point in the three-dimensional space, wherein the centroid data set can comprise the two-dimensional coordinate information of each centroid corresponding to the sheath tube in a second image area.
Alternatively, after the specified centroid is determined from the plurality of centroids, two-dimensional coordinate information corresponding to the specified centroid may be determined from the centroid data set. The second target position of the distal end point of the sheath in three-dimensional space can be calculated by using the above-described back-projection calculation formula in the same manner as the determination of the first target position of the distal end point of the endoscope. The above embodiments have already described the back projection calculation process in detail, and will not be described in detail here.
Optionally, an adaptive adjustment of the bias of the Otsu method is performed for threshold segmentation, to obtain the segmentation result of the sheath image, i.e., the second image region. By adaptively adjusting the bias, the method can better adapt to the characteristics of the image and improve the segmentation effect. The Otsu method is a global threshold segmentation method based on inter-class variance maximization. In this method, the threshold can be dynamically adjusted according to the characteristics of the image by adaptively adjusting the bias, so as to adapt to the brightness changes and noise conditions of different areas. Based on the adjusted bias, threshold segmentation is performed using the Otsu method. The threshold segmentation divides the image into two parts, the target (such as the sheath) and the background, according to the gray values of the image. With the adaptively adjusted bias, the threshold can be determined more accurately, realizing effective segmentation of the bronchoscopic image. The second image area is obtained through Otsu threshold segmentation with the adaptively adjusted bias. This result shows the separation of the sheath target from the background region, thereby facilitating subsequent processing and analysis.
By adaptively adjusting the bias of the Otsu method for threshold segmentation, more accurate and effective segmentation can be achieved according to the characteristics and requirements of the image; the second image region can be extracted, providing a better data basis for subsequent processing and analysis, and this method can improve the accuracy and efficiency of image segmentation.
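A minimal sketch of the bias-adjusted Otsu segmentation of the sheath, assuming an 8-bit image in which sheath pixels are darker than the threshold; the bias value is illustrative:

```python
# Illustrative bias-adjusted Otsu segmentation of the second image region.
import cv2
import numpy as np

def segment_sheath(image: np.ndarray, bias: int = -10) -> np.ndarray:
    # Compute the global Otsu threshold, then shift it by an adaptive bias.
    otsu_t, _ = cv2.threshold(image, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Pixels at or below the adjusted threshold become sheath foreground.
    _, mask = cv2.threshold(image, otsu_t + bias, 255, cv2.THRESH_BINARY_INV)
    return mask
```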
As an alternative embodiment, screening the centroids contained in the centroid data set according to the centerline fitting curve and the endoscope end point comprises: obtaining a tangent to the centerline fitting curve at the endoscope end point; determining a corresponding circle center on the tangent according to the endoscope end point, the orientation information of the endoscope main body, and a preset radius length; constructing a target circle according to the circle center and the preset radius length; and screening the centroids contained in the centroid data set against screening conditions according to the target circle, so as to determine the specified centroid.
In this embodiment, in the process of screening the specified centroid from the centroids based on the centerline fitting curve and the endoscope end point, a tangent to the centerline fitting curve at the endoscope end point may be obtained. The circle center of the target circle can be determined on the tangent according to the endoscope end point, the orientation information of the endoscope main body, and a preset radius length. A target circle can then be constructed with this circle center and the preset radius length, and the specified centroid is screened from the plurality of centroids according to the target circle and the screening conditions. The preset radius length may be determined according to the longest sheath length (maxLength); for example, the preset radius length r = maxLength/2 + 5, which is only an example and is not particularly limited. The target circle may be denoted Cm. The longest sheath length can be statistically derived from a large amount of surgical data. The centroids may also form a centroid set, e.g., a centroid list (massList).
Alternatively, the first image region may be adaptively image segmented according to the orientation information of the endoscope body and the overall width, before determining the tangent on the fitted curve, so as to segment the image with the endoscope tip point. The endoscope body from which a particular section is divided may be selected for subsequent processing and analysis based on the orientation determination of the endoscope body. That is, adaptive image segmentation is performed according to the orientation of the endoscope body and the overall width. In this case, if the endoscope body is directed to the right, an image of the endoscope body of the right side portion may be selected to be divided. By performing adaptive segmentation based on the orientation information and the width information, images of the endoscope body of a specific portion can be better selected and extracted. After the adaptive segmentation, an image of the endoscope body in the right portion can be acquired. The purpose of the method is to provide more accurate data for subsequent processing and analysis, so that the fitting effect is better. Segmenting out portions of the endoscope body image may allow for more accurate and targeted subsequent processing, helping to better understand the shape and characteristics of the endoscope body.
In the embodiment of the invention, the image of the endoscope main body of a specific part (such as the tail end point of the endoscope) can be effectively selected and extracted by carrying out self-adaptive segmentation according to the orientation information and the whole width of the endoscope main body, so that a more accurate data basis is provided for subsequent processing and analysis. The method can pertinently divide the image according to specific conditions, and is beneficial to improving the recognition and positioning accuracy of the endoscope main body.
Alternatively, after the image of the endoscope end point is obtained, polynomial fitting may be performed on it to obtain a fitting line, thereby indicating the movement path of the endoscope. That is, a polynomial fit is performed on the image of the endoscope end point segmented in the previous step. Through the fitting algorithm, a suitable polynomial function can be found to approximate the shape and contour of the endoscope tip and to indicate the path and direction of movement of the endoscope. On the basis of the fitting line, a tangent at the scope tip may be taken, which can approximately represent the direction of the sheath. By calculating the slope or direction of the fitting line, the approximate orientation of the sheath can be determined, providing important information about the sheath position and orientation. The direction of the sheath can thus be determined more accurately by polynomial fitting and taking the tangent at the scope tip, providing important information and guidance for subsequent operations. The method helps better understand the position and movement path of the endoscope main body and improves the efficiency and accuracy of the operation.
Optionally, after the endoscope end point and the tangent line at the endoscope end point are obtained through the non-zero detection process, a target circle Cm may be constructed by taking the point that lies on the tangent line at a distance of the preset radius length r from the endoscope end point as the center O. Through the above procedure, the endoscope end point, the tangent parameters (slope k, bias b) of the fitted curve at the endoscope end point, and the point O on the tangent can all be obtained. Such information helps determine the end point position and orientation of the sheath, providing an important data basis for subsequent processing and analysis. The method can improve the accuracy of identification and positioning of the endoscope main body and provide useful information and guidance for the operation.
Optionally, contour detection may be performed on the second image area obtained by the threshold segmentation, so as to obtain the two-dimensional coordinate information of each centroid in the centroid list of the sheath contour:
Cx = M10 / M00, Cy = M01 / M00
Wherein Cx may be used to represent the abscissa of each centroid, i.e., the abscissa in the two-dimensional coordinate information, Cy may be used to represent the ordinate of each centroid, i.e., the ordinate in the two-dimensional coordinate information, and M may be used to represent a geometric moment of the image, where the subscripts denote the order of the moment and are non-negative integers; in particular, M10 may be used to represent the first-order horizontal moment, M01 the first-order vertical moment, and M00 the area of the contour in the image.
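As a minimal sketch of this step, assuming OpenCV 4 (where cv2.findContours returns the contours and the hierarchy), the centroid list may be computed from the image moments exactly as in the formula above; the function name is an assumption of this sketch:

    import cv2

    def centroid_list(binary_img):
        # Contour detection on the threshold-segmented second image region.
        contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        massList = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 0:                 # m00 is the contour area
                cx = m["m10"] / m["m00"]     # Cx = M10 / M00
                cy = m["m01"] / m["m00"]     # Cy = M01 / M00
                massList.append((cx, cy))
        return massList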
As an alternative embodiment, the screening criteria include at least a specified centroid within the target circle, a distance between the specified centroid and the tangent line being equal to or less than a first distance threshold, and a distance between the specified centroid and an endoscope endpoint being equal to or greater than a distance between any centroid within the centroid dataset and the endoscope endpoint.
In this embodiment, the screening criteria may include at least that the specified centroid is within the target circle, that the distance between the specified centroid and the tangent line is less than or equal to a first distance threshold, and that the distance between the specified centroid and the endoscope end point is greater than or equal to the distance between any other centroid in the centroid data set and the endoscope end point. The first distance threshold may also be referred to as a tangent threshold; it may be predetermined according to experimental data and may be used to exclude noise centroids and improve the accuracy of the determined specified centroid.
Alternatively, the specified centroid D(x1, y1), i.e., the specified centroid in the centroid list massList, can be screened out by the at least three screening conditions described above.
Optionally, the first screening condition is that the specified centroid is a centroid within the circle Cm, i.e., within the circular region with center O and radius r. This condition defines the range of positions for the specified centroid, ensuring that it lies within the specified circular area for further screening and analysis. By requiring that the specified centroid lie within the circle Cm, the search range can be narrowed, interference from irrelevant points is reduced, and the screening accuracy is improved.
Optionally, the second screening condition is that the distance from the specified centroid to the tangent line is within the tangent threshold, i.e., the distance from the tangent line must not exceed the first distance threshold. This condition limits the range of distances from the specified centroid to the tangent line and ensures that the proximity of the specified centroid to the tangent line meets the requirement. By setting a threshold on the distance from the tangent line, points far from the tangent can be filtered out and candidate centroids close to the tangent retained, improving the screening accuracy.
Optionally, the third screening condition is that the specified centroid is the centroid furthest from the endoscope end point. On the basis of meeting the first two conditions, the centroid furthest from the endoscope end point, i.e., the specified centroid D(x1, y1) among the centroids in massList, may be selected. This condition determines the final selection of the specified centroid D(x1, y1), ensuring that the centroid furthest from the endoscope end point is selected as the final specified centroid. By selecting the centroid furthest from the endoscope end point, a proper selection of the specified centroid under the specific conditions can be ensured, other possible interference factors are avoided, and the accuracy and precision of the specified centroid are improved.
In summary, the specified centroid D(x1, y1) can be effectively determined by screening with the above three screening conditions. The setting and application of these conditions help to screen out the specified centroid that meets the requirements, eliminate the interference points that do not meet the conditions, and improve the accuracy of screening and the reliability of the result. The above screening method facilitates an appropriate selection of the specified centroid to meet specific needs and requirements.
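For illustration only, the three screening conditions may be combined as in the following Python sketch; taking the center O along the positive tangent direction is an assumption here (in practice the side is chosen from the orientation information), and the function name is likewise illustrative:

    import numpy as np

    def select_centroid(massList, endpoint, k, b, r, tangent_threshold):
        # endpoint: endoscope end point A = (ax, ay); the tangent there is y = k*x + b.
        ax, ay = endpoint
        direction = np.array([1.0, k]) / np.hypot(1.0, k)  # unit vector along the tangent
        O = np.array([ax, ay]) + r * direction             # assumed sign of the direction
        best, best_d = None, -1.0
        for (cx, cy) in massList:
            in_circle = np.hypot(cx - O[0], cy - O[1]) <= r        # condition 1
            dist_tan = abs(k * cx - cy + b) / np.hypot(k, 1.0)     # point-to-line distance
            near_tangent = dist_tan <= tangent_threshold           # condition 2
            d = np.hypot(cx - ax, cy - ay)
            if in_circle and near_tangent and d > best_d:          # condition 3: farthest
                best, best_d = (cx, cy), d
        return best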
It should be noted that the above screening conditions are only illustrative, and are not limited in particular, and any process and method capable of screening out a specified centroid from a large number of centroids as the end point of the sheath are within the scope of the embodiments of the present invention.
As an optional embodiment, step S108, determining the azimuth information between the sheath distal end point and the reference point based on the first target position and the second target position, comprises obtaining the reference coordinate information of the reference point in the three-dimensional space and the target coordinate information of the target point on the physiological channel in the three-dimensional space, wherein the target point is a puncture point determined on the physiological channel based on the planning path, determining the distance in the azimuth information based on the second target position and the reference coordinate information, determining a first straight line from the target point to the reference point according to the target coordinate information and the reference coordinate information, determining a second straight line from the target point to the sheath distal end point according to the target coordinate information and the second target position, and determining the included angle between the first straight line and the second straight line as the angle in the azimuth information.
In this embodiment, in determining the azimuth information between the sheath end point and the reference point based on the first target position and the second target position, the reference coordinate information of the reference point in the three-dimensional space and the target coordinate information of the target point on the physiological channel in the three-dimensional space may be acquired. The distance in the azimuth information may be determined based on the second target position and the reference coordinate information. The first straight line on which the target point and the reference point lie can be determined according to the target coordinate information and the reference coordinate information, and the second straight line on which the target point and the sheath end point lie can be determined according to the target coordinate information and the second target position. The angle between the first straight line and the second straight line can then be determined as the angle in the azimuth information.
Alternatively, the target point may be a Point of Entry (POE) determined on the physiological channel based on the planned path. For example, the target point may be a puncture point on the bronchial wall. If the target operation is bronchoscopy or a bronchial biopsy, the path planning can be performed by the LungPro software, and the position of a suitable POE point, i.e., the target coordinate information Pp = (xP, yP, zP) of the target point, is calculated.
It should be noted that the above-mentioned software for planning a path and POE point determining process are only examples, and are not limited herein.
For example, the distance between the second target position and the reference coordinate information may be determined by the following formula:
d1 = ‖Pt - Ps‖ = √((xt - xs)² + (yt - ys)² + (zt - zs)²)
Wherein d1 may be used to represent the distance between the second target position and the reference coordinate information, i.e., the distance of the sheath end point from the focal point, Pt = (xt, yt, zt) may be used to represent the reference coordinate information, i.e., the focal point coordinates, and Ps = (xs, ys, zs) may be used to represent the second target position, i.e., the coordinates of the sheath end point P.
For another example, the angle between the first line and the second line may be determined by the following formula:
θ = arccos( ((Pt - PP) · (Ps - PP)) / (‖Pt - PP‖ · ‖Ps - PP‖) )
Wherein θ may be used to represent the angle between the first line and the second line, i.e., the angle between the line from the POE to the focal point and the line from the POE to the sheath end point, Pt may be used to represent the focal point coordinates, PP may be used to represent the POE point coordinates, and Ps may be used to represent the sheath end point coordinates.
It should be noted that the above process and method for determining the angle and the distance in the azimuth information are only exemplary, and are not limited herein.
As an alternative embodiment, the method further comprises determining a distance between the target point and the endoscope tip point based on the first target position and the target coordinate information, wherein the distance between the target point and the endoscope tip point is used to determine a degree of matching between an actual movement path of the endoscope within the physiological channel and the planned path.
In this embodiment, the distance between the target point and the endoscope end point may also be determined according to the first target position and the target coordinate information, wherein the distance between the target point and the endoscope end point may be used to determine the degree of matching between the actual movement path of the endoscope within the physiological channel and the previously planned path.
For example, the distance between the first target position and the target coordinate information may be determined by the following formula:
d2 = ‖Pq - PP‖ = √((xq - xP)² + (yq - yP)² + (zq - zP)²)
Wherein d2 may be used to represent the distance between the first target position and the target coordinate information, i.e., the distance of the bronchoscope tip from the POE point, and Pq = (xq, yq, zq) may be used to represent the first target position, i.e., the coordinates of the bronchoscope tip point.
In the embodiment of the present invention, the magnitude of the error between the current actual movement path and the pre-planned path, i.e., the degree of matching between the two, can be determined from the magnitude of d2. A smaller d2 indicates a higher degree of matching between the actual movement path and the pre-planned path and a more accurate puncture. A larger d2 indicates a lower degree of matching between the actual movement path and the pre-planned path and a lower puncture accuracy, so that corresponding adjustment is needed. Ideally, the POE point coincides with the bronchoscope tip point, i.e., the puncture is accurate and d2 = 0.
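For illustration only, the distance and angle computations above may be sketched with NumPy as follows; the function name is an assumption, and the four points are the coordinates produced by the back-projection step:

    import numpy as np

    def azimuth_info(Pt, Ps, Pq, Pp):
        # Pt: focal point, Ps: sheath end point, Pq: bronchoscope end point,
        # Pp: POE (puncture) point -- each a length-3 array in CT space.
        d1 = np.linalg.norm(np.subtract(Pt, Ps))   # sheath end point to focal point
        d2 = np.linalg.norm(np.subtract(Pq, Pp))   # bronchoscope end point to POE
        u = np.subtract(Pt, Pp)                    # POE -> focal point
        v = np.subtract(Ps, Pp)                    # POE -> sheath end point
        cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        theta = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
        return d1, theta, d2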
In embodiments of the present invention, an initial two-dimensional image of the endoscope during its extension into a physiological channel may be acquired from at least two angles. The first image region of the endoscope body and the second image region of the sheath can be segmented from the initial two-dimensional image. From the first image region, the coordinates of the endoscope tip point in two-dimensional space can be determined, and the coordinates can be converted into three-dimensional space to obtain a first target position. From the second image region, the coordinates of the sheath end point in two-dimensional space can be determined and converted into three-dimensional space to obtain the second target position. The distance and the angle between the sheath end point and the reference point can be determined through the first target position and the second target position, so that whether the sheath end point reaches the focus point is determined in real time. In this embodiment, by converting the coordinates from the two-dimensional space to the three-dimensional space, the positions of the endoscope distal end point and the sheath distal end point with respect to the reference point can be determined more accurately, the accuracy of positioning is improved, and more stereoscopic information including depth, angle, and the like can be provided, so that the positional relationship of the endoscope and the reference point can be understood more clearly, and the operations such as diagnosis and treatment can be guided more accurately. Compared with CArm perspective views relying on different angles for confirmation, the method provides richer three-dimensional information, reduces the limitation in two-dimensional space, and is beneficial to improving the accuracy and safety of diagnosis and treatment. By more intuitively knowing the distance and angle between the sheath tip point and the reference point, diagnostic and therapeutic procedures can be guided more accurately, errors are reduced, and success rate is improved. Thereby realizing the technical effect of improving the accuracy of determining the position of the sheath tube end point and solving the technical problem of low accuracy of determining the position of the sheath tube end point.
Example 2
In this embodiment, the method for determining the position of the distal end point of the sheath provided by the present invention is described in detail below with reference to another alternative embodiment based on a bronchoscope.
At present, the BTPNA procedure, i.e., Bronchoscopic TransParenchymal Nodule Access, is also called the tunnel technique: a new tunnel is established by making a hole in the bronchial wall, so that a nodule site in the lung parenchyma can be reached from the working channel without depending on the natural bronchial passages; in theory, any location in the lung can be reached and treated accurately. BTPNA is an interventional procedure that relies on C-arm imaging. The C-arm, as a fluoroscopic imaging system, can only present a two-dimensional position reference, so 3D information is lost; the C-arm is rotated to determine the spatial position and imaging is performed at a plurality of angles, but because the C-arm images by means of X-rays, there are potential risks for operators and patients exposed to the radiation environment for a long time.
Optionally, in a BTPNA procedure, the operator can acquire current position information within the airway in real time through the image information acquired by the lens of the bronchoscope, but once the sheath exits the airway, the position information becomes unknown. At this time, to determine the position of the portion of the sheath outside the airway, a CArm perspective view is typically used. However, since the CArm perspective view is a two-dimensional image, it is difficult to acquire the spatial position information of that portion of the sheath, which makes it difficult to determine whether the focal point has been reached, increasing the complexity and risk of the procedure. During the operation, if tools such as the biopsy needle and the sheath do not accurately reach the lesion position, the CArm perspective view, being a two-dimensional image, can hardly provide effective spatial information prompts such as angles and distances. This can leave the operator without accurate guidance during the procedure, increasing the difficulty and duration of the procedure as well as its risk.
Alternatively, the BTPNA procedure is a bronchoscopically performed pulmonary nodule puncture for the diagnosis and treatment of pulmonary disease. In such procedures, a preoperative CT scan is typically used to confirm the location and size of the lesion and to plan the surgical path. During surgery, the DRR image (digitally reconstructed radiograph, a virtual image) generated from the intraoperative CT is registered with the CArm fluoroscopic image (the real image), and the target position confirmed in the CT is mapped onto the CArm perspective view. The purpose of this is to accurately guide the operator to the intended lesion position during the actual operation.
In a BTPNA procedure, the operator first passes a tool, guided by the bronchoscope, through the pulmonary tissue to reach the lesion site according to the pre-planned path, and then performs the related procedure, such as taking a biopsy sample or carrying out other treatment. During the operation, the operator confirms through CArm perspective views at different angles whether the sheath has accurately reached the focal point after exiting the airway. Through the CArm perspective views, the operator can observe the position of the tool in the body in real time, thereby ensuring the accuracy and safety of the operation.
In the whole BTPNA procedure, the steps of preoperative CT scanning, registration of the DRR images generated from the intraoperative CT with the CArm perspective images, and position confirmation through CArm views at different angles are all used to improve the accuracy of the operation and reduce the surgical risk, ensuring that the operator can accurately reach the focal point for effective diagnosis and treatment. This method of comprehensively utilizing imaging technology and real-time navigation technology is beneficial to improving the success rate and safety of BTPNA procedures.
In one related art, confirmation can be made by CArm perspective views at different angles, but the CArm perspective view is a two-dimensional image, which is difficult to accurately locate. And the CArm perspective is difficult to provide effective information when the sheath does not reach the lesion site and needs to be adjusted. Therefore, there is still a technical problem that the accuracy of determining the position of the distal point of the sheath is low.
The invention provides a method for repositioning three-dimensional coordinates based on a two-dimensional X-ray image. The method comprises the steps of dividing a sheath tube in CArm views with any two angles, positioning the sheath tube to the tail end position of the sheath tube, and then calculating the position of the sheath tube in a three-dimensional space through back projection. And determining whether the sheath has reached the focal point by calculating the angle and distance between the sheath and the focal point. If the sheath does not reach the focal point, the technique can provide angular adjustment and distance cues to help the operator more easily deliver the sheath to the focal point, thereby improving the accuracy and success rate of the procedure. The sheath tube position is accurately positioned by utilizing the three-dimensional space information, and compared with a two-dimensional CArm perspective view in the related art, the three-dimensional space information can provide more accurate position information and space relation, and more visual and more instructive information can be provided for an operator. By the innovative method, the adjustment times in the operation process can be reduced, the operation risk is reduced, the operation efficiency and success rate are improved, and the method has important significance for interventional operations such as BTPNA operations. Thereby realizing the technical effect of improving the accuracy of determining the position of the end point of the sheath, and solving the technical problem of low accuracy of determining the position of the end point of the sheath.
The method is further described below.
In this embodiment, fig. 2 is a flowchart of a method for calculating a spatial position by two-dimensional perspective back projection according to an embodiment of the present invention, as shown in fig. 2, the method may include the steps of:
in step S202, two CArm perspective views are acquired to determine the position of the distal end of the sheath.
In this embodiment, CArm perspective views at two angles may be taken, and the sheath tip position may be segmented from them.
Alternatively, due to its physical characteristics, the bronchoscope itself cannot exit the airway and can at most reach the planned POE point, i.e., the puncture point. At this time, the sheath needs to reach the POE point through the bronchoscope working channel and then exit the airway at the POE point, passing through the lung parenchyma to the target point according to the puncture angle and puncture distance of the planned path.
Fig. 3 (a) is a flowchart of a data preprocessing method in a sheath tip detection process according to an embodiment of the present invention, and as shown in fig. 3 (a), the method may include the steps of:
Step S301, enhancing the contrast of CArm perspective.
In this embodiment, contrast may be enhanced by computing an adaptive equalization histogram.
In step S302, the brightness of the CArm perspective is increased.
In this embodiment, gamma conversion may be performed to increase brightness.
Step S303, removing noise in the CArm perspective view.
In this embodiment, a median filter may be performed to remove noise from the CArm perspective.
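For illustration only, steps S301 to S303 may be sketched in Python with OpenCV as follows; the CLAHE parameters, the gamma value, and the median-filter kernel size are assumptions of this sketch and are not particularly limited herein:

    import cv2
    import numpy as np

    def preprocess_carm(img, gamma=0.8, ksize=5):
        # Step S301: adaptive equalization histogram (CLAHE) to enhance contrast.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        img = clahe.apply(img)                    # img: 8-bit gray-scale CArm view
        # Step S302: gamma transform to increase brightness (gamma < 1 brightens).
        table = np.array([(i / 255.0) ** gamma * 255 for i in range(256)],
                         dtype=np.uint8)
        img = cv2.LUT(img, table)
        # Step S303: median filtering to remove noise.
        return cv2.medianBlur(img, ksize)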
For example, fig. 4 is a schematic diagram of a CArm perspective image obtained after the data preprocessing transformation according to an embodiment of the present invention. As shown in fig. 4, the hook-shaped structure in the preprocessed CArm perspective image may be the bronchoscope, and the semi-transparent pipeline beside the bronchoscope may be the airway. Through the data preprocessing steps, the image quality can be improved and noise interference reduced, providing a more accurate and reliable image basis for the detection and positioning of the sheath tip and thereby improving the accuracy and efficiency of detection.
Fig. 3 (b) is a flowchart of a method for segmenting the consumable during sheath tip detection according to an embodiment of the present invention. As shown in fig. 3 (b), the method aims to cut out the partial CArm perspective view containing the bronchoscope tip and the sheath, eliminating the influence of other redundant image content, so that bronchoscope fitting, threshold segmentation, and contour detection can be performed better, finally facilitating the acquisition of the specified centroid coordinates. Compared with performing the above steps over the entire CArm perspective view, this stage improves the efficiency and accuracy of acquiring the specified centroid coordinates. The method may comprise the steps of:
Step S304, performing threshold segmentation on the data-preprocessed CArm perspective view by combining adaptive binarization and the Otsu method.
In this embodiment, through the combination of the Otsu method and adaptive binarization, the bronchoscope portion can be extracted from the CArm perspective view more accurately, realizing effective segmentation of the foreground and the background and facilitating subsequent image analysis and processing.
Alternatively, taking a gray-scale image as an example, the image can be regarded as a matrix of size M×N whose entries are the pixel values, each pixel value lying in [0, 255]. Let T be the segmentation threshold between the foreground (i.e., the object) and the background. The proportion of foreground pixels in the whole image is denoted ω0, and the average gray level of the foreground is denoted μ0. The proportion of background pixels in the whole image is denoted ω1, and their average gray level is denoted μ1. The total average gray level of the image is denoted μ, and the inter-class variance is denoted σ².
Assuming that the background of the image is dark, the number of pixels whose gray value is smaller than the threshold T is denoted A, and the number of pixels whose gray value is greater than or equal to the threshold T is denoted B; then:
A + B = M × N
ω0 + ω1 = 1
μ = ω0·μ0 + ω1·μ1
σ² = ω0·(μ0 - μ)² + ω1·(μ1 - μ)²
By combining the above formulas, one can get:
σ² = ω0·ω1·(μ0 - μ1)²
The value of the threshold T is traversed over the gray-scale interval [0, 255] to obtain the threshold T that maximizes σ². The image is then thresholded using this optimal threshold T. The foreground includes the part of the bronchoscope entering the bronchial channel up to the POE position, together with the other pixels meeting the threshold.
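For illustration only, the traversal of the threshold T described above may be implemented as follows; this is a direct sketch of the Otsu criterion σ² = ω0·ω1·(μ0 - μ1)² for an 8-bit gray-scale image:

    import numpy as np

    def otsu_threshold(img):
        # img: 8-bit gray-scale image of size M x N.
        hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
        prob = hist / hist.sum()                     # gray-level probabilities
        best_T, best_var = 0, 0.0
        for T in range(1, 256):                      # traverse candidate thresholds
            w0, w1 = prob[:T].sum(), prob[T:].sum()  # class proportions omega0, omega1
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (np.arange(T) * prob[:T]).sum() / w0
            mu1 = (np.arange(T, 256) * prob[T:]).sum() / w1
            var = w0 * w1 * (mu0 - mu1) ** 2         # inter-class variance
            if var > best_var:
                best_T, best_var = T, var
        return best_T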
In step S305, the mask of the bronchoscope and the consumable in the image is made continuous by erosion and dilation.
In this embodiment, the mask region of the bronchoscope can be made continuous and more complete by the combined use of erosion and dilation operations to ensure accuracy in determining the sheath orientation from CArm images.
FIG. 5 is a schematic illustration of a bronchoscope mask made continuous by erosion and dilation according to an embodiment of the present invention. As shown in FIG. 5, the bronchoscope in the acquired image may be intermittent, meaning that the bronchoscope mask may not be continuous. This may affect the identification, positioning, and analysis of the bronchoscope, because an intermittent mask may result in an incomplete shape and profile of the bronchoscope, incomplete information, and erroneous or inaccurate results. Thus, in order to better extract and process information such as the sheath orientation, the bronchoscope mask can be made continuous by erosion and dilation. This means that the mask of the bronchoscope is continuous in the image, without discontinuities or breaks, ensuring accurate identification and analysis of the bronchoscope and improving the accuracy and reliability of subsequent extraction and processing.
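As a minimal sketch, one common way to combine erosion and dilation for this purpose is a morphological closing (dilation followed by erosion), which bridges the gaps in an intermittent mask; the kernel size and iteration count are assumptions of this sketch:

    import cv2

    def make_mask_continuous(mask, ksize=5, iterations=2):
        # Closing bridges small breaks so the bronchoscope mask becomes continuous.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
        return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel,
                                iterations=iterations)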
In step S306, the shape with the largest area is found by contour detection and regarded as the bronchoscope.
In this embodiment, contour detection is performed, and the shape with the largest area found in the image is regarded as the bronchoscope.
Fig. 6 is a schematic view of a bronchoscope according to an embodiment of the present invention, as shown in fig. 6, wherein the white hooks may be bronchoscopes according to the embodiment of the present invention.
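For illustration only, step S306 may be sketched as follows, assuming OpenCV 4; the contour with the largest area is kept and redrawn as the bronchoscope mask:

    import cv2
    import numpy as np

    def largest_contour_mask(mask):
        # The shape with the largest area is regarded as the bronchoscope.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        scope = max(contours, key=cv2.contourArea)
        out = np.zeros_like(mask)
        cv2.drawContours(out, [scope], -1, 255, thickness=cv2.FILLED)
        return out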
Step S307, determining the sheath orientation.
In this embodiment, a fitting curve is obtained by performing polynomial fitting on the bronchoscope image obtained in the above step S306. Two points are taken on the curve and compared, for example by the size of their x coordinates, to judge whether the sheath faces left or right.
Step S308, the tail end of the bronchoscope is divided in a self-adaptive mode according to the direction of the sheath and the whole width of the bronchoscope.
In this embodiment, the image containing the bronchoscope tip is segmented adaptively according to the orientation of the scope and its overall width.
Fig. 7 is a schematic diagram of adaptively segmenting the end of a bronchoscope according to an embodiment of the present invention. As shown in fig. 7, it may be determined that the bronchoscope faces right at this time, and the right-side portion of the bronchoscope may be adaptively segmented out, so as to obtain the part of the bronchoscope that yields a better fitting effect later.
Fig. 3 (c) is a flowchart of a method for acquiring coordinates of a sheath tip in a sheath tip detection process according to an embodiment of the present invention, as shown in fig. 3 (c), the method may include the steps of:
step S309, bronchoscope tip fitting is performed.
In this embodiment, a polynomial is fitted to the bronchoscope tip image segmented in the previous step.
Fig. 8 is a schematic view of the fitting line of a bronchoscope according to an embodiment of the present invention. As shown in fig. 8, the white fitting line in fig. 8 may be obtained by polynomial fitting; the fitting line may indicate the movement path of the scope, and a tangent at the tip of the scope may then be taken on the basis of the fitting line as the approximate direction of the sheath.
In step S310, the bronchoscope tip, the tangent parameters at the bronchoscope tip, and another point on the tangent are obtained.
In this embodiment, the bronchoscope end point A, the tangent parameters at the end point (slope k, bias b), and another point O on the tangent are obtained based on the non-zero detection performed after the contour detection of step S306 in the sheath segmentation.
Fig. 9 is a schematic diagram of the tangent line of the fitting line at the bronchoscope end point according to an embodiment of the present invention. As shown in fig. 9, the pixels are traversed to obtain all rightmost points of the white image, their x coordinates are retained, the points lying on the fitting line are screened out from these rightmost points, and the screened point is taken as the scope end point, namely the bronchoscope end point. The slope k is the tangential slope of the fitting line at that point, and the bias b is obtained from y = kx + b.
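For illustration only, the fitting and the tangent parameters (k, b) may be sketched with NumPy as follows, assuming a right-facing scope so that the rightmost x on the fitting line gives the end point; the polynomial degree is an assumption of this sketch:

    import numpy as np

    def tip_tangent(xs, ys, degree=3):
        # xs, ys: pixel-coordinate arrays of the segmented bronchoscope tip image.
        poly = np.poly1d(np.polyfit(xs, ys, degree))   # fitting line y = f(x)
        x_end = xs.max()                 # rightmost point for a right-facing scope
        y_end = poly(x_end)              # end point A on the fitting line
        k = poly.deriv()(x_end)          # tangent slope at the end point
        b = y_end - k * x_end            # bias from y = k*x + b
        return (x_end, y_end), k, b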
Step S311, adaptively adjusting the bias of the Otsu method to perform threshold segmentation.
In this embodiment, the Otsu bias is adaptively adjusted to perform threshold segmentation. Fig. 10 is a schematic diagram of the segmentation result of the sheath obtained by threshold segmentation according to an embodiment of the present invention; the segmentation result shown in fig. 10 may be obtained.
Step S312, a centroid list is obtained.
In this embodiment, for the segmentation result obtained in the above step S311, contour detection may be performed to obtain centroids massList of the respective contours.
Cx = M10 / M00, Cy = M01 / M00
Wherein Cx may be used to represent the abscissa of each centroid, i.e., the abscissa in the two-dimensional coordinate information, Cy may be used to represent the ordinate of each centroid, i.e., the ordinate in the two-dimensional coordinate information, and M may be used to represent a geometric moment of the image, where the subscripts denote the order of the moment and are non-negative integers; in particular, M10 may be used to represent the first-order horizontal moment, M01 the first-order vertical moment, and M00 the area of the contour in the image.
Step S313, according to the screening condition, the appointed centroid is screened out from the centroid list.
In this embodiment, the specified centroid D(x1, y1) is selected from the centroids in massList based on three conditions: the specified centroid must be a centroid within the circle Cm, within the threshold distance from the tangent, and furthest from the bronchoscope end point, wherein the circle Cm can be constructed by taking the point on the tangent line at a distance of the preset radius length r from the endoscope end point as the center O.
Fig. 11 is a schematic diagram of determining the end point of the sheath from a circle according to an embodiment of the present invention. As shown in fig. 11, the white circle is the circle Cm, and the point in the circle Cm satisfying the above three conditions is the specified centroid.
Fig. 12 (a) is a schematic view of a bronchoscope tip and a sheath tip according to an embodiment of the present invention; as shown in fig. 12 (a), it shows the respective positions of the sheath tip and the bronchoscope tip in the CArm image obtained at the Towards angle. Fig. 12 (b) is a schematic view of the spatial position of the sheath according to an embodiment of the present invention; as shown in fig. 12 (b), in the CArm image obtained at the Ccw angle, the portion between the distal end of the sheath and the distal end of the bronchoscope may be the spatial position of the sheath.
Step S204, calculating the spatial position of the portion of the sheath outside the airway.
In this embodiment, after the above step S202 is completed, the coordinates of the bronchoscope tip and the sheath tip in the two perspective views, Towards and Ccw, can be obtained, and their three-dimensional coordinates in space can be found from the two-dimensional coordinates in the two perspective views. Because the sheath, after exiting the airway, punctures directly toward the target point, the spatial position of the portion of the sheath outside the airway can be obtained from the three-dimensional spatial coordinates of the puncture point and the sheath tip.
In the embodiment of the present invention, the forward projection process is described first:
Assuming that the coordinates (xs, ys, zs) of the spatial point P (the sheath end point) are known, the process of solving for its coordinates projected onto the two-dimensional plane can be expressed as:
[X, Y, Z]ᵀ = mR · [xs, ys, zs]ᵀ + mT    (1)
Where mR may be used to represent the rotation matrix from CT to CArm and mT may be used to represent the translation matrix from CT to CArm, both of which may be determined during the patient registration phase.
Scaling equally (normalization) can be expressed as:
x' = X / Z, y' = Y / Z    (2)
The normalized coordinates are then converted into coordinates on the projection image: the x-axis and y-axis components of the normalized point are multiplied by the corresponding focal-length components of the image, and the x-axis and y-axis components of the image center point are then added, giving the converted image coordinate point:
u = x' · xfocal + xcenter, v = y' · yfocal + ycenter    (3)
Where xcenter may be used to represent the abscissa value of the optical center point of the imaging plane of the first image region, ycenter may be used to represent the ordinate value of the optical center point, xfocal may be used to represent the focal length in the horizontal direction of the camera capturing the CArm perspective view, and yfocal may be used to represent the focal length in the vertical direction.
Since the CArm introduces distortion during imaging, the distortion also needs to be calculated during projection, which can be expressed as:
x'' = Σ(i+j≤4) aij · x'^i · y'^j, y'' = Σ(i+j≤4) bij · x'^i · y'^j    (4)
Wherein aij and bij may be the 4th-order distortion parameters, representing the lateral distortion coefficients and the longitudinal distortion coefficients, respectively, which have already been obtained at the CArm calibration stage.
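For illustration only, the forward projection of equations (1) to (3) may be sketched as follows; the 4th-order distortion of equation (4) is omitted here, and the variable names mirror the symbols above:

    import numpy as np

    def project(P, mR, mT, xfocal, yfocal, xcenter, ycenter):
        # Equation (1): rigid transform from CT space into CArm camera space.
        X, Y, Z = mR @ np.asarray(P) + np.asarray(mT)
        # Equation (2): equal scaling onto the normalized plane.
        xn, yn = X / Z, Y / Z
        # Equation (4), the distortion in aij and bij, would be applied to
        # (xn, yn) here; it is omitted in this sketch.
        # Equation (3): conversion to image coordinates.
        return xn * xfocal + xcenter, yn * yfocal + ycenter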
FIG. 13 is a schematic diagram of back-projection calculation of spatial coordinates according to an embodiment of the present invention. As shown in FIG. 13, it is now known that a point in space projects to coordinates on two different planes, and its coordinates in space are required. According to the principle of back projection, if two rays in space intersect, there is exactly one intersection point, so the spatial coordinates can be obtained by back projection from the bronchoscope end coordinates and the sheath end coordinates in the Towards and Ccw views. Taking the sheath distal end as an example, its coordinates are known to be (u1, v1) in the Towards view and (u2, v2) in the Ccw view. Let the corresponding point P in space have coordinates (x, y, z); first, (u1, v1) and (u2, v2) are de-distorted according to the inverses of the above equations (3) and (4) to obtain (x'1, y'1) and (x'2, y'2).
The patient registration phase has already solved the rotation matrix and the translation matrix from CT to CArm at the Towards and Ccw angles, respectively:
Wherein mR_T may be used to represent the rotation matrix at the Towards angle; mT_T may be used to represent the translation matrix at the Towards angle; mR_C may be used to represent the rotation matrix at the Ccw angle; and mT_C may be used to represent the translation matrix at the Ccw angle.
From the above equations (1) and (2), the Towards view gives:
x'1 = (mR_T[1]·P + mT_T[1]) / (mR_T[3]·P + mT_T[3]), y'1 = (mR_T[2]·P + mT_T[2]) / (mR_T[3]·P + mT_T[3])    (5)
and the Ccw view likewise gives the corresponding relations for (x'2, y'2) with mR_C and mT_C    (6)
where mR[i] denotes the i-th row of the rotation matrix and mT[i] the i-th component of the translation matrix. Combining equations (5) and (6) above and multiplying out the denominators, the conversion yields the linear system:
(mR_T[1] - x'1·mR_T[3])·P = x'1·mT_T[3] - mT_T[1]
(mR_T[2] - y'1·mR_T[3])·P = y'1·mT_T[3] - mT_T[2]
(mR_C[1] - x'2·mR_C[3])·P = x'2·mT_C[3] - mT_C[1]
(mR_C[2] - y'2·mR_C[3])·P = y'2·mT_C[3] - mT_C[2]    (7)
By solving the above linear equation set (7), the coordinates (xs, ys, zs) of the sheath end point P can be obtained. The end point coordinates (xq, yq, zq) of the bronchoscope can be obtained by the same method, finally giving the start point coordinates and the end point coordinates of the portion of the sheath outside the airway.
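For illustration only, solving the linear equation set (7) may be sketched as follows; rows are 0-indexed in the code, the inputs are the de-distorted normalized coordinates of the same end point in the two views, and np.linalg.lstsq is used so that two nearly intersecting rays are resolved in the least-squares sense against noise:

    import numpy as np

    def rows_for_view(xn, yn, mR, mT):
        # xn*(r3.P + t3) = r1.P + t1 and yn*(r3.P + t3) = r2.P + t2,
        # rearranged into two linear equations in P = (x, y, z).
        A = np.vstack([mR[0] - xn * mR[2],
                       mR[1] - yn * mR[2]])
        b = np.array([xn * mT[2] - mT[0],
                      yn * mT[2] - mT[1]])
        return A, b

    def back_project(p_towards, p_ccw, mR_T, mT_T, mR_C, mT_C):
        A1, b1 = rows_for_view(*p_towards, mR_T, mT_T)
        A2, b2 = rows_for_view(*p_ccw, mR_C, mT_C)
        A, b = np.vstack([A1, A2]), np.concatenate([b1, b2])
        P, *_ = np.linalg.lstsq(A, b, rcond=None)   # stacked system (7)
        return P                                    # (xs, ys, zs)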
Step S206, calculating the distance and angle from the sheath tube end point to the focus point, and rendering on the airway tree to realize visual display.
In this embodiment, the distance and angle between the end point of the sheath and the focal point can be determined by the coordinates of the end of the bronchoscope and the end of the sheath, which have been obtained above, and the distance between the end of the bronchoscope and the POE point can also be determined. Thereby being capable of being rendered on the bronchial tree for visual display.
Optionally, the POE point (the puncture point on the bronchial wall) (xP, yP, zP) is read from the Lungpoint software, the focal point coordinates (xt, yt, zt) already obtained by the path planning node are acquired, and the distance between the coordinates of the point P and the focal point coordinates is calculated:
d1 = ‖Pt - Ps‖ = √((xt - xs)² + (yt - ys)² + (zt - zs)²)
Wherein d1 may be used to represent the distance between the sheath end point and the focal point, Pt = (xt, yt, zt) may be used to represent the focal point coordinates, and Ps = (xs, ys, zs) may be used to represent the coordinates of the sheath end point P.
Similarly, the distance between the bronchoscope tip and the POE point can be determined by the following formula:
d2 = ‖Pq - PP‖ = √((xq - xP)² + (yq - yP)² + (zq - zP)²)
Wherein d2 may be used to represent the distance of the bronchoscope tip from the POE point, and Pq = (xq, yq, zq) may be used to represent the coordinates of the bronchoscope tip point.
Optionally, the magnitude of the error between the current actual movement path and the pre-planned path, i.e., the degree of matching between the two, is determined from the magnitude of d2. A smaller d2 indicates a higher degree of matching between the actual movement path and the pre-planned path and a more accurate puncture. A larger d2 indicates a lower degree of matching and a lower puncture accuracy, so that corresponding adjustment is needed. Ideally, the POE point coincides with the bronchoscope tip point, i.e., the puncture is accurate and d2 = 0.
For another example, the angle between the line from the POE to the focal point and the line from the POE to the sheath tip (xs, ys, zs) can be determined by the following formula:
θ = arccos( ((Pt - PP) · (Ps - PP)) / (‖Pt - PP‖ · ‖Ps - PP‖) )
Wherein θ may be used to represent the angle between the POE-to-focal-point line and the POE-to-sheath-tip line, Pt may be used to represent the focal point coordinates, PP may be used to represent the POE point coordinates, and Ps may be used to represent the sheath tip point coordinates.
It should be noted that the above process and method for determining the angle and the distance in the azimuth information are only exemplary, and are not limited herein.
Fig. 14 is a schematic diagram of a visual display of a bronchial tree according to an embodiment of the present invention, as shown in fig. 14, POE, focal point, consumable tip, etc. may be visually displayed on the bronchial tree, to assist an operator in determining whether the consumable tip reaches the focal point, or to prompt a distance and an angle difference between a current position of the consumable tip and the focal point.
In embodiments of the present invention, an initial two-dimensional image of the endoscope during its extension into a physiological channel may be acquired from at least two angles. The first image region of the endoscope body and the second image region of the sheath can be segmented from the initial two-dimensional image. From the first image region, the coordinates of the endoscope tip point in two-dimensional space can be determined, and the coordinates can be converted into three-dimensional space to obtain a first target position. From the second image region, the coordinates of the sheath end point in two-dimensional space can be determined and converted into three-dimensional space to obtain the second target position. The distance and the angle between the sheath end point and the reference point can be determined through the first target position and the second target position, so that whether the sheath end point reaches the focus point is determined in real time. In this embodiment, by converting the coordinates from the two-dimensional space to the three-dimensional space, the positions of the endoscope distal end point and the sheath distal end point with respect to the reference point can be determined more accurately, the accuracy of positioning is improved, and more stereoscopic information including depth, angle, and the like can be provided, so that the positional relationship of the endoscope and the reference point can be understood more clearly, and the operations such as diagnosis and treatment can be guided more accurately. Compared with CArm perspective views relying on different angles for confirmation, the method provides richer three-dimensional information, reduces the limitation in two-dimensional space, and is beneficial to improving the accuracy and safety of diagnosis and treatment. By more intuitively knowing the distance and angle between the sheath tip point and the reference point, diagnostic and therapeutic procedures can be guided more accurately, errors are reduced, and success rate is improved. Thereby realizing the technical effect of improving the accuracy of determining the position of the sheath tube end point and solving the technical problem of low accuracy of determining the position of the sheath tube end point.
Example 3
The embodiment of the invention provides a device for determining the position of a sheath end point. It should be noted that the device for determining the position of a sheath end point according to the embodiment of the invention can be used to execute the method for determining the position of the sheath end point of fig. 1 provided by the embodiment of the invention. The following describes the device for determining the position of the distal end point of a sheath according to an embodiment of the present invention.
Fig. 15 is a schematic view of a device for determining the position of a distal end point of a sheath according to an embodiment of the present invention, which may include an acquisition unit 1502, a segmentation unit 1504, a first determination unit 1506, and a second determination unit 1508, as shown in fig. 15.
An acquisition unit 1502 for acquiring an initial two-dimensional image of an endoscope during extension into a physiological channel from at least two angles, wherein the endoscope comprises a sheath and an endoscope body.
A segmentation unit 1504 for segmenting a first image region including the endoscope body and a second image region including the sheath from the initial two-dimensional images, respectively.
The first determining unit 1506 is configured to determine a first target position of an endoscope distal point of the endoscope main body in the three-dimensional space based on the first image region, and determine a second target position of a sheath distal point of the sheath in the three-dimensional space based on the second image region.
A second determining unit 1508 configured to determine, based on the first target position and the second target position, azimuth information between a sheath distal point and a reference point, where the azimuth information is used to represent a distance and an angle between the sheath distal point and the reference point, and the reference point is used to represent a position where an abnormal state exists outside the physiological channel.
The position determining device of the sheath end point provided by the embodiment of the invention obtains initial two-dimensional images of an endoscope in the process of extending into a physiological channel from at least two angles through the obtaining unit 1502, respectively divides a first image area containing an endoscope main body and a second image area containing a sheath from the initial two-dimensional images through the dividing unit 1504, determines, through the first determining unit 1506, a first target position of the endoscope end point of the endoscope main body in a three-dimensional space based on the first image area and a second target position of the sheath end point of the sheath in the three-dimensional space based on the second image area, and determines azimuth information between the sheath end point and a reference point through the second determining unit 1508 based on the first target position and the second target position, thereby solving the technical problem of low accuracy of determining the position of the sheath end point and realizing the technical effect of improving the accuracy of determining the position of the sheath end point.
Optionally, the segmentation unit 1504 may include a first segmentation module configured to perform threshold segmentation on any one of the initial two-dimensional images to obtain a first image area, where the first image area includes each pixel point in any one of the initial two-dimensional images, where a pixel value corresponding to the pixel point is smaller than a preset first segmentation threshold, and a second segmentation module configured to perform threshold segmentation on any one of the initial two-dimensional images to obtain a second image area, where the second image area includes each pixel point in any one of the initial two-dimensional images, where a pixel value corresponding to the pixel point is smaller than a preset second segmentation threshold.
Optionally, the first determining unit 1506 may include a first acquiring module configured to acquire the updated first image area and determine orientation information of the endoscope body based on the updated first image area, and a first determining module configured to determine a first target position of the endoscope tip point in the three-dimensional space by combining the orientation information and the updated first image area.
Optionally, the first acquisition module may include a first processing sub-module configured to perform mask continuous processing on the first image area to obtain an updated first image area if the pixel point corresponding to the endoscope main body included in the first image area is in a discrete state, and a second processing sub-module configured to use the first image area as the updated first image area if the pixel point corresponding to the endoscope main body included in the first image area is in a continuous state.
Optionally, the first acquisition module may include a second processing sub-module configured to perform fitting processing on pixels belonging to the endoscope main body in the updated first image area to obtain a centerline fitting curve of the endoscope main body, and a first determining sub-module configured to determine orientation information based on coordinate information of any two pixels on the centerline fitting curve on a target coordinate axis.
Optionally, the first determining module may include a first obtaining sub-module configured to obtain two-dimensional coordinate information of an endoscope end point in the first image area based on the orientation information, the obtained centerline fitting curve, and pixel points belonging to the endoscope main body in the first image area, and a third processing sub-module configured to perform three-dimensional back projection processing on the obtained two-dimensional coordinate information of the endoscope end point in the first image area corresponding to different initial two-dimensional images, so as to obtain a first target position of the endoscope end point in a three-dimensional space.
Optionally, the first determining unit 1506 may include a second obtaining module configured to perform contour detection on the second image area to obtain a centroid data set corresponding to the sheath, where the centroid data set includes two-dimensional coordinate information of each centroid corresponding to the sheath in the second image area, a first screening module configured to screen each centroid included in the centroid data set according to a centerline fitting curve and an endoscope end point, a third obtaining module configured to use the screened designated centroid as the sheath end point and obtain two-dimensional coordinate information of the sheath end point in the second image area, and a first processing module configured to perform three-dimensional back projection processing on the obtained two-dimensional coordinate information of the sheath end point in the second image area corresponding to a different initial two-dimensional image to obtain a second target position of the sheath end point of the sheath in a three-dimensional space.
The first screening module can comprise a second obtaining sub-module, a second determining sub-module and a third determining sub-module, wherein the second obtaining sub-module is used for obtaining a tangent line of an endoscope tail end point on a central line fitting curve, the second determining sub-module is used for determining a corresponding circle center on the tangent line according to the direction information of the endoscope tail end point and an endoscope main body and a preset radius length, the third determining sub-module is used for constructing a target circle according to the circle center and the preset radius length, and screening each centroid contained in a centroid data set according to the target circle according to screening conditions to determine a specified centroid.
Optionally, the second determining unit 1508 may include a fourth obtaining module configured to obtain reference coordinate information of a reference point in a three-dimensional space and target coordinate information of a target point on a physiological channel in the three-dimensional space, where the target point is a puncture point determined on the physiological channel based on a planned path, a second determining module configured to determine a distance in the azimuth information based on the second target position and the reference coordinate information, and a third determining module configured to determine a first straight line from the target point to the reference point according to the target coordinate information and the reference coordinate information, and determine a second straight line from the target point to an end point of the sheath according to the target coordinate information and the second target position, and determine an angle between the first straight line and the second straight line as an angle in the azimuth information.
Optionally, the device can further comprise a third determining unit, configured to determine a distance between the target point and the end point of the endoscope according to the first target position and the target coordinate information, where the distance between the target point and the end point of the endoscope is used to determine a matching degree between an actual moving path of the endoscope in the physiological channel and the planned path.
The position determining device of the sheath end point may further include a processor and a memory, the units and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor includes a kernel, and the kernel fetches the corresponding program unit from the memory. One or more kernels may be provided, and the technical problem of low accuracy of determining the position of the sheath end point is solved by adjusting kernel parameters.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
Example 4
There is also provided, in accordance with an embodiment of the present invention, a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements a method of determining a position of a sheath end point.
Example 5
According to an embodiment of the present invention, there is further provided a processor, configured to execute a program, where the program executes a method for determining a position of a sheath end point during execution of the program.
Example 6
Fig. 16 is a schematic diagram of an electronic device for determining a position of a sheath end point according to an embodiment of the present invention, and as shown in fig. 16, the embodiment of the present invention further provides an electronic device, where the device includes a processor, a memory, and a program stored in the memory and executable on the processor, and the processor implements the steps in the above embodiments when executing the program.
Example 7
According to an embodiment of the present application, there is also provided a computer program product. Optionally, in this embodiment, the computer program product may include a computer program, where the computer program when executed by a processor implements the method for determining the position of a sheath end point according to the embodiment of the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory, such as Random Access Memory (RAM), and/or nonvolatile memory, such as Read Only Memory (ROM) or flash RAM, in a computer-readable medium. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit it. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, and the like that comes within the spirit and principles of the present application shall be included in the scope of the claims of the present application.