CN112258478A - Data processing method and pose precision verification system - Google Patents

Data processing method and pose precision verification system

Info

Publication number
CN112258478A
Authority
CN
China
Prior art keywords
medical image
postoperative
determining
image
implant
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011140891.2A
Other languages
Chinese (zh)
Inventor
王棋
谢永召
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baihui Weikang Technology Co Ltd
Original Assignee
Beijing Baihui Weikang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baihui Weikang Technology Co Ltd
Priority to CN202011140891.2A
Publication of CN112258478A

Abstract

The application provides a data processing method and a pose precision verification system, wherein the data processing method comprises the following steps: acquiring a preoperative medical image and a postoperative medical image of a tissue structure comprising at least part of a surgical object, wherein the postoperative medical image comprises an implanted implant; performing registration processing on the preoperative medical image and the postoperative medical image according to first gray information of the preoperative medical image and second gray information of the postoperative medical image to obtain registration transformation parameters which enable the same tissue structure in the preoperative medical image and the postoperative medical image to meet the preset matching degree requirement; and determining pose information of the implant in the postoperative medical image according to the registration transformation parameters, and determining pose deviation of the implant according to the pose information. The data processing method can determine pose deviations of the post-operative implant.

Description

Data processing method and pose precision verification system
Technical Field
The application relates to the technical field of medical equipment, in particular to a data processing method and a pose precision verification system.
Background
In modern medicine, surgery is an important means of protecting a patient's health. To achieve a better therapeutic effect, some operations require implanting an object into the patient's body for auxiliary treatment. Before such an operation the doctor must plan the implantation position, and the implant is then placed mainly by relying on the doctor's personal experience or on a surgical robot; if the actual implantation position coincides with, or deviates only slightly from, the preoperatively planned position, the operation is successful. However, after the object has been implanted, it is often found that the implantation position is inconsistent with or far from the preoperatively planned position, which means the operation has failed and must be re-performed as soon as possible. In the prior art, a doctor cannot accurately determine after the operation whether the implant lies at the planned position, and can only judge whether the operation succeeded from the patient's postoperative condition and from clinical experience, which easily delays the patient's treatment and leads to a therapeutic effect that falls short of expectations.
Disclosure of Invention
In order to at least partially solve the above problem, embodiments of the present application provide a data processing method and a pose accuracy verification system.
According to a first aspect of an embodiment of the present application, an embodiment of the present application provides a data processing method, including: acquiring a preoperative medical image and a postoperative medical image of a tissue structure comprising at least part of a surgical object, wherein the postoperative medical image comprises an implanted implant; performing registration processing on the preoperative medical image and the postoperative medical image according to first gray information of the preoperative medical image and second gray information of the postoperative medical image to obtain registration transformation parameters which enable the same tissue structure in the preoperative medical image and the postoperative medical image to meet the preset matching degree requirement; and determining pose information of the implant in the postoperative medical image according to the registration transformation parameters, and determining pose deviation of the implant according to the pose information.
According to a second aspect of the embodiments of the present application, there is also provided a pose accuracy verification system, including: a processor for transmitting and receiving control signals; a memory for storing at least one executable command for causing the processor to perform the operations of the data processing method as provided by the first aspect of the embodiments of the present application.
In the data processing method of the embodiments of the application, registration can be performed between the preoperative medical image and the postoperative medical image, the pose information of the implant in the postoperative medical image can be determined, and the pose deviation of the implant can be determined from that pose information. This solves the problem that a doctor cannot accurately determine after the operation whether the implant lies at the planned position and can only judge whether the operation succeeded from the patient's postoperative condition and clinical experience, which easily delays the patient's treatment and makes the therapeutic effect fall short of expectations.
Drawings
The drawings are only for purposes of illustrating and explaining the present application and are not to be construed as limiting the scope of the present application.
Fig. 1 is a flowchart illustrating steps of a data processing method according to a first embodiment of the present application;
fig. 2 is a flowchart illustrating steps of a data processing method according to a second embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the embodiments of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application shall fall within the scope of the protection of the embodiments in the present application.
Example one
Referring to fig. 1, a flowchart illustrating steps of a data processing method according to a first embodiment of the present application is shown.
In the embodiment of the application, the data processing method comprises the following steps:
step S102: pre-operative medical images and post-operative medical images of a tissue structure comprising at least a portion of a surgical object are acquired, wherein the post-operative medical images include an implanted implant.
In modern medicine, a medical image is an image formed by scanning or imaging a human body, or part of it, with medical imaging equipment; common examples include CT (Computed Tomography) and MRI (Magnetic Resonance Imaging). Because different tissues of the human body generally appear differently in a medical image — for example, the gray values of the pixel points corresponding to bone tissue, skin, muscle and so on generally differ — a doctor can, by analyzing the information in a medical image, determine the relative positions of the physiological tissues inside the body and whether any health problems exist. In addition, an implant also appears differently in a medical image from human tissue. Therefore, in the many surgical scenarios whose diagnosis and treatment require medical images, when an implant for therapeutic use is placed into a patient's body, medical images play a decisive role in preoperative planning and postoperative verification of the operation.
In this embodiment, the preoperative medical image is a medical image taken of the patient before the implantation operation is performed; it shows the internal state of the patient's body before any implant has been placed, and from it the doctor can plan the implantation position. The postoperative medical image is a medical image taken after the implant has been placed into the patient; it shows the internal state of the patient's body with the implant in place, and because it contains the image of the implant, the doctor can obtain the state of the implant inside the patient's body from it.
Therefore, acquiring the preoperative medical image and the postoperative medical image of a tissue structure including at least part of the surgical object makes sufficient preparation for determining the pose deviation of the implant between before and after the operation.
The types of the preoperative medical image and the postoperative medical image are not limited in the embodiments of the present application; for example, both may be CT, both may be MRI, or the two may be of different modalities, i.e. one CT and the other MRI. Of course, CT and MRI are only two common representatives of medical images and are not intended to be limiting; other types of medical images also fall within the scope of the embodiments of the present application.
The embodiment of the application does not limit the way of implanting the implant into the body of the patient, and the implant can be implanted by an operation robot or can be implanted by a doctor according to experience.
Step S104: performing registration processing on the preoperative medical image and the postoperative medical image according to the first gray scale information of the preoperative medical image and the second gray scale information of the postoperative medical image, to obtain registration transformation parameters that enable the same tissue structure in the preoperative medical image and the postoperative medical image to meet the preset matching degree requirement.
Like other images, medical images are composed of pixels, and each therefore contains gray scale information consisting of pixel points and the gray values corresponding to them. A pixel point can be represented by its coordinates in the medical image, or a number can be assigned to each pixel point in advance and the point represented by that number. Because the preoperative and postoperative medical images are taken at different times and at different angles relative to the patient, errors are inevitable if the two images are analyzed directly against each other. Therefore, in the embodiment of the present application, registration processing is performed on the preoperative medical image and the postoperative medical image according to the first gray scale information of the preoperative medical image and the second gray scale information of the postoperative medical image, obtaining registration transformation parameters that enable the same tissue structure in the two images to meet the predetermined matching degree requirement. The pose of the implant in the postoperative medical image can then be transformed into the coordinate system of the preoperative medical image according to the registration transformation parameters, and the deviation of the implant determined.
In this embodiment, the predetermined matching degree is satisfied when the points corresponding to the same tissue structure in the pre-operation medical image and the post-operation medical image are aligned, that is, the deviation between the two points is smaller than a predetermined value.
In the embodiment of the present application, the registration processing may convert the preoperative medical image and the postoperative medical image into the same three-dimensional image coordinate system and place the pixel points of the postoperative medical image in one-to-one correspondence with those of the preoperative medical image according to a coordinate transformation expression involving the registration transformation parameters, so that the two images match. The manner of this registration processing is not limited in this embodiment; any method that registers the images correctly should be considered within the scope of this embodiment.
Step S106: determining pose information of the implant in the postoperative medical image according to the registration transformation parameters, and determining the pose deviation of the implant according to the pose information.
The obtained registration transformation parameters meet the preset matching degree requirement, so an image suitable for determining the pose deviation of the implant can be obtained by transforming the pixel points of the postoperative medical image according to the registration transformation parameters. This image contains all the features of the postoperative medical image, so the pose information of the implant can be determined from it. In this embodiment, the pose information includes, but is not limited to, the positions of certain fixed points of the implant and the direction of the line between certain fixed points; the fixed points may be, for example, the top point and the end point of the implant.
The determined pose information of the implant is compared with the implant position planned by the doctor in the preoperative medical image, yielding the deviation between the postoperative implant position and the preoperatively planned position, i.e. the pose deviation of the implant. In this embodiment, the pose deviation may include, among other things, the distances between certain fixed points of the implant and the corresponding points of the preoperatively planned position; this is not limited here. In actual use, the calculation of the pose deviation can obviously be determined flexibly from the pose information according to the doctor's experience, so as to better determine the postoperative implantation state of the implant and safeguard the patient's health.
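To make this comparison concrete, the sketch below computes per-point distances and an axis angle from corresponding implant fixed points. The function names and the choice of these two metrics are illustrative assumptions, not the patent's prescribed calculation.

```python
import numpy as np

def pose_deviation(post_pts, planned_pts):
    # Per-point Euclidean distances between implant fixed points measured in
    # the registered post-operative image and the corresponding points of the
    # pre-operative plan (both expressed in the pre-operative coordinate frame).
    post = np.asarray(post_pts, dtype=float)
    planned = np.asarray(planned_pts, dtype=float)
    return np.linalg.norm(post - planned, axis=1)

def axis_angle_deg(post_pts, planned_pts):
    # Angle (degrees) between the implant axes, taking the first and last fixed
    # points of each set as the axis endpoints (e.g. top point and end point).
    a = np.asarray(post_pts[-1], dtype=float) - np.asarray(post_pts[0], dtype=float)
    b = np.asarray(planned_pts[-1], dtype=float) - np.asarray(planned_pts[0], dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

An implant translated by 1 mm along the planned axis, for instance, would show a per-point deviation of 1 mm and an axis angle of 0.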
Therefore, with the data processing method of this embodiment, registration can be performed between the preoperative medical image and the postoperative medical image, the pose information of the implant in the postoperative medical image can be determined, and the pose deviation of the implant can be determined from that pose information. This solves the problem that a doctor cannot accurately determine after the operation whether the implant lies at the planned position and can only judge whether the operation succeeded from the patient's postoperative condition and clinical experience, which easily delays treatment and makes the therapeutic effect fall short of expectations.
Example two
Referring to fig. 2, a flowchart of steps of a data processing method according to a second embodiment of the present application is shown.
In this embodiment, the data processing method includes the steps of:
step S202: pre-operative medical images and post-operative medical images of a tissue structure comprising at least a portion of a surgical object are acquired, wherein the post-operative medical images include an implanted implant.
This step is similar to the step S102 in the first embodiment, and is not described again.
Step S204: and performing registration processing on the preoperative medical image and the postoperative medical image according to the first gray scale information of the preoperative medical image and the second gray scale information of the postoperative medical image to obtain registration transformation parameters which enable the same tissue structure in the preoperative medical image and the postoperative medical image to meet the preset matching degree requirement.
In this embodiment, the registration processing mainly performs rigid transformation on the postoperative medical image multiple times using the first gray scale information of the preoperative medical image and the second gray scale information of the postoperative medical image, finally obtaining the relevant registration transformation parameters. The registration transformation parameters in the embodiment of the present application therefore include three translation parameters for translating the postoperative medical image along the x, y and z axes and three rotation parameters for rotating it about the x, y and z axes.
In the embodiment of the present application, the preoperative medical image is taken as the fixed image F and the postoperative medical image as the floating image M, and a rigid transformation of M is denoted (R, t), where R is the rotation parameter and t is the translation parameter. With the similarity measure denoted S(·), the registration process can be expressed as:
(R*, t*) = argmax(R, t) S(F, M(R, t))
This process finally yields the registration transformation parameters that meet the matching degree requirement, namely the three translation parameters of the postoperative medical image along the x, y and z axes and its three rotation parameters about the x, y and z axes.
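As a sketch, the six parameters can be assembled into a single homogeneous matrix. The Rz·Ry·Rx composition order used here is one common convention and an assumption, since the text does not fix an order.

```python
import numpy as np

def rigid_matrix(rx, ry, rz, tx, ty, tz):
    # Build a 4x4 homogeneous rigid transform from the six registration
    # parameters: rotations (radians) about the x, y, z axes and translations
    # along them. Composition order Rz @ Ry @ Rx is assumed.
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # rotation part R
    T[:3, 3] = (tx, ty, tz)    # translation part t
    return T
```

Applying T to homogeneous voxel coordinates of the floating image M realizes one candidate transformation M(R, t) during the search.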
Specifically, the step S204 includes the following sub-steps:
Sub-step S2041: determining third gray scale information of the transformed postoperative image obtained by rigid transformation, according to the second gray scale information of the postoperative medical image and the current transformation parameter.
As can be seen from the above, the object of the rigid transformation is mainly the floating image, i.e. the postoperative medical image. Therefore, each rigid transformation is performed according to the second gray scale information of the postoperative medical image and the current transformation parameter, yielding a transformed floating image (i.e. the transformed postoperative image) and hence the third gray scale information of that image. The third gray scale information indicates the third gray value of each third pixel point of the transformed postoperative image; each third pixel point can be represented by its coordinates.
Sub-step S2042: determining the current matching degree of the same tissue structure in the preoperative medical image and the transformed postoperative image according to the first gray information of the preoperative medical image and the third gray information of the transformed postoperative image.
In the preoperative medical image and the transformed postoperative image, the first gray scale information comprises the first gray value of each first pixel point in the preoperative medical image, and the third gray scale information comprises the third gray value of each third pixel point in the transformed postoperative image.
In the embodiment of the present application, the mutual information of the preoperative medical image and the transformed postoperative image (denoted MI(F, M)) is used as the similarity measure, i.e. the current matching degree. The magnitude of the mutual information measures the current matching degree of the same tissue structure in the two images: the larger the mutual information, the higher the matching degree, i.e. the better the registration result, and when the mutual information is maximal the two images match best. In this embodiment, the mutual information is calculated with the following formula:
MI(F,M)=H(F)+H(M)-H(F,M)
where MI(F, M) is the mutual information, H(F) is the first edge entropy of the preoperative medical image, H(M) is the third edge entropy of the transformed postoperative image, and H(F, M) is the joint entropy of the preoperative medical image and the transformed postoperative image.
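A minimal numerical sketch of this formula, estimating MI(F, M) from a joint gray-value histogram of two same-shaped images; the histogram bin count is an assumption, as the text does not prescribe one.

```python
import numpy as np

def entropy(p):
    # Shannon entropy (base 2) of a probability array, skipping zero entries.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(fixed, moving, bins=32):
    # MI(F, M) = H(F) + H(M) - H(F, M), estimated from the joint gray-value
    # histogram of two images with identical shape.
    h, _, _ = np.histogram2d(np.ravel(fixed), np.ravel(moving), bins=bins)
    p_fm = h / h.sum()
    p_f = p_fm.sum(axis=1)  # marginal distribution of the fixed image
    p_m = p_fm.sum(axis=0)  # marginal distribution of the moving image
    return entropy(p_f) + entropy(p_m) - entropy(p_fm)
```

For two identical images the joint entropy equals each marginal entropy, so MI(F, M) reduces to H(F), its maximum for that image.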
Thus, specifically, sub-step S2042 is divided into the following sub-processes:
Sub-process one: determining a first edge entropy of the preoperative medical image according to the first gray value of each first pixel point.
In this embodiment, the first edge entropy is calculated using the first gray value of each first pixel point; for convenience of description, a simplified example is used below.
Assuming that the gray values of the three first pixel points f1, f2 and f3 of the preoperative medical image F are a, b and c respectively, the first edge probability can be calculated, where the first edge probability is the ratio of the number of occurrences of a given gray value to the total number of first pixel points. Since the preoperative medical image F in this example contains 3 different gray values, the first edge probability corresponding to each gray value is: for gray value a, P1 = 1/3; for gray value b, P2 = 1/3; for gray value c, P3 = 1/3. If instead the f1, f2 and f3 gray values are a, a and c respectively, the first edge probability of each gray value is: for gray value a, P1 = 2/3; for gray value c, P2 = 1/3.
Then the first edge entropy is calculated according to the formula:
H(F) = -Σi Pi*log(Pi),
where the logarithm is generally taken to base 2.
For example: if the f1, f2 and f3 gray-scale values are a, b and c respectively, P1 is 1/3, P2 is 1/3, and P3 is 1/3, then:
H(F)=-(1/3)*log(1/3)–(1/3)*log(1/3)–(1/3)*log(1/3);
if the f1, f2 and f3 gray-scale values are a, a and c respectively, P1 is 2/3 and P2 is 1/3, then:
H(F)=-(2/3)*log(2/3)–(1/3)*log(1/3)。
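The edge-entropy computation illustrated above can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def edge_entropy(gray_values):
    # Edge (marginal) entropy H = -sum_i Pi * log2(Pi), where Pi is the
    # fraction of pixel points that take gray value i.
    _, counts = np.unique(np.ravel(gray_values), return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))
```

For the worked example with gray values a, a and c, this reproduces -(2/3)*log(2/3) - (1/3)*log(1/3).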
Sub-process two: determining a third edge entropy of the transformed postoperative image according to the third gray value of each third pixel point.
In this embodiment, the third edge entropy is calculated using the third gray value of each third pixel point; for convenience of description, a simplified example is used below.
Assuming that the gray values of the three third pixel points m1, m2 and m3 of the floating image, i.e. the transformed postoperative image M, are d, e and g respectively, the third edge probability can be calculated as: for gray value d, P1 = 1/3; for gray value e, P2 = 1/3; for gray value g, P3 = 1/3. If instead the m1, m2 and m3 gray values are d, d and g respectively, the third edge probability is: for gray value d, P1 = 2/3; for gray value g, P2 = 1/3.
Then the third edge entropy is calculated according to the formula:
H(M) = -Σj Pj*log(Pj),
where the logarithm is generally taken to base 2 (the calculation is analogous to that of H(F) and is not repeated).
Sub-process three: determining the joint entropy of the preoperative medical image and the transformed postoperative image according to the first gray value of each first pixel point and the third gray value of each third pixel point.
In this embodiment, the joint entropy of the pre-operative medical image and the transformed post-operative image is calculated using the following formula:
H(F,M) = -Σ(f,m) P(f,m)*log(P(f,m))
wherein P (f, m) is the joint probability distribution of the preoperative medical image and the transformed postoperative image, and (f, m) is the pixel gray value pair of the same position of the two images.
Specifically, sub-process three includes the following sub-stage A, sub-stage B and sub-stage C.
Sub-stage A: determining a joint distribution histogram of the preoperative medical image and the transformed postoperative image according to the first gray value of each first pixel point and the third gray value of each third pixel point.
In the embodiment of the present application, the joint distribution histogram h (f, m) of the preoperative medical image and the transformed postoperative image is calculated by using each first gray value and each third gray value, and in this embodiment, the joint distribution histogram may count the occurrence times of different values of the pixel gray value pair (f, m) at the same position of the two images, so as to calculate the joint probability distribution based on the joint distribution histogram.
In order to reduce the complexity of the calculation and increase the operation speed, the Parzen window algorithm is used to estimate the joint distribution histogram in the present embodiment. Of course, the method for obtaining the joint distribution histogram is not limited in the embodiments of the present application, and other algorithms may be used to generate or estimate the joint distribution histogram.
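A sketch of building the joint distribution histogram. The text names the Parzen-window method without specifying a kernel, so the optional smoothing kernel below is an assumption, not the patent's choice.

```python
import numpy as np

def joint_histogram(fixed, moving, bins=32, smooth=False):
    # Joint gray-value histogram h(f, m): counts, for each pair of gray-value
    # bins, how often the pair (f, m) occurs at the same position in the
    # two same-shaped images.
    h, _, _ = np.histogram2d(np.ravel(fixed), np.ravel(moving), bins=bins)
    if smooth:
        # Crude separable smoothing standing in for the Parzen-window
        # estimate; this triangular kernel is an assumption.
        k = np.array([0.25, 0.5, 0.25])
        h = np.apply_along_axis(np.convolve, 0, h, k, mode="same")
        h = np.apply_along_axis(np.convolve, 1, h, k, mode="same")
    return h
```

For two identical images the counts concentrate on the histogram diagonal, which is exactly the best-match situation the registration seeks.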
Sub-stage B: determining the joint probability distribution of the preoperative medical image and the transformed postoperative image according to the joint distribution histogram.
The joint probability distribution is calculated according to the following formula:
P(f,m) = h(f,m) / Σ(i,j) h(i,j)
Sub-stage B is illustrated with a simplified example. Suppose the gray values of the three first pixel points f1, f2 and f3 of the preoperative medical image F are a, b and c respectively, and the gray values of the three third pixel points m1, m2 and m3 of the transformed postoperative image M are d, e and g respectively, where f1, f2 and f3 correspond to m1, m2 and m3 respectively. Then (f, m) takes the values (a, d), (b, e) and (c, g), and h(f, m) takes the values h(a, d), h(b, e) and h(c, g). Counting in the joint distribution histogram gives h(a, d) = 1, h(b, e) = 1, h(c, g) = 1 and Σ h(i, j) = 3, so in this simplified example the joint probability distribution is P(a, d) = 1/3, P(b, e) = 1/3 and P(c, g) = 1/3.
Of course, the above simplified example is only for convenience of description; the actual situation is far more complex, but the method of this embodiment can still calculate the joint probability distribution accurately and quickly.
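The normalization from counts h(f, m) to probabilities P(f, m) can be sketched as:

```python
import numpy as np

def joint_probability(joint_hist):
    # P(f, m) = h(f, m) / sum over (i, j) of h(i, j)
    h = np.asarray(joint_hist, dtype=float)
    return h / h.sum()
```

Applied to the worked example — a 3x3 histogram with h(a,d) = h(b,e) = h(c,g) = 1 — this yields P = 1/3 for each pair.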
Sub-stage C: determining the joint entropy of the preoperative medical image and the transformed postoperative image according to the joint probability distribution.
From the joint probability distribution found above, the joint entropy of the preoperative medical image and the transformed postoperative image is calculated according to the following formula, where the logarithm is generally taken to base 2:
H(F,M) = -Σ(f,m) P(f,m)*log(P(f,m))
For example: when P (a, d) is 1/3, P (b, e) is 1/3, and P (c, g) is 1/3:
H(F,M)=-(1/3)*log(1/3)–(1/3)*log(1/3)–(1/3)*log(1/3)。
Therefore, through sub-processes one and two and the three sub-stages of sub-process three, the joint entropy of the preoperative medical image and the transformed postoperative image after each rigid transformation can be obtained quickly and accurately, making the registration both accurate and fast.
Sub-process four: determining the current matching degree of the same tissue structure in the preoperative medical image and the transformed postoperative image according to the first edge entropy, the third edge entropy and the joint entropy.
Based on the first edge entropy H (F), the third edge entropy H (M), and the joint entropy H (F, M) obtained above, the mutual information formula may be utilized:
MI(F,M)=H(F)+H(M)-H(F,M),
and solving to obtain mutual information of the current preoperative medical image and the transformed postoperative image, and determining the current matching degree of the same tissue structure in the preoperative medical image and the transformed postoperative image according to the mutual information.
Sub-step S2043: determining whether the current matching degree meets the optimal matching degree, and obtaining the registration transformation parameters according to the determination result.
In this embodiment, the mutual information between the preoperative medical image and the transformed postoperative image is calculated and used as the matching degree. To determine whether the optimal matching degree is satisfied, the situation in which the mutual information is maximal must be found. Therefore, in this embodiment, an objective function related to the mutual information can be established and optimized, i.e. the optimal parameters of the objective function are sought. This is implemented with a stochastic gradient descent method: the objective function is defined as the negative of the mutual information, so that when the objective function value is reduced to its minimum the mutual information attains its maximum, yielding the optimal parameters. The objective function is:
J(θ1, θ2, ..., θ6) = -MI(F, M(θ1, θ2, ..., θ6)),
where θ1, θ2, ..., θ6 are the parameters to be solved and iterated (i.e. the current transformation parameters); when they take their optimal values they are the registration transformation parameters of this embodiment, and the registration result can be obtained by transforming the postoperative medical image with these six optimal parameters.
Specifically, the sub-step S2043 includes the following sub-processes:
Sub-process I: obtaining the previous matching degree corresponding to the rigid transformation before the current one.
In this embodiment, since rigid transformation must be performed multiple times and the mutual information must be calculated from the transformed medical image after each rigid transformation, there is no previous rigid transformation at the time of the first one, and hence no previous mutual-information calculation, i.e. no previous matching degree. In this case, therefore, the mutual information calculated from the gray values of the current postoperative medical image and the preoperative medical image is used as the previous matching degree for the next rigid transformation.
After the second rigid transformation, a previous matching degree exists. At this time, the current matching degree is compared with the previous matching degree: if the difference (taken as an absolute value) is smaller than or equal to a set value (the set value can be determined as required, such as 0, 0.1, 0.5, or 1), the sub-process II is executed; if the difference is larger than the set value, the sub-process III is executed.
And (2) a sub-process II: and if the difference value between the current matching degree and the previous matching degree is less than or equal to a set value, taking the current transformation parameter corresponding to the current matching degree as the registration transformation parameter.
When the difference between the objective function values before and after two successive rigid transformations is smaller than or equal to the set value, the mutual information between the current transformed image and the preoperative medical image has attained its maximum or met the required result, and the current transformation parameters corresponding to the current mutual information are taken as the registration transformation parameters. The postoperative medical image is then registered according to the registration transformation parameters to obtain the registration result. In an optional implementation of this embodiment, the set value is 0; when the difference between the objective function values before and after two rigid transformations equals this set value, the mutual information between the transformed postoperative image and the preoperative medical image attains its maximum, and the current transformation parameters corresponding to the current mutual information are used as the registration transformation parameters to register the postoperative medical image, so that the registration result with the highest matching degree can be obtained.
Subflow III: and if the difference value between the current matching degree and the previous matching degree is larger than a set value, determining a new current transformation parameter according to the current transformation parameter, the partial derivative of the current matching degree and the updating step length.
In this embodiment, if the difference between the objective function values (i.e., the mutual information values) before and after two successive rigid transformations is greater than the set value, the optimization has not yet been completed, and further iteration is required. In the present embodiment, each registration parameter θi (i = 1, …, 6) is iterated according to the following formula:
θi ← θi − α · ∂J/∂θi,  i = 1, …, 6,

wherein ∂J/∂θi is the partial derivative of the objective function with respect to the parameter θi (i = 1, …, 6), and α is the learning rate, which is also the update step size of each gradient descent step. When parameter iteration is performed, all six parameters are reassigned in each round. Taking θ1 as an example, the formula means that the update step size multiplied by the partial derivative of the objective function with respect to θ1 is subtracted from the value of θ1 used in the current rigid transformation, and the result is used as the parameter value for the next rigid transformation, i.e., the new current transformation parameter.
The update step size α is generally set according to experience and practical requirements and should be neither too large nor too small: too large a step may cause the optimal registration parameters to be skipped during iteration, while too small a step makes the iteration too slow and reduces the processing speed.
After the sub-process iii is executed, a new current transformation parameter is obtained, and then the step S2041 is executed: and determining third gray scale information of the transformed postoperative image obtained by rigid transformation according to the second gray scale information of the postoperative medical image and the current transformation parameter.
Finally, when the difference between the objective function values of two successive iterations, i.e., the difference in mutual information, is smaller than or equal to the set value, the iteration stops. The transformation parameters corresponding to the mutual information at that moment are taken as the registration transformation parameters, and the postoperative medical image is registered according to the registration transformation parameters to obtain the registration result.
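Sub-processes I–III can be sketched as a gradient-descent loop. This is a hedged illustration under stated assumptions: the patent does not specify how the partial derivatives are obtained, so a numerical central-difference gradient and a generic objective callable `J` are used here.

```python
import numpy as np

def register(theta0, J, alpha=0.01, tol=1e-6, eps=1e-4, max_iter=500):
    """Gradient descent on J(theta); stops when |J_k - J_{k-1}| <= tol (sub-process II)."""
    theta = np.asarray(theta0, dtype=float)
    prev = J(theta)                                  # "previous matching degree" (negated MI)
    for _ in range(max_iter):
        # numerical partial derivatives dJ/d(theta_i), i = 1..6 (central differences)
        grad = np.array([(J(theta + eps * e) - J(theta - eps * e)) / (2 * eps)
                         for e in np.eye(len(theta))])
        theta = theta - alpha * grad                 # reassign all six parameters each round
        cur = J(theta)
        if abs(cur - prev) <= tol:                   # converged: keep current parameters
            break
        prev = cur                                   # sub-process III: iterate again
    return theta
```

In practice the objective would be the negative mutual information of the preoperative image and the rigidly transformed postoperative image; here any smooth function of six parameters can be minimised the same way.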
Step S206: and determining pose information of the implant in the postoperative medical image according to the registration transformation parameters, and determining pose deviation of the implant according to the pose information.
Specifically, the step S206 includes the following sub-steps:
substep S2061: and obtaining a final registration image according to the registration transformation parameters.
The final registered image is obtained according to the obtained registration transformation parameters; this final registered image is the registration result. Since the final registered image contains all the features of the postoperative medical image, the pose information of the implant can be determined from it.
Substep S2062: performing a segmentation process on the final registered image to obtain an implant contour in the post-operative medical image.
Specifically, the substep S2062 may comprise: and adjusting the gray value range of the final registration image, carrying out binarization on the adjusted final registration image according to the gray value range, and obtaining the implant contour in the postoperative medical image according to the binarization result.
Since the gray value of the implant in the final registered image differs greatly from that of human tissue and is generally specific, in this embodiment the implant is segmented so that its pose accuracy can be better determined. The gray value range of the corresponding position in the final registered image can be adjusted by adjusting the window width and the window level of the final registered image according to the actual implant condition; the image is then binarized according to this gray value range, so that the gray value of the implant contour in the final registered image is set to 0 or 255, and the implant contour of the postoperative medical image is thus obtained from the final registered image.
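As an illustration only, the window/level binarization described above might look like the following sketch. The threshold values are hypothetical: actual values depend on the implant material and the imaging modality (metal implants are far brighter than tissue in CT, for example).

```python
import numpy as np

def segment_implant(image, window_center=1500.0, window_width=1000.0):
    """Binarise a registered image so that only implant-range gray values remain.

    window_center / window_width are illustrative placeholders for the window
    level and window width chosen according to the actual implant condition.
    """
    lo = window_center - window_width / 2.0
    hi = window_center + window_width / 2.0
    mask = (image >= lo) & (image <= hi)             # gray values inside the implant range
    return np.where(mask, 255, 0).astype(np.uint8)   # implant pixels set to 255, the rest to 0
```

The resulting binary image is what the first target point and first entry point are read from in the next sub-step.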
Of course, other methods may be used to segment the implant contour, and this embodiment is not limited thereto.
Substep S2063: the method comprises the steps of obtaining postoperative actual position points and preoperative planning position points of the implant according to the contour of the implant, determining pose information according to the postoperative actual position points and the preoperative planning position points, and determining pose deviation of the implant according to the pose information.
The post-operation actual position points comprise a first target point and a first entry point, the first target point refers to the position of a corresponding pixel point of the tail end point of the implant in the image of the implant contour, the first entry point refers to the position of a corresponding pixel point of the top end point of the implant in the image of the implant contour, and the first target point and the first entry point can be directly determined from the image of the binarized implant contour. The first target point corresponds to the actual implantation site.
The preoperative planned position points comprise a second target point and a second entry point. The second target point refers to the position of the implantation-position pixel point selected in the preoperative medical image when the terminal point of the implant is planned before the operation, and the second entry point refers to the position of the implantation-position pixel point selected in the preoperative medical image when the top point of the implant is planned before the operation. The second target point and the second entry point correspond to the final registered image, from which they can be determined based on shape matching of the implant. The second target point corresponds to the planned implantation location.
Therefore, the pose information can be determined from the postoperative actual position points and the preoperative planned position points.
The pose deviation in this embodiment includes a position deviation and an attitude deviation. The position deviation refers to the distance deviation between the position where the implant is actually implanted and the position where it was planned to be implanted, and the attitude deviation refers to the angular deviation between the direction in which the implant is actually implanted and the direction in which it was planned to be implanted. This embodiment can determine the pose deviation from the pose information.
Specifically, the sub-step S2063 includes the following sub-processes:
subflow (1): and calculating the distance between the two coordinates according to the coordinates of the first target point in the postoperative actual position point and the coordinates of the second target point in the preoperative planning position point so as to determine the position deviation.
In this embodiment, if the first target point coordinate is Tp′(xt′, yt′, zt′) and the second target point coordinate is Tp(xt, yt, zt), the distance Errp between the two coordinates can be calculated by the following formula:

Errp = ||(xt − xt′, yt − yt′, zt − zt′)||2
The distance between the two coordinate points is the two-norm of their difference, which corresponds to the distance deviation between the actual implantation position of the implant and the planned implantation position, so the position deviation can be determined from it.
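The two-norm computation of sub-process (1) can be sketched in a few lines (the function name is illustrative):

```python
import numpy as np

def position_deviation(tp_actual, tp_planned):
    """Err_p = ||Tp' - Tp||_2: Euclidean distance between actual and planned target points."""
    return float(np.linalg.norm(np.asarray(tp_actual, dtype=float) -
                                np.asarray(tp_planned, dtype=float)))
```

The returned distance is in the same units as the image coordinates (e.g., millimetres for calibrated medical images).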
Subflow (2): determining a first direction vector according to the coordinate of a first target point in the post-operation actual position point and the coordinate of a first entry point; determining a second direction vector according to the coordinates of a second target point and the coordinates of a second entry point in the preoperative planning position points; calculating an included angle between the first direction vector and the second direction vector to determine attitude deviation; and determining the pose deviation according to the position deviation and the attitude deviation.
In this embodiment, if the first entry point coordinate is Ep′(xe′, ye′, ze′) and the second entry point coordinate is Ep(xe, ye, ze), the first direction vector can be obtained:

d1 = (xt′ − xe′, yt′ − ye′, zt′ − ze′)
the first direction vector corresponds to the direction in which the implant is actually implanted.
Second direction vector:

d2 = (xt − xe, yt − ye, zt − ze)
the second direction vector corresponds to the direction of the planned implant.
The included angle Errd between the first direction vector and the second direction vector can be obtained according to the following vector angle formula:

Errd = arccos( (d1 · d2) / (||d1|| · ||d2||) ),

where d1 and d2 are the first and second direction vectors, respectively.
This angle corresponds to the angular deviation between the actual implantation direction of the implant and the planned implantation direction, so the attitude deviation can be determined from it.
Therefore, the pose deviation can be determined according to the position deviation and the attitude deviation.
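The angle computation of sub-process (2) can be sketched as follows (illustrative only; clipping the cosine guards against floating-point values slightly outside [−1, 1]):

```python
import numpy as np

def attitude_deviation(target_actual, entry_actual, target_planned, entry_planned):
    """Err_d: angle in degrees between the actual and planned implantation directions."""
    d1 = np.asarray(target_actual, dtype=float) - np.asarray(entry_actual, dtype=float)
    d2 = np.asarray(target_planned, dtype=float) - np.asarray(entry_planned, dtype=float)
    cos_angle = np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    # clip guards arccos against round-off values marginally outside its domain
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
```

Together with the position deviation of sub-process (1), this angle gives the full pose deviation of the implant.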
Therefore, the data processing method in this embodiment can perform registration according to the preoperative medical image and the postoperative medical image, determine the pose information of the implant in the postoperative medical image, and determine the pose deviation of the implant according to the pose information. This solves the problem that a doctor cannot accurately determine after an operation whether the implant is in the planned position and can judge whether the operation succeeded only from the patient's postoperative performance and clinical experience, which easily delays the patient's treatment and makes the treatment effect difficult to meet expectations.
EXAMPLE III
In this embodiment of the application, a pose accuracy verification system is provided, comprising: a processor for transmitting and receiving control signals; and a memory for storing at least one executable command, the executable command causing the processor to execute the operations of the above data processing method.
Therefore, since the memory in the pose accuracy verification system of this embodiment stores at least one executable command that causes the processor to execute the operations of the data processing method, registration can be performed according to the preoperative medical image and the postoperative medical image, the pose information of the implant in the postoperative medical image can be determined, and the pose deviation of the implant can be determined according to the pose information. This solves the problem that a doctor cannot accurately determine after an operation whether the implant is in the planned position and can judge whether the operation succeeded only from the patient's postoperative performance and clinical experience, which easily delays the patient's treatment and makes the treatment effect difficult to meet expectations.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the embodiments of the present application, and are not limited thereto; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (11)

1. A data processing method, comprising:
acquiring a preoperative medical image and a postoperative medical image of a tissue structure comprising at least a portion of a surgical object, wherein the postoperative medical image comprises an implanted implant;
performing registration processing on the preoperative medical image and the postoperative medical image according to the first gray information of the preoperative medical image and the second gray information of the postoperative medical image to obtain registration transformation parameters which enable the same tissue structure in the preoperative medical image and the postoperative medical image to meet the requirement of preset matching degree;
and determining pose information of the implant in the postoperative medical image according to the registration transformation parameters, and determining pose deviation of the implant according to the pose information.
2. The method according to claim 1, wherein the registering the pre-operation medical image and the post-operation medical image according to the first gray scale information of the pre-operation medical image and the second gray scale information of the post-operation medical image to obtain a registration transformation parameter that allows a same tissue structure in the pre-operation medical image and the post-operation medical image to satisfy a predetermined matching requirement includes:
determining third gray scale information of the transformed postoperative image obtained by rigid transformation according to the second gray scale information of the postoperative medical image and the current transformation parameter;
determining the current matching degree of the same tissue structure in the preoperative medical image and the transformed postoperative image according to the first gray information of the preoperative medical image and the third gray information of the transformed postoperative image;
and determining whether the current matching degree meets the optimal matching degree, and obtaining the registration transformation parameters according to the determination result.
3. The method of claim 2, wherein the first gray scale information comprises a first gray scale value of each first pixel in the pre-operative medical image, and the third gray scale information comprises a third gray scale value of each third pixel in the transformed post-operative image;
the determining a current matching degree of a same tissue structure in the preoperative medical image and the transformed postoperative image according to the first gray scale information of the preoperative medical image and the third gray scale information of the transformed postoperative image includes:
determining a first edge entropy of the preoperative medical image according to a first gray value of each first pixel point;
determining a third edge entropy of the image after transformation according to a third gray value of each third pixel point;
determining the joint entropy of the preoperative medical image and the post-transformation image according to the first gray value of each first pixel point and the third gray value of each third pixel point;
determining the current matching degree of the same tissue structure in the preoperative medical image and the post-transformation image according to the first edge entropy, the third edge entropy and the joint entropy.
4. The method of claim 3, wherein determining the joint entropy of the pre-operative medical image and the post-transformation medical image according to the first gray-scale value of each of the first pixel points and the third gray-scale value of each of the third pixel points comprises:
determining a joint distribution histogram of the preoperative medical image and the transformed postoperative image according to the first gray value of each first pixel point and the third gray value of each third pixel point;
determining a joint probability distribution of the preoperative medical image and the transformed postoperative image according to the joint distribution histogram;
and determining the joint entropy of the preoperative medical image and the transformed postoperative image according to the joint probability distribution.
5. The method of claim 1, wherein the registration transformation parameters include three translational transformation parameters for translation of the post-operative medical image along x, y, and z axes and three rotational transformation parameters for rotation along x, y, and z axes.
6. The method according to claim 2, wherein the determining whether the current matching degree satisfies an optimal matching degree and obtaining the registration transformation parameter according to a determination result comprises:
obtaining the previous matching degree corresponding to the rigid transformation before the current matching degree;
and if the difference value between the current matching degree and the previous matching degree is less than or equal to a set value, taking the current transformation parameter corresponding to the current matching degree as the registration transformation parameter.
7. The method of claim 6, wherein the determining whether the current matching degree satisfies an optimal matching degree and obtaining the registration transformation parameter according to a determination result further comprises:
if the difference value between the current matching degree and the last matching degree is larger than the set value, determining a new current transformation parameter according to the current transformation parameter, the partial derivative of the current matching degree and the updating step length;
and, using the new current transformation parameter, returning to the step of determining, according to the second gray scale information of the postoperative medical image and the current transformation parameter, the third gray scale information of the transformed postoperative image obtained by rigid transformation, and continuing execution until the registration transformation parameter is obtained.
8. The method of claim 1, wherein determining pose information for an implant in the post-operative medical image based on the registration transformation parameters and determining a pose bias for the implant based on the pose information comprises:
obtaining a final registration image according to the registration transformation parameters;
performing segmentation processing on the final registration image to obtain an implant contour in the post-operative medical image;
acquiring postoperative actual position points and preoperative planning position points of the implant according to the implant contour, determining the pose information according to the postoperative actual position points and the preoperative planning position points, and determining the pose deviation of the implant according to the pose information.
9. The method according to claim 8, wherein the segmenting the final registered image to obtain the implant contour in the post-operative medical image comprises:
and adjusting the gray value range of the final registration image, carrying out binarization on the adjusted final registration image according to the gray value range, and obtaining the implant contour in the postoperative medical image according to the binarization result.
10. The method of claim 8, wherein the obtaining post-operative actual location points and pre-operative planned location points of the implant from the implant profile, determining the pose information from the post-operative actual location points and the pre-operative planned location points, and determining the pose deviation of the implant from the pose information comprises:
calculating the distance between the coordinates according to the coordinates of the first target point in the postoperative actual position point and the coordinates of the second target point in the preoperative planning position point so as to determine the position deviation;
determining a first direction vector according to the coordinate of a first target point and the coordinate of a first entry point in the postoperative actual position points; determining a second direction vector according to the coordinates of a second target point and the coordinates of a second entry point in the preoperative planning position points; calculating an included angle between the first direction vector and the second direction vector to determine an attitude deviation; and determining the pose deviation according to the position deviation and the attitude deviation.
11. A pose accuracy verification system, comprising:
a processor for transmitting and receiving control signals;
memory for storing at least one executable command causing the processor to perform the operations of the data processing method of any one of claims 1-10.
Application CN202011140891.2A (priority/filing date: 2020-10-22) — Data processing method and pose precision verification system — Status: Pending

Publication: CN112258478A, published 2021-01-22.

