Disclosure of Invention
The invention aims to overcome the defects of low positioning precision and large positioning delay in the mixed reality glasses positioning schemes of the prior art, and provides a surgical navigation system and a method for making a virtual space three-dimensional model correspond to a real space position.
The invention solves the technical problems by the following technical scheme:
A first aspect of the present invention provides a surgical navigation system comprising an optical positioning device and a mixed reality display device that are communicatively coupled;
the optical positioning device is used for respectively acquiring coordinate data of a target organ, a surgical instrument and the mixed reality display device in a real space and transmitting all acquired coordinate data to the mixed reality display device;
the mixed reality display device is used for carrying out transformation processing on the received coordinate data to respectively obtain coordinate data of the target organ, the surgical instrument and the mixed reality display device in a virtual space, and displaying a pre-stored target three-dimensional model and a three-dimensional model of the surgical instrument in the virtual space according to the coordinate data in the virtual space so that the position of the target three-dimensional model in the virtual space corresponds to the position of the target organ in a real space and the position of the three-dimensional model of the surgical instrument in the virtual space corresponds to the position of the surgical instrument in the real space;
the target three-dimensional model is used for displaying the three-dimensional structure of the target organ and the operation position on the target organ.
Optionally, a first positioning tool is arranged on the target organ, a second positioning tool is arranged on the surgical instrument, and a third positioning tool is arranged on the mixed reality display device;
the optical positioning device is specifically configured to collect first coordinate data of the first positioning tool, and use the first coordinate data as coordinate data of the target organ in a real space; collecting second coordinate data of the second positioning tool, and taking the second coordinate data as coordinate data of the surgical instrument in a real space; and collecting third coordinate data of the third positioning tool, and taking the third coordinate data as coordinate data of the mixed reality display device in a real space.
Optionally, the mixed reality display device is further configured to display a two-dimensional cross-sectional view in the virtual space according to the target three-dimensional model.
Optionally, the target organ is a spinal column, and the surgical instrument comprises a pedicle screw;
the mixed reality display device is also used for outputting prompt information according to the distance between the pedicle screw and the operation position in the virtual space.
Optionally, the mixed reality display device is further configured to collect gesture information, and adjust a three-dimensional model displayed in the virtual space according to the gesture information.
A second aspect of the present invention provides a method for making a virtual space three-dimensional model correspond to a real space position, comprising the steps of:
the optical positioning device respectively collects coordinate data of a target organ, a surgical instrument and a mixed reality display device in a real space;
transmitting all the acquired coordinate data to a mixed reality display device;
the mixed reality display device performs transformation processing on the received coordinate data to respectively obtain coordinate data of the target organ, the surgical instrument and the mixed reality display device in a virtual space;
displaying a pre-stored target three-dimensional model and a three-dimensional model of a surgical instrument in a virtual space according to the coordinate data in the virtual space, so that the position of the target three-dimensional model in the virtual space corresponds to the position of the target organ in a real space, and the position of the three-dimensional model of the surgical instrument in the virtual space corresponds to the position of the surgical instrument in the real space;
the target three-dimensional model is used for displaying the three-dimensional structure of the target organ and the operation position on the target organ.
Optionally, a first positioning tool is arranged on the target organ, a second positioning tool is arranged on the surgical instrument, and a third positioning tool is arranged on the mixed reality display device;
the step of the optical positioning device respectively collecting coordinate data of the target organ, the surgical instrument and the mixed reality display device in the real space specifically comprises the following steps:
the optical positioning device collects first coordinate data of the first positioning tool and takes the first coordinate data as coordinate data of the target organ in a real space; collecting second coordinate data of the second positioning tool, and taking the second coordinate data as coordinate data of the surgical instrument in a real space; and collecting third coordinate data of the third positioning tool, and taking the third coordinate data as coordinate data of the mixed reality display device in a real space.
Optionally, the method further comprises the steps of: the mixed reality display device displays a two-dimensional cross-sectional view in the virtual space according to the target three-dimensional model.
Optionally, the target organ is a spinal column, and the surgical instrument comprises a pedicle screw;
the method further comprises the steps of: the mixed reality display device outputs prompt information according to the distance between the pedicle screw and the operation position in the virtual space.
Optionally, the method further comprises the steps of: and the mixed reality display equipment acquires gesture information and adjusts the three-dimensional model displayed in the virtual space according to the gesture information.
The invention has the following positive effects: the surgical navigation system provided by the invention realizes surgical navigation based on the optical positioning device and the mixed reality display device. Acquiring coordinate data of the target organ, the surgical instrument and the mixed reality display device in real space with the optical positioning device can greatly improve the accuracy of surgical navigation. By means of the target three-dimensional model displayed in the virtual space by the mixed reality display device, the system can guide a doctor to operate at positions in the visual blind spot, assist the doctor in finding the surgical position more quickly and accurately, and help the doctor avoid damaging important structures such as nerves and blood vessels during the operation, thereby reducing the burden on the doctor and the pain of the patient; it therefore has high clinical application value.
Detailed Description
The invention is further illustrated by means of the following examples, which are not intended to limit the scope of the invention.
Example 1
As shown in fig. 1, the present embodiment provides a surgical navigation system 100 comprising an optical positioning device 20 and a mixed reality display device 30 that are communicatively coupled. The optical positioning device 20 and the mixed reality display device 30 may be connected wirelessly, for example through Wi-Fi, or through a wired connection, for example a data line.
The optical positioning device 20 is used for respectively acquiring coordinate data of a target organ, a surgical instrument and a mixed reality display device in a real space, and transmitting all acquired coordinate data to the mixed reality display device 30.
In a specific implementation, the target organ may be a site to be operated on. The surgical instrument may include an object to be placed in the target organ and an instrument required for placing the object in the target organ.
In an alternative embodiment, the target organ is provided with a first positioning tool, the surgical instrument is provided with a second positioning tool, and the mixed reality display device is provided with a third positioning tool. The optical positioning device is specifically configured to collect first coordinate data of the first positioning tool, and use the first coordinate data as coordinate data of the target organ in a real space; collecting second coordinate data of the second positioning tool, and taking the second coordinate data as coordinate data of the surgical instrument in a real space; and collecting third coordinate data of the third positioning tool, and taking the third coordinate data as coordinate data of the mixed reality display device in a real space.
In order to ensure that the coordinate data acquired by the optical positioning device does not deviate in the operation process, the first positioning tool is rigidly connected with the target organ, the second positioning tool is rigidly connected with the surgical instrument, and the third positioning tool is rigidly connected with the mixed reality display device.
In a specific implementation, at least three optical positioning target balls are mounted on each of the first positioning tool, the second positioning tool and the third positioning tool. In a specific example, four optical positioning target balls are mounted on each of the first positioning tool, the second positioning tool, and the third positioning tool.
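The disclosure leaves the tracking algorithm to the optical positioning device itself. For intuition only, the following is a minimal Python sketch (with hypothetical names) of how a tool's rigid pose can be recovered from at least three non-collinear tracked markers using the Kabsch algorithm, which is why fewer than three target balls would not suffice; the device's actual method is not specified by this embodiment.

```python
import numpy as np

def rigid_pose(reference, measured):
    """Estimate rotation r and translation t such that measured ~ reference @ r.T + t."""
    p = np.asarray(reference, dtype=float)  # marker layout on the tool, shape (N, 3)
    q = np.asarray(measured, dtype=float)   # tracked marker positions, shape (N, 3), N >= 3
    pc = p - p.mean(axis=0)
    qc = q - q.mean(axis=0)
    u, _, vt = np.linalg.svd(pc.T @ qc)     # 3x3 cross-covariance matrix
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against a reflection solution
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = q.mean(axis=0) - p.mean(axis=0) @ r.T
    return r, t
```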
It will be appreciated that, during surgery, the positions of the surgical instrument and the mixed reality display device in real space vary as the doctor operates. The optical positioning device therefore acquires the coordinate data of the target organ, the surgical instrument and the mixed reality display device in real space, i.e. the first coordinate data of the first positioning tool, the second coordinate data of the second positioning tool and the third coordinate data of the third positioning tool, in real time.
The mixed reality display device 30 is configured to perform transformation processing on the received coordinate data to obtain coordinate data of the target organ, the surgical instrument, and the mixed reality display device in a virtual space, and display a pre-stored target three-dimensional model and a three-dimensional model of the surgical instrument in the virtual space according to the coordinate data in the virtual space, so that a position of the target three-dimensional model in the virtual space corresponds to a position of the target organ in the real space, and a position of the three-dimensional model of the surgical instrument in the virtual space corresponds to a position of the surgical instrument in the real space.
The target three-dimensional model is used for displaying the three-dimensional structure of the target organ and the operation position on the target organ.
In a specific implementation, the mixed reality display device may be head-mounted or ear-mounted mixed reality glasses. During the operation, a doctor can wear the head-mounted mixed reality glasses and see the three-dimensional model displayed in the virtual space through them.
The mixed reality display device stores a target three-dimensional model and a three-dimensional model of the surgical instrument in advance. Specifically, medical images of the target organ can be obtained by means of medical imaging such as CT (Computed Tomography) or MRI (Magnetic Resonance Imaging). Typically, the medical images are in DICOM (Digital Imaging and Communications in Medicine) format, and image processing must be performed on them to obtain a three-dimensional model of the target organ that can be presented in the mixed reality glasses. In the operation planning stage, the surgical position is marked in the three-dimensional model of the target organ with lines of different colors, forming the target three-dimensional model. The data of the target three-dimensional model and of the three-dimensional model of the surgical instrument are then input into the mixed reality display device.
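The disclosure does not fix a particular reconstruction pipeline. As a concrete illustration of this preprocessing step, the following sketch stacks CT slices into a volume and extracts a surface mesh with marching cubes; the pydicom and scikit-image packages, the slice-ordering rule, and the bone-like intensity threshold are assumptions for illustration, not requirements of this embodiment.

```python
import glob
import numpy as np
import pydicom
from skimage import measure

def build_organ_mesh(dicom_dir, iso_level=300.0):
    """Stack DICOM slices into a volume and extract a surface mesh."""
    slices = [pydicom.dcmread(p) for p in glob.glob(f"{dicom_dir}/*.dcm")]
    # Order the slices along the scan axis before stacking them.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
    # Marching cubes at a bone-like intensity threshold (assumed value);
    # the resulting mesh can be exported in a format the glasses can load.
    verts, faces, _, _ = measure.marching_cubes(volume, level=iso_level)
    return verts, faces
```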
In an optional implementation manner, the mixed reality display device is configured to perform first coordinate transformation on coordinate data of the target organ and the surgical instrument in a real space, to obtain coordinate data of the target organ and the surgical instrument in a local coordinate system, and perform second coordinate transformation on coordinate data of the target organ and the surgical instrument in the local coordinate system according to the coordinate data of the mixed reality display device in the real space and the coordinate data of the mixed reality display device in the virtual space, to obtain coordinate data of the target organ and the surgical instrument in the virtual space.
In this embodiment, the mixed reality display device converts coordinate data in a real space coordinate system into coordinate data in a local coordinate system through first coordinate conversion; and converting the coordinate data under the local coordinate system into the coordinate data under the virtual space coordinate system through the second coordinate conversion. The real space coordinate system may also be referred to as a coordinate system of the optical positioning device, and the local coordinate system may also be referred to as a coordinate system of the mixed reality display device.
In an optional implementation manner, the mixed reality display device is used for subtracting the target coordinate data from the coordinate data of the target organ and the surgical instrument in real space to obtain the coordinate data of the target organ and the surgical instrument in the local coordinate system. The target coordinate data are the coordinate data of the mixed reality display device in real space acquired during initialization of the optical positioning device, i.e. the origin of the local coordinate system.
In this embodiment, both the real space coordinate system and the local coordinate system use 1 mm as the unit, so there is no scaling between the two coordinate systems. Denote the coordinate data of the target organ in real space by NDIM, the coordinate data of the surgical instrument in real space by NDIP, and the coordinate data of the mixed reality display device in real space acquired during initialization of the optical positioning device by NDIH1. The coordinate data of the target organ in the local coordinate system is then NDIM - NDIH1, and the coordinate data of the surgical instrument in the local coordinate system is NDIP - NDIH1.
In an optional implementation manner, the mixed reality display device is used for multiplying the coordinate data of the target organ and the surgical instrument in the local coordinate system by a conversion coefficient to obtain the coordinate data of the target organ and the surgical instrument in the virtual space. The conversion coefficient is the coordinate data of the mixed reality display device in the virtual space divided by the coordinate data of the mixed reality display device in real space. It should be noted that the coordinate data of the mixed reality display device in real space here is real-time data, which is different from the coordinate data of the mixed reality display device in real space collected during initialization of the optical positioning device.
In this embodiment, the coordinate data of the mixed reality display device in the virtual space is UnityH1, which can be obtained through the Unity API. The real-time coordinate data of the mixed reality display device in real space is NDIH2, giving the conversion coefficient M = UnityH1/NDIH2. Multiplying the coordinate data (NDIM - NDIH1) of the target organ in the local coordinate system by M yields the coordinate data of the target organ in the virtual space, (NDIM - NDIH1) × UnityH1/NDIH2. Multiplying the coordinate data (NDIP - NDIH1) of the surgical instrument in the local coordinate system by M yields the coordinate data of the surgical instrument in the virtual space, (NDIP - NDIH1) × UnityH1/NDIH2.
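The two transformations can be written compactly. The sketch below mirrors the symbols above (NDIM, NDIP, NDIH1, NDIH2, UnityH1) as Python variables and applies the scaling M = UnityH1/NDIH2 element-wise per axis, which is an illustrative reading of this embodiment rather than a full pose alignment.

```python
import numpy as np

def to_virtual(ndim, ndip, ndih1, ndih2, unity_h1):
    """Map organ and instrument coordinates from real space into virtual space."""
    # First transformation: real space -> local coordinate system, by
    # subtracting the headset position captured at initialization (NDIH1).
    organ_local = np.asarray(ndim, dtype=float) - np.asarray(ndih1, dtype=float)
    tool_local = np.asarray(ndip, dtype=float) - np.asarray(ndih1, dtype=float)
    # Second transformation: local -> virtual space, scaled by the
    # conversion coefficient M = UnityH1 / NDIH2 (real-time headset position).
    m = np.asarray(unity_h1, dtype=float) / np.asarray(ndih2, dtype=float)
    return organ_local * m, tool_local * m
```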
In one example of a pedicle screw placement operation, as shown in fig. 2, the target organ is a spinal column 42, the surgical instrument 43 includes pedicle screws to be placed in the spinal column, and the optical positioning device 20 collects coordinate data of the spinal column 42, the surgical instrument 43 and the head-mounted mixed reality glasses 44 and transmits the collected coordinate data to the head-mounted mixed reality glasses 44, which present the three-dimensional models in the virtual space after transforming the coordinate data.
In the above example of pedicle screw placement, the surgical position on the target organ is the location at which the pedicle screw is to be placed into the spine.
In this embodiment, the optical positioning device optically positions the target organ, the surgical instrument and the mixed reality display device in real time, and the mixed reality display device presents the three-dimensional models in the virtual space according to the real-time position information provided by the optical positioning device. The target three-dimensional model seen by the doctor through the mixed reality display device coincides with the position of the actual target organ, and the three-dimensional model of the surgical instrument coincides with the position of the actual surgical instrument; at the same time, the surgical position planned before surgery is displayed on the actual target organ, so the doctor can operate the actual surgical instrument according to the surgical position in the target three-dimensional model, thereby realizing surgical navigation.
The surgical navigation system provided by this embodiment realizes surgical navigation based on the optical positioning device and the mixed reality display device. The optical positioning device can greatly improve the accuracy of surgical navigation, and the target three-dimensional model displayed in the virtual space by the mixed reality display device can guide the doctor to operate at positions in the visual blind spot, assist the doctor in finding the surgical position more quickly and accurately, and help avoid damage to important structures such as nerves and blood vessels during the operation, reducing the burden on the doctor and the pain of the patient; the system therefore has high clinical application value.
In a specific implementation, the position at which the first positioning tool is arranged on the target organ determines whether the position of the target organ in real space can completely coincide with the position of the target three-dimensional model in the virtual space; similarly, the position at which the second positioning tool is arranged on the surgical instrument determines whether the position of the surgical instrument in real space can completely coincide with the position of the three-dimensional model of the surgical instrument in the virtual space. Therefore, manual calibration is often required before surgery: the three-dimensional models displayed in the virtual space by the mixed reality display device are manually fine-tuned so that the position of the target organ in real space completely coincides with the position of the target three-dimensional model in the virtual space, and the position of the surgical instrument in real space completely coincides with the position of the three-dimensional model of the surgical instrument in the virtual space.
In an alternative embodiment, to further assist the doctor in determining the surgical position, the mixed reality display device is further configured to display a two-dimensional cross-sectional view in the virtual space according to the target three-dimensional model.
In the above example of pedicle screw placement, the two-dimensional cross-sectional view displayed by the mixed reality display device includes at least two cross-sections: one cross-section is obtained by cutting the target three-dimensional model with a plane passing through the tip and the shaft of the pedicle screw; the other is obtained by cutting the target three-dimensional model with a plane passing through the tip of the pedicle screw and perpendicular to its shaft.
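For illustration, the two cutting planes can be represented as (point, normal) pairs derived from the screw tip and shaft direction. The sketch below is one geometric reading of the description, with a reference vector chosen arbitrarily (it must not be parallel to the shaft); how the target model is actually sliced for display is not specified by this embodiment.

```python
import numpy as np

def cutting_planes(tip, shaft_dir, reference=(0.0, 0.0, 1.0)):
    """Return the two cutting planes as (point_on_plane, unit_normal) pairs."""
    d = np.asarray(shaft_dir, dtype=float)
    d /= np.linalg.norm(d)
    # Plane 1 contains the screw tip and the shaft axis; its normal is a
    # direction perpendicular to the shaft, built from the reference vector.
    n1 = np.cross(d, np.asarray(reference, dtype=float))
    n1 /= np.linalg.norm(n1)
    # Plane 2 passes through the screw tip and is perpendicular to the shaft.
    return (np.asarray(tip, dtype=float), n1), (np.asarray(tip, dtype=float), d)
```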
In the above example of pedicle screw placement surgery, the mixed reality display device is further configured to output prompt information based on the distance between the pedicle screw and the surgical position within the virtual space. In general, the prompt information is output according to the distance between the tip of the pedicle screw and the surgical position, so as to prompt the doctor as to whether the screw tip deviates from the surgical position. The prompt information may be a sound; for example, the mixed reality display device outputs a sound through a speaker to inform the doctor of the real-time distance between the screw tip and the surgical position. The prompt information may also be an image; for example, the mixed reality display device prompts the doctor by displaying an image in the virtual space.
In a specific implementation, the coordinate data of the tip of the pedicle screw in the virtual space may be determined from the length of the pedicle screw, the location of the pedicle screw on the surgical instrument, and the coordinate data of the surgical instrument in the virtual space. The distance between the screw tip and the surgical position can then be calculated from their coordinate data in the virtual space, and different prompt information can be output for different distances. In a specific example, if the distance between the screw tip and the surgical position is greater than a preset distance, no prompt information is output; if the distance is smaller than or equal to the preset distance, prompt information is output. In another specific example, the prompt information is a sound, and the volume of the prompt sound increases as the distance between the screw tip and the surgical position increases.
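A hedged sketch of this prompt logic follows; the tool-axis tip model, the preset distance of 5 mm and the message format are illustrative assumptions, not values fixed by this embodiment.

```python
import numpy as np

def screw_tip(instrument_pos, instrument_dir, screw_length):
    """Tip position: instrument origin plus the screw length along the tool axis."""
    d = np.asarray(instrument_dir, dtype=float)
    return np.asarray(instrument_pos, dtype=float) + screw_length * d / np.linalg.norm(d)

def prompt_for_distance(tip, surgical_position, preset=5.0):
    """Return a prompt only when the tip is within the preset distance (in mm)."""
    dist = float(np.linalg.norm(np.asarray(tip) - np.asarray(surgical_position)))
    if dist > preset:
        return None  # beyond the preset distance: no prompt is output
    return f"screw tip is {dist:.1f} mm from the planned position"
```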
In an optional implementation manner, the mixed reality display device is further configured to collect gesture information and adjust the three-dimensional model or two-dimensional cross-sectional view displayed in the virtual space according to the gesture information. To enable the doctor to determine the surgical position more accurately and to help avoid injury to important structures such as nerves and blood vessels during the operation, in a specific implementation the doctor can rotate, move, enlarge or reduce the target three-dimensional model, the three-dimensional model of the surgical instrument or the two-dimensional cross-sectional view displayed in the virtual space through specific gestures, so that different viewing angles can be freely examined.
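As a sketch of such gesture-driven adjustment, the snippet below maps a recognized gesture onto a translation, uniform scaling, or rotation of a displayed model's vertices; the gesture dictionary format is hypothetical, and gesture recognition itself is assumed to be provided by the mixed reality device.

```python
import numpy as np

def apply_gesture(vertices, gesture):
    """Adjust a displayed model according to one recognized gesture."""
    v = np.asarray(vertices, dtype=float)
    center = v.mean(axis=0)
    if gesture["kind"] == "move":
        return v + np.asarray(gesture["offset"], dtype=float)
    if gesture["kind"] == "scale":
        return center + gesture["factor"] * (v - center)  # enlarge or reduce
    if gesture["kind"] == "rotate":
        a = float(gesture["angle"])  # rotation about the vertical axis, in radians
        rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
        return (v - center) @ rz.T + center
    return v
```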
Example 2
The present embodiment provides a method for making a virtual space three-dimensional model correspond to a real space position, which can be implemented using the surgical navigation system described in Example 1. As shown in fig. 3, the method provided in this embodiment may include the following steps S301 to S304:
Step S301: the optical positioning device acquires coordinate data of the target organ, the surgical instrument and the mixed reality display device in real space, respectively.
In a specific implementation, the target organ may be a site to be operated on. The surgical instrument may include an object to be placed in the target organ and an instrument required for placing the object in the target organ.
In an alternative embodiment, the target organ is provided with a first positioning tool, the surgical instrument is provided with a second positioning tool, and the mixed reality display device is provided with a third positioning tool. In this embodiment, step S301 specifically includes: the optical positioning device collects first coordinate data of the first positioning tool and takes the first coordinate data as coordinate data of the target organ in a real space; collecting second coordinate data of the second positioning tool, and taking the second coordinate data as coordinate data of the surgical instrument in a real space; and collecting third coordinate data of the third positioning tool, and taking the third coordinate data as coordinate data of the mixed reality display device in a real space.
In order to ensure that the coordinate data acquired by the optical positioning device does not deviate in the operation process, the first positioning tool is rigidly connected with the target organ, the second positioning tool is rigidly connected with the surgical instrument, and the third positioning tool is rigidly connected with the mixed reality display device.
In a specific implementation, at least three optical positioning target balls are mounted on each of the first positioning tool, the second positioning tool and the third positioning tool. In a specific example, four optical positioning target balls are mounted on each of the first positioning tool, the second positioning tool, and the third positioning tool.
It will be appreciated that, during surgery, the positions of the surgical instrument and the mixed reality display device in real space vary as the doctor operates. The optical positioning device therefore acquires the coordinate data of the target organ, the surgical instrument and the mixed reality display device in real space, i.e. the first coordinate data of the first positioning tool, the second coordinate data of the second positioning tool and the third coordinate data of the third positioning tool, in real time.
Step S302: all the acquired coordinate data are sent to the mixed reality display device. The optical positioning device is communicatively connected with the mixed reality display device and sends all the acquired coordinate data to it. The connection may be wireless, for example through Wi-Fi, or wired, for example through a data line.
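Since the disclosure only requires a wired or wireless link, the transport is free to vary. The following sketch assumes, purely for illustration, that each tracking frame is sent to the mixed reality display device as one JSON line over TCP; the host name and port are hypothetical.

```python
import json
import socket

def send_frame(sock, ndim, ndip, ndih2):
    """Send one frame of real-space coordinate data to the display device."""
    frame = {"organ": list(ndim), "instrument": list(ndip), "headset": list(ndih2)}
    sock.sendall((json.dumps(frame) + "\n").encode("utf-8"))

# Usage (hypothetical host and port): connect once, then send one frame per
# optical tracking update.
# sock = socket.create_connection(("mr-glasses.local", 9000))
# send_frame(sock, [10.0, 2.5, 300.0], [12.0, 4.0, 310.0], [0.0, 0.0, 250.0])
```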
Step S303: the mixed reality display device performs transformation processing on the received coordinate data, so as to obtain coordinate data of the target organ, the surgical instrument and the mixed reality display device in a virtual space respectively.
Step S304: a pre-stored target three-dimensional model and a three-dimensional model of the surgical instrument are displayed in the virtual space according to the coordinate data in the virtual space, so that the position of the target three-dimensional model in the virtual space corresponds to the position of the target organ in real space, and the position of the three-dimensional model of the surgical instrument in the virtual space corresponds to the position of the surgical instrument in real space.
The target three-dimensional model is used for displaying the three-dimensional structure of the target organ and the operation position on the target organ.
In a specific implementation, the mixed reality display device may be head-mounted mixed reality glasses. During the operation, a doctor can wear the head-mounted mixed reality glasses and see the three-dimensional model displayed in the virtual space through them.
The mixed reality display device stores a target three-dimensional model and a three-dimensional model of a surgical instrument in advance. Specifically, medical imaging of the target organ can be obtained by means of medical imaging means such as CT or MRI. In general, a medical image is in DICOM format, and it is necessary to perform image processing on the medical image in DICOM format to obtain a three-dimensional model of a target organ that can be presented in mixed reality glasses. In the operation planning stage, the operation position is marked in the three-dimensional model of the target organ through lines with different colors, so that a target three-dimensional model is formed. The data of the target three-dimensional model and the data of the three-dimensional model of the surgical instrument are then input into the mixed reality display device.
In an optional implementation manner, the mixed reality display device performs first coordinate transformation on the coordinate data of the target organ and the surgical instrument in real space to obtain the coordinate data of the target organ and the surgical instrument in a local coordinate system, and performs second coordinate transformation on the coordinate data of the target organ and the surgical instrument in the local coordinate system according to the coordinate data of the mixed reality display device in real space and the coordinate data of the mixed reality display device in the virtual space, to obtain the coordinate data of the target organ and the surgical instrument in the virtual space.
In this embodiment, the mixed reality display device converts coordinate data in a real space coordinate system into coordinate data in a local coordinate system through first coordinate conversion; and converting the coordinate data under the local coordinate system into the coordinate data under the virtual space coordinate system through the second coordinate conversion. The real space coordinate system may also be referred to as a coordinate system of the optical positioning device, and the local coordinate system may also be referred to as a coordinate system of the mixed reality display device.
In an optional implementation manner, the mixed reality display device subtracts the target coordinate data from the coordinate data of the target organ and the surgical instrument in real space to obtain the coordinate data of the target organ and the surgical instrument in the local coordinate system. The target coordinate data are the coordinate data of the mixed reality display device in real space acquired during initialization of the optical positioning device, i.e. the origin of the local coordinate system.
In this embodiment, both the real space coordinate system and the local coordinate system use 1 mm as the unit, so there is no scaling between the two coordinate systems. Denote the coordinate data of the target organ in real space by NDIM, the coordinate data of the surgical instrument in real space by NDIP, and the coordinate data of the mixed reality display device in real space acquired during initialization of the optical positioning device by NDIH1. The coordinate data of the target organ in the local coordinate system is then NDIM - NDIH1, and the coordinate data of the surgical instrument in the local coordinate system is NDIP - NDIH1.
In an optional implementation manner, the mixed reality display device multiplies the coordinate data of the target organ and the surgical instrument in the local coordinate system by a conversion coefficient to obtain the coordinate data of the target organ and the surgical instrument in the virtual space. The conversion coefficient is the coordinate data of the mixed reality display device in the virtual space divided by the coordinate data of the mixed reality display device in real space. It should be noted that the coordinate data of the mixed reality display device in real space here is real-time data, which is different from the coordinate data of the mixed reality display device in real space collected during initialization of the optical positioning device.
In this embodiment, the coordinate data of the mixed reality display device in the virtual space is UnityH1, which can be obtained through the Unity API. The real-time coordinate data of the mixed reality display device in real space is NDIH2, giving the conversion coefficient M = UnityH1/NDIH2. Multiplying the coordinate data (NDIM - NDIH1) of the target organ in the local coordinate system by M yields the coordinate data of the target organ in the virtual space, (NDIM - NDIH1) × UnityH1/NDIH2. Multiplying the coordinate data (NDIP - NDIH1) of the surgical instrument in the local coordinate system by M yields the coordinate data of the surgical instrument in the virtual space, (NDIP - NDIH1) × UnityH1/NDIH2.
In one example of pedicle screw placement, the target organ is the spinal column, the surgical instrument includes pedicle screws to be placed in the spinal column, and the optical positioning device collects coordinate data of the spinal column, the surgical instrument and the head-mounted mixed reality glasses and transmits the collected coordinate data to the head-mounted mixed reality glasses, which present the three-dimensional models in the virtual space after transforming the coordinate data.
In the above example of pedicle screw placement, the surgical position on the target organ is the location at which the pedicle screw is to be placed into the spine.
In this embodiment, the optical positioning device optically positions the target organ, the surgical instrument and the mixed reality display device in real time, and the mixed reality display device presents the three-dimensional models in the virtual space according to the real-time position information provided by the optical positioning device. The target three-dimensional model seen by the doctor through the mixed reality display device coincides with the position of the actual target organ, and the three-dimensional model of the surgical instrument coincides with the position of the actual surgical instrument; at the same time, the surgical position planned before surgery is displayed on the actual target organ, so the doctor can operate the actual surgical instrument according to the surgical position in the target three-dimensional model, thereby realizing surgical navigation.
According to the method, surgical navigation is realized based on the optical positioning device and the mixed reality display device. The optical positioning device can greatly improve the accuracy of surgical navigation, and the target three-dimensional model displayed in the virtual space by the mixed reality display device can guide the doctor to operate at positions in the visual blind spot, assist the doctor in finding the surgical position more quickly and accurately, and help avoid damage to important structures such as nerves and blood vessels during the operation, reducing the burden on the doctor and the pain of the patient; the method therefore has high clinical application value.
In a specific implementation, the position at which the first positioning tool is arranged on the target organ determines whether the position of the target organ in real space can completely coincide with the position of the target three-dimensional model in the virtual space; similarly, the position at which the second positioning tool is arranged on the surgical instrument determines whether the position of the surgical instrument in real space can completely coincide with the position of the three-dimensional model of the surgical instrument in the virtual space. Therefore, manual calibration is often required before surgery: the three-dimensional models displayed in the virtual space by the mixed reality display device are manually fine-tuned so that the position of the target organ in real space completely coincides with the position of the target three-dimensional model in the virtual space, and the position of the surgical instrument in real space completely coincides with the position of the three-dimensional model of the surgical instrument in the virtual space.
In an alternative embodiment, to further assist the surgeon in determining the surgical site, the method further comprises the steps of: the mixed reality display device displays a two-dimensional cross-sectional view in the virtual space according to the target three-dimensional model.
In the above example of pedicle screw placement, the two-dimensional cross-sectional view displayed by the mixed reality display device includes at least two cross-sections: one cross-section is obtained by cutting the target three-dimensional model with a plane passing through the tip and the shaft of the pedicle screw; the other is obtained by cutting the target three-dimensional model with a plane passing through the tip of the pedicle screw and perpendicular to its shaft.
In the above example of pedicle screw placement, the method may further comprise the steps of: the mixed reality display device outputs prompt information according to the distance between the pedicle screw and the operation position in the virtual space. Note that, in general, a prompt message is output according to a distance between the tip of the pedicle screw and the operation position, so as to prompt a doctor whether the tip of the pedicle screw deviates from the operation position. The prompt information may be a sound, for example, the mixed reality display device outputs a sound through a speaker to prompt a doctor of a real-time distance between the nail tip and the operation position. The prompt information may also be an image, for example, the mixed reality display device prompts the doctor by displaying the image in the virtual space.
In a specific implementation, the coordinate data of the tip of the pedicle screw in the virtual space may be determined from the length of the pedicle screw, the location of the pedicle screw on the surgical instrument, and the coordinate data of the surgical instrument in the virtual space. The distance between the screw tip and the surgical position can then be calculated from their coordinate data in the virtual space, and different prompt information can be output for different distances. In a specific example, if the distance between the screw tip and the surgical position is greater than a preset distance, no prompt information is output; if the distance is smaller than or equal to the preset distance, prompt information is output. In another specific example, the prompt information is a sound, and the volume of the prompt sound increases as the distance between the screw tip and the surgical position increases.
In an alternative embodiment, the method further comprises the step of: the mixed reality display device collects gesture information and adjusts the three-dimensional model or two-dimensional cross-sectional view displayed in the virtual space according to the gesture information. To enable the doctor to determine the surgical position more accurately and to help avoid injury to important structures such as nerves and blood vessels during the operation, in a specific implementation the doctor can rotate, move, enlarge or reduce the target three-dimensional model, the three-dimensional model of the surgical instrument or the two-dimensional cross-sectional view displayed in the virtual space through specific gestures, so that different viewing angles can be freely examined.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that these are examples only, and the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the principles and spirit of the invention, and such changes and modifications fall within the scope of the invention.