RELATED APPLICATIONS

This application is a continuation application of U.S. patent application Ser. No. 15/609,305, filed on May 31, 2017, which is a continuation application of U.S. patent application Ser. No. 13/924,505, filed on Jun. 21, 2013 (published as U.S. Patent Publication No. 2016-0242849), both of which are incorporated herein by reference in their entireties for all purposes. Application Ser. No. 13/924,505 claims priority to U.S. Provisional Pat. App. No. 61/662,702, filed Jun. 21, 2012 (expired), and U.S. Provisional Pat. App. No. 61/800,527, filed Mar. 15, 2013 (expired), both of which are incorporated herein by reference in their entireties for all purposes.
BACKGROUND

Various medical procedures require the precise localization of a three-dimensional position of a surgical instrument within the body in order to effect optimized treatment. Limited robotic assistance for surgical procedures is currently available. One characteristic that makes many current surgical robots error prone is their use of an articular arm based on a series of rotational joints. Such an articular system can make it difficult to reach a targeted location accurately because error compounds across each joint in the system.
SUMMARY

Some embodiments of the invention provide a surgical robot (and optionally an imaging system) that utilizes a Cartesian positioning system that allows movement of a surgical instrument to be individually controlled in an x-axis, y-axis, and z-axis. In some embodiments, the surgical robot can include a base, a robot arm coupled to and configured for articulation relative to the base, as well as an end-effectuator coupled to a distal end of the robot arm. The effectuator element can include the surgical instrument or can be configured for operative coupling to the surgical instrument. Some embodiments of the invention allow the roll, pitch, and yaw rotation of the end-effectuator and/or surgical instrument to be controlled without creating movement along the x-axis, y-axis, or z-axis.
DESCRIPTION OF THE DRAWINGS

FIG. 1 is a partial perspective view of a room in which a medical procedure is taking place using a surgical robot.
FIG. 2 is a perspective view of a surgical robot according to an embodiment of the invention.
FIGS. 3A-3B are perspective views of the surgical robot illustrated in FIG. 2, which show the movement of the base of the surgical robot in the z-axis direction in accordance with an embodiment of the invention.
FIG. 4 is a partial perspective view of the surgical robot of FIG. 2 which shows how the robot arm can be moved in the x-axis direction.
FIGS. 5A-5B are partial perspective views of the surgical robot of FIG. 2, which show how the robot arm can be moved in the y-axis direction.
FIG. 6 is a perspective view of a portion of the robot arm of FIG. 2 showing how an effectuator element can be twisted about a y-axis.
FIG. 7 is a perspective view of a portion of a robot arm of FIG. 2 showing how an effectuator element can be pivoted about a pivot axis that is perpendicular to the y-axis.
FIGS. 8A-8B are partial perspective views of the surgical robot of FIG. 2, which show the movement of a surgical instrument 35 along the z-axis from an effectuator element.
FIG. 9 is a system diagram which shows local positioning sensors, a controlling PC, and a radiofrequency (RF) transmitter in accordance with an embodiment of the invention.
FIG. 10 is a system diagram of the controlling PC, user input, and motors for controlling the robot in accordance with an embodiment of the invention.
FIG. 11 is a flow chart diagram for general operation of a surgical robot in accordance with one embodiment of the invention.
FIG. 12 is a flow chart diagram for a closed screw/needle insertion performed using a surgical robot in accordance with one embodiment of the invention.
FIG. 13 is a flow chart diagram of a safe zone surgery performed using a surgical robot as described herein in accordance with one embodiment of the invention.
FIG. 14 is a flow chart diagram of a flexible catheter insertion procedure performed using a surgical robot as described herein in accordance with one embodiment of the invention.
FIG. 15A shows a screenshot of a monitor display showing a set up of the anatomy in X, Y and Z views in accordance with one embodiment of the invention.
FIG. 15B shows a screenshot of a monitor display showing what the user views during an invasive procedure in accordance with one embodiment of the invention.
FIG. 16 depicts a surgical robot having a plurality of optical markers mounted for tracking movement in an x-direction in accordance with one embodiment of the invention.
FIGS. 17A-17B depict surgical instruments having a stop mechanism in accordance with one embodiment of the invention.
FIGS. 17C-17E illustrate tools for manually adjusting a drill stop with reference to drill bit markings in accordance with one embodiment of the invention.
FIGS. 17F-17J illustrate tools for locking and holding a drill bit in a set position in accordance with one embodiment of the invention.
FIGS. 18A-18B depict an end-effectuator having a clearance mechanism in accordance with one embodiment of the invention.
FIGS. 19A-19B depict an end-effectuator having an attachment element for applying distraction and/or compression forces in accordance with one embodiment of the invention.
FIGS. 20A-20E show the use of calibration frames with the guidance system in accordance with one embodiment of the invention.
FIG. 21A depicts flexible roll configurations of a targeting fixture in accordance with one embodiment of the invention.
FIG. 21B shows possible positions of markers along a line in space in accordance with one embodiment of the invention.
FIG. 21C depicts flexible roll configurations of a targeting fixture in accordance with one embodiment of the invention.
FIG. 21D shows a fixture that can be employed to provide desired stiffness to the unrolled fixture such that it maintains its position after unrolling occurs in accordance with one embodiment of the invention.
FIGS. 22A-22D depict a targeting fixture and method configured for application to the skull of a patient in accordance with one embodiment of the invention.
FIG. 23 depicts a dynamic tracking device mounted to the spinous process of the lumbar spine of a patient in accordance with one embodiment of the invention.
FIGS. 24-27 illustrate methods for the robot in accordance with one embodiment of the invention.
FIGS. 28A-28B illustrate methods for calculating position and/or orientation of an end-effectuator for the robot in accordance with one embodiment of the invention.
FIGS. 29-33 illustrate methods for the robot in accordance with other embodiments of the invention.
FIG. 34 illustrates a computing platform that enables implementation of various embodiments of the invention.
FIGS. 35A-35B display a surgical robot in accordance with one embodiment of the invention.
DETAILED DESCRIPTION

Referring now to FIGS. 1 and 35A, some embodiments include a surgical robot system 1 disclosed in a room 10 where a medical procedure is occurring. In some embodiments, the surgical robot system 1 can comprise a surgical robot 15 and one or more positioning sensors 12. In this aspect, the surgical robot 15 can comprise a display means 29 (including, for example, a display 150 shown in FIG. 10) and a housing 27. In some embodiments, a display 150 can be attached to the surgical robot 15, whereas in other embodiments, a display means 29 can be detached from the surgical robot 15, either within the surgical room 10 or in a remote location. In some embodiments, the housing 27 can comprise a robot arm 23 and an end-effectuator 30 coupled to the robot arm 23 and controlled by at least one motor 160. For example, in some embodiments, the surgical robot system 1 can include a motor assembly 155 comprising at least one motor (represented as 160 in FIG. 10). In some embodiments, the end-effectuator 30 can comprise a surgical instrument 35. In other embodiments, the end-effectuator 30 can be coupled to the surgical instrument 35. As used herein, the term "end-effectuator" is used interchangeably with the terms "end-effector" and "effectuator element." In some embodiments, the end-effectuator 30 can comprise any known structure for effecting the movement of the surgical instrument 35 in a desired manner.
In some embodiments, prior to performance of an invasive procedure, a three-dimensional ("3D") image scan can be taken of a desired surgical area of the patient 18 and sent to a computer platform in communication with the surgical robot 15 as described herein (see, for example, the platform 3400 including the computing device 3401 shown in FIG. 34). In some embodiments, a physician can then program a desired point of insertion and trajectory for the surgical instrument 35 to reach a desired anatomical target within or upon the body of the patient 18. In some embodiments, the desired point of insertion and trajectory can be planned on the 3D image scan, which, in some embodiments, can be displayed on the display means 29. In some embodiments, a physician can plan the trajectory and desired insertion point (if any) on a computed tomography scan (hereinafter referred to as "CT scan") of a patient 18. In some embodiments, the CT scan can be an isocentric C-arm type scan, an O-arm type scan, or an intraoperative CT scan as is known in the art. However, in some embodiments, any known 3D image scan can be used in accordance with the embodiments of the invention described herein.
In some embodiments, the surgical robot system 1 can comprise a local positioning system ("LPS") subassembly to track the position of the surgical instrument 35. The LPS subassembly can comprise at least one radio-frequency (RF) transmitter 120 that is coupled to or affixed to the end-effectuator 30 or the surgical instrument 35 at a desired location. In some embodiments, the at least one RF transmitter 120 can comprise a plurality of transmitters 120, such as, for example, at least three RF transmitters 120. In another embodiment, the LPS subassembly can comprise at least one RF receiver 110 configured to receive one or more RF signals produced by the at least one RF transmitter 120. In some embodiments, the at least one RF receiver 110 can comprise a plurality of RF receivers 110, such as, for example, at least three RF receivers 110. In these embodiments, the RF receivers 110 can be positioned at known locations within the room 10 where the medical procedure is to take place. In some embodiments, the RF receivers 110 can be positioned at known locations within the room 10 such that the RF receivers 110 are not coplanar within a plane that is parallel to the floor of the room 10.
In some embodiments, during use, the time of flight of an RF signal from each RF transmitter 120 of the at least one RF transmitter 120 to each RF receiver 110 of the at least one RF receiver 110 (e.g., one RF receiver, two RF receivers, three RF receivers, etc.) can be measured to calculate the position of each RF transmitter 120. Because the velocity of the RF signal is known, the time of flight measurements result in at least three distance measurements for each RF transmitter 120 (one to each RF receiver 110).
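The calculation lends itself to a brief illustration. Below is a minimal Python sketch (not the patent's implementation) of recovering a transmitter position from time-of-flight readings by linearized trilateration; the function and variable names are hypothetical, and it assumes four non-coplanar receivers and ideal one-way timing.

```python
import numpy as np

C = 299_792_458.0  # assumed propagation speed of the RF signal (m/s)

def locate_transmitter(receivers, times_of_flight):
    """Estimate a transmitter's 3D position from one-way RF times of flight.

    receivers       -- (n, 3) array of known receiver positions; n >= 4
                       non-coplanar receivers give a unique solution
    times_of_flight -- (n,) array of flight times in seconds
    """
    p = np.asarray(receivers, dtype=float)
    d = C * np.asarray(times_of_flight, dtype=float)  # one range per receiver

    # Subtracting the first sphere equation |x - p_i|^2 = d_i^2 from the
    # others linearizes the problem in the unknown position x:
    #   2 (p_0 - p_i) . x = d_i^2 - d_0^2 - |p_i|^2 + |p_0|^2
    A = 2.0 * (p[0] - p[1:])
    b = (d[1:] ** 2 - d[0] ** 2
         - np.sum(p[1:] ** 2, axis=1) + np.sum(p[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Example: four receivers at known, non-coplanar room locations.
rx = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0],
               [0.0, 5.0, 0.0], [0.0, 0.0, 3.0]])
true_pos = np.array([1.0, 2.0, 1.5])
tof = np.linalg.norm(rx - true_pos, axis=1) / C
print(locate_transmitter(rx, tof))  # ~ [1.0, 2.0, 1.5]
```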
In some embodiments, the surgical robot system 1 can comprise a control device (for example, a computer 100 having a processor and a memory coupled to the processor). In some embodiments, the processor of the control device 100 can be configured to perform time of flight calculations as described herein. Further, in some embodiments, the processor can be configured to provide a geometrical description of the location of the at least one RF transmitter 120 with respect to an operative end of the surgical instrument 35 or end-effectuator 30 that is utilized to perform or assist in performing an invasive procedure. In some further embodiments, the position of the RF transmitter 120, as well as the dimensional profile of the surgical instrument 35 or the effectuator element 30, can be displayed on a monitor (for example, on a display means 29 such as the display 150 shown in FIG. 10). In one embodiment, the end-effectuator 30 can be a tubular element (for example, a guide tube 50) that is positioned at a desired location with respect to, for example, a patient's 18 spine to facilitate the performance of a spinal surgery. In some embodiments, the guide tube 50 can be aligned with the z-axis 70 defined by a corresponding robot motor 160 or, for example, can be disposed at a selected angle relative to the z-axis 70. In either case, the processor of the control device (i.e., the computer 100) can be configured to account for the orientation of the tubular element and the position of the RF transmitter 120. As further described herein, in some embodiments, the memory of the control device (the computer 100, for example) can store software for performing the calculations and/or analyses required to perform many of the surgical method steps set forth herein.
Another embodiment of the disclosed surgical robot system 1 involves the utilization of a robot 15 that is capable of moving the end-effectuator 30 along x-, y-, and z-axes (see 66, 68, 70 in FIG. 35B). In this embodiment, the x-axis 66 can be orthogonal to the y-axis 68 and z-axis 70, the y-axis 68 can be orthogonal to the x-axis 66 and z-axis 70, and the z-axis 70 can be orthogonal to the x-axis 66 and the y-axis 68. In some embodiments, the robot 15 can be configured to effect movement of the end-effectuator 30 along one axis independently of the other axes. For example, in some embodiments, the robot 15 can cause the end-effectuator 30 to move a given distance along the x-axis 66 without causing any significant movement of the end-effectuator 30 along the y-axis 68 or z-axis 70.
In some further embodiments, the end-effectuator 30 can be configured for selective rotation about one or more of the x-axis 66, y-axis 68, and z-axis 70 (such that one or more of the Cardanic Euler angles (e.g., roll, pitch, and/or yaw) associated with the end-effectuator 30 can be selectively controlled). In some embodiments, during operation, the end-effectuator 30 and/or surgical instrument 35 can be aligned with a selected orientation axis (labeled "Z Tube" in FIG. 35B) that can be selectively varied and monitored by an agent (for example, the computer 100 and platform 3400) that can operate the surgical robot system 1. In some embodiments, selective control of the axial rotation and orientation of the end-effectuator 30 can permit performance of medical procedures with significantly improved accuracy compared to conventional robots that utilize, for example, a six-degree-of-freedom robot arm 23 comprising only rotational axes.
In some embodiments, as shown in FIG. 1, the robot arm 23 can be positioned above the body of the patient 18, with the end-effectuator 30 selectively angled relative to the z-axis toward the body of the patient 18. In this aspect, in some embodiments, the robotic surgical system 1 can comprise systems for stabilizing the robotic arm 23, the end-effectuator 30, and/or the surgical instrument 35 at their respective positions in the event of power failure. In some embodiments, the robotic arm 23, end-effectuator 30, and/or surgical instrument 35 can comprise a conventional worm-drive mechanism (not shown) coupled to the robotic arm 23, configured to effect movement of the robotic arm along the z-axis 70. In some embodiments, the system for stabilizing the robotic arm 23, end-effectuator 30, and/or surgical instrument 35 can comprise a counterbalance coupled to the robotic arm 23. In another embodiment, the means for maintaining the robotic arm 23, end-effectuator 30, and/or surgical instrument 35 can comprise a conventional brake mechanism (not shown) that is coupled to at least a portion of the robotic arm 23, such as, for example, the end-effectuator 30, and that is configured for activation in response to a loss of power or "power off" condition of the surgical robot 15.
Referring to FIG. 1, in some embodiments, the surgical robot system 1 can comprise a plurality of positioning sensors 12 configured to receive RF signals from the at least one conventional RF transmitter (not shown) located within the room 10. In some embodiments, the at least one RF transmitter 120 can be disposed on various points on the surgical robot 15 and/or on the patient 18. For example, in some embodiments, the at least one RF transmitter 120 can be attached to one or more of the housing 27, robot arm 23, end-effectuator 30, and surgical instrument 35. Some embodiments include positioning sensors 12 that comprise RF receivers 110. In some embodiments, the RF receivers 110 are in communication with a computer platform as described herein (see, for example, the platform 3400 comprising a computing device 3401 in FIG. 34) that receives the signal from the RF transmitters 120. In some embodiments, each transmitter 120 of the at least one RF transmitter 120 can transmit RF energy on a different frequency so that the identity of each transmitter 120 in the room 10 can be determined. In some embodiments, the location of the at least one RF transmitter 120, and, consequently, the objects to which the transmitters 120 are attached, are calculated by the computer (e.g., the computing device 3401 in FIG. 34) using time-of-flight processes.
In some embodiments, the computer (not shown in FIG. 1) is also in communication with the surgical robot 15. In some embodiments, a conventional processor (not shown) of the computer 100 of the computing device 3401 can be configured to effect movement of the surgical robot 15 according to a preplanned trajectory selected prior to the procedure. For example, in some embodiments, the computer 100 of the computing device 3401 can use robotic guidance software 3406 and robotic guidance data storage 3407 (shown in FIG. 34) to effect movement of the surgical robot 15.
In some embodiments, the position of the surgical instrument 35 can be dynamically updated so that the surgical robot 15 is aware of the location of the surgical instrument 35 at all times during the procedure. Consequently, in some embodiments, the surgical robot 15 can move the surgical instrument 35 to the desired position quickly, with minimal damage to the patient 18, and without any further assistance from a physician (unless the physician so desires). In some further embodiments, the surgical robot 15 can be configured to correct the path of the surgical instrument 35 if the surgical instrument 35 strays from the selected, preplanned trajectory.
In some embodiments, the surgical robot 15 can be configured to permit stoppage, modification, and/or manual control of the movement of the end-effectuator 30 and/or surgical instrument 35. Thus, in use, in some embodiments, an agent (e.g., a physician or other user) that can operate the system 1 has the option to stop, modify, or manually control the autonomous movement of the end-effectuator 30 and/or surgical instrument 35. Further, in some embodiments, tolerance controls can be preprogrammed into the surgical robot 15 and/or processor of the computer platform 3400 (such that the movement of the end-effectuator 30 and/or surgical instrument 35 is adjusted in response to specified conditions being met). For example, in some embodiments, if the surgical robot 15 cannot detect the position of the surgical instrument 35 because of a malfunction in the at least one RF transmitter 120, then the surgical robot 15 can be configured to stop movement of the end-effectuator 30 and/or surgical instrument 35. In some embodiments, if the surgical robot 15 detects a resistance, such as a force resistance or a torque resistance above a tolerance level, then the surgical robot 15 can be configured to stop movement of the end-effectuator 30 and/or surgical instrument 35.
In some embodiments, the computer 100 for use in the system (for example, represented by the computing device 3401), as further described herein, can be located within the surgical robot 15, or, alternatively, in another location within the surgical room 10 or in a remote location. In some embodiments, the computer 100 can be positioned in operative communication with the positioning sensors 12 and the surgical robot 15.
In some further embodiments, the surgical robot 15 can also be used with existing conventional guidance systems. Thus, alternative conventional guidance systems beyond those specifically disclosed herein are within the scope and spirit of the invention. For instance, a conventional optical tracking system 3417 for tracking the location of the surgical device, or a commercially available infrared optical tracking system 3417, such as Optotrak® (a registered trademark of Northern Digital Inc., Waterloo, Ontario, Canada), can be used to track movement of the patient 18 and the location of the robot's base 25 and/or intermediate axis, and used with the surgical robot system 1. In some embodiments in which the surgical robot system 1 comprises a conventional infrared optical tracking system 3417, the surgical robot system 1 can comprise conventional optical markers attached to selected locations on the end-effectuator 30 and/or the surgical instrument 35 that are configured to emit or reflect light. In some embodiments, the light emitted from and/or reflected by the markers can be read by cameras and/or optical sensors, and the location of the object can be calculated through triangulation methods (such as stereo-photogrammetry).
Referring now to FIG. 2, it is seen that, in some embodiments, the surgical robot 15 can comprise a base 25 connected to wheels 31. The size and mobility of these embodiments can enable the surgical robot to be readily moved from patient to patient and room to room as desired. As shown, in some embodiments, the surgical robot 15 can further comprise a case 40 that is slidably attached to the base 25 such that the case 40 can slide up and down along the z-axis 70 substantially perpendicular to the surface on which the base 25 sits. In some embodiments, the surgical robot 15 can include a display means 29 and a housing 27 which contains the robot arm 23.
As described earlier, in some embodiments the end-effectuator 30 can comprise a surgical instrument 35, whereas in other embodiments, the end-effectuator 30 can be coupled to the surgical instrument 35. In some embodiments, the arm 23 can be connected to the end-effectuator 30, with the surgical instrument 35 being removably attached to the end-effectuator 30.
Referring now to FIGS. 2, 3A-3B, 4, 5A-5B, 6, 7, and 8A-8B, in some embodiments, the effectuator element 30 can include an outer surface 30d, and can comprise a distal end 30a defining a beveled leading edge 30b and a non-beveled leading edge 30c. In some embodiments, the surgical instrument 35 can be any known conventional instrument, device, hardware component, and/or attachment that is used during performance of an invasive or non-invasive medical procedure (including surgical, therapeutic, and diagnostic procedures). For example and without limitation, in some embodiments, the surgical instrument 35 can be embodied in or can comprise a needle 7405, 7410, a conventional probe, a conventional screw, a conventional drill, a conventional tap, a conventional catheter, a conventional scalpel, forceps, or the like. In addition or in the alternative, in some embodiments, the surgical instrument 35 can be a biological delivery device, such as, for example and without limitation, a conventional syringe, which can distribute biologically acting compounds throughout the body of a patient 18. In some embodiments, the surgical instrument 35 can comprise a guide tube 50 (also referred to herein as a "Z-tube 50") that defines a central bore configured for receipt of one or more additional surgical instruments 35.
In some embodiments, the surgical robot 15 is moveable in a plurality of axes (for instance, the x-axis 66, y-axis 68, and z-axis 70) in order to improve the ability to accurately and precisely reach a target location. Some embodiments include a robot 15 that moves on a Cartesian positioning system; that is, movements in different axes can occur relatively independently of one another instead of at the end of a series of joints.
Referring now to FIGS. 3A and 3B, the movement of the case 40 relative to the base 25 of the surgical robot 15 is represented as a change of height of the system 1 and the position of the case 40 with respect to the base 25. As illustrated, in some embodiments, the case 40 can be configured to be raised and lowered relative to the base 25 along the z-axis. Some embodiments include a housing 27 that can be attached to the case 40 and be configured to move in the z-direction (defined by z-frame 72) with the case 40 when the case 40 is raised and lowered. Consequently, in some embodiments, the arm 23, the end-effectuator 30, and the surgical instrument 35 can be configured to move with the case 40 as the case 40 is raised and lowered relative to the base 25.
In a further embodiment, referring now to FIG. 4, the housing 27 can be slidably attached to the case 40 so that it can extend and retract along the x-axis 66 relative to the case 40 and substantially perpendicularly to the direction the case 40 moves relative to the base 25. Consequently, in some embodiments, the robot arm 23, the end-effectuator 30, and the surgical instrument 35 can be configured to move with the housing 27 as the housing 27 is extended and retracted relative to the case 40.
Referring now to FIGS. 5A and 5B, the extension of the arm 23 along the y-axis 68 is shown. In some embodiments, the robot arm 23 can be extendable along the y-axis 68 relative to the case 40, the base 25, and the housing 27. Consequently, in some embodiments, the end-effectuator 30 and the surgical instrument 35 can be configured to move with the arm 23 as the arm 23 is extended and retracted relative to the housing 27. In some embodiments, the arm 23 can be attached to a low profile rail system (not shown) which is encased by the housing 27.
Referring now to FIGS. 6, 7, and 8A-8B, the movement of the end-effectuator 30 is shown. FIG. 6 shows an embodiment of an end-effectuator 30 that is configured to rotate about the y-axis 68, performing a rotation having a specific roll 62. FIG. 7 shows an embodiment of an end-effectuator 30 that is configured to rotate about the x-axis 66, performing a rotation having a specific pitch 60. FIGS. 8A-8B show an embodiment of an end-effectuator 30 that is configured to raise and lower the surgical instrument 35 along a substantially vertical axis, which can be a secondary movable axis 64, referred to as the "Z-tube axis 64." In some embodiments, the orientation of the guide tube 50 can be initially aligned with the z-axis 70, but such orientation can change in response to changes in roll 62 and/or pitch 60.
FIG. 9 shows a system diagram of the 3D positioning sensors 110, the computer 100, and the RF transmitters 120 in accordance with some embodiments of the invention. As shown, the computer 100 is in communication with the positioning sensors 110. In some embodiments, during operation, the RF transmitters 120 are attached to various points on the surgical robot 15. In some embodiments, the RF transmitters 120 can also be attached to various points on or around an anatomical target of a patient 18. In some embodiments, the computer 100 can be configured to send a signal to the RF transmitters 120, prompting the RF transmitters 120 to transmit RF signals that are read by the positioning sensors 110. In some embodiments, the computer 100 can be coupled to the RF transmitters 120 using any conventional communication means, whether wired or wireless. In some embodiments, the positioning sensors 110 can be in communication with the computer 100, which can be configured to calculate the locations of all the RF transmitters 120 based on time-of-flight information received from the positioning sensors 110. In some embodiments, the computer 100 can be configured to dynamically update the calculated location of the surgical instrument 35 and/or end-effectuator 30 being used in the procedure, which can be displayed to the agent.
A system diagram of the surgical robot system 1, having a computer 100, a display means 29 comprising a display 150, a user input 170, and motors 160, is provided in FIG. 10 in accordance with some embodiments. In some embodiments, the motors 160 can be installed in the surgical robot 15 and control the movement of the end-effectuator 30 and/or surgical instrument 35 as described above. In some embodiments, the computer 100 can be configured to dynamically update the location of the surgical instrument 35 being used in the procedure, and can be configured to send appropriate signals to the motors 160 such that the surgical robot 15 has a corresponding response to the information received by the computer 100. For example, in some embodiments, in response to information received by the computer 100, the computer 100 can be configured to prompt the motors 160 to move the surgical instrument 35 along a preplanned trajectory.
In some embodiments, prior to performance of a medical procedure, such as, for example, an invasive surgical procedure, the user input 170 can be used to plan the trajectory for a desired navigation. After the medical procedure has commenced, if changes in the trajectory and/or movement of the end-effectuator 30 and/or surgical instrument 35 are desired, a user can use the user input 170 to input the desired changes, and the computer 100 can be configured to transmit corresponding signals to the motors 160 in response to the user input 170.
In some embodiments, the motors 160 can be or can comprise conventional pulse motors. In this aspect, in some embodiments, the pulse motors can be in a conventional direct drive configuration or a belt drive and pulley combination attached to the surgical instrument 35. Alternatively, in other embodiments, the motors 160 can be conventional pulse motors that are attached to a conventional belt drive rack-and-pinion system or equivalent conventional power transmission component.
In some embodiments, the use of conventional linear pulse motors within the surgical robot 15 can permit establishment of a non-rigid position for the end-effectuator 30 and/or surgical instrument 35. Thus, in some embodiments, the end-effectuator 30 and/or surgical instrument 35 will not be fixed in a completely rigid position; rather, the end-effectuator 30 and/or the surgical instrument 35 can be configured such that an agent (e.g., a surgeon or other user) can overcome the x-axis 66 and y-axis 68 and force the end-effectuator 30 and/or surgical instrument 35 from its current position. For example, in some embodiments, the amount of force necessary to overcome such axes can be adjusted and configured automatically or by an agent. In some embodiments, the surgical robot 15 can comprise circuitry configured to monitor one or more of: (a) the position of the robot arm 23, the end-effectuator 30, and/or the surgical instrument 35 along the x-axis 66, y-axis 68, and z-axis 70; (b) the rotational position (e.g., roll 62 and pitch 60) of the robot arm 23, the end-effectuator 30, and/or the surgical instrument 35 relative to the x- (66), y- (68), and z- (70) axes; and (c) the position of the end-effectuator 30 and/or the surgical instrument 35 along the travel of the re-orientable axis that is parallel at all times to the end-effectuator 30 and surgical instrument 35 (the Z-tube axis 64).
In one embodiment, circuitry for monitoring the positions of the x-axis 66, y-axis 68, z-axis 70, Z-tube axis 64, roll 62, and/or pitch 60 can comprise relative or absolute conventional encoder units (also referred to as encoders) embedded within or functionally coupled to conventional actuators and/or bearings of at least one of the motors 160. Optionally, in some embodiments, the circuitry of the surgical robot 15 can be configured to provide auditory, visual, and/or tactile feedback to the surgeon or other user when the desired amount of positional tolerance (e.g., rotational tolerance, translational tolerance, a combination thereof, or the like) for the trajectory has been exceeded. In some embodiments, the positional tolerance can be configurable and defined, for example, in units of degrees and/or millimeters.
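As an illustration of the tolerance monitoring described above, the following minimal Python sketch compares an encoder-derived pose against a planned trajectory; the axis names, tolerance values, and dictionary pose representation are assumptions for the example, with only the degree/millimeter units taken from the text.

```python
# Illustrative per-axis tolerances (translational axes in mm,
# rotational axes in degrees), not values from the text.
TOLERANCES = {"x": 1.0, "y": 1.0, "z": 1.0,   # mm
              "roll": 1.0, "pitch": 1.0}       # degrees

def axes_out_of_tolerance(planned, measured):
    """Return the axes whose encoder-derived position deviates from the
    planned trajectory by more than the configured tolerance."""
    return [axis for axis, tol in TOLERANCES.items()
            if abs(measured[axis] - planned[axis]) > tol]

planned = {"x": 10.0, "y": 5.0, "z": 120.0, "roll": 0.0, "pitch": 15.0}
measured = {"x": 10.2, "y": 5.1, "z": 121.5, "roll": 0.3, "pitch": 15.2}
exceeded = axes_out_of_tolerance(planned, measured)
if exceeded:
    # A real system would also trigger the auditory/tactile feedback.
    print("Warning: Off Trajectory -", ", ".join(exceeded))  # prints "z"
```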
In some embodiments, the robot 15 moves into a selected position, ready for the surgeon to deliver a selected surgical instrument 35, such as, for example and without limitation, a conventional screw, a biopsy needle 8110, and the like. In some embodiments, as the surgeon works, if the surgeon inadvertently forces the end-effectuator 30 and/or surgical instrument 35 off of the desired trajectory, then the system 1 can be configured to provide an audible warning and/or a visual warning. For example, in some embodiments, the system 1 can produce audible beeps and/or display a warning message on the display means 29, such as "Warning: Off Trajectory," while also displaying the axes for which an acceptable tolerance has been exceeded.
In some embodiments, in addition to, or in place of, the audible warning, a light illumination may be directed to the end-effectuator 30, the guide tube 50, the operation area (i.e., the surgical field 17) of the patient 18, or a combination of these regions. For example, some embodiments include at least one visual indication 900 capable of illuminating a surgical field 17 of a patient 18. Some embodiments include at least one visual indication 900 capable of indicating a target lock by projecting an illumination on a surgical field 17. In some embodiments, the system 1 can provide feedback to the user regarding whether the robot 15 is locked on target. In some other embodiments, the system 1 can provide an alert to the user regarding whether at least one marker 720 is blocked, or whether the system 1 is actively seeking one or more markers 720.
In some embodiments, the visual indication 900 can be projected by one or more conventional light emitting diodes mounted on or near the robot end-effectuator 30. In some embodiments, the visual indication can comprise lights projected on the surgical field 17 including a color indicative of the current situation. In some embodiments, a green projected light could represent a locked-on-target situation, whereas in some embodiments, a red illumination could indicate a trajectory error or obscured markers 720. In some other embodiments, a yellow illumination could indicate that the system 1 is actively seeking one or more markers 720.
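The color coding described above reduces to a small lookup; in the Python sketch below, the status names and mapping structure are hypothetical, and only the green/red/yellow meanings come from the text.

```python
# Hypothetical status keys; only the color semantics follow the text.
STATUS_COLORS = {
    "locked_on_target": "green",    # target lock indication
    "trajectory_error": "red",
    "markers_obscured": "red",
    "seeking_markers":  "yellow",
}

def indicator_color(status: str) -> str:
    """Color projected onto the surgical field 17 for the given state."""
    return STATUS_COLORS[status]

print(indicator_color("seeking_markers"))  # yellow
```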
In some embodiments, if the surgeon attempts to exceed the acceptable tolerances, the robot 15 can be configured to provide mechanical resistance ("push back" or haptic feedback) to the movement of the end-effectuator 30 and/or surgical instrument 35, thereby promoting movement of the end-effectuator 30 and/or surgical instrument 35 back to the correct, selected orientation. In some embodiments, when the surgeon then begins to correct the improper position, the robot 15 can be configured to substantially immediately return the end-effectuator 30 and/or surgical instrument 35 back to the desired trajectory, at which time the audible and visual warnings and alerts can be configured to cease. For example, in some embodiments, the visual warning could include a visual indication 900 that may include a green light if no tolerances have been exceeded, or a red light if tolerances are about to be, or have been, exceeded.
As one will appreciate, a conventional worm-drive system would be absolutely rigid, and a robot 15 having such a worm-drive system would be unable to be passively moved (without breaking the robot 15) no matter how hard the surgeon pushed. Furthermore, a completely rigid articulation system can be inherently unsafe to a patient 18. For example, if such a robot 15 were moving toward the patient 18 and inadvertently collided with tissues, then these tissues could be damaged. Although conventional sensors can be placed on the surface of such a robot 15 to compensate for these risks, such sensors can add considerable complexity to the overall system 1 and would be difficult to operate in a fail-safe mode. In contrast, during use of the robot 15 described herein, if the end-effectuator 30 and/or surgical instrument 35 inadvertently collides with tissues of the patient 18, the collision would occur with a more tolerable force that would be unlikely to damage such tissues. Additionally, in some embodiments, auditory and/or visual feedback as described above can be provided to indicate an increase in the current required to overcome the obstacle. Furthermore, in some embodiments, the end-effectuator 30 of the robot 15 can be configured to displace itself (move away) from the inadvertently contacted tissue if a threshold required motor 160 current is encountered. In some embodiments, this threshold could be configured (by a control component, for example) for each axis such that the moderate forces associated with engagement between the tissue and the end-effectuator 30 can be recognized and/or avoided.
In some embodiments, the amount of rigidity associated with the positioning and orientation of the end-effectuator 30 and/or the surgical instrument 35 can be selectively varied. For example, in some embodiments, the robot 15 can be configured to shift between a high-rigidity mode and a low-rigidity mode. In some embodiments, the robot 15 can be programmed so that it automatically shifts to the low-rigidity mode as the end-effectuator 30 and surgical instrument 35 are shifted from one trajectory to another, or from a starting position as they approach a target trajectory and/or target position. Moreover, in some embodiments, once the end-effectuator 30 and/or surgical instrument 35 is within a selected distance of the target trajectory and/or target position, such as, for example, within about 1° and about 1 mm of the target, the robot 15 can be configured to shift to the high-rigidity mode. In some embodiments, this mechanism may improve safety because the robot 15 would be unlikely to cause injury if it inadvertently collided with the patient 18 while in the low-rigidity mode.
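A minimal sketch of this mode switching follows; the function name and return values are hypothetical, and only the approximate 1°/1 mm thresholds come from the text.

```python
def select_rigidity_mode(angle_error_deg, distance_error_mm,
                         angle_thresh_deg=1.0, distance_thresh_mm=1.0):
    """Shift to the high-rigidity mode only once the end-effectuator is
    within the selected angular and translational distance of the target
    trajectory; otherwise stay compliant (low rigidity) while approaching."""
    near_target = (angle_error_deg <= angle_thresh_deg
                   and distance_error_mm <= distance_thresh_mm)
    return "high" if near_target else "low"

print(select_rigidity_mode(5.0, 12.0))  # "low"  - still approaching
print(select_rigidity_mode(0.4, 0.8))   # "high" - locked near target
```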
Some embodiments include a robot 15 that can be configured to effect movement of the end-effectuator 30 and/or surgical instrument 35 in a selected sequence of distinct movements. In some embodiments, if, during movement of the end-effectuator 30 and/or surgical instrument 35 from one trajectory to another trajectory, the x-axis 66, y-axis 68, roll 62, and pitch 60 orientations are all changed simultaneously, the speed of movement of the end-effectuator 30 can be increased. Consequently, because of the range of positions through which the end-effectuator 30 travels, the likelihood of a collision with the tissue of the patient 18 can also be increased. Hence, in some embodiments, the robot 15 can be configured to effect movement of the end-effectuator 30 and/or surgical instrument 35 such that the position of the end-effectuator 30 and/or surgical instrument 35 within the x-axis 66 and the y-axis 68 is adjusted before the roll 62 and pitch 60 of the end-effectuator 30 and/or surgical instrument 35 are adjusted. In some alternative embodiments, the robot 15 can be configured to effect movement of the end-effectuator 30 and/or surgical instrument 35 so that the roll 62 and pitch 60 are first shifted to 0°, the position of the end-effectuator 30 and/or surgical instrument 35 within the x-axis 66 and the y-axis 68 is then adjusted, and the roll 62 and pitch 60 of the end-effectuator 30 and/or surgical instrument 35 are adjusted last.
Some embodiments include a robot 15 that can be optionally configured to ensure that the end-effectuator 30 and/or surgical instrument 35 are moved vertically along the z-axis 70 (away from the patient 18) by a selected amount before a change in the position and/or trajectory of the end-effectuator 30 and/or surgical instrument 35 is effected. For example, in some embodiments, when an agent (for example, a surgeon or other user, or equipment) changes the trajectory of the end-effectuator 30 and/or surgical instrument 35 from a first trajectory to a second trajectory, the robot 15 can be configured to vertically displace the end-effectuator 30 and/or surgical instrument 35 from the body of the patient 18 along the z-axis 70 by the selected amount (while adjusting x-axis 66 and y-axis 68 configurations to remain on the first trajectory vector, for example), and then effect the change in position and/or orientation of the end-effectuator 30 and/or surgical instrument 35. This ensures that the end-effectuator 30 and/or surgical instrument 35 does not move laterally while embedded within the tissue of the patient 18. Optionally, in some embodiments, the robot 15 can be configured to produce a warning message that seeks confirmation from the agent (for example, a surgeon or other user, or equipment) that it is safe to proceed with a change in the trajectory of the end-effectuator 30 and/or surgical instrument 35 without first displacing the end-effectuator 30 and/or surgical instrument 35 along the z-axis.
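The sequencing described in the last two paragraphs can be sketched as an ordered move plan. In the hypothetical Python sketch below, the retraction distance and the axis-value representation are assumptions; only the ordering (retract along z, translate in x/y, re-orient, then descend) reflects the text.

```python
def plan_trajectory_change(current, target, z_clearance_mm=20.0):
    """Order the axis moves for a trajectory change. The 20 mm clearance
    is an assumed placeholder; the text says only 'a selected amount'."""
    return [
        ("z", current["z"] + z_clearance_mm),  # lift away from the patient first
        ("x", target["x"]),                    # translate while safely clear
        ("y", target["y"]),
        ("roll", target["roll"]),              # re-orient only after x/y
        ("pitch", target["pitch"]),
        ("z", target["z"]),                    # descend onto the new trajectory
    ]

current = {"x": 40.0, "y": 25.0, "z": 110.0, "roll": 12.0, "pitch": 8.0}
target = {"x": 55.0, "y": 20.0, "z": 112.0, "roll": -5.0, "pitch": 10.0}
for axis, value in plan_trajectory_change(current, target):
    print(f"move {axis} -> {value}")
```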
In some embodiments, at least one conventional force sensor (not shown) can be coupled to the end-effectuator 30 and/or surgical instrument 35 such that the at least one force sensor receives forces applied along the orientation axis (Z-tube axis 64) to the surgical instrument 35. In some embodiments, the at least one force sensor can be configured to produce a digital signal. In some embodiments, for example, the digital signal can be indicative of the force that is applied in the direction of the Z-tube axis 64 to the surgical instrument 35 by the body of the patient 18 as the surgical instrument 35 advances into the tissue of the patient 18. In some embodiments, the at least one force sensor can be a small conventional uniaxial load cell based on a conventional strain gauge mechanism. In some embodiments, the uniaxial load cell can be coupled to, for example, analog-to-digital filtering to supply a continuous digital data stream to the system 1. Optionally, in some embodiments, the at least one force sensor can be configured to substantially continuously produce signals indicative of the force that is currently being applied to the surgical instrument 35. In some embodiments, the surgical instrument 35 can be advanced into the tissue of the patient 18 by lowering the z-axis 70 while the position of the end-effectuator 30 and/or surgical instrument 35 along the x-axis 66 and y-axis 68 is adjusted such that alignment with the selected trajectory vector is substantially maintained. Furthermore, in some embodiments, the roll 62 and pitch 60 orientations can remain constant or self-adjust during movement of the x- (66), y- (68), and z- (70) axes such that the surgical instrument 35 remains oriented along the selected trajectory vector. In some embodiments, the position of the end-effectuator 30 along the z-axis 70 can be locked at a selected mid-range position (spaced a selected distance from the patient 18) as the surgical instrument 35 advances into the tissue of the patient 18. In some embodiments, the stiffness of the end-effectuator 30 and/or the surgical instrument 35 can be set at a selected level as further described herein. For example, in some embodiments, the Z-tube axis 64 position of the end-effectuator 30 and/or the surgical instrument 35 can be coupled to a conventional mechanical lock (not shown) configured to impart desired longitudinal stiffness characteristics to the end-effectuator 30 and/or surgical instrument 35. In some embodiments, if the end-effectuator 30 and/or surgical instrument 35 lacks sufficient longitudinal stiffness, then the counterforce applied by the tissue of the patient 18 during penetration of the surgical instrument 35 can oppose the direction of advancement of the surgical instrument 35 such that the surgical instrument 35 cannot advance along the selected trajectory vector. In other words, as the z-axis 70 advances downwards, the Z-tube axis 64 can be forced up, and there can be no net advancement of the surgical instrument 35. In some embodiments, the at least one force sensor can permit an agent (for example, a surgeon or other user, or equipment) to determine (based on a sudden increase in the level of applied force monitored by the force sensor at the end-effectuator 30 and/or the surgical instrument 35) when the surgical instrument 35 has encountered a bone or other specific structure within the body of the patient 18.
In some alternative embodiments, the orientation angle of the end-effectuator 30 and/or surgical instrument 35 and the x-axis 66 and y-axis 68 can be configured to align the Z-tube axis 64 with the desired trajectory vector at a fully retracted Z-tube position, while a z-axis 70 position is set in which the distal tip of the surgical instrument 35 is poised to enter tissue. In this configuration, in some embodiments, the end-effectuator 30 can be positioned in a manner such that it would move exactly or substantially exactly down the trajectory vector if it were advanced only along the guide tube 50. In such a scenario, in some embodiments, advancing the Z-tube axis 64 can cause the guide tube 50 to enter into tissue, and an agent (a surgeon or other user, equipment, etc.) can monitor the change in force from the load sensor. Advancement can continue until a sudden increase in applied force is detected at the time the surgical instrument 35 contacts bone.
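A minimal sketch of this force-monitoring step follows; the jump threshold and sample values are illustrative assumptions, with only the idea of flagging a sudden force increase as bone contact taken from the text.

```python
def bone_contact_index(force_samples_newtons, jump_threshold=5.0):
    """Return the index of the first sudden increase in the monitored
    axial (Z-tube axis 64) force, taken here as instrument-bone contact.
    The per-sample jump threshold is an illustrative value."""
    for i in range(1, len(force_samples_newtons)):
        if force_samples_newtons[i] - force_samples_newtons[i - 1] > jump_threshold:
            return i
    return None  # no contact detected in this window

stream = [0.5, 0.7, 1.1, 1.4, 9.8, 10.2]  # soft tissue, then bone
print(bone_contact_index(stream))          # 4
```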
In some embodiments, the robot 15 can be configured to deactivate the one or more motors 160 that advance the Z-tube axis 64 such that the end-effectuator 30 and/or the surgical instrument 35 can move freely in the Z-tube axis 64 direction while the position of the end-effectuator 30 and/or the surgical instrument 35 continues to be monitored. In some embodiments, the surgeon can then push the end-effectuator 30 down along the Z-tube axis 64 (which coincides with the desired trajectory vector) by hand. In some embodiments, if the end-effectuator 30 position has been forced out of alignment with the trajectory vector, the position of the surgical instrument 35 can be corrected by adjustment along the x- (66) and/or y- (68) axes and/or in the roll 62 and/or pitch 60 directions. In some embodiments, when the motor 160 associated with the Z-tube 50 movement of the surgical instrument 35 is deactivated, the agent (for example, a surgeon or other user, or equipment) can manually force the surgical instrument 35 to advance until a tactile sense indicates that the surgical instrument 35 has contacted bone (or another known region of the body).
In some further embodiments, the robotic surgical system 1 can comprise a plurality of conventional tracking markers 720 configured to track the movement of the robot arm 23, the end-effectuator 30, and/or the surgical instrument 35 in three dimensions. It should be appreciated that three-dimensional positional information from the tracking markers 720 can be used in conjunction with the one-dimensional linear positional information from absolute or relative conventional linear encoders on each axis of the robot 15 to maintain a high degree of accuracy. In some embodiments, the plurality of tracking markers 720 can be mounted (or otherwise secured) on an outer surface of the robot 15, such as, for example and without limitation, on the base 25 of the robot 15 or the robot arm 23. In some embodiments, the plurality of tracking markers 720 can be configured to track the movement of the robot arm 23, the end-effectuator 30, and/or the surgical instrument 35. In some embodiments, the computer 100 can utilize the tracking information to calculate the orientation and coordinates of the distal tip 30a of the surgical instrument 35 based on encoder counts along the x-axis 66, y-axis 68, z-axis 70, the Z-tube axis 64, and the roll 62 and pitch 60 axes. Further, in some embodiments, the plurality of tracking markers 720 can be positioned on the base 25 of the robot 15 spaced from the surgical field 17 to reduce the likelihood of being obscured by the surgeon, surgical tools, or other parts of the robot 15. In some embodiments, at least one tracking marker 720 of the plurality of tracking markers 720 can be mounted or otherwise secured to the end-effectuator 30. In some embodiments, the positioning of one or more tracking markers 720 on the end-effectuator 30 can maximize the accuracy of the positional measurements by serving to check or verify the end-effectuator 30 position (calculated from the positional information from the markers on the base 25 of the robot 15 and the encoder counts of the x- (66), y- (68), roll 62, pitch 60, and Z-tube 64 axes).
In some further embodiments, at least one optical marker of the plurality of optical tracking markers 720 can be positioned on the robot 15 between the base 25 of the robot 15 and the end-effectuator 30, instead of, or in addition to, the markers 720 on the base 25 of the robot 15 (see FIG. 16). In some embodiments, the at least one tracking marker 720 can be mounted to a portion of the robot 15 that effects movement of the end-effectuator 30 and/or surgical instrument 35 along the x-axis, to enable the tracking marker 720 to move along the x-axis 66 as the end-effectuator 30 and surgical instrument 35 move along the x-axis 66. The placement of the tracking markers 720 in this way can reduce the likelihood of a surgeon blocking the tracking marker 720 from the cameras or detection device, or of the tracking marker 720 becoming an obstruction to surgery. In certain embodiments, because of the high accuracy in calculating the orientation and position of the end-effectuator 30 based on the tracking marker 720 outputs and/or encoder counts from each axis, it can be possible to very accurately determine the position of the end-effectuator 30. For example, in some embodiments, without requiring knowledge of the counts of axis encoders for the z-axis 70, which is between the x-axis 66 and the base 25, knowing only the position of the markers 720 on the x-axis 66 and the counts of encoders on the y- (68), roll 62, pitch 60, and Z-tube 64 axes can enable computation of the position of the end-effectuator 30. In some embodiments, the placement of markers 720 on any intermediate axis of the robot 15 can permit the exact position of the end-effectuator 30 to be calculated based on the location of such markers 720 and counts of encoders on axes (66, 62, 60, 64) between the markers 720 and the end-effectuator 30. In some embodiments, from the configuration of the robot 15 (see, for example, FIG. 2), the order of axes from the base 25 to the end-effectuator 30 is z- (70), then x- (66), then y- (68), then roll 62, then pitch 60, then Z-tube 64. Therefore, for example, within embodiments in which tracking markers 720 are placed on the housing 27 of the robot 15 that moves with the roll 62 axis, the locations of such tracking markers 720 and the encoder counts of the pitch 60 and Z-tube 64 axes can be sufficient to calculate the end-effectuator 30 position.
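The position computation from an intermediate-axis marker plus distal encoder counts can be illustrated as a chain of homogeneous transforms. The Python sketch below is a structural illustration only: link offsets are omitted, encoder counts are assumed to be pre-converted to millimeters and degrees, and the function names are hypothetical; only the axis order and the roll-about-y/pitch-about-x conventions follow the text.

```python
import numpy as np

def trans(x=0.0, y=0.0, z=0.0):
    """Homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = (x, y, z)
    return T

def rot(axis, deg):
    """Homogeneous rotation; roll 62 is about the y-axis 68 and
    pitch 60 is about the x-axis 66, per the text."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    T = np.eye(4)
    i, j = {"x": (1, 2), "y": (2, 0)}[axis]
    T[i, i] = T[j, j] = c
    T[i, j], T[j, i] = -s, s
    return T

def end_effectuator_pose(T_markers, y_mm, roll_deg, pitch_deg, z_tube_mm):
    """Pose of the end-effectuator 30 from the tracked pose of markers 720
    riding on the x-axis 66 carriage, composed with the encoder-derived
    transforms of the distal axes (y, roll, pitch, Z-tube)."""
    return (T_markers @ trans(y=y_mm) @ rot("y", roll_deg)
            @ rot("x", pitch_deg) @ trans(z=z_tube_mm))

pose = end_effectuator_pose(np.eye(4), y_mm=40.0, roll_deg=10.0,
                            pitch_deg=-5.0, z_tube_mm=25.0)
print(np.round(pose[:3, 3], 2))  # tip position in the marker frame
```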
In some embodiments, when the surgical instrument 35 is advanced into the tissue of the patient 18 with the assistance of a guide tube 50, the surgical instrument 35 can comprise a stop mechanism 52 that is configured to prevent the surgical instrument 35 from advancing when it reaches a predetermined amount of protrusion (see, for example, FIGS. 17A-B). In some embodiments, by knowing the lengths of the guide tube 50 and the surgical instrument 35, the distance between the respective ends of the surgical instrument 35, and the location where the stop mechanism 52 is attached, it is possible to determine the maximum distance past the end of the guide tube 50 that the surgical instrument 35 can protrude.
In some embodiments, it can be desirable to monitor not just the maximum protrusion distance of the surgical instrument 35, but also the actual protrusion distance at any instant during the insertion process. Therefore, in some embodiments, the robot 15 can substantially continuously monitor the protrusion distance, and in some embodiments, the distance can be displayed on a display (such as the display means 29). In some embodiments, the protrusion distance can be substantially continuously monitored using a spring-loaded plunger 54 including a spring-loaded mechanism 55a and a sensor pad 55b that has a coupled wiper 56 (see, for example, FIG. 17B). In some embodiments, the stop mechanism 52 on the surgical instrument 35 can be configured to contact the spring-loaded mechanism 55a well before it encounters the end of the guide tube 50. In some embodiments, when the wiper 56 moves across the position sensor pad 55b, its linear position is sampled, thereby permitting calculation of the distance by which the surgical instrument 35 protrudes past the end of the guide tube 50 substantially in real time. In some embodiments, any conventional linear encoding mechanism can be used to monitor the plunger's depth of depression and transmit that information to the computer 100 as further described herein.
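Both the maximum and the instantaneous protrusion calculations described above reduce to simple arithmetic, sketched below in Python; the calibration constants and dimensions are hypothetical placeholders.

```python
def protrusion_mm(wiper_reading_counts, wiper_zero_counts, mm_per_count=0.05):
    """Convert the sampled wiper 56 position on the sensor pad 55b into the
    instantaneous protrusion of the instrument past the end of the guide
    tube 50. The zero reference and counts-to-mm scale are hypothetical
    calibration values."""
    return (wiper_reading_counts - wiper_zero_counts) * mm_per_count

def max_protrusion_mm(stop_to_tip_mm, guide_tube_length_mm):
    """Maximum protrusion fixed by the stop mechanism 52: once the stop
    seats against the guide tube entry, the tip can extend no further than
    the stop-to-tip distance minus the tube length."""
    return stop_to_tip_mm - guide_tube_length_mm

print(max_protrusion_mm(stop_to_tip_mm=180.0, guide_tube_length_mm=150.0))  # 30.0
print(protrusion_mm(wiper_reading_counts=340, wiper_zero_counts=100))       # 12.0
```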
Some embodiments include instruments that enable the stop on a drill bit 42 to be manually adjusted with reference to markings 44 on the drill bit 42. For example, FIGS. 17C-17E depict tools for manually adjusting a drill stop 46 with reference to drill bit markings 44 in accordance with one embodiment of the invention. As shown, in some embodiments, the drill bit 42 can include release mechanisms 48 on each end of the drill stop 46. In some embodiments, if the release 48 on one end of the drill stop 46 is pulled, it is possible to move the drill stop 46 up the shaft of the drill bit 42. In some embodiments, if the release 48 on the other end of the drill stop 46 is pulled, it is possible to move the drill stop 46 down the shaft (see the direction of movement in FIGS. 17D and 17E). In some embodiments, if neither release mechanism 48 is pulled, the drill stop 46 will not move in either direction, even if bumped.
Some embodiments include the ability to lock and hold the drill bit 42 in a set position relative to the tube 50 in which it is housed. For example, in some embodiments, the drill bit 42 can be locked by locking the drill stop 46 relative to the tube 50 using a locking mechanism. FIGS. 17F-17J illustrate tools for locking and holding a drill bit 42 in a set position in accordance with one embodiment of the invention. In some embodiments, the locking mechanism 49 shown in FIG. 17H can comprise two clam shells 49 (shown in FIG. 17F). In some embodiments, a drill bit 42 can be locked into position by assembling the clam shells around the drill stop 46 (shown in FIG. 17G). This feature allows the user to lock the drill bit 42 in a position such that the tip slightly protrudes past the end of the tube 50 (see FIGS. 17I and 17J). In this position, the user can force the tube 50 to penetrate through soft tissues and contact bone (for example, during a percutaneous spine screw insertion).
In some further embodiments, the end-effectuator 30 can be configured not to block the tracking optical markers 720 or interfere with the surgeon. For example, in some embodiments, the end-effectuator 30 can comprise a clearance mechanism 33 including an actuator 33a that permits this configuration, as depicted in FIGS. 18A and 18B. As shown, the guide tube 50 can be secured within a housing of the end-effectuator 30 with two shafts 32. In some embodiments, as the shafts 32 move relative to one another, due to a parallelogram effect of the clearance mechanism 33, the position of the guide tube 50 can mimic the position of the end-effectuator 30 (see FIG. 18B).
In applications such as cervical or lumbar fusion surgery, it can be beneficial to apply distraction or compression across one or more levels of the spine (anteriorly or posteriorly) before locking hardware in place. In some embodiments, the end-effectuator 30 can comprise an attachment element 37 that is configured to apply such forces (see, for example, FIGS. 19A-B). In some embodiments, the attachment element 37 can be configured for coupling to the end-effectuator 30 at substantially the same location as the clearance mechanism 33. In some embodiments, the end-effectuator 30 with the attachment element 37 snaps into the same place as the end-effectuator 30 without the attachment element 37. In some embodiments, during use of the attachment element 37, the relative movement of the two shafts 32 caused by angulation 30e will not cause movement in the pitch 60 direction and will instead cause distraction (illustrated as moving from an attachment element 37 distance 37a in FIG. 19A to distance 37b in FIG. 19B). Further, although the shaft 32 movement as shown in FIGS. 19A-B would cause distraction, rotation of the actuator 33a in the direction opposite to that represented by 30e would cause compression (i.e., the distance 37b in FIG. 19B would move towards the distance 37a in FIG. 19A).
In view of the embodiments described hereinbefore, some embodiments that can be implemented in accordance with the disclosed subject matter can be better appreciated with reference to the flowcharts in FIGS. 24-33. For purposes of simplicity of explanation, the methods disclosed by the embodiments described herein are presented and described as a series of steps; however, it is to be understood and appreciated that the claimed subject matter is not limited by the order of acts, as some acts may occur in different orders and/or concurrently with other acts from those shown and described herein. For example, the various methods or processes of some embodiments of the invention can alternatively be represented as a series of interrelated states or events, such as in a state diagram. Furthermore, not all illustrated acts may be required to implement a method in accordance with some embodiments of the invention. Further yet, two or more of the disclosed methods or processes can be implemented in combination with each other, to accomplish one or more features or advantages described herein.
It should be further appreciated that the methods disclosed in the various embodiments described throughout the subject specification can be stored on an article of manufacture, or computer-readable medium, to facilitate transporting and transferring such methods to a computing device (e.g., a desktop computer, a mobile computer, a mobile telephone, a blade computer, a programmable logic controller, and the like) for execution, and thus implementation, by a processor of the computing device or for storage in a memory thereof.
In some embodiments, the surgical robot 15 can adjust its position automatically, continuously or substantially continuously, in order to move the end-effectuator 30 to an intended (i.e., planned) position. For example, in some embodiments, the surgical robot 15 can adjust its position automatically, continuously or substantially continuously, based on the current position of the end-effectuator 30 and the surgical target as provided by a current snapshot of tracking markers, LPS, or other tracking data. It should further be appreciated that certain position adjustment strategies can be inefficient. For example, an inefficient strategy for the robot 15 to find a target location is an iterative algorithm: estimate the necessary direction of movement, move toward the target location, assess the mismatch between the current location and the target location (the mismatch referred to as an error), estimate a new direction, and repeat the cycle of estimate-movement-assessment until the target location is reached within a satisfactory error. Conversely, the position adjustment strategies in accordance with some embodiments of the invention are substantially more efficient than iterative strategies. For example, in some embodiments, a surgical robot 15 can make movements and adjust its location by calibrating the relative directions of motion in each axis, permitting computation (via execution of software or firmware with the computer 100), at each frame of tracking data, of a unique set of necessary motor encoder counts that can cause each of the individual axes to move to the correct location. In some embodiments, the Cartesian design of the disclosed robot 15 can permit such a calibration to be made by establishing a coordinate system for the robot 15 and determining key axes of rotation.
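To make the contrast with the iterative strategy concrete, the following Python sketch shows the single-shot computation described above: given calibrated (not necessarily orthogonal) axis direction vectors, the counts for all three linear axes are solved in one step rather than discovered by repeated trial moves. The function and argument names, and the linear counts-per-millimetre model, are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np

def counts_to_target(target_xyz, current_xyz, axis_dirs, counts_per_mm):
    """One-shot computation of the encoder counts needed on each axis.

    target_xyz, current_xyz: 3-vectors in the robot coordinate system (mm).
    axis_dirs: 3x3 matrix whose columns are the calibrated unit vectors along
               which each axis actually moves (they need not be orthogonal).
    counts_per_mm: length-3 array, counts per millimetre of travel per axis.
    """
    delta = np.asarray(target_xyz, float) - np.asarray(current_xyz, float)
    # Solve axis_dirs @ travel_mm = delta for the travel required on each axis.
    travel_mm = np.linalg.solve(axis_dirs, delta)
    return np.rint(travel_mm * np.asarray(counts_per_mm)).astype(int)
```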
As described in greater detail below, in some embodiments, methods for calibrating the relative directions of the robot's15 axes can utilize a sequence of carefully planned movements, each in a single axis. In some embodiments, during these moves,temporary tracking markers720 are attached to the end-effectuator30 to capture the motion of the end-effectuator30. It should be appreciated that the disclosed methods do not require the axes of therobot15 to be exactly or substantially perpendicular, nor do they require the vector along which a particular axis moves (such as the x-axis66) to coincide with the vector about which rotation occurs (such as pitch60, which occurs primarily about the x-axis66). In certain embodiments, the disclosed methods include motion along aspecific robot15 axis that occurs in a straight line. In some embodiments, the disclosed methods for calibrating the relative directions of movement of the robot's15 axes can utilize one or more frames of tracking data captured at the ends of individual moves made in x- (66), y- (68), roll (62), pitch (60), and Z-tube axes64 frommarkers720 temporarily attached to the end-effectuator's30guide tube50. In some embodiments, when moving individual axes, all other axes can be configured at the zero position (for example, the position where the encoder for the axis reads 0 counts). Additionally or alternatively, one or more frames of tracking data with allrobot15 axes at 0 counts (neutral position) may be necessary, and one or more frames of data with thetemporary markers720 rotated to a different position about the longitudinal axis of theguide tube50 may be necessary. In some embodiments, themarker720 positions from these moves can be used to establish a Cartesian coordinate system for therobot15 in which the origin (0,0,0) is through the center of the end-effectuator30 and is at the location along the end-effectuator30 closest to where pitch60 occurs. Additionally or alternatively, in some embodiments, this coordinate system can be rotated to an alignment in which y-axis68 movement of therobot15 can occur exactly or substantially along the coordinate system's y-axis68, while x-axis66 movement of therobot15 occurs substantially perpendicular to the y-axis68, but by construction of the coordinate system, without resulting in any change in the z-axis70 coordinate. In certain embodiments, the steps for establishing the robot's15 coordinate system based at least on the foregoing individual moves can comprise the following: First, from the initial and final positions of the manual rotation of trackingmarkers720 about the long axis of the end-effectuator30, a finite helical axis of motion is calculated, which can be represented by a vector that is centered in and aligned with the end-effectuator30. It should be appreciated that methods for calculating a finite helical axis of motion from two positions of three or more markers are described in the literature, for example, by Spoor and Veldpaus (Spoor, C. W. and F. E. Veldpaus, “Rigid body motion calculated from spatial co-ordinates of markers,” J Biomech 13(4): 391-393 (1980)). In some embodiments, rather than calculating the helical axis, the vector that is centered in and aligned with the end-effectuator30 can be defined, or constructed, by interconnecting two points that are attached to two separate rigid bodies that can be temporarily affixed to the entry and exit of theguide tube50 on the Z-tube axis64. 
In this instance, each of the two rigid bodies can include at least one tracking marker720 (e.g., onetracking marker720, two trackingmarkers720, three trackingmarkers720, more than three trackingmarkers720, etc.), and a calibration can be performed that provides information indicative of the locations on the rigid bodies that are adjacent to the entry and exit of theguide tube50 relative to the tracking markers.
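For concreteness, the following Python sketch shows one standard way to obtain the finite helical axis from two recorded positions of the same set of three or more markers. It uses an SVD-based least-squares rigid transform rather than reproducing Spoor and Veldpaus's exact formulation, so it should be read as an illustrative equivalent; degenerate cases (rotation angles near 0 or 180 degrees) are not handled.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping marker set P onto Q
    (both Nx3 NumPy arrays), via the SVD (Kabsch) method."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                      # proper rotation, det(R) = +1
    return R, cq - R @ cp

def helical_axis(P, Q):
    """Finite helical axis between two poses of one rigid marker set:
    returns a unit direction n, a point c on the axis, and the angle."""
    R, t = rigid_transform(P, Q)
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    n = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    n = n / np.linalg.norm(n)               # rotation axis of R
    # Point on the axis: minimum-norm solution of (I - R) c = t_perp.
    c, *_ = np.linalg.lstsq(np.eye(3) - R, t - np.dot(n, t) * n, rcond=None)
    return n, c, angle
```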
A second helical axis can be calculated from the pitch 60 movements, providing a vector substantially parallel to the x-axis of the robot 15 but also close to perpendicular with the first helical axis calculated. In some embodiments, the closest point on the first helical axis to the second helical axis (or vector aligned with the end-effectuator 30) is calculated using simple geometry and used to define the origin of the robot's coordinate system (0,0,0). A third helical axis is calculated from the two positions of the roll 62 axis. In certain scenarios, it cannot be assumed that the vector about which roll occurs (third helical axis) and the vector along which the y-axis 68 moves are exactly or substantially parallel. Moreover, it cannot be assumed that the vector about which pitch 60 occurs and the vector along which x-axis 66 motion occurs are exactly or substantially parallel. Vectors for x-axis 66 and y-axis 68 motion can be determined from neutral and extended positions of x-axis 66 and y-axis 68 and stored separately. As described herein, in some embodiments, the coordinate system can be realigned to enable y-axis movement of the robot 15 to occur exactly or substantially in the y-axis 68 direction of the coordinate system, and x-axis 66 movement of the robot 15 to occur without any change in the z-coordinate 70. In general, to perform such a transformation of coordinate systems, a series of rotations about a coordinate axis is performed and applied to every point of interest in the current coordinate system. Each point is then considered to be represented in the new coordinate system. In some embodiments, to apply a rotation of a point represented by a 3×1 vector about a particular axis, the vector can be pre-multiplied by a 3×3 rotation matrix. The 3×3 rotation matrix for a rotation of $R_x$ degrees about the x-axis is:

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos R_x & -\sin R_x \\ 0 & \sin R_x & \cos R_x \end{bmatrix}$$

The 3×3 rotation matrix for a rotation of $R_y$ degrees about the y-axis is:

$$\begin{bmatrix} \cos R_y & 0 & \sin R_y \\ 0 & 1 & 0 \\ -\sin R_y & 0 & \cos R_y \end{bmatrix}$$

The 3×3 rotation matrix for a rotation of $R_z$ degrees about the z-axis is:

$$\begin{bmatrix} \cos R_z & -\sin R_z & 0 \\ \sin R_z & \cos R_z & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
In some embodiments, to transform coordinate systems, a series of three rotations can be performed. For example, such rotations can be applied to all vectors and points of interest in the current coordinate system, including the x-movement vector, the y-movement vector, and each of the helical axes, to align the y-movement vector with the new coordinate system's y-axis, and to align the x-movement vector as closely as possible to the new coordinate system's x-axis at z=0. It should be appreciated that more than one possible sequence of three rotations can be performed to achieve substantially the same goal. For example, in some embodiments, a sequence of three rotations can comprise (1) a rotation about x using an $R_x$ value appropriate to rotate the y-movement vector until its z coordinate equals 0, followed by (2) a rotation about z using an $R_z$ value appropriate to rotate the y-movement vector until its x coordinate equals 0, followed by (3) a rotation about y using an $R_y$ value appropriate to rotate the x-movement vector until its z coordinate equals 0. In some embodiments, to find the rotation angle appropriate to achieve a given rotation, the arctangent function can be utilized. For example, in some embodiments, the angle needed to rotate a point or vector (x1, y1, z1) about the z axis to y1=0 is −arctan(y1/x1).
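The three-rotation sequence can be written compactly as follows; this Python sketch uses atan2 in place of the plain arctangent so the correct quadrant is chosen when a coordinate is negative, and the function and argument names are illustrative. The rotation helpers implement the three matrices given above.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def realignment(y_move, x_move):
    """Composite rotation aligning y_move with +y and bringing x_move to z=0.
    Apply the returned matrix to every stored vector and point of interest."""
    R1 = rot_x(np.arctan2(-y_move[2], y_move[1]))  # (1) y-move to z = 0
    y1 = R1 @ y_move
    R2 = rot_z(np.arctan2(y1[0], y1[1]))           # (2) y-move to x = 0, y > 0
    x2 = R2 @ (R1 @ x_move)
    R3 = rot_y(np.arctan2(x2[2], x2[0]))           # (3) x-move to z = 0, x > 0
    return R3 @ R2 @ R1
```

As a quick check, `realignment(y, x) @ y` lies on the positive y-axis for any nonzero `y`, and the third rotation (about y) leaves that alignment undisturbed.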
It should be appreciated that after transformation of the coordinate system, in some embodiments, although the new coordinate system is aligned such that the y-movement axis of thesurgical robot15 is exactly or substantially exactly aligned with the coordinate system's y-axis68, the roll62 rotation movement of therobot15 should not be assumed to occur exactly or substantially exactly about a vector aligned with the coordinate system's y-axis68. Similarly, in some embodiments, the pitch60 movement of thesurgical robot15 should not be assumed to occur exactly or substantially exactly about a vector aligned with the coordinate system's x-axis. In some embodiments, in roll62 and pitch60 rotational movement there can be linear and orientational “offsets” from the helical axis of motion to the nearest coordinate axis. In some embodiments, from the helical axes determined above using tracked markers, such offsets can be calculated and retained (e.g., stored in a computing device's memory) so that for any rotation occurring during operation, the offsets can be applied, rotation can be performed, and then negative offsets can be applied so that positional change occurring with rotation motion accounts for the true center of rotation.
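The offset, rotate, negative-offset pattern just described is equivalent to a rotation about an arbitrary offset axis, which can be written in closed form with Rodrigues' formula. The following sketch is a minimal illustration (names assumed, angles in radians), not the patent's stored-offset bookkeeping.

```python
import numpy as np

def rotate_about_offset_axis(p, axis_point, axis_dir, angle):
    """Rotate point p by `angle` (radians) about the line through axis_point
    along axis_dir: apply the offset, rotate (Rodrigues' formula), then
    apply the negative offset, exactly as the text describes."""
    n = np.asarray(axis_dir, float)
    n = n / np.linalg.norm(n)
    v = np.asarray(p, float) - np.asarray(axis_point, float)  # apply offset
    v = (v * np.cos(angle) + np.cross(n, v) * np.sin(angle)
         + n * np.dot(n, v) * (1.0 - np.cos(angle)))          # rotate about n
    return v + np.asarray(axis_point, float)                  # negative offset
```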
In some embodiments, during tracking, the desired trajectory can first be calculated in the medical image coordinate system, then transformed to the robot 15 coordinate system based at least on known relative locations of active markers. For example, in some embodiments, conventional light-emitting markers and/or conventional reflective markers associated with an optical tracking system 3417 can be used (see, for example, active markers 720 in FIG. 20A). In other embodiments, conventional electromagnetic sensors associated with an electromagnetic tracking system can be used. In some other embodiments, radio-opaque markers (for example, markers 730 shown in FIG. 20A) can be used with a CT imaging system. In some embodiments, radio-opaque markers 730 (spheres formed, at least in part, from metal or other dense material) can be used to provide a marker 730 that can at least partially absorb x-rays to produce a highly contrasted image of the sphere in a CT scan image.
In some embodiments, the necessary counts for the end-effectuator 30 to reach the desired position in the robot's 15 coordinate system can be calculated based on the following example process. First, the necessary counts to reach the desired angular orientation can be calculated. In some embodiments, a series of three rotations can be applied to shift the coordinate system temporarily to a new coordinate system in which the y-axis 68 coincides or substantially coincides with the helical axis of motion for roll 62, the x-axis 66 is largely aligned with the helical axis of motion for pitch 60, and, by definition, the helical axis of motion for pitch 60 has constant z=0. Then, the number of counts necessary to achieve the desired pitch 60 can be determined, keeping track of how this pitch 60 can affect roll 62. In one implementation, to find the necessary counts to achieve the desired pitch, the change in pitch angle 60 can be multiplied by the previously calibrated motor counts per degree for pitch. The change in roll 62 caused by this change in pitch 60 can be calculated from the orientation of the helical axis and the rotation angle (pitch) about the helical axis. Then, the necessary roll 62 to reach the planned trajectory alignment can be calculated, with the benefit that applying roll 62 does not, by definition of the coordinate system, result in any further change in pitch. The coordinate system is then shifted back to the previously described robot 15 coordinate system by the inverse of the three rotations applied above. Then the necessary counts to reach the desired x-axis 66 position can be calculated, also keeping track of how this x-axis 66 position change will affect y-axis 68 position. Then the necessary y-axis 68 counts to reach the desired y-axis position can be readily calculated, with the benefit that changing the y-axis 68 coordinate has no effect on any other axis since the y-axis motion vector is by definition aligned with the robot's y-axis 68. In a scenario in which the Z-tube 50 position is being actively controlled, the orientation of the Z-tube 50 movement vector is adjusted when adjusting roll 62 and pitch 60, and the counts necessary to move it to the desired position along the trajectory vector are calculated from the offset. In some embodiments, after the necessary counts to achieve the desired positions in all axes are calculated as described, these counts can be sent as computer-accessible instructions (e.g., computer-readable and/or computer-executable instructions) to respective controllers for each axis in order to move the axes to the computed positions.
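The pitch-then-roll ordering can be reduced to a few lines once the cross-coupling is calibrated. The sketch below assumes (and this is an assumption, not the patent's stated model) that the roll induced per degree of pitch is an approximately constant calibrated coefficient; all names are illustrative.

```python
def rotation_counts(pitch_now, pitch_goal, roll_now, roll_goal,
                    counts_per_deg_pitch, counts_per_deg_roll, roll_per_pitch):
    """Counts for the pitch-then-roll step in the shifted coordinate system.
    roll_per_pitch is a calibrated cross-coupling coefficient (degrees of
    roll induced per degree of pitch), assumed constant here."""
    d_pitch = pitch_goal - pitch_now
    pitch_counts = d_pitch * counts_per_deg_pitch
    roll_after_pitch = roll_now + d_pitch * roll_per_pitch  # pitch drags roll
    roll_counts = (roll_goal - roll_after_pitch) * counts_per_deg_roll
    return round(pitch_counts), round(roll_counts)
```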
FIG. 24 is a flowchart of a method 2400 for positioning and advancing through soft tissue in accordance with one embodiment of the invention. As shown, in some embodiments, at block 2410, a medical image is accessed (e.g., received, retrieved, or otherwise acquired). As described herein, the medical image can be a 3D anatomical image scan including, but not limited to, a CT scan, a magnetic resonance imaging scan (hereinafter referred to as an "MRI scan"), an X-ray image, or other anatomical scan. It should be appreciated that any 3D anatomical scan may be utilized with the surgical robot 15 and is within the scope of the present invention. In some embodiments, at block 2420, a targeting fixture 690 is calibrated to the medical image. In some embodiments, the calibration can be semi-automated or automated. In some embodiments, at block 2430, data indicative of an intended trajectory associated with the medical image is received. In some embodiments, at block 2440, a robot 15 is substantially maintained on the intended trajectory. In some embodiments, a control platform (for example, platform 3400 shown in FIG. 34) can adjust movement of the robot 15 in order to substantially maintain the intended trajectory.
FIGS. 25-26 are flowcharts of methods for calibrating a targeting fixture 690 to a medical image in accordance with one or more embodiments of the invention. As shown in FIG. 25, in some embodiments, the method 2500 can embody a semi-automated calibration method and can be implemented (e.g., executed) as part of block 2420 in certain scenarios. In some embodiments, at block 2510, data indicative of a medical image having a representation of a plurality of radio-opaque markers (for example, radio-opaque markers 730) is received. In one embodiment, as described herein, such plurality can contain four radio-opaque markers 730. In some embodiments, at block 2520, a geometrical center for each radio-opaque marker 730 is determined in a coordinate system associated with the medical image. In some embodiments, image thresholding can be utilized to define one or more edges of each radio-opaque marker 730 and a geometrical center thereof. Thresholding refers to an image processing technique in which pixel intensity within a 2D region can be monitored. For example, the x, y positions (for instance, expressed in mm) of pixels of an intensity that reaches a predetermined value can be retrieved. Stated differently, the threshold refers to the transition pixel intensity from light to dark. In some embodiments, on 2D slices of the medical image, the radio-opaque marker 730 can appear light and the adjacent space (such as tissue or air) can appear dark. In some embodiments, displaying pixels that satisfy a thresholding criterion at an intensity encountered at the edge of a radio-opaque marker can yield a largely circular trace outlining the marker on the medical image. Since, in some embodiments, markers 730 can be spherical, a method for finding the center of the marker 730 in a 2D view can include, first, restricting the 2D view to a sampling region with the high-intensity image of the sphere toward the center of the region and pixels of lower intensity toward the outer edges of the region. Second, the method can include finding the mean x threshold position (e.g., the maximum x coordinate of pixels satisfying the threshold criterion plus the minimum x coordinate of pixels satisfying the threshold criterion, divided by two), and finding the mean y threshold position using a similar method. In some embodiments, the center of the sphere can be found by determining 2D centers of slices through the same marker 730 in two orthogonal views. For example, in some embodiments, the method can include finding mean x and mean y from an xy slice, then finding mean x and mean z from an xz slice, to get a mean x, y, and z coordinate representing the center of the marker 730. Further, upon or after the mean x, mean y, and mean z are found, new xy and xz slices can be evaluated again and the maximum and minimum x, y, and z threshold values can again be determined to evaluate the dimensions of the thresholded object in each view. It can be appreciated from this method that, in some embodiments, a non-spherical object of high intensity, such as a small process of cortical bone extending away from the side of the spine, may fail to satisfy (1) a condition where there is high intensity near the middle of the region but low intensity all around, since the process may extend out of the region in one or more directions; or (2) a condition where the dimensions in x, y, and z of the centered object match each other (the dimensions of a non-spherical object will differ between views).
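A stripped-down Python version of this center-finding step is shown below. It assumes a NumPy volume indexed as [z, y, x], a seed slice index near the marker, and that the marker is the only bright object in the sampled slices; the region-restriction and re-evaluation passes described above are omitted, so this is a sketch rather than the full procedure.

```python
import numpy as np

def slice_center(img2d, thresh):
    """Midpoint of thresholded pixels: (max + min) / 2 in each direction."""
    rows, cols = np.nonzero(img2d >= thresh)
    return (cols.max() + cols.min()) / 2.0, (rows.max() + rows.min()) / 2.0

def marker_center(volume, thresh, z_seed):
    """Approximate center (x, y, z) of a spherical marker, assuming the
    xy slice at z_seed cuts through it and no other bright object is
    present in the sampled slices."""
    x, y = slice_center(volume[z_seed, :, :], thresh)          # xy slice -> x, y
    x2, z = slice_center(volume[:, int(round(y)), :], thresh)  # xz slice -> x, z
    return (x + x2) / 2.0, y, z
```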
As shown in FIG. 25, in some embodiments, at block 2530, it is ascertained whether one centered sphere is determined for each radio-opaque marker 730 for the fixture being calibrated. In some embodiments, when at least one such sphere is not determined, or identified, the threshold setting is adjusted and flow is directed to block 2510. In some embodiments, at block 2540, each centered sphere is mapped to each radio-opaque marker 730 of the plurality of radio-opaque markers 730. As shown, in some embodiments, block 2540 can represent a mapping action which, in some embodiments, can comprise implementing a sorting process to establish that a specific centered sphere is associated with a specific one of the plurality of radio-opaque markers 730. In some embodiments, a plurality of radio-opaque markers 730 contains four radio-opaque markers 730 (represented, for example, as OP1, OP2, OP3, and OP4). In some embodiments, the sorting process can map each one of four centered markers 730 to one of OP1, OP2, OP3, or OP4. In some embodiments, the sorting process can distinguish a specific marker 730 by measuring inter-marker distances from mean positions of the four unidentified markers 730, and comparing such distances to extant inter-marker distances (for example, those that are pre-measured and retained in memory, such as mass storage device 3404) for each marker 730 on a marker fixture. In some embodiments, because the opaque markers 730 on the fixture 690 can be placed asymmetrically, each marker 730 can be identified from a unique set of inter-marker distances corresponding to such marker 730. For example, in some embodiments where the sum of inter-marker distances of one unknown marker 730 relative to the other three markers 730 measured from the medical image is D, a single physical marker 730 (one of OP1, OP2, OP3, or OP4) can have a matching inter-marker distance sum within a specified tolerance (such as ±1 mm) of D. In some embodiments, at block 2550, coordinates of each centered sphere can be retained (for example, in memory of a computer platform 3400). As described herein, in some embodiments, such coordinates can be utilized in a process for tracking movement of a robot 15.
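The distance-sum matching just described fits in a few lines of Python. The sketch below is illustrative (names and the ±1 mm default are assumptions) and relies on the asymmetric marker placement making each distance sum unique.

```python
import numpy as np

def map_markers(found, known, tol=1.0):
    """Match each detected sphere center to a physical marker OP1..OPn by
    comparing sums of inter-marker distances (all units mm)."""
    def dist_sums(pts):
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        return d.sum(axis=1)          # row sums: total distance to the others
    fs = dist_sums(np.asarray(found, float))
    ks = dist_sums(np.asarray(known, float))
    mapping = {}
    for i, s in enumerate(fs):
        j = int(np.argmin(np.abs(ks - s)))
        if abs(ks[j] - s) > tol:      # no physical marker within tolerance
            raise ValueError(f"unmatched marker: distance sum {s:.2f} mm")
        mapping[i] = j                # detected sphere i is physical marker j
    return mapping
```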
Some embodiments include a method 2600 (shown as a flowchart in FIG. 26) that can embody an automated calibration method and can be implemented (e.g., executed) as part of block 2420 in certain scenarios. In some embodiments, at block 2605, for a Z position, an x-y grid of test area squares is created. In some embodiments, each test area square can be larger than the diameter of a sphere (a radio-opaque marker 730) associated with a targeting fixture 690 comprised of material that, when imaged, appears as opaque. In some embodiments, each test area square can at least partially overlap with at least one adjacent test area square. In one embodiment of the invention, nearly half the surface of a test area square can overlap with the surface of an adjacent test area square. In some embodiments, at block 2610, calibration is initiated at plane Z=0, x-y grid row 0, x-y grid column 0. In some embodiments, at block 2615, borders of a medical image within a first test area square are determined. It should be appreciated that, in some embodiments, a sphere can be rendered as a circular area, but a section of bone represented in the medical image can be asymmetrical. In some embodiments, a thresholding process in accordance with one or more aspects described herein can be implemented to exclude one or more invalid markers 730 by assessing whether the x, y, and z axes boundaries of the object are of substantially equivalent dimensions, consistent with the shape being spherical.
In some embodiments, at block 2620, it is determined whether a maximum (max) border coordinate is less than the maximum coordinate of the test area, a minimum (min) border coordinate is greater than the minimum coordinate of the test area, and the vertical span of features rendered in the image is equal or substantially equal to the horizontal span of such features. As shown in FIG. 26, in some embodiments, in the negative case, flow is directed to block 2645, at which the first test area is moved to the next grid location and next Z plane. Conversely, in case the three foregoing conditions are fulfilled, flow is directed to block 2625, at which the X coordinate and Y coordinate are centered at the center of the current test area. In some embodiments, at block 2630, the Z coordinate is probed by creating a second test area square spanning upwards and downwards in the XZ plane and/or YZ plane to determine one or more borders of an object. In some embodiments, at block 2635, it is determined whether borders of the object observed in the XZ plane are of substantially equivalent relative spacing (vertically and horizontally) to borders in the x-y plane, consistent with the shape of the object being spherical. In some embodiments, when such borders are of different spacing, flow is directed to block 2645. Conversely, when spacing of such borders is substantially equivalent between views, a sphere having a center at the X coordinate, Y coordinate, and Z coordinate is identified at block 2640 and flow is directed to block 2645.
In some embodiments, at block 2650, it is determined whether the last row and column in the x-y grid and the last Z plane have been reached as a result of updating the first test area at block 2645. In some embodiments, in the negative case, flow is directed to block 2615, in which the first area is the updated instance of a prior first area, with the flow reiterating one or more of blocks 2620 through 2645. Conversely, in the affirmative case, flow is directed to block 2655, at which invalid marker(s) 730 can be excluded. In some embodiments, a paring process can be implemented to exclude one or more invalid markers 730. For this paring process, in some embodiments, the known spacings between each of the N radio-opaque markers 730 (with N a natural number) on the targeting fixture 690 and each other radio-opaque marker 730 on the targeting fixture 690 can be compared to the markers 730 that have been found on the medical image. In a scenario in which more than N markers 730 are found on the medical image, any sphere found on the medical image that does not have spacings relative to N−1 other markers 730 within an acceptable tolerance of the known spacings (retained, for example, on a list) can be considered invalid. For example, if a targeting fixture 690 has four radio-opaque markers 730, there are six known spacings, with each marker 730 having a quantifiable spacing relative to three other markers 730: the inter-marker spacings for markers 1-2, 1-3, 1-4, 2-3, 2-4, and 3-4. On the 3D medical image of the targeting fixture 690, in some embodiments, if five potential markers 730 are found on the medical image, their inter-marker spacings can be calculated. In this scenario, there are 10 inter-marker spacings: 1-2, 1-3, 1-4, 1-5, 2-3, 2-4, 2-5, 3-4, 3-5, and 4-5, with each sphere having a quantifiable spacing relative to four other markers 730. Considering each of the five potential markers 730 individually, if any one of such five markers 730 does not have three of its four inter-marker spacings within a very small distance of the spacings on the list of six previously quantified known spacings, it is considered invalid.
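Expressed as code, the paring rule is a simple counting test. The following Python sketch keeps a candidate only if at least N−1 of its spacings to the other candidates match some entry on the known-spacings list within a tolerance; the function name and the 1 mm default tolerance are illustrative.

```python
import numpy as np

def pare_markers(candidates, known_spacings, n_fixture, tol=1.0):
    """Return indices of candidate sphere centers consistent with the fixture.
    A candidate survives only if at least n_fixture - 1 of its spacings to the
    other candidates match an entry in known_spacings within tol (mm)."""
    pts = np.asarray(candidates, float)
    spacings = np.asarray(known_spacings, float)
    keep = []
    for i in range(len(pts)):
        d = np.delete(np.linalg.norm(pts - pts[i], axis=1), i)  # drop self
        matches = sum(np.any(np.abs(spacings - s) <= tol) for s in d)
        if matches >= n_fixture - 1:   # e.g. 3 of the 4 spacings when N = 4
            keep.append(i)
    return keep
```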
In some embodiments, at block 2660, each centered radio-opaque marker 730, identified at block 2640, can be mapped to each radio-opaque marker 730 of a plurality of radio-opaque markers 730. In some embodiments, a sorting process in accordance with one or more aspects described herein can be implemented to map such markers 730 to radio-opaque markers 730. In some embodiments, at block 2665, coordinates of each centered sphere can be retained (e.g., in memory of a computer platform 3400). As described herein, in some embodiments, such coordinates can be utilized in a process for tracking movement of a robot 15. In some embodiments, during tracking, the established (e.g., calibrated) spatial relationship between active markers 720 and radio-opaque markers 730 can be utilized to transform the coordinate system from the coordinate system of the medical image to the coordinate system of the tracking system 3417, or vice versa. In some embodiments, a process for transforming coordinates from the medical image's coordinate system to the tracking system's coordinate system can involve a fixture 690 comprising four radio-opaque markers OP1, OP2, OP3, and OP4 (for example, radio-opaque markers 730) in a rigidly fixed position relative to four active markers AM1, AM2, AM3, and AM4 (for example, active markers 720). In some embodiments, at the time the calibration of the fixture 690 occurred, this positional relationship can be retained in a computer memory (e.g., system memory 3412) for later access in real-time or substantially in real-time, in a set of four arbitrary reference Cartesian coordinate systems that can be readily reachable through transformations at any later frame of data. In some embodiments, each reference coordinate system can utilize an unambiguous positioning of three of the active markers 720. For example, in some embodiments, a reference coordinate system for AM1, AM2, and AM3 can be a coordinate system in which AM1 is positioned at the origin (e.g., the three-dimensional vector (0,0,0)); AM2 is positioned on the x-axis (e.g., x-coordinate AM2x>0, y-coordinate AM2y=0, and z-coordinate AM2z=0); and AM3 is positioned on the x-y plane (e.g., x-coordinate AM3x unrestricted, y-coordinate AM3y>0, and z-coordinate AM3z=0). In some embodiments, a method to generate a transformation to such a coordinate system can comprise (1) translation of AM1, AM2, AM3, OP1, OP2, OP3, and OP4 in a manner such that the AM1 vector position is (0,0,0); (2) rotation about the x-axis by an angle suitable to position AM2 at z=0 (the rotation applied to AM2, AM3, and OP1-OP4); (3) rotation about the z-axis by an angle suitable to position AM2 at y=0 and x>0 (the rotation applied to AM2, AM3, and OP1-OP4); and (4) rotation about the x-axis by an angle suitable to position AM3 at z=0 and y>0 (the rotation applied to AM3 and OP1-OP4). It should be appreciated that, in some embodiments, it is unnecessary to retain these transformations in computer memory; rather, the information retained for later access can be the coordinates of AM1-AM3 and OP1-OP4 in such reference coordinate system. In some embodiments, another such reference coordinate system can transform OP1-OP4 by utilizing AM2, AM3, and AM4. In some embodiments, another such reference coordinate system can transform OP1-OP4 by utilizing AM1, AM3, and AM4. In some further embodiments, another such reference coordinate system can transform OP1-OP4 by utilizing AM1, AM2, and AM4.
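The four-step construction of a reference coordinate system translates almost line for line into code. The following Python sketch (illustrative names; rotation-matrix helpers as in the earlier realignment sketch) takes the AM1-AM3 triad and the OP positions and returns both in the reference frame.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def to_reference_frame(ams, ops):
    """Steps (1)-(4): AM1 to the origin, AM2 onto +x, AM3 into the x-y plane
    with y > 0, carrying the radio-opaque markers along.
    ams: 3x3 array (AM1, AM2, AM3); ops: Mx3 array (OP1..OPm)."""
    pts = np.vstack([ams, ops]).astype(float) - np.asarray(ams[0], float)  # (1)
    R = rot_x(np.arctan2(-pts[1, 2], pts[1, 1]))   # (2) AM2 to z = 0
    pts = pts @ R.T
    R = rot_z(np.arctan2(-pts[1, 1], pts[1, 0]))   # (3) AM2 to y = 0, x > 0
    pts = pts @ R.T
    R = rot_x(np.arctan2(-pts[2, 2], pts[2, 1]))   # (4) AM3 to z = 0, y > 0
    pts = pts @ R.T
    return pts[:3], pts[3:]        # AMs and OPs in the reference coordinate system
```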
In some embodiments, at the time of tracking, during any given frame of data, the coordinates of the active markers AM1-AM4 can be provided by thetracking system3417. In some embodiments, by utilizing markers AM1, AM2, and AM3, transformations suitable to reach the conditions of the reference coordinate system can be applied. In some embodiments, such transformations can position AM1, AM2, and AM3 on the x-y plane in a position in proximity to the position that was earlier stored in computer memory for this reference coordinate system. In some embodiments, for example, to achieve a best fit of the triad ofactive markers720 on their stored location, a least squares algorithm can be utilized to apply an offset and rotation to the triad ofmarkers720. In one implementation, the least squares algorithm can be implemented as described by Sneath (Sneath P. H. A., Trend-surface analysis of transformation grids, J. Zoology 151, 65-122 (1967)). In some embodiments, transformations suitable to reach the reference coordinate system, including the least squares adjustment, can be retained in memory (e.g.,system memory3412 and/or mass storage device3404). In some embodiments, the retained coordinates of OP1-OP4 in such reference coordinate system can be retrieved and the inverse of the retained transformations to reach the reference coordinate system can be applied to such coordinates. It should be appreciated that the new coordinates of OP1-OP4 (the coordinates resulting from application of the inverse of the transformations) are in the coordinate system of thetracking system3417. Similarly, in some embodiments, by utilizing the remaining three triads ofactive markers720, the coordinates of OP1-OP4 can be retrieved.
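In code, the whole round trip (fit the current triad to its stored reference pose, then carry the stored OP1-OP4 back through the inverse) collapses into a single best-fit rigid transform applied in the opposite direction. The patent applies a least squares adjustment per Sneath; the sketch below uses a standard SVD-based (Kabsch) least-squares fit as an illustrative equivalent, with assumed names and NumPy-array inputs.

```python
import numpy as np

def recover_opaque_markers(am_now, am_ref, op_ref):
    """Best-fit rigid transform taking the stored triad (am_ref, 3x3) onto its
    currently tracked positions (am_now, 3x3), then applied to the stored
    radio-opaque marker positions (op_ref, Mx3)."""
    cp, cq = am_ref.mean(axis=0), am_now.mean(axis=0)
    H = (am_ref - cp).T @ (am_now - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                 # proper rotation, det(R) = +1
    t = cq - R @ cp
    return op_ref @ R.T + t            # OP1..OPm in tracking-system coordinates
```

Running this once per triad (AM1-AM2-AM3, AM2-AM3-AM4, and so on) yields the four sets of OP coordinates whose agreement is assessed in the next paragraph.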
In some embodiments, the four sets of OP1-OP4 coordinates in the tracking system's coordinate system that can be calculated from different triads ofactive markers720 are contemplated to have coordinates that are approximately equivalent. In some embodiments, when coordinates are not equivalent, the data set can be analyzed to determine which of theactive markers720 provides non-suitable (or poor) data by assessing how accurately each triad ofactive markers720 at the current frame overlays onto the retained positions ofactive markers720. In some other embodiments, when the coordinates are nearly equivalent, a mean value obtained from the four sets can be utilized for each radio-opaque marker730. In some embodiments, to transform coordinates of other data (such as trajectories from the medical image coordinate system) to the tracking system's coordinate system, the same transformations can be applied to the data. For example, in some embodiments, the tip and tail of a trajectory vector can be transformed to the four reference coordinate systems and then retrieved with triads ofactive markers720 at any frame of data and transformed to the tracking system's coordinate system.
FIG. 27 is a flowchart of a method 2700 for automatically maintaining a surgical robot 15 substantially on a trajectory in accordance with some embodiments of the invention. In some embodiments, at block 2710, data indicative of the position of one or more of the Z-frame 72 or Z-tube 50 are received. In some embodiments, at block 2720, data indicative of each joint in the surgical robot 15, such as encoder counts from each axis motor 160, are accessed. In some embodiments, at block 2730, a current robot 15 position on a planned trajectory is accessed (the position being represented in a camera coordinate system or the coordinate system of another tracking device). In one embodiment, the planned trajectory can be generated by an operator. For example, the operator (e.g., a surgeon) can scroll and rotate through the image slices until the desired anatomy can be viewed on three windows representing three orthogonal planes (typically sagittal, coronal, and axial slices). The operator can then draw a line at the desired slope and location on one window; the line is simultaneously calculated and appears on the other two windows, constrained by the views of the screens and the orientation of the window on which it was drawn.
In another embodiment, a line (e.g., referred to as line t) that is fixed on the image, both in angle and position, represents the desired trajectory; the surgeon has to rotate and scroll the images to align this trajectory to the desired location and orientation on the anatomy. At least one advantage of such an embodiment is that it can provide a more complete, holistic picture of the anatomy in relationship to the desired trajectory, without requiring the operator to erase and start over or nudge the line after it is drawn; this process was therefore adopted. In some embodiments, a planned trajectory can be retained in a memory of a computing device (for example, computing device 3401) that controls the surgical robot 15, or is coupled thereto, for use during a specific procedure. In some embodiments, each planned trajectory can be associated with a descriptor that can be retained in memory with the planned trajectory. As an example, the descriptor can be the level and side of the spine where screw insertion is planned.
In another embodiment, the line t (fixed on the image both in angle and position, representing the desired trajectory) is dictated by the current position of the robot's end-effectuator 30, or by an extrapolation of the end-effectuator guide tube 50 if an instrument 35 were to extend from it along the same vector. In some embodiments, as the robot 15 is driven manually out over the patient 18 by activating motors 160 controlling individual or combined axes 64, 66, 68, 70, the position of this extrapolated line (the robot's end-effectuator 30) is updated on the medical image, based on markers 720 attached to the robot, conventional encoders showing the current position of an axis, or a combination of these registers. In some embodiments, when the desired trajectory is reached, that vector's position in the medical image coordinate system is stored into the computer memory (for example, in memory of a computer platform 3400) so that later, when recalled, the robot 15 will move automatically in the horizontal plane to intersect with this vector. In some embodiments, instead of manually driving the robot 15 by activating motors 160, the robot's axes can be put in a passive state. In some embodiments, in the passive state, the markers 720 continue to collect data on the robot arm 23 position and encoders on each axis 64, 66, 68, 70 continue to provide information regarding the position of the axis; therefore, the position of an extrapolated line can be updated on the medical image as the passive robot 15 is dragged into any orientation and position in the horizontal plane. In some embodiments, when a desired trajectory is reached, the position can be stored into the computer memory. Some embodiments include conventional software control or a conventional switch activation capable of placing the robot 15 into an active state to immediately rigidly hold the position or trajectory, and to begin compensating for movement of the patient 18.
In some further embodiments, the computing device that implements themethod2700 or that is coupled to thesurgical robot15 can render one or more planned trajectories. Such information can permit confirming that the trajectories planned are within the range of the robot's15 reach by calculating thenecessary motor160 encoder counts to reach each desired trajectory, and assessing if the counts are within the range of possible counts of each axis.
In some embodiments, information including whether each trajectory is in range, and how close each trajectory is to being out of range can be provided to an agent (such as a surgeon or other user, or equipment). For example, in some embodiments, a display means29 (such as a display device3411) can render (i.e. display) the limits of axis counts or linear or angular positions of one or more axes and the position on each axis where each targeted trajectory is currently located.
In another embodiment, the display device 3411 (for example, a display 150) can render a view of the horizontal work field as a rectangle, with the robot's x-axis 66 movement and y-axis 68 movement ranges defining the horizontal and vertical dimensions of the rectangle, respectively. In some embodiments, marks (for example, circles) on the rectangle can represent the position of each planned trajectory at the current physical location of the robot 15 relative to the patient 18. In another embodiment, a 3D Cartesian volume can represent the x-axis 66, y-axis 68, and z-axis 70 movement ranges of the robot 15. In some embodiments, line segments or cylinders rendered in the volume can represent the position of each planned trajectory at the current location of the robot 15 relative to the patient 18. Repositioning of the robot 15 or the patient 18 is performed at this time to a location that is within range of the desired trajectories. In other embodiments, the surgeon can adjust the Z-frame 72 position, which can affect the x-axis 66 range and the y-axis 68 range of trajectories that the robot 15 is capable of reaching (for example, converging trajectories require less x-axis 66 or y-axis 68 reach the lower the robot 15 is positioned on the z-axis 70). During this time, simultaneously, a screen shows whether tracking markers on the patient 18 and robot 15 are in view of the detection device of the tracking system (for example, optical tracking system 3417 shown in FIG. 34). Repositioning of the cameras, if necessary, is also performed at this time for good visibility or optimal detection of tracking sensors.
In some embodiments, atblock2740, orientation of an end-effectuator30 in arobot15 coordinate system is calculated. In some embodiments, atblock2750, position of the end-effectuator30 in therobot15 coordinate system is calculated. In some embodiments, at block2760, a line t defining the planned trajectory in therobot15 coordinate system is determined. In some embodiments, atblock2770,robot15 position is locked on the planned trajectory at a current Z level. In some embodiments, atblock2780, information indicative of quality of the trajectory lock can be supplied. In some embodiments, actual coordinate(s) of thesurgical robot15 can be rendered in conjunction with respective coordinate(s) of the planned trajectory. In some embodiments, aural indicia can be provided based on such quality. For instance, in some embodiments, a high-frequency and/or high-amplitude noise can embody aural indicia suitable to represent a low-quality lock. In some alternative embodiments, a brief melody may be repeatedly played, such as the sound associated with successful recognition of a USB memory device by a computer, to indicate successful lock on the planned trajectory. In other embodiments, a buzz or other warning noise may be played if therobot15 is unable to reach its target due to the axis being mechanically overpowered, or if the trackingmarkers720 are undetectable by cameras8200 or other marker position sensors.
In some embodiments, at block 2790, it is determined whether the surgical procedure is finished and, in the affirmative case, the flow terminates. In other embodiments, the flow is directed to block 2710. In some embodiments, the method 2700 can be implemented (i.e., executed) as part of block 2440 in certain scenarios. It should be appreciated that, in some embodiments, the method 2700 also can be implemented for any robot 15 having at least one feature that enables movement of the robot 15.
FIG. 28A is a flowchart of a method 2800a for calculating the position and/or orientation of an end-effectuator 30 in a robot 15 according to at least one embodiment of the invention. In some embodiments, the position and/or orientation can be calculated based at least on the monitored position of a tracking array 690 mounted on the robot's x-axis 66 and monitored counts of encoders on the y-axis 68, roll 62, pitch 60, and Z-tube axis 64 actuators. In some embodiments, position and/or orientation are calculated in a robot 15 coordinate system. In some embodiments, the method 2800a can embody one or more of blocks 2740 or 2750. In some embodiments, at block 2805a, the current position (i.e., 3D position) of the x-axis 66 mounted robot 15 tracking markers is accessed. In some embodiments, at block 2810a, the current position of the tracking array 690 mounted to the robot 15 is transformed to the neutral position. This is a position that was previously stored and represents the position of the tracking array 690 when the robot 15 was at zero counts on each axis between the tracker 690 and the robot 15 base (x-axis 66 and z-axis 70 in this configuration). In some embodiments, the set of transformations (T1) to transform from the current position to the neutral position can be retained in computer memory (for example, the system memory 3412 and/or mass storage device 3404). In some embodiments, a tip and tail of a line segment representing the vector in line with the end-effectuator 30 can be computed based at least on the process described herein. In some embodiments, this process can establish a robot 15 coordinate system and calibrate the relative orientations of the axes of movement of the robot 15 where tracking markers 720 can be attached temporarily to the end-effectuator 30. In some embodiments, the vector's position in space can be determined by finding the finite helical axis of motion of markers 720 manually rotated to two positions around the guide tube 50. In some embodiments, the vector's position in space can be determined by connecting a point located at the entry of the guide tube 50 (identified by a temporarily mounted rigid body 690 with tracking markers 720) to a point located at the exit of the guide tube 50 (identified by a second temporarily mounted rigid body 690 with tracking markers 720).
In some embodiments, the tip of the line segment can be obtained as the point along the vector that is closest to the vector representing the helical axis of motion during pitch. In some embodiments, the tail of the line segment can be set an arbitrary distance (for example about 100 mm) up the vector aligned with theguide tube50 and/or first helical axis. In some embodiments, the Cartesian coordinates of such tip and tail positions can be transformed to a coordinate system described herein in which the y-axis68 movement can coincide with the y-axis68 of the coordinate system, and the x-axis66 can be aligned such that x-axis66 movement can cause the greatest change in direction in the x-axis66, moderate change in the y-axis68, and no change in the z-axis70. In some embodiments, these coordinates can be retained in a computer memory (for example system memory3412) for later retrieval. In some embodiments, atblock2815a, tip and tail coordinates for neutral are accessed (i.e., retrieved). In some embodiments, at block2820a, tip and tail are translated along Z-tube50 neutral unit vector by monitored Z-tube50 counts. In some embodiments, at block2825a, an instantaneous axis of rotation (“IAR”) is accessed. The IAR is the same as the helical axis of motion ignoring the element of translation along the helical axis for pitch60 for neutral. As described earlier, in some embodiments, the vectors for this IAR were previously stored in computer memory at the time the coordinate system of therobot15 was calibrated. In some embodiments, at block2830a, tip coordinate, tail coordinate, and IAR vector direction and location coordinates are transformed (for example, iteratively transformed) to a new coordinate system in which IAR is aligned with X axis. In some embodiments, data indicative of such transformations (T2) can be stored. In some embodiments, at block2835a, tip coordinate and tail coordinate are rotated about X axis by pitch60 angle. In some embodiments, at block2840a, tip coordinate and tail coordinate are transformed back by inverse of T2 to the previous coordinate system. In some embodiments, atblock2845a, previously stored vectors that represent the IAR for roll62 are accessed. In some embodiments, at block2850a, tip coordinate, tail coordinate, IAR coordinate are transformed (for example, iteratively transformed) to a new coordinate system in which IAR is aligned with y-axis68. In some embodiments, data indicative of such transformation(s) (T3) can be retained in memory. In some embodiments, at block2855a, tip coordinate and tail coordinate are rotated about y-axis68 by roll62 angle. In some embodiments, at block2860a, tip coordinate and tail coordinate are transformed back by inverse of T3 to the previous coordinate system. In some embodiments, at block2865a, tip coordinate and tail coordinate are translated along a y-axis68 unit vector (e.g., a vector aligned in this coordinate system with the y-axis68) by monitored counts. In some embodiments, at block2870a, tip coordinate and tail coordinate are transformed back by inverse of T1 to the current coordinate system monitored by thetracking system3417.
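Method 2800a's chain of transforms can be compressed into a short forward-kinematics routine once the calibration constants are in hand. The sketch below replaces each "transform so the IAR aligns with a coordinate axis, rotate, transform back" step with an equivalent direct rotation about the offset IAR (Rodrigues' formula); T1 is assumed stored as a 4×4 homogeneous matrix, and all names are illustrative rather than the patent's.

```python
import numpy as np

def end_effectuator_pose(tip0, tail0, T1, ztube_dir, ztube_mm,
                         pitch_axis, pitch_angle, roll_axis, roll_angle,
                         y_dir, y_mm):
    """Neutral tip/tail -> Z-tube translation -> pitch about its IAR ->
    roll about its IAR -> y translation -> back to tracked coordinates.
    Inputs are NumPy 3-vectors except T1 (4x4 homogeneous transform,
    current -> neutral) and the *_axis arguments, each a (point, direction)
    pair defining the stored IAR; angles are in radians."""
    def rot_about(p, axis_point, axis_dir, ang):
        # Rodrigues rotation about an offset axis: offset, rotate, un-offset.
        n = axis_dir / np.linalg.norm(axis_dir)
        v = p - axis_point
        return (v * np.cos(ang) + np.cross(n, v) * np.sin(ang)
                + n * np.dot(n, v) * (1.0 - np.cos(ang))) + axis_point

    tip, tail = tip0 + ztube_dir * ztube_mm, tail0 + ztube_dir * ztube_mm
    tip, tail = [rot_about(p, *pitch_axis, pitch_angle) for p in (tip, tail)]
    tip, tail = [rot_about(p, *roll_axis, roll_angle) for p in (tip, tail)]
    tip, tail = tip + y_dir * y_mm, tail + y_dir * y_mm
    back = np.linalg.inv(T1)                     # neutral -> current
    def to_current(p):
        return (back @ np.append(p, 1.0))[:3]
    return to_current(tip), to_current(tail)
```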
FIG. 28B is a flowchart of a method 2800b for calculating the position and/or orientation of an end-effectuator 30 in a robot 15 in accordance with one embodiment of the invention. In some embodiments, the position and/or the orientation can be calculated based at least on the monitored position of a tracking array 690 mounted on the robot's 15 roll 62 axis and monitored counts of encoders on the pitch 60 and Z-tube 50 actuators. In some embodiments, position and/or orientation can be calculated in a robot 15 coordinate system. In accordance with some embodiments of the invention, the method 2800b can embody one or more of blocks 2740 or 2750. In some embodiments, at block 2805b, the current position of an array of one or more robot 15 tracking markers 720 is accessed. In some embodiments, the current position is a 3D position and the array of robot 15 tracking markers 720 can be mounted to the roll 62 axis of the robot 15. In some embodiments, at block 2810b, the current position of the array of robot 15 tracking markers 720 mounted to the robot 15 is transformed to the neutral position. In some embodiments, the neutral position can be a position that was previously stored and can represent the position of the robot 15 tracking array 690 when the robot 15 had zero counts on each axis between the tracker 690 and the robot base 25 (e.g., z-axis 70, x-axis 66, y-axis 68, and roll 62 axis in this configuration). In some embodiments, data indicative of a set of transformations (T1) to go from the current position to the neutral position can be stored in a computer memory (for example, mass storage device 3404 or system memory 3412).
In some embodiments, in order to establish arobot15 coordinate system and calibrate the relative orientations of the axes of movement of therobot15, a tip and tail of a line segment representing the vector in line with the end-effectuator30 with temporarily attached trackingmarkers720 is located. In some embodiments, the vector's position in space can be determined by finding the finite helical axis of motion of markers manually rotated to two positions around theguide tube50. In other embodiments, the vector's position in space can be determined by connecting a point located at the entry of the guide tube50 (identified by a temporarily mountedrigid body690 with tracking markers720) to a point located at the exit of the guide tube50 (identified by a second temporarily mountedrigid body690 with tracking markers720).
In some embodiments, the tip of the line segment can be found as the point along the vector that is closest to the vector representing the helical axis of motion during pitch. In some embodiments, the tail of the line segment can be set an arbitrary distance (for example, nearly 100 mm) up the vector aligned with the guide tube/first helical axis. In some embodiments, the Cartesian coordinates of these tip and tail positions can be transformed to a coordinate system described herein in which the y-axis68 movement substantially coincides with the y-axis68 of the coordinate system, and the x-axis66 movement is aligned in a manner that, in some embodiments, x-axis66 movement causes the greatest change in direction in the x-axis66, slight change in y-axis68, and no change in the z-axis70. It should be appreciated that such coordinates can be retained in memory (for example system memory3412) for later retrieval. In some embodiments, at block2815b, tip and tail coordinates for the neutral position are accessed (i.e., retrieved or otherwise obtained). In some embodiments, at block2820b, tip and tail are translated along Z-tube50 neutral unit vector by monitored Z-tube50 counts. In some embodiments, atblock2825b, IAR is accessed. In one implementation, the vectors for this IAR may be available in a computer memory, for example, such vectors may be retained in the computer memory at the time the coordinate system of therobot15 is calibrated in accordance with one or more embodiments described herein. In some embodiments, atblock2830b, tip coordinate, tail coordinate, and IAR vector direction and location coordinates are transformed to a new coordinate system in which IAR is aligned with x-axis66. In some embodiments, data indicative of the applied transformations (T2) can be retained in a computer memory. In some embodiments, atblock2835b, tip coordinate and tail coordinate are rotated about x-axis66 by pitch60 angle. In some embodiments, at block2840b, tip coordinate and tail coordinate are transformed back by applying the inverse of T2 to the previous coordinate system. In some embodiments, atblock2870b, tip coordinate and tail coordinate are transformed back by applying the inverse of T1 to the current coordinate system monitored by thetracking system3417.
FIG. 29 is a flowchart of a method 2900 for determining a line indicative of a trajectory in a robot 15 coordinate system in accordance with one embodiment of the invention. In some embodiments, the trajectory can be a planned trajectory associated with a surgical procedure. In some embodiments, at block 2910, for a set of current active marker 720 positions on a targeting fixture 690, respective opaque marker 730 positions are accessed from a rigid body source (such as a fixture 690). In some embodiments, at block 2920, an opaque marker 730 position is transformed from a representation in an image coordinate system to a representation in a current active marker 720 coordinate system. In some embodiments, at block 2930, a planned trajectory is transformed from a representation in the image coordinate system to a representation in the current active marker 720 coordinate system. In some embodiments, at block 2940, a set of current marker 720 positions on a robot 15 is transformed to a representation in a robot 15 coordinate system. In some embodiments, at block 2950, the planned trajectory is transformed from a representation in the current active marker 720 coordinate system to the robot 15 coordinate system.
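If the transforms established at blocks 2920-2950 are held as 4×4 homogeneous matrices, the whole of method 2900 reduces to composing them and pushing the trajectory's tip and tail through the product. The minimal sketch below assumes that representation; the argument names are illustrative.

```python
import numpy as np

def trajectory_to_robot(tip_img, tail_img, T_img_to_marker, T_marker_to_robot):
    """Carry a planned trajectory (tip and tail in medical-image coordinates)
    into the robot coordinate system by composing the two 4x4 homogeneous
    transforms corresponding to blocks 2920-2950."""
    T = T_marker_to_robot @ T_img_to_marker
    def h(p):
        return (T @ np.append(np.asarray(p, float), 1.0))[:3]
    return h(tip_img), h(tail_img)
```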
FIG. 30 is a flowchart of a method 3000 for adjusting a robot 15 position to lock on a trajectory in accordance with one embodiment of the invention. As illustrated, in some embodiments, the trajectory can be locked at a current Z plane, or level above the surgical field 17. In some embodiments, at block 3005, it is determined whether the roll 62 of an end-effectuator 30 matches the roll of the trajectory (represented by a line t). In the negative case, in some embodiments, an instruction to move the roll 62 axis is transmitted at block 3010 and flow is directed to block 3015. Conversely, in the affirmative case, in some embodiments, flow is directed to block 3015, where it is determined whether the pitch 60 of the end-effectuator 30 has matched the pitch of the trajectory. In the negative case, an instruction to move the pitch 60 axis is transmitted at block 3020 and flow is directed to block 3025. In the affirmative case, in some embodiments, flow is directed to block 3025, where it is determined whether the x-axis 66 coordinates of points on the vector of the end-effectuator 30 intercept the x-axis 66 coordinates of the desired trajectory vector. In the negative case, in some embodiments, an instruction to move the x-axis 66 can be transmitted and flow is directed to block 3035. In the affirmative case, in some embodiments, flow is directed to block 3035, where it is determined whether the y-axis 68 coordinates of points on the vector of the end-effectuator 30 intercept the y-axis 68 coordinates of the desired trajectory vector. In the negative case, in some embodiments, an instruction to move the y-axis 68 can be transmitted at block 3040 and flow is directed to block 3045. In the affirmative case, in some embodiments, flow is directed to block 3045, in which it is determined whether the Z-tube 50 is being adjusted. In some embodiments, an end-user can configure information (i.e., data or metadata) indicative of the Z-tube 50 being adjusted to control it to a desired position, for example. In the negative case, in some embodiments, flow is terminated. In the affirmative case, in some embodiments, flow is directed to block 3050, where it is determined whether the Z-tube 50 is positioned at a predetermined distance from the anatomy. In the affirmative case, in some embodiments, flow terminates and the Z-tube 50 is located at a desired position with respect to a target location in the anatomy (bone, biopsy site, etc.). In the negative case, in some embodiments, an instruction to move the Z-tube axis 64 is transmitted at block 3055 and the flow is directed to block 3050. It should be noted that the subject method 3000, in some embodiments but not all embodiments, may require that movement in each of the indicated axes (x-axis 66, y-axis 68, Z-tube axis 64, roll 62, and pitch 60) occurs without affecting the axes earlier in the method flow. For example, in some embodiments, the y-axis 68 movement at block 3040 should not cause a change in the x-axis 66 coordinate, which was already checked at block 3025. In some embodiments, the method 3000 can be implemented iteratively in order to reach a desired final position in instances where the axes do not move completely independently. In certain embodiments, the method 3000 can account for all axis positions nearly simultaneously and can determine the exact amount of movement necessary in each axis, and thus can be more efficient.
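One pass of method 3000 can be written as straight-line code against a hypothetical axis-controller interface; none of the method or attribute names below come from the patent, and in practice the pass would be repeated each tracking frame, which also covers the iterative case where the axes are not fully independent.

```python
def lock_on_trajectory(robot, traj, z_standoff=None, tol=0.1):
    """One pass of method 3000 against a hypothetical controller API;
    tol is interpreted in each axis's native units (degrees or mm)."""
    if abs(traj.roll - robot.roll()) > tol:                  # blocks 3005/3010
        robot.move_roll(traj.roll - robot.roll())
    if abs(traj.pitch - robot.pitch()) > tol:                # blocks 3015/3020
        robot.move_pitch(traj.pitch - robot.pitch())
    if abs(traj.x_intercept - robot.x_intercept()) > tol:    # block 3025
        robot.move_x(traj.x_intercept - robot.x_intercept())
    if abs(traj.y_intercept - robot.y_intercept()) > tol:    # blocks 3035/3040
        robot.move_y(traj.y_intercept - robot.y_intercept())
    if z_standoff is not None:                               # blocks 3045-3055
        while abs(robot.z_tube_to_anatomy() - z_standoff) > tol:
            # Close the remaining gap; move sign follows the controller's
            # convention for the Z-tube axis.
            robot.move_z_tube(robot.z_tube_to_anatomy() - z_standoff)
```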
FIG. 31 is a flowchart of a method 3100 for positioning an end-effectuator 30 in space in accordance with one embodiment of the invention. In some embodiments, the positioning can comprise positioning the end-effectuator 30 in a first plane (for example, the x-y plane or horizontal plane) and moving the end-effectuator 30 along a direction substantially normal to the first plane. FIGS. 32-33 are flowcharts of methods for driving an end-effectuator 30 to a procedure location in accordance with one embodiment of the invention. As an example, in some embodiments, the procedure location can be a position at the surface of a bone into which a conventional screw or other piece of hardware is to be inserted. In some embodiments, the end-effectuator 30 can be fitted with a guide tube 50 or conventional dilator. In some embodiments, in scenarios in which a Z-tube 50 of a surgical robot 15 comprising the end-effectuator 30 is to be locked and a Z-frame 72 is to be advanced, the method 3200 can be implemented (i.e., executed). In applications where conventional screws are to be driven into bone, the surgeon may want to move the end-effectuator 30 tip, fitted with a guide tube 50 or a conventional dilator, all the way down to the bone. It should be appreciated that, in some embodiments, since the first lateral movement occurs above the level where the patient 18 is lying, the methods depicted in FIGS. 31-33 can mitigate the likelihood that the robot 15 randomly collides with a patient 18. In some embodiments, the method can also utilize the robot's Cartesian architecture and the ease with which a coordinated movement down the infinite trajectory vector can be made. That is, in some embodiments, to move down this vector, the roll 62 and pitch 60 axes need no adjustment, while the x-axis 66, y-axis 68, and Z-frame 72 axes are moved at a fixed rate. In certain embodiments, for an articular robot 15 to make such a move, the multiple angular axes would have to be synchronized nonlinearly, with all axes simultaneously moved at varying rates.
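The "fixed rate on three axes" observation is easy to make concrete: the per-axis rates are simply the components of the unit trajectory vector scaled by the desired feed rate. The sketch below is illustrative (assumed names and units), not the patent's controller code.

```python
import numpy as np

def descent_rates(trajectory_dir, feed_mm_s):
    """Per-axis rates for a coordinated move down a locked trajectory.
    Roll and pitch stay fixed; the x-axis, y-axis, and Z-frame run at
    constant rates proportional to the unit trajectory vector."""
    v = np.asarray(trajectory_dir, float)
    v = v / np.linalg.norm(v)
    vx, vy, vz = v * feed_mm_s
    return {"x_axis_mm_s": vx, "y_axis_mm_s": vy, "z_frame_mm_s": vz}
```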
FIG. 34 illustrates a block diagram of a computer platform 3400 having a computing device 3401 that enables various features of the invention and performance of the various methods disclosed herein in accordance with some embodiments of the invention. In some embodiments, the computing device 3401 can control operation of a surgical robot 15 and an optical tracking system 3417 in accordance with aspects described herein. In some embodiments, such control can comprise calibration of relative systems of coordinates, generation of planned trajectories, monitoring of the position of various units of the surgical robot 15 and/or units functionally coupled thereto, implementation of safety protocols, and the like. For example, in some embodiments, the computing device 3401 can embody a programmable controller that can control operation of a surgical robot 15 as described herein. It should be appreciated that, in accordance with some embodiments of the invention, the operating environment 3400 is only an example of an operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment architecture. In some embodiments of the invention, the operating environment 3400 should not be interpreted as having any dependency or requirement relating to any one functional element or combination of functional elements (e.g., units, components, adapters, or the like).
The various embodiments of the invention can be operational with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that can be suitable for use with the systems and methods of the invention comprise personal computers, server computers, laptop devices or handheld devices, and multiprocessor systems. Additional examples comprise mobile devices, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that comprise any of the above systems or devices, and the like.
In some embodiments, the processing effected in the disclosed systems and methods can be performed by software components. In some embodiments, the disclosed systems and methods can be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers, such as computing device 3401, or other computing devices. Generally, program modules comprise computer code, routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The disclosed methods can also be practiced in grid-based and distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote computer storage media, including memory storage devices.
Further, one skilled in the art will appreciate that the systems and methods disclosed herein can be implemented via a general-purpose computing device in the form of the computing device 3401. In some embodiments, the components of the computing device 3401 can comprise, but are not limited to, one or more processors 3403, or processing units 3403, a system memory 3412, and a system bus 3413 that couples various system components, including the processor 3403, to the system memory 3412. In some embodiments, in the case of multiple processing units 3403, the system can utilize parallel computing.
In general, a processor 3403 or a processing unit 3403 refers to any computing processing unit or processing device comprising, but not limited to, single-core processors; single processors with software multithread execution capability; multi-core processors; multi-core processors with software multithread execution capability; multi-core processors with hardware multithread technology; parallel platforms; and parallel platforms with distributed shared memory. Additionally or alternatively, a processor 3403 or processing unit 3403 can refer to an integrated circuit, an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic controller (PLC), a complex programmable logic device (CPLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. Processors or processing units referred to herein can exploit nano-scale architectures, such as molecular and quantum-dot based transistors, switches, and gates, in order to optimize space usage or enhance performance of the computing devices that can implement the various aspects of the subject invention. In some embodiments, a processor 3403 or processing unit 3403 can also be implemented as a combination of computing processing units.
The system bus 3413 represents one or more of several possible types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can comprise an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, an Accelerated Graphics Port (AGP) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, a Personal Computer Memory Card International Association (PCMCIA) bus, a Universal Serial Bus (USB), and the like. The bus 3413, and all buses specified in this specification and the annexed drawings, can also be implemented over a wired or wireless network connection, and each of the subsystems, including the processor 3403, a mass storage device 3404, an operating system 3405, robotic guidance software 3406, robotic guidance data storage 3407, a network adapter 3408, system memory 3412, an input/output interface 3410, a display adapter 3409, a display device 3411, and a human machine interface 3402, can be contained within one or more remote computing devices 3414a,b at physically separate locations, functionally coupled (e.g., communicatively coupled) through buses of this form, in effect implementing a fully distributed system.
In some embodiments, robotic guidance software 3406 can configure the computing device 3401, or a processor thereof, to perform the automated control of the position of the local robot 3416 (for example, surgical robot 15) in accordance with aspects of the invention. Such control can be enabled, at least in part, by a tracking system 3417. In some embodiments, when the computing device 3401 embodies the computer 100 functionally coupled to the surgical robot 15, the robotic guidance software 3406 can configure such computer 100 to perform the functionality described in the subject invention. In some embodiments, the robotic guidance software 3406 can be retained in a memory as a group of computer-accessible instructions (for instance, computer-readable instructions, computer-executable instructions, or computer-readable computer-executable instructions). In some embodiments, the group of computer-accessible instructions can encode the methods of the invention (such as the methods illustrated in FIGS. 24-33 in accordance with some embodiments of the invention). In some embodiments, the group of computer-accessible instructions can encode various formalisms (e.g., image segmentation) for computer vision tracking. Some embodiments include robotic guidance software 3406 that can include a compiled instance of such computer-accessible instructions, a linked instance of such computer-accessible instructions, a compiled and linked instance of such computer-executable instructions, or an otherwise executable instance of the group of computer-accessible instructions.
Some embodiments include robotic guidance data storage 3407 that can comprise various types of data that can permit implementation (e.g., compilation, linking, execution, and combinations thereof) of the robotic guidance software 3406. In some embodiments, the robotic guidance data storage 3407 can comprise data associated with intraoperative imaging, automated adjustment of the position of the local robot 3416 and/or remote robot 3422, or the like. In some embodiments, the data retained in the robotic guidance data storage 3407 can be formatted according to any industry-standard image data format. As illustrated, in some embodiments, a remote tracking system 3424 can enable, at least in part, control of the remote robot 3422. In some embodiments, the information can comprise tracking information, trajectory information, surgical procedure information, safety protocols, and so forth.
In some embodiments of the invention, the computing device 3401 typically comprises a variety of computer-readable media. The readable media can be any available media that is accessible by the computing device 3401 and comprises, for example and not meant to be limiting, both volatile and non-volatile media, removable and non-removable media. In some embodiments, the system memory 3412 comprises computer-readable media in the form of volatile memory, such as random access memory (RAM), and/or non-volatile memory, such as read only memory (ROM). In some embodiments, the system memory 3412 typically contains data (such as a group of tokens employed for code buffers) and/or program modules, such as operating system 3405 and robotic guidance software 3406, that are immediately accessible to, and/or are presently operated on by, the processing unit 3403. In some embodiments, operating system 3405 can comprise operating systems such as the Windows operating system, Unix, Linux, Symbian, Android, the Apple iOS operating system, Chromium, and substantially any operating system for wireless computing devices or tethered computing devices. Apple® is a trademark of Apple Computer, Inc., registered in the United States and other countries. iOS® is a registered trademark of Cisco and used under license by Apple Inc. Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Android® and Chrome® are registered trademarks of Google Inc. Symbian® is a registered trademark of Symbian Ltd. Linux® is a registered trademark of Linus Torvalds. UNIX® is a registered trademark of The Open Group.
In some embodiments, the computing device 3401 can comprise other removable/non-removable, volatile/non-volatile computer storage media. As illustrated, in some embodiments, the computing device 3401 comprises a mass storage device 3404 which can provide non-volatile storage of computer code (e.g., computer-executable instructions), computer-readable instructions, data structures, program modules, and other data for the computing device 3401. For instance, in some embodiments, a mass storage device 3404 can be a hard disk, a removable magnetic disk, a removable optical disk, magnetic cassettes or other magnetic storage devices, flash memory cards, CD-ROM, digital versatile disks (DVD) or other optical storage, random access memories (RAM), read only memories (ROM), electrically erasable programmable read-only memory (EEPROM), and the like.
In some embodiments, optionally, any number of program modules can be stored on the mass storage device 3404, including, by way of example, an operating system 3405 and tracking software 3406. In some embodiments, each of the operating system 3405 and tracking software 3406 (or some combination thereof) can comprise elements of the programming and the tracking software 3406. In some embodiments, data and code (for example, computer-executable instructions, patient-specific trajectories, and patient 18 anatomical data) can be retained as part of tracking software 3406 and stored on the mass storage device 3404. In some embodiments, tracking software 3406, and related data and code, can be stored in any of one or more databases known in the art. Examples of such databases comprise DB2®, Microsoft® Access, Microsoft® SQL Server, Oracle®, MySQL, PostgreSQL, and the like. Further examples include membase databases and flat file databases. The databases can be centralized or distributed across multiple systems.
DB2® is a registered trademark of IBM in the United States. Microsoft®, Microsoft® Access®, and Microsoft® SQL Server™ are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Oracle® is a registered trademark of Oracle Corporation and/or its affiliates. MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries. PostgreSQL® and the PostgreSQL® logo are trademarks or registered trademarks of The PostgreSQL Global Development Group, in the U.S. and other countries.
In some embodiments, an agent (for example, a surgeon or other user, or equipment) can enter commands and information into the computing device 3401 via an input device (not shown). Examples of such input devices can comprise, but are not limited to, a camera (or other detection device for non-optical tracking markers), a keyboard, a pointing device (for example, a mouse), a microphone, a joystick, a scanner (for example, a barcode scanner), reader devices such as radiofrequency identification (RFID) readers or magnetic stripe readers, gesture-based input devices such as tactile input devices (for example, touch screens, gloves and other body coverings or wearable devices), speech recognition devices, or natural interfaces, and the like. In some embodiments, these and other input devices can be connected to the processing unit 3403 via a human machine interface 3402 that is coupled to the system bus 3413. In some other embodiments, they can be connected by other interface and bus structures, such as a parallel port, a game port, an IEEE 1394 port (also known as a FireWire port), a serial port, or a universal serial bus (USB).
In some further embodiments, a display device 3411 can also be functionally coupled to the system bus 3413 via an interface, such as a display adapter 3409. In some embodiments, the computing device 3401 can have more than one display adapter 3409, and the computing device 3401 can have more than one display device 3411. For example, in some embodiments, a display device 3411 can be a monitor, a liquid crystal display, or a projector. Further, in addition to the display device 3411, some embodiments can include other output peripheral devices, such as speakers (not shown) and a printer (not shown), capable of being connected to the computing device 3401 via the input/output interface 3410. In some embodiments, the input/output interface 3410 can be coupled to a pointing device, either tethered to, or wirelessly coupled to, the computing device 3401. In some embodiments, any step and/or result of the methods can be output in any form to an output device. In some embodiments, the output can be any form of visual representation, including, but not limited to, textual, graphical, animation, audio, tactile, and the like.
In certain embodiments, one or more cameras can be contained in, or functionally coupled to, the tracking system 3417, which is functionally coupled to the system bus 3413 via an input/output interface of the one or more input/output interfaces 3410. Such functional coupling can permit the one or more cameras to be coupled to other functional elements of the computing device 3401. In one embodiment, the input/output interface, at least a portion of the system bus 3413, and the system memory 3412 can embody a frame grabber unit that can permit receiving imaging data acquired by at least one of the one or more cameras. In some embodiments, the frame grabber can be an analog frame grabber, a digital frame grabber, or a combination thereof. In some embodiments, where the frame grabber is an analog frame grabber, the processor 3403 can provide analog-to-digital conversion functionality and decoder functionality to enable the frame grabber to operate with medical imaging data. Further, in some embodiments, the input/output interface can include circuitry to collect the analog signal received from at least one camera of the one or more cameras. In some embodiments, in response to execution by the processor 3403, the tracking software 3406 can operate the frame grabber to receive imaging data in accordance with various aspects described herein.
Some embodiments include a computing device 3401 that can operate in a networked environment (for example, an industrial environment) using logical connections to one or more remote computing devices 3414a,b, a remote robot 3422, and a tracking system 3424. By way of example, in some embodiments, a remote computing device can be a personal computer, a portable computer, a mobile telephone, a server, a router, a network computer, a peer device, or another common network node, and so on. In particular, in some embodiments, an agent (for example, a surgeon or other user, or equipment) can point to other tracked structures, including anatomy of a patient 18, using a remote computing device 3414 such as a hand-held probe that is capable of being tracked and sterilized. In some embodiments, logical connections between the computing device 3401 and a remote computing device 3414a,b can be made via a local area network (LAN) and a general wide area network (WAN). In some embodiments, the network connections can be implemented through a network adapter 3408. In some embodiments, the network adapter 3408 can be implemented in both wired and wireless environments. Some embodiments include networking environments that can be conventional and commonplace in offices, enterprise-wide computer networks, and intranets. In some embodiments, the networking environments generally can be embodied in wire-line networks or wireless networks (for example, cellular networks, such as third generation ("3G") and fourth generation ("4G") cellular networks, and facility-based networks, such as femtocell, picocell, and Wi-Fi networks). In some embodiments, a group of one or more networks 3415 can provide such networking environments. In some embodiments of the invention, the one or more networks can comprise a LAN deployed in an industrial environment comprising the system 1 described herein.
As an illustration, in some embodiments, application programs and other executable program components, such as the operating system 3405, are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computing device 3401 and are executed by the data processor(s) of the computer 100. Some embodiments include an implementation of tracking software 3406 that can be stored on or transmitted across some form of computer-readable media. Any of the disclosed methods can be performed by computer-readable instructions embodied on computer-readable media. Computer-readable media can be any available media that can be accessed by a computer. By way of example and not meant to be limiting, computer-readable media can comprise "computer storage media," or "computer-readable storage media," and "communications media." "Computer storage media" comprise volatile and non-volatile, removable and non-removable media implemented in any methods or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. In some embodiments of the invention, computer storage media comprise, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
As described herein, some embodiments include the computing device 3401 that can control operation of local robots 3416 and/or remote robots 3422. In embodiments in which the local robot 3416 or the remote robot 3422 are surgical robots 15, the computing device 3401 can execute the robotic guidance software 3406 to control such robots 3416, 3422, 15. In some embodiments, the robotic guidance software 3406, in response to execution, can utilize trajectories (such as tip and tail coordinates) that can be planned and/or configured remotely or locally. In an additional or alternative aspect, in response to execution, the robotic guidance software 3406 can implement one or more of the methods described herein in a local robot's computer or a remote robot's computer to cause movement of the remote robot 15 or the local robot 15 according to one or more trajectories.
In some embodiments, the computing device 3401 can enable pre-operative planning of the surgical procedure. In some embodiments, the computing device 3401 can permit spatial positioning and orientation of a surgical tool (for example, instrument 35) during intraoperative procedures. In some further embodiments, the computing device 3401 can enable open procedures. In some other embodiments, the computing device 3401 can enable percutaneous procedures. In certain embodiments, the computing device 3401 and the robotic guidance software 3406 can embody a 3D tracking system 3417 to simultaneously monitor the positions of the device and the anatomy of the patient 18. In some embodiments, the 3D tracking system 3417 can be configured to cast the patient's anatomy and the end-effectuator 30 into a common coordinate system.
In some embodiments, the computing device 3401 can access (i.e., load) image data from a conventional static storage device. In some embodiments, the computing device 3401 can permit a 3D volumetric representation of patient 18 anatomy to be loaded into memory (for example, system memory 3412) and displayed (for example, via display device 3411). In some embodiments, the computing device 3401, in response to execution of the robotic guidance software 3406, can enable navigation through the 3D volumetric representation of the patient's anatomy.
In some embodiments, the computing device 3401 can operate with a conventional power source, as required to offer the device for sale in a specified country. A conventional power cable that supplies power can be of sufficient length to reach conventional hospital power outlets. In some embodiments, in the event of a power loss, the computing device 3401 can hold the end-effectuator 30 in its current position unless an agent (for example, a surgeon or other user, or equipment) manually moves the end-effectuator 30.
In some embodiments, the computing device 3401 can monitor system physical condition data. In some embodiments, the computing device 3401 can report each element of the physical condition data to an operator (for example, a surgeon) and indicate any out-of-range value. In some embodiments, the computing device 3401 can enable entry and storage of manufacturing calibration values for end-effectuator 30 positioning using, for example, the input/output interface 3410. In some embodiments, the computing device 3401 can enable access to the manufacturing calibration values by an agent (for example, a surgeon or other user, or equipment) authenticated to an appropriate access level. In some embodiments, the data can be retained in the robotic guidance data storage 3407, or can be accessed via network(s) 3415 when the data is retained in a remote computing device 3414a.
In some embodiments, the computing device 3401 can render (using, for example, display device 3411) a technical screen with a subset of the end-effectuator 30 positioning calibration and system health data. In some embodiments, the information is accessible only to an agent (for example, a surgeon or other user, or equipment) authenticated to an appropriate level.
In some embodiments, the computing device 3401 can enable field calibration of end-effectuator 30 positioning only by an agent (for example, a surgeon or other user, or equipment) authenticated to an appropriate access level. In some embodiments, the computing device 3401 can convey the status of the local robot 3416, remote robot 3422, and/or other device being locked in position using a visual or aural alert.
In some further embodiments, the computing device 3401 can include an emergency stop control that, upon activation, disables power to the device's motors 160 but not to the processor 3403. In some embodiments, the emergency stop control can be accessible by the operator of the computing device 3401. In some embodiments, the computing device 3401 can monitor the emergency stop status and indicate to the operator that the emergency stop has been activated.
In some other embodiments, the computing device 3401 can be operated in a mode that permits manual positioning of the end-effectuator 30. In some embodiments, the computing device 3401 can boot directly to an application representing the robotic guidance software 3406. In some embodiments, the computing device 3401 can perform a system check prior to each use. In scenarios in which the system check fails, the computing device 3401 can notify an operator.
In some embodiments, the computing device 3401 can generate an indicator for reporting system status. Some embodiments include the computing device 3401 that can minimize or mitigate delays in processing and, in the event of a delay in processing, notify an agent (for example, a surgeon or other user, or equipment). For example, in some embodiments, a delay may occur while a system scan is being performed to assess system status, and consequently the computing device 3401 can schedule (for example, generate a process queue for) system scans to occur at low-usage times. In some embodiments, a system clock of the computing device 3401 can be read before and after key processes to assess the length of time required to complete computation of a process. In some embodiments, the actual time to complete the process can be compared to the expected time. In some embodiments, if a discrepancy is found to be beyond an acceptable tolerance, the agent can be notified, and/or concurrently running non-essential computational tasks can be terminated. In one embodiment, a conventional system clock (not shown) can be part of the processor 3403.
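The clock-based delay check described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the tolerance value, expected duration, and the notify/terminate helpers are assumptions introduced for the example, not part of any disclosed implementation.

```python
# Hypothetical sketch of reading the system clock before and after a key
# process and reacting to an excessive discrepancy.

import time

TOLERANCE_S = 0.5  # placeholder acceptable discrepancy, in seconds

def run_with_timing_check(process, expected_s, notify_agent, stop_nonessential):
    """Run a key process, compare actual vs. expected duration, and react."""
    start = time.monotonic()            # read the clock before the process
    result = process()
    elapsed = time.monotonic() - start  # and again after it completes
    if elapsed - expected_s > TOLERANCE_S:
        notify_agent(f"process exceeded expected time by {elapsed - expected_s:.2f}s")
        stop_nonessential()             # free capacity for essential tasks
    return result
```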
In some embodiments, the computing device 3401 can generate a display that follows a standardized workflow. In some embodiments, the computing device 3401 can render text, or ensure that text is rendered, in a font of sufficient size and contrast to be readable from an appropriate distance. In some embodiments, the computing device 3401 can enable an operator to locate the intended position of a surgical implant or tool.
In some further embodiments, the computing device 3401 can determine the relative position of the end-effectuator 30 with respect to the anatomy of the patient 18. For example, to at least such an end, the computing device 3401 can collect data from the optical tracking system 3417 and can analyze the data to generate data indicative of such relative position. In some embodiments, the computing device 3401 can indicate the end-effectuator 30 position and orientation. In some embodiments, the computing device 3401 can enable continuous control of the end-effectuator 30 position relative to the anatomy of the patient 18.
In some embodiments, the computing device 3401 can enable an agent (for example, a surgeon or other user, or equipment) to mark the intended position of a surgical implant or tool (for example, instrument 35). In some embodiments, the computing device 3401 can allow the position and orientation of a conventional hand-held probe (or an instrument 35) to be displayed overlaid on images of the patient's anatomy.
In some embodiments, the computing device 3401 can enable an agent (for example, a surgeon or other user, or equipment) to position conventional surgical screws. In some embodiments, the computing device 3401 can enable selection of the length and diameter of surgical screws by the agent. In yet another aspect, the computing device 3401 can ensure that the relative position, size, and scale of screws are maintained on the display 3411 when in graphical representation. In some embodiments, the computing device 3401 can verify screw path plans against an operation envelope and reject screw path plans outside this envelope. In still another aspect, the computing device 3401 can enable hiding of a graphical screw representation.
In some embodiments, the computing device 3401 can enable a function that allows the current view to be stored. In some embodiments, the computing device 3401 can enable a view reset function that sets the current view back to a previously stored view. In some embodiments, the computing device 3401 can enable an authentication-based tiered access system. In some embodiments, the computing device 3401 can log and store system activity. In some embodiments, the computing device 3401 can enable access to the system activity log by an agent authorized to an appropriate level. In some embodiments, the computing device 3401 can enable entry and storage of patient 18 data.
In some embodiments, the computing device 3401 can enable the appropriate disposition of patient 18 data and/or procedure data. For example, in a scenario in which such data are being collected for research, the computing device 3401 can implement de-identification of the data in order to meet patient 18 privacy requirements. In some embodiments, the de-identification can be implemented in response to execution of computer-executable instruction(s) retained in memory 3412 or any other memory accessible to the computing device 3401. In some embodiments, the de-identification can be performed automatically before the patient 18 data and/or procedure data are sent to a repository or any other data storage (including mass storage device 3404, for example). In some embodiments, indicia (e.g., a dialog box) can be rendered (for example, at display device 3411) to prompt an agent (e.g., machine or human) to permanently delete patient 18 data and/or procedure data at the end of a procedure.
FIG. 11 shows a flow chart diagram 1100 for general operation of the robot 15 according to some embodiments. In some embodiments, at step 210, the local positioning system (herein referred to as "LPS") establishes a spatial coordinate measuring system for the room 10 where the invasive procedure is to occur; in other words, the LPS is calibrated. In some embodiments, in order to calibrate the LPS, a conventional mechanical fixture that includes a plurality of attached calibrating transmitters 120 is placed within the room 10 where positioning sensors 12 are located. In some embodiments of the invention, at least three calibrating transmitters 120 are required, but any number of calibrating transmitters 120 above three is within the scope of the invention. Also, in some embodiments, at least three positioning sensors 12 are required, but any number of positioning sensors 12 above three is also within the scope of the invention, and the accuracy of the system is increased with the addition of more positioning sensors.
In some embodiments, the distance between each of the calibrating transmitters 120 relative to each other is measured prior to calibration step 210. Each calibrating transmitter 120 transmits RF signals on a different frequency so that the positioning sensors 12 can determine which transmitter 120 emitted a particular RF signal. In some embodiments, the signal of each of these transmitters 120 is received by the positioning sensors 12. In some embodiments, since the distance between each of the calibrating transmitters 120 is known, and the sensors 12 can identify the signals from each of the calibrating transmitters 120 based on the known frequency, the positioning sensors 12 are able to calculate, using time-of-flight calculation, the spatial distance of each of the positioning sensors 12 relative to each other. The system 1 is now calibrated. As a result, in some embodiments, the positioning sensors 12 can now determine the spatial position of any new RF transmitter 120 introduced into the room 10 relative to the positioning sensors 12.
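Once the sensors are calibrated, locating a new transmitter from time-of-flight measurements reduces to intersecting range spheres around the sensors. The sketch below is a hypothetical illustration of that step using a standard linearized least-squares solve; the array inputs and function name are assumptions, not the disclosed implementation.

```python
# Hypothetical sketch: locating a new RF transmitter from time-of-flight
# (ToF) readings at already-calibrated positioning sensors.

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def locate_transmitter(sensor_positions, flight_times):
    """Estimate a transmitter position from per-sensor ToF measurements.

    sensor_positions: (N, 3) array of calibrated sensor coordinates, N >= 4
    flight_times:     length-N array of one-way flight times in seconds
    """
    p = np.asarray(sensor_positions, dtype=float)
    d = C * np.asarray(flight_times, dtype=float)  # ToF -> range per sensor
    # Subtract the first sphere equation from the rest to linearize:
    # 2 (p_i - p_0) . x = |p_i|^2 - |p_0|^2 - d_i^2 + d_0^2
    A = 2.0 * (p[1:] - p[0])
    b = (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
         - d[1:] ** 2 + d[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x  # estimated (x, y, z) of the transmitter
```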
In some embodiments, at step 220, a 3D anatomical image scan, such as a CT scan, is taken of the anatomical target. Any 3D anatomical image scan may be used with the surgical robot 15 and is within the scope of the present invention. In some embodiments, at step 230, the positions of the RF transmitters 120 tracking the anatomical target are read by the positioning sensors 110. These transmitters 120 identify the initial position of the anatomical target and any changes in position during the procedure. In some embodiments, if any RF transmitters 120 must transmit through a medium that changes the RF signal characteristics, then the system will compensate for these changes when determining the position of the transmitter 120.
In some embodiments, at step 240, the positions of the transmitters 120 on the anatomy are calibrated relative to the LPS coordinate system. In other words, the LPS provides a reference system, and the location of the anatomical target is calculated relative to the LPS coordinates. In some embodiments, to calibrate the anatomy relative to the LPS, the positions of the transmitters 120 affixed to the anatomical target are recorded at the same time as the positions of temporary transmitters 120 placed on precisely known anatomical landmarks that are also identified on the anatomical image. This calculation is performed by a computer 100.
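One common way to compute such a calibration between two coordinate systems, given paired landmark positions, is a rigid-body (Kabsch-style) fit. The sketch below is a hypothetical illustration under that assumption; the numpy inputs and names are introduced for the example and are not the disclosed algorithm.

```python
# Hypothetical sketch: rigid registration between image-space landmark
# coordinates and their LPS-space measurements (a Kabsch-style fit).

import numpy as np

def register_rigid(image_pts, lps_pts):
    """Find rotation R and translation t with lps ~= R @ image + t.

    image_pts, lps_pts: (N, 3) arrays of paired landmark coordinates, N >= 3.
    """
    a = np.asarray(image_pts, dtype=float)
    b = np.asarray(lps_pts, dtype=float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)      # centroids of each point set
    H = (a - ca).T @ (b - cb)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# Once (R, t) are known, any point tracked on the anatomy can be mapped
# between the two systems: lps_point = R @ image_point + t.
```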
In some embodiments, at step 250, the positions of the RF transmitters 120 that track the anatomical target are read. Since the locations of the transmitters 120 on the anatomical target have already been calibrated, the system can easily determine if there has been any change in position of the anatomical target. Some embodiments include a step 260, where the positions of the transmitters 120 on the surgical instrument 35 are read. The transmitters 120 may be located on the surgical instrument 35 itself, and/or there may be transmitters 120 attached to various points of the surgical robot 15.
In some embodiments of the invention, the surgical robot 15 can also include a plurality of attached conventional position encoders that help determine the position of the surgical instrument 35. In some embodiments, the position encoders can be devices used to generate an electronic signal that indicates a position or movement relative to a reference position. In some other embodiments, a position signal can be generated using conventional magnetic sensors, conventional capacitive sensors, or conventional optical sensors.
In some embodiments, position data read from the position encoders may be used to determine the position of the surgical instrument 35 used in the procedure. In some embodiments, these data may be redundant with the position data calculated from the RF transmitters 120 located on the surgical instrument 35. Therefore, in some embodiments, position data from the position encoders may be used to double-check the position being read from the LPS.
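A double-check of this kind can be as simple as comparing the two independent position estimates against a threshold. The following is a minimal, hypothetical sketch; the tolerance value and names are assumptions for illustration only.

```python
# Hypothetical sketch: cross-checking the encoder-derived instrument
# position against the LPS-derived position.

import numpy as np

DISAGREEMENT_TOL_MM = 2.0  # placeholder cross-check threshold

def cross_check(lps_position_mm, encoder_position_mm):
    """Return True if the two independent position estimates agree."""
    lps = np.asarray(lps_position_mm, dtype=float)
    enc = np.asarray(encoder_position_mm, dtype=float)
    return np.linalg.norm(lps - enc) <= DISAGREEMENT_TOL_MM
```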
In some embodiments, at step 270, the coordinates of the positions of the transmitters 120 on the surgical instrument 35, and/or the positions read from the position encoders, are calibrated relative to the anatomical coordinate system. In other words, in some embodiments, the position data of the surgical instrument 35 is synchronized into the same coordinate system as the patient's anatomy. In some embodiments, this calculation is performed automatically by the computer 100, since the positions of the transmitters 120 on the anatomical target and the positions of the transmitters 120 on the surgical instrument 35 are in the same coordinate system, and the positions of the transmitters 120 on the anatomical target are already calibrated relative to the anatomy.
In some embodiments, at step 280, the computer 100 superimposes a representation of the location calculated in step 270 of the surgical device on the 3D anatomical image of the patient 18 taken in step 220. In some embodiments, the superimposed image can be displayed to an agent. In some embodiments, at step 290, the computer 100 sends the appropriate signals to the motors 160 to drive the surgical robot 15. In some embodiments, if the agent has preprogrammed a trajectory, then the robot 15 is driven so that the surgical instrument 35 follows the preprogrammed trajectory if there is no further input from the agent. In some embodiments, if there is agent input, then the computer 100 drives the robot 15 in response to the agent input.
In some embodiments, at step 295, the computer 100 determines whether the anatomy needs to be recalibrated. In some embodiments, the agent may choose to recalibrate the anatomy, in which case the computer 100 responds to the agent input. Alternatively, in some embodiments, the computer 100 may be programmed to recalibrate the anatomy in response to certain events. For instance, in some embodiments, the computer 100 may be programmed to recalibrate the anatomy if the RF transmitters 120 on the anatomical target indicate that the location of the anatomical target has shifted relative to the RF transmitters 120 (i.e., this spatial relationship should be fixed). In some embodiments, one indicator that the anatomical target location has shifted relative to the transmitters 120 is that the computer 100 calculates that the surgical instrument 35 appears to be inside bone when no drilling or penetration is actually occurring.
In some embodiments, if the anatomy needs to be recalibrated, then the process beginning at step 230 is repeated. In some embodiments, if the anatomy does not need to be recalibrated, then the process beginning at step 250 is repeated. In some embodiments, at any time during the procedure, certain fault conditions may cause the computer 100 to interrupt the program and respond accordingly. For instance, in some embodiments, if the signal from the RF transmitters 120 cannot be read, then the computer 100 may be programmed to stop the movement of the robot 15, or to remove the surgical instrument 35 from the patient 18. Another example of a fault condition is if the robot 15 encounters a resistance above a preprogrammed tolerance level.
FIG. 12 shows a flow chart diagram 1200 for a closed screw/needle insertion procedure according to an embodiment of the invention. In a closed pedicle screw insertion procedure, in some embodiments, the robot 15 holds a guide tube 50 adjacent to the patient 18 in the correct angular orientation at the point where a conventional pedicle screw is to be inserted through the tissue and into the bone of the patient 18.
In some embodiments, the distance between each of the calibrating transmitters 120 relative to each other is measured prior to calibration step 300. In some embodiments, each calibrating transmitter 120 transmits RF signals on a different frequency so that the positioning sensors 12 can determine which transmitter 120 emitted a particular RF signal. In some embodiments, the signal of each of these transmitters 120 is received by the positioning sensors 12. Since the distance between each of the calibrating transmitters 120 is known, and the sensors 12 can identify the signals from each of the calibrating transmitters 120 based on the known frequency, in some embodiments, the positioning sensors 12 are able to calculate, using time-of-flight calculation, the spatial distance of each of the positioning sensors 12 relative to each other. The system 1 is now calibrated. As a result, in some embodiments, the positioning sensors 12 can now determine the spatial position of any new RF transmitter 120 introduced into the room 10 relative to the positioning sensors 12.
In some embodiments, at step 310, a 3D anatomical image scan, such as a CT scan, is taken of the anatomical target. Any 3D anatomical image scan may be used with the surgical robot 15 and is within the scope of the present invention. In some embodiments, at step 320, the operator selects a desired trajectory and insertion point of the surgical instrument 35 on the anatomical image captured at step 310. In some embodiments, the desired trajectory and insertion point are programmed into the computer 100 so that the robot 15 can drive a guide tube 50 automatically to follow the trajectory. In some embodiments, at step 330, the positions of the RF transmitters 120 tracking the anatomical target are read by the positioning sensors 110. In some embodiments, these transmitters 120 identify the initial position of the anatomical target and any changes in position during the procedure.
In some embodiments, if any RF transmitters 120 must transmit through a medium that changes the RF signal characteristics, the system will compensate for these changes when determining the position of the transmitter 120. In some embodiments, at step 340, the positions of the transmitters 120 on the anatomy are calibrated relative to the LPS coordinate system. In other words, the LPS provides a reference system, and the location of the anatomical target is calculated relative to the LPS coordinates. In some embodiments, to calibrate the anatomy relative to the LPS, the positions of the transmitters 120 affixed to the anatomical target are recorded at the same time as the positions of temporary transmitters 120 on precisely known anatomical landmarks that are also identified on the anatomical image. This calculation is performed by a computer.
In some embodiments, at step 350, the positions of the RF transmitters 120 that track the anatomical target are read. Since the locations of the transmitters 120 on the anatomical target have already been calibrated, in some embodiments, the system can easily determine if there has been any change in position of the anatomical target. In some embodiments, at step 360, the positions of the transmitters 120 on the surgical instrument 35 are read. In some embodiments, the transmitters 120 may be located on the surgical instrument 35, and/or attached to various points of the surgical robot 15.
In some embodiments, at step 370, the coordinates of the positions of the transmitters 120 on the surgical instrument 35, and/or the positions read from the position encoders, are calibrated relative to the anatomical coordinate system. In other words, the position data of the surgical instrument 35 is synchronized into the same coordinate system as the anatomy. This calculation is performed automatically by the computer 100, since the positions of the transmitters 120 on the anatomical target and the positions of the transmitters 120 on the surgical instrument 35 are in the same coordinate system, and the positions of the transmitters 120 on the anatomical target are already calibrated relative to the anatomy.
In some embodiments, at step 380, the computer 100 superimposes a representation of the location calculated in step 370 of the surgical device on the 3D anatomical image of the patient 18 taken in step 310. The superimposed image can be displayed to the user. In some embodiments, at step 390, the computer 100 determines whether the guide tube 50 is in the correct orientation and position to follow the trajectory planned at step 320. If it is not, then step 393 is reached. If it is in the correct orientation and position to follow the trajectory, then step 395 is reached.
In some embodiments, at step 393, the computer 100 determines what adjustments it needs to make in order to make the guide tube 50 follow the preplanned trajectory. The computer 100 sends the appropriate signals to drive the motors 160 in order to correct the movement of the guide tube 50. In some embodiments, at step 395, the computer 100 determines whether the procedure has been completed. If the procedure has not been completed, then the process beginning at step 350 is repeated.
In some embodiments, at any time during the procedure, certain fault conditions may cause the computer 100 to interrupt the program and respond accordingly. For instance, if the signal from the RF transmitters 120 cannot be read, then the computer 100 may be programmed to stop the movement of the robot 15 or lift the guide tube 50 away from the patient 18. Another example of a fault condition is if the robot 15 encounters a resistance above a preprogrammed tolerance level. Another example of a fault condition is if the RF transmitters 120 on the anatomical target shift so that the actual and calculated positions of the anatomy no longer match. One indicator that the anatomical target location has shifted relative to the transmitters 120 is that the computer 100 calculates that the surgical instrument 35 appears to be inside bone when no drilling or penetration is actually occurring.
In some embodiments, the proper response to each condition may be programmed into the system, or a specific response may be user-initiated. For example, the computer 100 may determine that, in response to an anatomy shift, the anatomy would have to be recalibrated, and the process beginning at step 330 should be repeated. Alternatively, a fault condition may require the flowchart to repeat from step 300. Another alternative is that the user may decide that recalibration from step 330 is desired, and initiate that step directly.
Referring now to FIG. 13, a flow chart diagram 1300 for a safe zone surgical procedure performed using the system described herein is shown in accordance with some embodiments of the invention. In a safe zone surgical procedure, there is a defined safe zone around the surgical area within which the surgical device must stay. The physician manually controls the surgical device that is attached to the end-effectuator 30 of the surgical robot 15. If the physician moves the surgical device outside of the safe zone, then the surgical robot 15 stiffens the arm 23 so that the physician cannot move the instrument 35 in any direction that would move the surgical instrument 35 outside the safe zone.
In some embodiments, the distance between each of the calibrating transmitters 120 relative to each other is measured prior to calibration step 400. Each calibrating transmitter 120 transmits RF signals on a different frequency so that the positioning sensors 12 can determine which transmitter 120 emitted a particular RF signal. The signal of each of these transmitters 120 is received by the positioning sensors 12. Since the distance between each of the calibrating transmitters 120 is known, and the sensors 12 can identify the signals from each of the calibrating transmitters 120 based on the known frequency, the positioning sensors 12 are able to calculate, using time-of-flight calculation, the spatial distance of each of the positioning sensors 12 relative to each other. The system 1 is now calibrated. As a result, the positioning sensors 12 can now determine the spatial position of any new RF transmitter 120 introduced into the room 10 relative to the positioning sensors 12.
In some embodiments, at step 410, a 3D anatomical image scan, such as a CT scan, is taken of the anatomical target. Any 3D anatomical image scan may be used with the surgical robot 15 and is within the scope of the present invention. In some embodiments, at step 420, the operator inputs a desired safe zone on the anatomical image taken in step 410. In an embodiment of the invention, the operator uses an input to the computer 100 to draw a safe zone on a CT scan of the patient 18 taken in step 410. In some embodiments, at step 430, the positions of the RF transmitters 120 tracking the anatomical target are read by the positioning sensors. These transmitters 120 identify the initial position of the anatomical target and any changes in position during the procedure. In some embodiments, if any RF transmitters 120 must transmit through a medium that changes the RF signal characteristics, then the system will compensate for these changes when determining the position of the transmitter 120.
In some embodiments, at step 440, the positions of the transmitters 120 on the anatomy are calibrated relative to the LPS coordinate system. In other words, the LPS provides a reference system, and the location of the anatomical target is calculated relative to the LPS coordinates. To calibrate the anatomy relative to the LPS, the positions of the transmitters 120 affixed to the anatomical target are recorded at the same time as the positions of temporary transmitters 120 on precisely known landmarks on the anatomy that can also be identified on the anatomical image. This calculation is performed by a computer 100. In some embodiments, at step 450, the positions of the RF transmitters 120 that track the anatomical target are read. Since the locations of the transmitters 120 on the anatomical target have already been calibrated, the system can easily determine if there has been any change in position of the anatomical target.
In some embodiments, at step 460, the positions of the transmitters 120 on the surgical instrument 35 are read. The transmitters 120 may be located on the surgical instrument 35 itself, and/or there may be transmitters 120 attached to various points of the surgical robot 15. In some embodiments, at step 470, the coordinates of the positions of the transmitters 120 on the surgical instrument 35, and/or the positions read from the position encoders, are calibrated relative to the anatomical coordinate system. In other words, the position data of the surgical instrument 35 is synchronized into the same coordinate system as the anatomy. This calculation is performed automatically by the computer 100, since the positions of the transmitters 120 on the anatomical target and the positions of the transmitters 120 on the surgical instrument 35 are in the same coordinate system, and the positions of the transmitters 120 on the anatomical target are already calibrated relative to the anatomy.
In some embodiments, at step 480, the computer 100 superimposes a representation of the location calculated in step 470 of the surgical device on the 3D anatomical image of the patient 18 taken in step 410. In some embodiments, the superimposed image can be displayed to the user. In some embodiments, at step 490, the computer 100 determines whether the surgical device attached to the end-effectuator 30 of the surgical robot 15 is within a specified range of the safe zone boundary (for example, within 1 millimeter of reaching the safe zone boundary). In some embodiments, if the end-effectuator 30 is almost at the boundary, then step 493 is reached. In some embodiments, if it is well within the safe zone boundary, then step 495 is reached.
In some embodiments, at step 493, the computer 100 stiffens the arm of the surgical robot 15 in any direction that would allow the user to move the surgical device closer to the safe zone boundary. In some embodiments, at step 495, the computer 100 determines whether the anatomy needs to be recalibrated. In some embodiments, the user may choose to recalibrate the anatomy, in which case the computer 100 responds to the user input. Alternatively, in some embodiments, the computer 100 may be programmed to recalibrate the anatomy in response to certain events. For instance, in some embodiments, the computer 100 may be programmed to recalibrate the anatomy if the RF transmitters 120 on the anatomical target indicate that the location of the anatomical target has shifted relative to the RF transmitters 120 (i.e., this spatial relationship should be fixed). In some embodiments, one indicator that the anatomical target location has shifted relative to the transmitters 120 is that the computer 100 calculates that the surgical instrument 35 appears to be inside bone when no drilling or penetration is actually occurring.
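The boundary test at step 490 can be pictured as a signed-distance check against the safe zone. The following hypothetical sketch uses a spherical safe zone purely for simplicity; the zone shape, threshold, and names are assumptions introduced for the example, not the disclosed implementation.

```python
# Hypothetical sketch of the safe-zone proximity test at step 490, using a
# spherical safe zone for simplicity.

import numpy as np

BOUNDARY_MARGIN_MM = 1.0  # e.g., within 1 mm of the boundary (step 490)

def near_safe_zone_boundary(tool_tip_mm, zone_center_mm, zone_radius_mm):
    """Return True if the tool tip is within the margin of the zone boundary."""
    dist_from_center = np.linalg.norm(
        np.asarray(tool_tip_mm, float) - np.asarray(zone_center_mm, float))
    distance_to_boundary = zone_radius_mm - dist_from_center
    # Negative values mean the tip has already left the safe zone.
    return distance_to_boundary <= BOUNDARY_MARGIN_MM

# If True, the arm would be stiffened (step 493) in directions that reduce
# distance_to_boundary further; otherwise flow continues to step 495.
```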
In some embodiments, if the anatomy needs to be recalibrated, then the process beginning at step 430 is repeated. In some embodiments, if the anatomy does not need to be recalibrated, then the process beginning at step 450 is repeated. In some embodiments, at any time during the procedure, certain fault conditions may cause the computer 100 to interrupt the program and respond accordingly. For instance, in some embodiments, if the signal from the RF transmitters 120 cannot be read, then the computer 100 may be programmed to stop the movement of the robot 15 or remove the surgical instrument 35 from the patient 18. Another example of a fault condition is if the robot 15 encounters a resistance above a preprogrammed tolerance level.
Referring now to FIG. 14, a flow chart diagram 1400 for a conventional flexible catheter or wire insertion procedure according to an embodiment of the invention is shown. Catheters are used in a variety of medical procedures to deliver medicaments to a specific site in a patient's body. Often, delivery to a specific location is needed so that a targeted diseased area can then be treated. Sometimes, instead of inserting the catheter directly, a flexible wire is first inserted, over which the flexible catheter can be slid.
In some embodiments, the distance between each of the calibrating transmitters 120 relative to each other is measured prior to calibration step 500. In some embodiments, each calibrating transmitter 120 transmits RF signals on a different frequency so that the positioning sensors 12, 110 can determine which transmitter 120 emitted a particular RF signal. In some embodiments, the signal from each of these transmitters 120 is received by the positioning sensors 12, 110. Since the distance between each of the calibrating transmitters 120 is known, and the sensors can identify the signals from each of the calibrating transmitters 120 based on the known frequency, in some embodiments, the positioning sensors 12, 110 are able to calculate, using time-of-flight calculation, the spatial distance of each of the positioning sensors 12, 110 relative to each other. The system is now calibrated. As a result, in some embodiments, the positioning sensors 12, 110 can now determine the spatial position of any new RF transmitter 120 introduced into the room 10 relative to the positioning sensors 12, 110.
In some embodiments, at step 510, reference needles that contain the RF transmitters 120 are inserted into the body. The purpose of these needles is to track movement of key regions of soft tissue that will deform during the procedure or with movement of the patient 18.
In some embodiments, at step 520, a 3D anatomical image scan (such as a CT scan) is taken of the anatomical target. Any 3D anatomical image scan may be used with the surgical robot 15 and is within the scope of the present invention. In some embodiments, the anatomical image capture area includes the tips of the reference needles so that the positions of their transmitters 120 can be determined relative to the anatomy. In some embodiments, at step 530, the RF signals from the catheter tip and reference needles are read.
In some embodiments, at step 540, the position of the catheter tip is calculated. Because the position of the catheter tip relative to the reference needles and the positions of the reference needles relative to the anatomy are known, the computer 100 can calculate the position of the catheter tip relative to the anatomy. In some embodiments, at step 550, the superimposed catheter tip and shaft representation is displayed on the anatomical image taken in step 520. In some embodiments, at step 560, the computer 100 determines whether the catheter tip is advancing toward the anatomical target. If it is not moving toward the anatomical target, then step 563 is reached. If it is moving correctly, then step 570 is reached.
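The tip calculation at step 540 amounts to composing two known relationships: tip-to-needles and needles-to-anatomy. The sketch below is a hypothetical illustration using 4x4 homogeneous transforms; the matrix names are assumptions for the example.

```python
# Hypothetical sketch of the transform chain behind step 540: the catheter
# tip is known relative to the reference needles, and the needles are known
# relative to the anatomy, so the two 4x4 homogeneous transforms compose.

import numpy as np

def tip_in_anatomy(T_anatomy_from_needles, T_needles_from_tip):
    """Compose transforms to express the catheter tip in anatomy coordinates."""
    T = T_anatomy_from_needles @ T_needles_from_tip
    return T[:3, 3]  # translation component = tip position in anatomy frame
```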
In some embodiments, at step 563, the robot 15 arm is adjusted to guide the catheter tip in the desired direction. If the anatomy needs to be recalibrated, then in some embodiments, the process beginning at step 520 is repeated. If the anatomy does not need to be recalibrated, then the process beginning at step 540 is repeated. In some embodiments, at step 570, the computer 100 determines whether the procedure has been completed. If the procedure has not been completed, then the process beginning at step 540 is repeated.
In some embodiments, at any time during the procedure, certain fault conditions may cause the computer 100 to interrupt the program and respond accordingly. For instance, in some embodiments, if the signal from the RF transmitters 120 cannot be read, then the computer 100 may be programmed to stop the movement of the robot 15 or remove the flexible catheter from the patient 18. Another example of a fault condition is if the robot 15 encounters a resistance above a preprogrammed tolerance level. A further example of a fault condition is if the RF transmitters 120 on the anatomical target indicate that the location of the anatomical target has shifted, so that the actual and calculated positions of the anatomy no longer match. In some embodiments, one indicator that the anatomical target location has shifted relative to the transmitters 120 is that the computer 100 calculates that the surgical instrument 35 appears to be inside bone when no drilling or penetration is actually occurring.
In some embodiments, the proper response to each condition may be programmed into the system, or a specific response may be user-initiated. For example, in some embodiments, the computer 100 may determine that, in response to an anatomy shift, the anatomy would have to be recalibrated, and the process beginning at step 520 should be repeated. Alternatively, in some embodiments, a fault condition may require the flowchart to repeat from step 500. In other embodiments, the user may decide that recalibration from step 520 is desired, and initiate that step directly.
Referring now toFIGS. 15A & 15B, screenshots of software for use with the described system is provided in accordance with some embodiments of the invention. The software provides the method to select the target area of surgery, plan the surgical path, check the planned trajectory of the surgical path, synchronize the medical images to the positioning system and precisely control the positioning system during surgery. The surgical positioning system and navigation software includes an optical guidance system or RF Local Positioning System (RF-LPS), which are in communication with the positioning system.
FIG. 15A shows a screen shot 600 of the selection step for a user using a software program as described herein in accordance with some embodiments of the invention. Screen shot 600 includes windows 615, 625, and 635, which show a 3D anatomical image of the surgical target 630 on different planes. In this step, the user selects the appropriate 3D image corresponding to the anatomical location where the procedure will occur. In some embodiments, the user uses a graphic control to change the perspective of the image in order to more easily view the image from different angles. In some embodiments, the user can view the surgical target 630 within separate coordinated views for each of the x-axis, y-axis and z-axis coordinates for each anatomical location in the database in each window 615, 625 and 635, respectively.
In some embodiments, after selecting the desired 3D image of the surgical target 630, the user will plan the appropriate trajectory on the selected image. In some embodiments, an input control is used with the software in order to plan the trajectory of the surgical instrument 35. In one embodiment of the invention, the input control is in the shape of a biopsy needle 8110 for which the user can plan a trajectory.
FIG. 15B shows a screen shot 650 during the medical procedure in accordance with some embodiments of the invention. In some embodiments, the user can still view the anatomical target 630 in different x-axis, y-axis and z-axis coordinate views in windows 615, 625, and 635. As shown in screen shot 650, the user can see the planned trajectory line 670 in multiple windows 615 and 625. In some embodiments, the actual trajectory and location of the surgical instrument 35 is dynamically updated, superimposed on the image, and shown as a line segment 660. In some other embodiments, the actual trajectory and location of the surgical instrument 35 could be shown as a trapezoid or a solid central line surrounded by a blurred or semi-transparent fringe to represent the region of uncertainty. In some embodiments (under perfect conditions with no bending of the surgical instrument as it enters tissues), the tracking system 3417 and robot 15 encoders calculate that the surgical instrument 35 should be located at the solid line or center of the trapezoid. In some embodiments, bending of the instrument 35 might occur if tissues of different densities are crossed, with the amount of reasonably expected bending displayed as the edges of the trapezoid or fringe. In some embodiments, the size of this edge could be estimated knowing the stiffness and tolerance of the surgical instrument 35 within the guide tube 50, and by using experimental data collected for the same instrument 35 under previous controlled conditions. In some embodiments, displaying this region of uncertainty helps prevent the user from expecting the system to deliver a tool to a target trajectory with a physically impossible level of precision.
As described earlier, in some embodiments, the surgical robot 15 can be used with alternate guidance systems other than an LPS. In some embodiments, the surgical robot system 1 can comprise a targeting fixture 690 for use with a guidance system. In some embodiments, one targeting fixture 690 comprises a calibration frame 700, as shown in FIGS. 20A-20E. A calibration frame 700 can be used in connection with many invasive procedures; for example, it can be used in thoracolumbar pedicle screw insertion in order to help achieve a more accurate trajectory position. In some embodiments, the use of the calibration frame 700 can simplify the calibration procedure. In some embodiments of the invention, the calibration frame 700 can be temporarily affixed to the skin of a patient 18 surrounding a selected site for a medical procedure, and then the medical procedure can be performed through a window defined by the calibration frame.
As shown in FIGS. 20A and 20B, in some embodiments of the invention, the calibration frame 700 can comprise a combination of radio-opaque markers 730 and infrared, or "active," markers 720. In some embodiments, the radio-opaque markers 730 can be located within the CT scan region 710, and the active markers 720 can be located outside of the CT scan region 710. In some embodiments, a surgical field 17 (i.e., the area where the invasive procedure will occur) can be located within the perimeter created by the radio-opaque markers 730. In some embodiments, the actual distances of the radio-opaque markers 730 and active markers 720 relative to each other can be measured from a high-precision laser scan of the calibration frame. Additionally or alternatively, in some embodiments, the actual relative distances can be measured by actively measuring the positions of the active markers 720 while simultaneously or nearly simultaneously pointing with a pointing device, such as a conventional digitizing probe, to one or more locations on the surface of the radio-opaque markers 730. In certain embodiments, digitizing probes can comprise active markers 720 embedded in a rigid body 690 and a tip extending from the rigid body.
In some embodiments, through factory calibration or other calibration method(s), such as pivoting calibration, the location of the probe tip relative to the rigid body of the probe can be established. In some embodiments, it can then be possible to calculate the location of the probe's tip from the probe's active markers 720. In some embodiments, for a probe with a concave tip that is calibrated as previously described, the point in space returned during operation of the probe can represent a point distal to the tip of the probe at the center of the tip's concavity. Therefore, in some embodiments, when a probe (configured with a concave tip and calibrated to a marker 730 of the same or nearly the same diameter as the targeting fixture's radio-opaque marker 730) is touched to the radio-opaque marker 730, the probe can register the center of the sphere. In some embodiments, active markers 720 can also be placed on the robot in order to monitor a position of the robot 15 and calibration frame 700 simultaneously or nearly simultaneously. In some embodiments, the calibration frame 700 is mounted on the patient's skin before surgery/biopsy, and will stay mounted during the entire procedure. Surgery/biopsy takes place through the center of the frame 700.
In some embodiments, when the region of the plate with the radio-opaque markers 730 is scanned intra-operatively or prior to surgery (for example, using a CT scanner), the CT scan contains both the medical images of the patient's bony anatomy and spherical representations of the radio-opaque markers 730. In some embodiments, software is used to determine the locations of the centers of the markers 730 relative to the trajectories defined by the surgeon on the medical images. Because the pixel spacing of the CT scan can be conveyed within encoded headers in DICOM images, or can be otherwise available to the tracking software (for example, the robotic guidance software 3406), it can, in some embodiments, be possible to register the locations of the centers of the markers 730 in Cartesian coordinates (in millimeters, for example, or other length units). In some embodiments, it can be possible to register the Cartesian coordinates of the tip and tail of each trajectory in the same length units.
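For illustration, converting a marker center from CT voxel indices to millimeters requires only the pixel spacing and inter-slice spacing conveyed in the scan headers. The sketch below assumes an axis-aligned scan with no gantry tilt; the (row, column) ordering of pixel spacing follows the DICOM convention, and all other names are illustrative.

```python
# Sketch: register a marker center, found at voxel indices (col, row,
# slice), in Cartesian millimeters using header spacing values. Assumes an
# axis-aligned volume; DICOM PixelSpacing is ordered (row, column).
import numpy as np

def voxel_center_to_mm(col, row, slc, pixel_spacing, slice_spacing, origin_mm):
    row_sp, col_sp = pixel_spacing           # mm per row step, mm per column step
    offset = np.array([col * col_sp, row * row_sp, slc * slice_spacing])
    return np.asarray(origin_mm, dtype=float) + offset
```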
In some embodiments, because the system knows the positions of the trajectories relative to the radio-opaque markers 730, the positions of the radio-opaque markers 730 relative to the active markers 720, and the positions of the active markers 720 on the calibration frame 700 relative to the active markers on the robot 15 (not shown), the system has all information necessary to position the robot's end-effectuator 30 relative to the defined trajectories.
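The chain of relationships enumerated above amounts to a composition of rigid transforms. Purely as a sketch, and assuming each relationship is stored as a 4x4 homogeneous matrix (the matrix names are assumptions, not part of the disclosure), a planned trajectory can be mapped into robot coordinates as follows:

```python
# Illustrative composition of the transform chain described above:
# trajectory -> radio-opaque markers 730 -> active markers 720 on the
# frame -> active markers on the robot. All matrix names are assumptions.
import numpy as np

def trajectory_in_robot_coords(T_opaque_from_traj,
                               T_frame_active_from_opaque,
                               T_robot_from_frame_active):
    """Return the 4x4 transform placing a planned trajectory in robot
    coordinates, from which the end-effectuator 30 can be positioned."""
    return (T_robot_from_frame_active
            @ T_frame_active_from_opaque
            @ T_opaque_from_traj)
```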
In some other embodiments of the invention, the calibration frame 700 can comprise at least three radio-opaque markers 730 embedded in the periphery of the calibration frame 700. In some embodiments, the at least three radio-opaque markers 730 can be positioned asymmetrically about the periphery of the calibration frame 700 such that the software, as described herein, can sort the at least three radio-opaque markers 730 based only on the geometric coordinates of each marker 730. In some embodiments, the calibration frame 700 can comprise at least one bank of active markers 720. In some embodiments, each bank of the at least one bank can comprise at least three active markers 720. In some embodiments, the at least one bank of active markers 720 can comprise four banks of active markers 720. In yet another aspect, the calibration frame 700 can comprise a plurality of leveling posts 77 coupled to respective corner regions of the calibration frame 700. In some embodiments, the leveling posts 77 can comprise radiolucent materials. In some embodiments, the plurality of leveling posts 77 can be configured to promote uniform, rigid contact between the calibration frame 700 and the skin of the patient 18. In some embodiments, a surgical-grade adhesive film, such as, for example and without limitation, Ioban™ from 3M™, can be used to temporarily adhere the calibration frame 700 to the skin of the patient 18. 3M™ and Ioban™ are registered trademarks of 3M Company. In some further embodiments, the calibration frame 700 can comprise a plurality of upright posts 75 that are angled away from the frame 700 (see FIG. 20B). In some embodiments, the plurality of active markers 720 can be mounted on the plurality of upright posts 75.
As shown in FIG. 20B, in some embodiments, there are four radio-opaque markers 730 (non-metallic BBs from an air gun) embedded in the periphery of the frame, labeled OP1, OP2, OP3, OP4. In some embodiments, only three markers 730 are needed for determining the orientation of a rigid body in space (the fourth marker is present for added accuracy). In some embodiments, the radio-opaque markers 730 are placed in an asymmetrical configuration (notice how OP1 and OP2 are separated from each other by more distance than OP3 and OP4, and OP1 and OP4 are aligned with each other across the gap, whereas OP3 is positioned more toward the center than OP2). This arrangement allows a computer algorithm to automatically sort the markers and determine which is which when given only the raw coordinates of the four markers and not their identification.
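One plausible sorting strategy (offered only as an illustration; the specification does not detail the algorithm) exploits the asymmetry directly: each marker's sorted distances to the other three form a signature that is unique in an asymmetric layout, so the observed points can be matched to the calibrated labeling by a small permutation search.

```python
# Illustrative marker-sorting sketch, not the disclosed algorithm: match
# four unlabeled observed markers to a calibrated reference layout by
# comparing per-marker sorted pairwise-distance signatures.
import numpy as np
from itertools import permutations

def sort_markers(observed, reference, tol_mm=1.0):
    """Return observed points (4x3 array) reordered as OP1..OP4."""
    def signature(pts):
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        return np.sort(d, axis=1)     # each row: that marker's distances
    ref_sig = signature(np.asarray(reference, dtype=float))
    obs = np.asarray(observed, dtype=float)
    for perm in permutations(range(4)):
        if np.allclose(signature(obs[list(perm)]), ref_sig, atol=tol_mm):
            return obs[list(perm)]
    raise ValueError("markers could not be sorted; check the configuration")
```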
In some embodiments, there are four banks of active markers 720 (three markers 720 per bank). Only one bank of three markers 720 is needed (the redundancy is for added accuracy and so that the system will still work if the surgeon, tools, or robot are blocking some of the markers). In some embodiments, despite the horizontal orientation of the patient 18, the angulation of the upright posts can permit the active markers 720 to face toward the cameras or detection devices of the tracking system (for example, the tracking system 3417). In some embodiments, the upright posts can be angled away from the calibration frame by about 10°.
In some applications, to establish the spatial relationship between the active markers 720 and radio-opaque markers 730, a conventional digitizing probe, such as a 6-marker probe, embedded with active markers 720 in a known relationship to the probe's tip (see for example FIG. 20C) can be used to point to each of the radio-opaque markers 730. In some embodiments, the probe can point to locations on two opposite surfaces of each spherical radio-opaque marker 730 while recording the position of the probe tip and the active markers 720 on the frame 700 simultaneously. Then, the average position of the two surface coordinates can be taken, corresponding to the center of the sphere. An image of the robot 15 used with this targeting fixture 690 is shown in FIG. 20D. For placement of conventional surgical screws, a biopsy, injection, or other procedures, in some embodiments, the robot 15 can work through the window formed by the frame 700. During a surgical procedure, in some embodiments, the working portal is kept on the interior of the frame 700, and the markers 720 on the exterior of the frame 700 can improve accuracy over a system where fiducials are mounted away from the area where surgery is being performed. Without wishing to be bound by theory, simulation, and/or modeling, it is believed that a reason for improved accuracy is that optimal accuracy of tracking markers 720 can be achieved if tracking markers 720 are placed around the perimeter of the frame 700 being tracked.
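The two-touch measurement described above reduces to a simple average: because the probe contacts diametrically opposite surface points of the spherical marker, the midpoint of the two recorded tip positions is the sphere center. A minimal sketch with illustrative names:

```python
# Sketch of the two-touch sphere-center measurement: the midpoint of two
# probe-tip positions recorded on opposite surfaces of a spherical
# radio-opaque marker 730 is the marker's center.
import numpy as np

def sphere_center_from_touches(tip_pos_a, tip_pos_b):
    return (np.asarray(tip_pos_a, dtype=float)
            + np.asarray(tip_pos_b, dtype=float)) / 2.0
```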
Further embodiments of the invention are shown in FIG. 20E illustrating a calibration frame 700. This fixture 690 is simplified to make it less obstructive to the surgeon. In some embodiments, the calibration frame 700 can comprise four active markers 720 having a lower profile than the active markers 720 described above and depicted in FIGS. 20A-20D. For example, the calibration frame 700 can comprise a plurality of upright posts 75 that are angled away from the calibration frame by about 10°. In some embodiments, the active markers 720 are mounted on the posts 75 that are angled back by 10°, and this angulation keeps the markers 720 facing toward the cameras despite the patient being horizontal.
Moreover, in some embodiments, the front markers 720 can have less chance of obscuring the rear markers 720. For example, posts 75 that are farthest away from the camera or farthest from a detection device of the tracking system 3417 can be taller and spaced farther laterally than the posts 75 closest to the camera. In some further embodiments of the invention, the calibration frame 700 can comprise markers 730 that are both radio-opaque for detection by a medical imaging scanner, and visible by the cameras or otherwise detectable by the real-time tracking system 3417. In some embodiments, the relationship between the radio-opaque markers 730 and the active markers 720 does not need to be measured or established because they are one and the same. Therefore, in some embodiments, as soon as the position is determined from the CT scan (or other imaging scan), the spatial relationship between the robot 15 and the anatomy of the patient 18 can be defined.
In other embodiments, the targeting fixture 690 can comprise a flexible roll configuration. In some embodiments, the targeting fixture 690 can comprise three or more radio-opaque markers 730 that define a rigid outer frame and nine or more active markers 720 embedded in a flexible roll of material (for example, the flexible roll 705 in FIG. 21A). As described earlier, radio-opaque markers 730 are visible on CT scans and/or other medical diagnostic images, such as MRI, or reconstructions from O-arm or Iso-C scans, and their centroids can be determined from the 3D image. Active markers 720 are tracked markers 720 whose 3D coordinates are detectable in real time using cameras or other means. Some embodiments can utilize active marker systems based on reflective optical systems such as those from Motion Analysis Inc. or Peak Performance. Other suitable technologies include infrared-emitting marker systems such as Optotrak, electromagnetic systems such as Medtronic's Axiem®, or Flock of Birds®, or a local positioning system ("LPS") described by Smith et al. in U.S. Patent Publication No. 2007/0238985. Flock Of Birds® is a registered trademark of Ascension Technology Corporation. Axiem is a trademark of Medtronic, Inc., and its affiliated companies. Medtronic® is a registered trademark used for Surgical and Medical Apparatus, Appliances and Instruments.
In some embodiments of the invention, at least a portion of the flexible roll 705 can comprise self-adhering film, such as, for example and without limitation, 3M™ Ioban™ adhesive film (iodine-impregnated transparent surgical drape) similar to the routinely used operating room product model 6651 EZ (3M, St. Paul, Minn.). Ioban™ is a trademark of 3M Company. In some embodiments, within the flexible roll 705, the radio-opaque markers 730 and active markers 720 can be rigidly coupled to each other, with each radio-opaque marker 730 coupled to three or more active markers 720. Alternatively, in some embodiments, the markers can simultaneously serve as radio-opaque and active markers (for example, an active marker 720 whose position can be detected from cameras or other sensors), and the position determined from the 3D medical image can substantially exactly correspond to the center of the marker 720. In some embodiments, as few as three such markers 720 could be embedded in the flexible roll 705 and still permit determination of the spatial relationship between the robot 15 and the anatomy of the patient 18. If the radio-opaque markers 730 and active markers 720 are not one and the same, in some embodiments at least three active markers 720 must be rigidly connected to each radio-opaque marker 730, because three separate non-collinear points are needed to unambiguously define the relative positions of points on a rigid body. That is, if only one or two active markers 720 are viewed, there is more than one possible calculated position where a rigidly coupled radio-opaque marker could be.
In some embodiments of the invention, other considerations can be used to permit the use of two active markers 720 per radio-opaque marker 730. For example, in some embodiments, if two active markers 720 and one radio-opaque marker 730 are intentionally positioned collinearly, with the radio-opaque marker 730 exactly at the midpoint between the two active markers 720, the location of the radio-opaque marker 730 can be determined as the mean location of the two active markers 720. Alternatively, in some embodiments, if the two active markers 720 and the radio-opaque marker 730 are intentionally positioned collinearly but with the radio-opaque marker 730 closer to one active marker 720 than the other, then the radio-opaque marker 730 must be at one of two possible positions along the line in space formed by the two active markers 720 (see FIG. 21B). In this case, in some embodiments, if the flexible roll 705 is configured so that each pair of active markers 720 is oriented (when in its final position) with one marker 720 more toward the center of the flexible roll 705, then it can be determined from the orientations of all markers, or certain combinations of markers from different regions, which of the two possible positions within each region is the correct position for the radio-opaque marker 730 (see FIG. 21C, showing the flexible roll 705 rolled on a torso and unrolled on a torso with markers 720, 730 in place). As shown, the radio-opaque markers 730 can be positioned toward the inside of the frame 705, with marker groups nearer to the top of the figure having the radio-opaque marker 730 positioned below the active markers 720 and marker groups near the bottom of the figure having the radio-opaque marker positioned above the active markers 720.
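As an illustration of the two-marker geometry just described (the helper below is hypothetical), the radio-opaque marker lies on the line through the two active markers at a calibrated offset from one of them, and the layout convention, namely which marker faces the roll's center, selects between the two candidate positions:

```python
# Illustrative resolution of the two-active-marker case: the radio-opaque
# marker 730 lies on the line through active markers A and B at a known
# distance d from A; the roll's orientation convention picks the side.
import numpy as np

def opaque_marker_position(A, B, d, toward_B):
    """A, B: active marker xyz positions; d: calibrated offset from A (mm);
    toward_B: True if the marker sits on the side of A facing B."""
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    u = (B - A) / np.linalg.norm(B - A)      # unit direction A -> B
    return A + d * u if toward_B else A - d * u
```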
In some embodiments, the flexible roll 705 can be positioned across the patient's back or other area, and adhered to the skin of the patient 18 as it is unrolled. In some embodiments, knowing the spatial relationship between each triad of active markers 720 and the rigidly coupled radio-opaque marker 730, it is possible to establish the relationship between the robot 15 (position established by its own active markers 720) and the anatomy (visualized together with the radio-opaque markers 730 on MRI, CT, or other 3D scan). In some embodiments, the flexible roll 705 can be completely disposable. Alternatively, in some other embodiments, the flexible roll 705 can comprise reusable marker groups integrated with a disposable roll with medical grade adhesive on each side to adhere to the patient 18 and the marker groups 720, 730. In some further embodiments, the flexible roll 705 can comprise a drape incorporated into the flexible roll 705 for covering the patient 18, with the drape configured to fold outwardly from the roll 705.
In some embodiments, after the roll 705 has been unrolled, the roll 705 can have a desired stiffness such that the roll 705 does not substantially change its position relative to the bony anatomy of the patient 18. In some embodiments of the invention, a conventional radiolucent wire can be embedded in the perimeter of the frame 700. In some embodiments, a chain of plastic beads, such as the commercially available tripods shown in FIG. 21D, or a commercially available "snake light" type fixture, can be employed to provide the desired stiffness to the unrolled fixture such that it maintains its position after unrolling occurs. For example, in some embodiments, the beads of the chain of plastic beads as shown can be affixed to each other with high friction so that they hold their position once shifted. Further, in some embodiments, chains of beads can be incorporated into, and define, a perimeter of the frame 700. In some embodiments, this type of frame could be loaded with conventional chemicals that mix at the time of application. For example, in some embodiments, components of a conventional two-part epoxy could be held in separate fragile baggies within the frame that pop open when the user first starts to manipulate the beads. In some embodiments, the user would attach the frame to the patient 18, and mold it to the contours of the patient's body. After a short period of time, the frame 700 would solidify to form a very rigid frame, locking the beads in their current orientation.
In some embodiments of the invention, the targeting fixture 690 can be an adherable fixture, configured for temporary attachment to the skin of a patient 18. For example, in some embodiments, the targeting fixture 690 can be temporarily adhered to the patient 18 during imaging, removed, and then subsequently reattached during a follow-up medical procedure, such as a surgery. In some embodiments, the targeting fixture 690 can be applied to the skull of a patient 18 for use in placement of electrodes for deep brain stimulation. In some embodiments, this method can use a single fixture 690, or two related fixtures. In this instance, the two related fixtures can share the same surface shape. However, one fixture 690 can be temporarily attached at the time of medical image scanning, and can include radio-opaque markers 730 (but not active markers 720), and the second fixture 690 can be attached at the time of surgery, and can include active markers 720 (but not radio-opaque markers 730).
In some embodiments, the first fixture (for scanning) can comprise a frame 690 with three or more embedded radio-opaque markers 730, and two or more openings 740 for application of markings (the markings shown as 750 in FIG. 22B). In some embodiments, the device 690 can be adhered to the scalp of a patient 18, and the openings 740 can be used to paint marks on the scalp with, for example, henna or dye (shown as "+" marks 750 in FIG. 22B). With this fixture 690 in place, in some embodiments, the patient 18 can receive a 3D scan (for example an MRI or CT scan) in which the radio-opaque markers 730 are captured. As illustrated by FIG. 22B, in some embodiments, the fixture 690 can then be removed, leaving the dye marks 750 on the scalp. In some embodiments, on a later date (before the dye marks 750 wear off), the patient 18 can return, and the surgeon or technician can attach the second fixture (for surgery) containing active markers 720 for intraoperative tracking (see FIGS. 22C-22D). In some embodiments, the fixture 690 shown in FIG. 22C can be mounted to the scalp, spatially positioned and oriented in the same position as the previously adhered first fixture (shown in FIG. 22A) by ensuring that the previously placed dye marks 750 line up with holes in the second fixture 690 (see the alignment arrows depicted in FIG. 22C). Optionally, in some embodiments, the fixture 690 can have a transparent frame for good visualization. The above-described method assumes that the locations of the marks 750 do not change over the period of time between the scan and the return of the patient 18 for surgery. Since the relative positions between the radio-opaque markers 730 from the temporary (first) fixture 690 (which appear in the scan) and the active markers 720 on the second applied fixture 690 are known through a calibration and/or by careful manufacturing of the fixtures 690, the coordinate system of the anatomy and the coordinate system of the active markers 720 can be synchronized so that the robot 15 can target any planned trajectory on the 3D image as described further herein. Further, in some embodiments, this method can enable image guidance with only one pre-op scan and without requiring the patient 18 to go home after a pre-op scan. This circumvents the need for a patient 18 to take care of wounds from targeting screws that are invasively drilled into the skull of the patient 18.
In some embodiments of the invention, the targeting fixture 690 can comprise a conventional clamping mechanism for securely attaching the targeting fixture 690 to the patient 18. For example, in some embodiments, the targeting fixture 690 can be configured to clamp to the spinous process 6301 of a patient 18 after the surgeon has surgically exposed the spinous process. FIG. 23 shows a dynamic tracking device 2300 mounted to the spinous process 2310 in the lumbar spine of a patient 18 in accordance with some embodiments of the invention. This targeting fixture is used with Medtronic's StealthStation. This figure is reprinted from Bartolomei J, Henn J S, Lemole G M Jr., Lynch J, Dickman C A, Sonntag V K H, Application of frameless stereotaxy to spinal surgery, Barrow Quarterly 17(1), 35-43 (2001). StealthStation® is a trademark of Medtronic, Inc., and its affiliated companies.
In some embodiments, during use of a targeting fixture 690 having a conventional clamping mechanism with image guidance, the relationship between the markers 720, 730 and the bony anatomy of the patient 18 can be established using a registration process wherein known landmarks are touched with a digitizing probe at the same time that the markers on the tracker are visible. In some embodiments of the invention, the probe itself can have a shaft protruding from a group of markers 720, 730, thereby permitting the tracking system 3417 to calculate the coordinates of the probe tip relative to the markers 720, 730.
In some embodiments, the clamping mechanism of the targeting fixture 690 can be configured for clamping to the spinous process 2310, or can be configured for anchoring to bone of the patient 18 such that the fixture 690 is substantially stationary and not easily moved. In some further embodiments, the targeting fixture 690 can comprise at least three active markers 720 and distinct radio-opaque markers 730 that are detected on the CT or other 3D image, preferably near the clamp (to be close to bone). In some alternative embodiments, the active markers 720 themselves must be configured to be visualized accurately on the CT or other 3D image. In certain embodiments, the portion of the fixture 690 containing a radio-opaque marker 730 can be made to be detachable to enable removal from the fixture after the 3D image is obtained. In some further embodiments, a combination of radio-opaque markers 730 and active markers 720 can allow tracking with the robot 15 in the same way that is possible with the frame-type targeting fixtures 690 described above.
In some embodiments, one aspect of the software and/or firmware disclosed herein is a unique process for locating the center of the above-described markers 730 that takes advantage of the fact that a CT scan can comprise slices, typically spaced 1.5 mm or more apart in the z direction, and sampled with about 0.3 mm resolution in the x-axis and y-axis directions. In some embodiments, since the diameter of the radio-opaque markers 730 is several times larger than this slice spacing, different z slices of the sphere will appear as circles of different diameters on each successive x-y planar slice. In some embodiments, since the diameter of the sphere is defined beforehand, the necessary z position of the center of the sphere relative to the slices can be calculated to produce the given set of circles of various diameters. Stated similarly, in some embodiments, a z slice substantially exactly through the center of the sphere can yield a circle with a radius that is substantially the same as the radius R of the sphere. In some embodiments, a z slice through a point at the top or bottom of the sphere can yield a circle with a radius approximating zero. In some other embodiments, a z slice through a z-axis coordinate Z1 between the center and the top or bottom of the sphere can yield a circle with a radius R1=R cos(arcsin(Z1/R)).
In some embodiments of the invention, the observed radii of circles on z slices of known inter-slice spacing can be analyzed using the equation R1=R cos(arcsin(Z1/R)). This provides a unique mathematical solution permitting the determination of the distance of each slice away from the center of the sphere. In cases in which a sphere has a diameter small enough that only a few slices through the sphere appear on a medical image, this process can provide a more precise estimate of the center of the sphere.
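As a worked illustration (an assumed implementation, not the disclosed code), note that R1 = R cos(arcsin(Z1/R)) is equivalent to R1 = sqrt(R^2 - Z1^2), so with the sphere radius R known in advance, the circle radii observed on slices at known z positions over-determine the z coordinate of the sphere center, which can be recovered by least squares:

```python
# Sketch: recover the sphere-center z coordinate from circle radii observed
# on successive CT slices, using r_i^2 = R^2 - (z_i - z0)^2 (equivalent to
# R1 = R*cos(arcsin(Z1/R))). An assumed implementation, not the disclosed code.
import numpy as np
from scipy.optimize import least_squares

def sphere_center_z(slice_z_mm, circle_radii_mm, R_mm):
    z = np.asarray(slice_z_mm, dtype=float)
    r = np.asarray(circle_radii_mm, dtype=float)
    def residuals(z0):
        return r**2 - (R_mm**2 - (z - z0)**2)
    return least_squares(residuals, x0=np.mean(z)).x[0]
```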
Some embodiments of the use of the calibration frame 700 are described to further clarify the methods of use. For example, some embodiments include the steps of a conventional closed screw or conventional needle (for example, a biopsy needle 8110) insertion procedure utilizing a calibration frame 700 as follows. In some embodiments, a calibration frame 700 is attached to the skin of the patient 18, substantially within the region at which surgery/biopsy is to take place. In some embodiments, the patient 18 receives a CT scan either supine or prone, whichever positioning orients the calibration frame 700 upward. In some embodiments, the surgeon subsequently manipulates three planar views of the CT images of the patient 18 with rotations and translations. In some embodiments, the surgeon then draws trajectories on the images that define the desired position and strike angle of the end-effectuator 30. In some embodiments, automatic calibration can be performed in order to obtain the centers of the radio-opaque markers 730 of the calibration frame 700, and to utilize the stored relationship between the active markers 720 and radio-opaque markers 730. This procedure permits the robot 15 to move in the coordinate system of the anatomy and/or drawn trajectories.
In some embodiments, the robot 15 then will move to the desired position. In some embodiments, if forceful resistance beyond a pre-set tolerance is encountered, the robot 15 will halt. In some further embodiments, the robot 15 can hold the guide tube 50 at the desired position and strike angle to allow the surgeon to insert a conventional screw or needle (for example, needle 7405, 7410 or biopsy needle 8110). In some embodiments, if tissues move in response to applied force or due to breathing, the movement will be tracked by optical markers 720, and the robot's position will automatically be adjusted.
As a further illustration of a procedure using an alternate guidance system, in some embodiments, the steps of an open screw insertion procedure utilizing an optical guidance system are described. In some embodiments, after surgical exposure, a targeting fixture 690 comprising a small tree of optical markers, for example, can be attached to a bony prominence in the area of interest. In some embodiments, conventional calibration procedures for image guidance can be utilized to establish the anatomy relative to the optical tracking system 3417 and the medical images. For another example, the targeting fixture 690 can contain rigidly mounted, substantially permanent or detachable radio-opaque markers 730 that can be imaged with a CT scan. In some embodiments, calibration procedures consistent with those stated for the calibration frame 700 can be utilized to establish the anatomy relative to the robot 15 and the medical image.
In some embodiments, the surgeon manipulates three planar views of the patient's CT images with rotations and translations. In some embodiments, the surgeon then draws trajectories on the images that define the desired position and strike angle of the end-effectuator 30. In some embodiments, the robot 15 moves to the desired position. In some embodiments, if forceful resistance beyond a pre-set tolerance is encountered, the robot 15 will halt. In some embodiments, the robot 15 holds the guide tube 50 at the desired position and strike angle to allow the surgeon to insert a conventional screw. In some embodiments, if tissues move in response to applied force or due to breathing, the movement will be tracked by optical markers 720, and the robot's position will automatically be adjusted.
Although several embodiments of the invention have been disclosed in the foregoing specification, it is understood that many modifications and other embodiments of the invention will come to mind to those skilled in the art to which the invention pertains, having the benefit of the teaching presented in the foregoing description and associated drawings. It is thus understood that the invention is not limited to the specific embodiments disclosed hereinabove, and that many modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although specific terms are employed herein, as well as in the claims which follow, they are used only in a generic and descriptive sense, and not for the purposes of limiting the described invention, nor the claims which follow.
It will be appreciated by those skilled in the art that while the invention has been described above in connection with particular embodiments and examples, the invention is not necessarily so limited, and that numerous other embodiments, examples, uses, modifications and departures from the embodiments, examples and uses are intended to be encompassed by the claims attached hereto. The entire disclosure of each patent and publication cited herein is incorporated by reference, as if each such patent or publication were individually incorporated by reference herein. Various features and advantages of the invention are set forth in the following claims.