BACKGROUND

1. Technical Field
The present disclosure relates to systems and methods used in relation to a surgical procedure. More specifically, the present disclosure is directed to the use of a planning system to determine a treatment plan and a navigation system to effect the treatment plan for a surgical procedure.
2. Background of the Related Art
Electrosurgical devices have become widely used. Electrosurgery involves the application of thermal and/or electrical energy to cut, dissect, ablate, coagulate, cauterize, seal or otherwise treat biological tissue during a surgical procedure. Electrosurgery is typically performed using a handpiece including a surgical device (e.g., end effector or ablation probe) that is adapted to transmit energy to a tissue site during electrosurgical procedures, a remote electrosurgical generator operable to output energy, and a cable assembly operatively connecting the surgical device to the remote generator.
Treatment of certain diseases requires the destruction of malignant tissue growths, e.g., tumors. In the treatment of diseases such as cancer, certain types of tumor cells have been found to denature at elevated temperatures that are slightly lower than temperatures normally injurious to healthy cells. Known treatment methods, such as hyperthermia therapy, typically involve heating diseased cells to temperatures above 41° C. while maintaining adjacent healthy cells below the temperature at which irreversible cell destruction occurs. These methods may involve applying electromagnetic radiation to heat, ablate and/or coagulate tissue. There are a number of different types of electrosurgical apparatus that can be used to perform ablation procedures.
Minimally invasive tumor ablation procedures for cancerous or benign tumors may be performed using two-dimensional (2D) preoperative computed tomography (CT) images and an “ablation zone chart,” which typically describes the characteristics of an ablation needle in experimental, ex vivo tissue across a range of input parameters (power, time). Energy dose (power, time) can be correlated to ablation tissue effect (volume, shape) for a specific design. It is possible to control the energy dose delivered to tissue through microwave antenna design; for example, an antenna choke may be employed to provide a known location of microwave transfer from the device into tissue. In another example, dielectric buffering enables a relatively constant delivery of energy from the device into the tissue, independent of differing or varying tissue properties.
After a user determines which ablation needle should be used to effect treatment of a target, the user performs the treatment with ultrasound guidance. Typically, a high level of skill is required to place a surgical device into a target identified under ultrasound. Of primary importance is the ability to choose the angle and entry point required to direct the device toward the ultrasound image plane (e.g., where the target is being imaged).
Ultrasound-guided intervention involves the use of real-time ultrasound imaging (transabdominal, intraoperative, etc.) to accurately direct surgical devices to their intended target. This can be performed by percutaneous application and/or intraoperative application. In each case, the ultrasound system will include a transducer that images patient tissue and is used to identify the target and to anticipate and/or follow the path of an instrument toward the target.
Ultrasound-guided interventions are commonly used today for needle biopsy procedures to determine malignancy of suspicious lesions that have been detected (breast, liver, kidney, and other soft tissues). Additionally, central-line placements are common to gain jugular access and allow medications to be delivered. Finally, emerging uses include tumor ablation and surgical resection of organs (liver, lung, kidney, and so forth). In the case of tumor ablation, after ultrasound-guided targeting is achieved, a biopsy-like needle may be employed to deliver energy (RF, microwave, cryo, and so forth) with the intent to kill the tumor. In the case of an organ resection, intimate knowledge of the subsurface anatomy during dissection, and display of a surgical device in relation to this anatomy, is key to achieving a successful surgical margin while avoiding critical structures.
In each of these cases, the ultrasound-guidance typically offers a two dimensional image plane that is captured from the distal end of a patient-applied transducer. Of critical importance to the user for successful device placement is the ability to visualize and characterize the target, to choose the instrument angle and entry point to reach the target, and to see the surgical device and its motion toward the target. Today, the user images the target and uses a high level of skill to select the instrument angle and entry point. The user must then either move the ultrasound transducer to see the instrument path (thus losing sight of the target) or assume the path is correct until the device enters the image plane. Of primary importance is the ability to choose the angle and entry point required to direct the device toward the ultrasound image plane (e.g., where the target is being imaged).
SUMMARY

This description may use the phrases “in an embodiment,” “in embodiments,” “in some embodiments,” or “in other embodiments,” which may each refer to one or more of the same or different embodiments in accordance with the present disclosure. For the purposes of this description, a phrase in the form “A/B” means A or B. For the purposes of the description, a phrase in the form “A and/or B” means “(A), (B), or (A and B)”. For the purposes of this description, a phrase in the form “at least one of A, B, or C” means “(A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C)”.
As shown in the drawings and described throughout the following description, as is traditional when referring to relative positioning on a surgical device, the term “proximal” refers to the end of the apparatus that is closer to the user or generator, while the term “distal” refers to the end of the apparatus that is farther away from the user or generator. The term “user” refers to any medical professional (i.e., doctor, nurse, or the like) performing a medical procedure involving the use of aspects of the present disclosure described herein.
As used in this description, the term “surgical device” generally refers to a surgical tool that imparts electrosurgical energy to treat tissue. Surgical devices may include, but are not limited to, needles, probes, catheters, endoscopic instruments, laparoscopic instruments, vessel sealing devices, surgical staplers, etc. The term “electrosurgical energy” generally refers to any form of electromagnetic, optical, or acoustic energy.
Electromagnetic (EM) energy is generally classified by increasing frequency or decreasing wavelength into radio waves, microwaves, infrared, visible light, ultraviolet, X-rays and gamma-rays. As used herein, the term “microwave” generally refers to electromagnetic waves in the frequency range of 300 megahertz (MHz) (3×10⁸ cycles/second) to 300 gigahertz (GHz) (3×10¹¹ cycles/second). As used herein, the term “RF” generally refers to electromagnetic waves having a lower frequency than microwaves. As used herein, the term “ultrasound” generally refers to cyclic sound pressure with a frequency greater than the upper limit of human hearing.
As used in this description, the term “ablation procedure” generally refers to any ablation procedure, such as microwave ablation, radio frequency (RF) ablation or microwave ablation-assisted resection. As it is used in this description, “energy applicator” generally refers to any device that can be used to transfer energy from a power generating source, such as a microwave or RF electrosurgical generator, to tissue.
As they are used in this description, the terms “power source” and “power supply” refer to any source (e.g., battery) of electrical power in a form that is suitable for operating electronic circuits. As it is used in this description, “transmission line” generally refers to any transmission medium that can be used for the propagation of signals from one point to another. As used in this description, the terms “switch” or “switches” generally refer to any electrical actuators, mechanical actuators, electro-mechanical actuators (rotatable actuators, pivotable actuators, toggle-like actuators, buttons, etc.), optical actuators, or any suitable device that generally fulfills the purpose of connecting and disconnecting electronic devices, or a component thereof, instruments, equipment, transmission line or connections and appurtenances thereto, or software.
As used in this description, “electronic device” generally refers to a device or object that utilizes the properties of electrons or ions moving in a vacuum, gas, or semiconductor. As it is used herein, “electronic circuitry” generally refers to the path of electron or ion movement, as well as the direction provided by the device or object to the electrons or ions. As it is used herein, “electrical circuit” or simply “circuit” generally refers to a combination of a number of electrical devices and conductors that when connected together, form a conducting path to fulfill a desired function. Any constituent part of an electrical circuit other than the interconnections may be referred to as a “circuit element” that may include analog and/or digital components.
The term “generator” may refer to a device capable of providing energy. Such device may include a power source and an electrical circuit capable of modifying the energy outputted by the power source to output energy having a desired intensity, frequency, and/or waveform.
As it is used in this description, “user interface” generally refers to any visual, graphical, tactile, audible, sensory or other mechanism for providing information to and/or receiving information from a user or other entity. The term “user interface” as used herein may refer to an interface between a human user (or operator) and one or more devices to enable communication between the user and the device(s). Examples of user interfaces that may be employed in various embodiments of the present disclosure include, without limitation, switches, potentiometers, buttons, dials, sliders, a mouse, a pointing device, a keyboard, a keypad, joysticks, trackballs, display screens, various types of graphical user interfaces (GUIs), touch screens, microphones and other types of sensors or devices that may receive some form of human-generated stimulus and generate a signal in response thereto. As it is used herein, “computer” generally refers to anything that transforms information in a purposeful way.
The systems described herein may also utilize one or more controllers to receive various information and transform the received information to generate an output. The controller may include any type of computing device, computational circuit, or any type of processor or processing circuit capable of executing a series of instructions that are stored in a memory. The controller may include multiple processors and/or multicore central processing units (CPUs) and may include any type of processor, such as a microprocessor, digital signal processor, microcontroller, or the like. The controller may also include a memory to store data and/or algorithms to perform a series of instructions.
Any of the herein described methods, programs, algorithms or codes may be converted to, or expressed in, a programming language or computer program. The terms “Programming Language” and “Computer Program” refer to any language used to specify instructions to a computer, and include (but are not limited to) these languages and their derivatives: Assembler, Basic, Batch files, BCPL, C, C+, C++, Delphi, Fortran, Java, JavaScript, Machine code, operating system command languages, Pascal, Perl, PL1, scripting languages, Visual Basic, metalanguages which themselves specify programs, and all first, second, third, fourth, and fifth generation computer languages. Also included are database and other data schemas, and any other meta-languages. For the purposes of this definition, no distinction is made between languages which are interpreted, compiled, or use both compiled and interpreted approaches. For the purposes of this definition, no distinction is made between compiled and source versions of a program. Thus, reference to a program, where the programming language could exist in more than one state (such as source, compiled, object, or linked) is a reference to any and all such states. The definition also encompasses the actual instructions and the intent of those instructions.
Any of the herein described methods, programs, algorithms or codes may be contained on one or more machine-readable media or memory. The term “memory” may include a mechanism that provides (e.g., stores and/or transmits) information in a form readable by a machine such as a processor, computer, or a digital processing device. For example, a memory may include a read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, or any other volatile or non-volatile memory storage device. Code or instructions contained thereon can be represented by carrier wave signals, infrared signals, digital signals, and by other like signals.
As it is used in this description, the phrase “treatment plan” refers to a selected ablation needle, energy level, and/or treatment duration to effect treatment of a target. The term “target” refers to a region of tissue slated for treatment, and may include, without limitation, tumors, fibroids, and other tissue that is to be ablated. The phrase “ablation zone” refers to the area and/or volume of tissue that will be ablated.
As it is used in this description, the phrase “computed tomography” (CT) or “computed axial tomography” (CAT) refers to a medical imaging method employing tomography created by computer processing. Digital geometry processing is used to generate a three-dimensional image of the inside of an object from a large series of two-dimensional X-ray images taken around a single axis of rotation.
As it is used in this description, the terms magnetic resonance imaging (MRI), nuclear magnetic resonance imaging (NMRI), and magnetic resonance tomography (MRT) refer to a medical imaging technique used in radiology to visualize detailed internal structures. MRI makes use of the property of nuclear magnetic resonance (NMR) to image nuclei of atoms inside the body. An MRI machine uses a powerful magnetic field to align the magnetization of some atomic nuclei in the body, while using radio frequency fields to systematically alter the alignment of this magnetization. This causes the nuclei to produce a rotating magnetic field detectable by the scanner, and this information is recorded to construct an image of the scanned area of the body.
As it is used in this description, the term “three-dimensional ultrasound” or “3D ultrasound” refers to a medical ultrasound technique providing three-dimensional images.
As it is used in this description, the phrase “digital imaging and communication in medicine” (DICOM) refers to a standard for handling, storing, printing, and transmitting information relating to medical imaging. It includes a file format definition and a network communications protocol. The communication protocol is an application protocol that uses TCP/IP to communicate between systems. DICOM files can be exchanged between two entities that are capable of receiving image and patient data in DICOM format.
Any of the herein described systems and methods may transfer data therebetween over a wired network, wireless network, point to point communication protocol, a DICOM communication protocol, a transmission line, a removable storage medium, and the like.
The systems described herein may utilize one or more sensors configured to detect one or more properties of tissue and/or the ambient environment. Such properties include, but are not limited to: tissue impedance, tissue type, tissue clarity, tissue compliance, temperature of the tissue or jaw members, water content in tissue, jaw opening angle, water mobility in tissue, energy delivery, and jaw closure pressure.
In an aspect of the present disclosure, a method of determining a treatment plan is provided. The method includes obtaining a plurality of images and rendering the plurality of images in three dimensions. The plurality of images are automatically segmented to demarcate a target area, and a treatment plan is automatically determined based on the target area.
In the method, automatically segmenting the plurality of images includes selecting a seed point and creating a region of interest around the seed point. A first plurality of pixels in the region of interest is compared to a predetermined threshold. A second plurality of pixels is selected from the first plurality of pixels, wherein the second plurality of pixels are connected to the seed point and are less than the predetermined threshold. A geometric filter is applied to the second plurality of pixels.
The method also includes determining if the second plurality of pixels forms a predetermined object. If the second plurality of pixels does not form a predetermined object, the predetermined threshold is increased and the steps of comparing a first plurality of pixels, selecting a second plurality of pixels, applying a geometric filter, and determining if the second plurality of pixels forms a predetermined object are repeated.
Further, automatically determining a treatment plan includes performing a volumetric analysis on the target area, selecting a surgical device, and calculating an energy level and treatment duration based on the target area and the selected surgical device. The plurality of images are rendered and the target area and the treatment plan are displayed.
In addition, the method also includes automatically segmenting at least one vessel or at least one organ. The treatment plan is adjusted based on a proximity of the at least one vessel or organ to the target and the adjusted treatment plan is displayed.
In another aspect of the present disclosure, a method of navigating a surgical device using a fiducial pattern disposed on an ultrasound device and an image capture device disposed on the surgical device is provided. In the method, an ultrasound image of a scan plane and a fiducial image of the fiducial pattern are obtained. The fiducial image is corrected for lens distortion and then a correspondence between the fiducial image and a model image is determined. A camera pose is estimated and a position of the surgical device is transformed to model coordinates. The ultrasound image and a virtual image of the surgical device based on the model coordinates are then displayed.
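A minimal sketch of this navigation pipeline, assuming an OpenCV-style camera model, is shown below; the detect_fiducials helper, the camera intrinsics, and the fiducial model coordinates are hypothetical inputs used for illustration rather than elements defined in the present disclosure.

```python
import cv2
import numpy as np

def locate_device_in_model(frame, K, dist_coeffs, model_pts_3d, detect_fiducials):
    """Estimate the camera (and hence the rigidly attached surgical device)
    position in the ultrasound-device (model) coordinate frame.

    frame            -- raw camera image of the fiducial pattern
    K, dist_coeffs   -- camera intrinsics and lens-distortion coefficients
    model_pts_3d     -- Nx3 fiducial coordinates in the model frame
    detect_fiducials -- callable returning Nx2 image coordinates, ordered to match model_pts_3d
    """
    # Correct the fiducial image for lens distortion.
    undistorted = cv2.undistort(frame, K, dist_coeffs)

    # Find correspondence between the fiducial image and the model image.
    image_pts_2d = detect_fiducials(undistorted)

    # Estimate the camera pose relative to the fiducial pattern.
    ok, rvec, tvec = cv2.solvePnP(np.float32(model_pts_3d),
                                  np.float32(image_pts_2d),
                                  K, None)
    if not ok:
        return None

    # Transform the camera position into model coordinates: center = -R^T * t.
    R, _ = cv2.Rodrigues(rvec)
    camera_in_model = -R.T @ tvec
    return camera_in_model.ravel()
```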
In yet another aspect of the present disclosure, a method of tracking a first device having an image capture device in relation to a second device having a fiducial pattern is provided. The method includes obtaining a fiducial image of the fiducial pattern, correcting the fiducial image for lens distortion, finding correspondence between the fiducial image and a model image, estimating a camera pose, and transforming a position of the surgical device to model coordinates.
The fiducial pattern includes a plurality of first unique identifiers disposed in a region and a plurality of second unique identifiers. The method also includes finding the plurality of first unique identifiers by applying a first threshold to the fiducial image, performing a connected component analysis, applying a geometric filter to determine the weighted centroids of the plurality of first unique identifiers, and storing the weighted centroids of the plurality of first unique identifiers. In addition, the method includes finding the plurality of second unique identifiers by inverting the fiducial image, applying a second threshold to the inverted fiducial image, performing a connected component analysis, applying a geometric filter to determine the weighted centroids of the plurality of second unique identifiers and to determine the region having the plurality of first unique identifiers, and storing the weighted centroids of the plurality of second identifiers and the region having the plurality of first unique identifiers. The first threshold and second threshold are dynamic thresholds.
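The identifier-finding steps described above can be sketched as follows, using thresholding, connected-component labeling, and intensity-weighted centroids; the blob-area bounds standing in for the geometric filter are illustrative assumptions, not parameters specified in the disclosure.

```python
import numpy as np
from scipy import ndimage

def find_marker_centroids(gray, threshold, min_area=20, max_area=2000, invert=False):
    """Locate fiducial markers and return their weighted centroids.

    For the first identifiers the image is thresholded directly; for the
    second identifiers the image is inverted first (invert=True).
    """
    img = 255 - gray if invert else gray

    # Apply the (dynamic) threshold to obtain a binary image.
    binary = img > threshold

    # Connected component analysis.
    labels, num = ndimage.label(binary)

    centroids = []
    for lbl in range(1, num + 1):
        mask = labels == lbl
        area = mask.sum()
        # Geometric filter: keep only blobs whose size is plausible for a marker.
        if not (min_area <= area <= max_area):
            continue
        # Weighted centroid uses pixel intensities inside the blob as weights.
        cy, cx = ndimage.center_of_mass(img, labels, lbl)
        centroids.append((cx, cy))
    return centroids
```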
Correspondence between the fiducial image and the model image is found by selecting a plurality of first unique identifiers from the fiducial image, arranging the plurality of first unique identifiers in clockwise order, arranging a plurality of model fiducials in clockwise order, and computing a planar homography. The plurality of model fiducials are transformed into image coordinates using the computed planar homography. A model fiducial from the plurality of model fiducials is found that matches the fiducial image, and the residual error is computed.
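One way to realize this correspondence search is sketched below; selecting the model with the smallest residual error follows the text, while the use of OpenCV's homography routines is an implementation assumption.

```python
import cv2
import numpy as np

def match_model_to_image(image_pts, candidate_models):
    """Pick the fiducial model that best explains the detected image points.

    image_pts        -- Nx2 detected centroids, arranged in clockwise order
    candidate_models -- list of Nx2 model fiducial layouts, same ordering convention
    Returns (best_model_index, homography, residual_error).
    """
    image_pts = np.float32(image_pts)
    best = (None, None, np.inf)
    for idx, model_pts in enumerate(candidate_models):
        # Compute a planar homography from model coordinates to image coordinates.
        H, _ = cv2.findHomography(np.float32(model_pts), image_pts)
        if H is None:
            continue
        # Transform the model fiducials into image coordinates.
        projected = cv2.perspectiveTransform(
            np.float32(model_pts).reshape(-1, 1, 2), H).reshape(-1, 2)
        # Residual error between projected model points and detected points.
        residual = np.sqrt(np.mean(np.sum((projected - image_pts) ** 2, axis=1)))
        if residual < best[2]:
            best = (idx, H, residual)
    return best
```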
Selecting a plurality of first unique identifiers from the fiducial image is done by selecting the plurality of first unique identifiers, selecting the region having the plurality of first unique identifiers, counting the number of first unique identifiers in the selected region, and comparing the number of first unique identifiers in the selected region to a predetermined number. If the number of first unique identifiers in the selected region equals the predetermined number, the method proceeds to arranging the plurality of first unique identifiers in clockwise order. If the number of first unique identifiers in the selected region does not equal the predetermined number, a new region is selected and the number of first unique identifiers in the new region is counted.
In yet another aspect of the present disclosure, a planning and navigation method is provided. The planning and navigation method includes obtaining a plurality of images, rendering the plurality of images in three dimensions, automatically segmenting the plurality of images to demarcate a target area, and automatically determining a treatment plan based on the target area. The planning and navigation method also includes obtaining an ultrasound image of a scan plane including the target, obtaining a fiducial image of a fiducial pattern disposed on an ultrasound device using an image capture device on a surgical device, correcting the fiducial image for lens distortion, finding correspondence between the fiducial image and a model image, estimating a camera pose, transforming a position of the surgical device to model coordinates, displaying the ultrasound image and a virtual image of the surgical device based on the model coordinates, navigating the surgical device to the target using the displayed ultrasound image and the virtual image, and treating the target based on the treatment plan.
In the method, automatically segmenting the plurality of images includes selecting a seed point and creating a region of interest around the seed point. A first plurality of pixels in the region of interest is compared to a predetermined threshold. A second plurality of pixels is selected from the first plurality of pixels, wherein the second plurality of pixels are connected to the seed point and are less than the predetermined threshold. A geometric filter is applied to the second plurality of pixels.
The planning and navigation method also includes determining if the second plurality of pixels forms a predetermined object. If the second plurality of pixels does not form a predetermined object, the predetermined threshold is increased and the steps of comparing a first plurality of pixels, selecting a second plurality of pixels, applying a geometric filter, and determining if the second plurality of pixels forms a predetermined object are repeated.
In another aspect, automatically determining a treatment plan includes performing a volumetric analysis on the target area, selecting a surgical device, and calculating an energy level and treatment duration based on the target area and the selected surgical device. The plurality of images are rendered and the target area and the treatment plan are displayed.
In addition, the method also includes automatically segmenting at least one vessel or at least one organ. The treatment plan is adjusted based on a proximity of the at least one vessel or organ to the target and the adjusted treatment plan is displayed.
The fiducial pattern includes a plurality of first unique identifiers disposed in a region and a plurality of second unique identifiers. The method also includes finding the plurality of first unique identifiers by applying a first threshold to the fiducial image, performing a connected component analysis, applying a geometric filter to determine the weighted centroids of the plurality of first unique identifiers, and storing the weighted centroids of the plurality of first unique identifiers. In addition, the method includes finding the plurality of second unique identifiers by inverting the fiducial image, applying a second threshold to the inverted fiducial image, performing a connected component analysis, applying a geometric filter to determine the weighted centroids of the plurality of second unique identifiers and to determine the region having the plurality of first unique identifiers, and storing the weighted centroids of the plurality of second identifiers and the region having the plurality of first unique identifiers. The first threshold and second threshold are dynamic thresholds.
Correspondence between the fiducial image and the model image is found by selecting a plurality of first unique identifiers from the fiducial image, arranging the plurality of first unique identifiers in clockwise order, arranging a plurality of model fiducials in clockwise order, and computing a planar homography. The plurality of model fiducials are transformed into image coordinates using the computed planar homography. A model fiducial from the plurality of model fiducials is found that matches the fiducial image and the residual error is computed.
Selecting a plurality of first unique identifiers from the fiducial image is done by selecting the plurality of first unique identifiers, selecting the region having the plurality of first unique identifiers, counting the number of first unique identifiers in the selected region, and comparing the number of first unique identifiers in the selected region to a predetermined number. If the number of first unique identifiers in the selected region equals the predetermined number, the method proceeds to arranging the plurality of first unique identifiers in clockwise order. If the number of first unique identifiers in the selected region does not equal the predetermined number, a new region is selected and the number of first unique identifiers in the new region is counted.
In yet another aspect of the present disclosure, a planning system is provided. The planning system includes a memory configured to store a plurality of images. The planning system also includes a controller configured to render the plurality of images in three dimensions, automatically segment the plurality of images to demarcate a target area, and automatically determine a treatment plan based on the target area. A display is provided to display the rendered plurality of images and the target area.
In the planning system, the controller performs a volumetric analysis to determine a treatment plan. The planning system may also include an input means configured to adjust the treatment plan. The display provides a graphical user interface.
The controller may segment at least one vessel and adjust the treatment plan based on the proximity of the vessel to the target. The controller may segment at least one organ and adjust the treatment plan based on a position of the target in relation to the organ.
In yet another aspect of the present disclosure, a navigation system is provided. The navigation system includes an ultrasound device having a fiducial pattern disposed thereon configured to obtain an ultrasound image in a scan plane and a surgical instrument having an image capture device configured to capture a fiducial image of the fiducial pattern. A controller is configured to receive the ultrasound image and the fiducial image, wherein the controller determines a position of the surgical instrument in relation to the scan plane based on the fiducial image and a display is configured to display the ultrasound image and a virtual image of the surgical instrument based on the position of the surgical instrument in relation to the scan plane.
In the navigation system, the fiducial pattern is affixed to a known location on the ultrasound device and the image capture device is affixed to a known location on the surgical instrument. The fiducial pattern has a plurality of markings of known characteristics and respective relative positions that reside within a known topology. The controller corresponds the fiducial image to a model image, estimates a camera pose, and transforms the surgical instrument to model coordinates. The controller also corrects the fiducial image for lens distortion. Additionally, the controller can recognize a topology within the fiducial marker where the topology references two or more independent unique identifiers located in known positions on a single pattern on a marker.
In yet another aspect of the present disclosure, a fiducial tracking system is provided. The fiducial tracking system includes a first device having a fiducial pattern disposed thereon and a second device having an image capture device disposed thereon. The image capturing device is configured to obtain a fiducial image of the fiducial pattern. A controller is also provided that receives the fiducial image, corrects the fiducial image for lens distortion, finds correspondence between the fiducial image and a model image, estimates a camera pose, and transforms a position of the surgical device to model coordinates.
In the fiducial tracking system, the fiducial pattern includes a plurality of first unique identifiers disposed in a region and a plurality of second unique identifiers.
The controller finds the plurality of first unique identifiers by applying a first threshold to the fiducial image, performing a connected component analysis, applying a geometric filter to determine the weighted centroids of the plurality of first unique identifiers, and storing the weighted centroids of the plurality of first unique identifiers.
The controller finds the plurality of second unique identifiers by inverting the fiducial image, applying a second threshold to the inverted fiducial image, performing a connected component analysis, applying a geometric filter to determine the weighted centroids of the plurality of second unique identifiers and to determine the region having the plurality of first unique identifiers, and storing the weighted centroids of the plurality of second identifiers and the region having the plurality of first unique identifiers.
The controller may find correspondence between the fiducial image and the model image by selecting a plurality of first unique identifiers from the fiducial image, arranging the plurality of first unique identifiers in clockwise order, arranging a plurality of model fiducials in clockwise order, computing a planar homography, transforming the plurality of model fiducials into image coordinates using the computed planar homography, finding a model fiducial from the plurality of model fiducials that matches the fiducial image, and computing the residual error.
The controller may also select a plurality of first unique identifiers from the fiducial image by selecting the plurality of first unique identifiers, selecting the region having the plurality of first unique identifiers, counting the number of first unique identifiers in the selected region, and comparing the number of first unique identifiers in the selected region to a predetermined number. If the number of first unique identifiers in the selected region equals the predetermined number, the method proceeds to arranging the plurality of first unique identifiers in clockwise order. If the number of first unique identifiers in the selected region does not equal the predetermined number, a new region is selected and the number of first unique identifiers in the new region is counted. The first threshold or the second threshold is a dynamic threshold.
In yet another aspect of the present disclosure, a planning and navigation system is provided. The planning system includes a memory configured to store a plurality of images and a first controller configured to render the plurality of images in three dimensions, automatically segment the plurality of images to demarcate a target area, and automatically determine a treatment plan based on the target area. The navigation system includes an ultrasound device having a fiducial pattern disposed thereon and configured to obtain an ultrasound image in a scan plane, a surgical device having an image capture device configured to capture a fiducial image of the fiducial pattern, and a second controller configured to receive the ultrasound image and the fiducial image, wherein the controller determines a position of the surgical device in relation to the scan plane based on the fiducial image. The planning and navigation system also includes a third controller configured to receive the rendered plurality of images, the target area, the treatment plan, the ultrasound image, and the position of the surgical device in relation to the scan plane, and a display configured to display a first display having the rendered plurality of images, the target area, and the treatment plan, and configured to display a second display having the ultrasound image, a virtual image of the surgical device based on the position of the surgical device in relation to the scan plane, and the treatment plan.
In the planning and navigation system, the first display and the second display are displayed on a single screen and may be displayed simultaneously or a user can switch between the first display and the second display. Alternatively, the display may have two screens and the first display is displayed on a first screen and the second display is displayed on a second screen. The display may provide a graphical user interface.
The first controller performs a volumetric analysis to determine a treatment plan and an input means may be provided to adjust the treatment plan. The first controller may segment at least one vessel and adjust the treatment plan based on the proximity of the vessel to the target. The first controller may segment at least one organ and adjust the treatment plan based on a position of the target in relation to the organ.
In the planning and navigation system, the fiducial pattern is affixed to a known location on the ultrasound device and the image capture device is affixed to a known location on the surgical device. The fiducial pattern has a plurality of markings of known characteristics and relative positions that reside within a known topology.
The second controller corresponds the fiducial image to a model image, estimates a camera pose, and transforms the surgical device to model coordinates. The second controller also corrects the fiducial image for lens distortion. Additionally, the second controller can recognize a topology within the fiducial marker where the topology references two or more independent unique identifiers located in known positions on a single pattern on a marker.
In yet another aspect of the present disclosure, a planning and navigation system is provided. The planning system includes a memory configured to store a plurality of images and a first controller configured to render the plurality of images in three dimensions, automatically segment the plurality of images to demarcate a target area, and automatically determine a treatment plan based on the target area. The navigation system includes an ultrasound device having a fiducial pattern disposed thereon and configured to obtain an ultrasound image in a scan plane, an ablation needle having an image capture device configured to capture a fiducial image of the fiducial pattern, and a second controller configured to receive the ultrasound image and the fiducial image, wherein the controller determines a position of the ablation needle in relation to the scan plane based on the fiducial image. The planning and navigation system also includes a third controller configured to receive the rendered plurality of images, the target area, the treatment plan, the ultrasound image, and the position of the ablation needle in relation to the scan plane and a display configured to display a first display having the rendered plurality of images, the target area, and the treatment plan, and configured to display a second display having the ultrasound image, a virtual image of the ablation needle based on the position of the ablation needle in relation to the scan plane, and the treatment plan.
In the planning and navigation system, the first display and the second display are displayed on a single screen and may be displayed simultaneously, or, a user can switch between the first display and the second display. Alternatively, the display may have two screens and the first display is displayed on a first screen and the second display is displayed on a second screen. The display may provide a graphical user interface.
The first controller performs a volumetric analysis to determine a treatment plan and an input means may be provided to adjust the treatment plan. The first controller may segment at least one vessel and adjust the treatment plan based on the proximity of the vessel to the target. The first controller may segment at least one organ and adjust the treatment plan based on a position of the target in relation to the organ.
In the planning and navigation system, the fiducial pattern is affixed to a known location on the ultrasound device and the image capture device is affixed to a known location on the ablation needle. The fiducial pattern has a plurality of markings of known characteristics and relative positions that reside within a known topology.
The second controller corresponds the fiducial image to a model image, estimates a camera pose, and transforms the ablation needle to model coordinates. The second controller also corrects the fiducial image for lens distortion. Additionally, the second controller can recognize a topology within the fiducial marker where the topology references two or more independent unique identifiers located in known positions on a single pattern on a marker.
In yet another aspect of the present disclosure, an ablation planning and navigation system is provided. The planning system includes a memory configured to store a plurality of images and a first controller configured to render the plurality of images in three dimensions, automatically segment the plurality of images to demarcate a target area, and automatically determine a treatment plan based on the target area. The navigation system includes an ultrasound device having a fiducial pattern disposed thereon and configured to obtain an ultrasound image in a scan plane, an ablation needle having an image capture device configured to capture a fiducial image of the fiducial pattern, and a second controller configured to receive the ultrasound image and the fiducial image, wherein the controller determines a position of the ablation needle in relation to the scan plane based on the fiducial image. The ablation planning and navigation system also includes a third controller configured to receive the rendered plurality of images, the target area, the treatment plan, the ultrasound image, and the position of the ablation needle in relation to the scan plane and a display configured to display a first display having the rendered plurality of images, the target area, and the treatment plan and configured to display a second display having the ultrasound image, a virtual image of the ablation needle based on the position of the ablation needle in relation to the scan plane, and the treatment plan.
In the ablation planning and navigation system, the first display and the second display are displayed on a single screen and may be displayed simultaneously or a user can switch between the first display and the second display. Alternatively, the display may have two screens and the first display is displayed on a first screen and the second display is displayed on a second screen. The display may provide a graphical user interface.
The first controller performs a volumetric analysis to determine a treatment plan and an input means may be provided to adjust the treatment plan. The first controller may also segment at least one vessel and adjust the treatment plan based on the proximity of the vessel to the target, or, the first controller may segment at least one organ and adjust the treatment plan based on a position of the target in relation to the organ.
In the ablation planning and navigation system, the fiducial pattern is affixed to a known location on the ultrasound device and the image capture device is affixed to a known location on the ablation needle. The fiducial pattern has a plurality of markings of known characteristics and relative positions that reside within a known topology.
The second controller corresponds the fiducial image to a model image, estimates a camera pose, and transforms the ablation needle to model coordinates. The second controller also corrects the fiducial image for lens distortion. Additionally, the second controller can recognize a topology within the fiducial marker where the topology references two or more independent unique identifiers located in known positions on a single pattern on a marker.
BRIEF DESCRIPTION OF THE DRAWINGS

The above and other aspects, features, and advantages of the present disclosure will become more apparent in light of the following detailed description when taken in conjunction with the accompanying drawings in which:
FIG. 1 is a system block diagram of a planning and navigation system according to an embodiment of the present disclosure;
FIGS. 2A and 2B are schematic diagrams of an ablation needle according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a radiation pattern of the ablation needle of FIGS. 2A and 2B;
FIG. 4 is a schematic diagram of a planning system according to an embodiment of the present disclosure;
FIG. 5 is a flowchart depicting overall operation of the planning system according to an embodiment of the present disclosure;
FIGS. 6 and 7 are schematic diagrams of graphical user interfaces used in the planning system in accordance with an embodiment of the present disclosure;
FIG. 8 is a flowchart depicting an algorithm for image segmentation and inverse planning according to an embodiment of the present disclosure;
FIG. 9 is a flowchart depicting an algorithm for segmenting a nodule according to an embodiment of the present disclosure;
FIGS. 10A-10B are graphical representations of relationships between ablation zones and energy delivery;
FIG. 11A is a schematic diagram of a relationship between a vessel and a target according to another embodiment of the present disclosure;
FIG. 11B is a graphical representation of an alternate dosing curve according to another embodiment of the present disclosure;
FIGS. 12A-12C are schematic diagrams of a planning method according to another embodiment of the present disclosure;
FIG. 13 is a schematic diagram of a navigation system according to an embodiment of the present disclosure;
FIGS. 14A and 14B are schematic diagrams of graphical user interfaces used in the navigation system of FIG. 13;
FIG. 15 is a flowchart depicting a fiducial tracking algorithm according to an embodiment of the present disclosure;
FIGS. 16A and 16B depict an image taken by a camera and a corrected version of the image, respectively;
FIG. 17 is a flowchart depicting an algorithm for finding white circles according to an embodiment of the present disclosure;
FIGS. 18A-18C depict intermediate image results of the algorithm depicted in FIG. 17;
FIG. 19 is a flowchart depicting an algorithm for finding black circles and black regions according to an embodiment of the present disclosure;
FIGS. 20A-20D depict intermediate image results of the algorithm depicted in FIG. 19;
FIG. 21A is a flowchart depicting a correspondence algorithm according to an embodiment of the present disclosure;
FIG. 21B is a flowchart depicting an algorithm for applying a topology constraint according to an embodiment of the present disclosure;
FIGS. 22A-22D are schematic diagrams of fiducial models used in the algorithm of FIG. 21A;
FIG. 23 is a schematic diagram of an integrated planning and navigation system according to another embodiment of the present disclosure;
FIG. 24 is a schematic diagram of an integrated planning and navigation system according to yet another embodiment of the present disclosure;
FIGS. 25A and 25B are schematic diagrams of a navigation system suitable for use with the system of FIG. 24; and
FIGS. 26-29 are schematic diagrams of graphical user interfaces used in the system of FIG. 24 in accordance with various embodiments of the present disclosure.
DETAILED DESCRIPTION

Particular embodiments of the present disclosure are described hereinbelow with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the disclosure and may be embodied in various forms. Well-known functions or constructions are not described in detail to avoid obscuring the present disclosure in unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure. Like reference numerals may refer to similar or identical elements throughout the description of the figures.
Turning to the figures, FIG. 1 depicts an overview of a planning and navigation system according to various embodiments of the present disclosure. As shown in FIG. 1, pre-operative images 15 of a patient “P” are captured via an image capture device 10. Image capture device 10 may include, but is not limited to, an MRI device, a CAT device, or an ultrasound device that obtains two-dimensional (2D) or three-dimensional (3D) images. Image capture device 10 stores pre-operative images 15, which are transferred to planning system 100. Pre-operative images 15 may be transferred to planning system 100 by uploading images 15 to a network, transmitting images 15 to planning system 100 via a wireless communication means, and/or storing images 15 on a removable memory that is inserted into planning system 100. In an embodiment of the present disclosure, pre-operative images 15 are stored in a DICOM format. In some embodiments, image capture device 10 and planning system 100 may be incorporated into a standalone unit.
Planning system 100, which is described in more detail below, receives the pre-operative images 15 and determines the size of a target. Based on the target size and a selected surgical device, planning system 100 determines settings that include an energy level and a treatment duration to effect treatment of the target.
Navigation system 200, which is described in more detail below, utilizes a fiducial pattern disposed on a medical imaging device (e.g., an ultrasound imaging device) to determine an intracorporeal position of a surgical device. The intracorporeal position of the surgical device is displayed on a display device in relation to an image obtained by the medical imaging device. Once the surgical device is positioned in the vicinity of the target, the user effects treatment of the target based on the treatment zone settings determined by the planning system.
In some embodiments, a user determines the treatment zone settings using planning system 100 and utilizes the treatment zone settings in effecting treatment using navigation system 200. In other embodiments, planning system 100 transmits the treatment zone settings to navigation system 200 to automatically effect treatment of the target when the surgical device is in the vicinity of the target. Additionally, in some embodiments, planning system 100 and navigation system 200 are combined into a single standalone system. For instance, a single processor and a single user interface may be used for planning system 100 and navigation system 200, a single processor and multiple user interfaces may be used for planning system 100 and navigation system 200, or multiple processors and a single user interface may be used for planning system 100 and navigation system 200.
FIG. 2A shows an example of a surgical device in accordance with an embodiment of the present disclosure. Specifically, FIG. 2A shows a side view of a variation on an ablation needle 60 with an electrical choke 72, and FIG. 2B shows a cross-sectional side view 2B-2B from FIG. 2A. Ablation needle 60 shows radiating portion 62 electrically attached via feedline (or shaft) 64 to a proximally located coupler 66. Radiating portion 62 is shown with sealant layer 68 coated over section 62. Electrical choke 72 is shown partially disposed over a distal section of feedline 64 to form electrical choke portion 70, which is located proximally of radiating portion 62.
To improve the energy focus of the ablation needle 60, the electrical choke 72 is used to contain field propagation or radiation pattern to the distal end of the ablation needle 60. Generally, the choke 72 is disposed on the ablation needle 60 proximally of the radiating section. The choke 72 is placed over a dielectric material that is disposed over the ablation needle 60. The choke 72 is a conductive layer that may be covered by a tubing or coating to force the conductive layer to conform to the underlying ablation needle 60, thereby forming an electrical connection (or short) more distally and closer to the radiating portion 62. The electrical connection between the choke 72 and the underlying ablation needle 60 may also be achieved by other connection methods such as soldering, welding, brazing, crimping, use of conductive adhesives, etc. Ablation needle 60 is electrically coupled to a generator that provides ablation needle 60 with electrosurgical energy.
FIG. 3 is a cross-sectional view of an embodiment of the ablation needle 60 shown with a diagrammatic representation of an emitted radiation pattern in accordance with the present disclosure.
FIGS. 4 to 12C describe the operation of planning system 100 in accordance with various embodiments of the present disclosure. Turning to FIG. 4, planning system 100 includes a receiver 102, memory 104, controller 106, input device 108 (e.g., mouse, keyboard, touchpad, touchscreen, etc.), and a display 110. During operation of planning system 100, receiver 102 receives pre-operative images 15 in DICOM format and stores the images in memory 104. Controller 106 then processes images 15, as described in more detail below, and displays the processed images on display 110. Using input device 108, a user can navigate through images 15, select one of images 15, select a seed point on the selected image, select an ablation needle, adjust the energy level, and adjust the treatment duration. The inputs provided by input device 108 are displayed on display 110.
FIG. 5 depicts a general overview of an algorithm used by planning system 100 to determine a treatment plan. As shown in FIG. 5, in step 120, images in a DICOM format are acquired via a wireless connection, a network, or by downloading the images from a removable storage medium, and are stored in memory 104. Controller 106 then performs an automatic three-dimensional (3D) rendering of images 15 and displays a 3D rendered image (as shown in FIG. 6) in step 122. In step 124, image segmentation is performed to demarcate specific areas of interest and calculate volumetrics of the areas of interest. As described below, segmentation can be user driven or automatic. In step 126, the controller performs an inverse planning operation, which will also be described in more detail below, to determine a treatment algorithm to treat the areas of interest. The treatment algorithm may include selection of a surgical device, energy level, and/or duration of treatment. Alternatively, a user can select the surgical device, energy level, and/or duration of treatment to meet the intentions of the treating physician, which would include a “margin value” in order to treat the target plus a margin of the surrounding tissue.
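A short sketch of the acquisition step 120 is given below using SimpleITK, which reads a DICOM series from disk into a single 3D volume suitable for rendering; the directory path and the library choice are assumptions made for illustration, not part of the disclosed system.

```python
import SimpleITK as sitk

def load_dicom_series(dicom_dir):
    """Read a DICOM series into a single 3D volume (step 120)."""
    reader = sitk.ImageSeriesReader()
    file_names = reader.GetGDCMSeriesFileNames(dicom_dir)  # sorted slice file names
    reader.SetFileNames(file_names)
    return reader.Execute()                                 # 3D image ready for 3D rendering

# Hypothetical usage:
volume = load_dicom_series("/path/to/patient/ct_series")
print(volume.GetSize(), volume.GetSpacing())
```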
FIGS. 6 and 7 depict graphical user interfaces (GUIs) that may be displayed on display 110. As shown in FIGS. 6 and 7, each GUI is divided into a number of regions (e.g., regions 132, 134, and 136) for displaying the rendered DICOM images. For example, region 132 shows an image of patient “P” along a transverse cross-section and region 134 shows an image of patient “P” along a coronal cross-section. Region 136 depicts a 3D rendering of patient “P”. In other embodiments, a sagittal cross-section may also be displayed on the GUI. The GUI allows a user to select different ablation needles in drop-down menu 131. The GUI also allows a user to adjust the power and time settings in regions 133 and 135, respectively. Additionally, the GUI has a number of additional tools in region 137 that include, but are not limited to, a planning tool that initiates the selection of a seed point, a contrast tool, a zoom tool, a drag tool, a scroll tool for scrolling through DICOM images, and a 3D Render tool for displaying the volume rendering of the DICOM dataset.
The flowchart of FIG. 8 depicts the basic algorithm for performing the image segmentation step 124 and the inverse planning step 126. As shown in FIG. 8, a user selects a seed point in step 140 (see FIG. 6, where a cross hair is centered on the target “T” in regions 132 and 134). After the seed point is manually selected, planning system 100 segments a nodule to demarcate a volume of interest in step 142. In other embodiments, the seed point may be automatically detected based on the intensity values of the pixels.
FIG. 9 depicts a flowchart of an algorithm used to segment a nodule. As shown in FIG. 9, once a seed point is identified in step 151, the algorithm creates a Region of Interest (ROI) in step 152. For example, the ROI may encompass a volume of 4 cm³. In step 153, a connected threshold filter applies a threshold and finds all the pixels connected to the seed point in the DICOM images stored in memory 104. For example, the threshold values may start at −400 Hounsfield units (HU) and end at 100 HU when segmenting lung nodules.
In step 154, controller 106 applies a geometric filter to compute the size and shape of an object. The geometric filter enables the measurement of geometric features of all objects in a labeled volume. This labeled volume can represent, for instance, a medical image segmented into different anatomical structures. The measurement of various geometric features of these objects can provide additional insight into the image.
The algorithm determines if a predetermined shape is detected in step 155. If a predetermined shape is not detected, the algorithm proceeds to step 156, where the threshold is increased by a predetermined value. The algorithm repeats steps 153 to 155 until a predetermined object is detected.
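A minimal sketch of this loop is given below, assuming the DICOM series has been loaded as a NumPy array of HU values; the 50 HU step size and the crude sphericity test standing in for the geometric filter are illustrative assumptions, not parameters taken from the disclosure.

```python
import numpy as np
from scipy import ndimage

def segment_nodule(volume_hu, seed, start_hu=-400, stop_hu=100, step_hu=50):
    """Region growing for steps 153-156: threshold, keep voxels connected to the
    seed, apply a geometric filter, and raise the threshold until the object is
    found. (The 4 cm^3 ROI crop around the seed is omitted here for brevity.)"""
    threshold = start_hu
    while threshold <= stop_hu:
        # Step 153: voxels at or below the current threshold, connected to the seed.
        binary = volume_hu <= threshold
        labels, _ = ndimage.label(binary)
        seed_label = labels[seed]
        if seed_label != 0:
            region = labels == seed_label
            # Steps 154-155: geometric filter / predetermined-shape test.
            if _looks_spherical(region):
                return region
        threshold += step_hu      # Step 156: increase the threshold and repeat.
    return None

def _looks_spherical(region, fill_fraction=0.5):
    """Crude sphericity check: the region should fill at least `fill_fraction`
    of the sphere circumscribing its largest extent (assumes isotropic voxels)."""
    coords = np.argwhere(region)
    diameter = (coords.max(axis=0) - coords.min(axis=0) + 1).max()
    sphere_volume = np.pi / 6.0 * diameter ** 3
    return region.sum() >= fill_fraction * sphere_volume
```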
Once a predetermined object is detected, the algorithm ends in step 157 and planning system 100 proceeds to step 144 to perform volumetric analysis. During the volumetric analysis, the following properties of the spherical object may be calculated by controller 106: minimum diameter; maximum diameter; average diameter; volume; sphericity; minimum density; maximum density; and average density. The calculated properties may be displayed on display 110 as shown in region 139 of FIG. 7. The volumetric analysis may use a geometric filter to determine a minimum diameter, a maximum diameter, volume, elongation, surface area, and/or sphericity. An image intensity statistics filter may also be used in conjunction with the geometric filter in step 144. The image intensity statistics filter calculates a minimum density, maximum density, and average density.
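The volumetric analysis of step 144 can be expressed along the following lines; the bounding-box diameter and circumscribed-sphere sphericity definitions are reasonable stand-ins for the geometric and image intensity statistics filters, and the axis ordering of spacing_mm is assumed to match the mask.

```python
import numpy as np

def volumetric_analysis(region_mask, volume_hu, spacing_mm):
    """Report the object properties listed for step 144."""
    coords = np.argwhere(region_mask)
    extent_mm = (coords.max(axis=0) - coords.min(axis=0) + 1) * np.asarray(spacing_mm)
    voxel_vol_mm3 = float(np.prod(spacing_mm))
    vol_mm3 = region_mask.sum() * voxel_vol_mm3

    # Sphericity here: ratio of the object's volume to the volume of the sphere
    # circumscribing its largest extent (1.0 for a perfect sphere).
    max_d = extent_mm.max()
    sphericity = vol_mm3 / (np.pi / 6.0 * max_d ** 3)

    densities = volume_hu[region_mask]
    return {
        "min_diameter_mm": extent_mm.min(),
        "max_diameter_mm": max_d,
        "avg_diameter_mm": extent_mm.mean(),
        "volume_cm3": vol_mm3 / 1000.0,
        "sphericity": sphericity,
        "min_density_hu": densities.min(),
        "max_density_hu": densities.max(),
        "avg_density_hu": densities.mean(),
    }
```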
In step 146, power and time settings are calculated for a demarcated target. FIGS. 10A and 10B depict various graphs of the relationship between energy deposited into tissue and the resulting ablation zone for a given time period. This relationship allows for inverse planning by considering the dimensions and characteristics of a target tissue (i.e., tumors, fibroids, etc.) and the energy dose/antenna design of a specific ablation needle. Table 1 below shows an example of the relationship between ablation volume, power, and time for an ablation needle.
| TABLE 1 |
| Ablation Volume (cm3) | Power (W) | Time (s) |
| 6 | 140 | 1 |
| 22 | 140 | 3 |
| 41 | 140 | 5 |
| 31 | 110 | 5 |
| 23 | 80 | 5 |
Using the values in Table 1, a linear equation can be derived to compute optimal power and time settings. For example, applying a linear regression analysis to Table 1 yields the following equation:
Volume = 0.292381*Power + 8.685714*Time − 44.0762   (1)
which can be written as
Power = (Volume − 8.685714*Time + 44.0762)/0.292381.   (2)
The desired volume can be calculated using the maximum diameter from the volumetric analysis plus a 1 centimeter margin as follows:
DesiredVolume = 4/3*pi*DesiredRadius^3   (3)
where the desired radius is calculated as follows:
DesiredRadius = MaximumNoduleDiameter/2 + Margin.   (4)
Substituting the desired volume into equation (1) or (2) leaves two unknowns: power and time. Using equation (2), controller 106 can solve for power by substituting values for time. Controller 106 chooses the smallest value for time that maintains power below 70 W, or some other predetermined value, so that the user can perform the procedure as quickly as possible while keeping power in a safe range.
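The calculation of equations (1) through (4) can be illustrated with the short Python sketch below; the Table 1 values and the 70 W limit come from the text, while the time grid searched (1 s to 600 s in 1 s increments) is an assumption made for the sketch.

```python
# Illustrative inverse-planning calculation (step 146, equations (1)-(4)).
import numpy as np

# Table 1: (ablation volume cm^3, power W, time s)
table = np.array([[6, 140, 1], [22, 140, 3], [41, 140, 5], [31, 110, 5], [23, 80, 5]])
A = np.column_stack([table[:, 1], table[:, 2], np.ones(len(table))])
a_power, a_time, intercept = np.linalg.lstsq(A, table[:, 0], rcond=None)[0]
# a_power, a_time, intercept are approximately 0.292381, 8.685714, -44.0762

def plan_dose(max_nodule_diameter_cm, margin_cm=1.0, power_limit_w=70.0):
    desired_radius = max_nodule_diameter_cm / 2.0 + margin_cm          # equation (4)
    desired_volume = 4.0 / 3.0 * np.pi * desired_radius ** 3           # equation (3)
    for time_s in range(1, 601):                                       # smallest time first
        power_w = (desired_volume - a_time * time_s - intercept) / a_power  # equation (2)
        if 0 < power_w <= power_limit_w:
            return power_w, time_s
    return None                                                        # no feasible setting
```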
Once the power and time are calculated in step 146, the power and time are displayed on display 110 as shown in FIG. 7 (see 133 and 135). A user can adjust the calculated power and/or time using controls 133 and 135, respectively, to adjust the treatment zone 138a and/or margin 138b.
Memory 104 and/or controller 106 may store a number of equations that correspond to different surgical devices. When a user selects a different surgical device in drop-down menu 131, controller 106 can perform the same analysis described above to determine the smallest value for time that keeps the power below 70 W or some other predetermined value.
Although the procedure described above uses a single seed point to determine a predetermined object, some targets may have an irregular shape that cannot be treated by the predetermined treatment zone without causing damage to other tissue. In such instances, multiple seed points may be used to create an irregularly shaped treatment plan using a single surgical device that is repositioned in a number of places, or using multiple surgical devices concurrently, to treat an irregularly shaped region.
In other embodiments, memory 104 and/or controller 106 may store a catalog of surgical devices and treatment zone performance, which includes the power, time, number of instruments, and spacing of instruments required to achieve treatment zones ex vivo or in vivo. Based on the results of the image segmentation and volumetric analysis, the controller may automatically select device types, numbers of devices, spacing of multiple devices, and/or power and time settings for each device to treat the ROI. Alternatively, a user can manually select device types, numbers of devices, spacing of multiple devices, and power and/or time settings for each device to treat the ROI using the GUI to generate a treatment plan.
In another embodiment according to the present disclosure, planning system 100 may also segment organs and other vital structures in addition to targets. Segmentation of organs and other structures, such as vessels, is used to provide a more advanced treatment plan. As described above with regard to FIG. 10, treatment zones correlate to energy delivery in a regular fashion. Further, it is known that vessels greater than three (3) millimeters in diameter may negatively affect treatment zone formation. Segmentation of a vessel would allow the interaction between the vessel and the target to be estimated, including the vessel diameter (D1) and the distance (D2) (see FIG. 11A) between the vessel and a proposed target. This interaction may be estimated manually by a user or automatically by controller 106. Using the vessel diameter D1 and the distance D2, planning system 100 may automatically suggest an alternate dose curve to be used for treatment purposes as shown in FIG. 11B. Alternatively, controller 106 may provide a recommendation to the user via display 110 to move the treatment zone. Additionally, a different treatment zone projection could be displayed on display 110. Further, in the compute power and time settings step 146 of FIG. 8, the controller could leverage different curves depending on the vessel's diameter and distance to the target area.
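A hypothetical sketch of how such a curve selection might be expressed is shown below; the 3 mm vessel diameter is taken from the text, while the distance cutoff and the curve labels are assumptions made purely for illustration.

```python
# Hypothetical dose-curve selection from vessel diameter D1 and distance D2 (FIG. 11A/11B).
def select_dose_curve(vessel_diameter_d1_mm, vessel_distance_d2_mm, near_distance_mm=10.0):
    # Assumed rule: only vessels larger than 3 mm that lie close to the target
    # trigger the alternate curve; the 10 mm proximity cutoff is illustrative.
    if vessel_diameter_d1_mm > 3.0 and vessel_distance_d2_mm < near_distance_mm:
        return "alternate dose curve"   # e.g., the adjusted curve of FIG. 11B
    return "standard dose curve"        # e.g., the nominal curve of FIG. 10
```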
FIGS. 12A-12C depict advanced treatment planning using organ segmentation. Segmentation of an organ allows for at least two advantages in planning a course of treatment. In a first instance, minimally invasive treatments are often chosen to be organ sparing. By segmenting the organ, controller 106 can calculate the organ volume 160 and subtract the determined ablation zone 162 to determine the volume of organ being spared 164, as shown in FIG. 12A. If controller 106 determines that the volume of organ being spared is too low, controller 106 may alert a user that an alternate treatment plan is needed, or it may suggest an alternate treatment plan.
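A minimal sketch of this organ-sparing check follows; the minimum spared fraction used as the alert criterion is an assumed parameter, not a value from the disclosure.

```python
# Illustrative organ-sparing check of FIG. 12A.
def organ_sparing_check(organ_volume_160, ablation_zone_162, min_spared_fraction=0.7):
    # Spared volume 164 = organ volume 160 minus the determined ablation zone 162.
    spared_volume_164 = organ_volume_160 - ablation_zone_162
    needs_alternate_plan = (spared_volume_164 / organ_volume_160) < min_spared_fraction
    return spared_volume_164, needs_alternate_plan
```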
FIGS. 12B and 12C depict a treatment plan for a target "T" located on the surface of an organ. Conventionally, treatment near an organ surface is often avoided, or additional techniques may be required to separate the organ from other organs before treatment can be performed. In another embodiment in accordance with the present disclosure, after the organ is segmented, the position of a target "T" can also be determined. If the treatment zone 162 in the treatment plan projects outside the surface of the organ and the target "T" is located on the surface, controller 106 may alert the user that treatment zone 162 may affect other organs and/or structures in the vicinity of the target "T" and that the treatment plan needs to be altered. In another embodiment, controller 106 may automatically make recommendations to the user indicating the surgical device, energy level, and duration of treatment. Controller 106 may also suggest a smaller treatment zone 162 as shown in FIG. 12B, or it may suggest moving the treatment zone 162 as shown in FIG. 12C.
In other embodiments, after targets, tissues, organs, and other structures are segmented, known tissue properties can be attributed to these structures. Such tissue properties include, but are not limited to, electrical conductivity and permittivity across frequency, thermal conductivity, thermal convection coefficients, and so forth. The planning algorithm of FIG. 8 may use the tissue properties attributed to the segmented tumors, tissues, organs, and other structures to solve the Pennes bioheat equation in order to calculate a dose required to ablate a selected target. Keys to successful implementation of this more comprehensive solution using the bioheat equation include: utilizing known tissue properties at steady state to predict an initial spatial temperature profile, utilizing tissue properties as temperature rises to adjust spatial properties in accordance with temperature elevation, and utilizing tissue properties at the liquid-gas phase transition.
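For reference, one commonly used form of the Pennes bioheat equation, written here in standard notation (the particular property values assigned to each segmented structure are application dependent), is

$$\rho_t c_t \frac{\partial T}{\partial t} = \nabla \cdot \left(k_t \nabla T\right) + \omega_b \rho_b c_b \left(T_a - T\right) + Q_m + Q_{ext},$$

where rho_t, c_t, and k_t are the tissue density, specific heat, and thermal conductivity, omega_b is the blood perfusion rate, rho_b and c_b are the blood density and specific heat, T_a is the arterial blood temperature, Q_m is the metabolic heat generation, and Q_ext is the externally applied (e.g., microwave) power deposition.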
Turning to FIG. 13, a navigation system in accordance with an embodiment of the present disclosure is shown generally as 200. Generally, navigation system 200 incorporates a reference patch or fiducial patch 204 that is affixed to an ultrasound transducer 202. Fiducial patch 204 may be printed on ultrasound transducer 202, attached to ultrasound transducer 202 via an adhesive, or removably coupled to ultrasound transducer 202. In some embodiments, the fiducial patch is disposed on a support structure that is configured to be removably affixed, e.g., "clipped onto", the housing of an ultrasound transducer. Ultrasound transducer 202 is coupled to an ultrasound generator 210 that generates acoustic waves. Ultrasound transducer 202 and ultrasound generator 210 may be incorporated into a standalone unit. Ultrasound transducer 202 emits the acoustic waves toward patient "P". The acoustic waves reflect off various structures in patient "P" and are received by ultrasound transducer 202. Ultrasound transducer 202 transmits the reflected acoustic waves to ultrasound generator 210, which converts the reflected acoustic waves into a two-dimensional (2D) image in real time. The 2D image is transmitted to a controller 212. Controller 212 processes the 2D image and displays the 2D image as image 218, including target 220, on display 214. Image 218 is a real-time representation of scan plane "S", which may include target "T".
The navigation system also incorporates a camera 208 affixed to a surgical device 206. The camera 208 captures an image of fiducial patch 204 in real time in order to determine the position of the surgical device 206 in relation to the scan plane "S". In particular, fiducial patch 204 has a defined spatial relationship to scan plane "S". This defined spatial relationship is stored in controller 212. Camera 208 also has a known spatial relationship to surgical device 206 that is stored in controller 212. In order to determine the spatial relationship between surgical device 206 and scan plane "S", camera 208 captures an image of fiducial patch 204 and transmits the image to controller 212. Using the image of the fiducial patch 204, controller 212 can calculate the spatial relationship between the surgical device 206 and the scan plane "S".
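By way of a non-limiting illustration, this chain of relationships can be expressed as a composition of rigid transforms; the 4x4 homogeneous-matrix representation and the frame naming below are assumptions made for the sketch.

```python
# Minimal sketch: relating surgical device 206 to scan plane "S" by composing transforms.
# T_a_b denotes the pose of frame b expressed in frame a (assumed convention).
import numpy as np

def device_to_scan_plane(T_plane_patch, T_patch_camera, T_camera_device):
    """T_plane_patch: fixed fiducial-patch-to-scan-plane relationship stored in controller 212.
    T_patch_camera: camera pose relative to the patch, estimated from the fiducial image.
    T_camera_device: fixed camera-to-device calibration stored in controller 212."""
    # Pose of the surgical device expressed in the scan-plane frame.
    return T_plane_patch @ T_patch_camera @ T_camera_device
```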
After controller 212 determines the spatial relationship between the surgical device 206 and scan plane "S", controller 212 displays that relationship on display 214. As shown in FIG. 13, display 214 includes an image 218 of scan plane "S" including a target image 220 of target "T". Additionally, controller 212 superimposes a virtual image 206a of surgical device 206 in relation to image 218 to indicate the position of the surgical device 206 in relation to scan plane "S". Based on the angle and position of the ablation needle 206, controller 212 can calculate a trajectory of the surgical device 206 and display the calculated trajectory, shown generally as 216. In some embodiments, a crosshair or target may be superimposed on image 218 to indicate where the surgical device 206 will intersect the scan plane "S". In other embodiments, the calculated trajectory 216 may be shown in red or green to indicate the navigation status. For instance, if surgical device 206 is on a path that will intersect target "T", calculated trajectory 216 will be shown in green. If surgical device 206 is not on a path that will intersect target "T", calculated trajectory 216 will be shown in red.
Controller 212 can also receive user input specifying the surgical device type, energy level, and treatment duration. The surgical device type, energy level, and treatment duration can be displayed on display 214 as shown in FIG. 14A. When surgical device 206 intersects target "T", a virtual ablation zone 222 is projected onto image 218 as shown in FIG. 14B. The energy level and treatment duration can then be adjusted by a user, and controller 212 will adjust the virtual ablation zone 222 to reflect the changes in the energy level and treatment duration.
The fiducial tracking system is described hereinbelow with reference to FIGS. 15-22. In the fiducial tracking system, controller 212 receives a fiducial image from camera 208. Controller 212 also includes camera calibration and distortion coefficients for camera 208, fiducial system models, and camera-antenna calibration data previously stored thereon. In other embodiments, the camera calibration and distortion coefficients for camera 208, fiducial system models, and camera-antenna calibration data can be entered into controller 212 during a navigation procedure. Based on the fiducial image, the camera calibration and distortion coefficients for camera 208, the fiducial system models, and the camera-antenna calibration data, controller 212 can output the position of ablation needle 206 to display 214, as well as a diagnostic frame rate, residual error, and tracking status. In some embodiments, the distance between the camera 208 and the fiducial patch 204 may be in the range of about 5 to about 20 centimeters. In some embodiments, the distance between camera 208 and fiducial patch 204 may be in the range of about 1 to about 100 centimeters.
FIG. 15 shows a basic flowchart for the fiducial tracking algorithm employed by controller 212. As shown in FIG. 15, an image frame is captured in step 230. In step 231, controller 212 corrects for lens distortion using the camera calibration and distortion coefficients. Images captured by camera 208 may exhibit lens distortion as shown in FIG. 16A. Thus, before an image can be used for further calculations, the image needs to be corrected for the distortion. Before camera 208 is used during a navigation procedure, camera 208 is used to take multiple images of a checkerboard pattern at various angles. The multiple images and various angles are used to create a camera matrix and distortion coefficients. Controller 212 then uses the camera matrix and distortion coefficients to correct for lens distortion.
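The calibration and correction steps can be sketched with standard OpenCV calls as shown below; the checkerboard size and the image list are assumptions for the sketch, not requirements of the disclosure.

```python
# Illustrative checkerboard calibration and lens-distortion correction (step 231).
import cv2
import numpy as np

def calibrate(checkerboard_images, board_size=(9, 6)):
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    obj_points, img_points = [], []
    for img in checkerboard_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
    # Camera matrix and distortion coefficients derived from the checkerboard images.
    _, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    return camera_matrix, dist_coeffs

def correct_frame(frame, camera_matrix, dist_coeffs):
    # Step 231: correct a captured frame for lens distortion.
    return cv2.undistort(frame, camera_matrix, dist_coeffs)
```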
In step 232, controller 212 finds the white circles in the image frame using the algorithm of FIG. 17. As shown in FIG. 17, the image frame received in step 241 (FIG. 18A) is thresholded in step 243 using a dynamic threshold (see FIG. 18B). When using a dynamic threshold, after each valid frame, the dynamic threshold algorithm computes a new threshold for the next frame using the circles that were found in the valid frame. Using the circles that were found in the valid frame, controller 212 calculates a new threshold based on equation (5) below:
threshold = (black circle average intensity + white circle average intensity)/2   (5)
A predetermined threshold may be used to capture the initial valid frame, which is then used to calculate a new threshold.
Alternatively, controller 212 may scan for an initial threshold by testing a range of threshold values until a threshold value is found that results in a valid frame. Once an initial threshold is found, controller 212 would use equation (5) for dynamic thresholding based on the valid frame.
In other embodiments, a fixed threshold may be used. The fixed threshold may be a predetermined number stored in controller 212 or it may be determined by testing a range of threshold values until a threshold value is found that results in a valid frame.
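The dynamic threshold update of equation (5) and the initial-threshold scan can be sketched as follows; the pixel samples, the candidate threshold range, and the `is_valid_frame` helper are assumptions introduced only for this sketch.

```python
# Illustrative dynamic thresholding (equation (5)) and start-up threshold scan.
import numpy as np

def next_threshold(black_circle_pixels, white_circle_pixels):
    """Threshold for the next frame, computed from circles found in the last valid frame."""
    return (np.mean(black_circle_pixels) + np.mean(white_circle_pixels)) / 2.0

def find_initial_threshold(frame, is_valid_frame, candidates=range(10, 250, 10)):
    """Scan a range of threshold values until one yields a valid frame.
    is_valid_frame is a hypothetical validity test supplied by the caller."""
    for t in candidates:
        if is_valid_frame(frame, t):
            return t
    return None
```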
After the threshold and automatic gain control are applied to the image, a connected component analysis is performed in step 244 to find all the objects in the thresholded image. A geometric filter is applied to the results of the connected component analysis and the image frame in step 245. The geometric filter computes the size and shape of the objects and keeps only those objects that are circular and of approximately the right size, as shown in FIG. 18C. Weighted centroids are computed and stored for all the circular objects.
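One possible expression of steps 244 and 245 is sketched below using OpenCV contours; the area and circularity limits are assumptions, and the centroid shown is a binary-moment centroid, whereas an intensity-weighted centroid could equally be used.

```python
# Illustrative connected-component analysis and geometric (circularity/size) filter.
import cv2
import numpy as np

def find_circle_centroids(binary_img, min_area=30, max_area=2000, min_circularity=0.8):
    centroids = []
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        area = cv2.contourArea(cnt)
        perimeter = cv2.arcLength(cnt, True)
        if perimeter == 0 or not (min_area <= area <= max_area):
            continue
        circularity = 4.0 * np.pi * area / perimeter ** 2   # 1.0 for a perfect circle
        if circularity >= min_circularity:
            m = cv2.moments(cnt)                            # centroid from contour moments
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```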
Turning back to FIG. 15, in addition to finding the white circles in step 232, controller 212 also finds the black circles in step 233 using the algorithm depicted in FIG. 19. The algorithm for finding the black circles is similar to the algorithm shown in FIG. 17 for finding the white circles. In order to find the black circles, after an image frame is received in step 241 (see FIG. 20A), controller 212 inverts the intensities of the image frame in step 242 as shown in FIG. 20B. Then, as described above with regard to FIG. 17, the image is thresholded as shown in FIG. 20C, and the connected component analysis is performed and the geometric filter is applied to obtain the image shown in FIG. 20D. The weighted centroids are computed and stored for all the black circles in step 248. Further, in step 245, controller 212 applies a geometric filter to determine the black regions in addition to the black circles in the image frame. Controller 212 stores the determined black regions in step 249.
In step 234 of FIG. 15, controller 212 finds a correspondence between the fiducial image and the fiducial models using the algorithm shown in FIG. 21A. In step 251 of FIG. 21A, controller 212 uses a topology constraint to select the four white circles as shown in FIG. 21B. As shown in FIG. 21B, in step 261, controller 212 obtains the black regions stored in step 249 of FIG. 19 and obtains the white circles stored in step 246 of FIG. 17. Controller 212 then selects a first black region in step 263 and counts the number of white circles in the first black region in step 264. Controller 212 determines whether the number of circles in the selected black region matches a predetermined number of circles in step 265. If the number of circles does not match the predetermined number of circles, the algorithm proceeds to step 266, where the next black region is selected, and the number of circles in the next black region is counted again in step 264. This process repeats until the number of circles counted in step 264 matches the predetermined number of circles. Once the number of circles counted in step 264 matches the predetermined number of circles, the algorithm proceeds to step 267, where the topology constraint algorithm is completed. In other embodiments, controller 212 selects the four white circles by selecting the four roundest circles.
After the four circles are chosen, they are arranged in a clockwise order using a convex hull algorithm in step 252. The convex hull or convex envelope for a set of points X in a real vector space V is the minimal convex set containing X. If the points are all on a line, the convex hull is the line segment joining the outermost two points. In the planar case, the convex hull is a convex polygon unless all points are on the same line. Similarly, in three dimensions the convex hull is in general the minimal convex polyhedron that contains all the points in the set. In addition, the four matching fiducials in the model are also arranged in a clockwise order.
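For four non-collinear points forming a convex quadrilateral, an equivalent clockwise ordering can be obtained with a simple angle sort about the centroid, as in the sketch below; this is shown in place of a full convex hull routine purely for brevity.

```python
# Illustrative clockwise ordering of the four circle centers (step 252).
import math

def order_clockwise(points):
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    # In image coordinates (y grows downward), sorting by increasing angle about
    # the centroid yields a clockwise ordering on screen.
    return sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
```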
In step 253, a planar homography matrix is computed. After the planar homography matrix is calculated, it is used to transform the fiducial models to image coordinates, using the four corresponding fiducial models shown in FIG. 22 to find the closest matching image fiducials (steps 254 and 255). Controller 212 also computes the residual error in step 256. The algorithm uses the resulting 3D transform to transform the 3D fiducial model into the 2D image. It then compares the distances between the fiducials mapped into the 2D image and the fiducials detected in the 2D image. The residual error is the average distance in pixels. This error is used to verify accuracy and, in part, to determine the red/green navigation status. Controller 212 then selects the model with the most matches and the smallest residual error. For a more accurate result, a minimum number of black fiducial matches (e.g., three) is required.
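One possible expression of this matching step is the OpenCV-based sketch below; the data layout of the model and detected fiducials is an assumption made for illustration.

```python
# Illustrative homography fit, model-to-image mapping, and residual error (steps 253-256).
import cv2
import numpy as np

def match_fiducial_model(circle_model_pts, circle_image_pts, model_fiducials, detected_fiducials):
    # Planar homography from the four corresponding circles (step 253).
    H, _ = cv2.findHomography(np.float32(circle_model_pts), np.float32(circle_image_pts))
    # Map the remaining model fiducials into image coordinates (steps 254-255).
    mapped = cv2.perspectiveTransform(
        np.float32(model_fiducials).reshape(-1, 1, 2), H).reshape(-1, 2)
    detected = np.float32(detected_fiducials)
    matches, residuals = [], []
    for p in mapped:
        d = np.linalg.norm(detected - p, axis=1)
        matches.append(int(np.argmin(d)))        # closest detected fiducial
        residuals.append(float(d.min()))
    return H, matches, float(np.mean(residuals)) # residual error in pixels (step 256)
```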
In step 235 of FIG. 15, camera pose estimation is performed. The camera pose estimation involves calculating a 3D transform between the camera and the selected model by iteratively transforming the model fiducials onto the fiducial image plane and minimizing the residual error in pixels. The goal is to find the global minimum of the error function. One problem that may occur is the presence of significant local minima in the error function (e.g., an antenna imaged from the left looks similar to an antenna imaged from the right), which need to be avoided. Controller 212 avoids the local minima by performing the minimization from multiple starting points and choosing the result with the smallest error. Once the 3D transform is calculated, the controller can use the 3D transform to transform the coordinates of the surgical device 206 to a model space and display the surgical device 206 as virtual surgical device 206a on display 214.
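A sketch of such a multi-start pose estimation using standard OpenCV routines follows; the set of initial rotation guesses and the zero initial translation are assumptions made for illustration.

```python
# Illustrative camera pose estimation from multiple starting points (step 235).
import cv2
import numpy as np

def estimate_pose(model_pts_3d, image_pts_2d, camera_matrix, dist_coeffs, initial_rvecs):
    best = None
    for rvec0 in initial_rvecs:
        ok, rvec, tvec = cv2.solvePnP(
            np.float32(model_pts_3d), np.float32(image_pts_2d), camera_matrix, dist_coeffs,
            rvec=np.float32(rvec0).reshape(3, 1), tvec=np.zeros((3, 1), np.float32),
            useExtrinsicGuess=True, flags=cv2.SOLVEPNP_ITERATIVE)
        if not ok:
            continue
        projected, _ = cv2.projectPoints(np.float32(model_pts_3d), rvec, tvec,
                                         camera_matrix, dist_coeffs)
        error = float(np.mean(np.linalg.norm(
            projected.reshape(-1, 2) - np.float32(image_pts_2d), axis=1)))
        if best is None or error < best[0]:
            best = (error, rvec, tvec)           # keep the smallest reprojection error
    return best
```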
Because object boundaries expand and contract under different lighting conditions, the location of a conventional square-corner fiducial may change depending on lighting conditions. Fiducial patch 204 uses black and white circles and thus is not hampered by this problem, because the center of a circle always stays the same and continues to work well for computing weighted centroids. Other contrasting images or colors are also contemplated.
In another embodiment of the present disclosure, and as shown in FIG. 23, a planning and navigation system 300 is provided. System 300 includes a planning system 302 and a navigation system 304 that are connected to a controller 306. Controller 306 is connected to a display 308 that may include a single display screen or multiple display screens (e.g., two display screens). Planning system 302 is similar to planning system 100, and navigation system 304 is similar to navigation system 200. In system 300, display 308 displays the planning operation and navigation operation described hereinabove. The planning operation and the navigation operation may be displayed as a split-screen arrangement on a single display screen, the planning operation and the navigation operation may be displayed on separate screens, or the planning operation and the navigation operation may be displayed on the same screen with a user switching between views. Controller 306 may import dose settings from the planning system and use the dose settings during a navigation operation to display the ablation zone dimensions.
In other embodiments of the present disclosure, CT navigation and software can be integrated with planning system 100. Turning to FIGS. 24, 25A, and 25B, a planning and navigation system is shown generally as 400. System 400 includes an image capturing device 402 that captures CT images of a patient "P" having an electromagnetic reference 428 and/or an optical reference 438. The CT images are provided in DICOM format to planning system 404, which is similar to planning system 100. Planning system 404 is used to determine a treatment plan as described above, and the treatment plan is provided to controller 408 and displayed as a planning screen 412 on display 410 as shown in FIG. 26.
Navigation system 406 may use an electromagnetic tracking system as shown in FIG. 25A, an infrared tracking system, or an optical tracking system as shown in FIG. 25B. Turning to FIG. 25A, a navigation system 420 includes an electromagnetic field generator 422, a surgical device 424 having an electromagnetic transducer 426, and an electromagnetic reference 428 disposed on the patient. The field generator 422 emits electromagnetic waves which are detected by electromagnetic sensors (not explicitly shown) on the surgical device 424 and electromagnetic reference 428 and then used to calculate the spatial relationships between surgical device 424 and electromagnetic reference 428. The spatial relationships may be calculated by the field generator 422, or the field generator 422 may provide the data to controller 408 to calculate the spatial relationship between the ablation needle 424 and the electromagnetic reference 428.
FIG. 25B depicts an alternate navigation system 430 that is similar to the navigation system described in FIG. 13 above. In FIG. 25B, an optical reference or fiducials 438 is placed on a patient. A camera 436 attached to surgical device 424 takes an image of the fiducials 438 and transmits the image to controller 408 to determine a position of the ablation needle in relation to the fiducials 438.
After receiving data from navigation system 406, controller 408 may correlate the position of the surgical device 424 with the CT images in order to navigate the surgical device 424 to a target "T" as described below. In this case, the patient reference (of any type) may have radiopaque markers on it as well to allow visualization during CT. This allows the controller to connect the patient CT image coordinate system to the instrument tracking coordinate system.
Controller 408 and display 410 cooperate with each other to display the CT images on a navigation screen 440 as shown in FIG. 27. As shown in FIG. 27, display screen 440 includes a transverse view 442, a coronal view 444, and a sagittal view 446. Each view includes a view of the target "T" and an ablation zone 452 (including a margin). The transverse view 442, coronal view 444, sagittal view 446, and ablation zone 452 are all imported from planning system 404. Additionally, all planning elements (e.g., device selection, energy level, and treatment duration) are automatically transferred to the navigation screen 440. The navigation screen 440 is also a graphical user interface that allows a user to adjust the device selection, energy level, and treatment duration.
A navigation guide screen 448 is provided on display screen 440 to assist in navigating the ablation needle to the target "T". Based on the data received from the navigation system 406, the controller can determine whether the surgical device 424 is aligned with target "T". If the surgical device 424 is not aligned with target "T", the circle 454 will be off-center from outer circle 453. The user would then adjust the angle of entry for the surgical device 424 until the center of circle 454 is aligned with the center of outer circle 453. In some embodiments, circle 454 may be displayed as a red circle when the center of circle 454 is not aligned with the center of outer circle 453, and circle 454 may be displayed as a green circle when the center of circle 454 is aligned with the center of outer circle 453. Additionally, controller 408 may calculate the distance between the target "T" and the surgical device 424.
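A minimal sketch of this alignment indication follows; the pixel tolerance that switches the indicator from red to green is an assumed parameter, not a value from the disclosure.

```python
# Illustrative alignment indicator for navigation guide screen 448.
import math

def alignment_status(circle_454_center, circle_453_center, tolerance_px=5.0):
    # Offset between the inner circle 454 and the outer circle 453 centers.
    offset = math.hypot(circle_454_center[0] - circle_453_center[0],
                        circle_454_center[1] - circle_453_center[1])
    return "green" if offset <= tolerance_px else "red"
```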
In another embodiment, depicted in FIG. 28, controller 408 superimposes a virtual surgical device 424a over a 3D rendered image and displays the combined image on screen 462. Similar to the method described above, a user can align the center of circle 453 with the center of circle 454 to navigate the surgical device 424 to the target "T". Alternatively, the user can determine the position of surgical device 424 in relation to the target "T" by viewing virtual surgical device 424a on screen 462 to navigate the surgical device 424 to the target "T".
FIG. 29 depicts another embodiment of the present disclosure. Similar to screen 462 described above, in the embodiment of FIG. 29, screen 472 depicts a virtual surgical device 424a in spatial relationship to a previously acquired and rendered CT image. The CT image has been volume rendered to demarcate the target "T" as well as additional structures, vessels, and organs. By volume rendering the target "T", as well as the additional structures, vessels, and organs, the user can navigate the surgical device 424 into the patient while avoiding the additional structures, vessels, and organs to prevent unnecessary damage.
It should be understood that the foregoing description is only illustrative of the present disclosure. Various alternatives and modifications can be devised by those skilled in the art without departing from the disclosure. Accordingly, the present disclosure is intended to embrace all such alternatives, modifications and variances. The embodiments described with reference to the attached drawing figures are presented only to demonstrate certain examples of the disclosure. Other elements, steps, methods and techniques that are insubstantially different from those described above and/or in the appended claims are also intended to be within the scope of the disclosure.