FIELD OF THE INVENTION
This application relates generally to stereoscopic viewing of three dimensional images.
BACKGROUND OF THE INVENTION
Current three dimensional viewing techniques include using multiple x-ray images taken from different points of view to generate a perceived three dimensional image. Other techniques of three dimensional viewing include using tomosynthesis to generate perceived three dimensional images of an object, where the perceived three dimensional images are further segmented into multiple two dimensional cross-sections of the perceived three dimensional image. This technique further includes viewing each cross-sectional slice image of the perceived three dimensional images in cine mode.
Methods exist for marking two dimensional and perceived three dimensional images. Such methods include providing either a two dimensional or three dimensional cursor that can be placed within the image to mark a particular location.
SUMMARY OF THE INVENTION
In one aspect, described is a method for stereoscopic three dimensional viewing. The method includes retrieving a collection of images representative of a volume. Image elements are identified within the retrieved collection of images, and at least one image element value is generated based on the identified image elements. A projection image is generated, where the projection image is based in part on the at least one image element value. The projection image is then displayed.
In one embodiment, retrieving the collection of images includes retrieving at least one image.
Still another embodiment includes generating a projection image that includes a stereo pair of projection images.
BRIEF DESCRIPTION OF THE DRAWINGS
The following figures depict certain illustrative embodiments of a method and system for three dimensional viewing, where like reference numerals refer to like elements. Each depicted embodiment is illustrative of the method and system and not limiting.
FIG. 1 depicts an embodiment of an environment for creating, processing and viewing images.
FIGS. 2A-2B depict an embodiment of a viewing system for viewing images.
FIG. 3 depicts an embodiment of a system for generating a stereo view of collected images.
FIGS. 4A-4B are illustrative flow diagrams of an embodiment of a method for displaying images.
FIGS. 5A-5C depict an embodiment of a system for marking stereo pairs of images.
FIG. 6 is an illustrative flow diagram of an embodiment of a method for marking stereo pairs of images.
FIG. 7 is an illustrative flow diagram of an embodiment of a method for displaying a sequence of images.
FIG. 8 is an illustrative flow diagram of an embodiment of a method for displaying a slab of images.
FIG. 9 is an illustrative flow diagram of an embodiment of a method for displaying a slab of images in stereo.
DETAILED DESCRIPTION
Illustrated in FIG. 1 is one embodiment of an environment 100 for creating, processing and viewing images captured by an imaging machine 105 included within the environment 100. The imaging machine 105 operates to generate image data and further transfer that data to a computing machine 120 within the environment 100. Image data is transferred to the computing machine 120 and to a viewing apparatus 115 within the environment 100, via a network 110. The viewing apparatus 115 is further connected to the computing machine 120, which is further connected to an input device 125.
Referring to FIG. 1, and in more detail, the imaging machine 105 is, in some embodiments, a breast tomosynthesis imaging unit able to produce digital images of breast tissue from those images of the breast taken during a mammogram. In particular, the breast tomosynthesis imaging unit acquires multiple projections of the breast, taken from a narrow range of angles. These projections are created by rotating an x-ray tube and detector mounted to a C-arm around a static and compressed breast, over an angular range of 10 to 40 or more degrees. As the x-ray tube traverses the compressed breast in angular intervals within the 10 to 40 or more degree range, the x-ray tube pulses and emits radiation multiple times, at equally spaced intervals, during traversal of the breast. Anywhere from 0 to 30 or more projections may result from pulsing the x-ray tube during traversal around the compressed breast, and together the projections constitute a tomosynthesis projection set. Pairs of these projection images may be used as stereo pairs to generate a perceived three dimensional image. In addition, the breast tomosynthesis imaging unit reconstructs the projections within the tomosynthesis projection set using a back-projection algorithm. Each image is reconstructed in a plane parallel to the breast compression plate, and what results is a transformation of the tomosynthesis projection set into a volumetric stack of reconstructed slices. Stereo pairs of projections through the reconstructed slices may be used to generate a perceived three dimensional image.
Other embodiments include an imaging machine 105 that is a computed axial tomography (CT) scanner. In this embodiment, the volumetric data set captured by the CT scanner, once reconstructed into image slices representative of a section of the captured organ, may be further processed by components within the system 100 to create both non-stereo and stereo images. Still other embodiments include an imaging machine 105 that is any of the following types of CT scanners: electron beam CT scanner; spiral CT scanner; multi-slice CT scanner; dual source CT scanner; 256 slice CT scanner; inverse geometry CT scanner; and any other type of imaging device that utilizes volumetric x-ray data to generate a perceived three dimensional image of the object illustrated within the volumetric x-ray data. Still further embodiments include an imaging machine that is any one of the following types of technology: any type or form of magnetic resonance imaging (MRI); positron emission tomography (PET); thermal tomography; infrared imaging; three dimensional ultrasound; or any other type of modality producing a volumetric data set able to be formatted via the systems and methods discussed herein.
Still referring to FIG. 1, the imaging machine 105 communicates with a computing machine 120 via a network 110. The computing machine 120 can, in some embodiments, be any of the following: a personal computer; a client; a server; a windows-based terminal; an informational appliance; a workstation; a minicomputer; a main-frame computer; or any other computing device able to carry out those systems and methods herein described. In one embodiment, the computing machine 120 includes each of the following components: a processor (not shown); volatile memory (not shown); an operating system (not shown); a persistent storage memory (not shown); and a network interface card (not shown). Other embodiments include a computing machine 120 with any combination of the above mentioned computing components, and/or additional computing components. The processor (not shown) can, in some embodiments, include a single core, dual core, or multi-core processor. One embodiment includes a computing machine 120 that executes any of the following operating systems (not shown): any version of the WINDOWS OS; any version of the LINUX OS; any version of UNIX; any type or version of embedded operating system; any version of MAC OS; and any other type or version of operating system able to control the operation of the components included within the computing machine 120. One embodiment includes a computing machine 120 integrated into the imaging machine 105 such that the imaging machine 105 communicates directly with the viewing apparatus 115, and the input device 125 communicates with the imaging machine 105.
In one embodiment, the imaging machine 105 communicates with the computing machine 120 and viewing apparatus 115 via a network 110. This embodiment includes a network 110 that can be any of the following network types or forms: a serial network; a parallel network; a token ring network; a local-area-network (LAN); a wide-area-network (WAN); a personal area network (PAN); a private network; a public network; a packet switched network; or any other network type or form able to carry out the systems and methods herein described. Other embodiments include a network 110 that establishes connections via any of the following protocols: TCP/IP; IPX; SPX; NetBIOS; NetBEUI; SONET; SDH; T/TCP; TCP-SACK; TCP-Vegas; UDP over IP; WiFi protocols such as 802.11A, 802.11B, 802.11G, or 802.11N; Bluetooth® or any other short-range communication protocol; RS-232; RS-485; or any other network protocol able to carry out the systems and methods herein described. One embodiment includes an imaging machine 105 that communicates with the computing machine 120 over a multi-user network, while other embodiments include an imaging machine 105 that communicates over a network that includes only those devices within the environment 100. In one embodiment, the network 110 includes a single wire installed between the imaging machine 105 and the computing machine 120, and/or a single wire installed between the imaging machine 105 and the viewing apparatus 115. Still other embodiments include an environment 100 where a network 110 is installed between the imaging machine 105 and the computing machine 120 that excludes the viewing apparatus 115, while other embodiments include an environment 100 where a network 110 is installed between the imaging machine 105 and the viewing apparatus 115 that excludes the computing machine 120.
The computing machine 120, in some embodiments, is connected to an input device 125 able to control the display of images on the viewing apparatus 115. In some embodiments, the input device 125 can be used to control any of the following: image contrast; grayscale; image brightness; point of view within a stereo view; tilt of the image about an axis; movement through the image or composition of image slices; magnification of portions of the image; selection of areas within the image; or any other aspect relative to the display of the image. The input device 125, in one embodiment, can be any of the following components either in combination or alone: a mouse; a keyboard; an infrared pointing device; a laser pointer; a keypad; a joystick; a stylus and tablet, where contact between the stylus and tablet generates an input signal representative of the location of the contact between the stylus and the tablet; and any other input device compatible with the system and method herein described. In one embodiment, the input device 125 is a controller with multiple degrees of freedom, and/or a controller that includes any of the following sensors either in combination or alone: a tilt sensor; an accelerometer; a gyroscope; an infrared sensor; an inertial measurement unit (IMU); or any other sensor able to capture the movement of the controller over multiple degrees of freedom. Still other embodiments include an input device 125 that allows for haptic feedback. In one embodiment, the input device 125 communicates either via physical connection or wirelessly with any of the computing machine 120, the imaging machine 105, the viewing apparatus 115, or any other device included within the environment 100. Further embodiments include an input device 125 that communicates with an imaging machine 105 via the viewing apparatus 115 and network 110.
Still referring to FIG. 1, included within the environment 100 is a viewing apparatus 115 that, in one embodiment, is the viewing apparatus 115 illustrated in FIG. 2A and FIG. 2B. Other embodiments include a viewing apparatus that includes any of the following display devices: an LCD monitor; a CRT monitor; a plasma display; a surface-conduction electron emitter display; an organic LED display; or any other medium able to display the system and method herein described.
While the above embodiments contemplate an environment 100 that includes devices able to capture images of organs included within the body, other embodiments may include an environment 100 with an imaging machine 105 and other components necessary to capture images of external features of the body such as the skin. In other embodiments, the environment 100 includes an imaging machine 105 and related environmental components necessary to capture images in any of the following contexts: oil exploration; improvised explosive device detection and other bomb detection; identification of human bodies within vehicles and other transportation devices; mining; security applications such as body cavity searching, baggage searching, and other security-based searching; detection of underground facilities; cargo tracking to track the contents of cargo containers; and forms of medical imaging for various organs such as the breast and the prostate.
Other embodiments of the environment 100 include a computing machine 120 and viewing apparatus 115 able to network with clients on the network 110 such that the clients on the network 110 may remotely view the images generated by the imaging machine 105, and may further view the images displayed on the viewing apparatus 115. In this embodiment, the end user may have a viewing apparatus substantially similar to the viewing apparatus 115 in communication with the computing machine 120, and so may have the ability to view stereo imaging of the reconstructed breast volume generated by the imaging machine 105 and formatted by the methods and systems herein described. Further embodiments include remote viewing capabilities that provide an end user with the ability to interact with the slices generated by the imaging machine 105 in a manner similar to that of the methods and systems herein described.
Illustrated in FIGS. 2A and 2B are embodiments of a viewing apparatus 115 included within the environment 100 depicted in FIG. 1. Each of FIGS. 2A and 2B illustrates a viewing apparatus 115 that includes display units connected at a joint 155 and attached to a beam splitter 150. Depicted in FIG. 2A is an embodiment of a viewing apparatus 115 that includes a top display unit 165 connected to a bottom display unit 160 at a joint 155 that spans the width of each display unit 160, 165. The display units 160, 165 are further connected to a beam splitter 150. Depicted in FIG. 2B is an embodiment of a viewing apparatus 115 that includes a left top display unit 180, a right top display unit 185, a right bottom display unit 175, and a left bottom display unit 170; where each of the top display units 180, 185 is connected to each of the bottom display units 170, 175 via a joint 155 that spans the combined width of each top display unit set and each bottom display unit set. A beam splitter 150 is connected to each of the display units such that the beam splitter 150 spans the combined width of the left and right top display units 180, 185, and the combined width of the left and right bottom display units 170, 175. The display units 170, 175, 180, 185 are further connected via a joint 155 that spans the width of the beam splitter 150.
Further referring to FIGS. 2A and 2B, and in more detail, in one embodiment each included display unit 160, 165, 170, 175, 180, 185 is a Dome E5 AMLCD flat-panel 21.3 inch grayscale monitor. Other embodiments include display units that can be any of the above described display devices. Still other embodiments include display units that include any of the following: a five mega pixel display device; a display device able to display at a resolution of 2560 by 2048 pixels; a sixteen mega pixel display device; or any other display device with a resolution able to display the images generated by the imaging device 105. In another embodiment, each display unit is configured to emit a polarized image, where the polarization axes of the two monitors are orthogonal to each other. The viewing apparatus 115 can, in some embodiments, include display units configured to emit a polarized image, where the display units employ circular polarization techniques such that the orthogonal directions are clockwise and anti-clockwise.
Embodiments include a beam splitter 150 that has a glass plate coated with a half-silvered coating that is approximately fifty percent transmissive and fifty percent reflective. In this embodiment, the beam splitter extends out from the joint 155 connecting the top display unit(s) to the bottom display unit(s). The beam splitter 150 bisects the angle created at the joint 155 connecting the top display unit(s) to the bottom display unit(s), and can be movably rotated about the joint 155 to provide a direct view of either the top display unit(s) or the bottom display unit(s). In this embodiment, the beam splitter 150 operates to reflect the image projected by the top display unit(s) 165, 180, 185, and transmit the image projected by the bottom display unit(s) 160, 170, 175. The beam splitter 150 maintains both the transmitted and reflected image so that a user viewing the beam splitter 150 through polarized glasses, where the lenses of the glasses are cross-polarized, may see a three dimensional representation of the image. This occurs when the user's visual system fuses the reflected image from the top display units as viewed through one eye, with the transmitted image from the bottom display units as viewed through the other eye. The single fused image produces a three dimensional view of a stereo pair of two dimensional projection images of either a volumetric set of data or a physical volume. Other embodiments include a beam splitter 150 that can be locked in a single position so that a user may directly view the lower display unit(s) 160, 170, 175.
Referring to FIG. 2A, and in more detail, in one embodiment, the viewing apparatus 115 includes a single top display unit 165 and a single bottom display unit 160. Each display unit 160, 165 is configured to display the reconstructed two dimensional images, stereo pair of projection images or image slices generated by the imaging device 105. In one embodiment, the display units 160, 165 may display a single reconstructed image of the volume, organ or other object; while in other embodiments the display units 160, 165 may display multiple views of a displayed two dimensional projection image, or stereo pair of images. Further embodiments include display units 160, 165 able to display two different projection images, a stereo pair of images, or image slices. In this embodiment, a first displayed image or stereo pair of images corresponds to a recent, younger or current view of the displayed volume, object or organ; while a second displayed image or stereo pair of images corresponds to a view of the displayed volume, object or organ that is older than the recent, younger or current view. Differences between the older image of the volume, object or organ, and the current image of the now older volume, object or organ may be highlighted using a pixel comparison or other comparison algorithm. This embodiment is of particular use when illustrating the effects of change, aging and time on the volume, object or organ; or to illustrate development or recession of changes, growths, cancer or another disease. Such an embodiment may be used in another embodiment to compare an image representative of a recently captured volume, object or organ image to a test image. This embodiment may be used as a method for teaching operators, users of the systems and methods described herein, doctors or other medical associates how to identify tumors, illnesses, or other structures, densities, or elements within an image.
Referring to FIG. 2B, and in more detail, in one embodiment the viewing apparatus 115 includes a left top display unit 180 connected to a right top display unit 185, and a left bottom display unit 170 connected to a right bottom display unit 175. Together the left and right top display units create a single top display unit, while together the left and right bottom display units create a single bottom display unit. In one embodiment, the multiple display units can be used to display either different views of the same volume, object or organ, or differing views of different volumes, objects or organs. As described above, these multiple display methods can be used to display the effects of time, changes, aging, cancer, disease, lack of disease, comparison to a test case, or other contrasting view of a same or different volume, object or organ to further identify points of interest within either or both of the displayed sets of images, or images. Other embodiments of the viewing apparatus 115 include more than two top and/or bottom display units. Still other embodiments of the viewing apparatus 115 include the ability to display animation of a displayed image, stereo pair of images or set of images.
Further referring to FIGS. 2A and 2B, a viewing apparatus 115 includes a joint 155 installed between top display units and bottom display units. In one embodiment, the joint 155 is a hinge movable about a central axis, so that the angle between the top display units and the bottom display units may be altered. Other embodiments of the viewing apparatus 115 include an apparatus 115 driven by a graphics controller card (not shown). In one embodiment, the graphics controller card (not shown) has onboard memory such that the graphics card can render a stereo projection through a moving slab of slices in real time, and further allows a user to alter the point of view within the stereo mode.
In one embodiment, the viewing apparatus 115 can be any stereo display system, such as the following devices manufactured by Planar Systems: the SD1710 Stereoscopic 3D Display; the SD2020 Stereoscopic 3D Display; the SD2420W Stereoscopic 3D Display; and the SD2620W Stereoscopic 3D Display. Other embodiments include a stereoscopic three dimensional display created using the Planar Systems Dome Z16 display.
Still other embodiments include a viewing apparatus 115 that employs methods of displaying which include rapid temporal alternation of two images, where each image is a member of a stereo pair of images. In this embodiment, the viewing apparatus rapidly and temporally alternates display of the stereo pair of images in synchronization with a user's electronic shutter glasses. In another embodiment, the viewing apparatus 115 includes an autostereoscopic display where two images, each a member of a stereo pair of images, are individually directed towards a user's eyes such that the user can view the stereo pair of images as a three dimensional image without the need for an additional viewing apparatus such as stereo glasses. Other embodiments include a viewing apparatus 115 that employs any other type of stereo display technology or viewing method that permits an end user to view a three dimensional image from a stereo pair of images with or without the aid of an additional viewing apparatus.
Illustrated in FIG. 3 is an embodiment of a system for generating a stereo view of a stereo pair of two dimensional images representative of a physical volume or a volumetric set of data. In this system 401, the stereo pair of two dimensional images or image slices are compiled into a single reconstructed volume 405. Within the volume 405 is a collection of two dimensional images or a slab of slices 415 that are included within a projection set, and that contain images of either a physical volume or a volumetric set of data. Views of the reconstructed volume 405 are taken from a first point of view 430 with a first line of sight 435, and a second point of view 420 with a second line of sight 425.
Further referring to FIG. 3, and in more detail, using two dimensional projection images of the reconstructed volume 405, a first projection image is generated from a first point of view 430 while a second projection image is generated from a second point of view 420 to create a stereo pair of images. This stereo pair of images, when viewed through a stereo viewing apparatus, provides a three dimensional view of a portion of the volume 405. In one embodiment, the first point of view 430 and the second point of view 420 are separated by a distance within the range of 0 to 10 centimeters. Other embodiments include spacing between each point of view substantially greater than, equivalent to, or less than a value representative of an average distance between a human pair of eyes. Further embodiments include a first and second point of view 430, 420 that are located parallel to each other so that the view from each point results in a visually correct pair of stereo images. Other embodiments include a first and second point of view 430, 420 directed toward a common focal point according to the camera toe-in method. In this embodiment, the vertical parallax created by directing the view points towards a single focal point may result in visual discomfort.
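By way of a non-limiting illustration, the following Python sketch shows one way the two viewpoint geometries described above could be arranged: a parallel-axis placement, and the camera toe-in placement in which both lines of sight converge on a common focal point. The function name, the default eye separation, and the coordinate convention are assumptions made for this sketch and are not taken from the description above.

```python
# Illustrative sketch (not the patented implementation): placing two virtual
# viewpoints for a stereo pair, using an assumed eye separation and an
# assumed coordinate convention for illustration only.
import numpy as np

def stereo_viewpoints(volume_center, distance, eye_separation=6.5, toe_in=False):
    """Return ((left_pos, left_dir), (right_pos, right_dir)) for two viewpoints."""
    center = np.asarray(volume_center, dtype=float)
    # Both viewpoints sit 'distance' in front of the volume along +z,
    # offset horizontally by half the eye separation.
    left_pos = center + np.array([-eye_separation / 2.0, 0.0, distance])
    right_pos = center + np.array([+eye_separation / 2.0, 0.0, distance])
    if toe_in:
        # Camera toe-in: both lines of sight converge on the volume center.
        left_dir = center - left_pos
        right_dir = center - right_pos
    else:
        # Parallel-axis geometry: both lines of sight share one direction,
        # avoiding the vertical parallax introduced by toe-in.
        left_dir = np.array([0.0, 0.0, -1.0])
        right_dir = np.array([0.0, 0.0, -1.0])
    unit = lambda v: v / np.linalg.norm(v)
    return (left_pos, unit(left_dir)), (right_pos, unit(right_dir))

left_view, right_view = stereo_viewpoints(volume_center=(0, 0, 0), distance=50.0)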
In one embodiment, creation of the stereo pair of images includes constructing two projection images from a slab of slices, where each projection image is calculated from the volumetric set of data or physical volume represented by images, and where each image is taken from a separate and distinct viewpoint. One embodiment includes determining a stereo pair of images by calculating a first projection image from the first point of view 430, and then calculating a second projection image from the second point of view 420. Each image is calculated using ray tracing in conjunction with a weighting function to pass a virtual ray through the volumetric data represented within the slab, and to weight the value of each of the pixels encountered in successive slices as the ray passes through the slab in constructing the two dimensional projection image. Different weighting functions may be employed, including, but not restricted to: linear averaging; depth-weighted averaging; maximum intensity projection; or any other appropriate weighting function. In some embodiments, the identified weighted pixel values are used to construct a two dimensional image representative of a view of the volumetric set of data from either the first or second point of view. The created two dimensional images display pixels at a brightness that correlates to each pixel identified from each image's respective point of view.
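The ray tracing and weighting described above can be illustrated with a simplified sketch in which each ray passes straight down through the stack of slices, so that a projection reduces to collapsing the slab along its first axis with a chosen weighting function (linear averaging, depth-weighted averaging, or maximum intensity projection). The function and parameter names, and the particular depth-weighting formula, are illustrative assumptions rather than the specific implementation described herein.

```python
# A minimal sketch, assuming the slab is a NumPy array of shape
# (num_slices, rows, cols) and that the ray for each output pixel passes
# straight through the stack (an axis-aligned simplification).
import numpy as np

def project_slab(slab, weighting="average"):
    """Collapse a slab of slices into one two dimensional projection image."""
    slab = np.asarray(slab, dtype=float)
    if weighting == "average":          # linear averaging
        return slab.mean(axis=0)
    if weighting == "mip":              # maximum intensity projection
        return slab.max(axis=0)
    if weighting == "depth":            # depth-weighted averaging: nearer
        depths = np.arange(slab.shape[0], dtype=float)   # slices weigh more
        weights = 1.0 / (1.0 + depths)
        weights /= weights.sum()
        return np.tensordot(weights, slab, axes=1)
    raise ValueError(f"unknown weighting: {weighting}")

# Example: a 10-slice, 64x64 slab collapsed with each weighting in turn.
slab = np.random.rand(10, 64, 64)
for mode in ("average", "mip", "depth"):
    image = project_slab(slab, weighting=mode)
```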
Other embodiments include a recalculation of the displayed stereo pair of images when the point of view of the stereo image changes in response to a tilt command and/or in response to user input indicating that the point of view should be changed. In this embodiment, the change in point of view causes each of the points of view 430, 420 to change such that each point of view identifies a different set of points within the volumetric data represented by the slab. The change in the point of view requires that the displayed two dimensional images be re-calculated using ray tracing and using the weighting function. What results is a different pair of stereo images representative of a viewpoint of the volumetric image data represented by the slab that differs in a manner substantially in accordance with the user-specified change in point of view or tilt.
In still another embodiment, the system described in FIG. 3 can be used to generate a stereo view of images obtained from an imaging machine able to obtain images of any of the following: a physical volume; external features of the human body or of any other living organism; images generated in relation to oil exploration; images of an improvised explosive device or any other images generated during bomb detection; identification of human bodies within vehicles or other transportation devices; images generated by security applications that perform body cavity searching, baggage searching, or any other security detection process; detection of underground facilities; images generated during cargo tracking to track the contents of cargo containers; or any images generated by any device capable of generating internal and/or external images of the human body. In such an embodiment, the images generated are images of a three dimensional volume.
Illustrated in FIG. 4A is one embodiment of a method 601 for three dimensional viewing, the method 601 including retrieving a collection of images representative of a volume (step 605). Within the retrieved collection of images, image elements are identified (step 610). At least one image element value is generated, where the image element value is based on the identified image elements (step 615). A projection image is generated, where the projection image is based in part on the at least one image element value (step 620). The generated projection image is then displayed (step 625).
Further referring to FIG. 4A, and in more detail, retrieving a collection of images representative of a volume (step 605) can include retrieving a collection of images generated by the imaging machine 105. Other embodiments of the method 601 can include retrieving a collection of images from a storage repository located on the computing machine 120 or on any other computing machine or device in communication with either the computing machine 120 or the imaging machine 105. In one embodiment, the collection of images includes a single image, while in another embodiment, the collection of images includes more than one image.
Image elements are identified by a computing machine 120 from within the collection of images (step 610) retrieved by the imaging machine 105 or from a storage repository. In one embodiment, image elements are identified by executing a ray tracing function to identify image elements along a path that passes through the collection of images. Within this embodiment, the collection of images can include a sequentially ordered stack of images. The path is directed through the sequentially ordered stack of images, and is directed using any one of a parallel axis, an asymmetric frustum, a camera toe-in, or a perspective projection. In still another embodiment, the computing machine 120 is further configured to apply a weighting function to intensity values that are related to the identified image elements. Other embodiments include applying a weighting function, where the weighting function can be any one of a pixel density averaging function; a depth-weighted function; a maximum intensity projection function; or any other weighting function able to identify image elements usable within the methods and systems described herein.
Using the identified image elements, the computing machine 120 is further able to generate at least one image element value (step 615). In one embodiment, the generated image element value is associated with the path and based in part on the results of the execution of the ray tracing function and the results of the application of the weighting function.
In one embodiment, the computing machine 120 generates a projection image that is based on the at least one image element value (step 620). Other embodiments include a computing machine 120 that generates a stereo pair of projection images from the at least one image element value. The stereo pair of projection images is generated by executing the ray tracing function from two different points of view and along two different paths, where each path originates from one of the two different points of view. This embodiment can include generating a collection of stereo pairs of projection images.
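As a hedged sketch of generating a stereo pair from a single slab of slices, the example below approximates rays cast from two laterally offset viewpoints by shifting each slice horizontally in proportion to its depth (a shear-warp style approximation) before collapsing the shifted stacks with a simple averaging weight. The function names and the per-slice shear value are illustrative assumptions rather than the specific ray tracing implementation described herein.

```python
# Shear-warp style approximation of two offset viewpoints, for illustration.
import numpy as np

def sheared_projection(slab, shear_px_per_slice):
    """Shift each slice horizontally in proportion to its depth, then average."""
    slab = np.asarray(slab, dtype=float)
    shifted = np.zeros_like(slab)
    for depth, plane in enumerate(slab):
        shift = int(round(shear_px_per_slice * depth))
        shifted[depth] = np.roll(plane, shift, axis=1)   # horizontal shift
    return shifted.mean(axis=0)                          # simple averaging weight

def stereo_pair(slab, shear_px_per_slice=0.5):
    """Return (left, right) projection images from opposite shear directions."""
    left = sheared_projection(slab, +shear_px_per_slice)
    right = sheared_projection(slab, -shear_px_per_slice)
    return left, right

left_img, right_img = stereo_pair(np.random.rand(20, 128, 128))
```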
In another embodiment, the viewing apparatus 115 displays the generated stereo projection image(s) (step 625). The projection images can be sequentially displayed by stepping through the collection of retrieved images. In one embodiment, the display conditions can be adjusted when the projection image(s) are displayed. Examples of display conditions that can be adjusted include: a pixel intensity transfer function, slab thickness, viewing mode, point of view, or any other display condition able to be adjusted by the systems and methods described herein. Methods of displaying the stereo pair of projection images include removing a pair of stereo images from the collection of stereo images and adding a new pair of stereo images. In this method, the pair of stereo images that is added is a subsequent pair of stereo images in the sequence of images within the collection of stereo image pairs. Other embodiments include displaying the projection images in a non-stereo single slice viewing mode, a non-stereo cine mode or a non-stereo slab viewing mode.
Illustrated in FIG. 4B is another embodiment of a method 650 of three dimensional viewing. The method 650 includes retrieving a collection of images representative of a volume (step 652), selecting a stereo pair, or stereo pairs, of projection images taken through a physical volume (step 654), and displaying the selected stereo pair(s) of images (step 656).
Further referring to FIG. 4B, and in more detail, retrieving a collection of images representative of a volume (step 652) includes using the imaging machine 105 to retrieve images representative of a physical volume. In one particular example, this includes using an x-ray machine to obtain a collection of x-ray images of a physical volume. Still other embodiments include using any imaging machine 105 able to obtain images of a physical volume to obtain a collection of images representative of that physical volume.
In one embodiment, stereo pairs of projection images are selected from the retrieved collection of images (step 654). Pairs of projection images are selected based on whether the images, when viewed through a stereo viewing apparatus, provide a displayed image of a three dimensional volume.
Other embodiments include displaying the selected stereo pair(s) of images (step 656) on a stereo viewing apparatus such as those described above.
In one embodiment of either of the above-described methods 601, 650, a cursor is provided that can be positioned within an image to identify a location within a projection image. The cursor can, in one embodiment, be a stereo cursor, or a shape comprised of a number of pixel elements. The shape, for example, might be an arrow, a circle or square, or even a 3-dimensional shape such as an outline cube. In one embodiment, the shape can be drawn in both a first and second projection image. The vertical coordinate of corresponding elements in the two shapes can be the same in each of the two projection images, while the horizontal coordinate of corresponding elements in the two shapes may differ, depending upon their intended perceived location in depth. Corresponding elements with identical horizontal coordinates will be perceived to lie at the surface of the display device. Those for which the horizontal coordinate for the cursor as seen by the right eye is to the left of that seen by the left eye can be perceived to lie in front of the surface of the viewing apparatus 115. Those for which the horizontal coordinate for the element seen by the right eye is to the right of that seen by the left eye can be perceived to lie behind the surface of the viewing apparatus 115. In either case, the perceived distance from the display surface can be proportional to the degree of disparity between the two horizontal coordinates. The perceived horizontal location of the cursor in stereo can be midway between the horizontal coordinates of the cursor as drawn within each image member of the pair of stereo images. The perceived vertical location of the cursor in stereo can be at the shared vertical coordinate.
If an identical cursor is drawn in both projection images, displaced by a horizontal distance value but positioned at substantially the same vertical coordinate, then the cursor can be perceived to lie in a plane parallel to the display surface, at a depth determined by the amount of horizontal disparity. Moreover, it is possible to draw a three-dimensional cursor by shifting different elements of the cursor by different amounts horizontally according to the desired depth location of each aspect of the cursor.
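The disparity rule described in the preceding paragraphs can be summarized in a short sketch: the cursor is drawn at a shared vertical coordinate in both images, and its horizontal coordinates are offset in opposite directions by half the disparity, with the sign of the disparity determining whether the cursor is perceived in front of, at, or behind the display surface. The linear depth-to-disparity scale factor below is an assumption for illustration only.

```python
# Illustrative sketch of mapping a desired perceived depth to the two drawn
# cursor positions; the scale factor is an assumed value, not a source value.
def stereo_cursor_positions(x, y, depth, disparity_per_unit_depth=2.0):
    """Return ((x_left, y), (x_right, y)) cursor positions for the two images.

    depth > 0 places the cursor behind the display surface (right-eye copy
    drawn to the right of the left-eye copy); depth < 0 places it in front;
    depth == 0 places it at the display surface.
    """
    disparity = depth * disparity_per_unit_depth
    x_left = x - disparity / 2.0
    x_right = x + disparity / 2.0
    return (x_left, y), (x_right, y)

# The perceived horizontal location is midway between the two drawn positions.
left_icon, right_icon = stereo_cursor_positions(x=100, y=80, depth=-5)
```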
The location of the stereo cursor in the perceived volume resulting from the displayed stereo projection images can be interactively controlled by a user via an input device 125 connected to the computing machine 120. For example, using an input device 125 such as a mouse, the horizontal and vertical location of the cursor can be controlled by horizontal and vertical movements of the mouse. The location of the marker in depth can be controlled by rotation of the mouse scroll wheel. This capability also makes it possible for the user to make length and volume measurements in the displayed volume. By using the stereo cursor to mark two locations by placing markers in the volume, possibly at different depths in the volume, an application executing on the computing machine 120 would be able to calculate the distance between two or more placed markers. Similarly, by using the stereo cursor to mark additional locations around some region in the volume, an application executing on the computing machine 120 would be able to estimate a value representative of the volume contained by the marked locations. These calculations could further be translated into coordinates, distance values, area values, volume values, or any other measurement or descriptive value representative of the region, space or location tagged by the placed marker(s).
The location of the stereo marker in the displayed volume can also be controlled by some other agent. For example, Computer-Aided Detection (CAD) software analyses may be used with mammographic or other medical images to identify potential regions of abnormality. Currently, these are indicated to the radiologist by placing markers on the two dimensional projection images. In the case of stereo imaging, software could correlate regions identified independently in the two images and, for corresponding regions, place a stereo marker at the appropriate location in each image. This would be perceived to mark the region not only by its horizontal and vertical location, but also in depth.
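One hedged sketch of how software could correlate regions identified independently in the two images, as described above, is to treat two detections as corresponding when their vertical coordinates agree within a tolerance and their horizontal offset falls within an expected disparity range. The thresholds and function names below are illustrative assumptions, not values taken from this description.

```python
# Illustrative correlation of CAD detections between the two images of a
# stereo pair; thresholds are assumed values for the sketch.
def correlate_cad_marks(left_marks, right_marks,
                        max_vertical_diff=3.0, max_disparity=40.0):
    """left_marks/right_marks: lists of (x, y) detections in each image.
    Returns a list of ((x_left, y_left), (x_right, y_right)) corresponding pairs."""
    pairs = []
    unmatched_right = list(right_marks)
    for lx, ly in left_marks:
        best, best_dx = None, None
        for rx, ry in unmatched_right:
            # Same region: nearly equal vertical coordinate, plausible disparity.
            if abs(ly - ry) <= max_vertical_diff and abs(lx - rx) <= max_disparity:
                if best is None or abs(lx - rx) < best_dx:
                    best, best_dx = (rx, ry), abs(lx - rx)
        if best is not None:
            pairs.append(((lx, ly), best))
            unmatched_right.remove(best)
    return pairs

matched = correlate_cad_marks([(120, 80), (300, 45)], [(112, 81), (40, 200)])
```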
In one embodiment, the cursor can be positioned to identify a section of the projection image(s). In another embodiment, two cursors can be positioned to identify a section of the projection image(s). The distance between each of the cursors can be measured either by a user or automatically via an application executing on the computing machine 120.
Alternatively, the CAD application could be used to mark and display all elements or regions it finds of interest in each of the two images of the stereo pair. Two visual methods could be employed by the user to identify likely false positives, i.e., those elements or regions marked by the CAD algorithm in one of the images but not in the other. The first depends on the appearance and location in depth of marked elements or regions. A CAD-marked element or region that has no correspondent in the other image will have a shimmering appearance in the perceived stereo image due to binocular rivalry and will visually appear to lie at the surface of the display as though the element or region has zero horizontal disparity. This method can be effective when the volumetric structure being displayed in stereo is located behind or in front of the display surface, thereby segregating in depth the non-corresponding elements and regions from those that do have a correspondence.
The second visual method for identifying non-corresponding elements or regions marked by a CAD application uses temporal oscillation of the lateral position of the two images in the stereo display. If the position of each image undergoes a small amount of sinusoidal oscillation horizontally in the stereo display, with the oscillation 180 degrees out of phase for the two images, then corresponding elements or regions within the fused stereo image will be perceived by a user to oscillate back and forth in depth, with no change in position horizontally or vertically. On the other hand, non-corresponding elements or regions, since they are seen by only one eye, will be perceived to lie at the screen surface and to oscillate laterally and not in depth. The difference in horizontal versus in-depth oscillations will segregate and distinguish between CAD marks for corresponding and non-corresponding elements and regions. This method may be applied only to the CAD-positioned marks, leaving the two images comprising the stereo pair static. In this case the marks only undergo a small amount of sinusoidal oscillation horizontally in the stereo display, with the oscillation 180 degrees out of phase for the two images. Corresponding marks in the two images will be perceived to oscillate back and forth in depth, with no change in position horizontally or vertically, while a mark with no correspondent in the other image will be perceived to oscillate laterally and not in depth.
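A minimal sketch of the out-of-phase oscillation described above follows: the two members of the stereo pair (or only the CAD-positioned marks) receive small horizontal offsets that are 180 degrees out of phase, so corresponding features appear to oscillate in depth while non-corresponding marks oscillate laterally. The amplitude and frequency values are illustrative assumptions.

```python
# Illustrative computation of the two out-of-phase horizontal offsets.
import math

def oscillation_offsets(t, amplitude_px=2.0, frequency_hz=1.0):
    """Return (left_offset, right_offset) horizontal shifts at time t seconds."""
    phase = 2.0 * math.pi * frequency_hz * t
    left_offset = amplitude_px * math.sin(phase)
    right_offset = amplitude_px * math.sin(phase + math.pi)   # 180 degrees out of phase
    return left_offset, right_offset

# Sample the offsets over one second at 60 frames per second.
offsets = [oscillation_offsets(frame / 60.0) for frame in range(60)]
```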
Illustrated in FIGS. 5A-5C is a system for marking a perceived three dimensional image. Each figure illustrates a three dimensional space 801 having three axes of measurement, i.e. an X axis, a Y axis and a Z axis. Within the three dimensional space 801 is a perceived three dimensional image 805 having dimensions measurable on each of the X axis, the Y axis and the Z axis. Placed within the perceived three dimensional image 805 are a first marker 810 and a second marker 815, as shown in each of FIGS. 5A, 5B and 5C. Further placed within the perceived three dimensional image 805 is a third marker 820, as is shown in FIGS. 5B and 5C. Still further placed within the perceived three dimensional image 805 is a fourth marker 825, as is shown in FIG. 5C. In some embodiments, placed within the perceived three dimensional image 805 are four or more markers.
Still referring to FIGS. 5A-5C, and in more detail, the three dimensional space 801 has, in one embodiment, three axes of measurement. The axes of measurement can, in one embodiment, be marked as the X axis, the Y axis and the Z axis. In other embodiments, the axes can be marked using alternative letters, numbers or identifiers. Still other embodiments include axes that are unmarked. Coordinates of images displayed within the three dimensional space can be measured using any one of the following coordinate systems: Cartesian; cylindrical; spherical; parabolic; polar; or any other coordinate system that comports with the systems and methods described herein.
In one embodiment, illustrated in FIGS. 5A-5C is a perceived three dimensional image 805 having dimensions that are measurable along three axes of measurement. Other embodiments include a perceived image 805 that is measurable along two axes of measurement. In one embodiment, the perceived three dimensional image 805 is created using the systems and methods described herein.
The markers 810, 815, 820 and 825 are, in one embodiment, placed using three dimensional or two dimensional stereo cursors. In still other embodiments, the markers 810, 815, 820 and 825 can be any one of the following shapes: square; cube; arrow; circle; oval; triangle; rectangle; polygon; star; rhombus; trapezoid; or any other two-dimensional or three-dimensional shape able to enclose at the very least an area. The markers 810, 815, 820 and 825 are in one embodiment three dimensional and are in other embodiments two dimensional. Embodiments can include markers 810, 815, 820 and 825 that are patterned, colored, glowing, textured or otherwise exhibiting a type of design. The markers 810, 815, 820 and 825 can be, in some embodiments, a set of pixel elements that are drawn into the image 805 using an application that executes on the computing machine 120. Still other embodiments include markers 810, 815, 820 and 825 that can measure any of: the precise location of a selected element within the image 805; distances between selected elements within the image 805; contours, areas or volumes defined by a set of markers or a selection of multiple elements within the image 805; or differences in selected aspects of the image 805 (for example, geometries within the image 805) where the image 805 changes because of a lapse of time or the capturing of the image 805 using different imaging techniques, or under different conditions or contexts. In one embodiment, one, two, three, four or more than four markers 810, 815, 820 and 825 are inserted into the image 805. Still other embodiments include markers that are uniform, meaning each marker is the same as the other markers. Other embodiments include markers 810, 815, 820 and 825 where one or more markers are different from the other markers in that they have a different: color; shape; size; design; or other identifying characteristic.
Illustrated in FIG. 5A is an example of the use of two markers 810 and 815 to select or otherwise identify a section of the perceived image 805. In one embodiment, the length extending from one of the markers 810 to the other marker 815 can be measured to obtain a value representative of the length. Measurement of the length can include obtaining the coordinates of one of the markers 810, obtaining coordinates of the other marker 815, and then using the differences in the obtained coordinates to generate a value representative of the length. Other embodiments include using a computer application executing on the computing machine 120 to automatically identify a value associated with the length. In one embodiment, the computer application uses a method similar to that described above in that the computer application obtains the coordinates of both markers 810, 815 and then calculates the differences between the coordinates to further generate a value representative of the length.
Illustrated in FIG. 5B is an example of the use of three markers 810, 815 and 820 to select or otherwise identify a section of the perceived image 805. In one embodiment, the lengths extending between markers 810 and 815, markers 815 and 820, and markers 810 and 820 can be measured to obtain values representative of the lengths. Still other embodiments include obtaining a value of the area bounded by markers 810, 815 and 820. One embodiment includes obtaining the coordinates of each of the markers 810, 815 and 820 and then using the differences between the coordinates to obtain values representative of the lengths between the markers or to obtain a value representative of the area bounded by the markers. Still other embodiments include obtaining any one of the marker coordinates, the lengths between the markers or the area bounded by the markers automatically, using a computer application executing on the computing machine 120.
Illustrated in FIG. 5C is an example of the use of four markers 810, 815, 820 and 825 to select or otherwise identify a section of the perceived image 805. In one embodiment, the lengths extending between markers 815 and 810, markers 815 and 825, markers 815 and 820, markers 810 and 820, markers 825 and 810, and markers 825 and 820 can be measured to obtain values representative of the above-described lengths. Still other embodiments include obtaining a value representative of the areas and volumes bounded by any portion of markers 810, 815, 820 and 825. In one embodiment, values representative of the areas and volumes bounded by any portion of the markers 810, 815, 820 and 825 are obtained by obtaining the coordinates of each of the markers 810, 815, 820 and 825 and then using the differences between the coordinates to obtain values representative of: lengths between the markers 810, 815, 820 and 825; areas bounded in part by the markers 810, 815, 820 and 825; or volumes bounded in part by the markers 810, 815, 820 and 825.
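The coordinate-difference measurements described in connection with FIGS. 5A-5C can be illustrated with a short sketch that computes the length between two markers, the area of the triangle bounded by three markers, and the volume of the tetrahedron bounded by four markers, assuming Cartesian (x, y, z) marker coordinates. The marker values shown are hypothetical.

```python
# Illustrative measurements from marker coordinates.
import numpy as np

def marker_length(a, b):
    """Distance between two markers."""
    return float(np.linalg.norm(np.subtract(b, a)))

def marker_area(a, b, c):
    """Area of the triangle bounded by three markers: half the cross product magnitude."""
    ab, ac = np.subtract(b, a), np.subtract(c, a)
    return float(0.5 * np.linalg.norm(np.cross(ab, ac)))

def marker_volume(a, b, c, d):
    """Volume of the tetrahedron bounded by four markers: one sixth of the scalar triple product."""
    ab, ac, ad = np.subtract(b, a), np.subtract(c, a), np.subtract(d, a)
    return float(abs(np.dot(ab, np.cross(ac, ad))) / 6.0)

# Hypothetical coordinates for markers 810, 815, 820 and 825.
m810, m815, m820, m825 = (0, 0, 0), (3, 0, 0), (0, 4, 0), (0, 0, 5)
length = marker_length(m810, m815)              # 3.0
area = marker_area(m810, m815, m820)            # 6.0
volume = marker_volume(m810, m815, m820, m825)  # 10.0
```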
Illustrated in FIG. 6 is one embodiment of a method 901 for displaying a perceived three dimensional image. The method includes retrieving a collection of images representative of a volume (step 905), and identifying image elements within the retrieved collection of images to further generate image element values that are based on the identified image elements (step 910). A stereoscopic pair of projection images is generated, where the stereoscopic pair of projection images includes two member images, each of which is based in part on at least one image element value (step 915). The stereoscopic pair of images is further displayed on a viewing apparatus such that when the stereoscopic pair of projection images is viewed through at least a portion of the viewing apparatus, a user is able to view a three dimensional image determined by the stereoscopic pair of projection images (step 920). An input device or a software application can then be used to position a marker within the stereoscopic pair of projection images to further identify a location within the perceived three dimensional image (step 925).
Further referring to FIG. 6, and in more detail, in one embodiment, a stereo cursor is used to position a stereo marker within the stereoscopic pair of projection images (step 925). Using the stereo cursor to identify a location or section of the perceived three dimensional image requires, in one embodiment, the placement of an identical cursor icon in each member of the stereoscopic pair of projection images. Thus, the positioned stereo cursor icons will have at least one coordinate in common and differ by a single horizontal coordinate. For example, the y coordinate of the positioned stereo cursor icons will have the same coordinate value; however, the x coordinate of the positioned stereo cursor icons will have a different coordinate value. Thus, a user viewing the perceived three dimensional image will perceive the horizontal location (the location along the x axis) to have a value that is midway between the x coordinate of each of the cursors, while the perceived vertical location (the location along the y axis) will have the y coordinate value shared by each of the icons. Positioning of the stereo cursor can be controlled using an input device 125 that, when actuated by a user, can control the movement of the cursor along any of the axes within the three dimensional volume. In another embodiment, the placement of the stereo cursor within the perceived three dimensional image is controlled by a computer aided detection application executing on the computing machine 120. The computer aided detection application may, in this embodiment, be configured to mark and display all elements or regions identified as an element or region of interest by either the computer aided detection application or by a user.
Illustrated in FIG. 7 is one embodiment of a method 201 for displaying single slices from a reconstructed stack of slices created by the imaging machine 105. The method 201 includes retrieving images from the imaging machine 105 (step 205), where the images are reconstructed two dimensional images generated during scanning of an object or volumetric structure. The two dimensional images are sequentially displayed based on the user-chosen view (e.g., either the cranio-caudal (CC) view or the mediolateral oblique (MLO) view of a breast) (step 210). Either before the slices are displayed or subsequent to when the slices are displayed, the user can invert the grayscale, adjust the brightness, and adjust the contrast (step 215). A check is performed to determine if a start or stop flag is present (step 220), and when one is present, a further determination is made as to whether or not the flag is a stop flag (step 225). If the flag is a stop flag, then display of the slices stops on the current slice (step 230). If the flag is a start flag, then movement through the stack of two dimensional slices continues in a chosen direction. The direction of movement is determined (step 240), and the slices are either displayed in forward motion (step 235) or backward motion (step 245) depending on the movement direction determination.
Further referring to FIG. 7, and in more detail, in one embodiment the method 201 retrieves the two dimensional stack of images from the imaging machine 105. In other embodiments, when the method 201 is executed on the imaging machine 105, the two dimensional stack of images is already present on the machine 105 executing the method 201. In this embodiment, the method 201 can retrieve the stack of slices from a persistent or non-persistent memory location, or may process the stack of slices substantially instantaneously to provide substantially real-time data output to the viewing apparatus 115 as the slices are generated by the imaging machine 105.
In one embodiment, a user-selected view determines which view can be displayed on the viewing apparatus 115 (step 210), as well as the settings for grayscale, brightness, and contrast (step 215). A default view, in some embodiments, may be available such that when the viewing apparatus 115 displays a stack of slices, the default view (e.g., either the CC or the MLO view) can always be displayed first. Further embodiments include a viewing apparatus 115 with two bottom display units, where one display unit displays the stack of slices corresponding to one view (e.g., the CC view), and the other display unit displays the stack of slices corresponding to a different view (e.g., the MLO view). Still other embodiments allow the user to alter the contrast and brightness, and to invert the grayscale, at any point during viewing.
Embodiments of the method 201 can display the slices in cine mode, and allow the user to stop the continuous display of the slices (step 230). When stopped, the last viewed slice remains on the screen until a start flag is detected (step 220). Movement forward (step 235) and backward (step 245) through the stack can be done, in some embodiments, slice-by-slice; while in other embodiments, movement can be done more than one slice at a time. Still other embodiments of the method 201 may perform substantially similar functionality, but utilize different steps to carry out the method 201.
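A minimal sketch of the cine-mode control flow of FIG. 7 is shown below, assuming a simple state dictionary holding the current slice index, a running (start/stop) flag, and a direction of movement; the names and the one-slice step are illustrative assumptions.

```python
# Illustrative cine-mode stepping: hold the slice when stopped, otherwise
# advance in the chosen direction, clamping at the ends of the stack.
def next_slice_index(state, num_slices):
    """Advance (or hold) the displayed slice index based on start/stop state."""
    if not state.get("running", False):
        return state["index"]                     # stop flag: hold current slice
    step = 1 if state.get("direction", "forward") == "forward" else -1
    state["index"] = max(0, min(num_slices - 1, state["index"] + step))
    return state["index"]

state = {"index": 0, "running": True, "direction": "forward"}
for _ in range(5):
    current = next_slice_index(state, num_slices=40)   # 1, 2, 3, 4, 5
```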
Illustrated in FIG. 8 is a method 301 of providing a non-stereo slab viewing condition. Images are retrieved from the imaging machine 105 (step 305) and initially displayed as a single slab such that the view provided is a non-stereo projection view through the entire stack of slices, meaning the entire set of volumetric data provided by the imaging machine 105 (step 310). When this initial composite view of the entire set of volumetric data is displayed (step 330), the grayscale, brightness, and contrast can be adjusted according to user specifications (step 335). Due to the composite nature of the initial slab view, movement through the slices is not possible, and so a determination is made as to whether a tilt command is detected (step 340). If a tilt command is detected, the slab is tilted according to the change in point of view inputted by the user (step 345). Once the slab is tilted, or if no tilt command is detected (step 340), a determination is made as to whether or not the slab button was pressed (step 315). If the slab button was not pressed, then the composite slab view continues to be displayed (step 330). When the slab button is depressed, the user-defined or default slab thickness is retrieved (step 320). A projection view is then provided through a number of slices equivalent to the desired slab thickness (step 325). Once slab views are instituted (step 325), or in the event that the slab button is not pressed (step 315), the slab views or slices are displayed in sequential order (step 330). The grayscale, brightness, or contrast of the displayed slab view or slice is adjusted according to user specifications (step 335). Determinations are made as to whether or not a start or stop flag is present, and as to whether or not the user-specified direction of movement is forward or backward (step 355). A further determination is made as to whether or not a tilt command is detected (step 340), and when such a command is detected, the slab view or slice is tilted according to the change in two dimensional projection effected by user input (step 345). Once a tilt has been implemented, or a determination is made that no tilt command was detected (step 340), the method 301 recursively begins again with a determination as to whether or not the slab button was pressed (step 315).
Further referring to FIG. 8, and in more detail, in one embodiment, when a set of slices is initially displayed on the viewing apparatus 115 while in non-stereo slab viewing mode, the full set of two dimensional slices, or the entire volume, is displayed as a single slab (step 310). In this embodiment, and in other embodiments where a slab composed of individual slices is displayed, the two dimensional projection images viewable through a created slab are created using ray tracing methods that require an optimal radiographic density weighting. This radiographic density weighting can, in some embodiments, include a method of weighting radiographic densities encountered along each computed ray as the ray passes through a set of slices. In this embodiment, the method of weighting radiographic densities further includes: pixel density averaging at each slice encountered by the ray; depth-weighted averaging; and maximum intensity projection. In one embodiment, the initial display of the full set of volumetric data includes displaying more than one slab versus a single slab that includes all slices. In this embodiment, the slab width is defined by a default slab width either calculated based on optimal viewing standards or predefined. In embodiments where a default slab width is defined, the default slab width or thickness can be an empirical determination arrived at by determining the optimal thickness for detecting masses and calcification clusters.
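As a hedged illustration of the slab views described above, the sketch below either treats the entire reconstructed stack as one slab containing every slice or divides it into slabs of a default thickness, and collapses each slab into a single projection image with a simple averaging weight. The default thickness and function names are assumptions for illustration, not the empirically determined values referred to above.

```python
# Illustrative slab-view construction from a stack of reconstructed slices.
import numpy as np

def slab_views(stack, slab_thickness=None, project=None):
    """Split a stack of slices into slabs and collapse each into one image."""
    stack = np.asarray(stack, dtype=float)
    if project is None:
        project = lambda slab: slab.mean(axis=0)   # simple averaging weight
    if slab_thickness is None:
        slab_thickness = stack.shape[0]            # one slab containing every slice
    return [project(stack[start:start + slab_thickness])
            for start in range(0, stack.shape[0], slab_thickness)]

stack = np.random.rand(40, 128, 128)
whole_volume_view = slab_views(stack)                      # initial composite view
ten_slice_views = slab_views(stack, slab_thickness=10)     # assumed default thickness
```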
In one embodiment, a determination is made as to whether or not a slab button was pressed (step 315). When the slab button is pressed, the user is provided with a slab view of the slices created by displaying a projection through a number of slices, where the number of slices displayed corresponds to a user-defined or default slab thickness (step 325). Other embodiments include an input device that allows the user to select the set of slices to be displayed within a single slab.
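For the last-mentioned option, where the user selects the set of slices for a single slab, one hypothetical way to normalize that selection is sketched below; the clamping behavior is an assumption.

```python
# Hypothetical helper: the user picks a first and last slice for one slab, and the
# helper returns the half-open slice range to project; clamping is an assumption.
def user_selected_slab(first, last, num_slices):
    lo, hi = sorted((int(first), int(last)))
    lo = max(lo, 0)
    hi = min(hi, num_slices - 1)
    return lo, hi + 1   # project over volume[lo:hi + 1] as a single slab

# Example: slices 12 through 19 of a 60-slice volume become one 8-slice slab.
print(user_selected_slab(19, 12, 60))   # (12, 20)
```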
Display of the slab(s) (step 330) and movement through the slab(s) (step 355), in one embodiment, include allowing the user to move smoothly and continuously through the entire stack while each slab displays a two dimensional view of a set of slices. The movement forward or backward through the stack of slab(s) or slices (step 355) can, in some embodiments, further include deleting a previous slab/slice view and adding the next slab/slice view, where the deletion and addition can include a recalculation of the provided view so that the projection is through the next incremental set of slices. In this embodiment, the substantially instantaneous deletion and addition of the previous or next slab can conserve energy and further cause it to appear as though only a single slab view is present for viewing. Still other embodiments include movement forward or backward through the slabs where each subsequent or previous view is calculated by deleting or adding a single slice to the slab view, while in other embodiments the forward or backward movement includes deleting or adding an entire set of slices to the slab view.
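Assuming the slab projection is the pixel density average mentioned earlier, the delete-one/add-one movement can be sketched as a rolling update that subtracts the slice leaving the slab and adds the slice entering it; a maximum intensity projection would instead need a full recomputation. The function and variable names are illustrative.

```python
# Sketch of sliding an averaged slab one slice forward or backward without
# re-projecting the whole slab; the caller keeps start within [0, len(volume)-thickness].
import numpy as np

def step_average_projection(running_sum, volume, start, thickness, direction=+1):
    """Return the updated running sum, start index, and displayed projection."""
    if direction == +1:
        running_sum = running_sum - volume[start] + volume[start + thickness]
        start += 1
    else:
        running_sum = running_sum - volume[start + thickness - 1] + volume[start - 1]
        start -= 1
    return running_sum, start, running_sum / thickness

# Example: a 20-slice volume, 5-slice slab starting at slice 3, stepped forward once.
vol = np.random.rand(20, 4, 4)
total = vol[3:8].sum(axis=0)
total, start, view = step_average_projection(total, vol, start=3, thickness=5, direction=+1)
```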
In some embodiments, movement through the slab(s) or slices can be stopped and started (step 350) in response to user input indicating as much. When, in one embodiment, a user chooses to stop movement through the slab(s) or slices, the user can, while movement is stopped, perform any of the following functions: alter the thickness of the slabs (i.e., increase or decrease the thickness by either entering a thickness value or choosing the number of slices to include in a slab); alter the brightness or contrast of the images; invert the grayscale; reduce each slab to a single slice so that the display method mimics the method illustrated in FIG. 7; or alter the point of view of the slab by issuing a tilt command. In one embodiment, the thickness of a slab is altered by actuating a control for increasing thickness or a control for decreasing thickness, such that a single actuation of either control causes the thickness of a slab to change by a single slice, while continuous actuation of either control causes the thickness of a slab to change by a value representative of the length of time that the control was actuated. Other embodiments include a method 301 where issuance of a tilt command, alteration of brightness and contrast, and inversion of the grayscale can be done during movement through the stack of slab(s). Still other embodiments include a method 301 where alteration of slab thickness can be done substantially instantaneously while moving through the stack of slab(s).
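The thickness control described in this embodiment can be illustrated as below, where a brief actuation changes the slab by one slice and a sustained actuation changes it in proportion to the hold time; the tap threshold and slices-per-second rate are assumed values.

```python
# Hypothetical mapping from control actuation to a slab thickness change; the tap
# threshold and the slices-per-second rate are illustrative assumptions.
def thickness_delta(hold_seconds, tap_threshold=0.25, slices_per_second=4.0):
    if hold_seconds <= tap_threshold:                       # single actuation: one slice
        return 1
    return max(1, round(hold_seconds * slices_per_second))  # continuous actuation

def apply_thickness_change(thickness, hold_seconds, increase=True, num_slices=60):
    delta = thickness_delta(hold_seconds) * (1 if increase else -1)
    return min(max(thickness + delta, 1), num_slices)       # keep within one slice..full stack

# Example: a tap grows the slab by one slice; holding for two seconds grows it by eight.
print(apply_thickness_change(10, 0.1), apply_thickness_change(10, 2.0))
```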
Tilting a displayed slab (step 345), or altering the point of view of a displayed slab, changes the two dimensional projection of the slab along a chosen axis. Altering the point of view of a displayed slab can, in some embodiments, provide a view of the slab that shifts superimposed tissue out of the way so that the user can more easily view a region of interest. In one embodiment, tilting a slab can be limited to a predetermined range of angles about the z-axis. Other embodiments include a system where there is no limitation on movement or tilt of a slab; in such an embodiment, the slab has multiple degrees of freedom such that the slab may be freely tilted about the z-axis or any other axis of movement.
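One possible way to realize a tilt limited to a range of angles about the z-axis is sketched below as a shear of the slab before projection; the shear approximation, the ±15 degree clamp, and the geometry parameters are assumptions rather than the ray tracing of this embodiment.

```python
# Tilt sketched as a depth-proportional lateral shift of each slice before an
# averaged projection; the angle clamp and spacing parameters are assumptions.
import numpy as np

def tilted_projection(slab, tilt_deg, max_tilt_deg=15.0, slice_spacing=1.0, pixel_pitch=0.1):
    tilt_deg = float(np.clip(tilt_deg, -max_tilt_deg, max_tilt_deg))  # predetermined range
    shear = np.tan(np.radians(tilt_deg)) * slice_spacing / pixel_pitch
    # np.roll wraps at the image edges; padding would be used in a fuller implementation.
    shifted = [np.roll(s, int(round(i * shear)), axis=1) for i, s in enumerate(np.asarray(slab))]
    return np.mean(shifted, axis=0)

# Example: a 10-slice slab tilted by 5 degrees.
print(tilted_projection(np.random.rand(10, 8, 8), 5.0).shape)
```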
Illustrated in FIG. 9 is a method 501 of providing a stereo slab viewing condition. Images are retrieved from the imaging machine 105 (step 505) and displayed as a single stereo view of the entire volume. The method 501 utilizes a pair of stereo images and the viewing apparatus 115 to provide a stereo viewing mode through the entirety of the reconstructed volume made up of the two dimensional slices as retrieved from the imaging machine 105 (step 510). This initial stereo view is displayed such that the point of view passes perpendicularly through the sequentially displayed stack of slices (step 530), and the grayscale, brightness, and contrast can be adjusted according to user specifications (step 535). Due to the composite nature of the initial stereo view, movement through the stereo image is not possible, and so a determination is made as to whether a tilt command is detected (step 540). If a tilt command is detected, the point of view within the stereo image is tilted or changed according to a change in the point of view input by the user (step 545). Once the stereo view of the entire reconstructed volume is tilted, or if no tilt command is detected (step 540), a check is made to determine whether or not the slab button was pressed (step 515). If the slab button was not pressed, the entire reconstructed set of volumetric data continues to be displayed (step 530). When the slab button is depressed, the user-defined or default slab thickness is retrieved (step 520). Slabs of individual slices are then viewed according to the slab thickness (step 525). Once slabs are created (step 525), or in the event that the slab button is not pressed (step 515), the slabs, slices, or full volume of slices are displayed in sequential order (step 530). The grayscale, brightness, or contrast of the displayed slabs, slices, or full volume of slices is adjusted according to user specifications (step 535). Determinations are made as to whether or not a start or stop flag is present, and as to whether the user-specified direction of movement is forward or backward (step 555). A further determination is made as to whether or not a tilt command is detected (step 540), and when such a command is detected, the slabs, slices, or full volume of slices are tilted according to the change in two dimensional projection effected by user input (step 545). Once a tilt has been implemented, or a determination is made that no tilt command was detected (step 540), the method 501 recursively begins again with a determination as to whether or not the slab button was pressed (step 515).
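As a hedged illustration of how the stereo pair used for the initial view could be formed, the sketch below projects the same stack along two directions separated by a small angle; the six degree separation and the shear-based projection are assumptions, not the stereo pair generation performed by the imaging machine 105.

```python
# Hypothetical stereo pair: two projections through the stack at opposite half-angles.
import numpy as np

def project_at_angle(stack, angle_deg, slice_spacing=1.0, pixel_pitch=0.1):
    """Averaged projection along rays tilted by angle_deg (shear approximation)."""
    shear = np.tan(np.radians(angle_deg)) * slice_spacing / pixel_pitch
    shifted = [np.roll(s, int(round(i * shear)), axis=1) for i, s in enumerate(np.asarray(stack))]
    return np.mean(shifted, axis=0)

def stereo_pair(stack, separation_deg=6.0):
    half = separation_deg / 2.0
    return project_at_angle(stack, -half), project_at_angle(stack, +half)

# Example: left/right views of a 30-slice reconstructed volume.
left, right = stereo_pair(np.random.rand(30, 8, 8))
```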
Further referring to FIG. 9, and in more detail, in one embodiment, the method 501 illustrated in FIG. 9 is substantially similar to the method 301 illustrated in FIG. 8. Other embodiments include a method 501 of displaying a stereo slab view that is substantially different from a method 301 of displaying a non-stereo slab view.
When a determination is made indicating that the slab button was pressed (step 515), in some embodiments, a slab thickness can be retrieved (step 520). In this embodiment, the stereo slab view provides slab views of the image slices as opposed to a projection view through the entire volume. Slab views of the image slices are achieved by providing a projection view through a predetermined number of image slices, where the predetermined number of image slices corresponds to the slab thickness. Advantages of this embodiment include providing the user with the ability to more closely examine and select potentially suspect sections of the reconstructed volume. In other embodiments, when the slab thickness is a value of one, or is some other value indicating that a single slice should be displayed, the user is shown a projection view through a single slice. Still other embodiments include detecting a slab thickness value indicating that the entire set of reconstructed volumetric data should be displayed. In this embodiment, the view displayed is a projection view through the entire set of volumetric data provided by the imaging machine 105. Embodiments of the method 501 display any one of a user-selected slab, slice, or full volume sequentially (step 530), and in stereo. In embodiments where the user indicates that a single slice should be displayed, no depth information is available because only a single image slice is shown.
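The way a retrieved thickness value selects what is projected in this embodiment can be illustrated as below; treating zero or None as a "full volume" sentinel is an assumed convention, not taken from the source.

```python
# Hypothetical resolution of a slab thickness value (step 520) into a slice range:
# 1 selects a single slice, an assumed sentinel (0 or None) or an oversized value
# selects the full reconstructed volume, anything else selects a slab of that size.
def slices_for_view(thickness, current_slice, num_slices):
    if thickness in (None, 0) or thickness >= num_slices:
        return 0, num_slices                                 # full volume
    if thickness == 1:
        return current_slice, current_slice + 1              # single slice, no depth cue
    start = min(max(current_slice, 0), num_slices - thickness)
    return start, start + thickness                          # slab of `thickness` slices

# Example: thickness 1, 10, and 0 against a 60-slice volume at slice 25.
print(slices_for_view(1, 25, 60), slices_for_view(10, 25, 60), slices_for_view(0, 25, 60))
```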
In one embodiment, a detected tilt command (step 540) or change in point of view can be input via the input device 125, which is connected to the computing machine 120. In some embodiments the input device 125 has a limited number of degrees of freedom, and so alteration of the point of view within stereo mode involves moving the point of view to other points within the displayed image. In other embodiments, where the input device 125 has multiple degrees of freedom, an alteration of the point of view within stereo mode occurs substantially instantaneously in response to an alteration of the position of the input device 125.
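A rough way to picture the difference between the two input devices described above is sketched below; the step size and gain are assumed parameters and the mapping itself is only illustrative.

```python
# Hypothetical mapping from an input device reading to a change in point of view:
# a device with few degrees of freedom steps between discrete viewing points, while
# a multi-degree-of-freedom device maps its motion continuously to tilt angles.
def point_of_view_change(dx, dy, continuous=True, step_deg=5.0, gain=0.1):
    if continuous:                                      # multi-degree-of-freedom device
        return dx * gain, dy * gain                     # (tilt about y, tilt about x) in degrees
    return step_deg * round(dx), step_deg * round(dy)   # limited device: quantized steps

# Example: the same device motion interpreted continuously and as discrete steps.
print(point_of_view_change(2.0, -1.0), point_of_view_change(2.0, -1.0, continuous=False))
```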
The above disclosure includes one or more embodiments of the methods and systems herein described. These illustrated and explained embodiments are not meant to be limiting and additional embodiments of the methods and systems herein described may be implemented using similar or functionally equivalent technology.