This application claims priority to U.S. Provisional Application No. 61/901,279, filed on November 7, 2013, entitled "Intra-Abdominal Lightfield 3D Camera and Method of Making the Same".
Detailed description of the invention
Embodiments of the invention are described below with reference to the accompanying drawings. Elements and features described in one drawing or embodiment of the invention may be combined with elements and features shown in one or more other drawings or embodiments. It should be noted that, for purposes of clarity, representations and descriptions of components and processes that are unrelated to the invention, or that are well known to those of ordinary skill in the art, have been omitted from the drawings and the description.
First embodiment
Figure 1 shows the structure of the first embodiment of the three-dimensional endoscope of the present invention.
In the present embodiment, the three-dimensional endoscope comprises an imaging unit 100 and a control unit 105.
The imaging unit 100 comprises a housing 103, and an imaging sensor array and an illuminator 101 located within the housing 103. The imaging sensor array comprises a plurality of imaging sensors 102 for acquiring two-dimensional images of a target object 108 under illumination provided by the illuminator.
The control unit 105 synthesizes a three-dimensional image of the target object based on the two-dimensional images of the target object 108 acquired by each imaging sensor 102.
In use, the imaging unit 100 may be placed inside the patient's body (for example, in the peritoneal cavity), while the control unit 105 may be placed outside the patient's body.
Figure 2 is a schematic diagram of the imaging principle of the three-dimensional endoscope of the present invention.
The complete three-dimensional information of the target object 108 (that is, all visible light ray information) can be described by a light field. In computational light field theory, a light field can usually be expressed by a series of two-dimensional images taken from different viewpoints. The images captured by the imaging sensor array 102 contain a rich set of light rays; these rays constitute a partial light field produced by the target object 108. In Figure 2, the light field is represented as a stack of multiple two-dimensional images obtained by the light field three-dimensional endoscope. The light field provides full-resolution two- and three-dimensional images, facilitating three-dimensional surface reconstruction, three-dimensional measurement, and free-viewpoint three-dimensional display. By processing the captured rays, three-dimensional surface reconstruction, rendering, and three-dimensional image generation can be performed.
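The "stack of two-dimensional images" view of a light field can be modeled as a simple multi-dimensional array indexed by sensor position and pixel coordinates. The following is an illustrative sketch only, not part of the original disclosure; the names, shapes, and sensor count are assumptions chosen for the example:

```python
import numpy as np

# Hypothetical sketch: a light field as a stack of 2D views, one view
# per imaging sensor in the array (shapes are illustrative assumptions).
NUM_SENSORS, HEIGHT, WIDTH = 8, 492, 672

# Each sensor contributes one 2D view; stacking them along a new axis
# yields a partial light field L[s, y, x] sampled at the sensor positions.
views = [np.zeros((HEIGHT, WIDTH), dtype=np.uint8) for _ in range(NUM_SENSORS)]
light_field = np.stack(views, axis=0)

print(light_field.shape)  # (8, 492, 672)
```

Downstream processing (surface reconstruction, free-viewpoint rendering) would then operate on this stacked representation rather than on any single view.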
As one embodiment, the imaging sensors 102 may comprise charge-coupled devices (CCD) or complementary metal-oxide-semiconductor (CMOS) sensors. Either analog or digital CCD/CMOS sensor modules may be used. For example, a CMOS chip from OmniVision may be selected; this chip has an image resolution of 672 x 492 pixels, an image area of 4.032 mm x 2.952 mm, and a pixel size of 6 x 6 μm. The imaging sensors 102 may use high-quality miniature optical lenses to obtain a suitable field of view (FOV) (for example, a 120-degree field of view).
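The example sensor specification above is internally consistent: the image area equals the pixel count times the pixel pitch. A quick check, for illustration only:

```python
# Consistency check of the quoted CMOS sensor specification:
# 672 x 492 pixels at a 6 um pixel pitch.
pixels_h, pixels_v = 672, 492   # image resolution in pixels
pitch_mm = 6e-3                 # 6 um pixel pitch, in millimeters

area_h = pixels_h * pitch_mm    # horizontal image area in mm
area_v = pixels_v * pitch_mm    # vertical image area in mm

# Matches the quoted image area of 4.032 mm x 2.952 mm.
print(area_h, area_v)
```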
In the present embodiment, the geometric positions of the imaging sensors 102 may be arbitrary, but they should be known or obtainable by calibration techniques. For example, the imaging sensors 102 in the sensor array may be arranged linearly.
The imaging sensors 102 in the sensor array may be identical, or may have different optical, mechanical, and/or electronic characteristics. For example, the sensors may have different focal lengths, fields of view, spectral ranges, pixel resolutions, or any other performance specifications. Either image or non-image signals may be obtained from these sensors.
As one embodiment, the illuminator 101 may be an LED; other means capable of providing suitable illumination (such as optical fiber) may also be used. For example, a miniature LED produced by Nichia may be used. The brightness of the LED is controllable.
Conventional two-dimensional laparoscopes and/or endoscopes provide only two-dimensional images, without three-dimensional depth cues. Conventional stereo endoscopes, such as those used in the da Vinci robot, can only provide two images of the target scene from slightly different viewpoints. The shortcomings of conventional stereo endoscopes include:
(1) the stereo image can only be viewed through special glasses, or at a specially designed viewing console that completely separates the surgeon from the operating room environment;
(2) occlusion can affect accurate three-dimensional reconstruction and measurement of the scene;
(3) the observer cannot freely move the sensor to change the viewing angle on the target, which is difficult to do during minimally invasive surgery;
(4) because a sufficient number of views cannot be obtained, conventional stereo endoscopes are poorly suited to large-screen, look-around, glasses-free (autostereoscopic), and interactive three-dimensional display.
With its multiple high-resolution imaging sensors, the three-dimensional endoscope of the present invention overcomes the above shortcomings of conventional stereo endoscopes.
In one embodiment, the three-dimensional endoscope may also comprise a flexible cable 104.
The flexible cable 104 is connected between the control unit 105 and the imaging unit 100, and is used to supply electric power to the imaging unit 100 and to transfer the plurality of two-dimensional images acquired by the imaging sensor array to the control unit 105.
Because a flexible cable 104 is used, the rigid shaft of conventional laparoscopes and endoscopes is eliminated, saving valuable space for other surgical instruments inserted through the wound and avoiding instrument collisions. In addition, the imaging unit 100 can be placed anywhere in the peritoneal cavity without being constrained by any shaft. Typically, the light field three-dimensional endoscope 100 can be placed near the surgical site to obtain images with a wider field of view; even when far from the wound, tunnel vision and oblique viewing angles can still be avoided.
Second embodiment
Figure 3 shows the three-dimensional endoscope of the second embodiment of the present invention.
In the present embodiment, the illuminator 101 comprises a structured light projection unit 110.
The structured light projection unit 110 is used to generate a structured texture on the surface of the target object. Each imaging sensor 102 in the imaging sensor array acquires a two-dimensional image of the structured texture and transfers it to the control unit 105. The control unit 105 performs three-dimensional reconstruction of the target object based on the plurality of two-dimensional images of the structured texture.
In Figure 3, the structured light projection unit 110 produces a spatially varying structured texture 111 on the surface of the target 108. Structured light is a well-known three-dimensional surface imaging technique. In the present invention, structured illumination is applied to the three-dimensional endoscope.
With the structured texture 111 projected by the structured light projection unit 110, surface features of the target object can be easily distinguished. Reliable three-dimensional surface reconstruction can be carried out based on multi-view three-dimensional reconstruction. Such computation does not require calibration of the geometric position/orientation of the structured light projection unit 110. The projected surface texture enhances the surface features of the target, thereby improving the quality and reliability of the three-dimensional reconstruction result.
Three-dimensional surface reconstruction can also be carried out using structured light projection from a calibrated projector. In this case, the geometric information (position/orientation) of the structured light projection can be derived from the calibration information. Without loss of generality, Figure 4 shows an example of such a system with only one imaging sensor; the principle can be extended to multiple imaging sensors and/or multiple structured light projection systems. The geometric relationship between the imaging sensor, the structured light projection unit, and a point on the object surface can be expressed by the triangulation principle:
The key to triangulation-based three-dimensional imaging is a technique for identifying a single projected light spot in the image taken under a two-dimensional projection pattern. A structured light stripe illumination pattern provides a simple mechanism for establishing this correspondence. Given the baseline B and the two angles α and β, the three-dimensional distance R to a surface point P of the target object can be accurately calculated. Here, the baseline B is the distance between the optical centers of the structured light projection unit 110 and the imaging sensor 102; α is the angle between the baseline and the line from the surface point P of the target object to the optical center of the imaging sensor 102; and β is the angle between the baseline and the line from the surface point P of the target object to the optical center of the structured light projection unit 110.
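The triangulation relation referred to above (the equation itself does not survive in this text) is, in its usual form, R = B·sin β / sin(α + β), which follows from the law of sines in the triangle formed by the two optical centers and the point P. A minimal sketch under that assumption, with all names chosen for illustration:

```python
import math

def triangulate_range(B, alpha, beta):
    """Distance R from the sensor's optical center to surface point P.

    B     : baseline between the sensor and projector optical centers
    alpha : angle at the sensor between the baseline and the ray to P
    beta  : angle at the projector between the baseline and the ray to P

    By the law of sines (the angle at P is pi - alpha - beta):
        R / sin(beta) = B / sin(alpha + beta)
    """
    return B * math.sin(beta) / math.sin(alpha + beta)

# Equilateral check: B = 1 and alpha = beta = 60 degrees make the
# triangle equilateral, so R must equal the baseline.
R = triangulate_range(1.0, math.radians(60), math.radians(60))
print(round(R, 6))  # 1.0
```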
The structured light projection unit 110 can be designed in various forms. Figure 5 shows one typical embodiment.
As shown in Figure 5, in the present embodiment the structured light projection unit comprises a light source 201, a pattern screen 202, and an objective lens 203. The light source 201 and the objective lens 203 are located on opposite sides of the pattern screen 202. The light source 201 provides illumination for the pattern screen 202. The pattern screen 202 has a predetermined pattern. The objective lens 203 projects the light emitted by the light source 201 and passing through the pattern screen 202 onto the surface of the target object, so that a structured texture is generated on the surface of the target object.
The light source 201 may be an incoherent light source such as an LED or a fiber-optic illuminator. The pattern of the pattern screen 202 is designed based on a (single-shot) structured light principle. The objective lens is a multi-element optical system capable of producing high-quality image projection.
The light source 201 may also be coherent, such as a laser. The pattern screen 202 may be a diffractive optical element (DOE) designed with a certain diffraction pattern, which can serve as the structured illumination pattern. The structured light projection unit may be formed from a miniature diffractive optical element (DOE) capable of diffracting light emitted from the light source, a GRIN collimating lens, and a single-mode fiber. The projected pattern provides unique markers on the target surface. The three-dimensional surface profile can then be obtained by applying triangulation (as shown in Figure 4).
Third embodiment
As shown in Figure 6, in the present embodiment the imaging sensor array comprises a plurality of imaging sensors (302-305) with different spectral and polarization characteristics.
Among the plurality of imaging sensors of the light field three-dimensional endoscope, some sensors may be configured to acquire images in different wavebands and with different polarization directions. For example, one or more imaging sensors may be made to capture light only within a certain spectral range by adding a narrow-band filter, thereby enhancing imaging contrast (signal-to-noise ratio). Polarization imaging can suppress the effect of surface reflections on image quality.
Spectral imaging and polarization imaging are completely independent imaging modes. The two can be used simultaneously or separately, according to the demands of a specific application.
The imaging sensor array 301 shown in Figure 6 has eight optical channels, each with its own unique spectral and polarization characteristics. They are used to obtain multispectral composite images of the surface and subsurface of the target object, from which the three-dimensional surface profile of the target object is reconstructed.
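One way the eight spectral/polarization channels could be combined into a composite image is simple weighted fusion after per-channel normalization. The sketch below is illustrative only; it is not the patent's actual processing, and all function and parameter names are assumptions:

```python
import numpy as np

def fuse_channels(channels, weights=None):
    """Fuse per-channel images (each H x W) into one composite image.

    channels : list of 2D arrays, one per optical channel
    weights  : optional per-channel weights (default: uniform)

    Each channel is min-max normalized before the weighted sum, so
    channels with different dynamic ranges contribute comparably.
    """
    stack = np.stack([c.astype(float) for c in channels], axis=0)
    lo = stack.min(axis=(1, 2), keepdims=True)
    hi = stack.max(axis=(1, 2), keepdims=True)
    norm = (stack - lo) / np.maximum(hi - lo, 1e-12)
    if weights is None:
        weights = np.full(len(channels), 1.0 / len(channels))
    # Weighted sum over the channel axis -> one H x W composite.
    return np.tensordot(weights, norm, axes=1)

# Eight synthetic 4x4 channels -> one 4x4 composite in [0, 1].
channels = [np.random.rand(4, 4) for _ in range(8)]
composite = fuse_channels(channels)
print(composite.shape)  # (4, 4)
```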
Fourth embodiment
As shown in Figure 7, in the present embodiment the imaging sensor array comprises two imaging sensors.
The two imaging sensors may be located at the two ends of the housing, respectively, for acquiring two-dimensional images of the target object from a left viewpoint and a right viewpoint.
In the present embodiment, a pair of imaging sensors is used to simulate human-like binocular vision in acquiring images of the target object, thereby obtaining three-dimensional information about the target object's surface. A matching algorithm can achieve exact matching of the same surface point P in the two images. The geometric relationship between the two imaging sensors and the object surface point P can be expressed by the triangulation principle:
Here the baseline B is the line connecting the optical centers of the two imaging sensors, R is the line connecting the optical center of one of the imaging sensors and the surface point P of the target object, α is the angle between R and B, and β is the angle between B and the line connecting the optical center of the other sensor and P. The coordinates (x, y, z) of the point P can be accurately calculated from R, α, and β.
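Under a planar version of the geometry described above (baseline along the x-axis, both viewing rays in the x-z plane), the coordinates of P follow from intersecting the two rays. This is a sketch under those stated assumptions, not the patent's algorithm; the out-of-plane y coordinate, which would come from the epipolar row, is omitted:

```python
import math

def stereo_point(B, alpha, beta):
    """Intersect the two viewing rays to recover P = (x, z).

    Left sensor at the origin, right sensor at (B, 0); alpha and beta
    are the angles each ray makes with the baseline. The rays satisfy
        z = x * tan(alpha)   and   z = (B - x) * tan(beta),
    which solve to the expressions below.
    """
    ta, tb = math.tan(alpha), math.tan(beta)
    x = B * tb / (ta + tb)
    z = B * ta * tb / (ta + tb)
    return x, z

# Symmetric case: alpha = beta = 60 degrees, B = 1, so P sits midway
# along the baseline at depth tan(60)/2 ~ 0.866.
x, z = stereo_point(1.0, math.radians(60), math.radians(60))
print(round(x, 3), round(z, 3))  # 0.5 0.866
```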
Fifth embodiment
Figure 8 shows the structure of the fifth embodiment of the three-dimensional endoscope of the present invention.
In the present embodiment, the three-dimensional endoscope comprises a first wireless communication link module 307 and a second wireless communication link module (not shown).
The first wireless communication link module 307 is placed in the imaging unit 300, and the second wireless communication link module is placed in the control unit 305.
The first wireless communication link module 307 is used to transfer the plurality of two-dimensional images acquired by the imaging sensor array to the second wireless communication link module.
The imaging unit 300 also comprises a battery pack 304 for powering the imaging unit 300. The battery pack 304 may be a miniature battery of any type, such as a lithium battery, as long as its capacity is sufficient to maintain normal operation of the three-dimensional endoscope. The first wireless communication link module 307 and the second wireless communication link module support high-speed, multi-channel image data transmission.
The present embodiment requires no communication or power supply lines; with the connecting cable removed, the use of various instruments during clinical surgery is further facilitated.
Sixth embodiment
Figure 9 shows the structure of the sixth embodiment of the three-dimensional endoscope of the present invention.
In the present embodiment, the three-dimensional endoscope also comprises a magnetic guide apparatus 401 and a magnetic guidance controller 400.
The magnetic guide apparatus 401 is mounted on the imaging unit 100 and, under the control of the magnetic guidance controller, drives the imaging unit to translate and/or rotate.
In clinical surgery, the magnetic guide apparatus 401 and the imaging unit 100 can be placed in the peritoneal cavity of the patient, while the magnetic guidance controller 400 can be placed outside the abdominal cavity; the magnetic guide apparatus 401 and the imaging unit 100 are held against the inner wall of the peritoneum by magnetic force.
As one embodiment, the magnetic guidance controller 400 comprises a pair of magnets 402 for producing magnetic attraction on the magnetic guide apparatus 401. The magnets 402 can produce sufficient magnetic force to drag the imaging unit 100 and the magnetic guide apparatus 401 to a specified position and orientation.
Preferably, the magnetic guidance controller 400 may also comprise a shaft rotating device 403. The shaft rotating device 403 may be mounted on the magnets 402. The operator can manually (or electronically) control the axial rotation of the magnets 402. Rotation of the magnets 402 changes the direction of the magnetic field, which drives the rotation of the magnetic guide apparatus 401 and in turn causes the imaging unit 100 to rotate.
Preferably, the magnetic guidance controller 400 may also comprise a handle 404, so that the magnetic guidance controller 400 can be operated more safely and conveniently.
In addition, the three-dimensional endoscope in any of the first to sixth embodiments above may also comprise a display device.
The display device is connected to the control unit and is used to display the three-dimensional image of the target object generated by the control unit.
Three-dimensional imaging method
Figure 10 shows the flowchart of one embodiment of a three-dimensional imaging method based on the three-dimensional endoscope described above.
In the present embodiment, the three-dimensional imaging method comprises:
S10: the plurality of imaging sensors in the imaging sensor array acquire two-dimensional images of the target object under illumination provided by the illuminator;
S20: the control unit synthesizes a three-dimensional image of the target object based on the two-dimensional images of the target object acquired by each imaging sensor.
Optionally, the three-dimensional imaging method may also comprise:
S30: the flexible cable connected between the control unit and the imaging unit supplies electric power to the imaging unit, and transfers the plurality of two-dimensional images acquired by the imaging sensor array to the control unit.
In one embodiment, S10 may specifically comprise:
S11: the structured light projection unit in the illuminator generates a structured texture on the surface of the target object;
S12: each imaging sensor in the imaging sensor array acquires a two-dimensional image of the structured texture and transfers it to the control unit.
S20 may specifically comprise:
S21: the control unit performs three-dimensional reconstruction of the target object based on the plurality of two-dimensional images of the structured texture.
In one embodiment, S11 may specifically comprise:
S111: the light source provides illumination for the pattern screen;
S112: the pattern screen provides a predetermined pattern;
S113: the objective lens projects the light emitted by the light source and passing through the pattern screen onto the surface of the target object, so that a structured texture is generated on the surface of the target object.
In another embodiment, S10 may specifically comprise:
S12: the plurality of imaging sensors in the imaging sensor array acquire two-dimensional images of the target object with different spectral and polarization characteristics.
In another embodiment, S10 may specifically comprise:
S13: the imaging sensor array comprises two imaging sensors; the two imaging sensors in the imaging sensor array acquire two-dimensional images of the target object from a left viewpoint and a right viewpoint, respectively.
Optionally, the three-dimensional imaging method may also comprise:
S40: the first wireless communication link module placed in the imaging unit transfers the plurality of two-dimensional images acquired by the imaging sensor array to the second wireless communication link module placed in the control unit.
Optionally, the three-dimensional imaging method may also comprise:
S50: the magnetic guide apparatus mounted on the housing of the imaging unit, under the control of the magnetic guidance controller, drives the housing and the imaging sensor array and illuminator in the housing to translate and/or rotate.
In one embodiment, the three-dimensional imaging method may also comprise:
S60: the display device connected to the control unit displays the three-dimensional image of the target object generated by the control unit.
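Taken together, the steps above amount to an acquire-then-synthesize pipeline (S10 acquisition, S20 synthesis, S60 display). The skeleton below sketches only that control flow with stub functions; every name is an assumption for illustration, and the averaging stand-in is not the actual reconstruction performed by the control unit:

```python
import numpy as np

def acquire_views(num_sensors=4, height=8, width=8):
    """S10 (stub): each imaging sensor acquires one 2D image."""
    return [np.random.rand(height, width) for _ in range(num_sensors)]

def synthesize_depth(views):
    """S20 (stub): reduce the view stack to one per-pixel map.

    A real control unit would run multi-view or structured light
    reconstruction here; averaging is only a placeholder that keeps
    the data flow (many 2D views in, one map out) visible.
    """
    return np.mean(np.stack(views, axis=0), axis=0)

def display(depth_map):
    """S60 (stub): hand the result to a display device."""
    print("depth map", depth_map.shape)

views = acquire_views()          # S10: acquisition
depth = synthesize_depth(views)  # S20: synthesis
display(depth)                   # S60: display
```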
The three-dimensional endoscope and three-dimensional imaging method of the present invention have the following advantages:
(1) they solve the tunnel vision and rotated viewing angle problems common to existing laparoscopes, thereby providing a surgical scene with an appropriate viewing angle and an unobstructed full field of view;
(2) they avoid continued occupation of the surgical wound;
(3) the three-dimensional endoscope of the present invention can be placed near the surgical site and correctly oriented in space; with its three-dimensional imaging and processing capabilities, it can provide the surgeon with real-time images having the correct orientation and an appropriate viewing angle;
(4) they provide three-dimensional depth cues: the three-dimensional endoscope and three-dimensional imaging method can provide real-time three-dimensional depth maps with high-resolution texture information, and can therefore give the surgeon enhanced three-dimensional visual feedback for manipulation, localization, and surgery;
(5) dimensional measurement of surgical targets: by virtue of the unique three-dimensional imaging capability, quantitative three-dimensional measurement of objects in the surgical scene can be provided;
(6) image-guided intervention (IGI): the generated three-dimensional images allow easy and accurate registration between preoperative CT/MRI data and in-vivo three-dimensional surface data, enabling image-guided intervention;
(7) the generated three-dimensional images can be viewed and interpreted by the surgeon without wearing any special glasses.
Some embodiments of the present invention have been described in detail above. As can be understood by one of ordinary skill in the art, all or any of the steps or components of the method and apparatus of the present invention can be implemented in hardware, firmware, software, or a combination thereof, in any computing device (including processors, storage media, etc.) or network of computing devices. This can be accomplished by those of ordinary skill in the art using their basic programming skills upon understanding the content of the present invention, and therefore need not be described in detail here.
In addition, it is evident that, where possible peripheral operations are involved in the above description, any display device and any input device connected to any computing device, together with the corresponding interfaces and control programs, will undoubtedly be used. In general, the related hardware and software in a computer, computer system, or computer network, and the hardware, firmware, software, or combination thereof that implements the various operations of the foregoing methods of the present invention, constitute the apparatus of the present invention and its components.
Therefore, based on the above understanding, the object of the present invention can also be achieved by running a program, or a group of programs, on any information processing device, which may be a well-known general-purpose device. Accordingly, the object of the present invention can also be achieved merely by providing a program product containing program code that implements the described method or apparatus. That is, such a program product also constitutes the present invention, and a medium that stores or transmits such a program product also constitutes the present invention. Obviously, the storage or transmission medium may be any type of storage or transmission medium known to those skilled in the art or developed in the future, so it is not necessary to enumerate the various storage or transmission media here.
In the apparatus and method of the present invention, it is obvious that the components or steps may be decomposed, combined, and/or recombined after decomposition. These decompositions and/or recombinations should be regarded as equivalents of the present invention. It should also be pointed out that the steps of the above series of processes can naturally be performed in the chronological order described, but need not necessarily be performed in that order; some steps can be performed in parallel or independently of one another. Meanwhile, in the above description of specific embodiments of the present invention, a feature described and/or illustrated for one embodiment can be used in one or more other embodiments in the same or a similar manner, combined with features in other embodiments, or substituted for features in other embodiments.
It should be emphasized that the term "comprises/comprising", when used herein, refers to the presence of a feature, element, step, or component, but does not exclude the presence or addition of one or more other features, elements, steps, or components.
Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made without departing from the spirit and scope of the present invention as defined by the appended claims. Moreover, the scope of the present application is not limited to the specific embodiments of the processes, apparatus, means, methods, and steps described in the specification. One of ordinary skill in the art will readily appreciate from the disclosure of the present invention that processes, apparatus, means, methods, or steps, presently existing or to be developed in the future, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, apparatus, means, methods, or steps.