Summary of the Invention
To solve the above technical problems, the present invention proposes an image display method, device, and system for endoscopic minimally invasive surgery navigation.
An embodiment of the present invention provides an image display method for endoscopic minimally invasive surgery navigation, comprising the following steps:
S1: acquiring CT images and acquiring endoscopic images in real time;
S2: acquiring the position and orientation of the endoscope tip in real time;
S3: performing cube clipping on the CT images according to the position and orientation of the endoscope tip, to obtain the clipped cube volume data;
S4: performing differentiated rendering on the clipped cube volume data based on a distance-weighted ray casting method, to obtain the rendered cube volume data;
S5: performing virtual-real fusion of the rendered cube volume data and the endoscopic images, to obtain a virtual-real fusion image, and displaying it.
Optionally, before step S3, the method further comprises:
S6: performing 3D segmentation of predetermined critical anatomical structures in the CT images based on region growing and fast marching methods, and labeling the 3D-segmented critical anatomical structures.
Optionally, after step S6, the method further comprises:
S7: performing color mapping on the critical anatomical structures obtained by 3D segmentation.
Optionally, before step S3, the method further comprises:
S8: performing registration between the CT images and the patient's pose, to obtain registered CT images for the cube clipping of step S3.
Optionally, after step S8, the method further comprises:
S9: according to the registered CT images and the position and orientation of the endoscope tip, obtaining the relative position between the endoscope and the patient's body and the distance between the endoscope and the surgical target, and displaying them.
Optionally, before step S5, the method further comprises:
S10: applying to the endoscopic images a transparency mapping based on distance from the image center, and applying edge attenuation processing to the transparency-mapped endoscopic images, so that the edge-attenuated endoscopic images are fused with the clipped cube volume data.
Optionally, before step S10, the method further comprises:
S11: performing distortion correction on the endoscopic images.
Optionally, step S2 specifically comprises: obtaining, by an optical tracking device, the positions of predetermined marker points on the surgical tool of the endoscope, and calculating the position and orientation of the endoscope tip from the positions of the predetermined marker points; wherein the endoscope tip is the end inserted into the patient's body.
Optionally, step S3 specifically comprises:
performing cube clipping on the CT images according to the position and orientation of the endoscope; wherein, in the cube clipped from the CT images, one edge of the cube is formed by taking the endoscope's axial direction as the depth direction with depth d, starting from the focal plane of the endoscope; meanwhile, the other two edges m and n of the cube are set according to the size of the display range.
An embodiment of the present invention provides an image display device for endoscopic minimally invasive surgery navigation, comprising a display screen, a processor, and a data interface; wherein the data interface is used to connect the endoscope and the CT equipment, to acquire the endoscopic images and the preoperative CT images; the processor is used to execute the image display method for endoscopic minimally invasive surgery navigation of any one of the above embodiments, to obtain the virtual-real fusion image; and the display screen is used to display the virtual-real fusion image obtained by the processor.
Optionally, the processor comprises a CPU processing unit and a GPU processing unit, wherein the CPU processing unit is used for the registration of the CT images with the patient's pose and the 3D segmentation of the critical anatomical structures, and the GPU processing unit is used for the cube clipping of the CT images, the distance-weighted rendering of the cube volume data, and the edge attenuation processing of the endoscopic images.
Optionally, the processor is further configured to obtain, according to the real-time position of the endoscope, the corresponding virtual-real fusion image and the relative position view of the endoscope and the human body, and to update them to the display screen for display.
An embodiment of the present invention provides an endoscopic minimally invasive surgery navigation system, comprising a computer device and an optical tracking device. The optical tracking device is used to obtain in real time the position of the endoscopic surgery instrument and to track the patient's pose. The computer device is used to obtain the endoscopic images and the CT images and, in combination with the position information tracked by the optical tracking device, to process the endoscopic images and the CT images using the image display method of any one of the above embodiments, thereby obtaining and displaying the virtual-real fusion image of the endoscopic images and the CT images.
Optionally, the computer device comprises the image display device of any one of the above embodiments.
Optionally, the endoscopic minimally invasive surgery navigation system is applied to surgical navigation for nasal and paranasal sinus malignant tumor surgery and skull base tumor surgery.
In summary, compared with conventional endoscopic surgery navigation display methods, the above image display method for endoscopic minimally invasive surgery navigation has the following advantages:
(1) The virtual-real fusion image of the embodiments of the present invention not only displays the image detected by the endoscope in real time, but also applies distance-weighted differentiated rendering to the clipped cube volume data, which reduces computational complexity and accelerates rendering while providing more accurate depth perception, more effectively conveying the relative relations between anatomical structures, enabling the doctor to judge occlusion and front-back relations of anatomical structures more clearly, and providing the doctor with more accurate diagnosis and treatment assistance;
(2) The endoscopic images are processed with a Gaussian edge attenuation algorithm, achieving a seamless, visually smooth transition in the virtual-real fusion of the endoscopic images and the CT images; structures visible to the naked eye in the endoscopic images are matched and transitioned to the reconstructed structures in a well-formed manner; compared with conventional endoscopic images, the periphery is expanded and more structural information can be displayed; in addition, lesion information behind the endoscopic image can be shown in the same view, significantly improving the prompting effect of real-time images in surgical navigation;
(3) The layered virtual-real fusion rendering realizes augmented reality guidance both within and beyond the endoscope's field of view, and a positioning cube is used for the cut-open display and rendering of the region, which changes with the position and orientation of the endoscope, improving distance perception and scene immersion.
Detailed Description of the Embodiments
Embodiments of the present invention are described below with reference to the accompanying drawings. Elements and features described in one drawing or embodiment of the present invention may be combined with elements and features shown in one or more other drawings or embodiments. It should be noted that, for the sake of clarity, the drawings and description omit the representation and description of parts and processes that are unrelated to the present invention and known to those of ordinary skill in the art.
The present invention is described further below in conjunction with the accompanying drawings.
An embodiment of the present invention provides an image display method in an endoscopic minimally invasive surgery navigation procedure. The endoscopic minimally invasive surgery includes, but is not limited to, nasal and paranasal sinus malignant tumor surgery, skull base tumor surgery, etc., and may of course also include other surgeries that use an endoscope.
Specifically, referring to Fig. 1, Fig. 1 shows the image display method in the endoscopic minimally invasive surgery navigation procedure of an embodiment of the present invention. The method specifically relates to augmented reality through virtual-real fusion of CT image navigation with endoscopic images, and comprises the following steps:
S101: acquiring endoscopic images and CT images;
The probe lens of the endoscope is inserted into the patient's body to acquire the endoscopic images. Since the endoscope moves during surgery, the endoscopic images are acquired in real time. A preoperative scan of a predetermined site of the patient, such as the head, is performed with CT equipment to acquire the preoperative CT images; the CT images are three-dimensional.
S102: acquiring the position and orientation of the endoscope tip;
The endoscope tip is the end of the endoscope inserted into the patient's body, i.e., the probe lens of the endoscope. Since the tip is inside the patient's body, its position and orientation are difficult to obtain directly; they are therefore obtained by transformation from the position of the endoscopic surgery tool outside the body. As shown in Fig. 2, the surgical tool 300 of the endoscope is provided with 4 marker points, which are tracked and monitored by the optical tracking device 200 to obtain their position information. The coordinate transformation between the two data sets can be registered by the following equation:
p_CT = R · p_OT + T
In the above formula, p_CT denotes the coordinates of a point in the CT data coordinate system, p_OT denotes the coordinates of the corresponding point in the optical tracking device coordinate system, and R and T are the rotation matrix and the translation vector, respectively. From the position information of the 4 marker points, R and T can be calculated using the DLT (Direct Linear Transform) algorithm. In addition, the position and orientation of the endoscope tip are obtained in real time, so as to track position changes of the endoscope promptly and facilitate the subsequent image updates.
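For illustration, a minimal sketch of this marker-based registration, assuming paired marker coordinates are available in both coordinate systems; a standard SVD-based least-squares fit is shown in place of the DLT computation named above, and all names are illustrative:

```python
import numpy as np

def rigid_fit(p_ot, p_ct):
    """Least-squares R, T such that p_ct ≈ R @ p_ot + T (Kabsch/SVD method).

    p_ot : (k, 3) marker coordinates in the optical tracker coordinate system
    p_ct : (k, 3) the same markers in the CT data coordinate system
    """
    c_ot, c_ct = p_ot.mean(axis=0), p_ct.mean(axis=0)
    H = (p_ot - c_ot).T @ (p_ct - c_ct)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # proper rotation, det(R) = +1
    T = c_ct - R @ c_ot
    return R, T
```

Once the tracked marker positions are mapped into CT space, the known geometric offset from the markers to the tip gives the tip position and viewing direction.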
S103: performing cube clipping on the CT images according to the position of the endoscope tip, to obtain the clipped cube volume data;
According to the position information of the endoscope tip, the parameters of the clipping cube are determined, and a cube built from these parameters is used to clip the CT images, yielding the cube volume data. In one embodiment, referring to Fig. 3, the clipping cube parameters are specifically: within the space O_CT formed by the CT images, starting from the focal plane O_V of the endoscope, one edge of the cube is formed along the endoscope's axial direction as the depth direction, with length d; meanwhile, the other two edges m and n of the cube are set according to the size of the endoscope display range. A cube is then built from the determined parameters (i.e., the focal plane O_V and the three edges), and clipping the CT images by this cube yields the clipped cube volume data.
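For illustration, a minimal sketch of this geometry: the eight corners of the clipping cube computed from the tracked tip pose (the function name, the auxiliary up-vector, and the axis convention are assumptions, not from the patent):

```python
import numpy as np

def clipping_cube_corners(focal_center, view_dir, up, d, m, n):
    """Corners of the clipping cube anchored at the endoscope focal plane.

    focal_center : (3,) center of the endoscope focal plane O_V in CT space
    view_dir     : (3,) endoscope axial (depth) direction
    up           : (3,) auxiliary vector, assumed not parallel to view_dir
    d, m, n      : cube edge lengths (depth, width, height)
    """
    w = view_dir / np.linalg.norm(view_dir)      # depth axis
    u = np.cross(up, w); u /= np.linalg.norm(u)  # width axis
    v = np.cross(w, u)                           # height axis
    corners = []
    for dz in (0.0, d):                          # from focal plane to depth d
        for du in (-m / 2, m / 2):
            for dv in (-n / 2, n / 2):
                corners.append(focal_center + dz * w + du * u + dv * v)
    return np.array(corners)                     # (8, 3) corner coordinates
```

Resampling the CT volume inside these corners yields the cube volume data passed to step S104.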
S104: performing differentiated rendering on the clipped cube volume data based on a distance-weighted ray casting method, to obtain the rendered cube volume data;
After the CT images are clipped by the cube built from the parameters of Fig. 3 and the cube volume data are obtained, differentiated rendering is applied to the clipped cube volume data using the distance-weighted ray casting method. Specifically, starting from the front surface of the data cube (i.e., the focal plane O_V of the endoscope in Fig. 3), the distance to the rear surface of the data cube is d; as the depth along a ray increases (i.e., as the value along d grows), the sampling factor of each sample point on that ray is assigned a correspondingly smaller transparency value, so that the cube volume data are rendered differentially according to these transparency values. With continued reference to Fig. 3, taking a surgical target at depth p as an example, for the sampling weight factor of ray casting at any point in the data cube, the farther a voxel is from the focal plane O_V, the smaller its absorption factor in the ray casting function, so that anatomical structures at different positions in the cube volume data are rendered with observable differences. The transparency used as the sampling factor is defined as a function of the sampling position, where m, n, and d are the edge lengths of the data cube and (x, y, z) is the coordinate of the sampling point.
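The exact transparency mapping formula is not reproduced above; for illustration, a minimal sketch assuming a linear falloff of opacity with the depth coordinate z, composited front to back along each ray (the linear form and all names are assumptions):

```python
import numpy as np

def distance_weighted_ray(samples, d, base_alpha):
    """Front-to-back compositing with a depth-weighted sampling factor.

    samples    : (k,) scalar values along one ray, front surface first
    d          : cube depth (focal plane to rear surface)
    base_alpha : (k,) base opacity from the transfer function
    """
    z = np.linspace(0.0, d, len(samples))
    alpha = base_alpha * (1.0 - z / d)   # farther voxels absorb less
    color, trans = 0.0, 1.0              # accumulated color, remaining transmittance
    for s, a in zip(samples, alpha):
        color += trans * a * s
        trans *= (1.0 - a)
        if trans < 1e-3:                 # early ray termination
            break
    return color
```

The early termination step is a common ray-casting optimization and is consistent with the reduced computational complexity claimed for this rendering.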
Through this distance-weighted rendering, while the rendered structural texture is refined, the relative positional relations between the anatomical structures inside the cube are effectively improved, clearly presenting the anatomical structures and their position information.
S105: performing virtual-real fusion of the rendered cube volume data and the endoscopic images, to obtain the virtual-real fusion image, and displaying it.
After the cube volume data are obtained and rendered, they are fused with the endoscopic images obtained in step S101 to obtain the virtual-real fusion image.
The virtual-real fusion image of this embodiment not only displays the image detected by the endoscope in real time, but also applies distance-weighted differentiated rendering to the clipped cube volume data, which reduces computational complexity and accelerates rendering while providing more accurate depth perception, more effectively conveying the relative relations between anatomical structures, enabling the doctor to judge occlusion and front-back relations of anatomical structures more clearly, and providing the doctor with more accurate diagnosis and treatment assistance.
Further, as shown in Fig. 4, after the endoscopic images are obtained in step S101, they are processed as follows:
Step S201: performing distortion correction on the endoscopic images;
After the endoscopic images are obtained, distortion correction is applied so that endoscopic images with severe radial distortion can be quickly restored, eliminating the distortion of the endoscopic images during virtual-real fusion display and their mismatch with the real scene caused by image distortion.
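For illustration, a minimal sketch of such a correction, assuming the endoscope's intrinsics and radial distortion coefficients were obtained by a prior camera calibration (the numeric values and the file name below are placeholders, not from the patent):

```python
import cv2
import numpy as np

# Intrinsics and distortion coefficients from a prior endoscope calibration;
# the values here are placeholders for illustration only.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.35, 0.12, 0.0, 0.0, 0.0])   # strong radial terms

frame = cv2.imread("endoscope_frame.png")        # one captured endoscope frame
undistorted = cv2.undistort(frame, K, dist)      # radially corrected image
```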
Step S202: applying to the endoscopic images a transparency mapping based on distance from the image center, and applying edge attenuation processing to the transparency-mapped endoscopic images.
The distortion-corrected endoscopic images are given a transparency mapping based on distance from the image center. Specifically, with the image center as the circle center and the radius as the transparency mapping parameter, the farther a pixel is from the image center, the higher its transparency, i.e., the more transparent it is. This preserves the central region of the endoscopic image, so that when edge attenuation is applied to the endoscopic image, layered rendering is realized, the immersion of the fused display is effectively improved, and the front and rear scenes of the virtual-real fusion blend more realistically.
Further, the edge attenuation processing uses, for example but not limited to, a Gaussian function. As shown in Fig. 5, Fig. 5 shows a schematic diagram of Gaussian edge attenuation and transparency mapping of an endoscopic image. For an m × n endoscopic image, the distance between any point P(i, j) in the image and the image center is r(i, j) = √((i − m/2)² + (j − n/2)²), where 0 < i ≤ m − 1 and 0 < j ≤ n − 1. According to this distance from the image center, the radius of the opaque region in the endoscopic image can be set to t and the maximum image radius to R, so the attenuation region has width R − t. The transparency of the attenuation region is then defined as a Gaussian decay of the distance r over the interval (t, R].
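For illustration, a minimal sketch of this per-pixel alpha map; the Gaussian form over (t, R] follows the text above, but the exact expression and the width sigma are assumptions:

```python
import numpy as np

def edge_attenuation_alpha(m, n, t, R, sigma):
    """Per-pixel alpha: opaque inside radius t, Gaussian decay out to R."""
    i, j = np.mgrid[0:m, 0:n]
    r = np.sqrt((i - m / 2.0) ** 2 + (j - n / 2.0) ** 2)
    alpha = np.ones((m, n))
    band = (r > t) & (r <= R)                    # attenuation region of width R - t
    alpha[band] = np.exp(-((r[band] - t) ** 2) / (2.0 * sigma ** 2))
    alpha[r > R] = 0.0                           # fully transparent outside R
    return alpha                                 # multiplied into the RGBA frame
```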
In this embodiment, processing the endoscopic images with the Gaussian edge attenuation algorithm achieves a seamless, visually smooth transition in the virtual-real fusion of the endoscopic images and the CT images. Structures visible to the naked eye in the endoscopic images are matched and transitioned to the reconstructed structures in a well-formed manner; compared with conventional endoscopic images, the periphery is expanded and more structural information can be displayed. In addition, lesion information behind the endoscopic image can be shown in the same view, significantly improving the prompting effect of real-time images in surgical navigation.
Further, as shown in Fig. 6, before step S103 performs cube clipping on the CT images, the following processing is carried out:
S301: performing 3D segmentation of predetermined critical anatomical structures in the CT images based on region growing and fast marching methods, and labeling the 3D-segmented critical anatomical structures;
Taking the preoperatively acquired CT images as the reference, the predetermined critical anatomical structures are segmented in 3D based on region growing and fast marching methods, and the critical anatomical structures after 3D segmentation are labeled. The predetermined critical anatomical structures are determined according to the specific surgical site, for example blood vessels, tumors, and nerves, and their specific locations in the CT images are determined by the doctor.
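For illustration, a minimal sketch of the region-growing stage only: a simple intensity-window, 6-connected flood fill from a doctor-placed seed (the fast-marching refinement named above is not reproduced, and all names are illustrative):

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, lo, hi):
    """Grow a 3D region from a seed voxel within an intensity window [lo, hi]."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            p = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(p, volume.shape)) \
                    and not mask[p] and lo <= volume[p] <= hi:
                mask[p] = True
                queue.append(p)
    return mask     # labeled voxels of one critical anatomical structure
```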
Further, since the predetermined critical anatomical structures in the CT images have already been segmented in 3D, after the 3D-segmented critical anatomical structures undergo the rendering of step S104, differentiated display of the anatomical structures inside the cube volume data is achieved, making it easy for the doctor to observe intraoperatively and quickly identify the surgical target, such as a tumor to be resected.
S302: performing color mapping on the critical anatomical structures obtained by 3D segmentation;
Color mapping is applied to the critical anatomical structures obtained by 3D segmentation, for example blood vessels in red, tumors in green, and nerves in yellow, so that the differentiation of the critical anatomical structures in the image is more obvious; this also speeds up the virtual-real fusion processing and guarantees the accuracy of distance perception in the virtual-real fusion.
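For illustration, a minimal sketch of this label-to-color lookup (the numeric label values are assumptions):

```python
import numpy as np

# Assumed label values: 1 = vessel, 2 = tumor, 3 = nerve.
COLOR_LUT = {1: (255, 0, 0),      # blood vessels -> red
             2: (0, 255, 0),      # tumors        -> green
             3: (255, 255, 0)}    # nerves        -> yellow

def colorize_labels(label_volume):
    """Map a 3D label volume to per-voxel RGB for rendering."""
    rgb = np.zeros(label_volume.shape + (3,), dtype=np.uint8)
    for label, color in COLOR_LUT.items():
        rgb[label_volume == label] = color
    return rgb
```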
When the color-mapped critical anatomical structures undergo the distance-weighted differentiated rendering of step S104, data farther from the endoscope's focal plane are also rendered with attenuated color, i.e., the farther a structure is, the less easily it is observed. This more effectively conveys the relative relations between the critical anatomical structures, enables the doctor to judge the occlusion and front-back relations between the critical anatomical structures more clearly, and provides the doctor with more accurate diagnosis and treatment assistance.
S303: performing registration between the CT images and the patient's pose, to obtain registered CT images;
Specifically, positions corresponding to the critical anatomical structures in the CT images are determined according to the predetermined critical anatomical structures and used as reference points. The optical tracking device then, according to these reference points, locates the corresponding marker points on the patient's body; the 3PCHM (3-Points Convex Hull Matching) rapid registration method is then used to compute the rotation matrix and translation vector between the CT images and the patient's pose, and the transformed CT images are obtained.
S304: obtaining, according to the registered CT images and the position and orientation of the endoscope tip, the relative position between the endoscope and the patient and the distance between the endoscope and the surgical target.
Since the registration of the patient's pose with the CT images unifies them into the same coordinate space, the relative position between the endoscope and the human body can now be obtained from the real-time endoscope position provided by the optical tracking device, and the distance between the endoscope and the surgical target (e.g., a tumor to be resected) can also be determined, for the display of the relative position of the surgical instrument.
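For illustration, once the tip and the target share the CT coordinate system, this readout reduces to a vector difference; a minimal sketch (names are illustrative, and the distance assumes the CT volume is expressed in millimeters):

```python
import numpy as np

def instrument_readout(tip_ct, target_ct):
    """Relative offset and distance once everything shares CT coordinates.

    tip_ct    : (3,) endoscope tip position mapped into CT space
    target_ct : (3,) surgical target (e.g. tumor centroid) in CT space
    """
    offset = target_ct - tip_ct
    return offset, float(np.linalg.norm(offset))   # direction vector + distance
```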
It should be noted that step S303 and step S301 are not limited to any particular order and can be executed in parallel.
Further, as shown in Fig. 7, Fig. 7 shows the virtual-real fusion display method of the endoscopic minimally invasive surgery navigation of a further embodiment of the present invention. In this embodiment, the method further comprises, after step S304 described above:
Step S305: performing orthogonal clipping of the CT images along the directions parallel and perpendicular to the endoscope, and performing differentiated rendering of the orthogonally clipped data based on the distance-weighted ray casting method, to obtain orthogonal clipping data for displaying the corresponding section views.
By orthogonally clipping the CT images with the endoscope as the reference, the distance between the endoscope and the target location is displayed on the clipping planes, showing the position and posture of the endoscope and the surgical tool more effectively and intuitively. In addition, applying the distance-weighted ray casting method to differentially render the orthogonally clipped data makes the distance between the endoscope and the target location clearer.
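For illustration, a minimal sketch of extracting one such section plane through the CT volume, with the plane axes taken from the endoscope pose (the function name, plane size, and linear interpolation order are assumptions):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, center, u, v, size=256, spacing=1.0):
    """Sample a CT section on the plane spanned by u and v through center.

    u, v : orthonormal in-plane axes, (3,) each; for the axial section u can
    be the endoscope axis, for the radial section both are perpendicular to it.
    """
    s = (np.arange(size) - size / 2.0) * spacing
    gu, gv = np.meshgrid(s, s, indexing="ij")
    pts = (center[:, None, None]
           + gu[None] * u[:, None, None]
           + gv[None] * v[:, None, None])         # (3, size, size) voxel coords
    return map_coordinates(volume, pts, order=1)  # linearly resampled section
```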
As shown in Fig. 8, Fig. 8 shows the virtual-real fusion display interface of transnasal endoscopic surgery navigation of an embodiment of the present invention. The display interface shown in Fig. 8 includes an endoscope-and-human-body relative position view, an axial positioning section view, a radial positioning section view, and a virtual-real fusion display view of the clipped cube volume data after distance-weighted differentiated rendering together with the endoscopic images after transparency mapping and edge attenuation. Each view in the display interface is updated as the position of the endoscope tip changes. Based on the display interface shown in Fig. 8, the distance and positional relation between the endoscope and the target structure in the human body can be observed clearly and intuitively from the relative position view and the axial and radial positioning section views. In the virtual-real fusion display view, the real-time clipped cube volume data rendered with distance-weighted differentiation, the endoscopic image with Gaussian edge attenuation and transparency mapping, and the color-mapped critical anatomical target information can be observed simultaneously, while anatomical structures such as the nasal cavity in the endoscopic image extend naturally into the virtual scene, and the distance-weighted differentiated rendering provides effective prompting for the anatomical structures in the virtual scene.
It should be noted that, since the drawings cannot display color, different line styles are used instead; in the actual display image, the different anatomical structures are distinguished by different colors. The axial and radial positioning section views shown in Fig. 8 are the orthogonal section views obtained by the orthogonal clipping and rendering of the CT images in step S305.
In summary, compared with conventional endoscope navigation display methods, the above virtual-real fusion display method for endoscopic minimally invasive surgery navigation has the following advantages:
(1) The virtual-real fusion image of the embodiments of the present invention not only displays the image detected by the endoscope in real time, but also applies distance-weighted differentiated rendering to the clipped cube volume data, which reduces computational complexity and accelerates rendering while providing more accurate depth perception, more effectively conveying the relative relations between anatomical structures, enabling the doctor to judge occlusion and front-back relations of anatomical structures more clearly, and providing the doctor with more accurate diagnosis and treatment assistance;
(2) The endoscopic images are processed in real time with the Gaussian edge attenuation algorithm, achieving a seamless, visually smooth transition in the virtual-real fusion of the endoscopic images and the CT images; structures visible to the naked eye in the endoscopic images are matched and transitioned to the reconstructed structures in a well-formed manner; compared with conventional endoscopic images, the periphery is expanded, more structural information can be displayed, and lesion information behind the endoscopic image can be shown in the same view, significantly improving the prompting effect of real-time images in surgical navigation;
(3) Virtual-real fusion and layered rendering realize augmented reality guidance both within and beyond the endoscope's field of view, and a positioning cube is used for the cut-open display and rendering of the region, which changes with the position and orientation of the endoscope, improving distance perception and scene immersion;
(4) Orthogonal clipping of the CT images along the directions parallel and perpendicular to the endoscope effectively avoids the distance-display shortcomings of three-view display, displays the relative position of the surgical instrument (such as the endoscope) with respect to the human body, and accurately prompts the distance relation between the instrument and the human body;
(5) The method realizes the display of the relative position view between the endoscope and the human body, the orthogonal section views of the CT images with the endoscope as the reference, and the virtual-real fusion display view of the endoscopic images and the CT images, so that the doctor can combine the views to accurately grasp the endoscope position and the intraoperative process, improving the safety of endoscopic minimally invasive surgery.
Accordingly, the above virtual-real fusion display method for endoscopic minimally invasive surgery navigation can be implemented by a combination of hardware and software, or in the form of pure software code running in a computer. Specifically, as shown in Fig. 9, Fig. 9 shows the image display device of the endoscopic minimally invasive surgery navigation of an embodiment of the present invention. The virtual-real fusion device may include a display screen 10, a processor 20, and a data interface 30, wherein the data interface is used to connect the endoscope and the CT equipment, to acquire the endoscopic images and the CT images; the processor 20 is used to execute the minimally invasive surgery navigation virtual-real fusion display method of any one of the above embodiments, to obtain the virtual-real fusion image; and the display screen 10 is used to display the virtual-real fusion image obtained by the processor 20.
Further, as shown in Fig. 10, the above processor 20 includes a CPU processing unit 21 and a GPU processing unit 22, wherein the CPU processing unit 21 is mainly used to perform functions such as mathematical computation and image configuration, for example the registration of the CT images with the patient's pose and the 3D segmentation of the critical anatomical structures. Of course, the CPU processing unit is also used to perform other processing, such as reading the endoscopic images and CT images from the data interface 30 and obtaining from the optical tracking device 200 position information such as the real-time position of the endoscope and the pose of the patient.
The GPU processing unit 22 is used to perform graphics-related functions, such as the cube clipping of the CT images, the distance-weighted rendering of the cube volume data, the transparency mapping and edge attenuation processing of the endoscopic images, and the orthogonal clipping of the CT images.
Further, the processor 20 is further configured to: according to the real-time position of the endoscope, obtain the corresponding virtual-real fusion image, the relative position view of the endoscope and the human body, and the section views obtained by orthogonally clipping the CT images along the directions parallel and perpendicular to the endoscope, and update them to the display screen 10 for display.
As shown in Fig. 11, an embodiment of the present invention further provides an endoscopic minimally invasive surgery navigation system, applied, for example but not limited to, to nasal and paranasal sinus malignant tumor surgery and skull base tumor surgical navigation. The surgical navigation system specifically includes: a computer device 100 and an optical tracking device 200. The optical tracking device 200 is used to obtain in real time the position of the endoscopic surgery instrument 300 and to track the patient's pose. The computer device 100 is used to obtain the endoscopic images and the CT images and, in combination with the position information tracked by the optical tracking device 200, to process the endoscopic images and the CT images using the image display method of any one of the above embodiments, thereby obtaining and displaying the virtual-real fusion image of the endoscopic images and the CT images.
Optionally, the computer device includes the image display device shown in Fig. 10.
It should be noted that the computing method in the above embodiments can be implemented by software plus a necessary general hardware platform, and can of course also be implemented by hardware, although the former is in many cases the better embodiment. Based on this understanding, the technical solution of the embodiments of the present invention, or the part thereof that contributes to the prior art, can be embodied in the form of a software product; that is to say, the computing method of any one of the above embodiments is executed as a sequence of program instructions, and the computer software product executing the computing method is stored in a computer-readable storage medium (such as, but not limited to, ROM/RAM, magnetic disk, or optical disk) and includes a number of instructions for causing a terminal device (which may be a computer, medical equipment, server, etc.) to execute the computing method of any embodiment of the present invention.
The foregoing are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structural or flow transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the protection scope of the present invention.