CN110033465B - Real-time three-dimensional reconstruction method applied to binocular endoscopic medical image - Google Patents

Real-time three-dimensional reconstruction method applied to binocular endoscopic medical image

Info

Publication number
CN110033465B
Authority
CN
China
Prior art keywords
point
dimensional
image
camera
organ surface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910315817.0A
Other languages
Chinese (zh)
Other versions
CN110033465A (en)
Inventor
宋丽梅
尤阳
郭庆华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tiangong University
Original Assignee
Tianjin Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Polytechnic University
Priority to CN201910315817.0A
Publication of CN110033465A
Application granted
Publication of CN110033465B
Legal status: Active (current)
Anticipated expiration

Abstract

The invention relates to a real-time three-dimensional reconstruction method for binocular endoscopic medical images. The organ image is first divided into sub-regions by superpixel segmentation, and the boundaries between regions form a three-dimensional skeleton of the organ. Then, following the epipolar constraint principle, the epipolar line in the right view is found in turn for the contour information captured in the left view. To obtain accurate matching point pairs, the ORB feature descriptor is combined with this search to locate the intersection points of corresponding regions in the two cameras quickly and accurately, and the three-dimensional data of the boundary skeleton are computed from the corresponding positional relations. Finally, an SFS method recovers the relative coordinate relations inside each segmented sub-region, and the three-dimensional coordinates between all regions are computed by combining the gradient differences of the different colors, giving the complete three-dimensional morphological coordinates of the organs in the scene. The invention solves the difficult problem of high-precision three-dimensional reconstruction with an endoscope; compared with existing three-dimensional reconstruction methods it is simple to operate, highly reliable, and low in operative risk, and it relieves the patient's suffering.

Description

Translated from Chinese
A real-time 3D reconstruction method for binocular endoscopic medical images

Technical Field

The present invention relates to a real-time three-dimensional reconstruction method for binocular endoscopic medical images. More specifically, the present invention relates to a method that can be used to display the precise three-dimensional morphological coordinates of organs in endoscopic images.

Background Art

Today, about 17.5 million people die of heart disease each year, accounting for 30% of all deaths worldwide. The number of cardiovascular patients in China has reached 290 million, and the mortality rate is far higher than that of other diseases, which shows how great an impact the disease has on public health. Traditional surgery requires opening the chest and sawing through the sternum, which severely affects the patient's respiratory function. Because the tension across the sternal incision is high, postoperative recovery is very difficult for patients in poor physical condition.

Minimally invasive surgery not only lowers the risk of the operation but also reduces the pain of treatment. The endoscope is the key image-acquisition device for minimally invasive surgery: the doctor no longer needs to open the chest, but only makes three small holes in the chest wall, through which the thoracoscopic imaging device, the ultrasonic scalpel, and the surgical waste suction device are placed. After the operation the skin wounds heal by themselves, which greatly reduces the patient's trauma and pain and shortens the postoperative recovery time.

Because a traditional two-dimensional endoscope cannot give the surgeon an intuitive and accurate sense of three-dimensional position, only doctors with long training can use it skillfully for surgery on critical sites. Existing two-dimensional endoscopes still carry the following risks during use:

(1) Two-dimensional endoscopes lack depth perception, so the doctor can visually misjudge important anatomical structures and their relative positions during surgery. Because depth perception is missing, the doctor also cannot accurately judge how deep the instrument has entered, and operational errors can easily cause accidental bleeding.

(2) Two-dimensional endoscopic images are strongly distorted and human tissue structure is very complex, which affects the fluency and progress of the operation and prolongs the operating time.

The three-dimensional operation of the human body made possible by three-dimensional endoscopes has brought a revolutionary change to the medical field, and minimally invasive surgery using three-dimensional technology will become the mainstream of surgery. The use of three-dimensional endoscopes greatly reduces the patient's intraoperative pain and shortens the postoperative healing time. If three-dimensional data of the key sites can be obtained at the same time as the three-dimensional images, the operation time can be greatly shortened and the surgical risk reduced. The real-time three-dimensional reconstruction method for endoscopic medical images proposed in the present invention is intended to solve the above problems.

The present invention designs a fast three-dimensional reconstruction method that fuses a three-dimensional skeleton generated by superpixel segmentation with SFS (Shape From Shading). Because internal organs differ in structure and their color-depth distributions also differ, a single three-dimensional reconstruction method can hardly obtain the overall three-dimensional shape information. The present invention first divides the color-complex regions with the superpixel segmentation method, and the boundaries between the different regions constitute the three-dimensional skeleton of the organ. Then, following the epipolar constraint principle, the corresponding epipolar line in the right view is found in turn for the contour information captured from the left view. To obtain accurate left-right matching point pairs, this search is combined with the ORB (Oriented FAST and Rotated BRIEF) feature descriptor to locate quickly and accurately the intersection points of the corresponding regions in the left and right cameras, and the three-dimensional data of the boundary skeleton are computed from the positional relation between the two cameras. Finally, the three-dimensional skeleton is fused with SFS. The reconstruction accuracy of existing SFS algorithms depends heavily on the light-source model assumed for imaging; they give good three-dimensional results only for regions of uniform color, while for regions of differing color the three-dimensional data deviate considerably. The present invention applies the SFS method inside the sub-regions produced by superpixel segmentation to obtain the relative coordinate relations within each region, then, taking the generated three-dimensional skeleton coordinates as the reference and combining the gradient differences of the different colors, computes in turn the three-dimensional coordinate information between the regions, and thus obtains the complete three-dimensional morphological coordinates of the organs in the scene. Real-time three-dimensional reconstruction of the endoscopic surgical scene is achieved, providing the doctor with accurate and effective navigation information.

Summary of the Invention

The present invention designs a real-time three-dimensional reconstruction method for endoscopic medical images. The method can be applied in surgery using a three-dimensional endoscope: the three-dimensional coordinates of key sites are obtained at the same time as the three-dimensional images, which can greatly shorten the operation time and reduce the surgical risk.

The hardware for real-time three-dimensional reconstruction of endoscopic medical images comprises:

one LED cold light source;

two optical rigid-rod endoscopes;

a calibration platform for establishing a high-precision coordinate reference;

two 1200×1600 industrial color cameras for image acquisition;

a computer for precision control, image acquisition, and data processing;

a scanning platform on which the light source and the cameras are mounted.

The real-time three-dimensional reconstruction method for endoscopic medical images designed by the present invention comprises the following specific steps:

Step 1: Calibrate the binocular cameras. Let the coordinate system of the left camera A be Oa Xa Ya Za and the coordinate system of the right camera B be Ob Xb Yb Zb, with rotation matrix R and translation matrix T between the two cameras. The calibration relation is given by formula (1):

$$\begin{bmatrix} X_b \\ Y_b \\ Z_b \end{bmatrix} = R \begin{bmatrix} X_a \\ Y_a \\ Z_a \end{bmatrix} + T, \qquad R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}, \quad T = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix} \tag{1}$$

where r11-r33 are the components of the rotation matrix of the right camera B relative to the left camera A, and tx, ty, tz are the components of the translation matrix of the right camera B relative to the left camera A;

Step 2: Insert the probe lens of the binocular endoscope into the patient's body to acquire images of the organ surface, and apply median filtering to the acquired organ surface images for denoising and smoothing while preserving the image detail;

Step 3: Segment the organ surface image obtained in step 2 with the SLIC (Simple Linear Iterative Clustering) superpixel method. First the organ surface image is subdivided into multiple sub-regions according to the color, brightness, and texture of neighboring pixels, and each sub-region image is converted from the RGB color space to the CIE-Lab color space. Seed points are distributed evenly over the image according to the chosen number of superpixels; within the neighborhood of each seed point, the distance from every searched pixel to that seed point is computed from the three-dimensional color information and the two-dimensional spatial position, and the pixels are clustered accordingly. The size of the segmented regions is controlled through the target number of superpixel regions. Finally, iterative optimization and connectivity enhancement yield the segmented organ surface image. The distance is computed as in formula (2), where dc is the color distance, ds the spatial distance, Ns the maximum within-class spatial distance, and Nc the maximum color distance;

$$D' = \sqrt{\left(\frac{d_c}{N_c}\right)^2 + \left(\frac{d_s}{N_s}\right)^2} \tag{2}$$

Step 4: For the organ surface image segmented in step 3, points are selected on the segmentation boundaries of the image acquired by the left camera according to the epipolar matching principle; on the image acquired by the right camera the epipolar line and its intersections with the segmentation boundaries are determined, giving accurate left-right matching point pairs. These are then combined with the ORB (Oriented FAST and Rotated BRIEF) feature descriptor to locate the intersection points of the corresponding regions in the left and right cameras, and the intersection point with the highest matching score is taken as the matching point. Let ka be the slope at the point selected in the left camera; the slope kb of the corresponding matching point is obtained from formula (3), where ka is the slope at a point P of the image skeleton acquired by the left camera and kb is the slope at the point corresponding to P in the image skeleton acquired by the right camera. Repeating this step gives the three-dimensional coordinates of all skeleton positions, which form the three-dimensional boundary skeleton, and the three-dimensional coordinate information of each sub-region boundary is recorded;

[Formula (3)]

Step 5: Inside each superpixel sub-region obtained in step 4, three-dimensional reconstruction is first carried out with SFS (Shape From Shading) using the linear modeling approach: the surface gradients p and q are approximated discretely by the finite-difference method, linearization is then performed in the height direction Z(x, y) according to formula (4), and finally the relative coordinate relations of the local points are obtained;

$$Z^{n}(x,y) = Z^{n-1}(x,y) - \frac{f\!\left(Z^{n-1}(x,y)\right)}{\dfrac{\partial f\!\left(Z^{n-1}(x,y)\right)}{\partial Z(x,y)}} \tag{4}$$

In formula (4), f(Z(x, y)) = E(x, y) - R(p, q) is the residual between the observed image gray level E(x, y) and the reflectance map R evaluated at the gradients p and q;

Step 6: Fuse the coordinate relations obtained in step 5 with the boundary-skeleton coordinate information obtained in step 4 to compute the global three-dimensional coordinates of the organ surface; the computation is then complete.

The beneficial effects of the present invention are: the method proposed here for generating a three-dimensional skeleton by fusing superpixel segmentation with epipolar-constrained ORB matching, together with the fast three-dimensional reconstruction method that fuses the three-dimensional skeleton with SFS, both reduces the time spent on feature-point matching in three-dimensional reconstruction and improves the number and accuracy of matched feature points. Taking the generated three-dimensional skeleton coordinates as the reference and combining the gradient differences of the different colors, the three-dimensional coordinate information between the regions can be computed in turn, and the complete three-dimensional morphological coordinates of the organs in the scene can then be obtained.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1: Flow chart of the three-dimensional reconstruction method;

Figure 2: Comparison before and after the SLIC superpixel segmentation method;

(a) Original image before segmentation;

(b) Image after segmentation;

(c) Three-dimensional boundary skeleton generated after segmentation;

Figure 3: Image of each sub-region after SFS processing;

(a) Image of the whole region;

(b) Image of the boundary region;

(c) Image of the region without boundaries.

DETAILED DESCRIPTION

The real-time three-dimensional reconstruction method for endoscopic medical images designed by the present invention is shown in Figure 1; the specific operations are as follows:

The binocular endoscope is assembled and the binocular cameras are calibrated. Let the coordinate system of the left camera A be Oa Xa Ya Za and the coordinate system of the right camera B be Ob Xb Yb Zb, with rotation matrix R and translation matrix T between the two cameras. The calibration relation is given by formula (5), where r11-r33 are the components of the rotation matrix of the right camera B relative to the left camera A and tx, ty, tz are the components of the translation matrix of the right camera B relative to the left camera A:

$$\begin{bmatrix} X_b \\ Y_b \\ Z_b \end{bmatrix} = R \begin{bmatrix} X_a \\ Y_a \\ Z_a \end{bmatrix} + T, \qquad R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}, \quad T = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix} \tag{5}$$
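
As a point of reference, R and T in formula (5) can be estimated with a standard stereo-calibration routine. The sketch below uses OpenCV's cv2.stereoCalibrate; the chessboard corner lists (obj_pts, img_pts_left, img_pts_right) and the image size are assumed to have been collected on the calibration platform beforehand and are not values specified by the patent.

```python
# Minimal stereo-calibration sketch (assumes pre-collected chessboard corners).
import cv2

def calibrate_stereo(obj_pts, img_pts_left, img_pts_right, image_size):
    # Intrinsics of each camera are estimated first.
    _, K_a, dist_a, _, _ = cv2.calibrateCamera(obj_pts, img_pts_left, image_size, None, None)
    _, K_b, dist_b, _, _ = cv2.calibrateCamera(obj_pts, img_pts_right, image_size, None, None)
    # Rotation R and translation T of camera B relative to camera A (formula (5)),
    # plus the fundamental matrix F used later for the epipolar constraint.
    _, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, img_pts_left, img_pts_right,
        K_a, dist_a, K_b, dist_b, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K_a, dist_a, K_b, dist_b, R, T, F

# A point P_a expressed in the left-camera frame maps to the right-camera frame
# as P_b = R @ P_a + T, which is the relation written in formula (5).
```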

The probe lens of the binocular endoscope is inserted into the patient's body to acquire images of the organ surface, and the organ surface images collected by the endoscope are denoised and smoothed with a median filter, preserving the image detail.
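
The median filtering described here corresponds directly to OpenCV's medianBlur; the 5×5 aperture in the sketch below is an assumed value, since the patent does not fix a kernel size.

```python
import cv2

# Assumed file names for the two endoscope views.
left_raw = cv2.imread("left_view.png")
right_raw = cv2.imread("right_view.png")

# Median filtering removes impulse noise while preserving edges; the aperture
# size (here 5) must be an odd integer and is an implementation choice.
left = cv2.medianBlur(left_raw, 5)
right = cv2.medianBlur(right_raw, 5)
```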

The organ surface image is segmented with the SLIC superpixel method; the images before and after superpixel segmentation are compared in Figure 2(a) and Figure 2(b). The procedure is as follows (a code sketch follows the itemized steps below):

(1) Initialize the seed points. Distribute the seed points evenly over the image according to the chosen number of superpixels. If the image has N pixels in total and is pre-divided into K superpixels of equal size, each superpixel block contains N/K pixels and the distance between adjacent seed points is approximately S = sqrt(N/K);

(2) Re-select each seed point within its n×n neighborhood: compute the gradient of every pixel in the neighborhood and move the seed point to the position with the smallest gradient;

(3) Assign a class label to every pixel in the neighborhood of each seed point. The SLIC search range is restricted to 2S×2S, which speeds up the convergence of the algorithm;

(4) Distance measure, comprising the color distance and the spatial distance. For every searched pixel, its distance to the seed point is computed as in formulas (6) and (7), where dc is the color distance, ds the spatial distance, and Ns the maximum within-class spatial distance, defined as Ns = S = sqrt(N/K) and applying to every cluster. For the maximum color distance Nc a fixed constant m is used instead, with m = 10. The final distance measure is given by formula (8). Since each pixel is searched by several seed points, it has a distance to each of the surrounding seed points, and the seed point giving the minimum distance is taken as that pixel's cluster center;

$$d_c = \sqrt{(l_j - l_i)^2 + (a_j - a_i)^2 + (b_j - b_i)^2} \tag{6}$$

$$d_s = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2} \tag{7}$$

$$D' = \sqrt{\left(\frac{d_c}{N_c}\right)^2 + \left(\frac{d_s}{N_s}\right)^2} \tag{8}$$

(5) Iterative optimization. In practice a satisfactory result is obtained after 10 iterations, so the number of iterations is set to 10;

(6) Connectivity enhancement. The defects left by the iterative optimization, namely multiply-connected regions, superpixels that are too small, and single superpixels cut into several disconnected pieces, are resolved by enforcing connectivity.
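
Steps (1) to (6) above correspond closely to off-the-shelf SLIC implementations. A minimal sketch using scikit-image is given below; the superpixel count is an assumption (the text fixes only m = 10 and 10 iterations), and the parameter names follow recent scikit-image releases.

```python
# SLIC superpixel segmentation of the denoised organ-surface image (sketch).
from skimage.segmentation import slic, find_boundaries

def segment_superpixels(bgr_image, n_segments=400, compactness=10):
    rgb = bgr_image[:, :, ::-1]                  # OpenCV images are BGR; SLIC expects RGB
    labels = slic(rgb,
                  n_segments=n_segments,         # K seed points, spaced roughly S = sqrt(N/K)
                  compactness=compactness,       # plays the role of m in formula (8)
                  max_num_iter=10,               # 10 iterations, as in step (5)
                  enforce_connectivity=True,     # step (6): connectivity enhancement
                  convert2lab=True,              # work in CIE-Lab, as described above
                  start_label=1)
    boundary_mask = find_boundaries(labels, mode="outer")   # region borders form the 2D skeleton
    return labels, boundary_mask
```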

In binocular stereo vision measurement, stereo matching is the key technique, and the epipolar constraint plays an important role. For the two camera centers C0 and C1 observing a scene point, tracing the ray that joins the three-dimensional space point X to a camera center locates the image point p of X in one image. Conversely, the point q corresponding to p in the other image can be found from p: searching along this ray as seen from the other camera traces out a line L in the other image plane, which is called the epipolar line of point p. One end of the epipolar line is bounded by the projection of the point at infinity on the original viewing ray, and the other end by the projection of the first camera center onto the second image plane, that is, the epipole e. The fundamental matrix F maps a two-dimensional image point p in one view to its epipolar line in the other view. Let ka be the slope at the point selected in the left camera; the slope kb of the corresponding matching point is obtained from formula (9), where ka is the slope at a point P of the image skeleton acquired by the left camera, kb is the slope at the point corresponding to P in the image acquired by the right camera, r11-r33 are the components of the rotation matrix of the right camera B relative to the left camera A, and tx, ty, tz are the components of the translation matrix of the right camera B relative to the left camera A.

[Formula (9)]

When processing organ surface images with few distinctive features, combining the epipolar constraint with ORB matching effectively compensates for the matching errors of ORB matching alone, improves the matching accuracy, and brings out the advantages of the combined algorithm. ORB features are extracted, matched, and filtered; the position of a region intersection point in the left camera is extracted, and its epipolar line in the other camera is obtained from the camera calibration parameters. The ORB features at all region boundary points intersected by that epipolar line are computed and compared for similarity with the ORB feature of the point to be matched in the left camera, and the point with the highest similarity in the right camera is taken as the match. Repeating this step gives the three-dimensional coordinates of all skeleton positions, which form the three-dimensional boundary skeleton, and the three-dimensional coordinate information of each sub-region boundary is recorded. The three-dimensional boundary skeleton is shown in Figure 2(c);
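
One way to realize this epipolar-constrained ORB matching is sketched below: boundary points from the left view are mapped to epipolar lines in the right view through the fundamental matrix, the right-view boundary candidates near each line are ranked by ORB descriptor (Hamming) distance, and the winning pairs are triangulated into skeleton coordinates. The projection matrices P_left and P_right and the pixel tolerance tol are assumptions, not values given in the patent.

```python
import cv2
import numpy as np

def match_boundary_points(left, right, pts_left, pts_right, F, P_left, P_right, tol=2.0):
    """pts_left / pts_right: Nx2 / Mx2 boundary-intersection candidates in each view."""
    orb = cv2.ORB_create()
    kp_l, des_l = orb.compute(left, [cv2.KeyPoint(float(x), float(y), 31) for x, y in pts_left])
    kp_r, des_r = orb.compute(right, [cv2.KeyPoint(float(x), float(y), 31) for x, y in pts_right])
    pl = np.float32([kp.pt for kp in kp_l])      # compute() may drop points near the image border
    pr = np.float32([kp.pt for kp in kp_r])

    # Epipolar line in the right image for each left point: (a, b, c) with ax + by + c = 0.
    lines = cv2.computeCorrespondEpilines(pl.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    ml, mr = [], []
    for i, (a, b, c) in enumerate(lines):
        d = np.abs(a * pr[:, 0] + b * pr[:, 1] + c) / np.hypot(a, b)   # point-to-line distance
        cand = np.where(d < tol)[0]              # right-view candidates on or near the epipolar line
        if cand.size == 0:
            continue
        ham = [cv2.norm(des_l[i], des_r[j], cv2.NORM_HAMMING) for j in cand]
        j = cand[int(np.argmin(ham))]            # highest ORB similarity = smallest Hamming distance
        ml.append(pl[i])
        mr.append(pr[j])

    # Triangulate the matched pairs into the 3D boundary-skeleton coordinates.
    pts4d = cv2.triangulatePoints(P_left, P_right, np.float32(ml).T, np.float32(mr).T)
    return (pts4d[:3] / pts4d[3]).T
```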

The ORB feature detection process is as follows (a short detection sketch follows the list):

(1) Select a pixel p in the image and let its brightness be Ip;

(2) Set a threshold T equal to 20% of Ip;

(3) With p as the center, select the 16 pixels on a circle of radius 3 pixels;

(4) If 12 consecutive points on this circle have a brightness greater than Ip + T or smaller than Ip - T, p is regarded as a feature point.
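
This corner test is the FAST detector that OpenCV's ORB implementation runs internally. The sketch below shows detection plus descriptor computation; note that the fastThreshold parameter of ORB_create is an absolute intensity difference (default 20), so the 20%-of-Ip rule above is only approximated here, and the feature budget is an assumed value.

```python
import cv2

# ORB keypoint detection on one endoscope view (sketch; the file name is assumed).
img = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=1000,    # assumed keypoint budget
                     fastThreshold=20)  # absolute FAST threshold, not a percentage of Ip
keypoints, descriptors = orb.detectAndCompute(img, None)
print(len(keypoints), "ORB features detected")
```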

A three-dimensional imaging method that fuses the three-dimensional skeleton with SFS is adopted. The SFS algorithm is a fast and effective three-dimensional reconstruction method, but its reconstruction accuracy currently depends heavily on the light-source model assumed for imaging; moreover, because internal organs differ in structure, their color-depth distributions also differ, so a single light-source model easily biases the recovered three-dimensional data. The regions produced by superpixel segmentation reduce the influence of a complex color background, and performing SFS reconstruction separately on the segmented regions improves the reconstruction accuracy. To unify the coordinates of the different regions, their three-dimensional coordinates are fused with the three-dimensional skeleton as the reference, giving an accurate three-dimensional point cloud of objects with complex color variation.

The linear three-dimensional modeling approach is chosen. The SFS used in this application approximates the surface gradients p and q discretely with the finite-difference method and then linearizes in the height direction Z; this approach is fast and applies to any reflectance function. The discrete approximations of p and q are given in formulas (10) and (11):

$$p = Z(x, y) - Z(x-1, y) \tag{10}$$

$$q = Z(x, y) - Z(x, y-1) \tag{11}$$

For a pixel (x, y) and a given image gray level E(x, y), the linear approximation of the function f in formula (12) with respect to the height map Z^{n-1} is obtained by a Taylor-series expansion and then solved by Jacobi iteration; after simplification this gives:

$$0 = f\!\left(Z(x,y)\right) \approx f\!\left(Z^{n-1}(x,y)\right) + \left(Z(x,y) - Z^{n-1}(x,y)\right)\frac{\partial f\!\left(Z^{n-1}(x,y)\right)}{\partial Z(x,y)} \tag{12}$$

Then, setting Z(x, y) = Z^n(x, y), the height map at the n-th iteration can be solved directly from formula (13):

$$Z^{n}(x,y) = Z^{n-1}(x,y) + \frac{-f\!\left(Z^{n-1}(x,y)\right)}{\dfrac{\partial f\!\left(Z^{n-1}(x,y)\right)}{\partial Z(x,y)}} \tag{13}$$

In formula (13), f is the residual between the observed gray level E(x, y) and the reflectance map evaluated at the gradients p and q of formulas (10) and (11), and its derivative is taken with respect to Z(x, y) at the previous height estimate Z^{n-1}(x, y).

Now, setting the initial estimate of every pixel to Z0(x, y) = 0, the height Z is obtained by iterating formula (13). The image of each sub-region after SFS processing is shown in Figure 3.
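
The iteration of formula (13) can be sketched as a per-pixel Newton/Jacobi update. The version below assumes a Lambertian reflectance map with the light source along the viewing axis, R(p, q) = 1/sqrt(1 + p^2 + q^2), evaluates the derivative numerically, and adds a small damping term to keep the first iterations stable; all three choices are illustrative assumptions rather than the model fixed by the patent.

```python
import numpy as np

def sfs_linear(E, n_iter=10, eps=1e-3, damping=1e-2):
    """Linear (Tsai/Shah-style) SFS sketch. E: gray image normalized to [0, 1]."""
    Z = np.zeros_like(E, dtype=np.float64)             # initial estimate Z0(x, y) = 0

    def residual(Zc, Zl, Zu):
        # Formulas (10) and (11): p = Z(x,y) - Z(x-1,y), q = Z(x,y) - Z(x,y-1).
        p, q = Zc - Zl, Zc - Zu
        R = 1.0 / np.sqrt(1.0 + p ** 2 + q ** 2)       # assumed Lambertian reflectance map
        return E - R                                   # f(Z) = E(x, y) - R(p, q)

    for _ in range(n_iter):
        Zl = np.roll(Z, 1, axis=1)                     # Z(x-1, y), held fixed for the derivative
        Zu = np.roll(Z, 1, axis=0)                     # Z(x, y-1), held fixed for the derivative
        f = residual(Z, Zl, Zu)
        df = (residual(Z + eps, Zl, Zu) - f) / eps     # numerical derivative of f w.r.t. Z(x, y)
        # Formula (13) as a damped Newton step; the damping term is an added
        # implementation choice that avoids division by a near-zero derivative.
        Z = Z - f * df / (df * df + damping)
    return Z
```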

The coordinate relations obtained by SFS are fused with the boundary-skeleton coordinate information obtained from the superpixel segmentation, and the global three-dimensional coordinates are computed. The computation is complete.
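
This fusion step amounts to shifting each sub-region's relative SFS heights so that its boundary pixels agree with the triangulated skeleton. A minimal offset-only sketch is given below; skeleton_z is assumed to be an image holding the skeleton Z values at boundary pixels and NaN elsewhere, a data structure chosen here for illustration and not prescribed by the patent.

```python
import numpy as np

def fuse_with_skeleton(relative_z, labels, skeleton_z):
    """relative_z: per-pixel SFS heights (relative within each superpixel);
    labels: SLIC label image; skeleton_z: skeleton Z at boundary pixels, NaN elsewhere."""
    fused = np.full_like(relative_z, np.nan, dtype=np.float64)
    for lab in np.unique(labels):
        region = labels == lab
        anchor = region & ~np.isnan(skeleton_z)        # boundary pixels with a known skeleton Z
        if not anchor.any():
            continue                                   # no skeleton support for this sub-region
        # Shift the region so its SFS heights agree (on average) with the skeleton heights.
        offset = np.mean(skeleton_z[anchor] - relative_z[anchor])
        fused[region] = relative_z[region] + offset
    return fused
```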

The present invention differs from existing three-dimensional reconstruction methods mainly in the following three points:

(1) A method for generating a three-dimensional skeleton by fusing superpixel segmentation with epipolar-constrained ORB matching is proposed. The method segments the color-complex regions, and the boundaries between the different regions constitute the three-dimensional skeleton of the organ. It both reduces the feature-point matching time of three-dimensional reconstruction and improves the accuracy of feature-point matching.

(2) A fast three-dimensional imaging method fusing the three-dimensional skeleton with SFS is proposed. Taking the generated three-dimensional skeleton coordinates as the reference and combining the gradient differences of the different colors, the method computes in turn the three-dimensional coordinate information between the regions and thus obtains the complete three-dimensional morphological coordinates of the organs in the scene.

(3) The invention solves the problem of high-precision three-dimensional reconstruction for three-dimensional endoscopes and promotes the development and application of three-dimensional reconstruction in the medical and industrial fields. The method demands less experience of the doctor, is simpler to operate, and is correspondingly more reliable; compared with existing traditional open surgery it reduces the patient's suffering, shortens the operation time, and lowers the surgical risk. In addition, it provides a new algorithmic and theoretical basis for high-precision endoscopic three-dimensional reconstruction in other fields.

In summary, the advantages of the three-dimensional reconstruction method of the present invention are:

(1) Accurate three-dimensional coordinate data can be obtained;

(2) The influence of a complex color background is reduced, the three-dimensional reconstruction of the segmented regions is fast, and the reconstruction accuracy is greatly improved.

Three-dimensional medical endoscopic examination not only shortens the doctor's training time and the operation time, but also resolves the key technical obstacles to the spread of minimally invasive surgery in China. At the same time, the spread of three-dimensional endoscopic surgical techniques built on this three-dimensional reconstruction method can promote the development of precision medicine and virtual-reality medical techniques and advance China's medical instrument industry.

The present invention and its embodiments have been described above schematically; the description is not limiting, and what is shown in the drawings is only one embodiment of the present invention. Therefore, if a person of ordinary skill in the art, inspired by it and without departing from the purpose of the invention, adopts similar components of other forms or other component layouts and, without inventive effort, devises technical solutions and embodiments similar to this technical solution, these shall all fall within the protection scope of the present invention.

Claims (1)

Translated from Chinese

1. A real-time three-dimensional reconstruction method for binocular endoscopic medical images, characterized by comprising the following steps:

Step 1: Calibrate the binocular cameras. Let the coordinate system of the left camera A be Oa Xa Ya Za and the coordinate system of the right camera B be Ob Xb Yb Zb, with rotation matrix R and translation matrix T between the two cameras. The calibration relation is given by formula (1), where r11-r33 are the components of the rotation matrix of the right camera B relative to the left camera A, and tx, ty, tz are the components of the translation matrix of the right camera B relative to the left camera A:

$$\begin{bmatrix} X_b \\ Y_b \\ Z_b \end{bmatrix} = R \begin{bmatrix} X_a \\ Y_a \\ Z_a \end{bmatrix} + T, \qquad R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}, \quad T = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix} \tag{1}$$

Step 2: Insert the probe lens of the binocular endoscope into the patient's body to acquire images of the organ surface, and apply median filtering to the acquired organ surface images for denoising and smoothing while preserving the image detail;

Step 3: Segment the organ surface image obtained in step 2 with the SLIC superpixel method. First subdivide the organ surface image into multiple sub-regions according to the color, brightness, and texture of neighboring pixels, then convert each sub-region image from the RGB color space to the CIE-Lab color space. Distribute seed points evenly over the image according to the chosen number of superpixels; within the neighborhood of each seed point, compute the distance from every searched pixel to that seed point from the three-dimensional color information and the two-dimensional spatial position, and cluster the pixels accordingly; control the size of the segmented regions through the target number of superpixel regions; finally perform iterative optimization and connectivity enhancement to obtain the segmented organ surface image. The distance is computed as in formula (2), where dc is the color distance, ds the spatial distance, Ns the maximum within-class spatial distance, and Nc the maximum color distance;

$$D' = \sqrt{\left(\frac{d_c}{N_c}\right)^2 + \left(\frac{d_s}{N_s}\right)^2} \tag{2}$$

Step 4: For the organ surface image segmented in step 3, select points on the segmentation boundaries of the image acquired by the left camera according to the epipolar matching principle, determine on the image acquired by the right camera the epipolar line and its intersections with the segmentation boundaries to obtain accurate left-right matching point pairs, then combine these with the ORB feature descriptor to locate the intersection points of the corresponding regions in the left and right cameras, and take the intersection point with the highest matching score as the matching point. Let ka be the slope at the point selected in the left camera; the slope kb of the corresponding matching point is obtained from formula (3), where ka is the slope at a point P of the image skeleton acquired by the left camera and kb is the slope at the point corresponding to P in the image skeleton acquired by the right camera. Repeat this step to obtain the three-dimensional coordinates of all skeleton positions, which form the three-dimensional boundary skeleton, and record the three-dimensional coordinate information of each sub-region boundary;

[Formula (3)]

Step 5: Inside each superpixel sub-region obtained in step 4, first perform three-dimensional reconstruction with SFS using the linear modeling approach: approximate the surface gradients p and q discretely with the finite-difference method, then linearize in the height direction Z(x, y) according to formula (4), and finally obtain the relative coordinate relations of the local points;

$$Z^{n}(x,y) = Z^{n-1}(x,y) - \frac{f\!\left(Z^{n-1}(x,y)\right)}{\dfrac{\partial f\!\left(Z^{n-1}(x,y)\right)}{\partial Z(x,y)}} \tag{4}$$

In formula (4), f(Z(x, y)) = E(x, y) - R(p, q) is the residual between the observed image gray level E(x, y) and the reflectance map R evaluated at the gradients p and q;

Step 6: Fuse the coordinate relations obtained in step 5 with the boundary-skeleton coordinate information obtained in step 4 to compute the global three-dimensional coordinates of the organ surface; the computation is then complete.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910315817.0A (CN110033465B) | 2019-04-18 | 2019-04-18 | Real-time three-dimensional reconstruction method applied to binocular endoscopic medical image

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910315817.0A (CN110033465B) | 2019-04-18 | 2019-04-18 | Real-time three-dimensional reconstruction method applied to binocular endoscopic medical image

Publications (2)

Publication Number | Publication Date
CN110033465A (en) | 2019-07-19
CN110033465B (en) | 2023-04-25

Family

ID=67239087

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910315817.0A (Active, CN110033465B (en)) | Real-time three-dimensional reconstruction method applied to binocular endoscopic medical image

Country Status (1)

Country | Link
CN (1) | CN110033465B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110807803B (en) * | 2019-10-11 | 2021-02-09 | 北京文香信息技术有限公司 | Camera positioning method, device, equipment and storage medium
CN110992431B (en) * | 2019-12-16 | 2023-04-18 | 电子科技大学 | Combined three-dimensional reconstruction method for binocular endoscope soft tissue image
CN111080778B (en) * | 2019-12-23 | 2023-03-31 | 电子科技大学 | Online three-dimensional reconstruction method of binocular endoscope soft tissue image
CN111598939B (en) * | 2020-05-22 | 2021-01-26 | 中原工学院 | Human body circumference measuring method based on multi-vision system
CN112261399B (en) * | 2020-12-18 | 2021-03-16 | 安翰科技(武汉)股份有限公司 | Capsule endoscope image three-dimensional reconstruction method, electronic device and readable storage medium
CN113034387B (en) * | 2021-03-05 | 2023-07-14 | 成都国科微电子有限公司 | Image denoising method, device, equipment and medium
CN113052956B (en) * | 2021-03-19 | 2023-03-10 | 安翰科技(武汉)股份有限公司 | Method, device and medium for constructing film reading model based on capsule endoscope
CN115245302A (en) * | 2021-04-25 | 2022-10-28 | 河北医科大学第二医院 | System and method for reconstructing three-dimensional scene based on endoscope image
CN114022547B (en) * | 2021-09-15 | 2024-08-27 | 苏州中科华影健康科技有限公司 | Endoscopic image detection method, device, equipment and storage medium
CN114092647B (en) * | 2021-11-19 | 2025-02-28 | 复旦大学 | A 3D reconstruction system and method based on panoramic binocular stereo vision
CN114078139B (en) * | 2021-11-25 | 2024-04-16 | 四川长虹电器股份有限公司 | Image post-processing method based on human image segmentation model generation result
CN114531767B (en) * | 2022-04-20 | 2022-08-02 | 深圳市宝润科技有限公司 | Visual X-ray positioning method and system for handheld X-ray machine
CN115299914A (en) * | 2022-08-05 | 2022-11-08 | 上海微觅医疗器械有限公司 | Endoscope system, image processing method and device
CN115721248A (en) * | 2022-11-21 | 2023-03-03 | 杭州海康慧影科技有限公司 | Endoscope system and device and method for measuring characteristic distance of internal body tissue
CN116153147B (en) * | 2023-02-28 | 2025-02-18 | 中国人民解放军陆军特色医学中心 | 3D-VR binocular stereoscopic vision image construction method and endoscope operation teaching device
CN116797744B (en) * | 2023-08-29 | 2023-11-07 | 武汉大势智慧科技有限公司 | Multi-time-phase live-action three-dimensional model construction method, system and terminal equipment
CN117952854B (en) * | 2024-02-02 | 2024-11-08 | 广东工业大学 | Multi-facula denoising correction method and three-dimensional reconstruction method based on image conversion
CN118172343B (en) * | 2024-03-29 | 2025-07-08 | 中国人民解放军空军军医大学 | A method for characterizing tumor heterogeneity based on medical imaging
CN119832008B (en) * | 2024-12-19 | 2025-09-16 | 青岛海洋地质研究所 | A water interference removal method for deep-sea binocular camera terrain reconstruction


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101587082A (en) * | 2009-06-24 | 2009-11-25 | 天津工业大学 | Quick three-dimensional reconstructing method applied for detecting fabric defect
WO2015024407A1 (en) * | 2013-08-19 | 2015-02-26 | 国家电网公司 | Power robot based binocular vision navigation system and method based on
WO2018107427A1 (en) * | 2016-12-15 | 2018-06-21 | 深圳大学 | Rapid corresponding point matching method and device for phase-mapping assisted three-dimensional imaging system
CN107907048A (en) * | 2017-06-30 | 2018-04-13 | 长沙湘计海盾科技有限公司 | A kind of binocular stereo vision method for three-dimensional measurement based on line-structured light scanning
CN107767442A (en) * | 2017-10-16 | 2018-03-06 | 浙江工业大学 | A kind of foot type three-dimensional reconstruction and measuring method based on Kinect and binocular vision
CN108335350A (en) * | 2018-02-06 | 2018-07-27 | 聊城大学 | The three-dimensional rebuilding method of binocular stereo vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
宋丽梅, 陈昌曼, 张亮, 董霄. Application of high-precision global phase unwrapping in multi-frequency three-dimensional measurement. 光电工程, 2012, (12), full text. *
董霄, 宋丽梅. A color three-dimensional topography measurement method based on the phase method. 仪器仪表用户, 2012, (4), full text. *

Also Published As

Publication number | Publication date
CN110033465A (en) | 2019-07-19


Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
