CN118628539A - A method for position and pose registration of objects under microscope based on 3D contour matching - Google Patents

A method for position and pose registration of objects under microscope based on 3D contour matching

Info

Publication number
CN118628539A
CN118628539A
Authority
CN
China
Prior art keywords
posture
contour
dimensional
joint
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410917906.3A
Other languages
Chinese (zh)
Inventor
王贤成
周少燚
苗炳义
何宏炜
黄浪
蒋奕帆
廖紫洋
常冬冬
卓越
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Yicheng Technology Development Co ltd
Original Assignee
Ningbo Yicheng Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Yicheng Technology Development Co., Ltd.
Priority to CN202410917906.3A
Publication of CN118628539A
Legal status: Pending (current)

Abstract

Translated from Chinese

The present invention discloses an intraoperative arthroscopic navigation method, and in particular a method for registering the pose of objects under the arthroscope based on three-dimensional contour matching. Preoperatively, a three-dimensional model of the surgical joint is reconstructed and pose sampling is performed to obtain a joint contour feature template library containing pose information. Intraoperatively, the HED edge detection algorithm is applied to the arthroscopic video stream to extract the contour features of the target bone, and the MatchNet feature matching algorithm matches the detected features against the template library; the pose with the highest confidence is taken to complete the virtual reconstruction of the three-dimensional pose, and the detected contour features are combined with that pose information to spatially localize bony landmarks. The invention addresses the technical problem of improving the spatial positioning accuracy of bony landmarks during arthroscopic surgery: by relying on three-dimensional contour matching, it estimates the relative poses of the joint bones, the arthroscope and the surgical instruments in real time, thereby achieving spatial localization of bony landmarks.

Description

Translated from Chinese
A method for position and pose registration of objects under microscope based on 3D contour matching

Technical Field

The present invention relates to an intraoperative arthroscopic navigation method, and in particular to a method for registering the pose of objects under the arthroscope based on three-dimensional contour matching.

Background Art

In arthroscopic surgery, a metal lens tube with an illumination device is inserted into the joint cavity through a very small incision, and the internal structure of the joint cavity is magnified on a monitor so that the location and extent of lesions can be observed. Owing to its constrained dimensions, the arthroscope is a monocular camera and, in clinical use, can only provide real-time two-dimensional images of the intra-articular environment. It is difficult for surgeons to perceive the overall three-dimensional environment inside the body from such limited two-dimensional imaging, which lacks the intuitive, stereoscopic spatial sense that arthroscopic surgery requires. As a result, precise judgments about the distance and relative position between the operating instrument and the target bone, or the diameter and position of diseased tissue, are hard to make through the arthroscope alone, which increases the difficulty and duration of procedures such as drilling, removal of intra-articular foreign bodies, and corrective operations.

At present, pose navigation in orthopedic arthroscopic surgery is mainly achieved by surgical robots combined with intraoperative X-ray and infrared tracking. A representative domestic system is the Tianji II orthopedic surgical robot from Tianzhihang: it extends the functions of the robotic arm, adapts to more complex surgical scenarios, noticeably improves the operating experience, and lowers the learning curve for surgeons. There are also many surgical robots abroad, including Zimmer Biomet's Rosa Robotics, Stryker's Mako Total Knee 2.0, Smith+Nephew's Cori, and Johnson & Johnson's Velys. Stryker is one of the world's largest orthopedic and medical technology companies, and its Mako surgical robot has performed more than one million procedures worldwide.

Table 1. Current prices of mainstream joint surgery robots

Brand | Current price
Tianzhihang Tianji II orthopedic surgical robot | RMB 1.5-3 million
Stryker Mako surgical robot | RMB 3-8 million

Beyond robotic navigation, many other approaches have been explored in China and abroad. Li Ling's team at Hefei University of Technology proposed an unsupervised three-dimensional perception enhancement method for intraoperative safety based on continuous endoscopic video; fusing a spatiotemporal correlation mechanism across consecutive frames and taking their depth estimates into account further improves depth estimation performance. However, because of the flexible deformation caused by surgical manipulation and tissue peristalsis, the three-dimensional reconstruction of the overall in-vivo environment remains poor. In addition, Hu Pengyu's team has used an electromagnetic navigation system for assisted navigation, and teams including Qiu Hongjiu of the Army Medical University of the Chinese People's Liberation Army and Wang Wei of the People's Hospital of Inner Mongolia Autonomous Region have designed and applied computer-assisted navigation, continuing the search for low-cost, more accurate arthroscopic pose navigation.

A study published on SpringerLink proposed an arthroscopic navigation system that does not rely on external tracking devices and instead fuses depth vision with inertial sensor data. The system localizes the arthroscope itself through visual-inertial fusion and, combined with virtual viewpoint rendering, provides a navigation view for the surgeon. Preliminary experimental results indicate high localization accuracy and a wide tracking range, which is expected to mitigate the field-of-view limitations caused by occlusion and rotation during arthroscopic surgery. Another study, also published on SpringerLink, performs patient-specific ACL reconstruction with video-based computer navigation: small, easily identifiable visual markers are attached to the bones and tools, and their relative poses are estimated from video. By placing markers, without additional incisions or expensive equipment, this scheme achieves accurate registration of the preoperative model to the bone and high-precision estimation of the relative pose between bone and instruments.

In summary, existing arthroscopes only provide a two-dimensional view for the surgeon to observe the location and extent of lesions in the joint cavity and to inspect and debride the affected area; they lack the intuitive, stereoscopic spatial sense required for arthroscopic surgery. Precise judgments about the distance and relative position between the operating instrument and the target bone, or the diameter and position of diseased tissue, are difficult, which increases the difficulty and duration of procedures such as drilling, removal of intra-articular foreign bodies, and corrective operations. Joint surgery robot systems represented by Mako and Tianji have improved the accuracy and efficiency of joint replacement surgery through computer-aided and robotic technology, achieved good clinical results, and are increasingly used in joint replacement, but their high price limits wider adoption. Existing electromagnetic and infrared navigation still has many limitations in clinical application.

Hence, the present invention is proposed.

Summary of the Invention

The present invention provides a method for registering the pose of objects under the arthroscope based on three-dimensional contour matching. Its purpose is to achieve contour-based auxiliary pose estimation of the target object directly through an AI vision algorithm, using the existing arthroscopic hardware, without replacing the existing arthroscopic surgical equipment or adding auxiliary devices. While maintaining accuracy, it offers a more convenient, low-cost and easily deployable intraoperative arthroscopic navigation solution.

To address the low spatial positioning accuracy of bony landmarks under two-dimensional arthroscopic vision, the method uses three-dimensional contour matching to estimate, in real time, the relative poses of the joint bones, the arthroscope and the surgical instruments during surgery, thereby achieving spatial localization of bony landmarks. It specifically comprises the following steps:

Step 1. In the preoperative preparation stage, contrast-enhanced CT scanning is used to acquire images of the surgical joint, and three-dimensional reconstruction software is then used to reconstruct the joint and obtain its three-dimensional model.

After the three-dimensional model is obtained, pose sampling is performed: sampling points are generated around the model using a uniform spherical sampling algorithm (Fibonacci lattice).
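For illustration only (not part of the original disclosure), a minimal Python/NumPy sketch of Fibonacci-lattice sampling; the point count, sampling radius and model centroid are hypothetical placeholders:

```python
import numpy as np

def fibonacci_sphere(n_points, radius=1.0):
    """Approximately uniform samples on a sphere of the given radius,
    centered at the origin (Fibonacci lattice)."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))        # ~2.39996 rad
    i = np.arange(n_points)
    z = 1.0 - 2.0 * (i + 0.5) / n_points               # z spread uniformly in (-1, 1)
    r = np.sqrt(1.0 - z * z)                           # radius of each latitude ring
    theta = golden_angle * i
    x, y = r * np.cos(theta), r * np.sin(theta)
    return radius * np.stack([x, y, z], axis=1)        # shape (n_points, 3)

# Hypothetical use: candidate camera positions around a joint model
center = np.array([0.0, 0.0, 0.0])                     # assumed model centroid
viewpoints = center + fibonacci_sphere(2000, radius=150.0)  # radius in mm, assumed
```

Each returned point can serve as one candidate camera position for the view sampling described above.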

For each sampling point, the camera orientation is defined with a quaternion representation so that the camera always points at the center of the model. Within the pose range specified for the operation, sampling is carried out at 0.5° angular intervals to generate a dense set of viewpoints.
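As a sketch only, one way to express the "always aim at the model center" constraint as a quaternion, assuming an OpenGL-style camera that looks along its local -Z axis and using SciPy for the matrix-to-quaternion conversion; the up vector is an assumption and the degenerate case of looking straight along it is not handled:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def look_at_quaternion(cam_pos, target, up=np.array([0.0, 0.0, 1.0])):
    """Quaternion (x, y, z, w) of a camera at cam_pos whose -Z axis points at target."""
    forward = target - cam_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    # Columns are the camera's right / up / backward axes in world coordinates.
    cam_to_world = np.stack([right, true_up, -forward], axis=1)
    return Rotation.from_matrix(cam_to_world).as_quat()

# e.g. orientation for the first sampled viewpoint (names from the sketch above)
quat = look_at_quaternion(viewpoints[0], center)
```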

For each sampling point, the three-dimensional model is rendered off-screen with OpenGL (Open Graphics Library) and projected onto the two-dimensional image plane by setting an orthographic projection matrix and a view matrix. The HED (Holistically-Nested Edge Detection) algorithm is then applied to extract contour features (such as the condylar cartilage boundary, the condylar joint contour and the condylar angle), and these are encoded with HOG (Histogram of Oriented Gradients) descriptors to form feature templates. At the same time, the camera extrinsic matrix corresponding to each template is recorded, and the rotation is stored in both quaternion and Euler-angle form, yielding a joint contour feature template library that contains pose information.
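The sketch below shows how each rendered view could be packed into one template record, with the HED edge map assumed to be already computed (render_and_detect is a hypothetical placeholder, not part of the patent) and scikit-image's hog standing in for the HOG encoding; all parameter values are illustrative:

```python
import numpy as np
from skimage.feature import hog
from scipy.spatial.transform import Rotation

def make_template(edge_map, cam_pos, quat):
    """Encode one rendered view (2-D edge map from the edge detector) together
    with its sampled camera pose into a template-library entry."""
    descriptor = hog(edge_map,
                     orientations=9,
                     pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2),
                     feature_vector=True)
    euler = Rotation.from_quat(quat).as_euler("xyz", degrees=True)
    return {
        "descriptor": descriptor.astype(np.float32),
        "position": np.asarray(cam_pos, dtype=np.float32),
        "quaternion": np.asarray(quat, dtype=np.float32),   # rotation stored twice,
        "euler_xyz_deg": euler.astype(np.float32),           # as quaternion and Euler angles
    }

# template_library = [make_template(render_and_detect(v), v, look_at_quaternion(v, center))
#                     for v in viewpoints]   # render_and_detect: hypothetical render + HED step
```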

Step 2. During the operation, each frame of the arthroscopic video stream is first preprocessed and distortion-corrected to make its features distinct. The deep-learning HED edge detection algorithm is then applied to extract the contour features of the target bone, and the MatchNet feature matching algorithm matches the detected features against the template library. The pose with the highest confidence is taken to complete the virtual reconstruction of the three-dimensional pose, and the detected contour features are combined with this pose information to spatially localize bony landmarks.

Compared with the prior art, the present invention has the following technical effects:

The method for registering the pose of objects under the arthroscope based on three-dimensional contour matching provided by the present invention estimates, in real time, the relative poses of the joint bones, the arthroscope and the surgical instruments during surgery through three-dimensional contour matching, thereby achieving spatial localization of bony landmarks. Intraoperatively it provides auxiliary functions such as assisted surgical positioning and lesion size identification, giving the surgeon a more stereoscopic and intuitive visual perception. This helps the surgeon perform joint surgery more accurately under the guidance of three-dimensional spatial information, improves surgical safety and human-machine cooperation, shortens arthroscopic operating time, reduces surgical difficulty and the incidence of complications, lowers medical expenses, and improves the quality of surgery for patients.

In addition, the incidence of joint disease is rising as the population ages, and more precise arthroscopic surgery is the trend of future development. The present invention upgrades the existing arthroscopic system using the existing arthroscopic hardware, without replacing the existing arthroscopic surgical equipment or adding auxiliary devices, which is also consistent with the current direction of the medical industry.

Brief Description of the Drawings

FIG. 1 is a block diagram of the logical principle of the method for registering the pose of objects under the arthroscope based on three-dimensional contour matching.

Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art fall within the scope of protection of the present invention.

The method for registering the pose of objects under the arthroscope based on three-dimensional contour matching provided by the present invention, as shown in FIG. 1, specifically comprises the following steps:

Step 1. In the preoperative preparation stage, contrast-enhanced CT scanning is used to acquire images of the surgical joint, and the three-dimensional reconstruction software 3D Slicer is then used to reconstruct the joint and obtain its three-dimensional model.

After the three-dimensional model is obtained, pose sampling is performed: sampling points are generated around the model using a uniform spherical sampling algorithm (Fibonacci lattice).

For each sampling point, the camera orientation is defined with a quaternion representation so that the camera always points at the center of the model. Within the pose range specified for the operation, sampling is carried out at 0.5° angular intervals to generate a dense set of viewpoints.

For each sampling point, the three-dimensional model is rendered off-screen with OpenGL (Open Graphics Library) and projected onto the two-dimensional image plane by setting an orthographic projection matrix and a view matrix. The HED (Holistically-Nested Edge Detection) algorithm is then applied to extract contour features (such as the condylar cartilage boundary, the condylar joint contour and the condylar angle), and these are encoded with HOG (Histogram of Oriented Gradients) descriptors to form feature templates. At the same time, the camera extrinsic matrix corresponding to each template is recorded, and the rotation is stored in both quaternion and Euler-angle form, yielding a joint contour feature template library that contains pose information.

Step 2. During the operation, each frame of the arthroscopic video stream is first preprocessed with histogram-equalization-based contrast enhancement, sharpening and similar techniques to optimize image quality and highlight contour features. The deep-learning HED edge detection algorithm is then applied to extract the contour features of the target bone, and the MatchNet feature matching algorithm matches the detected features against the template library. The pose with the highest confidence is taken to complete the virtual reconstruction of the three-dimensional pose, and the detected contour features are combined with this pose information to spatially localize bony landmarks.
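For illustration, a sketch of the frame preprocessing using OpenCV, with CLAHE as one realization of histogram-equalization-based contrast enhancement and an unsharp mask for sharpening; the calibration inputs and numeric parameters are assumptions, not values from the patent:

```python
import cv2
import numpy as np

def preprocess_frame(frame_bgr, camera_matrix=None, dist_coeffs=None):
    """Distortion correction, contrast enhancement (CLAHE) and unsharp-mask
    sharpening of one arthroscopic video frame."""
    if camera_matrix is not None and dist_coeffs is not None:
        # camera_matrix / dist_coeffs would come from a prior arthroscope calibration
        frame_bgr = cv2.undistort(frame_bgr, camera_matrix, dist_coeffs)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    blurred = cv2.GaussianBlur(enhanced, (0, 0), sigmaX=3)
    sharpened = cv2.addWeighted(enhanced, 1.5, blurred, -0.5, 0)  # unsharp mask
    return sharpened
```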

The MatchNet deep-learning model is the key to the three-dimensional contour feature matching algorithm. MatchNet runs its feature network and its metric network separately at test time. The difference here is that the HED network, which is better suited to contour feature extraction, is used to extract the contour features; these features are extracted in advance and saved as feature encodings. After feature points are detected intraoperatively, the fully connected (FC) metric network is used directly to compare their similarity against the feature encoding database, producing a score matrix of size N1*N2. By analyzing the score matrix, the pose with the highest confidence for the current frame is obtained, completing the virtual reconstruction of the three-dimensional pose. This avoids recomputing feature extraction when matching images and thus significantly speeds up real-time matching. Moreover, because the arthroscope delivers a continuous video stream, the spatiotemporal correlation of consecutive frames is exploited: the pose estimate of the previous frame is used for the current frame, an LSTM predicts the range of the arthroscope's position in the next frame, and, combined with inter-frame motion information, templates within that range are matched first. Pose estimation over consecutive frames significantly improves both the speed and the accuracy of the algorithm.
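As a simplified illustration of this matching stage (not the trained networks themselves), the sketch below builds the N1*N2 score matrix with a cosine similarity standing in for the FC metric network, and shows where a pose predictor for the next frame (e.g. the LSTM mentioned above, not implemented here) would restrict the candidate templates:

```python
import numpy as np

def match_frame(frame_descriptors, template_library, candidate_idx=None):
    """Score the N1 contour descriptors of the current frame against N2 templates
    and return the best-matching template and its confidence."""
    templates = (template_library if candidate_idx is None
                 else [template_library[i] for i in candidate_idx])  # temporal windowing
    T = np.stack([t["descriptor"] for t in templates])    # (N2, D)
    F = np.atleast_2d(np.asarray(frame_descriptors))       # (N1, D)
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    F = F / np.linalg.norm(F, axis=1, keepdims=True)
    scores = F @ T.T                                        # (N1, N2) score matrix
    best = np.unravel_index(np.argmax(scores), scores.shape)
    return templates[best[1]], float(scores[best])

# best_template, confidence = match_frame(frame_desc, template_library,
#                                         candidate_idx=predicted_window)  # window from pose predictor
```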

The present invention is not limited to the above preferred embodiment. Anyone may derive products of other forms from the teaching of the present invention; however, regardless of any change in shape or structure, any technical solution that is the same as or similar to that of the present application falls within the scope of protection of the present invention.

Claims (1)

Translated from Chinese
1. A method for registering the pose of objects under the arthroscope based on three-dimensional contour matching, characterized in that it comprises the following steps:

Step 1. In the preoperative preparation stage, contrast-enhanced CT scanning is used to acquire images of the surgical joint, and three-dimensional reconstruction software is then used to reconstruct the joint and obtain its three-dimensional model;

after the three-dimensional model is obtained, pose sampling is performed: sampling points are generated around the model using a uniform spherical sampling algorithm; for each sampling point, the camera orientation is defined with a quaternion representation so that the camera always points at the center of the model; within the pose range specified for the operation, sampling is carried out at 0.5° angular intervals to generate a dense set of viewpoints;

for each sampling point, the three-dimensional model is rendered off-screen with OpenGL and projected onto the two-dimensional image plane by setting an orthographic projection matrix and a view matrix; the HED edge detection algorithm is then applied to extract contour features, which are encoded with HOG descriptors to form feature templates; at the same time, the camera extrinsic matrix corresponding to each template is recorded, and the rotation is stored in both quaternion and Euler-angle form, yielding a joint contour feature template library that contains pose information;

Step 2. During the operation, each frame of the arthroscopic video stream is first preprocessed and distortion-corrected to make its features distinct; the deep-learning HED edge detection algorithm is then applied to extract the contour features of the target bone, and the MatchNet feature matching algorithm matches the detected features against the template library; the pose with the highest confidence is taken to complete the virtual reconstruction of the three-dimensional pose, and the detected contour features are combined with this pose information to spatially localize bony landmarks.
CN202410917906.3A | Priority date 2024-07-10 | Filing date 2024-07-10 | A method for position and pose registration of objects under microscope based on 3D contour matching | Pending | CN118628539A

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202410917906.3A (published as CN118628539A) | 2024-07-10 | 2024-07-10 | A method for position and pose registration of objects under microscope based on 3D contour matching

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202410917906.3A (published as CN118628539A) | 2024-07-10 | 2024-07-10 | A method for position and pose registration of objects under microscope based on 3D contour matching

Publications (1)

Publication Number | Publication Date
CN118628539A | 2024-09-10

Family

ID=92608366

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202410917906.3A (Pending, published as CN118628539A) | A method for position and pose registration of objects under microscope based on 3D contour matching | 2024-07-10 | 2024-07-10

Country Status (1)

Country | Link
CN (1) | CN118628539A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN118888091A * | 2024-09-13 | 2024-11-01 | 武汉联影智融医疗科技有限公司 | Medical image processing method, device, equipment and storage medium


Similar Documents

Publication | Title
US20230355312A1 | Method and system for computer guided surgery
EP3789965B1 | Method for controlling a display, computer program and mixed reality display device
CN107456278B | Endoscopic surgery navigation method and system
CN115068110A | Image registration method and system for femoral neck fracture surgery navigation
CN109925057A | A kind of minimally invasive spine surgical navigation methods and systems based on augmented reality
CN103371870A | Multimode image based surgical operation navigation system
CN110215284A | A kind of visualization system and method
CN114283179B | Fracture far-near end space pose real-time acquisition and registration system based on ultrasonic image
JP6493885B2 | Image alignment apparatus, method of operating image alignment apparatus, and image alignment program
CN116421313A | Augmented reality fusion method in thoracoscopic lung tumor resection surgical navigation
Su et al. | Comparison of 3d surgical tool segmentation procedures with robot kinematics prior
CN115049806B | Face augmented reality calibration method and device based on Monte Carlo tree search
CN115105204A | A laparoscopic augmented reality fusion display method
Hu et al. | Occlusion-robust visual markerless bone tracking for computer-assisted orthopedic surgery
CN111658142A | MR-based focus holographic navigation method and system
CN118628539A | A method for position and pose registration of objects under microscope based on 3D contour matching
CN115375595A | Image fusion method, device, system, computer equipment and storage medium
Ding et al. | Digital twins as a unifying framework for surgical data science: the enabling role of geometric scene understanding
CN117323002A | A neuroendoscopic surgery visualization system based on mixed reality technology
Hussain et al. | Real-time augmented reality for ear surgery
Maharjan et al. | A novel visualization system of using augmented reality in knee replacement surgery: Enhanced bidirectional maximum correntropy algorithm
Mirota et al. | Toward video-based navigation for endoscopic endonasal skull base surgery
CN115245303A | Image fusion system and method for endoscope three-dimensional navigation
Marques et al. | Framework for augmented reality in Minimally Invasive laparoscopic surgery
CN114191078B | Endoscope operation navigation robot system based on mixed reality

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
