CN115089293A - A calibration method of a spinal endoscopic surgical robot - Google Patents

A calibration method of a spinal endoscopic surgical robot

Info

Publication number
CN115089293A
Authority
CN
China
Prior art keywords
image
binocular camera
rigid body
tracking device
calibrating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210780332.0A
Other languages
Chinese (zh)
Other versions
CN115089293B (en)
Inventor
李贻斌
李国梁
宋锐
李倩倩
杜付鑫
祁磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University
Priority to CN202210780332.0A
Publication of CN115089293A
Application granted
Publication of CN115089293B
Legal status: Active
Anticipated expiration


Abstract

The invention relates to a calibration method for a spinal endoscopic surgical robot, comprising the following steps: calibrating a binocular camera with a calibration device to determine the camera's internal parameters, external parameters and distortion parameters; connecting a tracking device to rigid bodies (including the patient's spine, the robot end and the spinal endoscope) and using the binocular camera to track the tracking device and acquire rigid-body pose-change information; fusing and registering the preoperative CT image with the spinal endoscope image to obtain the correspondence between the lesion image and the actual physical lesion area; and locating the local lesion target in the operative field from the rigid-body pose information acquired by the binocular camera and the image fusion and registration data, completing the calibration. The method effectively realizes hand-eye calibration and rigid-body tracking, raises the degree of automation of the spinal endoscopic surgical robot, improves the accuracy and stability of the operation, reduces intraoperative risks and postoperative complications, and greatly reduces the radiation damage that CT fluoroscopic guidance inflicts on medical personnel.

Description

Translated from Chinese

A calibration method of a spinal endoscopic surgical robot

Technical Field

The invention relates to the technical field of surgical robots, and in particular to a method for calibrating a spinal endoscopic surgical robot.

Background Art

The statements in this section merely provide background information related to the present invention and do not necessarily constitute prior art.

Compared with traditional open surgery, minimally invasive spinal endoscopic surgery causes less trauma, allows faster postoperative recovery, and yields reliable surgical results. At present, the vast majority of such operations are performed directly by surgeons, while some are carried out through spinal endoscopic surgical robots that execute the physician's operations.

A spinal endoscopic surgical robot must be calibrated before performing an operation so that it can obtain the reference position of the coordinate system of its end-effector instrument. Existing techniques rely on optical instruments combined with auxiliary facilities to achieve calibration, leading to complicated operation, low surgical efficiency, low accuracy and poor stability; moreover, multiple intraoperative CT scans are needed to obtain the accurate position of the spine, exposing patients and medical staff to considerable radiation.

Summary of the Invention

To solve the technical problems described in the background above, the present invention provides a calibration method for a spinal endoscopic surgical robot. The method effectively realizes hand-eye calibration and rigid-body tracking, raises the degree of automation of the spinal endoscopic surgical robot, improves the accuracy and stability of the operation, reduces intraoperative risks and postoperative complications, and greatly reduces the radiation damage that CT fluoroscopic guidance inflicts on medical personnel.

To achieve the above object, the present invention adopts the following technical solutions:

A first aspect of the present invention provides a calibration method for a spinal endoscopic surgical robot, comprising the following steps:

Step 1: calibrate the binocular camera with a calibration device, determining the camera's internal parameters, external parameters and distortion parameters;

Step 2: connect the tracking device to the rigid bodies, which include the patient's spine, the robot end, the spinal endoscope, the flange connecting the robot and the spinal endoscope, the tooling connecting the robot and the spinal endoscope, a fixture, or a surgical instrument; use the binocular camera to track the tracking device and acquire rigid-body pose-change information;

Step 3: fuse and register the preoperative CT image with the spinal endoscope image to obtain the correspondence between the lesion image and the actual physical lesion area;

Step 4: locate the local lesion target in the operative field from the rigid-body pose information acquired by the binocular camera and the image fusion and registration data of Step 3, completing the calibration.

In step 1, the two cameras of the binocular camera each emit infrared light toward the calibration device; the left and right cameras receive the infrared light reflected by the calibration device and acquire binary images, from which the internal parameters, external parameters and distortion parameters of the binocular camera are obtained.

In step 1, the calibration device consists of four small spheres fixed together, lying in a single plane and not collinear. Each sphere carries a retroreflective coating that reflects the infrared light emitted by the binocular camera back into the camera, from which the binary image is acquired.

In step 2, the tracking device likewise consists of four small spheres fixed together, lying in a single plane and not collinear. Each sphere carries a retroreflective coating and is connected to a rigid body; it reflects the infrared light emitted by the binocular camera back into the camera, from which a binary image of the rigid body is acquired.

The diameters of the spheres of the tracking device and the calibration device may all be the same or all be different.

In step 2, three-dimensional reconstruction is performed from the binary images acquired by the binocular camera and the tracking device is calibrated, yielding the pose information of the tracking device; the pose of the rigid body is then obtained from the pose information of the tracking device.

In step 2, while tracking rigid-body pose changes, tracking devices are connected to the bone tissue in the patient's operative field and to the distal end of the spinal endoscope; with the devices fixed to the bone tissue and the endoscope respectively, two-dimensional image information on the intraoperative position and attitude of the spinal endoscope inside the patient is acquired.

In step 3, tomographic images of the patient's organs and target tissue are reconstructed from the preoperative CT images to obtain a three-dimensional visual image model; the positional correspondence between this model and the two-dimensional image information is then derived from the spinal endoscope images, realizing image fusion and registration.

In steps 1 and 2, the internal and external parameters of the binocular camera yield the spatial transform between the camera coordinate system and the pixel coordinate system and the spatial transform between the world coordinate system and the camera coordinate system; combining the two yields the transform between the world coordinate system and the pixel coordinate system, from which the pose information of the rigid body in the image is obtained in actual physical space.

In step 4, the image fusion and registration data obtained in step 3 yield the local lesion target data in the operative field. Through the world-to-pixel spatial transform matrix obtained in steps 1-2, the pose information of the local lesion target is solved inversely; combining it with the known rigid coupling matrix between the patient's spine and its tracking device gives the required pose of the spine-mounted tracking device.

Compared with the prior art, the above technical solutions have the following beneficial effects:

1. A binocular-camera optical positioning system performs tracking and localization, using optical feedback to realize a non-contact positioning method. The binocular camera receives the optical information reflected by the retroreflective spheres, and the system solves for the pose from that feedback. The approach is simple and practical, effectively realizes hand-eye calibration and rigid-body tracking, and raises the automation and accuracy of the spinal endoscopic surgical robot.

2. It overcomes the complicated and unstable operation of traditional calibration, effectively improving surgical accuracy and stability and reducing intraoperative risks and postoperative complications.

3. Preoperative CT images and the binocular camera guide endoscope tracking and localization through visualized images, greatly reducing the number of intraoperative CT scans, the radiation received by patients and medical staff during surgery, and the radiation damage that CT fluoroscopic guidance inflicts on medical personnel.

Brief Description of the Drawings

The accompanying drawings, which form a part of the present invention, provide a further understanding of the invention; the exemplary embodiments and their descriptions explain the invention and do not unduly limit it.

FIG. 1 is a schematic flowchart of the calibration of a spinal endoscopic surgical robot system provided by one or more embodiments of the present invention;

FIG. 2 is a schematic diagram of the principle by which the binocular camera tracks rigid-body pose changes, provided by one or more embodiments of the present invention;

FIG. 3 is a schematic structural diagram of a calibration device or tracking device provided by one or more embodiments of the present invention.

Detailed Description

The present invention is further described below with reference to the accompanying drawings and embodiments.

It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.

Note that the terminology used here serves only to describe specific embodiments and is not intended to limit the exemplary embodiments of the invention. As used herein, unless the context clearly indicates otherwise, singular forms are intended to include plural forms as well; furthermore, the terms "comprising" and/or "including" indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.

As described in the Background, a spinal endoscopic surgical robot must be calibrated before performing an operation so that it can obtain the reference position of the coordinate system of its end-effector instrument. Existing techniques rely on optical instruments combined with auxiliary facilities, leading to complicated operation, low surgical efficiency, low accuracy and poor stability, and multiple intraoperative CT scans are needed to obtain the accurate position of the spine, exposing patients and medical staff to considerable radiation.

The following embodiments therefore provide a calibration method for a spinal endoscopic surgical robot that effectively realizes hand-eye calibration and rigid-body tracking, raises the automation of the robot, improves surgical accuracy and stability, reduces intraoperative risks and postoperative complications, and greatly reduces the radiation damage that CT fluoroscopic guidance inflicts on medical personnel.

Embodiment 1:

As shown in FIGS. 1-3, a calibration method for a spinal endoscopic surgical robot comprises the following steps:

Step 1: calibrate the binocular camera with a calibration device to determine its internal parameters, external parameters, distortion parameters and related information.

在双目相机标定的过程中,采用被动式双目相机,左右两个相机分别发送红外光线到标定装置,左右两个相机接收到标定装置反射的红外光线,并拍摄获取二值图像,经过解算即可获得双目相机的内部参数、外部参数以及畸变参数等信息。In the process of binocular camera calibration, passive binocular cameras are used. The left and right cameras send infrared rays to the calibration device respectively. The left and right cameras receive the infrared rays reflected by the calibration device, and capture and obtain binary images. Information such as internal parameters, external parameters and distortion parameters of the binocular camera can be obtained.

在双目相机标定的过程中,所确定的双目相机内部参数用于把坐标由相机坐标系转换到像素坐标系下,以解算相机坐标系与像素坐标系的空间变换关系。In the process of binocular camera calibration, the determined internal parameters of the binocular camera are used to convert the coordinates from the camera coordinate system to the pixel coordinate system, so as to solve the spatial transformation relationship between the camera coordinate system and the pixel coordinate system.

在双目相机标定的过程中,所确定的双目相机外部参数用于把坐标由世界坐标系转换到相机坐标系下,以解算世界坐标系与相机坐标系的空间变换关系。In the process of binocular camera calibration, the determined external parameters of the binocular camera are used to transform the coordinates from the world coordinate system to the camera coordinate system, so as to solve the spatial transformation relationship between the world coordinate system and the camera coordinate system.

在双目相机标定的过程中,所确定的双目相机畸变参数用于获取制造精度以及组装工艺的偏差所引入的图像畸变与原始图像的失真程度,以便做相应补偿,从而获取刚体高精度位姿信息。In the process of binocular camera calibration, the determined binocular camera distortion parameters are used to obtain the image distortion introduced by the manufacturing accuracy and the deviation of the assembly process and the distortion degree of the original image, so as to make corresponding compensation, so as to obtain the rigid body high precision bit posture information.

在双目相机标定的过程中,通过联立相机坐标系与像素坐标系的空间变换关系、世界坐标系与相机坐标系的空间变换关系,即可获得世界坐标系与像素坐标系的空间变换关系,从而进一步获得实际物理空间中刚体在图像中的位姿信息。另外,通过引入畸变参数,可对该位姿信息进行补偿以提高精确度。In the process of binocular camera calibration, the spatial transformation relationship between the world coordinate system and the pixel coordinate system can be obtained by simultaneously establishing the spatial transformation relationship between the camera coordinate system and the pixel coordinate system, and the spatial transformation relationship between the world coordinate system and the camera coordinate system. , so as to further obtain the pose information of the rigid body in the image in the actual physical space. In addition, by introducing distortion parameters, the pose information can be compensated to improve accuracy.

在双目相机标定的过程中,所使用的标定装置为固联在一起位于同一平面且不共线的四个反光小球,小球带有逆向反射涂层,能够实现可靠跟踪。In the process of binocular camera calibration, the calibration device used is four reflective spheres that are fixed together and located on the same plane and are not collinear. The spheres are provided with retroreflective coating, which can achieve reliable tracking.

本实施例中,标定装置的结构如图3所示,包括四个固联在一起位于同一平面且不共线的四个反光小球,四个小球的相对位置关系不做限制,双目相机发射光源的方向为前向,小球为被动反光小球,表面带有逆向反射涂层,与光源发射方向相反方向为后向,即向双目相机所在的方向发射反射光从而被双目相机接收到。In this embodiment, the structure of the calibration device is shown in FIG. 3 , including four reflective balls that are fixedly connected together on the same plane and are not collinear. The relative positional relationship of the four balls is not limited. The direction of the light source emitted by the camera is forward, and the small ball is a passive reflective ball with a retro-reflective coating on the surface. camera received.

步骤2:将跟踪装置固联到刚体,刚体包括患者脊柱部位、机器人末端、脊柱内镜、机器人与脊柱内镜连接法兰、机器人与脊柱内镜连接工装或夹具、手术器械等,双目相机对跟踪装置进行实时跟踪,获取刚体位姿变化信息;Step 2: Fix the tracking device to the rigid body, the rigid body includes the patient's spine, the end of the robot, the spinal endoscope, the connecting flange between the robot and the spinal endoscope, the tool or fixture for connecting the robot and the spinal endoscope, surgical instruments, etc., binocular camera Real-time tracking of the tracking device to obtain rigid body pose change information;

在跟踪刚体位姿变化的过程中,根据获取的二值图像进行双目相机三维重建,然后对跟踪装置进行标定,计算出跟踪装置的位姿信息,而跟踪装置与刚体进行固联,即可根据跟踪装置的位姿实时获得刚体的位姿。In the process of tracking the pose change of the rigid body, three-dimensional reconstruction of the binocular camera is carried out according to the obtained binary image, and then the tracking device is calibrated to calculate the pose information of the tracking device, and the tracking device and the rigid body are fixedly connected. The pose of the rigid body is obtained in real time according to the pose of the tracking device.

在跟踪刚体位姿变化的过程中,通过步骤1所获取的双目相机内部参数、外部参数以及畸变参数,即可获得世界坐标系与像素坐标系的空间变换关系以及畸变补偿关系,经过空间变化从而获得跟踪装置在图像中的精确位姿关系,即对跟踪装置实行了标定。In the process of tracking rigid body pose changes, the spatial transformation relationship between the world coordinate system and the pixel coordinate system and the distortion compensation relationship can be obtained through the internal parameters, external parameters and distortion parameters of the binocular camera obtained in step 1. Thus, the precise pose relationship of the tracking device in the image is obtained, that is, the tracking device is calibrated.

在跟踪刚体位姿变化的过程中,所使用的跟踪装置为固联在一起位于同一平面且不共线的四个反光小球,小球带有逆向反射涂层,能够实现可靠跟踪。In the process of tracking the change of rigid body pose, the tracking device used is four reflective spheres that are fixed together and located on the same plane and are not collinear.

本实施例中,跟踪装置和标定装置均为带有四个反光小球的刚体,结构可以相同,也可以不同,大小和尺寸可以相同,也可以不同,只要满足位于同一平面且不共线即可。In this embodiment, the tracking device and the calibration device are both rigid bodies with four reflective balls, the structures may be the same or different, and the sizes and dimensions may be the same or different, as long as they are located on the same plane and are not collinear Can.

在跟踪刚体位姿变化的过程中,在患者术区的骨组织和脊柱内镜末端安装跟踪装置,将跟踪装置分别固定在骨组织和脊柱内镜上,实时获取术中脊柱内镜在患者体内位置和姿态的二维视频影像。In the process of tracking rigid body posture changes, a tracking device is installed on the bone tissue and the end of the spinal endoscope in the patient's operating area, and the tracking device is fixed on the bone tissue and the spinal endoscope respectively, and the intraoperative spinal endoscope is obtained in the patient's body in real time. 2D video image of position and attitude.

步骤3:将术前CT影像与脊柱内镜实时图像进行影像融合与配准,获取病变部位影像与实际物理病变区域对应关系;Step 3: Perform image fusion and registration on the preoperative CT image and the real-time spinal endoscopic image to obtain the corresponding relationship between the image of the lesion and the actual physical lesion area;

在影像融合与配准过程中,根据患者术前CT影像,重建患者器官和目标组织的断层图像,得到三维可视化影像模型,同时结合内镜实时图像,建立三维影像与二维视频影像的位置对应关系,实现影像融合与配准。In the process of image fusion and registration, the tomographic images of the patient's organs and target tissues are reconstructed according to the patient's preoperative CT images, and a 3D visual image model is obtained. At the same time, combined with the real-time endoscopic images, the position correspondence between the 3D image and the 2D video image is established. relationship to achieve image fusion and registration.

本实施例中,CT图像三维重建经过三个阶段:基于高斯滤波的CT原始数据去噪、基于反距离加权插值法的中间断层图像重建、基于光线投射法的CT三维重建绘制。In this embodiment, the three-dimensional reconstruction of CT images goes through three stages: denoising of CT raw data based on Gaussian filtering, reconstruction of intermediate tomographic images based on inverse distance weighted interpolation, and three-dimensional CT reconstruction and rendering based on ray projection.

基于高斯滤波的CT原始数据去噪,考虑到邻域像素点距离对权重的影响,采用距离加权函数分配权重,即The CT raw data denoising based on Gaussian filtering, considering the influence of the distance of the neighboring pixels on the weight, uses the distance weighting function to assign the weight, namely

f(x, y) = \sum_{(i,j) \in M} u(i, j)\, v(i, j), \qquad \sum_{(i,j) \in M} u(i, j) = 1

Here the original image v(i, j) is assumed to be an N×N matrix, M denotes the set of CT image pixel coordinates, n the total number of coordinate points in M, u(i, j) the weight value (the weights accumulate to 1), and f(x, y) the filtered, denoised image.
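
A minimal sketch of such a normalized distance-weighted filter follows, assuming Gaussian weights over a square neighborhood; the text only pins down the properties above (weights summing to 1), so sigma and the window radius are illustrative.

```python
import numpy as np

def gaussian_denoise(v, sigma=1.0, radius=2):
    """Distance-weighted smoothing: each output pixel is a weighted sum of
    its neighborhood, with weights normalized to sum to 1."""
    h, w = v.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    u = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2))
    u /= u.sum()                        # weights accumulate to 1, as in the text
    padded = np.pad(v, radius, mode='edge')
    f = np.zeros_like(v, dtype=float)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            f += u[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return f

# A constant image is a fixed point of any normalized smoothing filter.
flat = np.full((8, 8), 100.0)
out = gaussian_denoise(flat)
```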

The inverse-distance-weighted interpolation reconstructs the intermediate pixels of the acquired CT slices so that a smooth reconstructed surface is obtained, improving the quality and accuracy of the three-dimensional reconstruction. The opacity at the neighboring data points and the sampling point can be expressed as:

f(M) = \sum_{i=1}^{n} \mu_i\, f(M_i)

subject to:

\mu_i = \frac{1/l_i}{\sum_{j=1}^{n} 1/l_j}, \qquad \sum_{i=1}^{n} \mu_i = 1

Here M_i(x_i, y_i, z_i) denotes the other neighboring data points in the voxel cube cell containing the sampling point M(x, y, z), f(M_i) the color value at the neighboring data points and the sampling point, μ_i the weight of the distance from each neighboring point to the sampling point, and l_i the Euclidean distance from the neighboring point to the sampling point, for i = 1, 2, ..., n.
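
Assuming the standard inverse-distance weighting with exponent 1 (the patent does not state the exponent), the interpolation can be sketched as:

```python
import numpy as np

def idw(sample, points, values, eps=1e-12):
    """Inverse-distance-weighted interpolation of scalar values f(M_i) at
    neighboring points M_i onto the sampling point M; weights mu_i sum to 1."""
    d = np.linalg.norm(points - sample, axis=1)   # Euclidean distances l_i
    w = 1.0 / (d + eps)                           # eps guards against d == 0
    mu = w / w.sum()
    return float(mu @ values)

# The midpoint of two equidistant neighbors gets the average of their values.
pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
vals = np.array([10.0, 30.0])
mid = idw(np.array([1.0, 0.0, 0.0]), pts, vals)
```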

In the ray-casting three-dimensional CT rendering, the opacities and color values of all sampling points along a ray are composited to obtain the final color of the corresponding pixel, which can be expressed as:

S_o = S_i + (1 - T_i)\, T_n S_n, \qquad T_o = T_i + (1 - T_i)\, T_n

Here S_i and T_i denote the color and opacity accumulated before the ray reaches the current sampling point, S_o and T_o the color and opacity after the ray passes the sampling point, and S_n and T_n the color and opacity of the current sampling point.
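
Under the usual front-to-back reading of these symbols (an assumption on my part; only the symbol roles are stated in the text), the per-ray accumulation can be sketched as:

```python
def composite(samples):
    """Front-to-back ray compositing: fold each sample's color S_n and
    opacity T_n into the accumulated pixel color S and opacity T."""
    S, T = 0.0, 0.0                  # accumulated color and opacity
    for S_n, T_n in samples:         # samples ordered from the eye outward
        S = S + (1.0 - T) * T_n * S_n
        T = T + (1.0 - T) * T_n
        if T >= 1.0:                 # early ray termination
            break
    return S, T

# A fully opaque first sample hides everything behind it.
color, alpha = composite([(0.8, 1.0), (0.2, 0.5)])
```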

During image fusion and registration, an iterative closest point (ICP) algorithm on point cloud data performs the fusion and registration: the three-dimensional point cloud reconstructed from the preoperative CT images and the surface point cloud acquired by intraoperative scanning are moved repeatedly, with the sum of least-squares point-to-point distances as the criterion, until the best overlap is obtained; this yields the coordinate transform between the two point clouds.

Let the point clouds to be registered be U = {u_i ∈ R³, i = 1, 2, ..., m} and V = {v_i ∈ R³, i = 1, 2, ..., n}, and pair each point u_i in U with its nearest point v_i in V. The sum of the Euclidean distances between the closest points of the two data sets can then be expressed as:

D(R, T) = \sum_{i=1}^{n} \left\| v_i - (R\, u_i + T) \right\|^2

where R is the rotation matrix and T the translation vector. R and T are solved by continual iteration so that D(R, T) is minimized.
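
The cost D(R, T) with nearest-neighbor correspondences can be sketched directly; the brute-force closest-point search and the point sets are illustrative, and a full ICP would alternate this cost with a pose update.

```python
import numpy as np

def icp_cost(U, V, R, T):
    """D(R, T): sum of squared distances from each transformed point
    R u_i + T to its nearest neighbor in V (brute-force search)."""
    moved = (R @ U.T).T + T
    d2 = ((moved[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).sum())

U = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
cost_aligned = icp_cost(U, U.copy(), np.eye(3), np.zeros(3))
cost_shifted = icp_cost(U, U + np.array([0.0, 0.0, 0.5]), np.eye(3), np.zeros(3))
```

Perfectly overlapping clouds give zero cost; any residual misalignment shows up as a positive sum of squared distances.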

Step 4: jointly analyze the rigid-body pose information acquired by the binocular camera and the image fusion and registration data, and locate the local lesion target in the operative field.

From the fusion and registration data of step 3, the local lesion target data in the operative field are obtained. Through the world-to-pixel spatial transform matrix from step 1, the pose information of the local lesion target is solved inversely; further combining it with the known rigid coupling matrix between the patient's spine and its tracking device gives the expected pose of the spine-mounted tracking device.
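
The inverse solving and chaining of fixed coupling matrices amounts to composing and inverting 4x4 homogeneous transforms; the frame names and offsets below are invented for illustration.

```python
import numpy as np

def inv_se3(T):
    """Invert a 4x4 homogeneous rigid transform analytically:
    inv([R t; 0 1]) = [R^T -R^T t; 0 1]."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Hypothetical chain: the camera->tracker pose composed with the fixed
# tracker->spine coupling gives the camera->spine pose.
T_cam_tracker = np.eye(4); T_cam_tracker[:3, 3] = [0.1, 0.0, 0.5]
T_tracker_spine = np.eye(4); T_tracker_spine[:3, 3] = [0.0, 0.02, 0.0]
T_cam_spine = T_cam_tracker @ T_tracker_spine
roundtrip = inv_se3(T_cam_spine) @ T_cam_spine
```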

The intraoperative poses of the tracking devices rigidly attached to the rigid bodies (the patient's spine, the robot end, the spinal endoscope, the connecting flange, the connecting tooling or fixture, surgical instruments, and so on), obtained by the binocular camera in step 2, are continually compared with the expected values; when the least-squares residual reaches its minimum, the spinal endoscopic surgical robot is considered to have been positioned at the local lesion target in the operative field.

A binocular-camera optical positioning system performs the tracking and localization, using optical feedback to realize a non-contact positioning method. The binocular camera receives the optical information reflected by the retroreflective spheres, and the system solves for the pose from that feedback. The approach is simple and practical, effectively realizes hand-eye calibration and rigid-body tracking, and raises the automation of the spinal endoscopic surgical robot; it overcomes the complicated and unstable traditional operation, effectively improving surgical accuracy and stability and reducing intraoperative risks and postoperative complications.

Preoperative CT images and the binocular camera guide endoscope tracking and localization through visualized images, greatly reducing the number of intraoperative CT scans, the radiation received by patients and medical staff during surgery, and the radiation damage that CT fluoroscopic guidance inflicts on medical personnel.

The principle by which the binocular camera tracks rigid-body pose changes is shown in FIG. 2:

Step 1: a passive binocular camera is used; the left and right cameras each emit infrared light toward the calibration device, receive the infrared light it reflects, and capture binary images, from which the internal parameters, external parameters and distortion parameters of the camera are solved;

Step 2: perform three-dimensional binocular reconstruction from the acquired binary images;

Step 3: calibrate the tracking device;

Step 4: compute the pose information of the tracking device;

Step 5: rigidly attach the tracking device to the rigid body;

Step 6: compute the pose of the tracking device in real time, thereby obtaining the pose of the rigid body.

Here, tracking devices are mounted on the bone tissue of the patient's operative field and on the distal end of the spinal endoscope, fixed to each by bolts, providing real-time two-dimensional video of the intraoperative position and attitude of the endoscope inside the patient.

The above are merely preferred embodiments of the present invention and are not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

1. A calibration method for a spinal endoscopic surgery robot, characterized by comprising the following steps:
step 1: calibrating the binocular camera with a calibration device and determining the intrinsic parameters, extrinsic parameters, and distortion parameters of the binocular camera;
step 2: connecting a tracking device to a rigid body, the rigid body comprising the patient's spine, the robot end, the spinal endoscope, a flange connecting the robot and the spinal endoscope, a connecting tool, a clamp, or a surgical instrument; tracking the tracking device with the binocular camera and acquiring pose-change information of the rigid body;
step 3: performing image fusion and registration of the preoperative CT image and the spinal endoscope image to obtain the correspondence between the lesion-site image and the actual physical lesion area;
step 4: locating the local lesion target in the operative area according to the rigid-body pose information acquired by the binocular camera and the image fusion and registration data of step 3, thereby completing calibration.
2. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in step 1, the two cameras of the binocular camera each emit infrared light toward the calibration device, and the left and right cameras receive the infrared light reflected by the calibration device to obtain binary images, from which the intrinsic parameters, extrinsic parameters, and distortion parameters of the binocular camera are obtained.
3. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in step 1, the calibration device consists of four small spheres that are fixedly connected together, coplanar, and non-collinear; each sphere carries a retro-reflective coating, so that the infrared light emitted by the binocular camera is reflected back to the binocular camera to obtain a binary image.
4. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in step 2, the tracking device consists of four small spheres that are fixedly connected together, coplanar, and non-collinear; each sphere carries a retro-reflective coating and is connected to the rigid body, so that the infrared light emitted by the binocular camera is reflected back to the binocular camera to obtain a binary image of the rigid body.
5. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: the spheres of the tracking device and those of the calibration device have the same or different diameters.
6. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in step 2, three-dimensional reconstruction is performed and the tracking device is calibrated from the binary images acquired by the binocular camera to obtain the pose information of the tracking device, and the pose of the rigid body is obtained from the pose information of the tracking device.
7. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in step 2, during tracking of the rigid-body pose change, tracking devices are connected to the bone tissue of the patient's operative area and to the end of the spinal endoscope, fixed respectively to the bone tissue and the spinal endoscope, and two-dimensional image information of the intraoperative position and attitude of the spinal endoscope within the patient's body is obtained.
8. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in step 3, tomographic images of the patient's organs and target tissue are reconstructed from the preoperative CT image to obtain a three-dimensional visual image model; the positional correspondence between the three-dimensional visual image model and the two-dimensional image information is then obtained from the spinal endoscope image, realizing image fusion and registration.
9. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in steps 1 and 2, based on the intrinsic and extrinsic parameters of the binocular camera, the spatial transformations between the camera coordinate system and the pixel coordinate system and between the world coordinate system and the camera coordinate system are obtained; composing them yields the transformation between the world coordinate system and the pixel coordinate system, from which the pose in actual physical space of the rigid body seen in the image is obtained.
10. The method for calibrating a spinal endoscopic surgery robot according to claim 1, wherein: in step 4, local lesion target data of the operative area are obtained from the image fusion and registration data of step 3; the pose information of the local lesion target is recovered by inverse solution through the world-to-pixel spatial transformation matrix obtained in steps 1 and 2; and, combined with the known rigid-coupling relationship matrix between the patient's spine and its tracking device, the required pose of the patient's spine tracking device is obtained.

Publications (2)

- CN115089293A (application, published 2022-09-23)
- CN115089293B (grant, published 2025-03-28)


Cited By (2)

* Cited by examiner, † Cited by third party

- CN117679173A* — priority 2024-01-03, published 2024-03-12 — 骨圣元化机器人(深圳)有限公司 — Robot-assisted navigation spine surgical system and surgical equipment
- CN117830438A* — priority 2024-03-04, published 2024-04-05 — 数据堂(北京)科技股份有限公司 — Laser radar and camera joint calibration method based on specific markers

Citations (11)

* Cited by examiner, † Cited by third party

- CN101862205A* — priority 2010-05-25, published 2010-10-20 — Fourth Military Medical University of the PLA — An intraoperative tissue tracking method combined with preoperative imaging
- CN107874832A* — priority 2017-11-22, published 2018-04-06 — 合肥美亚光电技术股份有限公司 — Bone surgery navigation system and method
- WO2018075784A1* — priority 2016-10-21, published 2018-04-26 — Syverson Benjamin — Methods and systems for setting trajectories and target locations for image guided surgery
- CN109925057A* — priority 2019-04-29, published 2019-06-25 — 苏州大学 — Augmented-reality-based minimally invasive spine surgery navigation method and system
- CN110946654A* — priority 2019-12-23, published 2020-04-03 — 中国科学院合肥物质科学研究院 — An orthopedic surgery navigation system based on multimodal image fusion
- CN111281545A* — priority 2020-03-02, published 2020-06-16 — 北京大学第三医院 — Spinal laminectomy surgical equipment
- CN113925615A* — priority 2021-10-26, published 2022-01-14 — 北京歌锐科技有限公司 — Minimally invasive surgery equipment and control method thereof
- CN114129262A* — priority 2021-11-11, published 2022-03-04 — 北京歌锐科技有限公司 — Method, equipment and device for tracking the surgical position of a patient
- CN114176772A* — priority 2021-12-03, published 2022-03-15 — 上海由格医疗技术有限公司 — Preoperative positioning method, system, medium and computer equipment based on 3D vision
- CN114224489A* — priority 2021-12-12, published 2022-03-25 — 浙江德尚韵兴医疗科技有限公司 — Trajectory tracking system for surgical robot and tracking method using the same
- WO2022062464A1* — priority 2020-09-27, published 2022-03-31 — 平安科技(深圳)有限公司 — Computer-vision-based hand-eye calibration method and apparatus, and storage medium




Similar Documents

- CN114041875B — An integrated surgical positioning and navigation system
- CN110946654B — Bone surgery navigation system based on multimodal image fusion
- CN107468350B — A special calibrator for three-dimensional images, surgical positioning system, and positioning method
- JP7706508B2 — 3D and 2D image registration for surgical navigation and robotic guidance without using radiopaque fiducials in the images
- US7889905B2 — Fast 3D-2D image registration method with application to continuously guided endoscopy
- CN115089293A — A calibration method for a spinal endoscopic surgical robot (the present publication)
- CN108090954A — Method for abdominal cavity environment map reconstruction and laparoscope positioning based on image features
- WO2022218388A1 — Method and apparatus for positioning by means of X-ray images, X-ray machine, and readable storage medium
- CN110992431A — A combined three-dimensional reconstruction method for binocular endoscopic soft-tissue images
- Lapeer et al. — Image-enhanced surgical navigation for endoscopic sinus surgery: evaluating calibration, registration and tracking
- CN116883471B — Line-structured-light contact-free point-cloud registration method for percutaneous chest and abdomen puncture
- Jiang et al. — Optical positioning technology of an assisted puncture robot based on binocular vision
- CN119313824A — Abdominal cavity reconstruction and lesion localization method, system and device based on binocular endoscope
- CN114191078B — Endoscopic surgery navigation robot system based on mixed reality
- Pan et al. — Multi-modality-guidance-based surgical navigation for percutaneous endoscopic transforaminal discectomy
- US12035974B2 — Method for determining target spot path
- Zhang et al. — Catheter localization for vascular interventional robot with conventional single C-arm
- CN117204791A — Endoscopic instrument guidance method and system
- CN116650117A — Neuronavigation surface-matching spatial registration system based on a mechanical arm and three-dimensional scanner, and spatial registration method thereof
- US12279781B2 — 2D-image-guided robotic distal locking system
- Vogt — Augmented light field visualization and real-time image enhancement for computer-assisted endoscopic surgery
- TWI884859B — Calibration device for medical imaging systems
- WO2021012142A1 — Surgical robot system and control method therefor
- Du et al. — Endoscope posture calculation method for minimally invasive surgery
- CN115227396A — Visual navigation method and guiding device for the narrow oral and throat space based on artificial feature targets

Legal Events

- PB01 — Publication
- SE01 — Entry into force of request for substantive examination
- GR01 — Patent grant
