CN116883471B - Line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture - Google Patents


Info

Publication number
CN116883471B
CN116883471B (application CN202310975574.XA; publication CN116883471A)
Authority
CN
China
Prior art keywords
chest
abdomen
dimensional
registration
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310975574.XA
Other languages
Chinese (zh)
Other versions
CN116883471A (en)
Inventor
姜杉
李煜华
杨志永
朱涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University
Priority to CN202310975574.XA
Publication of CN116883471A
Application granted
Publication of CN116883471B
Status: Active
Anticipated expiration

Abstract

The invention discloses a line structured light non-contact point cloud registration method for percutaneous puncture of the chest and abdomen. The method comprises: building a registration system; scanning a chest and abdomen target area; extracting the line structured light from the original scanned images; converting the two-dimensional line structured light information in the extracted images into three dimensions to complete depth reconstruction based on stereo vision; screening the three-dimensional coordinate information into a point cloud set and constructing a physical surface point cloud of the chest and abdomen target area for registration; and registering the physical surface point cloud of the chest and abdomen target area with the three-dimensional chest and abdomen medical image. The disclosed registration method does not touch the chest and abdomen surface, attaches no additional markers, and does not interfere with the original surgical workflow, thereby achieving efficient and accurate registration that is contactless, fully automatic, deformation-free, and independent of any marker; it reduces the complexity of registration and greatly improves registration accuracy and efficiency.

Description

Translated from Chinese
Line structured light contactless point cloud registration method for percutaneous puncture of the chest and abdomen

Technical Field

The invention relates to the field of medical image navigation, and in particular to a line structured light non-contact point cloud registration method for percutaneous puncture of the chest and abdomen.

Background Art

In recent years, with the urgent demand for tumor treatment, the growing range of image-guided minimally invasive treatments, and accumulated clinical experience, detection and local treatment techniques such as biopsy and brachytherapy seed implantation performed through image-guided percutaneous minimally invasive surgery of the chest and abdomen have been widely valued and recognized. Image-guided biopsy is the most important part of diagnostic pathology. Because lesions are small, tissue extraction is strongly affected by complex anatomical structures, natural breathing, and cardiac pulsation, so the puncture is prone to failure and endangers patient safety. At present, the success rate of single-needle puncture is only 70%.

To improve the accuracy of percutaneous puncture surgery of the chest and abdomen and use computer image information to help locate chest and abdomen lesions, the three-dimensional chest and abdomen medical image must be registered with the chest and abdomen in real space. This requires registering the real-space chest and abdomen coordinate system with the coordinate system of the computer medical image, thereby enabling percutaneous chest and abdomen puncture surgery under medical image navigation; the effect and efficiency of this registration determine the quality of the subsequent motion modeling. The registration method is therefore a crucial technical point. For image registration in the navigation of percutaneous chest and abdomen puncture surgery, related research at home and abroad falls mainly into the following categories:

(1) Registration based on inherent feature points of the physical object: geometric landmark points with distinctive features, such as corner points and intersections, are picked up with an optical probe tracked by an optical localizer, while the corresponding landmark points are selected in computer software; a least-squares calculation then yields the optimal transformation matrix. This method is currently widely used in neurosurgical medical image navigation, using facial feature points such as the corners of the eyes and the tip of the nose. The landmark points are easy to acquire, but the probe tip must touch the chest and abdomen surface when picking points, deforming the soft tissue and seriously degrading registration accuracy.

(2) Artificial marker method: several marker points visible in imaging are pasted on the surface of the physical object and a scan of the object is acquired; the pasted markers are then selected in the image in computer software while their spatial coordinates are obtained with an optical localizer, and least-squares point-pair registration yields the optimal transformation matrix between the physical space and the medical image space. This method is relatively accurate but cumbersome, and pasting markers requires direct contact with the object, leaving traces on the chest and abdomen surface, so it cannot meet the requirements of percutaneous chest and abdomen puncture surgery.

Therefore, a registration method for soft tissue such as the chest and abdomen that is simple to operate, highly automated, and contactless needs to be developed.

Summary of the Invention

In view of the shortcomings of the prior art, the technical problem to be solved by the present invention is to provide a line structured light non-contact point cloud registration method for percutaneous puncture of the chest and abdomen.

The technical solution of the present invention is to provide a line structured light non-contact point cloud registration method for percutaneous puncture of the chest and abdomen, characterized in that the method comprises the following steps:

Step 1. Build a line structured light non-contact point cloud registration system for percutaneous puncture of the chest and abdomen. The system comprises a binocular camera, a pan/tilt unit, a line structured light source, a motion control board, and a computer.

The line structured light source is fixed on the pan/tilt unit; the motion control board controls the motion of the pan/tilt unit; the line structured light emitted by the source can be projected onto the chest and abdomen target area; the binocular camera captures the chest and abdomen target area with the projected line structured light; the computer receives the original scanned images from the binocular camera, processes them, and computes the registration result.

Step 2. Scan the chest and abdomen target area. First start the binocular camera and determine the chest and abdomen target area. Then the line structured light source projects line structured light onto the boundary of the target area to obtain the limit positions of the area; from these limit positions, the motion range and rotation angles of the pan/tilt unit are determined. The pan/tilt unit then rotates while the source modulates and emits line structured light, projecting it clearly onto the target area. Finally, the optical parameters of the binocular camera are adjusted to adapt to the surroundings until ambient light and line structured light can be clearly distinguished, after which the binocular camera captures the original scanned images.

Step 3. Extract the line structured light from the original scanned images: transfer the original scanned images to the computer and apply preliminary processing to filter out redundant image information and further increase the contrast between ambient light and line structured light, obtaining the line structured light extraction images; the extraction images comprise a left-eye image and a right-eye image.

Step 4. Convert the two-dimensional line structured light information in the extraction images into three dimensions to complete depth reconstruction based on stereo vision. First complete self-calibration of the binocular camera and adaptively obtain the best stereo-vision state. Then obtain the pixel coordinates of each two-dimensional point of the line structured light in both the left-eye and right-eye images and establish a mapping relation, so that all two-dimensional points of the line structured light information are mapped one to one, yielding a set of two-dimensional matching point pairs. Finally, triangulate all two-dimensional matching point pairs to obtain the corresponding three-dimensional point of each pair, and obtain the three-dimensional coordinates of these points in the world coordinate system, completing depth reconstruction based on stereo vision.

Step 5. Screen the three-dimensional coordinate information into a point cloud set and construct the physical surface point cloud of the chest and abdomen target area for registration. First screen the three-dimensional coordinates, filtering out erroneous results and environmental noise to retain coordinates meeting the accuracy condition, which form the point cloud set. Then, according to the actual clinical application and the required algorithm execution speed, apply voxel-grid downsampling to the point cloud set to obtain a uniform physical surface point cloud best suited for registration.

Step 6. Register the physical surface point cloud of the chest and abdomen target area with the three-dimensional chest and abdomen medical image. First, enumerate all spatial pose states the three-dimensional medical image may assume in the computer medical image coordinate system, and record the initial transformation matrix of each pose state relative to the initial pose. Then compute the centroid of the physical surface point cloud and the centroid of the three-dimensional medical image in each pose state; translate the medical image in each pose state so that its centroid aligns with the centroid of the physical surface point cloud, recording all centroid-translation matrices. Then traverse all translated pose states and perform ICP registration, obtaining all ICP transformation matrices; compute the ICP registration error RMSEi for each pose state. Compare all ICP registration errors RMSEi and obtain the minimum ICP registration error RMSEk together with its corresponding pose state. Finally, compute the high-iteration transformation matrix for the pose state corresponding to the minimum error RMSEk to obtain the registration result.

与现有技术相比,本发明的有益效果在于:Compared with the prior art, the beneficial effects of the present invention are:

(1) The registration method of the present invention does not touch the chest and abdomen surface, attaches no additional markers, and does not interfere with the original surgical workflow. It achieves efficient and accurate registration that is contactless, fully automatic, deformation-free, and independent of any marker, reducing registration complexity and greatly improving registration accuracy and efficiency.

(2) The present invention requires only a single fine registration, eliminating the tedium of coarse registration and reducing registration complexity; the whole process executes quickly, improving registration efficiency.

(3) The present invention relies on line structured light to acquire the surface point cloud of the chest and abdomen area, without contacting the chest and abdomen.

(4) The present invention is highly automated and requires no manual operation, reducing the operator's burden.

(5) The present invention can quickly, accurately, and efficiently register the real-space chest and abdomen coordinate system with the computer medical image coordinate system. Together with chest and abdomen medical image navigation software, it provides a key foundation for subsequent image-guided percutaneous puncture of soft tissue such as the chest and abdomen, guiding physicians to perform high-precision, high-efficiency percutaneous chest and abdomen puncture surgery, reducing the operating burden and the contact with the chest and abdomen, improving surgical accuracy, and ensuring surgical safety, with corresponding social value and economic benefit.

Brief Description of the Drawings

Figure 1 is a flow chart of the present invention;

Figure 2 is a structural block diagram of the registration system of the present invention;

Figure 3 shows the line structured light extraction image of step 3 of the present invention;

Figure 4 shows the effect of the stereo-vision-based depth reconstruction of step 4 of the present invention;

Figure 5 shows the physical surface point cloud of the chest and abdomen target area used for registration in step 5 of the present invention;

Figure 6 shows all spatial pose states that the three-dimensional chest and abdomen medical image may assume in the computer medical image coordinate system in step 6 of the present invention.

In the figures: binocular camera 1, pan/tilt unit 2, line structured light source 3, motion control board 4, computer 5, chest and abdomen target area 6, physical surface point cloud 7, three-dimensional chest and abdomen medical image 8.

Detailed Description of the Embodiments

Specific embodiments of the present invention are given below. They are intended only to further illustrate the present invention and do not limit the protection scope of the claims.

The present invention provides a line structured light non-contact point cloud registration method for percutaneous puncture of the chest and abdomen (the method for short), characterized in that it comprises the following steps:

Step 1. Build a line structured light non-contact point cloud registration system for percutaneous puncture of the chest and abdomen (the system for short). The system comprises a binocular camera 1, a pan/tilt unit 2, a line structured light source 3, a motion control board 4, and a computer 5.

The line structured light source 3 is fixed on the pan/tilt unit 2; the motion control board 4 transmits motor motion data to the motors of the pan/tilt unit 2 to control its movement; the line structured light emitted by the source 3 can be projected onto the chest and abdomen target area 6; the binocular camera 1 captures the target area 6 with the projected line structured light; the computer 5 receives the original scanned images from the binocular camera 1, processes them, and computes the registration result.

Preferably, in step 1, the pan/tilt unit 2 has two rotational degrees of freedom, and the motion control board 4 implements intelligent motion control of the pan/tilt unit 2. The motion control board 4 is an Arduino board, preferably an Arduino UNO.

Step 2. Scan the chest and abdomen target area 6. First start the binocular camera 1 and determine the chest and abdomen target area 6. Then the line structured light source 3 projects line structured light onto the boundary of the target area 6 to obtain the limit positions of the area; from these limit positions, the motion range and rotation angles of the pan/tilt unit 2, which is fixedly connected to the source 3, are determined. The pan/tilt unit 2 then rotates while the source 3 modulates and emits line structured light, projecting it clearly onto the target area 6. Finally, the optical parameters of the binocular camera 1 are adjusted to adapt to the surroundings until ambient light and line structured light can be clearly distinguished, after which the binocular camera 1 captures the original scanned images with high-frequency exposure.
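As an illustrative sketch (not part of the patent), planning the pan/tilt sweep from the two limit positions of step 2 could look like the following; the function name, the uniform step size, and the use of degrees are assumptions:

```python
def plan_scan_angles(theta_min, theta_max, step_deg=1.0):
    """Plan the sweep angles (degrees) of the pan/tilt unit between the
    two limit positions obtained by projecting the laser line onto the
    boundary of the target area.  Illustrative only: the patent does not
    specify a uniform angular step."""
    if theta_max < theta_min:
        theta_min, theta_max = theta_max, theta_min
    n = int((theta_max - theta_min) / step_deg)
    return [theta_min + i * step_deg for i in range(n + 1)]
```

Each planned angle would then be sent to the motion control board, which drives the pan/tilt motors while the camera exposes.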

Preferably, in step 2, the optical parameters of the binocular camera 1 include the camera's exposure, brightness, and gain.

Step 3. Extract the line structured light from the original scanned images: transfer the original scanned images to the computer 5 and apply preliminary processing there to filter out redundant image information, reducing the subsequent computational burden, and to further increase the contrast between ambient light and line structured light, obtaining the line structured light extraction images (as shown in Figure 3); the extraction images comprise a left-eye image and a right-eye image.

Preferably, in step 3, the preliminary processing includes grayscale conversion, noise reduction, smoothing, and cropping.
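A minimal sketch of this preliminary processing stage, assuming grayscale conversion followed by simple intensity thresholding to separate the bright laser stripe from ambient light (the threshold value and function names are illustrative; denoising, smoothing, and cropping would accompany this in practice):

```python
import numpy as np

def preprocess(rgb, thresh=200.0):
    """Grayscale an RGB image (ITU-R BT.601 weights) and threshold it so
    that only pixels bright enough to belong to the laser stripe remain.
    The threshold is an assumed value, not from the patent."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    mask = gray >= thresh          # True where the stripe is likely present
    return gray, mask
```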

Step 4. Convert the two-dimensional line structured light information in the extraction images into three dimensions to complete depth reconstruction based on stereo vision. First complete self-calibration of the binocular camera 1 and adaptively obtain the best stereo-vision state. Then obtain the pixel coordinates of each two-dimensional point of the line structured light in both the left-eye and right-eye images and establish a mapping relation, so that all two-dimensional points are mapped one to one, yielding a set of two-dimensional matching point pairs. Finally, triangulate all two-dimensional matching point pairs to obtain the corresponding three-dimensional point of each pair, and obtain the three-dimensional coordinates of these points in the world coordinate system, completing depth reconstruction based on stereo vision (as shown in Figure 4).

Preferably, in step 4, the self-calibration of the binocular camera 1 includes calibration of its intrinsic parameter matrices, image distortion correction, and binocular epipolar rectification with vertical coordinate alignment.

Preferably, in step 4, the specific procedure for obtaining a two-dimensional matching point pair is: take the i-th two-dimensional point Li of the line structured light in the left-eye image, whose pixel row height is h(Li); using Li as the reference point, find in the line structured light of the right-eye image the pixel point Ri whose row height h(Ri) is closest to h(Li); when Li and Ri satisfy |h(Li) − h(Ri)| ≤ ε, Li and Ri form a two-dimensional matching point pair, where ε is a preset threshold set according to the epipolar rectification error.
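The matching rule above can be sketched as a brute-force nearest-row search (function and variable names are illustrative, not from the patent):

```python
def match_stripe_points(left_rows, right_rows, eps=1.0):
    """Pair left/right stripe pixels by nearest row height, keeping a
    pair only when |h(L_i) - h(R_j)| <= eps, the epipolar-rectification
    tolerance.  Returns (left_index, right_index) pairs."""
    pairs = []
    for i, hl in enumerate(left_rows):
        # index of the right-eye point with the closest row height
        j = min(range(len(right_rows)), key=lambda k: abs(right_rows[k] - hl))
        if abs(right_rows[j] - hl) <= eps:
            pairs.append((i, j))
    return pairs
```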

Preferably, in step 4, the triangulation is performed as follows: a steady-state triangulation algorithm reconstructs the depth information of each two-dimensional matching point pair, giving the corresponding three-dimensional point of each pair; the steady-state triangulation relation is shown in Equation (1):

d2 · K2⁻¹ · [u2, v2, 1]ᵀ = d1 · R · K1⁻¹ · [u1, v1, 1]ᵀ + t   (1)

In Equation (1), a three-dimensional point has pixel coordinates (u1, v1) and depth d1 in the left camera, and pixel coordinates (u2, v2) and depth d2 in the right camera; K1 and K2 are the intrinsic parameter matrices of the left and right cameras; R is the rotation matrix and t the translation vector from the left camera to the right camera.
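As an illustrative sketch (not the patent's implementation), the standard two-view relation d2·K2⁻¹·[u2, v2, 1]ᵀ = d1·R·K1⁻¹·[u1, v1, 1]ᵀ + t, consistent with the variables defined above, can be solved for the two depths by linear least squares:

```python
import numpy as np

def triangulate(uv1, uv2, K1, K2, R, t):
    """Recover depths d1, d2 from d2*K2^-1*x2 = d1*R*K1^-1*x1 + t by
    least squares, then return the 3-D point in the left-camera frame.
    Function name and solver choice are assumptions."""
    x1 = np.linalg.inv(K1) @ np.array([uv1[0], uv1[1], 1.0])
    x2 = np.linalg.inv(K2) @ np.array([uv2[0], uv2[1], 1.0])
    # Unknowns [d1, d2]:  d1*(R @ x1) - d2*x2 = -t
    A = np.stack([R @ x1, -x2], axis=1)
    d, *_ = np.linalg.lstsq(A, -np.asarray(t, float), rcond=None)
    return d[0] * x1  # 3-D point in the left camera frame
```

For example, with identity intrinsics, R = I, a baseline of 1 along X (t = [−1, 0, 0]), and matched pixels (0, 0) and (−0.2, 0), the recovered point lies on the left optical axis at depth 5.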

Step 5. Screen the three-dimensional coordinate information into a point cloud set and construct the physical surface point cloud 7 of the chest and abdomen target area 6 for registration. First screen the three-dimensional coordinates, filtering out erroneous results and environmental noise to retain coordinates meeting the accuracy condition, which form the point cloud set. Then, according to the actual clinical application and the required algorithm execution speed, apply voxel-grid downsampling to the point cloud set to obtain a uniform physical surface point cloud 7 best suited for registration (as shown in Figure 5).

Preferably, in step 5, erroneous results and environmental noise are filtered using a difference criterion, a differential criterion, and a difference-value criterion to select the three-dimensional coordinates that meet the accuracy condition.

Preferably, in step 5, the difference criterion captures how the Z coordinate of two adjacent points Pi and Pi−1 changes along the Y direction in the computer medical image coordinate system; when the difference value exceeds the threshold ε1, point Pi is removed. In this embodiment, ε1 is set to 4.5 according to actual test results.

Preferably, in step 5, the differential criterion is the ratio formed from the Z-coordinate difference of two adjacent points Pi and Pi−1 in the computer medical image coordinate system; when the differential value exceeds the threshold ε2, point Pi is removed. In this embodiment, ε2 is set to 0.1 according to actual test results.

Preferably, in step 5, the difference-value criterion is the Z-direction difference between two adjacent points Pi and Pi−1 in the computer medical image coordinate system; when the difference exceeds the threshold ε3, point Pi is removed. In this embodiment, ε3 is set to 15 according to actual test results.
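One plausible reading of the three screening rules (the figures defining the exact formulas are not reproduced on this page, so the precise difference/differential forms below are an interpretation, and the thresholds follow the embodiment's test values):

```python
def filter_points(points, eps1=4.5, eps2=0.1, eps3=15.0):
    """Screen scan-ordered (x, y, z) points: reject a point when the Z
    change to its predecessor exceeds eps1, when the slope |dZ/dY|
    exceeds eps2, or when the absolute Z difference exceeds eps3.
    Interpretation of the patent's criteria, not a verbatim port."""
    if not points:
        return []
    kept = [points[0]]
    for prev, cur in zip(points, points[1:]):
        dz = cur[2] - prev[2]
        dy = cur[1] - prev[1]
        slope = abs(dz / dy) if dy != 0 else float("inf")
        if abs(dz) > eps1 or slope > eps2 or abs(dz) > eps3:
            continue  # rejected as noise / erroneous reconstruction
        kept.append(cur)
    return kept
```

Note the sketch compares each point with its immediate scan neighbour, as the text describes for adjacent points Pi and Pi−1.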

Preferably, in step 5, the voxel-grid downsampling is implemented with the VoxelGrid filter of the PCL library. The algorithm voxelizes the space into a grid and replaces all points within each voxel by the voxel centroid, thereby filtering the point cloud.
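The centroid-per-voxel idea behind PCL's VoxelGrid filter can be sketched in a few lines of NumPy (this is a re-implementation of the concept, not the PCL API):

```python
import numpy as np

def voxel_downsample(points, voxel=5.0):
    """Replace all points falling in the same cubic voxel by their
    centroid, mirroring the behaviour of a VoxelGrid filter.  The voxel
    edge length is an assumed example value."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / voxel).astype(int)       # integer voxel index per point
    buckets = {}
    for key, p in zip(map(tuple, keys), pts):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(v, axis=0) for v in buckets.values()])
```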

Step 6. Register the physical surface point cloud 7 of the chest and abdomen target area 6 with the three-dimensional chest and abdomen medical image 8. First, enumerate all spatial pose states the three-dimensional medical image 8 may assume in the computer medical image coordinate system (as shown in Figure 6), and record the initial transformation matrix of each pose state relative to the initial pose. Then compute the centroid of the physical surface point cloud 7 and the centroid of the three-dimensional medical image 8 in each pose state; translate the medical image 8 in each pose state so that its centroid aligns with the centroid of the physical surface point cloud 7, recording all centroid-translation matrices. Then traverse all translated pose states and perform ICP registration, obtaining all ICP transformation matrices; compute the ICP registration error RMSEi for each pose state. Compare all ICP registration errors RMSEi and obtain the minimum ICP registration error RMSEk together with its corresponding pose state. Finally, compute the high-iteration transformation matrix for the pose state corresponding to the minimum error RMSEk to obtain the registration result.
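The core of each candidate-pose evaluation, a point-to-point ICP loop built on the least-squares rigid fit (Kabsch/SVD), can be sketched as follows; this is a generic toy ICP under assumed names, not the patent's exact implementation:

```python
import numpy as np

def best_fit_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q
    via SVD (Kabsch), the workhorse of each ICP iteration."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=20):
    """Toy point-to-point ICP: brute-force nearest neighbours in Q for
    each point of P, rigid fit, repeat; returns the aligned copy of P
    and the final root-mean-square residual."""
    P = np.asarray(P, float).copy()
    Q = np.asarray(Q, float)
    for _ in range(iters):
        nn = Q[np.argmin(((P[:, None] - Q[None]) ** 2).sum(-1), axis=1)]
        R, t = best_fit_transform(P, nn)
        P = P @ R.T + t
    nn = Q[np.argmin(((P[:, None] - Q[None]) ** 2).sum(-1), axis=1)]
    rmse = float(np.sqrt(((P - nn) ** 2).sum(-1).mean()))
    return P, rmse
```

In the method of step 6, such a loop would run once per enumerated, centroid-aligned pose state, and the pose with the smallest residual would be refined further.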

Preferably, the three-dimensional chest and abdomen medical image 8 is obtained by three-dimensional reconstruction of chest and abdomen medical images.

Preferably, in step 6, the enumeration is performed as follows: cycle through rotations about the X, Y, and Z axes in the computer medical image coordinate system with step size α, obtaining all poses of the three-dimensional chest and abdomen medical image 8, i.e., (360/α)³ spatial pose states.
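Cycling each of the three rotation axes with an angular step α yields (360/α)³ candidate orientations, assuming a full-circle traversal per axis; a small sketch (names and the default step are illustrative):

```python
import itertools

def enumerate_poses(alpha=90):
    """Enumerate candidate (rx, ry, rz) rotation triples in degrees,
    stepping each axis by alpha over a full circle; the pose count is
    (360 // alpha) ** 3."""
    n = 360 // alpha
    angles = [i * alpha for i in range(n)]
    return list(itertools.product(angles, repeat=3))
```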

Preferably, in step 6, the ICP registration error RMSEi is calculated as shown in Equation (2):

RMSEi = sqrt( (1/Np) · Σj=1..Np ‖ pj − (R·qj + t) ‖² )    (2)

In Equation (2), Np is the number of points in the spatial surface point cloud, pj is the j-th point of the spatial surface point cloud, qj is the point of the three-dimensional chest and abdomen medical image (8) in the i-th spatial pose state that is nearest to the surface point pj, and R and t are the rotation matrix and translation vector of the ICP transformation matrix.
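Read numerically, Equation (2) is the root-mean-square nearest-point distance after applying the ICP result. A minimal numpy sketch (the function name and the brute-force nearest-neighbour search are my assumptions):

```python
import numpy as np

def icp_rmse(surface_pts, image_pts, R, t):
    """RMSEi of Eq. (2): surface_pts are the Np surface points pj; image_pts are
    the posed medical-image surface points; (R, t) is the ICP rigid transform."""
    moved = image_pts @ R.T + t  # apply the ICP rotation and translation
    # squared distance from every surface point to every transformed image point
    d2 = ((surface_pts[:, None, :] - moved[None, :, :]) ** 2).sum(-1)
    # each pj is paired with its nearest transformed image point qj
    return np.sqrt(d2.min(axis=1).mean())
```

A perfect registration drives every nearest-point distance to zero, so RMSEi → 0; comparing RMSEi across poses is what selects the minimum-error pose state RMSEk.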

Matters not described in the present invention are applicable to the prior art.

Claims (10)

step 2, scanning a chest and abdomen target area (6): firstly, starting the binocular camera (1) and identifying the chest and abdomen target area (6); then, the line-structured-light emitting source (3) emits line structured light toward the boundary of the chest and abdomen target area (6) to obtain the limit positions of the chest and abdomen target area (6), and the movement range and rotation angle of the cradle head (2) are determined from those limit positions; then the cradle head (2) rotates while the line-structured-light emitting source (3) modulates the emitted line structured light so that it is projected clearly onto the chest and abdomen target area (6); finally, the optical parameters of the binocular camera (1) are adjusted and, once the camera has adapted to the surrounding environment and reached a shooting state in which ambient light and line structured light can be clearly distinguished, the binocular camera (1) shoots to obtain an original scanning image;
step 4, converting the two-dimensional line-structured-light information in the line-structured-light extraction images into three dimensions, completing depth reconstruction based on stereoscopic vision: firstly, self-calibration of the binocular camera (1) is completed, and the optimal stereoscopic vision state is obtained adaptively; then the pixel coordinate values of the same two-dimensional point of the line-structured-light information in the left-eye image and the right-eye image are acquired and a mapping relation is established, and all two-dimensional points of the line-structured-light information are mapped one to one to obtain a plurality of two-dimensional matching point pairs; finally, all two-dimensional matching point pairs are converted into three dimensions, obtaining the three-dimensional point corresponding to each two-dimensional matching point pair; the three-dimensional coordinates of these three-dimensional points in the world coordinate system are acquired to obtain three-dimensional coordinate information, completing the depth reconstruction based on stereoscopic vision;
step 6, registering the physical surface point cloud (7) of the chest and abdomen target area (6) with the three-dimensional chest and abdomen medical image (8): firstly, enumerating all spatial pose states that the three-dimensional chest and abdomen medical image (8) may occupy in the computer medical image coordinate system, and recording the initial transformation matrix of each spatial pose state relative to the initial pose; calculating the centroid coordinates of the physical surface point cloud (7) and the centroid coordinates of the three-dimensional chest and abdomen medical image (8) in each spatial pose state; translating the three-dimensional chest and abdomen medical image (8) in every spatial pose state to the position at which its centroid is aligned with the centroid of the physical surface point cloud (7), and recording all of the centroid transformation matrices; traversing all spatial pose states after translation and performing ICP registration to obtain all of the ICP transformation matrices; calculating the ICP registration error RMSEi after ICP registration in each spatial pose state; comparing all ICP registration errors RMSEi to obtain the minimum ICP registration error RMSEk and the spatial pose state corresponding to that minimum error; finally, calculating the high-iteration transformation matrix in the spatial pose state corresponding to the minimum ICP registration error RMSEk, obtaining the registration result.
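For the depth reconstruction described in the step-4 claim above, the two-dimensional matching point pairs can be turned into three-dimensional points by standard rectified-stereo triangulation. The sketch below assumes a rectified binocular pair with focal length f (pixels), baseline B (metres) and principal point (cx, cy); these are illustrative parameters of the standard pinhole stereo model, not values from the patent:

```python
import numpy as np

def triangulate(matches, f, B, cx, cy):
    """matches: rows (uL, uR, v) of matching pixel pairs in a rectified stereo pair.
    Returns the corresponding 3D points (X, Y, Z) in the left-camera frame."""
    uL, uR, v = matches[:, 0], matches[:, 1], matches[:, 2]
    d = uL - uR    # disparity in pixels (positive for points in front of the rig)
    Z = f * B / d  # depth from disparity
    X = (uL - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)
```

For example, with f = 800 px, B = 0.06 m and principal point (320, 320), the matching pair (uL, uR, v) = (400, 352, 480) triangulates to the point (0.1, 0.2, 1.0) m.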
Priority Applications (1)

Application Number: CN202310975574.XA · Priority/Filing Date: 2023-08-04 · Title: Line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture
Publications (2)

Publication Number · Publication Date
CN116883471A · 2023-10-13
CN116883471B · 2024-03-15

Family ID: 88264466






Legal Events

Code · Title
PB01 · Publication
SE01 · Entry into force of request for substantive examination
GR01 · Patent grant
