CN112614169A - 2D/3D spine CT (computed tomography) level registration method based on deep learning network


Info

Publication number
CN112614169A
Authority
CN
China
Prior art keywords
image
registration
layer
deep learning
learning network
Prior art date
2020-12-24
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011547646.3A
Other languages
Chinese (zh)
Other versions
CN112614169B (en)
Inventor
杨波
颜立祥
郑文锋
刘珊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2020-12-24
Publication date
2021-04-06
2020-12-24 Application filed by University of Electronic Science and Technology of China
2020-12-24 Priority to CN202011547646.3A
2021-04-06 Publication of CN112614169A
2022-03-25 Application granted
2022-03-25 Publication of CN112614169B
Status: Active
Anticipated expiration

Abstract

The invention discloses a 2D/3D spine CT level registration method based on a deep learning network, which mainly comprises two steps: coarse registration and fine registration. First, deformations of the 3D CT sequence are generated and projected through an X-ray imaging calculation model to produce DRR images, from which images are randomly selected to train the deep learning network. Then the preoperative 3D image to be registered is deformed and projected through the X-ray imaging model to generate a DRR, and the DRR and the intraoperative 2D reference image are input into the deep learning network to obtain the coarse registration parameters. Finally, based on the coarse registration parameters, fine registration of the individual vertebrae in the preoperative 3D image to be registered is completed through the Adam parameter optimization algorithm, realizing CT level registration of the spine.

Description

2D/3D spine CT (computed tomography) level registration method based on deep learning network
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a 2D/3D spine CT level registration method based on a deep learning network.
Background
The 2D/3D image registration technology is a key technology in image-guided surgery. Multiple images from different imaging devices, imaging times and imaging targets are placed, after a certain spatial transformation, in the same reference frame, so that image pixels of the same anatomical structure are brought into correspondence. This enables accurate tracking and correction of the relative position between the surgical instrument and the patient's lesion, completing the image-guided operation. The key of the operation is to accurately establish the spatial position relation between the preoperative 3D image to be registered and the intraoperative real 2D X-ray image, i.e., the intraoperative 2D image serves as the reference image for registering the preoperative 3D image.
There are roughly three main categories of medical image registration techniques in use today: grayscale-based methods, feature-based methods, and deep learning-based methods.
A feature-based registration algorithm needs only a small amount of feature information to complete the image registration task and depends little on image gray-scale information; once the feature information is available, the registration process is relatively simple, easy to operate and fast. However, extracting the feature information usually requires manual intervention and is difficult to automate, so the feature extraction itself is time-consuming.
Feature-based registration also ignores a large amount of other valuable information in the image (such as gray-scale and gradient information), resulting in low registration accuracy, poor stability and a low registration success rate.
A gray-scale-based image registration algorithm completes the registration task using pixel gray-scale information that is far more abundant than feature points, so the registration error is small, the precision is high, and the stability and robustness are better.
A registration method based on deep learning directly predicts the 2D/3D registration transformation parameters with a deep regression network, but the preprocessing steps are complex, the network structure is deep, a large amount of data is required, and predicting the transformation parameters directly end to end makes the registration accuracy hard to guarantee.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a 2D/3D spine CT level registration method based on a deep learning network.
In order to achieve the above object, the present invention provides a deep learning network-based 2D/3D spine CT level registration method, which is characterized by comprising the following steps:
(1) acquiring an X-ray image as the intraoperative 2D reference image for the training and registration processes, and acquiring a medical CT sequence as a preoperative 3D image for the training process;
(2) constructing a training image set;
(2.1) inputting the preoperative 3D image into a rigid body transformation model, and randomly varying the six-dimensional rigid body transformation parameters $T = (t_x, t_y, t_z, r_x, r_y, r_z)$ to generate a group of three-dimensional image sequences, which are then input into an X-ray imaging calculation model for projection, thereby generating a DRR image sequence; wherein $t_x$ represents the translation parameter on the X-axis, $t_y$ the translation parameter on the Y-axis, $t_z$ the translation parameter on the Z-axis, $r_x$ the rotation parameter about the X-axis, $r_y$ the rotation parameter about the Y-axis, and $r_z$ the rotation parameter about the Z-axis;
(2.2) combining the DRR image sequence in pairs, one image serving as the reference image and the other as the floating image; the two images form a training sample, thereby composing the training image set;
(3) building a deep learning network model and training;
constructing an 8-layer CNN model as the deep learning network model, then sequentially inputting the reference and floating images of the training image set for model training; when the model converges, it can accurately output the deformation parameters corresponding to the floating image;
(4) carrying out coarse registration by using a deep learning network model;
according to the method of step (2.1), a DRR image is generated from the preoperative 3D image to be registered and used as a floating image; the floating image and the intraoperative 2D reference image are then input together into the trained deep learning network model, which outputs the coarse registration transformation parameters of the preoperative 3D image to be registered;
(5) carrying out precise registration on the single vertebra through an Adam parameter optimization algorithm;
(5.1) carrying out vertebra segmentation on the preoperative 3D image to be registered by using a Grow Cut region growing algorithm, so that each segmented sub-image only comprises one vertebra, and obtaining a plurality of single vertebra images;
(5.2) taking the coarse registration transformation parameters as the initial registration parameters of each single vertebra image; after rigid body transformation of the single vertebra by the initial registration parameters, projecting it through the X-ray imaging calculation model to generate a DRR image of the single vertebra image as a floating image;
(5.3) calculating a DiceLoss value between the floating image and the intraoperative 2D reference image of the single vertebra;
$$DiceLoss = 1 - \frac{2\,|X \cap Y|}{|X| + |Y|}$$

wherein $|X|$ represents the sum of all elements in the pixel matrix X of the floating image, $|Y|$ represents the sum of all elements in the pixel matrix Y of the reference image, and $|X \cap Y|$ represents the sum of all elements of the element-wise product of pixel matrix X and pixel matrix Y;
(5.4) judging whether the Dice Loss value calculated in step (5.3) is smaller than a preset threshold; if so, stopping the iteration, completing the fine registration of the single vertebra image; otherwise, setting the objective function of the Adam parameter optimization algorithm to the Dice Loss and the parameter vector to the current fine registration parameters, then repeating steps (5.2)-(5.4), searching with the Adam parameter optimization algorithm for the optimal fine registration parameters that minimize the Dice Loss value, thereby completing the fine registration of the single vertebra image;
(5.5) judging whether the precise registration of all the single vertebral images is finished, if not, repeating the steps (5.2) - (5.4) until the precise registration of all the single vertebral images is finished; otherwise, entering the step (5.6);
(5.6) carrying out spatial transformation on all the single vertebra images according to their corresponding optimized fine registration parameters, and recombining them according to their positions before segmentation, thereby realizing CT level registration of the spine.
The object of the invention is achieved as follows:
the invention relates to a 2D/3D spine CT level registration method based on a deep learning network, which mainly comprises two steps of coarse registration and fine registration; firstly, generating deformation of a 3D CT sequence, generating a DRR image through projection of an X-ray imaging calculation model, and then randomly selecting the DRR image to train a deep learning network; then deforming the 3D image to be registered before the operation, generating a DRR through projection of an X-ray imaging model, and inputting the DRR and the 2D reference image in the operation into a depth learning network to obtain a coarse registration parameter; and finally, based on the coarse registration parameters, finishing the precise registration of a plurality of vertebrae in the preoperative 3D image to be registered through an Adam parameter optimization algorithm, and realizing the CT level registration of the spine.
Meanwhile, the 2D/3D spine CT level registration method based on the deep learning network also has the following beneficial effects:
(1) The invention adopts a hierarchical registration mode that integrates a deep learning network with a classical parameter optimization method; the combination of the two registration methods achieves higher registration precision, performing not only rigid registration of the spine but also accounting for the deformation between vertebrae.
(2) Compared with the traditional approach of treating the spine as a single rigid body, the cyclic fine registration of multiple vertebrae adopted here is more accurate, because between the intraoperative 2D image and the preoperative 3D image the change of the patient's posture under the imaging device causes fine deformation between the vertebrae; if the spine is treated as one rigid body, the registration result is inevitably coarse, and the fine registration of the individual vertebrae solves this problem.
(3) Conventional registration methods do not mention segmenting the spine into multiple vertebrae before fine registration. The purpose of this segmentation, namely dividing the 3D spine into single vertebrae, is to improve the efficiency of the registration by performing the fine registration on the vertebrae individually.
Drawings
FIG. 1 is a flowchart of a deep learning network-based 2D/3D spine CT level registration method of the present invention;
FIG. 2 is an X-ray imaging computational model;
FIG. 3 is a diagram of the deep learning convolutional network architecture.
Detailed Description
The following description of the embodiments of the present invention is provided with reference to the accompanying drawings, so that those skilled in the art can better understand the present invention. It should be expressly noted that, in the following description, detailed descriptions of known functions and designs are omitted where they might obscure the subject matter of the present invention.
Examples
For convenience of description, the related terms appearing in the detailed description are explained:
GPU (Graphics Processing Unit): graphics processor;
DRR (Digitally Reconstructed Radiograph): digitally reconstructed radiographic image;
Adam parameter optimization algorithm: Adaptive Moment Estimation.
FIG. 1 is a flow chart of a deep learning network-based 2D/3D spine CT level registration method of the invention.
In this embodiment, as shown in fig. 1, the 2D/3D spine CT level registration method based on a deep learning network of the present invention includes the following steps:
S1, acquiring an X-ray image as the intraoperative 2D reference image for the training and registration processes, and acquiring a medical CT sequence as a preoperative 3D image for the training process;
S2, constructing a training image set;
S2.1, inputting the preoperative 3D image into a rigid body transformation model, and randomly varying the six-dimensional rigid body transformation parameters $T = (t_x, t_y, t_z, r_x, r_y, r_z)$ to generate a group of three-dimensional image sequences, which are then input into the X-ray imaging calculation model for projection, thereby generating a DRR image sequence; wherein $t_x$ represents the translation parameter on the X-axis, $t_y$ the translation parameter on the Y-axis, $t_z$ the translation parameter on the Z-axis, $r_x$ the rotation parameter about the X-axis, $r_y$ the rotation parameter about the Y-axis, and $r_z$ the rotation parameter about the Z-axis. In this embodiment, the rotation matrices about the X-axis, Y-axis and Z-axis can be expressed respectively as:
$$R_x(r_x) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos r_x & -\sin r_x \\ 0 & \sin r_x & \cos r_x \end{bmatrix}$$

$$R_y(r_y) = \begin{bmatrix} \cos r_y & 0 & \sin r_y \\ 0 & 1 & 0 \\ -\sin r_y & 0 & \cos r_y \end{bmatrix}$$

$$R_z(r_z) = \begin{bmatrix} \cos r_z & -\sin r_z & 0 \\ \sin r_z & \cos r_z & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

The translation vector is represented as $T_l = (t_x, t_y, t_z)^T$.
If the image is first rotated about the X-axis, Y-axis and Z-axis in sequence, and then translated, the pixel coordinates before and after the rigid body transformation satisfy:

$$(x', y', z')^T = R_z(r_z)\, R_y(r_y)\, R_x(r_x)\, (x, y, z)^T + T_l$$

wherein $(x, y, z)^T$ represents the spatial coordinate of a pixel point in the floating image, and $(x', y', z')^T$ represents the coordinate of the pixel point after the rigid body transformation;
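As an illustration, a minimal NumPy sketch of this rigid body transformation is given below; the function name, the row-vector convention and the radian units are choices of this sketch rather than details fixed by the patent.

```python
import numpy as np

def rigid_transform(points, t, r):
    """Apply the six-parameter rigid transform T = (tx,ty,tz,rx,ry,rz) to an
    (N, 3) array of voxel coordinates: rotate about the X-axis, then the
    Y-axis, then the Z-axis, and finally translate. Angles in radians."""
    rx, ry, rz = r
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx                       # X first, then Y, then Z
    return points @ R.T + np.asarray(t)    # p' = R p + T_l, row-vector form
```

Sampling $T$ randomly within plausible ranges and rendering each transformed volume yields the DRR training sequence of step S2.1.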
in this embodiment, as shown in fig. 2, the X-Ray imaging calculation model may be implemented by using a Ray-Casting algorithm based on a GPU, and the model specifically includes:
$$I = I_0 \exp\left(-\sum_i \mu_i d_i\right)$$

wherein $I$ represents the energy of the X-ray after attenuation, $I_0$ represents the initial energy of the X-ray, $\mu_i$ represents the linear attenuation coefficient of the tissue in the $i$-th voxel, and $d_i$ represents the distance the ray travels in the $i$-th voxel;
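For intuition, the sketch below evaluates this attenuation model with a parallel-beam, axis-aligned projection in NumPy. The embodiment instead casts perspective rays from the X-ray source with a GPU Ray-Casting algorithm, so this is an illustrative simplification, not the patented renderer.

```python
import numpy as np

def drr_parallel(volume_mu, voxel_size_z, i0=1.0):
    """Minimal parallel-beam DRR: integrate attenuation along the Z axis.

    volume_mu:    (Z, H, W) array of linear attenuation coefficients mu_i
    voxel_size_z: path length d_i of the ray inside each voxel (constant
                  here because the rays are axis-aligned)
    Implements the Beer-Lambert model I = I0 * exp(-sum_i mu_i * d_i)."""
    line_integral = volume_mu.sum(axis=0) * voxel_size_z  # sum_i mu_i d_i
    return i0 * np.exp(-line_integral)                    # attenuated energy
```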
s2.2, combining the DRR image sequences in pairs, wherein one image is used as a reference image, the other image is used as a floating image, and the two images form a training sample to form a training image set;
S3, building the deep learning network model and training it; as shown in fig. 3, an 8-layer CNN model is built as the deep learning network model:
the first layer is the input layer, which inputs the floating image and the reference image;
the second layer is the first convolutional layer, with a convolution kernel size of 5*5*20, no padding and a stride of 1; the output matrix size of this layer is 152*296*20;
the third layer is the first pooling layer, using max pooling with a 2*2 pooling window and a stride of 2; the output matrix of this layer is 76*148*20;
the fourth layer is the second convolutional layer, with a convolution kernel size of 5*5*20, no padding and a stride of 1; the output matrix size of this layer is 72*144*20;
the fifth layer is the second pooling layer, using max pooling with a 2*2 pooling window and a stride of 2; the output matrix of this layer is 36*72*20;
the sixth layer is the first fully connected layer, with 250 ReLU activation function units and 250 output nodes;
the seventh layer is the second fully connected layer, with 6 ReLU activation function units and 6 output nodes;
the eighth layer is the output layer, which outputs the 6 parameters, namely $(t_x, t_y, t_z, r_x, r_y, r_z)$;
The floating images and reference images of the training image set are input in turn to the deep learning network model for training. During training, the reference image is subtracted from the floating image to obtain a residual image; the network continuously extracts high-order feature information from this residual image, seeking the deformation rule from the floating image to the reference image, and outputs the 6 deformation parameters accurately. Training uses the TensorFlow framework and is accelerated with a high-performance GPU and CUDA (Compute Unified Device Architecture); the specific training process is similar to that of a general deep learning network and is not repeated here.
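A minimal Keras sketch of this architecture is given below, with several stated assumptions: the 156*300 input size is inferred from the 152*296*20 output of the first valid 5*5 convolution; the convolutional activations and the MSE regression loss are not specified in the patent; and the final 6-node layer is left linear, since the ReLU units listed there would forbid negative translations and rotations.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_registration_cnn(h=156, w=300):
    """8-layer registration CNN: two inputs, their residual, two
    conv/max-pool stages, and two dense layers regressing 6 parameters."""
    floating = tf.keras.Input(shape=(h, w, 1), name="floating_drr")
    reference = tf.keras.Input(shape=(h, w, 1), name="reference")
    residual = layers.Subtract()([floating, reference])   # floating - reference
    x = layers.Conv2D(20, 5, padding="valid", activation="relu")(residual)  # 152x296x20
    x = layers.MaxPooling2D(2, strides=2)(x)              # 76x148x20
    x = layers.Conv2D(20, 5, padding="valid", activation="relu")(x)         # 72x144x20
    x = layers.MaxPooling2D(2, strides=2)(x)              # 36x72x20
    x = layers.Flatten()(x)
    x = layers.Dense(250, activation="relu")(x)           # 250 ReLU units
    out = layers.Dense(6, name="params")(x)               # (tx,ty,tz,rx,ry,rz)
    return tf.keras.Model([floating, reference], out)

model = build_registration_cnn()
model.compile(optimizer="adam", loss="mse")  # assumed regression loss
```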
S4, carrying out coarse registration by using a deep learning network model;
according to the method of step S2.1, a DRR image is generated from the preoperative 3D image to be registered and used as a floating image; the floating image and the intraoperative 2D reference image are then input together into the trained deep learning network model, which outputs the coarse registration transformation parameters of the preoperative 3D image to be registered;
S5, performing fine registration of each single vertebra through the Adam parameter optimization algorithm;
S5.1, performing vertebra segmentation on the preoperative 3D image to be registered by using the Grow Cut region growing algorithm, so that each segmented sub-image contains only one vertebra, thereby obtaining a plurality of single vertebra images;
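Grow Cut is a cellular-automaton segmentation: each voxel carries a label and a strength θ, and a neighbor q attacks voxel p with force g(|C_p − C_q|)·θ_q, where g(x) = 1 − x/C_max, taking over p whenever that force exceeds θ_p. The 2D NumPy sketch below illustrates one such evolution from user-placed seeds; the embodiment applies the algorithm to the 3D CT volume (two extra Z-axis neighbor shifts), and per-vertebra seed placement is omitted here as an assumption.

```python
import numpy as np

def grow_cut(image, seed_labels, n_iter=200):
    """Minimal 2D Grow Cut: seed_labels holds 0 for unlabeled pixels and
    k > 0 for seeds (e.g. one label per vertebra plus background)."""
    image = np.asarray(image, dtype=np.float64)
    labels = seed_labels.copy()
    strength = (seed_labels > 0).astype(np.float64)   # seeds start at 1.0
    c_max = float(image.max() - image.min()) or 1.0
    for _ in range(n_iter):
        changed = False
        for dy, dx in [(1, 0), (-1, 0), (0, 1), (0, -1)]:   # von Neumann
            nb_img = np.roll(image, (dy, dx), axis=(0, 1))
            nb_lab = np.roll(labels, (dy, dx), axis=(0, 1))
            nb_str = np.roll(strength, (dy, dx), axis=(0, 1))
            # attack force g(|C_p - C_q|) * theta_q, with g(x) = 1 - x / c_max
            attack = (1.0 - np.abs(image - nb_img) / c_max) * nb_str
            valid = np.ones(image.shape, dtype=bool)  # mask np.roll wrap-around
            if dy == 1:
                valid[0, :] = False
            elif dy == -1:
                valid[-1, :] = False
            if dx == 1:
                valid[:, 0] = False
            elif dx == -1:
                valid[:, -1] = False
            win = (attack > strength) & valid
            if win.any():
                labels[win] = nb_lab[win]
                strength[win] = attack[win]
                changed = True
        if not changed:
            break
    return labels
```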
S5.2, taking the coarse registration transformation parameters as the initial registration parameters of each single vertebra image; after rigid body transformation of the single vertebra image by the initial registration parameters, projecting it through the X-ray imaging calculation model to generate a DRR image of the single vertebra image as a floating image;
S5.3, calculating the Dice Loss value between the floating image of the single vertebra and the intraoperative 2D reference image:
$$DiceLoss = 1 - \frac{2\,|X \cap Y|}{|X| + |Y|}$$

wherein $|X|$ represents the sum of all elements in the pixel matrix X of the floating image, $|Y|$ represents the sum of all elements in the pixel matrix Y of the reference image, and $|X \cap Y|$ represents the sum of all elements of the element-wise product of pixel matrix X and pixel matrix Y;
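A direct NumPy reading of this loss might be written as follows; the small eps guard against empty masks is an addition of this sketch, not part of the patent's formula.

```python
import numpy as np

def dice_loss(x, y, eps=1e-8):
    """DiceLoss = 1 - 2|X ∩ Y| / (|X| + |Y|), where the pixel matrices of
    the floating image X and the reference image Y are treated as masks:
    |X ∩ Y| is the sum of their element-wise product."""
    inter = np.sum(x * y)
    return 1.0 - 2.0 * inter / (np.sum(x) + np.sum(y) + eps)
```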
S5.4, judging whether the Dice Loss value calculated in step S5.3 is smaller than a preset threshold; if so, stopping the iteration, completing the fine registration of the single vertebra image; otherwise, setting the objective function of the Adam parameter optimization algorithm to the Dice Loss and the parameter vector to the current fine registration parameters, then repeating steps S5.2-S5.4, searching with the Adam parameter optimization algorithm for the optimal fine registration parameters that minimize the Dice Loss value, thereby completing the fine registration of the single vertebra image;
S5.5, judging whether the fine registration of all the single vertebra images has been completed; if not, repeating steps S5.2-S5.4 until it has; otherwise, proceeding to step S5.6;
S5.6, carrying out spatial transformation on all the single vertebra images according to their corresponding optimized fine registration parameters, and recombining them according to their positions before segmentation, thereby realizing CT level registration of the spine.
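To make steps S5.2-S5.4 concrete, the sketch below wraps the fine registration of one vertebra around a hand-written Adam update. Since the DRR projection has no closed-form gradient here, the gradient of the Dice Loss with respect to the six parameters is estimated by central finite differences; this choice, the helper name loss_fn, and all numeric constants are assumptions of the sketch rather than details fixed by the patent.

```python
import numpy as np

def adam_fine_registration(params0, loss_fn, threshold=0.05, lr=1e-2,
                           beta1=0.9, beta2=0.999, eps=1e-8,
                           max_iter=500, h=1e-3):
    """Minimize loss_fn(p) = DiceLoss(DRR(rigid_transform(vertebra, p)), ref)
    over the six registration parameters p, starting from the coarse result
    params0, stopping once the loss drops below the preset threshold."""
    p = np.asarray(params0, dtype=np.float64)
    m = np.zeros_like(p)
    v = np.zeros_like(p)
    for t in range(1, max_iter + 1):
        if loss_fn(p) < threshold:        # step S5.4 stopping criterion
            break
        g = np.array([(loss_fn(p + h * e) - loss_fn(p - h * e)) / (2 * h)
                      for e in np.eye(len(p))])   # central differences
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)      # bias-corrected first moment
        v_hat = v / (1 - beta2 ** t)      # bias-corrected second moment
        p -= lr * m_hat / (np.sqrt(v_hat) + eps)  # Adam update
    return p
```

Running this loop once per segmented vertebra and then recombining the transformed vertebrae as in step S5.6 completes the hierarchical registration.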
Although illustrative embodiments of the present invention have been described above to facilitate understanding of the present invention by those skilled in the art, it should be understood that the present invention is not limited to the scope of these embodiments. To those skilled in the art, various changes are permissible as long as they remain within the spirit and scope of the present invention as defined and determined by the appended claims; all inventive creations utilizing the inventive concept are protected.

Claims (2)

Translated from Chinese
1. A 2D/3D spine CT level registration method based on a deep learning network, characterized by comprising the following steps:
(1) acquiring an X-ray image as the intraoperative 2D reference image for the training and registration processes, and acquiring a medical CT sequence as the preoperative 3D image for the training process;
(2) constructing a training image set;
(2.1) inputting the preoperative 3D image into a rigid body transformation model, and randomly varying the six-dimensional rigid body transformation parameters $T = (t_x, t_y, t_z, r_x, r_y, r_z)$ to generate a group of three-dimensional image sequences, which are then input into the X-ray imaging calculation model for projection, thereby generating a DRR image sequence; wherein $t_x$ represents the translation parameter on the X-axis, $t_y$ the translation parameter on the Y-axis, $t_z$ the translation parameter on the Z-axis, $r_x$ the rotation parameter about the X-axis, $r_y$ the rotation parameter about the Y-axis, and $r_z$ the rotation parameter about the Z-axis;
(2.2) combining the DRR image sequence in pairs, one image serving as the reference image and the other as the floating image; the two images constitute a training sample, thereby forming the training image set;
(3) building a deep learning network model and training it;
building an 8-layer CNN model as the deep learning network model, then inputting the reference and floating images of the training image set in turn for model training; when the model converges, it can accurately output the deformation parameters corresponding to the floating image;
(4) performing coarse registration with the deep learning network model;
according to the method of step (2.1), generating a DRR image from the preoperative 3D image to be registered as a floating image, then inputting the floating image together with the intraoperative 2D reference image into the trained deep learning network model, thereby outputting the coarse registration transformation parameters of the preoperative 3D image to be registered;
(5) performing fine registration of each single vertebra through the Adam parameter optimization algorithm;
(5.1) performing vertebra segmentation on the preoperative 3D image to be registered with the Grow Cut region growing algorithm, so that each segmented sub-image contains only one vertebra, thereby obtaining a plurality of single vertebra images;
(5.2) taking the coarse registration transformation parameters as the initial registration parameters of each single vertebra image; after rigid body transformation of the single vertebra by the initial registration parameters, projecting it through the X-ray imaging calculation model to generate a DRR image of the single vertebra image as a floating image;
(5.3) calculating the Dice Loss value between the floating image of the single vertebra and the intraoperative 2D reference image:

$$DiceLoss = 1 - \frac{2\,|X \cap Y|}{|X| + |Y|}$$

wherein $|X|$ represents the sum of all elements in the pixel matrix X of the floating image, $|Y|$ represents the sum of all elements in the pixel matrix Y of the reference image, and $|X \cap Y|$ represents the sum of all elements of the element-wise product of pixel matrix X and pixel matrix Y;
(5.4) judging whether the Dice Loss value calculated in step (5.3) is smaller than a preset threshold; if so, stopping the iteration, completing the fine registration of the single vertebra image; otherwise, setting the objective function of the Adam parameter optimization algorithm to the Dice Loss and the parameter vector to the current fine registration parameters, then repeating steps (5.2)-(5.4), searching with the Adam parameter optimization algorithm for the optimal fine registration parameters that minimize the Dice Loss value, thereby completing the fine registration of the single vertebra image;
(5.5) judging whether the fine registration of all single vertebra images has been completed; if not, repeating steps (5.2)-(5.4) until it has; otherwise, proceeding to step (5.6);
(5.6) spatially transforming all the single vertebra images according to their corresponding optimized fine registration parameters, and recombining them according to their positions before segmentation, thereby realizing CT level registration of the spine.

2. The 2D/3D spine CT level registration method based on a deep learning network according to claim 1, characterized in that the specific structure of the deep learning network model is:
the first layer is the input layer, which inputs the floating image and the reference image;
the second layer is the first convolutional layer, with a convolution kernel size of 5*5*20, no padding and a stride of 1; the output matrix size of this layer is 152*296*20;
the third layer is the first pooling layer, using max pooling with a 2*2 pooling window and a stride of 2; the output matrix of this layer is 76*148*20;
the fourth layer is the second convolutional layer, with a convolution kernel size of 5*5*20, no padding and a stride of 1; the output matrix size of this layer is 72*144*20;
the fifth layer is the second pooling layer, using max pooling with a 2*2 pooling window and a stride of 2; the output matrix of this layer is 36*72*20;
the sixth layer is the first fully connected layer, with 250 ReLU activation function units and 250 output nodes;
the seventh layer is the second fully connected layer, with 6 ReLU activation function units and 6 output nodes;
the eighth layer is the output layer, which outputs the 6 parameters, namely $(t_x, t_y, t_z, r_x, r_y, r_z)$.