
A multi-dimensional interaction method, system and storage medium of a laparoscopic surgical robot

Info

Publication number
CN120078518A
Authority
CN
China
Prior art keywords
information
robot
standard
force
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202510571831.2A
Other languages
Chinese (zh)
Other versions
CN120078518B (en)
Inventor
曹琪
王植炜
高永卓
梁云雷
史健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Union Hospital Tongji Medical College Huazhong University of Science and Technology
Original Assignee
Union Hospital Tongji Medical College Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Union Hospital, Tongji Medical College, Huazhong University of Science and Technology
Priority to CN202510571831.2A
Publication of CN120078518A
Application granted
Publication of CN120078518B
Legal status: Active
Anticipated expiration


Abstract

The invention relates to the technical field of robot control, and in particular to a multi-dimensional interaction method, system, and storage medium for a laparoscopic surgical robot. In a standard laparoscopic simulated-surgery flow, robot standard motion information is marked in the visual information recorded by a camera at each flow node, and instrument standard force information is marked in the force-sense information recorded by a force sensor at each flow node. A dual-branch neural network is trained on the resulting data set to construct a multi-dimensional interaction model that controls the laparoscopic surgical robot through the interaction of visual and force-sense information, and the robot is controlled interactively in real time using the real-time robot motion information and real-time instrument force information predicted by the model. The invention enables the model to analyze the robot's control parameters automatically, improves the objectivity and efficiency of control analysis, and ultimately achieves standardized control of the surgical procedure, ensuring the surgical outcome.

Description

Multidimensional interaction method, system and storage medium of laparoscopic surgery robot
Technical Field
The invention relates to the technical field of surgical robot control, in particular to a multidimensional interaction method, a multidimensional interaction system and a multidimensional interaction storage medium for a laparoscopic surgical robot.
Background
Robotic laparoscopic surgery is laparoscopic surgery assisted by robotic technology. The system mainly comprises a console and operating arms. The operator sits at the console and, via the monitor's touch screen, adjusts operation controls such as the robot joint angles, position coordinates, the amplitude of instrument motion, the opening angle, and whether the instrument locks after closing.
In the prior art, the operator controls the laparoscopic surgical robot by judging and analyzing the visual information fed back by the monitor. The information available for control analysis is therefore confined to a single dimension, which limits analysis performance: the optimal control state cannot be guaranteed, the surgical procedure is affected, and ultimately the surgical outcome suffers.
Disclosure of Invention
The invention aims to provide a multi-dimensional interaction method for a laparoscopic surgical robot, to solve the technical problem in the prior art that the limited information content of a single dimension restricts control analysis and prevents optimal control.
In order to solve the technical problems, the invention specifically provides the following technical scheme:
A multi-dimensional interaction method for a laparoscopic surgical robot, comprising the following steps:
in a laparoscopic simulated-surgery standard flow, marking robot standard motion information in the visual information recorded by a camera at each flow node, and marking instrument standard force information in the force-sense information recorded by a force sensor at each flow node;
combining the visual information, force-sense information, robot standard motion information and instrument standard force information at each flow node into a data set, and training a dual-branch neural network on the data set to construct a multi-dimensional interaction model for controlling the laparoscopic surgical robot through visual and force-sense information interaction;
performing real-time interactive control of the laparoscopic surgical robot using the real-time robot motion information and real-time instrument force information predicted by the multi-dimensional interaction model.
As a preferred mode of the invention, the motion information comprises robot joint angles and position coordinates, and the force information comprises the depth, amplitude, speed and force with which the surgical instruments loaded on the robot pull and cut tissue.
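For illustration only, the labelled sample collected at each flow node could be represented as sketched below; the field names and array types are assumptions made for exposition, not identifiers from the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FlowNodeSample:
    """One training sample collected at a single flow node t."""
    visual: np.ndarray       # V_t: camera frame recorded at the node
    force_sense: np.ndarray  # H_t: force-sensor reading at the node
    std_motion: np.ndarray   # M_t: labelled robot joint angles + position coordinates
    std_force: np.ndarray    # F_t: labelled pull/cut depth, amplitude, speed, force
```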
As a preferred scheme of the invention, the method for constructing the multi-dimensional interaction model comprises:
using the visual information as the input of the first branch neural network and the instrument standard force information as its output;
using the force-sense information as the input of the second branch neural network and the robot standard motion information as its output;
training the first branch neural network and the second branch neural network with the prediction loss and the reconstruction loss to obtain the multi-dimensional interaction model.
The multi-dimensional interaction model is:

$\hat{F}_t = \mathrm{CNN}_1(V_t)$;
$\hat{M}_t = \mathrm{CNN}_2(H_t)$;

where $\hat{F}_t$ is the instrument standard force information output by the first branch neural network, $\hat{M}_t$ is the robot standard motion information output by the second branch neural network, $V_t$ is the visual information, $H_t$ is the force-sense information, $\mathrm{CNN}_1$ is the first branch neural network, and $\mathrm{CNN}_2$ is the second branch neural network.
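A minimal PyTorch sketch of this dual-branch structure follows. The patent only fixes the two mappings (CNN1: visual information to instrument standard force, CNN2: force-sense information to robot standard motion); every layer size, input shape, and output dimension below is an assumption.

```python
import torch
import torch.nn as nn

class DualBranchModel(nn.Module):
    """F_hat = CNN1(V), M_hat = CNN2(H): two independent branches."""
    def __init__(self, force_dim=4, motion_dim=6, sense_channels=4):
        super().__init__()
        # CNN1: visual information (B, 3, H, W) -> instrument standard force F_hat
        self.cnn1 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, force_dim),
        )
        # CNN2: force-sense information (B, C, T) -> robot standard motion M_hat
        self.cnn2 = nn.Sequential(
            nn.Conv1d(sense_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, motion_dim),
        )

    def forward(self, visual, force_sense):
        return self.cnn1(visual), self.cnn2(force_sense)
```

Because the two branches share no weights, they could in principle be trained and deployed independently; the losses defined below are what couple them.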
As a preferred embodiment of the present invention, the prediction loss is:
$L_{\mathrm{pre}} = \sum_{t=1}^{N} \left( \left\| \hat{F}_t - F_t \right\|_2 + \left\| \hat{M}_t - M_t \right\|_2 \right)$

where $L_{\mathrm{pre}}$ is the prediction loss, $\hat{F}_t$ is the instrument standard force information at the t-th flow node output by the first branch neural network, $F_t$ is the true value of the instrument standard force information at the t-th flow node in the data set, $\hat{M}_t$ is the robot standard motion information at the t-th flow node output by the second branch neural network, $M_t$ is the true value of the robot standard motion information at the t-th flow node in the data set, $N$ is the total number of flow nodes in the laparoscopic surgery standard flow, and $\|\cdot\|_2$ denotes the L2 norm.
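Assuming batched tensors whose first dimension indexes the flow nodes t = 1…N, the prediction loss can be written directly; the function name is hypothetical.

```python
def prediction_loss(F_hat, F_true, M_hat, M_true):
    """Sum over flow nodes of ||F_hat_t - F_t||_2 + ||M_hat_t - M_t||_2."""
    return (torch.linalg.vector_norm(F_hat - F_true, ord=2, dim=1).sum()
            + torch.linalg.vector_norm(M_hat - M_true, ord=2, dim=1).sum())
```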
As a preferred embodiment of the present invention, the reconstruction loss is:
$L_{\mathrm{rec}} = \sum_{t=1}^{N} \left( \left\| \tilde{H}_t - H_t \right\|_2 + \left\| \tilde{V}_t - V_t \right\|_2 \right)$

where $L_{\mathrm{rec}}$ is the reconstruction loss, $H_t$ is the force-sense information at the t-th flow node in the data set, $V_t$ is the visual information at the t-th flow node in the data set, $\tilde{H}_t$ is the force-sense information obtained by converting $\hat{F}_t$, $\tilde{V}_t$ is the visual information obtained by converting $\hat{M}_t$, exchange is the conversion network, $N$ is the total number of flow nodes in the laparoscopic surgery standard flow, and $\|\cdot\|_2$ denotes the L2 norm.

The conversion network exchange between force-sense information and visual information is:

$\tilde{H}_t = \mathrm{CNN}_3(\hat{F}_t)$;
$\tilde{V}_t = \mathrm{CNN}_4(\hat{M}_t)$;

where $\tilde{H}_t$ is the force-sense information at the t-th flow node output by the force-sense conversion network, $\hat{F}_t$ is the instrument standard force information at the t-th flow node output by the first branch neural network, $\tilde{V}_t$ is the visual information at the t-th flow node output by the visual conversion network, $\hat{M}_t$ is the robot standard motion information at the t-th flow node output by the second branch neural network, and $\mathrm{CNN}_3$ and $\mathrm{CNN}_4$ are both convolutional neural networks.
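A sketch of the conversion step under the same assumptions is given below. The patent names CNN3 and CNN4 as convolutional neural networks but gives no architecture, so small fully connected stacks stand in as placeholders; `sense_dim` and `visual_dim` are assumed to equal the flattened sizes of the original force-sense and visual inputs.

```python
class ConversionNetworks(nn.Module):
    """CNN3: predicted instrument force -> force-sense space;
    CNN4: predicted robot motion -> (flattened) visual space."""
    def __init__(self, force_dim=4, motion_dim=6, sense_dim=64, visual_dim=3 * 64 * 64):
        super().__init__()
        self.cnn3 = nn.Sequential(nn.Linear(force_dim, 64), nn.ReLU(),
                                  nn.Linear(64, sense_dim))
        self.cnn4 = nn.Sequential(nn.Linear(motion_dim, 256), nn.ReLU(),
                                  nn.Linear(256, visual_dim))

    def forward(self, F_hat, M_hat):
        return self.cnn3(F_hat), self.cnn4(M_hat)

def reconstruction_loss(H_rec, H_true, V_rec, V_true):
    """Sum over flow nodes of ||H_rec_t - H_t||_2 + ||V_rec_t - V_t||_2;
    all arguments are assumed pre-flattened to shape (batch, features)."""
    return (torch.linalg.vector_norm(H_rec - H_true, ord=2, dim=1).sum()
            + torch.linalg.vector_norm(V_rec - V_true, ord=2, dim=1).sum())
```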
As a preferred scheme of the invention, the method for real-time interactive control of the laparoscopic surgical robot using the real-time robot motion information and real-time instrument force information predicted by the multi-dimensional interaction model comprises the following steps (a control-loop sketch follows the list):
inputting the real-time visual information fed back by the camera and the real-time force-sense information fed back by the force sensor into the multi-dimensional interaction model, with the first branch neural network of the multi-dimensional interaction model outputting the real-time instrument force information and the second branch neural network outputting the real-time robot motion information;
controlling the laparoscopic surgical robot to perform the surgical operation according to the real-time robot motion information and the real-time instrument force information.
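One plausible real-time control cycle is sketched below; `camera`, `force_sensor`, and `robot` are hypothetical device handles, and the branch-to-output assignment follows the training-time definition given earlier (first branch: instrument force, second branch: robot motion).

```python
def realtime_control_step(model, camera, force_sensor, robot):
    """One interactive control cycle of the trained model."""
    visual = camera.read()        # real-time visual information, tensor (3, H, W)
    sense = force_sensor.read()   # real-time force-sense information, tensor (C, T)
    with torch.no_grad():
        force_cmd, motion_cmd = model(visual.unsqueeze(0), sense.unsqueeze(0))
    robot.apply_motion(motion_cmd.squeeze(0))        # joint angles, position coordinates
    robot.apply_force_profile(force_cmd.squeeze(0))  # pull/cut depth, amplitude, speed, force
```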
As a preferred embodiment of the present invention, the loss function of the conversion network exchange of the force sense information and the visual information is:
$L_{\mathrm{exc}} = \sum_{t=1}^{N} \left( \left\| \tilde{H}_t - H_t \right\|_2 + \left\| \tilde{V}_t - V_t \right\|_2 \right)$

where $L_{\mathrm{exc}}$ is the prediction loss of the conversion network, $\tilde{H}_t$ is the force-sense information at the t-th flow node output by the force-sense conversion network, $\tilde{V}_t$ is the visual information at the t-th flow node output by the visual conversion network, $H_t$ is the force-sense information at the t-th flow node in the data set, $V_t$ is the visual information at the t-th flow node in the data set, $N$ is the total number of flow nodes in the laparoscopic surgery standard flow, and $\|\cdot\|_2$ denotes the L2 norm.
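Combining the losses, one self-supervised training step might look like the following sketch; the patent does not state how the loss terms are weighted or scheduled, so equal weighting is an assumption, and the visual input is assumed to be resized so that `V.flatten(1)` matches the conversion network's `visual_dim`.

```python
def train_step(model, conv_nets, optimizer, batch):
    """One training step over a batch of flow-node samples (V, H, M, F)."""
    V, H, M_true, F_true = batch
    F_hat, M_hat = model(V, H)               # branch predictions
    H_rec, V_rec = conv_nets(F_hat, M_hat)   # exchange/conversion networks
    loss = (prediction_loss(F_hat, F_true, M_hat, M_true)
            + reconstruction_loss(H_rec, H.flatten(1), V_rec, V.flatten(1)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```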
As a preferred mode of the present invention, the visual information at each flow node is normalized, and the force-sense information at each flow node is normalized.
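The normalization scheme is not specified; per-node min-max scaling is one plausible reading.

```python
import numpy as np

def min_max_normalize(x, eps=1e-8):
    """Scale an array to [0, 1]; eps guards against constant inputs."""
    x = np.asarray(x, dtype=np.float64)
    return (x - x.min()) / (x.max() - x.min() + eps)
```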
As a preferred solution, the invention provides a multi-dimensional interaction system of a laparoscopic surgical robot, applied to the multi-dimensional interaction method of a laparoscopic surgical robot described above; the system comprises the following units (a skeleton sketch follows the unit list):
the data acquisition unit is used for marking robot standard motion information in visual information recorded by a camera at each process node and marking instrument standard force information in force sense information recorded by a force sense sensor at each process node in a laparoscopic simulation operation standard process;
the model building unit is used for combining the visual information, the force sense information, the robot standard motion information and the instrument standard force information at each flow node into a data set, training the double-branch neural network based on the data set, and building a multidimensional interaction model for controlling the laparoscopic surgery robot through visual force sense information interaction;
And the interaction control unit is used for carrying out real-time interaction control on the laparoscopic surgery robot by utilizing the real-time motion information of the robot and the real-time force information of the instrument predicted by the multidimensional interaction model.
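A skeleton showing how the three units could be wired together is sketched below; the class and method names are illustrative only.

```python
class MultiDimensionalInteractionSystem:
    """Three cooperating units: acquisition -> model building -> control."""
    def __init__(self, acquisition_unit, model_building_unit, control_unit):
        self.acquisition_unit = acquisition_unit      # data acquisition unit
        self.model_building_unit = model_building_unit  # model construction unit
        self.control_unit = control_unit              # interaction control unit

    def run(self):
        dataset = self.acquisition_unit.collect()        # mark standard info per flow node
        model = self.model_building_unit.train(dataset)  # train the dual-branch network
        self.control_unit.control_loop(model)           # real-time interactive control
```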
As a preferred aspect of the present invention, there is provided a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the multi-dimensional interaction method of a laparoscopic surgical robot described above.
Compared with the prior art, the invention has the following beneficial effects:
The invention uses visual information and force-sense information to construct a multi-dimensional interaction model for controlling the laparoscopic surgical robot, so that control parameters are analyzed on the basis of multi-dimensional information interaction and control accuracy is improved. The multi-dimensional interaction model analyzes the robot's control parameters automatically, which improves the objectivity and efficiency of control analysis, ultimately achieving standardized control of the surgical procedure and ensuring the surgical outcome.
Drawings
To illustrate the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings in the following description are merely exemplary, and those of ordinary skill in the art can derive other implementations from them without inventive effort.
FIG. 1 is a flow chart of a multi-dimensional interaction method of a laparoscopic surgical robot provided by an embodiment of the present invention;
FIG. 2 is a flow chart of a multi-dimensional interactive system of a laparoscopic surgical robot provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of a multidimensional interaction model according to an embodiment of the present invention.
Detailed Description
The embodiments of the invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art on the basis of these embodiments without inventive effort fall within the scope of the invention.
As shown in FIG. 1, the present invention provides a multi-dimensional interaction method for a laparoscopic surgical robot, comprising the following steps:
in a laparoscopic simulated-surgery standard flow, marking robot standard motion information in the visual information recorded by a camera at each flow node, and marking instrument standard force information in the force-sense information recorded by a force sensor at each flow node;
combining the visual information, force-sense information, robot standard motion information and instrument standard force information at each flow node into a data set, and training a dual-branch neural network on the data set to construct a multi-dimensional interaction model for controlling the laparoscopic surgical robot through visual and force-sense information interaction;
performing real-time interactive control of the laparoscopic surgical robot using the real-time robot motion information and real-time instrument force information predicted by the multi-dimensional interaction model.
When the control parameters of the laparoscopic surgical robot are analyzed, both visual and force-sense information are used. This realizes multi-dimensional control analysis, ensures the interactive unity of the multi-dimensional information, yields more accurate control parameters, and allows the robot to be controlled more precisely during surgery.
Collecting the visual information and force-sense information into a data set within the standard flow of the laparoscopic simulated surgery ensures the interactive unity of the two modalities. The robot motion information and instrument force information reflected by the visual and force-sense information at each flow node are standardized, so the operation with the best surgical effect can be obtained. At the same time, the robot motion information and instrument force information reflected at the same flow node correspond one-to-one, ensuring the smoothness and accuracy of the cooperation between the surgical robot and the surgical instruments.
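As a concrete illustration of this one-to-one pairing, the sketch below assembles per-node samples from synchronized recordings; the stream names are hypothetical, and `FlowNodeSample` is the container sketched earlier.

```python
def assemble_dataset(visual_stream, force_stream, motion_labels, force_labels):
    """Pair the four synchronized streams node-by-node so that the visual and
    force-sense information at the same flow node match one-to-one."""
    assert (len(visual_stream) == len(force_stream)
            == len(motion_labels) == len(force_labels))
    return [FlowNodeSample(v, h, m, f)
            for v, h, m, f in zip(visual_stream, force_stream,
                                  motion_labels, force_labels)]
```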
After this standardized surgical control data set is constructed, it serves as the training sample for the multi-dimensional interaction model, so that the model can analyze the multi-dimensional information (that is, the visual and force-sense information) and control the laparoscopic surgical robot accurately in real time.
The multi-dimensional interaction model constructed by the invention comprises two neural network branches that realize interactive supervised analysis of visual and force-sense information. The first branch establishes a mapping from visual information to instrument standard force information, so that the instrument standard force information best matched to the visual information can be output. The second branch establishes a mapping from force-sense information to robot standard motion information, so that the robot standard motion information best matched to the force-sense information can be output. Because each branch establishes an interactive mapping, their respective predictions correspond to and match each other, ensuring that the robot motion information and instrument force information at the same flow node, or at the same moment, are unified.
When the two branches are trained, a self-supervised approach is adopted to ensure that the robot motion information and instrument force information are unified and standardized. First, the branches are trained with the prediction loss: the difference between the instrument standard force information output by the first branch and its true value in the data set, plus the difference between the robot standard motion information output by the second branch and its true value in the data set. This makes the predictions of the two branches correspond to and match each other and brings them as close as possible to the true results, improving prediction accuracy. Because the true values in the data set serve as the label values for training the two branches, mutual self-supervision is realized at the level of instrument force information and robot motion information.
Second, the branches are trained with the reconstruction loss: the difference between the force-sense information reconstructed from the instrument standard force information output by the first branch and the original force-sense information, plus the difference between the visual information reconstructed from the robot standard motion information output by the second branch and the original visual information. This further ensures that the instrument standard force information predicted by the first branch from the original visual information conforms to the original force-sense information, the two being unified in time.
This guarantees that the first branch outputs instrument force information matched to the visual information, and likewise that the robot motion information predicted by the second branch from the original force-sense information conforms to the original visual information, unified in time, so the second branch outputs robot motion information matched to the force-sense information. The reconstruction loss therefore makes the instrument force information and robot motion information output by the two branches inherit the temporal unity between the original visual and force-sense information. Since the original visual and force-sense information still serve as the label values in the reconstruction loss, mutual self-supervision is realized at the level of visual and force-sense information.
To reconstruct force-sense information from the instrument standard force information and visual information from the robot standard motion information, a conversion network is built. It establishes the mapping from instrument standard force information to force-sense information and from robot standard motion information to visual information, so that force-sense information can be predicted from the former and visual information from the latter.
The motion information comprises robot joint angles and position coordinates, and the force information comprises the depth, amplitude, speed and force with which the surgical instruments loaded on the robot pull and cut tissue.
Collecting the visual information and force-sense information into a data set within the standard flow of the laparoscopic simulated surgery ensures the interactive unity of the two modalities: the robot motion information and instrument force information reflected at each flow node are standardized, so the operation with the best surgical effect can be obtained, and the information reflected at the same flow node corresponds one-to-one, ensuring the smoothness and accuracy of the cooperation between the surgical robot and the surgical instruments.
As shown in FIG. 3, the method for constructing the multi-dimensional interaction model includes:
using the visual information as the input of the first branch neural network and the instrument standard force information as its output;
using the force-sense information as the input of the second branch neural network and the robot standard motion information as its output;
training the first branch neural network and the second branch neural network with the prediction loss and the reconstruction loss to obtain the multi-dimensional interaction model.
The multi-dimensional interaction model is:

$\hat{F}_t = \mathrm{CNN}_1(V_t)$;
$\hat{M}_t = \mathrm{CNN}_2(H_t)$;

where $\hat{F}_t$ is the instrument standard force information output by the first branch neural network, $\hat{M}_t$ is the robot standard motion information output by the second branch neural network, $V_t$ is the visual information, $H_t$ is the force-sense information, $\mathrm{CNN}_1$ is the first branch neural network, and $\mathrm{CNN}_2$ is the second branch neural network.
The multi-dimensional interaction model constructed by the invention comprises two neural network branches that realize interactive supervised analysis of visual and force-sense information. The first branch establishes a mapping from visual information to instrument standard force information, so that the instrument standard force information best matched to the visual information can be output. The second branch establishes a mapping from force-sense information to robot standard motion information, so that the robot standard motion information best matched to the force-sense information can be output. Because each branch establishes an interactive mapping, their respective predictions correspond to and match each other, ensuring that the robot motion information and instrument force information at the same flow node, or at the same moment, are unified.
The prediction loss is:

$L_{\mathrm{pre}} = \sum_{t=1}^{N} \left( \left\| \hat{F}_t - F_t \right\|_2 + \left\| \hat{M}_t - M_t \right\|_2 \right)$

where $L_{\mathrm{pre}}$ is the prediction loss, $\hat{F}_t$ is the instrument standard force information at the t-th flow node output by the first branch neural network, $F_t$ is the true value of the instrument standard force information at the t-th flow node in the data set, $\hat{M}_t$ is the robot standard motion information at the t-th flow node output by the second branch neural network, $M_t$ is the true value of the robot standard motion information at the t-th flow node in the data set (these symbols correspond to those shown in FIG. 3), $N$ is the total number of flow nodes in the laparoscopic surgery standard flow, and $\|\cdot\|_2$ denotes the L2 norm.
When the two branches are trained, a self-supervised approach is adopted to ensure that the robot motion information and instrument force information are unified and standardized. First, the branches are trained with the prediction loss: the difference between the instrument standard force information output by the first branch and its true value in the data set, plus the difference between the robot standard motion information output by the second branch and its true value in the data set. This makes the predictions of the two branches correspond to and match each other and brings them as close as possible to the true results, improving prediction accuracy. Because the true values in the data set serve as the label values for training the two branches, mutual self-supervision is realized at the level of instrument force information and robot motion information.
The reconstruction loss is:

$L_{\mathrm{rec}} = \sum_{t=1}^{N} \left( \left\| \tilde{H}_t - H_t \right\|_2 + \left\| \tilde{V}_t - V_t \right\|_2 \right)$

where $L_{\mathrm{rec}}$ is the reconstruction loss, $H_t$ is the force-sense information at the t-th flow node in the data set, $V_t$ is the visual information at the t-th flow node in the data set, $\tilde{H}_t$ is the force-sense information obtained by converting $\hat{F}_t$, $\tilde{V}_t$ is the visual information obtained by converting $\hat{M}_t$, exchange is the conversion network, $N$ is the total number of flow nodes in the laparoscopic surgery standard flow, and $\|\cdot\|_2$ denotes the L2 norm.

The conversion network exchange between force-sense information and visual information is:

$\tilde{H}_t = \mathrm{CNN}_3(\hat{F}_t)$;
$\tilde{V}_t = \mathrm{CNN}_4(\hat{M}_t)$;

where $\tilde{H}_t$ is the force-sense information at the t-th flow node output by the force-sense conversion network, $\hat{F}_t$ is the instrument standard force information at the t-th flow node output by the first branch neural network, $\tilde{V}_t$ is the visual information at the t-th flow node output by the visual conversion network, $\hat{M}_t$ is the robot standard motion information at the t-th flow node output by the second branch neural network (these symbols correspond to those shown in FIG. 3), and $\mathrm{CNN}_3$ and $\mathrm{CNN}_4$ are both convolutional neural networks.
Second, the branches are trained with the reconstruction loss: the difference between the force-sense information reconstructed from the instrument standard force information output by the first branch and the original force-sense information, plus the difference between the visual information reconstructed from the robot standard motion information output by the second branch and the original visual information. This further ensures that the instrument standard force information predicted by the first branch from the original visual information conforms to the original force-sense information, the two being unified in time.
This guarantees that the first branch outputs instrument force information matched to the visual information, and likewise that the robot motion information predicted by the second branch from the original force-sense information conforms to the original visual information, unified in time, so the second branch outputs robot motion information matched to the force-sense information. The reconstruction loss therefore makes the instrument force information and robot motion information output by the two branches inherit the temporal unity between the original visual and force-sense information. Since the original visual and force-sense information still serve as the label values in the reconstruction loss, mutual self-supervision is realized at the level of visual and force-sense information.
The method for real-time interactive control of the laparoscopic surgical robot using the real-time robot motion information and real-time instrument force information predicted by the multi-dimensional interaction model comprises the following steps:
inputting the real-time visual information fed back by the camera and the real-time force-sense information fed back by the force sensor into the multi-dimensional interaction model, with the first branch neural network of the multi-dimensional interaction model outputting the real-time instrument force information and the second branch neural network outputting the real-time robot motion information;
controlling the laparoscopic surgical robot to perform the surgical operation according to the real-time robot motion information and the real-time instrument force information.
The loss function of the conversion network exchange between force-sense information and visual information is:

$L_{\mathrm{exc}} = \sum_{t=1}^{N} \left( \left\| \tilde{H}_t - H_t \right\|_2 + \left\| \tilde{V}_t - V_t \right\|_2 \right)$

where $L_{\mathrm{exc}}$ is the prediction loss of the conversion network, $\tilde{H}_t$ is the force-sense information at the t-th flow node output by the force-sense conversion network, $\tilde{V}_t$ is the visual information at the t-th flow node output by the visual conversion network, $H_t$ is the force-sense information at the t-th flow node in the data set, $V_t$ is the visual information at the t-th flow node in the data set, $N$ is the total number of flow nodes in the laparoscopic surgery standard flow, and $\|\cdot\|_2$ denotes the L2 norm.
To reconstruct force-sense information from the instrument standard force information and visual information from the robot standard motion information, a conversion network is built. It establishes the mapping from instrument standard force information to force-sense information and from robot standard motion information to visual information, so that force-sense information can be predicted from the former and visual information from the latter.
The visual information at each flow node is normalized, and the force-sense information at each flow node is normalized.
As shown in FIG. 2, the invention provides a multi-dimensional interaction system of a laparoscopic surgical robot, applied to the multi-dimensional interaction method of a laparoscopic surgical robot described above; the system comprises:
the data acquisition unit is used for marking robot standard motion information in visual information recorded by a camera at each process node and marking instrument standard force information in force sense information recorded by a force sense sensor at each process node in a laparoscopic simulation operation standard process;
the model building unit is used for combining the visual information, the force sense information, the robot standard motion information and the instrument standard force information at each flow node into a data set, training the double-branch neural network based on the data set, and building a multidimensional interaction model for controlling the laparoscopic surgery robot through visual force sense information interaction;
And the interaction control unit is used for carrying out real-time interaction control on the laparoscopic surgery robot by utilizing the real-time motion information of the robot and the real-time force information of the instrument predicted by the multidimensional interaction model.
The invention provides a computer-readable storage medium storing computer-executable instructions; when a processor executes these instructions, the multi-dimensional interaction method of a laparoscopic surgical robot described above is implemented.
The invention uses visual information and force-sense information to construct a multi-dimensional interaction model for controlling the laparoscopic surgical robot, so that control parameters are analyzed on the basis of multi-dimensional information interaction and control accuracy is improved. The multi-dimensional interaction model analyzes the robot's control parameters automatically, which improves the objectivity and efficiency of control analysis, ultimately achieving standardized control of the surgical procedure and ensuring the surgical outcome.
The above embodiments are only exemplary embodiments of the present application and are not intended to limit it; the scope of protection is defined by the claims. Those skilled in the art may make various modifications and equivalent arrangements to the application within its spirit and scope, and such modifications and equivalents are also regarded as falling within its scope of protection.

Claims (10)

Translated from Chinese
1. A multi-dimensional interaction method for a laparoscopic surgical robot, characterized by comprising the following steps: in a laparoscopic simulated-surgery standard flow, marking robot standard motion information in the visual information recorded by a camera at each flow node, and marking instrument standard force information in the force-sense information recorded by a force sensor at each flow node; combining the visual information, force-sense information, robot standard motion information and instrument standard force information at each flow node into a data set, and training a dual-branch neural network on the data set to construct a multi-dimensional interaction model for controlling the laparoscopic surgical robot through visual and force-sense information interaction; and performing real-time interactive control of the laparoscopic surgical robot using the real-time robot motion information and real-time instrument force information predicted by the multi-dimensional interaction model.
2. The multi-dimensional interaction method for a laparoscopic surgical robot according to claim 1, characterized in that the motion information comprises robot joint angles and position coordinates, and the force information comprises the depth, amplitude, speed and force with which the surgical instruments loaded on the robot pull and cut tissue.
3. The multi-dimensional interaction method for a laparoscopic surgical robot according to claim 1, characterized in that the method for constructing the multi-dimensional interaction model comprises: using the visual information as the input of the first branch neural network and the instrument standard force information as its output; using the force-sense information as the input of the second branch neural network and the robot standard motion information as its output; and training the first branch neural network and the second branch neural network with the prediction loss and the reconstruction loss to obtain the multi-dimensional interaction model; the multi-dimensional interaction model being: $\hat{F}_t = \mathrm{CNN}_1(V_t)$; $\hat{M}_t = \mathrm{CNN}_2(H_t)$; where $\hat{F}_t$ is the instrument standard force information output by the first branch neural network, $\hat{M}_t$ is the robot standard motion information output by the second branch neural network, $V_t$ is the visual information, $H_t$ is the force-sense information, $\mathrm{CNN}_1$ is the first branch neural network, and $\mathrm{CNN}_2$ is the second branch neural network.
4. The multi-dimensional interaction method for a laparoscopic surgical robot according to claim 3, characterized in that the prediction loss is: $L_{\mathrm{pre}} = \sum_{t=1}^{N} \left( \| \hat{F}_t - F_t \|_2 + \| \hat{M}_t - M_t \|_2 \right)$, where $L_{\mathrm{pre}}$ is the prediction loss, $\hat{F}_t$ is the instrument standard force information at the t-th flow node output by the first branch neural network, $F_t$ is the true value of the instrument standard force information at the t-th flow node in the data set, $\hat{M}_t$ is the robot standard motion information at the t-th flow node output by the second branch neural network, $M_t$ is the true value of the robot standard motion information at the t-th flow node in the data set, $N$ is the total number of flow nodes in the laparoscopic surgery standard flow, and $\|\cdot\|_2$ denotes the L2 norm.
5. The multi-dimensional interaction method for a laparoscopic surgical robot according to claim 4, characterized in that the reconstruction loss is: $L_{\mathrm{rec}} = \sum_{t=1}^{N} \left( \| \tilde{H}_t - H_t \|_2 + \| \tilde{V}_t - V_t \|_2 \right)$, where $L_{\mathrm{rec}}$ is the reconstruction loss, $H_t$ is the force-sense information at the t-th flow node in the data set, $V_t$ is the visual information at the t-th flow node in the data set, $\tilde{H}_t$ is the force-sense information obtained by converting $\hat{F}_t$, $\tilde{V}_t$ is the visual information obtained by converting $\hat{M}_t$, exchange is the conversion network, $N$ is the total number of flow nodes in the laparoscopic surgery standard flow, and $\|\cdot\|_2$ denotes the L2 norm; wherein the conversion network exchange between force-sense information and visual information is: $\tilde{H}_t = \mathrm{CNN}_3(\hat{F}_t)$; $\tilde{V}_t = \mathrm{CNN}_4(\hat{M}_t)$; where $\tilde{H}_t$ is the force-sense information at the t-th flow node output by the force-sense conversion network, $\hat{F}_t$ is the instrument standard force information at the t-th flow node output by the first branch neural network, $\tilde{V}_t$ is the visual information at the t-th flow node output by the visual conversion network, $\hat{M}_t$ is the robot standard motion information at the t-th flow node output by the second branch neural network, and $\mathrm{CNN}_3$ and $\mathrm{CNN}_4$ are both convolutional neural networks.
6. The multi-dimensional interaction method for a laparoscopic surgical robot according to claim 5, characterized in that the method for real-time interactive control of the laparoscopic surgical robot using the real-time robot motion information and real-time instrument force information predicted by the multi-dimensional interaction model comprises: inputting the real-time visual information fed back by the camera and the real-time force-sense information fed back by the force sensor into the multi-dimensional interaction model, with the first branch neural network of the multi-dimensional interaction model outputting the real-time instrument force information and the second branch neural network outputting the real-time robot motion information; and controlling the laparoscopic surgical robot to perform the surgical operation according to the real-time robot motion information and the real-time instrument force information.
7. The multi-dimensional interaction method for a laparoscopic surgical robot according to claim 5, characterized in that the loss function of the conversion network exchange between force-sense information and visual information is: $L_{\mathrm{exc}} = \sum_{t=1}^{N} \left( \| \tilde{H}_t - H_t \|_2 + \| \tilde{V}_t - V_t \|_2 \right)$, where $L_{\mathrm{exc}}$ is the prediction loss of the conversion network, $\tilde{H}_t$ is the force-sense information at the t-th flow node output by the force-sense conversion network, $\tilde{V}_t$ is the visual information at the t-th flow node output by the visual conversion network, $H_t$ is the force-sense information at the t-th flow node in the data set, $V_t$ is the visual information at the t-th flow node in the data set, $N$ is the total number of flow nodes in the laparoscopic surgery standard flow, and $\|\cdot\|_2$ denotes the L2 norm.
8. The multi-dimensional interaction method for a laparoscopic surgical robot according to claim 1, characterized in that the visual information at each flow node is normalized, and the force-sense information at each flow node is normalized.
9. A multi-dimensional interaction system of a laparoscopic surgical robot, characterized in that it applies the multi-dimensional interaction method for a laparoscopic surgical robot according to any one of claims 1-8, the system comprising: a data acquisition unit for marking, in the laparoscopic simulated-surgery standard flow, robot standard motion information in the visual information recorded by the camera at each flow node, and instrument standard force information in the force-sense information recorded by the force sensor at each flow node; a model construction unit for combining the visual information, force-sense information, robot standard motion information and instrument standard force information at each flow node into a data set, and training the dual-branch neural network on the data set to construct the multi-dimensional interaction model for controlling the laparoscopic surgical robot through visual and force-sense information interaction; and an interaction control unit for real-time interactive control of the laparoscopic surgical robot using the real-time robot motion information and real-time instrument force information predicted by the multi-dimensional interaction model.
10. A computer-readable storage medium, characterized in that computer-executable instructions are stored therein, and when a processor executes the computer-executable instructions, the method according to any one of claims 1-8 is implemented.
CN202510571831.2A | Priority 2025-05-06 | Filed 2025-05-06 | Multidimensional interaction method, system and storage medium of laparoscopic surgery robot | Active | Granted as CN120078518B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202510571831.2A (CN120078518B) | 2025-05-06 | 2025-05-06 | Multidimensional interaction method, system and storage medium of laparoscopic surgery robot

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202510571831.2A (CN120078518B) | 2025-05-06 | 2025-05-06 | Multidimensional interaction method, system and storage medium of laparoscopic surgery robot

Publications (2)

Publication Number | Publication Date
CN120078518A | 2025-06-03
CN120078518B | 2025-08-15

Family

Family ID: 95845605

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202510571831.2A (Active, granted as CN120078518B) | Multidimensional interaction method, system and storage medium of laparoscopic surgery robot | 2025-05-06 | 2025-05-06

Country Status (1)

Country | Link
CN | CN120078518B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US6062865A (en)* | 1996-07-23 | 2000-05-16 | Medical Simulation Corporation | System for training persons to perform minimally invasive surgical procedures
CN114758419A (en)* | 2022-04-20 | 2022-07-15 | 普达迪泰(天津)智能装备科技有限公司 | AR-based multidimensional intelligent application system
US20230042756A1 (en)* | 2021-10-09 | 2023-02-09 | Southeast University | Autonomous mobile grabbing method for mechanical arm based on visual-haptic fusion under complex illumination condition
CN117017488A (en)* | 2023-10-10 | 2023-11-10 | 华中科技大学同济医学院附属协和医院 | Puncture arm path planning method comprising non-autonomous motion compensation
CN117612066A (en)* | 2023-12-01 | 2024-02-27 | 香港中文大学深港创新研究院(福田) | Robot action recognition method based on multi-mode information
CN119097359A (en)* | 2024-05-07 | 2024-12-10 | 合肥工业大学 | Force-driven and vision-driven laparoscope field of view adjustment hybrid control method and system
WO2025027463A1 (en)* | 2023-08-01 | 2025-02-06 | Covidien LP | System and method for processing combined data streams of surgical robots
CN119679513A (en)* | 2024-12-18 | 2025-03-25 | 四川大学 | Digestive tract catheterization robot navigation method and system based on multimodal information fusion


Also Published As

Publication number | Publication date
CN120078518B (en) | 2025-08-15

Similar Documents

Publication | Title
CN104589356B | The Dextrous Hand remote operating control method caught based on Kinect human hand movement
KR101975808B1 | System and method for the evaluation of or improvement of minimally invasive surgery skills
van Amsterdam et al. | Weakly supervised recognition of surgical gestures
Wang et al. | Object detection of surgical instruments based on Yolov4
CN111199207A | Two-dimensional multi-human body posture estimation method based on depth residual error neural network
CN109581981A | A kind of data fusion system and its working method based on data assessment Yu system coordination module
Jog et al. | Towards integrating task information in skills assessment for dexterous tasks in surgery and simulation
CN113408443B | Gesture posture prediction method and system based on multi-view images
CN110047145B | Tissue deformation simulation system and method based on deep learning and finite element modeling
Li et al. | SE-OHFM: A surgical phase recognition network with SE attention module
Karimi et al. | Reward learning from suboptimal demonstrations with applications in surgical electrocautery
Zhou et al. | Detection of surgical instruments based on YOLOv5
Luo et al. | Multi-modal autonomous ultrasound scanning for efficient human–machine fusion interaction
CN120078518A | A multi-dimensional interaction method, system and storage medium of a laparoscopic surgical robot
Joglekar et al. | Autonomous Image-to-Grasp Robotic Suturing Using Reliability-Driven Suture Thread Reconstruction
Yang et al. | Instrument-splatting: Controllable photorealistic reconstruction of surgical instruments using gaussian splatting
CN117718967A | Surgical robot testing method, device, equipment and medium
CN116597943A | A forward trajectory prediction method and device for instrument operation in minimally invasive surgery
Qiao et al. | A deep learning-based intelligent analysis platform for fetal ultrasound four-chamber views
Zhu et al. | A bronchoscopic navigation method based on neural radiation fields
Hendricks et al. | Exploring the limitations and implications of the jigsaws dataset for robot-assisted surgery
CN113807280A | Kinect-based virtual ship cabin system and method
CN119091086B | Minimally invasive surgery postoperative review method and system based on surgery multi-stage scene registration
Arnout et al. | DR-TiST: Disentangled representation for time series translation across application domains
Gao et al. | EndoRD-GS: Robust Deformable Endoscopic Scene Reconstruction via Gaussian Splatting

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
