CN113180827A - Visual navigation method and device for abdominal cavity operation - Google Patents

Visual navigation method and device for abdominal cavity operation

Info

Publication number
CN113180827A
CN113180827A
Authority
CN
China
Prior art keywords
image recognition
coordinate
position information
information
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110414164.9A
Other languages
Chinese (zh)
Inventor
蔺又甲
赵岭岚
王栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Turing Minimally Invasive Medical Technology Co Ltd
Original Assignee
Beijing Turing Minimally Invasive Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Turing Minimally Invasive Medical Technology Co Ltd
Priority to CN202110414164.9A
Publication of CN113180827A
Legal status: Pending (current)

Abstract

The invention relates to a visual navigation method and device for abdominal cavity surgery, wherein the method comprises the following steps: determining a visual field range of image recognition; determining position information of the center of the visual field range of the image recognition; selecting at least one point of an image recognition object and determining position information of the at least one point, so as to determine the position information of the image recognition object; and generating navigation information according to the relationship between the position information of the image recognition object and the position information of the center of the visual field range of the image recognition, so as to guide the operation of the robot system for the abdominal cavity surgery. By controlling the corresponding operations of abdominal surgery through image recognition and visual navigation, the method and the device can effectively save the physical strength and energy of medical personnel and ensure that the surgery proceeds smoothly.

Description

Visual navigation method and device for abdominal cavity operation
Technical Field
The invention relates to the field of abdominal cavity operation, in particular to a visual navigation method, a visual navigation device and a storage medium for abdominal cavity operation.
Background
Minimally invasive abdominal surgery represents a major advance over traditional open abdominal surgery, offering advantages such as a smaller wound surface and faster postoperative recovery. With its gradual popularization and expansion, robot systems for minimally invasive abdominal surgery have been developed and applied, further improving this type of surgery.
During minimally invasive abdominal surgery, images of the surgical site (e.g., images within the abdominal cavity) are obtained through a laparoscope, providing medical personnel with a visual reference for the procedure. In the course of the surgery, the acquired view of the inside of the abdominal cavity must be changed constantly to keep the procedure running smoothly, so the angle of the laparoscope needs frequent adjustment. At present, this adjustment is generally performed in one of two ways. The first is manual operation: a dedicated medical staff member moves the laparoscope, which is prone to instability, large errors, and repeated adjustments, and increases the staff's workload, affecting the smooth progress of the surgery. The second is to clamp the laparoscope with a mechanical arm operated by medical personnel through a foot switch; this requires frequent pedaling, which consumes the personnel's physical strength and distracts their attention.
Disclosure of Invention
In order to solve the above problems, an aspect of the present invention is directed to provide a visual navigation method for abdominal surgery, the method including: determining a visual field range of image recognition; determining the position information of the center of the visual field range of the image recognition according to the determined visual field range of the image recognition; selecting at least one point of an image recognition object, and determining position information of the at least one point to determine the position information of the image recognition object; and generating navigation information according to the relationship between the position information of the image recognition object and the position information of the center of the visual field range of the image recognition to guide the operation of the robot system for the abdominal cavity operation.
Optionally, generating navigation information to guide an operation of the robot system for the abdominal surgery according to a relationship between the position information of the image recognition object and the position information of the center of the image recognition visual field range, comprises: determining a first coordinate of a center of a visual field range of image recognition; determining second coordinates of the image recognition object; generating motion data of a mechanical arm of a robot system for the abdominal cavity surgery according to the position relation of the first coordinate and the second coordinate; controlling the motion of the robotic arm according to the motion data.
Optionally, generating motion data of a robotic arm of a robotic system for abdominal surgery from a positional relationship of the first and second coordinates comprises: determining at least one angle data based on a difference value of coordinate values of the first coordinate and the second coordinate on a corresponding coordinate axis and focal length information of an image pickup device performing image recognition; generating motion data of a robotic arm based on the at least one angle data to control motion of the robotic arm; wherein the robot arm is a six-axis robot arm or a seven-axis robot arm, and the image pickup device performing the image recognition is a laparoscope.
Optionally, the first coordinate and the second coordinate are each determined according to resolution information of the image pickup apparatus that performs the image recognition, wherein the first coordinate and the second coordinate are both two-dimensional coordinates.
Optionally, the image recognition object is located within a field of view of the image recognition.
Another aspect of the present invention is directed to provide a visual navigation device for abdominal surgery, comprising: a first determination module configured to determine a field of view for image recognition; a second determination module configured to determine position information of a center of the image recognition visual field range according to the determined image recognition visual field range; a third determination module configured to: selecting at least one point of an image recognition object, and determining position information of the at least one point to determine the position information of the image recognition object; and a navigation module configured to generate navigation information to guide an operation of the robot system for the abdominal surgery according to a relationship of the position information of the image recognition object and the position information of the center of the image-recognized visual field range.
Optionally, the navigation module comprises: a first coordinate determination sub-module configured to determine a first coordinate of a center of a field of view of the image recognition; a second coordinate determination sub-module configured to determine second coordinates of the image recognition object; a motion data generation submodule configured to generate motion data of a robot arm of a robot system for abdominal surgery according to a positional relationship of the first coordinate and the second coordinate; and a motion control sub-module configured to control motion of the robotic arm in accordance with the motion data.
Optionally, the motion data generation sub-module is specifically configured to: determining at least one angle data based on a difference value of coordinate values of the first coordinate and the second coordinate on a corresponding coordinate axis and focal length information of an image pickup device performing image recognition; generating motion data for the robotic arm based on the at least one angle data to control motion of the robotic arm.
Optionally, the first coordinate determination submodule is specifically configured to determine the first coordinate according to resolution information of an image pickup apparatus that performs image recognition; the second coordinate determination submodule is specifically configured to determine the second coordinate from resolution information of an imaging device that performs image recognition; wherein the first coordinate and the second coordinate are both two-dimensional coordinates.
In addition, the present invention also provides a storage medium, wherein the storage medium stores a computer program, and the computer program is executed by a processor to implement the above-mentioned visual navigation method for abdominal cavity surgery.
The visual navigation method, device, and storage medium for abdominal cavity surgery provided by the invention obtain several specific pieces of position information through image recognition during abdominal surgery and determine navigation information from them, thereby guiding the operation of the robot system for the abdominal cavity surgery with good accuracy, sensitivity, and response speed. This solves the following technical problems in the prior art: manual operation is prone to instability, large errors, and repeated adjustments, and increases the workload of medical personnel, affecting the smooth progress of the surgery; operation through a foot switch requires frequent pedaling, consuming the physical strength of medical personnel and distracting their attention.
Drawings
Some and/or other aspects of the invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart illustrating a visual navigation method for abdominal surgery according to a first embodiment of the present invention;
fig. 2 is another flowchart illustrating a visual navigation method for abdominal surgery according to a first embodiment of the present invention;
fig. 3 is yet another flowchart illustrating a visual navigation method for abdominal surgery according to a first embodiment of the present invention;
fig. 4 is a schematic configuration diagram showing a visual navigation device for abdominal surgery according to a second embodiment of the present invention;
fig. 5 is a schematic view showing another configuration of the visual navigation device for abdominal surgery according to the second embodiment of the present invention.
The various drawings in the present invention are for purposes of illustrating the invention and are not intended to be limiting.
Detailed Description
Like reference numerals refer to like elements throughout the specification. Not all elements of the exemplary embodiments of the present invention are described, and descriptions of what is well known in the art or redundant between embodiments are omitted.
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The principles and exemplary embodiments of the present invention will now be described with reference to fig. 1-5.
In abdominal cavity surgery, when surgical instruments such as an ultrasonic scalpel or surgical forceps are operated in the abdominal cavity, a laparoscope is generally required to work in concert with these instruments: it facilitates the doctor's operation and provides a real-time image of the abdominal cavity that serves as the basis for operations such as mechanical arm movement. Accordingly, the visual navigation method for abdominal surgery according to the first embodiment of the present invention may include:
the visual field range of the image recognition is determined (step S1).
In abdominal cavity surgery, images of the abdominal cavity are acquired through a laparoscope, and surgical tools in the abdominal cavity (such as an ultrasonic knife or an electric knife) are identified in those images. The visual field range of the image recognition can be determined according to the specific conditions of the surgery, the parameter information of the laparoscope, and so on. For example, the magnification of the laparoscope may be 1, 2, 4, 6, etc.; the laparoscope lens may have different viewing angles such as 0°, 30°, or 75°; and the diameter and length of the laparoscope can also be chosen according to actual requirements. The object of image recognition (e.g., the blade of an ultrasonic knife) should lie within the determined visual field range. Further, to help the doctor identify the object, a region containing it may be outlined in graphic form (for example, a rectangular or circular frame; the invention is not limited in this respect) within the visual field range and displayed on the screen. The outlined region does not necessarily represent the entire visual field range of the image recognition; in general it is only a part of it. Of course, the rectangular frame is provided and displayed only for convenience of identification; the region may be represented by another shape, or may not be displayed at all.
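The on-screen frame around the recognition object can be sketched as follows; the helper name and the pixel-based representation are illustrative assumptions, not the patent's implementation:

```python
def recognition_box(tool_point, half_width, half_height, view_w, view_h):
    """Rectangle (left, top, right, bottom), in pixels, around a recognized
    tool point, clamped so it never leaves the field of view."""
    x, y = tool_point
    left = max(0, x - half_width)
    top = max(0, y - half_height)
    right = min(view_w - 1, x + half_width)
    bottom = min(view_h - 1, y + half_height)
    return (left, top, right, bottom)

# A 100x60 px frame around a tool point in a 1920x1080 view:
box = recognition_box((100, 100), 50, 30, 1920, 1080)
```

The clamping step matters near the image border, where a fixed-size frame would otherwise extend outside the displayed field of view.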
Position information of the center of the image-recognized visual field range is determined from the determined image-recognized visual field range (step S3).
After the visual field range of the image recognition is determined, the center position of that range may be determined; it provides a reference for the position of the recognition object (e.g., the blade of an ultrasonic knife or an electric knife) in the image, so that position changes of the ultrasonic knife or similar instruments in the abdominal cavity can be recognized effectively.
Further, the method of the present embodiment may also select at least one point of the image recognition object, and determine position information of the at least one point to determine the position information of the image recognition object (step S5).
In the present embodiment, a point on the image recognition object (for example, the blade of an ultrasonic knife or the jaw of a forceps, but the present invention is not limited thereto) may be selected, and the position information of the image recognition object may be represented by the position information of that point. In abdominal cavity surgery, cutting, hemostasis, separation, coagulation, and the like are generally performed with the blade of the ultrasonic knife, so it may be preferable to select a point on the blade and determine its position information to characterize the blade position of the ultrasonic knife; the invention, however, is not limited thereto.
Finally, navigation information may be generated based on the relationship between the position information of the image recognition object and the position information of the center of the image-recognized visual field range, thereby guiding the operation of the robot system for the abdominal cavity surgery (step S7).
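Taken together, steps S1 to S7 amount to comparing the recognized tool position with the center of the field of view. A minimal sketch in Python (the function names and the pixel-offset form of the navigation information are assumptions for illustration):

```python
def view_center(width, height):
    """Steps S1/S3: center of the image-recognition field of view (pixels)."""
    return (width // 2, height // 2)

def navigation_offset(tool_point, width, height):
    """Steps S5/S7: offset of the recognized tool point from the view
    center; a downstream controller would steer the laparoscope so that
    this offset shrinks toward zero."""
    cx, cy = view_center(width, height)
    x, y = tool_point
    return (x - cx, y - cy)

# Tool recognized at (1200, 400) in a 1920x1080 image:
dx, dy = navigation_offset((1200, 400), 1920, 1080)
```

A zero offset means the tool is already centered and no laparoscope motion is needed.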
The visual navigation method for abdominal cavity surgery according to the first embodiment of the present invention determines specific position information through image recognition and guides the operation of the robot system for the abdominal cavity surgery through the relationships among these pieces of position information. The method requires no additional medical personnel and avoids frequent pedaling by the practitioner, saving the physical strength and energy of medical personnel while operating accurately. It thereby solves the following problems in the prior art: a dedicated staff member must be arranged to move the laparoscope, which is prone to instability, large errors, and repeated adjustments, consumes the staff's physical strength, distracts their attention, increases their workload, and affects the smooth progress of the surgery.
Specifically, step S7 may be realized by steps S71 to S77 as follows.
In the present embodiment, the position information of the center of the above-described visual field range of the image recognition and the position information of the image recognition object may each be specified as coordinates in a particular coordinate system. That is, the method of the present embodiment may determine a first coordinate of the center of the visual field range of the image recognition (step S71) and a second coordinate of the image recognition object (step S73), and generate the navigation information according to the positional relationship of the first coordinate and the second coordinate. Since the visual field range is determined by the parameters of the laparoscope, the first coordinate representing its center is constant. In this embodiment, the relationship between the first and second coordinates may be used to generate navigation information to guide the operation of the robotic system for abdominal surgery. Specifically, motion data of the mechanical arm of the robot system for the abdominal cavity surgery may be generated according to the positional relationship of the first and second coordinates (S75), and the motion of the mechanical arm may be controlled according to the motion data (S77).
In implementing the above-described step S75, at least one angle datum may be determined based on the difference between the first coordinate and the second coordinate on the same coordinate axis, combined with the focal length information of the image pickup device that performs the image recognition (the laparoscope in the present embodiment) (S751). For example, when the first coordinate and the second coordinate are two-dimensional coordinates, two difference values may be calculated, and two angle data (e.g., Euler angles) may be determined from the trigonometric relationships formed by the two difference values and the focal length of the laparoscope. Thereafter, motion data of the mechanical arm may be generated from the two angle data (S753), thereby controlling the motion of the arm. As the image recognition object (e.g., the tool head of the ultrasonic knife) moves, its coordinate information changes, so the two angle data change accordingly, providing navigation information for the motion of the mechanical arm. That is, the mechanical arm may automatically follow the movement of the ultrasonic knife's tool head in the abdominal cavity, so that the movements of the surgical instrument and of the mechanical arm correspond to each other without manual control of the arm that clamps the laparoscope. In this embodiment, the mechanical arm of the robot system for abdominal cavity surgery may be a six-axis six-degree-of-freedom mechanical arm or a seven-axis seven-degree-of-freedom mechanical arm, which can convert the two angle data into motion data corresponding to three axes; the specific conversion method is well known to those skilled in the art and is not described here.
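The angle computation in step S751 can be illustrated with a pinhole-camera approximation: each pixel offset and the focal length (expressed in pixels) form a right triangle whose arctangent gives a steering angle. The following is a sketch under that assumption, not the patent's exact formula:

```python
import math

def view_angles(center, target, focal_length_px):
    """Pan/tilt angles (radians) that would rotate the view so the target
    point moves to the view center; offsets and focal length are both in
    pixels, so their ratio is dimensionless."""
    dx = target[0] - center[0]
    dy = target[1] - center[1]
    pan = math.atan2(dx, focal_length_px)
    tilt = math.atan2(dy, focal_length_px)
    return pan, tilt

# Tool 100 px to the right of center, focal length of roughly 1000 px:
pan, tilt = view_angles((960, 540), (1060, 540), 1000.0)
```

As the tool moves, recomputing these two angles on each frame yields the continuously updated navigation information described above.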
In this embodiment, the first coordinate and the second coordinate may be determined according to the resolution information of the laparoscope, for example, the resolution of the laparoscope is 1920 × 1080 (the present invention is not limited thereto), a corresponding coordinate system may be formed during image recognition, the position information (for example, coordinate information) of the center of the image recognition range may be determined, and similarly, the position information of the image recognition object may be determined.
Furthermore, before determining the respective position information in the image, the image acquired by the laparoscope may be subjected to predetermined processing (e.g., contrast enhancement, noise removal, conversion to a specific format, gray scale processing, etc., but the present invention is not limited thereto) to facilitate subsequent operations.
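Two of the preprocessing operations listed above, gray-scale conversion and contrast enhancement, can be sketched in plain Python over pixel tuples (a deliberately minimal illustration; a real system would operate on full image buffers through an image-processing library):

```python
def to_gray(pixel):
    """Convert one (R, G, B) pixel to a gray value using the common
    ITU-R BT.601 luma weights."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def stretch_contrast(gray_values):
    """Linearly stretch gray values so the darkest maps to 0 and the
    brightest to 255 (a simple contrast enhancement)."""
    lo, hi = min(gray_values), max(gray_values)
    if hi == lo:
        return [0 for _ in gray_values]
    return [round(255 * (v - lo) / (hi - lo)) for v in gray_values]

gray = [to_gray(p) for p in [(200, 30, 30), (30, 200, 30), (30, 30, 200)]]
enhanced = stretch_contrast(gray)
```

Such preprocessing makes the recognition object stand out more clearly before its position is measured.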
Of course, the method of this embodiment may also identify a plurality of predetermined portions in the laparoscopic surgery image. For example, a specific organ, a specified lesion, or the like in the abdominal cavity is recognized, and the movement of the manipulator is controlled based on the positional relationship between the position of the organ or the position of the specified lesion and the center of the visual field of the image recognition.
In addition, the laparoscope may move in any direction (left, right, up, down, etc.) within a plane defined by the tool center point of the mechanical arm device, and in this embodiment it may also extend into the abdominal cavity or retract outward according to the actual condition of the surgery, so as to provide a suitable image of the abdominal operation. Because the laparoscope is clamped and moved by the mechanical arm, control methods such as proportional-integral-derivative (PID) or proportional-derivative (PD) control can be used to keep the arm's motion stable, avoiding reciprocating motion, shaking, and similar situations caused by frequent changes of direction, and thus ensuring safety during the abdominal cavity surgery.
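The PID control mentioned above can be sketched as a minimal discrete controller; the gains used here are arbitrary placeholders, not values tuned for any real mechanical arm:

```python
class PID:
    """Minimal discrete PID controller:
    output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        """One control step; error could be, e.g., the tool's pixel offset
        from the view center, and the return value the commanded correction."""
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Purely proportional response to a 240-px horizontal offset:
command = PID(kp=0.01, ki=0.0, kd=0.0).step(240.0, dt=0.033)
```

The derivative term is what damps oscillation: it opposes rapid error changes, which is why such a controller helps avoid the reciprocating motion and shaking described above.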
Accordingly, a second embodiment of the present invention provides a visual navigation device 1 for abdominal surgery, which may include: a first determination module 10, a second determination module 30, a third determination module 50, and a navigation module 70. The first determination module 10 may be configured to determine a visual field range for image recognition; the second determination module 30 may be configured to determine position information of the center of the visual field range according to the determined visual field range; the third determination module 50 may be configured to select at least one point of an image recognition object and determine position information of the at least one point, so as to determine the position information of the image recognition object; and the navigation module 70 may be configured to generate navigation information to guide the operation of the robot system for the abdominal surgery according to the relationship between the position information of the image recognition object and the position information of the center of the visual field range of the image recognition.
In performing abdominal surgery, images of the abdominal cavity are obtained through a laparoscope, and surgical tools in the abdominal cavity (e.g., an ultrasonic knife or an electric knife) are identified laparoscopically. Accordingly, the first determination module 10 may determine the visual field range of the image recognition according to the specific conditions of the surgery, the parameter information of the laparoscope, and so on. For example, the magnification of the laparoscope may be 1, 2, 4, 6, 8, etc.; the laparoscope lens may have different viewing angles such as 0°, 30°, or 75°; and the diameter and length of the laparoscope can also be chosen according to actual requirements. The object of image recognition (e.g., the blade of an ultrasonic knife) should lie within the determined visual field range. Further, in actual practice, to help the doctor identify the object, a region containing it may be outlined in graphic form (for example, a rectangular or circular frame; the invention is not limited in this respect) within the visual field range and displayed on the screen. The outlined region does not necessarily represent the entire visual field range of the image recognition; in general it is only a part of it. Of course, the rectangular frame is provided and displayed only for convenience of identification; the region may be represented by another shape, or may not be displayed at all.
Further, the second determination module 30 may determine the center position of the visual field range, which provides a reference for the position of the recognition object (e.g., the blade of an ultrasonic knife or an electric knife) in the image, so that position changes of the ultrasonic knife or similar instruments in the abdominal cavity can be recognized effectively.
In this embodiment, the third determination module 50 may select a point on the image recognition object (for example, the blade of an ultrasonic knife or the jaw of a forceps, but the present invention is not limited thereto), and the position information of the image recognition object is represented by the position information of that point. In abdominal cavity surgery, cutting, hemostasis, separation, coagulation, and the like are generally performed with the blade of the ultrasonic knife, so it may be preferable to select a point on the blade and determine its position information to characterize the blade position of the ultrasonic knife; the invention, however, is not limited thereto.
Finally, the navigation module 70 may generate navigation information according to the relationship between the position information of the image recognition object determined by the third determination module 50 and the position information of the center of the visual field range determined by the second determination module 30, thereby guiding the operation of the robot system for the abdominal cavity surgery.
Specifically, within the navigation module 70, the first coordinate determination sub-module 701 and the second coordinate determination sub-module 703 may be configured to determine, respectively, a first coordinate of the center of the visual field range of the image recognition and a second coordinate of the image recognition object. The motion data generation sub-module 705 may then generate motion data of the mechanical arm of the robot system for the abdominal surgery according to the positional relationship of the first and second coordinates, so that the motion control sub-module 707 can control the motion of the mechanical arm according to the motion data.
In this embodiment, the motion data generation sub-module 705 may be specifically configured to determine at least one angle datum based on the difference between the first coordinate and the second coordinate on the same coordinate axis, combined with the focal length information of the image pickup device that performs the image recognition (the laparoscope in this embodiment). For example, when the first coordinate and the second coordinate are two-dimensional coordinates, the motion data generation sub-module 705 may calculate two difference values and determine two angle data (e.g., Euler angles corresponding to the motion of the mechanical arm) from the trigonometric relationships formed by the two difference values and the focal length of the laparoscope. The motion data generation sub-module 705 then generates motion data of the mechanical arm from the two angle data, and the motion control sub-module 707 controls the motion of the arm accordingly. As the image recognition object (for example, the tool head of the ultrasonic knife) moves, the first and second coordinates determined by the first coordinate determination sub-module 701 and the second coordinate determination sub-module 703 change, so the angle data determined by the motion data generation sub-module 705 change correspondingly, and the motion of the mechanical arm is adjusted by the motion control sub-module 707. That is, in the device of the present invention, the mechanical arm may automatically follow the movement of the ultrasonic knife's tool head in the abdominal cavity, so that the movements of the surgical instrument and of the mechanical arm correspond to each other without manual control of the arm holding the laparoscope.
In this embodiment, the mechanical arm of the robot system for abdominal cavity surgery may be a six-axis six-degree-of-freedom mechanical arm or a seven-axis seven-degree-of-freedom mechanical arm, which can convert the two angle data into motion data corresponding to three axes; the specific conversion method is well known to those skilled in the art and is not described here.
In this embodiment, the first coordinate and the second coordinate may be determined according to the resolution information of the laparoscope, for example, the resolution of the laparoscope is 1024 × 768 (not limited thereto), a corresponding coordinate system may be formed when performing image recognition, the position information (for example, coordinate information) of the center of the image recognition range may be determined, and similarly, the position information of the image recognition object may be determined.
Furthermore, before determining the respective position information in the image, the image acquired by the laparoscope may be subjected to predetermined processing (e.g., contrast enhancement, noise removal, conversion to a specific format, gray scale processing, etc., but the present invention is not limited thereto) to facilitate subsequent operations.
Of course, the device in this embodiment can also identify a plurality of predetermined portions in the abdominal surgery image. For example, a specific organ, a specified lesion, or the like in the abdominal cavity is recognized, and the movement of the manipulator is controlled based on the positional relationship between the position of the organ or the position of the specified lesion and the center of the visual field of the image recognition.
In addition, the laparoscope may move in any direction (left, right, up, down, and so on) within the plane defined by the tool center point of the mechanical arm, and may also be advanced into or withdrawn from the abdominal cavity as the actual conditions of the surgery require, so as to provide a suitable image of the operation. Because the laparoscope is held and moved by the mechanical arm, control methods such as proportional-integral-derivative (PID) or proportional-derivative (PD) control can be used to keep the arm's motion stable, avoiding the reciprocating motion, shaking, and similar effects caused by frequent changes of direction, thereby helping to ensure safety during the abdominal operation.
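As one illustration of such stabilization, a minimal discrete PID controller with a small dead band, so the arm does not chatter around the target when the error is tiny. The gains and dead-band width below are arbitrary placeholders, not tuned values for any real arm:

```python
class PID:
    """Minimal discrete PID controller sketch for one arm axis."""

    def __init__(self, kp, ki, kd, deadband=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.deadband = deadband
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        # Ignore tiny errors so frequent direction changes do not
        # translate into reciprocating motion of the arm.
        if abs(error) < self.deadband:
            error = 0.0
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=1.0, ki=0.0, kd=0.0, deadband=0.1)
small = pid.step(0.05, 0.01)  # within the dead band: no correction
big = pid.step(1.0, 1.0)      # proportional response
```

In a full system one such controller (or a PD variant) would run per controlled axis, with the angle error computed from the image recognition result.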
Furthermore, the present invention relates to a storage medium storing a computer program which, when executed by a processor, implements the visual navigation method for abdominal surgery according to the first embodiment of the present invention. In particular, the storage medium may be implemented as a non-transitory computer-readable storage medium.
A non-transitory computer-readable storage medium may be any storage medium that stores instructions interpretable by a computer, for example ROM, RAM, magnetic tape, magnetic disk, flash memory, or optical data storage.
Embodiments of the present invention have thus been described with reference to the drawings. It will be apparent to those skilled in the art that the present invention can be embodied in forms other than the embodiments described above without departing from its technical idea or essential features. The above embodiments are therefore exemplary only and should not be construed as limiting.

Claims (10)

CN202110414164.9A, filed 2021-04-16 (priority 2021-04-16): Visual navigation method and device for abdominal cavity operation. Status: Pending. Publication: CN113180827A (en).

Priority Applications (1)

- CN202110414164.9A (CN113180827A), priority date 2021-04-16, filing date 2021-04-16: Visual navigation method and device for abdominal cavity operation


Publications (1)

- CN113180827A (en), publication date 2021-07-30

Family

ID: 76977278

Family Applications (1)

- CN202110414164.9A, priority date 2021-04-16, filing date 2021-04-16: CN113180827A (Pending)

Country Status (1)

- CN: CN113180827A (en)

Citations (9)

* Cited by examiner, † Cited by third party

- US20150320514A1* (priority 2014-05-08, published 2015-11-12), Samsung Electronics Co., Ltd.: Surgical robots and control methods thereof
- CN107049492A* (priority 2017-05-26, published 2017-08-18), MicroPort (Shanghai) Medical Robot Co., Ltd.: Display method of surgical robot system and surgical instrument position
- US20180271603A1* (priority 2015-08-30, published 2018-09-27), M.S.T. Medical Surgery Technologies Ltd: Intelligent surgical tool control system for laparoscopic surgeries
- CN108577980A* (priority 2018-02-08, published 2018-09-28), Nanfang Hospital, Southern Medical University: Method, system and device for automatic tracking of an ultrasonic cutter head
- US20180325604A1* (priority 2014-07-10, published 2018-11-15), M.S.T. Medical Surgery Technologies Ltd: Improved interface for laparoscopic surgeries - movement gestures
- CN109171957A* (priority 2018-06-22, published 2019-01-11), Nanfang Hospital, Southern Medical University: Intelligent automatic laparoscope movement and precise adjustment system based on image feature recognition and tracking
- CN110464468A* (priority 2019-09-10, published 2019-11-19), Shenzhen Jingfeng Medical Technology Co., Ltd.: Surgical robot, and control method and control device for its end instrument
- US20200085282A1* (priority 2017-03-27, published 2020-03-19), Sony Corporation: Surgical imaging system, image processing apparatus for surgery, and method for controlling an imaging procedure
- CN112618028A* (priority 2021-01-06, published 2021-04-09), Shenzhen Jingfeng Medical Technology Co., Ltd.: Surgical robot and method and control device for guiding surgical arm to move


Similar Documents

- US11758262B2: Intelligent manual adjustment of an image control element
- EP3737322B1: Guidance for placement of surgical ports
- US10881268B2: Device to set and retrieve a reference point during a surgical procedure
- KR101891162B1: Estimation of a position and orientation of a frame used in controlling movement of a tool
- US12279841B2: Robotic surgical safety via video processing
- US11266294B2: Image processing device, endoscopic surgery system, and image processing method
- US20230126545A1: Systems and methods for facilitating automated operation of a device in a surgical space
- CN113993478B: Medical tool control system, controller, and non-transitory computer readable memory
- JP2020156800A: Medical arm system, control device and control method
- CN113633387A: Touch interaction method and system for a laparoscopic minimally invasive robot supporting operative-field tracking
- US20220160441A1: Surgery assistance system and method for generating control signals for voice control of motor-controlled movable robot kinematics of such a surgery assistance system
- US20230024942A1: Computer assisted surgery system, surgical control apparatus and surgical control method
- US20230248467A1: Method of medical navigation
- US20250127382A1: Medical observation system, method, and medical observation device
- CN116439636A: Instrument, endoscope system, medical system and positioning control method of medical system
- JP2025081258A: Prioritization of multiple objects in an assistance function for a surgical microscope observation system
- CN113180827A: Visual navigation method and device for abdominal cavity operation
- CN118453133A: Robot-held laparoscope posture control method and related device
- JP2022530795A: Determining the tip and orientation of surgical tools
- CN116492062A: Master-slave movement control method based on composite identification and surgical robot system
- US20240346826A1: Medical observation system, information processing apparatus, and information processing method
- US20250152294A1: Tracking moving objects during the auto-configuration of a surgical microscopy system
- WO2025136980A1: Surgical performance management via triangulation adjustment
- WO2025019594A1: Systems and methods for implementing a zoom feature associated with an imaging device in an imaging space
- CN116492063A: Master-slave motion control method based on positioning image and surgical robot system

Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination
- RJ01: Rejection of invention patent application after publication

Application publication date: 2021-07-30

