Detailed Description
To facilitate an understanding of the invention, the invention will now be described more fully with reference to the accompanying drawings. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It will be understood that when an element is referred to as being "disposed on" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. When an element is referred to as being "coupled" to another element, it can be directly coupled to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only and do not represent the only embodiments. As used herein, the terms "distal" and "proximal" are used as terms of orientation that are conventional in the art of interventional medical devices, wherein "distal" refers to the end of the device that is distal from the operator during a procedure, and "proximal" refers to the end of the device that is proximal to the operator during a procedure.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. The term "each" as used herein includes one or more than one. The term "plurality" as used herein means two or more.
Figs. 1 to 3 are schematic structural diagrams and partial schematic diagrams of a surgical robot according to an embodiment of the present invention.
The surgical robot includes a master operation table 1 and a slave operation device 2. The master operation table 1 has a motion input device 11 and a display 12. A doctor transmits a control command to the slave operation device 2 by operating the motion input device 11, so that the slave operation device 2 performs a corresponding operation according to the control command, and observes the operation area through the display 12. The slave operation device 2 has an arm body comprising a robot arm 21 and an operation arm 31 detachably attached to a distal end of the robot arm 21. The robot arm 21 includes a base and a connecting assembly connected in series, the connecting assembly having a plurality of joint assemblies, which in the configuration illustrated in FIG. 1 are joint assemblies 210-214. The operation arm 31 comprises a connecting rod 32, a connecting assembly 33 and an end instrument 34 connected in sequence, wherein the connecting assembly 33 is provided with a plurality of joint assemblies, and the operation arm 31 adjusts the pose of the end instrument 34 by adjusting these joint assemblies; the end instruments 34 include an image end instrument 34A and an operation end instrument 34B. More specifically, the operation arm 31 is mounted to the power mechanism 22 at the distal end of the robot arm 21 and is driven by a driving portion in the power mechanism 22. The robot arm 21 and/or the operation arm 31 may follow the motion input device 11, and the robot arm 21 may also be dragged by an external force.
For example, the motion input device 11 may be connected to the master operation table 1 by a wire, or connected to the master operation table 1 by a rotating link. The motion input device 11 may be configured to be hand-held or wearable (typically worn at the distal end of the wrist, such as on the fingers or the palm), with multiple available degrees of freedom. The motion input device 11 is, for example, configured in the form of a handle as shown in fig. 3. In one case, the number of available degrees of freedom of the motion input device 11 is configured to be lower than the number of task degrees of freedom defined for the distal end of the arm body; in another case, the number of effective degrees of freedom of the motion input device 11 is configured not to be lower than the number of task degrees of freedom of the distal end of the arm body. The number of effective degrees of freedom of the motion input device 11 is at most 6. In order to flexibly control the motion of the arm body in Cartesian space, the motion input device 11 is exemplarily configured to have 6 effective degrees of freedom, wherein the effective degrees of freedom of the motion input device 11 refer to the degrees of freedom that can effectively follow the motion of the hand. This gives the doctor a large operating space and allows more meaningful data to be generated by analyzing each effective degree of freedom, thereby satisfying the control of the robot arm 21 in almost all configurations.
The motion input device 11 follows the hand motion of the doctor and collects, in real time, the motion information of the motion input device itself caused by the hand motion. The motion information can be parsed to obtain position information, attitude information, velocity information, acceleration information and the like. The motion input device 11 includes, but is not limited to, a magnetic-navigation position sensor, an optical position sensor, or a link-type main operator.
In one embodiment, a method of controlling a tip instrument in a surgical robot is provided. As shown in fig. 4, the control method includes:
Step S1, an acquisition step, namely acquiring the initial target pose information of each controlled operation end instrument.
The operation end instruments 34B mounted on the power mechanism 22 include operation end instruments configured as controlled operation end instruments (operation end instruments to be controlled by the motion input device) and unconfigured, uncontrolled operation end instruments (operation end instruments not to be controlled by the motion input device). One operator can control at most two controlled operation end instruments 34B at the same time, and when there are two or more controlled operation end instruments 34B, two or more operators can control them cooperatively.
Step S2, a decomposition step, namely decomposing each initial target pose information to respectively obtain a set of pose information.
Each set of pose information comprises first component target pose information of the distal end of the mechanical arm in a first coordinate system and second component target pose information of the controlled operation end instrument in a second coordinate system. The first coordinate system refers to a base coordinate system of the robot arm, and the second coordinate system refers to a tool coordinate system of the robot arm.
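By way of illustration only, the decomposition can be sketched with 4x4 homogeneous transforms: once a candidate pose of the distal end of the mechanical arm is chosen, the second component follows as the relative transform between that candidate and the initial target pose of the controlled operation end instrument. The Python sketch below uses hypothetical names and assumes every pose is expressed as a homogeneous matrix; it is not taken from the embodiment itself.

```python
import numpy as np

def decompose_target_pose(T_base_instrument, T_base_arm_tip):
    """Split an initial target pose of a controlled operation end instrument,
    expressed in the first (arm-base) coordinate system, into:
      - first component:  the candidate pose of the arm distal end in the first frame,
      - second component: the instrument pose in the second (tool) frame, from
        T_base_instrument = T_base_arm_tip @ T_tool_instrument."""
    T_tool_instrument = np.linalg.inv(T_base_arm_tip) @ T_base_instrument
    return T_base_arm_tip, T_tool_instrument

# toy usage: an identity arm-tip pose leaves the instrument pose unchanged
T_goal = np.eye(4)
T_goal[:3, 3] = [0.1, 0.0, 0.3]
first_component, second_component = decompose_target_pose(T_goal, np.eye(4))
```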
Step S3, a first judgment step, namely judging the validity of each set of pose information.
Specifically, the validity of the two component pose information included in each set of pose information is judged; when both component pose information are valid, the set of pose information is determined to be valid, and otherwise the set of pose information is determined to be invalid.
Step S4, a calculation step, namely, under the condition that at least one set of pose information is valid and the image end instrument and each uncontrolled operation end instrument are kept at the current pose, calculating, in combination with the at least one valid set of pose information, first target pose information of the distal end of the robot arm in the first coordinate system, second target pose information of the image end instrument in the second coordinate system, third target pose information of each controlled operation end instrument in the second coordinate system, and fourth target pose information of each uncontrolled operation end instrument in the second coordinate system.
In this step, it is desirable that each uncontrolled operation end instrument 34B be able to remain in the current pose, and it is desirable that each controlled operation end instrument 34B be able to reach the first desired pose; if some controlled operation end instruments 34B cannot reach the first desired pose, it is desirable that those controlled operation end instruments 34B be able to reach the second desired pose. The first desired pose refers to the target pose corresponding to the initial target pose information (covering both the case where the initial target pose, which is associated with the motion information input by the controlled motion input device, is consistent with the current pose and the case where it is not). The second desired pose refers to the current pose; in the case where some controlled operation end instruments 34B cannot reach the first desired pose, the aim is to ensure that they can at least reach the second desired pose, so as to guarantee the safety of the operation.
Step S5, a second judgment step of judging the validity of the first target pose information, the second target pose information, each third target pose information, and each fourth target pose information.
Step S6, a control step, namely, when the first target pose information, the second target pose information, each third target pose information and each fourth target pose information are valid, controlling the mechanical arm to move according to the first target pose information so that the distal end of the mechanical arm reaches the corresponding target pose, controlling the operation arm corresponding to the image end instrument to move according to the second target pose information so that the image end instrument is kept at the current pose, controlling the operation arm corresponding to each controlled operation end instrument to move according to the corresponding third target pose information so that the controlled operation end instrument reaches the corresponding target pose, and controlling the operation arm corresponding to each uncontrolled operation end instrument to move according to the corresponding fourth target pose information so that each uncontrolled operation end instrument is kept at the current pose.
Through steps S1 to S6, when the acquired first target pose information, second target pose information, third target pose information and fourth target pose information are all valid, the controlled operation end instrument 34B can reach the first desired pose or the second desired pose while the image end instrument 34A and the uncontrolled operation end instruments 34B are kept at their current poses to provide a stable field of view; in addition, in some scenarios the range of motion of the controlled operation end instrument 34B may be extended by combining the movement of the mechanical arm with that of the corresponding operation arm, facilitating easier surgical deployment.
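For orientation only, the overall flow of steps S1 to S6 can be summarized in the Python sketch below. Every helper callable (acquire, decompose, is_valid, compute_targets, move_arm, move_op_arm) is a hypothetical placeholder for the acquisition, decomposition, judgment, calculation and control operations described above, and the tuple/dict data layout is an assumption made purely for illustration.

```python
from typing import Any, Callable

def control_cycle(controlled: list, uncontrolled: list, image_inst: Any,
                  acquire: Callable, decompose: Callable, is_valid: Callable,
                  compute_targets: Callable, move_arm: Callable,
                  move_op_arm: Callable) -> bool:
    """One S1-S6 cycle; returns True if the control step S6 was executed."""
    # S1: acquisition step
    initial = {inst: acquire(inst) for inst in controlled}
    # S2: decomposition step -> {instrument: (first_component, second_component)}
    pose_sets = {inst: decompose(initial[inst]) for inst in controlled}
    # S3: first judgment step - a set is valid only if both components are valid
    valid_sets = {inst: s for inst, s in pose_sets.items()
                  if is_valid(s[0]) and is_valid(s[1])}
    if not valid_sets:
        return False          # no controlled instrument is adjustable; end control
    # S4: calculation step (image / uncontrolled instruments held at current pose)
    first_t, second_t, third_t, fourth_t = compute_targets(valid_sets, image_inst, uncontrolled)
    # S5: second judgment step
    all_targets = [first_t, second_t, *third_t.values(), *fourth_t.values()]
    if not all(is_valid(t) for t in all_targets):
        return False          # end control
    # S6: control step
    move_arm(first_t)                      # robot arm distal end reaches its target pose
    move_op_arm(image_inst, second_t)      # image end instrument holds the current pose
    for inst, t in third_t.items():
        move_op_arm(inst, t)               # controlled instruments reach their target poses
    for inst, t in fourth_t.items():
        move_op_arm(inst, t)               # uncontrolled instruments hold the current pose
    return True
```

Injecting the helpers as parameters keeps the sketch independent of any particular kinematics or validity test; in a real controller they would be bound to the modules that implement steps S1 to S6.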
In one embodiment, specifically, if it is determined in step S3 that all sets of pose information are invalid, this indicates that none of the controlled operation end instruments 34B (one or more) is adjustable; therefore the subsequent steps are not performed, i.e., the control is ended, and the process returns to step S1.
In one embodiment, referring to fig. 5 and 6, when the controlled operation end instrument is configured as one and the set of pose information is valid, the step S4, namely the calculating step, includes:
Step S411, under the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information, converting the current pose information of the image end instrument to obtain its target pose information in the second coordinate system, and converting the current pose information of each uncontrolled operation end instrument to obtain its target pose information in the second coordinate system.
The current pose information of each end instrument 34, including the image end instrument 34A and the controlled operation end instrument 34B, may be expressed in the first coordinate system, in the second coordinate system, or in another reference coordinate system, these representations being mutually convertible. As exemplified herein, "current pose information" refers to the current pose information in the second coordinate system, although current pose information in other coordinate systems is also possible.
Step S412, assigning the first component target pose information to the first target pose information, assigning the target pose information of the image end instrument obtained by conversion in the second coordinate system to the second target pose information, assigning the second component target pose information to the third target pose information, and assigning the target pose information of each uncontrolled operation end instrument in the second coordinate system to the fourth target pose information of the corresponding uncontrolled operation end instrument.
In the above-described step S5, i.e., the second judgment step, since the set of pose information has already been determined to be valid, the first target pose information and the third target pose information are valid; therefore, in practice only the validity of the second target pose information and the fourth target pose information needs to be judged.
If the first to fourth target pose information are not all valid, the control is ended.
If the first to fourth target pose information are all valid, the process proceeds to step S6, i.e., the control step. As shown in fig. 6, the operation end instruments 34B include a controlled operation end instrument 34B1 and an uncontrolled operation end instrument 34B2. The mechanical arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose, the operation arm 31A is controlled to move according to the second target pose information so that the image end instrument 34A is kept at the current pose, the operation arm 31B is controlled to move according to the third target pose information so that the controlled operation end instrument 34B1 reaches the corresponding target pose (the first desired pose), and the operation arm 31C is controlled to move according to the fourth target pose information so that the uncontrolled operation end instrument 34B2 is kept at the current pose.
In one embodiment, referring to figs. 7 and 8, when a plurality of controlled operation end instruments are configured, and when it is determined in step S3 that only one set of pose information is valid and the remaining sets of pose information are invalid, step S4, i.e., the calculation step, includes:
Step S421, under the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the valid set of pose information, converting the current pose information of the image end instrument to obtain its target pose information in the second coordinate system, converting the current pose information of each uncontrolled operation end instrument to obtain its target pose information in the second coordinate system, and converting the current pose information of the controlled operation end instrument associated with each invalid set of pose information to obtain the second expected target pose information of that controlled operation end instrument in the second coordinate system.
Step S422, assigning the first component target pose information in the valid set of pose information to the first target pose information, assigning the target pose information of the image end instrument in the second coordinate system to the second target pose information, assigning the second component target pose information in the valid set of pose information to the third target pose information of the associated controlled operation end instrument, assigning each second expected target pose information to the third target pose information of the corresponding controlled operation end instrument, and assigning the target pose information of each uncontrolled operation end instrument in the second coordinate system to the fourth target pose information of the corresponding uncontrolled operation end instrument.
In the above step S5, i.e., the second judgment step, in practice it is only necessary to judge whether the second target pose information of the image end instrument 34A and the third target pose information of the controlled operation end instrument 34B associated with each invalid set of pose information are valid.
If the first target pose information, the second target pose information, each third target pose information and each fourth target pose information are not all valid, the control is ended.
If the first target pose information, the second target pose information, each third target pose information, and each fourth target pose information are all valid, the process proceeds to step S6, i.e., the control step. As shown in fig. 8, the operation end instruments 34B include controlled operation end instruments 34B1 to 34B3 and an uncontrolled operation end instrument 34B4. If the set of pose information associated with the controlled operation end instrument 34B1 is valid and the two sets of pose information associated with the controlled operation end instruments 34B2 to 34B3 are invalid:
the mechanical arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose, the operation arm 31A is controlled to move according to the second target pose information so that the image end instrument 34A is kept at the current pose, the operation arm 31B is controlled to move according to the third target pose information of the controlled operation end instrument 34B1 so that the controlled operation end instrument 34B1 reaches the corresponding target pose (the first desired pose), the operation arms 31C to 31D are controlled to move according to the third target pose information of the controlled operation end instruments 34B2 to 34B3, respectively, so that the controlled operation end instruments 34B2 to 34B3 reach the corresponding target pose (the second desired pose, i.e., the current pose is maintained), and the operation arm 31E is controlled to move according to the fourth target pose information so that the uncontrolled operation end instrument 34B4 is kept at the current pose.
On the same principle, steps S1 to S6 including steps S421 to S422 are also applicable to the case where the same robot arm 21 has two, or four or more, controlled operation end instruments 34B, rather than the three controlled operation end instruments 34B shown in fig. 8.
In one embodiment, referring to figs. 8 and 9, when there are a plurality of controlled operation end instruments, and two or more sets of pose information are valid while the remaining sets of pose information are invalid, step S4 includes:
Step S431, under the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in each valid set of pose information, respectively converting the current pose information of the image end instrument to obtain the corresponding target pose information of the image end instrument in the second coordinate system.
Step S432, judging the validity of each target pose information of the image end instrument in the second coordinate system.
In this step, if only one target pose information of the image end instrument is valid, the process proceeds to step S433; if two or more target pose information of the image end instrument are valid, the process proceeds to step S438.
Step S433, under the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the valid set of pose information associated with the valid target pose information of the image end instrument: converting the current pose information of each controlled operation end instrument associated with an invalid set of pose information to obtain the second expected target pose information of that controlled operation end instrument in the second coordinate system, converting the current pose information of each uncontrolled operation end instrument to obtain its target pose information in the second coordinate system, and converting the initial target pose information of each controlled operation end instrument associated with the remaining valid sets of pose information to obtain the first expected target pose information of that controlled operation end instrument in the second coordinate system.
In step S434, the validity of each first expected target pose information is judged.
In this step, if each first expected target pose information is valid, the process proceeds to step S435; if at least part of the first expected target pose information is invalid, the process proceeds to step S436.
Step S435, assigning first component target pose information in the set of pose information associated with the target pose information of the active image end instrument to first target pose information, assigning the target pose information of the active image end instrument to second target pose information, assigning second component target pose information in the set of pose information associated with the target pose information of the active image end instrument to third target pose information of the associated controlled operation end instrument, assigning each first desired target pose information and each second desired target pose information to third target pose information of the corresponding controlled operation end instrument, and assigning the target pose information of each uncontrolled operation end instrument in the second coordinate system to fourth target pose information of the corresponding uncontrolled operation end instrument.
Correspondingly assigning the effective first expected target pose information to third target pose information of the associated controlled operation terminal instrument; and correspondingly assigning the second desired target pose information to third target pose information of the controlled operational tip instrument associated with the invalid set of target pose information.
Step S436, under the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the valid set of pose information associated with the valid target pose information of the image end instrument, converting the current pose information of the controlled operation end instrument associated with each invalid first expected target pose information to obtain the second expected target pose information of that controlled operation end instrument in the second coordinate system.
Step S437, assigning first component target pose information in the pose information set associated with the target pose information of the effective image end instrument to first target pose information, assigning the target pose information of the effective image end instrument to second target pose information, assigning second component target pose information in the pose information set associated with the target pose information of the effective image end instrument to third target pose information of the associated controlled operation end instrument, assigning each effective first expected target pose information to third target pose information of the corresponding controlled operation end instrument, assigning each second expected target pose information obtained at different stages (i.e., conditions) to third target pose information of the corresponding controlled operation end instrument, and assigning the target pose information of each uncontrolled operation end instrument in the second coordinate system to fourth target pose information of the corresponding uncontrolled operation end instrument.
The valid first expected target pose information is correspondingly assigned to the third target pose information of the associated controlled operation end instrument. Each second expected target pose information is associated with one of two cases: one case is associated with an invalid set of pose information, and the other case is associated with invalid first expected target pose information (the first desired pose being converted into the second desired pose); it therefore needs to be assigned to the controlled operation end instrument associated with the respective case. In step S438, one of the valid target pose information of the image end instrument is selected as valid and the others are treated as invalid, and the process then proceeds to step S433 as in the case where only one target pose information of the image end instrument is valid.
In step S438, a plurality of combinations, in each of which a different one of the valid target pose information of the image end instrument is treated as valid and the others as invalid, may be configured and calculated through steps S433 to S437, respectively, and the control step of step S6 may be performed when the calculated first target pose information, second target pose information and each third target pose information are all valid.
In some embodiments, if more than one set of the first target pose information, the second target pose information, each third target pose information and each fourth target pose information calculated for the different combinations is valid, certain metrics may be used to determine which set is used to perform the control step of step S6. These metrics include, but are not limited to, one or more of the range of motion, the velocity of motion and the acceleration of motion of the arm (the mechanical arm or the operation arm); for example, the set used in step S6 may be selected on the basis of a smaller range of motion, a smaller velocity of motion and/or a smaller acceleration of motion of the arm. These metrics also include, but are not limited to, the number of controlled operation end instruments that can reach the first desired pose rather than only the second desired pose; the combination in which a larger number of controlled operation end instruments can reach the first desired pose is preferentially selected for control. These metrics may also be combined with each other to select an optimal set for the control of step S6.
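A minimal sketch of such a metric-based selection follows; it assumes each candidate combination has already been scored with two illustrative quantities (the number of controlled operation end instruments reaching the first desired pose, and an aggregate motion-range cost), and both the key names and the tie-breaking rule are assumptions rather than a prescribed implementation.

```python
def select_best_combination(candidates):
    """Pick one valid candidate combination among those computed for the
    different 'valid image target' choices.

    `candidates` is a list of dicts with illustrative keys:
      'n_first_desired' - how many controlled instruments reach the first desired pose
      'motion_range'    - an aggregate joint-motion cost for the arms involved
    More instruments at the first desired pose is preferred; ties are broken
    by the smaller motion range."""
    return max(candidates, key=lambda c: (c['n_first_desired'], -c['motion_range']))

# usage with two hypothetical combinations
combos = [{'n_first_desired': 1, 'motion_range': 0.2},
          {'n_first_desired': 2, 'motion_range': 0.5}]
best = select_best_combination(combos)   # the second combination is chosen
```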
In some embodiments, priorities may be set for each controlled operation end instrument in advance or in real time (e.g., through voice instructions) during the operation of the controlled operation end instrument, and the target pose information of the valid image end instrument associated with the controlled operation end instrument with higher priority is sequentially selected as valid and the rest as invalid in step S438, so as to ensure that the control object with higher priority can achieve the first desired pose as much as possible. The setting of the priority can be set according to the authority of an operator, can also be set according to a specific control object, and can be flexibly configured.
As shown in FIG. 8, the operation end instruments 34B include controlled operation end instruments 34B1 to 34B3 and an uncontrolled operation end instrument 34B4. Assuming that the set of pose information associated with the controlled operation end instrument 34B1 is invalid and the two sets of pose information associated with the controlled operation end instruments 34B2 to 34B3 are valid, the target pose information of the image end instrument in the second coordinate system associated with the controlled operation end instruments 34B2 and 34B3, calculated according to the above step S431, is C2 and C3, respectively.
Case (1.1): if it is determined in step S432 that both C2 and C3 are invalid, the control is terminated.
Case (1.2): assume that it is judged in step S432 that C2 is valid and C3 is invalid.
Based on the first component pose information of the set of pose information associated with the controlled operation tip instrument 34B2, the target pose information of the controlled operation tip instrument 34B1 in the second coordinate system (second desired target pose information), the target pose information of the controlled operation tip instrument 34B3 in the second coordinate system (first desired target pose information), and the target pose information of the uncontrolled operation tip instrument 34B4 in the second coordinate system are calculated according to step S433.
Assume that the first desired target pose information of the controlled operation end instrument 34B3 in the second coordinate system is judged to be valid according to step S434:
after the assignment in step S435, the process proceeds to step S5, and if the first to fourth target pose information are all valid, the process proceeds to step S6. In step S6, the mechanical arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose, the operation arm 31A is controlled to move according to the second target pose information so that the image end instrument 34A is kept at the current pose, the operation arm 31B is controlled to move according to the third target pose information of the controlled operation end instrument 34B1 so that the controlled operation end instrument 34B1 reaches the corresponding target pose (the second desired pose, i.e., the current pose is maintained), the operation arms 31C to 31D are controlled to move according to the third target pose information of the respective controlled operation end instruments 34B2 to 34B3 so that the controlled operation end instruments 34B2 to 34B3 reach the corresponding target pose (the first desired pose), and the operation arm 31E is controlled to move according to the fourth target pose information so that the uncontrolled operation end instrument 34B4 is kept at the current pose.
Assume that the first desired target pose information of the controlled operation end instrument 34B3 in the second coordinate system is judged to be invalid according to step S434:
based on the first component target pose information of the set of pose information associated with the controlled operation end instrument 34B2, the second expected target pose information of the controlled operation end instrument 34B3 in the second coordinate system is calculated according to step S436, and after the assignment in step S437 the process proceeds to step S5; when the first to fourth target pose information are all valid, the process proceeds to step S6. In step S6, the mechanical arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose, the operation arm 31A is controlled to move according to the second target pose information so that the image end instrument 34A is kept at the current pose, the operation arms 31B and 31D are controlled to move according to the third target pose information of the respective controlled operation end instruments 34B1 and 34B3 so that the controlled operation end instruments 34B1 and 34B3 reach the corresponding target pose (the second desired pose, i.e., the current pose is maintained), the operation arm 31C is controlled to move according to the third target pose information of the controlled operation end instrument 34B2 so that the controlled operation end instrument 34B2 reaches the corresponding target pose (the first desired pose), and the operation arm 31E is controlled to move according to the fourth target pose information so that the uncontrolled operation end instrument 34B4 is kept at the current pose.
Case (1.3): assume that both C2 and C3 are valid as determined in step S432.
Either C2 may be selected as valid and C3 as invalid, or C2 as invalid and C3 as valid; each selection then proceeds as in case (1.2) above.
On the same principle, steps S1 to S6 including steps S431 to S438 are also applicable to the case where the same robot arm 21 has four or more controlled operation end instruments 34B, rather than the three controlled operation end instruments 34B shown in fig. 8.
In one embodiment, referring to figs. 8 and 10, when there are two or more controlled operation end instruments and each set of pose information is valid, step S4 includes:
Step S441, under the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in each valid set of pose information, respectively converting the current pose information of the image end instrument to obtain the corresponding target pose information of the image end instrument in the second coordinate system.
Step S442, judging the validity of each target pose information of the image end instrument in the second coordinate system.
In this step, if only one target pose information of the image end instrument is valid, the process proceeds to step S443; if two or more target pose information of the image end instrument are valid, the process proceeds to step S448.
Step S443, under the condition that the distal end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the valid set of pose information associated with the valid target pose information of the image end instrument, converting the current pose information of each uncontrolled operation end instrument to obtain its target pose information in the second coordinate system, and converting the initial target pose information of each controlled operation end instrument associated with the remaining valid sets of pose information to obtain the first expected target pose information of that controlled operation end instrument in the second coordinate system.
In step S444, the validity of each first expected target pose information is judged.
In this step, if each first expected target pose information is valid, the process proceeds to step S445; if at least part of the first expected target pose information is invalid, the process proceeds to step S446.
Step S445, assigning first component target pose information in the set of pose information associated with the target pose information of the active image end instruments to first target pose information, assigning the target pose information of the active image end instruments to second target pose information, assigning second component target pose information in the set of pose information associated with the target pose information of the active image end instruments to third target pose information of the associated controlled operation end instruments, assigning each first desired target pose information to third target pose information of the corresponding controlled operation end instrument, and assigning the target pose information of each uncontrolled operation end instrument in the second coordinate system to fourth target pose information of the corresponding uncontrolled operation end instrument.
Step S446, under the condition that the far end of the mechanical arm reaches the target pose corresponding to the first component target pose information in the effective pose information set associated with the target pose information of the effective image end instrument, the current pose information of the controlled operation end instrument associated with each first expected target pose information which is invalid is converted to obtain the second expected target pose information of the controlled operation end instrument in the second coordinate system.
Step S447, assigning the first component target pose information in the set of pose information associated with the valid target pose information of the image end instrument to the first target pose information, assigning the valid target pose information of the image end instrument to the second target pose information, assigning the second component target pose information in the set of pose information associated with the valid target pose information of the image end instrument to the third target pose information of the associated controlled operation end instrument, assigning each valid first expected target pose information to the third target pose information of the corresponding controlled operation end instrument, assigning each second expected target pose information to the third target pose information of the corresponding controlled operation end instrument, and assigning the target pose information of each uncontrolled operation end instrument in the second coordinate system to the fourth target pose information of the corresponding uncontrolled operation end instrument.
The second desired target pose information here is associated only with the case where the first desired target pose information is invalid (the first desired pose is converted into the second desired pose).
In step S448, one of the valid target pose information of the image end instrument is selected as valid and the others are treated as invalid, and the process then proceeds to step S443 as in the case where only one target pose information of the image end instrument is valid.
As shown in FIG. 8, the operation end instruments 34B include controlled operation end instruments 34B1 to 34B3 and an uncontrolled operation end instrument 34B4. Assuming that the three sets of pose information associated with the controlled operation end instruments 34B1 to 34B3 are all valid, the target pose information of the image end instrument in the second coordinate system associated with the controlled operation end instruments 34B1, 34B2 and 34B3, calculated according to the above step S441, is C1, C2 and C3, respectively.
Case (2.1): if it is determined in step S442 that none of C1-C3 is valid, the control is ended.
Case (2.2): assume that only C1 is valid and C2 and C3 are invalid as determined by step S442.
Based on the first component target pose information of the set of pose information associated with the controlled operation end instrument 34B1, the target pose information (the first expected target pose information) of the controlled operation end instruments 34B2 and 34B3 in the second coordinate system is calculated according to step S443, and the target pose information of the uncontrolled operation end instrument 34B4 in the second coordinate system (similarly corresponding to the current pose) is calculated according to step S443.
Assume that the first expected target pose information of the controlled operation end instruments 34B2 and 34B3 in the second coordinate system is judged to be valid according to step S444:
after the assignment in step S445, the process proceeds to step S5, and if the first to fourth target pose information are all valid, the process proceeds to step S6. In step S6, the mechanical arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose, the operation arm 31A is controlled to move according to the second target pose information so that the image end instrument 34A is kept at the current pose, the operation arms 31B to 31D are controlled to move according to the third target pose information of the controlled operation end instruments 34B1 to 34B3 so that they reach the corresponding target pose (the first desired pose), and the operation arm 31E is controlled to move according to the fourth target pose information so that the uncontrolled operation end instrument 34B4 is kept at the current pose.
Assume that the first expected target pose information of the controlled operation end instruments 34B2 and 34B3 in the second coordinate system is judged according to step S444 to be at least partially invalid, for example the first expected target pose information of the controlled operation end instrument 34B2 in the second coordinate system is valid while that of the controlled operation end instrument 34B3 in the second coordinate system is invalid:
based on the first component target pose information of the set of pose information associated with the controlled operation end instrument 34B1, the second expected target pose information of the controlled operation end instrument 34B3 in the second coordinate system is calculated according to step S446, and after the assignment in step S447 the process proceeds to step S5; when the first target pose information, the second target pose information and each third target pose information are all valid, the process proceeds to step S6. In step S6, the mechanical arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose, the operation arm 31A is controlled to move according to the second target pose information so that the image end instrument 34A is kept at the current pose, the operation arms 31B to 31C are controlled to move according to the third target pose information of the controlled operation end instruments 34B1 to 34B2 so that they reach the corresponding target pose (the first desired pose), the operation arm 31D is controlled to move according to the third target pose information of the controlled operation end instrument 34B3 so that it reaches the corresponding target pose (the second desired pose, i.e., the current pose is maintained), and the operation arm 31E is controlled to move according to the fourth target pose information so that the uncontrolled operation end instrument 34B4 is kept at the current pose.
Assume that the first expected target pose information of the controlled operation end instruments 34B2 and 34B3 in the second coordinate system is judged to be invalid according to step S444:
based on the first component target pose information of the set of pose information associated with the controlled operation end instrument 34B1, the second expected target pose information of the controlled operation end instruments 34B2 and 34B3 in the second coordinate system is calculated according to step S446, and after the assignment in step S447 the process proceeds to step S5; when the first target pose information, the second target pose information and each third target pose information are all valid, the process proceeds to step S6. In step S6, the mechanical arm 21 is controlled to move according to the first target pose information so that the power mechanism 22 at its distal end reaches the corresponding target pose, the operation arm 31A is controlled to move according to the second target pose information so that the image end instrument 34A is kept at the current pose, the operation arm 31B is controlled to move according to the third target pose information of the controlled operation end instrument 34B1 so that it reaches the corresponding target pose (the first desired pose), the operation arms 31C and 31D are controlled to move according to the third target pose information of the controlled operation end instruments 34B2 and 34B3, respectively, so that they reach the corresponding target pose (the second desired pose, i.e., the current pose is maintained), and the operation arm 31E is controlled to move according to the fourth target pose information so that the uncontrolled operation end instrument 34B4 is kept at the current pose.
On the same principle, steps S1 to S6 including steps S441 to S448 are also applicable to the case where the same robot arm 21 has two, or four or more, controlled operation end instruments 34B, rather than the three controlled operation end instruments 34B shown in fig. 8.
In the above embodiments, the number of the uncontrolled operation terminal instruments is not limited.
Through the above embodiments, by combining the movements of the mechanical arm 21 and the operation arm 31, it can be ensured as far as possible that the controlled operation end instrument 34B reaches the first desired pose to achieve the surgical purpose while the image end instrument 34A and the uncontrolled operation end instruments are kept at the current pose, and, if the first desired pose cannot be reached, it can also be ensured that the controlled operation end instrument 34B is kept at the current pose to reduce the surgical risk.
In an embodiment, as shown in fig. 11, the step S1, namely the obtaining step, includes:
in step S11, motion information of the movement of the teleoperation controlled object input by the motion input device is acquired.
Here, the controlled object specifically refers to a controlled operation end instrument.
Step S12, parsing the motion information into initial target pose information of the controlled object.
Typically, the motion information may be pose information of the motion-input device.
In one embodiment, as shown in fig. 12, the step S12 includes:
Step S121, parsing and mapping the motion information into incremental pose information of the controlled object.
Where "mapping" is a transformation relationship, it may include natural mapping relationships and unnatural mapping relationships.
The natural mapping relationship is a one-to-one correspondence between the controlled motion input device and the controlled object: horizontal movement increment information maps to horizontal movement increment information, vertical movement increment information to vertical movement increment information, fore-and-aft movement increment information to fore-and-aft movement increment information, yaw rotation increment information to yaw rotation increment information, pitch rotation increment information to pitch rotation increment information, and roll rotation increment information to roll rotation increment information.
The non-natural mapping relationship is a mapping relationship other than the natural mapping relationship. In one example, the unnatural mapping includes, but is not limited to, a conversion mapping, which includes, but is not limited to, the aforementioned one-to-one mapping of the horizontal movement increment information, the vertical movement increment information, and the rotation increment information of the fixed coordinate system to the yaw increment information, the pitch increment information, and the roll increment information of the controlled object. The configuration as the unnatural mapping enables easier control of the controlled object in some cases, such as a two-to-one operation mode.
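The two kinds of mapping can be sketched as follows, assuming the increment is packed into a 6-vector [horizontal, vertical, fore-aft, yaw, pitch, roll]; the vector layout, the gain and the particular conversion mapping shown are illustrative assumptions only.

```python
import numpy as np

def natural_mapping(delta_pose):
    """Natural (one-to-one) mapping: each increment channel of the controlled
    motion input device drives the same channel of the controlled object.
    delta_pose = [horizontal, vertical, fore_aft, d_yaw, d_pitch, d_roll]."""
    return np.asarray(delta_pose, dtype=float)

def conversion_mapping(delta_pose, gain=1.0):
    """One possible unnatural (conversion) mapping, as a sketch: horizontal,
    vertical and rotation increments measured in the fixed coordinate system
    drive the yaw, pitch and roll increments of the controlled object."""
    dx, dy, _dz, _dyaw, _dpitch, d_rot = np.asarray(delta_pose, dtype=float)
    out = np.zeros(6)
    out[3] = gain * dx      # horizontal movement -> yaw increment
    out[4] = gain * dy      # vertical movement   -> pitch increment
    out[5] = gain * d_rot   # rotation increment  -> roll increment
    return out
```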
Step S122, position information of each joint component in the controlled object is acquired.
The corresponding position information can be obtained by a position sensor, such as an encoder, installed at each joint assembly of the controlled object. In the exemplary embodiment illustrated in figs. 1 and 13, the robot arm 21 has 5 degrees of freedom, and a set of position information (d1, θ2, θ3, θ4, θ5) can be detected by means of the position sensors.
Step S123, calculating the current pose information of the controlled object in the first coordinate system according to the position information of each joint assembly.
The calculation can generally be made by means of forward kinematics. A kinematic model is established from the fixed point of the mechanical arm 21 (namely the point C; the origin of the tool coordinate system of the mechanical arm 21 is located at the fixed point) to the base of the mechanical arm 21, and the transformation matrix of the point C relative to the base is output. The calculation method is, for example, the forward-kinematics product of the successive joint transformation matrices determined by the joint positions (d1, θ2, θ3, θ4, θ5).
Step S124, calculating the initial target pose information of the controlled object in the first coordinate system by combining the incremental pose information with the current pose information.
Here, on the basis of the transformation matrix of the point C relative to the base, the pose information of the point C in the fixed coordinate system is acquired. Assuming that the coordinate system of the point C is rotated to the attitude described by the transformation matrix without changing the position of the point C, the rotation axis angles [θx0, θy0, θz0] can be obtained, as shown in fig. 14. θx0 is the roll angle, θy0 is the yaw angle and θz0 is the pitch angle; in fact, the arm 21 shown in fig. 13 lacks a roll degree of freedom, and hence θx0 is not adjustable. The fixed coordinate system may, for example, be defined at the display, but may of course also be defined at any location that is not movable, at least during the operation.
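Purely as a sketch of steps S122 to S124, the Python code below assumes an illustrative 5-degree-of-freedom chain (one prismatic joint followed by four revolute joints with fixed link offsets) and then composes an incremental pose with the current pose of the point C; the actual kinematics of the mechanical arm 21 differ, and the joint layout, link length and composition convention are assumptions.

```python
import numpy as np

def rot_z(theta):
    """Homogeneous rotation about the local z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def trans(x, y, z):
    """Homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def forward_kinematics(d1, th2, th3, th4, th5, link=0.1):
    """Current pose of the point C in the first (base) coordinate system for an
    illustrative chain: a prismatic joint along z followed by four revolute
    joints about z with fixed link offsets.  Only the structure of the
    forward-kinematics product (d1, th2..th5) -> pose is meant to be shown."""
    T = trans(0.0, 0.0, d1)
    for th in (th2, th3, th4, th5):
        T = T @ rot_z(th) @ trans(link, 0.0, 0.0)
    return T

def initial_target_pose(T_current, T_increment):
    """Step S124 (sketch): compose the incremental pose information with the
    current pose.  Right-multiplication assumes the increment is expressed in
    the frame of the point C; an increment expressed in the first coordinate
    system would be composed by left-multiplication instead."""
    return T_current @ T_increment

T_cur = forward_kinematics(0.2, 0.1, -0.2, 0.05, 0.0)
T_goal = initial_target_pose(T_cur, trans(0.0, 0.0, 0.01))  # 1 cm advance along the local z-axis
```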
Further, specifically, in the control step of step S6, the control of the robot arm, the image end instrument, and the manipulation end instrument as the control objects may include the steps of:
and calculating the target position information of each corresponding joint component according to the target pose information of the far end of the control object. Such as may be calculated by inverse kinematics.
And controlling each joint assembly in the control object to reach the corresponding target pose in a linkage manner according to the target position information of each joint assembly.
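One common way to realize these two sub-steps is numerical inverse kinematics followed by coordinated joint interpolation. The damped-least-squares sketch below handles only the position part of the target pose and exercises the solver on a toy two-joint forward-kinematics function, so it is illustrative rather than the method of the embodiment.

```python
import numpy as np

def numeric_jacobian(fk, q, eps=1e-6):
    """Position Jacobian of a forward-kinematics function, by finite differences."""
    p0 = fk(q)[:3, 3]
    J = np.zeros((3, len(q)))
    for i in range(len(q)):
        dq = q.copy()
        dq[i] += eps
        J[:, i] = (fk(dq)[:3, 3] - p0) / eps
    return J

def solve_joint_targets(fk, q0, p_goal, iters=100, lam=1e-2):
    """Damped least-squares iteration toward a target position p_goal
    (only the position part of the target pose is handled, for brevity)."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = p_goal - fk(q)[:3, 3]
        if np.linalg.norm(err) < 1e-5:
            break
        J = numeric_jacobian(fk, q)
        q += J.T @ np.linalg.solve(J @ J.T + lam * np.eye(3), err)
    return q

def coordinated_setpoints(q_now, q_target, steps=50):
    """Linked (coordinated) motion: all joints are interpolated together so
    that they reach their target positions at the same time."""
    return np.linspace(q_now, q_target, steps)

def planar_fk(q, l1=0.3, l2=0.25):
    """Toy two-revolute-joint arm used only to exercise the solver."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    T = np.eye(4)
    T[:3, 3] = [x, y, 0.0]
    return T

q_goal = solve_joint_targets(planar_fk, [0.1, 0.1], np.array([0.35, 0.2, 0.0]))
path = coordinated_setpoints(np.array([0.1, 0.1]), q_goal)
```

In practice the same pattern applies per control object (mechanical arm or operation arm): solve for the joint targets from the commanded distal-end pose, then drive all joint assemblies together along the interpolated setpoints.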
In one embodiment, as shown in fig. 15, the step S12 includes:
in step S125, a selection instruction associated with the operation mode type input for the controlled object is acquired.
The operation modes include a two-to-one operation mode and a one-to-one operation mode, the two-to-one operation mode refers to control of one controlled object with two controlled motion input devices, and the one-to-one operation mode refers to control of one controlled object with one controlled motion input device. When controlling the movement of a controlled object, a one-to-one operation mode or a two-to-one operation mode can be selected. For the one-to-one operation mode, it is further selectable which motion-input device is to be used as the controlled motion-input device for control. For example, when the same operator moves with both hands, the same operator may control one controlled object in a two-to-one operation mode or may control two controlled objects in a one-to-one operation mode according to the configuration. This is still true for more than two operators when the surgical robot provides enough motion-input devices.
Step S126, acquiring, in combination with the operation mode type, the motion information input by the controlled motion input device, and parsing and mapping the motion information into incremental pose information of the distal end of the controlled object in the first coordinate system.
In one embodiment, for the one-to-one operation mode, the pose information Pn at the n-th time is obtained, for example, by the formula Pn = K·Pn′, where Pn′ is the pose information of the corresponding single controlled motion input device 11 at the n-th time and K is a scaling factor. In general K > 0, and more preferably 1 ≥ K > 0, so that the pose is scaled and control is facilitated.
In one embodiment, for the two-to-one operation mode, the pose information Pn at the n-th time is obtained, for example, by the formula Pn = K1·PnL + K2·PnR, where PnL and PnR are the pose information of the two controlled motion input devices 11 at the n-th time, and K1 and K2 are the scaling factors of the respective motion input devices 11. In general K1 > 0 and K2 > 0; more preferably, 1 ≥ K1 > 0 and 1 ≥ K2 > 0.
The incremental pose information Δp(n, n−1) of the controlled motion input device(s) 11 between two successive times, in either the one-to-one or the two-to-one operation mode, can be calculated according to the following formula:
Δp(n, n−1) = Pn − Pn−1
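A minimal numerical sketch of these formulas, simplifying each pose to a 6-vector so that the increment can be taken as a plain difference (a real implementation would compose orientations properly rather than add angles), is:

```python
import numpy as np

def combined_pose(pose_left, pose_right=None, k1=1.0, k2=0.0):
    """Pn = K1*PnL + K2*PnR (two-to-one), or Pn = K*Pn' when pose_right is None
    (one-to-one).  Poses are simplified to 6-vectors [x, y, z, yaw, pitch, roll]."""
    p = k1 * np.asarray(pose_left, dtype=float)
    if pose_right is not None:
        p = p + k2 * np.asarray(pose_right, dtype=float)
    return p

def pose_increment(p_n, p_n_minus_1):
    """Δp(n, n-1) = Pn - Pn-1 between two successive sampling times."""
    return np.asarray(p_n, dtype=float) - np.asarray(p_n_minus_1, dtype=float)

# two-to-one with K1 = K2 = 0.5: the midpoint between the two handles is tracked
p_prev = combined_pose([0.00, 0, 0, 0, 0, 0], [0.02, 0, 0, 0, 0, 0], 0.5, 0.5)
p_now  = combined_pose([0.01, 0, 0, 0, 0, 0], [0.03, 0, 0, 0, 0, 0], 0.5, 0.5)
dp = pose_increment(p_now, p_prev)   # [0.01, 0, 0, 0, 0, 0]
```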
in an embodiment, as shown in fig. 16 and fig. 17, when the selection instruction acquired in step S125 is associated with a two-to-one operation mode, step S126 includes:
in step S1261, the first position and orientation information of the two controlled motion input devices at the previous time are respectively obtained.
Step S1262, respectively obtaining second position and orientation information of the two controlled motion input devices at the later time.
Step S1263, calculating and acquiring the incremental pose information of the two controlled motion input devices in the fixed coordinate system by combining the first scaling factor with the first pose information and the second pose information of the two controlled motion input devices.
In step S1263, the following steps may be specifically implemented:
and calculating the incremental pose information of the first pose information and the second pose information of one controlled motion input device in the fixed coordinate system, and calculating the incremental pose information of the first pose information and the second pose information of the other controlled motion input device in the fixed coordinate system.
And calculating the increment pose information of one motion input device in the fixed coordinate system and the increment pose information of the other motion input device in the fixed coordinate system by combining the first scale coefficient to respectively obtain the increment pose information of the two motion input devices in the fixed coordinate system.
In the two-to-one operation mode, the first scaling factor is, for example, 0.5, i.e. K1 and K2 both take the value 0.5; the acquired incremental pose information then represents the incremental pose information of the midpoint of the line connecting the two controlled motion input devices. K1 and K2 may also be assigned other values according to the actual situation, and K1 and K2 may be the same or different.
Step S1264, mapping the incremental pose information of the two controlled motion input devices in the fixed coordinate system to the incremental pose information of the distal end of the controlled object in the first coordinate system.
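As an illustrative sketch of such a mapping, assuming that the rotation between the fixed coordinate system and the first coordinate system is known (for example from calibration) and that the increment is expressed as a translation plus a rotation vector:

```python
import numpy as np

def map_increment_to_first_frame(delta_fixed, R_first_from_fixed):
    """Re-express an incremental pose, given as [dx, dy, dz, rx, ry, rz]
    (translation + rotation-vector) in the fixed coordinate system, in the
    first (arm-base) coordinate system.  R_first_from_fixed is the 3x3
    rotation from the fixed frame to the first frame, assumed known."""
    delta_fixed = np.asarray(delta_fixed, dtype=float)
    out = np.empty(6)
    out[:3] = R_first_from_fixed @ delta_fixed[:3]   # translation increment
    out[3:] = R_first_from_fixed @ delta_fixed[3:]   # rotation-vector increment
    return out

# example: fixed frame rotated 90 degrees about z relative to the first frame
Rz90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
d_first = map_increment_to_first_frame([0.01, 0, 0, 0, 0, 0.1], Rz90)
```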
In an embodiment, as shown in fig. 18, when the selection instruction acquired in step S125 is associated with a two-to-one operation mode, step S126 may also include:
in step S1265, first position information of the two controlled motion input devices in the fixed coordinate system at the previous time is respectively obtained.
In step S1266, second position information of the two controlled motion input devices in the fixed coordinate system at the later time is respectively obtained.
Step S1267, calculating and acquiring horizontal movement increment information, vertical movement increment information and rotation increment information of the two controlled motion input devices in the fixed coordinate system by combining the second scaling factor with the first position information and the second position information of the two controlled motion input devices in the fixed coordinate system.
Step S1268, correspondingly mapping the horizontal movement increment information, the vertical movement increment information and the rotation increment information of the two controlled motion input devices in the fixed coordinate system to the yaw angle increment information, the pitch angle increment information and the roll angle increment information of the distal end of the controlled object in the first coordinate system.
Further, as shown in figs. 19 and 20, the calculation and acquisition of the rotation increment information of the two controlled motion input devices in the fixed coordinate system according to their first position information and second position information in the fixed coordinate system includes:
step S12681, a first position vector between the two controlled motion-input devices at a previous time is established.
Step S12682, a second position vector between the two controlled motion input devices at a later time is established.
Step S12683, the rotation increment information of the two controlled motion input devices in the fixed coordinate system is obtained by combining the third scaling factor and the included angle between the first position vector and the second position vector.
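As an illustration only of steps S1267, S1268 and S12681 to S12683, the sketch below computes the rotation increment from the included angle between the two position vectors and then maps the three increments to the orientation increments in the order listed above (horizontal movement to yaw, vertical movement to pitch, rotation to roll). The function names, the per-increment gains, the use of the unsigned angle, and this particular correspondence are illustrative assumptions.

```python
import numpy as np

def rotation_increment(pa_prev, pb_prev, pa_next, pb_next, k3=1.0):
    """Rotation increment of the two controlled motion input devices in the
    fixed coordinate system (steps S12681 to S12683)."""
    v1 = np.asarray(pb_prev) - np.asarray(pa_prev)    # first position vector (S12681)
    v2 = np.asarray(pb_next) - np.asarray(pa_next)    # second position vector (S12682)
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))  # included angle
    return k3 * angle                                 # scaled by the third scale factor (S12683)

def map_planar_increments_to_orientation(d_horizontal, d_vertical, d_rotation,
                                         k_yaw=1.0, k_pitch=1.0, k_roll=1.0):
    """Step S1268: map the horizontal movement, vertical movement and rotation
    increments (fixed coordinate system) to yaw, pitch and roll angle increments
    of the distal end of the controlled object (first coordinate system)."""
    return k_yaw * d_horizontal, k_pitch * d_vertical, k_roll * d_rotation
```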
In an embodiment, as shown in fig. 21 and fig. 22, when the selection instruction acquired in step S125 is associated with a one-to-one operation mode, step S126 may include:
Step S12611, obtaining first pose information of the controlled motion input device in the fixed coordinate system at the previous time.
Step S12612, obtaining second pose information of the controlled motion input device in the fixed coordinate system at the later time.
Step S12613, calculating the incremental pose information of the controlled motion input device in the fixed coordinate system by combining the fourth scale factor with the first pose information and the second pose information of the controlled motion input device in the fixed coordinate system.
Step S12614, mapping the incremental pose information of the controlled motion input device in the fixed coordinate system to the incremental pose information of the distal end of the controlled object in the first coordinate system.
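The one-to-one mode of steps S12611 to S12614 can be sketched in the same spirit; here the increment computed in the fixed coordinate system is scaled by the fourth scale factor and then expressed in the first coordinate system, assuming a known rotation R_first_fixed from the fixed coordinate system to the first coordinate system. The helper names and this particular mapping are assumptions, not the claimed implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def one_to_one_increment(T_prev, T_next, R_first_fixed, k4=1.0):
    """Incremental pose of the single controlled motion input device, scaled by
    the fourth scale factor k4 and expressed in the first coordinate system
    (steps S12613 and S12614)."""
    dp = T_next[:3, 3] - T_prev[:3, 3]
    dr = R.from_matrix(T_next[:3, :3] @ T_prev[:3, :3].T).as_rotvec()
    return R_first_fixed @ (k4 * dp), R_first_fixed @ (k4 * dr)
```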
It should be noted that, in some usage scenarios, when the mechanical arm 21 moves, its distal end must move around a stationary point (a remote center of motion), that is, perform RCM-constrained motion. Specifically, this can be achieved by setting the task degrees of freedom of the distal end of the mechanical arm so that they relate only to the attitude degrees of freedom. The task degrees of freedom of the distal end of the arm body can be understood as the degrees of freedom in which the distal end of the arm body is allowed to move in Cartesian space, of which there are at most six. The degrees of freedom that the distal end of the arm body actually possesses in Cartesian space are its effective degrees of freedom, which are related to its configuration (i.e., structural features) and can be understood as the degrees of freedom that the distal end of the arm body can realize in Cartesian space.
The stationary point has a relatively fixed positional relationship with the distal end of the robotic arm. Depending on the particular control objective, the origin of the second coordinate system may be the fixed point in some embodiments, or a point on the distal end of the robotic arm in other embodiments.
In an embodiment, as shown in fig. 23, specifically in step S2, i.e. the decomposition step, the method may include:
Step S211, acquiring an input operation command associated with the task degrees of freedom of the distal end of the mechanical arm.
Step S212, decomposing each piece of initial target pose information in combination with the task degrees of freedom to obtain a set of pose information including first component target pose information of the distal end of the mechanical arm in the first coordinate system and second component target pose information of the controlled manipulation end instrument in the second coordinate system.
The operation command may include a first operation command and a second operation command. The first operation command is associated with the case where the task degrees of freedom of the distal end of the mechanical arm 21 exactly match the effective degrees of freedom of the mechanical arm 21, so that the distal end of the mechanical arm can move freely within the effective degrees of freedom of the mechanical arm. The second operation command, which corresponds to the RCM-constrained motion described above, is associated with the case where the task degrees of freedom of the distal end of the mechanical arm 21 exactly match the attitude degrees of freedom among the effective degrees of freedom of the mechanical arm 21, so as to ensure that the distal end of the mechanical arm 21, i.e. the power mechanism 22, moves around the stationary point when the mechanical arm 21 moves. Of course, other combinations of task degrees of freedom may be defined to facilitate control, which are not described in detail herein.
For example, when the second operation command is acquired in step S211, in the first component target pose information obtained by the decomposition, only the information on the attitude degrees of freedom is changed while the information on the position degrees of freedom is kept unchanged. In this way, the distal end of the mechanical arm 21 moves around the stationary point, the desired pose is achieved mainly by the movement of the controlled manipulation end instrument 34B, and the safety of the operation can be ensured.
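A minimal sketch of this position-preserving constraint is given below, assuming poses are 4x4 homogeneous transforms in the first coordinate system; the function name is illustrative.

```python
import numpy as np

def constrain_first_component(T_first_target, T_first_current):
    """Under the second operation command, keep the position degrees of freedom
    of the first component target pose equal to the current position of the
    distal end of the mechanical arm and change only the attitude degrees of
    freedom, so that the distal end moves around the stationary point."""
    T = np.array(T_first_target, copy=True)
    T[:3, 3] = T_first_current[:3, 3]   # position information kept unchanged
    return T                            # attitude information follows the target
```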
In an embodiment, specifically in step S2, i.e. the decomposition step, the method may include:
Step S221, acquiring current pose information of the distal end of the mechanical arm in the first coordinate system;
Step S222, converting the initial target pose information to obtain the second component target pose information under the condition that the distal end of the mechanical arm is kept at the current pose corresponding to the current pose information;
Step S223, judging the validity of the second component target pose information;
in this step, if the second component target pose information is valid, the process proceeds to step S224; otherwise, the process proceeds to step S225.
Step S224, converting the initial target pose information to obtain the first component target pose information under the condition that the controlled manipulation end instrument reaches the target pose corresponding to the second component target pose information;
Step S225, adjusting the second component target pose information to be valid so as to update the second component target pose information, and converting the initial target pose information to obtain the first component target pose information under the condition that the controlled manipulation end instrument reaches the target pose corresponding to the updated second component target pose information.
Through the above steps S221 to S225, when the pose of the controlled manipulation end instrument is adjusted, the corresponding operation arm is adjusted preferentially: if the movement of the operation arm is sufficient for the adjustment of the controlled manipulation end instrument, only the operation arm needs to move; if the movement of the operation arm is not sufficient, the adjustment is completed in combination with the movement of the mechanical arm.
Furthermore, in the above step S3, i.e. the first determination step, since the second component target pose information is either valid in itself or has been adjusted to be valid, only the first component target pose information needs to be judged; when the first component target pose information is valid, the corresponding pose information set can be determined to be valid, and otherwise the pose information set is determined to be invalid.
Referring to fig. 6, the steps S221 to S222 may be implemented by the following formula (1):

${}^{B}T_{T2} = {}^{B}T_{T1} \cdot {}^{T1}T_{T2}$    (1)

wherein ${}^{B}T_{T2}$ is the initial target pose information of the controlled manipulation end instrument 34B in the first coordinate system, ${}^{B}T_{T1}$ is the current pose information of the distal end of the mechanical arm in the first coordinate system, and ${}^{T1}T_{T2}$ is the target pose information of the controlled manipulation end instrument 34B in the second coordinate system. T2 is the tool coordinate system of the controlled manipulation end instrument 34B, T1 is the tool coordinate system of the mechanical arm, and B is the base coordinate system of the mechanical arm. In the calculation, since ${}^{B}T_{T2}$ and ${}^{B}T_{T1}$ are known, ${}^{T1}T_{T2}$ can be calculated as ${}^{T1}T_{T2} = ({}^{B}T_{T1})^{-1}\,{}^{B}T_{T2}$. If ${}^{T1}T_{T2}$ is judged to be invalid in step S223, ${}^{T1}T_{T2}$ can be adjusted to be valid; then, since ${}^{B}T_{T2}$ and the adjusted ${}^{T1}T_{T2}$ are known, ${}^{B}T_{T1}$ can be calculated as ${}^{B}T_{T1} = {}^{B}T_{T2}\,({}^{T1}T_{T2})^{-1}$. It will be understood by those skilled in the art that the foregoing embodiments that involve calculating (converting) the target pose information of the distal end of the corresponding arm in the corresponding coordinate system can all be implemented by using the above formula (1); only the specific meanings of quantities such as ${}^{B}T_{T1}$ may vary depending on the circumstances, for example, being the target pose information or the current pose information of the corresponding arm body in the first coordinate system, which is not described in detail herein.
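As an illustration of steps S221 to S225 together with formula (1), the sketch below solves formula (1) for the unknown transform in each case; `is_valid` and `adjust_to_valid` are hypothetical callbacks standing in for the validity judgement (e.g. steps S71 to S73 below) and the adjustment of step S225, and the function name is an assumption.

```python
import numpy as np

def decompose_target_pose(B_T_T2_target, B_T_T1_current, is_valid, adjust_to_valid):
    """Return (first component target pose B_T_T1, second component target pose
    T1_T_T2) according to steps S221 to S225 and formula (1):
    B_T_T2 = B_T_T1 @ T1_T_T2."""
    # S221-S222: keep the distal end of the mechanical arm at its current pose
    # and solve formula (1) for the second component target pose.
    T1_T_T2 = np.linalg.inv(B_T_T1_current) @ B_T_T2_target
    if is_valid(T1_T_T2):                      # S223 -> S224
        return B_T_T1_current, T1_T_T2
    T1_T_T2 = adjust_to_valid(T1_T_T2)         # S225: adjust to be valid
    # With the updated second component fixed, solve formula (1) for the first
    # component target pose of the distal end of the mechanical arm.
    B_T_T1 = B_T_T2_target @ np.linalg.inv(T1_T_T2)
    return B_T_T1, T1_T_T2
```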
In an embodiment, the step of judging the validity of any of the target pose information obtained above includes:
Step S71, resolving the target pose information into target motion state parameters of each joint assembly in the corresponding arm body.
Step S72, comparing the target motion state parameters of each joint assembly in the arm body with the motion state threshold of the corresponding joint assembly.
Step S73, if the target motion state parameter of any one or more joint assemblies in the arm body exceeds the motion state threshold of the corresponding joint assembly, judging that the target pose information is invalid; if the target motion state parameters of all the joint assemblies in the arm body do not exceed the motion state thresholds of the corresponding joint assemblies, judging that the target pose information is valid.
The motion state parameters include a position parameter, a speed parameter and an acceleration parameter, and the motion state thresholds include a position parameter threshold, a speed parameter threshold and an acceleration parameter threshold. The comparison is made between a parameter and a threshold of the same type.
Further, in the aforementioned step S225, specifically in the step of adjusting the second component target pose information to be valid, the motion state of each joint assembly in the arm body that exceeds its motion state threshold may be adjusted to within the corresponding motion state threshold so as to become valid. In an embodiment, the motion state of each joint assembly in the arm body that exceeds its motion state threshold may be adjusted to the corresponding motion state threshold, so that the operation arm moves to its limit as far as possible and the remaining adjustment is then completed in cooperation with the mechanical arm.
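For illustration only, the validity judgement of steps S71 to S73 and the threshold-clamping adjustment just described might look as follows for the position parameters, assuming the target pose has already been resolved (e.g. by an inverse kinematics step not shown here) into per-joint target values q_target with limits q_limits; the names and the restriction to position parameters are assumptions, and speed and acceleration parameters would be checked in the same way.

```python
import numpy as np

def judge_validity(q_target, q_limits):
    """Steps S71 to S73 (position parameters only): the target pose information
    is valid only if every joint assembly stays within its motion state threshold."""
    return all(lo <= q <= hi for q, (lo, hi) in zip(q_target, q_limits))

def clamp_to_thresholds(q_target, q_limits):
    """Step S225 variant: clamp each out-of-range joint to its motion state
    threshold, so the operation arm moves to its limit as far as possible before
    the mechanical arm completes the remaining adjustment."""
    return np.array([min(max(q, lo), hi) for q, (lo, hi) in zip(q_target, q_limits)])
```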
The above-described embodiments are suitable for controlling end instruments in a surgical robot of the type shown in fig. 1. A surgical robot of this type includes one robot arm 21 and one or more operation arms 31 having end instruments 34 mounted at the distal end of the robot arm 21, and the robot arm 21 and the operation arms 31 each have several degrees of freedom.
The above embodiments are equally applicable to the control of end instruments in a surgical robot of the type shown in fig. 26. A surgical robot of this type includes a main arm 32', one or more adjusting arms 30' mounted at the distal end of the main arm 32', and one or more operation arms 31' having end instruments mounted at the distal ends of the adjusting arms 30'; the main arm 32', the adjusting arms 30' and the operation arms 31' each have several degrees of freedom. As shown in fig. 26, the surgical robot may be provided with four adjusting arms 30', each of which carries only one operation arm 31'. Depending on the actual use scenario, the three-segment arm structure of the surgical robot shown in fig. 26 can be treated as the two-segment arm structure of the surgical robot shown in fig. 1 for the purpose of control. In an embodiment, where the concept of the operation arm is identical in the two types of surgical robots, each adjusting arm 30' of the surgical robot shown in fig. 26 may, depending on the configuration, be regarded as the robot arm 21 of the surgical robot shown in fig. 1 for control; alternatively, depending on the configuration, the whole of an adjusting arm 30' and the main arm 32' of the surgical robot shown in fig. 26 may be controlled as the robot arm 21 of the surgical robot shown in fig. 1. In another embodiment, the main arm 32' of the surgical robot shown in fig. 26 may be regarded as the robot arm 21 of the surgical robot shown in fig. 1, and the whole of an adjusting arm 30' and the corresponding operation arm 31' of the surgical robot shown in fig. 26 may be regarded as the operation arm 31 of the surgical robot shown in fig. 1.
The control method described above is particularly suitable for a single-port surgical robot. It is also applicable to a multi-port surgical robot in which an operation arm having an image end instrument and an operation arm having a manipulation end instrument are mounted at the distal ends of the respective robot arms.
In one embodiment, the control method of the surgical robot is configured to be implemented in a processing system of the surgical robot, the processing system having one or more processors.
In one embodiment, a computer-readable storage medium is provided, in which a computer program is stored, the computer program being configured to be executed by one or more processors to implement the steps of the control method according to any one of the above-mentioned embodiments.
The surgical robot, the control method thereof and the computer readable storage medium of the invention have the following advantages:
By controlling the movement of the mechanical arm 21, the manipulation end instrument 34B is enabled to reach the target pose while the visual field keeps its current pose, so that the operation space of the manipulation end instrument 34B can be enlarged by the movement of the mechanical arm 21 without changing the visual field, which is convenient and safe to use.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these all fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.