Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to facilitate understanding of the present embodiment, an active training method of an upper limb rehabilitation robot provided by the embodiment of the present invention is first described in detail. The upper limb rehabilitation robot shown in Fig. 1 comprises a display end 10 and a host end 20. The display end 10 comprises a display device 101 and a vision sensor 102: the display device 101 is used for displaying, in real time, an image of a region of interest (ROI), the optical flow in the vicinity of the end of the arm to be trained, the end target position, and the like; the vision sensor 102 is fixedly mounted on the display end 10 and is used for acquiring an RGB image containing the arm to be trained. The host end 20 comprises an upper limb exoskeleton 201 and a control host 202: the upper limb exoskeleton 201 is used for providing an assistive force/torque to help the patient complete rehabilitation training, and the control host 202 is used for adjusting the assistive force/torque output by the upper limb exoskeleton 201.
In practical applications, the upper limb exoskeleton 201 supports left/right-hand switching and adjustment of the upper arm and forearm lengths, and has at least 4 degrees of freedom: shoulder abduction/adduction, shoulder flexion/extension, shoulder internal/external rotation, and elbow flexion/extension, which may be specifically configured according to the actual situation.
Based on the above-mentioned upper limb rehabilitation robot, the embodiment of the invention provides an active training method of an upper limb rehabilitation robot, as shown in fig. 2, the method comprises the following steps:
Step S202, an RGB image containing the arm to be trained is acquired, and a region of interest (ROI) is extracted from the RGB image based on a forward kinematic model to obtain an ROI image;
Specifically, when the (incompletely paralyzed) patient initiates an active motion, the arm to be trained produces a small displacement. The vision sensor captures the magnitude and direction of this displacement and generates an RGB image containing the arm to be trained. After acquiring the RGB image, the control host crops it based on the forward kinematic model to extract the region-of-interest (ROI) image, where the ROI image covers all possible motion regions of the arm to be trained.
For the forward kinematic model, as shown in Fig. 3, the upper limb is modeled as a four-degree-of-freedom two-link structure (upper arm + forearm), where the coordinate system XYZ is the camera coordinate system, SH [x0, y0, z0] is the position of the shoulder joint center in the camera coordinate system (determined by the mounting position), EL is the elbow joint center, WR is the wrist joint center, i.e., the end of the arm to be trained, Lu is the upper arm length (distance from SH to EL), Ll is the forearm length (distance from EL to WR), and q1, q2, q3, q4 are the angles of shoulder abduction/adduction, shoulder flexion/extension, shoulder internal/external rotation, and elbow flexion/extension, respectively.
Based on the forward kinematic model described above, the position [xw, yw, zw] of the end of the arm to be trained (i.e., the wrist center WR) in the camera coordinate system can be determined as a function of the joint angles q1, q2, q3, q4 and the link lengths Lu, Ll. This position is then transformed into the pixel coordinate system through the camera intrinsic matrix K, which gives the pixel coordinates of the end of the arm to be trained in the RGB image:
zw · [xe, ye, 1]T = K · [xw, yw, zw]T
Here, [x0, y0, z0] is the position of the shoulder joint center in the camera coordinate system (determined by the mounting position), Lu is the upper arm length, Ll is the forearm length, q1, q2, q3, q4 are the angles of shoulder abduction/adduction, shoulder flexion/extension, shoulder internal/external rotation, and elbow flexion/extension, respectively, [xe, ye] is the pixel coordinate of the wrist joint center in the RGB image pixel coordinate system, and K ∈ R3×3 is the intrinsic matrix of the camera (given by the manufacturer), which transforms between the camera coordinate system and the pixel coordinate system.
Therefore, with the value ranges of q1, q2, q3, q4, Lu and Ll as constraints, the minimum value xe,min and maximum value xe,max of xe, and the minimum value ye,min and maximum value ye,max of ye, can be obtained by solving an optimization problem, and the rectangular region [xe,min-we : xe,max+we, ye,min-he : ye,max+he] on the RGB image is taken as the ROI, such as the gray shaded region shown in Fig. 4. The figure uses the standard pixel coordinate system: the origin is at the upper-left corner of the image, the X axis points right, and the Y axis points down; xe,min and xe,max are the minimum and maximum X coordinates of the wrist joint center, ye,min and ye,max are the minimum and maximum Y coordinates of the wrist joint center, we is the width margin, and he is the height margin. Note that the width margin we and the height margin he may be set according to historical empirical or experimental values.
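The bound computation above can be sketched by sampling the joint-angle ranges, pushing each sample through a forward kinematic model and the pinhole projection, and taking the min/max pixel coordinates plus margins. The planar two-link kinematics, the intrinsic matrix values, and the joint ranges below are illustrative assumptions, not the patent's exact model:

```python
import numpy as np

# Illustrative camera intrinsics (focal lengths and principal point) - not from the patent.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

def wrist_camera_xyz(q2, q4, Lu, Ll, shoulder):
    """Simplified planar two-link arm in the camera X-Y plane at the shoulder depth.

    q2: shoulder flexion angle, q4: elbow flexion angle (radians).
    A stand-in for the full 4-DOF forward kinematics of the patent.
    """
    x0, y0, z0 = shoulder
    x = x0 + Lu * np.sin(q2) + Ll * np.sin(q2 + q4)
    y = y0 + Lu * np.cos(q2) + Ll * np.cos(q2 + q4)
    return np.array([x, y, z0])

def project(K, p):
    """Pinhole projection: camera coordinates -> pixel coordinates."""
    u, v, w = K @ p
    return u / w, v / w

def roi_bounds(K, shoulder, Lu, Ll, we, he, n=25):
    """Scan the joint ranges, project each wrist position, take min/max + margins."""
    q2s = np.linspace(-np.pi / 3, np.pi / 3, n)   # assumed shoulder flexion range
    q4s = np.linspace(0.0, 2 * np.pi / 3, n)      # assumed elbow flexion range
    xs, ys = [], []
    for q2 in q2s:
        for q4 in q4s:
            xe, ye = project(K, wrist_camera_xyz(q2, q4, Lu, Ll, shoulder))
            xs.append(xe)
            ys.append(ye)
    return (min(xs) - we, max(xs) + we, min(ys) - he, max(ys) + he)

x_lo, x_hi, y_lo, y_hi = roi_bounds(K, shoulder=(0.0, 0.1, 1.0),
                                    Lu=0.30, Ll=0.25, we=20, he=20)
```

Dense sampling is a simple stand-in for formally solving the optimization problem; since the ROI only needs to cover all possible motion regions, a slightly conservative bound is acceptable.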
In summary, extracting the ROI image from the RGB image reduces the amount of computation and improves the real-time performance of the algorithm. One drawback of the optical flow method is its slow running speed; extracting the ROI image is equivalent to cropping the original RGB image output by the vision sensor to all possible motion regions of the arm to be trained, and the subsequent algorithms run on the cropped image, i.e., the ROI image, which reduces the amount of computation and speeds up the optical flow method.
Step S204, calculating the dense optical flow of the ROI image based on the Farneback optical flow algorithm;
At present, common visual motion detection algorithms mainly include the inter-frame difference method, the background difference method, and the optical flow method. The optical flow method can detect independently moving targets without any prior knowledge of the scene, obtains complete information about the moving targets, and is suitable for dynamic backgrounds. In addition, neural-network-based methods are better suited to recognizing large-displacement actions and perform only moderately on small displacements; therefore, for the small displacement produced by the limb after the patient initiates an active motion, the embodiment of the present invention adopts the optical flow method for visual motion detection.
Conventional optical flow methods are mainly divided into dense optical flow and sparse optical flow. Dense optical flow is an image registration method that performs point-by-point matching on an image or a specified region: it calculates the offset of every point on the image to form a dense optical flow field, which enables pixel-level image registration, so the embodiment of the present invention calculates the dense optical flow of the ROI image. Methods for calculating dense optical flow mainly include the Brox algorithm, the Farneback algorithm, and the TVL1 algorithm. The Farneback algorithm has good real-time performance, reaching a frame rate of 30 frames per second on a low-to-mid-range GPU (Graphics Processing Unit), which meets real-time requirements; therefore, the embodiment of the present invention adopts the Farneback optical flow algorithm to calculate the dense optical flow of the ROI image.
Specifically, for the ROI image, the optical flow calculation process includes: performing gray-scale conversion on the ROI image to obtain a first gray image ROI_gray; performing median filtering on the first gray image ROI_gray to obtain a filtered second gray image ROI_gray_filted; and, based on the Farneback optical flow algorithm, calculating the optical flow of the second gray image of the current frame from the second gray image ROI_gray_filted of the current frame and the second gray image ROI_gray_filted of the previous frame. The optical flow consists of components in the X and Y directions and is split into an optical flow flowx in the X direction and an optical flow flowy in the Y direction, where W = xe,max - xe,min + 2we, H = ye,max - ye,min + 2he, and flowx and flowy are matrices of dimensions W × H.
Step S206, the current position of the end of the arm to be trained in the ROI image is acquired, and the neighborhood of the end of the arm to be trained in the ROI image is determined according to the current position;
Because the ROI image is obtained by cropping the original RGB image, the pixel coordinates of the end of the arm to be trained in the ROI image are offset from its pixel coordinates in the RGB image. The pixel coordinates of the end of the arm to be trained (i.e., the wrist joint center) in the ROI image can be determined according to the following formula:
[xr, yr] = [xe - (xe,min - we), ye - (ye,min - he)]
where [xe, ye] is the pixel coordinate of the wrist joint center in the RGB image pixel coordinate system, we is the width margin, he is the height margin, and [xr, yr] is the pixel coordinate of the end of the arm to be trained in the ROI image, i.e., the current position.
As shown in Fig. 5, the figure uses the standard pixel coordinate system: the origin is at the upper-left corner of the image, the X axis points right, and the Y axis points down. Let [xr, yr] denote the current position of the end of the arm to be trained in the pixel coordinate system of the ROI image, and let wn and hn denote the width and height of the neighborhood, which can be determined according to the shape of the arm. The process of determining the neighborhood from the current position [xr, yr] is as follows: determine the neighborhood width wn and neighborhood height hn according to the shape of the arm to be trained, and establish a rectangular region of width wn and height hn centered on the current position [xr, yr] as the neighborhood of the end of the arm to be trained in the ROI image, i.e., the rectangular region in Fig. 5. It should be noted that in the illustrated rest state, the optical flow of each pixel in the neighborhood is approximately zero, and the neighborhood looks like a point.
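As a sketch, the neighborhood can be realized as a rectangle around the current end position, clamped to the image borders; the names `wn`/`hn` for the neighborhood width and height are this sketch's own:

```python
def neighborhood_bounds(xr, yr, wn, hn, W, H):
    """Rectangle of width wn and height hn centered on the end position (xr, yr),
    clamped to the W x H ROI image (standard pixel coordinates, origin top-left)."""
    x0 = max(int(xr - wn // 2), 0)
    y0 = max(int(yr - hn // 2), 0)
    x1 = min(x0 + wn, W)
    y1 = min(y0 + hn, H)
    return x0, y0, x1, y1
```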
Step S208, determining the actual velocity vector of the end of the arm to be trained according to the optical flow in the neighborhood;
For the optical flow when the end of the arm to be trained is in motion, as shown in Fig. 6, [Ve,x, Ve,y] is the actual velocity vector of the end of the arm to be trained, [xd, yd] is the target position of the end of the upper limb, and [Vd,x, Vd,y] is the target movement velocity of the end of the arm to be trained, i.e., the target velocity vector. When the arm to be trained makes a small movement to the left, some pixels in the neighborhood produce a component in the positive X direction, which appears as a rightward arrow (the direction is opposite to the actual movement because of mirroring). Since the optical flow comprises a first optical flow flowx in the X direction and a second optical flow flowy in the Y direction, the actual velocity vector [Ve,x, Ve,y] of the end of the arm to be trained can be obtained by averaging the optical flow in the neighborhood of the end of the arm to be trained.
Specifically, all first optical flows in the neighborhood are averaged to obtain the first velocity Ve,x in the X direction, all second optical flows in the neighborhood are averaged to obtain the second velocity Ve,y in the Y direction, and the actual velocity vector is obtained from the first velocity and the second velocity.
The averaging formulas are as follows:
Ve,x = (1/(wn·hn)) · Σ[i,j]∈S flowx[i, j]
Ve,y = (1/(wn·hn)) · Σ[i,j]∈S flowy[i, j]
where [i, j] ∈ S denotes all pixel coordinates in the neighborhood S, flowx[i, j] is the element in the i-th row and j-th column of the matrix flowx, flowy[i, j] is the element in the i-th row and j-th column of the matrix flowy, wn is the neighborhood width, and hn is the neighborhood height.
The optical flow in the neighborhood of the end of the arm to be trained is averaged, instead of using the optical flow of the single pixel at the end of the arm to be trained, to obtain the actual velocity vector; this avoids the problem that a velocity vector derived from a single pixel is affected by noise and fails to reflect the actual motion intention. Averaging the optical flow in the neighborhood of the end of the arm to be trained therefore reduces the influence of noise and improves the calculation accuracy of the actual velocity vector.
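The neighborhood average can be sketched directly with NumPy slicing; the bound arguments (`x0`, `y0`, `x1`, `y1`) are assumed to be the neighborhood rectangle in ROI pixel coordinates:

```python
import numpy as np

def end_velocity(flowx, flowy, x0, y0, x1, y1):
    """Average the dense flow over the neighborhood rather than reading a single
    pixel, which suppresses per-pixel noise (step S208).

    flowx/flowy are the X- and Y-direction flow matrices; rows index Y,
    columns index X in the standard pixel coordinate system."""
    ve_x = float(np.mean(flowx[y0:y1, x0:x1]))
    ve_y = float(np.mean(flowy[y0:y1, x0:x1]))
    return ve_x, ve_y
```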
Step S210, acquiring the target position of the end of the arm to be trained in the ROI image, and determining the target velocity vector of the end of the arm to be trained according to the target position and the current position;
Specifically, the target velocity vector [Vd,x, Vd,y] is calculated according to the following formula:
[Vd,x, Vd,y] = [(xd - xr)/T, (yd - yr)/T]
where [xd, yd] is the target position of the end of the arm to be trained in the ROI image, i.e., the upper limb target position shown in Fig. 5 or Fig. 6, [xr, yr] is the current position, and T is a preset movement duration. It should be noted that the target position [xd, yd] is the position corresponding to the current training task in the target-guided rehabilitation training provided by the upper limb rehabilitation robot; when the current training task is completed, the control host generates the target position corresponding to the next training task, until all tasks are completed or the training time is over.
Step S212, determining the movement intention of the arm to be trained according to the actual speed vector and the target speed vector, and assisting the arm to be trained to perform rehabilitation training according to the movement intention until the arm to be trained moves to the target position.
Specifically, the projection of the actual velocity vector onto the target velocity vector is calculated, and whether the projection meets a preset motion condition is judged; if so, it is determined that the arm to be trained has generated a correct motion intention, and the arm to be trained is assisted in rehabilitation training until it moves to the target position.
The projection V of the actual velocity vector onto the target velocity vector is calculated according to the following formula:
V = (Vd,x·Ve,x + Vd,y·Ve,y) / √(Vd,x² + Vd,y²)
where [Vd,x, Vd,y] is the target velocity vector and [Ve,x, Ve,y] is the actual velocity vector.
For the projection V, judging whether it meets the preset motion condition includes judging whether the direction of the projection is positive and whether its magnitude reaches a preset threshold; if both hold, the projection meets the preset motion condition. When the projection direction is positive and its magnitude reaches the preset threshold, it is determined that the arm to be trained has generated a motion intention consistent with the target direction, i.e., a correct motion intention. The control host then controls the exoskeleton to provide an assistive force/torque that moves the arm to be trained toward the target position according to the target velocity vector [Vd,x, Vd,y]. When the target position is reached, the current task is considered complete, and the control host generates the next target position, until all tasks are completed or the training time is over.
Further, the method also includes: if the projection does not meet the preset motion condition, determining that the arm to be trained has generated a wrong motion intention; in this case, the control host controls the exoskeleton to remain stationary, i.e., stops assisting the arm to be trained in rehabilitation training, until the arm to be trained generates a correct motion intention.
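Steps S210 and S212 can be sketched together: compute the target velocity toward the target position, project the measured velocity onto it, and test the preset condition. The movement duration `T` and the threshold `v_thresh` are assumed inputs here:

```python
import math

def motion_intent(current, target, ve, T, v_thresh):
    """Decide whether the measured end velocity expresses a correct motion intention.

    current/target: (x, y) positions in ROI pixel coordinates;
    ve: measured actual velocity vector [Ve,x, Ve,y];
    T: preset movement duration; v_thresh: preset projection threshold.
    Returns (correct_intent, projection)."""
    # Target velocity vector toward the target position over duration T (step S210).
    vd_x = (target[0] - current[0]) / T
    vd_y = (target[1] - current[1]) / T
    norm = math.hypot(vd_x, vd_y)
    if norm == 0.0:
        return False, 0.0  # already at the target position
    # Scalar projection of the actual velocity onto the target velocity (step S212).
    proj = (ve[0] * vd_x + ve[1] * vd_y) / norm
    # Correct intent: projection direction positive and magnitude above threshold.
    return proj > 0 and proj >= v_thresh, proj
```

In the false branch the controller would hold the exoskeleton stationary, matching the behavior described above.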
According to the active training method for an upper limb rehabilitation robot provided by the embodiment of the present invention, after the patient initiates an active motion, the limb produces a small displacement; by capturing the magnitude and direction of this displacement from RGB images, the patient's motion intention is recognized and active training is realized. This reduces the active training cost of the upper limb rehabilitation robot, improves the practicality of the active training mode, and facilitates popularization and implementation in practical applications.
In summary, the active training method for an upper limb rehabilitation robot has the following advantages: ① it can recognize the active motion intention of the patient's upper limb based on an ordinary RGB camera; compared with methods using electroencephalogram or electromyographic signals, this greatly reduces the price of the required sensor, which helps reduce the active training cost of the upper limb rehabilitation robot and improves the practicality of the active training mode; ② a region of interest (ROI) is cropped out of the RGB image output by the vision sensor based on the forward kinematic model, and the subsequent algorithms run on the ROI image, which significantly reduces the amount of computation and improves the practicality of the optical flow method; ③ averaging the optical flow in the neighborhood of the end of the arm to be trained reduces the influence of noise and improves the accuracy of upper limb active motion intention recognition, facilitating popularization and implementation in practical applications.
Corresponding to the above method embodiment, the embodiment of the present invention also provides an active training device for an upper limb rehabilitation robot, which comprises an image acquisition module 71, an optical flow calculation module 72, a neighborhood determination module 73, an actual speed determination module 74, a target speed determination module 75, and a movement intention determination module 76. The functions of the modules are as follows:
The image acquisition module 71 is configured to acquire an RGB image containing the arm to be trained, and extract a region of interest (ROI) from the RGB image based on a forward kinematic model to obtain an ROI image, where the ROI image covers all possible motion regions of the arm to be trained;
An optical flow calculation module 72, configured to calculate the dense optical flow of the ROI image based on the Farneback optical flow algorithm;
A neighborhood determining module 73, configured to obtain a current position of the end of the arm to be trained in the ROI image, and determine a neighborhood of the end of the arm to be trained in the ROI image according to the current position;
An actual speed determining module 74, configured to determine an actual speed vector of the end of the arm to be trained according to the optical flow in the neighborhood;
A target speed determining module 75, configured to obtain a target position of the end of the arm to be trained in the ROI image, and determine a target speed vector of the end of the arm to be trained according to the target position and the current position;
the movement intention determining module 76 is configured to determine a movement intention of the arm to be trained according to the actual speed vector and the target speed vector, and assist the arm to be trained to perform rehabilitation training according to the movement intention until the arm to be trained moves to the target position.
According to the active training device for an upper limb rehabilitation robot provided by the embodiment of the present invention, after the patient initiates an active motion, the limb produces a small displacement; by capturing the magnitude and direction of this displacement from RGB images, the patient's motion intention is recognized and active training is realized. This reduces the active training cost of the upper limb rehabilitation robot, improves the practicality of the active training mode, and facilitates popularization and implementation in practical applications.
Preferably, the optical flow calculation module 72 is further configured to perform gray-scale conversion on the ROI image to obtain a first gray image, perform median filtering on the first gray image to obtain a filtered second gray image, and calculate, based on the Farneback optical flow algorithm, the optical flow of the second gray image of the current frame from the second gray image of the current frame and the second gray image of the previous frame.
Preferably, the neighborhood determining module 73 is further configured to determine a neighborhood width and a neighborhood height according to the shape of the arm to be trained, and set up a rectangular area with the width being the neighborhood width and the height being the neighborhood height with the current position as the center, as a neighborhood of the end of the arm to be trained in the ROI image.
Preferably, the optical flow includes a first optical flow in the X direction and a second optical flow in the Y direction, and the actual speed determining module 74 is further configured to average all the first optical flows in the neighborhood to obtain a first speed in the X direction, average all the second optical flows in the neighborhood to obtain a second speed in the Y direction, and obtain an actual speed vector according to the first speed and the second speed.
Preferably, the movement intention determining module 76 is further configured to calculate a projection of the actual velocity vector onto the target velocity vector, determine whether the projection meets a preset movement condition, if so, determine that the arm to be trained generates a correct movement intention, and assist the arm to be trained in performing rehabilitation training until the arm to be trained moves to the target position.
Preferably, judging whether the projection meets the preset motion condition comprises judging whether the direction of the projection is positive and whether the amplitude of the projection reaches a preset threshold value, and if so, judging that the projection meets the preset motion condition.
Preferably, the device is further configured to determine that the arm to be trained has generated a wrong movement intention if the projection does not meet the preset movement condition, and to stop assisting the arm to be trained in rehabilitation training.
The upper limb rehabilitation robot active training device provided by the embodiment of the invention has the same technical characteristics as the upper limb rehabilitation robot active training method provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
The embodiment of the invention also provides an upper limb rehabilitation robot, which comprises a processor and a memory, wherein the memory stores machine executable instructions which can be executed by the processor, and the processor executes the machine executable instructions to realize the active training method of the upper limb rehabilitation robot.
Referring to fig. 8, the upper limb rehabilitation robot includes a processor 100 and a memory 101, the memory 101 stores machine executable instructions executable by the processor 100, and the processor 100 executes the machine executable instructions to implement the above-mentioned upper limb rehabilitation robot active training method.
Further, the upper limb rehabilitation robot shown in fig. 8 further comprises a bus 102 and a communication interface 103, and the processor 100, the communication interface 103 and the memory 101 are connected through the bus 102.
The memory 101 may include a high-speed random access memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. The communication connection between the system network element and at least one other network element is implemented via at least one communication interface 103 (which may be wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, etc. The bus 102 may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. Buses may be classified into address buses, data buses, control buses, and so on. For ease of illustration, only one bi-directional arrow is shown in Fig. 8, but this does not mean there is only one bus or one type of bus.
The processor 100 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 100 or by instructions in the form of software. The processor 100 may be a general-purpose processor, including a Central Processing Unit (CPU) and a Network Processor (NP), or a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component, and can implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or register. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and completes the steps of the method of the foregoing embodiment in combination with its hardware.
The embodiment also provides a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions, and when the machine-executable instructions are called and executed by a processor, the machine-executable instructions cause the processor to realize the active training method of the upper limb rehabilitation robot.
The computer program product of the active training method and device for an upper limb rehabilitation robot provided by the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code can be used to execute the method described in the foregoing method embodiment, and for the specific implementation, reference may be made to the method embodiment, which is not repeated here.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected, mechanically connected, electrically connected, directly connected, indirectly connected via an intermediate medium, or in communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes a U disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, an optical disk, or other various media capable of storing program codes.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It should be noted that the foregoing embodiments are merely specific embodiments of the present invention, used to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that, within the technical scope disclosed by the present invention, modifications, easily conceived variations, or equivalent substitutions of some of the technical features may still be made to the technical solutions described in the foregoing embodiments; such modifications, variations or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.