CN116492644B - Active training method and device for upper limb rehabilitation robot and upper limb rehabilitation robot - Google Patents

Active training method and device for upper limb rehabilitation robot and upper limb rehabilitation robot

Info

Publication number
CN116492644B
CN116492644B
Authority
CN
China
Prior art keywords
arm
trained
optical flow
determining
neighborhood
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310420013.3A
Other languages
Chinese (zh)
Other versions
CN116492644A (en)
Inventor
吴剑煌
黄冠
孙维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huaquejing Medical Technology Co ltd
Original Assignee
Shenzhen Huaquejing Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huaquejing Medical Technology Co ltd
Priority to CN202310420013.3A
Publication of CN116492644A
Application granted
Publication of CN116492644B
Legal status: Active (current)
Anticipated expiration


Abstract


The present invention provides an active training method and device for an upper limb rehabilitation robot, and an upper limb rehabilitation robot; wherein the method comprises: extracting a region of interest (ROI) from an RGB image based on a forward kinematics model to obtain an ROI image; calculating the dense optical flow of the ROI image based on a Farneback optical flow algorithm, and determining the neighborhood of the end of an arm to be trained in the ROI image; determining the actual velocity vector of the end of the arm to be trained according to the optical flow in the neighborhood; finally determining the target velocity vector of the end of the arm to be trained, and determining the movement intention of the arm to be trained according to the actual velocity vector and the target velocity vector, and assisting the arm to be trained in rehabilitation training according to the movement intention until it moves to the target position; the above training method reduces the active training cost of the upper limb rehabilitation robot, improves the practicability of the active training mode, and is convenient for popularization and implementation in practical applications.

Description

Active training method and device for upper limb rehabilitation robot and upper limb rehabilitation robot
Technical Field
The invention relates to the technical field of rehabilitation training, in particular to an active training method and device for an upper limb rehabilitation robot and the upper limb rehabilitation robot.
Background
Cerebral stroke, commonly known as stroke, is an acute cerebrovascular circulatory disorder caused by the blockage or rupture of cerebral vessels, and is characterized by high morbidity, high mortality, high disability rate and high recurrence rate. Only a few patients with mild stroke recover naturally; most stroke patients are left with varying degrees of motor, sensory, cognitive and speech dysfunction, which seriously affects their daily activity capacity and quality of life. Rehabilitation is the most effective way to reduce the disability rate of stroke: timely, scientific and effective rehabilitation training, especially during the "golden period" early in the disease, helps treat the condition, restore function, prevent recurrence and reduce complications. In traditional stroke rehabilitation, doctors mainly deliver one-to-one manual therapy; the treatment effect is directly affected by each doctor's personal technique, experience, subjective judgment and degree of fatigue, the labor intensity of treatment is high, nursing costs are high, and the ratio of doctors to patients is severely unbalanced, making it difficult to meet ever-growing medical demand. Introducing medical rehabilitation robot equipment is therefore a feasible way to help relieve the contradiction between rehabilitation supply and demand.
A rehabilitation robot can assist or even replace doctors in providing more continuous, effective and targeted rehabilitation training, alleviating the shortage of human resources in rehabilitation medicine; it can also record a patient's treatment data in real time, providing an objective basis for condition assessment and scheme improvement. In practice, rehabilitation robots mainly offer two training modes, active and passive; as research into neuroplasticity and functional reorganization has deepened in both theory and practice, active rehabilitation has proved far more effective than passive exercise.
Active rehabilitation training means that the patient actively initiates a movement; because the patient's limb is impaired and cannot independently generate all the force/torque needed to complete the movement, the rehabilitation robot must first recognize the patient's motion intention and then provide the required assistive force/torque to help complete the movement. The difficulty in active rehabilitation training lies in recognizing the patient's motion intention. Existing methods mainly identify intention from electroencephalogram (EEG) or electromyography (EMG) signals, but the sensors that acquire these signals are expensive and impractical, which greatly limits the adoption of the rehabilitation robot's active training mode.
Disclosure of Invention
Accordingly, the invention aims to provide an upper limb rehabilitation robot active training method and device and an upper limb rehabilitation robot, so as to alleviate the problems, reduce the active training cost of the upper limb rehabilitation robot, improve the practicability of an active training mode and facilitate popularization and implementation in practical application.
The embodiment of the invention provides an active training method for an upper limb rehabilitation robot, which comprises: obtaining an RGB image containing the arm to be trained, and extracting a region of interest (ROI) from the RGB image based on a forward kinematics model to obtain an ROI image, where the ROI image covers all possible motion regions of the arm to be trained; calculating the dense optical flow of the ROI image based on the Farneback optical flow algorithm; obtaining the current position of the end of the arm to be trained in the ROI image, and determining the neighborhood of the end of the arm to be trained in the ROI image according to the current position; determining the actual velocity vector of the end of the arm to be trained according to the optical flow in the neighborhood; obtaining the target position of the end of the arm to be trained in the ROI image, and determining the target velocity vector of the end of the arm to be trained according to the target position and the current position; and determining the motion intention of the arm to be trained according to the actual velocity vector and the target velocity vector, and assisting the arm to be trained in rehabilitation training according to the motion intention until it reaches the target position.
Preferably, the step of calculating the dense optical flow of the ROI image based on the Farneback optical flow algorithm comprises: converting the ROI image to grayscale to obtain a first grayscale image; applying median filtering to the first grayscale image to obtain a filtered second grayscale image; and calculating the optical flow of the current frame's second grayscale image, based on the Farneback optical flow algorithm, from the second grayscale image of the current frame and the second grayscale image of the previous frame.
Preferably, the step of determining the neighborhood of the end of the arm to be trained in the ROI image according to the current position comprises: determining the neighborhood width and neighborhood height according to the shape of the arm to be trained, and establishing a rectangular region centred on the current position, with the neighborhood width as its width and the neighborhood height as its height, as the neighborhood of the end of the arm to be trained in the ROI image.
Preferably, the optical flow comprises a first optical flow in the X direction and a second optical flow in the Y direction, and the step of determining the actual velocity vector of the end of the arm to be trained according to the optical flow in the neighborhood comprises: averaging all the first optical flows in the neighborhood to obtain the first velocity in the X direction; averaging all the second optical flows in the neighborhood to obtain the second velocity in the Y direction; and obtaining the actual velocity vector from the first velocity and the second velocity.
Preferably, the step of determining the motion intention of the arm to be trained according to the actual velocity vector and the target velocity vector comprises: calculating the projection of the actual velocity vector onto the target velocity vector, and judging whether the projection meets a preset motion condition; if so, determining that the arm to be trained has produced the correct motion intention, and assisting the arm to be trained in rehabilitation training until it moves to the target position.
Preferably, the step of judging whether the projection meets the preset motion condition comprises: judging whether the direction of the projection is positive and whether the magnitude of the projection reaches a preset threshold; if so, judging that the projection meets the preset motion condition.
Preferably, the method further comprises: if the projection does not meet the preset motion condition, determining that the arm to be trained has produced an incorrect motion intention, and stopping assisting the arm to be trained in rehabilitation training.
The embodiment of the invention also provides an active training device for an upper limb rehabilitation robot, comprising an image acquisition module, an optical flow calculation module, a neighborhood determination module, an actual velocity determination module, a target velocity determination module and a motion intention determination module. The image acquisition module is used to obtain an RGB image containing the arm to be trained and extract a region of interest (ROI) from the RGB image based on a forward kinematics model to obtain an ROI image, where the ROI image covers all possible motion regions of the arm to be trained. The optical flow calculation module is used to calculate the dense optical flow of the ROI image based on the Farneback optical flow algorithm. The neighborhood determination module is used to obtain the current position of the end of the arm to be trained in the ROI image and determine the neighborhood of the end of the arm to be trained according to the current position. The actual velocity determination module is used to determine the actual velocity vector of the end of the arm to be trained according to the optical flow in the neighborhood. The target velocity determination module is used to obtain the target position of the end of the arm to be trained in the ROI image and determine the target velocity vector according to the target position and the current position. The motion intention determination module is used to determine the motion intention of the arm to be trained according to the actual velocity vector and the target velocity vector, and to assist the arm to be trained in rehabilitation training according to the motion intention until it reaches the target position.
In a third aspect, an embodiment of the present invention further provides an upper limb rehabilitation robot, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the method of the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect described above.
The embodiment of the invention has the following beneficial effects:
The embodiment of the invention provides an active training method and device for an upper limb rehabilitation robot, and an upper limb rehabilitation robot. A region of interest (ROI) is first extracted from an RGB image based on a forward kinematics model to obtain an ROI image; the dense optical flow of the ROI image is calculated based on the Farneback optical flow algorithm, and the neighborhood of the end of the arm to be trained in the ROI image is determined; the actual velocity vector of the end of the arm to be trained is determined according to the optical flow in the neighborhood; finally, the target velocity vector of the end of the arm to be trained is determined, the motion intention of the arm to be trained is determined according to the actual velocity vector and the target velocity vector, and the arm to be trained is assisted in rehabilitation training according to the motion intention until it reaches the target position. Because motion intention is recognized from an ordinary vision sensor rather than expensive EEG or EMG sensors, this training method reduces the cost of active training for the upper limb rehabilitation robot, improves the practicability of the active training mode, and facilitates popularization in practical applications.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of an upper limb rehabilitation robot according to an embodiment of the present invention;
fig. 2 is a flowchart of an active training method of an upper limb rehabilitation robot according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a positive kinematic model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a region of interest according to an embodiment of the present invention;
FIG. 5 is a schematic view of an end neighborhood of an arm to be trained according to an embodiment of the present invention;
FIG. 6 is a schematic view of an optical flow of an arm to be trained in a state of movement according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of an active training device of an upper limb rehabilitation robot according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of another upper limb rehabilitation robot according to the embodiment of the invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
For ease of understanding, the active training method of the upper limb rehabilitation robot provided by the embodiment of the present invention is first described in detail. The upper limb rehabilitation robot shown in fig. 1 comprises a display end 10 and a host end 20. The display end 10 comprises a display device 101 and a vision sensor 102: the display device 101 displays the ROI (Region Of Interest) image in real time, together with the optical flow near the end of the arm to be trained, the end's target position, and so on; the vision sensor 102 is fixed on the display end 10 and acquires RGB images containing the arm to be trained. The host end 20 comprises an upper limb exoskeleton 201 and a control host 202: the upper limb exoskeleton 201 provides assistive force/torque to help the patient complete rehabilitation training, and the control host 202 adjusts the assistive force/torque output by the upper limb exoskeleton 201.
In practical applications, the upper limb exoskeleton 201 supports left/right-hand switching and adjustment of the upper-arm and forearm lengths, and provides at least 4 degrees of freedom: shoulder abduction/adduction, shoulder flexion/extension, shoulder internal/external rotation, and elbow flexion/extension. These may be configured according to the actual situation.
Based on the above-mentioned upper limb rehabilitation robot, the embodiment of the invention provides an active training method of an upper limb rehabilitation robot, as shown in fig. 2, the method comprises the following steps:
Step S202, an RGB image containing the arm to be trained is obtained, and a region of interest (ROI) is extracted from the RGB image based on a forward kinematics model to obtain an ROI image;
Specifically, when the (incompletely paralyzed) patient initiates an active motion, the arm to be trained produces a slight displacement; the vision sensor captures the magnitude and direction of this displacement and generates an RGB image containing the arm to be trained. After acquiring the RGB image, the control host crops it based on the forward kinematics model to extract the ROI image, where the ROI image covers all possible motion regions of the arm to be trained.
For the forward kinematics model, as shown in FIG. 3, the upper limb is modeled as a four-degree-of-freedom two-link structure (upper arm + forearm), where the coordinate system XYZ is the camera coordinate system; SH = [x_0, y_0, z_0] is the position of the shoulder joint center in the camera coordinate system (determined by the mounting position); EL is the elbow joint center; WR is the wrist joint center, i.e. the end of the arm to be trained; L_u is the upper-arm length (the distance from SH to EL); L_l is the forearm length (the distance from EL to WR); and q_1, q_2, q_3, q_4 are the angles of shoulder abduction/adduction, shoulder flexion/extension, shoulder internal/external rotation, and elbow flexion/extension, respectively.
Based on the forward kinematics model described above, the position of the end of the arm to be trained (i.e. the wrist center WR) in the camera coordinate system can be determined as a function of [x_0, y_0, z_0], L_u, L_l and the joint angles q_1 through q_4.
The position is then transformed into the pixel coordinate system using the camera intrinsic matrix K, giving the pixel coordinate of the end of the arm to be trained in the RGB image:
[x_e, y_e, 1]^T = (1 / Z_wr) · K · [X_wr, Y_wr, Z_wr]^T
where [X_wr, Y_wr, Z_wr] is the position of the wrist center WR in the camera coordinate system.
Wherein [x_0, y_0, z_0] is the position of the shoulder joint center in the camera coordinate system (determined by the mounting position); L_u is the upper-arm length and L_l the forearm length; q_1, q_2, q_3, q_4 are the angles of shoulder abduction/adduction, shoulder flexion/extension, shoulder internal/external rotation, and elbow flexion/extension, respectively; [x_e, y_e] is the pixel coordinate of the wrist joint center in the RGB image pixel coordinate system; and K ∈ R^(3×3) is the camera intrinsic matrix (given by the manufacturer), which transforms between the camera coordinate system and the pixel coordinate system.
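As a minimal sketch of this intrinsic-matrix transformation: the function below applies the standard pinhole projection. The intrinsic values (fx = fy = 600, cx = 320, cy = 240) and the wrist position are made-up placeholders, not values from the patent.

```python
import numpy as np

def project_to_pixels(p_cam, K):
    # Pinhole model: [u, v, 1]^T is proportional to K @ [X, Y, Z]^T;
    # dividing by the depth Z yields the pixel coordinate.
    uvw = K @ np.asarray(p_cam, dtype=float)
    return uvw[:2] / uvw[2]

# Placeholder intrinsics (fx, fy, cx, cy); a real K comes from the camera vendor.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

wrist_cam = [0.1, -0.05, 0.8]              # hypothetical WR position in metres
xe, ye = project_to_pixels(wrist_cam, K)   # -> (395.0, 202.5)
```

The same call, applied over the admissible joint-angle ranges, yields the extreme pixel coordinates used to bound the ROI.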
Therefore, with the value ranges of q_1, q_2, q_3, q_4, L_u and L_l as constraints, an optimization problem can be set up to obtain the minimum value x_{e,min} and maximum value x_{e,max} of x_e, and the minimum value y_{e,min} and maximum value y_{e,max} of y_e. A rectangular region [x_{e,min} − w_e : x_{e,max} + w_e, y_{e,min} − h_e : y_{e,max} + h_e] is then taken on the RGB image as the ROI, such as the gray shaded region shown in fig. 4. The figure uses the standard pixel coordinate system, with the origin at the upper-left corner of the image, the X axis pointing right and the Y axis pointing down; x_{e,min} and x_{e,max} are the minimum and maximum X coordinates of the wrist joint center, y_{e,min} and y_{e,max} are the minimum and maximum Y coordinates of the wrist joint center, w_e is the width margin, and h_e is the height margin. Note that the width margin w_e and the height margin h_e may be set according to historical empirical or experimental values.
In summary, extracting the ROI image from the RGB image reduces the amount of computation and improves the algorithm's real-time performance. One drawback of optical flow methods is their slow running speed; extracting the ROI image is equivalent to cropping the original RGB image output by the vision sensor to the region covering all possible motions of the arm to be trained, and running the subsequent algorithm on the cropped image, i.e. the ROI image, reduces the computation and speeds up the optical flow method.
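The cropping step can be sketched as follows; the function name, margin values and frame size are illustrative, not taken from the patent, and the bounds are clamped to the image as a defensive assumption.

```python
import numpy as np

def extract_roi(rgb, xe_min, xe_max, ye_min, ye_max, we, he):
    # Crop the rectangle [xe_min-we : xe_max+we, ye_min-he : ye_max+he]
    # from the RGB frame, clamped to the image bounds.
    h, w = rgb.shape[:2]
    x0 = max(xe_min - we, 0)
    x1 = min(xe_max + we, w)
    y0 = max(ye_min - he, 0)
    y1 = min(ye_max + he, h)
    return rgb[y0:y1, x0:x1]

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a camera frame
roi = extract_roi(frame, 200, 500, 100, 400, we=20, he=15)
```

All later per-pixel work (optical flow, averaging) then runs on the much smaller `roi` array.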
Step S204, calculating dense optical flow of the ROI image based on Farneback optical flow algorithm;
At present, common visual motion detection algorithms mainly include the inter-frame difference method, the background difference method and optical flow methods. An optical flow method can detect independently moving targets without any prior knowledge of the scene, recovers complete information about the moving target, and is suitable for dynamic backgrounds. In addition, neural-network approaches are better suited to recognizing large-displacement actions and perform only moderately on small displacements; therefore, for the slight displacement produced by the limb when the patient initiates an active motion, the embodiment of the invention uses an optical flow method for visual motion detection.
Conventional optical flow methods mainly divide into dense and sparse optical flow. Dense optical flow is an image registration method that matches an image, or a specified region, point by point: it computes the offset of every point on the image to form a dense optical flow field, which supports pixel-level image registration, so the embodiment of the invention calculates the dense optical flow of the ROI image. Methods for computing dense optical flow mainly include the Brox, Farneback and TVL1 algorithms; the Farneback algorithm has good real-time performance, reaching about 30 frames per second even on a low- to mid-range GPU (Graphics Processing Unit), which meets real-time requirements, so the embodiment of the invention uses the Farneback optical flow algorithm to calculate the dense optical flow of the ROI image.
Specifically, for the ROI image, the optical flow calculation proceeds as follows: convert the ROI image to grayscale to obtain a first grayscale image ROI_gray; apply median filtering to ROI_gray to obtain a filtered second grayscale image ROI_gray_filted; then, based on the Farneback optical flow algorithm, calculate the optical flow of the current frame from the current frame's ROI_gray_filted and the previous frame's ROI_gray_filted. The optical flow has components in both the X and Y directions and is split into an X-direction flow flow_x and a Y-direction flow flow_y, where W = x_{e,max} − x_{e,min} + 2w_e and H = y_{e,max} − y_{e,min} + 2h_e, and flow_x and flow_y are matrices of dimension W × H.
Step S206, the current position of the tail end of the arm to be trained in the ROI image is obtained, and the neighborhood of the tail end of the arm to be trained in the ROI image is determined according to the current position;
Because the ROI image is obtained by cropping the original RGB image, the pixel coordinates of the end of the arm to be trained in the ROI image are offset from its pixel coordinates in the RGB image. The pixel coordinates of the end of the arm to be trained (i.e. the wrist joint center) in the ROI image can be determined as:
[x̂_e, ŷ_e] = [x_e − x_{e,min} + w_e, y_e − y_{e,min} + h_e]
wherein [x_e, y_e] is the pixel coordinate of the wrist joint center in the RGB image pixel coordinate system, w_e is the width margin, h_e is the height margin, and [x̂_e, ŷ_e] is the pixel coordinate of the end of the arm to be trained in the ROI image, i.e. the current position.
As shown in FIG. 5, the figure uses the standard pixel coordinate system, with the origin at the upper-left corner of the image, the X axis pointing right and the Y axis pointing down. x̂_e is the x coordinate and ŷ_e the y coordinate of the end of the arm to be trained in the ROI image's pixel coordinate system; w_n is the neighborhood width and h_n the neighborhood height, both of which can be chosen according to the shape of the arm. Based on the current position [x̂_e, ŷ_e] of the end of the arm to be trained in the ROI image, the neighborhood is determined as follows: determine the neighborhood width w_n and neighborhood height h_n according to the shape of the arm to be trained, then establish a rectangle of width w_n and height h_n centred on the current position [x̂_e, ŷ_e] as the neighborhood of the end of the arm to be trained in the ROI image, i.e. the rectangular region in fig. 5. Note that in the illustrated rest state the optical flow of every pixel in the neighborhood is approximately zero, so the flow field looks like a point.
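The neighborhood construction can be sketched as below; the clamping to the ROI bounds is an added assumption to keep the rectangle valid near image edges, and all numeric values are illustrative.

```python
def end_neighborhood(xe_hat, ye_hat, wn, hn, roi_w, roi_h):
    # Rectangular neighborhood of the arm end, centred on the current
    # position (xe_hat, ye_hat) and clamped to the ROI image bounds.
    # wn, hn are the neighborhood width/height chosen from the arm's shape.
    x0 = max(int(xe_hat - wn // 2), 0)
    y0 = max(int(ye_hat - hn // 2), 0)
    x1 = min(x0 + wn, roi_w)
    y1 = min(y0 + hn, roi_h)
    return x0, y0, x1, y1

rect = end_neighborhood(150, 80, wn=40, hn=30, roi_w=340, roi_h=330)  # -> (130, 65, 170, 95)
```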
Step S208, determining an actual speed vector of the tail end of the arm to be trained according to the optical flow in the neighborhood;
For the optical flow when the end of the arm to be trained is moving, as shown in fig. 6, [V_{e,x}, V_{e,y}] is the actual velocity vector of the end of the arm to be trained, [x̂_d, ŷ_d] is the target position of the end of the upper limb, and [V_{d,x}, V_{d,y}] is the target movement velocity of the end of the arm to be trained, i.e. the target velocity vector. When the arm to be trained makes a small movement to the left, some pixels in the neighborhood produce a component in the positive X direction, which appears as a right-pointing arrow (the direction is opposite to the actual movement because of mirroring). Since the optical flow comprises a first optical flow flow_x in the X direction and a second optical flow flow_y in the Y direction, the actual velocity vector [V_{e,x}, V_{e,y}] of the end of the arm to be trained can be obtained by averaging the optical flow in the neighborhood of the end of the arm to be trained.
Specifically: average all the first optical flows in the neighborhood to obtain the first velocity V_{e,x} in the X direction; average all the second optical flows in the neighborhood to obtain the second velocity V_{e,y} in the Y direction; and form the actual velocity vector from the first and second velocities.
Wherein the averages are computed as:
V_{e,x} = (1 / (w_n · h_n)) · Σ_{[i,j]∈S} flow_x[i, j]
V_{e,y} = (1 / (w_n · h_n)) · Σ_{[i,j]∈S} flow_y[i, j]
where [i, j] ∈ S ranges over all pixel coordinates in the neighborhood S, flow_x[i, j] is the element in row i, column j of matrix flow_x, flow_y[i, j] is the element in row i, column j of matrix flow_y, w_n is the neighborhood width, and h_n is the neighborhood height.
The optical flow in the vicinity of the tail end of the arm to be trained is averaged instead of the optical flow of a single pixel at the tail end of the arm to be trained, so that the actual velocity vector of the tail end of the arm to be trained is obtained, and the problem that the obtained velocity vector cannot reflect the actual motion intention due to the influence of noise on the optical flow of the single pixel is avoided. Therefore, the influence of noise can be reduced by averaging the optical flow in the vicinity of the end of the arm to be trained, and the calculation accuracy of the actual velocity vector is improved.
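A sketch of this averaging step on synthetic flow matrices; the names flow_x/flow_y follow the text, while the rectangle tuple format and function name are this sketch's own convention.

```python
import numpy as np

def actual_velocity(flow_x, flow_y, rect):
    # Average the dense optical flow over the end neighborhood to get the
    # actual velocity vector [Ve_x, Ve_y]; the mean suppresses per-pixel
    # noise compared with reading the flow of a single pixel.
    x0, y0, x1, y1 = rect
    ve_x = float(np.mean(flow_x[y0:y1, x0:x1]))
    ve_y = float(np.mean(flow_y[y0:y1, x0:x1]))
    return ve_x, ve_y

flow_x = np.full((100, 100), 1.5)   # synthetic flow: uniform 1.5 px/frame in X
flow_y = np.zeros((100, 100))
ve = actual_velocity(flow_x, flow_y, (10, 10, 50, 50))   # -> (1.5, 0.0)
```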
Step S210, obtaining a target position of the tail end of the arm to be trained in the ROI image, and determining a target speed vector of the tail end of the arm to be trained according to the target position and the current position;
Specifically, the target velocity vector [V_{d,x}, V_{d,y}] is calculated as:
[V_{d,x}, V_{d,y}] = [(x̂_d − x̂_e) / T, (ŷ_d − ŷ_e) / T]
wherein [x̂_d, ŷ_d] is the target position of the end of the arm to be trained in the ROI image, i.e. the upper-limb target position shown in figures 5 or 6, [x̂_e, ŷ_e] is the current position, and T is the preset movement duration. Note that the target position [x̂_d, ŷ_d] is the position corresponding to the current training task in the target-guided rehabilitation training provided by the upper limb rehabilitation robot; when the current training task is completed, the control host generates the target position for the next training task, until all tasks are completed or the training time ends.
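The target-velocity computation is simply displacement over the preset duration; a sketch with made-up positions and T:

```python
def target_velocity(target_pos, current_pos, T):
    # Target velocity vector: displacement to the target position divided
    # by the preset movement duration T.
    return ((target_pos[0] - current_pos[0]) / T,
            (target_pos[1] - current_pos[1]) / T)

vd = target_velocity((200.0, 50.0), (150.0, 80.0), T=2.0)   # -> (25.0, -15.0)
```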
Step S212, determining the movement intention of the arm to be trained according to the actual speed vector and the target speed vector, and assisting the arm to be trained to perform rehabilitation training according to the movement intention until the arm to be trained moves to the target position.
The method comprises the steps of calculating projection of an actual speed vector on a target speed vector, judging whether the projection meets preset motion conditions, if so, determining that the arm to be trained generates correct motion intention, and assisting the arm to be trained to perform rehabilitation training until the arm to be trained moves to the target position.
Wherein, the projection V of the actual speed vector on the target speed vector is calculated according to the following formula:

V = (Ve,x·Vd,x + Ve,y·Vd,y) / √(Vd,x² + Vd,y²)

Where [Vd,x, Vd,y] represents the target speed vector and [Ve,x, Ve,y] represents the actual speed vector.
For the projection V, judging whether the projection meets the preset motion condition comprises judging whether the direction of the projection is positive and whether the amplitude of the projection reaches a preset threshold value; if so, the projection is judged to meet the preset motion condition. When the projection direction is positive and the amplitude reaches the preset threshold, it is determined that the arm to be trained generates a motion intention consistent with the target direction, namely a correct motion intention. The control host then controls the exoskeleton to provide an auxiliary force/moment so that the arm to be trained moves to the target position according to the target speed vector [Vd,x, Vd,y]. When the target position is reached, the current task is considered to be completed, and the control host generates the next target position until all tasks are completed or the training time is over.
Further, the method further comprises the step of determining that the arm to be trained generates the wrong movement intention if the projection does not meet the preset movement condition, wherein at the moment, the control host controls the exoskeleton to remain static, namely stopping assisting the arm to be trained in rehabilitation training, until the arm to be trained generates the correct movement intention.
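The projection test and the resulting assist-or-hold decision can be sketched as follows; the function names and the example threshold are illustrative assumptions, not values from the patent:

```python
import math

def projection_on_target(actual, target):
    """Scalar projection V of the actual speed vector onto the target
    speed vector: V = (Ve,x*Vd,x + Ve,y*Vd,y) / |[Vd,x, Vd,y]|."""
    ve_x, ve_y = actual
    vd_x, vd_y = target
    return (ve_x * vd_x + ve_y * vd_y) / math.hypot(vd_x, vd_y)

def has_correct_intention(actual, target, threshold):
    """True when the projection direction is positive and its amplitude
    reaches the preset threshold -- the condition under which the
    exoskeleton assists; otherwise the exoskeleton remains static."""
    v = projection_on_target(actual, target)
    return v > 0 and v >= threshold
```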
According to the upper limb rehabilitation robot active training method provided by the embodiment of the invention, after a patient initiates an active movement, the limb generates a tiny displacement, and the movement intention of the patient is identified to realize the active training through capturing the size and the direction of the displacement by the RGB image, so that the active training cost of the upper limb rehabilitation robot is reduced, the practicability of an active training mode is improved, and the upper limb rehabilitation robot active training method is convenient to popularize and implement in practical application.
In summary, the upper limb rehabilitation robot active training method has the following advantages: ① The active movement intention of the upper limb of a patient can be identified based on a common RGB camera; compared with methods using electroencephalogram or electromyographic signals, this greatly reduces the price of the required sensor, which is beneficial to reducing the active training cost of the upper limb rehabilitation robot and improving the practicability of the active training mode. ② By filtering the RGB image output by the visual sensor and cutting out a region of interest (ROI) based on the positive kinematic model, the subsequent algorithm is performed on the ROI image only, so that the calculated amount can be obviously reduced and the practicability of the optical flow method improved. ③ Averaging the optical flow in the vicinity of the tail end of the arm to be trained reduces the influence of noise and improves the accuracy of upper limb active movement intention identification, which is convenient to popularize and implement in practical application.
Corresponding to the embodiment of the method, the embodiment of the invention also provides an active training device of the upper limb rehabilitation robot, which comprises an image acquisition module 71, an optical flow calculation module 72, a neighborhood determination module 73, an actual speed determination module 74, a target speed determination module 75 and a movement intention determination module 76, wherein the functions of the modules are as follows:
The image acquisition module 71 is configured to acquire an RGB image containing an arm to be trained, and extract a region of interest ROI from the RGB image based on a positive kinematic model to obtain an ROI image, where the ROI image covers all the motion regions of the arm to be trained;
an optical flow calculation module 72 for calculating a dense optical flow of the ROI image based on Farneback optical flow algorithm;
A neighborhood determining module 73, configured to obtain a current position of the end of the arm to be trained in the ROI image, and determine a neighborhood of the end of the arm to be trained in the ROI image according to the current position;
An actual speed determining module 74, configured to determine an actual speed vector of the end of the arm to be trained according to the optical flow in the neighborhood;
A target speed determining module 75, configured to obtain a target position of the end of the arm to be trained in the ROI image, and determine a target speed vector of the end of the arm to be trained according to the target position and the current position;
the movement intention determining module 76 is configured to determine a movement intention of the arm to be trained according to the actual speed vector and the target speed vector, and assist the arm to be trained to perform rehabilitation training according to the movement intention until the arm to be trained moves to the target position.
According to the upper limb rehabilitation robot active training device provided by the embodiment of the invention, after a patient initiates an active movement, the limb generates a tiny displacement, and the movement intention of the patient is identified to realize the active training through capturing the size and the direction of the displacement by the RGB image, so that the active training cost of the upper limb rehabilitation robot is reduced, the practicability of an active training mode is improved, and the upper limb rehabilitation robot active training device is convenient to popularize and implement in practical application.
Preferably, the optical flow calculation module 72 is further configured to perform gray level conversion processing on the ROI image to obtain a first gray level image, perform median filtering processing on the first gray level image to obtain a filtered second gray level image, and calculate, based on Farneback optical flow algorithm, optical flow of the second gray level image of the current frame according to the second gray level image of the current frame and the second gray level image of the previous frame.
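The preprocessing in this module (grayscale conversion followed by median filtering) can be sketched in plain Python as below; the ITU-R BT.601 luma weights and the 3×3 window with unchanged borders are assumptions, since these details are not fixed in the text:

```python
def to_gray(rgb_image):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to a
    grayscale image using the ITU-R BT.601 luma weights (an assumed
    choice; any standard conversion would serve)."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def median_filter_3x3(gray):
    """3x3 median filter; border pixels are left unchanged (assumed
    border handling). Median filtering suppresses salt-and-pepper
    noise before the Farneback optical flow computation."""
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(gray[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 sorted values
    return out
```

In an OpenCV-based implementation these two steps would typically correspond to cv2.cvtColor and cv2.medianBlur, with cv2.calcOpticalFlowFarneback then applied to two consecutive filtered frames.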
Preferably, the neighborhood determining module 73 is further configured to determine a neighborhood width and a neighborhood height according to the shape of the arm to be trained, and set up a rectangular area with the width being the neighborhood width and the height being the neighborhood height with the current position as the center, as a neighborhood of the end of the arm to be trained in the ROI image.
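A minimal sketch of this neighborhood construction (clipping to the ROI bounds is an added assumption, for the case where the rectangle would extend past the image edge):

```python
def neighborhood_rect(current_pos, width, height, roi_w, roi_h):
    """Rectangular neighborhood of the given width/height centred on
    the current position of the arm end, clipped to the ROI bounds.
    Returns (x0, y0, x1, y1) with x1/y1 exclusive."""
    cx, cy = current_pos
    x0 = max(0, cx - width // 2)
    y0 = max(0, cy - height // 2)
    x1 = min(roi_w, x0 + width)
    y1 = min(roi_h, y0 + height)
    return x0, y0, x1, y1
```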
Preferably, the optical flow includes a first optical flow in the X direction and a second optical flow in the Y direction, and the actual speed determining module 74 is further configured to average all the first optical flows in the neighborhood to obtain a first speed in the X direction, average all the second optical flows in the neighborhood to obtain a second speed in the Y direction, and obtain an actual speed vector according to the first speed and the second speed.
Preferably, the movement intention determining module 76 is further configured to calculate a projection of the actual velocity vector onto the target velocity vector, determine whether the projection meets a preset movement condition, if so, determine that the arm to be trained generates a correct movement intention, and assist the arm to be trained in performing rehabilitation training until the arm to be trained moves to the target position.
Preferably, judging whether the projection meets the preset motion condition comprises judging whether the direction of the projection is positive and whether the amplitude of the projection reaches a preset threshold value, and if so, judging that the projection meets the preset motion condition.
Preferably, the device is further configured to determine, if the projection does not meet the preset movement condition, that the arm to be trained generates a wrong movement intention, and to stop assisting the arm to be trained in rehabilitation training.
The upper limb rehabilitation robot active training device provided by the embodiment of the invention has the same technical characteristics as the upper limb rehabilitation robot active training method provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
The embodiment of the invention also provides an upper limb rehabilitation robot, which comprises a processor and a memory, wherein the memory stores machine executable instructions which can be executed by the processor, and the processor executes the machine executable instructions to realize the active training method of the upper limb rehabilitation robot.
Referring to fig. 8, the upper limb rehabilitation robot includes a processor 100 and a memory 101, the memory 101 stores machine executable instructions executable by the processor 100, and the processor 100 executes the machine executable instructions to implement the above-mentioned upper limb rehabilitation robot active training method.
Further, the upper limb rehabilitation robot shown in fig. 8 further comprises a bus 102 and a communication interface 103, and the processor 100, the communication interface 103 and the memory 101 are connected through the bus 102.
The memory 101 may include a high-speed random access memory (RAM, Random Access Memory), and may further include a non-volatile memory (non-volatile memory), such as at least one disk memory. The communication connection between the system network element and at least one other network element is implemented via at least one communication interface 103 (which may be wired or wireless), and may use the internet, a wide area network, a local area network, a metropolitan area network, etc. Bus 102 may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, or an EISA (Extended Industry Standard Architecture) bus, among others. The buses may be classified into address buses, data buses, control buses, and the like. For ease of illustration, only one bi-directional arrow is shown in FIG. 8, but this does not mean that there is only one bus or only one type of bus.
The processor 100 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 100 or by instructions in the form of software. The processor 100 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, registers, or other storage media well known in the art. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and, in combination with its hardware, performs the steps of the method of the previous embodiment.
The embodiment also provides a machine-readable storage medium, wherein the machine-readable storage medium stores machine-executable instructions, and when the machine-executable instructions are called and executed by a processor, the machine-executable instructions cause the processor to realize the active training method of the upper limb rehabilitation robot.
The active training method and device for the upper limb rehabilitation robot and the computer program product of the upper limb rehabilitation robot provided by the embodiment of the invention comprise a computer readable storage medium storing program codes, the instructions included in the program codes can be used for executing the method described in the method embodiment, and specific implementation can be seen in the method embodiment and will not be repeated here.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected, mechanically connected, electrically connected, directly connected, indirectly connected via an intermediate medium, or in communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes a U disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, an optical disk, or other various media capable of storing program codes.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It should be noted that the foregoing embodiments are merely illustrative embodiments of the present invention, and not restrictive, and the scope of the invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that modifications, variations, or substitutions of some of the technical features of the foregoing embodiments may still be readily contemplated; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

CN202310420013.3A2023-04-142023-04-14 Active training method and device for upper limb rehabilitation robot and upper limb rehabilitation robotActiveCN116492644B (en)


Publications (2)

Publication NumberPublication Date
CN116492644A CN116492644A (en)2023-07-28
CN116492644B (en)2025-05-06

Family

ID=87319570


Citations (2)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN105536205A (en)*2015-12-082016-05-04天津大学Upper limb training system based on monocular video human body action sensing
CN111631726A (en)*2020-06-012020-09-08深圳华鹊景医疗科技有限公司Upper limb function evaluation device and method and upper limb rehabilitation training system and method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CA2820241C (en)*2012-06-132020-01-14Robert G. HilkesAn apparatus and method for enhancing human visual performance in a head worn video system
EP3017761B1 (en)*2014-11-062021-07-21Fundación Tecnalia Research & InnovationSystem for functional balance assessment
CN112184767A (en)*2020-09-222021-01-05深研人工智能技术(深圳)有限公司Method, device, equipment and storage medium for tracking moving object track
CN112891137A (en)*2021-01-212021-06-04深圳华鹊景医疗科技有限公司Upper limb rehabilitation robot system, robot control method and device
CN114028156B (en)*2021-10-282024-07-05深圳华鹊景医疗科技有限公司Rehabilitation training method and device and rehabilitation robot
CN114005073B (en)*2021-12-242022-04-08东莞理工学院 Upper limb mirror rehabilitation training, identification method and device
CN114964233A (en)*2022-05-162022-08-30中国科学技术大学Meta-universe system and method for underwater augmented reality wearable robot
CN115937895B (en)*2022-11-112023-09-19南通大学Speed and strength feedback system based on depth camera




Legal Events

DateCodeTitleDescription
PB01Publication
SE01Entry into force of request for substantive examination
GR01Patent grant
