Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, one objective of the invention is to provide a method for evaluating the training input degree of a patient based on multi-modal information, which evaluates the input degree of the patient in rehabilitation training based on multi-modal information covering motion, perception, cognition and emotion, compensates for the subjectivity of evaluating the input state with a scale method, and makes the evaluation result more objective and accurate.
It is another objective of the invention to provide a device for evaluating the training input degree of a patient based on multi-modal information.
In order to achieve the above object, an embodiment of an aspect of the present invention provides a method for evaluating a training input level of a patient based on multi-modal information, including:
acquiring an electromyographic signal and a movement speed of a patient, and calculating the movement input degree of the patient according to the electromyographic signal and the movement speed;
acquiring the focus position of the eyes of a patient in the training process, and calculating the perception input degree of the patient according to the distance between the focus position of the eyes of the patient and a moving object on a screen used for training;
acquiring an electroencephalogram signal of a frontal lobe area of a patient, and calculating the cognitive input degree of the patient according to the electroencephalogram signal;
acquiring a facial expression image of a patient in a training process, extracting and identifying emotions in the facial expression image through image analysis software to obtain duration of positive emotion and duration of negative emotion of the patient in the training process, and calculating the emotion input degree of the patient according to the duration of the positive emotion and the duration of the negative emotion of the patient;
and comprehensively evaluating according to the exercise input degree, the perception input degree, the cognition input degree and the emotion input degree to obtain the training input degree of the patient.
The method for evaluating the training input degree of a patient based on multi-modal information according to the embodiment of the invention evaluates the input degree of the patient in rehabilitation training based on multi-modal information covering motion, perception, cognition and emotion, compensates for the subjectivity of evaluating the input state with a scale method, and makes the evaluation result more objective and accurate. This helps the rehabilitation doctor adjust the training mode during rehabilitation training, so that the patient maintains a high input degree throughout the rehabilitation training process.
In addition, the method for evaluating the training input degree of the patient based on the multi-modal information according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, calculating the exercise input degree of the patient according to the electromyographic signal and the movement speed includes: calculating the exercise input degree of the patient according to the ratio of the root mean square value of the electromyographic signal to the movement speed, where the formula is as follows:
E_m = EMG_RMS / v
where E_m is the exercise input degree, EMG_RMS is the root mean square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the average movement speed of the patient over the same movement cycle.
Further, in an embodiment of the present invention, the perception input degree calculation formula is:
E_p = d(gaze, screen changes)
where E_p is the perception input degree, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.
Further, in an embodiment of the present invention, calculating the cognitive input degree of the patient according to the electroencephalogram signal includes:
decomposing the electroencephalogram signal to obtain the alpha signal, the beta signal and the theta signal of the frontal lobe area of the patient (a decreased alpha signal and increased beta and theta signals reflecting cognitive engagement), and calculating the cognitive input degree according to the ratio of the beta signal to the sum of the alpha and theta signals, where the specific formula is as follows:
E_c = β / (α + θ)
where E_c is the cognitive input degree, α is the alpha signal, β is the beta signal and θ is the theta signal of the frontal lobe area.
Further, in an embodiment of the present invention, the emotion input degree calculation formula is:
E_e = T_positive / T_negative
where E_e is the emotion input degree, T_positive is the duration for which positive emotion is the patient's dominant emotion, and T_negative is the duration for which negative emotion is the patient's dominant emotion.
In order to achieve the above object, another embodiment of the present invention provides an apparatus for evaluating a training input level of a patient based on multi-modal information, comprising:
the first calculation module is used for acquiring the electromyographic signals and the movement speed of the patient and calculating the movement input degree of the patient according to the electromyographic signals and the movement speed;
the second calculation module is used for acquiring the focus position of the eyes of the patient in the training process and calculating the perception input degree of the patient according to the distance between the focus position of the eyes of the patient and a moving object on a screen used for training;
the third calculation module is used for collecting electroencephalogram signals of frontal lobe areas of the patients and calculating cognitive input degrees of the patients according to the electroencephalogram signals;
the fourth calculation module is used for acquiring facial expression images of a patient in a training process, extracting and identifying emotions in the facial expression images through image analysis software to obtain the duration time of positive emotions and the duration time of negative emotions of the patient in the training process, and calculating the emotion input degree of the patient according to the duration time of the positive emotions and the duration time of the negative emotions of the patient;
and the evaluation module is used for carrying out comprehensive evaluation according to the exercise input degree, the perception input degree, the cognition input degree and the emotion input degree to obtain the training input degree of the patient.
The device for evaluating the training input degree of a patient based on multi-modal information according to the embodiment of the invention evaluates the input degree of the patient in rehabilitation training based on multi-modal information covering motion, perception, cognition and emotion, compensates for the subjectivity of evaluating the input state with a scale method, and makes the evaluation result more objective and accurate. This helps the rehabilitation doctor adjust the training mode during rehabilitation training, so that the patient maintains a high input degree throughout the rehabilitation training process.
In addition, the device for evaluating the training input degree of the patient based on the multi-modal information according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, the first calculation module is specifically configured to calculate the exercise input degree of the patient according to the ratio of the root mean square value of the electromyographic signal to the movement speed, where the formula is as follows:
E_m = EMG_RMS / v
where E_m is the exercise input degree, EMG_RMS is the root mean square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the average movement speed of the patient over the same movement cycle.
Further, in an embodiment of the present invention, the perception input degree calculation formula is:
E_p = d(gaze, screen changes)
where E_p is the perception input degree, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.
Further, in an embodiment of the present invention, calculating the cognitive input degree of the patient according to the electroencephalogram signal includes:
decomposing the electroencephalogram signal to obtain the alpha signal, the beta signal and the theta signal of the frontal lobe area of the patient (a decreased alpha signal and increased beta and theta signals reflecting cognitive engagement), and calculating the cognitive input degree according to the ratio of the beta signal to the sum of the alpha and theta signals, where the specific formula is as follows:
E_c = β / (α + θ)
where E_c is the cognitive input degree, α is the alpha signal, β is the beta signal and θ is the theta signal of the frontal lobe area.
Further, in an embodiment of the present invention, the emotion input degree calculation formula is:
E_e = T_positive / T_negative
where E_e is the emotion input degree, T_positive is the duration for which positive emotion is the patient's dominant emotion, and T_negative is the duration for which negative emotion is the patient's dominant emotion.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative, are intended to explain the invention, and are not to be construed as limiting the invention.
A method and apparatus for assessing a training input level of a patient based on multi-modal information according to an embodiment of the present invention will be described with reference to the accompanying drawings.
A method for evaluating a training input level of a patient based on multi-modal information according to an embodiment of the present invention will be described first with reference to the accompanying drawings.
FIG. 1 is a flow diagram of a method for assessing a patient's training input based on multimodal information, in accordance with one embodiment of the present invention.
As shown in FIG. 1, the method for evaluating the training input degree of a patient based on multi-modal information includes the following steps:
and step S1, acquiring the electromyographic signals and the movement speed of the patient, and calculating the movement input degree of the patient according to the electromyographic signals and the movement speed.
The exercise input degree (motor engagement, E_m) is defined as the state in which the patient is actively and effortfully exercising. In rehabilitation training, the exercise state is generally monitored and characterized by the electromyographic signal (EMG), which directly reflects how much effort the patient puts into the movement. The root mean square (RMS) value of the EMG signal is used in rehabilitation training to evaluate the patient's exercise input state during training; because it characterizes the energy of the signal, the RMS value is considered the most meaningful measure of EMG amplitude. Since movement speed is an important factor influencing EMG amplitude, the patient's engagement at the motor level is characterized by the ratio of the RMS value of the electromyographic signal to the movement speed.
During rehabilitation training, the patient wears electroencephalogram equipment and electromyogram equipment to measure the electroencephalogram and electromyographic signals throughout the training process. The exercise input degree of the patient is calculated according to the ratio of the root mean square value of the measured electromyographic signal of the training limb to the movement speed, where the formula is as follows:
E_m = EMG_RMS / v
where E_m is the exercise input degree, EMG_RMS is the root mean square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the average movement speed of the patient over the same movement cycle.
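As a minimal illustrative sketch only (the embodiment does not prescribe an implementation), the ratio could be computed per movement cycle as follows; the function name, array names and units are assumptions:

```python
import numpy as np

def motor_engagement(emg, velocity):
    """Exercise input degree E_m = RMS(EMG) / v over one movement cycle.

    emg      -- 1-D array of EMG samples from the training limb in one movement cycle
    velocity -- 1-D array of movement-speed samples over the same cycle
    (Signal names, windowing and units are illustrative assumptions.)
    """
    emg = np.asarray(emg, dtype=float)
    rms = np.sqrt(np.mean(emg ** 2))                       # root mean square of the EMG
    v_mean = float(np.mean(np.asarray(velocity, float)))   # average movement speed
    return rms / v_mean
```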
Step S2, acquiring the focal position of the patient's eyes during the training process, and calculating the perception input degree of the patient according to the distance between the focal position of the patient's eyes and the moving object on the screen used for training.
The perception input degree is defined as the state in which attention is focused on perceiving the training task. For visual interaction, eye-trajectory tracking is widely used to assess a subject's attention, with evaluation indexes such as the number of times the eye focus remains immobile or the number of times the eye focus falls on areas outside the screen. However, such indexes cannot capture a decline in perception input during training: even when a training subject looks at the screen, he or she may not be engaged in the training. Indexes such as the eye-focus moving speed and the total eye-focus displacement have also been used to evaluate perception input and can quantify the patient's visual attention in rehabilitation training, but they do not allow real-time evaluation during training. In the embodiment of the invention, the distance between the focal position of the training subject's eyes and the moving object on the screen is used to characterize the perception input degree, which is calculated as follows:
E_p = d(gaze, screen changes)
where E_p is the perception input degree, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.
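For illustration only, and under the assumption that the gaze focus and the on-screen object are expressed in the same screen coordinates (e.g. pixels), the distance could be computed as:

```python
import math

def perception_engagement(gaze_xy, target_xy):
    """Perception input degree E_p = d(gaze, screen changes): Euclidean distance
    between the eye-focus position and the moving on-screen object, both given as
    (x, y) in the same screen coordinates (an assumed convention).
    A smaller distance indicates attention closer to the training target.
    """
    dx = gaze_xy[0] - target_xy[0]
    dy = gaze_xy[1] - target_xy[1]
    return math.hypot(dx, dy)
```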
Step S3, acquiring the electroencephalogram signal of the frontal lobe area of the patient, and calculating the cognitive input degree of the patient according to the electroencephalogram signal.
Further, calculating the cognitive input degree of the patient according to the electroencephalogram signal includes:
decomposing the electroencephalogram signal to obtain the alpha signal, the beta signal and the theta signal of the frontal lobe area of the patient (a decreased alpha signal and increased beta and theta signals reflecting cognitive engagement), and calculating the cognitive input degree according to the ratio of the beta signal to the sum of the alpha and theta signals, where the specific formula is as follows:
E_c = β / (α + θ)
where E_c is the cognitive input degree, α is the alpha signal, β is the beta signal, and θ is the theta signal.
The cognitive input degree (cognitive engagement) is defined as the degree of concentration when completing a cognitive task. The cognitive concentration and cognitive load of a subject are generally assessed by monitoring the electroencephalogram (EEG).
The electroencephalographic variables used to monitor cognitive concentration include a decreased alpha signal, an increased beta signal, an increased theta signal, and the ratios between them. The most widely used measure of cognitive input is currently calculated from the energies of the alpha, beta and theta frequency bands, namely the ratio of the beta-band energy to the sum of the alpha-band and theta-band energies. According to the current understanding of electroencephalogram signals, the beta band dominates when a person is attentive or alert, whereas the alpha and theta (or even lower) bands dominate when a person is at rest or asleep, so this ratio can represent a person's degree of attentional input. Because the frontal lobe of the cerebral cortex is responsible for attention, mental state, motion planning and the like, the electroencephalogram signal in the embodiment of the invention is taken from the patient's frontal lobe and is detected by the electroencephalogram equipment worn by the patient.
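As an illustrative sketch only, the band energies could be estimated from a frontal-lobe EEG segment and combined as described above; the band limits (theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz), the Welch spectral estimator and all names below are assumptions, not taken from the embodiment:

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, f_lo, f_hi):
    """Approximate power of the EEG segment `eeg` (sampling rate `fs` Hz) in [f_lo, f_hi) Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * int(fs)))
    band = (freqs >= f_lo) & (freqs < f_hi)
    return float(np.sum(psd[band]) * (freqs[1] - freqs[0]))  # rectangular integration of the PSD

def cognitive_engagement(eeg, fs):
    """Cognitive input degree E_c = beta / (alpha + theta) from frontal-lobe EEG band energies.
    Band boundaries are conventional values assumed here; the text does not specify them.
    """
    theta = band_power(eeg, fs, 4.0, 8.0)
    alpha = band_power(eeg, fs, 8.0, 13.0)
    beta = band_power(eeg, fs, 13.0, 30.0)
    return beta / (alpha + theta)
```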
Step S4, collecting facial expression images of the patient during the training process, extracting and identifying the emotions in the facial expression images through image analysis software to obtain the duration of the positive emotion and the duration of the negative emotion of the patient during training, and calculating the emotion input degree of the patient according to the duration of the positive emotion and the duration of the negative emotion.
It can be understood that the emotion input state is monitored based on the subject's facial expression and is expressed by the ratio of the duration for which positive emotion is the dominant emotion to the duration for which negative emotion is the dominant emotion.
An improvement in the patient's motor function is associated with positive emotion, so one of the goals of rehabilitation training is to mobilize the patient's positive emotions. The emotion input degree is defined as the degree of emotional involvement in rehabilitation training. If the rehabilitation training can affect the patient's emotion, this indicates that the patient is emotionally invested in the training. If the patient takes part in the rehabilitation training with emotional involvement, different events in the training, such as different game elements or the completion or non-completion of a task, will have an impact on the patient's emotion.
Therefore, during the patient's training, the facial expression of the patient is monitored and facial expression images are collected. The emotion of the patient can be identified and extracted from the collected facial expression images using InSight software, the duration for which positive emotion is the patient's dominant emotion and the duration for which negative emotion is the dominant emotion are recorded during training, and the emotion input degree of the patient is calculated from these two durations according to the following formula:
E_e = T_positive / T_negative
where E_e is the emotion input degree, T_positive is the duration for which positive emotion is the patient's dominant emotion, and T_negative is the duration for which negative emotion is the patient's dominant emotion.
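As a sketch under the assumption that the analysis software outputs one dominant-emotion label per analysed frame (the label scheme and frame duration below are hypothetical), the ratio could be computed as:

```python
def emotion_engagement(dominant_emotions, frame_duration_s=1.0):
    """Emotion input degree E_e = T_positive / T_negative.

    dominant_emotions -- per-frame dominant-emotion labels produced by the facial
                         expression analysis software, e.g. ['positive', 'neutral', ...]
    frame_duration_s  -- duration represented by each label, in seconds (assumed)
    """
    t_positive = dominant_emotions.count('positive') * frame_duration_s
    t_negative = dominant_emotions.count('negative') * frame_duration_s
    if t_negative == 0:
        return float('inf')   # no dominant negative emotion was observed during training
    return t_positive / t_negative
```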
Step S5, performing a comprehensive evaluation according to the exercise input degree, the perception input degree, the cognitive input degree and the emotion input degree to obtain the training input degree of the patient.
As shown in FIG. 2, the exercise input degree, the perception input degree, the cognitive input degree and the emotion input degree obtained by the above calculations are evaluated comprehensively, so that the patient's input degree during training is evaluated more accurately and objectively. This enables the rehabilitation doctor to adjust the training mode during rehabilitation training so that the patient maintains a high input degree throughout the process.
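The embodiment does not prescribe a specific fusion rule for the comprehensive evaluation; the sketch below assumes a simple weighted sum over sub-scores that have already been normalised to a common scale, with hypothetical equal weights:

```python
def training_engagement(e_m, e_p, e_c, e_e, weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine the four sub-scores into an overall training input degree.

    The inputs are assumed to be normalised so that larger values mean higher
    engagement; in particular E_p is a gaze-to-target distance, so in practice it
    would be inverted or rescaled before being combined in this way.
    """
    w_m, w_p, w_c, w_e = weights
    return w_m * e_m + w_p * e_p + w_c * e_c + w_e * e_e
```

Other fusion schemes, such as clinician-defined weights, could be substituted without changing the preceding definitions of the four sub-scores.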
According to the method for evaluating the training input degree of a patient based on multi-modal information provided by the embodiment of the invention, the input degree of the patient in rehabilitation training is evaluated based on multi-modal information covering motion, perception, cognition and emotion, the subjectivity of evaluating the input state with a scale method is compensated for, and the evaluation result is more objective and accurate. This helps the rehabilitation doctor adjust the training mode during rehabilitation training, so that the patient maintains a high input degree throughout the rehabilitation training process.
Next, an apparatus for evaluating a training input level of a patient based on multi-modal information according to an embodiment of the present invention will be described with reference to the accompanying drawings.
FIG. 3 is a schematic diagram of an apparatus for assessing a patient's training input based on multimodal information, according to one embodiment of the present invention.
As shown in FIG. 3, the apparatus for evaluating the training input degree of a patient based on multi-modal information includes: a first calculation module 100, a second calculation module 200, a third calculation module 300, a fourth calculation module 400 and an evaluation module 500.
The first calculation module 100 is configured to collect the electromyographic signal and the movement speed of the patient, and calculate the exercise input degree of the patient according to the electromyographic signal and the movement speed.
The second calculation module 200 is configured to acquire the focal position of the patient's eyes during the training process, and calculate the perception input degree of the patient according to the distance between the focal position of the patient's eyes and the moving object on the screen used for training.
The third calculation module 300 is configured to collect the electroencephalogram signal of the frontal lobe area of the patient and calculate the cognitive input degree of the patient according to the electroencephalogram signal.
The fourth calculation module 400 is configured to collect facial expression images of the patient during the training process, extract and identify the emotions in the facial expression images through image analysis software to obtain the duration of positive emotion and the duration of negative emotion of the patient during training, and calculate the emotion input degree of the patient according to the duration of the positive emotion and the duration of the negative emotion.
The evaluation module 500 is configured to perform a comprehensive evaluation according to the exercise input degree, the perception input degree, the cognitive input degree and the emotion input degree to obtain the training input degree of the patient.
The device compensates for the subjectivity of evaluating the input state with a scale method, making the evaluation result more objective and accurate.
Further, in one embodiment of the present invention, the first calculation module is specifically configured to calculate the exercise input degree of the patient according to the ratio of the root mean square value of the electromyographic signal to the movement speed, where the formula is as follows:
E_m = EMG_RMS / v
where E_m is the exercise input degree, EMG_RMS is the root mean square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the average movement speed of the patient over the same movement cycle.
Further, in one embodiment of the present invention, the perception input degree calculation formula is:
E_p = d(gaze, screen changes)
where E_p is the perception input degree, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.
Further, in one embodiment of the present invention, calculating the cognitive input degree of the patient according to the electroencephalogram signal includes:
decomposing the electroencephalogram signal to obtain the alpha signal, the beta signal and the theta signal of the frontal lobe area of the patient (a decreased alpha signal and increased beta and theta signals reflecting cognitive engagement), and calculating the cognitive input degree according to the ratio of the beta signal to the sum of the alpha and theta signals, where the specific formula is as follows:
E_c = β / (α + θ)
where E_c is the cognitive input degree, α is the alpha signal, β is the beta signal, and θ is the theta signal.
Further, in an embodiment of the present invention, the emotion input degree calculation formula is:
E_e = T_positive / T_negative
where E_e is the emotion input degree, T_positive is the duration for which positive emotion is the patient's dominant emotion, and T_negative is the duration for which negative emotion is the patient's dominant emotion.
It should be noted that the foregoing explanation of the embodiment of the method for evaluating the training input level of the patient based on the multi-modal information is also applicable to the apparatus of the embodiment, and is not repeated herein.
According to the device for evaluating the training input degree of a patient based on multi-modal information provided by the embodiment of the invention, the input degree of the patient in rehabilitation training is evaluated based on multi-modal information covering motion, perception, cognition and emotion, the subjectivity of evaluating the input state with a scale method is compensated for, and the evaluation result is more objective and accurate. This helps the rehabilitation doctor adjust the training mode during rehabilitation training, so that the patient maintains a high input degree throughout the rehabilitation training process.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the description herein, references to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, various embodiments or examples, and features of different embodiments or examples, described in this specification can be combined by those skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.