CN111012307A - Method and device for evaluating training input degree of patient based on multi-mode information - Google Patents

Method and device for evaluating training input degree of patient based on multi-mode information

Info

Publication number
CN111012307A
CN111012307A
Authority
CN
China
Prior art keywords
patient
degree
signal
training
engagement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911175432.5A
Other languages
Chinese (zh)
Inventor
季林红
李翀
钱超
贾天宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN201911175432.5A
Publication of CN111012307A
Status: Pending

Abstract

Translated from Chinese

The invention discloses a method and device for evaluating a patient's training engagement based on multimodal information. The method comprises: calculating the patient's motor engagement from the patient's electromyographic (EMG) signal and movement speed; calculating the patient's perceptual engagement from the distance between the focal point of the patient's eyes and the moving object on the screen used for training; calculating the patient's cognitive engagement from electroencephalogram (EEG) signals of the patient's frontal lobe; calculating the patient's emotional engagement from the durations of the patient's positive and negative emotions; and performing a comprehensive evaluation of the motor, perceptual, cognitive, and emotional engagement to obtain the patient's overall training engagement. By drawing on multimodal motor, perceptual, cognitive, and emotional information to evaluate the patient's engagement during rehabilitation training, the method compensates for the subjectivity of scale-based assessment and yields more objective and accurate results.


Description

Method and device for evaluating training input degree of patient based on multi-mode information
Technical Field
The invention relates to the technical field of rehabilitation assessment, and in particular to a method and a device for evaluating a patient's training engagement based on multimodal information.
Background
Stroke is a common cerebrovascular circulation disorder. With the continuing deepening of global aging, and with younger people facing high stress and irregular lifestyles, the number of stroke patients is rising. Research shows that more than 80% of patients can recover most motor function through rehabilitation training and return to daily life and work.
However, the repetitive nature of rehabilitation training easily makes patients lose interest and feel bored, and illness often leaves them in a depressed mood. An ill-suited training prescription or difficulty level can therefore cost the patient confidence, breed boredom, and seriously harm the rehabilitation outcome. Evaluating the patient's engagement in rehabilitation training is consequently very important.
Rehabilitation engagement is a multi-faceted variable covering the patient's attitude toward rehabilitation training, understanding of the training task's requirements, need for verbal or physical prompts, degree of active participation during training, attendance over the full course of treatment, and so on. Engagement in rehabilitation is defined as a state of being driven by enthusiasm and actively striving to participate in the training; it differs from mere participation in that it includes the patient's strong interest in, and effort to follow, the training process. Most existing methods for evaluating patient engagement rely on rating scales or other indirect measures, which are inaccurate.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, one objective of the invention is to provide a method for evaluating patient training engagement based on multimodal information, which evaluates the patient's engagement in rehabilitation training from multimodal motor, perceptual, cognitive, and emotional information, compensates for the subjectivity of scale-based assessment, and makes the evaluation result more objective and accurate.
Another objective of the invention is to propose a device for evaluating patient training engagement based on multimodal information.
In order to achieve the above objective, an embodiment of one aspect of the present invention provides a method for evaluating a patient's training engagement based on multimodal information, comprising:
collecting an electromyographic signal and a movement speed of the patient, and calculating the patient's motor engagement from the electromyographic signal and the movement speed;
collecting the focal position of the patient's eyes during training, and calculating the patient's perceptual engagement from the distance between the focal position of the patient's eyes and a moving object on the screen used for training;
collecting an electroencephalogram signal from the patient's frontal lobe, and calculating the patient's cognitive engagement from the electroencephalogram signal;
collecting facial expression images of the patient during training, extracting and recognizing the emotions in the facial expression images with image analysis software to obtain the durations of the patient's positive and negative emotions during training, and calculating the patient's emotional engagement from those durations;
and performing a comprehensive evaluation based on the motor engagement, the perceptual engagement, the cognitive engagement, and the emotional engagement to obtain the patient's training engagement.
The method for evaluating patient training engagement based on multimodal information according to the embodiment of the invention evaluates the patient's engagement in rehabilitation training from multimodal motor, perceptual, cognitive, and emotional information, compensating for the subjectivity of scale-based assessment and making the evaluation result more objective and accurate. It also helps rehabilitation physicians adjust the training mode during rehabilitation so that patients maintain a high level of engagement throughout.
In addition, the method for evaluating the training input degree of the patient based on the multi-modal information according to the above embodiment of the present invention may further have the following additional technical features:
Further, in an embodiment of the present invention, calculating the patient's motor engagement from the electromyographic signal and the movement speed comprises: calculating the motor engagement as the ratio of the root-mean-square value of the electromyographic signal to the movement speed, with the formula:
E_m = EMG_RMS / v
where E_m is the motor engagement, EMG_RMS is the root-mean-square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the patient's average movement speed over one movement cycle.
Further, in an embodiment of the present invention, the perceptual engagement is calculated by the formula:
E_p = d(gaze, screen changes)
where E_p is the perceptual engagement, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.
Further, in an embodiment of the present invention, calculating the patient's cognitive engagement from the electroencephalogram signal comprises:
decomposing the electroencephalogram signal to obtain the decreased alpha signal, increased beta signal, and increased theta signal of the patient's frontal lobe, and calculating the cognitive engagement from their ratio, with the specific formula:
E_c = β / (α + θ)
where E_c is the cognitive engagement, and α, β, and θ are the frontal-lobe alpha, beta, and theta signals, respectively.
Further, in an embodiment of the present invention, the emotional engagement is calculated by the formula:
E_e = T_positive / T_negative
where E_e is the emotional engagement, T_positive is the duration for which positive emotion was the patient's dominant emotion, and T_negative is the duration for which negative emotion was the patient's dominant emotion.
In order to achieve the above objective, an embodiment of another aspect of the present invention provides an apparatus for evaluating a patient's training engagement based on multimodal information, comprising:
a first calculation module for collecting the patient's electromyographic signal and movement speed and calculating the patient's motor engagement from them;
a second calculation module for collecting the focal position of the patient's eyes during training and calculating the patient's perceptual engagement from the distance between the focal position of the patient's eyes and a moving object on the screen used for training;
a third calculation module for collecting the electroencephalogram signal of the patient's frontal lobe and calculating the patient's cognitive engagement from it;
a fourth calculation module for collecting facial expression images of the patient during training, extracting and recognizing the emotions in the facial expression images with image analysis software to obtain the durations of the patient's positive and negative emotions during training, and calculating the patient's emotional engagement from those durations;
and an evaluation module for performing a comprehensive evaluation based on the motor engagement, the perceptual engagement, the cognitive engagement, and the emotional engagement to obtain the patient's training engagement.
The apparatus for evaluating patient training engagement based on multimodal information evaluates the patient's engagement in rehabilitation training from multimodal motor, perceptual, cognitive, and emotional information, compensating for the subjectivity of scale-based assessment and making the evaluation result more objective and accurate. It also helps rehabilitation physicians adjust the training mode during rehabilitation so that patients maintain a high level of engagement throughout.
In addition, the device for evaluating the training input degree of the patient based on the multi-modal information according to the above embodiment of the present invention may further have the following additional technical features:
Further, in an embodiment of the present invention, the first calculation module is specifically configured to calculate the patient's motor engagement as the ratio of the root-mean-square value of the electromyographic signal to the movement speed, with the formula:
E_m = EMG_RMS / v
where E_m is the motor engagement, EMG_RMS is the root-mean-square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the patient's average movement speed over one movement cycle.
Further, in an embodiment of the present invention, the perceptual engagement is calculated by the formula:
E_p = d(gaze, screen changes)
where E_p is the perceptual engagement, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.
Further, in an embodiment of the present invention, calculating the patient's cognitive engagement from the electroencephalogram signal comprises:
decomposing the electroencephalogram signal to obtain the decreased alpha signal, increased beta signal, and increased theta signal of the patient's frontal lobe, and calculating the cognitive engagement from their ratio, with the specific formula:
E_c = β / (α + θ)
where E_c is the cognitive engagement, and α, β, and θ are the frontal-lobe alpha, beta, and theta signals, respectively.
Further, in an embodiment of the present invention, the emotional engagement is calculated by the formula:
E_e = T_positive / T_negative
where E_e is the emotional engagement, T_positive is the duration for which positive emotion was the patient's dominant emotion, and T_negative is the duration for which negative emotion was the patient's dominant emotion.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a method for assessing a patient's training input based on multimodal information, in accordance with one embodiment of the present invention;
FIG. 2 is a block flow diagram of a method for assessing a patient's training input based on multimodal information, in accordance with one embodiment of the present invention;
FIG. 3 is a schematic diagram of an apparatus for assessing a patient's training input based on multimodal information, according to one embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
A method and apparatus for assessing a training input level of a patient based on multi-modal information according to an embodiment of the present invention will be described with reference to the accompanying drawings.
A method for evaluating a training input level of a patient based on multi-modal information according to an embodiment of the present invention will be described first with reference to the accompanying drawings.
FIG. 1 is a flow diagram of a method for assessing a patient's training input based on multimodal information, in accordance with one embodiment of the present invention.
As shown in FIG. 1, the method for evaluating a patient's training engagement based on multimodal information comprises the following steps:
Step S1: acquire the electromyographic signal and the movement speed of the patient, and calculate the patient's motor engagement from the electromyographic signal and the movement speed.
Motor engagement (E_m) is defined as the state in which the patient actively and effortfully exercises. In rehabilitation training, the exercise state is generally monitored and characterized by the electromyographic (EMG) signal, which directly reflects how much effort the patient puts into the movement. The root-mean-square (RMS) value of the EMG signal is used to evaluate the patient's motor engagement during training; because it characterizes the signal's energy, the RMS value is considered the most meaningful measure of EMG amplitude. Since movement speed is an important factor affecting EMG amplitude, engagement at the motor level is represented by the ratio of the EMG RMS value to the movement speed.
During rehabilitation training, the patient wears electroencephalogram and electromyogram equipment that measures the EEG and EMG signals throughout the session. The patient's motor engagement is then calculated as the ratio of the root-mean-square value of the measured EMG signal of the training limb to the movement speed:
E_m = EMG_RMS / v
where E_m is the motor engagement, EMG_RMS is the root-mean-square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the patient's average movement speed over one movement cycle.
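As a rough illustration of the motor-engagement formula above, the per-cycle computation can be sketched as follows (the function name, array-based interface, and units are illustrative assumptions, not part of the patent):

```python
import numpy as np

def motor_engagement(emg: np.ndarray, velocity: np.ndarray) -> float:
    """Sketch of E_m = EMG_RMS / v over one movement cycle.

    emg:      EMG samples of the training limb over one movement cycle
    velocity: limb speed samples over the same cycle
    (Hypothetical interface; the patent does not specify a data format.)
    """
    emg_rms = np.sqrt(np.mean(np.square(emg)))  # RMS characterizes EMG energy
    v = np.mean(np.abs(velocity))               # average speed over the cycle
    return emg_rms / v
```

A larger E_m means more muscular effort per unit of movement speed, consistent with the ratio defined above.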
Step S2: acquire the focal position of the patient's eyes during training, and calculate the patient's perceptual engagement from the distance between the focal position of the patient's eyes and the moving object on the screen used for training.
Perceptual engagement is defined as the state of focused attention on the training task as perceived through the senses. For visual interaction, eye tracking is widely used to assess a subject's attention. Some indices, such as the number of gaze fixations or the number of times the gaze falls outside the screen, cannot capture a decline in perceptual engagement during training: a subject may look at the screen without being engaged in the task. Other indices, such as gaze movement speed and total gaze displacement, can quantify the patient's visual attention in rehabilitation training, but cannot be evaluated in real time during the session. In an embodiment of the invention, the distance between the focal point of the subject's eyes and the moving object on the screen is therefore used to characterize perceptual engagement:
E_p = d(gaze, screen changes)
where E_p is the perceptual engagement, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.
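A minimal sketch of the gaze-to-target distance d(gaze, screen changes) follows; the 2-D screen-coordinate convention and function name are assumptions:

```python
import math

def perceptual_engagement(gaze_xy, target_xy):
    """Sketch of E_p = d(gaze, screen changes): Euclidean distance between
    the eye-gaze focus and the moving on-screen target, in screen units.
    A smaller distance indicates closer visual tracking of the target."""
    dx = gaze_xy[0] - target_xy[0]
    dy = gaze_xy[1] - target_xy[1]
    return math.hypot(dx, dy)
```

In practice this distance would be sampled continuously during training, which is what enables the real-time evaluation described above.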
Step S3: acquire the electroencephalogram signal of the patient's frontal lobe, and calculate the patient's cognitive engagement from the electroencephalogram signal.
Cognitive engagement is defined as the degree of concentration while completing a cognitive task. A subject's concentration and cognitive load are generally assessed by monitoring brain waves (EEG). The EEG variables used to monitor concentration include the decreased alpha signal, the increased beta signal, the increased theta signal, and the ratios between them. The most widely used measure of cognitive engagement is computed from the energies of the alpha, beta, and theta bands, namely the ratio of beta-band energy to the sum of alpha- and theta-band energies: according to the current understanding of EEG, the beta band dominates when a person is attentive or alert, while the alpha, theta, or even lower bands dominate at rest or during sleep, so this ratio reflects the degree of attentional engagement. Because the frontal lobe of the cerebral cortex is responsible for attention, mental state, and movement planning, the EEG signals in the embodiment of the invention are taken from the patient's frontal lobe, recorded by EEG equipment worn by the patient.
Calculating the patient's cognitive engagement from the electroencephalogram signal comprises decomposing the signal to obtain the decreased alpha signal, increased beta signal, and increased theta signal of the patient's frontal lobe, and calculating the cognitive engagement from their ratio:
E_c = β / (α + θ)
where E_c is the cognitive engagement, and α, β, and θ are the alpha, beta, and theta signals, respectively.
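The band-power ratio described above can be sketched as follows. The periodogram-based band-power estimate and the conventional band limits (theta 4–8 Hz, alpha 8–13 Hz, beta 13–30 Hz) are assumptions, since the patent does not fix a spectral-estimation method:

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, f_lo: float, f_hi: float) -> float:
    """Power of `signal` in the band [f_lo, f_hi) Hz via a simple periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= f_lo) & (freqs < f_hi)
    return float(psd[band].sum())

def cognitive_engagement(eeg: np.ndarray, fs: float) -> float:
    """Sketch of E_c = beta / (alpha + theta) from frontal-lobe EEG.

    Band limits are conventional values, not taken from the patent.
    """
    theta = band_power(eeg, fs, 4.0, 8.0)
    alpha = band_power(eeg, fs, 8.0, 13.0)
    beta = band_power(eeg, fs, 13.0, 30.0)
    return beta / (alpha + theta)
```

A signal dominated by beta-band activity yields a large E_c, matching the observation that the beta band dominates when a person is attentive.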
Step S4: collect facial expression images of the patient during training, extract and recognize the emotions in the facial expression images with image analysis software to obtain the durations of the patient's positive and negative emotions during training, and calculate the patient's emotional engagement from those durations.
The emotional engagement state is monitored through the subject's facial expressions and expressed as the ratio of the time during which positive emotion was dominant to the time during which negative emotion was dominant.
Improvement in a patient's motor function is associated with positive mood, so one goal of rehabilitation training is to mobilize the patient's positive emotions. Emotional engagement is defined as the degree of emotional involvement in rehabilitation training: if the training can affect the patient's emotions, the patient is emotionally invested in it. When a patient is emotionally engaged, different events in the training, such as different game elements or the completion or failure of a task, influence the patient's mood.
During training, the patient's facial expressions are therefore monitored and facial expression images are collected. Emotions are recognized and extracted from the collected images using Insight software, and the durations for which positive and negative emotion were the dominant emotion are recorded. The patient's emotional engagement is then calculated by the formula:
E_e = T_positive / T_negative
where E_e is the emotional engagement, T_positive is the duration for which positive emotion was the patient's dominant emotion, and T_negative is the duration for which negative emotion was the patient's dominant emotion.
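Given a per-frame dominant-emotion label sequence from the expression-recognition software, E_e can be sketched as below (the label vocabulary and frame-based interface are assumptions):

```python
def emotional_engagement(frame_labels, frame_seconds: float) -> float:
    """Sketch of E_e = T_positive / T_negative.

    frame_labels:  per-frame dominant-emotion labels, e.g. "positive",
                   "negative", or "neutral" (hypothetical vocabulary)
    frame_seconds: duration of video represented by each frame
    """
    t_pos = sum(frame_seconds for lab in frame_labels if lab == "positive")
    t_neg = sum(frame_seconds for lab in frame_labels if lab == "negative")
    return t_pos / t_neg
```

Frames whose dominant emotion is neutral contribute to neither duration, so E_e > 1 indicates that positive emotion dominated for more of the session than negative emotion.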
Step S5: perform a comprehensive evaluation based on the motor engagement, the perceptual engagement, the cognitive engagement, and the emotional engagement to obtain the patient's training engagement.
As shown in fig. 2, the motor, perceptual, cognitive, and emotional engagement values obtained above are evaluated together, so that the patient's engagement during training is assessed more accurately and objectively. This helps the rehabilitation physician adjust the training mode during rehabilitation so that the patient maintains a high level of engagement.
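The patent leaves the form of the comprehensive evaluation open. One purely illustrative aggregation is a weighted sum of the four sub-scores, with the distance-based E_p inverted so that larger values consistently mean higher engagement; the weights and the distance mapping are assumptions, not part of the patent:

```python
def overall_engagement(e_m: float, e_p: float, e_c: float, e_e: float,
                       weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Illustrative comprehensive score combining the four engagement measures.

    E_p is a gaze-to-target distance (smaller = better), so it is mapped to
    (0, 1] before weighting; the other terms are used as-is. The combination
    rule is an assumption, not specified by the patent.
    """
    e_p_score = 1.0 / (1.0 + e_p)  # 1.0 when the gaze is exactly on target
    w_m, w_p, w_c, w_e = weights
    return w_m * e_m + w_p * e_p_score + w_c * e_c + w_e * e_e
```

In a clinical deployment the weights would presumably be tuned per task or per patient; equal weights are used here only to keep the sketch concrete.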
The method for evaluating patient training engagement based on multimodal information provided by the embodiment of the invention evaluates the patient's engagement in rehabilitation training from multimodal motor, perceptual, cognitive, and emotional information, compensating for the subjectivity of scale-based assessment and making the evaluation result more objective and accurate. It also helps rehabilitation physicians adjust the training mode during rehabilitation so that patients maintain a high level of engagement throughout.
Next, an apparatus for evaluating a training input level of a patient based on multi-modal information according to an embodiment of the present invention will be described with reference to the accompanying drawings.
FIG. 3 is a schematic diagram of an apparatus for assessing a patient's training input based on multimodal information, according to one embodiment of the present invention.
As shown in fig. 3, the apparatus for evaluating a patient's training engagement based on multimodal information comprises: a first calculation module 100, a second calculation module 200, a third calculation module 300, a fourth calculation module 400, and an evaluation module 500.
The first calculation module 100 is configured to collect the patient's electromyographic signal and movement speed and calculate the patient's motor engagement from them.
The second calculation module 200 is configured to acquire the focal position of the patient's eyes during training and calculate the patient's perceptual engagement from the distance between the focal position of the patient's eyes and the moving object on the screen used for training.
The third calculation module 300 is configured to collect the electroencephalogram signal of the patient's frontal lobe and calculate the patient's cognitive engagement from it.
The fourth calculation module 400 is configured to collect facial expression images of the patient during training, extract and recognize the emotions in the facial expression images with image analysis software to obtain the durations of the patient's positive and negative emotions during training, and calculate the patient's emotional engagement from those durations.
The evaluation module 500 is configured to perform a comprehensive evaluation based on the motor engagement, the perceptual engagement, the cognitive engagement, and the emotional engagement to obtain the patient's training engagement.
The apparatus compensates for the subjectivity of scale-based assessment of the engagement state, making the evaluation result more objective and accurate.
Further, in one embodiment of the present invention, the first calculation module is specifically configured to calculate the patient's motor engagement as the ratio of the root-mean-square value of the electromyographic signal to the movement speed, with the formula:
E_m = EMG_RMS / v
where E_m is the motor engagement, EMG_RMS is the root-mean-square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the patient's average movement speed over one movement cycle.
Further, in one embodiment of the present invention, the perceptual engagement is calculated by the formula:
E_p = d(gaze, screen changes)
where E_p is the perceptual engagement, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.
Further, in one embodiment of the present invention, calculating the patient's cognitive engagement from the electroencephalogram signal comprises:
decomposing the electroencephalogram signal to obtain the decreased alpha signal, increased beta signal, and increased theta signal of the patient's frontal lobe, and calculating the cognitive engagement from their ratio, with the specific formula:
E_c = β / (α + θ)
where E_c is the cognitive engagement, and α, β, and θ are the alpha, beta, and theta signals, respectively.
Further, in an embodiment of the present invention, the emotional engagement is calculated by the formula:
E_e = T_positive / T_negative
where E_e is the emotional engagement, T_positive is the duration for which positive emotion was the patient's dominant emotion, and T_negative is the duration for which negative emotion was the patient's dominant emotion.
It should be noted that the foregoing explanation of the embodiment of the method for evaluating the training input level of the patient based on the multi-modal information is also applicable to the apparatus of the embodiment, and is not repeated herein.
According to the device for evaluating the input degree of the patient training based on the multi-modal information, which is provided by the embodiment of the invention, the input degree of the patient in the rehabilitation training is evaluated based on the motion, perception, cognition and emotion multi-modal information, so that the subjectivity of evaluating the input state by a scale method is compensated, and the evaluation result is more objective and accurate. The rehabilitation training device is beneficial for rehabilitation doctors to change the training mode in the rehabilitation training process, so that the patients can keep higher input degree in the rehabilitation training process.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A method for evaluating a patient's training engagement degree based on multimodal information, characterized by comprising the following steps:
collecting the patient's electromyographic signal and movement speed, and calculating the patient's motor engagement degree according to the electromyographic signal and the movement speed;
collecting the focal position of the patient's eyes during training, and calculating the patient's perceptual engagement degree according to the distance between the focal position of the patient's eyes and the moving object on the screen used for training;
collecting the electroencephalogram signal of the patient's frontal lobe area, and calculating the patient's cognitive engagement degree according to the electroencephalogram signal;
collecting facial expression images of the patient during training, extracting and recognizing the emotions in the facial expression images through image analysis software to obtain the duration of the patient's positive emotion and the duration of the patient's negative emotion during training, and calculating the patient's emotional engagement degree according to the duration of the positive emotion and the duration of the negative emotion;
performing a comprehensive evaluation according to the motor engagement degree, the perceptual engagement degree, the cognitive engagement degree and the emotional engagement degree to obtain the patient's training engagement degree.

2. The method for evaluating a patient's training engagement degree based on multimodal information according to claim 1, characterized in that calculating the patient's motor engagement degree according to the electromyographic signal and the movement speed comprises: calculating the motor engagement degree from the ratio of the root mean square value of the electromyographic signal to the movement speed, with the formula:
Em = EMGRms / v
wherein Em is the motor engagement degree, EMGRms is the root mean square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the patient's average movement speed over one movement cycle.

3. The method for evaluating a patient's training engagement degree based on multimodal information according to claim 1, characterized in that the perceptual engagement degree is calculated as:
Ep = d(gaze, screen changes)
wherein Ep is the perceptual engagement degree, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.

4. The method for evaluating a patient's training engagement degree based on multimodal information according to claim 1, characterized in that calculating the patient's cognitive engagement degree according to the electroencephalogram signal comprises:
decomposing the electroencephalogram signal to obtain the decreased alpha signal, the increased beta signal and the increased theta signal of the patient's frontal lobe area, and calculating the cognitive engagement degree from the ratio of these signals, with the formula:
Ec = β / (α + θ)
wherein Ec is the cognitive engagement degree, and α, β and θ are the alpha, beta and theta signals of the frontal lobe area, respectively.

5. The method for evaluating a patient's training engagement degree based on multimodal information according to claim 1, characterized in that the emotional engagement degree is calculated as:
Ee = Tpositive / Tnegative
wherein Ee is the emotional engagement degree, Tpositive is the duration for which the patient's positive emotion is the dominant emotion, and Tnegative is the duration for which the patient's negative emotion is the dominant emotion.

6. An apparatus for evaluating a patient's training engagement degree based on multimodal information, characterized by comprising:
a first calculation module configured to collect the patient's electromyographic signal and movement speed, and calculate the patient's motor engagement degree according to the electromyographic signal and the movement speed;
a second calculation module configured to collect the focal position of the patient's eyes during training, and calculate the patient's perceptual engagement degree according to the distance between the focal position of the patient's eyes and the moving object on the screen used for training;
a third calculation module configured to collect the electroencephalogram signal of the patient's frontal lobe area, and calculate the patient's cognitive engagement degree according to the electroencephalogram signal;
a fourth calculation module configured to collect facial expression images of the patient during training, extract and recognize the emotions in the facial expression images through image analysis software to obtain the duration of the patient's positive emotion and the duration of the patient's negative emotion during training, and calculate the patient's emotional engagement degree according to the duration of the positive emotion and the duration of the negative emotion;
an evaluation module configured to perform a comprehensive evaluation according to the motor engagement degree, the perceptual engagement degree, the cognitive engagement degree and the emotional engagement degree to obtain the patient's training engagement degree.

7. The apparatus for evaluating a patient's training engagement degree based on multimodal information according to claim 6, characterized in that the first calculation module is specifically configured to calculate the motor engagement degree from the ratio of the root mean square value of the electromyographic signal to the movement speed, with the formula:
Em = EMGRms / v
wherein Em is the motor engagement degree, EMGRms is the root mean square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the patient's average movement speed over one movement cycle.

8. The apparatus for evaluating a patient's training engagement degree based on multimodal information according to claim 6, characterized in that the perceptual engagement degree is calculated as:
Ep = d(gaze, screen changes)
wherein Ep is the perceptual engagement degree, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.

9. The apparatus for evaluating a patient's training engagement degree based on multimodal information according to claim 6, characterized in that calculating the patient's cognitive engagement degree according to the electroencephalogram signal comprises:
decomposing the electroencephalogram signal to obtain the decreased alpha signal, the increased beta signal and the increased theta signal of the patient's frontal lobe area, and calculating the cognitive engagement degree from the ratio of these signals, with the formula:
Ec = β / (α + θ)
wherein Ec is the cognitive engagement degree, and α, β and θ are the alpha, beta and theta signals of the frontal lobe area, respectively.

10. The apparatus for evaluating a patient's training engagement degree based on multimodal information according to claim 6, characterized in that the emotional engagement degree is calculated as:
Ee = Tpositive / Tnegative
wherein Ee is the emotional engagement degree, Tpositive is the duration for which the patient's positive emotion is the dominant emotion, and Tnegative is the duration for which the patient's negative emotion is the dominant emotion.
CN201911175432.5A | Priority/filing date 2019-11-26 | Method and device for evaluating training input degree of patient based on multi-mode information | Pending | CN111012307A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911175432.5A (CN111012307A (en)) | 2019-11-26 | 2019-11-26 | Method and device for evaluating training input degree of patient based on multi-mode information


Publications (1)

Publication NumberPublication Date
CN111012307A | 2020-04-17

Family

ID=70202410

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201911175432.5A | Pending | CN111012307A (en) | 2019-11-26 | 2019-11-26 | Method and device for evaluating training input degree of patient based on multi-mode information

Country Status (1)

CountryLink
CN (1)CN111012307A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5724987A (en)* | 1991-09-26 | 1998-03-10 | Sam Technology, Inc. | Neurocognitive adaptive computer-aided training method and system
CN106128201A (en)* | 2016-06-14 | 2016-11-16 | Beihang University | Attention training system combining immersive vision and discrete force control tasks
CN107564585A (en)* | 2017-07-06 | 2018-01-09 | Sichuan Nursing Vocational College | Brain palsy recovery management system and method based on cloud platform
CN109620265A (en)* | 2018-12-26 | 2019-04-16 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Recognition method and related apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李翀 (Li Chong): "Research on patient training engagement and attention based on robot-assisted neurorehabilitation", China Doctoral Dissertations Full-text Database, Medicine & Health Sciences *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112057040A (en)* | 2020-06-12 | 2020-12-11 | National Research Center for Rehabilitation Technical Aids | Upper limb motor function rehabilitation evaluation method
CN112057040B (en)* | 2020-06-12 | 2024-04-12 | National Research Center for Rehabilitation Technical Aids | Upper limb motor function rehabilitation evaluation method
CN112541541A (en)* | 2020-12-10 | 2021-03-23 | Hangzhou Dianzi University | Lightweight multi-modal emotion analysis method based on multi-element hierarchical depth fusion
CN112541541B (en)* | 2020-12-10 | 2024-03-22 | Hangzhou Dianzi University | Lightweight multi-modal emotion analysis method based on multi-element hierarchical depth fusion
CN113349780A (en)* | 2021-06-07 | 2021-09-07 | Zhejiang University of Science and Technology | Method for evaluating influence of emotional design on online learning cognitive load
CN116370788A (en)* | 2023-06-05 | 2023-07-04 | Zhejiang Qiangnao Technology Co., Ltd. | Training effect real-time feedback method and device for concentration training and terminal equipment
CN116370788B (en)* | 2023-06-05 | 2023-10-17 | Zhejiang Qiangnao Technology Co., Ltd. | Training effect real-time feedback method and device for concentration training and terminal equipment

Similar Documents

Publication | Publication Date | Title
Maura et al. | Literature review of stroke assessment for upper-extremity physical function via EEG, EMG, kinematic, and kinetic measurements and their reliability
Zheng et al. | Unobtrusive and multimodal wearable sensing to quantify anxiety
Sharma et al. | Objective measures, sensors and computational techniques for stress recognition and classification: A survey
US9173582B2 (en) | Adaptive performance trainer
US20200265942A1 (en) | Systems and methods for determining human performance capacity and utility of a person-utilized device
KR101739058B1 (en) | Apparatus and method for Psycho-physiological Detection of Deception (Lie Detection) by video
CN111012307A (en) | Method and device for evaluating training input degree of patient based on multi-mode information
US20160029965A1 (en) | Artifact as a feature in neuro diagnostics
EP2916724A1 (en) | Method and device for determining vital parameters
JP2013505811A (en) | System and method for obtaining applied kinesiology feedback
US20250134429A1 (en) | System, method and appratus for objectively screening depression
Xu et al. | From the lab to the real-world: An investigation on the influence of human movement on Emotion Recognition using physiological signals
KR101753834B1 (en) | A Method for Emotional classification using vibraimage technology
CN103251417B (en) | Method for representing and identifying entrepreneurial potential electroencephalogram signals
CN118098507B (en) | Adaptive upper limb rehabilitation training control method and system based on multi-source data
Saikia et al. | EEG-EMG correlation for Parkinson's disease
Al-Khasawneh et al. | An artificial intelligence based effective diagnosis of parkinson disease using EEG signal
Tiwari et al. | Movement artifact-robust mental workload assessment during physical activity using multi-sensor fusion
Heinisch et al. | The Impact of Physical Activities on the Physiological Response to Emotions
Apicella et al. | Preliminary validation of a measurement system for emotion recognition
CN112449579A (en) | Method and apparatus for assessing brain fatigue using a touchless video system
Jebelli | Wearable Biosensors to Understand Construction Workers' Mental and Physical Stress
Ngamsomphornpong et al. | Development of Hybrid EEG-fEMG-based Stress Levels Classification and Biofeedback Training System
Chowdhury | EEG based assessment of emotional wellbeing in smart environment
Monge et al. | Robustness of parameters from heart rate for mental stress detection

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication | Application publication date: 2020-04-17
