




Technical Field
The present invention relates to the fields of brain-computer interfaces, signal processing, and emotion recognition, and in particular to a distributed multimodal emotion detection method.
Background Art
Emotion is an important medium through which humans communicate with the outside world; it reflects a person's psychological and physiological state and directly influences behavior and decision-making. In recent years, with the rapid development of theories and methods in affective computing, real-time emotion detection has become increasingly important. Accurately identifying emotional states is of great significance in areas such as aerospace flight safety and long-distance driving safety. Moreover, enabling computers to discriminate human emotions in real time and as accurately as humans do helps build more harmonious and friendly human-computer interaction, bringing greater convenience to people's lives and work.
Because changes in human emotion are often accompanied by external behaviors such as facial expressions and body movements, as well as changes in various physiological signals, these behaviors and signals are commonly used as inputs for emotion recognition. In everyday communication, people frequently infer each other's emotions from facial expressions and body movements. However, facial expressions and body movements can be faked or suppressed; for example, in some social situations a person may smile to hide embarrassment, so emotion recognition based on these external behaviors alone may not be sufficiently accurate. In contrast, physiological signals are difficult to fake or conceal and therefore better reflect a person's true emotional state. For a computer, using physiological signals is thus a highly effective approach to emotion recognition.
Because the generation and evolution of emotion is a very complex process involving many interacting factors, a single-modality signal may be unable to measure emotion comprehensively and objectively, so multimodal physiological signals need to be combined for emotion recognition. Physiological signals commonly used for emotion recognition include the electroencephalogram (EEG), electrooculogram (EOG), galvanic skin response (GSR), and electrocardiogram (ECG). By fusing these signals, the complementary characteristics of the different modalities can be fully exploited to compensate for the shortcomings of single-modality emotion recognition, helping to extract features that are truly emotion-related, to build a more accurate and objective emotion recognition model, and to improve the effectiveness of emotion recognition methods.
At present, online emotion recognition systems typically use a single-ended architecture that integrates data transmission and reception, algorithm processing, and result display. On the one hand, concentrating so many functions on one machine increases its load and reduces its efficiency; on the other hand, such an architecture hampers iterative updates and is difficult to extend to practical applications. In practice, data transmission, algorithm processing, and result display usually need to be deployed in different locations. For example, when monitoring the emotions of an aircraft pilot, the data transceiver may be deployed on the aircraft, the algorithm processor on a server with strong computing power, and the result display at a command center, so that the command center can monitor the pilot's emotional state in real time. A distributed architecture for emotion detection therefore improves overall efficiency and enhances system stability in practical applications.
Summary of the Invention
In emotion detection, single-modality physiological signals cannot measure emotion comprehensively and objectively, and a single-ended emotion recognition architecture places a heavy load on the computer, runs inefficiently, lets a change in one part affect the whole system, and hinders both system iteration and deployment in real application scenarios. To address these problems, the present invention provides a distributed multimodal emotion detection method that combines multiple physiological signals to detect emotion and adopts a multi-terminal architecture (stimulus generation terminal, signal acquisition terminal, main control terminal, data transceiver terminal, algorithm processing terminal, and result display terminal) to improve detection performance. The method combines physiological signals of multiple modalities (EEG, ECG, and GSR) and fully exploits the complementary characteristics of the different modalities to maximize the accuracy of recognizing five emotions: calm, fear, happiness, sadness, and anger. At the same time, the multi-terminal distributed architecture allows each terminal to be deployed in a different location according to its role, fully exploiting the synergy among the terminals so that the whole system runs in an orderly manner. The system is efficient and stable and can be readily extended to practical applications.
An embodiment of the present invention discloses a distributed multimodal emotion detection method. The method includes:
S1. A stimulus generation terminal is connected to a signal acquisition terminal; a data transceiver terminal is connected to the signal acquisition terminal via TCP; a main control terminal is connected to the data transceiver terminal via TCP; an algorithm processing terminal is connected to the main control terminal via TCP; and a result display terminal is connected to the main control terminal via TCP.
S2. The stimulus generation terminal plays an emotion-inducing experimental paradigm, causing the subject to produce emotional physiological signals.
S3. The signal acquisition terminal collects the emotional physiological signals produced by the subject;
the emotional physiological signals are sent to the data transceiver terminal via the TCP protocol.
S4. If the main control terminal sends a first instruction to the data transceiver terminal, the algorithm processing terminal, and the result display terminal, then:
S401. The data transceiver terminal sends the emotional physiological signals to the main control terminal.
S402. The main control terminal sends the emotional physiological signals to the algorithm processing terminal; the algorithm processing terminal processes the emotional physiological signals to obtain the subject's multimodal emotion recognition result.
S403. The algorithm processing terminal sends the subject's multimodal emotion recognition result to the main control terminal.
S404. The main control terminal sends the subject's multimodal emotion recognition result to the result display terminal;
the result display terminal displays the subject's multimodal emotion recognition result.
S5. If the main control terminal sends a second instruction to the data transceiver terminal, the algorithm processing terminal, and the result display terminal, the process ends.
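The S1-S5 flow amounts to a star topology in which the main control terminal relays packets between the other terminals over TCP. The sketch below illustrates one possible relay cycle; the length-prefixed framing and helper names are illustrative assumptions, not the patented implementation.

```python
import socket
import struct

START, STOP = b"1", b"2"  # first and second instructions (S4 / S5)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes; TCP is a byte stream, so recv() may return less."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf.extend(chunk)
    return bytes(buf)

def send_packet(sock: socket.socket, payload: bytes) -> None:
    """Length-prefixed framing so the receiver knows where a packet ends."""
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_packet(sock: socket.socket) -> bytes:
    (n,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, n)

def relay_once(transceiver, algorithm, display) -> None:
    """One S401-S404 cycle: signals in, recognition result out."""
    signals = recv_packet(transceiver)   # S401
    send_packet(algorithm, signals)      # S402
    result = recv_packet(algorithm)      # S403
    send_packet(display, result)         # S404
```

The explicit framing matters because a single 310,000-dimensional physiological packet spans many TCP segments, and a bare `recv()` would return it in fragments.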
As an optional implementation, in this embodiment of the present invention, the start command sent by the main control terminal to the data transceiver terminal, the algorithm processing terminal, and the result display terminal is the first instruction,
and the end command sent by the main control terminal to the data transceiver terminal, the algorithm processing terminal, and the result display terminal is the second instruction.
As an optional implementation, in this embodiment of the present invention, the multiple modalities include an EEG modality, an ECG modality, and a GSR modality;
the emotions include calm, fear, happiness, sadness, and anger.
As an optional implementation, in this embodiment of the present invention, the emotion-inducing experimental paradigm includes:
pictures, movie clips, and music clips associated with specific emotions.
As an optional implementation, in this embodiment of the present invention, collecting the emotional physiological signals produced by the subject with the signal acquisition terminal includes:
collecting the emotional physiological signals produced by the subject with an EEG cap and a multi-channel physiological recorder.
As an optional implementation, in this embodiment of the present invention, processing the emotional physiological signals at the algorithm processing terminal to obtain the subject's multimodal emotion recognition result includes:
the algorithm processing terminal preprocesses the emotional physiological signals to obtain preprocessed emotional physiological signals;
feature information is extracted from the preprocessed emotional physiological signals to obtain feature information of the EEG modality signal, the ECG modality signal, and the GSR modality signal;
the feature information of the EEG, ECG, and GSR modality signals is fused to obtain multimodal fused feature information;
the multimodal fused feature information is classified with a preset classifier to obtain the subject's multimodal emotion recognition result.
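The embodiments later name a support vector machine as the classifier applied to the fused features. A minimal scikit-learn sketch of this final classification step, using synthetic stand-ins for the fused feature vectors (the feature dimension and training data here are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["calm", "fear", "happiness", "sadness", "anger"]

rng = np.random.default_rng(0)
# Synthetic stand-ins for fused multimodal feature vectors: one row per
# analysis window. Real features would come from the DCCA fusion step.
X_train = rng.normal(size=(200, 64))
y_train = rng.integers(0, 5, size=200)

# probability=True yields the 1x5 probability vector the result display
# terminal expects (one probability per emotion).
clf = SVC(probability=True).fit(X_train, y_train)

window = rng.normal(size=(1, 64))
probs = clf.predict_proba(window)[0]
label = EMOTIONS[int(np.argmax(probs))]
```

With real features the probability vector, not just the argmax label, is worth forwarding, since the display terminal shows per-emotion probabilities.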
As an optional implementation, in this embodiment of the present invention, the processing at the algorithm processing terminal further includes:
sending the preprocessed emotional physiological signals, the feature information of the EEG modality signal, the feature information of the ECG modality signal, the feature information of the GSR modality signal, and the subject's multimodal emotion recognition result to the main control terminal.
As an optional implementation, in this embodiment of the present invention, the preprocessing includes:
splitting, band-pass filtering, and downsampling;
the splitting includes: the first 300,000 dimensions of the emotional physiological signal are the EEG modality signal, the middle 5,000 dimensions are the ECG modality signal, and the last 5,000 dimensions are the GSR signal;
the band-pass filtering includes: the EEG modality signal is band-pass filtered at 0.4-50 Hz, the ECG modality signal at 0.4-10 Hz, and the GSR modality signal at 0.02-0.3 Hz;
the downsampling includes: the 1000 Hz sampling rate is downsampled to 200 Hz, yielding the preprocessed emotional physiological signals.
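A minimal SciPy sketch of this filter-and-downsample chain, applied to already-split modality arrays. The zero-phase filtering and the filter order are illustrative choices, not the patented implementation:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, decimate

FS = 1000  # acquisition sampling rate in Hz

def bandpass(x, lo, hi, fs=FS, order=4):
    """Zero-phase band-pass filter; the order is an illustrative choice."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=-1)

def preprocess(eeg, ecg, gsr):
    """Band-pass each modality per the ranges above, then 1000 -> 200 Hz."""
    eeg = bandpass(eeg, 0.4, 50.0)   # EEG: 0.4-50 Hz
    ecg = bandpass(ecg, 0.4, 10.0)   # ECG: 0.4-10 Hz
    gsr = bandpass(gsr, 0.02, 0.3)   # GSR: 0.02-0.3 Hz
    down = lambda x: decimate(x, 5, axis=-1)  # factor-5 downsampling
    return down(eeg), down(ecg), down(gsr)
```

`decimate` applies its own anti-aliasing filter before keeping every fifth sample, so a 5 s window at 1000 Hz (5,000 samples) becomes 1,000 samples at 200 Hz.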
As an optional implementation, in this embodiment of the present invention, the feature information of the EEG, ECG, and GSR modality signals includes the following.
The feature information of the EEG modality signal is the EEG differential entropy (DE) feature:

DE = -∫_a^b p(x) log p(x) dx

where p(x) denotes the probability density function of the EEG modality signal, [a, b] denotes the interval over which the signal takes its values, and DE is the EEG differential entropy feature.
The feature information of the ECG modality signal consists of heart rate variability features, including the SDNN, RMSSD, and SDSD indices.
The feature information of the GSR modality signal consists of the mean, median, standard deviation, first-order difference, mean of the first-order difference, second-order difference, and mean of the second-order difference of the GSR modality signal.
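The three feature extractors can be sketched as follows. In EEG emotion work the DE integral is commonly evaluated under a Gaussian assumption, where it reduces to ½·ln(2πe·σ²); that shortcut, the use of absolute first/second differences for the GSR "difference" features, and the helper names are assumptions of this sketch, not details given in the text.

```python
import numpy as np

def de_feature(x):
    """Differential entropy under a Gaussian assumption: 0.5*ln(2*pi*e*var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x, axis=-1))

def hrv_features(rr_ms):
    """SDNN / RMSSD / SDSD from a sequence of RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "SDNN": rr.std(ddof=1),                 # std of all RR intervals
        "RMSSD": np.sqrt(np.mean(diff ** 2)),   # root mean square of diffs
        "SDSD": diff.std(ddof=1),               # std of successive diffs
    }

def gsr_features(x):
    """Time-domain GSR statistics; absolute-difference means are assumed."""
    x = np.asarray(x, dtype=float)
    d1, d2 = np.diff(x), np.diff(x, n=2)
    return np.array([x.mean(), np.median(x), x.std(ddof=1),
                     np.abs(d1).mean(), d1.mean(),
                     np.abs(d2).mean(), d2.mean()])
```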
As an optional implementation, in this embodiment of the present invention, the method further includes:
the main control terminal sending the preprocessed emotional physiological signals, the feature information of the EEG, ECG, and GSR modality signals, and the subject's multimodal emotion recognition result to the result display terminal;
the result display terminal displaying the preprocessed emotional physiological signals, the feature information of the EEG, ECG, and GSR modality signals, and the subject's multimodal emotion recognition result.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects:
(1) With an EEG cap and a multi-channel physiological recorder, three modalities of physiological signals (EEG, ECG, and GSR) are combined, fully exploiting the complementary characteristics of the different modalities to maximize the accuracy of recognizing five emotions: calm, fear, happiness, sadness, and anger.
(2) With a distributed structure, the stimulus generation, signal acquisition, main control, data transceiver, algorithm processing, and result display terminals can be deployed in different locations as needed, fully exploiting the synergy among the terminals. The system is efficient and stable and can be readily extended to practical applications.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the overall architecture of a distributed multimodal emotion detection method disclosed in an embodiment of the present invention.
FIG. 2 is the workflow of the main control terminal of a distributed multimodal emotion detection method disclosed in an embodiment of the present invention.
FIG. 3 is the workflow of the data transceiver terminal, the algorithm processing terminal, and the result display terminal of a distributed multimodal emotion detection method disclosed in an embodiment of the present invention.
FIG. 4 is a schematic diagram of the feature fusion method at the algorithm processing terminal of a distributed multimodal emotion detection method disclosed in an embodiment of the present invention.
FIG. 5 is a schematic diagram of the result display terminal of a distributed multimodal emotion detection method disclosed in an embodiment of the present invention.
Detailed Description of the Embodiments
To help those skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The terms "first", "second", and the like in the description, the claims, and the drawings are used to distinguish different objects rather than to describe a specific order. Furthermore, the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, apparatus, product, or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also includes unlisted steps or units, or optionally also includes other steps or units inherent to the process, method, product, or device.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present invention. The appearance of the phrase in various places in the specification does not necessarily refer to the same embodiment, nor to a separate or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
Embodiment 1
A distributed multimodal emotion detection method includes a stimulus generation terminal, a signal acquisition terminal, a main control terminal, a data transceiver terminal, an algorithm processing terminal, and a result display terminal. The stimulus generation terminal induces emotion-related physiological signals in the subject by playing an emotion-inducing experimental paradigm; the signal acquisition terminal collects the physiological signals produced by the subject; the data transceiver terminal receives the data from the signal acquisition terminal and sends it to the main control terminal via the TCP protocol; the main control terminal sends the data to the algorithm processing terminal via TCP for processing; the processed result is sent back to the main control terminal via TCP; and the main control terminal sends the result to the result display terminal for display.
The stimulus generation terminal plays emotion-inducing experimental paradigms: pictures, movie clips, and music clips associated with specific emotions. After receiving such stimuli, the subject produces high-quality physiological signals that are highly correlated with emotion.
The signal acquisition terminal collects the physiological signals produced by the subject with an EEG cap and a multi-channel physiological recorder.
The main control, data transceiver, algorithm processing, and result display terminals belong to the signal processing stage. The main control terminal acts as the server for the data transceiver, algorithm processing, and result display terminals.
The main control terminal controls the start and stop of the entire processing stage and coordinates its operation. When the main control terminal sends the command that starts the other terminals, the data transceiver terminal receives, in real time, the EEG, ECG, and GSR physiological signals from the signal acquisition terminal and packs these signals from the different acquisition devices in a specific format and size (EEG: 60x5000, ECG: 1x5000, GSR: 1x5000; the three modalities are flattened into vectors and concatenated in order, yielding 310,000-dimensional data), sending them to the main control terminal as data packets via the TCP protocol. As soon as the main control terminal receives a data packet from the data transceiver terminal, it forwards it to the algorithm processing terminal for processing.
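The stated packing is plain concatenation of flattened arrays (60x5000 + 5000 + 5000 = 310,000 values); a sketch with illustrative helper names:

```python
import numpy as np

def pack_modalities(eeg, ecg, gsr):
    """Flatten EEG (60x5000), ECG (1x5000), GSR (1x5000) and concatenate
    in order, yielding the 310,000-dimensional packet described above."""
    return np.concatenate([np.ravel(eeg), np.ravel(ecg), np.ravel(gsr)])

def unpack_modalities(packet):
    """Inverse split, as performed on the receiving side."""
    assert packet.size == 310_000
    eeg = packet[:300_000].reshape(60, 5000)
    ecg = packet[300_000:305_000]
    gsr = packet[305_000:]
    return eeg, ecg, gsr
```

Because the split relies purely on fixed offsets, both ends must agree on the exact per-modality shapes; a mismatch silently corrupts all three signals.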
The algorithm processing terminal splits, filters, and downsamples the received data packets, downsampling from the 1000 Hz acquisition rate to 200 Hz, and then extracts features from each modality's physiological signal. For EEG, the differential entropy (DE) feature is extracted:

DE = -∫_a^b p(x) log p(x) dx

where p(x) denotes the probability density function of the continuous signal and [a, b] the interval over which it takes its values.
Heart rate variability (HRV) features are extracted from the ECG, including the SDNN, RMSSD, and SDSD indices. HRV refers to the beat-to-beat variation in the cardiac cycle; it carries information about neurohumoral regulation of the cardiovascular system, which can be used to assess and help prevent cardiovascular and related diseases, and may be a valuable predictor of sudden cardiac death and arrhythmic events. Typical normal values for 24-hour time-domain analysis are SDNN (141±39) ms, SDANN (127±35) ms, and RMSSD (27±12) ms.
Time-domain statistical features are extracted from the GSR signal: the mean, median, standard deviation, first-order difference, mean of the first-order difference, second-order difference, and mean of the second-order difference. The features of the individual modality signals are fused with a feature fusion method based on deep canonical correlation analysis and classified with a support vector machine to determine the subject's emotional state.
The algorithm processing terminal packs the computed physiological signal features, the downsampled physiological signals, and the emotion recognition result in a specific format and size (EEG signal features: 60x5; downsampled ECG data: 1x1000; downsampled GSR data: 1x1000; the emotion recognition result is a 1x5 vector whose five entries are the probabilities of the signal being recognized as calm, fear, happiness, sadness, and anger; the four kinds of data are flattened into vectors and concatenated in order, yielding 2,305-dimensional data) and sends them to the main control terminal as data packets via the TCP protocol. As soon as the main control terminal receives a data packet from the algorithm processing terminal, it forwards it to the result display terminal, which unpacks it and displays, in real time, the subject's current physiological signals, the topographic map of the subject's signal features, and the emotion recognition result.
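The arithmetic of the result packet checks out as 60x5 + 1,000 + 1,000 + 5 = 2,305 values, so the display-side unpacking is again a fixed-offset split; a sketch with illustrative names:

```python
import numpy as np

EMOTIONS = ("calm", "fear", "happiness", "sadness", "anger")

def unpack_result(packet):
    """Split the 2,305-dim result packet: 60x5 EEG features, 1,000
    downsampled ECG samples, 1,000 downsampled GSR samples, and a 1x5
    vector of per-emotion probabilities."""
    assert packet.size == 2305
    feats = packet[:300].reshape(60, 5)   # EEG features
    ecg = packet[300:1300]                # downsampled ECG
    gsr = packet[1300:2300]               # downsampled GSR
    probs = packet[2300:]                 # emotion probabilities
    return feats, ecg, gsr, probs
```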
Embodiment 2
The present invention provides a distributed multimodal emotion detection method that combines multiple physiological signals to detect emotion and adopts a multi-terminal architecture (stimulus generation terminal, signal acquisition terminal, main control terminal, data transceiver terminal, algorithm processing terminal, and result display terminal) to improve detection performance. The method combines physiological signals of multiple modalities (EEG, ECG, and GSR) and fully exploits the complementary characteristics of the different modalities to maximize the accuracy of recognizing five emotions: calm, fear, happiness, sadness, and anger. At the same time, the multi-terminal distributed architecture allows each terminal to be deployed in a different location according to its role, fully exploiting the synergy among the terminals so that the whole system runs in an orderly manner. The system is efficient and stable and can be readily extended to practical applications.
The specific steps are as follows. The overall architecture mainly includes the stimulus generation, signal acquisition, main control, data transceiver, algorithm processing, and result display terminals, as shown in FIG. 1. In a laboratory environment, to ensure that the subject produces reliable and stable emotions, the subject must be stimulated so as to induce high-quality, reliable, and stable emotion-related physiological signals. This is the job of the stimulus generation terminal, which plays an emotion-inducing experimental paradigm: video clips for specific emotions. After receiving such stimuli, the subject produces high-quality physiological signals that are highly correlated with emotion. These signals are collected by the signal acquisition terminal, mainly through wearable devices: an EEG cap and a multi-channel physiological recorder. These steps belong to the data acquisition stage.
FIG. 2 shows the workflow of the main control terminal of the distributed multimodal emotion detection method disclosed in an embodiment of the present invention, where 1 means a button is pressed and 0 means it is not. After the main control terminal starts working, it first establishes TCP connections with the data transceiver, algorithm processing, and result display terminals, and then polls the state of the start button; if the start button has not been pressed, it keeps waiting and polling. Once the start button is pressed, it sends the start command "1" to the data transceiver, algorithm processing, and result display terminals and waits to receive a data packet from the data transceiver terminal. On receiving the packet, it immediately forwards it to the algorithm processing terminal, then waits for the packet from the algorithm processing terminal, which it immediately forwards to the result display terminal. It then checks the state of the stop button: if the stop button has not been pressed, it sends the continue command "1" to the data transceiver, algorithm processing, and result display terminals and waits for the next packet from the data transceiver terminal, repeating the cycle; if the stop button has been pressed, it sends the end command "2" to the data transceiver, algorithm processing, and result display terminals, and the run ends.
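The FIG. 2 state machine (poll start button, broadcast "1", relay, then per cycle broadcast "1" to continue or "2" to end) can be condensed into a small control loop. The callback-style parameters below are illustrative abstractions of the buttons, the broadcast, and the S401-S404 relay cycle:

```python
def control_loop(start_pressed, stop_pressed, send_cmd, relay):
    """Sketch of the FIG. 2 main-control workflow.

    start_pressed / stop_pressed: callables returning True when the
        corresponding button is pressed (state 1) and False otherwise (0).
    send_cmd: broadcasts a command string to the data transceiver,
        algorithm processing, and result display terminals.
    relay: performs one forwarding cycle (data packet in, result out).
    """
    while not start_pressed():
        pass                  # keep polling the start button
    send_cmd("1")             # start command
    while True:
        relay()               # forward data packet, then result packet
        if stop_pressed():
            send_cmd("2")     # end command; the run finishes
            return
        send_cmd("1")         # continue command, then next cycle
```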
图3为本发明实施例公开的一种分布式多模态情绪检测方法的数据收发端、算法处理端、结果展示端的工作流程。图3(a)是数据收发端的工作流程,当开始运行时,首先会建立与主控端的TCP连接,接着会等待来自主控端的命令,如果等待到了来自主控端的启动命令“1”,就会开始实时接收来自采集设备的多模态的数据,每个模态数据都接收5s,脑电得到60*5000的数据,心电得到1*5000的数据,皮电得到1*5000的数据,三种模态数据分别转为向量后按顺序拼接,得到310000维的数据。并将各模态的数据放入发送数据的缓存区,当累计的数据时长达到了5s,便打包以指定的格式发送给主控端,接着等待主控端的命令,是继续还是停止,如果接收到了主控端的继续命令“1”,则继续将接收到的多模态数据结合,以5s的时长打包发送给主控端,如此循环;如果接收到了主控端的停止命令“2”,则运行结束。图3(b)是算法处理端的工作流程,当开始运行时,首先会建立与主控端的TCP连接,接着会等待来自主控端的命令,如果等待到了来自主控端的启动命令“1”,就继续等待来自主控端的数据包,接收到了主控端的数据包之后,算法处理端会将整个数据包拆分为各个模态的数据,分别进行带通滤波、降采样、去趋势的预处理操作,其中脑电带通滤波:0.4-50Hz,心电带通滤波:0.4-10Hz,皮电带通滤波:0.02-0.3Hz,降采样率:200Hz。接着分别提取各种生理信号的特征,然后进行各个模态的特征融合。这里采用的融合方法为基于深度典型相关分析的特征融合方法,记X1,X2分别为两个模态信号的特征矩阵,分别为两个模态特征训练一个深度神经网络,如下式所示:FIG. 3 is a workflow of a data transceiver end, an algorithm processing end, and a result display end of a distributed multimodal emotion detection method disclosed in an embodiment of the present invention. Figure 3(a) is the workflow of the data transceiver. When it starts to run, it will first establish a TCP connection with the master, and then wait for the command from the master. If it waits for the start command "1" from the master, then It will start to receive multi-modal data from the acquisition device in real time. Each modal data is received for 5s. The EEG gets 60*5000 data, the ECG gets 1*5000 data, and the skin electricity gets 1*5000 data. The three modal data were converted into vectors and spliced in sequence to obtain 310,000-dimensional data. Put the data of each mode into the buffer area for sending data. 
Figure 3(b) is the workflow of the algorithm processing end. When it starts running, it first establishes a TCP connection with the main control end and then waits for a command from the main control end. Upon receiving the start command "1", it waits for data packets from the main control end. After receiving a packet, the algorithm processing end splits it into the data of each modality and applies band-pass filtering, downsampling, and detrending as preprocessing: EEG band-pass 0.4-50 Hz, ECG band-pass 0.4-10 Hz, GSR band-pass 0.02-0.3 Hz, with a downsampling rate of 200 Hz. It then extracts the features of each physiological signal and fuses the features across modalities. The fusion method used here is based on deep canonical correlation analysis. Let X1 and X2 be the feature matrices of two modalities; a deep neural network is trained for each of the two modal features, as shown in the following formulas:
X′1 = f(X1; W1)
X′2 = f(X2; W2)
Here W1 and W2 are the parameter matrices of the two neural networks, and X′1 and X′2 are their outputs. The goal of deep canonical correlation analysis is to learn the parameters W1 and W2 so that the correlation between the outputs X′1 and X′2 is as large as possible. The loss function of the two networks is therefore defined as:
Loss = -corr(X′1, X′2)
where corr(X′1, X′2) denotes the correlation between X′1 and X′2. Minimizing this loss trains the two deep neural networks so that the correlation between X′1 and X′2 is maximized. After training, in the test phase the new features X′1 and X′2 produced by the networks are combined by a weighted sum to obtain the fused feature X of the two modalities, as shown in the following formula:
X = αX′1 + (1-α)X′2
where α is a parameter that balances the weights of the two modal features. The above describes the two-modality case. For the three-modality feature fusion of the present invention, the fusion scheme is shown in FIG. 4: the three modalities are first processed pairwise by deep canonical correlation analysis, yielding three feature vectors, which are then concatenated into the final fused feature vector. Specifically, let the extracted EEG features be Xeeg, the ECG features Xecg, and the GSR features Xgsr. Deep canonical correlation analysis is first applied to the EEG and ECG features to obtain their fused feature O1, then to the EEG and GSR features to obtain O2, and finally to the ECG and GSR features to obtain O3. The three fused feature vectors are normalized to O′1, O′2, and O′3, and concatenated as O = [O′1; O′2; O′3] to give the final feature vector O.
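The pairwise-fuse-then-concatenate scheme can be sketched as below. This is a schematic only: the trained projection functions stand in for the DCCA networks (which would be trained by minimizing Loss = -corr(X′1, X′2)), and the dictionary of per-pair projections is an assumed interface, not the patented code.

```python
import numpy as np

def pairwise_fuse(x1, x2, f1, f2, alpha=0.5):
    """X = alpha*f1(X1) + (1-alpha)*f2(X2): the weighted sum of the text."""
    return alpha * f1(x1) + (1 - alpha) * f2(x2)

def normalize(v):
    """L2-normalize a fused feature vector before concatenation."""
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def fuse_three(x_eeg, x_ecg, x_gsr, nets, alpha=0.5):
    """nets maps each modality pair to its two trained projections.

    Returns O = [O'1; O'2; O'3], the concatenation of the normalized
    pairwise fusions (EEG+ECG, EEG+GSR, ECG+GSR).
    """
    o1 = pairwise_fuse(x_eeg, x_ecg, *nets["eeg,ecg"], alpha)
    o2 = pairwise_fuse(x_eeg, x_gsr, *nets["eeg,gsr"], alpha)
    o3 = pairwise_fuse(x_ecg, x_gsr, *nets["ecg,gsr"], alpha)
    return np.concatenate([normalize(o1), normalize(o2), normalize(o3)])
```

Normalizing each O before concatenation keeps one modality pair from dominating the final vector purely by scale, which is the apparent motivation for the normalization step in the text.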
After the fused feature vector O is obtained, it is fed into a pre-trained classifier, which outputs the posterior probability of each emotion. The algorithm processing end then packages the preprocessed physiological signals, the computed raw signal features, and the classifier's emotion recognition result together and sends them to the main control end as one data packet. It then checks the main control end's command to continue or stop: on the continue command "1" it keeps receiving data packets from the main control end for algorithm processing, repeating the cycle; on the stop command "2" the run ends. Figure 3(c) is the workflow of the result display end. When it starts running, it first establishes a TCP connection with the main control end and then waits for a command from the main control end. Upon receiving the start command "1", it waits for data packets from the main control end. After receiving a packet, the result display end splits it into the downsampled signals, the signal features, and the classifier's emotion recognition result, and plots each of them on its interface, as shown in FIG. 5. Finally, it again checks the main control end's command: on the continue command "1" it keeps receiving data packets from the main control end for result display, repeating the cycle; on the stop command "2" the run ends.
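The posterior-probability output can be sketched with a generic linear classifier head plus softmax; the text does not specify the classifier, so the weight matrix and bias here are illustrative stand-ins for whatever pre-trained model is used.

```python
import numpy as np

def emotion_posteriors(o, weights, bias):
    """Return softmax posteriors over emotion classes for fused feature o.

    o: (d,) fused feature vector, weights: (k, d), bias: (k,)
    """
    logits = weights @ o + bias
    e = np.exp(logits - logits.max())   # subtract max for numerical stability
    return e / e.sum()                  # probabilities sum to 1
```

Returning a full probability vector (rather than a single label) is what makes the radar chart and probability history curves of the display end possible.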
FIG. 5 is a schematic diagram of the result display end of the distributed multimodal emotion detection method disclosed in an embodiment of the present invention. The main control end sends the preprocessed emotional physiological signals, the feature information of the EEG, ECG, and GSR modal signals, and the subject's multimodal emotion recognition result to the result display end for display. The displayed information further includes a brain topographic map of the EEG features, waveform plots of the downsampled ECG and GSR signals, a radar chart of the probabilities of the various emotions, and history curves of the emotion probabilities.
From the detailed description of the above embodiments, those skilled in the art will clearly understand that each implementation may be realized by software plus a necessary general-purpose hardware platform, or alternatively by hardware. Based on this understanding, the essence of the above technical solutions, or the part contributing over the prior art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, including a read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium capable of carrying or storing data.
Finally, it should be noted that the distributed multimodal emotion detection method disclosed in the embodiments is only a preferred embodiment of the present invention and is intended solely to illustrate, not to limit, its technical solutions. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents, without such modifications or replacements departing in essence from the spirit and scope of the technical solutions of the embodiments of the present invention.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210451569.4A | 2022-04-26 | 2022-04-26 | A Distributed Multimodal Emotion Detection Method |
| Publication Number | Publication Date |
|---|---|
| CN114795209A | 2022-07-29 |
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |