CN109730700A - A kind of user emotion based reminding method - Google Patents

A kind of user emotion based reminding method

Info

Publication number
CN109730700A
CN109730700A
Authority
CN
China
Prior art keywords
user
sequence
skin conductance
conductance signal
characteristic value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201811648330.6A
Other languages
Chinese (zh)
Inventor
金涛
江浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Xinming Intelligent Technology Co Ltd
Original Assignee
Zhejiang Xinming Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Xinming Intelligent Technology Co Ltd
Priority to CN201811648330.6A
Publication of CN109730700A
Status: Withdrawn

Abstract

The present invention provides a user emotion reminding method, comprising: sampling the physiological parameters of a user to be identified at preset time intervals to obtain a heart rate variability sequence and a skin conductance signal sequence; extracting characteristic parameters from the heart rate variability sequence and the skin conductance signal sequence, the characteristic parameters comprising a first characteristic value based on heart rate variability and second and third characteristic values based on the skin conductance signal; inputting the characteristic parameters into a preset classifier so that the classifier outputs a recognition result of the user's emotional state; judging whether the recognition result is a negative emotion; and if so, reminding the user to pay attention to emotion regulation. From the monitoring results of the user's physiological parameters, the present invention can roughly judge the user's psychological state over a period of time and generate an emotion regulation reminder accordingly, helping the user better manage his or her emotions and stay physically and mentally healthy.

Description

User emotion reminding method
Technical Field
The invention relates to the field of data processing, in particular to a user emotion reminding method.
Background
With the increasing popularity of intelligent wearable devices, the processing of physiological-parameter data has become a research hotspot. Such processing makes it easier to grasp a user's physical and psychological state, so that various services based on those states can be provided: for example, if the user is in a bad mood, a cheerful song can be played to help adjust body and mind, or an intelligent conversation can be held with the user according to his or her physical and psychological state. Data processing of physiological parameters therefore has considerable research value.
However, the related art for processing physiological-parameter data is not yet mature, which limits the development of technical solutions that provide services based on the user's physical and mental state.
Disclosure of Invention
In order to solve the technical problem, the invention provides a user emotion reminding method. The invention is realized by the following technical scheme:
A user emotion reminding method comprises the following steps:
sampling the physiological parameters of a user to be identified at preset time intervals and obtaining a heart rate variability sequence and a skin conductance signal sequence;
extracting characteristic parameters from the heart rate variability sequence and the skin conductance signal sequence, wherein the characteristic parameters comprise a first characteristic value based on heart rate variability and second and third characteristic values based on the skin conductance signal;
inputting the characteristic parameters into a preset classifier so that the classifier can output a recognition result of the user's emotional state;
judging whether the recognition result is a negative emotion;
and if so, reminding the user to pay attention to emotion regulation.
Further, the recognition result is binary and comprises negative emotion and non-negative emotion; if the user develops anxious or depressed emotions, the recognition result of the emotional state is a negative emotion.
Further, the recognition result is recorded, and if the recognition results of three consecutive identifications are all negative emotions, an emotion adjustment suggestion is provided to the user: playing preset music that brings the user a joyful mood, and recommending nearby food as well as movies, books, and entertainment venues that users have rated highly.
Further, the method for obtaining the first characteristic value based on heart rate variability comprises the following steps:
obtaining a sequence to be reconstructed from the heart rate variability sequence;
performing m-dimensional phase space reconstruction on the sequence to be reconstructed to obtain a target sequence;
calculating the relative distances between adjacent elements of the target sequence to obtain a target distance sequence;
and calculating the first characteristic value of heart rate variability according to a preset formula.
Further, the method for acquiring the second characteristic value of the skin conductance signal comprises the following steps:
obtaining a sequence to be convolved according to the skin conductance signal sequence;
convolving the sequence to be convolved with a preset window function to obtain a convolution sequence;
and obtaining a second characteristic value according to the convolution sequence.
Further, the method for acquiring the third characteristic value of the skin conductance signal comprises the following steps:
acquiring the skin conductance signal {elc_i};
calculating the third characteristic value of the skin conductance signal {elc_i} according to a formula built around a concentration parameter, where N is the length of {elc_i}, n is a first internal parameter of the concentration parameter, and p is a second internal parameter of the concentration parameter.
The embodiments of the present invention provide a user emotion reminding method that can roughly judge the user's psychological state over a period of time from the monitoring results of the user's physiological parameters, thereby generating an emotion regulation reminder for the user, helping the user better manage his or her emotions and stay physically and mentally healthy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for reminding a user of emotion according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for obtaining a first characteristic value based on heart rate variability according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for obtaining a second characteristic value based on a skin conductance signal according to an embodiment of the present invention;
fig. 4 is a flowchart of a method for obtaining a third characteristic value based on a skin conductance signal according to an embodiment of the present invention;
FIG. 5 is a flowchart of a training process of a classifier provided by an embodiment of the present invention;
FIG. 6 is a flowchart of a training method of sub-classifiers according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention provides a user emotion reminding method, as shown in fig. 1, comprising the following steps:
S101, sampling the physiological parameters of a user to be identified at preset time intervals, and obtaining a heart rate variability sequence and a skin conductance signal sequence.
S102, extracting characteristic parameters from the heart rate variability sequence and the skin conductance signal sequence, wherein the characteristic parameters comprise a first characteristic value based on heart rate variability and second and third characteristic values based on the skin conductance signal.
S103, inputting the characteristic parameters into a preset classifier so that the classifier can output the recognition result of the user's emotional state.
Specifically, the recognition result may be binary: negative emotion or non-negative emotion. If the user develops anxiety, depression, or similar mood states, the recognition result is a negative emotion.
S104, judging whether the recognition result is a negative emotion.
S105, if yes, reminding the user to pay attention to emotion regulation.
Further, the method may also comprise:
recording the recognition result, and providing an emotion adjustment suggestion to the user if the recognition results of three consecutive identifications are all negative emotions.
Specifically, the suggestion may be to play preset music that brings the user a pleasant mood, or to recommend nearby food as well as movies, books, and entertainment venues that users have rated highly. A minimal sketch of this overall flow follows.
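Taken together, steps S101 to S105 plus the three-consecutive-results rule form a simple sample/featurize/classify/remind loop. The following minimal Python sketch illustrates that flow under stated assumptions: the patent publishes no reference code, so the class and function names, the classifier interface, and the notification texts are all illustrative.

# Minimal sketch of the fig. 1 pipeline plus the three-consecutive-negatives
# suggestion rule. All names and defaults are illustrative assumptions.
from collections import deque

NEGATIVE, NON_NEGATIVE = 1, 0

class EmotionReminder:
    def __init__(self, classifier, notify):
        self.classifier = classifier        # preset classifier of S103
        self.notify = notify                # callback that reaches the user
        self.history = deque(maxlen=3)      # record of recent recognition results

    def step(self, hrv_seq, elc_seq, extract_features):
        # hrv_seq, elc_seq: sequences sampled at a preset interval (S101)
        feats = extract_features(hrv_seq, elc_seq)        # S102: three values
        result = self.classifier.predict([feats])[0]      # S103
        self.history.append(result)
        if result == NEGATIVE:                            # S104
            self.notify("Please pay attention to emotion regulation.")  # S105
        if len(self.history) == 3 and all(r == NEGATIVE for r in self.history):
            self.notify("Suggestion: play some cheerful preset music, or try "
                        "a well-rated nearby restaurant, film, book, or venue.")
        return result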
During the implementation of the embodiment of the invention, the inventors studied the directional effect of various physiological emotion signals on anxiety and depression, and on the basis of the results obtained the first characteristic value based on heart rate variability and the second and third characteristic values based on the skin conductance signal. These characteristic values have three main properties:
(1) their directivity is clear: processing these data yields definite judgments of anxiety and depression with a relatively low misjudgment rate;
(2) heart rate variability and skin conductance signals are relatively easy to collect and can be acquired by common wearable devices without burdening the user;
(3) the characteristic values obtained by further processing the heart rate variability and skin conductance signals have an even more definite pointing effect, and their specific algorithms are defined below in the embodiments of the invention.
Specifically, the embodiment of the present invention further discloses a method for acquiring the first characteristic value based on heart rate variability, as shown in fig. 2, comprising:
S1, obtaining the sequence to be reconstructed {y_i} from the heart rate variability sequence {x_i} according to a preset segmentation formula, where N' is the total length of {x_i}, the length of {y_i} is determined by N' and the segmentation parameter τ, and τ can be set based on empirical values.
S2, performing m-dimensional phase space reconstruction on the sequence to be reconstructed {y_i} to obtain the target sequence {z_j^m}, where z_j^m = {y_j, y_{j+1}, ..., y_{j+m-1}} and m is a fixed value, taken as 2 in the embodiment of the invention.
S3, calculating the relative distances between adjacent elements of the target sequence {z_j^m} to obtain the target distance sequence {n_j^m}.
S4, calculating the first characteristic value of heart rate variability according to a preset formula, in which δ is 0.2 times the standard deviation of {x_i}. The preset formula gives an ideal value; in practice, a close approximation can be computed in the processor.
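The structure of S1 to S4 (coarse segmentation by τ, phase-space reconstruction with m = 2, inter-vector distances, and a tolerance δ of 0.2 times the standard deviation) closely resembles a multiscale sample-entropy computation. Since the patent's own formulas are not reproduced in this text, the sketch below computes a sample-entropy-style first characteristic value under that assumption; in particular it compares all template vectors, whereas S3 speaks only of adjacent elements, and the coarse-graining by averaging within windows of length τ is likewise assumed.

import numpy as np

def first_characteristic_value(x, tau=2, m=2):
    """Sample-entropy-style statistic over the HRV sequence {x_i}.
    tau: segmentation parameter of S1 (assumed to coarse-grain by averaging);
    m: embedding dimension of S2, fixed to 2 in the embodiment;
    delta: tolerance of S4, 0.2 times the standard deviation of {x_i}."""
    x = np.asarray(x, dtype=float)
    delta = 0.2 * np.std(x)
    n_seg = len(x) // tau                       # S1: segment and average
    y = x[:n_seg * tau].reshape(n_seg, tau).mean(axis=1)

    def similar_pairs(dim):
        # S2: dim-dimensional phase-space reconstruction z_j = (y_j, ..., y_{j+dim-1})
        z = np.array([y[j:j + dim] for j in range(len(y) - dim + 1)])
        # S3: Chebyshev distances between reconstructed vectors
        d = np.max(np.abs(z[:, None, :] - z[None, :, :]), axis=2)
        np.fill_diagonal(d, np.inf)             # exclude self-matches
        return np.sum(d <= delta)

    b, a = similar_pairs(m), similar_pairs(m + 1)
    # S4: the sample-entropy form -ln(A/B) stands in for the preset formula
    return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")

# Example with synthetic RR intervals (milliseconds)
rr = np.random.default_rng(0).normal(800.0, 50.0, 300)
print(first_characteristic_value(rr, tau=2))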
Specifically, the embodiment of the present invention further discloses a method for acquiring the second characteristic value based on the skin conductance signal, as shown in fig. 3, comprising:
S10, obtaining the sequence to be convolved from the skin conductance signal sequence {elc_i} according to a preset pre-processing formula.
S20, convolving the sequence to be convolved with a preset window function to obtain the convolution sequence {dst_i}.
Specifically, the window function is preferably a Hanning window. The Hanning window, also called the raised-cosine window, has a spectrum that can be regarded as the sum of the spectra of three rectangular time windows.
S30, obtaining the second characteristic value from the convolution sequence according to a preset formula.
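A minimal sketch of S10 to S30 follows. Only the Hann-window convolution of S20 is given by the text; the pre-processing of S10 (here a first difference, a common way to expose phasic skin conductance activity), the window length, and the reduction of S30 (here an RMS) are assumptions standing in for the unspecified formulas.

import numpy as np

def second_characteristic_value(elc, win_len=32):
    """Hann-smoothed statistic of the skin conductance sequence {elc_i}.
    win_len and the S10/S30 choices are illustrative assumptions."""
    elc = np.asarray(elc, dtype=float)
    to_conv = np.diff(elc)                      # S10: assumed pre-processing
    w = np.hanning(win_len)                     # S20: preset window function
    w /= w.sum()                                # normalize so smoothing preserves scale
    dst = np.convolve(to_conv, w, mode="same")  # convolution sequence {dst_i}
    return float(np.sqrt(np.mean(dst ** 2)))    # S30: assumed RMS reduction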
Specifically, the embodiment of the present invention further discloses a method for acquiring the third characteristic value based on the skin conductance signal, as shown in fig. 4, comprising:
S100, acquiring the skin conductance signal {elc_i}.
S200, calculating the third characteristic value of the skin conductance signal {elc_i} according to a preset formula built around a concentration parameter.
Specifically, N is the length of the skin conductance signal {elc_i}; n is a first internal parameter of the concentration parameter, which can be set, generally greater than 0 and less than 40: the higher the value, the better the concentration effect, but the slower the computation; p is a second internal parameter of the concentration parameter, indicating where in {elc_i} the emphasized portion of the extracted information lies. Because the embodiment of the invention mainly extracts information from the middle of the signal, p is taken as 0.5.
Specifically, Λ_n(i-1, p, N-1) is the concentration kernel, defined in terms of the hypergeometric function 2F1(·), and M(p, n, N) is a normalization constant related to p, n, and N.
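The concentration kernel Λ_n and its constant M(p, n, N) are defined through 2F1, but the expressions themselves are not recoverable from this text. As a plainly labelled stand-in, the sketch below uses a normalized Gaussian window centered at fraction p of the record, with n controlling how tightly the weight concentrates; this mirrors only the described roles of p and n, not the patent's actual kernel.

import numpy as np

def third_characteristic_value(elc, n=8, p=0.5):
    """Center-weighted aggregate of the skin conductance signal {elc_i}.
    A normalized Gaussian stands in for the patent's concentration kernel
    Lambda_n, which is defined via 2F1 but not reproduced in the text."""
    elc = np.asarray(elc, dtype=float)
    N = len(elc)
    i = np.arange(N)
    center = p * (N - 1)              # p: where the emphasized portion lies
    sigma = N / (2.0 * n)             # n: higher value -> tighter concentration
    w = np.exp(-0.5 * ((i - center) / sigma) ** 2)
    w /= w.sum()                      # plays the role of the constant M(p, n, N)
    return float(np.dot(w, elc))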
On the basis of the first characteristic value based on heart rate variability and the second and third characteristic values based on the skin conductance signal, the emotion recognition result can be obtained simply by inputting the three characteristic values into a preset classifier. The classifier can be obtained by training on a large number of training samples; specifically, the training process of the classifier can include the following steps, as shown in fig. 5:
S201, obtaining a first sample set and a second sample set.
Specifically, in the embodiment of the invention, the first sample set contains samples with negative emotions and the second sample set contains samples without negative emotions. Each element of the two sample sets includes four items: the first characteristic value based on heart rate variability, the second characteristic value based on the skin conductance signal, the third characteristic value based on the skin conductance signal, and the emotion recognition result.
If each sample is denoted (x_i, y_i), then x_i collects the features used for classification, namely the first characteristic value based on heart rate variability and the second and third characteristic values based on the skin conductance signal, and y_i denotes the classification label: 1 if the emotion recognition result is a negative emotion, and 0 otherwise.
S202, acquiring the control parameters of the training process and initializing the cycle control parameters and the classifier.
The training-process control parameters comprise the maximum misjudgment rate F_target of the training result, the maximum misjudgment rate f_max of each layer of sub-classifier, and the minimum detection rate d_min of each layer of sub-classifier.
Specifically, in the embodiment of the invention, the misjudgment rate is the ratio of the number of misjudged samples to the number of samples participating in the judgment, and the detection rate is the ratio of the number of correctly judged samples to the number of samples participating in the judgment.
The cycle control parameters include the loop control variable and the misjudgment rate and detection rate of the sub-classifiers in the current loop result; specifically, they are initialized as follows: the loop control variable i = 0, the misjudgment rate F_i = 1, and the detection rate D_i = 1.
The classifier is initialized to null.
S203, executing one round of the cyclic training process: increase the loop control variable by 1 (i = i + 1); randomly extract half of the data from the first sample set and half from the second sample set to obtain the current training sample set; and train an i-th sub-classifier C_i meeting the preset requirements on the current training sample set.
In each round, the current training samples are drawn randomly with replacement from the current first and second sample sets, which maximizes the randomness of sub-classifier training.
S204, cascading the i-th sub-classifier C_i with the current classifier to update the current classifier; updating the misjudgment rate F_{i+1} = F_i · f_i and the detection rate D_{i+1} = D_i · d_i of the current classifier, where f_i and d_i are the misjudgment rate and detection rate of the i-th sub-classifier C_i, respectively.
S205, judging whether the misjudgment rate of the current classifier is greater than the maximum misjudgment rate F_target.
S206, if so, emptying the second sample set, classifying the first sample set with the current classifier, adding the misjudged samples to the second sample set, and returning to step S203.
Emptying the second sample set and refilling it with the samples misjudged by the currently cascaded classifier raises the attention paid during training both to misjudged samples and to samples of negative emotion: the misjudged samples take effect in the next round of training, improving training precision, while the samples belonging to negative emotions remain in focus throughout training, which indirectly raises their importance.
S207, if not, ending the process and outputting the current classifier.
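Steps S201 to S207 build the cascade stage by stage until the product of per-stage misjudgment rates falls below F_target, refilling the second sample set with the cascade's own mistakes on the first set. The sketch below follows that control flow; the convention that a sample counts as negative emotion only when every stage agrees, the with-replacement half-sampling, the default rates, and the fallback when no sample is misjudged are assumptions layered on the text.

import numpy as np

def train_cascade(first_set, second_set, train_subclassifier,
                  F_target=0.05, f_max=0.5, d_min=0.95, rng=None):
    """Cascade training of fig. 5. first_set holds (features, 1) samples with
    negative emotion, second_set holds (features, 0) samples without.
    train_subclassifier(batch, f_max, d_min) must return (stage, f_i, d_i),
    as in fig. 6. The F_target / f_max / d_min defaults are illustrative."""
    rng = rng or np.random.default_rng()
    cascade, F, D = [], 1.0, 1.0                       # S202: null classifier

    def is_negative(features):
        # A sample counts as negative emotion only if every stage says so.
        return all(stage.predict([features])[0] == 1 for stage in cascade)

    def half_of(samples):
        # S203: draw half of the set at random, with replacement (see above).
        idx = rng.choice(len(samples), max(1, len(samples) // 2), replace=True)
        return [samples[k] for k in idx]

    while F > F_target:                                # S205
        batch = half_of(first_set) + half_of(second_set)   # S203
        stage, f_i, d_i = train_subclassifier(batch, f_max, d_min)
        cascade.append(stage)                          # S204: extend the cascade
        F, D = F * f_i, D * d_i                        # S204: rate updates
        if F > F_target:                               # S206: refill second set
            missed = [s for s in first_set if not is_negative(s[0])]
            second_set = missed or second_set          # fallback: keep old set
    return cascade                                     # S207: output classifier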
Specifically, the embodiment of the present invention further provides a training method for the sub-classifier, as shown in fig. 6, comprising:
S301, initializing the weight of each sample in the current training sample set, the training threshold T, and the number of adjustments t of the sample weight distribution.
Let the weight distribution of the samples in the current training sample set be SD_0 = (ω_01, ..., ω_0i, ..., ω_0N), where N is the total number of samples in the current training sample set.
S302, using the weighted current training samples, training M linear element classifiers G_m(x) according to the training threshold T.
Specifically, the linear element classifier may be a support vector machine (SVM); its training method belongs to the prior art and is not described in detail in the embodiment of the invention. The number M of linear element classifiers can be preset.
S303, verifying each linear element classifier G_m(x) on the current training samples to obtain its misjudgment rate e_m.
Specifically, the misjudgment rate is e_m = Σ_i ω_ti · I(G_m(x_i) ≠ y_i), where t is the number of adjustments of the sample weight distribution, ω_ti is the weight of sample i in the current weight distribution, and I(G_m(x_i) ≠ y_i) is an indicator that takes the value 1 if sample i is misjudged by the element classifier G_m(x) and 0 otherwise.
S304, obtaining the suspected sub-classifier G(x) from the element classifiers according to their misjudgment rates e_m.
S305, verifying the suspected sub-classifier on the current training samples to obtain its misjudgment rate f(G(x)) and detection rate d(G(x)).
S306, if the misjudgment rate f(G(x)) ≤ f_max and the detection rate d(G(x)) ≥ d_min, judging that the suspected sub-classifier reaches the preset standard and ending the process.
Further, the suspected sub-classifier is then used as the sub-classifier and step S204 is executed; specifically, in step S204, f_i = f(G(x)) and d_i = d(G(x)).
S307, otherwise, adjusting the weight of each sample in the current training sample set, adjusting the training threshold T, increasing the number of adjustments t of the sample weight distribution by 1, and returning to step S302.
Specifically, adjusting the weight of each sample in the current training sample set comprises increasing the weights of the samples misclassified by the suspected sub-classifier and decreasing the weights of the samples correctly classified; the specific adjustment process is not limited by the embodiment of the invention and may follow the prior art.
Specifically, the training threshold T may be reduced appropriately in the next iteration; the specific reduction method may be set manually according to experience, and the embodiment of the invention does not limit the specific implementation.
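Fig. 6 reads like a boosting loop over weighted linear SVMs. The sketch below keeps the text's quantities: the weights ω_ti, the weighted misjudgment rate e_m, the acceptance test against f_max and d_min, the reweighting of S307, and a decaying threshold T, here reused as the SVM regularization strength. The uniform weight initialization, the way the M element classifiers differ from one another, the choice of the lowest-e_m classifier as the suspected sub-classifier in S304, and the concrete reweighting factors are all assumptions, since the patent leaves those formulas open.

import numpy as np
from sklearn.svm import LinearSVC

def train_subclassifier(samples, f_max, d_min, M=5, T=1.0, max_rounds=20):
    """Boosting-style sketch of fig. 6 with linear SVM element classifiers."""
    X = np.array([s[0] for s in samples], dtype=float)
    y = np.array([s[1] for s in samples])
    N = len(samples)
    w = np.full(N, 1.0 / N)          # S301: assumed uniform initial weights, t = 0
    for t in range(max_rounds):
        # S302: M weighted linear element classifiers; varying the regularization
        # strength around T is an assumed way of making them differ.
        gms = [LinearSVC(C=T * (m + 1) / M, dual=False)
               .fit(X, y, sample_weight=w * N) for m in range(M)]
        # S303: weighted misjudgment rate e_m = sum_i w_ti * I(G_m(x_i) != y_i)
        errs = [float(np.sum(w * (g.predict(X) != y))) for g in gms]
        g = gms[int(np.argmin(errs))]        # S304: assumed selection rule
        pred = g.predict(X)                  # S305: verify the suspect
        f = float(np.mean(pred != y))        # misjudged / participating samples
        d = float(np.mean(pred == y))        # correctly judged / participating
        if f <= f_max and d >= d_min:        # S306: preset standard reached
            return g, f, d                   # handed to S204 as (C_i, f_i, d_i)
        w *= np.where(pred != y, 2.0, 0.5)   # S307: boost misjudged samples
        w /= w.sum()                         # renormalize the distribution
        T *= 0.9                             # S307: assumed decay of threshold T
    return g, f, d                           # best effort after max_rounds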
The embodiments of the present invention provide a user emotion reminding method that can roughly judge the user's psychological state over a period of time from the monitoring results of the user's physiological parameters, thereby generating an emotion regulation reminder for the user, helping the user better manage his or her emotions and stay physically and mentally healthy.
It should be understood that reference to "a plurality" herein means two or more. "And/or" describes an association between objects and covers three cases: for example, A and/or B may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. The character "/" generally indicates that the associated objects stand in an "or" relationship.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (6)

CN201811648330.6A (priority date 2018-12-30, filing date 2018-12-30): A kind of user emotion based reminding method, Withdrawn, published as CN109730700A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201811648330.6A | 2018-12-30 | 2018-12-30 | A kind of user emotion based reminding method (CN109730700A (en))

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201811648330.6A | 2018-12-30 | 2018-12-30 | A kind of user emotion based reminding method (CN109730700A (en))

Publications (1)

Publication Number | Publication Date
CN109730700A | 2019-05-10

Family

ID=66362912

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201811648330.6A | A kind of user emotion based reminding method (Withdrawn, CN109730700A (en)) | 2018-12-30 | 2018-12-30

Country Status (1)

Country | Link
CN (1) | CN109730700A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111248928A (en)* | 2020-01-20 | 2020-06-09 | 北京津发科技股份有限公司 | Pressure identification method and device
CN114869284A (en)* | 2022-05-11 | 2022-08-09 | 吉林大学 | A monitoring system for driver's driving emotional state and driving posture
CN115715680A (en)* | 2022-12-01 | 2023-02-28 | 杭州市第七人民医院 | Anxiety discrimination method and device based on connective tissue potential


Similar Documents

Publication | Title
Han et al. | Acoustic classification of Australian anurans based on hybrid spectral-entropy approach
Wang et al. | The acoustic emotion Gaussians model for emotion-based music annotation and retrieval
Santana et al. | Filter-based optimization techniques for selection of feature subsets in ensemble systems
Gupta et al. | A stacked technique for gender recognition through voice
CN113887397B | Classification method and classification system of electrophysiological signals based on ocean predator algorithm
Zottesso et al. | Bird species identification using spectrogram and dissimilarity approach
CN109730700A (en) | A kind of user emotion based reminding method
Lopes et al. | Feature set comparison for automatic bird species identification
Seichepine et al. | Piecewise constant nonnegative matrix factorization
CN111782863A | Audio segmentation method and device, storage medium and electronic equipment
Lee et al. | Inter-subject contrastive learning for subject adaptive EEG-based visual recognition
Ma et al. | Cost-sensitive two-stage depression prediction using dynamic visual clues
Sidorov et al. | Automatic recognition of personality traits: A multimodal approach
CN109730665A | A kind of physiological data processing method
Moreaux et al. | Benchmark for kitchen20, a daily life dataset for audio-based human action recognition
Purnama | Music genre recommendations based on spectrogram analysis using convolutional neural network algorithm with RESNET-50 and VGG-16 architecture
Palo et al. | Classification of emotional speech of children using probabilistic neural network
Agranat | Bat species identification from zero crossing and full spectrum echolocation calls using Hidden Markov Models, Fisher scores, unsupervised clustering and balanced winnow pairwise classifiers
Hasan et al. | Multi-objective evolutionary methods for channel selection in brain-computer interfaces: some preliminary experimental results
Bai et al. | Xception Based Method for Bird Sound Recognition of BirdCLEF 2020
CN109685156B | Method for acquiring classifier for recognizing emotion
KR101520572B1 | Method and apparatus for multiple meaning classification related music
De Camargo et al. | PROTAX-Sound: A probabilistic framework for automated animal sound identification
Mendes | Deep learning techniques for music genre classification and building a music recommendation system
Zhang et al. | Impute vs. ignore: Missing values for prediction

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
WW01 | Invention patent application withdrawn after publication | Application publication date: 2019-05-10

