



Technical Field
The present invention relates to the fields of intelligent perception, mobile computing, and pattern recognition, and in particular to a human activity recognition method for wearable sensors.
Background Art
Human activity recognition refers to the process of sensing human behavior data through various sensors and then automatically detecting, analyzing, and understanding the various movements and behaviors of the human body with computer technology. The technology has a wide range of application scenarios, such as intelligent monitoring, human-computer interaction, and robotics. In recent years, with the popularization of wearable devices with multiple built-in sensors, contact-based human activity recognition using wearable sensors has become directly relevant to our daily lives, for example in medical health monitoring and fitness monitoring. Activity recognition for wearable sensors has therefore become a research hotspot in recent years.
Wearable sensors are usually multi-channel, so the data they sense are heterogeneous and time-sequential and reflect the multi-dimensional movement characteristics of a person; human activity recognition for wearable sensors is therefore usually treated as a classification problem over heterogeneous time-series data. To address this problem, some early scholars proposed recognition methods based on data fusion: the physical characteristics of the multi-channel sensed data are analyzed, and the multi-source data are then fused, e.g., by weighted averaging, into a composite feature; for example, a composite acceleration value can be obtained by fusing the three-axis acceleration information. The fused information is finally classified with methods such as support vector machines (SVM), random forests (RF), and hidden Markov models (HMM). However, these methods rely on manual feature extraction, and because different people may perform the same activity very differently, such handcrafted features are difficult to apply in complex real-world environments. Moreover, these methods neither reflect the temporal continuity of the data nor extract the internal features shared among the heterogeneous data.
Summary of the Invention
The purpose of the present invention is to overcome the deficiencies of the prior art and provide a human activity recognition method for wearable sensors. The method improves a wearable sensor's ability to recognize user activities and can serve as an auxiliary capability for augmented reality, enhancing the user experience.
The present invention is achieved through the following technical solutions:
A human activity recognition method for wearable sensors, comprising the following steps:
(1) forming the heterogeneous time-series data sensed by wearable sensors into a context fingerprint matrix, and labeling the data with activity-category labels using a sliding-window overlap mechanism;
(2) processing the input data through a bidirectional LSTM layer composed of a forward long short-term memory and a backward long short-term memory to obtain coarse-grained features of the source data;
(3) computing the importance of the coarse-grained features with an Attention mechanism to obtain fine-grained features that reflect the activity characteristics;
(4) processing the fine-grained features with a multi-class logistic regression method to obtain a probability distribution over the labels of the current data, the label with the highest probability being the activity type of the currently sensed data;
(5) training the network model of steps (2)-(4) with the labeled dataset of step (1) to obtain the final hierarchical deep learning model.
Further, the context fingerprint in step (1) refers to integrating the human behavior information sensed by the wearable sensors into context-invariant features that can be used for subsequent data processing. The context fingerprint matrix F = (f1, f2, …, fn) is used to represent the heterogeneous time-series data, where fi = (Accxi, Accyi, Acczi, Gyrxi, Gyryi, Gyrzi, Magxi, Magyi, Magzi, Comi, …)T, the elements of fi are the readings of the various wearable sensors, and i is the data collection point.
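As a minimal illustration (not part of the claimed method), the fingerprint matrix F can be assembled by stacking the per-collection-point channel vectors fi as columns; the helper name and the toy channel count below are assumptions made for this sketch:

```python
import numpy as np

def build_fingerprint_matrix(samples):
    """Stack per-timestep channel vectors fi as columns of an m x n matrix F,
    where m is the number of sensor channels (Acc, Gyr, Mag, Com, ...) and
    n is the number of data collection points."""
    return np.stack(samples, axis=1)  # column i of F is fi

# toy example: 10 sensor channels, 4 collection points
samples = [np.arange(10, dtype=float) + i for i in range(4)]
F = build_fingerprint_matrix(samples)
assert F.shape == (10, 4)
assert (F[:, 0] == np.arange(10)).all()
```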
Further, in step (1), the data are segmented by a sliding window, a window overlap mechanism is used to add data redundancy, and each window is labeled with the activity type of its last data frame; further, a sliding window size of 1500 ms works best.
Further, the hidden states h = (h0, h1, …, ht) obtained from the bidirectional LSTM model in step (2) are the extracted coarse-grained features of the data, where each ht is the concatenation of the coarse-grained features extracted from the data by the forward LSTM model and by the backward LSTM model, respectively.
Further, step (3) uses the Attention mechanism to obtain fine-grained features with activity-discriminating properties; that is, for the coarse-grained features extracted in step (2), the Attention mechanism learns weights for these features, producing preference-weighted fine-grained features that reflect the distinctive characteristics exhibited when the activity changes.
Further, in step (3), the coarse-grained features h first pass through a nonlinear transformation to obtain their implicit representation u:

u = tanh(Wu·h + bu),

On the basis of this implicit representation, the Attention mechanism learns a normalized weight coefficient vector α that reflects the importance of each element of u, so that the coarse-grained features that better express the activity characteristics receive larger weights, yielding the fine-grained features. The weight coefficients α are computed as:

αt = exp(ut) / Σt′ exp(ut′),

Therefore, the fine-grained feature s can be expressed as:

s = Σt αt·ht.
Further, in step (4), the activity type result is computed as:

y = softmax(wl·s + bl).
Further, in step (5), the cross-entropy loss function is used for evaluation during model training; when the cross-entropy loss converges during training, the optimal model is obtained.
The advantages and beneficial effects of the present invention are:
(1) The present invention takes the heterogeneous time-series data sensed by sensors as raw data. In expressing activity features, the hierarchical deep learning model emphasizes the extraction of more discriminative fine-grained features, which better reflect the distinctive characteristics exhibited when the activity changes and thereby improve the accuracy of activity recognition. To this end, a context fingerprint matrix is first constructed as the model input; the bidirectional LSTM model then extracts coarse-grained features of the raw data; fine-grained feature expressions of the raw data are next obtained through the Attention mechanism; and the activity recognition result is finally obtained through a multi-class classifier. User activities can be recognized accurately, improving human-computer interaction.
(2) The method improves a wearable sensor's ability to recognize user activities and can serve as an auxiliary capability for augmented reality, enhancing the user experience.
(3) The activity recognition method provided by the present invention is robust to real-world environments; even in complex environments the model maintains high recognition accuracy and a stable recognition speed, and it is highly portable.
Brief Description of the Drawings
FIG. 1 is a structural diagram of the hierarchical deep learning model of the human activity recognition method for wearable sensors according to the present invention;
FIG. 2 is a structural diagram of the bidirectional LSTM model;
FIG. 3 is a schematic diagram of data labeling based on the sliding-window overlap mechanism;
FIG. 4 is a schematic diagram of the activity classification results of the hierarchical deep learning model under different sliding windows on the OPPORTUNITY dataset.
Those of ordinary skill in the art can obtain other related drawings from the above drawings without creative effort.
Detailed Description
To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions of the present invention are further described below with reference to specific embodiments.
Embodiment 1
A human activity recognition method for wearable sensors obtains fine-grained features of the sensed data through a hierarchical deep learning model and performs activity recognition end to end. The model structure, shown in FIG. 1, comprises an input layer, a coarse-grained feature extraction layer, a fine-grained feature extraction layer, and an activity recognition output layer. The method includes the following steps:
(1) First, the heterogeneous time-series data sensed by the wearable sensors are formed into a context fingerprint matrix and labeled with a sliding-window overlap mechanism, serving as the data input;
(2) Then, the input data are processed by a bidirectional LSTM layer composed of a forward long short-term memory and a backward long short-term memory to obtain coarse-grained features of the source data;
(3) Next, the importance of the coarse-grained features is computed with an Attention mechanism to obtain fine-grained features that reflect the activity characteristics;
(4) Finally, the fine-grained features are processed with a multi-class logistic regression method to obtain a probability distribution over the labels of the current data, from which the activity type is finally determined.
Embodiments of the present invention are further described below.
Wherein:
In step (1), the context fingerprint refers to integrating the human behavior information sensed by the wearable sensors into context-invariant features that can be processed by the hierarchical deep learning model. The data sensed by multi-channel sensors are usually multi-granular; for example, accelerometer data reflect changes in an object's movement speed, while gyroscope data reflect changes in its movement direction. The sensed data are therefore heterogeneous. Furthermore, the sensed data are time-dependent and change over time, so they are also temporally continuous. To this end, the context fingerprint matrix F = (f1, f2, …, fn) is used to represent the heterogeneous time-series data, where fi = (Accxi, Accyi, Acczi, Gyrxi, Gyryi, Gyrzi, Magxi, Magyi, Magzi, Comi, …)T, the elements of fi are the readings of the various sensors, and i is the data collection point.
The sensed activity data are labeled by category; that is, each sensed data segment is tagged with an activity-category label, yielding an activity-category dataset. Since the sensed data are continuous, they must be segmented and labeled with activity categories so that they can be processed by our hierarchical deep learning model and used for network training. The present invention segments the data with a sliding-window overlap mechanism: the data are split with a window of size T, and, so that key features of the data are not missed during training and testing, the window slides such that the current data segment overlaps the previous one by 50%. In addition, each window is labeled with the activity category of the last data frame it contains; the data labeling scheme is illustrated in FIG. 3, where the sensor channels correspond to different types of sensors. The sensed data can then be expressed as {Fk, y(k)}, k = 1, 2, 3, …, N, where Fk is the fingerprint matrix of the k-th window with dimensions m×T, m is the number of sensor channels, and y(k) is the activity category of that window. To determine the window size T, we tested the classification accuracy of the model under different window sizes on the OPPORTUNITY dataset; the results are shown in FIG. 4. As can be seen from the figure, a sliding window size of 1500 ms is optimal for high activity classification accuracy.
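The segmentation and labeling procedure above can be sketched as follows. The function name and the toy dimensions are assumptions of this sketch; the 50% overlap and the label-by-last-frame rule are taken from the description:

```python
import numpy as np

def segment_and_label(F, labels, win, overlap=0.5):
    """Slide a window of `win` frames over F (m x total_frames) with the given
    overlap ratio; each segment F_k is labeled with the activity category of
    its last data frame, yielding the pairs {F_k, y(k)}."""
    step = max(1, int(win * (1 - overlap)))
    segments, seg_labels = [], []
    for start in range(0, F.shape[1] - win + 1, step):
        end = start + win
        segments.append(F[:, start:end])
        seg_labels.append(labels[end - 1])  # label = class of last frame
    return segments, seg_labels

F = np.zeros((5, 12))        # 5 sensor channels, 12 frames
labels = [0] * 6 + [1] * 6   # activity changes halfway through
segs, ys = segment_and_label(F, labels, win=4)  # step = 2, i.e. 50% overlap
assert len(segs) == 5 and segs[0].shape == (5, 4)
assert ys == [0, 0, 1, 1, 1]
```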
In step (2), a bidirectional LSTM model is used to extract the coarse-grained features of the source data. The hidden states h = (h0, h1, …, ht) produced by the model are the coarse-grained features to be extracted. They are controlled by the model's cell state Ct, temporary cell state C̃t, forget gate ft, memory gate it, and output gate ot; by forgetting information in the cell state and memorizing new information, useful information is propagated while useless information is discarded. Specifically, for a unidirectional LSTM model, assume the input at time t is xt; its value after passing through the forget gate is:
ft = sigmoid(Wf·[ht-1, xt] + bf), (1)
The information from xt to be memorized is:
it = sigmoid(Wi·[ht-1, xt] + bi), (2)
The cell state of the LSTM model is then:

Ct = ft·Ct-1 + it·C̃t, (3)

where C̃t = tanh(WC·[ht-1, xt] + bC) is the temporary cell state. For xt, the output gate value is:
ot = sigmoid(Wo·[ht-1, xt] + bo), (4)
Therefore, the hidden state at the current moment, i.e., the coarse-grained feature ht to be extracted, is:
ht = ot·tanh(Ct), (5)
Through the above process, when the input data are x = (x1, x2, …, xt), the unidirectional hidden states are h = (h1, h2, …, ht). After the bidirectional LSTM model, the coarse-grained features extracted from the data x are therefore:

ht = [hft, hbt], (6)

where hft and hbt are the coarse-grained features of the data x extracted by the forward LSTM model and the backward LSTM model, respectively.
In step (3), although the coarse-grained features h of the data x are obtained by the coarse-grained feature extraction process, these features contribute uniformly to activity identification and do not reflect the changes the data exhibit when the activity changes, which limits recognition accuracy. We therefore introduce a fine-grained feature extraction process on top of the coarse-grained one, obtaining features that are more "attentive" to specific activities and improving recognition accuracy. To this end, an Attention mechanism is used to learn the more important feature expressions among the coarse-grained features. Specifically, the coarse-grained features h first pass through a nonlinear transformation to obtain their implicit representation u:
u = tanh(Wu·h + bu), (7)
On the basis of this implicit representation, the Attention mechanism learns a normalized weight coefficient vector α that reflects the importance of each element of u, so that the coarse-grained features that better express the activity characteristics receive larger weights, yielding the fine-grained features. The weight coefficients α are computed as:

αt = exp(ut) / Σt′ exp(ut′), (8)

Therefore, the fine-grained feature s can be expressed as:

s = Σt αt·ht, (9)
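The attention computation can be sketched as below. Note the description does not specify how a scalar importance score is derived from each implicit representation before normalization; summing its elements is an assumption of this sketch (implementations often use a trainable context vector instead):

```python
import numpy as np

def attention(H, W_u, b_u):
    """H: T x d matrix of coarse-grained features h_t.
    Applies the nonlinear transform of Eq. (7), normalizes the resulting
    importance scores into weights alpha, and returns the weighted sum s."""
    U = np.tanh(H @ W_u.T + b_u)      # implicit representation u_t, Eq. (7)
    scores = U.sum(axis=1)            # scalar score per step (assumed scoring rule)
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()               # normalized weight coefficients
    s = alpha @ H                     # fine-grained feature: weighted sum of h_t
    return s, alpha

rng = np.random.default_rng(1)
H = rng.standard_normal((6, 8))                    # 6 time steps, 8-dim features
W_u, b_u = rng.standard_normal((8, 8)), np.zeros(8)
s, alpha = attention(H, W_u, b_u)
assert s.shape == (8,) and np.isclose(alpha.sum(), 1.0)
```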
In step (4), after the fine-grained features are obtained, a multi-class logistic regression method is used to predict the probabilities of the data x corresponding to each activity; the activity with the largest probability is the recognition result. The activity category prediction is computed as:
y = softmax(wl·s + bl), (10)
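A minimal sketch of the output layer of Eq. (10); the identity weight matrix and zero bias are toy values chosen only for illustration:

```python
import numpy as np

def classify(s, W_l, b_l):
    """y = softmax(W_l s + b_l); the argmax gives the predicted activity type."""
    z = W_l @ s + b_l
    e = np.exp(z - z.max())       # shift for numerical stability
    y = e / e.sum()
    return y, int(np.argmax(y))

s = np.array([0.2, -0.1, 0.7])    # a toy fine-grained feature
W_l, b_l = np.eye(3), np.zeros(3) # illustrative parameters
y, pred = classify(s, W_l, b_l)
assert np.isclose(y.sum(), 1.0) and pred == 2
```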
Further, the parameters W and b appearing in formulas (1)-(10) are the variables to be determined. They are fixed by training the network model with the labeled dataset of step (1), which then yields the final hierarchical deep learning model.
To determine the optimal parameter values of the model, the network must be trained with labeled data. In this process, an index is introduced to evaluate the error of the model's classification results, and the model parameters are updated by minimizing this error to obtain the optimal result. The present invention uses the cross-entropy objective function as the error evaluation index, which can be expressed as:
L = −Σi Σj yij·log(ŷij), (11)

where i indexes the i-th group of sensed data, j indexes the j-th activity category, yij is the ground-truth label indicator, and ŷij is the predicted probability. During model training, the labeled data are fed into the input layer; we then use the back-propagation through time (BPTT) algorithm to obtain the derivatives of the objective function with respect to all parameters and minimize the objective function with stochastic gradient descent to determine the optimal parameters.
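The cross-entropy objective can be sketched as follows; averaging over the N windows is an assumption of this sketch (the description does not state whether the sum is normalized):

```python
import numpy as np

def cross_entropy(Y_true, Y_pred, eps=1e-12):
    """Cross-entropy over windows i and activity classes j:
    L = -mean_i sum_j y_ij * log(yhat_ij). eps guards against log(0)."""
    return -np.mean(np.sum(Y_true * np.log(Y_pred + eps), axis=1))

# two windows, three activity classes, one-hot ground truth
Y_true = np.array([[1, 0, 0], [0, 1, 0]], dtype=float)
Y_pred = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]])
loss = cross_entropy(Y_true, Y_pred)
assert loss > 0
assert np.isclose(loss, -(np.log(0.8) + np.log(0.7)) / 2)
```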
To verify the validity of the final model, the model must also be tested; in this process, part of the labeled dataset is likewise used to test the accuracy of the model's activity classification. During model training and testing, the train/test ratio of the dataset was set to 7:3. When the error on the test data falls below a given threshold, the model is considered valid.
Embodiment 2
Application of the above human activity recognition method for wearable sensors in a smartphone:
Smartphones are not only equipped with a wide variety of sensors, such as accelerometers, magnetometers, GPS, and compasses, but also have strong computing, storage, and communication capabilities. A smartphone carried on the person can therefore sense the carrier's behavior, and the above human activity recognition method for wearable sensors can monitor that behavior. For example, suppose user A is an elderly person whose family is too busy with work to look after him. For the elderly, behaviors such as falls and prolonged sitting are the leading threats to health. The family can therefore equip A with a smartphone that monitors A's daily behavior in real time; when a fall is detected, the phone can promptly contact family members and emergency services through default contacts, and when user A is judged to have been sedentary too long, it can remind him to take moderate exercise, greatly improving the quality of life of the elderly in their later years.
The present invention has been described above by way of example. It should be noted that, without departing from the core of the present invention, any simple variation, modification, or other equivalent replacement that a person skilled in the art can make without creative effort falls within the protection scope of the present invention.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910887761.6A | 2019-09-19 | 2019-09-19 | A Human Activity Recognition Method for Wearable Sensors |
| Publication Number | Publication Date |
|---|---|
| CN110664412A | 2020-01-10 |