Technical Field
The invention relates to the design of an intelligent monitoring system for human body recognition. Specifically, a gait feature vector extraction system is first established, gait feature vectors are collected through a Kinect sensor by means of a WPF application interface, and a classification and recognition system is then developed with ASP.NET.
Background Art
With the rapid development of society and the economy, gait recognition technology is widely used in the security field, for example in video surveillance areas such as banks and shopping malls, where it can provide security departments with clues for solving theft cases.
Summary of the Invention
The purpose of the present invention is to develop a gait recognition and classification system that automatically identifies persons appearing in video surveillance areas such as banks and shopping malls.
The purpose of the present invention is achieved in the following manner: first, gait feature vectors are extracted; then the gait feature vectors are screened to find those suitable for classification and recognition; finally, the gait recognition and classification system is designed. The specific algorithm is as follows:
First, extract the gait feature vectors: the three-dimensional coordinates of the human skeleton joints are collected through Kinect by means of the WPF application interface, and the gait feature vectors are computed with the space vector method.
Then, screen the gait feature vectors: the maximum value of each gait feature component within each gait sequence is selected as the corresponding component of the feature vector used for classification and recognition.
Finally, design the gait recognition and classification system: the classification and recognition system is developed with ASP.NET, and classification is realized with the BP neural network algorithm.
The advantageous effects of the present invention are as follows. Using the Kinect sensor as the gait data acquisition device to extract the three-dimensional coordinates of the skeleton joints, and then computing the gait feature vectors with the space vector method, avoids the complex human-body model extraction process, is insensitive to clothing, lighting, and background, and yields very accurate feature vector data. Selecting the maximum value of each gait feature component within each gait sequence as the corresponding component of the classification feature vector reduces the redundancy of the feature data samples and avoids measuring the gait cycle. The gait recognition system based on the BP neural network classification algorithm achieves a recognition rate above 98%.
Description of the Drawings
Figure 1: Overall flowchart of the gait recognition and classification system
Figure 2: Kinect coordinate system
Figure 3: Vector representation of skeleton joints in the Kinect coordinate system
Figure 4: Program interface for extracting gait features
Figure 5: Simplified representation of a two-layer neural network
Figure 6: Flowchart of the BP network algorithm
Figure 7: Target query display interface
Detailed Description
The present invention is described in detail below with reference to the accompanying drawings.
First, extract the gait feature vectors: the three-dimensional coordinates of the human skeleton joints are collected through Kinect by means of the WPF application interface to obtain the gait feature vectors.
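By way of illustration, the following is a minimal C# sketch of how this acquisition step could be implemented, assuming the Kinect for Windows SDK v1.x skeleton-stream API; the class and handler names are illustrative, not part of the patent:

```csharp
using System.Linq;
using Microsoft.Kinect;

// Minimal sketch (Kinect for Windows SDK v1.x assumed): start the sensor,
// subscribe to the skeleton stream, and read joint coordinates per frame.
public class GaitCapture
{
    private KinectSensor sensor;

    public void Start()
    {
        sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;
        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += OnSkeletonFrameReady;
        sensor.Start();
    }

    private void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;
            Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);
            foreach (Skeleton sk in skeletons)
            {
                if (sk.TrackingState != SkeletonTrackingState.Tracked) continue;
                // Each joint exposes its 3D position (in meters) in the
                // Kinect coordinate system, e.g. the head joint:
                SkeletonPoint head = sk.Joints[JointType.Head].Position;
                double x = head.X, y = head.Y, z = head.Z;
                // ... feed the joint coordinates into the feature
                // computations described below.
            }
        }
    }
}
```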
The gait feature vector includes: the lengths of the left and right arms; the lengths of the left and right legs; height; dynamic step length; the angle between the left upper arm and the spine; the angle between the right upper arm and the spine; the angle between the left forearm and the spine; the angle between the right forearm and the spine; the angle between the left upper arm and the left forearm; the angle between the right upper arm and the right forearm; the angle between the left thigh and the left calf; the angle between the right thigh and the right calf; the distances from the upper-body centroid to the left and right arm centroids; and the distances from the upper-body centroid to the left and right leg centroids. The specific method of calculating the gait feature vector with the space vector method is as follows:
An arm consists of the upper arm and the forearm, so the arm length is calculated as follows:
Darm = D1 + D2    Formula (1)
D1 and D2 denote the lengths of the upper arm and the forearm respectively. D1 is obtained from the vector formed by the shoulder joint (Shoulder) and the elbow joint (Elbow), and D2 from the vector formed by the elbow joint (Elbow) and the wrist joint (Wrist). The distance D between any two joint points is calculated as:
D = √((xi − xj)² + (yi − yj)² + (zi − zj)²)    Formula (2)
where xi, yi, zi are the three-dimensional coordinates of the first joint point, and xj, yj, zj are the three-dimensional coordinates of the second joint point.
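A minimal C# sketch of formulas (1) and (2), assuming the SkeletonPoint type of the Kinect SDK v1.x; the choice of the left arm is illustrative:

```csharp
using System;
using Microsoft.Kinect;

static class GaitGeometry
{
    // Formula (2): Euclidean distance between two skeleton joint points.
    public static double JointDistance(SkeletonPoint a, SkeletonPoint b)
    {
        double dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
        return Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Formula (1): arm length Darm = D1 + D2, shown for the left arm.
    public static double LeftArmLength(Skeleton sk)
    {
        SkeletonPoint shoulder = sk.Joints[JointType.ShoulderLeft].Position;
        SkeletonPoint elbow = sk.Joints[JointType.ElbowLeft].Position;
        SkeletonPoint wrist = sk.Joints[JointType.WristLeft].Position;
        return JointDistance(shoulder, elbow) + JointDistance(elbow, wrist);
    }
}
```

Leg length (formula (3)), approximate height (formula (4)), and step length all reduce to further calls of the same JointDistance helper with the corresponding joints.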
A leg consists of the thigh and the calf, so the leg length is calculated as follows:
Dleg = D1 + D2    Formula (3)
Here D1 is the thigh length and D2 is the calf length; D1 is obtained from the vector formed by the hip joint (Hip) and the knee joint (Knee), and D2 from the vector formed by the knee joint (Knee) and the ankle joint (Ankle), where the distance D between joint points is again computed by formula (2).
Because shoes of different styles affect the measured height of a walking person, the distance between the ankle joint (Ankle) and the foot joint (Foot) is excluded when calculating height. An approximate height is obtained by the following formula:
DHeight = (D1 + D2) / 2    Formula (4)
Here D1 is the distance between the head joint (Head) and the left ankle joint (AnkleLeft), and D2 is the distance between the head joint (Head) and the right ankle joint (AnkleRight).
The step length DFoot is obtained from the left ankle joint (AnkleLeft) and the right ankle joint (AnkleRight): the modulus of the vector connecting the two points is the step length, computed by formula (2).
The angles θ1, θ2, θ3, θ4 between the spine and, respectively, the left upper arm, the right upper arm, the left forearm, and the right forearm are obtained by formula (5).
The left and right upper-arm vectors are obtained from the shoulder joint (Shoulder) and the elbow joint (Elbow), the left and right forearm vectors from the elbow joint (Elbow) and the wrist joint (Wrist), and the spine vector from the shoulder center (ShoulderCenter) and the spine joint (Spine).
θ = arccos((AxBx + AyBy + AzBz) / (|A|·|B|))    Formula (5)
where θ is the angle between vectors A and B, with A = (Ax, Ay, Az) and B = (Bx, By, Bz), and |A|, |B| are the moduli of the two vectors.
The angle θ5 between the left upper arm and the left forearm, θ6 between the right upper arm and the right forearm, θ7 between the left thigh and the left calf, and θ8 between the right thigh and the right calf can also be obtained by formula (5).
Here the left and right upper-arm vectors are obtained from the shoulder joint (Shoulder) and the elbow joint (Elbow), the left and right forearm vectors from the elbow joint (Elbow) and the wrist joint (Wrist), the left and right thigh vectors from the hip joint (Hip) and the knee joint (Knee), and the left and right calf vectors from the knee joint (Knee) and the ankle joint (Ankle).
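As a sketch of formula (5) in C#, the angle between two joint-defined vectors could be computed as follows; the specific joints shown (the left elbow angle θ5) are illustrative:

```csharp
using System;
using Microsoft.Kinect;

static class GaitAngles
{
    // Formula (5): angle between two vectors given by their components,
    // returned in degrees.
    public static double AngleBetween(double ax, double ay, double az,
                                      double bx, double by, double bz)
    {
        double dot = ax * bx + ay * by + az * bz;
        double normA = Math.Sqrt(ax * ax + ay * ay + az * az);
        double normB = Math.Sqrt(bx * bx + by * by + bz * bz);
        return Math.Acos(dot / (normA * normB)) * 180.0 / Math.PI;
    }

    // Illustrative use: θ5, the angle between the left upper arm and the
    // left forearm, with both vectors rooted at the elbow joint.
    public static double LeftElbowAngle(Skeleton sk)
    {
        SkeletonPoint s = sk.Joints[JointType.ShoulderLeft].Position;
        SkeletonPoint e = sk.Joints[JointType.ElbowLeft].Position;
        SkeletonPoint w = sk.Joints[JointType.WristLeft].Position;
        return AngleBetween(s.X - e.X, s.Y - e.Y, s.Z - e.Z,
                            w.X - e.X, w.Y - e.Y, w.Z - e.Z);
    }
}
```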
The distances Dc1 and Dc2 from the upper-body centroid Cupper to the centroids of the left and right arms are obtained by formula (9), the Euclidean distance between the two centroids in the same form as formula (2).
The distances Dc3 and Dc4 from the upper-body centroid Cupper to the left and right leg centroids Cleg can likewise be obtained in the form of formula (9).
The upper body is enclosed by the left shoulder (ShoulderLeft), the right shoulder (ShoulderRight), the shoulder center (ShoulderCenter), the left hip (HipLeft), and the right hip (HipRight), so the three-dimensional coordinates of the upper-body centroid are obtained by formula (10):
XupperC = (1/5)·Σxi, YupperC = (1/5)·Σyi, ZupperC = (1/5)·Σzi, i = 1…5    Formula (10)
where XupperC, YupperC, ZupperC are the three-dimensional coordinates of the upper-body centroid Cupper, and xi, yi, zi are the x, y, and z values in the Kinect coordinate system of, in order, the left shoulder, the right shoulder, the shoulder center, the left hip, and the right hip, with i = 1…5.
Each arm is enclosed by the shoulder joint (Shoulder), the elbow joint (Elbow), and the wrist joint (Wrist), so the three-dimensional coordinates of the left and right arm centroids are obtained by formula (11):
XarmC = (1/3)·Σxj, YarmC = (1/3)·Σyj, ZarmC = (1/3)·Σzj, j = 1, 2, 3    Formula (11)
where XarmC, YarmC, ZarmC are the three-dimensional coordinates of the arm centroid Carm, and xj, yj, zj are the x, y, and z values of the three-dimensional coordinates of, in order, the shoulder joint, the elbow joint, and the wrist joint, with j = 1, 2, 3.
Each leg is enclosed by the hip joint (Hip), the knee joint (Knee), and the ankle joint (Ankle), and the three-dimensional coordinates of the left and right leg centroids are computed in the form of formula (11).
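Since both centroids in formulas (10) and (11) are arithmetic means of the enclosing joint coordinates, one C# helper covers the upper body (five joints) as well as the arms and legs (three joints each); this is a sketch only:

```csharp
using Microsoft.Kinect;

static class GaitCentroids
{
    // Formulas (10)/(11): the centroid of a body part is the arithmetic
    // mean of the coordinates of the joints that enclose it.
    public static SkeletonPoint Centroid(params SkeletonPoint[] joints)
    {
        SkeletonPoint c = new SkeletonPoint();
        foreach (SkeletonPoint p in joints)
        {
            c.X += p.X; c.Y += p.Y; c.Z += p.Z;
        }
        c.X /= joints.Length; c.Y /= joints.Length; c.Z /= joints.Length;
        return c;
    }

    // Formula (10): upper-body centroid from its five enclosing joints.
    public static SkeletonPoint UpperBodyCentroid(Skeleton sk)
    {
        return Centroid(sk.Joints[JointType.ShoulderLeft].Position,
                        sk.Joints[JointType.ShoulderRight].Position,
                        sk.Joints[JointType.ShoulderCenter].Position,
                        sk.Joints[JointType.HipLeft].Position,
                        sk.Joints[JointType.HipRight].Position);
    }
}
```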
The gait feature vector obtained in this way has 18 dimensions, which is relatively high; considering the efficiency and accuracy of the system, it is necessary to reduce its dimensionality. Since the human body is left-right symmetric, some features can be represented by the average of their left and right values: for example, arm length can be represented by the average of the left and right arm lengths, and the angle between the upper arm and the forearm by the average of the left and right elbow angles. The final gait feature set is therefore 10-dimensional, comprising arm length, leg length, height, the stride span between the two feet, the angle between the upper arm and the spine, the angle between the forearm and the spine, the angle between the upper arm and the forearm, the angle between the thigh and the calf, the distance from the upper-body centroid to the arm centroid, and the distance from the upper-body centroid to the leg centroid.
Then, screen the gait feature vectors: the maximum value of each gait feature component within each gait sequence is selected as the corresponding component of the feature vector used for classification and recognition.
During walking, the maximum value of each gait feature component is unique and can therefore serve as a component of the feature classification vector that identifies a human target. The WPF program selects, from the frames of each gait sequence, the maximum value of every feature component as the corresponding component value of that sequence's feature vector. For example, five video sequences are collected for each person, so the height component takes five values, one per video sequence, each used for classification and recognition. This reduces the redundancy of the feature vector data and avoids measuring the gait cycle.
The walking range in front of the Kinect is about 2 meters. Experiments determined that a person covers this distance in about 2 seconds on average, and at a frame rate of 30 frames per second about 60 gait frames can be collected over this distance. Each gait sequence is therefore set to 60 frames in the program, and the maximum value of each gait feature component within it is selected to form the feature vector set for classification and recognition. The classification vector is denoted Fclasify:
Fclasify = {DMaxarm, DMaxleg, DMaxHeight, DMaxFoot, θMax1, θMax2, θMax3, θMax4, DMaxc1, DMaxc2}    Formula (12)
Within each gait sequence, DMaxarm is the maximum arm length, DMaxleg the maximum leg length, DMaxHeight the maximum height, DMaxFoot the maximum step length, θMax1 the maximum angle between the upper arm and the spine, θMax2 the maximum angle between the forearm and the spine, θMax3 the maximum angle between the upper arm and the forearm, θMax4 the maximum angle between the thigh and the calf, DMaxc1 the maximum distance from the upper-body centroid to the arm centroid, and DMaxc2 the maximum distance from the upper-body centroid to the leg centroid.
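A minimal sketch of this screening step in C#, assuming each frame has already been reduced to a 10-component double[] in the order of formula (12):

```csharp
using System.Collections.Generic;
using System.Linq;

static class GaitScreening
{
    // Formula (12): from one 60-frame gait sequence, keep the per-component
    // maximum of the 10-dimensional per-frame feature vectors.
    public static double[] BuildClassifyVector(IReadOnlyList<double[]> frames)
    {
        const int Dim = 10;
        double[] fclasify = new double[Dim];
        for (int k = 0; k < Dim; k++)
            fclasify[k] = frames.Max(frame => frame[k]);
        return fclasify;
    }
}
```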
Finally, design the gait recognition and classification system: the classification and recognition system is developed with ASP.NET, and classification is realized with the BP neural network algorithm.
The system is built under the Windows 10 operating system: a SQL Server 2008 database is connected to Visual Studio 2010, the ASP.NET development environment is set up, and the classification system is developed in C#.
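As an illustration only, persisting one classification vector from the ASP.NET side into SQL Server might look like the following ADO.NET sketch; the table name GaitFeatures, its columns, and the connection string are hypothetical, not specified by the patent:

```csharp
using System.Data.SqlClient;

static class GaitStorage
{
    // Hypothetical sketch: store part of one classification vector
    // (formula (12)) for a known subject. Table and column names are
    // illustrative assumptions.
    public static void SaveFeatureVector(string connectionString,
                                         int subjectId, double[] f)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO GaitFeatures (SubjectId, ArmLen, LegLen, Height, StepLen) " +
            "VALUES (@id, @arm, @leg, @h, @step)", conn))
        {
            cmd.Parameters.AddWithValue("@id", subjectId);
            cmd.Parameters.AddWithValue("@arm", f[0]);
            cmd.Parameters.AddWithValue("@leg", f[1]);
            cmd.Parameters.AddWithValue("@h", f[2]);
            cmd.Parameters.AddWithValue("@step", f[3]);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```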
The specific implementation steps of the BP neural network classification algorithm are as follows:
Step 1: after the structure of the neural network has been determined, set the weights and biases for the k-th iteration of the network to Wm(k) and bm(k);
Step 2: after the neural network receives the input vector, compute its actual output vector a from the current weights and biases:
a0(k) = p    Formula (13)
am+1(k) = fm+1(Wm+1(k)am(k) + bm+1(k)), m = 0, 1, …, M−1    Formula (14)
a(k) = aM(k)    Formula (15)
Here a0 is the input vector p that the network receives from the external environment, M is the number of layers, am is the actual output of the m-th layer, and aM is the final actual output of the whole network.
Step 3: compute the mean square error from the target output vector and the actual output vector, and judge whether it has reached the preset level. If so, stop the calculation; if not, continue as follows:
F̂(x) = (t − a)ᵀ(t − a)    Formula (16)
where F̂(x) is the approximate mean square error, t is the target output vector, and a is the actual output vector.
Step 4: compute the sensitivity sm of each layer from the target output vector and the actual output vector, propagating backwards from the last layer:
sM = −2ḞM(nM)(t − a)    Formula (17)
sm = Ḟm(nm)(Wm+1)ᵀsm+1, m = M−1, …, 1    Formula (18)
Ḟm(nm) = diag(ḟm(n1m), ḟm(n2m), …, ḟm(nSm))    Formula (19)
where Ḟm(nm) is the diagonal matrix of the derivatives of the m-th layer's transfer function evaluated at its net input nm.
Step 5: substitute the computed sensitivities into the approximate steepest descent formulas to obtain the weights and biases for the (k+1)-th iteration:
Wm(k+1) = Wm(k) − αsm(k)(am−1(k))ᵀ    Formula (20)
bm(k+1) = bm(k) − αsm(k)    Formula (21)
where α is the learning rate.
Step 6: return to the first step and repeat the calculation.
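To make steps one through five concrete, here is a self-contained C# sketch of a single BP iteration for a two-layer network in the spirit of formulas (13) through (21); the layer sizes, the log-sigmoid hidden layer, and the linear output layer are assumptions, since the patent does not fix the transfer functions:

```csharp
using System;

static class BpNetwork
{
    // One BP iteration: forward pass, sensitivities, steepest-descent
    // update. W1: hidden x input, W2: output x hidden; alpha = learning rate.
    public static void Iterate(double[,] W1, double[] b1,
                               double[,] W2, double[] b2,
                               double[] p, double[] t, double alpha)
    {
        int h = b1.Length, o = b2.Length, n = p.Length;

        // Forward pass (formulas (13)-(15)): a1 = logsig(W1 p + b1), a2 = W2 a1 + b2.
        double[] a1 = new double[h];
        for (int i = 0; i < h; i++)
        {
            double s = b1[i];
            for (int j = 0; j < n; j++) s += W1[i, j] * p[j];
            a1[i] = 1.0 / (1.0 + Math.Exp(-s));
        }
        double[] a2 = new double[o];
        for (int i = 0; i < o; i++)
        {
            double s = b2[i];
            for (int j = 0; j < h; j++) s += W2[i, j] * a1[j];
            a2[i] = s;
        }

        // Output-layer sensitivity: s2 = -2(t - a2) for a linear output layer.
        double[] s2 = new double[o];
        for (int i = 0; i < o; i++) s2[i] = -2.0 * (t[i] - a2[i]);

        // Hidden-layer sensitivity: s1 = diag(a1(1 - a1)) W2ᵀ s2.
        double[] s1 = new double[h];
        for (int j = 0; j < h; j++)
        {
            double s = 0.0;
            for (int i = 0; i < o; i++) s += W2[i, j] * s2[i];
            s1[j] = a1[j] * (1.0 - a1[j]) * s;
        }

        // Steepest-descent updates (formulas (20)-(21)).
        for (int i = 0; i < o; i++)
        {
            for (int j = 0; j < h; j++) W2[i, j] -= alpha * s2[i] * a1[j];
            b2[i] -= alpha * s2[i];
        }
        for (int i = 0; i < h; i++)
        {
            for (int j = 0; j < n; j++) W1[i, j] -= alpha * s1[i] * p[j];
            b1[i] -= alpha * s1[i];
        }
    }
}
```

Iterating this update over the stored 10-dimensional classification vectors until the mean square error of formula (16) falls below the preset level yields the trained classifier.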
Finally, it should be noted that the above descriptions are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.