




Technical Field
The present invention belongs to the technical field of gesture recognition, and in particular relates to a real-time WiFi-signal gesture recognition method that allows user authentication.
Background Art
A gesture generally refers to a movement of the limbs used to express thoughts, emotions, or attitudes. In daily life, gestures are also a very important form of non-verbal communication. Even in the broader context of human-computer interaction, we still rely on gestures to convey messages, commands, and even emotions to the computers around us.
Human gesture recognition is at the core of applications such as smart homes, security monitoring, smart education, and virtual reality. Many gesture recognition solutions already exist, including camera-based gesture recognition, wearable-sensor-based gesture recognition, Doppler-radar-based gesture recognition, and WiFi-signal-based gesture recognition.
For camera-based gesture recognition, although the recognition accuracy is satisfactory, the need for favorable lighting conditions and the potential privacy concerns hinder its widespread deployment. Gesture recognition based on wearable sensors has not been widely adopted because wearable devices are expensive and inconvenient to carry. Gesture recognition based on Doppler radar mainly uses the Doppler effect to detect motion and then recovers the corresponding gesture through machine learning; however, since radar equipment is currently the most expensive to deploy and operate, it is difficult to apply on a large scale in daily life. Gesture recognition based on WiFi signals uses the CSI (Channel State Information) of WiFi to realize gesture recognition; the cost and complexity of deploying a WiFi-based gesture recognition system are almost negligible, since only two commercial WiFi devices are needed, so it can be applied much more widely in daily life.
In current WiFi-signal-based gesture recognition, the pursuit of higher recognition accuracy often makes the algorithms too complex to run in real time, which makes them difficult to apply in practical systems. Moreover, most algorithms rarely consider user identification, which is also a problem that needs to be solved in practical applications.
Summary of the Invention
In order to solve the above problems in the prior art, the present invention provides a real-time WiFi-signal gesture recognition method that allows user authentication. The technical problem to be solved by the present invention is achieved through the following technical solutions:
The present invention provides a real-time WiFi-signal gesture recognition method that allows user authentication, comprising:
Step 1: collecting CSI data in different environments and preprocessing the CSI data to obtain a gesture-execution CSI data set;
Step 2: extracting the corresponding Doppler spectrograms from the gesture-execution CSI data set;
Step 3: constructing the corresponding arm motion acceleration model according to the Doppler spectrograms;
Step 4: constructing a dual-task deep neural network for user recognition and gesture recognition;
Step 5: inputting the arm motion acceleration model into the dual-task deep neural network as training samples to train the network;
Step 6: using the trained collaborative dual-task deep neural network to realize user recognition and gesture recognition.
In an embodiment of the present invention, the gesture-execution CSI data set includes the CSI data corresponding to multiple users performing multiple gestures in different environments.
In an embodiment of the present invention, Step 1 includes:
Step 1.1: collecting CSI data in different environments and denoising the CSI data to obtain denoised CSI data;
Step 1.2: extracting from the denoised CSI data according to a preset segmentation threshold to obtain the gesture-execution CSI data set.
In an embodiment of the present invention, Step 1.1 includes:
Step 1.1.1: removing high-frequency noise interference from the CSI data using a fast Fourier transform and a low-pass filter;
Step 1.1.2: removing static components, low-frequency interference, and burst noise from the CSI data using a Butterworth band-pass filter;
Step 1.1.3: eliminating the phase offset in the CSI data by conjugate multiplication of the CSI data of two antennas to obtain the denoised CSI data.
In an embodiment of the present invention, Step 1.2 includes:
Step 1.2.1: performing time-frequency analysis on the denoised CSI data to obtain the corresponding denoised Doppler spectrograms;
Step 1.2.2: calculating the variance of the amplitude in the frequency domain of the denoised Doppler spectrograms, and taking the denoised CSI data corresponding to the denoised Doppler spectrograms whose variance is smaller than the preset segmentation threshold as the gesture-execution CSI data set.
In an embodiment of the present invention, Step 2 includes:
Step 2.1: performing dimensionality reduction and compression on each piece of gesture-execution CSI data in the gesture-execution CSI data set using principal component analysis, and extracting the multiple principal components corresponding to each piece of gesture-execution CSI data;
Step 2.2: performing time-frequency analysis on each principal component to obtain the corresponding Doppler spectrogram.
In an embodiment of the present invention, Step 3 includes:
Step 3.1: obtaining the dominant-power carving path and the power-boundary carving path corresponding to the Doppler spectrogram using a seam carving algorithm;
Step 3.2: constructing the corresponding arm motion acceleration model according to the dominant-power carving path and the power-boundary carving path, so as to fill the gap between the body-part acceleration sequences and the power distribution of the Doppler spectrogram;
wherein the arm motion acceleration model is:

$$\mathrm{AMAM}(t)=\sum_{i=1}^{F_D}\omega_i\,\tilde{P}_{ds}\big(f_D^{(i)},t\big)$$

where $\tilde{P}_{ds}(f_D,t)$ denotes the model obtained after applying a Gaussian distribution to $P_{ds}(f_D,t)$, $P_{ds}(f_D,t)$ denotes the model of the relationship between the power $P_{ds}$ in the Doppler spectrogram and the superposition of body parts, $f_D$ denotes the actual Doppler shift frequency extracted from the CSI signal, $F_D$ denotes the number of frequency bins of the short-time Fourier transform, $i$ indexes the frequency bins, $\omega_i$ denotes the weight of $\tilde{P}_{ds}(f_D^{(i)},t)$, and $t$ denotes time.
In an embodiment of the present invention, the model of the relationship between the power $P_{ds}$ in the Doppler spectrogram and the superposition of body parts is:

$$P_{ds}(f_D,t)=c\sum_{k=1}^{K}\mathrm{Ref}(k,t)\,\delta\big(f_D-f_{dfs}(k,t)\big)$$

where $c$ denotes the scaling factor caused by propagation loss, $K$ denotes the number of body parts that define the gesture, $\mathrm{Ref}(k,t)$ denotes the single reflection area $S$ of the $k$-th body part at time $t$, $f_{dfs}(k,t)$ denotes the Doppler shift frequency of the $k$-th body part at time $t$, and $\delta(\cdot)$ expresses that each body part contributes power only at its own Doppler frequency.
In an embodiment of the present invention, the dual-task deep neural network includes a feature extraction module, a temporal modeling module, a concatenation module, and a recognition module, wherein:
the feature extraction module includes a first feature extraction unit and a second feature extraction unit, the first feature extraction unit being configured to extract the gesture spatial feature sequence from the input arm motion acceleration model, and the second feature extraction unit being configured to extract the user spatial feature sequence from the input arm motion acceleration model;
the temporal modeling module includes a first temporal modeling unit and a second temporal modeling unit connected to the corresponding first feature extraction unit and second feature extraction unit, respectively; the first temporal modeling unit is configured to perform time-series analysis on the input gesture spatial feature sequence to obtain the corresponding gesture temporal feature sequence, and the second temporal modeling unit is configured to perform time-series analysis on the input user spatial feature sequence to obtain the corresponding user temporal feature sequence;
the concatenation module is connected to the first temporal modeling unit and the second temporal modeling unit, respectively, and is configured to concatenate the input gesture temporal feature sequence and user temporal feature sequence to obtain a concatenated feature sequence;
the recognition module includes a user recognition unit and a gesture recognition unit, which are respectively configured to recognize the input concatenated feature sequence to obtain a gesture recognition result and a user recognition result.
Compared with the prior art, the beneficial effects of the present invention are as follows:
The real-time WiFi-signal gesture recognition method allowing user authentication of the present invention denoises and transforms the CSI signal to find raw features that reflect motion changes, then builds from these raw features an arm motion acceleration model that satisfies the real-time requirement and has cross-domain capability, and uses the constructed dual-task deep neural network for user recognition and gesture recognition to realize simultaneous recognition of the gesture and of the user performing the gesture.
The above description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features, and advantages of the present invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a real-time WiFi-signal gesture recognition method allowing user authentication provided by an embodiment of the present invention;
FIG. 2 is a flowchart of a real-time WiFi-signal gesture recognition method allowing user authentication provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of the carving paths of a Doppler spectrogram provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a dual-task deep neural network provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a data collection environment provided by an embodiment of the present invention.
Detailed Description of the Embodiments
In order to further explain the technical means and effects adopted by the present invention to achieve the intended purpose, a real-time WiFi-signal gesture recognition method allowing user authentication proposed according to the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The foregoing and other technical contents, features, and effects of the present invention will be clearly presented in the following detailed description of specific embodiments with reference to the accompanying drawings. Through the description of the specific embodiments, the technical means and effects adopted by the present invention to achieve the intended purpose can be understood more deeply and concretely; however, the accompanying drawings are provided for reference and illustration only and are not intended to limit the technical solution of the present invention.
Embodiment 1
First, the basic principle of why CSI data can be used for WiFi sensing is introduced. Channel state information (CSI) data describes how a wireless signal propagates from the transmitter to the receiver at a specific carrier frequency. The amplitude and phase of CSI are affected by multipath effects, including amplitude attenuation and phase shift. The CSI in each data packet represents the channel frequency response (CFR), which can be expressed as:

$$H(f,t)=\sum_{i} a_i(t)\,e^{-j2\pi f\tau_i(t)}$$

where $a_i(t)$ is the amplitude attenuation factor of the $i$-th path, $\tau_i(t)$ is its propagation delay, and $f$ is the carrier frequency.
Since the amplitude |H| and phase ∠H of CSI are affected by the positions of the transmitter and the receiver as well as by the movement of surrounding objects and people, CSI can capture the wireless characteristics of the nearby environment. With the aid of mathematical modeling or machine learning algorithms, these characteristics can be used to realize gesture recognition.
A wireless channel using MIMO (Multiple-Input Multiple-Output) is divided into multiple subcarriers by OFDM (Orthogonal Frequency-Division Multiplexing). To measure CSI, the WiFi transmitter sends Long Training Symbols (LTFs), which contain predefined symbols for each subcarrier, at the beginning of each data packet. Upon receiving the LTFs, the WiFi receiver estimates the CSI matrix from the received signal and the original LTFs. For each subcarrier, the WiFi channel is modeled as y = Hx + n, where y is the received signal, x is the transmitted signal, H is the CSI matrix, and n is the noise vector. Using the predefined signal x and the received signal y, the receiver estimates the CSI matrix H after receive processing such as cyclic-prefix removal, demapping, and OFDM demodulation; the estimated CSI is a complex-valued three-dimensional matrix.
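For illustration only, the following minimal Python/NumPy sketch (not part of the original specification) shows how amplitude and phase can be separated from a complex-valued CSI matrix; the array shape (N receive antennas × M transmit antennas × T subcarriers) follows the description above, and random data stands in for a real measurement.

```python
import numpy as np

# Hypothetical dimensions: N receive antennas, M transmit antennas, T subcarriers
N, M, T = 3, 1, 30

# Stand-in for one measured CSI matrix H = a + bi (complex-valued, N x M x T)
rng = np.random.default_rng(0)
H = rng.standard_normal((N, M, T)) + 1j * rng.standard_normal((N, M, T))

amplitude = np.abs(H)    # |H|: amplitude attenuation per subcarrier
phase = np.angle(H)      # angle(H): phase shift per subcarrier

print(amplitude.shape, phase.shape)  # (3, 1, 30) (3, 1, 30)
```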
Referring to FIG. 1 and FIG. 2 together, FIG. 1 is a schematic diagram of a real-time WiFi-signal gesture recognition method allowing user authentication provided by an embodiment of the present invention, and FIG. 2 is a flowchart of the method. As shown in the figures, the real-time WiFi-signal gesture recognition method allowing user authentication of this embodiment includes:
Step 1: collecting CSI data in different environments and preprocessing the CSI data to obtain a gesture-execution CSI data set.
In a real WiFi system, the measured CSI is affected by the multipath channel, transceiver processing, and hardware/software errors. The measured baseband-to-baseband CSI can be written as:

$$H_{i,j}(f_k,t)=\Bigg(\sum_{n} a_{i,j,n}(t)\,e^{-j2\pi\frac{d_{i,j,n}(t)}{\lambda_k}}\Bigg)\cdot e^{-j2\pi f_k\tau_i}\cdot e^{-j2\pi f_k\rho}\cdot e^{-j2\pi f_k\eta}\cdot q_{i,j}\,e^{j\zeta_{i,j}}$$

where the first factor is the multipath channel, the second is cyclic shift diversity, the third is the sampling time offset, the fourth is the sampling frequency offset, and the last is beamforming. In the first factor, $d_{i,j,n}$ is the length of the $n$-th path from the $i$-th transmit antenna to the $j$-th receive antenna, $f_k$ is the carrier frequency (with wavelength $\lambda_k$), $\tau_i$ is the Cyclic Shift Diversity (CSD) delay of the $i$-th transmit antenna, $\rho$ is the Sampling Time Offset (STO), $\eta$ is the Sampling Frequency Offset (SFO), and $q_{i,j}$ and $\zeta_{i,j}$ are the amplitude attenuation and phase shift of the beamforming matrix, respectively. WiFi sensing applications need to extract the multipath channel, which carries the information about changes in the surrounding environment, so corresponding signal processing techniques are required to eliminate the influence of CSD, STO, SFO, and beamforming.
The key fact on which motion-signal recovery relies is that the changes caused by body motion are correlated across different CSI streams; the reason for this correlation is that the CSI streams of different subcarriers are linear combinations of the same set of time-varying signals. Suppose a person moves a small distance between time 0 and time t, causing a change $\Delta_k(t)$ in the length of signal path k; in this case $d_k(t)=\Delta_k(t)+d_k(0)$, where $d_k(0)$ is the initial length of the path. When the initial phase offset is $\theta_k$, the phase at subcarrier s in Equation (2) seen by the receiver at time t is:

$$\phi_{s,k}(t)=\frac{2\pi}{\lambda_s}\big(d_k(0)+\Delta_k(t)\big)+\theta_k$$

where $\lambda_s$ is the wavelength of subcarrier s.
The time series of CSI matrices describes the variation of the MIMO channel in different domains such as time, frequency, and space. For a MIMO-OFDM channel with M transmit antennas, N receive antennas, and T subcarriers, the CSI matrix is a three-dimensional matrix H ∈ C^(N×M×T) that represents the amplitude attenuation and phase shift of the multipath channel, where each element H = a + bi is a complex number representing the amplitude and phase of one subcarrier; the modulus and argument of the complex number correspond to the amplitude and phase information, respectively. CSI provides more information than the Received Signal Strength Indicator (RSSI). The three-dimensional CSI matrix is similar to a digital image with an N×M spatial resolution and T color channels (one channel per subcarrier), so the four-dimensional CSI tensor over time provides additional information in the time domain. In practice, the collected CSI data need to be preprocessed before the subsequent recognition operations.
In this embodiment, Step 1 includes:
Step 1.1: collecting CSI data in different environments and denoising the CSI data to obtain denoised CSI data.
In this embodiment, the purpose of denoising the CSI data is to remove irrelevant signal interference while preserving the gesture motion information.
Typical human activities only cause CSI variations at frequencies below 300 Hz. The raw CSI contains significant static components, low-frequency interference, and burst noise, which blur the Doppler frequency shifts that carry the motion information. In real commercial WiFi systems, the raw CSI measurements also contain phase offsets caused by hardware and software errors.
In this embodiment, in order to remove the above interference, the following operations are performed:
Step 1.1.1: removing high-frequency noise interference from the CSI data using a fast Fourier transform and a low-pass filter;
Step 1.1.2: removing static components, low-frequency interference, and burst noise from the CSI data using a Butterworth band-pass filter;
Step 1.1.3: eliminating the phase offset in the CSI data by conjugate multiplication of the CSI data of two antennas to obtain the denoised CSI data.
It should be noted that, since different antennas on the same WiFi NIC share the same RF oscillator, the time-varying random phase offsets of different antennas are identical. Therefore, the time-varying phase offset is eliminated by computing the conjugate multiplication of the CSI of two antennas on the same WiFi NIC, as in Equation (4); this filters out out-of-band noise, quasi-static offsets, and random offsets, and retains only the prominent multipath components with non-zero DFS:

$$x_{cm}=x_1\,\overline{x_2}
=\underbrace{x_{s,1}\overline{x_{s,2}}}_{\text{①}}
+\underbrace{\sum_{l\in G_{m1}}\sum_{m\in G_{m2}}x_{d,1}^{(l)}\,\overline{x_{d,2}^{(m)}}}_{\text{②}}
+\underbrace{\sum_{l\in G_{m1}}x_{d,1}^{(l)}\,\overline{x_{s,2}}}_{\text{③}}
+\underbrace{\sum_{m\in G_{m2}}x_{s,1}\,\overline{x_{d,2}^{(m)}}}_{\text{④}}$$

where $x_{cm}$ is the output after conjugate multiplication, $x_1$ is the CSI of the first antenna, $\overline{x_2}$ is the conjugate of the CSI of the second antenna, $x_{s}$ and $x_{d}$ denote the static-path and dynamic-path components, and $G_{m1}$ and $G_{m2}$ are the sets of moving paths of the first and second antennas, respectively.
In the above equation, the product of the static path components of the two antennas is labeled ①; it can be regarded as a constant over a short period of time and does not contain the Doppler velocity information of interest. However, the power of the static component can be very high, because it contains the strong direct-path signal. To avoid interference of the static component with the Doppler velocity estimation, the static component is removed by subtracting the mean from the conjugate multiplication.
In the above equation, the product of the moving-path components is labeled ②; it is a very small value and can be ignored. Terms ③ and ④ are the two products of the static path component of one antenna and the dynamic path component of the other antenna, and these two terms contain the Doppler velocity information. Since two adjacent antennas have similar multipath, the Doppler velocity information in these two terms has similar values but opposite directions. The goal is to obtain the Doppler velocity from the product of the dynamic path component of the first antenna and the static path component of the second antenna, i.e., term ③. Therefore, the power of the static path component of the first antenna is reduced by subtracting a value α, and the power of the static path component of the second antenna is increased by adding a value β. After this power adjustment, the term that contains the correct Doppler velocity information has higher power in the multiplication output and can be identified in the spectrogram.
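As an illustration of Steps 1.1.1 to 1.1.3, the following is a minimal Python sketch (NumPy/SciPy) of the denoising chain described above: a Butterworth band-pass stage and a conjugate-multiplication stage with mean subtraction and an α/β power adjustment. The cutoff frequencies, the α and β values, and the array layout are illustrative assumptions, not values taken from the specification.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, low=2.0, high=300.0, order=4):
    """Butterworth band-pass along the time axis; removes the static/DC
    component, low-frequency drift, and high-frequency noise."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=0)

def conjugate_multiply(csi_ant1, csi_ant2, alpha=0.5, beta=0.5):
    """Conjugate multiplication of two antennas on the same NIC to cancel the
    common time-varying phase offset (illustrative alpha/beta adjustment)."""
    x1 = csi_ant1 - alpha * np.mean(csi_ant1, axis=0, keepdims=True)  # weaken static path of antenna 1
    x2 = csi_ant2 + beta * np.mean(csi_ant2, axis=0, keepdims=True)   # strengthen static path of antenna 2
    xcm = x1 * np.conj(x2)
    return xcm - np.mean(xcm, axis=0, keepdims=True)  # remove the residual static term ①

# Example with synthetic data: 1000 Hz packet rate, 3 s, 30 subcarriers
fs, T, S = 1000, 3000, 30
rng = np.random.default_rng(1)
ant1 = rng.standard_normal((T, S)) + 1j * rng.standard_normal((T, S))
ant2 = rng.standard_normal((T, S)) + 1j * rng.standard_normal((T, S))

xcm = conjugate_multiply(ant1, ant2)
denoised = bandpass(xcm.real, fs) + 1j * bandpass(xcm.imag, fs)
```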
Step 1.2: extracting from the denoised CSI data according to a preset segmentation threshold to obtain the gesture-execution CSI data set.
In a real-time setting, there is no need to process the raw CSI measurements at every moment; doing so would not only waste resources but also introduce unnecessary recognition errors. Processing is only required when human gesture activity is detected in the current environment.
When the subject is stationary, no Doppler effect is observed and the spectrogram contains only noise, so the energy of the spectrogram is spread over the entire frequency band. Conversely, when the subject moves, the spectrogram is dominated by the Doppler effect and its energy is concentrated at the frequencies of interest. Therefore, the variance of the energy distribution in the frequency domain can be computed, smoothed, and used for motion detection.
It is worth noting that only the filter used in the CSI denoising stage affects this variance, because the filter determines to what extent the out-of-band components are removed. Therefore, for a fixed filter it is feasible to use a preset threshold: when the variance falls below the threshold, movement of the subject is detected.
Specifically, Step 1.2 includes:
Step 1.2.1: performing time-frequency analysis on the denoised CSI data to obtain the corresponding denoised Doppler spectrogram;
Step 1.2.2: calculating the variance of the amplitude in the frequency domain of the denoised Doppler spectrogram, and taking the denoised CSI data corresponding to the denoised Doppler spectrograms whose variance is smaller than the preset segmentation threshold as the gesture-execution CSI data set.
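The following is a minimal sketch of the threshold-based motion detection in Steps 1.2.1 and 1.2.2, assuming a denoised CSI stream and an empirically chosen threshold; the STFT parameters, the per-frame normalization, and the threshold value are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

def detect_motion(csi_stream, fs=1000, threshold=0.05, win_s=0.1):
    """Return a boolean mask over STFT frames where gesture motion is detected.

    The variance of the normalized amplitude along the frequency axis is
    compared with a preset segmentation threshold, following Step 1.2.2.
    """
    nperseg = int(win_s * fs)
    f, t, Z = stft(csi_stream, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    power = np.abs(Z)                                            # amplitude of the Doppler spectrogram
    power = power / (power.sum(axis=0, keepdims=True) + 1e-12)   # normalize per frame
    var_per_frame = power.var(axis=0)                            # variance over the frequency axis
    return var_per_frame < threshold, t                          # detection when variance is below threshold

# Usage with the denoised signal from the previous sketch:
# mask, frame_times = detect_motion(denoised[:, 0].real)
```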
In this embodiment, the gesture-execution CSI data set includes the CSI data corresponding to multiple users performing multiple gestures in different environments.
Step 2: extracting the corresponding Doppler spectrograms from the gesture-execution CSI data set.
Since the raw CSI measurements contain irrelevant or redundant signals, the signals of the gesture-execution CSI data set need to be compressed to a certain extent, and the motion information present in the different principal components is then captured by a time-frequency analysis method.
Specifically, Step 2 includes:
Step 2.1: performing dimensionality reduction and compression on each piece of gesture-execution CSI data in the gesture-execution CSI data set using principal component analysis, and extracting the multiple principal components corresponding to each piece of gesture-execution CSI data.
Across all CSI streams spanning different antenna pairs and different subcarriers, the peaks and valleys have similar shapes, and the phase of the CSI streams varies smoothly among the subcarriers of the same antenna pair. Moreover, no single CSI stream can accurately reflect the motion information, which means that a method is needed to combine the different CSI streams so as to obtain the values that best reflect the motion information. Simply applying a weighted average to the CSI streams cannot provide good results, because different CSI streams have different phases; if they are added, they may cancel each other out, that is, a peak of one CSI stream may coincide with a valley of another. Therefore, a better way to combine the CSI streams is needed so that the combination produces the values that best reflect the motion information.
In this embodiment, PCA (Principal Component Analysis) is used to discover the correlations between the CSI streams. Through PCA, the time-varying correlations between the CSI streams can be tracked, and the streams can be combined optimally to extract the principal components of the CSI streams.
PCA is applied to the CSI streams mainly through the following four steps:
1. Preprocessing: first, the corresponding constant offset is subtracted from each CSI stream to remove the static path component. The constant offset of each stream is computed by long-term averaging of that stream, e.g., computing the average CSI amplitude over 4 seconds. Then, the CSI streams are cut into blocks containing the samples acquired within one-second intervals, and the blocks of different CSI streams are arranged in columns to form a matrix H. In this embodiment, the interval size is chosen to be 1 second so that the distance moved by the subject is short while the number of samples is large enough to ensure accurate correlation estimation.
2. Correlation estimation: the correlation matrix is computed as H^T × H; its dimension is N×N, where N is the number of CSI streams. For example, one Tx-Rx link contains 30 subcarriers, i.e., N = 30 for one Tx-Rx link.
3. Eigendecomposition: eigendecomposition is performed on the correlation matrix to compute the eigenvectors.
4. Motion signal reconstruction: the principal components are constructed using the equation h_i = H × q_i, where q_i and h_i are the i-th eigenvector and the i-th principal component, respectively.
In this embodiment, the first principal component h_1 is discarded and the next five principal components are retained for feature extraction. Noise caused by internal state changes exists in all CSI streams; due to its high correlation, this noise is captured in h_1 together with the human motion signal. However, all the information about the human motion signal captured in h_1 is also captured in the other principal components, because according to Equation (2) the phase of a subcarrier is a linear combination of two orthogonal components. Since the PCA components are uncorrelated, the first principal component contains only one of these orthogonal components, while the other remains in the remaining principal components. Therefore, the first principal component can be safely discarded without losing any information.
It should be noted that the number of principal components used for feature extraction is chosen empirically; in this embodiment, the number of principal components is set to 5 to achieve a good trade-off between classification performance and computational complexity.
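A minimal sketch of the four PCA steps above, assuming the denoised CSI values of N streams are stacked as the columns of a matrix; the 1-second block length and the choice of keeping components 2 through 6 follow the description, while the data itself is synthetic.

```python
import numpy as np

def csi_pca(csi_block, n_keep=5):
    """Apply PCA to one block of CSI streams (samples x streams).

    Step 1: remove the long-term mean (static path component).
    Step 2: correlation matrix H^T x H.
    Step 3: eigendecomposition.
    Step 4: h_i = H x q_i; discard the first component, keep the next n_keep.
    """
    H = csi_block - csi_block.mean(axis=0, keepdims=True)   # step 1
    corr = H.T @ H                                           # step 2
    eigvals, eigvecs = np.linalg.eigh(corr)                  # step 3
    order = np.argsort(eigvals)[::-1]                        # sort by decreasing eigenvalue
    Q = eigvecs[:, order]
    components = H @ Q                                       # step 4: h_i = H q_i
    return components[:, 1:1 + n_keep]                       # drop h_1, keep h_2..h_6

# Example: 1 s block at 1000 Hz, one Tx-Rx link with 30 subcarriers (N = 30)
rng = np.random.default_rng(2)
block = rng.standard_normal((1000, 30))
principal = csi_pca(block)   # shape (1000, 5)
```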
Step 2.2: performing time-frequency analysis on each principal component to obtain the corresponding Doppler spectrogram.
Commonly used time-frequency analysis methods include the Fast Fourier Transform (FFT), the Short-Time Fourier Transform (STFT), the Discrete Hilbert Transform (DHT), and the Discrete Wavelet Transform (DWT).
In this embodiment, in order to analyze the signal by linking the time domain and the frequency domain and to obtain variables that can reflect the motion-change information, the short-time Fourier transform is applied to the principal components obtained by dimensionality reduction; the corresponding Doppler power spectrum is extracted, and the corresponding Doppler spectrogram is obtained.
Specifically, a Gaussian window with a length of less than 0.15 s is applied in the STFT, because the amplitude and the Doppler frequency shift are nearly constant over such a short time, so the Doppler frequency shift can be obtained by time-frequency analysis within a short window. Zero padding is further applied to generate a finer-grained spectrogram. Finally, the non-overlapping spectrograms of all CSI segments are stitched together to generate the whole spectrogram.
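A minimal sketch of the STFT-based spectrogram extraction, assuming a 1000 Hz sampling rate, a Gaussian window shorter than 0.15 s, and zero padding as described; the exact window length, window width, and padding factor are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, get_window

def doppler_spectrogram(component, fs=1000, win_s=0.128, pad_factor=4):
    """STFT of one principal component with a Gaussian window (< 0.15 s)
    and zero padding for a finer-grained frequency grid."""
    nperseg = int(win_s * fs)
    window = get_window(("gaussian", nperseg / 6), nperseg)
    f, t, Z = stft(component, fs=fs, window=window,
                   nperseg=nperseg, noverlap=nperseg // 2,
                   nfft=pad_factor * nperseg)
    return f, t, np.abs(Z) ** 2   # Doppler power spectrogram

# f, t, P = doppler_spectrogram(principal[:, 0])
```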
It should be noted that, due to the uncertainty principle, there is a lower limit on the range of motion that can be detected. Specifically, if the time length of the STFT data window is T, the frequency resolution of the spectrum is 1/T. In order to correctly identify a frequency-shifted signal, the frequency shift must fall into a non-DC bin, which corresponds to a minimum frequency of 1/T. For a signal segment with a constant frequency shift, the frequency shift should satisfy

$$\frac{V}{\lambda}\ge\frac{1}{T}$$

where V is the rate of change of the reflected signal path length and λ is the wavelength of the signal. The sensitivity of the detectable range R is then

$$R=V\,T\ge\lambda .$$
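As a rough worked example under assumed values (not given in the original text): with a T = 0.128 s STFT window, the frequency resolution is 1/T ≈ 7.8 Hz; in the 5 GHz band (λ ≈ 6 cm), the path-length change rate must therefore satisfy V ≥ λ/T ≈ 0.47 m/s for the Doppler shift to leave the DC bin, so slower motion within one window cannot be resolved.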
Step 3: constructing the corresponding arm motion acceleration model from the Doppler spectrograms.
To achieve real-time cross-domain gesture recognition and user recognition, the CSI signal features need to be modeled, and a model that can reflect both user information and real-time cross-domain gesture information is constructed by combining the signals of multiple WiFi receivers.
In this embodiment, the arm motion acceleration model is constructed by applying a seam carving algorithm to the Doppler frequency shift.
Specifically, Step 3 includes:
Step 3.1: obtaining the dominant-power carving path and the power-boundary carving path corresponding to the Doppler spectrogram using the seam carving algorithm.
Specifically, by observing the Doppler frequency shift (DFS) spectrogram, it can be noted that both the power and the temporal characteristics can indicate personalized arm motion in terms of speed and rhythm. In particular, the DFS spectrogram can separate the movements of different arm parts moving at different speeds, because the reflection area corresponding to a particular velocity component changes over time. From the power of the DFS spectrogram, two carving paths can be derived: one is the dominant-power path, which reflects the maximum dominant power in the DFS spectrogram, and the other is the power-boundary path, which describes the dominant power region and the velocity boundary, as shown in the schematic diagram of the carving paths of the Doppler spectrogram in FIG. 3. Both carving paths are power-based features.
In addition, an arm gesture can usually be divided chronologically into several atomic motions (e.g., drawing a straight line or an arc); for example, drawing a rectangle consists of four lines in four different directions. The switch between two adjacent atomic motions is called a motion change, which represents a motion pause/motion restart. A carving path called the arm motion variation pattern can be extracted to represent the moments of motion change, i.e., the temporally rhythmic movements during gesture drawing.
Since the obtained Doppler frequency shift f_D is caused by the motion of the arm, as shown in Equation (6) the Doppler frequency shift f_D is related to the velocities of the different body parts,

$$f_D(t)=-\frac{1}{\lambda}\,\frac{\mathrm{d}\,d_l(t)}{\mathrm{d}t},\qquad l\in P_d,$$

where P_d is the set of dynamic paths (f_d ≠ 0) and d_l(t) is the length of path l. Therefore, a DFS spectrogram of dimension R×P×F×T is derived, where R and P are the number of transceiver links and the number of PCA components, respectively, and F and T denote the sampling points in the frequency domain and the time domain, respectively.
As mentioned above, users show unique personalized styles when performing the same gesture, with rhythmic accelerations, decelerations, and even pauses in some cases. Therefore, the signals reflected from different body parts produce consistent motion-change patterns and form corresponding DFS spectral sequences. That is, rhythmic increases, decreases, and even pauses usually cause obvious fluctuations in the movement speed detected in the DFS spectrum, occurring at the specific moments of the velocity-change peaks in the time domain. However, the intensive computation of velocity-derivative operations sacrifices real-time performance. In order to retain the personalized characteristics while balancing the computational cost, two types of motion curves are derived from the DFS spectrogram, namely the dominant-power carving path and the power-boundary carving path.
Step 3.2: constructing the corresponding arm motion acceleration model according to the dominant-power carving path and the power-boundary carving path, so as to fill the gap between the body-part acceleration sequences and the power distribution of the Doppler spectrogram.
In this embodiment, the acceleration-related arm motion variation patterns are extracted as biometric features of the main body parts (e.g., wrist, elbow, arm). However, since the DFS only shows the power values of specific velocity components over time, it cannot provide the exact fine-grained accelerations corresponding to the body parts, due to the superposition of velocity components at the receiver. Moreover, extracting the arm motion variation pattern requires derivative computations on high-dimensional data, which are computationally intensive and cannot run in real time. In addition, the DFS spectrogram contains too much irrelevant interference, leading to unnecessary computation and storage.
Therefore, a model is constructed to fill the gap between the body-part acceleration sequences and the DFS spectral power distribution; this model is the arm motion acceleration model.
Specifically, the arm motion acceleration model is:

$$\mathrm{AMAM}(t)=\sum_{i=1}^{F_D}\omega_i\,\tilde{P}_{ds}\big(f_D^{(i)},t\big)$$

where $\tilde{P}_{ds}(f_D,t)$ denotes the model obtained after applying a Gaussian distribution to $P_{ds}(f_D,t)$, $P_{ds}(f_D,t)$ denotes the model of the relationship between the power $P_{ds}$ in the Doppler spectrogram and the superposition of body parts, $f_D$ denotes the actual Doppler shift frequency extracted from the CSI signal, $F_D$ denotes the number of frequency bins of the short-time Fourier transform, $i$ indexes the frequency bins from 1 to $F_D$, $\omega_i$ denotes the weight of $\tilde{P}_{ds}(f_D^{(i)},t)$, and $t$ denotes time.
The derivation of the above arm motion acceleration model is described in detail below.
Since the power distribution of the spectrogram varies with the reflection area S at a specific Doppler frequency shift at instant t, the power P_ds in the DFS spectrogram can be defined using the scaling factor c caused by propagation loss. Assuming that there are K body parts defining the gesture, the relationship between the power P_ds in the Doppler spectrogram and the superposition of body parts can be modeled as:

$$P_{ds}(f_D,t)=c\sum_{k=1}^{K}\mathrm{Ref}(k,t)\,\delta\big(f_D-f_{dfs}(k,t)\big)$$

where $\mathrm{Ref}(k,t)$ denotes the single reflection area S of the $k$-th body part at time $t$, $f_{dfs}(k,t)$ denotes the Doppler shift frequency of the $k$-th body part at time $t$, and $\delta(\cdot)$ expresses that each body part contributes power only at its own Doppler frequency.
Due to the resolution of the WiFi signal and the superposed estimation of the K body parts, the exact P_ds cannot be obtained; however, an experimental approximation with an acceptable calculation error ε and an attenuation factor a_at can be obtained. To be closer to the real situation and to facilitate the derivation, a Gaussian distribution is applied to the superposition of the body-part motions, which is modeled as:

$$\tilde{P}_{ds}(f_D,t)=a_{at}\,c\sum_{k=1}^{K}\mathrm{Ref}(k,t)\,\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\Big(-\frac{\big(f_D-f_{dfs}(k,t)\big)^2}{2\sigma^2}\Big)+\varepsilon$$

where σ is the standard deviation of the Gaussian kernel.
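To make the superposition model concrete, the following toy sketch synthesizes a DFS power spectrogram from K hypothetical body-part velocity traces, placing a Gaussian bump at each part's Doppler frequency as in the approximation above; all velocities, reflection areas, the wavelength, and the Gaussian width are illustrative assumptions.

```python
import numpy as np

def synthesize_dfs(velocities, reflect, wavelength=0.06, c_loss=1.0,
                   f_axis=np.linspace(-60, 60, 121), sigma=2.0):
    """Toy DFS spectrogram: K body parts x T time steps -> F x T power map.

    velocities: array (K, T) of radial path-change rates [m/s]
    reflect:    array (K, T) of reflection areas Ref(k, t)
    """
    f_dfs = velocities / wavelength                 # Doppler shift of each part
    K, T = velocities.shape
    P = np.zeros((f_axis.size, T))
    for k in range(K):
        for t in range(T):
            bump = np.exp(-(f_axis - f_dfs[k, t]) ** 2 / (2 * sigma ** 2))
            P[:, t] += c_loss * reflect[k, t] * bump / (np.sqrt(2 * np.pi) * sigma)
    return P

# Example: wrist, elbow, shoulder moving with different speeds for 100 frames
t = np.linspace(0, 1, 100)
v = np.vstack([1.2 * np.sin(2 * np.pi * t),
               0.6 * np.sin(2 * np.pi * t),
               0.2 * np.sin(2 * np.pi * t)])
ref = np.ones_like(v) * np.array([[0.5], [0.8], [1.0]])
P_ds = synthesize_dfs(v, ref)   # shape (121, 100)
```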
Since f_dfs represents the moving speed v, the corresponding acceleration a_k of each body part can be expressed, for a fixed geometry, as the time derivative of the velocity. Assuming that the variance of Ref(·) can be ignored compared with the superposition effect between consecutive DFS spectrograms, the rate of change of the power can be derived as in Equation (10). Owing to the constraints of the rigid body parts of the human body and the continuity of acceleration, the derivative of the power can be appropriately extracted as in Equation (11) to simplify the relationship between the DFS power change and the acceleration a_k.
Therefore, it can be found that the rate of change of the power over all K body parts increases as a_k increases. When a user performs a gesture, the personalized, time-varying acceleration sequence can be detected by computing the derivative of the DFS power spectrogram and used as the biometric feature of the user.
Since a large amount of redundant data would be involved in the derivative computation, inspiration is taken from the seam carving problem used in computer graphics for content-aware image resizing. In this embodiment, an edge-detection method is used to filter out redundant interference, and the difference scheme of a convolution operator is used to optimize the derivative computation; an efficient method based on the seam carving algorithm is then designed to generate the multiple main carving paths mentioned above, which serve as the arm motion variation model on the power spectrogram of each PCA component. Assuming that the estimation of K main carving paths is considered, each of which exhibits the most significant arm motion variation pattern over time, ω_{i,j} ∈ (0,1] is used to denote the weight in the i-th frequency bin and the j-th data packet. Therefore, as a function of the timestamp index, the optimal arm motion variation pattern (AMAM) along the frequency axis can be defined as shown in Equation (7).
It is worth noting that, to improve computational efficiency, a Sobel operator along the time axis can be applied to the DFS spectrogram P_ds(f_D, t) to obtain the temporal gradient matrix of each power spectrogram.
The arm motion acceleration model is extracted according to the above procedure, that is, by computing the temporal gradient of each power spectrogram and then carving the main paths on it. It should be noted that specifying an appropriate value for the number of segments T_s is very important: if T_s is too large, the segments may be nearly instantaneous, and robust features cannot be guaranteed to be extracted with a small sliding window; conversely, for a small T_s, the unique arm motion variation pattern of an individual user is averaged out too much to be distinguished with a large sliding window. In addition, an adaptive algorithm is too computationally expensive to execute in real time.
In this embodiment, T_s is set to a constant value of 60, so that the duration of each segment of the total sample length is limited to between 35 ms and 70 ms, which lies within a proper segmentation-criterion range for the DFS spectrogram. In addition, experimental study of the arm motion variation patterns also shows that the most obvious differences between adjacent carving paths occur within a range of less than 70 ms. Then, within the velocity range of [-1.6, 1.6] m/s, the resolution is set to 0.16 m/s, resulting in 20 velocity bins. After the arm motion acceleration model (AMAM) is generated, it is fed into the subsequently constructed dual-task deep neural network.
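The following is only a rough sketch, under stated assumptions, of how a power-based carving path could be traced on a DFS spectrogram: a Sobel temporal gradient is computed, a dynamic-programming seam (as in seam carving) follows the dominant power along the time axis, and the path is then split into T_s = 60 segments. The energy definition and the one-bin-per-frame smoothness constraint are illustrative choices, not the exact algorithm of the specification.

```python
import numpy as np
from scipy.ndimage import sobel

def dominant_power_path(P, ts=60):
    """Trace a dominant-power carving path over a DFS spectrogram P (F x T).

    Returns the path split into ts segments; a simple DP seam that prefers
    high power while moving at most one frequency bin per frame.
    """
    grad_t = np.abs(sobel(P, axis=1))     # temporal gradient (Sobel along time)
    energy = P + grad_t                   # illustrative energy map
    F, T = energy.shape
    cost = np.full((F, T), -np.inf)
    back = np.zeros((F, T), dtype=int)
    cost[:, 0] = energy[:, 0]
    for t in range(1, T):
        for f in range(F):
            lo, hi = max(0, f - 1), min(F, f + 2)          # move at most one bin
            prev = int(np.argmax(cost[lo:hi, t - 1])) + lo
            back[f, t] = prev
            cost[f, t] = energy[f, t] + cost[prev, t - 1]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(cost[:, -1]))
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[path[t], t]
    return np.array_split(path, ts)       # T_s segments for the AMAM sequence

# segments = dominant_power_path(P_ds)    # using the toy spectrogram above
```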
Step 4: constructing the dual-task deep neural network for user recognition and gesture recognition.
In this embodiment, the dual-task deep neural network includes a feature extraction module, a temporal modeling module, a concatenation module, and a recognition module.
The feature extraction module includes a first feature extraction unit and a second feature extraction unit; the first feature extraction unit is used to extract the gesture spatial feature sequence from the input arm motion acceleration model, and the second feature extraction unit is used to extract the user spatial feature sequence from the input arm motion acceleration model.
The temporal modeling module includes a first temporal modeling unit and a second temporal modeling unit connected to the corresponding first feature extraction unit and second feature extraction unit, respectively. The first temporal modeling unit is used to perform time-series analysis on the input gesture spatial feature sequence to obtain the corresponding gesture temporal feature sequence, and the second temporal modeling unit is used to perform time-series analysis on the input user spatial feature sequence to obtain the corresponding user temporal feature sequence.
The concatenation module is connected to the first temporal modeling unit and the second temporal modeling unit, and is used to concatenate the input gesture temporal feature sequence and user temporal feature sequence to obtain a concatenated feature sequence. The recognition module includes a user recognition unit and a gesture recognition unit, which are used to recognize the input concatenated feature sequence and obtain the gesture recognition result and the user recognition result, respectively.
In this embodiment, deep learning is used to extract domain-independent features for gesture recognition. To simultaneously achieve gesture recognition and user recognition while satisfying the real-time requirement, the constructed dual-task deep neural network has two tasks: the first task extracts domain-independent features to recognize different gestures, addressing the cross-domain requirement, and the second task addresses the user recognition requirement.
Specifically, after receiving the arm motion acceleration model (AMAM), the dual-task deep neural network reshapes the model into the equivalent of a digital image with a spatial resolution of VB×R and P color channels, where VB denotes the number of velocity bins, R denotes the number of receivers, and P denotes the number of PCA principal components. The rationale for this reshaping is that the signals at the receivers convey Angle of Arrival (AoA) information, the velocity bins VB contain the motion of the body parts, and the PCA components P are treated as color channels with different signal scales.
Refer to the schematic diagram of the dual-task deep neural network shown in FIG. 4. First, the feature extraction module extracts spatial features from a single arm motion variation model, and then the temporal dependence of the entire feature sequence is analyzed. In this embodiment, a gated recurrent unit (GRU) network based on a convolutional neural network (CNN) is adopted. For each sampled component of the input tensor, the corresponding matrix is fed into the CNN, which contains, in turn, 16 3×3 filters and two 64-unit dense layers; a ReLU function and a flatten layer are used for nonlinear feature mapping and dimension reshaping, and the final output of the CNN characterizes each sampled component.
Next, the spatial feature sequence is fed into the following GRU (the temporal modeling module) for time-series analysis. In this embodiment, a 128-unit single-layer GRU is used to analyze the temporal relationship. To avoid overfitting, a dropout layer is further added, followed by a Softmax layer with a cross-entropy loss for the dual-task prediction. The sequential model here is a linear stack of the CNN layers.
Finally, the concatenation module concatenates the gesture temporal feature sequence and the user temporal feature sequence, and the concatenated feature sequence is then fed into the subsequent gesture recognition unit and user recognition unit for gesture recognition and user recognition.
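The following Keras sketch roughly follows the architecture described above (per-frame CNN with 16 3×3 filters and two 64-unit dense layers, a 128-unit GRU per branch, dropout, concatenation, and two softmax heads trained with cross-entropy). The input shape (sequence length, VB, R, P), the dropout rate, and the numbers of gesture and user classes are illustrative assumptions, not values fixed by the specification.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dual_task_net(seq_len=60, vb=20, r=3, p=5,
                        n_gestures=4, n_users=6, dropout=0.5):
    """Dual-task network: shared AMAM input, two CNN+GRU branches,
    concatenation, and separate softmax heads for gesture and user."""
    inputs = layers.Input(shape=(seq_len, vb, r, p))

    def branch(x, name):
        # Per-frame spatial feature extraction (feature extraction module)
        x = layers.TimeDistributed(
            layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
            name=f"{name}_conv")(x)
        x = layers.TimeDistributed(layers.Flatten(), name=f"{name}_flat")(x)
        x = layers.TimeDistributed(layers.Dense(64, activation="relu"), name=f"{name}_fc1")(x)
        x = layers.TimeDistributed(layers.Dense(64, activation="relu"), name=f"{name}_fc2")(x)
        # Temporal modeling module
        x = layers.GRU(128, name=f"{name}_gru")(x)
        return layers.Dropout(dropout, name=f"{name}_drop")(x)

    gesture_feat = branch(inputs, "gesture_branch")
    user_feat = branch(inputs, "user_branch")
    fused = layers.Concatenate()([gesture_feat, user_feat])   # concatenation module

    gesture_out = layers.Dense(n_gestures, activation="softmax", name="gesture")(fused)
    user_out = layers.Dense(n_users, activation="softmax", name="user")(fused)

    model = models.Model(inputs, [gesture_out, user_out])
    model.compile(optimizer="adam",
                  loss={"gesture": "sparse_categorical_crossentropy",
                        "user": "sparse_categorical_crossentropy"})
    return model

# model = build_dual_task_net()
# model.summary()
```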
Step 5: inputting the arm motion acceleration models into the dual-task deep neural network as training samples to train the network.
Specifically, the network training method is similar to existing network training methods and is not described in detail here.
Step 6: using the trained collaborative dual-task deep neural network to realize user recognition and gesture recognition.
It should be noted that, when the trained collaborative dual-task deep neural network is used for user recognition and gesture recognition, the collected CSI data must first be denoised, segmented, reduced in dimensionality and compressed, and analyzed in time and frequency to obtain the corresponding Doppler spectrogram; the Doppler spectrogram is then used to build the arm motion acceleration model, and the arm motion acceleration model is input into the trained collaborative dual-task deep neural network for user recognition and gesture recognition. For the specific processing and modeling procedure, refer to the processing and modeling of the training samples described above, which is not repeated here.
In the real-time WiFi-signal gesture recognition method allowing user authentication of this embodiment, the CSI signal is denoised and transformed to find raw features that reflect motion changes; an arm motion acceleration model that satisfies the real-time requirement and has cross-domain capability is then built from these raw features; and the constructed dual-task deep neural network for user recognition and gesture recognition is used to realize simultaneous recognition of the gesture and of the user performing the gesture.
Embodiment 2
This embodiment illustrates the real-time WiFi-signal gesture recognition method allowing user authentication described in Embodiment 1 through a practical application.
The above method was integrated into the Yo-Yo smart education platform of the Smart Education Innovation Laboratory of Xidian University; the whole system is further described below.
Data set:
The four gestures most commonly used in smart education were selected as the data collection gestures, namely: 1. raising a hand; 2. lowering a hand; 3. waving a hand; 4. clapping.
如图5所示的采集环境,用一台笔记本作为发射器,三台笔记本作为三个接收器,发射器激活一根天线,每个接收器激活三根天线,发射器和接收器都选用Intel 5300无线网卡,并使用802.11n CSI Tool作为采集工具,选用monitor模式,发包间隔设置为1000Hz,发包数量为30000,包的长度为100,一次发包时间为30s,分别在2.4GHz和5GHz频段下进行采集,数据格式为dat数据。In the acquisition environment shown in Figure 5, one notebook is used as the transmitter, and three notebooks are used as three receivers. The transmitter activates one antenna, and each receiver activates three antennas. Both the transmitter and receiver use Intel 5300 Wireless network card, and use 802.11n CSI Tool as the collection tool, select the monitor mode, set the sending interval to 1000Hz, the number of sending packets to 30000, the length of the packet to 100, and the time to send a packet to 30s, and collect in the 2.4GHz and 5GHz frequency bands respectively , the data format is dat data.
共有6个受试者,在两种环境下进行数据的采集:1.休息室,2.会议室,受试者需要面朝三种不同的方向,站在三个不同的位置分别进行采集,一次采集3min,通过自制的音频文件来告诉受试者什么时间该执行手势。There are six subjects in total, and data are collected in two environments: 1. a lounge and 2. a conference room. Each subject faces three different directions and stands at three different positions for separate collections; each collection lasts 3 min, and a self-made audio file tells the subject when to perform the gesture.
一次采集3min的时间分配:受试者听到音频提示后执行相应的手势,每隔3秒生成一个新样本,也就是每隔3s采集一个手势(设置来回手势,如举手和放下手作为一个循环),然后一次采集30s,也就是10个样本;休息15s,在3min内共采集4次,共40个样本。Time allocation within each 3-min collection: the subject performs the corresponding gesture after hearing the audio prompt, and a new sample is generated every 3 seconds, i.e. one gesture is collected every 3 s (paired back-and-forth gestures, such as raising and lowering the hand, form one cycle); each run collects for 30 s, i.e. 10 samples, followed by a 15 s rest, and four runs are collected within 3 min, giving 40 samples in total.
共需采集2*4*2*3*3*6=864次,共864*40*3(3个接收器)=103680个样本,每种信号51840个样本,每个样本包含3秒的数据。每种手势有2*2*3*3*6*40*3=25920个样本,即每种信号12960个样本。In total, 2*4*2*3*3*6 = 864 collections are needed, giving 864*40*3 (3 receivers) = 103680 samples, i.e. 51840 samples per signal band, with each sample containing 3 seconds of data. Each gesture has 2*2*3*3*6*40*3 = 25920 samples, i.e. 12960 samples per band.
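The bookkeeping above can be checked with a few lines of Python; the factor names below simply mirror the text (bands, gestures, environments, directions, positions, subjects, samples per collection, receivers).

```python
# Sanity check of the dataset bookkeeping above (values taken from the text).
bands, gestures, environments, directions, positions, subjects = 2, 4, 2, 3, 3, 6
samples_per_collection, receivers = 40, 3

collections = bands * gestures * environments * directions * positions * subjects
total_samples = collections * samples_per_collection * receivers
per_band = total_samples // bands
per_gesture = total_samples // gestures

print(collections)     # 864
print(total_samples)   # 103680
print(per_band)        # 51840
print(per_gesture)     # 25920 (12960 per band)
```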
系统环境:System environment:
发射器:1台ThinkPad T500;接收器:3台ThinkPad T500;网卡:Intel 5300 NIC;处理器CPU:12th Gen Intel(R) Core(TM) i7-12700;处理器GPU:NVIDIA 3090 24GB;linux内核版本:3.2-4.2;接收工具:linux-802.11n-CSI tool;传输协议:802.11n;频段:2.4GHz;频宽:20MHz;信道:13;网卡模式:monitor模式;采集频率:1000Hz;系统平台:Yo-Yo智慧教育平台。使用MATLAB、Python、C语言实现。Transmitter: 1 ThinkPad T500; receivers: 3 ThinkPad T500; network card: Intel 5300 NIC; CPU: 12th Gen Intel(R) Core(TM) i7-12700; GPU: NVIDIA 3090 24GB; Linux kernel version: 3.2-4.2; receiving tool: linux-802.11n-CSI tool; transmission protocol: 802.11n; frequency band: 2.4 GHz; bandwidth: 20 MHz; channel: 13; NIC mode: monitor mode; acquisition frequency: 1000 Hz; system platform: Yo-Yo smart education platform. Implemented in MATLAB, Python, and C.
实验结果:Experimental results:
该实施例依托于Yo-Yo智慧教育平台实现,根据本发明的方法,计算手势识别准确度和用户识别准确度,并对实时性能进行评估,最终达到手势识别准确度93.6%,在手势识别正确的前提下,用户识别准确度92.3%;本地响应时间达到400ms(本地响应时间表示本发明整个算法部分时延),系统响应时间达到600ms(系统响应时间表示整个Yo-Yo系统手势识别时延)。This embodiment is implemented on the Yo-Yo smart education platform. Using the method of the present invention, the gesture recognition accuracy and the user recognition accuracy are calculated and the real-time performance is evaluated. The gesture recognition accuracy reaches 93.6%, and, given that the gesture is recognized correctly, the user recognition accuracy reaches 92.3%; the local response time is 400 ms (the latency of the algorithm portion of the present invention alone), and the system response time is 600 ms (the end-to-end gesture recognition latency of the whole Yo-Yo system).
应当说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的物品或者设备中还存在另外的相同要素。It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that an article or device comprising a list of elements includes not only those elements but also other elements not expressly listed. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the article or device comprising that element.
以上内容是结合具体的优选实施方式对本发明所作的进一步详细说明,不能认定本发明的具体实施只局限于这些说明。对于本发明所属技术领域的普通技术人员来说,在不脱离本发明构思的前提下,还可以做出若干简单推演或替换,都应当视为属于本发明的保护范围。The above content is a further detailed description of the present invention in conjunction with specific preferred embodiments, and it cannot be assumed that the specific implementation of the present invention is limited to these descriptions. For those of ordinary skill in the technical field of the present invention, without departing from the concept of the present invention, some simple deduction or replacement can be made, which should be regarded as belonging to the protection scope of the present invention.