




Technical Field
The present invention relates to a UAV radio-frequency identification method based on a deep attention detection model, and belongs to the technical field of UAV radio-frequency identification.
Background Art
With the application of advanced sensors such as GPS, lidar, radar, and optical cameras to UAVs, UAVs are increasingly used in surveying and mapping, surveillance, emergency rescue, forest firefighting, entertainment, and military applications. Since 2018, policies have been introduced one after another to regulate UAV flight management, but unauthorized "black flight" incidents still occur frequently, and such illegal activity poses a serious threat to public safety. UAV detection and identification technology has emerged in response.
UAV detection and identification technologies include optical, acoustic, radio, and radar detection. Radar detection is active and offers long range, but it is limited by the radar's elevation angle and is prone to misjudgment for "low, slow, small" targets such as UAVs or birds. Optical detection uses high-definition optical sensors for image tracking and provides good visualization, but sensor performance depends on weather conditions: in darkness, rain, fog, and other adverse environments, the detection range is short, efficiency is poor, and the system risks failing outright. Acoustic detection captures UAV sound with acoustic sensors and offers some recognition capability; it works well in quiet environments but usually fails in crowded urban areas or noisy settings. Radio detection falls into two main categories: passive micro-Doppler detection, and direct interception and identification of UAV radio-frequency signals. Micro-Doppler detection identifies a UAV from the micro-Doppler effect that its rotors induce on a third-party signal; because it depends on a third-party signal source and on the UAV type, its engineering applicability is relatively low. Radio-frequency signal identification detects and identifies UAVs mainly through their video-transmission and flight-control signals.
UAV radio-frequency signal identification is one of the main approaches to UAV detection and identification. A UAV communicates with its ground control station over radio frequency for flight control and navigation, real-time video transmission, and telemetry. An RF-based UAV detection system can therefore detect UAVs by monitoring the communication spectrum. Many scholars have studied deep-learning-based detection of UAV RF signals. Allahham built the DroneRF dataset from three commercial UAVs for verification and testing. Mohammad F. designed a DNN based on the DroneRF dataset and verified the feasibility of detection and identification from a UAV RF database, though the recognition rate was not high. S. Al-Emadi designed a CNN with fully connected layers to further improve the accuracy of UAV detection and classification on the DroneRF dataset. Yongguang Mo preprocessed the DroneRF dataset with compressed sensing and then designed DNN and CNN networks for identification and classification respectively; this multi-stage detection scheme achieved over 99% accuracy on that data in detecting UAV presence, type, and flight mode. These studies, however, were conducted on a single limited dataset: they did not investigate the effect of noise on detection performance, nor did they examine performance in the presence of multiple UAV signals, or with post-deployment variation or interference at the UAV RF front end.
For radio-frequency identification, RF fingerprint characteristics differ with model, batch, and date of manufacture. Existing schemes, which rely on deep networks trained on a small amount of public experimental data, suffer from weak generalization and thus degraded recognition rates. Current schemes also cannot identify multiple UAVs, in particular multiple UAVs of the same model.
Summary of the Invention
The technical problem to be solved by the present invention is to overcome the defects of the prior art and provide a UAV radio-frequency identification method based on a deep attention detection model. By adopting a domain-adaptation approach, the method effectively addresses the generalization problem of deep networks and can simultaneously identify multiple UAV devices, in particular devices of the same model.
To achieve the above object, the present invention provides a UAV radio-frequency identification method based on a deep attention detection model, comprising: collecting multiple UAV radio-frequency signals to be identified; and feeding the multiple UAV radio-frequency signals into a pre-trained deep attention detection model, which predicts and outputs the RF signal classification. The RF signal classification covers whether a UAV is present, the UAV model, and the UAV's current working mode.
Preferably, the multiple UAV RF signals to be identified are either signals from multiple UAVs of the same model acquired at the same time, or signals from different batches of UAVs of the same model acquired in different time periods.
Preferably, the deep attention detection model is obtained by pre-training through the following steps: constructing the deep attention detection model; obtaining a training set comprising UAV RF signal samples together with the true UAV model and true UAV working mode corresponding to each sample; preprocessing the UAV RF signal samples; using the UAV RF signal samples as the model input and the corresponding true UAV models and true working modes as the model output, so as to build the mapping between the samples and their true models and working modes; iteratively updating the network weight parameters of the model with a cross-entropy loss function; and, once the cross-entropy loss converges to a fixed value, stopping the iterative updates to obtain the final deep attention detection model.
Preferably, the UAV RF signal samples include samples from UAVs of the same model and samples from UAVs of different models;
The preprocessing includes segmented STFT, spectrum reconstruction, ridge extraction, noise filtering, and anti-aliasing.
Preferably, the preprocessed UAV RF signal samples are stored in a tree structure that distinguishes UAVs of the same model across batches and time periods;
Within the tree structure, binary unique identifiers (BUIs) are used to represent UAVs of the same model, multiple batches, and multiple time periods.
Preferably, the deep attention detection model is a fully convolutional network with an encoder-decoder architecture;
The encoder comprises a Conv layer, a MaxPool2D layer, and a residual network connected in sequence. The Conv layer uses a 3*3 kernel with a feature-map size of 128*128; the feature-map size of the MaxPool2D layer is 64*64;
The residual network comprises ResnetBlock1, ResnetBlock2, ResnetBlock3, and ResnetBlock4, connected in sequence. The feature-map size of ResnetBlock1 is 64*64; the feature-map sizes of ResnetBlock2, ResnetBlock3, and ResnetBlock4 are 32*32.
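The feature-map sizes above can be traced with a minimal numpy sketch. This is a shape demonstration only: single channel, random weights, and a plain 2*2 max-pool standing in for the downsampling between ResnetBlock1 and ResnetBlock2.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3x3_same(x, kernel):
    """3*3 convolution with 'same' padding (single channel; random weights, shape demo only)."""
    padded = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def maxpool2x2(x):
    """2*2 max pooling, halving each spatial dimension."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feat = rng.standard_normal((128, 128))                  # input feature map, 128*128
feat = conv3x3_same(feat, rng.standard_normal((3, 3)))  # Conv layer keeps 128*128
feat = maxpool2x2(feat)                                 # MaxPool2D -> 64*64 (ResnetBlock1 scale)
assert feat.shape == (64, 64)
feat = maxpool2x2(feat)                                 # stand-in for ResnetBlock1 -> ResnetBlock2 downsampling
print(feat.shape)                                       # (32, 32)
```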
Preferably, the decoder comprises a radio-frequency feature (FF) module and a radio-frequency channel attention aggregation (FCA) module;
The FF module comprises FF Block1, FF Block2, and FF Block3. The output of ResnetBlock1 is connected to the input of FF Block1, the output of ResnetBlock2 to the input of FF Block2, and the output of ResnetBlock3 to the input of FF Block3;
The FCA module comprises a first, second, third, and fourth convolutional layer together with FCA BLOCK1 and FCA BLOCK2;
The output of FF Block1, the first convolutional layer, and the second convolutional layer are connected in sequence; the output of the third convolutional layer is connected to the input of FF Block1; the input of the third convolutional layer is connected to the output of FCA BLOCK1; and the input of FCA BLOCK1 is connected to the output of FF Block2;
The first convolutional layer has a feature-map size of 128*128 and a 3*3 kernel; the second convolutional layer has a feature-map size of 1*128 and a 1*3 kernel; the third and fourth convolutional layers have a feature-map size of 32*32. The input of FF Block3 is connected to the fourth convolutional layer, which uses a 1×1 kernel;
The input of the fourth convolutional layer is connected to the output of FCA BLOCK2, and the input of FCA BLOCK2 is connected to the output of ResnetBlock4.
Preferably, ResnetBlock1, ResnetBlock2, ResnetBlock3, and ResnetBlock4 each comprise an input x, a first Conv layer, a first ReLU layer, a second Conv layer, a second ReLU layer, and a third Conv layer, connected in sequence; the output of the third Conv layer is fused with the input x by addition and passed to a third ReLU layer. The radio-frequency channel attention aggregation module comprises FCA BLOCK1 and FCA BLOCK2, each of which contains a position relation attention block and a channel relation attention block connected in parallel. The position relation attention block comprises a fifth, sixth, seventh, and eighth convolutional layer; the output of the fifth convolutional layer is connected to the inputs of the sixth, seventh, and eighth convolutional layers; the outputs of the sixth and seventh convolutional layers are multiplied to obtain a first output; the first output is multiplied with the output of the eighth convolutional layer to obtain a second output; and the second output is added to the output of the fifth convolutional layer to obtain a third output. The channel relation attention block comprises a ninth, tenth, eleventh, twelfth, and thirteenth convolutional layer; the output of the ninth convolutional layer is connected to the inputs of the tenth, eleventh, and twelfth convolutional layers; the outputs of the tenth and eleventh convolutional layers are multiplied to obtain a fourth output; the fourth output is multiplied with the output of the twelfth convolutional layer to obtain a fifth output; the fifth output is fed into the thirteenth convolutional layer to obtain a sixth output; the sixth output is added to the output of the ninth convolutional layer to obtain a seventh output; and the seventh output and the third output are added to yield the output of FCA BLOCK1 and FCA BLOCK2.
Preferably, the pre-trained deep attention detection model is validated through the following steps: obtaining a test set comprising UAV RF signal test samples together with the true UAV model and true working mode corresponding to each test sample; feeding the test samples into the final deep attention detection model, which predicts and outputs the UAV model and working mode; counting the predicted UAV models that match the corresponding true models; dividing that count by the total number of UAV models predicted by the final model to obtain the model accuracy; counting the predicted working modes that match the corresponding true working modes; dividing that count by the total number of working modes predicted by the final model to obtain the working-mode accuracy; and, if the model accuracy exceeds a set model-accuracy threshold and the working-mode accuracy exceeds a set working-mode-accuracy threshold, judging the final deep attention detection model to be qualified.
An electronic device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of any one of the methods described above.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of any one of the methods described above.
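The qualification check described above reduces to two accuracy ratios compared against thresholds. A minimal sketch follows; the class names and threshold values are hypothetical, not taken from the patent.

```python
def accuracy(preds, truths):
    """Fraction of predictions matching the corresponding ground truth."""
    return sum(p == t for p, t in zip(preds, truths)) / len(preds)

# hypothetical test-set predictions vs. ground truth
model_preds  = ["Bebop", "AR", "Phantom", "Bebop"]
model_truths = ["Bebop", "AR", "Phantom", "AR"]
mode_preds   = ["hover", "fly", "video", "on"]
mode_truths  = ["hover", "fly", "video", "on"]

MODEL_ACC_THRESHOLD = 0.95   # hypothetical thresholds
MODE_ACC_THRESHOLD = 0.95

model_acc = accuracy(model_preds, model_truths)   # 3/4 = 0.75
mode_acc = accuracy(mode_preds, mode_truths)      # 4/4 = 1.0

# the model is qualified only if both accuracies exceed their thresholds
qualified = model_acc > MODEL_ACC_THRESHOLD and mode_acc > MODE_ACC_THRESHOLD
print(model_acc, mode_acc, qualified)             # 0.75 1.0 False
```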
Beneficial effects achieved by the present invention:
The present invention proposes a UAV RF signal detection and identification method that incorporates software-defined radio equipment: the radio equipment collects UAV RF signals, and a deep attention detection model identifies them, thereby achieving UAV identification;
The present invention proposes a UAV radio-frequency identification method based on a deep attention model, and comprehensively compares and verifies the proposed detection model on the public DroneRF dataset, further demonstrating the feasibility of simultaneously detecting and identifying multiple UAVs of the same model by RF, in particular the detection and identification of multi-UAV signals. Based on this method, a rapid UAV RF identification device has been built that can detect whether a UAV is present and can distinguish the working states of multiple different UAVs.
The present invention uses software-defined radio equipment to collect the RF signals of two UAVs of the same model; the deep attention detection model identifies their working modes and can distinguish multiple UAVs of the same model.
Brief Description of the Drawings
Fig. 1 is a functional block diagram of UAV RF signal acquisition according to the present invention;
Fig. 2 is a layout diagram of the tree-structured UAV storage of the present invention;
Fig. 3 is an architecture diagram of the encoder-decoder architecture adopted by the present invention;
Fig. 4 is an architecture diagram of the residual network module of the present invention;
Fig. 5 is an architecture diagram of the radio-frequency channel attention aggregation module of the present invention.
Detailed Description of the Embodiments
The following embodiments are intended only to illustrate the technical solution of the present invention more clearly and do not limit its scope of protection.
Embodiment 1
Many models of edge computing devices and software-defined radio devices are available in the prior art; those skilled in the art can select suitable models according to actual needs, and this embodiment does not enumerate them one by one.
Embodiment 2
The UAV RF signal reflects the UAV's state, including off, on, connected, hovering, flying, and video transmission. The RF receiver receives the UAV RF signal, which is then analyzed; the data acquisition is shown in Fig. 1.
A UAV radio-frequency identification method based on a deep attention model is proposed. It uses a fully convolutional network with an encoder-decoder architecture, a residual network as the feature-extraction backbone, and a purpose-built radio-frequency channel attention aggregation module for identification. Trained and validated on the public DroneRF dataset, the model reaches 100% on UAV detection and classification, 99.695% on UAV identification, and 99.633% on UAV working-mode identification. The network weight parameters of the deep attention detection model include the training weights w and biases b, and the coefficients α and β in the global objective loss.
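The patent names the parameters w, b, α, and β but does not give the exact form of the global objective. A plausible sketch follows, assuming α and β weight a UAV-model cross-entropy term and a working-mode cross-entropy term; the logit values and coefficient values are illustrative only.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(logits, true_class):
    """Negative log-likelihood of the true class under a softmax over the logits."""
    return -np.log(softmax(logits)[true_class])

alpha, beta = 0.5, 0.5                      # hypothetical values of the global loss coefficients
model_logits = np.array([2.0, 0.1, -1.0])   # scores over 3 UAV models (w·x + b for each class)
mode_logits = np.array([0.2, 1.5, 0.3])     # scores over 3 working modes

# assumed global objective: weighted sum of the two cross-entropy terms
loss = alpha * cross_entropy(model_logits, 0) + beta * cross_entropy(mode_logits, 1)
print(loss)
```

Training would repeat this computation over batches, backpropagating to update w and b until the loss converges to a fixed value, as described in the Summary.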
The present invention uses a USRP_X310 software-defined radio device to collect RF data from two UAVs of the same model; after transfer learning, the UAVs can be detected and their working modes identified.
Software-defined radio equipment generally uses a USRP RF receiver, such as the USRP_X310, to collect UAV RF signals. The collected signals are preprocessed, including segmented STFT, spectrum reconstruction, ridge extraction, noise filtering, and anti-aliasing, which facilitates digital storage of the UAV RF signals.
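A minimal numpy sketch of the segmented-STFT, ridge-extraction, and noise-filtering steps. The window length, hop, sample rate, test tone, and noise-gate threshold are illustrative assumptions; spectrum reconstruction and anti-aliasing are omitted.

```python
import numpy as np

def segmented_stft(signal, seg_len=256, hop=128):
    """Split the signal into overlapping segments, apply a Hann window, take FFT magnitude."""
    window = np.hanning(seg_len)
    frames = [np.abs(np.fft.rfft(signal[s:s + seg_len] * window))
              for s in range(0, len(signal) - seg_len + 1, hop)]
    return np.array(frames)                    # shape: (n_frames, seg_len // 2 + 1)

fs = 8000.0                                    # illustrative sample rate
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 1000 * t) + 0.1 * rng.standard_normal(t.size)  # 1 kHz tone + noise

spec = segmented_stft(sig)
ridge = spec.argmax(axis=1)                    # ridge extraction: dominant bin per frame
denoised = np.where(spec > 0.05 * spec.max(), spec, 0.0)  # simple noise gate
print(spec.shape, int(np.median(ridge)))       # (61, 129) 32 -> bin 32 = 1000 Hz at 31.25 Hz/bin
```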
The preprocessed UAV RF signals are stored in a defined format, distinct from the public data structure. The storage format of the present invention uses a five-level tree structure, as shown in Fig. 2, which distinguishes UAVs of the same model across batches and time periods. The preprocessed UAV RF signals are organized and stored as a tree and labeled with a binary unique identifier (BUI).
The tree-structured storage is organized into five levels: Level1, Level2, Level3, Level4, and Level5;
Specifically, the first field of the BUI denotes the time period, the second field the category, the third field the batch, the fourth field the working mode, and the fifth field (or further extensions) the distance;
Level1 comprises Day1, ..., DayN; the BUI of Day1 is 0-X-X-X-X, and the BUI of DayN is n-X-X-X-X;
Level2 comprises Category1, ..., CategoryN; the BUI of Category1 is 0-1-X-X-X, and the BUI of CategoryN is 0-n-X-X-X;
Level3 comprises UAV1, UAV2, ..., UAVN; the BUI of UAV1 is 0-1-0-X-X, the BUI of UAV2 is 0-1-1-X-X, and the BUI of UAVN is 0-1-n-X-X;
Level4 comprises Model1, Model2, Model3, and Model4; the BUI of Model1 is 0-1-1-0-X, the BUI of Model2 is 0-1-1-1-X, the BUI of Model3 is 0-1-1-2-X, and the BUI of Model4 is 0-1-1-n-X;
In Level5, the BUI of Distance1 under Model1 is 01100, the BUI of Distance2 under Model1 is 01101, ..., and the BUI of Distancem under Model1 is 0110m;
In Level5, the BUI of Distance1 under Model2 is 01110, the BUI of Distance2 under Model2 is 01111, ..., and the BUI of Distancem under Model2 is 0111m;
In Level5, the BUI of Distance1 under Model3 is 01120, the BUI of Distance2 under Model3 is 01121, ..., and the BUI of Distancem under Model3 is 0112m;
In Level5, the BUI of Distance1 under Model4 is 011n0, the BUI of Distance2 under Model4 is 011n1, ..., and the BUI of Distancem under Model4 is 011nm.
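The five-field BUI scheme above can be sketched in a few lines of Python. The field values and tree depth per level are illustrative; the real identifiers come from the acquisition campaign.

```python
def make_bui(day, category, batch, mode, distance):
    """Compose a five-field binary unique identifier:
    time period - category - batch - working mode - distance."""
    return f"{day}-{category}-{batch}-{mode}-{distance}"

# Level1..Level5 as a nested dict: Day -> Category -> UAV batch -> Model/mode -> Distance
tree = {}
for day in range(2):
    for cat in range(1, 3):
        for batch in range(2):
            for mode in range(2):
                for dist in range(2):
                    (tree.setdefault(day, {})
                         .setdefault(cat, {})
                         .setdefault(batch, {})
                         .setdefault(mode, {}))[dist] = make_bui(day, cat, batch, mode, dist)

# e.g. Day1 (0), Category1 (1), UAV2 (batch 1), Model1 (mode 0), Distance1 (0):
print(tree[0][1][1][0][0])    # the BUI written 01100 in the text, with field separators
```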
Based on the deep attention model, a fully convolutional network with an encoder-decoder architecture is used, as shown in Fig. 3; it consists mainly of a feature-extraction path (encoder) and an upsampling path (decoder).
The encoding path comprises a Conv layer, a MaxPool2D layer, and a residual network, connected in sequence. The Conv layer uses a 3*3 kernel with a feature-map size of 128*128; the feature-map size of the MaxPool2D layer is 64*64.
In the encoding path, a residual network (Resnet101) serves as the feature-extraction backbone. Without changing the scale of the pretrained parameters, it concentrates on retaining more detail of the UAV RF signal; the downsampling layers in the last two residual blocks are replaced by dilated convolutional layers, a strategy that preserves more category information.
Specifically, the residual network comprises ResnetBlock1, ResnetBlock2, ResnetBlock3, and ResnetBlock4. The feature-map size of ResnetBlock1 is 64*64; the feature-map sizes of ResnetBlock2, ResnetBlock3, and ResnetBlock4 are 32*32;
The decoding path consists mainly of radio-frequency feature (FF BLOCK) modules and radio-frequency channel attention aggregation (FCA) modules connected in a specific order. First, corresponding to the residual blocks of Resnet101, the FF BLOCK comprises three modules: FF Block1, FF Block2, and FF Block3. The feature-map size of FF Block1 is 64*64; those of FF Block2 and FF Block3 are 32*32. The input of FF Block1 is connected to the output of ResnetBlock1, the input of FF Block2 to the output of ResnetBlock2, and the input of FF Block3 to the output of ResnetBlock3. By generating global feature guidance, these modules improve the recognition of RF detail information.
FCA modules are embedded at the head and middle of the decoding path to capture long-range context in the spatial and channel domains, respectively. The head of the FCA module comprises the first, second, and third convolutional layers, FCA BLOCK1, the fourth convolutional layer, and FCA BLOCK2. FF Block1, the first convolutional layer, and the second convolutional layer are connected in sequence; the first convolutional layer has a feature-map size of 128*128 and a 3*3 kernel; the second convolutional layer has a feature-map size of 1*128 and a 1*3 kernel;
In addition, the input of FF Block1 is connected to the third convolutional layer, which uses a 1×1 kernel and whose input is connected to the output of FCA BLOCK1; the input of FF Block3 is connected to the fourth convolutional layer, which also uses a 1×1 kernel. These layers perform channel-dimension conversion to match the low-level feature maps; the feature-map size of the third and fourth convolutional layers is 32*32.
最后,在FF Block1模块输出语义特征图之后,语义特征图依次输入第一卷积层和第二卷积层,进行双线性上采样操作,获得全连接分类图。需要注意的是,在每个卷积层和转置的卷积层后面都加载了标准化操作和Relu激活函数。Finally, after the FF Block1 module outputs the semantic feature map, the semantic feature map is sequentially input into the first convolutional layer and the second convolutional layer for bilinear upsampling operation to obtain a fully connected classification map. It should be noted that normalization operation and Relu activation function are loaded after each convolutional layer and transposed convolutional layer.
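As an illustrative sketch (not the patented implementation), the bilinear upsampling step can be expressed with PyTorch's `F.interpolate`; the tensor sizes here are assumptions chosen to match the 32*32 and 128*128 resolutions mentioned above:

```python
import torch
import torch.nn.functional as F

# Hypothetical 4-channel semantic feature map at 32*32 resolution (batch of 1).
feat = torch.randn(1, 4, 32, 32)

# Bilinear upsampling to the 128*128 resolution of the classification map.
up = F.interpolate(feat, size=(128, 128), mode="bilinear", align_corners=False)
```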
The ResnetBlock1, ResnetBlock2, ResnetBlock3 and ResnetBlock4 layers serve as the feature-extraction path for learning effective category features. As unit structures of the residual network, these layers implement the mapping of several stacked convolutional layers, as shown in Figure 2: each comprises a first Conv layer, a first ReLU layer, a second Conv layer, a second ReLU layer and a third Conv layer, connected in sequence. The output of the third Conv layer is fused with the input x (the radio-frequency signal matrix) and passed through a third ReLU layer to obtain H(x) (the convolved radio-frequency signal matrix).
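The residual mapping H(x) = ReLU(F(x) + x) described above can be sketched as follows. This is a minimal PyTorch illustration; the channel count and kernel sizes are assumptions for demonstration, not the exact Resnet101 configuration:

```python
import torch
import torch.nn as nn

class ResUnit(nn.Module):
    """Residual unit: three stacked convolutions F(x), fused with x, then ReLU."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),  # first Conv
            nn.ReLU(),                                                # first ReLU
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),  # second Conv
            nn.ReLU(),                                                # second ReLU
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),  # third Conv
        )
        self.relu = nn.ReLU()  # third ReLU, applied after feature fusion

    def forward(self, x):
        # H(x) = ReLU(F(x) + x): fuse the stacked-convolution output with the input.
        return self.relu(self.body(x) + x)

x = torch.randn(1, 8, 32, 32)  # stand-in RF signal matrix
h = ResUnit(8)(x)
```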
The attention mechanism allows the method of the invention to focus on the key regions associated with a specific category and to enhance discriminative features by encoding context information, which is an effective way to improve radio-frequency feature classification performance. The deep attention detection model of the method therefore introduces the radio-frequency channel attention aggregation (FCA) module, shown in Figure 5, which consists of a position relation attention block and a channel relation attention block connected in parallel.
The convolutions in Figure 5 alternate between 1×1 and 3×3 kernels; each arrow includes the convolution size change and a ReLU activation function.
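As a hedged illustration of one half of such a module, a channel relation attention block in the style of DANet can be sketched as follows. The patent's exact FCA structure is given only in Figure 5, so this is an assumption-based sketch rather than the patented block:

```python
import torch

def channel_relation_attention(x: torch.Tensor) -> torch.Tensor:
    """Re-weight each channel by its affinity to every other channel.

    x: (B, C, H, W) feature map; returns a map of the same shape with a
    residual connection, so attention refines rather than replaces features.
    """
    b, c, h, w = x.shape
    flat = x.reshape(b, c, h * w)                    # (B, C, HW)
    energy = torch.bmm(flat, flat.transpose(1, 2))   # (B, C, C) channel affinities
    attn = torch.softmax(energy, dim=-1)             # normalize affinities
    out = torch.bmm(attn, flat).reshape(b, c, h, w)  # aggregate across channels
    return out + x

x = torch.randn(2, 8, 16, 16)
y = channel_relation_attention(x)
```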
The gradient-descent algorithm searches for the parameter settings step by step by reducing the loss function. The invention uses multiple cross-entropy loss functions to supervise features at specific scales in the network; this deep-supervision strategy distinguishes multi-scale features so as to capture category-specific context and optimize the training process.
The cross-entropy loss function expresses the deviation between the predicted value and the true label value at each measurement point:

L = -(1/N) · Σ_{i=1}^{N} [ y_i·log(p_i) + (1 − y_i)·log(1 − p_i) ]  (1)

where L is the deviation between the prediction of the deep attention detection model and the true label, y_i is the true label value (the fourth field of the aforementioned BUI), p_i is the predicted value, and N is the convolution length, e.g. 32*32=1024.
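For illustration, a mean binary cross-entropy over N measurement points — one common form of such a loss — can be computed in plain Python; the labels and predictions below are made-up toy values:

```python
import math

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy over N measurement points.

    eps guards against log(0) for predictions at exactly 0 or 1.
    """
    n = len(y_true)
    return -sum(
        y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
        for y, p in zip(y_true, y_pred)
    ) / n

# Predictions close to the true labels give a small loss (roughly 0.12 here).
loss = cross_entropy([1, 0, 1, 1], [0.9, 0.1, 0.8, 0.95])
```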
A weighted model is established whose global objective loss function is given below, where α and β are weight coefficients, set to 0.4 and 0.2 respectively for initial training. With Lp1, Lp2 and Lo denoting the three cross-entropy loss values in Figure 3, the global objective loss function is L:
L = Lo + αLp1 + βLp2  (2)
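Equation (2) with the stated initial weights α=0.4 and β=0.2 works out as in this small sketch; the three loss values are hypothetical:

```python
def global_loss(lo, lp1, lp2, alpha=0.4, beta=0.2):
    """Global objective of equation (2): L = Lo + alpha*Lp1 + beta*Lp2."""
    return lo + alpha * lp1 + beta * lp2

# Hypothetical values for the three cross-entropy losses of Figure 3:
# 0.5 + 0.4*0.3 + 0.2*0.2 = 0.5 + 0.12 + 0.04 = 0.66
total = global_loss(lo=0.5, lp1=0.3, lp2=0.2)
```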
Model training and testing:
1. Dataset division
The dataset from earlier laboratory work is divided into three tasks: the first is UAV detection, the second is UAV identification, and the last is UAV flight-mode identification. Using 10-fold cross-validation, the dataset is randomly divided into 10 subsets: 9 subsets are used for training and 1 subset is used for testing.
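The 10-fold split can be sketched in plain Python; the 100-sample index range below is a stand-in for the laboratory data, not the actual dataset:

```python
import random

def ten_fold_split(n_samples, seed=0):
    """Yield (train_idx, test_idx) pairs for 10-fold cross-validation.

    Indices are shuffled once, partitioned into 10 subsets, and each
    subset serves exactly once as the test set.
    """
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::10] for i in range(10)]
    for k in range(10):
        train = [i for f, fold in enumerate(folds) if f != k for i in fold]
        yield train, folds[k]

splits = list(ten_fold_split(100))
```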
2. Training framework and parameters. The training framework is a deep-learning framework such as PyTorch, TensorFlow, MXNet or PaddlePaddle. Training is performed on (but not limited to) a computer equipped with an Intel Core i7-9700K CPU, 32 GB of RAM and an NVIDIA RTX 3080 GPU (12 GB of video memory). The training parameters are listed in Table 1; the optimizer is Adam, and the model is trained for 100 epochs.
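A minimal Adam training loop of the kind described can be sketched in PyTorch with a stand-in model and random data; the layer sizes, class count and learning rate are illustrative assumptions, not values from Table 1:

```python
import torch
import torch.nn as nn

# Stand-in classifier: flatten a 32*32 input and map it to 3 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as in the patent
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 32, 32)    # dummy batch of RF feature matrices
y = torch.randint(0, 3, (8,))    # dummy class labels

for epoch in range(3):           # the patent trains for 100 epochs
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```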
Table 1. Parameter settings
3. Testing
The model is evaluated on the test set. If the recognition accuracy is too low, the model parameters are adjusted and the model is retrained; once the accuracy reaches 95%, training stops and the model is saved.
4. Iterative model improvement
When a new UAV model or other test data becomes available, transfer learning is used to continue training from the previously saved model, enabling rapid model iteration.
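Transfer learning from a saved checkpoint can be sketched as below; the tiny model, its split into a frozen "backbone" and a trainable "head", and the in-memory buffer standing in for a checkpoint file are all illustrative assumptions, not the patent's exact procedure:

```python
import io
import torch
import torch.nn as nn

# Illustrative model: index 0 acts as the "backbone", index 2 as the head.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))

# Save the "previously trained" weights (to memory here; a file in practice).
buf = io.BytesIO()
torch.save(model.state_dict(), buf)

# Transfer learning: reload the saved weights, freeze the backbone, and
# continue training only the head on the new UAV's data.
buf.seek(0)
model.load_state_dict(torch.load(buf))
for p in model[0].parameters():
    p.requires_grad = False
trainable = [p for p in model.parameters() if p.requires_grad]
```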
The trained model is deployed on an edge-computing device and combined with equipment that detects UAV radio-frequency signals to perform UAV RF signal identification.
1. The invention proposes a UAV radio-frequency signal detection and identification method combined with software-defined radio equipment: the software-defined radio collects the UAV signal, and a deep attention model identifies it, thereby achieving UAV identification.
2. The invention proposes a UAV radio-frequency identification method based on a deep attention model. It adopts a fully convolutional network with an encoder-decoder architecture, uses a residual network as the feature-extraction backbone, and specially designs a radio-frequency channel attention aggregation (FCA) module for identification. Trained and validated on the public DroneRF dataset, the model reaches 99.895% for UAV detection and classification, 98.33% for UAV identification, and 99.33% for UAV flight-mode identification.
3. The method collected radio-frequency data from two UAVs of the same model; after transfer learning, it can detect the UAVs and identify their flight modes.
4. The invention deploys the trained model on edge-computing devices for UAV identification, extending the application scenarios.
The present application is described with reference to flowcharts and/or block diagrams of the methods, devices (systems) and computer program products according to embodiments of the application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data-processing device to produce a machine, such that the instructions executed by the processor produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data-processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data-processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
The above are only preferred embodiments of the invention. It should be noted that those of ordinary skill in the art can make further improvements and modifications without departing from the technical principle of the invention, and such improvements and modifications shall also fall within the protection scope of the invention.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211078175.5ACN115687893A (en) | 2022-09-05 | 2022-09-05 | Unmanned aerial vehicle radio frequency identification method based on deep attention detection model |
| Publication Number | Publication Date |
|---|---|
| CN115687893Atrue CN115687893A (en) | 2023-02-03 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202211078175.5APendingCN115687893A (en) | 2022-09-05 | 2022-09-05 | Unmanned aerial vehicle radio frequency identification method based on deep attention detection model |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108196566A (en)* | 2018-03-16 | 2018-06-22 | 西安科技大学 | A kind of small drone cloud brain control system and its method |
| CN109829399A (en)* | 2019-01-18 | 2019-05-31 | 武汉大学 | A kind of vehicle mounted road scene point cloud automatic classification method based on deep learning |
| CN110213010A (en)* | 2019-04-28 | 2019-09-06 | 浙江大学 | A kind of unmanned plane detection system and method based on multi-channel radio frequency signal |
| CN110223359A (en)* | 2019-05-27 | 2019-09-10 | 浙江大学 | It is a kind of that color model and its construction method and application on the dual-stage polygamy colo(u)r streak original text of network are fought based on generation |
| CN112348006A (en)* | 2021-01-11 | 2021-02-09 | 湖南星空机器人技术有限公司 | Unmanned aerial vehicle signal identification method, system, medium and equipment |
| CN113160246A (en)* | 2021-04-14 | 2021-07-23 | 中国科学院光电技术研究所 | Image semantic segmentation method based on depth supervision |
| US20210255356A1 (en)* | 2017-06-08 | 2021-08-19 | The Regents Of The University Of Colorado, A Body Corporate | Drone presence detection |
| Title |
|---|
| MOHAMMAD F. AL-SA’D等: "RF-based drone detection and identification using deep learning approaches: An initiative towards a large open source drone database", FUTURE GENERATION COMPUTER SYSTEMS, 9 May 2019 (2019-05-09), pages 7 - 8* |
| YONGGUANG MO等: "Deep Learning Approach to UAV Detection and Classification by Using Compressively Sensed RF Signal", SENSORS, vol. 22, no. 3072, 16 April 2022 (2022-04-16), pages 1 - 15* |
| 梁煜;张金铭;张为;: "一种改进的卷积神经网络的室内深度估计方法", 天津大学学报(自然科学与工程技术版), no. 08, 2 June 2020 (2020-06-02), pages 74 - 80* |
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||