



Technical Field
The invention belongs to the technical field of radar target recognition, and in particular relates to a radar HRRP small-sample target recognition method based on metric learning.
Background Art
In many fields such as military detection, factors such as a low observation rate mean that the available data samples of non-cooperative targets are often extremely scarce, which is an important reason why the radar recognition accuracy for non-cooperative (out-of-library) targets is insufficient.
Nowadays, popular deep learning methods usually require a large amount of labeled data for training. Because the number of labeled samples of non-cooperative targets is severely insufficient, general deep learning methods cannot directly solve the non-cooperative target recognition problem, which makes non-cooperative target recognition a typical few-shot recognition problem. Moreover, compared with the existing categories, the number of samples of newly added categories is far smaller, which leads to the imbalanced-sample learning problem of general deep learning and directly results in low recognition accuracy for non-cooperative targets.
Most traditional target recognition methods build shallow probability models from known in-library target samples by statistical methods and realize target recognition through template matching. Existing target recognition methods based on deep learning obtain a recognition model through supervised training on a large amount of labeled data. This mechanism requires labeling a large quantity of data, and for new categories the model usually has to be retrained, which is computationally expensive; moreover, the recognition accuracy for small-sample non-cooperative targets is seriously insufficient, so such methods have great limitations. The prior art proposes a method that builds a dynamic adjustment layer from preprocessed HRRP (High Resolution Range Profile) samples, segments the HRRP with a sliding window, models the temporal correlation of the samples with a bidirectional stacked RNN, adjusts the importance of the hidden-layer states with a multi-level attention mechanism, and finally performs target classification through Softmax. For example, the patent application with publication number CN 111736125 A, entitled "Radar target recognition method based on attention mechanism and bidirectional stacked recurrent neural network", discloses such a deep-learning-based target recognition method. The method includes preprocessing to reduce the sensitivity of HRRP samples and building a dynamic adjustment layer; selecting a sliding window size to segment the HRRP; adjusting the importance of each segmented sequence through an importance network; modeling the temporal correlation of the samples with a bidirectional stacked RNN to extract high-level features; adjusting the importance of the hidden-layer states with a multi-level attention mechanism; and performing target classification through Softmax. The method successfully addresses the facts that radar HRRP feature extraction is unsupervised and lossy and that the choice of feature extraction method depends heavily on the researchers' knowledge of and experience with HRRP data, and it improves the accuracy of radar HRRP target recognition to a certain extent. However, the method has the following deficiencies: 1. it is only effective against the decline of recognition accuracy for in-library small-sample targets and does not consider the non-cooperative (out-of-library) small-sample target recognition problem; 2. it still uses the existing deep learning training mechanism based on large batches of labeled data; for non-cooperative (out-of-library) targets with a low observation rate and very few labeled samples, directly using this batch-training mechanism causes imbalanced model learning, so the recognition rate of non-cooperative targets cannot be guaranteed.
Therefore, providing a method that can improve the accuracy of recognizing non-cooperative targets has become an urgent problem to be solved.
Summary of the Invention
In order to solve the above problems in the prior art, the present invention provides a radar HRRP small-sample target recognition method based on metric learning. The technical problem to be solved by the present invention is realized through the following technical solutions:
A radar HRRP small-sample target recognition method based on metric learning, comprising:
Step 1: constructing a multi-category HRRP sample set, wherein the HRRP sample set of each category in the multi-category HRRP sample set includes a plurality of one-dimensional range profile signals;
Step 2: processing the HRRP sample set of each category to obtain an effective HRRP sample set of each category;
Step 3: using the effective HRRP sample sets of all categories to construct an in-library cooperative target HRRP training sample set, wherein the in-library cooperative target HRRP training sample set contains K categories in total;
Step 4: inputting the in-library cooperative target HRRP training sample set into a convolutional neural network to obtain the center of each category, wherein the convolutional neural network includes three cascaded convolutional neural network modules, each convolutional neural network module includes a cascaded convolutional layer, ReLU layer and pooling layer, the convolutional neural network further includes a feature adaptation layer and three conversion operation layers, and the feature adaptation layer is connected to the three convolutional neural network modules through the three conversion operation layers, respectively;
Step 5: obtaining a loss function according to the center of each category and the output result of the convolutional neural network;
Step 6: performing backpropagation with the loss function until the loss function converges, so as to obtain a feature extractor;
Step 7: performing feature extraction on a multi-category non-cooperative target small-sample training set with the feature extractor to obtain feature data of non-cooperative targets of multiple categories;
Step 8: training a classifier on the feature data of the non-cooperative targets of the multiple categories with a gradient-optimized fully connected layer network, so as to perform target recognition with the classifier.
In one embodiment of the present invention, step 1 includes:
Step 1.1: evenly dividing the azimuth range of 0 to 90 degrees at the same elevation angle to obtain n angular domains;
Step 1.2: continuously collecting radar echo signals of multiple categories in the n angular domains, and evenly dividing the collected radar echo signals into multiple sub-echo signal segments;
Step 1.3: performing FFT processing on the sub-echo signals of each category to obtain the one-dimensional range profile signals, wherein all the one-dimensional range profile signals of the different categories form the multi-category HRRP sample set.
In one embodiment of the present invention, step 2 includes:
Step 2.1: processing the HRRP sample set of each category with an energy normalization method to obtain an energy-normalized HRRP sample set;
Step 2.2: aligning the energy-normalized HRRP sample set with a center-of-gravity alignment method to obtain the effective HRRP sample set of each category.
In one embodiment of the present invention, step 3 includes:
randomly sampling the effective HRRP sample sets of K categories to obtain the in-library cooperative target HRRP training sample set.
In one embodiment of the present invention, the feature adaptation layer includes five cascaded convolutional neural network modules, an average pooling layer is connected after the five cascaded convolutional neural network modules, and the output of the average pooling layer is then averaged to construct an encoding layer; a first linear layer, a first ReLU layer, a second linear layer, a second ReLU layer, a third linear layer and a third ReLU layer are cascaded in sequence after the encoding layer.
In one embodiment of the present invention, the conversion operation layer performs a linear transformation on the output of the feature adaptation layer, so as to input the result of the linear transformation into the corresponding convolutional neural network module.
In one embodiment of the present invention, step 5 includes:
Step 5.1: obtaining the distance between the output result of the convolutional neural network and the center of each category with a distance metric function constructed from the Euclidean distance between the center of each category and the output result of the convolutional neural network;
Step 5.2: obtaining, based on the softmax function, the probability distribution of the in-library cooperative target HRRP training sample set according to the distance between the center of each category and the output result of the convolutional neural network;
Step 5.3: obtaining the loss function according to the probability distribution.
In one embodiment of the present invention, step 7 includes:
Step 7.1: randomly sampling the effective HRRP sample sets of Kc categories to obtain a multi-category non-cooperative target small-sample test set and the multi-category non-cooperative target small-sample training set, wherein the K categories and the Kc categories are disjoint;
Step 7.2: performing feature extraction on the multi-category non-cooperative target small-sample training set with the feature extractor to obtain the feature data of the non-cooperative targets of the multiple categories.
In one embodiment of the present invention, after step 8, the method further includes:
evaluating, based on a recognition accuracy evaluation model, the recognition accuracy of the classifier with the multi-category non-cooperative target small-sample test set to obtain an evaluation result.
Beneficial effects of the present invention:
The present invention improves the convolutional neural network by constructing a feature adaptation layer and conversion operation layers, which further improves the generalization ability of the feature extractor.
The present invention constructs a metric function based on the Euclidean distance and uses it to build a loss function that measures the distance between the predicted center of a training sample and its label, so as to continuously optimize the feature extraction model; compared with the prior art, this effectively improves the accuracy of the target recognition model.
The present invention performs multiple batches of model fine-tuning by constructing multiple small-sample recognition tasks to train the feature extraction model, which solves the imbalanced model learning problem of existing deep-learning-based feature extraction methods when the sample size of new target categories is limited.
The present invention will be described in further detail below in conjunction with the accompanying drawings and embodiments.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of a radar HRRP small-sample target recognition method based on metric learning provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another radar HRRP small-sample target recognition method based on metric learning provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of a convolutional neural network provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the three-dimensional models of some aircraft targets and the corresponding simulated one-dimensional range profiles provided by an embodiment of the present invention.
Detailed Description of the Embodiments
The present invention will be described in further detail below in conjunction with specific embodiments, but the implementations of the present invention are not limited thereto.
Embodiment 1
Referring to Fig. 1 and Fig. 2, Fig. 1 is a schematic flowchart of a radar HRRP small-sample target recognition method based on metric learning provided by an embodiment of the present invention, and Fig. 2 is a schematic flowchart of another radar HRRP small-sample target recognition method based on metric learning provided by an embodiment of the present invention. The embodiment of the present invention provides a radar HRRP small-sample target recognition method based on metric learning, which includes steps 1 to 8, wherein:
Step 1: construct a multi-category HRRP sample set, where the HRRP sample set of each category in the multi-category HRRP sample set includes a plurality of one-dimensional range profile signals.
In this embodiment, the multi-category HRRP sample set includes HRRP sample sets of multiple different categories, where the categories refer to different types of aircraft, for example the Yak-38 and the Tornado F3.
In a specific embodiment, step 1 specifically includes steps 1.1 to 1.3, wherein:
Step 1.1: evenly divide the azimuth range of 0 to 90 degrees at the same elevation angle to obtain n angular domains.
Preferably, n ranges from 5 to 20; for example, n is 10.
Step 1.2: continuously collect radar echo signals of multiple categories in the n angular domains, and evenly divide the collected radar echo signals into multiple sub-echo signal segments.
In this embodiment, the number of sub-echo signals in each angular domain is L/n, where L is the total number of sub-echo signals and L ≥ 2000; for example, each angular domain contains 320 sub-echo segments.
Step 1.3: perform FFT (Fast Fourier Transform) processing on the sub-echo signals of each category to obtain the one-dimensional range profile signals; all the one-dimensional range profile signals of the different categories form the multi-category HRRP sample set.
In this embodiment, FFT processing is performed on each sub-echo signal obtained in step 1.2, and a one-dimensional range profile signal is obtained after the processing. Thus, each category includes multiple one-dimensional range profile signals, and the one-dimensional range profile signals of all categories form the multi-category HRRP sample set.
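For illustration, a minimal NumPy sketch of step 1.3 is given below, assuming each sub-echo segment is a vector of 256 complex stepped-frequency samples (the 256-point setting is taken from the later experiment description, and the fftshift centring is an added convention, not a requirement of this step):

```python
import numpy as np

def sub_echoes_to_hrrp(sub_echoes):
    """Turn complex stepped-frequency sub-echo segments into HRRP samples.

    sub_echoes: complex array of shape (num_segments, num_freq_points),
                e.g. (320, 256) for one angular domain of one category.
    returns:    real-valued HRRP magnitudes of the same shape.
    """
    # The range profile is the magnitude of the FFT of the frequency-domain
    # returns; fftshift centres the target response in the profile.
    profiles = np.fft.fftshift(np.fft.fft(sub_echoes, axis=-1), axes=-1)
    return np.abs(profiles)

# Example with synthetic data: 320 sub-echo segments of 256 stepped-frequency points
echoes = np.random.randn(320, 256) + 1j * np.random.randn(320, 256)
hrrp = sub_echoes_to_hrrp(echoes)   # (320, 256) one-dimensional range profiles
```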
Step 2: process the HRRP sample set of each category to obtain the effective HRRP sample set of each category.
In a specific embodiment, step 2 specifically includes steps 2.1 to 2.2, wherein:
Step 2.1: process the HRRP sample set of each category with the energy normalization method to obtain the energy-normalized HRRP sample set.
In this embodiment, due to factors such as transmitter power, transmitting and receiving antenna gains, target distance and antenna characteristics, the signal levels of the obtained high-resolution range profiles differ. Therefore, energy normalization is applied to each one-dimensional range profile signal in the sample set obtained in step 1 to obtain the energy-normalized HRRP sample set.
Step 2.2: align the energy-normalized HRRP sample set with the center-of-gravity alignment method to obtain the effective HRRP sample set of each category.
In this embodiment, to address the translation sensitivity problem, the center-of-gravity alignment method (an absolute alignment method) is used to align the one-dimensional range profile signals in the HRRP sample set normalized in step 2.1, so that the preprocessed effective HRRP sample set is obtained.
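A minimal NumPy sketch of the two preprocessing operations in steps 2.1 and 2.2 follows; the circular-shift implementation of the center-of-gravity alignment and the choice of the middle range bin as the alignment position are assumptions, since the patent only names the methods:

```python
import numpy as np

def energy_normalize(x):
    """Scale each HRRP sample to unit energy (unit L2 norm)."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-12)

def centroid_align(x):
    """Circularly shift each HRRP so its center of gravity sits at the middle bin."""
    n = x.shape[-1]
    bins = np.arange(n)
    aligned = np.empty_like(x)
    for i, p in enumerate(x):
        centroid = np.sum(bins * p) / (np.sum(p) + 1e-12)
        shift = n // 2 - int(round(centroid))
        aligned[i] = np.roll(p, shift)
    return aligned

hrrp = np.abs(np.random.randn(320, 256))             # placeholder HRRP magnitudes
effective = centroid_align(energy_normalize(hrrp))   # effective HRRP sample set
```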
Step 3: use the effective HRRP sample sets of all categories to construct the in-library cooperative target HRRP training sample set, which contains K categories in total.
Specifically, the effective HRRP sample sets of K categories are randomly sampled to obtain the in-library cooperative target HRRP training sample set.
In this embodiment, K categories are first selected from the effective HRRP sample sets of all categories, and then several samples are randomly drawn from the effective HRRP sample set of each category; the samples drawn from all categories form the in-library cooperative target HRRP training sample set H, H = {(x_1, y_1), ..., (x_N, y_N)}, which contains K categories and N samples in total, where x_i ∈ R^D, R^D is the D-dimensional real vector space, and y_i ∈ {1, ..., K} is the label marking the aircraft category of the sample. For example, target samples of 45 different aircraft categories are constructed, with 3200 samples per category and 256 stepped-frequency points per sample.
Step 4: referring to Fig. 3, input the in-library cooperative target HRRP training sample set into the convolutional neural network to obtain the center of each category. The convolutional neural network includes three cascaded convolutional neural network modules, each of which includes a cascaded convolutional layer, ReLU layer and pooling layer; the convolutional neural network further includes a feature adaptation layer and three conversion operation layers, and the feature adaptation layer is connected to the three convolutional neural network modules through the three conversion operation layers, respectively. The three cascaded convolutional neural network modules are denoted Block1, Block2 and Block3 in Fig. 3, and the three conversion operation layers are denoted Film Layers1, Film Layers2 and Film Layers3 in Fig. 3.
In a specific embodiment, step 4 specifically includes steps 4.1 to 4.2, wherein:
Step 4.1: input the in-library cooperative target HRRP training sample set into the convolutional neural network to obtain the output result.
In this embodiment, three cascaded convolutional neural network modules are built, and each module contains a cascaded convolutional layer, ReLU layer and pooling layer. The computation of each convolutional neural network module can be written as f_j = b_j(·), where f_j denotes the output of the j-th module, b_j denotes the nonlinear mapping implemented by the j-th module, j ∈ {1, 2, 3}, and S_k denotes the sample set of the k-th target category.
To enable the three convolutional neural network modules to better learn the multi-modal information of the data of different HRRP recognition tasks, this embodiment constructs a feature extractor that incorporates a feature adaptation layer and conversion operation layers. Through the feature adaptation layer operation, two vectors γ_j and β_j are output for each convolutional neural network module, where f_adaptation denotes the parameters of the feature adaptation layer. The vectors γ_j and β_j are then used by the conversion operation layer connected after each convolutional neural network module, which computes
D_j = F(f_j; γ_j, β_j) = γ_j f_j + β_j,
where D_j denotes the output of the j-th convolutional neural network module after the conversion, and f_j denotes the result of passing the in-library cooperative target HRRP training sample set through the j-th convolutional neural network module.
Further, the feature adaptation layer includes five cascaded convolutional neural network modules, an average pooling layer is connected after the five cascaded convolutional neural network modules, and the output of the average pooling layer is then averaged to construct an encoding layer; a first linear layer, a first ReLU layer, a second linear layer, a second ReLU layer, a third linear layer and a third ReLU layer are cascaded in sequence after the encoding layer.
In the feature adaptation layer of this embodiment, the in-library cooperative target HRRP training sample set passes through the five cascaded convolutional neural network modules in turn, the output of the last module enters the average pooling layer, and the output of the average pooling layer is then averaged to build the encoding layer, whose output is defined as z_j = g_f(S), j ∈ {1, 2, 3}. The encoding layer output z_j first enters the first linear layer, whose output is z_j1; then (z_j + z_j1) enters the first ReLU layer, which outputs z_j2; z_j2 enters the second linear layer, whose output is z_j3; then (z_j2 + z_j3) enters the second ReLU layer, which outputs z_j4; z_j4 enters the third linear layer, whose output is z_j5; then (z_j4 + z_j5) enters the third ReLU layer, which outputs z_j6; finally, z_j6 passes through the conversion operation layer and enters the j-th convolutional neural network module of the convolutional neural network.
Further, the cascaded first linear layer, first ReLU layer, second linear layer, second ReLU layer, third linear layer and third ReLU layer linearly transform the final output of the encoding layer, where z_j6 denotes the output of the third ReLU layer described above.
Then z_j6 enters the conversion operation layer to compute the vectors γ_j and β_j, where r denotes a vector drawn from a normal distribution with mean 0 and variance 0.001 whose dimension is the same as that of z_j, and h denotes a unit vector whose dimension is the same as that of z_j6.
Therefore, the conversion operation layer feeds its output into the corresponding convolutional neural network module of the convolutional neural network. The final output of the three cascaded convolutional neural network modules, together with the feature adaptation layer and the conversion operation layers, is denoted D_3, and f_φ(·) denotes the mapping of the feature extractor constructed from the three cascaded convolutional neural network modules, the feature adaptation layer and the conversion operation layers, so that the overall computation can be written as D_3 = f_φ(x).
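The feature extractor described above (three convolutional modules modulated by γ_j and β_j generated from a feature adaptation network) can be sketched in PyTorch roughly as follows. This is a minimal illustrative sketch, not the exact implementation: the kernel size of 9, the max-pooling, the 300-dimensional embedding, the channel widths of the adaptation encoder, and the per-block linear heads that map the task encoding to γ_j and β_j are all assumptions, and the Gaussian perturbation r and unit vector h used during training are omitted for brevity.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, k=9):
    # One "convolutional neural network module": Conv1d -> ReLU -> pooling.
    return nn.Sequential(nn.Conv1d(in_ch, out_ch, k, padding=k // 2),
                         nn.ReLU(),
                         nn.MaxPool1d(2))

class AdaptationNet(nn.Module):
    """Feature adaptation layer: 5 conv modules -> average pooling -> mean over
    the support set -> three Linear/ReLU stages -> per-block (gamma, beta) heads."""
    def __init__(self, width=64, block_channels=(32, 64, 128)):
        super().__init__()
        chs = [1, 32, 32, 64, 64, width]
        self.encoder = nn.Sequential(*[conv_block(chs[i], chs[i + 1]) for i in range(5)],
                                     nn.AdaptiveAvgPool1d(1))
        self.lin = nn.ModuleList([nn.Linear(width, width) for _ in range(3)])
        self.relu = nn.ReLU()
        # Heads turning the task encoding into gamma_j and beta_j for each block.
        self.gamma = nn.ModuleList([nn.Linear(width, c) for c in block_channels])
        self.beta = nn.ModuleList([nn.Linear(width, c) for c in block_channels])

    def forward(self, support):                 # support: (N, 1, 256)
        z = self.encoder(support).squeeze(-1)   # (N, width)
        z = z.mean(dim=0)                       # task encoding, shape (width,)
        h = z
        for lin in self.lin:                    # Linear -> residual add -> ReLU, x3
            h = self.relu(h + lin(h))
        return ([self.gamma[j](h) for j in range(3)],
                [self.beta[j](h) for j in range(3)])

class FilmFeatureExtractor(nn.Module):
    """f_phi: three conv modules whose outputs are modulated as
    D_j = gamma_j * f_j + beta_j, followed by flatten + linear embedding."""
    def __init__(self, block_channels=(32, 64, 128), emb_dim=300, in_len=256):
        super().__init__()
        chs = (1,) + tuple(block_channels)
        self.blocks = nn.ModuleList([conv_block(chs[j], chs[j + 1]) for j in range(3)])
        self.adapt = AdaptationNet(block_channels=block_channels)
        self.embed = nn.Linear(block_channels[-1] * (in_len // 8), emb_dim)

    def forward(self, x, support):              # x: (B, 1, 256)
        gammas, betas = self.adapt(support)
        for j, block in enumerate(self.blocks):
            f = block(x)                                                  # f_j
            x = gammas[j].view(1, -1, 1) * f + betas[j].view(1, -1, 1)    # D_j
        return self.embed(x.flatten(1))          # (B, emb_dim)

# Usage: query features are modulated by parameters generated from the support set.
extractor = FilmFeatureExtractor()
support = torch.randn(35 * 5, 1, 256)            # hypothetical support samples
query = torch.randn(64, 1, 256)
features = extractor(query, support)             # (64, 300)
```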
Step 4.2: based on the per-category center computation model, obtain the center of each category from the output result of the convolutional neural network.
Specifically, the output D_3 of the convolutional neural network is used to obtain the center c_k of each category, with x_i ∈ R^D and D_3 ∈ R^M; the per-category center computation model is
c_k = (1/|S_k|) Σ_{(x_i, y_i) ∈ S_k} f_φ(x_i),
where c_k ∈ R^M and R^M denotes the M-dimensional real vector space.
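A minimal PyTorch sketch of this per-class center computation (the function name and tensor layout are illustrative):

```python
import torch

def class_centers(features, labels, num_classes):
    """c_k: mean of the extracted features of the samples of class k.

    features: (N, M) tensor output by the feature extractor f_phi
    labels:   (N,) integer class labels in [0, num_classes)
    returns:  (num_classes, M) tensor of per-class centers
    """
    centers = torch.zeros(num_classes, features.size(1), device=features.device)
    for k in range(num_classes):
        centers[k] = features[labels == k].mean(dim=0)   # mean over class-k samples
    return centers
```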
Step 5: obtain the loss function according to the center of each category and the output result of the convolutional neural network.
In a specific embodiment, step 5 specifically includes steps 5.1 to 5.3, wherein:
Step 5.1: obtain the distance between the output result of the convolutional neural network and the center of each category with the distance metric function constructed from their Euclidean distance.
Specifically, the Euclidean distance is used to construct a distance metric function
d(f_φ(x), c_k) = sqrt( Σ_{m=1}^{M} (f_φ(x)_m − c_{k,m})² ),
where f_φ(x) denotes the feature extracted by the convolutional neural network and M denotes the feature dimension; this distance metric function is used to compute the distance between a sample point whose features have been extracted by f_φ(·) and the center of each category.
Step 5.2: based on the softmax function, obtain the probability distribution of the in-library cooperative target HRRP training sample set according to the distance between the center of each category and the output result of the convolutional neural network.
Specifically, the probability distribution of a sample is obtained through the adapted softmax function
p_φ(y = k | x) = exp(−d(f_φ(x), c_k)) / Σ_{k′} exp(−d(f_φ(x), c_{k′})),
where p_φ(y = k | x) is the probability distribution of the in-library cooperative target HRRP training sample set and c_{k′} denotes the centers of the categories other than the currently computed category k.
Step 5.3: obtain the loss function according to the probability distribution.
The loss function is defined as J(φ) = −log p_φ(y = k | x), which can be further rewritten as
J(φ) = d(f_φ(x), c_k) + log Σ_{k′} exp(−d(f_φ(x), c_{k′})).
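The distance metric, softmax probability and loss of steps 5.1 to 5.3 can be sketched together in PyTorch as follows; torch.cdist gives the pairwise Euclidean distances, and the negative log-softmax over negative distances reproduces J(φ) above (the function and variable names are illustrative):

```python
import torch
import torch.nn.functional as F

def euclidean_dist(x, centers):
    """d(f_phi(x), c_k) for every sample/center pair.

    x: (B, M) extracted query features; centers: (K, M); returns (B, K)."""
    return torch.cdist(x, centers, p=2)

def prototypical_loss(query_features, query_labels, centers):
    """J = d(f_phi(x), c_y) + log sum_k' exp(-d(f_phi(x), c_k')),
    i.e. the negative log of a softmax over negative distances."""
    dists = euclidean_dist(query_features, centers)   # (B, K)
    log_p = F.log_softmax(-dists, dim=1)              # log p(y=k|x)
    return F.nll_loss(log_p, query_labels)            # mean of -log p(y=y_i|x_i)
```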
Step 6: perform backpropagation with the loss function until the loss function converges, so as to obtain the feature extractor.
Specifically, the loss value is computed with the loss function and backpropagated during model training until the loss converges to its minimum, which yields a convolutional neural network model with stronger generalization ability; the resulting convolutional neural network is then used as the final feature extractor.
Step 7: perform feature extraction on the multi-category non-cooperative target small-sample training set with the feature extractor to obtain the feature data of the non-cooperative targets of multiple categories.
In a specific embodiment, step 7 specifically includes steps 7.1 to 7.2, wherein:
Step 7.1: randomly sample the effective HRRP sample sets of Kc categories to obtain the multi-category non-cooperative target small-sample test set and the multi-category non-cooperative target small-sample training set, where the K categories and the Kc categories are disjoint.
In this embodiment, Kc categories are first selected from the effective HRRP sample sets of all categories obtained in step 2, and several samples are then randomly drawn from the effective HRRP sample set of each of the Kc categories to form the multi-category non-cooperative target small-sample training set. For example, Kc is 5, and these 5 aircraft categories are completely different from the 35 categories used in the previous training, i.e., they are new, previously unseen target categories. For instance, 1, 5 or 10 samples are randomly drawn from the effective HRRP sample set of each category to form the multi-category non-cooperative target small-sample training set, and 15 further samples are randomly drawn from each of the Kc categories to form the multi-category non-cooperative target small-sample test set. The multi-category non-cooperative target small-sample training set and test set together form a multi-category non-cooperative target small-sample task set containing samples of Kc non-cooperative out-of-library categories. The training part is called the meta-training set (i.e., the multi-category non-cooperative target small-sample training set), whose per-category sample size S is set to 1, 5 and 10 in separate tests; the test part is called the meta-test set (i.e., the multi-category non-cooperative target small-sample test set), whose per-category sample size Q is 15. Each task consists of one such training set and test set, and there are J groups of tasks in total; for example, 600 few-shot recognition tasks are set up.
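As an illustration of how one such few-shot task (a meta-training/support set and a disjoint meta-test/query set) could be drawn, a minimal NumPy sketch is given below; the data layout (a dict of per-class arrays) and the function name are assumptions:

```python
import numpy as np

def sample_episode(data_by_class, n_way=5, k_shot=5, q_query=15, rng=None):
    """Draw one few-shot task: a meta-training (support) set and a disjoint
    meta-test (query) set from n_way randomly chosen classes.

    data_by_class: dict {class_id: array of shape (num_samples, 256)}
    returns: (support_x, support_y, query_x, query_y)
    """
    rng = rng or np.random.default_rng()
    classes = rng.choice(list(data_by_class.keys()), size=n_way, replace=False)
    s_x, s_y, q_x, q_y = [], [], [], []
    for new_label, c in enumerate(classes):
        idx = rng.permutation(len(data_by_class[c]))[: k_shot + q_query]
        s_x.append(data_by_class[c][idx[:k_shot]]); s_y += [new_label] * k_shot
        q_x.append(data_by_class[c][idx[k_shot:]]); q_y += [new_label] * q_query
    return (np.concatenate(s_x), np.array(s_y),
            np.concatenate(q_x), np.array(q_y))
```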
Step 7.2: perform feature extraction on the multi-category non-cooperative target small-sample training set with the feature extractor to obtain the feature data of the non-cooperative targets of multiple categories.
Specifically, the feature extractor trained in the above steps is applied to the multi-category non-cooperative target small-sample training set to obtain the feature data of the non-cooperative targets of multiple categories.
Step 8: train a classifier on the feature data of the non-cooperative targets of the multiple categories with a gradient-optimized fully connected layer network, so as to perform target recognition with the classifier.
Specifically, a gradient-optimized fully connected layer network is trained as the classifier on the feature data of the non-cooperative targets of multiple categories obtained in step 7, where L_ce denotes the cross-entropy loss function and φ_c denotes the parameters of the fully connected layer classifier.
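A minimal PyTorch sketch of this fine-tuning stage: a single fully connected layer trained with the cross-entropy loss L_ce by gradient descent on the frozen features extracted in step 7 (the optimizer choice, learning rate and number of iterations are assumptions):

```python
import torch
import torch.nn as nn

def train_classifier(features, labels, num_classes, epochs=100, lr=1e-3):
    """Fit a fully connected classifier on frozen extracted features.

    features: (N, M) float tensor; labels: (N,) long tensor of class indices."""
    clf = nn.Linear(features.size(1), num_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()                 # L_ce
    for _ in range(epochs):
        opt.zero_grad()
        loss = ce(clf(features), labels)
        loss.backward()
        opt.step()
    return clf
```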
In a specific embodiment, after step 8 the method may further include:
evaluating, based on the recognition accuracy evaluation model, the recognition accuracy of the classifier with the multi-category non-cooperative target small-sample test set to obtain the evaluation result, where L_meta denotes the loss function and m denotes the mean class accuracy (MCA), which is computed as
m = (1/N) Σ_{i=1}^{N} (T_i / M_i),
where m denotes the mean class accuracy, N denotes the number of categories, M_i denotes the number of samples of category i, and T_i denotes the number of correctly classified samples of category i.
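A minimal NumPy sketch of the mean class accuracy (MCA) computation defined above (names are illustrative):

```python
import numpy as np

def mean_class_accuracy(y_true, y_pred, num_classes):
    """m = (1/N) * sum_i (T_i / M_i): the average of per-class accuracies."""
    accs = []
    for i in range(num_classes):
        mask = (y_true == i)
        if mask.any():
            accs.append((y_pred[mask] == i).mean())   # T_i / M_i for class i
    return float(np.mean(accs))
```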
The technical effect of the present invention is further illustrated below through an actual test:
1) The modeling platform of this embodiment uses an i7-4770 CPU with a main frequency of 3.40 GHz, 32 GB of memory, an NVIDIA GeForce GTX 1070 graphics card with 6 GB of video memory, and Windows 7 (64-bit).
2) The development environment is PyCharm Community 2020.1 and Matlab R2015a with Python 3.7.1. The framework is PyTorch 1.1.0 with CUDA 9.0, and the main libraries are Numpy 1.19.1, Scikit-learn 0.23.2, Scipy 1.5.2 and Matplotlib 3.3.2.
3) Data simulation: to verify the effectiveness of the HRRP-based non-cooperative target small-sample recognition method of the present invention, 3D models of 50 aircraft categories are first built with the 3D drawing software SolidWorks 2018 (for convenience, the required aircraft 3D models can also be downloaded directly from the flyaway simulation website). Electromagnetic simulation of the 50 aircraft models is then carried out with the high-frequency electromagnetic computation software CST STUDIO SUITE to obtain wideband electromagnetic scattering data of the aircraft targets, and a Fourier transform (FFT) is applied to the data to obtain the one-dimensional range profiles (HRRP) of the aircraft models; the simulation parameters are listed in Table 1. Electromagnetic computation is performed for the 50 aircraft categories at an elevation angle of 84 degrees; each category has 3200 HRRP samples per elevation angle, and the azimuth of each category covers 10.05 to 90 degrees. Fig. 4 shows the three-dimensional models and the corresponding one-dimensional range profiles of some of the 50 aircraft categories.
Table 1. Electromagnetic scattering calculation parameters
To overcome the HRRP sensitivity problems, the present invention performs energy normalization and center-of-gravity alignment on the obtained one-dimensional range profile data to obtain the preprocessed one-dimensional range profiles.
4) From the 50 target categories, 35 categories are selected as training samples (all samples of each category) and 5 categories are selected as test samples (all samples of each category); these 5 categories are disjoint from the 35 training categories.
The per-category training-set sample size S is set to 1, 5 and 10, i.e., sample data of dimensions 10000*35*1*256, 10000*35*5*256 and 10000*35*10*256 are constructed, corresponding to 10000 few-shot training tasks; in each task, 1, 5 or 10 samples are randomly drawn from the 3200 samples of each of the 35 categories.
The validation set is constructed as sample data of dimension 10000*35*15*256, i.e., 10000 few-shot validation tasks; in each task, 15 samples are randomly drawn from the 3200 samples of each of the 35 categories, and these samples are disjoint from the training-set samples.
The meta-training set of the test set takes per-category sample sizes S of 1, 5 and 10, i.e., sample data of dimensions 600*5*1*256, 600*5*5*256 and 600*5*10*256 are constructed, corresponding to 600 few-shot training tasks; in each task, 1, 5 or 10 samples are randomly drawn from the 3200 samples of each of the 5 categories (non-cooperative targets).
The meta-test set of the test set is constructed as sample data of dimension 600*5*15*256, i.e., 600 few-shot test tasks; in each task, 15 samples are randomly drawn from the 3200 samples of each of the 5 categories (non-cooperative targets) and are used to evaluate the performance of the classifier trained on the support set for the new categories. The samples of the meta-training set and the meta-test set do not overlap.
5) Model structure: in both the training and test stages, each few-shot training task trains a feature extractor for that task. The feature extractor uses a three-layer convolutional neural network: the first layer has 1 input channel, 32 output channels, a step size of 9 and zero padding; the second layer has 64 output channels, a step size of 9 and zero padding; the third layer has 128 output channels, a step size of 9 and zero padding. Finally, the convolutional output features are flattened into a vector by a Flatten operation and passed through a 300-dimensional fully connected layer. The classifier in the training stage is a fully connected layer from 300 dimensions to 35 dimensions; finally, the Euclidean distance is used to measure the distance between a sample point and each category representative, and softmax is used to obtain the probability distribution.
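A sketch of the backbone as parameterized in this experiment is given below, reading the stated step size of 9 literally as the convolution stride; the kernel size of 9, the padding width and the ReLU activations are assumptions (the experiment description specifies only the step size and zero padding), so the flattened feature size of 128 holds only under these assumptions.

```python
import torch
import torch.nn as nn

# Three Conv1d layers with 1 -> 32 -> 64 -> 128 channels, stride 9 and zero
# padding, followed by Flatten and a 300-dimensional fully connected layer.
backbone = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=9, stride=9, padding=4), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=9, stride=9, padding=4), nn.ReLU(),
    nn.Conv1d(64, 128, kernel_size=9, stride=9, padding=4), nn.ReLU(),
    nn.Flatten(),
    # For a 256-point input, the three stride-9 convolutions leave a length of 1,
    # so the flattened feature has 128 dimensions under the assumptions above.
    nn.Linear(128, 300),
)

x = torch.randn(8, 1, 256)        # a batch of preprocessed HRRP samples
emb = backbone(x)                 # (8, 300) embeddings
train_head = nn.Linear(300, 35)   # training-stage classifier: 300 -> 35 classes
logits = train_head(emb)
```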
6) The recognition results for non-cooperative (out-of-library) targets are compared under the conditions of K equal to 1, 5 and 10 samples, against the traditional SVM, Logistic, AGC and CNN-FC methods; the experimental results are as follows:
Table 2. Comparison of the accuracy of the few-shot recognition methods
In a recognition task, the model recognizes 5 categories of non-cooperative aircraft targets; random guessing without training the model yields a recognition accuracy of 20%. As shown in Table 2, SVM and Logistic are capable of recognizing non-cooperative targets from small samples, and their recognition ability improves markedly as the number of samples increases. Taking SVM as an example, the performance with 10 samples per category is more than 19% higher than with 1 sample per category, whereas AGC performs far worse than the other two shallow baseline models because the sample dimension is much higher than the number of samples, leading to a poor covariance matrix estimate. CNN-FC, which fine-tunes a classifier on the non-cooperative targets, achieves an accuracy of only about 20% when the number of samples per category does not exceed 10 and therefore has no real recognition ability. On the other hand, compared with the shallow models SVM and Logistic, the CNN-Metric-FC method of the present invention improves the accuracy by more than 14%, 10% and 3% in the 1-sample, 5-sample and 10-sample experiments, respectively. It can be seen that, under the small-sample condition for non-cooperative targets, the present invention achieves higher recognition accuracy and is more effective and robust than methods such as SVM and Logistic.
The technical idea of the present invention is as follows: first, a multi-target HRRP sample set is constructed, and a feature extraction model is trained with a convolutional neural network improved by the feature adaptation layer and the conversion operation layers; the extracted features are used to compute the center of each category, and a loss function is built from the metric function; then, the HRRP feature data of the non-cooperative small-sample targets obtained through feature extraction are used to train a classifier based on a gradient-optimized fully connected layer, thereby realizing non-cooperative small-sample target recognition.
The present invention improves the convolutional neural network by constructing a feature adaptation layer and conversion operation layers, which further improves the generalization ability of the feature extractor.
The present invention constructs a metric function based on the Euclidean distance and uses it to build a loss function that measures the distance between the predicted center of a training sample and its label, so as to continuously optimize the feature extraction model; compared with the prior art, this effectively improves the accuracy of the target recognition model.
The present invention performs multiple batches of model fine-tuning by constructing multiple small-sample recognition tasks to train the feature extraction model, which solves the imbalanced model learning problem of existing deep-learning-based feature extraction methods when the sample size of new target categories is limited.
In the description of this specification, references to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" mean that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification.
Although the present application has been described herein in conjunction with various embodiments, those skilled in the art can, in practicing the claimed application, understand and implement other variations of the disclosed embodiments by studying the drawings, the disclosure and the appended claims. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The above is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the present invention shall not be considered limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the present invention, and all of them shall be deemed to fall within the protection scope of the present invention.
Patent Citations
CN 109375186 A (published 2019-02-22): A radar target recognition method based on deep residual multi-scale one-dimensional convolutional neural network
CN 109977871 A (published 2019-07-05): A kind of satellite target recognition method based on wideband radar data and GRU neural network
CN 110321999 A (published 2019-10-11): Neural computing graph optimization method
CN 110391022 A (published 2019-10-29): A deep learning breast cancer pathological image segmentation diagnosis method based on multi-stage migration
CN 111461063 A (published 2020-07-28): A behavior recognition method based on graph convolution and capsule neural network
CN 111596292 A (published 2020-08-28): A radar target recognition method based on importance network and bidirectional stacked recurrent neural network
CN 111736125 A (published 2020-10-02): Radar target recognition method based on attention mechanism and bidirectional stacked recurrent neural network
CN 111798980 A (published 2020-10-20): Complex medical biological signal processing method and device based on deep learning network
CN 111986112 A (published 2020-11-24): Deep fully convolutional neural network image denoising method with soft attention mechanism
CN 112307996 A (published 2021-02-02): A fingertip electrocardiographic identification device and method
CN 112417760 A (published 2021-02-26): Ship control method based on competitive hybrid network
CN 112541355 A (published 2021-03-23): Few-sample named entity identification method and system with entity boundary class decoupling
CN 112764024 A (published 2021-05-07): Radar target identification method based on convolutional neural network and Bert
CN 112784930 A (published 2021-05-11): CACGAN-based HRRP identification database sample expansion method
EP 3819659 A1 (published 2021-05-12): Communication module for components of tactical air conditioning systems