CN114626481B - A multi-scale metric few-shot learning method based on class features - Google Patents

A multi-scale metric few-shot learning method based on class features

Info

Publication number
CN114626481B
CN114626481B | CN202210314022.XA | CN202210314022A
Authority
CN
China
Prior art keywords
class
sample
feature
samples
query
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210314022.XA
Other languages
Chinese (zh)
Other versions
CN114626481A (en)
Inventor
吴磊
管林林
王晓敏
吴少智
龚海刚
刘明
陈坚武
单文煜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Quzhou Haiyi Technology Co ltd
Yangtze River Delta Research Institute of UESTC Huzhou
Original Assignee
Quzhou Haiyi Technology Co ltd
Yangtze River Delta Research Institute of UESTC Huzhou
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Quzhou Haiyi Technology Co ltd, Yangtze River Delta Research Institute of UESTC Huzhou
Priority to CN202210314022.XA
Publication of CN114626481A
Application granted
Publication of CN114626481B
Legal status: Active
Anticipated expiration

Abstract

Translated from Chinese


The present invention relates to a multi-scale metric few-shot learning method based on class features, comprising: S1, a data preprocessing step; S2, a feature embedding step; S3, a class feature extraction step, in which multiple same-class sample features from the support set are fused through a dynamic routing mechanism, and the weight vectors of the input vectors are updated iteratively to obtain the overall class feature; S4, a multi-scale metric step, in which the similarity between support-set class features and query-set samples is measured by fusing three metric criteria. The invention adopts a dynamic routing mechanism to generate overall class features; compared with direct weighted averaging, the class features obtained by this algorithm are more representative. In the metric module, an attention mechanism is introduced into the parametric-network metric, and the respective strengths of multiple metrics are combined to jointly determine the similarity between sample features, yielding a CFMMN network model with better expressiveness.

Description

Multi-scale metric few-shot learning method based on class features
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a multi-scale metric few-shot learning method based on class features.
Background
For learning tasks in different fields, a large number of labeled data samples must be obtained according to the specific requirements of the task, but obtaining labeled samples is not easy and consumes enormous manpower and financial resources. The scarcity of labeled data is therefore a difficult problem that deep learning currently has to solve. Humans have the ability to learn efficiently from a very small number of samples, and under this inspiration the concept of few-shot learning has developed. Few-shot learning attempts to generalize quickly from limited supervised experience through the transfer of prior knowledge, mimicking the human ability to acquire knowledge from a few examples by analogy.
Existing few-shot learning algorithms fall into three major categories: methods based on data augmentation, methods based on model optimization, and methods based on algorithm optimization. Data-augmentation-based few-shot algorithms start directly from an analysis of the practical problem and attempt to expand the currently data-poor dataset, thereby providing richer supervised information. The main idea of model-optimization-based few-shot algorithms is to fully mine the high-level semantic information of the limited supervised samples and thereby shrink the optimization space of the parameters, reducing the difficulty of optimizing the model. Among these, metric-based few-shot methods do not focus on obtaining more raw data; instead, from the perspective of making better use of the limited samples, they skillfully convert the few-shot problem into finding a more accurate embedded representation and a better distance-metric strategy. Algorithm-optimization-based few-shot methods attempt to explore more suitable search strategies in the hypothesis space; many models in this category seek a better initialization of the parametric model so that a good result can be obtained with fewer iterations.
In existing models, for the N-way K-shot (K > 1) few-shot task, the overall class feature of a class in the high-dimensional space is obtained merely by simply weighting or averaging the feature maps of same-class samples. These sample features are all produced by a preceding convolutional neural network, but because of the spatial locality of convolution, the weight at a given position of the feature map reflects only the neighboring region of the original image mapped to that position and cannot fuse the information of the whole image well. Consequently, when same-class samples differ greatly in positional information, large noise is introduced that cancels the contribution of the originally effective information, making the final image classification result inaccurate.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a multi-scale metric few-shot learning method based on class features, solving the problem of inaccurate class feature extraction in existing few-shot learning.
The aim of the invention is achieved by the following technical scheme: a multi-scale metric few-shot learning method based on class features comprises the following steps:
S1, a data preprocessing step: the data are augmented by rotation at random fixed angles, both to expand the data volume and to add image samples of different angles for the same class, and a support set and a query set are obtained by the N-way K-shot method;
S2, a feature embedding step: each sample $x_i$ in the support set and the query set is embedded through the feature embedding network $f_\theta$ to obtain the feature $e_i = f_\theta(x_i)$;
S3, a class feature extraction step: multiple same-class sample features of the support set are fused through a dynamic routing mechanism, and the weight vectors of the input vectors are updated iteratively to obtain the overall class feature;
S4, a multi-scale metric step: the similarity between support-set class features and query-set samples is measured by fusing three metric criteria: the parametric network metric, the cosine distance metric and the Euclidean distance metric.
Obtaining the support set and the query set by the N-way K-shot method comprises:
randomly extracting N classes from the dataset, with k samples drawn from each class as the support set, the support-set samples being used to generate prototypes of the N classes;
then extracting k samples per class from the remaining samples of the N classes as the query set, the query set being used to compute the accuracy of the network and thereby verify model performance.
The class feature extraction step specifically comprises:
transforming the support-set sample feature vector $e_{ij}$ obtained in the feature embedding step to obtain $\hat{e}_{ij} = \mathrm{Squash}(W_s e_{ij} + b_s)$, where $W_s$ and $b_s$ are the transformation matrix and bias term, and Squash is a nonlinear function that compresses a vector so that its length is normalized to between 0 and 1;
iteratively updating the weight vectors of the input vectors $\hat{e}_{ij}$ to obtain the overall class feature.
The specific iterative process comprises:
$$d_{ij} = \mathrm{softmax}(b_{ij}), \quad c_i = \mathrm{Squash}\Big(\sum_j d_{ij}\,\hat{e}_{ij}\Big), \quad b_{ij} \leftarrow b_{ij} + \hat{e}_{ij} \cdot c_i$$
where $d_{ij}$ represents the association between the input vector $\hat{e}_{ij}$ and the output class feature $c_i$; the initial value of $b_{ij}$ is 0, which the softmax turns into a uniform distribution; and $c_i$ is the class feature of the i-th support-set class.
The multi-scale metric step specifically comprises:
obtaining the query-set sample feature $e_q$ from the feature embedding step and the class feature $c_i$ of the i-th support-set class from the class feature extraction step, the matching score between the i-th support-set class and the q-th query sample under the Euclidean distance being $r^{E}_{iq} = -\lVert e_q - c_i \rVert_2^2$;
the matching score with the cosine similarity method as metric criterion being $r^{C}_{iq} = \dfrac{e_q \cdot c_i}{\lVert e_q \rVert \, \lVert c_i \rVert}$;
when the metric is the parametric network with an attention mechanism, obtaining the specific parameters of the network through optimization learning, giving the matching score $r^{A}_{iq} = f_\phi\big(M_{Attention}(C(c_i, e_q))\big)$, where $C(\cdot,\cdot)$ is the concatenation function, $M_{Attention}(\cdot)$ denotes the metric criterion with attention mechanism, and $f_\phi$ denotes a fully connected network with an activation function;
selecting the class i with the largest sum of the three matching scores as the class label of the query sample $x_q$.
The few-shot learning method further comprises a loss-function setting step. The loss function is a margin loss, computed as
$$L = \sum_{i=1}^{N}\Big[\mathbb{1}_{iq}\,\max(0,\, m^{+} - r_{iq}) + \alpha\,(1-\mathbb{1}_{iq})\,\max(0,\, r_{iq} - m^{+})\Big]$$
where $m^{+}$ denotes the margin, $\alpha$ the weight coefficient, $\mathbb{1}_{iq}$ the indicator function, and $r_{iq}$ the matching score between the query sample and the i-th support-set class.
The loss expresses the mutual constraints between the query sample and all class-level features: an inward pull arises between same-class samples and an outward push between non-same-class samples. The first term represents the pull between query sample q and support class i, the optimization goal being to reduce the distance between same-class samples; the second term constrains the minimum distance between non-same-class samples to be no less than the threshold $m^{+}$.
The multi-scale metric few-shot learning method based on class features has the following advantages: inspired by prior metric-based few-shot learning, it focuses on the N-way K-shot (K > 1) few-shot classification task and adopts a dynamic routing mechanism to generate overall class features; compared with direct weighted averaging, the class features obtained by this algorithm are more representative. In the metric module, an attention mechanism is introduced into the parametric-network metric, and the respective strengths of multiple metrics are combined to jointly determine the similarity between sample features, yielding a CFMMN network model with better expressiveness.
Drawings
FIG. 1 is a flow chart of the multi-scale metric few-shot learning algorithm based on class features;
FIG. 2 is the multi-scale metric few-shot learning CFMMN network model based on class features;
FIG. 3 is the network architecture of the feature embedding module;
FIG. 4 is the parametric network architecture;
FIG. 5 is an example of a few-shot 5-way 1-shot task;
FIG. 6 compares experimental results on the Omniglot dataset;
FIG. 7 compares experimental results on the mini-ImageNet dataset.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Accordingly, the following detailed description of the embodiments of the application, as presented in conjunction with the accompanying drawings, is not intended to limit the scope of the application as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application. The application is further described below with reference to the accompanying drawings.
The invention centres on the N-way K-shot (K > 1) few-shot classification task and the core problem of class feature selection and extraction in few-shot learning algorithms. It combines the strengths and weaknesses of several metrics into a multi-scale metric method, and finally selects two datasets widely used in the few-shot learning field, Omniglot and mini-ImageNet, to test few-shot image classification with the proposed algorithm. A flow chart of the multi-scale metric few-shot learning algorithm based on class features is shown in Fig. 1, and the network model of the algorithm is shown in Fig. 2. The method mainly comprises the following steps:
Step one, data preprocessing. Because few-shot learning is by nature short of data resources, the data are augmented in the simplest way, by random rotations of 90, 180 and 270 degrees; this both expands the data volume and adds image samples of different angles for the same class, which helps test the effectiveness of the model's class feature extraction. In addition, the training of a few-shot image classification model is actually iterated over many tasks, so the image data must be organized into tasks before being fed to the model. First, N classes are randomly extracted from the dataset, with k samples drawn from each class as the support set; the support-set samples are used to generate prototypes of the N classes. Then k samples per class are extracted from the remaining samples of the N classes as the query set, which is used to compute the accuracy of the network and verify model performance. Each task contains a small number of classes and each class a small number of samples; such a task setting simulates the few-shot image classification scenario. A sketch of this episode sampling is given below.
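The following is a minimal sketch of the episodic N-way K-shot sampling described above, assuming a hypothetical `dataset` dict that maps each class label to a list of image tensors; the function name and defaults are illustrative, not taken from the patent.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, k_query=1):
    """Draw one few-shot task: a support set and a query set."""
    classes = random.sample(list(dataset.keys()), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        chosen = random.sample(dataset[cls], k_shot + k_query)
        # The first k_shot samples form the support set (used to build class prototypes).
        support += [(x, label) for x in chosen[:k_shot]]
        # The remaining samples form the query set (used to estimate accuracy).
        query += [(x, label) for x in chosen[k_shot:]]
    return support, query
```

For a 5-way 1-shot task with one query per class, `sample_episode(dataset, 5, 1, 1)` returns 5 support pairs and 5 query pairs.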
Step two, feature embedding. Given a support set $S$ containing k samples for each of N classes and a query set $Q$, each sample $x_i$ is passed through the feature embedding network $f_\theta$ to obtain the feature $h_i = f_\theta(x_i)$.
The specific structure of the embedding network $f_\theta$ is shown in Fig. 3. The embedding module of the relation network consists of four convolution blocks, each comprising a convolution layer with 64 convolution kernels of size 3 x 3, a batch normalization layer and a ReLU layer. The first and second convolution blocks are each followed by a 2 x 2 max pooling layer to adjust the feature map size, while the last two convolution blocks are not followed by pooling layers; a sketch of this module follows.
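A minimal PyTorch sketch of this Conv-4 embedding module under the block layout just described; the class name and the `in_channels` default are illustrative assumptions.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, pool):
    """One block: 3x3 conv with out_ch filters, BatchNorm, ReLU, optional 2x2 max-pool."""
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
              nn.BatchNorm2d(out_ch),
              nn.ReLU(inplace=True)]
    if pool:
        layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class EmbeddingNet(nn.Module):
    """Conv-4 embedding f_theta: pooling only after the first two blocks, per Fig. 3."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.blocks = nn.Sequential(
            conv_block(in_channels, 64, pool=True),
            conv_block(64, 64, pool=True),
            conv_block(64, 64, pool=False),
            conv_block(64, 64, pool=False),
        )

    def forward(self, x):            # x: (batch, in_channels, H, W)
        return self.blocks(x)        # feature map h_i
```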
Step three, class feature extraction. After feature extraction has been completed on the support set S and the query set Q, a dynamic routing mechanism is adopted to fuse the multiple same-class sample features of the support set. To let the model adapt to task inputs with more support samples, the support-set sample feature vector $e_{ij}$ obtained in step two is first transformed as follows:
$$\hat{e}_{ij} = \mathrm{Squash}(W_s e_{ij} + b_s)$$
where $W_s$ and $b_s$ are the transformation matrix and bias term. The Squash function behaves like a sigmoid: it is a nonlinear function that compresses a vector so that its length is normalized to between 0 and 1. For any vector $s_i$, the Squash function is computed as:
$$\mathrm{Squash}(s_i) = \frac{\lVert s_i \rVert^2}{1 + \lVert s_i \rVert^2} \cdot \frac{s_i}{\lVert s_i \rVert}$$
The weight vectors of the input vectors $\hat{e}_{ij}$ are updated iteratively to obtain the overall class feature; the specific iterative process is:
$$d_{ij} = \mathrm{softmax}(b_{ij}), \quad c_i = \mathrm{Squash}\Big(\sum_j d_{ij}\,\hat{e}_{ij}\Big), \quad b_{ij} \leftarrow b_{ij} + \hat{e}_{ij} \cdot c_i$$
where $d_{ij}$ represents the association between the input vector $\hat{e}_{ij}$ and the output class feature $c_i$; the initial value of $b_{ij}$ is 0, which the softmax turns into a uniform distribution; and $c_i$ is the class feature of the i-th support-set class. If the current sample feature belongs to the class, its similarity to the class feature will be higher and its weight will grow in the next iteration; if not, its weight vector shrinks. In general, after several iterations the contributions of the individual samples under the same class become differentiated through learning. Once the iterations finish, the class-level feature is obtained; 3 rounds are usually sufficient.
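A minimal sketch of the routing procedure as reconstructed above; the Squash form and the tensor shapes are assumptions consistent with the description, not the patent's verbatim implementation.

```python
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Squash nonlinearity: keeps direction, maps vector length into (0, 1)."""
    sq_norm = (s * s).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def class_feature(e, W_s, b_s, iters=3):
    """Fuse K same-class support embeddings e (shape (K, d_in)) into one class
    feature c_i via dynamic routing over iters rounds."""
    e_hat = squash(e @ W_s.T + b_s)             # transformed inputs, shape (K, d_out)
    b = torch.zeros(e.size(0))                  # routing logits b_ij, initialised to 0
    for _ in range(iters):
        d = F.softmax(b, dim=0)                 # agreement weights d_ij over K samples
        c = squash((d.unsqueeze(1) * e_hat).sum(dim=0))   # candidate class feature
        b = b + e_hat @ c                       # reward samples that agree with c
    return c
```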
Step four, multi-scale metric. After obtaining the class features of the N classes in the support set, a suitable way is needed to measure the similarity between the support-set class features and the query samples. Commonly used feature similarity metrics include the cosine distance and the Euclidean distance. As shown in the network model of Fig. 2, the algorithm combines three metric criteria: the parametric network metric, the cosine distance metric and the Euclidean distance metric. Specifically:
the query sample feature $e_q$ is obtained through the preceding feature embedding network and the class feature $c_i$ of the i-th support-set class through the dynamic routing module; the matching score between the i-th support-set class and the q-th query sample under the Euclidean distance is:
$$r^{E}_{iq} = -\lVert e_q - c_i \rVert_2^2$$
If the cosine similarity method is used as the metric criterion, the matching score is:
$$r^{C}_{iq} = \frac{e_q \cdot c_i}{\lVert e_q \rVert \, \lVert c_i \rVert}$$
As shown in Fig. 4, when the metric is the parametric network with an attention mechanism, the specific parameters of the network are obtained through optimization learning, and the resulting matching score is:
$$r^{A}_{iq} = f_\phi\big(M_{Attention}(C(c_i, e_q))\big)$$
where $C(\cdot,\cdot)$ is a concatenation function, $M_{Attention}(\cdot)$ denotes the metric with attention mechanism, and $f_\phi$ denotes a fully connected network with an activation function. Specifically, in the attention mechanism the concatenated feature matrix P is passed through three 1 x 1 convolution kernels to generate three new feature maps A, B and C, and the attention layer is computed as:
$$H(A, B) = \mathrm{softmax}(A^{T}B)$$
This yields, through a residual connection, the attention-weighted feature map $P_{AttentionOut}$. Introducing attention into the network not only allows each class feature in the support set to be examined comprehensively, but also finds the more relevant parts between the class feature and the query feature for metric learning.
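A sketch of such an attention layer. The patent does not spell out how the attention map is applied, so the choice of letting H(A, B) re-weight C before the residual addition, and the equal channel sizes, are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLayer(nn.Module):
    """Three 1x1 convs produce A, B, C from the concatenated map P;
    H(A, B) = softmax(A^T B) re-weights C; a residual gives P_AttentionOut."""
    def __init__(self, channels):
        super().__init__()
        self.to_a = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_b = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_c = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, p):                        # p: (N, C, H, W)
        n, ch, h, w = p.shape
        a = self.to_a(p).flatten(2)              # (N, C, H*W)
        b = self.to_b(p).flatten(2)
        c = self.to_c(p).flatten(2)
        attn = F.softmax(a.transpose(1, 2) @ b, dim=-1)      # (N, H*W, H*W)
        out = (c @ attn.transpose(1, 2)).view(n, ch, h, w)   # attention-weighted C
        return p + out                           # residual connection
```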
The matching scores obtained by the three metrics jointly determine the final result of the few-shot classification task: the class i with the largest sum of the three matching scores is selected as the class label of the query sample $x_q$:
$$y_q = \arg\max_i \big(r^{E}_{iq} + r^{C}_{iq} + r^{A}_{iq}\big)$$
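Putting the three criteria together for a single query, under the sign conventions reconstructed above; `m_attention` and `f_phi` stand in for the learned attention metric and fully connected network and are assumed callables, not the patent's exact modules.

```python
import torch
import torch.nn.functional as F

def combined_score(c_i, e_q, m_attention, f_phi):
    """Sum of the three matching scores between class feature c_i and query e_q,
    both given as flat vectors; higher means more similar."""
    r_euclid = -torch.sum((e_q - c_i) ** 2)              # negated squared distance
    r_cosine = F.cosine_similarity(e_q, c_i, dim=0)      # cosine similarity
    r_attn = f_phi(m_attention(torch.cat([c_i, e_q]))).squeeze()  # f_phi(M_Attention(C(c_i, e_q)))
    return r_euclid + r_cosine + r_attn

def classify(class_feats, e_q, m_attention, f_phi):
    """Predicted label: argmax of the combined score over the N class features."""
    scores = torch.stack([combined_score(c, e_q, m_attention, f_phi)
                          for c in class_feats])
    return int(torch.argmax(scores))
```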
Step five, loss function design. In the present invention the loss function is linked to the optimization problem as the learning criterion, i.e. the model is evaluated by minimizing the loss function. For the CFMMN few-shot learning scenario, a margin-based loss is specifically designed:
$$L = \sum_{i=1}^{N}\Big[\mathbb{1}_{iq}\,\max(0,\, m^{+} - r_{iq}) + \alpha\,(1-\mathbb{1}_{iq})\,\max(0,\, r_{iq} - m^{+})\Big]$$
where $m^{+}$ denotes the margin, $\alpha$ the weight coefficient, $\mathbb{1}_{iq}$ the indicator function, and $r_{iq}$ the matching score between the query sample and the i-th support-set class. The equation expresses the mutual constraints between the query sample and all class-level features: an inward pull arises between same-class samples and an outward push between non-same-class samples. The first term represents the pull between query sample q and the feature of support class i, the optimization goal being to reduce the distance between same-class samples; the second term constrains the minimum distance between non-same-class samples to be no less than the threshold $m^{+}$.
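A hedged sketch of such a margin loss; the hinge form and the default values of `m_plus` and `alpha` are assumptions consistent with the description above, not the patent's settings.

```python
import torch

def margin_loss(scores, target, m_plus=0.9, alpha=0.5):
    """scores: (N,) matching scores r_iq of one query against N classes;
    target: index of the true class."""
    one_hot = torch.zeros_like(scores)
    one_hot[target] = 1.0
    pull = one_hot * torch.clamp(m_plus - scores, min=0)                  # draw true class closer
    push = alpha * (1.0 - one_hot) * torch.clamp(scores - m_plus, min=0)  # repel other classes
    return (pull + push).sum()
```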
The multi-scale metric few-shot learning network model based on class features is trained and tested in task (episode) form, so the original dataset D must be sampled and reorganized. First, D is divided into a training set and a test set, corresponding to the training and test phases of few-shot learning. Tasks are then generated by random sampling from each; a single task contains a support sample set and a query sample set, and the query labels of a task are necessarily contained in its support labels. That is, after learning from a large number of training tasks, the model's goal in a test task is to decide which support-set class a query sample belongs to. If the support set of a task has N classes with K samples each, it is called an N-way K-shot task; a typical 5-way 1-shot task is shown in Fig. 5.
To evaluate the performance of the multi-scale metric few-shot learning CFMMN network model based on class features, 5-way 1-shot, 5-way 5-shot, 20-way 1-shot and 20-way 5-shot experiments were carried out on the Omniglot and mini-ImageNet datasets and compared against other algorithms. A training set : test set = 8 : 2 split is adopted, the evaluation criterion is the accuracy Acc of the query sample labels on the test set, and the MN, PN and RN baselines listed below are the same as in the literature.
As shown in Fig. 6, in the experiments of the CFMMN network model on the Omniglot dataset, accuracy reaches 99.34% +/- 0.27% on the 5-way 1-shot task and 99.55% +/- 0.19% on the 5-way 5-shot task, improvements of 1.74% and 1.25% over MN, 2.04% and 0.65% over PN, and 0.44% and 0.51% over RN, respectively; on the 20-way 1-shot task accuracy improves by 1.82% over RN, and on the 20-way 5-shot task by 0.51% over RN.
The experimental results of the CFMMN network model on the mini-ImageNet dataset are shown in Fig. 7: classification accuracy on the 5-way 1-shot and 5-way 5-shot tasks improves by 5.35% and 6.74% over RN, respectively.
The foregoing is merely a preferred embodiment of the invention. It should be understood that the invention is not limited to the form disclosed herein, and this disclosure is not to be construed as excluding other embodiments: the invention can be used in various other combinations, modifications and environments, and can be changed within the scope of the inventive concept as taught herein or through the skill and knowledge of the relevant art. Modifications and variations that do not depart from the spirit and scope of the invention are intended to fall within the scope of the appended claims.

Claims (2)

Translated from Chinese

1. A multi-scale metric few-shot learning method based on class features, characterized in that the few-shot learning method comprises:

S1, a data preprocessing step: the data are augmented by rotation at random fixed angles, both to expand the data volume and to add image samples of different angles for the same class, and a support set and a query set are obtained by the N-way K-shot method;

S2, a feature embedding step: each sample $x_i$ in the support set and the query set is embedded through the feature embedding network $f_\theta$ to obtain the feature $e_i = f_\theta(x_i)$;

S3, a class feature extraction step: multiple same-class sample features of the support set are fused through a dynamic routing mechanism, and the weight vectors of the input vectors are updated iteratively to obtain the overall class feature;

S4, a multi-scale metric step: the similarity between support-set class features and query-set samples is measured by fusing three metric criteria: the parametric network metric, the cosine distance metric and the Euclidean distance metric;

obtaining the support set and the query set by the N-way K-shot method comprises: randomly extracting N classes from the dataset, with k samples drawn from each class as the support set, the support-set samples being used to generate prototypes of the N classes; then extracting k samples per class from the remaining samples of the N classes as the query set, the query set being used to compute the accuracy of the network and thereby verify model performance;

the class feature extraction step specifically comprises: transforming the support-set sample feature vector $e_{ij}$ obtained in the feature embedding step to obtain $\hat{e}_{ij} = \mathrm{Squash}(W_s e_{ij} + b_s)$, where $W_s$ and $b_s$ are the transformation matrix and bias term, and Squash is a nonlinear function that compresses a vector so that its length is normalized to between 0 and 1; and updating the weight vectors of the input vectors $\hat{e}_{ij}$ iteratively to obtain the overall class feature;

the specific iterative process comprises: $d_{ij} = \mathrm{softmax}(b_{ij})$, $c_i = \mathrm{Squash}\big(\sum_j d_{ij}\,\hat{e}_{ij}\big)$, $b_{ij} \leftarrow b_{ij} + \hat{e}_{ij} \cdot c_i$, where $d_{ij}$ represents the association between the input vector and the output class feature $c_i$; the initial value of $b_{ij}$ is 0, which the softmax turns into a uniform distribution; and $c_i$ is the class feature of the i-th support-set class;

the multi-scale metric step specifically comprises: obtaining the query-set sample feature $e_q$ from the feature embedding step and the class feature $c_i$ of the i-th support-set class from the class feature extraction step, and computing the matching score between the i-th support-set class and the q-th query sample by the Euclidean distance, $r^{E}_{iq} = -\lVert e_q - c_i \rVert_2^2$; obtaining the matching score with the cosine similarity method as metric criterion, $r^{C}_{iq} = \frac{e_q \cdot c_i}{\lVert e_q \rVert \lVert c_i \rVert}$; when the metric is the parametric network with an attention mechanism, obtaining the specific parameters of the network through optimization learning and thereby the matching score $r^{A}_{iq} = f_\phi\big(M_{Attention}(C(c_i, e_q))\big)$, where $C(\cdot,\cdot)$ is the concatenation function, $M_{Attention}(\cdot)$ denotes the metric criterion with attention mechanism, and $f_\phi$ denotes a fully connected network with an activation function; and selecting the class i with the largest sum of the three matching scores as the class label of the query sample $x_q$.

2. The multi-scale metric few-shot learning method based on class features according to claim 1, characterized in that the few-shot learning method further comprises a loss-function setting step; the loss function is a margin loss, computed as $L = \sum_{i=1}^{N}\big[\mathbb{1}_{iq}\max(0, m^{+} - r_{iq}) + \alpha(1-\mathbb{1}_{iq})\max(0, r_{iq} - m^{+})\big]$, where $m^{+}$ denotes the margin, $\alpha$ the weight coefficient, $\mathbb{1}_{iq}$ the indicator function, and $r_{iq}$ the matching score between the query sample and the i-th support-set class; the loss expresses the mutual constraints between the query sample and all class-level features, with an inward pull between same-class samples and an outward push between non-same-class samples; the first term represents the pull between query sample q and the feature of support class i, the optimization goal being to reduce the distance between same-class samples; the second term constrains the minimum distance between non-same-class samples to be no less than the threshold $m^{+}$.
CN202210314022.XA | Priority: 2022-03-28 | Filing: 2022-03-28 | A multi-scale metric few-shot learning method based on class features | Active | CN114626481B (en)

Priority Applications (1)

Application Number | Publication | Priority Date | Filing Date | Title
CN202210314022.XA | CN114626481B (en) | 2022-03-28 | 2022-03-28 | A multi-scale metric few-shot learning method based on class features

Applications Claiming Priority (1)

Application Number | Publication | Priority Date | Filing Date | Title
CN202210314022.XA | CN114626481B (en) | 2022-03-28 | 2022-03-28 | A multi-scale metric few-shot learning method based on class features

Publications (2)

Publication Number | Publication Date
CN114626481A (en) | 2022-06-14
CN114626481B (en) | 2025-04-18

Family

Family ID: 81903889

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN202210314022.XA | Active | CN114626481B (en) | 2022-03-28 | 2022-03-28 | A multi-scale metric few-shot learning method based on class features

Country Status (1)

Country | Link
CN | CN114626481B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN118898187B (en) * | 2024-10-08 | 2024-12-06 | 江苏金卓新材料科技有限公司 | A finite element simulation method and system for predicting the compacting performance of metal welding powder

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109685135A (en) * | 2018-12-21 | 2019-04-26 | 电子科技大学 | A few-shot image classification method based on improved metric learning
CN111985581A (en) * | 2020-09-09 | 2020-11-24 | 福州大学 | A few-shot learning method based on a sample-level attention network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110008842A (en) * | 2019-03-09 | 2019-07-12 | 同济大学 | A pedestrian re-identification method based on a deep multi-loss fusion model
CN112560876B (en) * | 2021-02-23 | 2021-05-11 | 中国科学院自动化研究所 | Single-stage small sample target detection method for decoupling measurement


Also Published As

Publication number | Publication date
CN114626481A (en) | 2022-06-14

Similar Documents

Publication | Title
CN111737474B (en) | Method and device for training business model and determining text classification category
CN111382283B (en) | Resource category label labeling method and device, computer equipment and storage medium
CN114169442B (en) | Remote sensing image small sample scene classification method based on double prototype network
CN113190699A (en) | Remote sensing image retrieval method and device based on category-level semantic hash
CN112800172B (en) | A code search method based on two-stage attention mechanism
CN114239826B (en) | Neural network pruning method, medium and electronic equipment
CN112232395A (en) | Semi-supervised image classification method for generating confrontation network based on joint training
JP2022530447A (en) | Chinese word division method based on deep learning, equipment, storage media and computer equipment
CN113297429B (en) | A social network link prediction method based on neural network architecture search
CN109189941B (en) | Method, apparatus, device and medium for updating model parameters
CN117349494A (en) | Graph classification method, system, medium and equipment for space graph convolution neural network
CN114626481B (en) | A multi-scale metric few-shot learning method based on class features
CN119623515B (en) | Evolutionary neural architecture searching method and system based on similarity agent assistance
Pacchiano et al. | Neural design for genetic perturbation experiments
CN112836763A (en) | A kind of graph structure data classification method and apparatus
Li et al. | Integrating sample similarities into latent class analysis: a tree-structured shrinkage approach
Hussain et al. | Clustering probabilistic graphs using neighbourhood paths
CN114528491A (en) | Information processing method, information processing device, computer equipment and storage medium
CN104156462A (en) | Complex network community mining method based on cellular automatic learning machine
CN118095341A (en) | SimRank similarity calculation method based on deep neural network
CN115688588B (en) | Sea surface temperature daily variation amplitude prediction method based on improved XGB method
CN117152527A (en) | A target detection method for sparsely labeled remote sensing images based on graph combination optimization
CN117114050A (en) | Structural knowledge detection method oriented to graph model characterization learning
CN116110492A (en) | Protein interaction network comparison method and system
CN114417976A (en) | Hyperspectral image classification method and device, electronic equipment and storage medium

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
