Technical Field
The present invention relates to the technical field of scene image annotation, and in particular to a scene image annotation method based on active learning and multi-label multi-instance learning.
Background
With the development of information technology and the advance of Internet services, websites of all kinds, such as news, social networking, and e-commerce sites, have grown rapidly, and the Internet produces a massive number of scene images every day. These scene images have two basic characteristics. On the one hand, a single scene image does not reflect just one content; it may involve multiple subjects, and its semantics are relatively complex. For example, an image of a street may involve many different subjects such as pedestrians, roads, vehicles, trees, sky, and buildings.
On the other hand, the vast number of scene images produced on the Internet lack classification labels that adequately describe their content. For example, a user may upload a landscape photo to a social network without any detailed textual description of its content. For these massive collections of scene images with complex semantics and no classification labels, the core task of scene image annotation is how to exploit the images to provide relevant services to Internet users. The purpose of scene image annotation is to learn from labeled scene images in order to assign accurate classification labels to unlabeled scene images, so that these images can serve Internet users.
Traditional image annotation methods have several limitations when applied to Internet scene images. First, they treat an image as a single vector. As noted above, a scene image may contain several subjects; converting it into a single vector may fail to describe its semantics accurately and therefore fail to label it precisely. Second, traditional methods require a large number of labeled scene images to learn a classification model. Building a high-accuracy model typically requires experts to manually annotate a considerable number of scene images for training, which consumes enormous human and material resources. An efficient automatic scene image annotation technique that relies on only a small number of labeled images is therefore urgently needed.
Summary of the Invention
The purpose of the present invention is to provide a scene image annotation method based on multi-instance multi-label learning and active learning that addresses the two basic characteristics of scene images described above: a scene image may contain multiple content regions with complex semantics, so converting it into a single vector cannot accurately represent its subjects; and the vast number of scene images on the Internet lack classification labels, making annotation expensive.
To achieve the above object, the present invention adopts the following technical solution:
A scene image annotation method based on active learning and multi-label multi-instance learning comprises the following steps:
(1) Obtain a batch of unlabeled scene images. Randomly select a small number of them and have experts manually assign classification labels to these images;
(2) Convert the labeled and unlabeled scene images into multi-instance data, treating each image as a multi-instance bag and each region of an image as one instance of that bag;
(3) Treat the small set of labeled scene images as a training set and, according to the number of labels carried by the scene images, train the corresponding number of initial classification models;
(4) Use the trained classification models to label the unlabeled scene images in the sample set; each image may receive multiple labels;
(5) Compute the credibility of each classification model from the labeling results on the unlabeled scene images;
(6) Combining the model credibilities, select the single most uncertain image from the unlabeled scene images and hand it to an expert for annotation;
(7) Remove the expert-annotated image from the unlabeled image dataset, add it to the labeled scene image dataset, and retrain the classification models;
(8) Judge whether the accuracy of the model has reached the level required by the user, or whether the number of iterations has reached the user-specified limit; if not, return to step (3); otherwise, terminate and output the classification models.
By using an active learning strategy, the present invention greatly reduces the number of scene images that must be manually annotated while maintaining the accuracy of the classification model, thereby lowering the annotation cost. At the same time, by converting images into multi-label multi-instance data, the invention represents the complex semantics of images reasonably and improves the accuracy of image annotation.
Brief Description of the Drawings
FIG. 1 is a flowchart of training the annotation model according to an embodiment of the present invention.
Detailed Description
The present invention is further illustrated below with reference to specific embodiments. It should be understood that these embodiments are intended only to illustrate the invention, not to limit its scope; after reading this disclosure, modifications of the invention in its various equivalent forms by those skilled in the art all fall within the scope defined by the claims appended to this application.
FIG. 1 is a flowchart of the scene image annotation method based on active learning and multi-label multi-instance learning according to an embodiment of the present invention. As shown in FIG. 1, the method comprises the following steps.
In the first step, a batch of unlabeled scene images is obtained. A small number of scene images are randomly selected and assigned classification labels by expert manual annotation. Since a scene image may contain different contents and involve multiple subjects, one image may carry several classification labels. Assume the maximum number of classification labels in the image collection is k. After this step, the original collection of scene images is divided into two sets: one containing a small number of labeled scene images, the other containing the remaining large number of unlabeled scene images.
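As a purely illustrative sketch, the initial split might be implemented as follows; the `initial_split` function and the 5% seed fraction are assumptions for illustration, not values prescribed by the method.

```python
import random

def initial_split(images, seed_fraction=0.05):
    """Randomly pick a small seed set to hand to the expert annotators;
    the remaining images form the unlabeled pool."""
    images = list(images)
    random.shuffle(images)
    n_seed = max(1, int(seed_fraction * len(images)))
    return images[:n_seed], images[n_seed:]  # (seed to label, unlabeled pool)
```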
In the second step, the labeled and unlabeled scene images are converted into multi-instance data. Because a scene image may involve multiple subjects and has complex semantics, converting it into a single vector cannot describe those semantics accurately, so the image must instead be converted into multi-instance data. Specifically, classic methods from image recognition, such as the Blobworld system, can be used to segment an image into several regions according to content. Color, texture, shape, and other features are then extracted from each region, turning each region into an instance vector. In this way, an image is segmented into several regions: the image is treated as a multi-instance bag, and each region as an instance of that bag.
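The following sketch illustrates this conversion, with SLIC superpixels from scikit-image standing in for the Blobworld segmentation (which has no widely maintained public implementation), and with mean color, color spread, and region size as simplified stand-ins for the color, texture, and shape features described above.

```python
import numpy as np
from skimage.segmentation import slic

def image_to_bag(image, n_segments=8):
    """Convert an RGB image (H x W x 3, float in [0, 1]) into a
    multi-instance bag: one feature vector per segmented region."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    instances = []
    for region_id in np.unique(segments):
        mask = segments == region_id
        pixels = image[mask]                  # all pixels of this region
        feat = np.concatenate([
            pixels.mean(axis=0),              # mean color (3 dims)
            pixels.std(axis=0),               # color spread, a crude texture proxy
            [mask.mean()],                    # relative region size, a crude shape cue
        ])
        instances.append(feat)
    return np.stack(instances)                # shape: (n_regions, n_features)
```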
In the third step, the small set of labeled scene images is used as a training set, and k initial classification models are trained, one for each of the k classification labels. For each classification label, the images carrying that label are treated as positive data and the images without it as negative data, and an initial multi-instance classification model is trained.
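A minimal sketch of this one-versus-rest training, assuming the naive instance-level reduction of multi-instance learning in which every instance inherits its bag's label; the patent itself trains multi-instance SVM classifiers, for which this reduction is only a common stand-in. The function name and the (N, k) label matrix `Y` are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def train_one_vs_rest(bags, Y):
    """bags: list of (n_i, d) instance arrays; Y: (N, k) {0,1} label matrix.
    Trains one SVM per label on instances, each instance inheriting the
    label of its bag (the naive single-instance reduction of MIL)."""
    X = np.vstack(bags)                                    # stack all instances
    bag_index = np.repeat(np.arange(len(bags)), [len(b) for b in bags])
    models = []
    for kth in range(Y.shape[1]):
        clf = SVC(kernel="rbf")
        clf.fit(X, Y[bag_index, kth])                      # label k vs. rest
        models.append(clf)
    return models
```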
In the fourth step, the labels of the unlabeled scene images are predicted with the k trained classification models. After passing through the k models, each unlabeled scene image obtains k classification labels. For the i-th classification model, a label value of 1 indicates that the scene image contains content of the i-th class, and a label value of 0 indicates that it does not.
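Continuing the sketch above, bag-level labels can be derived from instance scores with the standard multi-instance rule that a bag is positive for a label if at least one of its instances falls on the positive side of the corresponding hyperplane.

```python
import numpy as np

def predict_bags(models, bags):
    """Returns an (N, k) {0,1} matrix: bag i receives label k iff some
    instance of bag i scores positively under classifier k."""
    Y_pred = np.zeros((len(bags), len(models)), dtype=int)
    for i, bag in enumerate(bags):
        for kth, clf in enumerate(models):
            Y_pred[i, kth] = int(clf.decision_function(bag).max() > 0)
    return Y_pred
```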
In the fifth step, the credibility of each classification model is computed from the labeling results on the unlabeled scene images. The computation follows the idea of the Transductive Support Vector Machine (TSVM): given a set of independent and identically distributed labeled training samples and another set of unlabeled samples drawn from the same distribution, when there are enough samples the proportion of positive samples among the unlabeled data can be estimated from the proportion of positive samples among the labeled data. Accordingly, the proportion of positively labeled samples among the unlabeled data should be close to that among the labeled data. Based on this idea, a measure of the credibility of a classification model's predicted labels is proposed: first train the classifiers, one per label, on the labeled multi-instance bags, then use them to classify the unlabeled multi-instance bags and obtain their predicted labels. Let X denote the instance space and Y the label-set space. Given $N_l$ labeled multi-instance bags and $N_u$ unlabeled multi-instance bags, the goal is to learn the target function $f_{MIML}: 2^X \to 2^Y$. Each bag $X_i = \{x_{i1}, x_{i2}, \ldots, x_{in_i}\}$ is a set of instances with a corresponding label set $\{y_{i1}, y_{i2}, \ldots, y_{il}\}$, where $y_{ik} \in \{0,1\}$ for $k = 1, 2, \ldots, l$; here $n_i$ is the number of instances in bag $X_i$ and $l$ is the number of labels of a bag (equal to the k of the preceding steps). On this basis, the credibility $C_k$ of the k-th classification model can be defined as

$$C_k = 1 - \left| \frac{1}{N_u} \sum_{i=1}^{N_u} I\left[y_{ik}^{u} = 1\right] - \frac{1}{N_l} \sum_{i=1}^{N_l} I\left[y_{ik}^{l} = 1\right] \right|$$
In the above formula, $I[\cdot]$ is an indicator function whose value is 1 when the bracketed condition holds and 0 otherwise; $y_{ik}^{l}$ denotes the label of the i-th labeled multi-instance bag under the k-th classifier, and $y_{ik}^{u}$ the label of the i-th unlabeled multi-instance bag under the k-th classifier. The term $\frac{1}{N_u}\sum_{i=1}^{N_u} I[y_{ik}^{u}=1]$ is the mean of the positive labels predicted for the unlabeled multi-instance bags by the k-th classifier, and $\frac{1}{N_l}\sum_{i=1}^{N_l} I[y_{ik}^{l}=1]$ is the mean of the positive labels of the labeled multi-instance bags under the k-th classifier. A smaller credibility $C_k$ therefore indicates that the proportion of positive labels among the unlabeled bags differs more from that among the labeled bags, i.e., lower credibility; conversely, a larger $C_k$ indicates higher credibility.
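A sketch of this credibility computation, assuming the $1 - |\cdot|$ form of $C_k$ reconstructed above (the published text describes the quantity only verbally):

```python
import numpy as np

def credibility(Y_labeled, Y_pred_unlabeled):
    """Y_labeled: (N_l, k) true {0,1} labels of the labeled bags.
    Y_pred_unlabeled: (N_u, k) predicted labels of the unlabeled bags.
    Returns C, a length-k vector; C[k] near 1 means the positive-label
    rate on the unlabeled data matches the labeled data (high credibility)."""
    p_labeled = Y_labeled.mean(axis=0)           # positive rate per label, labeled set
    p_unlabeled = Y_pred_unlabeled.mean(axis=0)  # positive rate per label, unlabeled set
    return 1.0 - np.abs(p_unlabeled - p_labeled)
```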
In the sixth step, according to the minimum-classification-distance selection strategy, combined with the model credibilities, the most uncertain image is selected from the unlabeled scene images and handed to an expert for annotation. It is generally held that the closer a sample lies to the separating hyperplane, the more likely it is to be misclassified, the greater its uncertainty, the more information it carries, and hence the more valuable it is. The minimum-classification-distance strategy is therefore proposed: compute the distance of each multi-instance bag from the hyperplane, weighted by the credibility of each classification model. To this end, the distance between a multi-instance bag and the hyperplane is first defined as follows:

$$d_k(X_i) = \left| f_k(\hat{x}_{ik}) \right|, \qquad \hat{x}_{ik} = \arg\max_{1 \le j \le n_i} f_k(X_{ij})$$
In the above formula, $f_k(X_{ij})$ is the output of the classification function of the k-th SVM classifier on the j-th instance of the multi-instance bag $X_i$, and $|f_k(X_{ij})|$ is the distance of instance $X_{ij}$ from the hyperplane of the k-th SVM classifier. $\hat{x}_{ik}$ denotes the instance in bag $X_i$ farthest from the hyperplane of the k-th SVM classifier. By the definition of multi-instance learning, every positive bag contains at least one positive instance, and the instance farthest from the classification plane is the one most likely to be positive, so this instance is used to represent its bag. Over the $l$ classifiers, combined with the credibility $C_k$ proposed above, a bag closer to the classification plane has greater uncertainty and therefore contributes the most to improving classifier performance.
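The witness distance $d_k(X_i)$ might be computed as follows, reusing the SVC models from the training sketch; `decision_function` plays the role of $f_k$.

```python
import numpy as np

def bag_distances(models, bags):
    """D[i, k] = distance from the hyperplane of classifier k to the
    witness instance of bag i (the instance with the largest decision value)."""
    D = np.zeros((len(bags), len(models)))
    for i, bag in enumerate(bags):
        for kth, clf in enumerate(models):
            scores = clf.decision_function(bag)  # f_k for each instance of the bag
            D[i, kth] = abs(scores.max())        # witness distance to hyperplane k
    return D
```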
Based on the above analysis, the selection strategy is expressed as follows:

$$X^{*} = \arg\min_{X_i \in U} \sum_{k=1}^{l} C_k \, d_k(X_i)$$

where $U$ denotes the set of unlabeled multi-instance bags.
In active learning, the most valuable multi-instance bag is the sample about which the classifiers are least certain. Therefore, among the bag-to-hyperplane distances computed by the selection strategy, the bag with the smallest distance is selected and added to the training set; training on it will improve the performance of the classifiers.
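Combining the two quantities, a sketch of the query selection under the credibility-weighted sum reconstructed above:

```python
import numpy as np

def select_query(models, bags, C):
    """Pick the unlabeled bag with the smallest credibility-weighted
    distance to the hyperplanes, i.e. the most uncertain bag."""
    D = bag_distances(models, bags)   # (N_u, k) witness distances
    scores = D @ C                    # sum_k C_k * d_k(X_i) for each bag
    return int(np.argmin(scores))     # index of the bag to send to the expert
```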
In the seventh step, the expert-annotated scene image is removed from the unlabeled image dataset and added to the labeled scene image dataset, and the classification models are retrained.
In the eighth step, it is judged whether the accuracy of the model has reached the level required by the user, or whether the number of iteration rounds has reached the user-specified limit; if not, the procedure returns to the third step; otherwise, it terminates and outputs the classification models.
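Putting the sketches above together, the loop over steps three to eight might look like the following; the `oracle_label` callback standing in for the expert annotator, and the fixed round budget in place of the user's accuracy test, are assumptions of this sketch.

```python
import numpy as np

def active_learning_loop(labeled_bags, Y, unlabeled_bags, oracle_label,
                         max_rounds=50):
    """One expert query per round: train, predict, score credibility,
    query the most uncertain bag, move it to the training set, repeat."""
    for _ in range(max_rounds):
        models = train_one_vs_rest(labeled_bags, Y)    # step three
        if not unlabeled_bags:
            break
        Y_pred = predict_bags(models, unlabeled_bags)  # step four
        C = credibility(Y, Y_pred)                     # step five
        q = select_query(models, unlabeled_bags, C)    # step six
        bag = unlabeled_bags.pop(q)                    # step seven
        Y = np.vstack([Y, oracle_label(bag)])          # expert supplies labels
        labeled_bags.append(bag)
    return train_one_vs_rest(labeled_bags, Y)
```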
The embodiments of the present invention described above do not limit the protection scope of the present invention. Any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall be included within the protection scope of the claims of the present invention.